HP Integrity NonStop NS16000 Planning Guide Abstract This guide describes the HP Integrity NonStop™ NS16000 system hardware and provides examples of system configurations to assist you in planning for installation of a new system. It also provides a guide to other Integrity NonStop NS-series manuals. Product Version N.A. Supported Release Version Updates (RVUs) This publication supports H06.08 and all subsequent H-series RVUs until otherwise indicated by its replacement publication.
Document History

Part Number   Product Version   Published
529567-005    N.A.              November 2005
529567-006    N.A.              February 2006
529567-007    N.A.              August 2006
529567-008    N.A.              August 2006
529567-009    N.A.
HP Integrity NonStop NS16000 Planning Guide

Contents (including Glossary, Index, Figures, and Tables)

What's New in This Manual vii
Manual Information vii
New and Changed Information vii

About This Guide ix
Who Should Use This Guide ix
What's in This Guide ix
Where to Get More Information x
Notation Conventions x

1. System Hardware Overview
Hardware Enclosures and Configurations 1-1
System I/O Hardware Configuration 1-3
IOAM Enclosure 1-3
NonStop S-Series I/O Enclosure 1-4
Preparation for Other Server Hardware 1-4
2. Installation Facility Guidelines (continued)
Zinc Particulates 2-6
Space for Receiving and Unpacking 2-6
Operational Space 2-7
6. System Configuration Guidelines (continued)
Fibre Channel Devices 6-19
Factory-Default Disk Volume Locations 6-22
Configurations for Fibre Channel Devices 6-22
Configuration Restrictions for Fibre Channel Devices 6-22
Configuration Recommendations for Fibre Channel Devices 6-23
Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module 6-25
G4SAs to Networks 6-34
Default Naming Conventions 6-35
PDU Strapping Configurations 6-36
What's New in This Manual

Manual Information

New and Changed Information
About This Guide Who Should Use This Guide This guide is written for those responsible for planning the installation, configuration, and maintenance of the server and the software environment at a particular site. Appropriate personnel must have completed HP training courses on system support for Integrity NonStop NS-series servers. Note. Integrity NonStop NS-series, Integrity NonStop NS16000, and NonStop S-series refer to hardware systems.
Where to Get More Information

For information about Integrity NonStop NS-series hardware, software, and operations, refer to Appendix C, Guide to Manuals for the Integrity NonStop NS-Series Server.

Notation Conventions

Hypertext Links

Blue underline is used to indicate a hypertext link within text. By clicking a passage of text with a blue underline, you are taken to the location described.
1 System Hardware Overview Integrity NonStop NS16000 servers use the NonStop advanced architecture (NSAA), which includes a number of duplex or triplex NonStop Blade Elements plus various combinations of hardware enclosures. These enclosures are installed in 42U modular cabinets.
Hardware Enclosures and Configurations

This figure shows an example modular cabinet with a duplex processor and hardware for a complete system (rear view).
System I/O Hardware Configuration

Because of the large number of possible configurations, you must calculate the total power consumption, heat dissipation, and weight of each modular cabinet based on the hardware configuration that you order from HP. For site preparation specifications of the modular cabinet and the individual enclosures, see Section 3, System Installation Specifications.
NonStop S-Series I/O Enclosure

NonStop S-series I/O enclosures equipped with model 1980 I/O multifunction 2 customer-replaceable units (IOMF 2 CRUs) can be connected to the NonStop NS16000 server via fiber-optic ServerNet cables and the processor switch (p-switch).
2 Installation Facility Guidelines

This section provides guidelines for preparing the installation site for Integrity NonStop NS16000 systems:

Topic                                        Page
Modular Cabinet Power and I/O Cable Entry    2-1
Emergency Power-Off Switches                 2-2
Electrical Power and Grounding Quality       2-2
Uninterruptible Power Supply (UPS)           2-3
Cooling and Humidity Control                 2-4
Weight                                       2-5
Flooring                                     2-5
Dust and Pollution Control                   2-6
Zinc Particulates                            2-6
Space for Receiving and Unpacking            2-6
Operational Space                            2-7
Emergency Power-Off Switches

Emergency power-off (EPO) switches are required by local codes or other applicable regulations when computer equipment contains batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes. Systems that have these batteries also have internal EPO hardware for connection to a site EPO switch or relay.
Power Quality

This equipment is designed to operate reliably over a wide range of voltages and frequencies, described in Enclosure AC Input on page 3-5. However, damage can occur if these ranges are exceeded. Severe electrical disturbances can exceed the design specifications of the equipment.
at a predetermined time in the event of an extended power failure. A timely and orderly shutdown prevents an uncontrolled and asymmetric shutdown of the system resources from depleted UPS batteries. The HP model R5500 XR UPS supports the OSM Power Fail Support function that allows you to set a ride-through time. If AC power is not restored before the specified ride-through time expires, OSM initiates an orderly system shutdown.
racks in large numbers. This higher concentration of devices results in localized heat, which increases the potential for hot spots that can damage the equipment. Additionally, variables in the installation site layout can adversely affect airflow and create hot spots by allowing hot and cool air streams to mix. Studies have shown that above 70°F (21°C), every increase of 18°F (10°C) reduces long-term electronics reliability by 50%.
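This rule of thumb can be expressed as a simple derating factor. The sketch below is illustrative only; the function name and the 21°C (70°F) baseline are assumptions drawn from the paragraph above, not HP specifications:

```python
def relative_reliability(temp_c):
    """Long-term electronics reliability relative to a 21 C (70 F) baseline,
    using the rule of thumb that every 10 C (18 F) above the baseline
    halves reliability. Illustrative only."""
    excess = max(0.0, temp_c - 21.0)
    return 0.5 ** (excess / 10.0)

print(relative_reliability(31.0))  # prints 0.5
```

For example, running a site 20°C above the baseline would, by this rule, leave only a quarter of the baseline reliability, which is why the operating sections below recommend a narrow 68°F to 72°F band.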
For your site's floor system, consult your HP site preparation specialist or an appropriate floor system engineer. If raised flooring is to be used, note that the Integrity NonStop NS16000 server modular cabinet is optimized for placement on 24-inch floor panels.

Dust and Pollution Control

NonStop servers do not have air filters. Any computer equipment can be adversely affected by dust and microscopic particles in the site environment.
Operational Space

All modular cabinets have small casters to facilitate moving them on hard flooring from the unpacking area to the installation site. Because of these small casters, rolling modular cabinets along carpeted or tiled pathways might be difficult. If necessary, plan for a temporary hard floor covering in affected pathways to ease movement of the equipment. For physical dimensions of the server equipment, refer to Dimensions and Weights on page 3-7.
3 System Installation Specifications

This section provides the specifications necessary for planning the system installation site:

Topic                                                   Page
Processor Type and Memory Size                          3-1
AC Input Power for Modular Cabinets                     3-2
Dimensions and Weights                                  3-7
Environmental Specifications                            3-11
Calculating Specifications for Enclosure Combinations   3-13
AC Input Power for Modular Cabinets

This subsection provides information about AC input power for modular cabinets and covers these topics:

Topic                                                Page
North America and Japan: 208 V AC PDU Power          3-3
North America and Japan: 200 to 240 V AC PDU Power   3-3
International: 380 to 415 V AC PDU Power             3-4
International: 200 to 240 V AC PDU Power             3-4
Grounding                                            3-4
Branch Circuits and Circuit Breakers                 3-5
Enclosure AC Input                                   3-5
North America and Japan: 208 V AC PDU Power

The PDU power characteristics are:

PDU input characteristics:
• 208 V AC, 3-phase delta, 24A RMS, 4-wire
• 50/60 Hz
• NEMA L15-30 input plug
• 6.5 feet (2 m) attached power cord

PDU output characteristics:
• 3 circuit-breaker-protected 13.
International: 380 to 415 V AC PDU Power

The PDU power characteristics are:

PDU input characteristics:
• 380 to 415 V AC, 3-phase wye, 16A RMS, 5-wire
• 50/60 Hz
• IEC309 5-pin, 16A input plug
• 6.
Branch Circuits and Circuit Breakers

Modular cabinets for the Integrity NonStop NS16000 system contain two PDUs. Each of the two PDUs requires a separate branch circuit with these ratings:

Region                    Volts        Amps
North America and Japan   208          24
North America and Japan   200 to 240   40
International             380 to 415   16
International             200 to 240   32
Enclosure Power Loads

The total power and current load for each modular cabinet depends on the number and type of enclosures installed in it; the total load is the sum of the loads of all installed enclosures. For examples of calculating the power and current load for various enclosure combinations, refer to Calculating Specifications for Enclosure Combinations on page 3-13.
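As a sketch of that summation, the Python fragment below totals per-enclosure volt-amp loads for a cabinet. The dictionary values are illustrative placeholders, not HP ratings; substitute the figures supplied with your configuration:

```python
# Illustrative per-enclosure loads in volt-amps per AC feed; these are
# placeholder values, not HP ratings -- substitute the figures supplied
# with your configuration documents.
ENCLOSURE_VA = {
    "blade_element": 710,
    "lsu": 150,
    "p_switch": 300,
    "ioam": 650,
    "fc_disk_module": 250,
}

def cabinet_va(contents):
    """Total VA load of a cabinet: the sum of the loads of all
    enclosures installed in it."""
    return sum(ENCLOSURE_VA[kind] * qty for kind, qty in contents.items())

print(cabinet_va({"blade_element": 3, "lsu": 1, "p_switch": 2}))  # prints 2880
```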
Model R5500 XR Integrated UPS

Version                   Operating Voltage Settings   Power Out (VA/Watts)               Input Plug       Branch Circuit
North America and Japan   200/208*, 220, 230, 240      5000/4500                          L6-30P           Dedicated 30 Amp
Other International       200, 230*, 240               6000/5400 (5000/4500 at 200/208)   IEC-309 32 Amp   Dedicated 30 Amp

* Factory-default setting

For complete information and specifications, refer to the HP UPS R5500 XR Models User Guide.
Plan View From Above the Modular Cabinet

[Plan view showing dimensions of 40 in. (102 cm), 46 in. (116.84 cm), 81.5 in. (207 cm), and 24 in. (60.96 cm)]

Service Clearances for the Modular Cabinet
• Aisles: 6 feet (182.9 centimeters)
• Front: 3 feet (91.4 centimeters)
• Rear: 3 feet (91.4 centimeters)
Modular Cabinet Physical Specifications

Item                    Height                 Width                  Depth
Modular cabinet         78.7 in. (199.9 cm)    24.0 in. (60.96 cm)    46.0 in. (116.84 cm)
Rack                    78.5 in. (199.4 cm)    23.62 in. (60.0 cm)    40.0 in. (101.9 cm)
Front door              78.5 in. (199.4 cm)    23.5 in. (59.7 cm)     3.0 in. (7.6 cm)
Left-rear door          78.5 in. (199.4 cm)    11.0 in. (27.9 cm)     1.0 in. (2.5 cm)
Right-rear door         78.5 in. (199.4 cm)    12.0 in. (30.5 cm)     1.0 in. (2.5 cm)
Shipping (palletized)   86.22 in. (219.0 cm)   32.0 in. (81.28 cm)    54.0 in. (137.2 cm)
Modular Cabinet and Enclosure Weights With Worksheet

The total weight of each modular cabinet is the sum of the weight of the cabinet plus the weight of each enclosure installed in it. Use this worksheet to determine the total weight:

Enclosure Type          Weight (lbs)   Weight (kg)   Number of Enclosures   Total
42U modular cabinet*    303            137.44
NonStop Blade Element   112            50.8
Processor switch        70             32.8
LSU                     96             43.5
IOAM                    200            90.7
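The worksheet arithmetic can be sketched in a few lines of Python, using the per-enclosure weights from the table above (the function and key names are illustrative):

```python
# Per-enclosure weights in pounds, taken from the worksheet above.
WEIGHTS_LB = {
    "cabinet_42u": 303,
    "blade_element": 112,
    "p_switch": 70,
    "lsu": 96,
    "ioam": 200,
}

def lb_to_kg(lb):
    """Convert pounds to kilograms, rounded to one decimal place."""
    return round(lb * 0.4536, 1)

def cabinet_weight_lb(enclosures):
    """Total cabinet weight: the empty 42U cabinet plus every
    installed enclosure."""
    total = WEIGHTS_LB["cabinet_42u"]
    for kind, qty in enclosures.items():
        total += WEIGHTS_LB[kind] * qty
    return total

print(cabinet_weight_lb({"blade_element": 3, "lsu": 1, "p_switch": 2}))  # prints 875
```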
Environmental Specifications

This subsection provides information about environmental specifications and covers these topics:

Topic                                              Page
Heat Dissipation Specifications and Worksheet      3-11
Operating Temperature, Humidity, and Altitude      3-12
Nonoperating Temperature, Humidity, and Altitude   3-12
Cooling Airflow Direction                          3-12
Typical Acoustic Noise Emissions                   3-12
Tested Electrostatic Immunity                      3-12

Heat Dissipation Specifications and Worksheet
Operating Temperature, Humidity, and Altitude

Temperature (all except Fibre Channel disk module):
• Operating range: 41° to 95° F (5° to 35° C)
• Recommended range: 68° to 72° F (20° to 25° C)
• Maximum rate of change per hour: 9° F (5° C) repetitive; 36° F (20° C) nonrepetitive

Temperature (Fibre Channel disk module):
• Operating range: 50° to 95° F (10° to 35° C)
• Maximum rate of change per hour: 0.6° F (1° C) repetitive; 1.
Calculating Specifications for Enclosure Combinations

Figure 3-1, Example Duplex Configuration, on page 3-14 shows components installed in 42U modular cabinets. Cabinet weight includes the PDUs and their associated wiring and receptacles. Power and thermal calculations assume that each enclosure in the cabinet is fully populated; for example, a NonStop Blade Element with four processors.
Figure 3-1, Example Duplex Configuration, shows 16 logical processors with two IOAM enclosures and 14 Fibre Channel disk modules installed in three 42U modular cabinets.
Table 3-1 shows the weight, power, and thermal calculations for Cabinet One in Figure 3-1, Example Duplex Configuration, on page 3-14.

Table 3-1. Cabinet One Load Calculations

Component               Quantity   Height (U)   Weight (lbs)   Weight (kg)   Volt-amps per AC feed (one feed powered)   Watts per AC feed   Heat (Btu/hr)
NonStop Blade Element   3          15           336            152.4         2130                                       1170                7986
LSU                     1          4            96             43.5          150                                        110                 750
Processor switch        2          6            140            65.6
Table 3-2 shows the weight, power, and thermal calculations for Cabinet Two in Figure 3-1, Example Duplex Configuration, on page 3-14.

Table 3-2. Cabinet Two Load Calculations

Component               Quantity   Height (U)   Weight (lbs)   Weight (kg)   Volt-amps per AC feed (one feed powered)   Watts per AC feed   Heat (Btu/hr)
NonStop Blade Element   3          15           336            152.4         2130                                       1170                7986
LSU                     1          4            96             43.5
Table 3-3 shows the weight, power, and thermal calculations for Cabinet Three in Figure 3-1, Example Duplex Configuration, on page 3-14.

Table 3-3. Cabinet Three Load Calculations

Component               Quantity   Height (U)   Weight (lbs)   Weight (kg)   Volt-amps per AC feed (one feed powered)   Watts per AC feed   Heat (Btu/hr)
NonStop Blade Element   2          10           224            101.6
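The thermal figures in these load tables appear consistent with the standard conversion of 1 watt to roughly 3.412 Btu/hr (two feeds at 1,170 W each give about 7,984 Btu/hr, close to the 7,986 printed for the NonStop Blade Element rows). A minimal helper, assuming that conversion applies:

```python
def heat_btu_per_hr(watts):
    """Heat dissipation in Btu/hr for a load in watts (1 W = 3.412 Btu/hr)."""
    return watts * 3.412

# Two AC feeds at 1,170 W each, as in the NonStop Blade Element rows above:
print(round(heat_btu_per_hr(2 * 1170)))  # prints 7984
```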
4 Integrity NonStop NS16000 System Description

This section describes the Integrity NonStop NS16000 system and covers these topics:

Topic                                      Page
NonStop System Primer                      4-1
NonStop Advanced Architecture              4-2
NonStop Blade Complex                      4-2
Processor Element                          4-5
Duplex Processor                           4-6
Triplex Processor                          4-7
Processor Synchronization and Rendezvous   4-8
Memory Reintegration                       4-8
Failure Recovery for Duplex Processor      4-8
Failure Recovery for Triplex Processor     4-9
ServerNet Fabric I/O                       4-9
System Architecture
However, contemporary high-speed microprocessors make lock-step processing no longer practical because of:
• Variable-frequency processor clocks with multiple clock domains
• Higher transient error rates than in earlier, simpler microprocessor designs
• Chips with multiple processor cores

NonStop Advanced Architecture

Integrity NonStop NS16000 systems employ a unique method for achieving fault tolerance in a clustered processor
All input and output to and from each NonStop Blade Element goes through a logical synchronization unit (LSU). The LSU interfaces with the ServerNet fabrics and contains logic that compares all output operations of the PEs in a logical processor, ensuring that all NonStop Blade Elements agree on the result before the data is passed to the ServerNet fabrics.
In the event of a processor fault in either a duplex or triplex processor, the failed component within a NonStop Blade Element (processor element, power supply, and so forth) or the entire Blade Element can be replaced while the system continues to run. A single Integrity NonStop NS16000 system can have up to four NonStop Blade Complexes for a total of 16 processors.
Processor Element

Each of the two or four processor elements in a NonStop Blade Element includes:
• A standard Intel Itanium microprocessor running at 1.
Duplex Processor

The dual modular redundant (DMR), or duplex, processor uses two NonStop Blade Elements, A and B, each with two or four microprocessors. Fiber-optic cables from each NonStop Blade Element connect the PEs to the LSUs. The LSUs then connect to two independent ServerNet fabrics. These two connections provide communications redundancy in case one of the fabrics fails.
Triplex Processor

The triple modular redundant (TMR), or triplex, processor uses three NonStop Blade Elements: A, B, and C. As with the duplex processor, fiber-optic cables connect the PEs to the LSUs, and the LSUs then connect to the two independent ServerNet fabrics. Dual ServerNet fabrics provide communications redundancy in case one of the fabrics fails. For a description of the LSU functions, see Processor Synchronization and Rendezvous on page 4-8.
Processor Synchronization and Rendezvous

Synchronization and rendezvous at the LSUs perform two main functions:
• Keep the individual PEs in a logical processor in loose lock-step through a technique called rendezvous. Rendezvous occurs to:
  ° Periodically synchronize the PEs so they execute the same instruction at the same time. Synchronization accommodates the slightly different clock speed within each PE.
Failure Recovery for Triplex Processor

In triplex processors, each LSU has inputs from the three processor elements within a logical processor. As with the duplex processor, the LSU keeps the three PEs in loose lock-step. The LSU also checks the outputs from the three PEs.
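The LSU's output comparison and voting is implemented in hardware. Purely as an illustration of the voting idea it embodies, here is a majority-vote sketch in Python; the function name and return convention are assumptions, not the LSU's actual interface:

```python
from collections import Counter

def vote(outputs):
    """Majority-vote the outputs of the PEs in a logical processor.
    With three PEs (triplex), any two that agree outvote a divergent
    third, whose index is reported so it can be taken offline. With
    two PEs (duplex), a mismatch is detectable but not correctable by
    voting alone. Illustrative sketch only -- the real comparison is
    done in LSU hardware."""
    value, count = Counter(outputs).most_common(1)[0]
    if len(outputs) == 3 and count >= 2:
        dissenters = [i for i, v in enumerate(outputs) if v != value]
        return value, dissenters
    if len(outputs) == 2 and outputs[0] == outputs[1]:
        return outputs[0], []
    raise RuntimeError("no majority: PE outputs disagree")

print(vote([42, 42, 41]))  # prints (42, [2])
```

This is why a triplex processor can ride through a single divergent PE while a duplex processor must fall back on other recovery mechanisms when its two PEs disagree.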
Simplified ServerNet System Diagram

The ServerNet network architecture is full-duplex, packet-switched, and point-to-point in a star configuration. It employs two independent I/O communications paths throughout the system: the ServerNet X fabric and the ServerNet Y fabric. These dual paths ensure that no single failure disrupts communications among the remaining system components.
ServerNet Pathways in the P-Switch

This drawing shows the ServerNet communication pathways through the p-switch PICs and routers. Two p-switches are required in each system; one p-switch serves the ServerNet X fabric, and the other serves the ServerNet Y fabric. In this drawing, the nomenclature PIC n means the PIC in slot n. For example, PIC 4 is the PIC in slot 4.
ServerNet Pathways in the IOAM Enclosure

This drawing shows the ServerNet communication pathways through the routers in each ServerNet switch board to the ServerNet I/O adapters in each IOAM module.
Examples of ServerNet Pathways

This example shows the redundant routing and connection of the ServerNet X and Y fabrics within a simple example system. This example system includes:
• Four processors with their requisite four LSU optics adapters
• One IOAM enclosure connected to the PIC in slot 4 of each p-switch, making the IOAM enclosure group 110.
This example shows the ServerNet X fabric routing within an example system of:
• 16 processors with their requisite LSU optics adapters
• Four IOAMs with group numbers 110 through 113
• Four NonStop S-series I/O enclosures with group numbers 61 through 64

The IOAM enclosures can reside in the same or different cabinets, with MMF fiber-optic cables carrying the ServerNet communications between the IOAM enclosures and the p-switches.
If a cable, connection, router, or other failure occurs, only the system resources that are downstream of the failure on the same fabric are affected. Because of the redundant ServerNet architecture, communication takes the alternate path on the other fabric to the peer resources.
This example shows a logical representation of a complete typical 4-processor triplex system with the X and Y fabrics:

[Figure: X and Y ServerNet switches linking the processors to IOAM enclosures 2 and 3, each with adapters for Fibre Channel disk modules (FC-AL A and B), SCSI and Fibre Channel tape through an FC-SCSI router, ESS, and high-speed Ethernet, plus S-series I/O enclosures]
This example shows a logical representation of a triplex 16-processor NonStop Blade Complex (NSBC) with the associated NonStop Blade Elements (NSBEs) and their Blade optics adapters (BOAs), the LSUs, and the p-switch for the X ServerNet fabric:

[Figure: NSBE groups 400 and 401, modules 1A, 1B, and 1C, each with BOAs in slots 1 and 2 (connectors Q, R, S, T), cabled to the LSUs and the X-fabric p-switch]
This example shows a logical representation of a triplex 16-processor NonStop Blade Complex (NSBC) with the associated NSBEs and their BOAs, the LSUs, and the p-switch for the Y ServerNet fabric:

[Figure: NSBE groups 400 and 401, modules 1A, 1B, and 1C, each with BOAs in slots 1 and 2, cabled to the LSUs and the Y-fabric p-switch]
System Architecture

This example shows elements of an Integrity NonStop NS16000 system with four triplex processors:

[Figure: four triplex processors connected through processor switches on the X and Y ServerNet fabrics to an I/O adapter module and S-series I/O enclosures, with Fibre Channel disk modules, Fibre Channel, and high-speed Ethernet ServerNet adapters]
Modular Hardware

Hardware for Integrity NonStop NS16000 systems is implemented in modules (enclosures installed in modular cabinets). For descriptions of the modular hardware components, see Section 5, Modular System Hardware. All Integrity NonStop NS16000 server components are field-replaceable units (FRUs) that can be serviced only by service providers trained by HP.
Default Startup Characteristics

Each system ships with these default startup characteristics:

• $SYSTEM disks residing in one of these two locations:
  ° In a Fibre Channel disk module connected to IOAM enclosure group 110, with the disks in these locations:

    Path      IOAM Group   IOAM Module   FCSA Slot   SAC   Shelf   Bay
    Primary   110          2             1           1     1       1
    Backup    110          3             1           1     1       1
    Mirror    110          3             1           2     1       1
Load Path   Description     Source Disk   Destination Processor   ServerNet Fabric   (page 2 of 2)
14          Mirror          $SYSTEM-M     1                       Y
15          Mirror backup   $SYSTEM-M     1                       X
16          Mirror backup   $SYSTEM-M     1                       Y

This illustration shows the system load paths:

[Figure: primary and mirror Fibre Channel disk modules connected through FCSAs in IOAM enclosure group 110, with ServerNet switch boards on the X and Y fabrics leading to processors 0 and 1]
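Assuming the enumeration order implied by load paths 14 through 16 (destination processor varies slowest, then source disk, then ServerNet fabric), the 16 load paths can be generated as follows; the tuple layout is illustrative:

```python
from itertools import product

# Source-disk descriptions in table order; the enumeration order
# (processor slowest, then disk, then fabric) is an assumption
# inferred from load paths 14-16 above.
DISKS = ["Primary", "Primary backup", "Mirror", "Mirror backup"]

load_paths = [
    (n + 1, disk, proc, fabric)
    for n, (proc, disk, fabric) in enumerate(product((0, 1), DISKS, "XY"))
]

print(load_paths[13])  # prints (14, 'Mirror', 1, 'Y')
```

The 4 source disks x 2 destination processors x 2 fabrics yield the 16 load paths the table enumerates.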
Migration Considerations

This subsection provides information about migrating from NonStop S-series systems to Integrity NonStop NS16000 systems:

Topic                                                              Page
Migrating Applications                                             4-23
Migration Considerations                                           4-23
Migrating Hardware Products to Integrity NonStop NS16000 Servers   4-24
Other Manuals Containing Software Migration Information            4-24

Migrating Applications

The H-Series Application Migration Guide
Migrating Hardware Products to Integrity NonStop NS16000 Servers

Connecting NonStop S-series hardware to an Integrity NonStop NS16000 server is only one step in the overall migration. Software and application changes might be required to complete the migration. Any hardware migration should be planned as part of the overall application and software migration tasks.
• System Installation Document Packet
• H06.xx README
• Interactive Upgrade Guide

2. If you are moving a NonStop S-series I/O enclosure from a NonStop S-series system to an Integrity NonStop NS16000 system and want to migrate the data online, you can perform a migratory revive if:
  ° Your data is mirrored.
  ° You have another NonStop S-series system or NonStop S-series I/O enclosure connected to the NonStop S-series system.
Each ServerNet cable is recorded with:
• Source and destination enclosure, component, and connector
• Cable part number
• Source and destination connection labels

ServerNet Adapter Configuration Forms

ServerNet adapters can include the Fibre Channel ServerNet adapter (FCSA) and the Gigabit Ethernet 4-port ServerNet adapter (G4SA) that are installed in the Integrity NonStop NS16000 system, or the various ServerNet adapters installed in
5 Modular System Hardware

This section describes the hardware used in Integrity NonStop NS16000 systems:

Topic                                   Page
Modular Hardware Components             5-1
Component Location and Identification   5-28
NonStop S-Series I/O Enclosures         5-36

Modular Hardware Components

These hardware components can be part of an Integrity NonStop NS16000 system:

Topic                                Page
Modular Cabinets                     5-3
Power Distribution Units (PDUs)      5-5
NonStop Blade Element                5-10
Logical Synchronization Unit (LSU)   5-13
Processor Switch                     5-15
All Integrity NonStop NS16000 server components are field-replaceable units (FRUs) that can be serviced only by service providers trained by HP. For information about installing the Integrity NonStop NS16000 server hardware, refer to the NonStop NS16000 Hardware Installation Manual.
Modular Cabinets

The modular cabinet is an EIA standard 19-inch, 42U rack for mounting modular components. The modular cabinet comes equipped with front and rear doors and includes a rear extension that makes it deeper than some industry-standard racks. The power distribution units (PDUs) are mounted along the rear extension without occupying any U-space in the cabinet and are oriented inward, facing the components within the modular cabinet.
North America and Japan: 200 to 240 V AC Input, Single-Phase, 40A RMS Power
• EIA standard 19-inch rack with 42U of rack space
• Geography: North America and Japan
• Recommended for most configurations
• Includes 2 power distribution units (PDUs) in a zero-U rack design
• PDU input characteristics:
  ° 200 to 240 V AC, single phase, 40A RMS, 3-wire
  ° 50/60 Hz
  ° Non-NEMA locking CS8265C, 50A input plug
  ° 6.
International: 200 to 240 V AC Input, Single-Phase, 32A RMS Power
• EIA standard 19-inch rack with 42U of rack space
• Geography: International
• Recommended for most configurations
• Harmonized power cord
• Includes 2 power distribution units (PDUs) in a zero-U rack design
• PDU input characteristics:
  ° 200 to 240 V AC, single phase, 32A RMS, 3-wire
  ° 50/60 Hz
  ° IEC309 3-pin, 32A input plug
  ° 6.
Power Distribution Units (PDUs)

Each PDU in a modular cabinet has:
• 36 AC receptacles per PDU (12 per segment), IEC 320 C13 10A receptacle type
• 3 AC receptacles per PDU (1 per segment), IEC 320 C19 12A receptacle type
• 3 circuit breakers

These PDU options are available to receive power from the site AC power source:
• 208 V AC, three-phase delta for North America and Japan
• 200 to 240 V AC, single phase for North America and Japan
• 380 to 415 V AC, three-phase wye for
This illustration shows the AC power feed cables on PDUs for AC feed at the top of the cabinet:

[Figure: two PDUs mounted along the rear of the 42U cabinet, with AC power cords routed from the top of the cabinet to the AC power source or site UPS]
If your system includes the optional rackmounted HP R5500 XR UPS, the modular cabinet will have one PDU located on the rear left side and four extension bars on the rear right side. The PDU and extension bars are oriented inward, facing the components within the modular cabinet. To provide redundancy, components are plugged into the left-side PDU and the extension bars. Each extension bar is plugged into the UPS.
This illustration shows the AC power feed cables for the PDU and UPS for AC power feed from the bottom of the cabinet when the optional UPS and ERM are installed:

[Figure: a 42U cabinet with one PDU on the rear left side and extension bars on the rear right side in place of a second PDU; the extension bars plug into the rackmounted UPS, with the extended runtime module (ERM) below it, and the AC feed entering from the bottom of the cabinet]
NonStop Blade Element

The NonStop Blade Element enclosure, which is 5U high and weighs 112 pounds (50.8 kilograms), has these physical attributes:
• Rackmountable
• Redundant AC power feeds
• Front-to-rear cooling
• Cable connections at the rear (power, reintegration, LSU), with cable management equipment on the rear of the cabinet

Each NonStop Blade Element includes these field-replaceable units (FRUs):
• Processor board with up to four Itanium
This illustration shows the rear of the NonStop Blade Element, equipped with two power supplies and eight Blade optics adapters:

[Figure: rear of the NonStop Blade Element enclosure, showing reintegration connectors Q, R, S, T (cables to peer NSBEs), the AC power connector, two power supplies, and Blade optics adapters J (J0 through J8) and K (K0 through K8) with cables to the LSU]
However, to help reduce the complexity of cable connections, HP recommends that you use a physically sequential order of slots for the fiber-optic cable connections on the LSU rather than randomly mixing LSU slots. Cable connections to the LSU have no bearing on the NonStop Blade Complex number, but HP also recommends that you connect NonStop Blade Element A to the NonStop Blade Element A connection on the LSU.
Front Panel Indicator LEDs

Power LED
  Flashing green: Power is on; NonStop Blade Element is available for normal operation.
  Flashing yellow: NonStop Blade Element is in power mode.
  Off: Power is off.
Fault LED
  Steady amber: Hardware or software fault exists.
  Off: NonStop Blade Element is available for normal operation.
Locator LED
  Flashing blue: System locator is activated.
Logical Synchronization Unit (LSU)
The LSU module consists of three types of FRUs:
• LSU logic board (accessible from the front of the LSU enclosure)
• LSU optics adapters (accessible from the rear of the LSU enclosure)
• AC power assembly (accessible from the rear of the LSU enclosure)
Caution. To maintain proper cooling air flow, blank panels must be installed in all slots that do not contain logic adapter PICs or logic boards.
LSU Indicator LEDs

LSU optics adapter PIC (green LED)
  Green: Power is on; LSU is available for normal operation.
  Off: Power is off.
LSU optics adapter PIC (amber LED)
  Amber: Power-on is in progress, the board is being reset, or a fault exists.
  Off: Normal operation or powered off.
LSU optics adapter connectors (green LEDs)
  Green: NonStop Blade Element optics link or ServerNet link is functional.
Processor Switch
• ServerNet I/O PICs (slots 4 to 9): provide 24 ServerNet 3 connections to one or more IOAMs and to optional NonStop S-series I/O enclosures
• Processor I/O PICs (slots 10 to 13): connect to the LSU for ServerNet 3 I/O with the processors
• Cable management and connectivity on the rear of the cabinet
Caution. To maintain proper cooling air flow, blank panels must be installed in all slots that do not contain PICs.
P-Switch Indicator LEDs
This illustration shows the front of the p-switch. (Figure labels: PWR, SPON, and FAN indicators; 100/10 ENET maintenance port; display; connectors 1 through 4.)
Processor Numbering Modular System Hardware Processor Numbering Connection of the ServerNet cables from the LSU to the PICs in p-switch slots 10 through 13 determines the number of the associated logical processor. For more information, see LSUs to Processor Switches and Processor IDs on page 6-7. This example of a triplex processor shows the ServerNet cabling to the p-switch PIC in slot 10 that defines processors 0, 1, 2, and 3.
Modular System Hardware I/O Adapter Module (IOAM) Enclosure and I/O Adapters I/O Adapter Module (IOAM) Enclosure and I/O Adapters An IOAM provides the Integrity NonStop NS16000 system with its system I/O using Gigabit 4-port Ethernet ServerNet adapters (G4SAs) for LAN connectivity and Fibre Channel ServerNet adapters (FCSAs) for storage connectivity.
This illustration shows the front and rear of the IOAM enclosure and details. (Figure labels: maintenance connector (100 Base-T RJ-45); ServerNet links from the p-switch (MMF LC connectors); ServerNet switch board (module 2, slot 14); IOAM modules 2 and 3, each with slots 1 through 5; fans in modules 2 and 3, slots 16 and 17; power supplies.)
IOAM Enclosure Indicator LEDs

ServerNet switch board
  Power LED: Green: Power is on; board is available for normal operation. Off: Power is off.
  Fault LED: Amber: A fault exists. Off: Normal operation or powered off.
  Link LED: Green: Link is functional. Off: Link is not functional.
LCD display
  Shows status messages as displayed.
ServerNet ports
  Green: ServerNet link is functional. Off: ServerNet link is not functional.
This illustration shows the front of an FCSA. (Figure labels: Fibre Channel ports 1 and 2 with 2Gb/1Gb speed indicators; Ethernet ports C and D, not available for FCSA.) FCSAs are installed in pairs and can reside in slots 1 through 5 of either IOAM (module 2 or 3) in an IOAM enclosure.
I/O Adapter Module (IOAM) Enclosure and I/O Adapters Modular System Hardware Gigabit Ethernet 4-Port ServerNet Adapter The Gigabit Ethernet 4-port ServerNet adapter (G4SA) provides gigabit connectivity to Ethernet LANs. G4SAs can reside in slots 1 through 5 of each IOAM module.
A G4SA complies with the 1000 Base-T standard (802.3ab), the 1000 Base-SX standard (802.3z), and these Ethernet LANs:
• 802.3 (10 Base-T)
• 802.1Q (VLAN tag-aware switch)
• 802.3u (Auto negotiate)
• 802.3x (Flow control)
• 802.3u (100 Base-T and 1000 Base-T)
For detailed information on the G4SA, see the NonStop Gigabit Ethernet 4-Port Installation and Support Guide.
Maintenance Switch (Ethernet)
The ProCurve 2524 maintenance switch includes the management features that NSAA requires. It provides communication among the switch boards in the p-switches and IOAM enclosure, the optional UPS, and the system console running HP NonStop Open System Management (OSM). The maintenance switch includes 24 ports, which is enough capacity to support multiple systems.
UPS and ERM (Optional) Modular System Hardware UPS and ERM (Optional) An uninterruptible power supply (UPS) is optional but recommended where a site UPS is not available. You can use any UPS that meets the modular cabinet power requirements for all enclosures being powered by the UPS. One UPS option is the HP R5500 XR UPS. For information about the requirements for installing a UPS other than the HP R5500 XR UPS in an Integrity NonStop NS16000 system, see Uninterruptible Power Supply (UPS) on page 2-3.
System Console Modular System Hardware For power and environmental requirements, planning, installation, and emergency power-off (EPO) instructions for the R5500 XR UPS, refer to the documentation shipped with the UPS. System Console A system console is a personal computer (PC) running maintenance and diagnostic software for NonStop systems. When supplied with a new NonStop system, system consoles have factory-installed HP and third-party software for managing the system.
This illustration shows an example of connections between two IOAM enclosures and an ESS through separate Fibre Channel switches. (Figure: two Fibre Channel switches, each linking FCSAs in the two IOAM enclosures to the ESS.) For fault tolerance, the primary and backup paths to an ESS logical device (LDEV) must go through different Fibre Channel switches.
Terminology
These terms are used in locating and describing components:
  Cabinet: Computer system housing that includes a structure of external panels, front and rear doors, internal racking, and dual PDUs.
  Rack: Structure integrated into the cabinet into which rackmountable components are assembled.
Rack and Offset Physical Location
On Integrity NonStop NS16000 systems, locations of the physical and logical modular components are identified by:
• Physical location: rack number and rack offset
• Logical location: GMS notation determined by the position of the component on ServerNet
In NonStop S-series systems, group, module, and slot (GMS) notation identifies the physical location of a component.
NonStop Blade Element Group-Module-Slot Numbering
• Processor group: 400 through 403 relates to NonStop Blade Complex 0 through 3. Example: group 403 = NonStop Blade Complex 3.
• Module: 1 through 3 relates to NonStop Blade Element ID A through C. Example: module 2 = NonStop Blade Element B.
• Slot: 71 through 78 relates to the location of the Blade optics adapter.
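The group and module rules above are simple offsets, so they can be sketched as code. This is an illustrative helper only; the function names are mine, not part of any HP software:

```python
# Sketch of the NonStop Blade Element GMS numbering described above.
# Groups 400-403 map to NonStop Blade Complex 0-3; modules 1-3 map to
# NonStop Blade Element IDs A-C.

def blade_complex_from_group(group: int) -> int:
    """Return the NonStop Blade Complex number for a processor group."""
    if not 400 <= group <= 403:
        raise ValueError("NonStop Blade Element groups are 400 through 403")
    return group - 400

def blade_element_id_from_module(module: int) -> str:
    """Return the NonStop Blade Element ID (A-C) for a module number."""
    if not 1 <= module <= 3:
        raise ValueError("modules are 1 through 3")
    return "ABC"[module - 1]

# Examples from the text:
assert blade_complex_from_group(403) == 3      # group 403 = Blade Complex 3
assert blade_element_id_from_module(2) == "B"  # module 2 = Blade Element B
```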
LSU Group-Module-Slot Numbering
This table shows the default numbering for the LSUs:
  Group (NonStop Blade Complex)¹: 400 - 403 for an individual LSU J set; the individual LSU K set is not used at this time.
  Module: 100 + NonStop Blade Complex number.
  I/O position (slot): 1 = optics adapter (rear side, slots 20-27); 2 = logic board (front side, slots 50-57).
1. See NonStop Blade Element Group-Module-Slot Numbering on page 5-31.
Processor Switch Group-Module-Slot Numbering
This table shows the default numbering for the p-switch:
  Group: 100
  Module: 2 = X ServerNet module; 3 = Y ServerNet module
  Slots:
    1: Maintenance PIC
    2: Cluster PIC
    3: Crosslink PIC
    4-9: ServerNet I/O PICs
    10: ServerNet PIC (processors 0-3)
    11: ServerNet PIC (processors 4-7)
    12: ServerNet PIC (processors 8-11)
    13: ServerNet PIC (processors 12-15)
    14: P-switch logic board
    15, 18: Power supplies
IOAM Enclosure Group-Module-Slot Numbering
These tables show the default numbering for the IOAM enclosure:
  IOAM group, p-switch PIC slot, and PIC port numbers:
    110: slot 4, ports 1-4
    111: slot 5, ports 1-4
    112: slot 6, ports 1-4
    113: slot 7, ports 1-4
    114: slot 8, ports 1-4
    115: slot 9, ports 1-4
  IOAM group, X ServerNet module, Y ServerNet module, slot, item, port: 110 - 115 (See preceding table.
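The IOAM group assignment in the first table is a fixed offset from the p-switch PIC slot, which can be expressed as a one-line rule. The helper below is my own illustrative sketch of that rule, not HP tooling:

```python
# Default IOAM group numbering from the table above: p-switch ServerNet
# I/O PIC slots 4-9 correspond to IOAM groups 110-115 (group = 106 + slot),
# each group using PIC ports 1-4.

def ioam_group_from_pic_slot(slot: int) -> int:
    """Return the default IOAM group for a p-switch ServerNet I/O PIC slot."""
    if not 4 <= slot <= 9:
        raise ValueError("ServerNet I/O PICs occupy p-switch slots 4 through 9")
    return 106 + slot

assert ioam_group_from_pic_slot(4) == 110
assert ioam_group_from_pic_slot(9) == 115
```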
Fibre Channel Disk Module Group-Module-Slot Numbering
This table shows the default numbering for the Fibre Channel disk module:
  IOAM enclosure group: 110-115
  Module: 2 = X fabric; 3 = Y fabric
  FCSA slot: 1-5; F-SACs: 1, 2
  Shelf: 1 - 4 if daisy-chained; 1 if single disk enclosure
  Slot and item:
    0: Fibre Channel disk module
    1-14: disk drive bays
    89: transceiver A1
    90: transceiver A2
    91: transceiver B1
    92: transceiver B2
    93
NonStop S-Series I/O Enclosures
Topics discussed in this subsection are:
  IOMF 2 CRU (page 5-37)
  NonStop S-Series Disk Drives and ServerNet Adapters (page 5-37)
  NonStop S-Series I/O Enclosure Group Numbers (page 5-37)
NonStop S-series I/O enclosures can be connected to Integrity NonStop NS16000 systems to retain not only previously installed hardware but also data stored on disks mounted in the NonStop S-series I/O enclosures.
IOMF 2 CRU
Each p-switch (for the X or Y fabric) has up to six I/O PICs, with one I/O PIC required for each IOAM enclosure in the system. Each NonStop S-series I/O enclosure uses one port of one PIC, so a maximum of 24 NonStop S-series I/O enclosures can be connected to an Integrity NonStop NS16000 system if no IOAM enclosure is installed.
NonStop S-Series I/O Enclosure Group Numbers
This table shows the group number assignments for the NonStop S-series I/O enclosures (the PIC slot applies to both the X-fabric and Y-fabric p-switch):
  PIC slot 4: connectors 1-4 = groups 11, 12, 13, 14
  PIC slot 5: connectors 1-4 = groups 21, 22, 23, 24
  PIC slot 6: connectors 1-4 = groups 31, 32, 33, 34
  PIC slot 7: connectors 1-4 = groups 41, 42, 43, 44
  PIC slot 8: connectors 1-4 = groups 51, 52, 53, 54
  PIC slot 9: connectors 1-4 = groups 61, 62, 63, 64
HP Integrity NonStop NS16000 Planning Guide — 529567-009
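The group assignments above follow a regular pattern, which can be captured in a short sketch. The function name is mine and purely illustrative:

```python
# S-series I/O enclosure group numbers from the table above: for p-switch
# PIC slots 4-9 and PIC connectors 1-4, the group is
# 10 * (slot - 3) + connector, giving groups 11 through 64.

def sseries_group(pic_slot: int, connector: int) -> int:
    """Return the group number for an S-series enclosure on a given PIC port."""
    if not 4 <= pic_slot <= 9:
        raise ValueError("I/O PIC slots are 4 through 9")
    if not 1 <= connector <= 4:
        raise ValueError("PIC connectors are 1 through 4")
    return 10 * (pic_slot - 3) + connector

assert sseries_group(4, 1) == 11  # first row of the table
assert sseries_group(9, 4) == 64  # last row of the table
```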
This illustration shows the group number assignments on the p-switch. (Figure labels: ENET, SPON, and SER connectors; PIC slots 4 through 9, each with connectors 1-4, cabling to I/O enclosure groups 11-14, 21-24, 31-34, 41-44, 51-54, and 61-64; PIC slots 10 through 13.)
6 System Configuration Guidelines
This section provides guidelines for Integrity NonStop NS16000 system configurations:
  Enclosure Locations in Cabinets (page 6-2)
  Internal ServerNet Interconnect Cabling (page 6-3)
  P-Switch to NonStop S-Series I/O Enclosure Cabling (page 6-16)
  IOAM Enclosure and Disk Storage Considerations (page 6-19)
  Fibre Channel Devices (page 6-19)
  G4SAs to Networks (page 6-34)
  Default Naming Conventions (page 6-35)
  PDU Strapping Configurations (page 6-36)
Integrity NonStop NS16000 systems use a flexible modular architecture.
Enclosure Locations in Cabinets System Configuration Guidelines For other example configurations, see Section 7, Examples of Configurations. Enclosure Locations in Cabinets In this table, the enclosure location refers to the U in the cabinet where the lower edge of the enclosure resides, such as the bottom of a NonStop Blade Element enclosure at 28U.
  IOAM enclosure (11U): any available 11U-high space, with the middle or upper Us of the modular cabinet preferred. Location is optional depending on the required mounting of other enclosures installed in the modular cabinet (UPS, NonStop Blade Elements, and so on).
  Fibre Channel disk module (3U): any available 3U-high space. Restricted service clearances might exist with a Fibre C
Cable Labeling
To identify correct cable connections to factory-installed hardware, every interconnect cable has a plastic label affixed to each end. These labels identify the enclosure and connector to which each end of the cable connects. Extra sheets of preprinted labels that you can fill in are also provided.
System Configuration Guidelines Cable Management System Cable Management System Integrity NonStop NS16000 systems include the cable management system (CMS) to protect all power, fiber-optic, and CAT5 Ethernet cables within the systems. The CMS maintains a 25 millimeter (1 inch) minimum bend radius for the fiber-optic cables and provides strain relief for all cables.
Dedicated Service LAN Cables System Configuration Guidelines Fiber-optic cables use either LC or SC connectors at one or both ends. This illustration shows the connector pair for an LC fiber-optic cable: VST598.vsd This illustration shows the connector pair for an SC fiber cable: VST599.vsd Dedicated Service LAN Cables The system also uses Category 5, unshielded twisted-pair Ethernet cables for the internal dedicated service LAN and for connections between the G4SA and the application LAN equipment.
Although considerable cable length can exist between the modular enclosures in the system, HP recommends placing all cabinets adjacent to each other and bolting them together, keeping the cable length between enclosures as short as possible.
Internal Cable Product IDs
For product IDs, see Internal Cables on page A-1.
This table lists the default p-switch PIC slot and port coupling to the processor number:
  PIC slot 10: ports 1-4 = processors 0, 1, 2, 3
  PIC slot 11: ports 1-4 = processors 4, 5, 6, 7
  PIC slot 12: ports 1-4 = processors 8, 9, 10, 11
  PIC slot 13: ports 1-4 = processors 12, 13, 14, 15
The four cabling diagrams on the next pages illustrate the default configuration and connections for a triplex system processor.
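The slot-and-port coupling above reduces to a single formula, sketched here for planning purposes. The helper name is mine, not an HP utility:

```python
# Default processor numbering from the table above: p-switch PIC slots
# 10-13, ports 1-4, give logical processors 0-15 as
# processor = (slot - 10) * 4 + (port - 1).

def processor_number(pic_slot: int, port: int) -> int:
    """Return the logical processor number for a PIC slot and port."""
    if not 10 <= pic_slot <= 13:
        raise ValueError("processor I/O PICs occupy p-switch slots 10 through 13")
    if not 1 <= port <= 4:
        raise ValueError("PIC ports are 1 through 4")
    return (pic_slot - 10) * 4 + (port - 1)

assert processor_number(10, 1) == 0   # slot 10, port 1 = processor 0
assert processor_number(13, 4) == 15  # slot 13, port 4 = processor 15
```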
This figure shows example connections to the default configuration of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 on the p-switch PIC in slot 10, which defines triplex processor numbers 0 to 3.
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 11 for triplex processor numbers 4 to 7. (Figure labels: p-switch slot 11, ports 1 to 4 = processors 4 to 7; X-fabric and Y-fabric processor switches.)
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 12 for triplex processor numbers 8 to 11. (Figure labels: p-switch slot 12, ports 1 to 4 = processors 8 to 11; X-fabric and Y-fabric processor switches.)
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 13 for triplex processor numbers 12 to 15. (Figure labels: p-switch slot 13, ports 1 to 4 = processors 12 to 15; X-fabric and Y-fabric processor switches.)
Processor Switch ServerNet Connections System Configuration Guidelines Processor Switch ServerNet Connections ServerNet connections to the system I/O devices (storage disk and tape drive as well as Ethernet communication to networks) radiate out from the p-switches for both the X and Y ServerNet fabrics to the IOAMs in one or more IOAM enclosures.
Processor Switches to IOAM Enclosures
• Each port on the p-switch PIC must connect to the same numbered port on the IOAM enclosure's ServerNet switch board (port 1 to port 1, port 2 to port 2, and so forth).
• Connections to an IOAM enclosure cannot coexist on the same p-switch PIC with connections to a NonStop S-series I/O enclosure.
FCSA to Fibre Channel Disk Modules
See Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module on page 6-25.
FCSA to Tape Devices
Fibre Channel tape devices can be connected directly to an FCSA in an IOAM enclosure. Integrity NonStop NS16000 systems do not support SCSI buses or adapters to connect tape devices.
System Configuration Guidelines P-Switch to NonStop S-Series I/O Enclosure Cabling P-Switch to NonStop S-Series I/O Enclosure Cabling Each NonStop S-series I/O enclosure uses one port of one PIC in each of the two p-switches for ServerNet connection. If no IOAM enclosure is installed in the system, up to 24 NonStop S-series I/O enclosures can be connected to an Integrity NonStop NS16000 system via these ServerNet links.
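The 24-enclosure maximum follows from the port arithmetic in the text: six ServerNet I/O PICs per p-switch, four ports per PIC, and (since IOAM and S-series connections cannot share a PIC) one whole PIC consumed per installed IOAM enclosure. The sketch below is my own back-of-the-envelope check, not an HP planning tool:

```python
# Capacity arithmetic derived from the text: each p-switch has six
# ServerNet I/O PICs (slots 4-9), each with four ports. Every installed
# IOAM enclosure consumes one whole PIC, leaving the remaining PIC ports
# for NonStop S-series I/O enclosures (one port per enclosure).

IO_PICS_PER_PSWITCH = 6
PORTS_PER_PIC = 4

def max_sseries_enclosures(ioam_enclosures: int) -> int:
    """Return remaining S-series I/O enclosure capacity for a system."""
    if not 0 <= ioam_enclosures <= IO_PICS_PER_PSWITCH:
        raise ValueError("a system supports at most six IOAM enclosures")
    free_pics = IO_PICS_PER_PSWITCH - ioam_enclosures
    return free_pics * PORTS_PER_PIC

assert max_sseries_enclosures(0) == 24  # matches the stated 24-enclosure maximum
assert max_sseries_enclosures(6) == 0   # all PICs used by IOAM enclosures
```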
This illustration shows the cables from the NonStop S-series IOMF 2 CRUs connected to port 1 of the PICs in slot 4 of the X-fabric and Y-fabric p-switches, assigning the group number of 11. (Figure labels: fiber-optic ServerNet cables; power-on cables; p-switch ServerNet I/O PICs; X-fabric and Y-fabric p-switches; NonStop S-series I/O enclosure.)
• Disk drives and ServerNet adapters (except SEB and MSEB CRUs) used in NonStop S-series I/O enclosures, as well as devices that are downstream of these enclosures, are compatible with NonStop NS-series hardware. For information about the disk drives and adapters, see the manual for that disk drive or adapter.
Caution.
IOAM Enclosure and Disk Storage Considerations
When deciding between one IOAM enclosure or two (or more), consider:
  One IOAM enclosure: high-availability and fault-tolerant attributes of NonStop S-series systems with I/O enclosures using tetra-8 and tetra-16 topologies.
  Two IOAM enclosures: greater availability because of multiple redundant ServerNet paths and FCSAs.
This illustration shows an FCSA with the indicators and ports that are used and not used in Integrity NonStop NS16000 systems. (Figure labels: Fibre Channel ports 1 and 2 with 2Gb/1Gb speed indicators; Ethernet ports C and D, not available for FCSA.)
This illustration shows the locations of the hardware in the Fibre Channel disk module and the Fibre Channel port connectors at the back of the enclosure. (Figure labels: FC-AL ports A1, A2, B1, and B2; EMU; disk drive bays 1-14.) Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop (FC-AL) cables.
Factory-Default Disk Volume Locations
This illustration shows the factory-default locations of the primary and mirror system disk volumes, which reside in separate Fibre Channel disk modules: $SYSTEM (bay 1), $DSMSCM (bay 2), $AUDIT (bay 3), $OSS (bay 4). FCSA location and cable connections vary according to the various controller and Fibre Channel disk module combinations.
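For scripting or checklist purposes, the factory-default bay assignments above can be captured as a simple lookup. The data structure is mine; the volume names and bay numbers come from the guide:

```python
# Factory-default system disk volume locations (front of the primary
# Fibre Channel disk module), expressed as volume -> disk drive bay.

DEFAULT_VOLUME_BAYS = {
    "$SYSTEM": 1,
    "$DSMSCM": 2,
    "$AUDIT": 3,
    "$OSS": 4,
}

assert DEFAULT_VOLUME_BAYS["$SYSTEM"] == 1
assert sorted(DEFAULT_VOLUME_BAYS.values()) == [1, 2, 3, 4]
```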
System Configuration Guidelines • Configuration Recommendations for Fibre Channel Devices The mirror path and mirror backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message.
• In systems with one IOAM enclosure:
  ° With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in module 2 of the IOAM enclosure, and the backup FCSA resides in module 3. (See the example configuration in Two FCSAs, Two FCDMs, One IOAM Enclosure on page 6-26.
Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module
These subsections show various example configurations of FCSA controllers and Fibre Channel disk modules with IOAM enclosures.
Note. Although it is not a requirement for fault tolerance to house the primary and mirror disk drives in separate FCDMs, HP recommends doing so.
Two FCSAs, Two FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the two FCSAs and the primary and mirror Fibre Channel disk modules. (Figure: primary and mirror FCDMs linked to the two FCSAs by Fibre Channel cables.)
Four FCSAs, Four FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the four FCSAs and the two sets of primary and mirror Fibre Channel disk modules. (Figure: primary and mirror FCDMs 1 and 2 linked to the four FCSAs.)
Two FCSAs, Two FCDMs, Two IOAM Enclosures
This illustration shows example cable connections between the two FCSAs, split between two IOAM enclosures, and one set of primary and mirror Fibre Channel disk modules. (Figure: each IOAM enclosure holds one FCSA, cabled to the primary and mirror FCDMs.)
Four FCSAs, Four FCDMs, Two IOAM Enclosures
This illustration shows example cable connections between the four FCSAs, split between two IOAM enclosures, and two sets of primary and mirror Fibre Channel disk modules. (Figure: each IOAM enclosure holds two FCSAs, cabled to primary and mirror FCDMs 1 and 2.)
Daisy-Chain Configurations
When planning for possible use of daisy-chained disks, consider:
  Daisy-chained disks recommended: cost-sensitive storage and applications using low-bandwidth disk I/O.
  Daisy-chained disks not recommended: many volumes in a large Fibre Channel loop.
  Requirements for daisy-chain¹:
This illustration shows an example of cable connections between the two FCSAs and four Fibre Channel disk modules in a single daisy-chain configuration. (Figure labels: FCDMs 1 through 4 with A side and B side; ID expanders; terminators; fiber-optic cables to the FCSAs in the IOAM enclosure.)
Four FCSAs, Three FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the four FCSAs and three Fibre Channel disk modules, with the primary and mirror drives split within each Fibre Channel disk module. (Figure: FCDM 1 holds primary 1 and mirror 2; FCDM 2 holds primary 2 and mirror 3; FCDM 3 holds primary 3 and mirror 1.)
This illustration shows the factory-default locations for the configurations of four FCSAs and three Fibre Channel disk modules, where the primary system file disk volumes are in Fibre Channel disk module 1: $SYSTEM (bay 1), $DSMSCM (bay 2), $AUDIT (bay 3), $OSS (bay 4).
G4SAs to Networks
The G4SA provides Gigabit connectivity between Integrity NonStop NS16000 systems and Ethernet LANs. The G4SA is an end node on the ServerNet and uses either fiber-optic or copper cable for connectivity to user application LANs, as well as for the dedicated service LAN. For more information on the G4SA, see Gigabit Ethernet 4-Port ServerNet Adapter on page 5-23 or the Gigabit 4-Port ServerNet Adapter Installation and Support Guide.
This illustration shows a conceptual example of copper and fiber-optic connectivity to the various LANs. (Figure labels: IOAM enclosure with G4SA; application LAN connections at 10/100/1000 Mbps over fiber and copper; maintenance switch connections at 10/100 Mbps over copper, leading to the operations LAN.)
  TCP/IP process: $ZTC + number. Example: $ZTC0, the first TCP6SAM or TCP/IP process for the system.
  Telserv process: $ZTN + number. Example: $ZTN0, the first Telserv process for the system.
  Listener process: $LSN + number. Example: $LSN0, the first Listener process for the system.
  TFTP process: automatically created by WANMGR.
  WANBOOT process: automatically created by WANMGR.
  SWAN concentrator: S + n
7 Examples of Configurations
This section shows examples of configurations of the Integrity NonStop NS16000 hardware that can be installed in a modular cabinet. A number of other configurations are also possible because of the flexibility inherent in the NonStop advanced architecture and ServerNet.
Note. Hardware configuration drawings in this section represent the physical arrangement of the modular enclosures but do not show the location of the PDU junction boxes.
Typical configurations (duplex and triplex processors):
  IOAM enclosure: minimum 1, typical 2, maximum 6
  FCSA: minimum 2; up to 60 in a mixture set by disks and I/O
  G4SA: up to 20 in a mixture set by disks and I/O
  Fibre Channel disk module: minimum 2, typical 4, maximum 8
  Fibre Channel disk drives: minimum 14, typical 56, maximum 112
Duplex 8-Processor System, Two Cabinets
This duplex configuration has a maximum of eight logical processors with one IOAM enclosure and up to 12 Fibre Channel disk modules (four Fibre Channel disk modules in a typical system). (Figure: two-cabinet rack layout showing the IOAM enclosure, p-switches, console, and available space for additional FCDMs.)
Duplex 16-Processor System, Three Cabinets
This duplex configuration has a maximum of 16 logical processors with two IOAM enclosures and up to 14 Fibre Channel disk modules (one IOAM enclosure and eight Fibre Channel disk modules in a typical system). (Figure: three-cabinet rack layout.)
Duplex 16-Processor System, Two Cabinets
This duplex configuration has a maximum of 16 logical processors with one IOAM enclosure and four Fibre Channel disk modules. (Figure: two-cabinet rack layout showing the Fibre Channel disk modules, IOAM enclosure, p-switches, console, and LSU.)
Triplex 8-Processor System, Two Cabinets
This triplex configuration has a maximum of eight logical processors with one IOAM enclosure and ten Fibre Channel disk modules (four Fibre Channel disk modules in a typical system). (Figure: two-cabinet rack layout showing the IOAM enclosure, p-switches, NonStop Blade Elements, and available space for additional FCDMs.)
Triplex 16-Processor System, Three Cabinets
This triplex configuration has a maximum of 16 logical processors with one IOAM enclosure and ten Fibre Channel disk modules (eight Fibre Channel disk modules in a typical system). (Figure: three-cabinet rack layout showing the IOAM enclosure, p-switches, and NonStop Blade Elements.)
Example System With UPS and ERM
The UPS and ERM (two ERMs maximum) must reside in the bottom of the cabinet. (Figure: 42U cabinet layout with the UPS and ERM in the lowest positions.)
Example System With One NonStop S-Series I/O Enclosure
This illustration shows an Integrity NonStop NS-series system with Fibre Channel disk modules, an IOAM enclosure, fiber-optic ServerNet cables, power-on cables, p-switch ServerNet I/O PICs, X-fabric and Y-fabric p-switches, an LSU, NonStop Blade Elements A and B, and one NonStop S-series I/O enclosure.
Examples of Configurations Example 4-Processor Duplex System Cabling Example 4-Processor Duplex System Cabling This illustration shows an example 4-processor duplex system in a single cabinet. This simplified, conceptual representation shows the X and Y ServerNet cabling between the NonStop Blade Element, LSU, p-switch, and IOAM enclosures. To simplify the drawing, power and Ethernet cables are not shown. For cable-by-cable interconnect diagrams, see Internal ServerNet Interconnect Cabling on page 6-3.
The IOAM enclosure uses the two-controller, two-Fibre Channel disk module configuration shown in detail in Two FCSAs, Two FCDMs, One IOAM Enclosure on page 6-26. For details and instructions on connecting cables as part of the system installation, refer to the NonStop NS16000 Hardware Installation Manual.
Example 16-Processor Triplex System Cabling
The next two illustrations are an example of a 16-processor triplex system with four cabinets.
Components of the cable management system that are part of each modular enclosure and the modular cabinet are not shown, so actual cable routing is slightly different from that shown. For detailed information on cable routing and connection as part of the system installation, refer to the NonStop NS16000 Hardware Installation Manual.
A. Cables

Internal Cables

Available internal cables and their lengths are:

Cable Type  Connectors  Length (meters)  Length (feet)  Product ID
MMF         LC-LC       2                7              M8900-02
                        5                16             M8900-05
                        15               49             M8900-15
                        40               131            M8900-40
                        80               262            M8900-80
                        100              328            M8900-100
                        125¹             410¹           M8900-125
                        200¹             656¹           M8900-200
                        250¹             820¹           M8900-250
                        10               33             M8910-10
                        20               66             M8910-20
                        50               164            M8910-50
                        100              328            TBD
                        125¹             410¹           M8910-125
MMF         MTP         3                10             M8920-3
                        5                16             M8920-5
                        10               33             M8920-10
                        30               98             M8920-30
                        50               164            M8920-50
ServerNet Cluster Cables

These cables connect the Integrity NonStop NS16000 systems to a ServerNet cluster (zone) with Model 6780 NonStop ServerNet switches:

Cable Type  Connectors  Length (meters)  Length (feet)  Product ID
SMF         LC-LC       2                7              M8921-2
                        5                16             M8921-5
                        10               33             M8921-10
                        25               82             M8921-25
                        40               131            M8921-40
                        80               262            M8921-80
                        100              328            M8921-100

These cables connect the p-switches on the Integrity NonStop NS16000 to the Model 6770 NonStop ServerNet Cluster Switch:
Cable Length Restrictions

Maximum allowable lengths of cables connecting the modular system components are:

Connection                                      Fiber Type  Connectors  Maximum Length  Product ID
NonStop Blade Element to LSU enclosure          MMF         LC-LC       100 m           M8900-nnn¹
NonStop Blade Element to NonStop Blade Element  MMF         MTP         50 m            M8920-nnn¹
LSU enclosure to p-switch                       MMF         LC-LC       125 m           M8900-nnn¹
P-switch to p-switch crosslink                  MMF         LC-LC       125 m           M8900-nnn¹
P-switch to IOAM enclosure                      MMF         LC-LC       125 m           M8900-nnn¹
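The length limits in the table lend themselves to a simple pre-installation check. The sketch below is illustrative only (the link names and the `run_ok` function are not HP tooling); the meter values are copied from the table above:

```python
# Maximum cable-run lengths in meters, from the Cable Length
# Restrictions table. Link names here are invented shorthand.
MAX_LENGTH_M = {
    "blade_to_lsu": 100,       # NonStop Blade Element to LSU enclosure
    "blade_to_blade": 50,      # Blade Element to Blade Element (MTP)
    "lsu_to_pswitch": 125,     # LSU enclosure to p-switch
    "pswitch_crosslink": 125,  # p-switch to p-switch crosslink
    "pswitch_to_ioam": 125,    # p-switch to IOAM enclosure
}

def run_ok(link: str, planned_m: float) -> bool:
    """Return True if a planned cable run is within the allowed maximum."""
    return planned_m <= MAX_LENGTH_M[link]

print(run_ok("blade_to_blade", 40))   # within the 50 m MTP limit
print(run_ok("lsu_to_pswitch", 150))  # exceeds the 125 m limit
```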
Cable Management System

This illustration shows the CMS components for an example Integrity NonStop NS16000 server:

[Figure: cable management system components, including the cabinet cableway, cabinet spools, cabinet-to-cabinet half-spools, containment spools, cable tray, cable management arm, vertical radius guide (VRG), and cable management cartridge]
B. Control, Configuration, and Maintenance Tools

This section introduces the control, configuration, and maintenance tools used in Integrity NonStop NS16000 systems:

Topic                            Page
Support and Service Library      B-1
System Console                   B-1
Maintenance Architecture         B-6
Dedicated Service LAN            B-9
OSM                              B-23
System-Down OSM Low-Level Link   B-23
AC Power Monitoring              B-24
AC Power-Fail States             B-25

Support and Service Library

See Support and Service Library on page C-1.
System Console Configurations

Some system console hardware, including the PC system unit, monitor, and keyboard, can be mounted in the cabinet. Other PCs are installed outside the cabinet and require separate provisions or furniture to hold the PC hardware. System consoles communicate with Integrity NonStop NS16000 servers over a dedicated service local area network (LAN) or a secure operations LAN.
One System Console Managing One System (Setup Configuration)

[Figure: setup configuration showing the primary system console and maintenance switch, an optional DHCP DNS server, a modem connection to the remote service provider, an optional connection to a secure operations LAN (one or two connections), and the processor switches, FCSAs, and G4SAs]
One System Console Managing Multiple Systems

One OSM system console on the LAN must be configured as the primary system console. Because all servers are shipped with the same preconfigured IP addresses for MSP0, MSP1, $ZTCP0, and $ZTCP1, you must change these IP addresses for the second and subsequent servers before you can add them to the LAN.
This configuration is recommended. It is similar to the setup configuration, but for fault-tolerant redundancy, it includes a second maintenance switch, a backup system console, and a modem. The maintenance switches provide a dedicated LAN in which all nodes use the same subnet.

Note. A subnet is a network division within the TCP/IP model. Within a given network, each subnet is treated as a separate network.
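The same-subnet rule can be illustrated with Python's standard `ipaddress` module. The /24 mask below is an assumed example value, not an HP default; the two addresses are the factory-default console addresses listed under IP Addresses later in this section:

```python
# Sketch: every node on the dedicated service LAN must fall inside
# one subnet. The mask (/24) is an assumed example.
import ipaddress

lan = ipaddress.ip_network("192.231.36.0/24")

nodes = {
    "primary_system_console": "192.231.36.1",
    "backup_system_console": "192.231.36.4",
}

for name, addr in nodes.items():
    # Membership test: True when the address lies within the subnet.
    assert ipaddress.ip_address(addr) in lan, f"{name} is off-subnet"

print("all nodes share subnet", lan)
```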
Multiple System Consoles Managing Multiple Systems

The servers must have fault-tolerant dedicated service LAN connections to the Ethernet switches or hubs. In addition, the system consoles and the servers must be on the same subnet. If a server is configured to receive dial-ins, the server must occupy the same subnet as the system console receiving the dial-ins.
Fabrics Functional Element

As this illustration shows, the OSM console connects to a closed and private service LAN, and a maintenance switch (a ProCurve 2524) connects these modules together:

• Processor switch (p-switch)
• I/O adapter module (IOAM)

Other hardware modules contain at least one microprocessor and firmware that performs maintenance functions for their local logic:

• NonStop Blade Element
• Logical synchronization unit (LSU)
• Fibre Channel disk module
I/O Functional Element

The IOAM enclosure contains the functionality of two fabric functional elements. In addition, each IOAM enclosure is divided into two IOAMs, each of which contains five ServerNet adapters and one ServerNet switch board. (See I/O Adapter Module (IOAM) Enclosure and I/O Adapters on page 5-19.)
The ME firmware in each p-switch interacts with a logical processor only via ServerNet, using the SMIP ServerNet-based communication protocol to send command requests to a logical processor and respond to command requests from a logical processor. The p-switch ME firmware does not have any data regarding the number of PEs per NonStop Blade Element, NonStop Blade Elements per NonStop Blade Complex, or the LSUs.
Basic Configuration

An Integrity NonStop NS16000 system requires a dedicated LAN for system maintenance through OSM. Only components specified by HP can be connected to a dedicated LAN; no other access to the LAN is permitted. The dedicated service LAN provides connectivity between the OSM console running on a PC and the maintenance firmware in the system hardware. Note.
Fault-Tolerant Configuration

• One of the two IOAM enclosure ServerNet switch boards to each maintenance switch
• G4SA, E4SA, FESA, or GESA on the X fabric to the first maintenance switch
• G4SA, E4SA, FESA, or GESA on the X fabric to the second maintenance switch

Caution.
This illustration shows a fault-tolerant LAN configuration with two maintenance switches:

[Figure: fault-tolerant dedicated service LAN with two maintenance switches, primary and backup system consoles with modems to the remote service provider, an optional DHCP DNS server, optional connections to a secure operations LAN (one or two connections), and maintenance PIC Ethernet connections (slot 1, port 3)]
IP Addresses

Integrity NonStop NS16000 servers require Internet protocol (IP) addresses for these components that are connected to the dedicated service LAN:

• ServerNet switch boards in the p-switch
• ServerNet switch boards in the IOAM enclosure
• FESAs, G4SAs, E4SAs, and GESAs
• Maintenance switch
• System consoles
• OSM Low-Level Link
• OSM Service Connection
• UPS (optional)
• Fibre Channel to SCSI converter (optional)

These components have
Component                                                Location (Group.Module.Slot)  Default IP Address  Used By
Primary system console (rackmounted or stand-alone)      N/A                           192.231.36.1        OSM Low-Level Link
Backup system console (rackmounted only)                 N/A                           192.231.36.4        OSM Service Connection
Up to two additional system consoles (rackmounted only)  N/A                           192.231.36.5        OSM Notification Director
                                                                                       192.231.36.6        N/A
UPS (rackmounted only)                                   Rack 01                       192.231.36.
Static IP Addresses

Static IP addresses must be configured manually for each component. If a DHCP server already exists on the dedicated service LAN:

• Configure the static IP addresses to be in the same subnet as the dynamic IP addresses assigned by that server.
• If IOMF 2 CRUs are connected to the LAN, the static addresses for IOMF 2 CRUs must be in the same range as each other and as the DHCP server.
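As a rough sketch of these rules, the hypothetical helper below accepts a static address only if it falls inside the DHCP server's subnet and outside its dynamic pool. The subnet and pool bounds are invented example values, and the function is illustrative, not an HP utility:

```python
# Sketch: a manually configured static address should sit in the same
# subnet as the DHCP-assigned dynamic addresses, without colliding
# with the dynamic pool itself. All values here are invented examples.
import ipaddress

def static_ip_ok(addr, subnet, pool_start, pool_end):
    """True if addr is inside subnet but outside the dynamic pool."""
    ip = ipaddress.ip_address(addr)
    in_subnet = ip in ipaddress.ip_network(subnet)
    in_pool = (ipaddress.ip_address(pool_start) <= ip
               <= ipaddress.ip_address(pool_end))
    return in_subnet and not in_pool

print(static_ip_ok("192.231.36.200", "192.231.36.0/24",
                   "192.231.36.50", "192.231.36.150"))  # True
```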
Component                                               IP Address
IOAM enclosures                                         Dynamic
System consoles                                         Dynamic
Fibre Channel to SCSI router (model M8201R) (optional)  For IP address information, see the M8201R Fibre Channel to SCSI Router Installation and User’s Guide.
System-Up Dedicated Service LAN

When the system is up and the operating system is running, the ME connects to the NonStop NS-series system’s dedicated service LAN using one of the PIFs on each of two G4SAs. This connection enables OSM Service Connection and OSM Notification Director communication for maintenance in a running system.
Dedicated Service LAN Links With One IOAM Enclosure

This illustration shows the dedicated service LAN cables connected to the G4SAs in slot 5 of both modules of an IOAM enclosure and to the maintenance switch:

[Figure: maintenance switch cabled to the G4SA Ethernet PIF connectors (D, C, B, A) in modules 2 and 3 of the IOAM enclosure (group 110)]
Dedicated Service LAN Links to Two IOAM Enclosures

This illustration shows dedicated service LAN cables connected to G4SAs in two IOAM enclosures and to the maintenance switch:

[Figure: maintenance switch cabled to G4SAs in modules 2 and 3 of two IOAM enclosures (groups 110 and 111)]
Dedicated Service LAN Links With IOAM Enclosure and NonStop S-Series I/O Enclosure

This illustration shows dedicated service LAN cables connected to a G4SA in an IOAM enclosure, to at least one NonStop S-series Ethernet adapter (E4SA, FESA, or GESA) in a NonStop S-series I/O enclosure (module 12 in this example), and to the maintenance switch:

[Figure: maintenance switch cabled to the IOAM enclosure G4SA and to the Ethernet adapter in the NonStop S-series I/O enclosure]
Dedicated Service LAN Links With NonStop S-Series I/O Enclosure

This illustration shows dedicated service LAN cables connected to two NonStop S-series Ethernet adapters (E4SA, FESA, or GESA) in a NonStop S-series I/O enclosure (module 12 in this example) and to the maintenance switch:

[Figure: maintenance switch cabled to two Ethernet adapters in the NonStop S-series I/O enclosure (module 12)]
Initial Configuration for a Dedicated Service LAN

New systems are shipped with an initial set of IP addresses configured. For a listing of these initial IP addresses, see IP Addresses on page B-13. Factory-default IP addresses for the G4SA and E4SA adapters are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual.
OSM

OSM client-based components are installed on new system console shipments and are also delivered by an OSM installer on the HP NonStop System Console (NSC) Installer CD. The NSC CD also delivers all other client software required for managing and servicing NonStop NS16000 servers. For installation instructions, see the NonStop System Console Installer Guide.
AC Power Monitoring

Integrity NonStop NS16000 servers require either the optional HP model R5500 XR UPS (with one or two ERMs for additional battery power) or a user-supplied UPS installed in each modular cabinet, or a user-supplied site UPS, to support system operation through power transients or an orderly system shutdown during a power failure.
AC Power-Fail States

These states occur when a power failure occurs and an optional HP model R5500 XR UPS is installed in each cabinet within the system:

System State  Description
NSK_RUNNING   The NonStop operating system is running normally.
RIDE_THRU     OSM has detected a power failure and begins timing the outage. AC power returning terminates RIDE_THRU and puts the operating system back into the NSK_RUNNING state.
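The two transitions described above can be sketched as a small state machine. The event names (`power_fail`, `power_restored`) are invented labels, and any further states in the full table are deliberately not modeled:

```python
# Minimal sketch of the documented power-fail transitions:
# NSK_RUNNING -> RIDE_THRU on power failure, and back to
# NSK_RUNNING when AC power returns before the ride-through expires.
TRANSITIONS = {
    ("NSK_RUNNING", "power_fail"): "RIDE_THRU",
    ("RIDE_THRU", "power_restored"): "NSK_RUNNING",
}

def next_state(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

s = "NSK_RUNNING"
s = next_state(s, "power_fail")      # enters RIDE_THRU
s = next_state(s, "power_restored")  # returns to NSK_RUNNING
print(s)
```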
Control, Configuration, and Maintenance Tools HP Integrity NonStop NS16000 Planning Guide— 529567-009 B -26 AC Power-Fail States
C. Guide to Manuals for the Integrity NonStop NS-Series Server

These manuals support the Integrity NonStop NS-series systems:

Category   Purpose                                                                      Title
Reference  Provide information about the manuals, the RVUs, and hardware that support   NonStop Systems Introduction for H-Series RVUs
           NonStop NS-series servers
           Describe how to prepare for changes to software or hardware configurations   Managing Software Changes
           Describe how to install, configure, and upgrade components and systems       H06.
Support and Service Library

Within these categories, where applicable, content might be further categorized according to server or enclosure type. Authorized service providers can also order the NTL Support and Service Library CD:

• HP employees: Subscribe at World on a Workbench (WOW). Subscribers automatically receive CD updates. Access the WOW order form at http://hps.knowledgemanagement.hp.com/wow/order.asp.
Safety and Compliance

This section contains three types of required safety and compliance statements:

• Regulatory compliance
• Waste Electrical and Electronic Equipment (WEEE)
• Safety

Regulatory Compliance Statements

The following regulatory compliance statements apply to the products documented by this manual.

FCC Compliance

This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules.
Korea MIC Compliance

Taiwan (BSMI) Compliance

Japan (VCCI) Compliance

This is a Class A product based on the standard of the Voluntary Control Council for Interference by Information Technology Equipment (VCCI). If this equipment is used in a domestic environment, radio disturbance may occur, in which case the user may be required to take corrective actions.
European Union Notice

Products with the CE Marking comply with both the EMC Directive (89/336/EEC) and the Low Voltage Directive (73/23/EEC) issued by the Commission of the European Community.
SAFETY CAUTION

The following icon or caution statements may be placed on equipment to indicate the presence of potentially hazardous conditions:

DUAL POWER CORDS

CAUTION: "THIS UNIT HAS MORE THAN ONE POWER SUPPLY CORD. DISCONNECT ALL POWER SUPPLY CORDS TO COMPLETELY REMOVE POWER FROM THIS UNIT."

"ATTENTION: CET APPAREIL COMPORTE PLUS D'UN CORDON D'ALIMENTATION. DÉBRANCHER TOUS LES CORDONS D'ALIMENTATION AFIN DE COUPER COMPLÈTEMENT L'ALIMENTATION DE CET ÉQUIPEMENT."
HIGH LEAKAGE CURRENT

To reduce the risk of electric shock due to high leakage currents, a reliable grounded (earthed) connection should be checked before servicing the power distribution unit (PDU).
Index A AC current calculations 3-13 AC power 200 to 240 V ac single phase 32A RMS 5-5 200 to 240 V ac single phase 40A RMS 5-4 208 V ac 3-phase delta 24A RMS 3-3, 5-3 380 to 415 V ac 3-phase Wye 16A RMS 5-4 enclosure input specifications 3-5 feed, top or bottom 1-1 input 3-2 power-fail monitoring B-24 power-fail states B-25 unstrapped PDU 6-36 AC power feed 5-6 bottom of cabinet 5-6 top of cabinet 5-7 with cabinet UPS 5-8, 5-9 air conditioning 2-4 air filters 2-6 B branch circuit 3-5 C cabinet 5-29 cabin
Index D controls, NonStop Blade Element front panel 5-12 cooling assessment 2-5 D daisy-chain disk configuration recommendations 6-24 dedicated service LAN B-9 default disk drive locations 6-22 default startup characteristics 4-21 dimensions enclosures 3-9 modular cabinet 3-8 service clearances 3-8 disk drive configuration recommendations 6-23 display IOAM switch boards 5-19 p-switch 5-16 divergence recovery 4-8 documentation factory-installed hardware 4-25 NonStop NS-series server C-1 packet 4-25 Server
Index G FCSA to FCDM cabling 6-25 FCSA to tape cabling 6-15 FCSA, configuration recommendations 6-23 FC-AL configuration recommendations 6-23 fiber-optic cable specifications 6-5 fiber-optic cables 6-3 Fibre Channel arbitrated loop (FCAL) 5-24, 6-21 Fibre Channel device considerations 6-22 Fibre Channel devices 6-19 Fibre Channel disk module 5-24, 6-19 Fibre Channel ServerNet adapter (FCSA) 5-21 flooring 2-5 forms ServerNet adapter configuration 4-26 front panel, NonStop Blade Element buttons 5-12 indicat
Index L IOAM configuration considerations 6-19 enclosure 5-19 FCSA 5-21 FRUs 5-19 G4SA 5-23 ME firmware B-8 ServerNet pathways 4-11, 4-12 IOAM enclosure 1-3 IOMF 2 CRU 5-37 IP addresses components connected to LAN B-13 dynamic B-15 static B-15 I/O connectivity 1-3 I/O functional element B-8 I/O interface board, NonStop Blade Element 5-10 L labeling, optic cables 6-4 LAN dedicated service B-9 fault-tolerant maintenance B-10 non-fault-tolerant maintenance B-10 service, G4SA PIF B-17 LCD IOAM switch boards
Index O NonStop S-series I/O enclosures 5-36 NS16000 server 4-1 NSAA 4-2 logical processors 4-2 terms 4-4 O operating system load paths 4-21 operational space 2-7 optic adapter LSU 5-14 NonStop Blade Element 5-10 NonStop Blade Element J connectors 5-11 OSM B-2, B-23 P particulates, metallic 2-6 paths, operating system load 4-21 pathways, ServerNet 4-11 PDU AC power feed 5-6 description 5-6 fuses 5-6, 5-7 receptacles 5-9 strapping configurations 6-36 PDU power International 200 to 240 V AC input, Single
Index R R R5500 XR UPS 5-26 rack 5-29 rack offset 5-29, 5-30 raised flooring 2-5 receive and unpack 2-6 receptacles, PDU 5-9 recovery, failure 4-8 reintegration board 5-10 reintegration, memory 4-8 related manuals NonStop NS-series server C-1 software migration 4-24 rendezvous 4-8 processor synchronization 4-8 restrictions cable length 6-6, A-3 cabling NonStop S-series I/O enclosure 6-17 Fibre Channel device configuration 6-22 p-switch cabling 6-13 routers ServerNet 4-11 system pathways 4-13 S safety gro
Index W W weight calculation 2-5, 3-10 worksheet heat calculation 3-11 weight calculation 3-10 Z zinc, cadmium, or tin particulates 2-6 Special Characters $SYSTEM disk locations 4-21