HP Integrity NonStop NS-Series Planning Guide

Abstract
This guide describes the HP Integrity NonStop™ NS-series system hardware and provides examples of system configurations to assist in planning for installation of a new system. It also provides a guide to other Integrity NonStop NS-series manuals.

Product Version
N.A.

Supported Release Version Updates (RVUs)
This publication supports H06.05 and all subsequent H-series RVUs until otherwise indicated by its replacement publication.
Document History

Part Number   Product Version   Published
529567-004    N.A.              July 2005
529567-005    N.A.              November 2005
529567-006    N.A.              February 2006
529567-007    N.A.              August 2006
529567-008    N.A.              August 2006
What's New in This Manual
New and Changed Information

The following changes are documented in this guide:

Guide-wide: Updated all illustrations to reflect standard component layout and PDU placement in 42U modular cabinets.
Section 4, Integrity NonStop NS16000 System Description: Updated the information under Modular Hardware on page 4-20.

Section 5, Modular System Hardware: Added information about the 42U modular cabinet to Modular Cabinets on page 5-4. Added information about Power Distribution Units (PDUs) on page 5-6.
About This Guide

Who Should Use This Guide

This guide is written for those responsible for planning the installation, configuration, and maintenance of the server and the software environment at a particular site. Appropriate personnel must have completed HP training courses on system support for Integrity NonStop NS-series servers.

Note. Integrity NonStop NS-series and NonStop S-series refer to hardware systems.
Section   Title                                                 Contents
A         Cables                                                This appendix identifies the cables used with the Integrity NonStop NS-series hardware.
B         Control, Configuration, and Maintenance Tools         This appendix introduces the control, configuration, and maintenance tools used in Integrity NonStop NS-series systems.
C         Guide to Integrity NonStop NS-Series Server Manuals   This appendix lists the manuals that support the Integrity NonStop NS-series servers.
1 System Hardware Overview

Integrity NonStop NS-series servers use the NonStop advanced architecture (NSAA), which includes a number of duplex or triplex NonStop Blade Elements plus various combinations of hardware enclosures. These enclosures are installed in 42U modular cabinets.
Hardware Enclosures and Configurations

This figure shows an example modular cabinet with a duplex processor and hardware for a complete system (rear view).
System I/O Hardware Configuration

Because of the large number of possible configurations, you calculate the total power consumption, heat dissipation, and weight of each modular cabinet based on the hardware configuration that you order from HP. For site preparation specifications of the modular cabinet and the individual enclosures, see Section 3, System Installation Specifications.
NonStop S-Series I/O Enclosure

NonStop S-series I/O enclosures equipped with model 1980 I/O multifunction 2 customer replaceable units (IOMF 2 CRUs) can be connected to the NonStop NS16000 server via fiber-optic ServerNet cables and the processor switch (p-switch).
2 Installation Facility Guidelines

This section provides guidelines for preparing the installation site for Integrity NonStop NS-series systems.

Note. For installation facility guidelines specific to the Integrity NonStop NS14000 system, see Section 8, Integrity NonStop NS14000 System Description.
Emergency Power-Off Switches

Emergency power-off (EPO) switches are required by local codes or other applicable regulations when computer equipment contains batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes. Systems that have these batteries also have internal EPO hardware for connection to a site EPO switch or relay.
Power Quality

This equipment is designed to operate reliably over a wide range of voltages and frequencies, described in Enclosure AC Input on page 3-5. However, damage can occur if these ranges are exceeded. Severe electrical disturbances can exceed the design specifications of the equipment.
If an extended power failure occurs, the system can be shut down in an orderly fashion at a predetermined time. A timely and orderly shutdown prevents an uncontrolled and asymmetric shutdown of the system resources from depleted UPS batteries. The HP model R5500 XR UPS supports the OSM Power Fail Support function, which allows you to set a ride-through time. If AC power is not restored before the specified ride-through time expires, OSM initiates an orderly system shutdown.
Modern computing equipment is increasingly mounted in ultra-thin server and storage enclosures, which are then deployed into modular cabinets in large numbers. This higher concentration of devices results in localized heat, which increases the potential for hot spots that can damage the equipment. Additionally, variables in the installation site layout can adversely affect air flows and create hot spots by allowing hot and cool air streams to mix.
For your site's floor system, consult with your HP site preparation specialist or an appropriate floor system engineer. If raised flooring is to be used, note that the design of the Integrity NonStop NS-series server modular cabinet is optimized for placement on 24-inch floor panels.

Dust and Pollution Control

NonStop servers do not have air filters.
Operational Space

All modular cabinets have small casters to facilitate moving them on hard flooring from the unpacking area to the installation site. Because of these small casters, rolling modular cabinets along carpeted or tiled pathways might be difficult. If necessary, plan for a temporary hard floor covering in affected pathways for easier movement of the equipment. For physical dimensions of the server equipment, refer to Dimensions and Weights on page 3-7.
3 System Installation Specifications

This section provides the specifications necessary for planning the system installation site.

Note. For system installation specifications specific to the Integrity NonStop NS14000 system, see Section 8, Integrity NonStop NS14000 System Description.

Topic                                                     Page
Processor Type and Memory Size                            3-1
AC Input Power for Modular Cabinets                       3-2
Dimensions and Weights                                    3-7
Environmental Specifications                              3-11
Calculating Specifications for Enclosure Combinations     3-13
AC Input Power for Modular Cabinets

This subsection provides information about AC input power for modular cabinets and covers these topics:

Topic                                                     Page
North America and Japan: 208 V AC PDU Power               3-3
North America and Japan: 200 to 240 V AC PDU Power        3-3
International: 380 to 415 V AC PDU Power                  3-4
International: 200 to 240 V AC PDU Power                  3-4
Grounding                                                 3-4
Branch Circuits and Circuit Breakers                      3-5
Enclosure AC Input                                        3-5
Enclosure Power Loads                                     3-6
North America and Japan: 208 V AC PDU Power

The PDU power characteristics are:

PDU input characteristics:
• 208 V AC, 3-phase delta, 24A RMS, 4-wire
• 50/60 Hz
• NEMA L15-30 input plug
• 6.5 feet (2 m) attached power cord

PDU output characteristics:
• 3 circuit-breaker-protected load segments
International: 380 to 415 V AC PDU Power

The PDU power characteristics are:

PDU input characteristics:
• 380 to 415 V AC, 3-phase wye, 16A RMS, 5-wire
• 50/60 Hz
• IEC309 5-pin, 16A input plug
• 6.5 feet (2 m) attached power cord
Branch Circuits and Circuit Breakers

Modular cabinets for the Integrity NonStop NS-series system contain two PDUs. Each of the two PDUs requires a separate branch circuit with these ratings:

Region                      Volts       Amps
North America and Japan     208         24
North America and Japan     200 - 240   40
International               380 - 415   16
International               200 - 240   32
Enclosure Power Loads

The total power and current load for each modular cabinet depends on the number and type of enclosures installed in it. Therefore, the total load is the sum of the loads for all enclosures installed. For examples of calculating the power and current load for various enclosure combinations, refer to Calculating Specifications for Enclosure Combinations on page 3-13.
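The total-load arithmetic is a straightforward sum over the installed enclosures. The following minimal sketch (in Python; this guide itself contains no code) illustrates the calculation with placeholder per-enclosure values rather than HP specifications:

    # Sketch of the per-cabinet load roll-up described above. The VA
    # figures are illustrative placeholders; use the values for the
    # hardware configuration actually ordered from HP.
    VA_PER_FEED = {
        "NonStop Blade Element": 710,
        "LSU": 240,
        "IOAM enclosure": 875,
        "Fibre Channel disk module": 250,
    }

    def cabinet_load_va(enclosures):
        """Total VA per AC feed for one cabinet: the sum over its enclosures."""
        return sum(VA_PER_FEED[name] * qty for name, qty in enclosures.items())

    # Example: three Blade Elements plus one LSU enclosure
    print(cabinet_load_va({"NonStop Blade Element": 3, "LSU": 1}))  # 2370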
Model R5500 XR Integrated UPS

Version                   Operating Voltage Settings   Power Out (VA/Watts)               Input Plug       Branch Circuit
North America and Japan   200/208*, 220, 230, 240      5000/4500                          L6-30P           Dedicated 30 Amp
Other International       200, 230*, 240               6000/5400 (5000/4500 if 200/208)   IEC-309 32 Amp   Dedicated 30 Amp

* Factory-default setting

For complete information and specifications, refer to the HP UPS R5500 XR Models User Guide.
Plan View From Above the Modular Cabinet

[Figure: Plan view from above the modular cabinet, showing dimensions of 40 in. (102 cm), 46 in. (116.84 cm), 81.5 in. (207 cm), and 24 in. (60.96 cm).]

Service Clearances for the Modular Cabinet

Aisles: 6 feet (182.9 centimeters)
Front: 3 feet (91.4 centimeters)
Rear: 3 feet (91.4 centimeters)
Modular Cabinet Physical Specifications

                        Height            Width             Depth
Item                    in.     cm        in.     cm        in.     cm
Modular cabinet         78.7    199.9     24.0    60.96     46.0    116.84
Rack                    78.5    199.4     23.62   60.0      40.0    101.9
Front door              78.5    199.4     23.5    59.7      3.0     7.6
Left-rear door          78.5    199.4     11.0    27.9      1.0     2.5
Right-rear door         78.5    199.4     12.0    30.5      1.0     2.5
Shipping (palletized)   86.22   219.0     32.0    81.28     54.0    137.16
Modular Cabinet and Enclosure Weights With Worksheet

The total weight of each modular cabinet is the sum of the weight of the cabinet plus the weight of each enclosure installed in it. Use this worksheet to determine the total weight:

                          Weight                 Number of
Enclosure Type            lbs       kg           Enclosures   Total
42U modular cabinet*      303       137.44
NonStop Blade Element     112       50.8
Processor switch          70        32.8
LSU                       96        43.5
IOAM                      200       90.7
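The worksheet arithmetic can be expressed compactly. A minimal sketch using the weights from the table above (the enclosure mix in the example is hypothetical):

    # Sketch of the cabinet weight worksheet: cabinet weight plus the
    # weight of every installed enclosure (weights in lbs, from the table).
    WEIGHT_LBS = {
        "42U modular cabinet": 303,   # includes PDUs, wiring, and receptacles
        "NonStop Blade Element": 112,
        "Processor switch": 70,
        "LSU": 96,
        "IOAM": 200,
    }

    def total_cabinet_weight_lbs(contents):
        """contents maps enclosure type to quantity; the cabinet is counted once."""
        enclosures = sum(WEIGHT_LBS[item] * qty for item, qty in contents.items())
        return WEIGHT_LBS["42U modular cabinet"] + enclosures

    # Example: three Blade Elements, one LSU, and one IOAM enclosure
    print(total_cabinet_weight_lbs(
        {"NonStop Blade Element": 3, "LSU": 1, "IOAM": 1}))  # 935 lbs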
Environmental Specifications

This subsection provides information about environmental specifications and covers these topics:

Topic                                               Page
Heat Dissipation Specifications and Worksheet       3-11
Operating Temperature, Humidity, and Altitude       3-12
Nonoperating Temperature, Humidity, and Altitude    3-12
Cooling Airflow Direction                           3-12
Typical Acoustic Noise Emissions                    3-12
Tested Electrostatic Immunity                       3-12

Heat Dissipation Specifications and Worksheet
Operating Temperature, Humidity, and Altitude

Specification                             Operating Range   Recommended Range   Maximum Rate of Change per Hour
Temperature (all except Fibre Channel     41° to 95° F      68° to 72° F        9° F (5° C) repetitive;
disk module)                              (5° to 35° C)     (20° to 25° C)      36° F (20° C) nonrepetitive
Temperature (Fibre Channel disk module)   50° to 95° F      -                   0.6° F (1° C) repetitive
                                          (10° to 35° C)
Calculating Specifications for Enclosure Combinations

Figure 3-1, Example Duplex Configuration, on page 3-14 shows components installed in 42U modular cabinets. Cabinet weight includes the PDUs and their associated wiring and receptacles. Power and thermal calculations assume that each enclosure in the cabinet is fully populated; for example, a NonStop Blade Element with four processors.
Figure 3-1, Example Duplex Configuration, has 16 logical processors, two IOAM enclosures, and 14 Fibre Channel disk modules installed in three 42U modular cabinets.
The weight, power, and thermal calculations for Cabinet One in Figure 3-1, Example Duplex Configuration, on page 3-14 are:

Table 3-1. Cabinet One Load Calculations

                        Quantity   Height   Weight   Weight   VA per AC feed,     VA per AC feed,       Heat
Component                          (U)      (lbs)    (kg)     one feed powered    both feeds powered    (Btu)
NonStop Blade Element   3          15       336      152.4    2130                1170                  7986
LSU                     1          4        96       43.5
The weight, power, and thermal calculations for Cabinet Two in Figure 3-1, Example Duplex Configuration, on page 3-14 are:

Table 3-2. Cabinet Two Load Calculations

                        Quantity   Height   Weight   Weight   VA per AC feed,     VA per AC feed,       Heat
Component                          (U)      (lbs)    (kg)     one feed powered    both feeds powered    (Btu)
NonStop Blade Element   3          15       336      152.4    2130                1170                  7986
LSU                     1          4        96       43.5
The weight, power, and thermal calculations for Cabinet Three in Figure 3-1, Example Duplex Configuration, on page 3-14 are:

Table 3-3. Cabinet Three Load Calculations

                        Quantity   Height   Weight   Weight   VA per AC feed,     VA per AC feed,       Heat
Component                          (U)      (lbs)    (kg)     one feed powered    both feeds powered    (Btu)
NonStop Blade Element   2          10       224      101.6
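One relationship worth making explicit in these tables: heat output tracks electrical load at roughly 3.413 Btu/hr per watt. Assuming a power factor near 1 (so that VA approximates watts) and both AC feeds powered, the Blade Element rows check out: 2 feeds x 1170 VA x 3.413 is about 7986 Btu. A minimal sketch of that conversion:

    # Heat load from electrical load: 1 watt is about 3.413 Btu/hr. With
    # both AC feeds powered, the total draw is 2 x (VA per feed); a power
    # factor of ~1 is assumed so that VA approximates watts.
    BTU_PER_WATT = 3.413

    def heat_btu(va_per_feed_both_powered):
        return round(2 * va_per_feed_both_powered * BTU_PER_WATT)

    print(heat_btu(1170))  # 7986, matching the Blade Element rows above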
4 Integrity NonStop NS16000 System Description

This section describes the Integrity NonStop NS16000 system.

Note. For system description information about the Integrity NonStop NS14000 system, see Section 8, Integrity NonStop NS14000 System Description.
However, contemporary high-speed microprocessors make lock-step processing no longer practical because of:

• Variable frequency processor clocks with multiple clock domains
• Higher transient error rates than in earlier, simpler microprocessor designs
• Chips with multiple processor cores

NonStop Advanced Architecture

Integrity NonStop NS16000 systems employ a unique method for achieving fault tolerance in a clustered processor environment.
All input and output to and from each NonStop Blade Element goes through a logical synchronization unit (LSU). The LSU interfaces with the ServerNet fabrics and contains logic that compares all output operations of the PEs in a logical processor, ensuring that all NonStop Blade Elements agree on the result before the data is passed to the ServerNet fabrics.
Integrity NonStop NS16000 System Description NonStop Blade Complex In the event of a processor fault in either a duplex or triplex processor, the failed component within a NonStop Blade Element (processor element, power supply, and so forth) or the entire Blade Element can be replaced while the system continues to run. A single Integrity NonStop NS16000 system can have up to four NonStop Blade Complexes for a total of 16 processors.
Processor Element

Each of the two or four processor elements in a NonStop Blade Element includes:

• A standard Intel Itanium microprocessor running at 1.
Duplex Processor

The DMR, or duplex, processor uses two NonStop Blade Elements, A and B, both with two or four microprocessors. Fiber-optic cables from each NonStop Blade Element connect the PEs to the LSUs. These LSUs then connect to two independent ServerNet fabrics. These two connections create communications redundancy in case one of the fabrics fails.
Triplex Processor

The TMR, or triplex, processor uses three NonStop Blade Elements: A, B, and C. As with the duplex processor, fiber-optic cables connect the PEs to the LSUs, and these LSUs then connect to the two independent ServerNet fabrics. Dual ServerNet fabrics create communications redundancy in case one of the fabrics fails. For a description of the LSU functions, see Processor Synchronization and Rendezvous on page 4-8.
Processor Synchronization and Rendezvous

Synchronization and rendezvous at the LSUs perform two main functions:

• Keep the individual PEs in a logical processor in loose lock-step through a technique called rendezvous. Rendezvous occurs to:
  ° Periodically synchronize the PEs so they execute the same instruction at the same time. Synchronization accommodates the slightly different clock speed within each PE.
Failure Recovery for Triplex Processor

In triplex processors, each LSU has inputs from the three processor elements within a logical processor. As with the duplex processor, the LSU keeps the three PEs in loose lock-step. The LSU also checks the outputs from the three PEs.
Simplified ServerNet System Diagram

The ServerNet network architecture is full-duplex, packet-switched, and point-to-point in a star configuration. It employs two independent I/O communications paths throughout the system: the ServerNet X fabric and the ServerNet Y fabric. These dual paths ensure that no single failure disrupts communications among the remaining system components.
P-Switch ServerNet Pathways

This drawing shows the ServerNet communication pathways through the p-switch PICs and routers. Two p-switches are required in each system; one p-switch serves the ServerNet X fabric and the other the ServerNet Y fabric. In this drawing, the nomenclature PIC n means the PIC in slot n. For example, PIC 4 is the PIC in slot 4.
IOAM Enclosure ServerNet Pathways

This drawing shows the ServerNet communication pathways through the routers in each ServerNet switch board to the ServerNet I/O adapters in each IOAM module.
Example System ServerNet Pathways

This drawing shows the redundant routing and connection of the ServerNet X and Y fabrics within a simple example system. This example system includes:

• Four processors with their requisite four LSU optics adapters
• One IOAM enclosure connected to the PIC in slot 4 of each p-switch, making the IOAM enclosure group 110
This drawing shows the ServerNet X fabric routing within an example system of:

• 16 processors with their requisite LSU optics adapters
• Four IOAMs of group numbers 110 through 113
• Four NonStop S-series I/O enclosures of group numbers 61 through 64

The IOAM enclosures can reside in the same or different cabinets, with MMF fiber-optic cables carrying the ServerNet communications between the IOAM enclosures and the p-switches.
If a cable, connection, router, or other failure occurs, only the system resources that are downstream of the failure on the same fabric are affected. Because of the redundant ServerNet architecture, communication takes the alternate path on the other fabric to the peer resources.
This illustration shows a logical representation of a complete typical 4-processor triplex system with the X and Y fabrics.

[Figure: Logical system diagram showing Fibre Channel disk modules (FC-AL A and B loops), SCSI and Fibre Channel tape via an FC-SCSI router, an ESS, IOAM modules 2 and 3, NonStop S-series I/O enclosures, and the X and Y ServerNet switches and links.]
This illustration shows a logical representation of a triplex 16-processor NonStop Blade Complex (NSBC) with the associated NonStop Blade Elements (NSBEs) and their Blade optics adapters (BOAs), the LSUs, and the p-switch for the X ServerNet fabric.

[Figure: NSBE groups 400 and 401, modules A through C, each with BOAs in slots 1 and 2 and reintegration connectors S, Q, R, T, cabled to the LSUs and the X-fabric p-switch.]
This illustration shows a logical representation of a triplex 16-processor NonStop Blade Complex (NSBC) with the associated NSBEs and their BOAs, the LSUs, and the p-switch for the Y ServerNet fabric.

[Figure: NSBE groups 400 and 401, modules A through C, each with BOAs in slots 1 and 2 and reintegration connectors S, Q, R, T, cabled to the LSUs and the Y-fabric p-switch.]
System Architecture

This diagram shows elements of an example Integrity NonStop NS16000 system with four triplex processors.

[Figure: Fibre Channel disk modules, high-speed Ethernet and Fibre Channel ServerNet adapters, NonStop S-series I/O enclosures, an I/O adapter module, the X and Y ServerNet fabrics, and the processor switch on each fabric.]
Modular Hardware

Hardware for Integrity NonStop NS16000 systems is implemented in modules, or enclosures, that are installed in modular cabinets. For descriptions of the modular hardware components, see Section 5, Modular System Hardware. All Integrity NonStop NS-series server components are field-replaceable units (FRUs) that can be serviced only by service providers trained by HP.
Default Startup Characteristics

Each system ships with these default startup characteristics:

• $SYSTEM disks residing in one of these two locations:
  ° In a Fibre Channel disk module connected to IOAM enclosure group 110, with the disks in these locations:

               IOAM                      FCSA   Fibre Channel Disk Module
    Path       Group   Module   Slot     SAC    Shelf   Bay
    Primary    110     2        1        1      1       1
    Backup     110     3        1        1      1       1
    Mirror     110     3        1        2      1
Load Path   Description     Source Disk   Destination Processor   ServerNet Fabric
14          Mirror          $SYSTEM-M     1                       Y
15          Mirror backup   $SYSTEM-M     1                       X
16          Mirror backup   $SYSTEM-M     1                       Y

This illustration shows the system load paths.
Migration Considerations

This subsection provides information about migrating from NonStop S-series systems to Integrity NonStop NS16000 systems:

Topic                                                                 Page
Migrating Applications                                                4-23
Migration Considerations                                              4-23
Migrating Hardware Products to Integrity NonStop NS-Series Servers    4-24
Other Manuals Containing Software Migration Information               4-24

Migrating Applications

For information about migrating applications, see the H-Series Application Migration Guide.
Migrating Hardware Products to Integrity NonStop NS-Series Servers

Connecting NonStop S-series hardware to an Integrity NonStop NS-series server is only one step in the overall migration. Software and application changes might be required to complete the migration. Any hardware migration should be planned as part of the overall application and software migration tasks.
• System Installation Document Packet
• H06.xx README
• Interactive Upgrade Guide

2. If you are moving a NonStop S-series I/O enclosure from a NonStop S-series system to an Integrity NonStop NS16000 system and want to migrate the data online, you can perform a migratory revive if:
  ° Your data is mirrored.
  ° You have another NonStop S-series system or NonStop S-series I/O enclosure connected to the NonStop S-series system.
• Each ServerNet cable with:
  ° Source and destination enclosure, component, and connector
  ° Cable part number
  ° Source and destination connection labels

ServerNet Adapter Configuration Forms

ServerNet adapters can include the Fibre Channel ServerNet adapter (FCSA) and Gigabit Ethernet 4-port ServerNet adapter (G4SA) that are installed in the Integrity NonStop NS16000 system, or the various ServerNet adapters installed in NonStop S-series I/O enclosures.
5 Modular System Hardware

This section describes the hardware used in Integrity NonStop NS-series systems.

Note. For hardware descriptions specific to the Integrity NonStop NS14000 system, see Section 8, Integrity NonStop NS14000 System Description.
as well as to LANs. Additional IOAM enclosures increase connectivity and storage resources by way of ServerNet links to the p-switch. NonStop S-series I/O enclosures equipped with IOMF 2 CRUs can connect to Integrity NonStop NS-series systems using fiber-optic ServerNet links from the p-switch.

Note. The Integrity NonStop NS14000 system does not support connections to NonStop S-series I/O enclosures.
This example shows a 42U modular cabinet with a duplex processor and hardware for a complete system (rear view).

[Figure: 42U modular cabinet, rear view, showing the power distribution units (PDUs) cabled to the AC power source or site UPS, an IOAM enclosure, and U-space numbering 1 through 42.]
Modular Cabinets

The modular cabinet is an EIA standard 19-inch, 42U high rack for mounting modular components. The modular cabinet comes equipped with front and rear doors and includes a rear extension that makes it deeper than some industry-standard racks. The Power Distribution Units (PDUs) are mounted along the rear extension without occupying any U-space in the cabinet and are oriented inward, facing the components within the modular cabinet.
North America and Japan: 200 to 240 V AC Input, Single-Phase, 40A RMS Power

• EIA standard 19-inch rack with 42U of rack space
• Geography: North America and Japan
• Recommended for most configurations
• Includes 2 power distribution units (PDUs)
  ° Zero-U rack design
• PDU input characteristics:
  ° 200 to 240 V AC, single phase, 40A RMS, 3-wire
  ° 50/60 Hz
  ° Non-NEMA locking CS8265C, 50A input plug
  ° 6.5 feet (2 m) attached power cord
International: 200 to 240 V AC Input, Single-Phase, 32A RMS Power

• EIA standard 19-inch rack with 42U of rack space
• Geography: International
• Recommended for most configurations
• Harmonized power cord
• Includes 2 power distribution units (PDUs)
  ° Zero-U rack design
• PDU input characteristics:
  ° 200 to 240 V AC, single phase, 32A RMS, 3-wire
  ° 50/60 Hz
  ° IEC309 3-pin, 32A input plug
  ° 6.5 feet (2 m) attached power cord
Power Distribution Units (PDUs)

Each PDU in a modular cabinet has:

• 36 AC receptacles per PDU (12 per segment), IEC 320 C13 10A receptacle type
• 3 AC receptacles per PDU (1 per segment), IEC 320 C19 12A receptacle type
• 3 circuit breakers

These PDU options are available to receive power from the site AC power source:

• 208 V AC, three-phase delta, for North America and Japan
• 200 to 240 V AC, single phase, for North America and Japan
• 380 to 415 V AC, three-phase wye, for International
• 200 to 240 V AC, single phase, for International
This illustration shows the AC power feed cables on PDUs for AC feed at the top of the cabinet.

[Figure: Modular cabinet rear view showing AC power cords routed from the top of the cabinet to the AC power source or site UPS, with the PDUs mounted along the rear.]
If your system includes the optional rackmounted HP R5500 XR UPS, the modular cabinet will have one PDU located on the rear left side and four extension bars on the rear right side. The PDU and extension bars are oriented inward, facing the components within the modular cabinet. To provide redundancy, components are plugged into both the left-side PDU and the extension bars. Each extension bar is plugged into the UPS.
This illustration shows the AC power feed cables for the PDU and UPS for AC power feed from the bottom of the cabinet when the optional UPS and ERM are installed.

[Figure: Modular cabinet rear view with a PDU on the left side and extension bars installed along the rear right side in place of a second PDU; the extension bars plug into the UPS, which is installed with an extended runtime module (ERM) near the bottom of the cabinet.]
NonStop Blade Element

The NonStop Blade Element enclosure, which is 5U high and weighs 112 pounds (50.8 kilograms), has these physical attributes:

• Rackmountable
• Redundant AC power feeds
• Front-to-rear cooling
• Cable connections at rear (power, reintegration, LSU) with cable management equipment on the rear of the cabinet

Each NonStop Blade Element includes these field-replaceable units (FRUs):

• Processor board with up to four Itanium microprocessors
This illustration shows the rear of the NonStop Blade Element, equipped with two power supplies and eight Blade optics adapters.

[Figure: NonStop Blade Element enclosure, rear view, showing reintegration connectors S, T, Q, R (cables to peer NSBEs), the AC power connectors, two power supplies, and Blade optics adapters J0 through J8 and K0 through K8 (cables to the LSU).]
However, to help reduce the complexity of cable connections, HP recommends that you use a physically sequential order of slots for fiber-optic cable connections on the LSU and that you not randomly mix the LSU slots. Cable connections to the LSU have no bearing on the NonStop Blade Complex number, but HP also recommends that you connect NonStop Blade Element A to the NonStop Blade Element A connection on the LSU.
Front Panel Indicator LEDs:

LED Indicator   State             Meaning
Power           Flashing green    Power is on; NonStop Blade Element is available for normal operation.
                Flashing yellow   NonStop Blade Element is in power mode.
                Off               Power is off.
Fault           Steady amber      Hardware or software fault exists.
                Off               NonStop Blade Element is available for normal operation.
Locator         Flashing blue     System locator is activated.
The LSU module consists of these FRUs:

• LSU logic board (accessible from the front of the LSU enclosure)
• LSU optics adapters (accessible from the rear of the LSU enclosure)
• AC power assembly (accessible from the rear of the LSU enclosure)

Caution. To maintain proper cooling air flow, blank panels must be installed in all slots that do not contain logic adapter PICs or logic boards.
LSU Indicator LEDs

LED                                          State   Meaning
LSU optics adapter PIC (green LED)           Green   Power is on; LSU is available for normal operation.
                                             Off     Power is off.
LSU optics adapter PIC (amber LED)           Amber   Power-on is in progress, the board is being reset, or a fault exists.
                                             Off     Normal operation, or powered off.
LSU optics adapter connectors (green LEDs)   Green   NonStop Blade Element optics link or ServerNet link is functional.
• ServerNet I/O PICs (slots 4 to 9): provide 24 ServerNet 3 connections to one or more IOAMs and to optional NonStop S-series I/O enclosures
• Processor I/O PICs (slots 10 to 13): connect to the LSU for ServerNet 3 I/O with the processors
• Cable management and connectivity on the rear of the cabinet

Caution. To maintain proper cooling air flow, blank panels must be installed in all slots that do not contain PICs.
This illustration shows the front of the p-switch.

[Figure: P-switch front view, showing the PWR and FAN LEDs, the SPON connector, the 100/10 Ethernet port, and the display.]
Processor Numbering

Connection of the ServerNet cables from the LSU to the PICs in p-switch slots 10 through 13 determines the number of the associated logical processor. For more information, see LSUs to Processor Switches and Processor IDs on page 6-8. This example of a triplex processor shows the ServerNet cabling to the p-switch PIC in slot 10, which defines processors 0, 1, 2, and 3.
I/O Adapter Module (IOAM) Enclosure and I/O Adapters

An IOAM provides the Integrity NonStop NS-series system with its system I/O, using Gigabit Ethernet 4-port ServerNet adapters (G4SAs) for LAN connectivity and Fibre Channel ServerNet adapters (FCSAs) for storage connectivity.
This illustration shows the front and rear of the IOAM enclosure and details.

[Figure: IOAM enclosure, front and rear views, showing the X and Y ServerNet switch boards (module 2, slot 14 and module 3, slot 14), the maintenance connector (100BaseT RJ-45), ServerNet links from the p-switch (MMF LC connectors), adapter slots 1 through 5 in IOAM modules 2 and 3, fans (slots 16 and 17), and the power supplies.]
IOAM Enclosure Indicator LEDs

ServerNet Switch Board LED   State     Meaning
Power                        Green     Power is on; board is available for normal operation.
                             Off       Power is off.
Fault                        Amber     A fault exists.
                             Off       Normal operation or powered off.
Link                         Green     Link is functional.
                             Off       Link is not functional.
LCD display                  Message   The message is displayed as is.
ServerNet ports              Green     ServerNet link is functional.
                             Off       ServerNet link is not functional.
This illustration shows the front of an FCSA.

[Figure: FCSA front view, showing Fibre Channel ports 1 and 2 with 2Gb/1Gb link indicators; the Ethernet ports (C and D) are not available for use on the FCSA.]

FCSAs are installed in pairs and can reside in slots 1 through 5 of either IOAM (module 2 or 3) in an IOAM enclosure.
Gigabit Ethernet 4-Port ServerNet Adapter

The Gigabit Ethernet 4-port ServerNet adapter (G4SA) provides gigabit connectivity to Ethernet LANs. G4SAs can reside in slots 1 through 5 of each IOAM module.
A G4SA complies with the 1000Base-T standard (802.3ab), the 1000Base-SX standard (802.3z), and these Ethernet LAN standards:

• 802.3 (10Base-T)
• 802.1Q (VLAN tag-aware switch)
• 802.3u (Auto negotiate)
• 802.3x (Flow control)
• 802.3u (100Base-T and 1000Base-T)

For detailed information on the G4SA, see the NonStop Gigabit Ethernet 4-Port Installation and Support Guide.
Maintenance Switch (Ethernet)

The ProCurve 2524 maintenance switch includes the management features that the NSAA requires and provides communication among the Integrity NonStop NS-series system (at the switch boards in the p-switches and the IOAM enclosure), the optional UPS, and the system console running HP NonStop Open System Management (OSM) software. The maintenance switch includes 24 ports, which is enough capacity to support multiple systems.
UPS and ERM (Optional)

An uninterruptible power supply (UPS) is optional but recommended where a site UPS is not available. You can use any UPS that meets the modular cabinet power requirements for all enclosures being powered by the UPS. One UPS option is the HP R5500 XR UPS. For information about the requirements for installing a UPS other than the HP R5500 XR UPS in an Integrity NonStop NS-series system, see Uninterruptible Power Supply (UPS) on page 2-3.
For power and environmental requirements, planning, installation, and emergency power-off (EPO) instructions for the R5500 XR UPS, refer to the documentation shipped with the UPS.

System Console

A system console is a personal computer (PC) running maintenance and diagnostic software for Integrity NonStop NS-series systems.
This illustration shows an example of connections between two IOAM enclosures and an ESS via separate Fibre Channel switches.

[Figure: Two IOAM enclosures, each with an FCSA connected through its own Fibre Channel switch to the ESS.]

For fault tolerance, the primary and backup paths to an ESS logical device (LDEV) must go through different Fibre Channel switches.
Terminology

Terms used in locating and describing components are:

Term          Definition
Cabinet       Computer system housing that includes a structure of external panels, front and rear doors, internal racking, and dual PDUs.
Rack          Structure integrated into the cabinet into which rackmountable components are assembled.
Rack offset   The physical location of components installed in a modular cabinet, measured in U values numbered 1 to 42, with 1U at the bottom of the cabinet.
Rack and Offset Physical Location

On Integrity NonStop NS-series systems, locations of the physical and logical modular components are identified by:

• Physical location:
  ° Rack number
  ° Rack offset
• Logical location: GMS notation determined by the position of the component on ServerNet

In NonStop S-series systems, group, module, and slot (GMS) notation identifies the physical location of a component.
NonStop Blade Element Group-Module-Slot Numbering

• Processor group: 400 through 403 relates to NonStop Blade Complex 0 through 3.
  Example: group 403 = NonStop Blade Complex 3
• Module: 1 through 3 relates to the CPU NonStop Blade Element ID A through C.
  Example: module 2 = NonStop Blade Element B
• Slot: 71 through 78 relates to the location of the Blade optics adapter.
  Example: slot 72 = Blade optics adapter in slot 72
• Port: J0 through J8 or K0 through K8 relates to the Blade optics adapter port.
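Because the numbering rules are mechanical, they can be captured in a few lines. A minimal sketch of the mapping (the function is illustrative, not an HP tool):

    # Group-module-slot for a Blade Element location: group 400-403 maps
    # to Blade Complex 0-3, module 1-3 maps to Blade Element A-C, and the
    # Blade optics adapter slot (71-78) passes through unchanged.
    def blade_element_gms(complex_number, element_id, slot):
        group = 400 + complex_number           # complex 0-3 -> group 400-403
        module = "ABC".index(element_id) + 1   # A -> 1, B -> 2, C -> 3
        return group, module, slot

    print(blade_element_gms(3, "B", 72))  # (403, 2, 72)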
LSU Group-Module-Slot Numbering

This table shows the default numbering for the LSUs:

Item                   Group (NonStop Blade Complex)¹
Individual LSU J set   400-403
Individual LSU K set   Not used at this time

Module: 100 + NonStop Blade Complex number
I/O position (slot): 1 - optics adapter (rear side, slots 20-27); 2 - logic board (front side, slots 50-57)

1. See NonStop Blade Element Group-Module-Slot Numbering on page 5-32.
Processor Switch Group-Module-Slot Numbering

This table shows the default numbering for the p-switch:

Group: 100
Module: 2 (X ServerNet), 3 (Y ServerNet)

Slot     Item
1        Maintenance PIC
2        Cluster PIC
3        Crosslink PIC
4-9      ServerNet I/O PICs
10       ServerNet PIC (processors 0-3)
11       ServerNet PIC (processors 4-7)
12       ServerNet PIC (processors 8-11)
13       ServerNet PIC (processors 12-15)
14       P-switch logic board
15, 18   Power supplies
IOAM Enclosure Group-Module-Slot Numbering

This table shows the default numbering for the IOAM enclosure:

Note. Unlike the Integrity NonStop NS16000, the group number for the IOAM enclosure in an Integrity NonStop NS14000 server is 100.
Fibre Channel Disk Module Group-Module-Slot Numbering

This table shows the default numbering for the Fibre Channel disk module:

IOAM group:    110-115
IOAM module:   2 - X fabric; 3 - Y fabric
IOAM slot:     1-5
FCSA F-SACs:   1, 2
FCDM shelf:    1-4 if daisy-chained; 1 if single disk enclosure

FCDM Slot   Item
0           Fibre Channel disk module
1-14        Disk drive bays
89          Transceiver A1
90          Transceiver A2
91          Transceiver B1
NonStop S-Series I/O Enclosures

Topics discussed in this subsection are:

Topic                                                  Page
IOMF 2 CRU                                             5-39
NonStop S-Series Disk Drives and ServerNet Adapters    5-39
NonStop S-Series I/O Enclosure Group Numbers           5-39

Note. The Integrity NonStop NS14000 system does not support connections to NonStop S-series I/O enclosures.
This illustration shows connection of a NonStop S-series I/O enclosure to an Integrity NonStop NS16000 system.

[Figure: Fiber-optic ServerNet cables and power-on cables connecting the NonStop S-series I/O enclosure to ServerNet I/O PICs in the X-fabric and Y-fabric p-switches.]

Each p-switch (for the X or Y fabric) has up to six I/O PICs, with one I/O PIC required for each IOAM enclosure in the system.
IOMF 2 CRU

The Integrity NonStop NS16000 system is compatible with most disks and ServerNet adapters contained in currently installed NonStop S-series I/O enclosures equipped with IOMF 2 CRUs. I/O multifunction 2 (IOMF 2) CRUs, each equipped with an MMF PIC, are required for connecting NonStop S-series I/O enclosures to Integrity NonStop NS16000 systems.
This table shows the group number assignments for the NonStop S-series I/O enclosures:

P-Switch PIC Slot     P-Switch PIC   NonStop S-Series I/O
(X and Y Fabrics)     Connector      Enclosure Group
4                     1              11
                      2              12
                      3              13
                      4              14
5                     1              21
                      2              22
                      3              23
                      4              24
6                     1              31
                      2              32
                      3              33
                      4              34
7                     1              41
                      2              42
                      3              43
                      4              44
8                     1              51
                      2              52
                      3              53
                      4              54
9                     1              61
                      2              62
                      3              63
                      4              64
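The table follows a regular pattern: each I/O PIC slot owns one decade of group numbers, and the connector selects the group within that decade. The formula below is inferred from the table (the guide itself states only the table):

    # Group number for a NonStop S-series I/O enclosure connection:
    # PIC slots 4-9 map to decades 10-60; connectors 1-4 pick the group.
    def s_series_group(pic_slot, connector):
        assert 4 <= pic_slot <= 9 and 1 <= connector <= 4
        return (pic_slot - 3) * 10 + connector   # slot 4 -> 11-14 ... slot 9 -> 61-64

    print(s_series_group(4, 1))  # 11
    print(s_series_group(9, 4))  # 64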
This illustration shows the group number assignments on the p-switch.

[Figure: P-switch front view with I/O PIC slots 4 through 9 labeled for I/O enclosure groups 11-14, 21-24, 31-34, 41-44, 51-54, and 61-64; slots 10 through 13 carry the connections to IOAM enclosures or processors.]
6 System Configuration Guidelines

This section provides guidelines for Integrity NonStop NS-series system configurations.

Note. For system configuration guidelines specific to the Integrity NonStop NS14000 system, see Section 8, Integrity NonStop NS14000 System Description.
Enclosure Locations in Cabinets

This example shows one possible system configuration using a duplex processor, or two NonStop Blade Elements.

[Figure: Modular cabinet containing, top to bottom, a maintenance switch, Fibre Channel disk modules, an IOAM enclosure, the Y-fabric and X-fabric p-switches, an LSU enclosure, and NonStop Blade Elements B and A, with the PDU cabled to the AC power source or site UPS.]

For other example configurations, see Section 7, Example Configurations.
Enclosure or Component          Height (U)   Required Cabinet Location
Extended runtime module (ERM)   3U           Immediately above the UPS (and above the first ERM if two ERMs are installed). Up to two ERMs can be installed.
Cabinet stabilizer              N/A          Bottom front exterior of cabinet. Required when you have fewer than four cabinets bayed together; a cabinet stabilizer is not required when the cabinet is bolted to its adjacent cabinet.
Internal ServerNet Interconnect Cabling

This subsection includes:

Topic                                           Page
Cable Labeling                                  6-4
Cable Management System                         6-5
Internal Interconnect Cables                    6-6
Dedicated Service LAN Cables                    6-7
Cable Length Restrictions                       6-7
Internal Cable Product IDs                      6-8
NonStop Blade Elements to LSUs                  6-8
LSUs to Processor Switches and Processor IDs    6-8
Processor Switch ServerNet Connections          6-14
Processor Switches to IOAM Enclosures
This label identifies the cable connecting the p-switches at U31 of cabinets 1 and 2 at slot 3, connectors 1 or 2, which are crosslink connections between the two p-switches:

Near end (early version): N1-R1-U31-3.1 (N1-R2-U31-3.2)
Far end (early version):  N1-R2-U31-3.2 (N1-R1-U31-3.1)
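Reading the label fields as node, rack, rack offset (U), slot, and connector, a label can be decomposed programmatically. A small parsing sketch under that reading of the format (an illustration, not an HP tool):

    # Parse a cable label such as N1-R1-U31-3.1 into its fields, reading
    # the format as node-rack-offset-slot.connector.
    import re

    LABEL = re.compile(
        r"N(?P<node>\d+)-R(?P<rack>\d+)-U(?P<offset>\d+)"
        r"-(?P<slot>\d+)\.(?P<connector>\d+)")

    def parse_label(text):
        m = LABEL.fullmatch(text)
        return {k: int(v) for k, v in m.groupdict().items()} if m else None

    print(parse_label("N1-R1-U31-3.1"))
    # {'node': 1, 'rack': 1, 'offset': 31, 'slot': 3, 'connector': 1}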
Several Integrity NonStop NS-series enclosures, specifically the NonStop Blade Element, p-switch, and LSU, integrate CMS provisions for routing and securing the fiber-optic cables to prevent damaging them when the enclosures are moved out of and back into the cabinet for servicing.
Fiber-optic cables use either LC or SC connectors at one or both ends.

[Figure: An LC fiber-optic cable connector pair and an SC fiber-optic cable connector pair.]

Dedicated Service LAN Cables

The system also uses Category 5 unshielded twisted-pair Ethernet cables for the internal dedicated service LAN and for connections between the G4SA and the application LAN equipment.
Although a considerable cable length can exist between the modular enclosures in the system, HP recommends placing all cabinets adjacent to each other and bolting them together, with the cable length between each of the enclosures kept as short as possible.

Internal Cable Product IDs

For product IDs, see Internal Cables on page A-1.
P-Switch PIC Slot   PIC Port   Processor Number
11                  1          4
                    2          5
                    3          6
                    4          7
12                  1          8
                    2          9
                    3          10
                    4          11
13                  1          12
                    2          13
                    3          14
                    4          15

The four cabling diagrams on the next pages illustrate the default configuration and connections for a triplex system processor. These diagrams are not for use in installing or cabling the system.
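The mapping in the preceding table is regular: each processor PIC slot covers four consecutive processor numbers, selected by port. Expressed as a formula (inferred from the table, which is all the guide states):

    # Logical processor defined by an LSU cable on a p-switch processor
    # PIC: slots 10-13 each define four consecutive processors.
    def processor_number(pic_slot, pic_port):
        assert 10 <= pic_slot <= 13 and 1 <= pic_port <= 4
        return (pic_slot - 10) * 4 + (pic_port - 1)

    assert processor_number(10, 1) == 0    # slot 10, port 1 -> processor 0
    assert processor_number(13, 4) == 15   # slot 13, port 4 -> processor 15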
This figure shows example connections to the default configuration of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 on the p-switch PIC in slot 10, which defines triplex processor numbers 0 to 3.
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 11 for triplex processor numbers 4 to 7.

[Figure: P-switch slot 11, ports 1 through 4, defining processors 4 through 7 on the X-fabric and Y-fabric processor switches.]
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 12 for triplex processor numbers 8 to 11.

[Figure: P-switch slot 12, ports 1 through 4, defining processors 8 through 11 on the X-fabric and Y-fabric processor switches.]
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 13 for triplex processor numbers 12 to 15.

[Figure: P-switch slot 13, ports 1 through 4, defining processors 12 through 15 on the X-fabric and Y-fabric processor switches.]
Processor Switch ServerNet Connections

ServerNet connections to the system I/O devices (storage disk and tape drives as well as Ethernet communication to networks) radiate out from the p-switches for both the X and Y ServerNet fabrics to the IOAMs in one or more IOAM enclosures.
Processor Switches to IOAM Enclosures

• Each port on the p-switch PIC must connect to the same numbered port on the IOAM enclosure's ServerNet switch board (port 1 to port 1, port 2 to port 2, and so forth).
• Connections to an IOAM enclosure cannot coexist on the same p-switch PIC with connections to a NonStop S-series I/O enclosure.
FCSA to Fibre Channel Disk Modules

See Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module on page 6-26.

FCSA to Tape Devices

Fibre Channel tape devices can be connected directly to an FCSA in an IOAM enclosure. Integrity NonStop NS-series systems do not support SCSI buses or adapters to connect tape devices.
P-Switch to NonStop S-Series I/O Enclosure Cabling

Note. The Integrity NonStop NS14000 system does not use p-switches or support connections to NonStop S-series I/O enclosures.

Each NonStop S-series I/O enclosure uses one port of one PIC in each of the two p-switches for ServerNet connection.
This illustration shows the cables from the NonStop S-series IOMF 2 CRUs connected to port 1 of the PICs in slot 4 of the X and Y p-switches, assigning the group number of 11.

[Figure: Fiber-optic ServerNet cables and power-on cables from the NonStop S-series I/O enclosure to the ServerNet I/O PICs of the X-fabric and Y-fabric p-switches.]
See the conversion instructions in the Hardware Service and Maintenance Publications category of the Support and Service Library of NTL. See Support and Service Library on page C-1.

• Disk drives and ServerNet adapters (except SEB and MSEB CRUs) used in NonStop S-series I/O enclosures, as well as devices that are downstream of these enclosures, are compatible with NonStop NS-series hardware.
IOAM Enclosure and Disk Storage Considerations

When deciding between one IOAM enclosure or two (or more), consider:

One IOAM enclosure: High-availability and fault-tolerant attributes of NonStop S-series systems with I/O enclosures using tetra-8 and tetra-16 topologies.

Two IOAM enclosures: Greater availability because of multiple redundant ServerNet paths and FCSAs.
Fibre Channel Devices

This illustration shows an FCSA with the indicators and ports that are used and not used in Integrity NonStop NS-series systems.

[Figure: FCSA front view showing Fibre Channel ports 1 and 2 with 2Gb/1Gb indicators; the Ethernet ports (C and D) are not available for use on the FCSA.]
This illustration shows the locations of the hardware in the Fibre Channel disk module as well as the Fibre Channel port connectors at the back of the enclosure.

[Figure: Fibre Channel disk module, rear view, showing FC-AL ports A1, A2, B1, and B2 and the EMU; front view showing disk drive bays 1 through 14.]

Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop (FC-AL) cables.
Factory-Default Disk Volume Locations

This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate Fibre Channel disk modules.

[Figure: Fibre Channel disk module, front view, with $SYSTEM in bay 1, $DSMSCM in bay 2, $AUDIT in bay 3, and $OSS in bay 4.]

FCSA location and cable connections vary according to the various controller and Fibre Channel disk module combinations.
Configuration Recommendations for Fibre Channel Devices

• The mirror path and mirror backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message. (A small validation sketch follows the list below.)
• In systems with one IOAM enclosure:
  ° With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in module 2 of the IOAM enclosure, with the backup FCSA residing in module 3. (See the example configuration in Two FCSAs, Two FCDMs, One IOAM Enclosure on page 6-27.)
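The module-placement rule above lends itself to a mechanical check. A hedged sketch of such a check (the data layout is hypothetical; SCF performs the real validation and raises the warning described above):

    # Verify that the two Fibre Channel links serving a disk's path pair
    # land on FCSAs in different IOAM modules, per the rule above.
    def paths_in_different_modules(link_a, link_b):
        """Each link is (ioam_group, ioam_module, fcsa_slot)."""
        return link_a[:2] != link_b[:2]   # compare (group, module)

    print(paths_in_different_modules((110, 2, 1), (110, 3, 1)))  # True
    print(paths_in_different_modules((110, 2, 1), (110, 2, 2)))  # False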
Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module

These subsections show various example configurations of FCSA controllers and Fibre Channel disk modules with IOAM enclosures.

Note. Although it is not a requirement for fault tolerance, housing the primary and mirror disk drives in separate FCDMs is recommended.
Two FCSAs, Two FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the two FCSAs and the primary and mirror Fibre Channel disk modules.

[Figure: Two FCSAs in one IOAM enclosure, with Fibre Channel cables to the primary and mirror FCDMs.]
Four FCSAs, Four FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the four FCSAs and the two sets of primary and mirror Fibre Channel disk modules.

[Figure: Four FCSAs in one IOAM enclosure, with Fibre Channel cables to primary FCDMs 1 and 2 and mirror FCDMs 1 and 2.]
Two FCSAs, Two FCDMs, Two IOAM Enclosures

This illustration shows example cable connections between the two FCSAs, split between two IOAM enclosures, and one set of primary and mirror Fibre Channel disk modules.

[Figure: One FCSA in each of two IOAM enclosures, cabled to the primary and mirror FCDMs.]
Four FCSAs, Four FCDMs, Two IOAM Enclosures

This illustration shows example cable connections between the four FCSAs, split between two IOAM enclosures, and two sets of primary and mirror Fibre Channel disk modules.

[Figure: Two FCSAs in each of two IOAM enclosures, cabled to primary FCDMs 1 and 2 and mirror FCDMs 1 and 2.]
Daisy-Chain Configurations

When planning for possible use of daisy-chained disks, consider:

• Daisy-chained disks are recommended for cost-sensitive storage and for applications using low-bandwidth disk I/O.
• Daisy-chained disks are not recommended for many volumes in a large Fibre Channel loop.
This illustration shows an example of cable connections between the two FCSAs and four Fibre Channel disk modules in a single daisy-chain configuration:

[Figure: Two FCSAs in one IOAM enclosure connected by fiber-optic cables to FCDMs 1 through 4, daisy-chained A side to B side through ID expanders, with terminators at each end of the chain (VST081)]
Four FCSAs, Three FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the four FCSAs and three Fibre Channel disk modules, with the primary and mirror drives split within each Fibre Channel disk module:

[Figure: Four FCSAs in one IOAM enclosure connected to FCDM 1 (Primary 1, Mirror 2), FCDM 2 (Primary 2, Mirror 3), and FCDM 3 (Primary 3, Mirror 1) (VST085)]
This illustration shows the factory-default locations for the configuration of four FCSAs and three Fibre Channel disk modules, where the primary system file disk volumes are in Fibre Channel disk module 1:

[Figure: Fibre Channel disk module (front) with $SYSTEM in bay 1, $DSMSCM in bay 2, $AUDIT in bay 3, and $OSS in bay 4 (VSD.082)]
G4SAs to Networks

The G4SA provides Gigabit connectivity between Integrity NonStop NS-series systems and Ethernet LANs. The G4SA is an end node on the ServerNet and uses either fiber-optic or copper cable for connectivity to user application LANs, as well as for the dedicated service LAN. For more information on the G4SA, see Gigabit Ethernet 4-Port ServerNet Adapter on page 5-24 or the Gigabit 4-Port ServerNet Adapter Installation and Support Guide.
This illustration shows a conceptual example of copper and fiber-optic connectivity to the various LANs:

[Figure: G4SA in an IOAM enclosure with fiber (10/100/1000 Mbps) and copper (10/100/1000 Mbps) connections to application LANs, and copper (10/100 Mbps) connections to the maintenance switch and then to the operations LAN]

Default Naming Conventions
The default naming conventions are:

Type of Object       Naming Convention                Example   Description
TCP/IP process       $ZTC number                      $ZTC0     First TCP6SAM or TCP/IP process for the system
Telserv process      $ZTN number                      $ZTN0     First Telserv process for the system
Listener process     $LSN number                      $LSN0     First Listener process for the system
TFTP process         Automatically created by WANMGR  None      None
WANBOOT process      Automatically created by WANMGR  None      None
SWAN concentrator    S number                         S10
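Because the defaults follow a simple prefix-plus-number pattern, a planning script can generate them. A minimal sketch, assuming the conventions in the table above (the function is hypothetical, not an HP utility):

    # Prefixes follow the default naming conventions in the table above.
    NAME_PREFIXES = {
        "tcpip": "$ZTC",     # TCP6SAM or TCP/IP process
        "telserv": "$ZTN",   # Telserv process
        "listener": "$LSN",  # Listener process
        "swan": "S",         # SWAN concentrator
    }

    def default_name(kind: str, number: int) -> str:
        """Build a default process or device name, e.g. ('tcpip', 0) -> $ZTC0."""
        return f"{NAME_PREFIXES[kind]}{number}"

    print(default_name("tcpip", 0))     # $ZTC0
    print(default_name("telserv", 0))   # $ZTN0
    print(default_name("swan", 10))     # S10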
7 Example Configurations

This section shows example configurations of the Integrity NonStop NS-series hardware that can be installed in a modular cabinet. A number of other configurations are also possible because of the flexibility inherent in the NonStop advanced architecture and ServerNet.

Note. Hardware configuration drawings in this section represent the physical arrangement of the modular enclosures but do not show the location of the PDU junction boxes.
Typical Configurations

Enclosure or Component (page 2 of 2), with minimum, typical, and maximum counts for duplex and triplex processor systems:

• IOAM enclosure: minimum 1, typical 2, maximum 6 (duplex and triplex)
• FCSA: minimum 2; maximum up to 60 in a mixture set by disks and I/O (duplex and triplex)
• G4SA: typical up to 20 in a mixture set by disks and I/O; maximum up to 60 in a mixture set by disks and I/O (duplex and triplex)
• Fibre Channel disk module: minimum 2, typical 4, maximum 8 (duplex and triplex)
• Fibre Channel disk drives: minimum 14, typical 56, maximum 112 (duplex and triplex)
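The recovered ranges can drive a quick sanity check of a planned configuration. An illustrative sketch with hypothetical names; it uses only the fixed minimum and maximum counts above and does not model the shared FCSA/G4SA slot budget:

    # (min, max) counts per component, from the typical-configurations
    # table above (same for duplex and triplex systems).
    LIMITS = {
        "ioam_enclosure": (1, 6),
        "fcdm": (2, 8),
        "fc_disk_drives": (14, 112),
    }

    def check_plan(plan: dict) -> list:
        """Return a list of components whose counts fall outside the limits."""
        problems = []
        for component, count in plan.items():
            lo, hi = LIMITS[component]
            if not lo <= count <= hi:
                problems.append(f"{component}: {count} outside {lo}..{hi}")
        return problems

    # A typical system: two IOAM enclosures, four FCDMs, 56 drives.
    print(check_plan({"ioam_enclosure": 2, "fcdm": 4, "fc_disk_drives": 56}))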
Duplex 8-Processor System, Two Cabinets

This duplex configuration has a maximum of eight logical processors with one IOAM enclosure and up to 12 Fibre Channel disk modules (four Fibre Channel disk modules in a typical system):

[Figure: Two-cabinet rack elevation (42U) showing Fibre Channel disk modules or available space for additional FCDMs, the IOAM enclosure, p-switches, and console]
Duplex 16-Processor System, Three Cabinets

This duplex configuration has a maximum of 16 logical processors with two IOAM enclosures and up to 14 Fibre Channel disk modules (one IOAM enclosure and eight Fibre Channel disk modules in a typical system):

[Figure: Three-cabinet rack elevation (42U) showing available space, IOAM enclosures, and p-switches]
Duplex 16-Processor System, Two Cabinets

This duplex configuration has a maximum of 16 logical processors with one IOAM enclosure and four Fibre Channel disk modules:

[Figure: Two-cabinet rack elevation (42U) showing Fibre Channel disk modules, the IOAM enclosure, p-switches, console, and LSUs]
Triplex 8-Processor System, Two Cabinets

This triplex configuration has a maximum of eight logical processors with one IOAM enclosure and ten Fibre Channel disk modules (four Fibre Channel disk modules in a typical system):

[Figure: Two-cabinet rack elevation (42U) showing available space or additional FCDMs, the IOAM enclosure, p-switches, and NonStop Blade Elements]
Triplex 16-Processor System, Three Cabinets

This triplex configuration has a maximum of 16 logical processors with one IOAM enclosure and ten Fibre Channel disk modules (eight Fibre Channel disk modules in a typical system):

[Figure: Three-cabinet rack elevation (42U) showing the IOAM enclosure, p-switches, and NonStop Blade Elements]
Example System With UPS and ERM

The UPS and ERMs (two ERMs maximum) must reside at the bottom of the cabinet:

[Figure: 42U rack elevation with the UPS and ERMs at the bottom of the cabinet, below the maintenance switch and other enclosures]
Example System With One NonStop S-Series I/O Enclosure

[Figure: Integrity NonStop NS-series system (Fibre Channel disk modules, IOAM enclosure, p-switches, LSU, and NonStop Blade Elements A and B) connected to a NonStop S-series I/O enclosure through fiber-optic ServerNet cables and power-on cables on the X and Y fabrics via ServerNet I/O PICs (VST810)]
Example Configurations Example 4-Processor Duplex System Cabling Example 4-Processor Duplex System Cabling This illustration shows an example 4-processor duplex system in a single cabinet. This simplified, conceptual representation shows the X and Y ServerNet cabling between the NonStop Blade Element, LSU, p-switch, and IOAM enclosures. For clarity, power and Ethernet cables are not shown. For cable-by-cable interconnect diagrams, see Internal ServerNet Interconnect Cabling on page 6-4.
The IOAM is the two-controller, two-Fibre Channel disk module configuration shown in detail in Two FCSAs, Two FCDMs, One IOAM Enclosure on page 6-27. For details and instructions on connecting cables as part of the system installation, refer to the NonStop NS-Series Hardware Installation Manual.

Example 16-Processor Triplex System Cabling

The next two illustrations are an example of a 16-processor triplex system with four cabinets.
Components of the cable management system that are part of each modular enclosure and the modular cabinet are not shown, so actual cable routing is slightly different from that shown. For detailed information on cable routing and connection as part of the system installation, refer to the NonStop NS-Series Hardware Installation Manual.
8 Integrity NonStop NS14000 System Description

This section describes the Integrity NonStop NS14000 system and covers these topics:

Topic                                  Page
Server Description                     8-1
Installation Facility Guidelines       8-3
System Installation Specifications     8-3
System Implementation                  8-3
Modular System Hardware                8-14
System Configuration Guidelines        8-16

Server Description

The Integrity NonStop NS14000 server uses Intel Itanium processors and NonStop advanced architecture (NSAA) in duplex and triplex configurations.
Figure 8-1 shows a 4-processor, duplex configuration example of an Integrity NonStop NS14000 server.

[Figure 8-1: Example 4-processor duplex Integrity NonStop NS14000 server]
Integrity NonStop NS14000 System Description Installation Facility Guidelines Installation Facility Guidelines The Integrity NonStop NS14000 system uses the same power and cooling architecture as the Integrity NonStop NS16000 server. For information on these topics, see Section 2, Installation Facility Guidelines. System Installation Specifications The Integrity NonStop NS14000 system adheres to the same system installation specifications as the Integrity NonStop NS16000 server.
Integrity NonStop NS14000 System Description NonStop Advanced Architecture NonStop Advanced Architecture The Integrity NonStop NS14000 system implements the NonStop advanced architecture (NSAA) using NonStop Blade Elements in duplex and triplex configurations. See NonStop Advanced Architecture on page 4-2.
Triplex Processor

The Integrity NonStop NS14000 system implements a triplex processor (see Triplex Processor on page 4-7) similarly to the Integrity NonStop NS16000 system, with these exceptions:

• Logical synchronization units (LSUs) connect directly to 4PSEs in the IOAM enclosure.
• Unlike the Integrity NonStop NS16000 system, the Integrity NonStop NS14000 system does not use p-switches.
ServerNet Fabric I/O

This subsection provides information about the ServerNet network in an Integrity NonStop NS14000 system and covers these topics:

Topic                                  Page
Overview                               8-6
Simplified ServerNet System Diagram    8-7
P-Switch ServerNet Pathways            8-7
IOAM Enclosure ServerNet Pathways      8-8
Example System ServerNet Pathways      8-9

For further information on the ServerNet network, protocols, IP addresses, and naming conventions, see the Int
Integrity NonStop NS14000 System Description ServerNet Fabric I/O Simplified ServerNet System Diagram This simplified diagram shows the ServerNet architecture in the Integrity NonStop NS14000 system. It shows the X and Y ServerNet communication pathways between the NonStop Blade Complexes (NSBC), 4PSEs, ServerNet switch boards, and ServerNet I/O adapters.
Integrity NonStop NS14000 System Description ServerNet Fabric I/O IOAM Enclosure ServerNet Pathways This drawing shows the IOAM enclosure ServerNet communication pathways. Optic lines connect the LSU with the 4PSEs in each IOAM module. The 4PSEs in IOAM Module 2 communicate with the X ServerNet switch board. The 4PSEs in IOAM Module 3 communicate with the Y ServerNet switch board. Each ServerNet switch board communicates with the FCSAs and G4SAs located in IOAM Module 2 and IOAM Module 3.
Example System ServerNet Pathways

This drawing shows the redundant routing and connection of the ServerNet X and Y fabrics within a simple example system. This example system includes:

• Four processors (0 through 3) with their requisite four LSU optics adapters
• One IOAM enclosure, group 100, connected to the LSUs in a NonStop Blade Complex
Integrity NonStop NS14000 System Description ServerNet Fabric I/O If a cable, connection, router, or other failure occurs, only the system resources that are downstream of the failure on the same fabric are affected. Because of the redundant ServerNet architecture, communication takes the alternate path on the other fabric to the peer resources.
System Architecture

This diagram shows elements of an example Integrity NonStop NS14000 system with four triplex processors:

[Figure: Triplex NS14000 system architecture showing NonStop Blade Elements A, B, and C (processor elements PE 0 through PE 3), LSUs 0 through 3 with X and Y connections, an IOAM enclosure with 4PSEs, FCSAs, and G4SAs in I/O adapter modules 2 and 3, and attached Fibre Channel disk modules (FC-AL loops A and B), ESS, Fibre Channel tape, and high-speed Ethernet]
Integrity NonStop NS14000 System Description NonStop S-Series I/O Hardware NonStop S-Series I/O Hardware The Integrity NonStop NS14000 system does not support connections to NonStop S-series I/O enclosures.
Load Path  Description    Source Disk  Destination Processor  ServerNet Fabric  (page 2 of 2)
13         Mirror         $SYSTEM-M    1                      X
14         Mirror         $SYSTEM-M    1                      Y
15         Mirror backup  $SYSTEM-M    1                      X
16         Mirror backup  $SYSTEM-M    1                      Y

This illustration shows the system load paths.
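The visible rows of the load-path table (paths 13 through 16) can be captured in a structure for scripting. A sketch using only the recovered rows; the full table lists additional load paths not reproduced here, and the names are hypothetical:

    # Load paths 13-16 from the table above (page 2 of 2):
    # (description, source disk, destination processor, ServerNet fabric)
    LOAD_PATHS = {
        13: ("Mirror", "$SYSTEM-M", 1, "X"),
        14: ("Mirror", "$SYSTEM-M", 1, "Y"),
        15: ("Mirror backup", "$SYSTEM-M", 1, "X"),
        16: ("Mirror backup", "$SYSTEM-M", 1, "Y"),
    }

    def paths_for_fabric(fabric: str) -> list:
        """Return the load-path numbers that use the given ServerNet fabric."""
        return [n for n, (_, _, _, f) in sorted(LOAD_PATHS.items()) if f == fabric]

    print(paths_for_fabric("X"))  # [13, 15]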
Integrity NonStop NS14000 System Description System Installation Document Packet System Installation Document Packet The Installation Document Packet for an Integrity NonStop NS14000 system contains forms similar to those contained in an Integrity NonStop NS16000 System Installation Document Packet.
Differences Specific to Integrity NonStop NS14000 Servers (page 2 of 2)

Hardware Component: I/O adapter module (IOAM) enclosure
• Only one IOAM enclosure is supported in an Integrity NonStop NS14000 server.
• The group number for the IOAM enclosure is 100 in an Integrity NonStop NS14000 system.
• The maintenance entity on the ServerNet switch board supports direct connections to both LSUs and ServerNet I/O adapters.
Integrity NonStop NS14000 System Description System Configuration Guidelines System Configuration Guidelines The system configuration guidelines for the Integrity NonStop NS14000 system are similar to those for the Integrity NonStop NS16000 system. For general information on this topic, see Section 6, System Configuration Guidelines. For example configurations, see Section 7, Example Configurations.
Topic                                      Page  (page 2 of 2)
LSUs to IOAM Enclosure and Processor IDs   8-18
Processor Switch ServerNet Connections     8-22
Processor Switches to IOAM Enclosures      8-22
FCSA to Fibre Channel Disk Modules         8-22
FCSA to Tape Devices                       8-22

Cable Labeling

The Integrity NonStop NS14000 system implements cable labeling in the same manner as the Integrity NonStop NS16000 server.
Cable Length Restrictions

Aside from the cable length restrictions described here, the Integrity NonStop NS14000 system has cable length restrictions similar to those of the Integrity NonStop NS16000 system. See Cable Length Restrictions on page A-3.
4PSEs install in slots 2.1, 2.2, 3.1, and 3.2. Therefore, fiber-optic cable connections from the LSUs to the 4PSEs in the IOAM enclosure determine the number of each NonStop Blade Complex.

Note. Only 4PSEs are supported for installation in slots 2.1, 2.2, 3.1, and 3.2 of the IOAM enclosure for an Integrity NonStop NS14000 server.

This table lists the default 4PSE slot and port coupling to the processor number:

[Table: default 4PSE slot (module.slot) and port coupling to processor number]
This figure shows example connections to the default configuration of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, and R) and ports 1 to 4 on the 4PSE in the IOAM enclosure, which define triplex processor numbers 0 to 3. Two 4PSEs are required: one for the X fabric and the other for the Y fabric:

• LSU position 20 supports processor 0.
• LSU position 21 supports processor 1.
• LSU position 22 supports processor 2.
• LSU position 23 supports processor 3.
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, and R) and ports 1 to 4 on the 4PSE in the IOAM enclosure, which define triplex processor numbers 4 to 7:

• LSU position 24 supports processor 4.
• LSU position 25 supports processor 5.
• LSU position 26 supports processor 6.
• LSU position 27 supports processor 7.
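The default couplings follow an arithmetic pattern: processor n connects through LSU position 20 + n and port (n mod 4) + 1 on the first or second 4PSE pair. A small sketch of that relationship (hypothetical helper, derived from the two lists above):

    def default_coupling(processor: int) -> dict:
        """Default LSU position and 4PSE port for NS14000 processors 0-7."""
        if not 0 <= processor <= 7:
            raise ValueError("NS14000 processors are numbered 0-7")
        return {
            "lsu_position": 20 + processor,     # positions 20-27
            "fourpse_pair": processor // 4,     # first or second 4PSE pair
            "fourpse_port": processor % 4 + 1,  # ports 1-4 on each 4PSE
        }

    print(default_coupling(0))  # {'lsu_position': 20, 'fourpse_pair': 0, 'fourpse_port': 1}
    print(default_coupling(5))  # {'lsu_position': 25, 'fourpse_pair': 1, 'fourpse_port': 2}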
Processor Switch ServerNet Connections

Integrity NonStop NS14000 servers do not use p-switches. Instead, ServerNet cluster connections connect to ServerNet switch boards in the IOAM enclosure.

Processor Switches to IOAM Enclosures

Integrity NonStop NS14000 servers do not use p-switches. Instead, LSUs in a NonStop Blade Complex connect directly to 4PSEs installed in the IOAM enclosure.
Fibre Channel Devices

This subsection describes Fibre Channel devices and covers these topics:

Topic                                                                       Page
Factory-Default Disk Volume Locations                                       8-23
Configurations for Fibre Channel Devices                                    8-23
Configuration Restrictions for Fibre Channel Devices                        8-23
Configuration Recommendations for Fibre Channel Devices                     8-23
Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module  8-24

For more information on the FCSA, see
• The group number for the IOAM enclosure is 100.
• The Integrity NonStop NS14000 supports only configurations with one IOAM enclosure.
• In systems with one IOAM enclosure:
° With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in module 2 of the IOAM enclosure, and the backup FCSA resides in module 3. (See the example configuration in Two FCSAs, Two FCDMs, One IOAM Enclosure on page 6-27.)
A Cables

Internal Cables

Available internal cables and their lengths are:

Cable Type  Connectors  Length (meters)  Length (feet)  Product ID
MMF         LC-LC       2                7              M8900-02
                        5                16             M8900-05
                        15               49             M8900-15
                        40               131            M8900-40
                        80               262            M8900-80
                        100              328            M8900-100
                        125¹             410¹           M8900-125
                        200¹             656¹           M8900-200
                        250¹             820¹           M8900-250
                        10               33             M8910-10
                        20               66             M8910-20
                        50               164            M8910-50
                        100              328            TBD
                        125¹             410¹           M8910-125
                        3                10             M8920-3
                        5                16             M8920-5
                        10               33             M8920-10
                        30               98             M8920-30
                        50               164            M8920-50
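For cable ordering, the length-to-product-ID mapping can be expressed as a lookup. This sketch covers only the MMF LC-LC (M8900) family from the table above; the names are hypothetical:

    # Available MMF LC-LC cable lengths (meters) -> product ID,
    # from the internal-cables table above.
    M8900_CABLES = {
        2: "M8900-02", 5: "M8900-05", 15: "M8900-15", 40: "M8900-40",
        80: "M8900-80", 100: "M8900-100", 125: "M8900-125",
        200: "M8900-200", 250: "M8900-250",
    }

    def shortest_cable(required_m: float) -> str:
        """Pick the shortest stocked MMF LC-LC cable that spans the run."""
        for length in sorted(M8900_CABLES):
            if length >= required_m:
                return M8900_CABLES[length]
        raise ValueError(f"no MMF LC-LC cable covers {required_m} m")

    print(shortest_cable(33))  # M8900-40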
ServerNet Cluster Cables

These cables connect the Integrity NonStop NS-series systems to a ServerNet cluster (zone) with Model 6780 NonStop ServerNet switches:

Cable Type  Connectors  Length (meters)  Length (feet)  Product ID
SMF         LC-LC       2                7              M8921-2
                        5                16             M8921-5
                        10               33             M8921-10
                        25               82             M8921-25
                        40               131            M8921-40
                        80               262            M8921-80
                        100              328            M8921-100

These cables connect the p-switches on the Integrity NonStop NS-series to the Model 6770 NonStop ServerNet Cluster switches:
Cable Length Restrictions

Maximum allowable lengths of cables connecting the modular system components are:

Connection                                      Fiber Type  Connectors  Maximum Length  Product ID
NonStop Blade Element to LSU enclosure          MMF         LC-LC       100 m           M8900nnn1
NonStop Blade Element to NonStop Blade Element  MMF         MTP         50 m            M8920nnn1
LSU enclosure to p-switch                       MMF         LC-LC       125 m           M8900nnn1
P-switch to p-switch crosslink                  MMF         LC-LC       125 m           M8900nnn1
P-switch to IOAM enclosure                      MMF         LC-LC       125 m           M8900nnn1
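These maximums can also be checked programmatically when laying out cable runs. A minimal sketch assuming the limits in the table above; the connection keys and function are hypothetical:

    # Maximum cable lengths (meters) per connection type, from the table above.
    MAX_LENGTH_M = {
        "blade_to_lsu": 100,
        "blade_to_blade": 50,
        "lsu_to_pswitch": 125,
        "pswitch_crosslink": 125,
        "pswitch_to_ioam": 125,
    }

    def check_run(connection: str, length_m: float) -> None:
        """Raise if a planned cable run exceeds the allowed maximum."""
        limit = MAX_LENGTH_M[connection]
        if length_m > limit:
            raise ValueError(
                f"{connection}: {length_m} m exceeds the {limit} m maximum")

    check_run("blade_to_lsu", 80)       # OK
    # check_run("blade_to_blade", 60)   # would raise: exceeds 50 m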
Cable Management System

This illustration shows the CMS components for an example Integrity NonStop NS-series server:

[Figure: Cable management system components, including the cabinet cableway, cabinet spools, cabinet-to-cabinet half-spools, containment spools, cable tray, cable management arm, vertical radius guide (VRG), and cable management cartridge (VST119)]
B Control, Configuration, and Maintenance Tools

This section introduces the control, configuration, and maintenance tools used in Integrity NonStop NS-series systems:

Topic                              Page
Support and Service Library        B-1
System Console                     B-1
Maintenance Architecture           B-6
Dedicated Service LAN              B-9
OSM                                B-22
System-Down OSM Low-Level Link     B-22
AC Power Monitoring                B-23
AC Power-Fail States               B-24

Support and Service Library

See Support and Service Library on page C-1.
Control, Configuration, and Maintenance Tools System Console Configurations Some system console hardware, including the PC system unit, monitor, and keyboard, can be mounted in the cabinet. Other PCs are installed outside the cabinet and require separate provisions or furniture to hold the PC hardware. System consoles communicate with Integrity NonStop NS-series servers over a dedicated service local area network (LAN) or a secure operations LAN.
One System Console Managing One System (Setup Configuration)

[Figure: Setup configuration in which a primary system console, a modem to the remote service provider, and an optional DHCP/DNS server connect through maintenance switch 1 to the processor switches, FCSAs, and G4SAs, with an optional connection (one or two) to a secure operations LAN]
Control, Configuration, and Maintenance Tools System Console Configurations One System Console Managing Multiple Systems The one OSM system console on the LAN must be configured as the primary system console. Because all servers are shipped with the same preconfigured IP addresses for MSP0, MSP1, $ZTCP0, and $ZTCP1, you must change these IP addresses for the second and subsequent servers before you can add them to the LAN.
Control, Configuration, and Maintenance Tools System Console Configurations This configuration is recommended. It is similar to the setup configuration, but for fault-tolerant redundancy, it includes a second maintenance switch, backup system console, and modem. The maintenance switches provide a dedicated LAN in which all nodes use the same subnet. Note. A subnet is a network division within the TCP/IP model. Within a given network, each subnet is treated as a separate network.
Control, Configuration, and Maintenance Tools Maintenance Architecture For best OSM performance, no more than 10 servers should be included within one subnet. Because all servers are shipped with the same preconfigured IP addresses for MSP0, MSP1, $ZTCP0, and $ZTCP1, you must change these IP addresses for the second and subsequent servers before you can add them to the LAN. You must change the preconfigured IP addresses of the second and subsequent system consoles before you can add them to the LAN.
Fabrics Functional Element

Other hardware modules contain at least one microprocessor and firmware that performs maintenance functions for their local logic:

• NonStop Blade Element
• Logical synchronization unit (LSU)
• Fibre Channel disk module

The ServerNet fabrics, rather than the dedicated service LAN, provide maintenance interconnection to the OSM console for these modules.
IOAM ME Firmware

Each IOAM module contains ServerNet adapters and one ServerNet switch board. (See I/O Adapter Module (IOAM) Enclosure and I/O Adapters on page 5-20.) Each IOAM connects directly to a p-switch and contains a single ME that resides within the ServerNet switch board, along with the ServerNet fabric interconnect.
Dedicated Service LAN

A dedicated service LAN provides connectivity between the OSM console running on a PC and the maintenance firmware in the system hardware. This dedicated service LAN uses a ProCurve 2524 Ethernet switch for connectivity between the p-switches, the ServerNet switch boards for each IOAM, and the system console.
• IOAM module 3 ServerNet switch board for OSM control of the I/O hardware
• Connections to both X and Y fabrics (for fault tolerance) for OSM system-up maintenance (any one of these connections is valid as long as there are at least two connections total):
° Gigabit 4-port ServerNet adapters (G4SAs) installed in an IOAM enclosure
° Ethernet 4-port ServerNet adapters (E4SAs), Fast Ethernet ServerNet adapters (FESAs), or Gigabit Ethernet ServerNet adapters (GESAs)

Fault-Tolerant Configuration
This illustration shows a fault-tolerant LAN configuration with two maintenance switches:

[Figure: Fault-tolerant dedicated service LAN with primary and backup system consoles, two modems to the remote service provider, an optional DHCP/DNS server, optional connections (one or two) to a secure operations LAN, and maintenance switches 1 and 2 connected to the maintenance PIC Ethernet ports (slot 1, port 3)]
IP Addresses

Integrity NonStop NS-series servers require Internet protocol (IP) addresses for these components that are connected to the dedicated service LAN:

• ServerNet switch boards in the p-switch
• ServerNet switch boards in the IOAM enclosure
• FESAs, G4SAs, E4SAs, and GESAs
• Maintenance switch
• System consoles
• OSM Low-Level Link
• OSM Service Connection
• UPS (optional)
• Fibre Channel to SCSI converter (optional)

These components have preconfigured IP addresses.
Component IP addresses, with location (group.module.slot), default IP address, and the tool that uses each address (page 2 of 2):

• Primary system console (rack-mounted or standalone): location N/A; default IP address 192.231.36.1; used by the OSM Low-Level Link
• Backup system console (rack-mounted only): location N/A; default IP address 192.231.36.4
• Up to two additional system consoles (rack-mounted only): location N/A; default IP addresses 192.231.36.5 and 192.231.36.6
• UPS (rack-mounted only): location Rack 01; used by the OSM Service Connection and OSM Notification Director
Static IP Addresses

Static IP addresses must be configured manually for each component. If a DHCP server already exists on the dedicated service LAN:

• Configure the static IP addresses to be in the same subnet as the dynamic IP addresses assigned by that server.
• If IOMF 2 CRUs are connected to the LAN, the static addresses for the IOMF 2 CRUs must be in the same range as each other and as the DHCP server.
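The same-subnet rule can be verified with Python's standard ipaddress module. A sketch, assuming for illustration a /24 service LAN consistent with the 192.231.36.x defaults listed earlier; the subnet choice and addresses are assumptions, not documented requirements:

    import ipaddress

    # Hypothetical dedicated service LAN subnet.
    SERVICE_LAN = ipaddress.ip_network("192.231.36.0/24")

    def in_service_subnet(addr: str) -> bool:
        """True if a proposed static IP address falls inside the service LAN."""
        return ipaddress.ip_address(addr) in SERVICE_LAN

    print(in_service_subnet("192.231.36.50"))  # True
    print(in_service_subnet("10.1.2.3"))       # False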
Component IP addresses (page 2 of 2):

• IOAM enclosures: Dynamic
• System consoles: Dynamic
• Fibre Channel to SCSI router (model M8201R) (optional): For IP address information, see the M8201R Fibre Channel to SCSI Router Installation and User's Guide.
System-Up Dedicated Service LAN

When the system is up and the OS is running, the ME connects to the NonStop NS-series system's dedicated service LAN using one of the PIFs on each of two G4SAs. This connection enables OSM Service Connection and OSM Notification Director communication for maintenance in a running system.
Dedicated Service LAN Links With One IOAM Enclosure

This illustration shows the dedicated service LAN cables connected to the G4SAs in slot 5 of both modules of an IOAM enclosure and to the maintenance switch:

[Figure: Maintenance switch cabled to the G4SA Ethernet PIF connectors (D, C, B, A) in modules 2 and 3 of the IOAM enclosure, group 110 (VST340)]
Dedicated Service LAN Links to Two IOAM Enclosures

This illustration shows dedicated service LAN cables connected to G4SAs in two IOAM enclosures and to the maintenance switch:

[Figure: Maintenance switch cabled to G4SAs in modules 2 and 3 of IOAM enclosures group 110 and group 111 (VST341)]
Dedicated Service LAN Links With IOAM Enclosure and NonStop S-Series I/O Enclosure

This illustration shows dedicated service LAN cables connected to a G4SA in an IOAM enclosure, to at least one NonStop S-series Ethernet adapter (E4SA, FESA, or GESA) in a NonStop S-series I/O enclosure (module 12 in this example), and to the maintenance switch:

[Figure: Maintenance switch cabled to a G4SA in the IOAM enclosure and to an Ethernet adapter in NonStop S-series I/O enclosure module 12]
Dedicated Service LAN Links With NonStop S-Series I/O Enclosure

This illustration shows dedicated service LAN cables connected to two NonStop S-series Ethernet adapters (E4SA, FESA, or GESA) in a NonStop S-series I/O enclosure (module 12 in this example) and to the maintenance switch:

[Figure: Maintenance switch cabled to two Ethernet adapters in NonStop S-series I/O enclosure module 12 (VST343)]
Initial Configuration for a Dedicated Service LAN

New systems are shipped with an initial set of IP addresses configured. For a listing of these initial IP addresses, see IP Addresses on page B-12. Factory-default IP addresses for the G4SA and E4SA adapters are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual.
Control, Configuration, and Maintenance Tools OSM OSM OSM client-based components are installed on new system console shipments and also delivered by an OSM installer on the HP NonStop System Console (NSC) Installer CD. The NSC CD also delivers all other client software required for managing and servicing NonStop NS-series servers. Installation instructions are included in the NonStop System Console Installer Guide.
AC Power Monitoring

Integrity NonStop NS-series servers require the optional HP model R5500 XR UPS (with one or two ERMs for additional battery power) or a user-supplied UPS installed in each modular cabinet, or a user-supplied site UPS, to support system operation through power transients or to allow an orderly system shutdown during a power failure.
AC Power-Fail States

These states occur when a power failure occurs and an optional HP model R5500 XR UPS is installed in each cabinet within the system:

System State   Description
NSK_RUNNING    NonStop operating system is running normally.
RIDE_THRU      OSM has detected a power failure and begins timing the outage. AC power returning terminates RIDE_THRU and puts the operating system back into the NSK_RUNNING state.
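The two documented states and their transitions can be modeled as a tiny state machine. This sketch is illustrative only and covers just the NSK_RUNNING and RIDE_THRU behavior described above; OSM's actual handling includes further states not reproduced here:

    # Transitions for the two power-fail states described above.
    TRANSITIONS = {
        ("NSK_RUNNING", "power_fail"): "RIDE_THRU",   # OSM starts timing the outage
        ("RIDE_THRU", "power_restored"): "NSK_RUNNING",
    }

    def next_state(state: str, event: str) -> str:
        """Return the state after an AC power event; unknown events keep state."""
        return TRANSITIONS.get((state, event), state)

    state = "NSK_RUNNING"
    state = next_state(state, "power_fail")      # -> RIDE_THRU
    state = next_state(state, "power_restored")  # -> NSK_RUNNING
    print(state)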
C Guide to Integrity NonStop NS-Series Server Manuals

These manuals support the Integrity NonStop NS-series systems.

Category: Reference
• Provide information about the manuals, the RVUs, and hardware that support NonStop NS-series servers: NonStop Systems Introduction for H-Series RVUs
• Describe how to prepare for changes to software or hardware configurations: Managing Software Changes
• Describe how to install, configure, and upgrade components and systems: H06.
Authorized service providers can also order the NTL Support and Service Library CD:

• HP employees: Subscribe at World on a Workbench (WOW). Subscribers automatically receive CD updates. Access the WOW order form at http://hps.knowledgemanagement.hp.com/wow/order.asp.
• HP Authorized Service Providers and Channel Partners: Send an inquiry to pubs.comments@hp.com.
Safety and Compliance

This section contains three types of required safety and compliance statements:

• Regulatory compliance
• Waste Electrical and Electronic Equipment (WEEE)
• Safety

Regulatory Compliance Statements

The following regulatory compliance statements apply to the products documented by this manual.

FCC Compliance

This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules.
Korea MIC Compliance

Taiwan (BSMI) Compliance

Japan (VCCI) Compliance

This is a Class A product based on the standard of the Voluntary Control Council for Interference by Information Technology Equipment (VCCI). If this equipment is used in a domestic environment, radio disturbance may occur, in which case the user may be required to take corrective actions.
Safety and Compliance Regulatory Compliance Statements European Union Notice Products with the CE Marking comply with both the EMC Directive (89/336/EEC) and the Low Voltage Directive (73/23/EEC) issued by the Commission of the European Community.
Safety and Compliance SAFETY CAUTION SAFETY CAUTION The following icon or caution statements may be placed on equipment to indicate the presence of potentially hazardous conditions: DUAL POWER CORDS CAUTION: “THIS UNIT HAS MORE THAN ONE POWER SUPPLY CORD. DISCONNECT ALL POWER SUPPLY CORDS TO COMPLETELY REMOVE POWER FROM THIS UNIT." "ATTENTION: CET APPAREIL COMPORTE PLUS D'UN CORDON D'ALIMENTATION. DÉBRANCHER TOUS LES CORDONS D'ALIMENTATION AFIN DE COUPER COMPLÈTEMENT L'ALIMENTATION DE CET ÉQUIPEMENT".
Safety and Compliance Waste Electrical and Electronic Equipment (WEEE) HIGH LEAKAGE CURRENT To reduce the risk of electric shock due to high leakage currents, a reliable grounded (earthed) connection should be checked before servicing the power distribution unit (PDU).
Index A AC current calculations 3-13 AC power 200 to 240 V ac single phase 32A RMS 5-6 200 to 240 V ac single phase 40A RMS 5-5 208 V ac 3-phase delta 24A RMS 3-3, 5-4 380 to 415 V ac 3-phase Wye 16A RMS 5-5 enclosure input specifications 3-5 feed, top or bottom 1-1 input 3-2 power-fail monitoring B-23 power-fail states B-24 unstrapped PDU 6-37, 8-24 AC power feed 5-7 bottom of cabinet 5-7 top of cabinet 5-8 with cabinet UPS 5-9, 5-10 air conditioning 2-4 air filters 2-6 B branch circuit 3-5 C cabinet 5-3
Index D configuration examples 7-1 configuration, factory-installed hardware documentation 4-25 CONFTEXT file processor types specified in 3-1 controls, NonStop Blade Element front panel 5-13 cooling assessment 2-5 D daisy-chain disk configuration recommendations 6-25 dedicated service LAN B-9 default disk drive locations 6-23 default startup characteristics 4-21, 8-12 dimensions enclosures 3-9 modular cabinet 3-8 service clearances 3-8 disk drive configuration recommendations 6-24, 8-23 display IOAM swi
Index G failure recovery 4-8, 8-5 fan NonStop Blade Element 5-11 IOAM enclosure 5-20 p-switch 5-16 FCSA to FCDM cabling 6-26 FCSA to tape cabling 6-16 FCSA, configuration recommendations 6-24, 8-23 FC-AL configuration recommendations 6-24 fiber-optic cable specifications 6-6 fiber-optic cables 6-4 Fibre Channel arbitrated loop (FCAL) 5-25, 6-22 Fibre Channel device considerations 6-23 Fibre Channel devices 6-20 Fibre Channel disk module 5-25, 6-20 Fibre Channel disk module, overview 8-15 Fibre Channel Serv
Index L input power 3-2 inrush current 2-3 Integrity NonStop NS14000 server 8-1 Integrity NonStop NS16000 server 4-1 internal cable product IDs A-1 internal interconnect cabling 6-6 IOAM configuration considerations 6-20 enclosure 5-20 FCSA 5-22 FRUs 5-20 G4SA 5-24 ME firmware B-8 ServerNet pathways 4-11, 4-12, 8-8 IOAM enclosure 1-3 IOMF 2 CRU 5-39 IP addresses components connected to LAN B-12 dynamic B-14 static B-14 I/O connectivity 1-3 I/O functional element B-7 I/O interface board, NonStop Blade Elem
Index N N naming conventions 6-36, 8-24 NonStop advanced architecture 4-2, 4-19, 8-4, 8-11 NonStop Blade Element 4-2, 5-11, 5-30 NonStop S-series I/O enclosure 1-4 NonStop S-series I/O enclosures 5-37 overview 8-15 NS14000 server 8-1 NS16000 server 4-1 NSAA 4-2, 8-4 logical processors 4-2 terms 4-4 O operating system load paths 4-21, 8-12 operational space 2-7 optic adapter LSU 5-15 NonStop Blade Element 5-11 NonStop Blade Element J connectors 5-12 OSM B-2, B-22 Overview cables 8-15 Fibre Channel disk mo
Index Q p-switch cabling to IOAM enclosure 6-14 cabling to NonStop S-series I/O enclosure 6-17 description 5-16 display 5-17 FRUs 5-17 functions 5-17 ME firmware B-7 Q quad MMF PIC 5-17 R R5500 XR UPS 5-27 rack 5-30 rack offset 5-30, 5-31 raised flooring 2-5 receive and unpack 2-6 receptacles, PDU 5-10 recovery, failure 4-8, 8-5 reintegration board 5-11 reintegration, memory 4-8, 8-5 related manuals NonStop NS-series server C-1 software migration 4-24 rendezvous 4-8 processor synchronization 4-8 restric
Index T T tech doc, factory installed hardware 4-25 terminology 5-30 triple modular redundant (TMR, triplex) processor 4-3, 4-7 U U height, enclosures 3-8 uninterruptible power supply (UPS) 2-3, 5-27 UPS HP R5500 XR 2-3, 5-27 user-supplied rackmounted 2-4 user-supplied site 2-4 UPS, HP R5500 XR 3-7 W weight calculation 2-5, 3-10 worksheet heat calculation 3-11 weight calculation 3-10 Z zinc, cadmium, or tin particulates 2-6 Special Characters $SYSTEM disk locations 4-21, 8-12