HP Integrity NonStop NS-Series Planning Guide

Abstract
This guide describes the HP Integrity NonStop™ NS-series system hardware and provides examples of system configurations to assist in planning for installation of a new system. It also provides a guide to other Integrity NonStop NS-series manuals.

Product Version: N.A.
Supported Release Version Updates (RVUs): This publication supports H06.04 and all subsequent H-series RVUs until otherwise indicated by its replacement publication.
Document History
Part Number   Product Version   Published
529567-001    N.A.              February 2005
529567-002    N.A.              May 2005
529567-003    N.A.              June 2005
529567-004    N.A.              July 2005
529567-005    N.A.
Index

What’s New in This Manual vii
  Manual Information vii
  New and Changed Information vii
About This Manual ix
  Who Should Use This Guide ix
  What’s in This Guide ix
  Where to Get More Information x
  Notation Conventions x
1.
  Dust and Pollution Control 2-5
  Zinc Particulates 2-5
  Receiving and Unpacking Space 2-5
  Operational Space 2-6
3. System Installation Specifications
  Memory Reintegration 4-8
  Failure Recovery for Duplex Processor 4-8
  Failure Recovery for Triplex Processor 4-8
  ServerNet Fabric I/O 4-9
  Simplified ServerNet System Diagram 4-10
  P-Switch ServerNet Pathways 4-10
  IOAM Enclosure ServerNet Pathways 4-11
  Example System ServerNet Pathways 4-12
  System Architecture 4-17
  Modular Hardware 4-18
  NonStop S-Series I/O Hardware 4-18
  System Models 4-18
  Default Startup Characteristics 4-18
  Migration Considerations 4-20
  Migrating Applicati
5. Modular System Hardware
  Optional UPS and ERM 5-21
  System Console 5-22
  Enterprise Storage System 5-23
  Component Location and Identification 5-24
  Terminology 5-24
  Rack and Offset Physical Location 5-25
  NonStop Blade Element Group-Module-Slot Numbering 5-26
  LSU Group-Module-Slot Numbering 5-27
  Processor Switch Group-Module-Slot Numbering 5-28
  IOAM Enclosure Group-Module-Slot Numbering 5-29
  Fibre Channel Disk Module Group-Module-Slot Numbering 5-30
  NonStop S-Series I/O Enclosures 5-31
  IOMF
6. System Configuration Guidelines
  Example IOAM and Fibre Channel Disk Module Configurations 6-23
  G4SAs to Networks 6-31
  Default Naming Conventions 6-32
  PDU Strapping Configurations 6-34
7. Example Configurations
What’s New in This Manual

Manual Information
HP Integrity NonStop NS-Series Planning Guide
New and Changed Information
the system. Non-essential information or information replicated in other NTL-resident documentation has been removed from the combined guide. Additional changes include terminology updates and clarification of system configurations.
About This Manual

Who Should Use This Guide
This guide is written for those responsible for planning the installation, configuration, and maintenance of the server and the software environment at a particular site. Appropriate personnel must have completed HP training courses on system support for Integrity NonStop NS-series servers.

Note. Integrity NonStop NS-series and NonStop S-series refer to hardware systems.
Where to Get More Information
For information about Integrity NonStop NS-series hardware, software, and operations, refer to Appendix C, Guide to Integrity NonStop NS-Series Server Manuals.

Notation Conventions
Hypertext Links. Blue underline is used to indicate a hypertext link within text. By clicking a passage of text with a blue underline, you are taken to the location described.
1 System Hardware Overview Integrity NonStop NS-series servers use the NonStop advanced architecture (NSAA), which includes a number of duplex or triplex NonStop Blade Elements plus various combinations of hardware enclosures. These enclosures are installed in modular cabinets equipped with 42U-high, 19-inch racks.
Hardware Enclosures and Configurations
This figure shows an example modular cabinet with a duplex processor and hardware for a complete system (rear view).
Uninterruptible Power Supply (UPS)
Because of the large number of possible configurations, calculate the total power consumption, heat dissipation, and weight of each modular cabinet based on the hardware configuration that you order from HP. For site preparation specifications of the modular cabinet and the individual enclosures, see Section 3, System Installation Specifications.
NonStop S-Series I/O Enclosure
NonStop S-series I/O enclosures equipped with model 1980 I/O multifunction 2 customer replaceable units (IOMF 2 CRUs) can be connected to the NonStop NS-series server via fiber-optic ServerNet fabrics and the processor switch (p-switch).
2 Installation Facility Guidelines
This section provides guidelines for preparing the installation site for Integrity NonStop NS-series systems:

Topic                                        Page
Modular Cabinet Power and I/O Cable Entry    2-1
Emergency Power-Off Switches                 2-1
Electrical Power and Grounding Quality       2-2
Uninterruptible Power Supply (UPS)           2-3
Cooling and Humidity Control                 2-4
Weight                                       2-4
Flooring                                     2-5
Dust and Pollution Control                   2-5
Zinc Particulates                            2-5
Receiving and Unpacking Space                2-5
Operational Space                            2-6
Mod
EPO Requirement for HP R5500 XR UPS
The rack-mounted HP R5500 XR UPS that can be optionally installed in a modular cabinet contains batteries and has an EPO circuit. Consult your HP site preparation specialist or electrical engineer regarding requirements for site EPO switches or relays.
For steps to take to ensure proper power for the servers, consult your HP site preparation specialist or power engineer.

Grounding Systems
The site building must provide a power distribution safety ground/protective earth for each AC service entrance to all NonStop server equipment. This safety grounding system must comply with local codes and any other applicable regulations for the installation locale.
Cooling and Humidity Control
Do not rely on an intuitive approach to cooling design, or simply on achieving an energy balance—that is, summing the total power dissipation from all the hardware and sizing a comparable air-conditioning capacity. Today’s high-performance servers use semiconductors that integrate multiple functions on a single chip with very high power densities.
Flooring
Integrity NonStop NS-series servers can be installed either on the site’s floor with the cables entering from above the equipment or on raised flooring with power and I/O cables entering from underneath. Because cooling airflow through each enclosure in the modular cabinets is front-to-back, raised flooring is not required for system cooling.
personnel are present to remove each cabinet from its shipping pallet and to safely move it to the installation site.

WARNING. A fully populated cabinet is unstable when moved down the unloading ramp from its shipping pallet. Arrange for enough personnel to stabilize each cabinet during removal from the pallet and to prevent the cabinet from falling. A falling cabinet can cause serious or fatal personal injury.
3 System Installation Specifications
This section provides the specifications necessary for planning the system installation site:

Topic                                                   Page
Modular Cabinet AC Input Power                          3-1
Dimensions and Weights                                  3-9
Environmental Specifications                            3-9
Calculating Specifications for Enclosure Combinations   3-11

Note. All specifications provided in this section assume that each enclosure in the modular cabinet is fully populated; for example, a NonStop Blade Element with four processors and maximum memory.
North America and Japan: 208 V AC PDU Power Cords
• Three-phase delta, four-wire with insulated ground conductor, 60A (or optional 30A) RMS
• AC power cord (two supplied per modular cabinet): length 7 feet (183 centimeters)
• 60A AC power cable plug (one per power cord): four-wire, three-phase
  ° HP Product ID: M8950-4
  ° HP part number: 527993
  ° Manufacturer: Hubbell
  ° HP-supplied plug manufacturer number: HBL460P9W
  °
  ° HP-supplied plug manufacturer number: HBL2811
  ° Required customer-supplied receptacle manufacturer number: HBL2810 (wall-mount receptacle), HBL2813 (connector body for cable), or equivalent receptacle

Other International: 200 to 250 V AC PDU Power Cords
• Single-phase, three-wire with insulated ground conductor, 63A (or optional 30A) RMS
• AC power cord (two supplied per modular cabinet): length 7 feet (1
Branch Circuits and Circuit Breakers
Modular cabinets for the Integrity NonStop NS-series system contain two PDUs. Each of the two PDUs requires a separate branch circuit with these ratings:

Region                           Volts     Amps   Amps (optional)
North America and Japan          208       60     30
Europe, Middle East, and Africa  230/400   63     30
Other International              200-250   63     30

Caution.
Enclosure Power Loads
enclosures installed. For examples of calculating the power and current load for various enclosure combinations, refer to Calculating Specifications for Enclosure Combinations on page 3-11.

In normal operation, the AC power load is split equally between the two PDUs in the modular cabinet. However, if one of the two AC power feeds fails, the remaining AC power feed and PDU must carry the power for all enclosures in that cabinet.
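The split-versus-failover rule above can be sketched as a simple calculation. This is an illustrative sketch only; the per-enclosure volt-ampere figures below are hypothetical placeholders, not values from this guide.

```python
# Sketch: per-feed load in normal operation versus single-feed failure.
# Enclosure VA ratings are assumed for illustration only; use the figures
# for the hardware configuration you actually order from HP.

def va_per_feed(enclosure_va, feeds_powered=2):
    """Volt-amps each powered AC feed must carry for one cabinet."""
    return sum(enclosure_va) / feeds_powered

cabinet = [710, 250, 220]  # assumed VA loads for three enclosures

normal = va_per_feed(cabinet, feeds_powered=2)    # load split across both PDUs
failover = va_per_feed(cabinet, feeds_powered=1)  # one AC feed has failed

# The surviving PDU must carry the full cabinet load:
assert failover == 2 * normal
```

This is why the one-feed-powered columns in the load tables later in this section show higher volt-amp figures than the both-feeds-powered columns.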
Model R5500 XR Integrated UPS

Version                  Operating Voltage Settings   Power Out (VA/Watts)                    Input Plug   Branch Circuit
North America and Japan  200/208*, 220, 230, 240      5000/4500                               L6-30P       Dedicated 30 Amp
Other International      200, 230*, 240               6000/5400 (5000/4500 if set to 200/208) IEC-309      Dedicated 32 Amp
* Factory-default setting

For complete information and specifications, refer to the HP UPS R5500 XR Models User Guide (HP part num
Service Clearances for the Modular Cabinet
Aisles: 6 feet (182.9 centimeters)
Front: 3 feet (91.4 centimeters)
Rear: 3 feet (91.
Modular Cabinet Physical Specifications

Item                   Height (in. / cm)   Width (in. / cm)   Depth (in. / cm)
Modular cabinet        78.5 / 199.4        23.5 / 59.7        44.0 / 111.8
Rack                   78.5 / 199.4        23.5 / 59.7        40.0 / 101.9
Front door             78.5 / 199.4        23.5 / 59.7        3.0 / 7.6
Left-rear door         78.5 / 199.4        11.0 / 27.9        1.0 / 2.5
Right-rear door        78.5 / 199.4        12.0 / 30.5        1.0 / 2.5
Shipping (palletized)  83.5 / 212.0        39.0 / 99.0        48.0 / 121.
Modular Cabinet and Enclosure Weights With Worksheet
The total weight of each modular cabinet is the sum of the weights of the cabinet plus each enclosure installed in it. Use this worksheet to determine the total weight:

Enclosure Type         Number of Enclosures   Weight (lbs / kg)   Total
Modular cabinet*                              370 / 167.8
NonStop Blade Element                         112 / 50.8
Processor switch                              70 / 32.8
LSU                                           96 / 43.5
IOAM                                          200 / 90.
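The worksheet arithmetic above can be sketched as follows. The per-unit weights are the worksheet values from this page; the example enclosure mix is hypothetical.

```python
# Sketch of the weight worksheet: total cabinet weight is the cabinet itself
# plus the weight of each installed enclosure (per-unit weights in lbs,
# taken from the worksheet above).

WEIGHT_LBS = {
    "NonStop Blade Element": 112,
    "Processor switch": 70,
    "LSU": 96,
    "IOAM": 200,
}
CABINET_LBS = 370  # empty modular cabinet

def total_cabinet_weight(counts):
    """Total weight in pounds for the given enclosure counts."""
    return CABINET_LBS + sum(WEIGHT_LBS[k] * n for k, n in counts.items())

# Example mix: 2 Blade Elements, 1 LSU, 1 processor switch
assert total_cabinet_weight(
    {"NonStop Blade Element": 2, "LSU": 1, "Processor switch": 1}) == 760
```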
Heat Dissipation Specifications and Worksheet

Unit                                 Heat (Btu/hr, single AC line powered)   Heat (Btu/hr, both AC lines powered)
4-processor NonStop Blade Element:
  16 GB memory boards                2320                                    2594
  32 GB memory boards                2423                                    2662
LSU (with four LSUs)                 512                                     750
Processor switch(1)                  546                                     682
IOAM(2)                              1433                                    1706
Disk drive(3)                        751                                     956
Maintenance switch (Ethernet)(4)     171                                     -
Rackmount console system unit        1194                                    -
Key
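Summing the table's per-unit heat figures and converting to refrigeration tons gives a first estimate of cooling capacity. The 12,000 Btu/hr-per-ton conversion is the standard definition; the example enclosure mix below is hypothetical, and (as the Cooling and Humidity Control guidelines warn) an energy-balance total alone does not replace proper cooling design.

```python
# Sketch: sum the per-unit heat loads (Btu/hr, both AC lines powered,
# figures from the table above) and convert to refrigeration tons.

BTU_PER_TON = 12_000  # 1 ton of refrigeration = 12,000 Btu/hr (standard)

HEAT_BTU_HR = {                 # from the table above, both AC lines powered
    "Blade Element (16 GB)": 2594,
    "LSU": 750,
    "Processor switch": 682,
    "IOAM": 1706,
    "Disk drive": 956,
}

def cooling_load_tons(counts):
    """Total heat load in refrigeration tons for the given unit counts."""
    total_btu = sum(HEAT_BTU_HR[k] * n for k, n in counts.items())
    return total_btu / BTU_PER_TON

tons = cooling_load_tons({"Blade Element (16 GB)": 2, "LSU": 1,
                          "Processor switch": 2, "IOAM": 1, "Disk drive": 4})
assert round(tons, 2) == 1.07  # 12,832 Btu/hr for this hypothetical mix
```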
Nonoperating Temperature, Humidity, and Altitude
• Temperature:
  ° Up to 72-hour storage: -40° to 150° F (-40° to 66° C)
  ° Up to 6-month storage: -20° to 131° F (-29° to 55° C)
  ° Reasonable rate of change with noncondensing relative humidity during the transition from warm to cold
• Relative humidity: 10% to 80%, noncondensing
• Altitude: 0 to 40,000 feet (0 to 12,192 meters)

Cooling Airflow Direction
Each enclosure in
This duplex configuration has 16 logical processors with two IOAM enclosures, 14 Fibre Channel disk modules, and three cabinets. The load for cabinet one is:

Component               Quantity   Height (U)   Weight (lbs / kg)   VA per AC feed, one feed powered   VA per AC feed, both feeds powered   Heat (Btu)
NonStop Blade Element   3          15           336 / 152.4         2130                               1170                                 7986
LSU                     1          4            96 / 43.
The load for cabinet three is:

Component                  Quantity   Height (U)   Weight (lbs / kg)   VA per AC feed, one feed powered   VA per AC feed, both feeds powered   Heat (Btu)
NonStop Blade Element      2          10           224 / 101.6         1420                               780                                  5324
LSU                        -          -            -                   -                                  -                                    -
Processor switch           -          -            -                   -                                  -                                    -
IOAM enclosure             1          11           200 / 90.7          420                                250                                  1706
Fibre Channel disk module  7          21           546 / 247.8         1540                               980                                  6692
Console                    -          -            -                   -                                  -                                    -
Maint.
4 Integrity NonStop NS-Series System Description
This section describes the Integrity NonStop NS-series systems and covers these topics:

Topic                                     Page
NonStop System Primer                     4-1
NonStop Advanced Architecture             4-2
NonStop Blade Complex                     4-2
Processor Element                         4-4
Duplex Processor                          4-5
Triplex Processor                         4-6
Processor Synchronization and Rendezvous  4-7
Memory Reintegration                      4-8
Failure Recovery for Duplex Processor     4-8
Failure Recovery for Triplex Processor    4-8
ServerNet Fabric I/O                      4-9
System Ar
However, contemporary high-speed microprocessors make lock-step processing no longer practical because of:
• Variable-frequency processor clocks with multiple clock domains
• Higher transient error rates than in earlier, simpler microprocessor designs
• Chips with multiple processor cores

NonStop Advanced Architecture
Integrity NonStop NS-series systems employ a unique method for achieving fault tolerance in a clustered proces
that all NonStop Blade Elements agree on the result before the data is passed to the ServerNet fabrics. A processor with two NonStop Blade Elements (NSBEs) and their associated LSUs makes up the dual modular redundant (DMR) NonStop Blade Complex, which is also referred to as a duplex processor.
Complexes for a total of 16 processors. Processors communicate with each other and with the system I/O over dual ServerNet fabrics. In the term ServerNet fabric, the word fabric is significant because it contrasts with the concept of a bus. A bus provides a single, fixed communications path between start and end points.
• I/O interface with maintenance logic shared with the other PEs in the NonStop Blade Element
• Interface for fiber-optic I/O communications with the corresponding LSU
• Memory reintegration logic and fiber-optic links shared with the other PEs in the NonStop Blade Element and used for memory rendezvous between the NonStop Blade Elements

This diagram provides an overview of the processor element architecture:
[Figure: processor element architecture, showing the fiber-optic link to the LSUs]
A duplex processor includes these elements:
[Figure: duplex NonStop Blade Complex 0 — NSBE A (PEs A0-A3) and NSBE B (PEs B0-B3) connect through fiber-optic cables to LSUs 0-3 in the LSU enclosure; each LSU's vote logic feeds X and Y optic I/O with ServerNet links to the processor switches. Logical processor 0 includes PEs A0 and B0.]
A triplex processor includes these elements:
[Figure: triplex NonStop Blade Complex 0 — NSBEs A, B, and C (PEs A0-A3, B0-B3, C0-C3) connect through fiber-optic cables to LSUs 0-3 in the LSU enclosure; each LSU's vote logic feeds X and Y optic I/O with ServerNet links to the processor switches. LSU 0 serves PEs A0, B0, and C0.]
  ° Allow each PE to individually and deterministically respond to asynchronous incoming interrupts and then to respond collectively as a single logical processor.
  ° Exchange software state information when performing operations that are distributed across PEs; for example, memory reintegration, error handling, and memory scrubbing.
• Compare output from each PE. If identical, the output is transmitted over the ServerNet fabrics.
restores the system to triplex operation. If failure of an LSU takes down its associated logical processor, the operating system activates the backup processes in other logical processors. The system runs user applications as if no failure occurred. As with a duplex processor, the errant processor is reset and then synchronized with the running processors.
Simplified ServerNet System Diagram
This simplified diagram shows the ServerNet architecture in the Integrity NonStop NS-series system:
[Figure: the X-fabric and Y-fabric p-switches link the NonStop Blade Complexes (NSBCs) to ServerNet adapters in the I/O adapter module and in a NonStop S-series I/O enclosure, which connect to disks, Ethernet, and a router. ServerNet links are shown separately for the X fabric and the Y fabric.]
ServerNet X fabric and the other the ServerNet Y fabric. In this drawing, the nomenclature PIC n means the PIC in slot n. For example, PIC 4 is the PIC in slot 4.
[Figure: p-switch PICs — slots 4 through 9 each provide either four optic lines to an IOAM ServerNet switch board or one optic line to an IOMF 2 CRU in a NonStop S-series I/O enclosure; PIC 2 connects to the cluster switch, PIC 3 to the other p-switch, and PIC 1 carries maintenance connections through the routers.]
Example System ServerNet Pathways
This drawing shows the redundant routing and connection of the ServerNet X and Y fabrics within a simple example system. This example system includes:
• Four processors with their requisite four LSU optics adapters
• One IOAM enclosure connected to the PIC in slot 4 of each p-switch, making the IOAM enclosure group 110.
The IOAM enclosures can reside in the same or different cabinets, with MMF fiber-optic cables carrying the ServerNet communications between the IOAM enclosures and the p-switches. ServerNet routing for the Y fabric is the same through the Y-fabric peer p-switch and the ServerNet switch boards in the IOAM enclosures and the IOMF 2 CRUs in the NonStop S-series I/O enclosures.
In a second scenario, if one of the ServerNet adapters fails, only the Fibre Channel or Ethernet devices that are connected to the failed adapter are affected. The failure has no effect on the other resources on the same or the other ServerNet fabric.
This illustration shows a logical representation of a triplex 16-processor NonStop Blade Complex (NSBC) with the associated NonStop Blade Elements (NSBEs) and their Blade optics adapters (BOAs), the LSUs, and the p-switch for the X ServerNet fabric:
[Figure: NSBE groups 400 and 401 (modules 1A, 1B, and 1C), each with BOAs in slots 1 and 2, cabled through the LSUs to the X-fabric p-switch.]
This illustration shows a logical representation of a triplex 16-processor NSBC with the associated NSBEs and their BOAs, the LSUs, and the p-switch for the Y ServerNet fabric:
[Figure: the same NSBE groups 400 and 401 (modules 1A, 1B, and 1C), each with BOAs in slots 1 and 2, cabled through the LSUs to the Y-fabric p-switch.]
System Architecture
This diagram shows elements of an example Integrity NonStop NS-series system with four triplex processors:
[Figure: the X-fabric and Y-fabric processor switches connect the triplex processors to I/O adapter modules and NonStop S-series I/O enclosures; their ServerNet adapters provide Fibre Channel connections to Fibre Channel disk modules and high-speed Ethernet connections.]
Modular Hardware
Hardware for Integrity NonStop NS-series systems is implemented in modules, or enclosures, that are installed in modular cabinets. For descriptions of the modular hardware, see Section 5, Modular System Hardware.

NonStop S-Series I/O Hardware
Equipment from NonStop S-series systems can be connected to Integrity NonStop NS-series systems via fiber-optic ServerNet cables.
  ° In NonStop S-series I/O enclosure: group, module, slot 11.1.11
• Configured system load paths
• Enabled command interpreter input (CIIN) function

If the automatic system load is not successful, additional paths for loading are available in the boot task. Starting with one load path, the system load task attempts another path and keeps trying until all possible paths have been used or the system load is successful.
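The retry behavior described above can be sketched as a simple loop. This is a conceptual sketch only; the path names and the attempt() callback are hypothetical illustrations, not the actual boot-task interface.

```python
# Sketch of the load-path retry behavior: the system load task tries each
# configured load path in turn until one succeeds or every path has been
# exhausted. Path names and attempt() are hypothetical.

def system_load(load_paths, attempt):
    """Try each load path in order; return the first that succeeds, else None."""
    for path in load_paths:
        if attempt(path):
            return path
    return None

paths = ["X/FCSA-primary", "Y/FCSA-primary", "X/FCSA-mirror", "Y/FCSA-mirror"]

# Suppose the X-fabric paths fail and the first Y-fabric path succeeds:
assert system_load(paths, lambda p: p.startswith("Y")) == "Y/FCSA-primary"
# If every path fails, the system load is unsuccessful:
assert system_load(paths, lambda p: False) is None
```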
This illustration shows the system load paths.
[Figure: system load paths — processors 0 and 1 connect through the X-fabric and Y-fabric p-switches to the ServerNet switch boards of IOAM enclosure group 110, then through FCSAs to the primary and mirror Fibre Channel disk modules.]
The command interpreter input (CIIN) file is automatically invoked after the first processor is loaded.
• Source code changes required for C/C++, COBOL, and pTAL programs
• User library changes
• Changes in the application development environment, including compilers, linkers, and debuggers
• Changes in native process architecture and process environment
• Changes required for independent products:
  ° HP NonStop Server for Java
  ° HP NonStop CORBA
  ° HP NonStop Tuxedo
How to get help with migration tas
Any hardware migration should be planned as part of the overall application and software migration tasks. The next subsections refer you to the documentation for tasks involved in physically moving legacy IOAM enclosures and NonStop S-series I/O enclosures to an Integrity NonStop NS-series server.
  ° Disconnect the NonStop S-series I/O enclosure from the NonStop S-series system and connect it to the Integrity NonStop NS-series system.
  ° Change the mirror on the Integrity NonStop NS-series system to be the primary disk.
Factory-Installed Hardware Configuration Tech Memo
This document is an example only and will not be the same as the one included with your system.
[Example tech memo excerpt (rear view): U positions 32 through 42 with source entries such as "PWR CORD: 12A, C20-C13".]
ServerNet Adapter Configuration Forms
Example Tech Memo for Duplex Processor System (Sheet 2 of 2)
[Example excerpt: LSU enclosure 0, slots 20 through 23 — ServerNet X and Y fabric connections and Strand A/Strand B cabling entries.]
5 Modular System Hardware
This section describes the hardware used in Integrity NonStop NS-series systems:

Topic                                  Page
Modular Hardware Components            5-1
Component Location and Identification  5-24
NonStop S-Series I/O Enclosures        5-31

Modular Hardware Components
These hardware components can be part of an Integrity NonStop NS-series system:

Topic                               Page
Cabinets                            5-3
AC Power PDUs                       5-3
NonStop Blade Element               5-7
Logical Synchronization Unit (LSU)  5-10
Processor Switch                    5-12
I/O Adapter Module (IO
p-switch.
Cabinets
HP modular cabinets for the Integrity NonStop NS-series system are 42U high, with labels on the sides of the rack indicating the U positions.
[Figure: first 10 U of rack 1, showing NSBE A at offset 3U and NSBE B at offset 8U.]
Junction boxes for the PDUs and AC power feed cables are factory-installed and configured at either the upper or lower rear corners, depending on what is ordered for the site power feed.
This illustration shows the location of the PDUs, viewed from the top of the cabinet, when an optional UPS and ERM are also installed:
[Figure: PDU junction boxes route to the AC power source; the uninterruptible power supply (UPS) and extended runtime module (ERM) occupy the bottom of the cabinet, with the power distribution units (PDUs) running up the rear.]
Each PDU is factory-wired to distribute the phases to its receptacles.

Caution. If you are installing Integrity NonStop NS-series enclosures in a rack, balance the current load among the three phases. Using only one of the available three phases, especially for larger systems, can cause unbalanced loading and might violate applicable electrical codes. Connecting the two power plugs from an enclosure to the same phase causes failure of the hardware if that phase fails.
Modular Cabinet PDU Keepout Panel
PDU keepout applies only when the PDU junction boxes reside at the bottom of the modular cabinet. PDUs overlap the outside of the rack by 2U (3.5 inches). A PDU keepout panel is installed in the affected space:
• A 1U keepout applies when a UPS occupies the bottom position of the modular cabinet.
• A 2U keepout applies when a NonStop Blade Element occupies the bottom position of the modular cabinet.
Elements provide four processors, with 12 NonStop Blade Elements providing a full 16-processor triplex system.

Note. Integrity NonStop NS-series systems do not support duplex and triplex processors within the same system.
so forth. These IDs reference the appropriate NonStop Blade Element for proper connection of the fiber-optic cables. The optic cables provide communications between each NonStop Blade Element and the LSU, as well as between the LSU and the p-switch PICs on the X fabric and Y fabric. No requirement exists to connect cables from a particular Blade optics adapter on a NonStop Blade Element to a physically corresponding adapter on an LSU.
Front Panel Buttons

Button      Condition           Operation
Power       Power is on.        Cycle power and reset or reconfigure logic.
            Power in standby.   Remain in standby.
Hard reset  Power is on.        Send initialize interrupt to processors, but without reset or reconfiguration of logic.
Logically, the LSU:
• Implements a fault domain that affects only a single logical processor
• Has environmental sense and control (ESC) aspects managed by the logical processor
• Supports single-point system power-on

The LSU module consists of these FRUs:
• LSU logic board (accessible from the front of the LSU enclosure)
• LSU optics adapters (accessible from the rear of the LSU enclosure)
• AC power assembly (accessible from the rear
This illustration shows an example LSU configuration, viewed from the front of the enclosure, equipped with four LSU logic boards in positions 50 through 53:
[Figure: LSU enclosure (front) — the J set occupies LSU positions 50-53 for logical processors 0 through 7; the K set occupies positions 54-57 for logical processors 8 through 15. Green and amber LEDs mark each logic board position.]
Two p-switches are required, one each for the X and Y ServerNet fabrics.
  ° Environmental sense and control (ESC)
  ° Coldload of TACL and EMS windows
  ° ServerNet configuration

This illustration shows the front of the p-switch:
[Figure: p-switch front panel — PWR, SPON, and FAN indicators, a 100/10 Ethernet port, a display, and connectors 1 through 4.]
Processor Numbering
Connection of the ServerNet cables from the LSU to the PICs in p-switch slots 10 through 13 determines the number of the associated logical processor. For more information, see LSUs to Processor Switches and Processor IDs on page 6-7. This example of a triplex processor shows the ServerNet cabling to the p-switch PIC in slot 10 that defines processors 0, 1, 2, and 3.
I/O Adapter Module (IOAM) Enclosure and I/O Adapters
An IOAM provides the Integrity NonStop NS-series system with its system I/O, using Gigabit Ethernet 4-port ServerNet adapters (G4SAs) for LAN connectivity and Fibre Channel ServerNet adapters (FCSAs) for storage connectivity.
This illustration shows the front and rear of the IOAM enclosure and details:
[Figure: IOAM enclosure — IOAM modules 2 and 3 each provide adapter slots 1 through 5 and fans (slots 16 and 17); each ServerNet switch board (slot 14) has a maintenance connector (100BaseT RJ-45) and MMF LC ServerNet links from the p-switch; power supplies are at the rear.]
IOAM Enclosure Indicator LEDs

ServerNet Switch Board LED   State   Meaning
Power                        Green   Power is on; board is available for normal operation.
                             Off     Power is off.
                             Amber   A fault exists.
                             Off     Normal operation or powered off.
                             Green   Link is functional.
                             Off     Link is not functional.
LCD display                          Message as displayed.
ServerNet ports              Green   ServerNet link is functional.
                             Off     ServerNet link is not functional.
FCSAs are installed in pairs and can reside in slots 1 through 5 of either IOAM (module 2 or 3) in an IOAM enclosure. The two FCSAs of a pair can be installed one in each of the two IOAM modules in the same IOAM enclosure, or one in each of two different IOAM enclosures. The FCSA allows either a direct connection to an Enterprise Storage System (ESS) or connection through a storage area network.
• 802.3x (flow control)
• 802.3u (100 Base-T and 1000 Base-T)

For detailed information on the G4SA, see the NonStop Gigabit Ethernet 4-Port Installation and Support Guide.
The maintenance switch mounts in the 19-inch rack within a modular cabinet, but no restrictions exist for its placement. This illustration shows an example of two maintenance switches installed in the top of a cabinet, with their RJ-45 Ethernet connections.
Cabinet configurations that include an R5500 XR UPS also have one extended runtime module (ERM). An ERM is a battery module that extends the overall battery-supported system run time; a second ERM can be added for even longer run time. Adding an R5500 XR UPS to a modular cabinet in the field requires changing the PDU on the right to be compatible with the UPS. Both the UPS and the ERM are 3U high and must reside in the bottom of the cabinet.
Other system console PCs are installed outside the rack and require separate provisions or furniture to hold the PC hardware. For more information on the system console, refer to System Console on page B-1.

Enterprise Storage System

An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and the disk cache in one or more standalone cabinets.
For fault tolerance, the primary and backup paths to an ESS logical device (LDEV) must go through different Fibre Channel switches. Some storage area network procedures, such as reconfiguration, can cause the affected switches to pause. If the pause is long enough, I/O failure occurs on all paths connected to that switch. If both the primary and the backup paths are connected to the same switch, the LDEV goes down.
Term definitions (continued):
• Port: A connector to which a cable can be attached and which transmits and receives data.
• Group-Module-Slot (GMS): A notation method used by hardware and software in NonStop systems for organizing and identifying the location of certain hardware components.
• NonStop Blade Complex: A set of two or three NonStop Blade Elements, identified as A, B, or C, and their associated LSUs.
This example shows the location of NonStop Blade Element A in rack 1 at an offset of 3U and NonStop Blade Element B at an offset of 8U; the illustration shows the first 10U of the rack.
This illustration shows GMS numbering for a NonStop Blade Element enclosure (rear view): group 400 through 403, module 1 through 3, the reintegration connectors (Q, R, S, and T) in slot 80, the blade optics adapters in slots 71 through 78, and their optics connectors J0 through J7 and K0 through K7.
Processor Switch Group-Module-Slot Numbering

The p-switch is group 100; module 2 is the X ServerNet module and module 3 is the Y ServerNet module. Slot assignments:
• Slot 1: Maintenance PIC
• Slot 2: Cluster PIC
• Slot 3: Crosslink PIC
• Slots 4-9: ServerNet I/O PICs
• Slot 10: ServerNet PIC (processors 0-3)
• Slot 11: ServerNet PIC (processors 4-7)
• Slot 12: ServerNet PIC (processors 8-11)
• Slot 13: ServerNet PIC (processors 12-15)
• Slot 14: P-switch logic board
• Slots 15 and 18: Power supply A and B
• Slots 16 and 17: Fan A and B
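The slot assignments above follow a regular rule: each processor PIC in slots 10 through 13 carries four consecutive processor numbers, one per port. This sketch illustrates the mapping; the helper function is hypothetical, written only to make the numbering rule concrete:

```python
def processor_number(pic_slot: int, port: int) -> int:
    """Logical processor defined by a ServerNet cable landing on a
    p-switch processor PIC: slot 10 port 1 is processor 0, and each
    later slot carries the next four processors, one per port."""
    if not (10 <= pic_slot <= 13 and 1 <= port <= 4):
        raise ValueError("processor PICs occupy slots 10-13, ports 1-4")
    return (pic_slot - 10) * 4 + (port - 1)
```

For example, a cable on slot 11, port 3 defines processor 6, and slot 13, port 4 defines processor 15, matching the table above.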
IOAM Enclosure Group-Module-Slot Numbering

Each IOAM enclosure takes its group number from the p-switch I/O PIC slot that connects to it (PIC ports 1 through 4 in each case):
• PIC slot 4: group 110
• PIC slot 5: group 111
• PIC slot 6: group 112
• PIC slot 7: group 113
• PIC slot 8: group 114
• PIC slot 9: group 115

Within each IOAM group (110 through 115), module 2 is the X ServerNet module and module 3 is the Y ServerNet module (see the preceding table).
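The slot-to-group assignment above is linear: I/O PIC slots 4 through 9 map onto groups 110 through 115. As an illustrative sketch (the helper is an assumption, not part of any HP tooling):

```python
def ioam_group(pic_slot: int) -> int:
    """Group number of the IOAM enclosure cabled to a p-switch I/O PIC;
    slots 4-9 map linearly onto groups 110-115."""
    if not 4 <= pic_slot <= 9:
        raise ValueError("I/O PICs occupy p-switch slots 4-9")
    return 106 + pic_slot
```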
Fibre Channel Disk Module Group-Module-Slot Numbering

A Fibre Channel disk module is identified by its IOAM group (110 through 115), IOAM module (2 for the X fabric, 3 for the Y fabric), IOAM slot (1 through 5, the FCSA), F-SAC (1 or 2), and shelf (1 through 4 if daisy-chained, 1 for a single disk enclosure). FCDM slot assignments:
• Slot 0: Fibre Channel disk module
• Slots 1-14: Disk drives
• Slot 89: Transceiver A1
• Slot 90: Transceiver A2
• Slot 91: Transceiver B1
• Slot 92: Transceiver B2
• Slot 93: Left FC-AL board
• Slot 94: Right FC-AL board
• Slot 95: Left p
NonStop S-Series I/O Enclosures

Topics discussed in this subsection are:
• IOMF 2 CRU (page 5-32)
• NonStop S-Series Disk Drives and ServerNet Adapters (page 5-32)
• NonStop S-Series I/O Enclosure Group Numbers (page 5-32)

NonStop S-series I/O enclosures can be connected to Integrity NonStop NS-series systems to retain not only previously installed hardware but also data stored on disks mounted in the NonStop S-series I/O enclosures.
IOMF 2 CRU

Each p-switch (one for the X fabric and one for the Y fabric) has up to six I/O PICs, with one I/O PIC required for each IOAM enclosure in the system. Each NonStop S-series I/O enclosure uses one port on one PIC, so a maximum of 24 NonStop S-series I/O enclosures (six PICs with four ports each) can be connected to an Integrity NonStop NS-series system if no IOAM enclosure is installed.
This table shows the group number assignments for the NonStop S-series I/O enclosures. Each p-switch I/O PIC (slots 4 through 9, X and Y fabrics) has four connectors, and each connector maps to one enclosure group:
• PIC slot 4, connectors 1-4: groups 11, 12, 13, 14
• PIC slot 5, connectors 1-4: groups 21, 22, 23, 24
• PIC slot 6, connectors 1-4: groups 31, 32, 33, 34
• PIC slot 7, connectors 1-4: groups 41, 42, 43, 44
• PIC slot 8, connectors 1-4: groups 51, 52, 53, 54
• PIC slot 9, connectors 1-4: groups 61, 62, 63, 64
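The table above also reduces to a simple formula: the tens digit of the group number is the PIC slot minus 3, and the units digit is the connector number. A hypothetical helper illustrating the rule:

```python
def s_series_group(pic_slot: int, connector: int) -> int:
    """Group number of the NonStop S-series I/O enclosure attached to a
    p-switch I/O PIC (slots 4-9) connector (1-4): slot 4 connector 1
    gives group 11, slot 9 connector 4 gives group 64."""
    if not (4 <= pic_slot <= 9 and 1 <= connector <= 4):
        raise ValueError("I/O PICs occupy slots 4-9, connectors 1-4")
    return (pic_slot - 3) * 10 + connector
```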
This illustration shows the group number assignments on the p-switch: the four connectors on each PIC in slots 4 through 9 connect to I/O enclosure groups 11-14, 21-24, 31-34, 41-44, 51-54, and 61-64, respectively; the same I/O PIC ports can instead be used for connections to IOAM enclosures.
6 System Configuration Guidelines

This section provides guidelines for Integrity NonStop NS-series system configurations:
• Enclosure Locations in Cabinets (page 6-2)
• Internal ServerNet Interconnect Cabling (page 6-3)
• P-Switch to NonStop S-Series I/O Enclosure Cabling (page 6-15)
• IOAM Enclosure and Disk Storage Considerations (page 6-18)
• Fibre Channel Devices (page 6-18)
• G4SAs to Networks (page 6-31)
• Default Naming Conventions (page 6-32)
• PDU Strapping Configurations (page 6-34)

Integrity NonStop NS-series systems use a flexible modular hardware architecture.
For other example configurations, see Section 7, Example Configurations.

Enclosure Locations in Cabinets

In this table, the enclosure location refers to the U in the rack where the lower edge of the enclosure resides, such as the bottom of a NonStop Blade Element enclosure at 28U.
Enclosure or component: processor switch. Height: 3U. Required cabinet (rack) location: immediately above the LSU enclosures and NonStop Blade Elements if installed in the same cabinet; two p-switches are required.
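Because each enclosure location is stated as the rack U of its lower edge, a planned layout can be sanity-checked by computing the span of units each enclosure occupies and looking for collisions. This sketch is illustrative only; the heights used in the example reflect values from this guide (a 3U p-switch), while the offsets are made up:

```python
def occupied_units(offset_u: int, height_u: int) -> set:
    """Rack units covered by an enclosure whose lower edge sits at
    offset_u and whose height is height_u; for example, offset 28 with
    height 4 covers U28 through U31."""
    return set(range(offset_u, offset_u + height_u))

def collides(a: tuple, b: tuple) -> bool:
    """True if two (offset_u, height_u) enclosures overlap in the rack."""
    return bool(occupied_units(*a) & occupied_units(*b))
```

For example, two 3U p-switches at offsets 25U and 28U do not collide, but a second enclosure placed at 27U would overlap the first.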
Cable Labeling

Although the interconnect cables provide high-speed, low-latency communications, all of the cables look the same and are the same color, usually orange. To identify correct cable connections to factory-installed hardware, every interconnect cable has a plastic label affixed to each end. Extra sheets of preprinted labels that you can fill in are also provided.
Cable Management System

Integrity NonStop NS-series systems include the cable management system (CMS) to protect all power, fiber-optic, and CAT5 Ethernet cables within the system. The CMS maintains a 25 millimeter (1 inch) minimum bend radius for the fiber-optic cables and provides strain relief for all cables.
Fiber-optic cables use either LC or SC connectors at one or both ends; illustrations show an LC fiber-optic cable connector pair and an SC fiber cable connector pair.

Dedicated Service LAN Cables

The system also uses Category 5, unshielded twisted-pair Ethernet cables for the internal dedicated service LAN and for connections between the G4SA and the application LAN equipment.
Although a considerable cable length can exist between the modular enclosures in the system, HP recommends placing all cabinets adjacent to each other and bolting them together, with the cable length between each of the enclosures as short as possible.

Internal Cable Part Numbers

For part numbers, see Internal Cables on page A-1.
This table continues the processor ID assignments: p-switch PIC slot 11, ports 1 through 4, defines processors 4, 5, 6, and 7; slot 12, ports 1 through 4, defines processors 8, 9, 10, and 11; and slot 13, ports 1 through 4, defines processors 12, 13, 14, and 15.

The four cabling diagrams on the next pages illustrate the default configuration and connections for a triplex system processor. These diagrams are not for use in installing or cabling the system.
This figure shows example connections to ports 1 to 4 on the p-switch PIC in slot 10, which defines triplex processor numbers 0 to 3.
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 11 for triplex processor numbers 4 to 7 (slot 11, port 1: processor 4; port 2: processor 5; port 3: processor 6; port 4: processor 7).
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 12 for triplex processor numbers 8 to 11 (slot 12, port 1: processor 8; port 2: processor 9; port 3: processor 10; port 4: processor 11).
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 13 for triplex processor numbers 12 to 15 (slot 13, port 1: processor 12; port 2: processor 13; port 3: processor 14; port 4: processor 15).
ServerNet cables connected to the p-switch PICs in slots 10 through 13 come from the LSUs and processors, with the cable connection to these PICs determining the processor identification. (See LSUs to Processor Switches and Processor IDs on page 6-7.) Cables connected to the PICs in slots 4 through 9 connect to one or more IOAM enclosures or to NonStop S-series I/O enclosures equipped with IOMF 2 CRUs.
This illustration shows an example of a fault-tolerant ServerNet configuration connecting two FCSAs, one in each IOAM module, to a pair of Fibre Channel disk modules: ServerNet links run from a PIC (slot 4 shown) on each of the X-fabric and Y-fabric p-switches to the IOAM enclosure, and Fibre Channel links run from the FCSAs to the A and B I/O ports of the disk modules.
Integrity NonStop NS-series systems do not support SCSI buses or adapters to connect tape devices. However, SCSI tape devices can be connected through a T1200 Fibre Channel to SCSI converter device (model M8201) that allows connection to SCSI tape drives. For interconnect cable information and installation instructions, see the M8201 Fibre Channel to SCSI Router Installation and User's Guide.
Up to 24 NonStop S-series I/O enclosures can be connected to an Integrity NonStop NS-series system via these ServerNet links. A single fiber-optic cable provides the ServerNet link between an I/O PIC port on each of the X and Y p-switches and the I/O multifunction 2 (IOMF 2) CRUs in a NonStop S-series I/O enclosure. For cable types and lengths, see Enterprise Storage System on page 5-23.
These restrictions or requirements apply when integrating NonStop S-series I/O enclosures into an Integrity NonStop NS-series system:
• Only NonStop S-series I/O enclosures equipped with IOMF 2 CRUs can be connected to an Integrity NonStop NS-series system. The IOMF 2 CRU must have an MMF PIC installed.
° A serial cable from each SPON connector on the p-switch carries power-on signals to the NonStop S-series I/O enclosure. This unidirectional SPON cable is used only for the connection between a p-switch in an Integrity NonStop NS-series system and the IOMF 2 CRU in the NonStop S-series I/O enclosure.
For more information, see Fibre Channel ServerNet Adapter (FCSA) on page 5-18 or the Fibre Channel ServerNet Adapter Installation and Support Guide. This illustration shows an FCSA with the indicators and ports that are used and not used in Integrity NonStop NS-series systems: Fibre Channel ports 1 and 2 are used, and the Ethernet ports are not used.
Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop cables. This drawing shows the two Fibre Channel arbitrated loops implemented within the Fibre Channel disk module.
Fibre Channel Device Configuration Restrictions

To avoid creating configurations that are not fault-tolerant or do not promote high availability, these restrictions apply and are enforced by SCF:
• Primary and mirror disk drives cannot connect to the same Fibre Channel loop. Loss of the Fibre Channel loop would make both the primary volume and the mirrored volume inaccessible, so this configuration inhibits fault tolerance.
Fibre Channel Device Configuration Recommendations
• In systems with two or more cabinets, primary and mirror Fibre Channel disk modules reside in separate cabinets to prevent an application or system outage if a power outage affects one cabinet.
• With primary and mirror Fibre Channel disk modules in the same cabinet, the primary Fibre Channel disk module resides in a lower U than the mirror Fibre Channel disk module.
• If, after you connect all Fibre Channel disk modules in configurations of four FCSAs and four Fibre Channel disk modules, three Fibre Channel disk modules remain unconnected, connect them to four FCSAs. (See the example configuration in Four FCSAs, Three FCDMs, One IOAM Enclosure on page 6-30.)
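The restrictions and recommendations above lend themselves to a simple configuration check. This is an illustrative sketch only: the dictionary layout is an assumption, and the rules encoded are the ones stated in this subsection (primary and mirror never on the same Fibre Channel loop, and separate cabinets preferred in multi-cabinet systems):

```python
def check_mirrored_volume(primary: dict, mirror: dict) -> list:
    """Return a list of problems for a mirrored volume, given dicts with
    'loop' (Fibre Channel loop id) and 'cabinet' for each half."""
    problems = []
    if primary["loop"] == mirror["loop"]:
        # SCF rejects this: losing the loop loses both halves of the volume
        problems.append("primary and mirror share a Fibre Channel loop")
    if primary["cabinet"] == mirror["cabinet"]:
        # Recommendation only: a cabinet power outage affects both disks
        problems.append("primary and mirror share a cabinet")
    return problems
```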
Two FCSAs, Two FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the two FCSAs and the primary and mirror Fibre Channel disk modules.
Four FCSAs, Four FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the four FCSAs and the two sets of primary and mirror Fibre Channel disk modules.
Two FCSAs, Two FCDMs, Two IOAM Enclosures

This illustration shows example cable connections between the two FCSAs, split between two IOAM enclosures, and one set of primary and mirror Fibre Channel disk modules.
Four FCSAs, Four FCDMs, Two IOAM Enclosures

This illustration shows example cable connections between the four FCSAs, split between two IOAM enclosures, and two sets of primary and mirror Fibre Channel disk modules.
Daisy-Chain Configurations

When planning for possible use of daisy-chained disks, consider:
• Daisy-chained disks are recommended for cost-sensitive storage and for applications using low-bandwidth disk I/O.
• Daisy-chained disks are not recommended for many volumes in a large Fibre Channel loop.
Giving each mirror disk the same shelf number as its primary disk is not required, but it is recommended to simplify the physical management and identification of the disks. The illustration shows four daisy-chained FCDMs (FCDM 1 through FCDM 4) connected by fiber-optic cables through their A-side and B-side ID expanders to two FCSAs in an IOAM enclosure, with terminators on the open ends of the chain.
Four FCSAs, Three FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the four FCSAs and three Fibre Channel disk modules, with the primary and mirror drives split within each Fibre Channel disk module (FCDM 1: Primary 1 and Mirror 2; FCDM 2: Primary 2 and Mirror 3; FCDM 3: Primary 3 and Mirror 1).
This illustration shows the factory-default locations for configurations of four FCSAs and three Fibre Channel disk modules, where the primary system file disk volumes are in Fibre Channel disk module 1 (front): $SYSTEM in slot 1, $DSMSCM in slot 2, $AUDIT in slot 3, and $OSS in slot 4.
This illustration shows the G4SA, with its LC (fiber) connectors, an RJ-45 connector (10/100/1000 Mbps), and an RJ-45 connector (10/100 Mbps).
Default Naming Conventions

Users can name their resources at will and use the appropriate management applications and tools to find out where a resource is. However, default naming conventions for certain resources simplify creation of the initial configuration files and automatic generation of the names of the modular resources.
The OSM Service Connection provides the location of a resource by adding an identifying suffix to the names of all the system resources. Other interfaces, such as SCF, also provide means to locate named resources.

PDU Strapping Configurations

PDUs are factory-strapped for the type and voltage of AC power at the intended installation site for the system.
For reference, this illustration shows the PDU unstrapped, with its numbered power outlets. Wire color code, North America: L = line pin, black insulation; N = neutral pin, white insulation; G = ground pin, green insulation. Wire color code, EU harmonized: L = line pin, brown insulation; N = neutral pin, blue insulation; G = ground pin, green-and-yellow insulation.
This illustration shows the PDU strapped for 208 V ac North America delta power.
This illustration shows the PDU strapped for 250 V ac 3-phase power.
This illustration shows the PDU strapped for 250 V ac single-phase power.
7 Example Configurations

This section shows example configurations of the Integrity NonStop NS-series hardware that can be installed in a modular cabinet. A number of other configurations are also possible because of the flexibility inherent in the NonStop advanced architecture and ServerNet.

Note. Hardware configuration drawings in this section represent the physical arrangement of the modular enclosures but do not show the location of the PDU junction boxes.
These are the minimum, typical, and maximum counts for each enclosure or component; the counts are the same for duplex and triplex processors:
• IOAM enclosure: minimum 1, typical 2, maximum 6
• FCSA: minimum 2, with up to 60 in a mixture set by disks and I/O
• G4SA: up to 20 in a mixture set by disks and I/O
• Fibre Channel disk module: minimum 2, typical 4, maximum 8
• Fibre Channel disk drives: minimum 14, typical 56, maximum 112

Typical Configurations
Duplex 8-Processor System

This duplex configuration has a maximum of eight logical processors, with one IOAM enclosure and up to 12 Fibre Channel disk modules (four Fibre Channel disk modules in a typical system). The illustration shows the 42U cabinet layout, including available space for additional FCDMs, the IOAM enclosure, the two p-switches, the console, and the LSU.
Duplex 16-Processor System, Three Cabinets

This duplex configuration has a maximum of 16 logical processors, with two IOAM enclosures and up to 14 Fibre Channel disk modules (one IOAM enclosure and eight Fibre Channel disk modules in a typical system). The illustration shows the 42U cabinet layouts.
Duplex 16-Processor System, Two Cabinets

This duplex configuration has a maximum of 16 logical processors, with one IOAM enclosure and four Fibre Channel disk modules. The illustration shows the 42U cabinet layouts.
Triplex 8-Processor System

This triplex configuration has a maximum of eight logical processors, with one IOAM enclosure and ten Fibre Channel disk modules (four Fibre Channel disk modules in a typical system). The illustration shows the 42U cabinet layout.
Triplex 16-Processor System, Three Cabinets

This triplex configuration has a maximum of 16 logical processors, with one IOAM enclosure and ten Fibre Channel disk modules (eight Fibre Channel disk modules in a typical system). The illustration shows the 42U cabinet layouts.
Example System With UPS and ERM

The UPS and ERMs (two ERMs maximum) must reside in the bottom of the cabinet, with the UPS at cabinet offset 2U.
Example System With One NonStop S-Series I/O Enclosure

The illustration shows an Integrity NonStop NS-series system (Fibre Channel disk modules, an IOAM enclosure, X-fabric and Y-fabric p-switches with ServerNet I/O PICs, an LSU, and NonStop Blade Elements A and B) connected to a NonStop S-series I/O enclosure through fiber-optic ServerNet cables and power-on cables.
Example Internal Cabling

Topics included are:
• Example Internal Cabling (page 7-10)
• Example 4-Processor Duplex System Cabling (page 7-14)
• Example 16-Processor Triplex System Cabling (page 7-15)
• Example Cabling in System Ready for Shipment (page 7-17)

This cabling chart for an example Integrity NonStop NS-series system with a duplex processor lists the relative U locations where each cable connects to its source and destination connectors, the cable part numbers, and the source and destination connector labels.
This chart (source, label, cable part number, destination, label) includes rows such as:
• IOAME1: FCSA 3.2: 1 (N1-R1-U23-3-2-1), cable 522745-002, to M8201 (TSI T1200) Fibre Ch Port (N1-R1-U40-FC)
• IOAME1: FCSA 3.1: 1 (N1-R1-U23-3-1-1), cable 522745-002, to Primary FCDM1: FC-AL A2 (N1-R1-U37-A-2)
• IOAME1: FCSA 3.1: 2 (N1-R1-U23-3-1-2), cable 522745-002, to Mirror FCDM1: FC-AL A2 (N1-R1-U34-A-2)
• IOAME1: FCSA 2.1: 1 (N1-R1-U23-2-1-1), cable 522745-002, to Primary FCDM1: FC-AL B1 (N1-R1-U37-B-1)
• IOAME1: FCSA 2.
• LSUE 0: Slot 20: Strand B (N1-R1-U13-20-B), cable 522745-002, to NSBE 1B: Slot 71: J0 (N1-R1-U8-71-J0)
• LSUE 0: Slot 21: Strand B (N1-R1-U13-21-B), cable 522745-002, to NSBE 1B: Slot 71: J1 (N1-R1-U8-71-J1)
• LSUE 0: Slot 22: Strand B (N1-R1-U13-22-B), cable 522745-002, to NSBE 1B: Slot 72: J2 (N1-R1-U8-72-J2)
• LSUE 0: Slot 23: Strand B (N1-R1-U13-23-B), cable 522745-002, to NSBE 1B: Slot 72: J3 (N1-R1-U8-72-J3)
• LSUE 0: Slot 20: Strand A (N1-R1-U
This illustration shows the U locations of the modular enclosures listed in the preceding table in a 42U cabinet, including the IOAM enclosure at U23, with connections to the AC power source or site UPS.
Example 4-Processor Duplex System Cabling

This illustration shows an example 4-processor duplex system in a single cabinet. This simplified, conceptual representation shows the X and Y ServerNet cabling between the NonStop Blade Element, LSU, p-switch, and IOAM enclosures. For clarity, power and Ethernet cables are not shown. For cable-by-cable interconnect diagrams, see Internal ServerNet Interconnect Cabling on page 6-3.
The IOAM shown is the two-controller, two-Fibre Channel disk module configuration shown in detail in Two FCSAs, Two FCDMs, One IOAM Enclosure on page 6-24. For details and instructions on connecting cables as part of the system installation, refer to the NonStop NS-Series Hardware Installation Manual.

Example 16-Processor Triplex System Cabling

The next two illustrations show an example 16-processor triplex system with four cabinets.
Components of the cable management system that are part of each modular enclosure and the rack are not shown, so actual cable routing is slightly different from that shown. For detailed information on cable routing and connection as part of the system installation, refer to the NonStop NS-Series Hardware Installation Manual.
Example Cabling in System Ready for Shipment

This photograph shows the cabling for a system with two cabinets that is prepared for shipment. All internal cables whose source and destination hardware are within the same cabinet are connected at the factory. For cables that connect between the cabinets, each source end is connected to the correct device, and the cable is coiled and secured for shipment.
At the installation site, each intercabinet cable is routed from the source cabinet to the destination cabinet and connected to the hardware device noted on the connector identification label affixed to each end of the cable. Each label identifies the node, rack, U, and connector of both the near and far ends of the cable (for example, N1-R1-U31-3.1 and N1-R2-U31-3.2).
A Cables

Internal Cables

Available internal cables and their lengths are:

MMF LC-LC cables:
• 2 m (7 ft): M8900-02, part number 522745
• 5 m (16 ft): M8900-05, part number 526016
• 15 m (49 ft): M8900-15, part number 526017
• 40 m (131 ft): M8900-40, part number 525518
• 80 m (262 ft): M8900-80, part number 525519
• 100 m (328 ft): M8900-100, part number 525520
• 125 m (410 ft): M8900-125, part number 526127
• 200 m (656 ft): M8900-200, part number 525522
• 250 m (820 ft): M8900-250, part number 525521

M8910 cables:
• 10 m (33 ft): M8910-10, part number 526941
• 20 m (66 ft): M8910-20, part number 526156
• 50 m (164 ft): M8910-50, part number 526994
• 100 m (328 ft): TBD, part number 52
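When planning cable runs, the usual task is picking the shortest stocked cable that covers a measured distance. This sketch uses the MMF LC-LC lengths from the table above (product IDs and part numbers as listed); the helper itself is hypothetical:

```python
# MMF LC-LC cables from the table above: length (m) -> (product ID, part number)
MMF_LC_LC = {
    2: ("M8900-02", "522745"),   5: ("M8900-05", "526016"),
    15: ("M8900-15", "526017"),  40: ("M8900-40", "525518"),
    80: ("M8900-80", "525519"),  100: ("M8900-100", "525520"),
}

def shortest_cable(required_m: float) -> tuple:
    """Return (length, product ID, part number) of the shortest stocked
    MMF LC-LC cable at least required_m long."""
    for length in sorted(MMF_LC_LC):
        if length >= required_m:
            return (length,) + MMF_LC_LC[length]
    raise ValueError(f"no stocked MMF LC-LC cable covers {required_m} m")
```

For example, a 10 m run between cabinets is covered by the 15 m M8900-15 cable.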
ServerNet Cluster Cables

These cables connect the Integrity NonStop NS-series systems to a ServerNet cluster (zone) with Model 6780 NonStop ServerNet switches:

SMF LC-LC cables:
• 2 m (7 ft): M8921-2, part number 525555
• 5 m (16 ft): M8921-5, part number 525565
• 10 m (33 ft): M8921-10, part number 522746
• 25 m (82 ft): M8921-25, part number 526107
• 40 m (131 ft): M8921-40, part number 525516
• 80 m (262 ft): M8921-80, part number 522747
• 100 m: M8921-100, part number 522749

These cables connect the p-switches on the I
Cable Length Restrictions

Maximum allowable lengths of cables connecting the modular system components are:
• NonStop Blade Element to LSU enclosure: MMF, LC-LC, 100 m (M8900-nnn)
• NonStop Blade Element to NonStop Blade Element: MMF, MTP, 50 m (M8920-nnn)
• LSU enclosure to p-switch: MMF, LC-LC, 125 m (M8900-nnn)
• P-switch to p-switch crosslink: MMF, LC-LC, 125 m (M8900-nnn)
• P-switch to IOAM enclosure: MMF, LC-LC, 125 m (M8900-nnn)
This photograph shows an example of CMS trays and clamps for the components installed in an IOAM enclosure. For details of using the CMS, refer to the NonStop NS-Series Hardware Installation Manual.
B Control, Configuration, and Maintenance Tools

This section introduces the control, configuration, and maintenance tools used in Integrity NonStop NS-series systems:
• Support and Service Library (page B-1)
• System Console (page B-1)
• Maintenance Architecture (page B-6)
• Dedicated Service LAN (page B-9)
• AC Power Monitoring (page B-21)
• System-Down OSM Low-Level Link (page B-21)
• AC Power-Fail States (page B-23)

Support and Service Library

See Support and Service Library on page C-1.
Some system console hardware, including the PC system unit, monitor, and keyboard, can be mounted in the cabinet. Other PCs are installed outside the cabinet and require separate provisions or furniture to hold the PC hardware. System consoles communicate with Integrity NonStop NS-series servers over a dedicated service local area network (LAN) or a secure operations LAN.
One System Console Managing One System (Setup Configuration)

[Figure: one system console, with a remote service provider modem and an optional DHCP DNS server, connects through maintenance switch 1 to the processor switches and ServerNet switches; a connection to an operations LAN is optional.]
Because all servers are shipped with the same preconfigured IP addresses for MSP0, MSP1, $ZTCP0, and $ZTCP1, you must change these IP addresses for the second and subsequent servers before you can add them to the LAN.

Primary and Backup System Consoles Managing One System
This configuration is recommended.
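Because every new server arrives with identical factory defaults, a pre-cabling check for address collisions can catch this problem before a server joins the LAN. The following sketch is only an illustration, not an HP tool; the helper name and the sample addresses are assumptions (MSP0 and MSP1 are component names taken from the text, not their real default addresses):

```python
# Hypothetical planning aid: find factory-default address collisions
# before attaching a second server to the dedicated service LAN.
# The addresses below are placeholders, not actual factory defaults.

def find_collisions(existing_servers, new_server):
    """Return entries of new_server whose address is already on the LAN."""
    in_use = {addr for server in existing_servers for addr in server.values()}
    return {name: addr for name, addr in new_server.items() if addr in in_use}

server_a = {"MSP0": "192.168.36.10", "MSP1": "192.168.36.11"}
server_b = dict(server_a)  # second server still at the same factory defaults

print(find_collisions([server_a], server_b))
# every address collides, so server B must be renumbered before cabling
```

A non-empty result means the new server must be renumbered before it is connected.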
Multiple System Consoles Managing One System

[Figure: primary and backup system consoles, each with a remote service provider modem and an optional DHCP DNS server, connect through maintenance switches 1 and 2 to the ServerNet switches, FCSAs, and G4SAs; a connection to an operations LAN is optional.]
on the same subnet. If a server is configured to receive dial-ins, the server must occupy the same subnet as the system console receiving the dial-ins. For best OSM performance, no more than 10 servers should be included within one subnet.
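The two subnet rules above (dial-in servers must share a subnet with the console, and a subnet should hold no more than 10 servers) can be checked mechanically. A minimal sketch using Python's standard `ipaddress` module; the function name and all addresses are illustrative assumptions:

```python
import ipaddress

def subnet_plan_ok(console_ip, server_ips, subnet, max_servers=10):
    """Check that the console and all dial-in servers share one subnet
    and that the subnet holds no more than max_servers servers."""
    net = ipaddress.ip_network(subnet)
    all_in_subnet = all(ipaddress.ip_address(ip) in net
                        for ip in [console_ip, *server_ips])
    return all_in_subnet and len(server_ips) <= max_servers

# Example: two servers and a console on one /24 -- within the guideline.
print(subnet_plan_ok("192.168.36.5",
                     ["192.168.36.31", "192.168.36.32"],
                     "192.168.36.0/24"))  # True
```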
Other hardware modules contain at least one microprocessor and firmware that performs maintenance functions for their local logic:

• NonStop Blade Element
• Logical synchronization unit (LSU)
• Fibre Channel disk module

The ServerNet fabrics, rather than the dedicated service LAN, provide maintenance interconnection to the OSM console for these modules.
ServerNet adapters and one ServerNet switch board. (See I/O Adapter Module (IOAM) Enclosure and I/O Adapters on page 5-16.) Each IOAM connects directly to a p-switch and contains a single ME that resides within the ServerNet switch board, along with the ServerNet fabric interconnect.
Dedicated Service LAN

A dedicated service LAN provides connectivity between the OSM console running in a PC and the maintenance firmware in the system hardware. This dedicated service LAN uses a ProCurve 2524 Ethernet switch for connectivity between the p-switches, the ServerNet switch boards for each IOAM, and the system console.
• Connections to both X and Y fabrics (for fault tolerance) for OSM system-up maintenance (any one of these connections is valid as long as there are at least two connections total):
  ° Gigabit 4-port ServerNet adapters (G4SAs) installed in an IOAM enclosure
  ° Ethernet 4-port ServerNet adapters (E4SAs), Fast Ethernet ServerNet adapters (FESAs), or Gigabit Ethernet ServerNet adapters (GESAs) installed in a NonStop S-series I/O enclosure w
IP Addresses

This illustration shows a fault-tolerant LAN configuration with two maintenance switches:

[Figure: primary and backup system consoles, each with a remote service provider modem and an optional DHCP DNS server, connect through maintenance switches 1 and 2 (maintenance PIC Ethernet ports, slot 1, port 3) to the system; one or two optional connections to an operations LAN.]
• T1200 FC to SCSI converter (optional)

These components have default IP addresses that are preconfigured at the factory:

Component (page 1 of 2)                      Location            GMS        Default IP Address
ServerNet switch boards — P-switch           N/A                 100.2.14   192.231.36.202
                                                                 100.3.14   192.231.36.203
ServerNet switch boards — IOAM Enclosure     N/A                 110.2.14   192.231.36.
G4SA and NonStop S-series E4SA, FESA, GESA   N/A
Maintenance switch (ProCurve 2524)           Rack 01, Rack 02
Component (page 2 of 2)          Location   GMS   Default IP Address   Used By
UPS (rackmounted only)           Rack 01    N/A   192.231.36.31        OSM Service Connection
                                 Rack 02          192.231.36.32
                                 Rack 03          192.231.36.33
                                 Rack 04          192.231.36.34
                                 Rack 05          192.231.36.35
                                 Rack 06          192.231.36.36
                                 Rack 07          192.231.36.37
                                 Rack 08          192.231.36.38
TCP/IP processes for OSM:
$ZTCP0, $ZTCP1
First G4SA (port A) and NonStop S-series E4SA, FESA, or GESA   192.231.36.
Some guidelines for configuring the DHCP server:

• Configure the range of IP addresses to be assigned dynamically by the DHCP server to be in the same subnet as existing IP addresses on the LAN and any static IP address included in the dedicated service LAN.
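The guideline above amounts to a validation step: the dynamic range must sit inside the service-LAN subnet and must not overlap any static address already assigned. A sketch of that check; the function name and all sample values are assumptions for illustration, not real factory addresses:

```python
import ipaddress

def dhcp_range_ok(start, end, subnet, static_addrs):
    """True when [start, end] lies inside subnet and skips all static addresses."""
    net = ipaddress.ip_network(subnet)
    lo = ipaddress.ip_address(start)
    hi = ipaddress.ip_address(end)
    if lo not in net or hi not in net:
        return False  # dynamic range leaves the service-LAN subnet
    return not any(lo <= ipaddress.ip_address(a) <= hi for a in static_addrs)

# Placeholder static addresses; substitute those configured on your LAN.
statics = ["192.168.36.202", "192.168.36.203"]
print(dhcp_range_ok("192.168.36.50", "192.168.36.99",
                    "192.168.36.0/24", statics))  # True: no overlap
```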
One possible fault-tolerant configuration is a pair of G4SAs (each in a different IOAM) with a dedicated service LAN connected to the A port on each G4SA. The B port is then available to support the SWAN concentrator.

System-Up Dedicated Service LAN

When the system is up and the operating system is running, the ME connects to the NonStop NS-series system’s dedicated service LAN using one of the PIFs on each of two G4SAs.
Dedicated Service LAN Links With One IOAM Enclosure

This illustration shows the dedicated service LAN cables connected to the G4SAs in slot 5 of both modules of an IOAM enclosure and to the maintenance switch. [Figure VST340: the maintenance switch cabled to the G4SA Ethernet PIF connectors (D, C, B, A) in modules 2 and 3 of the IOAM enclosure (group 110).]
Dedicated Service LAN Links to Two IOAM Enclosures

This illustration shows dedicated service LAN cables connected to G4SAs in two IOAM enclosures and to the maintenance switch. [Figure VST341: the maintenance switch cabled to modules 2 and 3 of the IOAM enclosures (groups 110 and 111).]
Dedicated Service LAN Links With IOAM Enclosure and NonStop S-Series I/O Enclosure

This illustration shows dedicated service LAN cables connected to a G4SA in an IOAM enclosure and at least one NonStop S-series Ethernet adapter (E4SA, FESA, or GESA) in a NonStop S-series I/O enclosure (module 12 in this example) and to the maintenance switch. [Figure: the maintenance switch cabled to the IOAM enclosure and to the NonStop S-series I/O enclosure.]
Dedicated Service LAN Links With NonStop S-Series I/O Enclosure

This illustration shows dedicated service LAN cables connected to two NonStop S-series Ethernet adapters (E4SA, FESA, or GESA) in a NonStop S-series I/O enclosure (module 12 in this example) and to the maintenance switch. [Figure VST343: the maintenance switch cabled to the NonStop S-series I/O enclosure (module 12).]
Factory-default IP addresses for the G4SA and E4SA adapters are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual. HP recommends that you change these preconfigured IP addresses to addresses appropriate for your LAN environment.
System-Down OSM Low-Level Link

For information on how to configure and start OSM server-based processes and components, see the OSM Migration Guide.
be used to let the system continue operation for a short period in case the power outage was only a momentary transient. One or two ERMs installed in each cabinet can extend the battery-supported system runtime. The system user must configure the system ride-through time so that an orderly shutdown executes before the UPS batteries are depleted.
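The sizing question implied above, namely how long a ride-through the batteries can actually support, is simple arithmetic: battery runtime minus the time an orderly shutdown takes, minus a safety reserve. A sketch under assumed numbers; the function name, runtime, shutdown duration, and margin are all placeholders, so substitute the figures for your UPS and ERM configuration:

```python
def max_ride_through(battery_runtime_min, shutdown_min, margin_min=2.0):
    """Longest ride-through (minutes) that still leaves time for an orderly
    shutdown while keeping margin_min of battery in reserve. A negative
    result means no ride-through is safe with these numbers."""
    return battery_runtime_min - shutdown_min - margin_min

# Example: 20 min of battery, 6 min orderly shutdown, 2 min reserve.
print(max_ride_through(20, 6))  # 12.0
```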
AC Power-Fail States

These states occur when a power failure occurs and an optional HP model R5500 XR UPS is installed in each cabinet within the system:

System State   Description
NSK_RUNNING    The NonStop operating system is running normally.
RIDE_THRU      OSM has detected a power failure and begins timing the outage. AC power returning terminates RIDE_THRU and puts the operating system back into the NSK_RUNNING state.
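The two documented transitions can be modeled as a tiny state machine. The state names come from the table; the class and method names are illustrative assumptions, and the full table (truncated here) may define further states, such as a shutdown state, that this sketch omits:

```python
class PowerFailMonitor:
    """Sketch of the documented NSK_RUNNING <-> RIDE_THRU transitions."""

    def __init__(self):
        self.state = "NSK_RUNNING"

    def ac_power_lost(self):
        if self.state == "NSK_RUNNING":
            self.state = "RIDE_THRU"    # OSM begins timing the outage

    def ac_power_restored(self):
        if self.state == "RIDE_THRU":
            self.state = "NSK_RUNNING"  # power returned during ride-through

m = PowerFailMonitor()
m.ac_power_lost()
print(m.state)       # RIDE_THRU
m.ac_power_restored()
print(m.state)       # NSK_RUNNING
```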
Control, Configuration, and Maintenance Tools HP Integrity NonStop NS-Series Planning Guide—529567-005 B -24 AC Power-Fail States
C Guide to Integrity NonStop NS-Series Server Manuals

These manuals support the Integrity NonStop NS-series systems.

Category    Purpose                                                     Title
Reference   Provide information about the manuals, the RVUs, and the    NonStop Systems Introduction for H-Series RVUs
            hardware that support NonStop NS-series servers
            Describe how to prepare for changes to software or          Managing Software Changes
            hardware configurations
            Describe how to install, configure, and upgrade             H06.
            components and systems
Support and Service Library

Authorized service providers can also order the NTL Support and Service Library CD:

• Channel Partners and Authorized Service Providers: Order the CD from the SDRC at https://scout.nonstop.compaq.com/SDRC/ce.htm.
• HP employees: Subscribe at World on a Workbench (WOW). Subscribers automatically receive CD updates. Access the WOW order form at http://hps.knowledgemanagement.hp.com/wow/order.asp.
Safety and Compliance

This section contains three types of required safety and compliance statements:

• Regulatory compliance
• Waste Electrical and Electronic Equipment (WEEE)
• Safety

Regulatory Compliance Statements
The following regulatory compliance statements apply to the products documented by this manual.

FCC Compliance
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules.
Korea MIC Compliance

Taiwan (BSMI) Compliance

Japan (VCCI) Compliance
This is a Class A product based on the standard of the Voluntary Control Council for Interference by Information Technology Equipment (VCCI). If this equipment is used in a domestic environment, radio disturbance may occur, in which case the user may be required to take corrective actions.
European Union Notice

Products with the CE Marking comply with both the EMC Directive (89/336/EEC) and the Low Voltage Directive (73/23/EEC) issued by the Commission of the European Community.
SAFETY CAUTION

The following icon or caution statements may be placed on equipment to indicate the presence of potentially hazardous conditions:

DUAL POWER CORDS
CAUTION: “THIS UNIT HAS MORE THAN ONE POWER SUPPLY CORD. DISCONNECT ALL POWER SUPPLY CORDS TO COMPLETELY REMOVE POWER FROM THIS UNIT.”
“ATTENTION: CET APPAREIL COMPORTE PLUS D'UN CORDON D'ALIMENTATION. DÉBRANCHER TOUS LES CORDONS D'ALIMENTATION AFIN DE COUPER COMPLÈTEMENT L'ALIMENTATION DE CET ÉQUIPEMENT.”
HIGH LEAKAGE CURRENT
To reduce the risk of electric shock due to high leakage currents, a reliable grounded (earthed) connection should be checked before servicing the power distribution unit (PDU).
Important Safety Information Statements
Index A AC current calculations 3-11 AC power 208 V ac delta 3-2, 6-36 250 V ac 3-phase 3-2, 6-37 250 V ac single-phase 3-3, 6-38 enclosure input specifications 3-4 feed, top or bottom 1-1 power-fail monitoring B-21 power-fail states B-23 unstrapped PDU 6-34 AC power feed 5-4 bottom of cabinet 5-4 top of cabinet 5-4 with cabinet UPS 5-5 air conditioning 2-4 air filters 2-5 B branch circuit 3-3 C cabinet 1-1, 5-24 cabinet dimensions, 3-6 cabinet, offset 5-3 cable connections LSUs to p-switches 6-7 NonStop
E Index IOAM switch boards 5-16 p-switch 5-13 divergence recovery 4-8 documentation factory-installed hardware 4-24 NonStop NS-Series C-1 packet 4-23 ServerNet adapter configuration 4-25 software migration 4-21 dual modular redundant (DMR, duplex) processor 4-3, 4-5 dust and microscopic particles 2-5 dynamic IP addresses B-13 E electrical disturbances 2-2 electrical power loading 3-4 elements, processor 4-2, 4-4 emergency power off (EPO) switches HP 5500 XP UPS 2-2 Integrity NonStop NS-series servers 2-1
G Index forms ServerNet adapter configuration 4-25 front panel, NonStop Blade Element buttons 5-10 indicator LEDs 5-10 FRU AC power assembly, LSU 5-11 fan IOAM enclosure 5-16 NonStop Blade Element 5-7 p-switch 5-13 Fibre Channel disk module 5-20 I/O interface board 5-7 logic board, LSU 5-11 memory board 5-7 optic adapter LSU 5-11 NonStop Blade Element 5-7 power supply 5-7, 5-13 processor board 5-7 reintegration board 5-7 fuses, PDU 5-4 G G4SA network connections 6-31 service LAN PIF B-15 Gigabit Ethernet
K Index K keepout panel 5-7 L labeling, optic cables 6-4 LAN dedicated service B-9 fault-tolerant maintenance B-10 non-fault-tolerant maintenance B-9 service, G4SA PIF B-15 LCD IOAM switch boards 5-16 p-switch 5-13 load operating system paths 4-18 logical processor 4-2 LSU description 5-10, 5-25 FRUs 5-11 function and features 5-10 indicator LEDs 5-12 logic boards 5-12 M M8201 Fibre Channel to SCSI router 6-15 maintenance architecture B-6 maintenance PIC 5-13 maintenance switch 5-20 manuals NonStop NS-S
Q Index PE 4-2 port 5-25 power and thermal calculations 3-11 power consumption 2-3 power distribution units (PDUs) 1-1, 2-1 power failures 2-3 power feed, top or bottom 2-1 power input Integrity NonStop NS-series server 3-1 power plugs Integrity NonStop NS-series server 3-2 power quality 2-2 power receptacles, PDU 5-6 power supply IOAM enclosure 5-16 NonStop Blade Element 5-7 p-switch 5-13 power-fail monitoring B-21 states B-23 primary and mirror disk drive location recommendations 6-22 processor board, F
T Index optic cabling 6-3 processor connections 5-15 switch board IOAM enclosure 5-16 p-switch 5-13 service clearances 3-6 service LAN B-9 slot, position 5-24 SMF PIC 5-13 specification assumptions 3-1 cabinet physical 3-7 enclosure dimensions 3-8 heat 3-10 nonoperating temperature, humidity, altitude 3-11 operating temperature, humidity, altitude 3-10 weight 3-9 specifications cable 6-5 startup characteristics, default 4-18 static IP addresses B-13 SWAN concentrator restriction B-14 synchronization, proc