HP Integrity NonStop NS-Series Planning Guide

Abstract
This guide describes the Integrity NonStop NS-series system hardware and provides examples of system configurations to assist in planning for installation of a new system. It also provides a guide to other Integrity NonStop NS-series manuals.

Product Version: N.A.

Supported Release Version Updates (RVUs): This publication supports H06.03 and subsequent H-series RVUs until otherwise indicated by its replacement publication.
Document History

Part Number   Product Version   Published
529567-001    N.A.              February 2005
529567-002    N.A.              May 2005
529567-003    N.A.              June 2005
529567-004    N.A.              July 2005
Contents

What's New in This Manual (ix)
About This Manual (xi)
1. Introduction to Integrity NonStop NS-Series Systems
2. Modular System Hardware
3. ServerNet Communications Network
4. System Configurations
5. Control, Configuration, and Maintenance Tools
6. Planning for LAN Communications
7. System Installation Planning
A. Specifications (including nonoperating temperature, humidity, and altitude; typical acoustic noise emissions; tested electrostatic immunity)
B. Example Modular Configurations
C. Guide to Integrity NonStop NS-Series Server Manuals
Glossary
Index
What’s New in This Manual Manual Information HP Integrity NonStop NS-Series Planning Guide Abstract This guide describes the Integrity NonStop NS-series system hardware and provides examples of system configurations to assist in planning for installation of a new system. It also provides a guide to other Integrity NonStop NS-series manuals. Product Version N.A. Supported Release Version Updates (RVUs) This publication supports H06.
What’s New in This Manual New and Changed Information HP Integrity NonStop NS-Series Planning Guide— 529567-004 x
About This Manual

Who Should Use This Guide
This guide is written for those responsible for planning the installation, configuration, and maintenance of the server and the software environment at a particular site. Those who perform the hardware tasks documented in this guide must have completed HP training courses on system support for Integrity NonStop NS-series servers.

Note. Integrity NonStop NS-series and NonStop S-series refer to hardware systems.
What's in This Guide

Section   Title                                           Contents
5         Control, Configuration, and Maintenance Tools   Introduces the control, configuration, and maintenance tools used in Integrity NonStop NS-series systems.
6         Planning for LAN Communications                 Provides requirements and considerations for planning local area networks (LANs) for Integrity NonStop NS-series systems.
Change Bar Notation
A change bar appears in the margin of changed portions of text, figures, tables, examples, and so on. Change bars highlight new or revised information. For example:

The CRE has many new message types and some new message type codes for old message types. In the CRE, the message type SYSTEM includes all messages except LOGICAL-CLOSE and LOGICAL-OPEN.
1. Introduction to Integrity NonStop NS-Series Systems

This section introduces the Integrity NonStop NS-series servers and covers these topics: NonStop System Primer (1-1), NonStop Advanced Architecture (1-2), Processor Complex (1-3), Processor Element (1-5), Duplex Processor (1-5), Triplex Processor (1-7), Processor Synchronization and Rendezvous (1-8), Memory Reintegration (1-8), Failure Recovery for Duplex Processor (1-8), Failure Recovery for Triplex Processor (1-9), ServerNet Fabric I/O (1-9), and System Architecture.
However, contemporary high-speed microprocessors make lock-step processing no longer practical because of:

• Variable-frequency processor clocks with multiple clock domains
• Higher transient error rates than in earlier, simpler microprocessor designs
• Chips with multiple processor cores

NonStop Advanced Architecture
Integrity NonStop NS-series systems employ a unique method for achieving fault tolerance in a clustered processing environment.
Processor Complex
The basic building block of the NSAA compute engine is the processor complex, which consists of two or three CPU modules called slices.

Note. In this publication, the term processor complex is equivalent to the term NonStop Blade Complex (NSBC).
This diagram is an overview of the modular NSAA, showing one processor complex with four processors, the I/O hardware, and the ServerNet fabrics. [Figure]
Processor Element
In summary, these terms describe the NSAA processor:

Term                    Description
Processor element (PE)  A single Itanium microprocessor with its associated memory. A PE is capable of executing an individual instruction stream and of I/O communication through fiber-optic links.
Processor slice         Two or four PEs contained within a single slice enclosure.
Logical processor       One or more PEs from each slice executing a single instruction stream.
Duplex Processor
The duplex processor uses two slices, A and B. The slice optic cables connect the PEs to the LSUs, and these LSUs then connect to the two independent ServerNet fabrics. Dual ServerNet fabrics create communications redundancy in case one of the fabrics fails. For a description of the LSU functions, see Processor Synchronization and Rendezvous on page 1-8.
Triplex Processor
The TMR, or triplex, processor uses three slices: A, B, and C. As with the duplex processor, the slice optic cables connect the PEs to the LSUs, with these LSUs then connecting to the two independent ServerNet fabrics. Dual ServerNet fabrics create communications redundancy in case one of the fabrics fails. For a description of the LSU functions, see Processor Synchronization and Rendezvous on page 1-8.
Processor Synchronization and Rendezvous
Synchronization and rendezvous at the LSUs perform two main functions:

• Keep the individual PEs in a logical processor in loose lock-step through a technique called rendezvous. Rendezvous occurs to:
  ° Periodically synchronize the PEs so they execute the same instruction at the same time. Synchronization accommodates the slightly different clock speed within each PE.
Failure Recovery for Triplex Processor
In triplex processors, each LSU has inputs from the three processor elements within a logical processor. As with the duplex processor, the LSU keeps the three PEs in loose lock-step. The LSU also checks the outputs from the three PEs.
ServerNet Fabric I/O
The ServerNet fabrics carry processor I/O to ServerNet adapters in the I/O adapter module (IOAM) enclosure or via NonStop S-series I/O enclosures.
System Architecture
This diagram shows elements of an example Integrity NonStop NS-series system with four triplex processors. [Figure]
Modular Hardware
Hardware for Integrity NonStop NS-series systems is implemented in modules, or enclosures, that are installed in modular cabinets. For descriptions of the modular hardware, see Section 2, Modular System Hardware.
Default Startup Characteristics
Each system ships with these default startup characteristics:

• $SYSTEM disks residing in one of these two locations:
  ° Disk drive enclosure connected to IOAM enclosure group 110, with the disks in these locations:

    Path           Group  Module  Slot  SAC  DDE Shelf  Bay
    Primary        110    2       1     1    1          1
    Backup         110    3       1     1    1          1
    Mirror         110    3       1     2    1          1
    Mirror backup  110    2       1     2    1          1
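Once a system is installed, these factory disk paths can be cross-checked through SCF. A minimal sketch follows, using standard storage subsystem commands; the exact output format varies by RVU:

  > SCF
  -> STATUS DISK $SYSTEM            == confirm the primary and mirror paths are up
  -> INFO DISK $SYSTEM, DETAIL      == display the configured path locations
  -> EXIT

The detailed listing should report the group-module-slot locations shown in the preceding table.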
Load Path  Description    Source Disk  Destination Processor  ServerNet Fabric  (page 2 of 2)
14         Mirror         $SYSTEM-M    1                      Y
15         Mirror backup  $SYSTEM-M    1                      X
16         Mirror backup  $SYSTEM-M    1                      Y

This illustration shows the system load paths. [Figure]
2. Modular System Hardware

This section introduces the hardware used in Integrity NonStop NS-series systems: Modular Hardware Components (2-1), Component Location and Identification (2-27), NonStop S-Series I/O Enclosures (2-35).

Modular Hardware Components
These hardware components can be part of an Integrity NonStop NS-series system: Cabinets (2-4), AC Power PDUs (2-4), Processor Slice (2-8), Logical Synchronization Unit (LSU) (2-11), Processor Switch (2-13), I/O Adapter Module (IOAM) Enclosure and I/O Adapters (2-17), Disk Drive Enclosure, Maintenance Switch, Optional UPS and ERM, System Console, Internal Cables, and Enterprise Storage System.
...as well as to LANs. Additional IOAM enclosures increase connectivity and storage resources by way of ServerNet links to the p-switch. NonStop S-series I/O enclosures equipped with IOMF 2 CRUs connect to the Integrity NonStop NS-series system using fiber-optic ServerNet links from the p-switch.

Note. IOMF 2 CRUs are required in the NonStop S-series I/O enclosure for connection to the Integrity NonStop NS-series system.
This example shows a modular cabinet with a duplex processor and hardware for a complete system (rear view). [Figure: connections to the AC power source or site UPS, PDU junction boxes, IOAM enclosure, power distribution units (PDUs), and rack U positions 1 through 42.]
Cabinets
HP modular cabinets for the Integrity NonStop NS-series system are 42U high, with labels on the sides of the rack indicating the U positions. [Figure: Rack 1 (first 10U shown), with Slice A at offset 3U and Slice B at offset 8U.]
AC Power PDUs
Junction boxes for the PDUs and AC power feed cables are factory-installed and configured at either the upper or lower rear corners, depending on what is ordered for the site power feed.
This illustration shows the location of PDUs for AC power from the top of the cabinet when an optional UPS and ERM are also installed. [Figure: PDU junction boxes at the top of the cabinet, the extended runtime module (ERM), the uninterruptible power supply (UPS), and the power distribution units (PDUs).]
Each PDU is factory-wired to distribute the phases to its receptacles.

Caution. If you are installing Integrity NonStop NS-series enclosures in a rack, balance the current load among the three phases. Using only one of the available three phases, especially for larger systems, can cause unbalanced loading and might violate applicable electrical codes.
Modular Cabinet PDU Keepout Panel
PDU keepout applies only when the PDU junction boxes reside at the bottom of the modular cabinet. PDUs overlap the outside of the rack by 2U (3.5 inches). A PDU keepout panel is installed in the affected space:

• A 1U keepout applies when a UPS occupies the bottom position of the modular cabinet.
• A 2U keepout applies when a processor slice occupies the bottom position of the modular cabinet.
Processor Slice
The slice midplane for logic interconnection and power distribution, which is part of the chassis assembly, is not a FRU. Two slices provide up to four logical processors in a high-availability duplex configuration, with eight slices providing a full 16-processor duplex system. For a fault-tolerant triplex system, three slices provide four processors, with 12 slices providing a full 16-processor triplex system.
...ID, such as A1, A2, A3, and so forth. These IDs reference the appropriate slice for proper connection of the optic cables. The slice optic cables provide communications between each slice and the LSU, as well as between the LSU and the X-fabric and Y-fabric p-switch PICs. No requirement exists to connect cables from a particular slice optic adapter on a slice to a physically corresponding adapter on an LSU.
Front Panel Indicator LEDs:

LED      State            Meaning
Power    Flashing green   Power is on; slice is available for normal operation.
         Flashing yellow  Slice is in power mode.
         Off              Power is off.
Fault    Steady amber     Hardware or software fault exists.
         Off              Slice is available for normal operation.
Locator  Flashing blue    System locator is activated.
Logical Synchronization Unit (LSU)
This illustration shows an example LSU configuration as viewed from the rear of the enclosure, equipped with four LSU optics adapter PICs in I/O positions 20 through 23. [Figure: the J set in I/O positions 20-23 serves logical processors 0 through 7; the K set in I/O positions 24-27 serves logical processors 8 through 15. Each PIC has Y, X, C, B, and A connectors, a green LED (power), and an amber LED.]
LSU Indicator LEDs

LED                                        State  Meaning
LSU optic adapter PIC (green LED)          Green  Power is on; LSU is available for normal operation.
                                           Off    Power is off.
LSU optic adapter PIC (amber LED)          Amber  Power-on is in progress, the board is being reset, or a fault exists.
                                           Off    Normal operation or powered off.
LSU optic adapter connectors (green LEDs)  Green  Slice optic or ServerNet link is functional.
                                           Off    Power is off, or a fault exists (amber LED is on).
Processor Switch

• ServerNet I/O PICs (slots 4 to 9) provide 24 ServerNet 3 connections to one or more IOAMs and to optional NonStop S-series I/O enclosures.
• Processor I/O PICs (slots 10 to 13) connect to the LSUs for ServerNet 3 I/O with the processors.
• Cable management and connectivity are on the rear of the cabinet.

Caution. To maintain proper cooling air flow, blank panels must be installed in all slots that do not contain PICs.
• IP address
• Group-module-slot
• Cabinet name and offset
• ME firmware revision
• Field-programmable gate array (FPGA) firmware revision

This illustration shows the rear of a fully populated p-switch. [Figure: maintenance PIC (slot 1), cluster PIC (slot 2), crosslink PIC (slot 3), I/O PICs (slots 4-9), and processor PICs (slots 10-13).]
Processor Numbering
This example of a triplex processor shows the ServerNet cabling to the p-switch PIC in slot 10, which defines processors 0, 1, 2, and 3. This configuration is only an example, to be used for understanding the interconnection.
I/O Adapter Module (IOAM) Enclosure and I/O Adapters
An IOAM provides the Integrity NonStop NS-series system with its system I/O, using Gigabit Ethernet 4-port ServerNet adapters (G4SAs) for LAN connectivity and Fibre Channel ServerNet adapters (FCSAs) for storage connectivity.
This illustration shows the front and rear of the IOAM enclosure and details. [Figure: ServerNet links from the p-switch (MMF LC connectors), maintenance connector (100Base-T RJ-45), LCD display, ServerNet switch boards (modules 2 and 3, slot 14), power supplies, fans (slots 16 and 17), and adapter slots 1 through 5 in each IOAM module.]
IOAM Enclosure Indicator LEDs

ServerNet Switch Board LED  State  Meaning
Power                       Green  Power is on; board is available for normal operation.
                            Off    Power is off.
Fault                       Amber  A fault exists.
                            Off    Normal operation or powered off.
Link                        Green  Link is functional.
                            Off    Link is not functional.
LCD display                 -      Status messages appear as displayed.
ServerNet ports             Green  ServerNet link is functional.
                            Off    ServerNet link is not functional.
FCSAs are installed in pairs and can reside in slots 1 through 5 of either IOAM (module 2 or 3) in an IOAM enclosure. A pair can be installed one each in the two IOAM modules of the same IOAM enclosure, or one each in two different IOAM enclosures. The FCSA allows either a direct connection to an Enterprise Storage System (ESS) or a connection through a storage area network.
• 802.3u (100Base-T and 1000Base-T)

For detailed information on the G4SA, see the NonStop Gigabit Ethernet 4-Port Installation and Support Guide.
...The maintenance switch mounts to the 19-inch rack within a NonStop NS-series cabinet, but no restrictions exist for its placement. This illustration shows an example of two maintenance switches installed in the top of a cabinet. [Figure: two maintenance switches with RJ-45 Ethernet connections.]
Cabinet configurations that include an R5500 XR UPS also have one extended runtime module (ERM). An ERM is a battery module that extends the overall battery-supported system run time. A second ERM can be added for even longer battery-supported system run time. Adding an R5500 XR UPS to a modular cabinet in the field requires changing the PDU on the right to one compatible with the UPS. Both the UPS and the ERM are 3U high and must reside in the bottom of the cabinet.
Some system console hardware, including the PC system unit, monitor, and keyboard, can be mounted in the Integrity NonStop NS-series system's 19-inch rack. Other PCs are installed outside the rack and require separate provisions or furniture to hold the PC hardware. For more information on the system console, refer to System Console on page 5-1.
ServerNet Cluster Cables
These cables connect the Integrity NonStop NS-series systems to a ServerNet cluster (zone) with Model 6780 NonStop ServerNet switches:

Cable Type  Connectors  Length (meters)  Length (feet)  Product ID  Part Number
SMF         LC-LC       2                7              M8921-2     525555
SMF         LC-LC       5                16             M8921-5     525565
SMF         LC-LC       10               33             M8921-10    522746
SMF         LC-LC       25               82             M8921-25    526107
SMF         LC-LC       40               131            M8921-40    525516
SMF         LC-LC       80               262            M8921-80    522747
SMF         LC-LC       100              410            M8921-100   522749

These cables connect the p-...
This photo shows an example of CMS trays and clamps for the components installed in an IOAM enclosure. [Photo]

For details on using the CMS, refer to the Integrity NonStop NS-Series Hardware Installation Manual.

Enterprise Storage System
An enterprise storage system (ESS) is a collection of magnetic disks, their controllers, and the disk cache in one or more standalone cabinets.
This illustration shows an example of connections between two Integrity NonStop NS-series server IOAM enclosures and an ESS via separate Fibre Channel switches. [Figure: each IOAM enclosure's FCSA connects through its own Fibre Channel switch to the ESS.]

For fault tolerance, the primary and backup paths to an ESS logical device (LDEV) must go through different Fibre Channel switches.
Terminology
Terms used in locating and describing components are:

Term         Definition
Cabinet      Computer system housing that includes a structure of external panels, front and rear doors, rack, and dual PDUs.
Rack         Structure inside the cabinet into which rackmountable components are assembled.
Rack offset  The physical location of components installed in a rack, measured in U values numbered 1 to 42, with 1U at the bottom of the rack. A U is 1.75 inches (44 millimeters).
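A short worked example, assuming offsets count from 1U at the bottom as defined above: a component installed at rack offset 12U has its bottom edge (12 - 1) x 1.75 inches = 19.25 inches (about 489 millimeters) above the bottom U position of the rack.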
In NonStop S-series systems, group-module-slot (GMS) notation identifies the physical location of a component. In Integrity NonStop NS-series systems, however, GMS notation gives the logical location of particular components rather than the physical location.

Rack and Offset Physical Location
Rack name and rack offset identify the physical location of components in an Integrity NonStop NS-series system.
Slice Group-Module-Slot Numbering
A number of GMS configurations are possible in the modular Integrity NonStop NS-series system. This table shows the default numbering for the logical processors:

Logical Processors  Group (Processor Complex)
0-3*                400
4-7                 401
8-11                402
12-15               403

Module (Slice): 1 (A), 2 (B), 3 (C). Slot (optic port): optical adapters 1-8 (software-identified as slots 71-78), ports J0-J7 and K0-K7.

* Logical processor 0 must be in processor complex 0 (group 400).
LSU Group-Module-Slot Numbering
This table shows the default numbering for the LSUs:

Item                   Group (Processor Complex)¹  Module                          I/O Position (Slot)
Individual LSU, J set  400-403                     100 + processor complex number  1 = LSU optics adapter; 2 = LSU logic board
Individual LSU, K set  Not used at this time

1. See Slice Group-Module-Slot Numbering on page 2-29.
Processor Switch Group-Module-Slot Numbering

Group: 100. Module: 2 = X ServerNet, 3 = Y ServerNet.

Slot    Item
1       Maintenance PIC
2       Cluster PIC
3       Crosslink PIC
4-9     ServerNet I/O PICs
10      ServerNet PIC (processors 0-3)
11      ServerNet PIC (processors 4-7)
12      ServerNet PIC (processors 8-11)
13      ServerNet PIC (processors 12-15)
14      Processor switch logic board
15, 18  Power supplies A and B
16, 17  Fans A and B

This illustration shows the processor switch slot numbering. [Figure]
IOAM Enclosure Group-Module-Slot Numbering

IOAM Group  P-Switch PIC Slot  PIC Port Numbers
110         4                  1-4
111         5                  1-4
112         6                  1-4
113         7                  1-4
114         8                  1-4
115         9                  1-4

IOAM Group                     Module                            Slot    Item                          Port
110-115 (see preceding table)  2 = X ServerNet, 3 = Y ServerNet  1 to 5  ServerNet adapters            1 to n, where n is the number of ports on the adapter
                                                                 14      ServerNet switch logic board  1-4
                                                                 15, 18  Power supplies                -
                                                                 16, 17  Fans                          -

[Figure]
Disk Drive Enclosure Group-Module-Slot Numbering

IOAM Group  IOAM Module                 IOAM Slot  FCSA FSACs
110-115     2 = X fabric; 3 = Y fabric  1-5        1, 2

Disk drive enclosure shelf: 1-4 if daisy-chained; 1 if a single disk enclosure.

DDE Slot  Item
0         Disk drive enclosure
1-14      Disk drives
89        Transceiver A1
90        Transceiver A2
91        Transceiver B1
92        Transceiver B2
93        Left FC-AL board
94        Right FC-AL board
95        Left power supply
96        Right power supply
NonStop S-Series I/O Enclosures
Topics discussed in this subsection are: IOMF 2 CRU (2-36), NonStop S-Series Disks and ServerNet Adapters (2-36), and NonStop S-Series I/O Enclosure Group Numbers (2-36).

NonStop S-series I/O enclosures can be connected to Integrity NonStop NS-series systems to retain not only previously installed hardware, but also data stored on disks mounted in the NonStop S-series I/O enclosures.
IOMF 2 CRU
Each p-switch (for the X or Y ServerNet fabric) has up to six I/O PICs, with one I/O PIC required for each IOAM enclosure in the system. Each NonStop S-series I/O enclosure uses one port on an I/O PIC, and each I/O PIC has four ports, so a maximum of 24 NonStop S-series I/O enclosures (6 PICs x 4 ports) can be connected to an Integrity NonStop NS-series system if no IOAM enclosure is installed.
This table shows the group number assignments for the NonStop S-series I/O enclosures:

P-Switch PIC Slot (X and Y ServerNet)  PIC Connector  I/O Enclosure Group
4                                      1, 2, 3, 4     11, 12, 13, 14
5                                      1, 2, 3, 4     21, 22, 23, 24
6                                      1, 2, 3, 4     31, 32, 33, 34
7                                      1, 2, 3, 4     41, 42, 43, 44
8                                      1, 2, 3, 4     51, 52, 53, 54
9                                      1, 2, 3, 4     61, 62, 63, 64
This illustration shows the group number assignments on the p-switch. [Figure: I/O PIC slots 4 through 9 connect to I/O enclosure groups 11-14, 21-24, 31-34, 41-44, 51-54, and 61-64.]
3. ServerNet Communications Network

The ServerNet communications network is a high-speed network within an Integrity NonStop NS-series system that connects processor complexes to the ServerNet adapters and system I/O, such as disks and Ethernet LANs, by way of the p-switches and the IOAM switch boards or NonStop S-series I/O enclosures. This network uses the ServerNet architecture, which is full-duplex, packet-switched, and point-to-point in a star configuration.
The two fabrics, the X fabric and the Y fabric, together provide a fault-tolerant interconnection for the ServerNet network. Each processor complex connects to both fabrics, with ServerNet adapters also connecting to both fabrics. The X fabric and the Y fabric are independent and therefore are not connected to each other. Communications occurring on one ServerNet fabric cannot cross over to the other ServerNet fabric.
Simplified ServerNet System Diagram
This is a simplified diagram of the ServerNet architecture in the Integrity NonStop NS-series system. [Figure: disks, Ethernet devices, and a router attach through ServerNet adapters in an I/O adapter module and a NonStop S-series I/O enclosure; X-fabric and Y-fabric p-switches connect to four processor complexes, each with X and Y ports.]
...X ServerNet fabric and the other the Y fabric. In this drawing, the nomenclature PIC n means the PIC in slot n. For example, PIC 4 is the PIC in slot 4. [Figure: from I/O PICs 4 through 9, a 4-optic fat pipe runs to each IOAM ServerNet switch board, or one optic line to an IOMF 2 CRU in a NonStop S-series I/O enclosure; routers on the p-switch also serve the cluster PIC (slot 2), crosslink PIC (slot 3), and maintenance PIC (slot 1).]
Example System ServerNet Pathways
This drawing shows the redundant routing and connection of the ServerNet X and Y fabrics within a simple example system. This example system includes:

• Four processors with their requisite four LSU optic adapters
• One IOAM enclosure connected to the PIC in slot 4 of each p-switch, making the IOAM enclosure group 110
• Four IOAM enclosures of group numbers 110 through 113
• Four NonStop S-series I/O enclosures of group numbers 61 through 64

The IOAM enclosures can reside in the same or different cabinets, with MMF fiber-optic cables carrying the ServerNet communications between the IOAM enclosures and the p-switches.
...replaced, the affected fabric and the resources connected to it are again available to the system. In a second scenario, if one of the ServerNet adapters fails, only the Fibre Channel or Ethernet devices connected to the failed adapter are affected. The failure has no effect on the other resources on the same or the other ServerNet fabric.
This illustration shows a logical representation of a 16-processor complex with the associated slices and their optics adapters, the LSUs, and the p-switch for the X ServerNet fabric. [Figure: slices for groups 400 through 403, modules A, B, and C, each with slice optic adapters (SOAs) in slots 1 and 2 and S, T, Q, R reintegration connectors, linked through the LSUs to the X-fabric p-switch.]
This illustration shows a logical representation of a 16-processor complex with the associated slices and their optics adapters, the LSUs, and the p-switch for the Y ServerNet fabric. [Figure: as above, with the LSUs linked to the Y-fabric p-switch.]
4. System Configurations

This section introduces Integrity NonStop NS-series system configurations: Enclosure Locations in Cabinets (4-2), Internal ServerNet Interconnect Cabling (4-4), P-Switch to NonStop S-Series I/O Enclosure Cabling (4-15), IOAM Enclosure and Disk Storage Considerations (4-19), Fibre Channel Devices (4-19), G4SAs to Networks (4-31), Default Naming Conventions (4-32), PDU Strapping Configurations (4-34).

Integrity NonStop NS-series systems use a flexible modular architecture, so almost any combination of enclosure types and locations is possible.
Enclosure Locations in Cabinets
This example system shows one possible configuration using a duplex processor, or two slices. [Figure, rear view: connections to the AC power source or site UPS, maintenance switch, PDU junction boxes, disk drive enclosures, IOAM enclosure, Y-fabric p-switch, X-fabric p-switch, LSU enclosure, slice B, and slice A.]

For other example configurations, see Appendix B, Example Modular Configurations.
Enclosure or Component         Height  Required Cabinet (Rack) Location
Extended runtime module (ERM)  3U      Immediately above the UPS (and above the first ERM if two ERMs are installed); up to two ERMs can be installed.
Tip kit                        N/A     Bottom front exterior of cabinet; required for single-cabinet systems equipped with one or more slices, or when the cabinet is not bolted to its adjacent cabinet. A tip kit is not required when the cabinet is bolted to its adjacent cabinet.
Internal ServerNet Interconnect Cabling
This subsection includes: Cable Labeling (4-4), Cable Management System (4-5), Internal Interconnect Cables (4-5), Maintenance LAN Cables (4-6), Cable Length Restrictions (4-7), Internal Interconnect Cable Part Numbers (4-7), Processor Slices to LSUs (4-7), LSUs to Processor Switches and Processor IDs (4-8), Processor Switch ServerNet Connections (4-12), Processor Switches to IOAM Enclosures (4-13), and FCSA to Disk Drive Enclosures.
This label identifies the cable connecting the p-switches at U31 of both cabinets 1 and 2, at slot 3, connectors 1 or 2, which are crosslink connections between the two p-switches. Each label conveys this information:

Nn  Identifies the node number. One node can include up to six racks.
Rn  Identifies the rack number within the node.
Un  Identifies the offset, that is, the physical location of the component within the rack.
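As a worked reading of these fields (using the crosslink cable described above): a label beginning N1 R2 U31 identifies node 1, rack 2 within that node, and rack offset 31U; combined with slot 3 and connector 1, it points to a crosslink PIC connection on the p-switch installed at 31U of rack 2.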
MMF cables are usually orange, with a minimum bend radius of:

• Unsheathed: 1.0 inch (25 millimeters)
• Sheathed (ruggedized): 4.2 inches (107 millimeters)

You can use fiber-optic cables available from HP, or you can provide your own fiber-optic cables.
Cable Length Restrictions
Maximum allowable lengths of cables connecting the modular system components are:

Connection                      Fiber Type  Connectors  Maximum Length  Product ID
Slice to LSU enclosure          MMF         LC-LC       100 m           M8900nnn¹
Slice to slice                  MMF         MTP         50 m            M8920nnn¹
LSU enclosure to p-switch       MMF         LC-LC       125 m           M8900nnn¹
P-switch to p-switch crosslink  MMF         LC-LC       125 m           M8900nnn¹
P-switch to IOAM enclosure      MMF         LC-LC       125 m           M8900nnn¹
FCSA to disk drive enclosure    ...         ...         ...             ...
Processor Slice to Processor Slice
Reintegration cables interconnect the slices within individual duplex or triplex processor complexes, using connectors S, T, Q, and R, as shown in the illustrations on the next four pages.

LSUs to Processor Switches and Processor IDs
Each slice contains four processor elements, and each element is part of a numbered processor complex, such as 0, 1, 2, or 3.
This figure shows example connections to the default configuration of the slice reintegration links (slice connectors S, T, Q, R) and ports 1 to 4 on the p-switch PIC in slot 10, which defines triplex processor numbers 0 to 3. [Figure]
This figure shows example connections of the slice reintegration links (slice connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 11, for triplex processor numbers 4 to 7. [Figure: p-switch slot 11 ports 1 through 4 define processors 4, 5, 6, and 7.]
This figure shows example connections of the slice reintegration links (slice connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 12, for triplex processor numbers 8 to 11. [Figure: p-switch slot 12 ports 1 through 4 define processors 8, 9, 10, and 11.]
This figure shows example connections of the slice reintegration links (slice connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 13, for triplex processor numbers 12 to 15. [Figure: p-switch slot 13 ports 1 through 4 define processors 12, 13, 14, and 15.]
Processor Switch ServerNet Connections
ServerNet cables connected to the p-switch PICs in slots 10 through 13 come from the LSUs and processors, with the cable connection to these PICs determining the processor identification. (See LSUs to Processor Switches and Processor IDs on page 4-8.) Cables connected to the PICs in slots 4 through 9 connect to one or more IOAM enclosures or to NonStop S-series I/O enclosures equipped with IOMF 2 CRUs.
FCSA to Disk Drive Enclosures
This illustration shows an example of a fault-tolerant ServerNet configuration connecting two FCSAs, one in each IOAM module, to a pair of disk drive enclosures. [Figure: ServerNet links from the X-fabric and Y-fabric p-switches (PIC slot 4 shown) to the IOAM enclosure; Fibre Channel links from the FCSAs to the EMUs of both disk drive enclosures.]
Integrity NonStop NS-series systems do not support SCSI buses or adapters for connecting tape devices. However, SCSI tape devices can be connected through a T1200 Fibre Channel to SCSI converter (model M8201), which allows connection to SCSI tape drives. For interconnect cable information and installation instructions, see the M8201 Fibre Channel to SCSI Router Installation and User's Guide.
Up to 24 NonStop S-series I/O enclosures can be connected to an Integrity NonStop NS-series system via these ServerNet links. A single fiber-optic cable provides the ServerNet link between an I/O PIC port on both the X and Y p-switches and the I/O multifunction 2 (IOMF 2) CRUs in a NonStop S-series I/O enclosure. For cable types and lengths, see Internal Cables on page 2-24.
These restrictions or requirements apply when integrating NonStop S-series I/O enclosures into an Integrity NonStop NS-series system:

• Only NonStop S-series I/O enclosures equipped with IOMF 2 CRUs can be connected to an Integrity NonStop NS-series system. The IOMF 2 CRU must have an MMF PIC installed.
° A serial cable from each SPON connector on the p-switch carries power-on signals to the NonStop S-series I/O enclosure. This cable is a unidirectional SPON cable, used only for connection between an Integrity NonStop NS-series system p-switch and the IOMF 2 CRU in the NonStop S-series I/O enclosure.
IOAM Enclosure and Disk Storage Considerations
When deciding between one IOAM enclosure and two (or more), consider:

One IOAM enclosure: high-availability and fault-tolerant attributes comparable to NonStop S-series systems with I/O enclosures using tetra-8 and tetra-16 topologies.
Two IOAM enclosures: greater availability because of multiple redundant ServerNet paths and FCSAs.
This illustration shows an FCSA with the ports that are used and not used in Integrity NonStop NS-series systems. [Figure: Fibre Channel ports 1 and 2 are used; the Ethernet ports are not used.]
Disk drive enclosures connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop cables. This drawing shows the two Fibre Channel arbitrated loops implemented within the disk drive enclosure. [Figure]
Fibre Channel Device Configuration Restrictions
To prevent creating configurations that are not fault-tolerant or do not promote high availability, these restrictions exist and are enforced by SCF:

• Primary and mirror disk drives cannot connect to the same Fibre Channel loop. Loss of the Fibre Channel loop would make not only the primary volume but also the mirror volume inaccessible. Such a configuration inhibits fault tolerance.
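Because SCF rejects configurations that violate these rules, the primary and mirror paths of a volume are always added through different loops. The following is a hypothetical sketch of such an ADD, patterned on NonStop S-series SCF storage syntax; the location attribute names are illustrative assumptions rather than verified NS-series syntax, so consult the SCF Reference Manual for the storage subsystem for the exact form:

  -> ADD DISK $DATA01, PRIMARYLOCATION (110,2,1), MIRRORLOCATION (110,3,1), PRIMARYCPU 0, BACKUPCPU 1
  == primary path through IOAM module 2 (X fabric), mirror through module 3 (Y fabric):
  == the two drives sit on different Fibre Channel loops, so the configuration is accepted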
• With primary and mirror disk drive enclosures in the same cabinet, the primary disk drive enclosure resides in a lower U position than the mirror disk drive enclosure.
• Fibre Channel disk drives are configured with dual paths.
• Where possible, FCSAs and disk drive enclosures are configured in the four-FCSA, four-disk-drive-enclosure configuration for maximum fault tolerance.
Example IOAM and Disk Drive Enclosure Configurations
These subsections show examples of various configurations of FCSA controllers and disk drive enclosures with IOAM enclosures.
Disk Volume Name  FCSA GMSP                Disk GMES*
$OSS (primary)    110.2.1.1 and 110.3.1.1  110.211.104
$SYSTEM (mirror)  110.2.1.2 and 110.3.1.2  110.212.101
$DSMSCM (mirror)  110.2.1.2 and 110.3.1.2  110.212.102
$AUDIT (mirror)   110.2.1.2 and 110.3.1.2  110.212.103
$OSS (mirror)     110.2.1.2 and 110.3.1.2  110.212.104
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-enclosure-slot (GMES) identification for the factory-default system disk locations in the configuration of four FCSAs, four disk drive enclosures, and one IOAM enclosure:

Disk Volume Name     FCSA GMSP                Disk GMES*
$SYSTEM (primary 1)  110.2.1.1 and 110.3.1.1  110.211.101
$DSMSCM (primary 1)  110.2.1.1 and 110.3.1.1  110.211.102
$AUDIT (primary 1)   110.2.1....              ...
Disk Volume Name    FCSA GMSP                Disk GMES*
$OSS (primary 1)    110.2.1.1 and 111.2.1.1  110.211.104
$SYSTEM (mirror 1)  110.2.1.2 and 111.2.1.2  110.212.101
$DSMSCM (mirror 1)  110.2.1.2 and 111.2.1.2  110.212.102
$AUDIT (mirror 1)   110.2.1.2 and 111.2.1.2  110.212.103
$OSS (mirror 1)     110.2.1.2 and 111.2.1.2  110.212.104
Disk Volume Name  FCSA GMSP                Disk GMES*
$AUDIT (mirror)   110.2.1.2 and 111.2.1.2  110.212.103
$OSS (mirror)     110.2.1.2 and 111.2.1.2  110.212.104

* For an illustration of the disk drive enclosure factory-default slot locations, see Factory-Default Disk Volume Locations on page 4-21.

Daisy-Chain Configurations
When planning for possible use of daisy-chained disks, consider:

Daisy-chained disks are recommended for ...
A second identical configuration, including an IOAM enclosure and four disk drive enclosures, is required for fault-tolerant mirrored disk storage. [Figure: disk drive enclosures 1 through 4 daisy-chained on the A and B sides through ID expanders, with terminators at the chain ends, connected by Fibre Channel cables to two FCSAs in the IOAM enclosure.]
Four FCSAs, Three DDEs, One IOAM Enclosure
This illustration shows example Fibre Channel cable connections between the four FCSAs and three disk drive enclosures, with the primary and mirror drives split within each disk drive enclosure. [Figure: DDE 1 holds primary 1 and mirror 2, DDE 2 holds primary 2 and mirror 3, DDE 3 holds primary 3 and mirror 1, all connected to the FCSAs in the IOAM enclosure.]
This illustration shows the factory-default locations for the configuration of four FCSAs and three disk drive enclosures, where the primary system file disk volumes are in disk drive enclosure 1. [Figure, front view of the disk drive enclosure: $SYSTEM (slot 1), $DSMSCM (slot 2), $AUDIT (slot 3), $OSS (slot 4).]
This illustration shows the G4SA. [Figure: LC connectors (fiber), an RJ-45 connector (10/100/1000 Mbps), and an RJ-45 connector (10/100 Mbps).]
Default Naming Conventions
Users can name their resources at will and use the appropriate management applications and tools to find out where a resource is. However, default naming conventions for certain resources simplify creation of the initial configuration files and automatic generation of the names of the modular resources.
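Once a system is running, the generated default names can be reviewed with SCF. A minimal sketch, assuming the standard SCF LISTDEV command (output columns vary by RVU):

  > SCF
  -> LISTDEV DISK    == lists configured disk volumes, such as $SYSTEM, $DSMSCM, $AUDIT, and $OSS
  -> LISTDEV         == lists all configured devices and their logical device numbers
  -> EXIT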
PDU Strapping Configurations
PDUs are factory-strapped for the type and voltage of AC power at the intended installation site for the system. If necessary, a licensed electrician or trained service provider can change the strapping in the field.
This illustration shows the PDU strapped for 208 V AC North America delta power. [Figure: PDU power outlets 1 through 10, each with line (L), neutral (N), and ground (G) pins. Wire color code, North America: L = line pin, black insulation; N = neutral pin, white insulation; G = ground pin, green insulation. Wire color code, EU harmonized: L = line pin, brown insulation; N = neutral pin, blue insulation.]
This illustration shows the PDU strapped for 250 V AC 3-phase power. [Figure: PDU power outlets 1 through 11 with line (L), neutral (N), and ground (G) pins. Wire color code, North America: L = line pin, black insulation; N = neutral pin, white insulation; G = ground pin, green insulation. Wire color code, EU harmonized: L = line pin, brown insulation; N = neutral pin, blue insulation.]
This illustration shows the PDU strapped for 250 V AC single-phase power. [Figure: PDU power outlets with line (L), neutral (N), and ground (G) pins. Wire color code, North America: L = line pin, black insulation; N = neutral pin, white insulation; G = ground pin, green insulation. Wire color code, EU harmonized: L = line pin, brown insulation; N = neutral pin, blue insulation.]
5. Control, Configuration, and Maintenance Tools

This section introduces the control, configuration, and maintenance tools used in Integrity NonStop NS-series systems: System Console (5-1), Maintenance Entity (5-4), Dedicated Maintenance LAN (5-8), System-Down OSM Low-Level Link (5-8), System-Up Dedicated Maintenance LAN (5-8), OSM (5-9), System Servicing Information on the CSSI Web (5-11), Subsystem Control Facility (SCF) (5-12), NonStop Common Foundation (NCF) (5-13), and Neighbor-Check ServerNet Cable Verification.
System Console
Some system console hardware, including the PC system unit, monitor, and keyboard, can be mounted in the Integrity NonStop NS-series system's 19-inch rack. Other PCs are installed outside the rack and require separate provisions or furniture to hold the PC hardware. System consoles communicate with Integrity NonStop NS-series servers over a dedicated maintenance local area network (LAN) or a nondedicated (public) LAN.
You can perform these types of operations using HP and third-party software installed on your system console: DSM/SCM (for software configuration changes), the OSM Low-Level Link, the OSM Service Connection, the OSM Event Viewer, and the OSM Notification Director. The OSM Low-Level Link connects to the maintenance entity (ME) logic and firmware in the processor switches and IOAM ServerNet switches, which allows client software to communicate with the server even when the operating system is not running.
Your system console configuration can be any of:

• One system console managing one system (not recommended)
• One system console managing multiple systems (not recommended)
• Multiple system consoles managing one system
• Multiple system consoles managing multiple systems
Maintenance Architecture Overview
This simplified illustration shows the three elements of the maintenance architecture plus the OSM maintenance console applications. [Figure: the OSM console and a link to the remote support center attach to the maintenance LAN through the maintenance switch; the fabrics functional element (p-switch and its ME), the I/O functional element (IOAM and its ME, with G4SA, FCSA, DDE, FC-AL, EMU, and NonStop S-series I/O), and the processor functional element (slices, LSUs, and PEs) all connect to the ServerNet fabrics.]
Fabrics Functional Element
The p-switch module contains the functionality of a single-fabric functional element. Each of the p-switches in the system provides a single-fabric ServerNet interconnect to processor functional elements, IOAMs, and NonStop S-series I/O enclosures. Each p-switch contains a single ME that controls only its internal hardware.
I/O Functional Element
The IOAM enclosure contains the functionality of two fabric functional elements. In addition, each IOAM enclosure is divided into two IOAMs, each of which contains five ServerNet adapters and one ServerNet switch board. (See I/O Adapter Module (IOAM) Enclosure and I/O Adapters on page 2-17.)
Dedicated Maintenance LAN
A dedicated maintenance LAN provides connectivity between the OSM console running on a PC and the maintenance firmware in the system hardware. This maintenance LAN uses a ProCurve 2524 Ethernet switch for connectivity between the Integrity NonStop NS-series hardware elements (such as the p-switches and the ServerNet switch boards for each IOAM) and the system console.
...Director communication for maintenance in a running system. An example connection follows: [Figure: the G4SA Ethernet PIF connectors (D, C, B, A) in module 2 or 3 of the IOAM enclosure (group 110), with a cable from a PIF to the maintenance switch.]

For additional information on the dedicated maintenance LAN, see Dedicated Maintenance LAN on page 6-1.
For information on how to configure and start OSM server-based processes and components, see the OSM Migration Guide.
OSM Low-Level Link
The OSM Low-Level Link enables you to communicate with a server even when the NonStop operating system is not running. Also, some actions that are performed on a running server, such as starting the system or priming a processor for reload, require you to use the OSM Low-Level Link. The OSM Low-Level Link can be installed and used only if you select the dedicated maintenance LAN option during OSM client installation.
  ° FC-AL I/O module
  ° EMU
  ° Power supply
  ° Blower
  ° Fibre Channel disk
• Replace LSU logic board and optics adapter
• Replace slice FRUs:
  ° Processor board
  ° Memory board
  ° I/O interface board
  ° Optics adapter
  ° Fan (three types)
  ° Reintegration board
• Replace IOAM enclosure FRUs:
  ° ServerNet switch board
  ° Power supply
  ° FCSA
  ° G4SA
  ° Fan
• Replace modular cabinet FRUs:
  ° AC power cable
  ° Fuse
  ° Front and rear doors

Subsystem Control Facility (SCF)
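SCF is the command interface used throughout this guide for configuring and displaying system resources, either from a prepared command file or interactively. A minimal sketch of both styles follows; the command file name is hypothetical, and output formats vary by RVU:

  > SCF /IN $SYSTEM.STARTUP.SCFCONF/   == run a prepared SCF command file (hypothetical file name)

  > SCF                                == or start an interactive session
  -> LISTDEV                           == list all configured devices
  -> EXIT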
NonStop Common Foundation (NCF)
NonStop Common Foundation (NCF) is a set of TACL programs, guidelines, and utilities that simplifies, automates, and integrates configuration and management of NonStop servers. NCF includes configuration scripts to tailor server resources, a startup and shutdown application to improve system availability, and a collection of programs to automate server management.
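For context, NCF components are invoked from TACL like any other macro or program. The following is a minimal hypothetical sketch of a TACL startup macro of the kind NCF automates; every file, macro, and process name here is illustrative, not an actual NCF name:

  ?TACL MACRO
  == hypothetical startup wrapper: apply an SCF configuration, then start an application monitor
  SCF /IN $SYSTEM.STARTUP.SCFCONF/
  RUN $SYSTEM.APPL.MONITOR /NAME $APMON, NOWAIT/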
AC Power Monitoring
Unlike previous NonStop servers, which used internal batteries for memory hold-up through power failures, Integrity NonStop NS-series servers require either the optional HP model R5500 XR UPS (with one or two ERMs for additional battery power) installed in each modular cabinet, or a user-supplied site UPS, to support system operation through power transients or an orderly system shutdown in a power failure.
AC Power-Fail States
These states occur when a power failure occurs and an optional HP model R5500 XR UPS is installed in each cabinet within the system:

System State  Description
NSK_RUNNING   The NonStop operating system is running normally.
RIDE_THRU     OSM has detected a power failure and begins timing the outage. If AC power returns, RIDE_THRU terminates and the operating system is put back into the NSK_RUNNING state.
Tools and Supplies
You might need these tools and supplies when replacing a FRU:

Tool                 Purpose
ESD protection kit   Protect components against electrostatic discharge
Safety glasses       Prevent eye injury from flying particles
Scissors or cutters  Cut cable ties
Cable ties           Secure cable assemblies to cable tie mounts in a system enclosure
Ladder               Reach the top of the modular cabinet
Flashlight           Peer into dark places
Labels, pens, or p...
6. Planning for LAN Communications

This section describes LAN considerations for Integrity NonStop NS-series systems: Dedicated Maintenance LAN (6-1), Public LANs for Applications (6-12), Planning for a Dedicated Maintenance LAN (6-11), System Console Configurations (6-12), and Remote Dial-In and Dial-Out Support (6-16).

Dedicated Maintenance LAN
This subsection includes: IP Addresses (6-3), Ethernet Cables (6-6), SWAN Concentrator Restriction (6-6), System-Up Dedicated Maintenance LAN (6-7), and the Maintenance LAN Links topics.
• Processor switch on the X fabric, for OSM Low-Level Link to a down system
• Processor switch on the Y fabric, for OSM Low-Level Link to a down system
• IOAM module 2 ServerNet board, for OSM control of the I/O hardware
• IOAM module 3 ServerNet board, for OSM control of the I/O hardware
• UPS (optional), for power-fail monitoring
• T1200 Fibre Channel to SCSI converter (optional), for Fibre Channel tape
• Connections to both X and Y fabrics (for fault...
This illustration shows a fault-tolerant LAN configuration with two maintenance switches. [Figure: primary and backup system consoles, each with a remote service provider modem; an optional DHCP/DNS server on the operations LAN; optional connections (one or two) to the operations LAN; and maintenance switches 1 and 2 connected to the system hardware.]
• T1200 FC to SCSI converter (optional)

These components have the default IP addresses that are preconfigured at the factory:

Component (page 1 of 2)                     Location          GMS       Default IP Address
P-switch ServerNet switch boards            N/A               100.2.14  192.231.36.202
                                                              100.3.14  192.231.36.203
IOAM enclosure ServerNet switch boards      N/A               110.2.14  192.231.36.222
                                                              110.3....  ...
G4SA and NonStop S-series E4SA, FESA, GESA  N/A               ...       ...
Maintenance switch (ProCurve 2524)          Rack 01, Rack 02  ...       ...
Component (page 2 of 2)                    Location  GMS  Default IP Address  Used By
UPS (rackmounted only)                     Rack 01   N/A  192.231.36.31       OSM Service Connection
                                           Rack 02        192.231.36.32
                                           Rack 03        192.231.36.33
                                           Rack 04        192.231.36.34
                                           Rack 05        192.231.36.35
                                           Rack 06        192.231.36.36
                                           Rack 07        192.231.36.37
                                           Rack 08        192.231.36.38
TCP/IP processes for OSM ($ZTCP0, $ZTCP1)            ...  ...
First G4SA (port A) and NonStop S-series
E4SA, FESA, or GESA                                       192.231.36....
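These preconfigured addresses can be verified, and changed where required (for example, on a second system joining the same LAN; see System Console Configurations), through SCF on the OSM TCP/IP processes. A minimal sketch, assuming a subnet named #SN1 (subnet names vary by configuration) and that a subnet must generally be stopped before its address is altered:

  > SCF
  -> INFO SUBNET $ZTCP0.*                               == display the configured IP addresses
  -> STOP SUBNET $ZTCP0.#SN1
  -> ALTER SUBNET $ZTCP0.#SN1, IPADDRESS 192.231.36.40  == example replacement address
  -> START SUBNET $ZTCP0.#SN1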
Some guidelines for configuring the DHCP server:

• Configure the range of IP addresses to be assigned dynamically by the DHCP server to be in the same subnet as existing IP addresses on the LAN and any static IP addresses included in the maintenance LAN.
• IP addresses for these components should remain static:
  ° The four IP addresses reserved for the OSM Low-Level Link and Service Connection.
One possible fault-tolerant configuration is a pair of G4SAs (each in a different IOAM module) with the maintenance LAN connected to the A port on each G4SA. The B port is then available to support the SWAN LAN.

System-Up Dedicated Maintenance LAN
When the system is up and the OS is running, the ME connects to the NonStop NS-series system's dedicated maintenance LAN using one of the PIFs on each of two G4SA adapters.
The values in this table show the identification for the G4SAs in the preceding illustration:

GMS for G4SA Location in IOAME  G4SA PIF    G4SA LIF  TCP/IP Stack  IP Configuration
110.2.5                         G11025.0.A  L1102R    $ZTCP0        IP 192.231.36.10; GW 192.231.36.9; Subnet %hFFFFFF00; Hostname osmlanx
110.3.5                         G11035.0.A  L1103R    $ZTCP1        IP 192.231.36.11; GW 192.231.36....
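A sketch of verifying these objects through SCF on the SLSA subsystem, using the PIF and LIF names from the table (output formats vary by RVU):

  > SCF
  -> STATUS PIF $ZZLAN.G11025.0.A   == confirm the maintenance-LAN PIF is up
  -> STATUS LIF $ZZLAN.L1102R       == confirm the logical interface is started
  -> EXIT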
The values in this table show the identification for the G4SAs in the preceding illustration:

GMS for G4SA Location in IOAME  G4SA PIF    G4SA LIF  TCP/IP Stack  IP Configuration
110.2.5                         G11025.0.A  L1102R    $ZTCP0        IP 192.231.36.10; GW 192.231.36.9; Subnet %hFFFFFF00; Hostname osmlanx
111.3.5                         G11135.0.A  L1113R    $ZTCP1        IP 192.231.36.11; GW 192.231.36....
Maintenance LAN Links With NonStop S-Series I/O Enclosure

The values in this table show the identification for the adapters in the preceding illustration:

GMS | Adapter PIF | Adapter LIF | TCP/IP Stack | IP Configuration
110.2.5 | G11025.0.A | L1102R | $ZTCP0 | IP 192.231.36.10, GW 192.231.36.9, subnet %hFFFFFF00, hostname osmlanx
12.1.54 | E1254.0.A | L12C | $ZTCP1 | IP 192.231.36.11, GW 192.231.36.
This configuration can be used in cases where no IOAM enclosure is in a NonStop NS-series system and only the NonStop S-series I/O enclosure provides the system I/O connections and mass storage:

GMS | Adapter PIF | Adapter LIF | TCP/IP Stack | IP Configuration
11.1.53 | E1153.0.A | L118 | $ZTCP0 | IP 192.231.36.10, GW 192.231.36.9, subnet %hFFFFFF00, hostname osmlanx
11.1.54 | E1154.0.A | L11C | $ZTCP1 | IP 192.231.36.
Keep track of all the IP addresses in your system so that no IP address is assigned twice.

Operating Configurations for Dedicated Maintenance LANs

You can configure the dedicated maintenance LAN in several different ways, as described in the OSM Migration Guide. HP recommends that you use a fault-tolerant LAN configuration.
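A small script can automate the duplicate-address bookkeeping recommended above. The sketch below is a hypothetical aid (component names and addresses are placeholders) that flags any address assigned to more than one component:

# Flag any IP address assigned to more than one component.
from collections import Counter

assignments = {
    "$ZTCP0 (first G4SA, port A)": "192.231.36.10",
    "$ZTCP1 (second G4SA, port A)": "192.231.36.11",
    "UPS, rack 01": "192.231.36.31",
    "UPS, rack 02": "192.231.36.32",
}

for addr, count in Counter(assignments.values()).items():
    if count > 1:
        owners = [c for c, a in assignments.items() if a == addr]
        print(f"DUPLICATE {addr}: {', '.join(owners)}")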
One System Console Managing One System (Setup Configuration)

[Figure: one system console with a remote service provider modem, an optional DHCP/DNS server, and an optional connection to the operations LAN, connected through maintenance switch 1 to the processor switches, ServerNet switch boards, FCSAs, and G4SAs.]
Because all servers are shipped with the same preconfigured IP addresses for MSP0, MSP1, $ZTCP0, and $ZTCP1, you must change these IP addresses for the second and subsequent servers before you can add them to the LAN.

Primary and Backup System Consoles Managing One System

This is the recommended configuration.
[Figure: primary and backup system consoles, each with a remote service provider modem, an optional DHCP/DNS server, and the operations LAN, connected through maintenance switch 1 and maintenance switch 2 to the processor switches, ServerNet switches, FCSAs, and G4SAs.]
same subnet. If a server is configured to receive dial-ins, the server must occupy the same subnet as the system console receiving the dial-ins. For best OSM performance, no more than 10 servers should be included within one subnet.
7 System Installation Planning

Each Integrity NonStop NS-series system installation is unique, not only in its hardware configuration but also in the purpose and applications that the system will run. Also unique to your system are its peripheral hardware, whether you will integrate NonStop S-series hardware into the Integrity NonStop NS-series system, and the physical site where the system will be installed.
Factory-Installed Hardware Configuration Documentation

This document is an example only and will not be the same as the one included with your system.

[Figure: example tech doc (sheet 1) listing, by rear U location (41-42 down to 32), factory-installed connections such as power cords (12A, C20-C13, 0.68 m), TSI T1200 serial, SCSI 0 and 1, and Fibre Channel ports, and primary JBOD 1 FC-AL A1/B1/A2/B2 connections.]
Example Tech Doc for Duplex Processor System (Sheet 2 of 2)

[Figure: sheet 2 lists, by rear U location (16 down to 12), the LSUE 0 connections for slots 20 through 23: the ServerNet X and Y links and the strand A and strand B links to the processor slices.]
Site Preparation Walk-Through

This subsection introduces the site appraisal form that can be used by HP account teams and professional service delivery consultants. This example form is a guide for collecting data and identifying areas of concern that can affect the installation, supportability, or operational reliability of new or relocated equipment.
Process

1. Inspect and mark the Yes, No, or RISK box for each line item. Provide an explanation in the comments area for all items marked RISK. Some No replies might require an explanation for clarification.
2. Ensure all items are completed.
3. For questionable items, contact the appropriate Data Center Services group, such as Data Center Thermal Analysis. An item deemed a RISK can impact the installation and/or stability of the system.
Computer Room

Section A: Computer room (mark each item Yes, No, or RISK)

A-1 Is there a copy of a customer-provided floor plan available?
A-2 Is new equipment or any equipment to be relocated indicated on the floor plan?
A-3 Is there adequate space for airflow and safe maintenance needs? (see local codes and product specifications)
A-4 Is construction or modification to the data center complete?
A-5 Is the location where new or relocated
A-18 What is the depth of the raised floor? (46 cm [18 in] minimum, but 61 cm [24 in] or more is needed for high-density cooling)
A-19 How old is the data center?
A-20 What type of ceiling is being used? (open, drop, solid, etc.)
A-21 Can the room handle high-density equipment? (the ability of the data center to meet increasing power and cooling demands)

Section A: Comments:
Power and Lighting

Section B: Power and Lighting (mark each item Yes, No, or RISK)

B-1 Are lighting levels adequate for safe maintenance? (observe behind equipment rows, particularly those adjacent to walls)
B-2 Does the customer intend to dual-source the AC input? (2 power sources, etc.)
B-12 Supply voltage total harmonic distortion (THD)?
B-13 Capacity of input AC? (Source 1, kVA)
B-14 Capacity of input AC? (Source 2, kVA)
B-15 System load kW/kVA?
B-16 Distance from dedicated branch circuit to equipment breaker?

Notes: Customer-supplied information. System load should be expressed as both kVA and kW, since the power factor is significantly different from that of most UPSs. The maximum HP-recommended distance is 75 feet [22.9 m].
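The note for B-15 deserves a worked example. The figures below are illustrative only (they are not HP specifications); they show why a load must be checked against both the kVA and kW ratings of its source when the power factors differ:

# Why system load is stated as both kVA and kW: a UPS can be kW-limited
# even when it has spare kVA capacity. All values here are examples.
load_kva = 7.6            # apparent power drawn by the equipment
load_pf = 0.95            # assumed power factor of the IT load

load_kw = load_kva * load_pf
print(f"Load: {load_kva} kVA at PF {load_pf} = {load_kw:.2f} kW")

ups_kva, ups_kw = 8.0, 5.6    # example UPS rated at power factor 0.7
print(f"kVA headroom: {ups_kva - load_kva:+.2f}")
print(f"kW headroom:  {ups_kw - load_kw:+.2f}")   # negative: undersized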
Section B: Comments:
Safety

Section C: Safety (mark each item Yes, No, or RISK)

C-1 Is there an emergency power shut-off switch?
C-2 Does the emergency power-off simultaneously shut down airflow to the room?
C-4 Is there a fire protection system in the computer room?
C-5 Is antistatic flooring installed?
C-6 Are there any equipment servicing hazards?

Note: Customer-supplied information.
Air Conditioning

Section D: Air Conditioning (mark each item Yes, No, or RISK)

D-1 Can inlet air temperature be maintained at HP-recommended levels of 68°-77°F (20°-25°C) for best reliability?
D-2 Can temperature changes be held to less than 9°F [5°C] per hour?
D-3 Can the humidity level be maintained between 40% and 60% RH?
D-4 Is airflow adequate to prevent hot spots in high-density situations? (difficult to tell just by looking; measure
EMI and RFI

Section E: Electromagnetic Interference (Radio Frequencies) (mark each item Yes, No, or RISK)

E-1 Are two-way radios used in the computer room? (not recommended by HP)
E-2 Are cell or cordless phones used in the computer room? (not recommended)
E-3 Will facility employees or security guards use two-way radios in the building?
E-4 Is RFI a concern? (wireless LANs, 2-way pagers, other wireless equipment, etc.
E-5
Storage

Section F: Storage for Media, S/W, Manuals, Documentation, etc. (mark each item Yes, No, or RISK)

F-1 Are cabinets available for tape and disc media?
F-2 Is storage available for system documentation?

Section F: Comments:
Building Access, Security, and Equipment Delivery

Section G: Building Access, Security & Equipment Delivery (mark each item Yes, No, or RISK)

G-1 Is there security or access control for the customer site?
G-2 Is there security or access control for the computer room?
G-3 Are there any restrictions on bringing laptops, electronic tools, cell phones, or cameras into the computer room?
G-4 Are there any restrictions
G-18 Is the door/elevator height requirement of 90 inches met for racked delivery?
G-19 Is the path from loading bay to computer room robust enough to support the weight of the equipment? (see product specifications)
G-20 Will the equipment fit through all doors, corridors, and elevators, both in size and weight, without tilting?
G-21 Are there any ramp angles or thresholds that are of concern?
G-24 Are any special delivery windows or appointment times required for the carrier? Specify any special delivery date/time.

Note (1): Transit time begins counting the day following the HP ship date.
Communication

Section H: Communication (mark each item Yes, No, or RISK)

H-1 Is there a corded telephone available for HP Support's use in close proximity to the new equipment?
H-2 Are there communication lines available for remote support use? (see product specifications)
H-3 Are there additional phone lines available for any remote support modems (NSC, HSSC)? (systems, XP, or other equipment)
H-4 Are there IP addresses available for a
Floor Plan

Proposed floor plan (use for sketching; supply a finished copy via Visio). If a floor plan is not available from the customer, draw in where the equipment is proposed to be located. Include all cabinets, walls, doors, and air conditioners. 1 square = 600 mm.
Comments (HP only):
Completion Date:
Engineer Name:
8 Planning for System Availability and Maintenance

This section describes advance planning required for minimizing the total number of minutes that an application or system is unavailable, that is, downtime caused by a problem situation such as an AC power outage, faulty system or application software, faulty hardware, operator error, disaster, and so forth.

How Availability Is Measured

HP believes that availability should be measured from the end user's perspective. Simply recording that a certain hardware or software component is not operating is not enough.
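Measured this way, availability reduces to the number of minutes per year that the application is unusable from the end user's chair. A short sketch (illustrative arithmetic only) converts outage minutes per year into an availability percentage:

# Convert outage minutes per year into an availability percentage.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

def availability(outage_minutes: float) -> float:
    """Fraction of the year the application is usable by end users."""
    return 1.0 - outage_minutes / MINUTES_PER_YEAR

for minutes in (5.26, 52.6, 526.0):
    print(f"{minutes:7.2f} outage min/year -> {availability(minutes):.5%}")
# 5.26 min/year corresponds to the classic "five nines" (99.999%) level.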
Evaluating System Performance and Growth

Evaluating system performance and growth involves tracking and anticipating growth and then establishing plans to accommodate that growth:

Common Performance-Management Task | Definition | How This Task Helps Plan for Growth
Application sizing | Using models to determine how well new applications will handle their intended workloads | Helps you plan for growth in system workloads
Implementing a Formal Change-Control Process to Manage Change

Change control is the process for proposing, planning, implementing, and testing change, and it is a key requirement for minimizing the duration of planned outages.
Preventing Unplanned Outages

In addition to minimizing the number and duration of planned outages, preventing unplanned outages is an important component of minimizing outage minutes.

Causes of Unplanned Outages

Studies have identified four common causes of unplanned outages, in order of greatest frequency:

1. Operations management errors
2. Hardware configuration that is not fault-tolerant
3.
• Creating an alternate system disk so that it is possible to recover from unexpected difficulties in any of these problem-management procedures.
ServerNet Communication Pathway Considerations for FRU Replacement

ServerNet communications within Integrity NonStop NS-series systems are point-to-point, with routers implemented in star configurations. Because of the flexibility of the system architecture, no standard configuration or topology exists, such as the topologies for the NonStop S-series systems.
9 NonStop Migration Documentation and Considerations

This section describes where to find documentation about migrating from NonStop S-series systems to Integrity NonStop NS-series systems:

Topic | Page
Migrating Applications | 9-1
Other Manuals Containing Software Migration Information | 9-1
Migrating Legacy Hardware Products to NonStop NS-Series Servers | 9-2
Migration Considerations | 9-3

Migrating Applications

The H-Series Application Migration Guide is the primary source of information for migrating applications.
• pTAL Reference Manual
• SQL Supplement for H-Series Releases
• NonStop Server for Java 4 Programmer's Reference
• Tuxedo 8.0 Supplement for H-series RVUs
• Migrating CORBA Applications to H-Series RVUs
• H06.xx Software Installation and Upgrade Guide
• H06.xx Release Version Update Compendium
• NonStop System Console Installer Guide
• H06.
Migration Considerations

• SQL/MP objects that are present on disks installed in a NonStop S-series I/O enclosure are not immediately usable after the enclosure is connected to an Integrity NonStop NS-series system. The file labels and catalogs must be updated to reflect the new system name and number. You can use the SQLCI MODIFY command to update SQL/MP objects.
A Specifications

This appendix includes:

• AC Input Power
• Model R5500 XR Integrated UPS
• Dimensions and Weights
• Environmental Specifications

Note. All specifications provided in this appendix assume that each enclosure in the modular cabinet is fully populated; for example, a slice with four processors and maximum memory. The maximum current for each AC service depends on the number and type of enclosures installed in the modular cabinet.
° HP Product ID: M8950-3
° HP part number: 527994
° Manufacturer: Hubbell
° HP-supplied plug manufacturer number: HBL363P6W
° Required customer-supplied receptacle manufacturer number: HBL363C6W or equivalent receptacle

Grounding

A safety ground/protective earth conductor is required for each AC service entrance to the NonStop server equipment.
Enclosure Power Loads

Power and current specifications for each type of enclosure are:

Enclosure Type | AC Power Lines per Enclosure | Apparent Power (volt-amps, single AC line with one line powered) | Apparent Power (volt-amps, per AC line with both lines powered) | Peak Inrush Current (amps)
Four-processor slice, 16 GB memory boards | 2 | 680 | 380 (total: 760) | 17
Four-processor slice, 32 GB memory boards | 2 | 710 | 390 (total: 780) | 17
LSU (with four LSUs) | 2 | 150 | 110 |
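When sizing a PDU or AC service, the per-enclosure figures are summed for every enclosure in the cabinet. The sketch below is a hypothetical worksheet helper; the volt-amp values are examples patterned on the table above, not specifications:

# Sum apparent power for a planned cabinet (example values only).
LOADS_VA = {
    "four-processor slice (16 GB)": 760,   # total across both AC lines
    "LSU": 220,
    "IOAM enclosure": 500,
    "disk drive enclosure": 280,
}

cabinet = ["four-processor slice (16 GB)", "LSU", "IOAM enclosure",
           "disk drive enclosure", "disk drive enclosure"]

total_va = sum(LOADS_VA[enclosure] for enclosure in cabinet)
print(f"Cabinet load: {total_va} VA "
      f"({total_va / 2:.0f} VA per AC line if balanced)")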
Dimensions and Weights

Plan View of the Modular Cabinet

[Figure: plan view showing dimensions of 40 in. (102 cm), 44 in. (112 cm), 75.5 in. (192 cm), and 23.5 in. (59.7 cm).]

Service Clearances for the Modular Cabinet

• Aisles: 6 feet (182.9 centimeters)
• Front: 3 feet (91.4 centimeters)
• Rear: 3 feet (91.4 centimeters)
Unit Sizes

Enclosure Type | Height (U)
Modular cabinet | 42
Processor slice | 5
Processor switch | 3
LSU | 4
IOAM | 11
Disk drive enclosure | 3
Maintenance switch (Ethernet) | 1
R5500 XR UPS | 3
Extended runtime module | 3
PDU keepout panel | 1 or 2
Rackmount console | 2

Modular Cabinet Physical Specifications

Item | Height (in / mm) | Width (in / mm) | Depth (in / mm) | Weight
Modular cabinet | 78.5 / 1994 | 23.5 / 597 | 44.0 / 1118 |
Rack | 78.5 / 1994 | 23.5 / 597 | 40.0 / 1020 |
Front door | 78.
Enclosure Dimensions

Enclosure Type | Height (in / mm) | Width (in / mm) | Depth (in / mm)
Processor slice | 8.8 / 222 | 19.0 / 483 | 27.0 / 686
Processor switch | 5.3 / 133 | 19.0 / 483 | 24.5 / 622
LSU | 7.0 / 179 | 19.0 / 483 | 27.0 / 686
IOAM | 19.25 / 489 | 19.0 / 483 | 27.0 / 686
Disk drive | 5.2 / 131 | 19.9 / 505 | 17.6 / 448
Maintenance switch (Ethernet) | 1.8 / 46 | 17.4 / 442 | 8.0 / 203
Rackmount console system unit | 1.7 in / 4.3 cm | 16.8 in / 42.6 cm | 24.0 in / 61.0 cm
Keyboard and display | 1.7 in / 4.3 cm | 15.6 in / 39.
Environmental Specifications

Heat Dissipation Specifications and Worksheet

Unit | Heat (Btu/hour, single AC line powered) | Heat (Btu/hour, both AC lines powered)
4-processor slice, 16 GB memory boards | 2320 | 2594
4-processor slice, 32 GB memory boards | 2423 | 2662
LSU (with four LSUs) | 512 | 750
Processor switch | 546 | 682
IOAM | 1433 | 1706
Disk drive | 751 | 956
Maintenance switch (Ethernet) | 171 | -
Rackmount console system unit | 1194 | -
Keyboard and display | 171 |
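To complete the worksheet, sum the per-unit heat values for every unit in the configuration; dividing by 3.412 converts Btu/hour to watts. This sketch uses the single-AC-line column of the table above for an example cabinet (the cabinet contents are assumptions, not a recommended configuration):

# Total heat load for an example cabinet, from the table's Btu/hour values.
BTU_PER_HOUR = {
    "4-processor slice (16 GB)": 2320,
    "LSU (with four LSUs)": 512,
    "processor switch": 546,
    "IOAM": 1433,
    "disk drive": 751,
    "maintenance switch": 171,
}

cabinet = ["4-processor slice (16 GB)", "4-processor slice (16 GB)",
           "LSU (with four LSUs)", "processor switch", "processor switch",
           "IOAM", "disk drive", "disk drive", "maintenance switch"]

total_btu = sum(BTU_PER_HOUR[unit] for unit in cabinet)
print(f"Total: {total_btu} Btu/hour = {total_btu / 3.412:.0f} W cooling load")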
Nonoperating Temperature, Humidity, and Altitude

• Temperature:
  ° Up to 72-hour storage: -40°F to 150°F (-40°C to 66°C)
  ° Up to 6-month storage: -20°F to 131°F (-29°C to 55°C)
  ° Reasonable rate of change with noncondensing relative humidity during the transition from warm to cold
• Relative humidity: 10% to 80%, noncondensing
• Altitude: 0 to 40,000 feet (0 to 12,192 meters)

Typical Acoustic Noise Emissions

• 70 dB(A) (sound pressure level)
B Example Modular Configurations

This appendix shows example configurations of the Integrity NonStop NS-series modular hardware that can be installed in a cabinet. Many other configurations are possible because of the flexibility inherent in the NonStop advanced architecture and ServerNet, and because of the number of cabinets in a system.

Note. Hardware configuration drawings in this appendix represent the physical arrangement of the modular enclosures but do not show the location of the PDU junction boxes.
Enclosure or Component (page 2 of 2)

• FCSA: minimum 2 (duplex and triplex); maximum up to 60 in a mixture determined by disks and I/O
• G4SA: typically up to 20 in a mixture determined by disks and I/O; maximum up to 60 in a mixture determined by disks and I/O
• Disk drive enclosure (Fibre Channel disk module): minimum 2, typical 4, maximum 8 (duplex and triplex)
• Fibre Channel disk drives: minimum 14, typical 56, maximum 112 (duplex); 1
Duplex 4-Processor System

This duplex configuration has a maximum of four logical processors, with one IOAM enclosure and two disk drive enclosures:

[Figure: single-cabinet rack elevation, top to bottom: disk drive enclosure, IOAM enclosure, p-switch, p-switch, console, disk drive enclosure, LSU, processor slice, processor slice, maintenance switch, available space.]
Duplex 8-Processor System

This duplex configuration has a maximum of eight logical processors, with one IOAM enclosure and up to 12 disk drive enclosures (four disk drive enclosures in a typical system):

[Figure: two-cabinet rack elevations showing the IOAM enclosure, two p-switches, console, LSU, processor slices, and available space (or additional DDEs).]
Duplex 12-Processor System

This duplex configuration has a maximum of 12 logical processors, with one IOAM enclosure and eight disk drive enclosures (six disk drive enclosures in a typical system):

[Figure: two-cabinet rack elevations showing disk drive enclosures, the IOAM enclosure, two p-switches, LSU, and processor slices.]
Duplex 16-Processor System, Three Cabinets

This duplex configuration has a maximum of 16 logical processors, with two IOAM enclosures and up to 14 disk drive enclosures (one IOAM enclosure and eight disk drive enclosures in a typical system):

[Figure: three-cabinet rack elevations showing the IOAM enclosure(s), two p-switches, LSUs, processor slices, disk drive enclosures, and available space.]
Duplex 16-Processor System, Two Cabinets

This duplex configuration has a maximum of 16 logical processors, with one IOAM enclosure and four disk drive enclosures:

[Figure: two-cabinet rack elevations showing disk drive enclosures, the IOAM enclosure, two p-switches, console, LSU, and processor slices.]
Triplex 4-Processor System

This triplex configuration has a maximum of four logical processors, with one IOAM enclosure and two disk drive enclosures (the maintenance switch and console are external to the modular cabinet):

[Figure: single-cabinet rack elevation showing a disk drive enclosure, the IOAM enclosure, two p-switches, three processor slices, the LSU, and a second disk drive enclosure; the maintenance switch and console are shown external to the modular cabinet.]
Triplex 8-Processor System

This triplex configuration has a maximum of eight logical processors, with one IOAM enclosure and ten disk drive enclosures (four disk drive enclosures in a typical system):

[Figure: two-cabinet rack elevations showing the IOAM enclosure, two p-switches, LSU, processor slices, and available space (or additional DDEs).]
Triplex 12-Processor System

This triplex configuration has a maximum of 12 logical processors, with one IOAM enclosure and 14 disk drive enclosures (six disk drive enclosures in a typical system):

[Figure: two-cabinet rack elevations showing the IOAM enclosure, two p-switches, LSUs, processor slices, disk drive enclosures, and available space.]
Triplex 16-Processor System, Three Cabinets

This triplex configuration has a maximum of 16 logical processors, with one IOAM enclosure and ten disk drive enclosures (eight disk drive enclosures in a typical system):

[Figure: three-cabinet rack elevations showing the IOAM enclosure, two p-switches, LSUs, processor slices, and disk drive enclosures.]
Other Configurations

Other configurations are possible for particular applications or purposes.
Duplex Basic Cabinet

This configuration has two slices with an LSU enclosure, one IOAM enclosure, three disk drive enclosures, and two processor switches (and requires a total of four connections to the maintenance switches):

[Figure: cabinet (42U) elevation showing three disk drive enclosures, the IOAM enclosure, two processor switches, the LSU enclosure, two processor slices, and the PDU keepout (2U).]
Triplex Single-Footprint System Without Maintenance Switches

This configuration has three slices with an LSU enclosure, two processor switches, one IOAM enclosure, and two disk drive enclosures. This system configuration does not include the maintenance switches.

[Figure: cabinet (42U) elevation showing two disk drive enclosures, the IOAM enclosure, two processor switches, the LSU enclosure, and three processor slices.]
Triplex Add-On Cabinet With Disks

This configuration has three slices and eight disk drive enclosures. The slices in this cabinet must be connected to the processor switches in another cabinet in the system. The disk drive enclosures must be connected to an IOAM in another cabinet in the system:

[Figure: cabinet (42U) elevation showing eight disk drive enclosures and three processor slices.]
Disk and I/O Cabinet

This configuration houses one IOAM enclosure and 10 disk drive enclosures. This cabinet must be connected to the processor switches in another cabinet in the system:

[Figure: cabinet (42U) elevation showing four disk drive enclosures, the IOAM enclosure, and six more disk drive enclosures.]
Disk Cabinet

This configuration has 14 disk drive enclosures. The disk drive enclosures in this cabinet must be connected to an IOAM enclosure in another cabinet in the system:

[Figure: cabinet (42U) elevation showing 14 disk drive enclosures.]
I/O Cabinet

This configuration has three IOAM enclosures. The cabinet must be connected to the processor switches in another cabinet in the system. This cabinet requires a total of six connections to the maintenance switches:

[Figure: cabinet (42U) elevation showing three IOAM enclosures and the PDU keepout (2U).]
Example System With UPS and ERM

The UPS and ERMs (two ERMs maximum) must reside in the bottom of the cabinet, with the UPS at cabinet offset 2U. A 1U keepout panel is installed at the 1U offset of the cabinet, as shown in this illustration.
Example System With One NonStop S-Series I/O Enclosure

[Figure: an Integrity NonStop NS-series system (disk drive enclosures, IOAM enclosure, X- and Y-fabric p-switches with ServerNet I/O PICs, LSU, and processor slices A and B) connected to a NonStop S-series I/O enclosure through fiber-optic ServerNet cables and power-on cables.]
Example Internal Cabling Example Modular Configurations Example Internal Cabling Topics covered are: Topic Page Example Internal Cabling B-21 Example 4-Processor Duplex System Cabling B-25 Example 16-Processor Triplex System Cabling B-26 Example Cabling in System Ready for Shipment B-28 This cabling chart for an example Integrity NonStop NS-series system, with a duplex processor, lists the relative U location where the cable connects to its source and destination connectors, the cable part numbe
U# | Source | Label | Cable | Destination | Label | U#
29 | IOAME1: FCSA 3.2: 1 | N1-R1-U23-3-2-1 | 522745-002 | M8201 (TSI T1200) Fibre Ch Port | N1-R1-U40-FC | 40
 | IOAME1: FCSA 3.1: 1 | N1-R1-U23-3-1-1 | 522745-002 | Primary DDE1: FC-AL A2 | N1-R1-U37-A-2 | 37
 | IOAME1: FCSA 3.1: 2 | N1-R1-U23-3-1-2 | 522745-002 | Mirror DDE1: FC-AL A2 | N1-R1-U34-A-2 | 34
 | IOAME1: FCSA 2.1: 1 | N1-R1-U23-2-1-1 | 522745-002 | Primary DDE1: FC-AL B1 | N1-R1-U37-B-1 | 39
 | IOAME1: FCSA 2.
U# | Source | Label | Cable | Destination | Label | U#
14 | LSUE 0: Slot 20: Strand B | N1-R1-U13-20-B | 522745-002 | Slice 1B: Slot 71: J0 | N1-R1-U8-71-J0 | 9-10
 | LSUE 0: Slot 21: Strand B | N1-R1-U13-21-B | 522745-002 | Slice 1B: Slot 71: J1 | N1-R1-U8-71-J1 |
 | LSUE 0: Slot 22: Strand B | N1-R1-U13-22-B | 522745-002 | Slice 1B: Slot 72: J2 | N1-R1-U8-72-J2 |
 | LSUE 0: Slot 23: Strand B | N1-R1-U13-23-B | 522745-002 | Slice 1B: Slot 72: J3 | N1-R1-U8-72-J3 |
 | LSUE 0: Slot 20: Stran
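The connector identification labels in these charts follow a consistent pattern: node, rack, U offset of the enclosure, and a device-specific suffix (for example, module-slot-port for an FCSA). The parser below is hypothetical; the field meanings are inferred from the examples in the chart, not taken from an HP specification:

# Parse a cable-end label such as "N1-R1-U23-3-2-1" into location fields.
import re

LABEL = re.compile(r"N(?P<node>\d+)-R(?P<rack>\d+)-U(?P<u>\d+)(?:-(?P<rest>.+))?")

def parse_label(label: str) -> dict:
    match = LABEL.fullmatch(label)
    if not match:
        raise ValueError(f"unrecognized label: {label}")
    fields = match.groupdict()
    return {"node": int(fields["node"]), "rack": int(fields["rack"]),
            "u_offset": int(fields["u"]), "suffix": fields["rest"]}

print(parse_label("N1-R1-U23-3-2-1"))   # IOAME1 FCSA 3.2, port 1 (inferred)
print(parse_label("N1-R1-U40-FC"))      # T1200 Fibre Channel port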
This illustration shows the U location of the modular enclosures listed in the preceding table:

[Figure: rack elevation (U1 through U42) with the IOAM enclosure at U23; AC power connects to the AC power source or site UPS.]
Example 4-Processor Duplex System Cabling

This illustration shows an example 4-processor duplex system in a single cabinet. It is a simplified, conceptual representation of the X and Y ServerNet cabling between the slice, LSU, p-switch, and IOAM enclosures. For clarity, power and Ethernet cables are not shown. For cable-by-cable interconnect diagrams, see Internal ServerNet Interconnect Cabling on page 4-4.
Example 16-Processor Triplex System Cabling Example Modular Configurations the two-controller, two-disk drive enclosure configuration shown in detail in Two FCSAs, Two DDEs, One IOAM Enclosure on page 4-24. For details and instructions on connecting cables as part of the system installation, refer to the NonStop NS-Series Hardware Installation Manual. Example 16-Processor Triplex System Cabling The next two illustrations show an example 16-processor triplex system with four cabinets.
Components of the cable management system that are part of each modular enclosure, as well as the rack, are not shown, so actual cable routing is slightly different from that shown. For detailed information on cable routing and connection as part of the system installation, refer to the Integrity NonStop NS-Series Hardware Installation Manual.
Example Cabling in System Ready for Shipment

This photograph shows the cabling for a system with two cabinets that is prepared for shipment. All internal cables whose source and destination hardware are within the same cabinet are connected at the factory. For cables that connect between the cabinets, each source end is connected to the correct device, and the cable is coiled and secured for shipment, as shown in this photo.
At the installation site, each inter-cabinet cable is routed from the source cabinet to the destination cabinet and connected to the hardware device noted on the connector identification label affixed to each end of the cable.

[Figure: label examples. Near end: "Node: N1-R1-U31-3.1 (Near)" and "Node: N1-R2-U31-3.2 (Near)". Far end: "Node: N1-R2-U31-3.2 (Far)" and "Node: N1-R1-U31-3.1 (Far)".]
C Guide to Integrity NonStop NS-Series Server Manuals

These manuals support the Integrity NonStop NS-series systems. Abstracts for each of these manuals begin on the next page.

Category | Purpose | Title
Reference | Provide information about the manuals, the RVUs, and hardware that support NonStop NS-series servers | NonStop Systems Introduction for H-Series RVUs
Change planning and control | Describe how to prepare for changes to software or hardware configurations | Managing Software Changes; H06.
Title | Abstract (page 1 of 2)
H06.03 Release Version Update Compendium | Provides a summary of the products that have major changes in the H06.03 release version update (RVU), including the products' new features, migration issues, and fallback considerations. The compendium is written for system managers or anyone who needs to understand how migrating to H06.
Title | Abstract (page 2 of 2)
NonStop NS-Series Planning Guide | Explains how to plan Integrity NonStop NS-series server configurations. In addition, the guide describes the ServerNet system area network and the available hardware and system configurations. The guide provides a glossary and a guide to other NonStop NS-series manuals. This guide is written for those who plan the installation and maintenance of the server.
Safety and Compliance

This section contains three types of required safety and compliance statements:

• Regulatory compliance
• Waste Electrical and Electronic Equipment (WEEE)
• Safety

Regulatory Compliance Statements

The following regulatory compliance statements apply to the products documented by this manual.

FCC Compliance

This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules.
Korea MIC Compliance

Taiwan (BSMI) Compliance

Japan (VCCI) Compliance

This is a Class A product based on the standard of the Voluntary Control Council for Interference by Information Technology Equipment (VCCI). If this equipment is used in a domestic environment, radio disturbance may occur, in which case the user may be required to take corrective actions.
European Union Notice

Products with the CE Marking comply with both the EMC Directive (89/336/EEC) and the Low Voltage Directive (73/23/EEC) issued by the Commission of the European Community.
SAFETY CAUTION

The following icon or caution statements may be placed on equipment to indicate the presence of potentially hazardous conditions:

DUAL POWER CORDS CAUTION: "THIS UNIT HAS MORE THAN ONE POWER SUPPLY CORD. DISCONNECT ALL POWER SUPPLY CORDS TO COMPLETELY REMOVE POWER FROM THIS UNIT."

"ATTENTION: CET APPAREIL COMPORTE PLUS D'UN CORDON D'ALIMENTATION. DÉBRANCHER TOUS LES CORDONS D'ALIMENTATION AFIN DE COUPER COMPLÈTEMENT L'ALIMENTATION DE CET ÉQUIPEMENT."
HIGH LEAKAGE CURRENT

To reduce the risk of electric shock due to high leakage currents, a reliable grounded (earthed) connection should be checked before servicing the power distribution unit (PDU).
Glossary 3-phase. Describes a single power source with three output phases (A, B, and C). The phase difference between any two of the three phases is 120 degrees. A. See ampere (A). AC. See alternating current (AC). alternating current (AC). An electric current having a waveform that regularly reverses in positive and negative directions. North American electrical power alternates 60 times/second (60 hertz). European electrical power alternates at 50 hertz. Contrast with direct current (DC). amperage.
conduit Glossary conduit. A tubular raceway, usually constructed of rigid or flexible metal, through which insulated power and ground conductors or data cables are run. Nonmetallic conduits, although available, are not recommended. configuration. (1) The arrangement of enclosures, system components, and peripheral devices into a working unit. (2) The definition or alteration of characteristics of an object. CPU. See central processing unit (CPU). current. The flow of electrons in a conductor.
electromagnetic interference (EMI) Glossary electromagnetic interference (EMI). Forms of conducted or radiated interference that might appear in a facility as either normal or common-mode signals. The frequency of the interference can range from the kilohertz to gigahertz range. However, the most troublesome interference signals are usually found in the kilohertz to low megahertz range.
fiber-optics. A medium for data transmission that conveys light or images through very fine, flexible, glass or plastic fibers. Fiber-optic cables (light guides) are a direct replacement for conventional coaxial and wire pairs.

frequency. The number of complete cycles/second of sinusoidal variation. For alternating-current (AC) power lines, the most common frequencies are 60 hertz and 50 hertz.

Global Customer Support Center (GCSC).
IEEE Glossary IEEE. Institute of Electrical and Electronics Engineers. IEEE is a professional organization whose committees develop and propose computer standards that define the physical and data link protocols of entities such as communication networks. IEEE 802.3 protocol. Institute of Electrical and Electronics Engineers (IEEE) standard defining the hardware layer and transport layer of (a variant of) Ethernet. The maximum segment length is 500 meters, and the maximum total length is 2.5 kilometers.
local area network (LAN) Glossary local area network (LAN). A network that is located in a small geographical area and whose communications technology provides a high-bandwidth, low-cost medium to which low-cost nodes can be connected. One or more LANs can be connected to the system such that the LAN users can access the system as if their workstations were connected directly to it. Contrast with wide area network (WAN). logical synchronization unit (LSU).
ohm Glossary ohm. The standard unit for measuring resistance. A one-ohm resistor will conduct one ampere when one volt is applied. Online Support Center (OSC). The group of support specialists within the HP Global Customer Support Center (GCSC) who respond to telephone calls regarding system problems and diagnose malfunctioning systems using remote diagnostic links. PDU. See power distribution unit (PDU). peak load current. The maximum instantaneous load over a designated interval of time.
release version update (RVU) Glossary range. At present, the terms radio frequency interference and electromagnetic interference (EMI) are usually used interchangeably. release version update (RVU). A collection of compatible revisions of HP NonStop operating system software products, identified by an RVU ID, and shipped and supported as a unit. An RVU consists of the object modules, supporting files, and documentation for the product revisions.
ServerNet II Glossary speed, 6-port ServerNet routers, 8b/9b encoding, and a 64-byte maximum packet size. See also ServerNet II. ServerNet II. The second-generation ServerNet network. ServerNet II architecture is backward-compatible with ServerNet I architecture, and it features 125 (or 50) megabytes/second speed, 12-port ServerNet routers, 8b/9b and 8b/10b (serializer ready) encoding, and a 512-byte maximum packet size. See also ServerNet I. ServerNet cable. A cable that conducts ServerNet signals.
shielded twisted pair (STP) Glossary NonStop S-series servers but are optional on earlier servers. Service-side doors are cosmetic and are not required for system cooling. shielded twisted pair (STP). A transmission medium consisting of two twisted conductors with a foil or braid shield. Contrast with unshielded twisted pair (UTP). signal reference grid. A series of conductors, constructed of pure or composite metals (for example, copper) with good surface conductivity.
system serial number Glossary system serial number. A unique identifier, typically five or six alphanumeric characters, assigned to an HP NonStop™ server when it is built. TMR. See triplex. transient. A short-duration, high-amplitude impulse that is imposed on the normal voltage or current. triple-modular redundancy (TMR). See triplex. triplex. Having three active processor elements per logical processor. Also called triplemodular redundancy (TMR). undervoltage.
Glossary wide area network (WAN) subsystem wide area network (WAN) subsystem. The Subsystem Control Facility (SCF) subsystem for configuration and management of WAN objects in G-series release version updates (RVUs). wye. A polyphase electrical supply where the conductors of the source transformer are connected to the terminals in a physical arrangement that resembles the letter Y. Each point of the Y represents the connection for a conductor at high potential.
Index A AC power 208 V ac delta 4-35, A-1 250 V ac 3-phase 4-36, A-1 250 V ac single-phase 4-37, A-1 enclosure input specifications A-2 power-fail monitoring 5-14 power-fail states 5-15 unstrapped PDU 4-34 AC power failure 8-6 AC power feed 2-5 bottom of cabinet 2-5 top of cabinet 2-5 with cabinet UPS 2-6 access readiness inspection 7-15 air conditioning power-fail 8-6 readiness inspection 7-12 alternate system disk 8-7 appraisal, site readiness 7-4 availability, how measured 8-2 B Blade Complex 1-3 Blade
E Index dial-out and dial-in 6-16 dimensions enclosures A-6 modular cabinet A-4 service clearances A-4 disk drive configuration recommendations 4-22 disk drive enclosure 2-21 display IOAM switch boards 2-17 p-switch 2-14 divergence recovery 1-8 documentation factory-installed hardware 7-2 NonStop NS-Series C-1 packet 7-1 ServerNet adapter configuration 7-3 software migration 9-1 system availability 8-1 titles and abstracts C-2 dual modular redundant (DMR, duplex) processor 1-3, 1-5 dynamic IP addresses 6-
G Index Fibre Channel ServerNet adapter (FCSA) 2-19 floor plan grid form 7-19 forms floor plan grid 7-19 ServerNet adapter configuration 7-3 site prep walk-through inspection 7-5 front panel, slice buttons 2-10 indicator LEDs 2-11 FRU AC power assembly, LSU 2-11 disk drive enclosure 2-21 fan IOAM enclosure 2-17 p-switch 2-13 slice 2-8 I/O interface board 2-8 logic board, LSU 2-11 memory board 2-8 optic adapter LSU 2-11 slice 2-8 power supply 2-8, 2-14 processor board 2-8 reintegration board 2-8 replacemen
K Index I/O interface board, slice 2-8 K keepout panel 2-8 L labeling, optic cables 4-4 LAN dedicated maintenance 5-8 fault-tolerant maintenance 6-2 maintenance, G4SA PIF 6-7 non-fault-tolerant maintenance 6-1 public 6-12 LCD IOAM switch boards 2-17 p-switch 2-14 load operating system paths 1-13 logical processor 1-3 LSU description 2-11, 2-28 FRUs 2-11 function and features 2-11 indicator LEDs 2-13 logic boards 2-12 M M8201 Fibre Channel to SCSI router 4-15 maintenance architecture 5-5 maintenance ent
O Index O operating system load paths 1-13 optic adapter LSU 2-11 slice 2-8 slice J connectors 2-9 OSM 5-2, 5-8, 5-9 outage minutes/year 8-2 planned or unplanned 8-1 P paths, operating system load 1-13 pathways, ServerNet 3-3 PDU 2-4 AC power feed 2-5 fuses 2-5 keepout 2-8 receptacles 2-7 PDU strapping configurations 4-34 PE 1-3 planning, installation 7-1 port 2-28 power and lighting readiness inspection 7-8 power receptacles, PDU 2-7 power supply IOAM enclosure 2-17 p-switch 2-14 slice 2-8 power-fail mo
S Index Fibre Channel device configuration 4-22 p-switch cabling 4-13 router system pathways 3-5 routers, ServerNet 3-3 S safety ground/protective earth A-2 safety readiness inspection 7-11 security readiness inspection 7-15 ServerNet 16-processor complex 3-8 architecture 3-3 cable verifier 5-13 fabric 1-4, 3-1 interconnection cabling 4-5 internal cabling B-21 link 3-1 node 3-1 optic cabling 4-4 processor connections 2-15 router 3-1 switch board IOAM enclosure 2-17 p-switch 2-14 ServerNet fabric 3-1 serv
worksheet, weights A-6
worksheet, heat A-7
WOW order form 5-11

Special Characters
$SYSTEM disk locations 1-13