HP Integrity NonStop NS16000 Series Planning Guide
HP Part Number: 529567-023
Published: February 2012
Edition: H06.
© Copyright 2012 Hewlett-Packard Development Company, L.P. Legal Notice Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
About This Document This guide describes HP Integrity NonStop™ NS16000 series servers and provides examples of system configurations to assist you in planning for installation of a new system. The NonStop NS16000 series of servers consists of the NonStop NS16000 server and the NonStop NS16200 server. Supported Release Version Updates (RVUs) This manual supports H06.11 and all subsequent H-series RVUs until otherwise indicated in a replacement publication.
• Changed “QMS Tech Doc” to “Technical Document shipped with the system” in “CLIM Cable Management Ethernet Patch Panel” (page 122). • Restricted coexistence of DL385 G5 Storage CLIMs and DL380 G6 Storage CLIMs in “Storage CLuster I/O Module (CLIM)” (page 123). • Changed RVU requirement to H06.24 or later to support SSDs in D2700 disk enclosures in “Serially Attached SCSI (SAS) Disk Enclosure” (page 124). New and Changed Information for 529567–022 With H06.
• Added IB CLIM by changing “IP or Telco CLIM” to “networking CLIM” in “Processor Switches to Networking CLIMs” (page 76). • Added notes to illustrations in “Storage CLIM Devices” (page 80). • Changed “SAS Ports to SAS Disk Enclosures” (page 82). • Changed “Configuration Restrictions for Storage CLIMs” (page 82). • Added IB CLIM and corrected Storage CLIM naming convention in “Default Naming Conventions” (page 105). • Changed “IP CLIM” to “CLIM” in “Processor Switch” (page 112).
• In “Operations and Management Using OSM Applications” (page 155) changed table and note below table. • Changed “AC Power Monitoring” (page 156), including “OSM Power Fail Support” (page 156) and “Considerations for Ride-Through Time Configuration” (page 157). New and Changed Information for 529567–021 • Added DL385 G2 and G5 CLIM and DL380 G6 CLIM designations throughout the document.
• Changed IP CLIM A and IP CLIM B to IP CLIM option 1 and IP CLIM option 2, respectively, in “IP CLIM Ethernet Interfaces” (page 103) and added DL380 G6 IP CLIMs. • Added “Telco CLIM Ethernet Interfaces” (page 104).
◦ “Unit Sizes” (page 57) ◦ “Enclosure Dimensions” (page 58) ◦ “Modular Cabinet and Enclosure Weights With Worksheet” (page 59) ◦ “Heat Dissipation Specifications and Worksheet” (page 60) ◦ “Operating Temperature, Humidity, and Altitude” (page 61) ◦ “Enclosure Locations in Cabinets” (page 135) ◦ “Cable Length Restrictions” (page 69) ◦ “Processor Switch ServerNet Connections” (page 75) ◦ “Processor Switches to Storage CLIMs” (page 77) ◦ “Storage CLIM Devices” (page 80) ◦ “Configuration Restrictions for Storage CLIMs” (page 82)
New and Changed Information for 529567–019 • Changed the C13 receptacle type from 10A to 12A in these sections: ◦ “Monitored Single-Phase PDUs” (page 37) ◦ “North America and Japan: 200 to 240 V AC, Monitored Single-Phase PDUs” (page 41) ◦ “International: 200 to 240 V AC, Monitored Single-Phase PDUs” (page 42) ◦ “Monitored Three-Phase PDUs” (page 42) ◦ “North America and Japan: 200 to 240 V AC, Monitored Three-Phase PDUs” (page 46) ◦ “International: 380 to 415 V AC, Monitored Three-Phase PDUs” (page 47)
• Added section, “BladeCluster Solution Installation and Migration Tasks” (page 31), indicating that the checklist for installing a BladeCluster Solution is located in the BladeCluster Solution Manual. • Under Appendix C: Default Startup Characteristics (page 159), added a note stating that the configurations documented here are typical for most sites. Your system load paths might be different, depending upon how your system is configured.
New and Changed Information for 529567–016 • Corrected the operating temperatures under “Operating Temperature, Humidity, and Altitude” (page 61) and “Nonoperating Temperature, Humidity, and Altitude” (page 62). • Indicated that when connecting a networking CLIM, slot 4 might already be used for S-series networking. If that is the case, use slots 5 through 9, starting with 9, for NS-series networking CLIMs.
Chapter 7 (page 136) describes the planning tasks for your dedicated service LAN and system consoles.
Appendix A (Cables) identifies the cables used with the Integrity NonStop NS16000 series hardware.
Appendix B (Operations and Management Using OSM Applications) describes the OSM management tools used in Integrity NonStop NS16000 series systems.
Appendix C (page 159) describes the default startup characteristics for system disks.
[ ] Brackets Brackets enclose optional syntax items. For example: TERM [\system-name.]$terminal-name INT[ERRUPTS] A group of items enclosed in brackets is a list from which you can choose one item or none. The items in the list can be arranged either vertically, with aligned brackets on each side of the list, or horizontally, enclosed in a pair of brackets and separated by vertical lines.
Item Spacing Spaces shown between items are required unless one of the items is a punctuation symbol such as a parenthesis or a comma. For example: CALL STEPMOM ( process-id ) ; If there is no space between two items, spaces are not permitted. In this example, no spaces are permitted between the period and any other items: $process-name.
1 NonStop NS16000 Series System Overview Integrity NonStop NS16000 series servers use the “NonStop NS16000 Series System Architecture” (page 20), a number of duplex or triplex processors, and various combinations of modular hardware components installed in 42U modular cabinets. All Integrity NonStop NS16000 series server components are field-replaceable units (FRUs) that can only be serviced by service providers trained by HP.
Table 1 Characteristics of NonStop NS16000 Series Systems (continued)
BladeCluster Solution connection: Supported
NonStop S-series I/O enclosures: Supported connection
M8201R Fibre Channel to SCSI router: Supported
NonStop NS16000 Series System Architecture
Integrity NonStop NS16000 series systems employ a unique method for achieving fault tolerance in a clustered processor environment: the modular NonStop NS16000 series system architecture, which utilizes Intel® Itanium® microprocessors without cycle-by-cycle lockstep.
• SAS disk enclosure NOTE: MSA70 SAS disk enclosures can contain hard disk drives (HDDs). D2700 SAS disk enclosures can contain hard disk drives (HDDs) or Solid State Drives (SSDs).
A large number of enclosure combinations are possible within the modular cabinet(s) that make up an Integrity NonStop NS16000 series server. The applications and purpose of any NonStop NS-series server determine the number and combinations of the enclosures and modular cabinets. Because of the large number of possible configurations, you calculate the total power consumption, heat dissipation, and weight of each modular cabinet based on the hardware configuration that you order from HP.
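Because the totals are configuration-specific, a short script can keep the arithmetic honest. The following is a minimal sketch in Python; the per-enclosure power and heat figures shown are illustrative placeholders, not HP specifications, and should be replaced with the values from the specification tables in Chapter 3 and your order documents.

```python
# Minimal sketch: totaling weight, power, and heat for one modular cabinet.
# The power/heat figures below are illustrative placeholders; substitute
# the values from the specification tables and your HP order documents.
SPECS = {
    # enclosure type: (weight_lbs, typical_va, typical_btu_per_hr)
    "cabinet": (328, 0, 0),
    "blade_element": (112, 1000, 3400),   # placeholder power/heat figures
    "p_switch": (70, 300, 1000),          # placeholder power/heat figures
}

def cabinet_totals(contents):
    """contents maps enclosure type to quantity for one cabinet."""
    weight = power = heat = 0
    for kind, qty in contents.items():
        w, va, btu = SPECS[kind]
        weight += qty * w
        power += qty * va
        heat += qty * btu
    return weight, power, heat

weight, va, btu = cabinet_totals({"cabinet": 1, "blade_element": 2, "p_switch": 2})
print(f"{weight} lbs, {va} VA, {btu} BTU/hour")
```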
• “NonStop Blade Element Group-Module-Slot Numbering” (page 24) • “LSU Group-Module-Slot Numbering” (page 25) • “Processor Switch Group-Module-Slot Numbering” (page 26) • “CLIM Connection Group-Module-Slot-Port Numbering” (page 27) • “IOAM Group-Module-Slot Numbering” (page 27) • “Fibre Channel Disk Module Group-Module-Slot Numbering” (page 28) • “NonStop S-Series I/O Enclosure Group Numbers” (page 29) Terminology These terms are used in locating and describing components: Term Definition Ca
On Integrity NonStop NS16000 series systems, locations of the physical and logical modular components are identified by:
• Physical location: rack number and rack offset
• Logical location: GMS notation determined by the position of the component on ServerNet
In NonStop S-series systems, group, module, and slot (GMS) notation identifies the physical location of a component.
A number of GMS configurations are possible in the modular Integrity NonStop NS16000 series system. This table shows the default numbering for the logical processors:
• Group (NonStop Blade Complex): logical processors 0-3* = group 400; 4-7 = group 401; 8-11 = group 402; 12-15 = group 403
• Module (NonStop Blade Element): 1 (A), 2 (B), 3 (C)
• Slot (optics): blade optics adapters 1-8 (software-identified as slots 71-78)
• Port: J0-J7, K0-K7
* Logical processor 0 must be in NonStop Blade Complex 0 (group 400).
Processor Switch Group-Module-Slot Numbering
This table shows the default numbering for the p-switch (group 100; module 2 is the X ServerNet fabric, module 3 is the Y ServerNet fabric):
• Slot 1: Maintenance PIC
• Slot 2: Cluster PIC
• Slot 3: Crosslink PIC
• Slots 4-9: ServerNet I/O PICs
• Slot 10: ServerNet PIC (processors 0-3)
• Slot 11: ServerNet PIC (processors 4-7)
• Slot 12: ServerNet PIC (processors 8-11)
• Slot 13: ServerNet PIC (processors 12-15)
• Slot 14: P-switch logic board
• Slots 15, 18: Power supplies A and B
• Slots 16, 17: Fans A and B
Networking CLIMs (IB, IP, and Telco)
Figure 1 Slot and Connector Locations for the Processor Switch
CLIM Connection Group-Module-Slot-Port Numbering
This table lists the default numbering for p-switch connections to a CLIM:
• Group 100, module 2 (X fabric): PIC slots 4-9, PIC ports 1-4
• Group 100, module 3 (Y fabric): PIC slots 4-9, PIC ports 1-4
For RVU requirements for CLIMs, see Table 5 (page 116). CLIMs can use slots 4-9: networking CLIMs start with slot 9, and Storage CLIMs start with slot 4. If slot 4 is needed for S-series networking, CLIMs use slots 5-9.
IOAM Group-Module-Slot Numbering (continued)
• IOAM enclosure: slots 15, 18 = power supplies; slots 16, 17 = fans
This illustration shows the slot locations for the IOAM enclosure:
Fibre Channel Disk Module Group-Module-Slot Numbering
This table shows the default numbering for the Fibre Channel disk module:
• IOAM enclosure: groups 110-115; module 2 (X fabric) or 3 (Y fabric); FCSA slots 1-5; F-SACs 1, 2
• Fibre Channel disk module shelf: 1-4 if daisy-chained; 1 if a single disk enclosure
• Fibre Channel disk module slots: 1-14 = disk drive bays; 98 = right blower; 99 = EMU
The form of the GMS numbering for a disk in a Fibre Channel disk module is:
This example shows the disk in bay 03 of the Fibre Channel disk module that connects to the FCSA in IOAM group 111, module 2, slot 1, F-SAC 1:
NonStop S-Series I/O Enclosure Group Numbers
Assignment of the group number of each NonStop S-series I/O enclosure depends on the cable connection to the p-switch PIC by slot and port.
This table shows the p-switch PIC slot and connector assignments (X and Y fabrics) to NonStop S-series I/O enclosure groups:
• Slot 6, connectors 1-4: groups 31, 32, 33, 34
• Slot 7, connectors 1-4: groups 41, 42, 43, 44
• Slot 8, connectors 1-4: groups 51, 52, 53, 54
• Slot 9, connectors 1-4: groups 61, 62, 63, 64
This illustration shows the group number assignments on the p-switch:
System Installation Document Packet
To keep track of the hardware configuration, internal and external communications cabling, IP addresses, and connected networks, assemble and retain as the system's record an Installation Document Packet.
Technical Document for the Factory-Installed Hardware Configuration
Each new Integrity NonStop NS16000 series system includes a technical document that describes the factory-installed hardware configuration.
2 Site Preparation Guidelines This section describes power, environmental, and space considerations for your site. Modular Cabinet Power and I/O Cable Entry Power and I/O cables can enter the Integrity NonStop NS16000 series server from either the top or the bottom rear of the modular cabinets, depending on how the cabinets are ordered from HP and the routing of the AC power feeds at the site.
S-Series Hardware Installation and FastPath Guide for connector and wiring information. This guide is available in the NonStop Technical Library (NTL). Electrical Power and Grounding Quality Proper design and installation of a power distribution system for an Integrity NonStop NS16000 series server requires specialized skills, knowledge, and understanding of appropriate electrical codes and the limitations of the power systems for computer and data processing equipment.
planned orderly shutdown at a predetermined time in the event of an extended power failure. A timely and orderly shutdown prevents an uncontrolled and asymmetric shutdown of the system resources from depleted UPS batteries. The R5000 UPS supports the OSM power failure support function that allows you to set a ride-through time. If AC power is not restored before the specified ride-through time expires, OSM initiates an orderly system shutdown.
NOTE: Failure of site cooling with the server continuing to run can cause rapid heat buildup and excessive temperatures within the hardware. Excessive internal temperatures can result in full or partial system shutdown. Ensure that the site’s cooling system remains fully operational when the server is running.
Space for Receiving and Unpacking
Identify areas that are large enough to receive and to unpack the system from its shipping cartons and pallets. Be sure to allow adequate space to remove the system equipment from the shipping pallets using the supplied ramps. Also be sure adequate personnel are present to remove each cabinet from its shipping pallet and to safely move it to the installation site. WARNING! A fully populated cabinet is unstable when moving down the unloading ramp from its shipping pallet.
3 System Installation Specifications
This section provides the specifications necessary for planning your system installation site.
Modular Cabinets
The modular cabinet is an EIA-standard 19-inch, 42U rack for mounting modular components. The modular cabinet comes equipped with front and rear doors and includes a rear extension that makes it deeper than some industry-standard racks.
Each single-phase PDU in a modular cabinet has: • 36 AC receptacles per PDU (12 per segment) - IEC 320 C13 12A receptacle type • 3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type • 3 circuit-breakers These PDU options are available to receive power from the site AC power source: • 200 to 240 V AC, single-phase for North America and Japan • 200 to 240 V AC single-phase for International Each PDU distributes the site AC power as single phase 200 to 240 V AC to the 39 outlets
Bottom AC Power Feed, Monitored Single-Phase PDUs Figure 3 shows the AC power feed cables on PDUs for AC feed at the bottom of the cabinet and the AC power outlets along the PDU.
Figure 4 Top AC Power Feed When Optional UPS and ERM are Installed Bottom AC Power Feed When Optional UPS and ERM are Installed If your system includes the optional rackmounted HP R5000 UPS, the modular cabinet will have one PDU located on the rear right side and four extension bars on the rear left side. The PDU and extension bars are oriented inward, facing the components within the modular cabinet. To provide redundancy, components are plugged into the right-side PDU and the extension bars.
Figure 5 Bottom AC Power Feed When Optional UPS and ERM are Installed Input and Output Power Characteristics, Monitored Single-Phase PDUs The cabinet includes two monitored single-phase PDUs. North America and Japan: 200 to 240 V AC, Monitored Single-Phase PDUs The North America and Japan PDU power characteristics are: PDU input characteristics • 200 to 240 V AC, single phase, 40A RMS, 3-wire • 50/60Hz • Non-NEMA Locking CS8265C, 50A input plug • 6.
International: 200 to 240 V AC, Monitored Single-Phase PDUs The international PDU power characteristics are: • 200 to 240 V AC, single phase, 32A RMS, 3-wire PDU input characteristics • 50/60Hz • IEC309 3-pin, 32A input plug • 6.
For information about specific characteristics for PDUs factory-installed in monitored three-phase cabinets, refer to:
• “AC Power Feeds, Monitored Three-Phase PDUs” (page 43)
• “Input and Output Power Characteristics, Monitored Three-Phase PDUs” (page 46)
• “Branch Circuits and Circuit Breakers, Monitored Three-Phase PDUs” (page 47)
Each three-phase monitored PDU in a modular cabinet has:
• 36 AC receptacles per PDU (12 per segment) - IEC 320 C13 12A receptacle type
• 3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type
Figure 6 Top AC Power Feed, Monitored Three-Phase PDUs Bottom AC Power Feed, Monitored Three-Phase PDUs Figure 7 shows the AC power feed cables on PDUs for AC feed at the bottom of the cabinet and the AC power outlets along the PDU.
bars are oriented inward, facing the components within the modular cabinet. To provide redundancy, components are plugged into the right-side PDU and the extension bars. Each extension bar is plugged into the UPS. Figure 8 shows the AC power feed cables for the PDU and UPS for AC power feed from the top of the cabinet when the optional UPS and ERM are installed. Also see “UPS and ERM (Optional)” (page 131).
Figure 9 Bottom AC Power Feed When Optional UPS and ERM are Installed Input and Output Power Characteristics, Monitored Three-Phase PDUs The cabinet includes two monitored three-phase PDUs. North America and Japan: 200 to 240 V AC, Monitored Three-Phase PDUs The North America and Japan PDU power characteristics are: PDU input characteristics • 200 to 240 V AC, 3–phase delta, 30A, 4-wire • 50/60Hz • NEMA L15-30 input plug • 6.
International: 380 to 415 V AC, Monitored Three-Phase PDUs The international PDU power characteristics are: • 380 to 415 V AC, 3-phase Wye, 16A RMS, 5-wire PDU input characteristics • 50/60Hz • IEC309 5-pin, 16A input plug • 6.
Each three-phase modular PDU in a modular cabinet has: • 28 AC receptacles per PDU (7 per extension bar) - IEC 320 C13 12A receptacle type • 6 circuit-breakers These PDU options are available to receive power from the site AC power source: • 200 to 240 V AC, three-phase delta for North America and Japan • 380 to 415 V AC, three-phase wye for International Each PDU distributes site three-phase power to 34 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted
Figure 10 Top AC Power Feed, Modular Three-Phase PDUs Bottom AC Power Feed, Modular Three-Phase PDUs Figure 11 shows the power feed cables on modular three-phase PDUs with AC feed at the bottom of the cabinet and the output connections for the three-phase modular PDU.
Figure 11 Bottom AC Power Feed, Modular Three-Phase PDUs Top AC Power Feed When Optional UPS and ERM are Installed Figure 12 (page 51) shows the AC power feed cables for the PDU and UPS for AC power feed from the top of the cabinet when the optional UPS and ERM are installed. Also see “UPS and ERM (Optional)” (page 131).
Figure 12 Top AC Power Feed, Modular Three-Phase PDU with UPS and ERM Bottom AC Power Feed When Optional UPS and ERM are Installed Figure 13 (page 52) shows the AC power feed cables for the PDU and UPS for AC power feed from the bottom of the cabinet when the optional UPS and ERM are installed. Also see “UPS and ERM (Optional)” (page 131).
Figure 13 Bottom AC Power Feed, Modular Three-Phase PDU with UPS and ERM Input and Output Power Characteristics, Modular Three-Phase PDUs The cabinet includes two modular three-phase PDUs. North America and Japan: 200 to 240 V AC, Modular Three-Phase PDUs and Extension Bars The North America and Japan PDU power characteristics are: PDU input characteristics • 200 to 240 V AC, 3-phase delta, 30A, 4-wire • 50/60Hz • NEMA L15-30 input plug • 12 feet (3.
Extension bar input characteristics:
• 200 to 240 V AC, 3-phase delta, 16A RMS, 4-wire
• 50/60Hz
• IEC 320-C20 input plug
• 6.5 feet (2.0 meters)
Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rackmounted HP R5000 UPS.
Specification: Value
Nominal line frequency: 50 or 60 Hz
Frequency ranges: 47-53 Hz or 57-63 Hz
Number of phases: 1*
* Voltage range for the maintenance switch is 200-240 V AC.
Each PDU is wired to distribute the load segments to its receptacles. Factory-installed enclosures are connected to the PDUs for a balanced load among the load segments.
Enclosure Type AC Power Lines per Enclosure1 Typical Power Consumption (VA) Maximum Power Consumption (VA) Peak Inrush Current (amps) MSA70 SAS disk enclosure (empty) 2 125 180 5 D2700 SAS disk enclosure (empty) 2 75 125 5 SAS 2.5 in., 10k rpm disk drive - 5 9 - SAS 2.5 in., 15k rpm disk drive - 4 7 - SAS SSD - 6 6 1.5 (5V current) 1.
Plan View From Above the Modular Cabinet
Service Clearances for the Modular Cabinet: Aisles: 6 feet (182.9 centimeters); Front: 3 feet (91.4 centimeters); Rear: 3 feet (91.4 centimeters)
1 For RVU requirements for CLIMs, see Table 5 (page 116).
Modular Cabinet Physical Specifications
Item: height (in/cm); width (in/cm); depth (in/cm)
Modular cabinet (HP 10000 G2 Series rack with extension, doors, and side panels): 78.7/199.9; 24.0/60.96; 46.7/118.6
Rack: 78.5/199.4; 23.62/60.0; 42.5/108.0
Front door: 78.5/199.4; 23.5/59.7; 3.2/8.1
Left-rear door: 78.5/199.4; 11.0/27.9; 1.0/2.5
Right-rear door: 78.5/199.4; 12.0/30.5; 1.0/2.5
Shipping (palletized): 86.5/219.7
1 For RVU requirements for CLIMs, see Table 5 (page 116).
Modular Cabinet and Enclosure Weights With Worksheet
The total weight of each modular cabinet is the sum of the weights of the cabinet plus each enclosure installed in it. Use this worksheet to determine the total weight:
Enclosure Type | Number of Enclosures | Weight (lbs/kg) | Total
328/148.8
328/149
303/137
NonStop Blade Element: 112/50.8
Processor switch: 70/32.8
96/43.
For examples of calculating the weight for various enclosure combinations, refer to “Calculating Specifications for Enclosure Combinations” (page 62). Modular Cabinet Stability Cabinet stabilizers are required when you have fewer than four cabinets bayed together. NOTE: Cabinet stability is of special concern when equipment is routinely installed, removed, or accessed within the cabinet. Stability is addressed through the use of leveling feet, baying kits, fixed stabilizers, and/or ballast.
Enclosure Type: Unit Heat (Btu/hour, typical) / Unit Heat (Btu/hour, maximum)
SAS 2.5 in., 10k rpm HDD: 17 / 31
SAS 2.5 in., 15k rpm HDD: 14 / 23
SAS SSD: 19.4 / 19.4
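The heat figures track the power-consumption figures earlier in this section: one watt dissipates about 3.412 BTU per hour. Below is a minimal cross-check in Python; the 3.412 conversion factor is standard, but treating VA as watts (a power factor near 1) is our simplifying assumption.

```python
# One watt dissipates about 3.412 BTU/hour. Assuming a power factor near 1
# (so VA approximates watts), the heat table can be cross-checked against
# the power-consumption table earlier in this section.
def va_to_btu_per_hr(volt_amps, power_factor=1.0):
    return volt_amps * power_factor * 3.412

print(round(va_to_btu_per_hr(5)))  # 17 BTU/hour for the 5 VA 10k rpm HDD
print(round(va_to_btu_per_hr(4)))  # 14 BTU/hour for the 4 VA 15k rpm HDD
```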
Nonoperating Temperature, Humidity, and Altitude • Temperature: ◦ Up to 72-hour storage: -40° to 151° F (-40° to 66° C) ◦ Up to 6-month storage: -20° to 131° F (-29° to 55° C) ◦ Reasonable rate of change with noncondensing relative humidity during the transition from warm to cold • Relative humidity: 10% to 80%, noncondensing • Altitude: 0 to 40,000 feet (0 to 12,192 meters) Cooling Airflow Direction Each enclosure includes its own forced-air cooling fans or blowers.
Figure 14 Example Duplex Configuration Table 2 shows the weight, power, and thermal calculations for Cabinet One in Figure 14 (page 63).
Table 3 shows the weight, power, and thermal calculations for Cabinet Two in Figure 14 (page 63).
Table 4 Cabinet Three Load Calculations (continued)
Component: Quantity / Height (U) / Weight (lbs, kg) / Volt-amps (VA) typical, maximum / BTU/hour typical, maximum
Fibre Channel disk module with 14 disk drives: 7 / 21 / 546 lbs, 248 kg / 2436, 2436 / 8312, 8312
Cabinet: 1 / 42 / 328 lbs, 149 kg / - / -
Total: - / 37 / 1333 lbs, 605 kg / 4386, 4526 / 14965, 15443
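The row and total arithmetic in these load tables can be reproduced mechanically. The sketch below re-derives the Fibre Channel disk module row of Table 4; the per-unit figures are back-calculated from the table itself (for example, 546 lbs over 7 modules), not taken from a separate HP specification.

```python
# Re-deriving the Fibre Channel disk module row of Table 4. Per-unit values
# are back-calculated from the table (546 lbs / 7, 2436 VA / 7, and so on).
qty = 7
height_u, weight_lbs, typical_va, typical_btu = 3, 78, 348, 1187.4

print(qty * height_u)            # 21 U
print(qty * weight_lbs)          # 546 lbs
print(qty * typical_va)          # 2436 VA
print(round(qty * typical_btu))  # 8312 BTU/hour
```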
4 System Configuration Guidelines This section provides guidelines for Integrity NonStop NS16000 series system configurations. Integrity NonStop NS16000 series systems use a flexible modular architecture. Therefore, almost any configuration of the system’s modular components is possible within a few configuration restrictions stated in “IOAM Enclosure and Disk Storage Considerations” (page 92).
Each label conveys this information:
Nn: Identifies the node number. One node can include up to six cabinets.
Rn: Identifies the cabinet number within the node.
Un: Identifies the offset, which is the physical location of the component within the cabinet; n is the lowest U number that the component occupies in the cabinet.
nn.nn: For all components except the LSU, identifies the slot location and port connection of the component.
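As an illustration, these label fields are simple to decode programmatically. The sketch below assumes a space-separated printed form such as "N1 R2 U36 04.2"; the exact spacing and separators on real labels may differ.

```python
# Minimal sketch: decoding a location label such as "N1 R2 U36 04.2" per the
# field definitions above. The printed format assumed here (space-separated,
# slot.port last) is for illustration only.
import re

def parse_label(label):
    m = re.match(r"N(\d+)\s+R(\d+)\s+U(\d+)\s+(\d+)\.(\d+)", label)
    node, rack, offset, slot, port = (int(g) for g in m.groups())
    return {"node": node, "rack": rack, "rack_offset_u": offset,
            "slot": slot, "port": port}

print(parse_label("N1 R2 U36 04.2"))
```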
• P-switches to IOAMs • P-switches to IOMF 2 CRUs in NonStop S-series I/O enclosure • P-switches to p-switches (crosslink) • P-switches to ESS disks • FCSA to M8201R Fibre Channel to SCSI router MMF cables are usually orange with a minimum bend radius of: • Unsheathed: 1.0 inch (25 millimeter) • Sheathed (ruggedized): 4.2 inch (107 millimeter) You can use fiber-optic cables available from HP, or you can provide your own fiber-optic cables.
Cable Length Restrictions
Maximum allowable lengths of cables connecting the modular system components are:
• NonStop Blade Element to LSU enclosure: MMF, LC-LC, 100 m, M8900nnn1
• NonStop Blade Element to NonStop Blade Element: MMF, MTP, 50 m, M8920nnn1
• LSU enclosure to p-switch: MMF, LC-LC, 125 m, M8900nnn1
• P-switch to p-switch crosslink: MMF, LC-LC, 125 m, M8900nnn1
• P-switch to networking CLIM: MMF, LC-LC, 125 m, M8900nnn1
• P-switch to IOAM enclosure:
• Storage CLIM FC interface to ESS: MMF, LC-LC, 250 m, M8900nnn1
• Storage CLIM FC interface to FC tape: MMF, LC-LC, 250 m, M8900nnn1
1 nnn indicates the length of the cable in meters. For example, M8900-125 is 125 meters long; M8900-15 is 15 meters long.
2 Daisy-chaining of D2700 SAS disk enclosures is not supported.
This table shows the p-switch PIC slot and port assignments to processor numbers (continued):
• Slot 10: port 4 = processor 3
• Slot 11: port 1 = processor 4; port 2 = processor 5; port 3 = processor 6; port 4 = processor 7
• Slot 12: port 1 = processor 8; port 2 = processor 9; port 3 = processor 10; port 4 = processor 11
• Slot 13: port 1 = processor 12; port 2 = processor 13; port 3 = processor 14; port 4 = processor 15
The four cabling diagrams on the next pages illustrate the default configuration and connections for a triplex system processor. These diagrams are not for use in installing or cabling the system. For instructions on connecting the cables, see the NonStop NS16000 Series Hardware Installation Manual.
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 11 for triplex processor numbers 4 to 7:
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 12 for triplex processor numbers 8 to 11:
This figure shows example connections of the NonStop Blade Element reintegration links (NonStop Blade Element connectors S, T, Q, R) and ports 1 to 4 of the p-switch PIC in slot 13 for triplex processor numbers 12 to 15:
Processor Switch ServerNet Connections
ServerNet connections to the system I/O devices (storage disks and tape drives, as well as Ethernet communication to networks) radiate out from the p-switches for both the X and Y ServerNet fabrics to the IOAMs in one or more IOAM enclosures or to the CLIMs. ServerNet cables connected to the p-switch PICs in slots 10 through 13 come from the LSUs and processors, with the cable connection to these PICs determining the processor identification.
Unlike the fixed hardware I/O configurations and topologies in NonStop S-series systems, I/O configurations in Integrity NonStop NS16000 series systems are flexible, with few restrictions. Those restrictions prevent I/O configurations that compromise fault tolerance or high availability, especially with disk storage, as outlined in “Configuration Restrictions for Fibre Channel Devices” (page 95).
These restrictions apply to connecting the p-switches to the networking CLIMs: • The same PIC number in the X and Y p-switch must be used, such as PIC 9 as shown in the illustration below. There is one connection to the X p-switch and one connection to the Y p-switch. • Each port on the p-switch PIC must connect to the same numbered port on the networking CLIM's PIC (port 1 to port 1, port 2 to port 2, and so forth).
This illustration shows an example of two DL385 G2 or G5 Storage CLIMs connected to two p-switches (slot 4, ports 1 through 4). Connections for both ServerNet fabrics are shown. Storage CLIMs connect to slots 4 through 9 of the p-switch:
Processor Switches to IOAM Enclosures
Each p-switch (for the X or Y ServerNet fabric) has up to six I/O PICs. One I/O PIC is required for each IOAM enclosure in the system, allowing up to six IOAM enclosures in the system.
FCSA to Fibre Channel Disk Modules
See “Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module” (page 96).
FCSA to Tape Devices
Fibre Channel tape devices can be connected directly to an FCSA in an IOAM enclosure. Integrity NonStop NS16000 series systems do not support SCSI buses or adapters to connect tape devices. However, SCSI tape drives can be connected through a Fibre Channel to SCSI converter (model M8201R).
This illustration shows an example communication configuration of a table-top tape drive with ACL to an FCSA via an M8201R Fibre Channel to SCSI router: With a tape drive connected to a server, you can use the BACKUP and RESTORE utilities to save data to and restore data from tape. Storage CLIM Devices The NonStop NS16000 series server uses the rack-mounted SAS disk enclosure and its SAS disk drives, which are controlled through the Storage CLIM.
NOTE: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. This illustration shows the locations of the hardware in the SAS disk enclosure as well as the I/O modules on the rear of the enclosure for connecting to the Storage CLIM.
SAS disk enclosures connect to Storage CLIMs via SAS cables. For details on cable types, refer to Appendix A (page 150). Factory-Default Disk Volume Locations for SAS Disk Devices This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate disk enclosures: SAS Ports to SAS Disk Enclosures SAS disk enclosures can be connected directly to the HBA SAS ports on a Storage CLIM. A Storage CLIM pair supports a maximum of four SAS disk enclosures.
Use only the supported configurations as described below.
Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosures This illustration shows example cable connections for the four DL385 G2 or G5 Storage CLIM, four MSA70 SAS disk enclosures configuration: Figure 16 Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosure Configuration This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL385 G2 or G5 Storage CLIMs and four MSA70 SAS disk enclosures.
DL380 G6 Storage CLIM and SAS Disk Enclosure Configurations • “Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosures” (page 85) • “Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures” (page 87) • “Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures” (page 87) • “Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosures” (page 88) Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosures This illustration shows example cable connections for the two DL380 G6 Storage CLIM, two D27
Figure 17 Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosure Configuration This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of two DL380 G6 Storage CLIMs and two D2700 SAS disk enclosures.
Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures
This illustration shows example cable connections for the two DL380 G6 Storage CLIM, four D2700 SAS disk enclosures configuration. This configuration uses two SAS HBAs in slots 2 and 3 of each DL380 G6 Storage CLIM.
Figure 19 Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosure Configuration This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL380 G6 Storage CLIMs and four D2700 SAS disk enclosures.
Figure 20 Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosure Configuration This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL380 G6 Storage CLIMs and eight D2700 SAS disk enclosures.
P-Switch to NonStop S-Series I/O Enclosure Cabling Each NonStop S-series I/O enclosure uses one port of one PIC in each of the two p-switches for ServerNet connection. If no IOAM enclosure is installed in the system, up to 24 NonStop S-series I/O enclosures can be connected to an Integrity NonStop NS16000 series system through these ServerNet links.
These restrictions or requirements apply when integrating NonStop S-series I/O enclosures into an Integrity NonStop NS16000 series system: • Only NonStop S-series I/O enclosures equipped with IOMF 2 CRUs can be connected to an Integrity NonStop NS16000 series system. The IOMF 2 CRU must have an MMF PIC installed. NonStop S-series system enclosures can be converted to NonStop S-series I/O enclosures, replacing the two PMF CRUs in each enclosure with IOMF 2 CRUs.
IOAM Enclosure and Disk Storage Considerations
When deciding between one IOAM enclosure or two (or more), consider:
• One IOAM enclosure: high-availability and fault-tolerant attributes comparable to NonStop S-series systems with I/O enclosures using tetra-8 and tetra-16 topologies.
• Two IOAM enclosures: greater availability because of multiple redundant ServerNet paths and FCSAs.
This illustration shows the locations of the hardware in the Fibre Channel disk module as well as the Fibre Channel port connectors at the back of the enclosure:
Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop (FC-AL) cables.
and disk drives always have a fixed logical location with standardized location IDs of group-module-slot. Only the group number changes, as determined by the enclosure position in the ServerNet topology. However, Integrity NonStop NS16000 series systems have no fixed boundaries for the hardware layout. Up to 60 FCSAs (or 120 ServerNet addressable controllers) and 240 Fibre Channel disk enclosures can be installed, with identification depending on the ServerNet connection of the IOAM and the slot housing the FCSAs.
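These limits are internally consistent, as the short check below shows. The decomposition of the 240-enclosure figure into 60 dual-pathed loops of four daisy-chained modules is our inference from the daisy-chaining rules later in this section, not a statement from the text.

```python
# Cross-checking the stated limits: 6 IOAM enclosures x 2 modules x 5 FCSA
# slots = 60 FCSAs, each with 2 F-SACs = 120 controllers. 240 FCDMs matches
# 60 dual-pathed loops of up to 4 daisy-chained FCDMs (an inference, not a
# figure stated in the text).
ioam_enclosures, modules_per_enclosure, fcsa_slots = 6, 2, 5
fcsas = ioam_enclosures * modules_per_enclosure * fcsa_slots
controllers = fcsas * 2          # two F-SACs per FCSA
loops = controllers // 2         # each loop uses one controller per fabric
print(fcsas, controllers, loops * 4)   # 60 120 240
```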
four, the remaining FCSAs and Fibre Channel disk modules can be configured in other fault-tolerant configurations, such as two FCSAs with two Fibre Channel disk modules or four FCSAs with three Fibre Channel disk modules.
• In systems with one IOAM enclosure:
◦ With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in module 2 of the IOAM enclosure, and the backup FCSA resides in module 3.
Two FCSAs, Two FCDMs, One IOAM Enclosure This illustration shows example cable connections between the two FCSAs and the primary and mirror Fibre Channel disk modules: This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of two FCSAs, two Fibre Channel disk modules, and one IOAM enclosure: Disk Volume Name FCSA GMSP Disk GMSB* $SYSTEM (primary) 110.2.1.1 and 110.3.1.1 110.211.
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of four FCSAs, four Fibre Channel disk modules, and one IOAM enclosure: Disk Volume Name FCSA GMSP Disk GMSB1 $SYSTEM (primary 1) 110.2.1.1 and 110.3.1.1 110.211.101 $DSMSCM (primary 1) 110.2.1.1 and 110.3.1.1 110.211.102 $AUDIT (primary 1) 110.2.1.1 and 110.3.1.1 110.211.103 $OSS (primary 1) 110.2.1.1 and 110.3.1.
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of two FCSAs, two Fibre Channel disk modules, and two IOAM enclosures: Disk Volume Name FCSA GMSP Disk GMSB1 $SYSTEM (primary 1) 110.2.1.1 and 111.2.1.1 110.211.101 $DSMSCM (primary 1) 110.2.1.1 and 111.2.1.1 110.211.102 $AUDIT (primary 1) 110.2.1.1 and 111.2.1.1 110.211.103 $OSS (primary 1) 110.2.1.1 and 111.2.1.1 110.
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of four FCSAs, four Fibre Channel disk modules, and two IOAM enclosures: Disk Volume Name FCSA GMSP Disk GMSB* $SYSTEM (primary) 110.2.1.1 and 111.2.1.1 110.211.101 $DSMSCM (primary) 110.2.1.1 and 111.2.1.1 110.211.102 $AUDIT (primary) 110.2.1.1 and 111.2.1.1 110.211.103 $OSS (primary) 110.2.1.1 and 111.2.1.1 110.211.
Daisy-Chained Disks Recommended / Daisy-Chained Disks Not Recommended
Requirements for daisy-chaining1:
• An FCSA for each Fibre Channel loop is installed in a different IOAM module, for fault tolerance.
• Two Fibre Channel disk modules minimum, and four Fibre Channel disk modules maximum, per daisy chain.
1 See “Fibre Channel Devices” (page 92).
Disk Volume Name FCSA GMSP Disk GMSB* $OSS 110.2.1.1 and 110.3.1.1 110.211.104 * For an illustration of the factory-default slot locations for a Fibre Channel disk module, see “Factory-Default Disk Volume Locations” (page 94).
This illustration shows the factory-default locations for the configurations of four FCSAs and three Fibre Channel disk modules where the primary system file disk volumes are in Fibre Channel disk module 1: This illustration shows the factory-default locations for the configurations of four FCSAs with three Fibre Channel disk modules where the mirror system file disk volumes are in Fibre Channel disk module 3: Ethernet to Networks Depending on your configuration, Gigabit Ethernet connectivity is provided
Telco CLIM Ethernet Interfaces The Telco CLIM Ethernet interfaces are five Ethernet copper ports identical to the IP CLIM option 1 configuration. For illustrations showing the Ethernet interfaces and ServerNet fabric connections on DL385 G2 or G5 and DL380 G6 Telco CLIMs, see “Telco CLuster I/O Module (CLIM)” (page 119). All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.
Default Naming Conventions With a few exceptions, default naming conventions are not necessary for the modular resources that make up Integrity NonStop NS16000 series systems. In most cases, users can name their resources at will and use the appropriate management applications and tools to find the location of the resource. However, default naming conventions for certain resources simplify creation of the initial configuration files and automatic generation of the names of the modular resources.
1 For more information about CLIM processes that use the CIP subsystem and the naming conventions for these processes, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. On new NonStop systems, only one of each of these processes and names is configured: • TCP6SAM - $ZTC0 • Telserv - $ZTCN0 • Listener - $LNS0 No TFTP or WANBOOT process is configured for new NonStop systems.
5 Modular System Hardware This section describes the hardware used in Integrity NonStop NS16000 series systems: • “NonStop Blade Element” • “Logical Synchronization Unit (LSU)” • “Processor Switch” • “CLuster I/O Modules (CLIMs)” (page 115) • “Serially Attached SCSI (SAS) Disk Enclosure” (page 124) • “I/O Adapter Module (IOAM) Enclosure and I/O Adapters” (page 125) • “Fibre Channel Disk Module” (page 130) • “Tape Drive and Interface Hardware” (page 130) • “Maintenance Switch (Ethernet)” (pa
The NonStop Blade Element midplane for logic interconnection and power distribution, which is part of the chassis assembly, is not a FRU. Two NonStop Blade Elements provide up to four processor elements in a high-availability duplex configuration, and eight NonStop Blade Elements provide a full 16-processor duplex system. For a fault-tolerant triplex system, three NonStop Blade Elements provide four processors, and 12 NonStop Blade Elements provide a full 16-processor triplex system.
HP recommends that you use a physically sequential order of slots for fiber-optic cable connections on the LSU and that you do not randomly mix the LSU slots. Cable connections to the LSU have no bearing on the NonStop Blade Complex number, but HP also recommends that you connect NonStop Blade Element A to the NonStop Blade Element A connection on the LSU. Optic cable connections to the p-switch PICs determine the identification numbers of each NonStop Blade Complex.
Locator LED:
• Off: NonStop Blade Element is available for normal operation.
• Flashing blue: System locator is activated.
Logical Synchronization Unit (LSU)
The LSU is the heart of both the high-availability duplex NonStop Blade Complex and the fault-tolerant triplex NonStop Blade Complex. In Integrity NonStop NS16000 series systems, each LSU is associated with only one logical processor.
This illustration shows an example LSU configuration as viewed from the front of the enclosure and equipped with four LSU logic boards in positions 50 through 53:
LSU Indicator LEDs
LSU optics adapter PIC (green LED): Green = power is on and the LSU is available for normal operation; Off = power is off.
LSU optics adapter PIC (amber LED): Amber = power-on is in progress, the board is being reset, or a fault exists; Off = normal operation or powered off.
LSU logic board PIC (amber LED): Amber = power-on is in progress, the board is being reset, or a fault exists; Off = normal operation or powered off.
Processor Switch
The processor switch, or p-switch, provides the first level of ServerNet fabric interconnect for the Integrity NonStop NS16000 series processors. ServerNet connection from the LSU also defines the ID numbers, 0 through 15, for the logical processors within the system.
• Fans (2) • 20-character by 2-line liquid crystal display (LCD) for configuration information: ◦ IP address ◦ Group-module-slot ◦ Cabinet name and offset ◦ ME firmware revision ◦ Field programmable gate array (FPGA) firmware revision Each p-switch is the ServerNet fabric (X or Y) hub for all local and remote ServerNet connections.
P-Switch Indicator LEDs LED State Meaning All PICs Green Power is on with PIC available for normal operation. Off Power is off. Amber A fault exists. Off Normal operation or powered off. Green ServerNet link is functional. Off ServerNet link is not functional. Messages Status messages are displayed.
CLuster I/O Modules (CLIMs) CLIMs are rack-mounted servers that can function as ServerNet Ethernet or I/O adapters. The CLIM complies with Internet Protocol version 6 (IPv6), an Internet Layer protocol for packet-switched networks, and has passed official certification of IPv6 readiness. Two models of base servers are used for CLIMs. You can determine a CLIM's model by looking at the label on the back of the unit (behind the cable arm).
although it is not the PID. The same number is listed as the part number in OSM. Below is the mapping for CLIM models and earliest supported RVUs: Table 5 CLIM Models and RVU Requirements Model Name on Label DL385 G2 or G5 414109-B21 or 453060-B21 Earliest Supported RVU For IP CLIMs: H06.16 and later RVUs For Telco CLIMs: H06.18 and later RVUs For Storage CLIMs: H06.20 and later RVUs DL380 G6 494329-B21 For IP CLIMs, either: • H06.17 through H06.
HP standard Gigabit Ethernet Network Interface Cards (NICs) to implement one of these IP CLIM configurations: • “DL385 G2 or G5 IP CLIM Option 1 — Five Ethernet Copper Ports” (page 118) • “DL385 G2 or G5 IP CLIM Option 2 — Three Ethernet Copper and Two Ethernet Optical Ports” (page 119) • “DL380 G6 IP CLIM Option 1 — Five Ethernet Copper Ports” (page 119) • “DL380 G6 IP CLIM Option 2 — Three Ethernet Copper and Two Ethernet Optical Ports” (page 119) These illustrations show the Ethernet interfaces an
NOTE: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.
DL385 G2 or G5 IP CLIM Option 2 — Three Ethernet Copper and Two Ethernet Optical Ports IP CLIM Port or Slot Eth 1 port Provides One copper 1 Gb Ethernet interface via embedded Gigabit NIC Slot 1 Two copper 1 Gb Ethernet interfaces via PCIe Gigabit NIC Slot 2 One 1 Gb Ethernet optical port via PCIe Gigabit NIC Slot 3 ServerNet fabric connections via a PCIe 4x adapter Slot 4 One 1 Gb Ethernet optical port via PCIe Gigabit NIC Slot 5 Empty DL380 G6 IP CLIM Option 1 — Five Ethernet Copper Ports IP C
These illustrations show the Ethernet interfaces and ServerNet fabric connections on a DL385 G2 or G5 and DL380 G6 Telco CLIM. For illustrations of the fronts of these CLIMs, see “CLuster I/O Modules (CLIMs)” (page 115).
DL380 G6 Telco CLIM — Five Ethernet Copper Ports Telco CLIM Port or Slot Provides Eth 1 port One copper 1 Gb Ethernet interface via embedded Gigabit NIC Eth 2 port One copper 1 Gb Ethernet interface via embedded Gigabit NIC Eth 3 port One copper 1 Gb Ethernet interface via embedded Gigabit NIC Slot 1 ServerNet fabric connections via a PCIe 4x adapter Slot 2 Two 1 Gb Ethernet copper interfaces via PCIe Gigabit NIC Slot 3 Empty Slot 4 Empty Slot 5 Empty IB CLuster I/O Module (CLIM) (Optional
IB CLIM Port or Slot Description Eth 1, Eth 2, Eth 3 ports Each Eth port provides one 1 Gb Ethernet copper interface via embedded Gigabit NIC Slot 1 ServerNet fabric connections via a PCIe 4x adapter Slot 2 and Slot 3 Unused Slot 4 Two InfiniBand interfaces (ib0 and ib1 ports) via the IB HCA card. Only one IB interface port is utilized by the Informatica software. HP recommends connecting to the ib0 interface for ease of manageability.
patch panel simplifies and organizes the cable connections to allow easy access to the CLIM's customer-usable interfaces. IP and Telco CLIMs each have five customer-usable interfaces. The patch panel connects these interfaces and brings the usable interface ports to the patch panel. Each Ethernet patch panel has 24 slots, is 1U high, and should be the topmost unit to the rear of the rack. Each Ethernet patch panel can handle cables for up to five CLIMs. It has no power connection.
DL385 G2 or G5 Storage CLIM
The DL385 G2 or G5 Storage CLIMs contain 5 PCIe HBA slots with these characteristics:
• Slot 1: Optional customer order; SAS or Fibre Channel. Not part of the base configuration.
• Slot 2: Optional customer order; SAS or Fibre Channel. Not part of the base configuration.
• Slot 3: Part of the base configuration; ServerNet fabric connections via a PCIe 4x adapter.
Two models of SAS disk enclosures are supported. This table describes the SAS disk enclosure models and shows compatibility between the CLIM models (DL385 G2 or G5, DL380 G6) and the SAS disk enclosure models, including whether SAS disk enclosures can be daisy-chained:
• MSA70: HP Storage 70 Modular Smart Array enclosure, holding 25 2.5-inch
IOAM Enclosure
An IOAM enclosure is 11U high. Each enclosure contains:
• Four power supplies with dual AC input
• Four fans that provide front-to-rear cooling
• Two ServerNet switch boards (for the X fabric and the Y fabric) for ServerNet routing to the ServerNet adapters in the IOAM enclosure.
Because the IOAM enclosure contains two modules, one IOAM enclosure can provide fault-tolerant paths. However, the paths must be configured through both modules. For example, you could configure these paths through four FCSAs:
• Primary: group 110, module 02, slot 02
• Primary backup: group 110, module 03, slot 02
• Mirror: group 110, module 03, slot 04
• Mirror backup: group 110, module 02, slot 04
These paths all exist within the same group, but they are divided between the two modules, so the configuration is fault-tolerant.
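A configuration-review script can encode this rule. Below is a minimal sketch in Python; the data layout is ours, and the check simply verifies that the four paths are divided between modules 2 and 3 as described above.

```python
# Minimal sketch: a path set is fault-tolerant here only if its paths are
# divided between both IOAM modules (2 and 3) of the group.
paths = {
    "primary":        (110, 2, 2),   # (group, module, slot)
    "primary_backup": (110, 3, 2),
    "mirror":         (110, 3, 4),
    "mirror_backup":  (110, 2, 4),
}

def is_fault_tolerant(paths):
    modules = {module for (_group, module, _slot) in paths.values()}
    return modules == {2, 3}

print(is_fault_tolerant(paths))  # True
```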
ServerNet Switch Board LED State Meaning Reset Amber Board is in logic reset Off Normal operation or powered off.
FCSAs are installed in pairs and can reside in slots 1 through 5 of either IOAM (module 2 or 3) in an IOAM enclosure. The pairs can be installed one each in the two IOAM modules in the same IOAM enclosure, or the pair can be installed one each in different IOAM enclosures. The FCSA allows either a direct connection to an Enterprise Storage System (ESS) or connection through a storage area network. For detailed information on the FCSA, see the Fibre Channel ServerNet Adapter Installation and Support Guide.
A G4SA can be configured as: • Two 10/100/1000 Mbps copper ports and two 10/100 Mbps copper ports • Two 10/100/1000 Mbps multimode fiber-optic ports and two 10/100 Mbps copper ports A G4SA complies with the 1000 Base-T standard (802.3ab), 1000 Base-SX standard (802.3z), and these Ethernet LANs: • 802.3 (10 Base-T) • 802.1Q (VLAN tag-aware switch) • 802.3u (Auto negotiate) • 802.3x (Flow control) • 802.
in the p-switches and IOAM enclosure, and optional UPS and the system console running HP NonStop Open System Management (OSM). The maintenance switch includes enough ports to support multiple systems. The maintenance switch mounts in the modular cabinet, but no restrictions exist for its placement. This illustration shows an example of two maintenance switches installed in the top of a cabinet: Each system requires multiple connections to the maintenance switch.
This illustration shows the location of an R5000 UPS and an ERM in a modular cabinet: This illustration shows the location of an R5500 XR UPS and an ERM in a modular cabinet: For power and environmental requirements, planning, installation, and emergency power-off (EPO) instructions for the UPS, refer to the documentation shipped with the UPS. System Console A system console is an HP approved personal computer (PC) running maintenance and diagnostic software for NonStop systems.
NOTE: The NonStop system console must be configured with some ports open. For more information, see the NonStop System Console Installer Guide. For more information on the system console, refer to “System Console” (page 147). Enterprise Storage System An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and a disk cache in one or more standalone cabinets.
Refer to the documentation that accompanies the ESS. NonStop S-Series I/O Enclosure NonStop S-series I/O enclosures equipped with model 1980 I/O multifunction 2 customer replaceable units (IOMF 2 CRUs) can be connected to the NonStop NS16000 series server via fiber-optic ServerNet cables and the processor switch (p-switch).
6 Hardware Configurations
Minimum and Maximum Hardware Configuration
This table shows the minimum, typical, and maximum number of the modular components installed in a system. These values might not reflect the system you are planning and are provided only as an example, not as exact values.
Enclosure or Component: Duplex Processor (minimum, maximum); Triplex Processor (minimum, maximum)
• 4-processor NonStop Blade Element with 16 DIMMs: 2, 8; 3, 12
• 4-GB memory quad: 32; 6, 48
• Processor board with two 1.
7 Maintenance and Support Connectivity Local monitoring and maintenance of the Integrity NonStop NS16000 series system occurs over the dedicated service LAN. The dedicated service LAN provides connectivity between the system console and the maintenance infrastructure in the system hardware. Remote support is provided in conjunction with OSM, which runs on the system console and communicates with the chosen remote access solution.
Basic LAN Configuration
A basic dedicated service LAN that does not provide a fault-tolerant configuration requires connection of these components to the ProCurve maintenance switch installed in the modular cabinet:
• One connection for each system console running OSM
• One connection to the processor switch on the X fabric for OSM Low-Level Link to a down system
• One connection to the processor switch on the Y fabric for OSM Low-Level Link to a down system
• One connection to the maintenance interf
Fault-Tolerant Configuration Your HP-authorized service provider configures the dedicated service LAN as described in the NonStop Dedicated Service LAN Installation and Configuration Guide. HP recommends that you use a fault-tolerant LAN configuration.
• If G4SAs are used to configure the maintenance LAN, connect the G4SA that configures $ZTCP0 to one maintenance switch, and connect the other G4SA that configures $ZTCP1 to the second maintenance switch.
located in the Service Information section of the Service Procedures collection of NTL. For details, see: • Changing the DHCP, DNS, or BOOTP Server from CLIMs to System Consoles • Changing the DHCP, DNS, or BOOTP Server from System Consoles to CLIMs CAUTION: You must have only two sources of these services in the same dedicated service LAN. If these services are installed on any other sources, they must be disabled.
Component Location Default IP Address CLIM at 100.2.5.2 192.231.36.52 CLIM at 100.2.5.3 192.231.36.53 CLIM at 100.2.5.4 192.231.36.54 CLIM at 100.2.6.1 192.231.36.61 CLIM at 100.2.6.2 192.231.36.62 CLIM at 100.2.6.3 192.231.36.63 CLIM at 100.2.6.4 192.231.36.64 CLIM at 100.2.7.1 192.231.36.71 CLIM at 100.2.7.2 192.231.36.72 CLIM at 100.2.7.3 192.231.36.73 CLIM at 100.2.7.4 192.231.36.74 CLIM at 100.2.8.1 192.231.36.81 CLIM at 100.2.8.2 192.231.36.82 CLIM at 100.2.8.3 192.231.
Component Location Default IP Address TCP/IP processes for OSM Service Connection: $ZTCP0 192.231.36.10 255.255.255.0 subnet mask $ZTCP1 192.231.36.11 255.255.255.0 subnet mask Ethernet Cables Ethernet connections for a dedicated service LAN require Category 6 (CAT 6) unshielded twisted-pair cables. Category 5e (CAT 5e) cable is also acceptable. SWAN Concentrator Restriction • Isolate the dedicated service LAN from any ServerNet wide area network (SWAN) on a system.
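One pattern worth noting from the table of default IP addresses above: for a CLIM at GMS location 100.2.slot.port, the final octet of the default 192.231.36.x address appears to be the slot digit followed by the port digit. The sketch below encodes that observation; it is a pattern read from the table, not a documented rule, so verify addresses against your configuration records.

```python
# Observation from the default-address table above (not a documented rule):
# a CLIM at GMS location 100.2.<slot>.<port> defaults to 192.231.36.<slot><port>.
def default_clim_ip(slot, port):
    return f"192.231.36.{slot}{port}"

print(default_clim_ip(5, 2))  # 192.231.36.52, matching "CLIM at 100.2.5.2"
print(default_clim_ip(8, 2))  # 192.231.36.82, matching "CLIM at 100.2.8.2"
```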
GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration Subnet: %hFFFFFF00 Hostname: osmlanx 110.3.5 G11035.0.A L1103R $ZTCP1 IP: 192.231.36.11 Subnet: %hFFFFFF00 Hostname: osmlany NOTE: For a fault-tolerant dedicated service LAN, two G4SAs are required, with each G4SA connected to a separate maintenance switch. These G4SA can reside in modules 2 and 3 of the same IOAM enclosure or in module 2 of one IOAM enclosure and module 3 of a second IOAM enclosure.
GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration %hFFFFFF00 Hostname: osmlany Dedicated Service LAN Links to Two IOAM Enclosures This illustration shows dedicated service LAN cables connected to G4SAs in two IOAM enclosures and to the maintenance switch: The values in this table show the identification for the G4SAs in the preceding illustration: GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration 110.2.5 G11025.0.
Dedicated Service LAN Links With IOAM Enclosure and NonStop S-Series I/O Enclosure This illustration shows dedicated service LAN cables connected to a G4SA in an IOAM enclosure and at least one NonStop S-series Ethernet adapter (E4SA, FESA, or GESA) in a NonStop S-series I/O enclosure (module 12 in this example) and to the maintenance switch: In this example, the G4SA in module 2 of the IOAM enclosure connects to the X ServerNet fabric while the adapter in the NonStop S-series I/O enclosure connects to the
This configuration can be used in cases where a NonStop NS-series system does not have an IOAM enclosure and only the NonStop S-series I/O enclosure provides the system I/O connections and mass storage:
GMS for G4SA Location in IOAME   G4SA PIF    G4SA LIF   TCP/IP Stack   IP Configuration
11.1.53                          E1153.0.A   L118       $ZTCP0         IP: 192.231.36.10, GW: 192.231.36.9, Subnet: %hFFFFFF00, Hostname: osmlanx
11.1.54                          E1154.0.A   L11C       $ZTCP1         IP: 192.231.36.11, GW: 192.231.36.…
Additional Configuration for OSM
If you are using the OSM Notification Director for remote support services or require a faster connection to the OSM Service Connection, see Configuring Additional TCP/IP Processes for OSM Connectivity in the OSM Migration and Configuration Guide.
System Console
New system consoles are preconfigured with the required HP and third-party software. When upgrading to the latest RVU, you can install software upgrades from the HP NonStop System Console Installer DVD image.
One System Console Managing One System (Setup Configuration)
The single NonStop system console on the LAN must be configured as the primary system console. This configuration, called the setup configuration, is used during initial setup and installation of the system console and the server. The setup configuration is an example of a secure, stand-alone network, as shown in “Basic LAN Configuration” (page 137).
NOTE: Do not perform the next two tasks if your backup system console shipped with a new NonStop NS16000 series system. In this case, HP has already configured these items for you.
• Change the preconfigured DHCP configuration of the backup system console before you add it to the LAN.
• Change the preconfigured IP address of the backup system console before you add it to the LAN.
A Cables
Cable Types, Connectors, Lengths, and Product IDs
TIP: Although a considerable distance can exist between the modular enclosures in the system, HP recommends placing all cabinets adjacent to each other and bolting them together, keeping the cable length between enclosures as short as possible.
Available cables and their lengths are listed in the following table.
Connection From...                                         Cable Type   Connectors             Lengths in meters (Product IDs)
DL385 G5 Storage CLIM SAS HBA port to SAS tape
(carrier-grade tape only)                                  N.A.         SFF-8470 to SFF-8470   2 (M8908-2), 4 (M8908-4)
Internal power-on cable between p-switches and
IOMF 2 CRU                                                 …            RJ11 to RJ45           2 (542380-001), 10 (542381-001)
…                                                          …            …                      2 (M8921-2), 5 (M8921-5), 10 (M8921-10), 25 (M8921-25), 40 (M8921-40), 80 (M8921-80), 100 (M8921-100)
…                                                          …            …                      2 (M8922-2), 5 (M8922-5), 10 (M8922-10), 25 (M8922-25), 40 (M8922-40), 80 (M8922-80)
…                                                          …            …                      0.6 (M8926-02), 1.2 (M8926-04), 1.5 (M8926-05), 1.…
Connection From...                                              Cable Type        Connectors
… SAS disk enclosure (daisy-chain)                              …                 …
Storage CLIM FC HBA to: ESS, FC tape                            MMF               LC-LC
FCSA in IOAM enclosure to: ESS, FC switch                       MMF               LC-LC
DL380 G6 IB CLIM HCA port to customer-supplied IB switch        Copper or Fiber   …
ETH1 port on Storage CLIM with encryption to
customer-supplied switch                                        CAT 6 UTP         …
Connection From...                 Cable Type    Connectors        Lengths in meters (Product IDs)
Maintenance LAN interconnect       CAT 5e UTP    RJ-45 to RJ-45    1.5 (M8926-05), 3 (M8926-10), 4.6 (M8926-15), 7.6 (M8926-25)
(GESA external ports support CAT 5e or CAT 6 cables that you provide.)
Maintenance LAN interconnect       CAT 6 UTP     RJ-45 to RJ-45    0.6 (M8926-02), 1.2 (M8926-04), 1.5 (M8926-05), 1.8 (M8926-06), 2.1 (M8926-07), 2.4 (M8926-08), 2.7 (M8926-09), 3 (M8926-10), 4.…
(GESA external ports support CAT 5e or CAT 6 cables that you provide.)
B Operations and Management Using OSM Applications
OSM server-based components are incorporated in a single OSM server-based SPR, T0682 (OSM Service Connection Suite), that is installed on Integrity NonStop NS16000 series servers running the HP NonStop operating system. For information on how to install, configure, and start OSM server-based processes and components, see the OSM Migration and Configuration Guide.
Using OSM for Down-System Support
In Integrity NonStop NS16000 series systems, the maintenance entity (ME) in the p-switches provides dedicated service LAN services via the OSM Low-Level Link for OS coldload, system management, and hardware configuration when hardware is powered up but the OS is not running. LAN connections run directly from the maintenance PIC in slot 1 of the p-switch to the maintenance switch installed in the modular cabinet.
The actions that OSM takes next are directly tied to that specified ride-through time:
• If AC power is restored before the ride-through period ends, the ride-through countdown terminates and OSM does not initiate a controlled shutdown of I/O operations and processors.
NOTE: OSM does not make dynamic computations based on the remaining capacity of the rack-mounted UPS; the ride-through time is statically configured in SCF for OSM use. For example, if power is restored before the shutdown is initiated but fails again shortly afterward, the UPS has been partially depleted and might not support the full ride-through time until it is fully recharged. OSM does not account for multiple power failures that occur within the recharge time of the rack-mounted UPS.
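The ride-through behavior described above amounts to a static countdown. The sketch below is illustrative Python, not OSM code; ac_power_ok stands in for whatever mechanism reports AC status, and ride_through_secs represents the value statically configured in SCF.

    import time

    def ride_through(ac_power_ok, ride_through_secs: int) -> str:
        """Static countdown: AC restore cancels the shutdown; expiry triggers it.

        No attempt is made to model remaining UPS capacity, matching the
        static behavior described in the note above.
        """
        deadline = time.monotonic() + ride_through_secs
        while time.monotonic() < deadline:
            if ac_power_ok():
                return 'AC restored: countdown cancelled, no shutdown initiated'
            time.sleep(1)                  # poll AC status once per second
        return 'ride-through expired: begin controlled shutdown of I/O and CPUs'

    # Example: simulate a 30-second ride-through with power never restored.
    print(ride_through(lambda: False, ride_through_secs=30))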
C Default Startup Characteristics
NOTE: The configurations documented here are typical for most sites. Your system load paths might differ, depending on how your system is configured. To determine the configuration of your system, refer to the system attributes in the OSM Service Connection. You can open this view from the System Load dialog box in the OSM Low-Level Link.
Load Path   Description     Source Disk   Destination Processor   ServerNet Fabric
15          Mirror backup   $SYSTEM-M     1                       X
16          Mirror backup   $SYSTEM-M     1                       Y
This illustration shows the system load paths:
The command interpreter input file (CIIN) is automatically invoked after the first processor is loaded. The CIIN file shipped with new systems contains the TACL RELOAD * command, which loads the remaining processors.
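The 16 default load paths follow a regular pattern: four $SYSTEM disk paths, two destination processors, and two ServerNet fabrics. The Python sketch below enumerates that cross-product for reference; it is a mnemonic, not a specification, and the numbering on your system might differ.

    from itertools import product

    # Four disk paths x two destination processors x two ServerNet fabrics.
    # Per the table above, the mirror paths read from $SYSTEM-M.
    disk_paths = ('Primary', 'Backup', 'Mirror', 'Mirror backup')

    for n, (disk, cpu, fabric) in enumerate(
            product(disk_paths, (0, 1), ('X', 'Y')), start=1):
        print(f'{n:2}  {disk:14} processor {cpu}  fabric {fabric}')

    # Paths 15 and 16 print as Mirror backup, processor 1, fabrics X and Y,
    # matching the last two rows of the preceding table.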
D NonStop S-Series Systems: Connecting to or Migrating From
Topics described in this appendix are:
• “Connecting to NonStop S-Series I/O Enclosures” (page 161)
• “Migrating From a NonStop S-Series System to a NonStop NS16000 Series System” (page 162)
Connecting to NonStop S-Series I/O Enclosures
NOTE: For NonStop S-series I/O enclosure group numbers, refer to “NonStop S-Series I/O Enclosure Group Numbers” (page 29).
Each p-switch (for the X or Y fabric) has up to six I/O PICs, with one I/O PIC required for each IOAM enclosure in the system. Each NonStop S-series I/O enclosure uses one port on an I/O PIC, so a maximum of 24 NonStop S-series I/O enclosures can be connected to an Integrity NonStop NS16000 series system if no IOAM enclosure is installed.
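As a quick sanity check of those limits, the arithmetic can be spelled out in a few lines of Python. The four-ports-per-PIC figure is inferred from the stated maximum of 24 rather than quoted from a specification.

    IO_PICS_PER_PSWITCH = 6   # up to six I/O PICs per p-switch
    PORTS_PER_PIC = 4         # inferred: 24 enclosures / 6 PICs

    max_s_series_enclosures = IO_PICS_PER_PSWITCH * PORTS_PER_PIC
    assert max_s_series_enclosures == 24  # matches the stated maximum
    print('Maximum S-series I/O enclosures:', max_s_series_enclosures)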
Migration Considerations
• SQL/MP objects that are present on disks installed in a NonStop S-series I/O enclosure are not immediately usable after the enclosure is connected to an Integrity NonStop NS16000 series system. The file labels and catalogs must be updated to reflect the new system name and number. You can use the SQLCI MODIFY command to update SQL/MP objects. Refer to the migration information contained in the SQL Supplement for H-Series RVUs.
• Interactive Upgrade Guide 2
• If you are moving a NonStop S-series I/O enclosure from a NonStop S-series system to an Integrity NonStop NS16000 series system and want to migrate the data online, you can perform a migratory revive if:
◦ Your data is mirrored.
◦ You have another NonStop S-series system or NonStop S-series I/O enclosure connected to the NonStop S-series system.
Index
Symbols
$SYSTEM disk locations, 159
A
AC current calculations, 62
AC power
  enclosure input specifications, 54
  feed, top or bottom, 21
  input, 38, 43, 48
  power-fail monitoring, 156
  power-fail states, 158
  unstrapped PDU, 54
AC power feed
  bottom of cabinet, 39, 44, 49
  modular three-phase, 48
  monitored single-phase, 38
  monitored three-phase, 43
  top of cabinet, 38, 43, 49
  with cabinet UPS, 39, 40, 44, 45, 50, 51
air conditioning, 34
air filters, 35
B
branch circuit, 42, 47, 53
C
cabinet, 23
cabinet dimen…
E
electrical disturbances, 33
electrical power loading, 55
emergency power off (EPO) switches
  HP 5000 UPS, 32
  HP 5500 XR UPS, 32
  Integrity NonStop NS16000 series servers, 32
  NonStop S-series I/O enclosure, 32
enclosure
  combinations, 22
  dimensions, 58
  height in U, 57
  minimum, typical, maximum number, 135
  power loading, 55
  types, 20
  weight, 59
enclosure height in U, 57
enclosure location, 135
Enterprise Storage System (ESS), 133
environmental monitoring unit, 130
example system configurations, 62
  duplex, 21,…
L
labeling, optic cables, 66
LAN
  fault-tolerant maintenance, 137
  non-fault-tolerant maintenance, 137
  service, G4SA PIF, 143
  service, IP CLIM, 142
LCD
  IOAM switch boards, 126
  p-switch, 113
load
  operating system paths, 159
Low Latency Solution, 121
LSU
  description, 23, 110
  FRUs, 110
  function and features, 110
  indicator LEDs, 111
  logic boards, 111
M
M8201R Fibre Channel to SCSI router, 80
maintenance PIC, 112
maintenance switch, 130
manuals
  software migration, 163
memory board, FRU, 107
metallic particulate c…
R
R5000 UPS, 131
rack, 23
rack offset, 23, 24
raised flooring, 35
receive and unpack, 36
receptacles, PDU, 37
reintegration board, 107
related manuals
  software migration, 163
restrictions
  cable length, 69
  cabling NonStop S-series I/O enclosure, 91
  Fibre Channel device configuration, 95
  p-switch cabling, 77, 78
  system disk location, 159
T
tech doc, factory installed hardware, 31
Telco CLIM, 119
  see also CLIMs
Terminal Emulator File Converter, 155
terminology, 23
Three-Phase Modular INTL PDU
  input characte…