HP Integrity NonStop NS1000 Planning Guide

Abstract
This guide describes the HP Integrity NonStop™ NS1000 system and provides examples of system configurations to assist in planning for installation of a new system.

Product Version
N.A.

Supported Release Version Updates (RVUs)
This publication supports H06.06 and all subsequent H-series RVUs until otherwise indicated by its replacement publication.
Document History

Part Number   Product Version   Published
542527-001    N.A.              March 2006
542527-002    N.A.              May 2006
What’s New in This Manual

Manual Information
HP Integrity NonStop NS1000 Planning Guide

Abstract
This guide describes the HP Integrity NonStop™ NS1000 system and provides examples of system configurations to assist in planning for installation of a new system.

Product Version
N.A.

Supported Release Version Updates (RVUs)
This publication supports H06.06 and all subsequent H-series RVUs until otherwise indicated by its replacement publication.

Document History

Part Number   Product Version   Published
542527-001    N.A.              March 2006
542527-002    N.A.              May 2006

New and Changed Information

Section and Title   Changes
Manual-wide         Editorial corrections. Added support for the optional HP R5500 XR UPS. Added notes describing use of HP extension bars instead of one PDU when the system includes a rackmounted HP R5500 XR UPS.
Section 4, Integrity NonStop NS1000 System Description   Added information about the processor model under System Models on page 4-9.

Section 6, System Configuration Guidelines   Updated Enclosure Locations in Cabinets on page 6-3 to reflect supported component placement in the 42U modular cabinet. Removed references to the T1200 Fibre Channel to SCSI router (model M8201R) under FCSA to Tape Devices on page 6-9; this product is not supported on Integrity NonStop NS1000 systems. Added information about PDU Strapping Configurations on page 6-23.
About This Guide

Who Should Use This Guide
This guide is written for those responsible for planning the installation, configuration, and maintenance of the server and the software environment at a particular site. Appropriate personnel must have completed HP training courses on system support for Integrity NonStop NS1000 servers.

Note. Integrity NonStop NS1000 refers to hardware systems. H-series refers to release version updates (RVUs).

Where to Get More Information
For information about Integrity NonStop NS-series hardware, software, and operations, refer to Appendix C, Guide to Integrity NonStop NS-Series Server Manuals.

Notation Conventions

Hypertext Links
Blue underline is used to indicate a hypertext link within text. By clicking a passage of text with a blue underline, you are taken to the location described.
1 System Overview

This section provides an overview of the Integrity NonStop NS1000 system and covers these topics:

Topic                                                               Page
System Description                                                  1-1
Hardware Enclosures and Configurations                              1-3
Preparing for Other Than Integrity NonStop NS1000 Server Hardware   1-5

System Description
The Integrity NonStop NS1000 system combines up to eight HP Integrity rx2620 servers with the NonStop operating system to create the NonStop value architecture (NSVA).
These products and connections are not supported on Integrity NonStop NS1000 systems:

T1200 Fibre Channel to SCSI router (model M8201R)   Not Supported
Connection to NonStop S-series I/O                  Not Supported
Connection to NonStop ServerNet Clusters            Not Supported

Figure 1-1 shows a configuration example of a 2-processor Integrity NonStop NS1000 server.
Hardware Enclosures and Configurations
Enclosures that house specific hardware components in an Integrity NonStop NS1000 system include:

• Blade element (HP Integrity rx2620 server)
• I/O adapter module (IOAM)
• Fibre Channel disk module (FCDM)
• Maintenance switch (Ethernet)
• Uninterruptible power supply (UPS)
• Extended runtime module (ERM)

A large number of enclosure combinations are possible within the modular cabinet of an Integrity NonStop NS1000 system.
This illustration shows the rear view of an example modular cabinet with eight blade elements and hardware for a complete system. Labeled components include the IOAM enclosure with its ServerNet switch boards, processors 0 through 7 in the blade elements, the system console, and the FCDMs.
Preparing for Other Than Integrity NonStop NS1000 Server Hardware
This guide provides the specifications only for the Integrity NonStop NS1000 server modular cabinet and enclosures identified earlier in this section. For site preparation specifications for other HP hardware that will be installed at the site with the Integrity NonStop NS-series servers, consult with your HP account team.
2 Installation Facility Guidelines

This section provides guidelines for preparing the installation site for Integrity NonStop NS1000 systems:

Topic                                          Page
Modular Cabinet Power and I/O Cable Entry      2-1
Emergency Power-Off Switches                   2-2
Electrical Power and Grounding Quality         2-2
Uninterruptible Power Supply (UPS)             2-3
Cooling and Humidity Control                   2-4
Weight                                         2-5
Flooring                                       2-5
Dust and Pollution Control                     2-5
Zinc Particulates                              2-6
Space for Receiving and Unpacking the System   2-6
Operational Space
Emergency Power-Off Switches
Emergency power-off (EPO) switches are required by local codes or other applicable regulations when computer equipment contains batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes. Systems that have these batteries also have internal EPO hardware for connection to a site EPO switch or relay.
Sources of electrical disturbances include:

• Electrical storms
• Large inductive sources (such as motors and welders)
• Faults in the distribution system wiring (such as loose connections)

Computer systems can be protected from the sources of many of these electrical disturbances by using:

• A dedicated power distribution system
• Power conditioning equipment
• Lightning arresters on power cables to protect equipment against electrical storms

For steps you can take to ensure proper power for the system, consult with your HP site preparation specialist.
Uninterruptible Power Supply (UPS)
You can order an optional HP R5500 XR UPS for each modular cabinet to supply power to the enclosures within that cabinet. Extended runtime modules (ERMs) can be included with the R5500 XR UPS to extend the power back-up time. If your applications require a UPS that supports the entire system, or even a UPS or motor generator for all computer and support equipment in the site, you must plan the site's electrical infrastructure accordingly.
Weight
Because modular cabinets for Integrity NonStop NS1000 servers house a unique combination of enclosures, total weight must be calculated based on what is in the specific cabinet, as described in Modular Cabinet and Enclosure Weights With Worksheet on page 3-8.
Zinc Particulates
Over time, fine whiskers of pure metal can form on electroplated zinc, cadmium, or tin surfaces such as aged raised flooring panels and supports. If these whiskers are disturbed, they can break off and become airborne, possibly causing computer failures or operational interruptions. This metallic particulate contamination is a relatively rare but possible threat.
Operational Space
Also consider the location and orientation of current or future air conditioning ducts and airflow direction, and eliminate any obstructions to equipment intake or exhaust air flow. Refer to Cooling and Humidity Control on page 2-4. Space planning should also include the possible addition of equipment or other changes in space requirements.
3 System Installation Specifications

This section provides these specifications necessary for system installation site planning:

Topic                                                   Page
Processor Type and Memory Size                          3-1
Modular Cabinet AC Input Power                          3-2
Dimensions and Weights                                  3-6
Environmental Specifications                            3-9
Calculating Specifications for Enclosure Combinations   3-11

Note. All specifications provided in this section assume that each enclosure in the modular cabinet is fully populated.
Verify the Processor Type
The SYSTEM_PROCESSOR_TYPE for an Integrity NonStop NS1000 system should be specified as NSE-P in the CONFTEXT configuration file:

  ALLPROCESSORS:
  SYSTEM_PROCESSOR_TYPE NSE-P

Note. The OSM Service Connection will display the processor type as NSE-B.

Changing the CONFTEXT File
You can modify the CONFTEXT file using DSM/SCM. Any changes to the CONFTEXT file take effect after the next system load.
North America and Japan: 208 V AC PDU Power Cords
For information about power cords for a 208 V AC input, 3-phase delta, 24A RMS modular cabinet, refer to North America and Japan 208 V AC Input, 3-Phase Delta, 24A RMS Modular Cabinet on page 5-3.
Enclosure AC Input

Note. If your system includes the optional rackmounted HP R5500 XR UPS, the modular cabinet will have one PDU located on the rear left side and four extension bars on the rear right side.

Branch circuit requirements vary by the input voltage and the local codes and applicable regulations regarding maximum circuit and total distribution loading.
Power and current specifications for each type of enclosure are:

Enclosure Type                  AC Power Lines   Apparent Power (VA,       Apparent Power (VA, per     Inrush
                                per Enclosure    single AC line with one   line / total with both      Current
                                                 line powered)             lines powered)              (amps)
Blade element (rx2620):
  4 GB memory                   2                220                       125 / 250                   17
  8 GB memory                   2                240                       140 / 280                   17
IOAM enclosure                  4                420                       250 / 500                   68
Fibre Channel disk module       2                220                       140 / 280                   14
Maintenance switch (Ethernet)   1                50                        -                           -
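The table values make branch-circuit sizing a matter of simple addition. The following sketch is not part of the original guide; the enclosure counts are hypothetical, and the per-enclosure volt-amp values are taken from the table above (total across both lines when both are powered).

  # Illustrative sketch: estimate total apparent power for one cabinet,
  # using the per-enclosure values from the table above. The enclosure
  # counts below are hypothetical; substitute your own configuration.

  VA_BOTH_LINES_TOTAL = {
      "blade_4gb": 250,
      "blade_8gb": 280,
      "ioam": 500,
      "fcdm": 280,
      "maint_switch": 50,   # single-line device
  }

  config = {"blade_4gb": 4, "ioam": 1, "fcdm": 2, "maint_switch": 1}

  total_va = sum(VA_BOTH_LINES_TOTAL[enc] * qty for enc, qty in config.items())
  # With dual PDUs sharing the load, each feed nominally carries about half,
  # but each branch circuit should be sized for the full load in case one
  # feed fails.
  print(f"Total apparent power: {total_va} VA")
  print(f"Current at 208 V single phase: {total_va / 208:.1f} A")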
Dimensions and Weights

Topic                                                  Page
Plan View of the Modular Cabinet                       3-6
Service Clearances for the Modular Cabinet             3-6
Unit Sizes                                             3-7
Modular Cabinet Physical Specifications                3-7
Enclosure Dimensions                                   3-7
Modular Cabinet and Enclosure Weights With Worksheet   3-8

Plan View of the Modular Cabinet
(Plan view illustration of the modular cabinet footprint and service clearances, with labeled dimensions of 24 in. (60.96 cm), 40 in. (102 cm), 46 in. (116.84 cm), and 81.5 in. (207 cm).)
Unit Sizes

Enclosure Type                  Height (U)
Modular cabinet                 42
Blade element (rx2620)          2
IOAM                            11
Fibre Channel disk module       3
Maintenance switch (Ethernet)   1
R5500 XR UPS                    3
Extended runtime module         3
Rackmount console               2

Modular Cabinet Physical Specifications

Item              Height              Width               Depth
                  in.      cm         in.      cm         in.      cm
Modular cabinet   78.7     199.9      24.0     60.96      46.0     116.84
Rack              78.5     199.4      23.62    60.0       40.0     101.9
Front door        78.5     199.4
Modular Cabinet and Enclosure Weights With Worksheet
The total weight of each modular cabinet is the sum of the weights of the cabinet plus each enclosure installed in it. Use this worksheet to determine the total weight:

Enclosure Type              Number of    Weight              Total
                            Enclosures   lbs      kg
Modular cabinet*                         579      262.63
Blade element (rx2620)                   56       25
IOAM                                     200      90.7
Fibre Channel disk module                78       35.4

* Cabinet weight includes the PDUs and their associated wiring and receptacles.
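The worksheet arithmetic, together with the Unit Sizes table above, also lets you check that a planned combination fits in the 42U cabinet. This is a minimal sketch, not part of the original guide; the enclosure counts are hypothetical, and the unit weights and U heights come from the two tables above.

  # Hypothetical planning check for one modular cabinet: total weight and
  # rack-space (U) budget, using values from the tables above.

  SPECS = {
      # name: (weight_lbs, height_u)
      "modular_cabinet": (579, 0),   # the cabinet itself provides the 42U of space
      "blade_element":   (56, 2),
      "ioam":            (200, 11),
      "fcdm":            (78, 3),
  }

  config = {"blade_element": 4, "ioam": 1, "fcdm": 2}

  total_lbs = SPECS["modular_cabinet"][0] + sum(
      SPECS[enc][0] * qty for enc, qty in config.items()
  )
  total_u = sum(SPECS[enc][1] * qty for enc, qty in config.items())

  print(f"Total weight: {total_lbs} lbs ({total_lbs * 0.4536:.1f} kg)")
  print(f"Rack space used: {total_u}U of 42U")
  assert total_u <= 42, "configuration does not fit in a 42U cabinet"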
Environmental Specifications

Topic                                              Page
Heat Dissipation Specifications and Worksheet      3-9
Operating Temperature, Humidity, and Altitude      3-9
Nonoperating Temperature, Humidity, and Altitude   3-10
Cooling Airflow Direction                          3-10
Typical Acoustic Noise Emissions                   3-10
Tested Electrostatic Immunity                      3-10

Heat Dissipation Specifications and Worksheet
Use this worksheet to calculate total heat dissipation: for each enclosure type, multiply the number installed by the unit heat (Btu/hour with a single AC line powered, and with both AC lines powered), and total the results for the cabinet.
Operating Temperature, Humidity, and Altitude (continued)

Specification   Operating Range¹            Recommended Range¹          Maximum Rate of Change per Hour
Humidity        15% to 80%, noncondensing   40% to 50%, noncondensing   6%, noncondensing
Altitude        0 to 10,000 feet            -                           -
                (0 to 3,048 meters)

1. Operating and recommended ranges refer to the ambient air temperature and humidity measured 19.7 in. (50 cm) from the front of the air intake cooling vents.
Calculating Specifications for Enclosure Combinations
The example configuration in this subsection shows listed components installed in a single 42U HP modular cabinet. Cabinet weight includes the PDUs and their associated wiring and receptacles. Power and thermal calculations assume that each enclosure in the cabinet is fully populated.
This sample configuration has four processors (one in each blade element), one IOAM enclosure, and two Fibre Channel disk modules installed in an HP modular cabinet. (Rack elevation, top to bottom: configurable space, two Fibre Channel disk modules, the IOAM enclosure, the console, configurable space, four blade elements, and configurable space.)
This table shows the completed weight, power, and thermal calculations for the 4-processor sample configuration cabinet:

                                                     Weight           Volt-amps per AC feed   Heat (Btu/hr)
Component                    Quantity   Height (U)   (lbs)    (kg)    Single      Both        Single   Both
Blade element (rx2620)
with 4 GB memory             4          8            224      100     880         1000        3004     3412
IOAM enclosure               1          11           200      90.7    ...         ...         ...      ...
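The heat numbers in the table track the electrical load directly: one watt dissipates about 3.412 Btu/hour, and the table's Btu values are the volt-amp figures multiplied by that factor (880 VA corresponds to roughly 3004 Btu/hour). The conversion sketch below is illustrative and not part of the original guide; it assumes a power factor near 1 so that VA is approximately equal to watts, which matches the table's values.

  # Convert apparent power (VA) to heat load (Btu/hour), assuming a power
  # factor near 1 so that VA ~= W, consistent with the table above
  # (880 VA -> ~3004 Btu/hr, 1000 VA -> ~3412 Btu/hr).

  BTU_PER_WATT_HOUR = 3.412

  def heat_btu_per_hour(volt_amps: float, power_factor: float = 1.0) -> float:
      """Heat dissipated by a load drawing `volt_amps`, in Btu/hour."""
      return volt_amps * power_factor * BTU_PER_WATT_HOUR

  for va in (880, 1000):
      print(f"{va} VA -> {heat_btu_per_hour(va):.0f} Btu/hr")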
4 Integrity NonStop NS1000 System Description

This section describes the Integrity NonStop NS1000 system and covers these topics:

Topic                                 Page
NonStop Value Architecture            4-1
Blade Complex                         4-2
Blade Element                         4-3
Processor Element                     4-4
ServerNet Fabric I/O                  4-4
System Architecture                   4-8
Modular Hardware                      4-9
System Models                         4-9
Default Startup Characteristics       4-10
System Installation Document Packet   4-12

For information about installing the Integrity NonStop NS1000 server hardware, refer to the NonStop NS1000 Hardware Installation Manual.
Blade Complex
The basic building block of the NSVA is the blade complex, which consists of rx2620 servers that are also referred to as blade elements. Each blade element houses one microprocessor, called a processor element (PE). All input to and output from each blade element occurs through a ServerNet PCI adapter card located in each blade element. The ServerNet PCI adapter card interfaces with the ServerNet fabrics.
In the term ServerNet fabric, the word fabric is significant because it contrasts with the concept of a bus. A bus provides a single, fixed communications path between start and end points. A fabric is a complex web of links between electronic routers that provides a large number of possible paths from one point to another. Two ServerNet communications fabrics, the X and Y, provide redundant, fault-tolerant communications pathways.
For information about installing Integrity NonStop NS-series server hardware, refer to the NonStop NS-Series Hardware Installation Manual. For information about installing the Integrity NonStop NS1000 server hardware, refer to the NonStop NS1000 Hardware Installation Manual.

Processor Element
The processor element (PE) in a blade element in an Integrity NonStop NS1000 system includes:

• A standard Intel Itanium microprocessor
ServerNet switch boards in the IOAM enclosure provide logical connectivity to the blade elements and I/O devices. In the Integrity NonStop NS1000 server, fiber-optic links connect ServerNet PCI adapter cards in the blade elements to 4PSEs located in the IOAM enclosure. The 4PSEs provide the physical ServerNet connectivity to the blade elements by externalizing the ServerNet links from the IOAM enclosure midplane.
IOAM Enclosure ServerNet Pathways
This drawing shows the ServerNet communication pathways in the IOAM enclosure. Fiber-optic links connect the ServerNet PCI adapter cards in the blade elements with the 4PSEs in each IOAM module. The 4PSEs in IOAM module 2 communicate with the X ServerNet switch board. The 4PSEs in IOAM module 3 communicate with the Y ServerNet switch board.
Example System ServerNet Pathways
This drawing shows the redundant routing and connection of the ServerNet X and Y fabrics within a simple example system. This example system includes:

• Four processors (0 through 3) contained in blade elements 1 through 4
• One IOAM enclosure, group 100, connected to the ServerNet PCI adapter card in each blade element
If a cable, connection, router, or other failure occurs, only the system resources that are downstream of the failure on the same fabric are affected. Because of the redundant ServerNet architecture, communication takes the alternate path on the other fabric to the peer resources.
Modular Hardware
Hardware for Integrity NonStop NS1000 systems is implemented in modules, or enclosures, that are installed in modular cabinets. For descriptions of the components and cabinets, see Modular Hardware Components on page 5-1. All Integrity NonStop NS1000 server components are field-replaceable units (FRUs) that can be serviced only by service providers trained by HP.
Default Startup Characteristics
The SYSTEM_PROCESSOR_TYPE for an Integrity NonStop NS1000 system should be specified as NSE-P in the CONFTEXT configuration file:

  ALLPROCESSORS:
  SYSTEM_PROCESSOR_TYPE NSE-P

Note. The OSM Service Connection will display the processor type as NSE-B.
Load Path   Description     Source Disk   Destination Processor   ServerNet Fabric   (continued)
12          Backup          $SYSTEM-P     1                       Y
13          Mirror          $SYSTEM-M     1                       X
14          Mirror          $SYSTEM-M     1                       Y
15          Mirror backup   $SYSTEM-M     1                       X
16          Mirror backup   $SYSTEM-M     1                       Y

This illustration shows the system load paths. (Drawing of the load paths from the ServerNet switch boards on the X and Y fabrics to the primary (P), backup (B), mirror (M), and mirror backup (MB) disks in the two Fibre Channel disk modules.)
System Installation Document Packet
To keep track of the hardware configuration, internal and external communications cabling, IP addresses, and connected networks, assemble and retain an Installation Document Packet as the system's records.
5 Modular System Hardware

This section describes the hardware used in the Integrity NonStop NS1000 system:

Topic                                   Page
Modular Hardware Components             5-1
Component Location and Identification   5-17

Modular Hardware Components
These hardware components can be part of an Integrity NonStop NS1000 system:

Topic                                                  Page
Modular Cabinets                                       5-3
Power Distribution Units (PDUs)                        5-5
Blade Element                                          5-9
I/O Adapter Module (IOAM) Enclosure and I/O Adapters   5-13
Fibre Channel Disk Module                              5-13
Tape Drive and Interface Hardware                      5-14
This example shows a modular cabinet with eight blade elements and hardware for a complete system (rear view). Labeled components include the IOAM enclosure with its ServerNet switch boards, 4-port ServerNet extenders, ServerNet adapters, and IOAM power supplies; processors 0 through 7 in the blade elements; the system console; and the FCDMs.
Modular Cabinets
The modular cabinet is a 19-inch, 42U high rack for mounting modular components. The modular cabinet comes equipped with front and rear doors and includes a rear extension. The power distribution units (PDUs) are mounted along the rear extension without occupying any U-space in the cabinet.
Modular cabinet features include:

• Two power distribution units (PDUs)
  ° Zero-U rack design
• PDU input characteristics
  ° 200 to 240 V AC, single phase, 40A RMS, 3-wire
  ° 50/60 Hz
  ° Non-NEMA locking CS8265C, 50A input plug
• PDU output characteristics
  ° Three circuit-breaker-protected 20A load segments
  ° 36 AC receptacles per PDU (12 per segment), IEC 320 C13 10A receptacle type
  ° Three AC receptacles per PDU (1 per segment), IEC 320 C19 12A receptacle type
• Front and rear doors

Power Distribution Units (PDUs)
Two power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the modular cabinet.
The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the top or the bottom of the cabinet, depending on what is ordered for the site power feed. This illustration shows the AC power feed cables on PDUs for AC feed at the bottom of the cabinet and the AC power outlets along the PDU.
This illustration shows the AC power feed cables on PDUs for AC feed at the top of the cabinet, routed to the AC power source or site UPS. (Rack rear view with the two PDUs and their power feed cables exiting the top of the cabinet.)
If your system includes the optional rackmounted HP R5500 XR UPS, the modular cabinet will have one PDU located on the rear left side of the modular cabinet and four HP extension bars located on the rear right side of the modular cabinet.
This illustration shows the AC power feed cables for the PDU and UPS for AC power feed from the bottom of the cabinet when the optional UPS and ERM are installed. Extension bars are installed along the rear right side instead of a second PDU when a UPS is installed in the modular cabinet. (Rack rear view showing the PDU, the extended runtime module (ERM), and the uninterruptible power supply (UPS) at the bottom of the cabinet.)
Blade Element
When the system includes a rackmounted HP R5500 XR UPS, up to four blade elements can be installed in a single footprint. To reduce ambiguity in identifying proper cable connections to the blade element, an identification convention uses numbers to refer to each connection. A number such as 1, 2, 3, or 4 identifies each blade element. These IDs reference the appropriate blade element for proper connection of the fiber-optic cables. For more information, see ServerNet Fabric I/O on page 4-4.
HP Integrity rx2620 Server
The HP Integrity rx2620 server components are field-replaceable units (FRUs) that can be serviced only by service providers trained by HP.
Front Panel LEDs (continued)

Reference Number   LED Indicator   State          Meaning
3                  LAN LED         Flashing/Off   The LAN LED provides status information about the LAN interface. When the LAN LED is flashing, there is activity on the LAN.
4                  System LED      Color          The System LED indicates problems with the system hardware. The color of the LED indicates error conditions:
                                                  • Flashing/solid green - there are no warnings or faults.
For information about installing the Integrity NonStop NS1000 server hardware, refer to the NonStop NS1000 Hardware Installation Manual. For additional information about the HP Integrity rx2620 server, refer to the HP Cybrary rx2620 home page:

http://cybrary.inet.cpqcorp.net/HW/SYS/INTEL/SERVER/SERVER_RX2620/index.html
• Only configurations with one IOAM enclosure containing a maximum of four FCSAs are supported.
• The IOAM enclosure group number is 100.
• FCSAs are supported for installation in slots 2.3, 2.4, 2.5, 3.3, 3.4, and 3.5 of the IOAM enclosure for an Integrity NonStop NS1000 server. However, a minimum of two FCSAs and two G4SAs must be installed.
A factory-installed UPS ships with the HP extension bars already installed on the right side of the modular cabinet. Both the UPS and the ERM are 3U high and must reside in the bottom of the cabinet.

Note. Retrofitting a system in the field with a UPS and ERMs will likely require moving all installed enclosures up in the rack to provide space for the new hardware.
Enterprise Storage System
An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and the disk cache in one or more standalone cabinets. The ESS connects to the Integrity NonStop NS1000 system either directly via FCSAs in an IOAM enclosure (direct connect) or through a separate storage area network (SAN) using a Fibre Channel SAN switch (switched connect).
Some storage area network procedures, such as reconfiguration, can cause the affected switches to pause. If the pause is long enough, I/O failure occurs on all paths connected to that switch. If both the primary and the backup paths are connected to the same switch, the LDEV goes down. Refer to the documentation that accompanies the ESS.
Rack and Offset Physical Location
Rack name and rack offset identify the physical location of components in an Integrity NonStop NS-series system. The rack name is located on an external label affixed to the rack, which includes the system name plus a 2-digit rack number. Rack offset is labeled on the rails in each side of the rack. These rails are measured vertically in units called U, with one U measuring 1.75 inches (44 millimeters).
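As a worked example (not from the original guide), a rack offset converts to physical height by straightforward arithmetic; the assumption that offset 1U starts at the rack base is for illustration.

  # Convert a rack offset in U to height above the rack base.
  # One U is 1.75 inches (44 mm), per the text above. This sketch
  # assumes offset 1U starts at the base of the rails.

  def offset_to_inches(offset_u: int) -> float:
      """Height in inches of the bottom edge of a component at `offset_u`."""
      return (offset_u - 1) * 1.75

  # Example: a blade element at offset 28U starts about 47.25 in. up the rails.
  print(offset_to_inches(28))  # 47.25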
• Slot:
  ° 16 through 19 relate to the locations of the fans on the blade element. Example: slot 18 = the fans in slot 18.
  ° In the OSM Low-Level Link, 1 relates to the location of the processor. Because each blade element contains only one processor, it is always located in slot 1. Example: slot 1 = any processor in any blade element.
• Port: X and Y relate to the two ServerNet fabric ports in slot 2.
The form of the GMS numbering for a blade element displayed in the OSM Service Connection is:

401.1.3
  401 - Blade group number comprising blade element components
  1   - Blade element module
  3   - Blade element slot (for example, a power supply)

401.101
  401 - Blade group number comprising blade element components
  1   - Blade element module
  01  - Processor (for example, processor 1)
This illustration shows the physical GMS numbering for the front and top views of a blade element (front bezel removed): memory fan in slot 18, PCI fan in slot 19, system fans in slots 16 and 17, power supply in slot 3, and power supply in slot 4 (removed in this example).
IOAM Enclosure Group-Module-Slot Numbering
An Integrity NonStop NS1000 system supports only one IOAM enclosure, identified as group 100:

IOAM Group   Module                  Slot      Item                           Port
100          2: X ServerNet module   1 and 2   4PSEs                          1 - n, where n is the port number
             3: Y ServerNet module   3, 4, 5   ServerNet adapters             on the adapter or extender
                                     14        ServerNet switch logic board   1 - 4
                                     15, 18    Power supplies                 -
                                     16, 17    Fans                           -
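To make the numbering concrete, here is a small illustrative decoder (not from the original guide) that maps an IOAM group-module-slot triplet to a component description following the table above; the "group.module.slot" string format is an assumption for this sketch.

  # Illustrative decoder for IOAM enclosure group-module-slot (GMS)
  # triplets, following the table above.

  IOAM_SLOTS = {
      1: "4PSE", 2: "4PSE",
      3: "ServerNet adapter", 4: "ServerNet adapter", 5: "ServerNet adapter",
      14: "ServerNet switch logic board",
      15: "power supply", 16: "fan", 17: "fan", 18: "power supply",
  }

  def describe_ioam_gms(gms: str) -> str:
      group, module, slot = (int(part) for part in gms.split("."))
      if group != 100:
          raise ValueError("an NS1000 system has one IOAM enclosure, group 100")
      fabric = {2: "X", 3: "Y"}[module]   # module 2 = X fabric, module 3 = Y fabric
      return f"group {group}, {fabric}-fabric module {module}, slot {slot}: {IOAM_SLOTS[slot]}"

  print(describe_ioam_gms("100.2.3"))
  # group 100, X-fabric module 2, slot 3: ServerNet adapter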
Fibre Channel Disk Module (FCDM) Group-Module-Slot Numbering

IOAM Group   IOAM Module    IOAM Slot   FCSA Controller Port   FCDM Shelf
100          2: X fabric    3 - 5       1, 2                   1 - 4 if daisy-chained;
             3: Y fabric                                       1 if single disk module

FCDM Slot   Item
0           FCDM
1 - 14      Disk drive bays
89          Transceiver A1
90          Transceiver A2
91          Transceiver B1
92          Transceiver B2
93          Left FC-AL board
94          Right FC-AL board
95          Left power supply
96          Right power supply
6 System Configuration Guidelines

This section provides configuration guidelines for an Integrity NonStop NS1000 system:

Topic                                            Page
Enclosure Locations in Cabinets                  6-3
Internal ServerNet Interconnect Cabling          6-4
IOAM Enclosure and Disk Storage Considerations   6-9
Fibre Channel Devices                            6-10
G4SAs to Networks                                6-20
Default Naming Conventions                       6-22
PDU Strapping Configurations                     6-23

Integrity NonStop NS1000 systems use a flexible modular architecture.
This example shows one possible system configuration using eight blade elements. (Rear view of a modular cabinet: the IOAM enclosure with its ServerNet switch boards, 4-port ServerNet extenders, ServerNet adapters, and IOAM power supplies; processors 0 through 7; the system console; and the FCDMs.)
Enclosure Locations in Cabinets
This table provides details about the location of Integrity NonStop NS1000 server enclosures and components within a cabinet. The enclosure location refers to the U location on the rack where the lower edge of the enclosure resides, such as the bottom of an HP Integrity rx2620 server at 28U.
Internal ServerNet Interconnect Cabling
This subsection includes:

Topic                                Page
Cable Labeling                       6-4
Internal Interconnect Cables         6-5
Dedicated Service LAN Cables         6-6
Cable Length Restrictions            6-6
Internal Cable Product IDs           6-6
Blade Elements to IOAM Enclosure     6-7
FCSA to Fibre Channel Disk Modules   6-9
FCSA to Tape Devices                 6-9

For general information about internal ServerNet interconnect cabling, refer to the NonStop NS-Series Planning Guide.
Each label conveys this information:

N1      Identifies the node number.
R1      Identifies the rack number within the node.
Un      Identifies the offset that is the physical location of the component within the rack. n is the lowest U number on the rack that the component occupies.
nn.nn   Identifies the slot location and port connection of the component.
Near    Refers to the information for this end of this cable.
Fiber-optic cables use either LC or SC connectors at one or both ends. (Illustrations of an LC fiber-optic cable connector pair and an SC fiber-optic cable connector pair.)
Blade Elements to IOAM Enclosure
Fiber-optic cables provide communication between the ServerNet PCI adapter card in each blade element and the 4PSEs in the IOAM enclosure. The FCSAs and G4SAs in the IOAM enclosure then provide Fibre Channel and high-speed Ethernet links to storage and communication LANs.

Blade Element to IOAM Enclosure and Processor IDs
Each blade element contains one processor element.
This cabling diagram illustrates the default configuration and connections for an 8-processor Integrity NonStop NS1000 system. Four 4PSEs are required: two for the X fabric and two for the Y fabric. The diagram is not for use in installing or cabling the system. For instructions on connecting the cables, see the NonStop NS-Series Hardware Installation Manual.
FCSA to Fibre Channel Disk Modules
Fibre Channel disk modules (FCDMs) can be connected directly to FCSAs in an IOAM enclosure (see Blade Elements to IOAM Enclosure on page 6-7), with these exceptions:

• Only configurations with one IOAM enclosure are supported.
• A maximum of 16 FCDMs can be connected in the Integrity NonStop NS1000 system because only one IOAM enclosure containing a maximum of four FCSAs is supported.
Fibre Channel Devices
This subsection describes Fibre Channel devices and covers these topics:

Topic                                                                         Page
Factory-Default Locations for Disk Volumes                                    6-12
Configurations for Fibre Channel Devices                                      6-12
Configuration Restrictions for Fibre Channel Devices                          6-12
Recommendations for Fibre Channel Device Configuration                        6-13
Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module   6-14

The only Fibre Channel device used internally in the Integrity NonStop NS1000 system is the Fibre Channel disk module (FCDM).
This illustration shows the locations of the hardware in the Fibre Channel disk module as well as the Fibre Channel port connectors at the back of the enclosure: FC-AL ports A2 and B2 and the EMU at the rear, and FC-AL ports A1 and B1 and disk drive bays 1-14 at the front. Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop (FC-AL) cables.
Factory-Default Locations for Disk Volumes
This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate Fibre Channel disk modules: $SYSTEM in bay 1, $DSMSCM in bay 2, $AUDIT in bay 3, and $OSS in bay 4 (viewed from the front). FCSA location and cable connections vary depending on the various controller and Fibre Channel disk module combinations.
Recommendations for Fibre Channel Device Configuration
These recommendations apply to FCSA and Fibre Channel disk module configurations:

• The primary Fibre Channel disk module connects to FCSA F-SAC 1.
• The mirror Fibre Channel disk module connects to FCSA F-SAC 2.
• FC-AL port A1 is the incoming port from an FCSA or from another Fibre Channel disk module.
• Daisy-chain configurations follow the same configuration restrictions and rules that apply to configurations that are not daisy-chained. (See Daisy-Chain Configurations on page 6-17.)
• Fibre Channel disk modules containing mirrored volumes must be installed in separate daisy chains.
Two FCSAs, Two FCDMs, One IOAM Enclosure
This illustration shows example cable connections between two FCSAs and the primary and mirror Fibre Channel disk modules. (Fibre Channel cables run from each FCSA to the primary FCDM and to the mirror FCDM.)
Four FCSAs, Four FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the four FCSAs and the two sets of primary and mirror Fibre Channel disk modules (primary FCDM 1 and 2, mirror FCDM 1 and 2).
Daisy-Chain Configurations
When planning for possible use of daisy-chained disks, consider:

Daisy-Chained Disks Recommended for ...         Daisy-Chained Disks Not Recommended for ...
Cost-sensitive storage and applications         Many volumes in a large Fibre Channel loop.
using low-bandwidth disk I/O.
This illustration shows an example of cable connections between the two FCSAs and four Fibre Channel disk modules in a single daisy-chain configuration. A single IOAM enclosure, two FCSAs, and four Fibre Channel disk modules with ID expander does not provide fault-tolerant mirrored disk storage.
Four FCSAs, Three FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the four FCSAs and three Fibre Channel disk modules, with the primary and mirror drives split within each Fibre Channel disk module: FCDM 1 holds Primary 1 and Mirror 2, FCDM 2 holds Primary 2 and Mirror 3, and FCDM 3 holds Primary 3 and Mirror 1.
This illustration shows the factory-default locations for the configurations of four FCSAs and three Fibre Channel disk modules, where the primary system file disk volumes are in Fibre Channel disk module 1: $SYSTEM in bay 1, $DSMSCM in bay 2, $AUDIT in bay 3, and $OSS in bay 4 (viewed from the front).
G4SAs to Networks
This illustration shows the G4SA, with LC connectors (fiber), an RJ-45 connector (10/100/1000 Mbps), and an RJ-45 connector (10/100 Mbps).
Default Naming Conventions
The Integrity NonStop NS1000 system implements default naming conventions in the same manner as other Integrity NonStop NS-series systems. With a few exceptions, default naming conventions are not necessary for the modular resources that make up an Integrity NonStop NS1000 system.
No TFTP or WANBOOT process is configured for new NonStop systems.

Note. Naming conventions and configurations for the dedicated service LAN TCP/IP are the same as the TCP/IP conventions used with G-series RVUs. The names are $ZTCP0 and $ZTCP1.

The OSM Service Connection provides the location of a resource by adding an identifying suffix to the names of all the system resources.
7 Example Configurations

This section shows example hardware component configurations within a modular cabinet for an Integrity NonStop NS1000 server. A number of other configurations are also possible because of the flexibility inherent in the NonStop value architecture and ServerNet.

Note. Hardware configuration drawings in this section represent the physical arrangement of the modular enclosures but do not show PDUs. For information about PDUs, see Power Distribution Units (PDUs) on page 5-5.
Typical Configurations (continued)

Enclosure or Component          2-processor   4-processor   6-processor   8-processor
HP R5500 XR UPS                 1             1             2¹            2¹
Extended runtime module (ERM)   2             2             4¹            4¹

1. This configuration uses two 42U modular cabinets with one UPS at offset 2U and up to two ERMs installed directly above the UPS in each cabinet.
2-Processor System
This 2-processor configuration has a maximum of two blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and two Fibre Channel disk modules. (Rack elevation, top to bottom: configurable space, two Fibre Channel disk modules, the IOAM enclosure, the console, configurable space, two blade elements, and configurable space.)
4-Processor System
This 4-processor configuration has a maximum of four blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and two Fibre Channel disk modules. (Rack elevation, top to bottom: configurable space, two Fibre Channel disk modules, the IOAM enclosure, the console, configurable space, four blade elements, and configurable space.)
6-Processor System
This 6-processor configuration has a maximum of six blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and four Fibre Channel disk modules. (Rack elevation showing the Fibre Channel disk modules, the IOAM enclosure, the console, configurable space, and six blade elements.)
8-Processor System
This 8-processor configuration has a maximum of eight blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and four Fibre Channel disk modules. (Rack elevation showing the Fibre Channel disk modules, the IOAM enclosure, the console, and eight blade elements.)
2-Processor System With UPS and ERM
The UPS and ERM (two ERMs maximum) must reside in the bottom of the cabinet, with the UPS at cabinet offset 2U. This diagram shows one UPS at offset 2U and one ERM at offset 5U in a 2-processor Integrity NonStop NS1000 system.
4-Processor System With UPS and ERM
The UPS and ERM (two ERMs maximum) must reside in the bottom of the cabinet, with the UPS at cabinet offset 2U. This diagram shows one UPS at offset 2U and one ERM at offset 5U in a 4-processor Integrity NonStop NS1000 system.
6-Processor System With UPS and ERM
The UPS and ERM (two ERMs maximum) must reside in the bottom of each cabinet, with the UPS at cabinet offset 2U. This diagram shows one UPS at offset 2U and one ERM at offset 5U in each cabinet of this 6-processor Integrity NonStop NS1000 system. This 6-processor configuration has a maximum of six blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and four Fibre Channel disk modules.
8-Processor System With UPS and ERM
The UPS and ERM (two ERMs maximum) must reside in the bottom of each cabinet, with the UPS at cabinet offset 2U. This diagram shows one UPS at offset 2U and one ERM at offset 5U in each cabinet of this 8-processor Integrity NonStop NS1000 system. This 8-processor configuration has a maximum of eight blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and four Fibre Channel disk modules.
Example U Locations for Modular Enclosures
This illustration lists the relative U location of each modular enclosure in an example 2-processor Integrity NonStop NS1000 system: the IOAM enclosure at U23, the primary FCDM at U37, the mirror FCDM at U34, and processors 0 and 1 in the blade elements. (The ServerNet switch boards and other components are labeled in the drawing.)
Example of 2-Processor System Cabling
This illustration shows an example of a 2-processor system. This conceptual representation shows the simplified X and Y ServerNet cabling between the blade elements and the IOAM enclosure, and also the simplified cabling between the FCSAs and the Fibre Channel disk modules.
A Cables

Internal Cables
Available internal cables and their lengths are:

Cable Type   Connectors   Length (meters)   Length (feet)   Product ID
MMF          LC-LC        2                 7               M8900-02
                          5                 16              M8900-05
                          15                49              M8900-15
                          40                131             M8900-40
                          80                262             M8900-80
                          100               328             M8900-100
                          125               410             M8900-125
                          200               656             M8900-200
                          250               820             M8900-250
                          10                33              M8910-10
                          20                66              M8910-20
                          50                164             M8910-50
                          100               328             TBD
                          125               410             M8910-125
                          3                 10              M8920-3
                          5                 16              M8920-5
                          10                33              M8920-10
                          30                98              M8920-30
                          50                164             M8920-50
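When planning cable runs, you generally pick the shortest listed cable that covers the required distance. The helper below is an illustrative sketch over the MMF LC-LC rows of the table above; it is not part of the original guide.

  # Pick the shortest MMF LC-LC cable from the table above that covers a
  # run. Lengths in meters, mapped to product IDs as listed.

  MMF_LC_LC = {2: "M8900-02", 5: "M8900-05", 15: "M8900-15", 40: "M8900-40",
               80: "M8900-80", 100: "M8900-100", 125: "M8900-125",
               200: "M8900-200", 250: "M8900-250"}

  def pick_cable(run_meters: float) -> str:
      for length in sorted(MMF_LC_LC):
          if length >= run_meters:
              return MMF_LC_LC[length]
      raise ValueError("run exceeds the longest listed cable (250 m)")

  print(pick_cable(33))   # M8900-40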
Cable Length Restrictions
For a general description of cable length restrictions, refer to the NonStop NS-Series Planning Guide. Details about cable length restrictions that are specific to an Integrity NonStop NS1000 server are presented here.
B Control, Configuration, and Maintenance Tools

This section introduces the control, configuration, and maintenance tools used in Integrity NonStop NS-series systems:

Topic                            Page
Support and Service Library      B-1
System Console                   B-1
Maintenance Architecture         B-6
Dedicated Service LAN            B-7
OSM                              B-15
System-Down OSM Low-Level Link   B-16
AC Power Monitoring              B-17
AC Power-Fail States             B-18

Support and Service Library
See Support and Service Library on page C-1.
System Console
Some system console hardware, including the PC system unit, monitor, and keyboard, can be mounted in the cabinet. Other PCs are installed outside the cabinet and require separate provisions or furniture to hold the PC hardware. System consoles communicate with Integrity NonStop NS-series servers over a dedicated service local area network (LAN) or a secure operations LAN.
System Console Configurations

One System Console Managing One System (Setup Configuration)
(Diagram: the primary system console, with a modem connection to a remote service provider and an optional connection to a secure operations LAN, connects through maintenance switch 1 to the 10/100 ENET ports on the ServerNet switch boards of the IOAM enclosure, which houses the 4PSEs, FCSAs, and G4SAs. An optional DHCP DNS server is also shown.)

Primary and Backup System Consoles Managing One System
(Diagram: primary and backup system consoles, each with a modem connection to a remote service provider, attach to the secure operations LAN and to maintenance switches 1 and 2; the maintenance switches connect to the 10/100 ENET ports on the ServerNet switch boards and to G4SAs on the X and Y fabrics.)
The dedicated service LAN is normally connected to the operations LAN using a single connection. If both sides of the dedicated service LAN connect directly to the operations LAN, you must:

• Enable Spanning Tree Protocol (STP) in switches or routers that are part of the operations LAN.
• Change the preconfigured IP address of the backup system console before you add it to the LAN.
Maintenance Architecture
This simplified illustration shows the two elements of the maintenance architecture plus the OSM maintenance console applications: the OSM console connects through the maintenance switch and maintenance LAN (with a link to the remote support center) to the I/O and fabric functional element (the IOAM with its maintenance entity (ME), G4SA, FCSA, and FCDM with EMU on the FC-AL) and to the processor functional element (the blade element with its PE).
Dedicated Service LAN
A dedicated service LAN provides connectivity between the OSM console running in a PC and the maintenance firmware in the system hardware. This dedicated service LAN uses a ProCurve 2524 Ethernet switch for connectivity between the ServerNet switch boards for the IOAM and the system console.
Fault-Tolerant Configuration
You can configure the dedicated service LAN as described in the OSM Migration Guide. HP recommends that you use a fault-tolerant LAN configuration. A fault-tolerant configuration includes these connections to two maintenance switches:

• Connect one system console to each maintenance switch.
• Connect one of the two ServerNet switch boards in the IOAM enclosure to each maintenance switch.
This illustration shows a fault-tolerant LAN configuration with two maintenance switches. (Primary and backup system consoles, each with modem access to a remote service provider, connect through maintenance switches 1 and 2 to the ServerNet switch boards and to G4SAs on the X and Y fabrics; an optional DHCP DNS server and one or two connections to a secure operations LAN are also shown.)
IP Addresses
Integrity NonStop NS1000 servers require Internet protocol (IP) addresses for these components that are connected to the dedicated service LAN:

• ServerNet switch boards in the IOAM enclosure
• Maintenance switches
• System consoles
• G4SAs
• UPSs (optional)

These components have default IP addresses that are preconfigured at the factory.
Whether or not the new system will receive dynamic IP addresses from a Dynamic Host Configuration Protocol (DHCP) server, it is recommended that the IP addresses be reconfigured as either:

• Static IP Addresses
• Dynamically Assigned IP Addresses

Note. Be aware of possible conflicts with existing operations LANs. This guide cannot predict all possible configurations of existing LANs.
Static or Dynamic IP Addresses
Various components within the dedicated service LAN can have static or dynamic IP addresses.
System-Up Dedicated Service LAN
When the system is up and the operating system is running, the ME connects to the NonStop NS1000 system's dedicated service LAN using one of the PIFs on each of two G4SAs. This connection enables OSM Service Connection and OSM Notification Director communication for maintenance in a running system.
Dedicated Service LAN Links With One IOAM Enclosure
This illustration shows the dedicated service LAN cables connected to the G4SAs in slot 5 of both modules of an IOAM enclosure and to the maintenance switch. (The G4SA Ethernet PIF connectors D, C, B, and A are shown on modules 2 and 3 of the IOAM enclosure, group 100, with a cable to the maintenance switch.)
Initial Configuration for a Dedicated Service LAN
New systems are shipped with an initial set of IP addresses configured. For a listing of these initial IP addresses, see IP Addresses on page B-10. Factory-default IP addresses for the G4SA are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual.
For information on how to install, configure, and start OSM server-based processes and components, see the OSM Migration Guide.
AC Power Monitoring
Integrity NonStop NS-series servers require either the optional HP model R5500 XR UPS (with one or two ERMs for additional battery power) installed in each modular cabinet or a user-supplied site UPS to support system operation through power transients or an orderly system shutdown during a power failure.
AC Power-Fail States
These states occur when a power failure occurs and an optional HP model R5500 XR UPS is installed in each cabinet within the system:

System State   Description
NSK_RUNNING    The NonStop operating system is running normally.
RIDE_THRU      OSM has detected a power failure and begins timing the outage. AC power returning terminates RIDE_THRU and puts the operating system back into the NSK_RUNNING state.
C Guide to Integrity NonStop NS-Series Server Manuals

These manuals support the Integrity NonStop NS-series systems:

Category                      Purpose                                             Title
Reference                     Provide information about the manuals, the RVUs,   NonStop Systems Introduction
                              and hardware that support NonStop NS-series        for H-Series RVUs
                              servers
Change planning and control   Describe how to prepare for changes to software    Managing Software Changes
                              or hardware configurations
Within these categories, where applicable, content might be further categorized according to server or enclosure type. Authorized service providers can also order the NTL Support and Service Library CD:

• HP employees: Subscribe at World on a Workbench (WOW). Subscribers automatically receive CD updates. Access the WOW order form at http://hps.knowledgemanagement.hp.com/wow/order.asp.
Safety and Compliance

This section contains three types of required safety and compliance statements:

• Regulatory compliance
• Waste Electrical and Electronic Equipment (WEEE)
• Safety

Regulatory Compliance Statements
The following regulatory compliance statements apply to the products documented by this manual.

FCC Compliance
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules.
Korea MIC Compliance

Taiwan (BSMI) Compliance

Japan (VCCI) Compliance
This is a Class A product based on the standard of the Voluntary Control Council for Interference by Information Technology Equipment (VCCI). If this equipment is used in a domestic environment, radio disturbance may occur, in which case the user may be required to take corrective actions.
European Union Notice
Products with the CE Marking comply with both the EMC Directive (89/336/EEC) and the Low Voltage Directive (73/23/EEC) issued by the Commission of the European Community.
SAFETY CAUTION
The following icon or caution statements may be placed on equipment to indicate the presence of potentially hazardous conditions:

DUAL POWER CORDS
CAUTION: "THIS UNIT HAS MORE THAN ONE POWER SUPPLY CORD. DISCONNECT ALL POWER SUPPLY CORDS TO COMPLETELY REMOVE POWER FROM THIS UNIT."
"ATTENTION: CET APPAREIL COMPORTE PLUS D'UN CORDON D'ALIMENTATION. DÉBRANCHER TOUS LES CORDONS D'ALIMENTATION AFIN DE COUPER COMPLÈTEMENT L'ALIMENTATION DE CET ÉQUIPEMENT".
HIGH LEAKAGE CURRENT
To reduce the risk of electric shock due to high leakage currents, a reliable grounded (earthed) connection should be checked before servicing the power distribution unit (PDU).
Glossary
For a glossary of Integrity NonStop NS-series terms, see the NonStop System Glossary in the NonStop Technical Library (NTL).