HP Integrity NonStop NS1000 Planning Guide

Abstract
This guide describes the HP Integrity NonStop™ NS1000 system and provides examples of system configurations to assist in planning for installation of a new system.

Product Version: N.A.

Supported Release Version Updates (RVUs)
This publication supports H06.07 and all subsequent H-series RVUs until otherwise indicated by its replacement publication.
Document History

Part Number   Product Version   Published
542527-001    N.A.              March 2006
542527-002    N.A.              May 2006
542527-003    N.A.              August 2006
Contents

What's New in This Guide (vii)
  Guide Information (vii)
  New and Changed Information (viii)
Figures
About This Guide (xi)
  Who Should Use This Guide (xi)
  What's in This Guide (xi)
  Where to Get More Information (xii)
  Notation Conventions (xii)
1. System Overview
  Integrity NonStop NS1000 System Overview (1-1)
  Hardware Enclosures and Configurations (1-3)
  Preparing for Other Than Integrity NonStop NS1000 Server Hardware (1-5)
2. Installation Facility Guidelines
3. System Installation Specifications
4. Integrity NonStop NS1000 System Description
  System Models (4-10)
  Default Startup Characteristics (4-11)
  System Installation Document Packet (4-13)
    Tech Memo for the Factory-Installed Hardware Configuration (4-13)
    Configuration Forms for the ServerNet Adapter (4-13)
5. Modular System Hardware
6. System Configuration Guidelines
7. Example Configurations
A. Cables
B. Control, Configuration, and Maintenance Tools
  SWAN Concentrator Restriction (B-12)
  System-Up Dedicated Service LAN (B-13)
  Dedicated Service LAN Links With One IOAM Enclosure (B-14)
  Initial Configuration for a Dedicated Service LAN (B-15)
  Operating Configurations for Dedicated Service LANs (B-15)
  OSM (B-15)
  System-Down OSM Low-Level Link (B-16)
  AC Power Monitoring (B-17)
  AC Power-Fail States (B-18)
C. Guide to Integrity NonStop NS-Series Server Manuals
Safety and Compliance
Glossary
Index
What's New in This Guide

Guide Information
This guide describes the HP Integrity NonStop™ NS1000 system and provides examples of system configurations to assist in planning for installation of a new system. It supports H06.07 and all subsequent H-series RVUs until otherwise indicated by its replacement publication.
New and Changed Information
These changes are documented in this edition of the guide:

Section and Title: Changes
- Guide-wide: Editorial corrections. Updated all illustrations to reflect the standard component layout and PDU placement in 42U modular cabinets.
- Section 5, Modular System Hardware:
  - Added the harmonized power cord characteristic to International 380 to 415 V AC Input, 3-Phase Wye, 16A RMS Power on page 5-4 and to International 200 to 240 V AC Input, Single Phase, 32A RMS Power on page 5-5.
  - Added information about PDU orientation in a modular cabinet to Power Distribution Units (PDUs) on page 5-5.
About This Guide

Who Should Use This Guide
This guide is written for those responsible for planning the installation, configuration, and maintenance of the server and the software environment at a particular site. Appropriate personnel must have completed HP training courses on system support for Integrity NonStop NS1000 servers.

Note. Integrity NonStop NS1000 refers to hardware systems. H-series refers to release version updates (RVUs).
Where to Get More Information
For information about Integrity NonStop NS-series hardware, software, and operations, refer to Appendix C, Guide to Integrity NonStop NS-Series Server Manuals.

Notation Conventions

Hypertext Links
Blue underline is used to indicate a hypertext link within text. By clicking a passage of text with a blue underline, you are taken to the location described.
1 System Overview

This section provides an overview of the Integrity NonStop NS1000 system and covers these topics:

- Integrity NonStop NS1000 System Overview (1-1)
- Hardware Enclosures and Configurations (1-3)
- Preparing for Other Than Integrity NonStop NS1000 Server Hardware (1-5)

Integrity NonStop NS1000 System Overview
The Integrity NonStop NS1000 system combines up to eight HP Integrity rx2620 servers with the NonStop operating system to create the NonStop value architecture (NSVA).
Figure 1-1 shows the elements of an example 2-processor Integrity NonStop NS1000 server.
Hardware Enclosures and Configurations
Enclosures that house specific hardware components in an Integrity NonStop NS1000 system include:

- Blade element (HP Integrity rx2620 server)
- I/O adapter module (IOAM)
- Fibre Channel disk module (FCDM)
- Maintenance switch (Ethernet)
- Uninterruptible power supply (UPS) (optional)
- Extended runtime module (ERM) (optional)

A large number of enclosure combinations are possible within the modular cabinet(s).
Figure 1-2 shows the rear view of an example Integrity NonStop NS1000 system with eight blade elements in a 42U modular cabinet, without the optional UPS and ERM.
Preparing for Other Than Integrity NonStop NS1000 Server Hardware
This guide provides the specifications only for the Integrity NonStop NS1000 server modular cabinets and enclosures identified earlier in this section. For site preparation specifications for other HP hardware that will be installed at the site with the Integrity NonStop NS-series servers, consult with your HP account team.
2 Installation Facility Guidelines

This section provides guidelines for preparing the installation site for Integrity NonStop NS1000 systems:

- Modular Cabinet Power and I/O Cable Entry (2-1)
- Emergency Power-Off Switches (2-2)
- Electrical Power and Grounding Quality (2-2)
- Uninterruptible Power Supply (UPS) (2-3)
- Cooling and Humidity Control (2-4)
- Weight (2-5)
- Flooring (2-5)
- Dust and Pollution Control (2-5)
- Zinc Particulates (2-6)
- Space for Receiving and Unpacking the System (2-6)
- Operational Space (2-7)
Emergency Power-Off Switches
Emergency power-off (EPO) switches are required by local codes or other applicable regulations when computer equipment contains batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes. Systems that have these batteries also have internal EPO hardware for connection to a site EPO switch or relay.
Grounding Systems
Common sources of electrical disturbances include:

- Electrical storms
- Large inductive sources (such as motors and welders)
- Faults in the distribution system wiring (such as loose connections)

Computer systems can be protected from the sources of many of these electrical disturbances by using:

- A dedicated power distribution system
- Power conditioning equipment
- Lightning arresters on power cables to protect equipment against electrical storms

For steps to take to ensure proper power for the server, consult with your HP site preparation specialist or power engineer.
If you add an R5500 XR UPS to a modular cabinet in the field, the PDU on the right side is replaced with HP extension bars. The extension bars are oriented inward, facing the components within the rack. For power information, refer to Model R5500 XR Integrated UPS on page 3-7. For complete information and specifications on the R5500 XR UPS, contact your HP representative or refer to the HP UPS R5500 XR Models User Guide available at: http://h10032.
Cooling airflow through each enclosure in the Integrity NonStop NS1000 server is front-to-back. Because of high heat densities and hot spots, an accurate assessment of airflow around and through the server equipment, together with a specialized cooling design, is essential for reliable server operation. For an airflow assessment, consult with your HP cooling consultant or your heating, ventilation, and air conditioning (HVAC) engineer.
Metallically conductive particles can short-circuit electronic components. Tape drives and some other mechanical devices can experience failures resulting from airborne abrasive particles. For recommendations to keep the site as free of dust and pollution as possible, consult with your heating, ventilation, and air conditioning (HVAC) engineer or your HP site preparation specialist.
Operational Space
When planning the layout of the server site, use the equipment dimensions, door swing, and service clearances listed in Dimensions and Weights on page 3-8. Because the location of lighting fixtures and electrical outlets affects servicing operations, consider an equipment layout that takes advantage of existing lighting and electrical outlets.
3 System Installation Specifications

This section provides the specifications necessary for system installation site planning:

- Processor Type and Memory Size (3-1)
- Modular Cabinet AC Input Power (3-2)
- Dimensions and Weights (3-11)
- Environmental Specifications (3-11)
- Calculating Specifications for Enclosure Combinations (3-13)

Note. All specifications provided in this section assume that each enclosure in the modular cabinet is fully populated.
Verify the Processor Type
The SYSTEM_PROCESSOR_TYPE for an Integrity NonStop NS1000 system should be specified as NSE-P in the CONFTEXT configuration file:

    ALLPROCESSORS: SYSTEM_PROCESSOR_TYPE NSE-P

Note. The OSM Service Connection displays the processor type as NSE-B.

Changing the CONFTEXT File
You can modify the CONFTEXT file using DSM/SCM. Any changes to the CONFTEXT file take effect after the next system load.
North America and Japan: 208 V AC PDU Power
The PDU power characteristics are:

PDU input characteristics:
- 208 V ac, 3-phase delta, 24A RMS, 4-wire
- 50/60 Hz
- NEMA L15-30 input plug
- 6.5 feet (2 m) attached power cord

PDU output characteristics:
- 3 circuit-breaker-protected 13.
International: 380 to 415 V AC PDU Power
The PDU power characteristics are:

PDU input characteristics:
- 380 to 415 V ac, 3-phase Wye, 16A RMS, 5-wire
- 50/60 Hz
- IEC309 5-pin, 16A input plug
- 6.5 feet (2 m) attached power cord
Branch Circuits and Circuit Breakers
Modular cabinets for the Integrity NonStop NS-series system contain two PDUs. Each of the two PDUs requires a separate branch circuit with these ratings:

Region                    Volts      Amps
North America and Japan   208        24
North America and Japan   200-240    40
International             380-415    16
International             200-240    32
Enclosure Power Loads
The total power and current load for a modular cabinet depends on the number and type of enclosures installed in it. Therefore, the total load is the sum of the loads for all enclosures installed. For examples of calculating the power and current load for various enclosure combinations, refer to Calculating Specifications for Enclosure Combinations on page 3-13.
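Because the cabinet totals are purely additive, the worksheet can be expressed as a short script. This is a minimal sketch only: the enclosure names and per-enclosure volt-amp figures below are hypothetical placeholders, not values from this guide, so substitute the unit loads from the enclosure AC input specifications before using it.

    # Sum per-enclosure loads to get the modular cabinet total, as described
    # above. The volt-amp figures here are placeholders; use the enclosure
    # AC input specifications from this section instead.
    ENCLOSURE_VA = {
        "blade_element": 250,        # hypothetical VA per AC feed
        "ioam_enclosure": 1400,      # hypothetical
        "fcdm": 400,                 # hypothetical
        "maintenance_switch": 50,    # hypothetical
    }

    def cabinet_load(contents, volts=208.0):
        """Return (total volt-amps, RMS amps) for one AC feed of a cabinet."""
        total_va = sum(ENCLOSURE_VA[kind] * qty for kind, qty in contents.items())
        return total_va, total_va / volts

    va, amps = cabinet_load({"blade_element": 4, "ioam_enclosure": 1,
                             "fcdm": 2, "maintenance_switch": 1})
    print(f"{va} VA, {amps:.1f} A per feed at 208 V")

The resulting amps-per-feed figure can then be compared against the branch circuit ratings listed above.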
Model R5500 XR Integrated UPS

Version                   Operating Voltage Settings   Power Out (VA/Watts)                       Input Plug       Branch Circuit
North America and Japan   200/208*, 220, 230, 240      5000/4500                                  L6-30P           Dedicated 30 Amp
Other International       200, 230*, 240               6000/5400 (5000/4500 if set to 200/208)    IEC-309 32 Amp   Dedicated 30 Amp

* Factory-default setting

For complete information and specifications, refer to the HP UPS R5500 XR Models User Guide.
Dimensions and Weights

- Plan View From Above of the 42U Modular Cabinet (3-8)
- Service Clearances for the Modular Cabinets (3-8)
- Unit Sizes (3-8)
- 42U Modular Cabinet Physical Specifications (3-9)
- Enclosure Dimensions (3-9)
- Modular Cabinet and Enclosure Weights With Worksheet (3-10)

Plan View From Above of the 42U Modular Cabinet
[Plan-view drawing; dimensions shown: 40 in. (102 cm), 81.5 in. (207 cm), 46 in. (116.84 cm), and 24 in. (60.96 cm).]
Enclosure Type                  Height (U)
Fibre Channel disk module       3
Maintenance switch (Ethernet)   1
R5500 XR UPS                    3
Extended runtime module         3
Rackmount console               2

42U Modular Cabinet Physical Specifications

Item              Height (in. / cm)   Width (in. / cm)   Depth (in. / cm)
Modular cabinet   78.7 / 199.9        24.0 / 60.96       46.0 / 116.84
Rack              78.5 / 199.4        23.62 / 60.0       40.0 / 101.9
Front door        78.5 / 199.4        23.5 / 59.7        3.0 / 7.62
Modular Cabinet and Enclosure Weights With Worksheet
The total weight of each modular cabinet is the sum of the weights of the cabinet plus each enclosure installed in it. Use this worksheet to determine the total weight:

Enclosure Type              Number of Enclosures   Weight (lbs / kg)   Total
42U modular cabinet*                               303 / 137.44
Blade element (rx2620)                             56 / 25
IOAM enclosure                                     200 / 90.7
Fibre Channel disk module                          78 / 35.4

* Includes the PDUs and their associated wiring and receptacles.
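The worksheet can also be computed mechanically. The sketch below uses the unit weights from the table above; the example cabinet contents are arbitrary.

    # Weight worksheet as a small calculator; unit weights (lbs) are taken
    # from the table above, and kilograms are derived as lbs / 2.2046.
    UNIT_WEIGHT_LBS = {
        "42U modular cabinet": 303,      # includes PDUs, wiring, receptacles
        "blade element (rx2620)": 56,
        "IOAM enclosure": 200,
        "Fibre Channel disk module": 78,
    }

    def total_weight_lbs(quantities):
        return sum(UNIT_WEIGHT_LBS[item] * qty for item, qty in quantities.items())

    lbs = total_weight_lbs({"42U modular cabinet": 1,
                            "blade element (rx2620)": 4,
                            "IOAM enclosure": 1,
                            "Fibre Channel disk module": 2})
    print(f"{lbs} lbs ({lbs / 2.2046:.1f} kg)")   # 883 lbs (400.5 kg)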
Environmental Specifications

- Heat Dissipation Specifications and Worksheet (3-11)
- Operating Temperature, Humidity, and Altitude (3-11)
- Nonoperating Temperature, Humidity, and Altitude (3-12)
- Cooling Airflow Direction (3-12)
- Typical Acoustic Noise Emissions (3-12)
- Tested Electrostatic Immunity (3-12)

Heat Dissipation Specifications and Worksheet

Enclosure Type   Number Installed   Unit Heat (Btu/hour with single AC line powered)   Total
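Unit heat in the worksheet is expressed in Btu/hour. If a component's dissipation is known only in watts, the standard conversion is 1 W = 3.412 Btu/hour; the wattage used in this sketch is a placeholder, not a figure from this guide.

    # Convert a per-enclosure electrical load to heat output for the
    # worksheet above. 1 watt = 3.412 Btu/hour (standard conversion);
    # the 250 W figure is a placeholder.
    BTU_PER_WATT = 3.412

    def heat_btu_per_hour(unit_watts, count):
        return unit_watts * count * BTU_PER_WATT

    print(f"{heat_btu_per_hour(250, 4):.0f} Btu/hour")   # 3412 Btu/hour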
Specification   Operating Range¹                       Recommended Range¹          Maximum Rate of Change per Hour
Humidity        15% to 80%, noncondensing              40% to 50%, noncondensing   6%, noncondensing
Altitude²       0 to 10,000 feet (0 to 3,048 meters)   -                           -

1. Operating and recommended ranges refer to the ambient air temperature and humidity measured 19.7 in. (50 cm) from the front of the air intake cooling vents.
Calculating Specifications for Enclosure Combinations
The example configuration in this subsection shows components installed in a single 42U modular cabinet. Cabinet weight includes the PDUs and their associated wiring and receptacles. Power and thermal calculations assume that each enclosure in the cabinet is fully populated.
This sample configuration has four processors (one in each blade element), one IOAM enclosure, two Fibre Channel disk modules, and one optional UPS and ERM installed in a 42U modular cabinet.

[Rack-elevation drawing; callouts: IOAM enclosure; processors 3, 2, 1, and 0.]
This table shows the completed weight, power, and thermal calculations for the 4-processor sample configuration cabinet (volt-amps are per AC feed; heat is in Btu/hour):

Component                                 Quantity   Height (U)   Weight (lbs / kg)   VA, single / both AC lines   Heat, single / both AC lines
Blade element (rx2620) with 4 GB memory   4          8            224 / 100           880 / 1000                   3004 / 3412
IOAM enclosure                            1          11           200 / 90.7
4 Integrity NonStop NS1000 System Description

This section describes the Integrity NonStop NS1000 system and covers these topics:

- NonStop Value Architecture (4-1)
- Blade Complex (4-2)
- Blade Element (4-3)
- Processor Element (4-4)
- ServerNet Fabric I/O (4-4)
- System Architecture (4-9)
- Modular Hardware (4-10)
- System Models (4-10)
- Default Startup Characteristics (4-11)
- System Installation Document Packet (4-13)

For information about installing the Integrity NonStop NS1000 server hardware, refer to the NonStop NS-Series Hardware Installation Manual.
Blade Complex
The basic building block of the NSVA is the blade complex, which consists of rx2620 servers, also referred to as blade elements. Each blade element houses one microprocessor, called a processor element (PE). All input to and output from each blade element occurs through a ServerNet PCI adapter card located in that blade element. The ServerNet PCI adapter card interfaces with the ServerNet fabrics.
In the term ServerNet fabric, the word fabric is significant because it contrasts with the concept of a bus. A bus provides a single, fixed communications path between start and end points. A fabric is a complex web of links between electronic routers that provides a large number of possible paths from one point to another. Two ServerNet communications fabrics, the X and Y, provide redundant, fault-tolerant communications pathways.
Field-replaceable components of the blade element include:

- Redundant cooling fans (accessed from the top of the enclosure when the enclosure is pulled forward on its rails)
- Redundant 220-240 V ac power supplies and power cords (accessed from the back of the enclosure)
- Main memory upgrade from 4 GB to 8 GB (accessed from the top of the enclosure when the enclosure is pulled forward on its rails)

For information about installing Integrity NonStop NS-series server hardware, refer to the NonStop NS-Series Hardware Installation Manual.
Overview
ServerNet 3 routers are used within the Integrity NonStop NS1000 system as building blocks for the ServerNet fabric, which employs fiber-optic links exclusively and is a collection of connected routers and links that forms a flexible internal or external network.
Simplified ServerNet System Diagram
This simplified diagram shows the ServerNet architecture in the Integrity NonStop NS1000 system. It shows the X and Y ServerNet communication pathways between the blade elements, 4PSEs, ServerNet switch boards, and ServerNet I/O adapters.
IOAM Enclosure ServerNet Pathways
This drawing shows the ServerNet communication pathways in the IOAM enclosure. Fiber-optic links connect the ServerNet PCI adapter cards in the blade elements with the 4PSEs in each IOAM module. The 4PSEs in IOAM module 2 communicate with the X ServerNet switch board; the 4PSEs in IOAM module 3 communicate with the Y ServerNet switch board.
Example System ServerNet Pathways
This drawing shows the redundant routing and connection of the ServerNet X and Y fabrics within a simple example system. This example system includes:

- Four processors (0 through 3) contained in blade elements 1 through 4.
- One IOAM enclosure, group 100, connected to the ServerNet PCI adapter card in a blade element.
If a cable, connection, router, or other failure occurs, only the system resources that are downstream of the failure on the same fabric are affected. Because of the redundant ServerNet architecture, communication takes the alternate path on the other fabric to the peer resources.
Modular Hardware
Hardware for Integrity NonStop NS1000 systems is implemented in modules, or enclosures, that are installed in modular cabinets. For descriptions of the components and cabinets, see Modular Hardware Components on page 5-1. All Integrity NonStop NS1000 server components are field-replaceable units (FRUs) that can be serviced only by service providers trained by HP.
Default Startup Characteristics
The SYSTEM_PROCESSOR_TYPE for an Integrity NonStop NS1000 system should be specified as NSE-P in the CONFTEXT configuration file:

    ALLPROCESSORS: SYSTEM_PROCESSOR_TYPE NSE-P

Note. The OSM Service Connection displays the processor type as NSE-B.
Load Path   Description     Source Disk   Destination Processor   ServerNet Fabric
12          Backup          $SYSTEM-P     1                       Y
13          Mirror          $SYSTEM-M     1                       X
14          Mirror          $SYSTEM-M     1                       Y
15          Mirror backup   $SYSTEM-M     1                       X
16          Mirror backup   $SYSTEM-M     1                       Y

[Illustration of the system load paths: the primary and mirror Fibre Channel disk modules connected through the X and Y fabrics and the ServerNet switch boards.]
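The visible rows suggest that the 16 load paths enumerate every combination of destination processor, disk role, and fabric, with the processor varying slowest. The sketch below reproduces that ordering; it is an inference from rows 12 through 16 only, so verify it against the complete table.

    # Enumerate the default load paths. The ordering -- processor 0 paths
    # first, then processor 1; within each, primary/backup/mirror/mirror
    # backup alternating X and Y fabrics -- is inferred from the visible
    # rows (12-16) and should be checked against the full table.
    from itertools import product

    ROLES = [("Primary", "$SYSTEM-P"), ("Backup", "$SYSTEM-P"),
             ("Mirror", "$SYSTEM-M"), ("Mirror backup", "$SYSTEM-M")]

    for n, (cpu, (role, disk), fabric) in enumerate(
            product((0, 1), ROLES, ("X", "Y")), start=1):
        print(f"{n:2}  {role:13}  {disk}  processor {cpu}  fabric {fabric}")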
System Installation Document Packet
To keep track of the hardware configuration, internal and external communications cabling, IP addresses, and connected networks, assemble and retain an Installation Document Packet as the system's records.
5 Modular System Hardware

This section describes the hardware used in the Integrity NonStop NS1000 system:

- Modular Hardware Components (5-1)
- Component Location and Identification (5-19)

Modular Hardware Components
These hardware components can be part of an Integrity NonStop NS1000 system:

- Modular Cabinets (5-3)
- Power Distribution Units (PDUs) (5-5)
- Blade Element (5-9)
- I/O Adapter Module (IOAM) Enclosure and I/O Adapters (5-14)
- Fibre Channel Disk Module (5-15)
- Tape Drive and Interface Hardware
This example shows a 42U modular cabinet with eight blade elements and hardware for a complete system (rear view).

[Rack-elevation drawing; callouts: IOAM enclosure; processors 7 through 0; system console; FCDMs; ServerNet switch boards; 4-port ServerNet extenders.]
Modular Cabinets
The modular cabinet is an EIA standard 19-inch, 42U high rack for mounting modular components. The modular cabinet comes equipped with front and rear doors and includes a rear extension that makes it deeper than some industry-standard racks. The power distribution units (PDUs) are mounted along the rear extension without occupying any U-space in the cabinet, and are oriented inward, facing the components within the rack.
North America and Japan: 200 to 240 V AC Input, Single Phase, 40A RMS Power

- EIA standard 19-inch rack with 42U of rack space
- Geography: North America and Japan
- Recommended for most configurations
- Includes 2 power distribution units (PDUs)
  - Zero-U rack design
- PDU input characteristics:
  - 200 to 240 V ac, single phase, 40A RMS, 3-wire
  - 50/60 Hz
  - Non-NEMA locking CS8265C, 50A input plug
  - 6.5 feet (2 m) attached power cord
International: 200 to 240 V AC Input, Single Phase, 32A RMS Power

- EIA standard 19-inch rack with 42U of rack space
- Geography: International
- Recommended for most configurations
- Harmonized power cord
- Includes 2 power distribution units (PDUs)
  - Zero-U rack design
- PDU input characteristics:
  - 200 to 240 V ac, single phase, 32A RMS, 3-wire
  - 50/60 Hz
  - IEC309 3-pin, 32A input plug
  - 6.5 feet (2 m) attached power cord
Each PDU in a modular cabinet has:

- 36 AC receptacles per PDU (12 per segment), IEC 320 C13 10A receptacle type
- 3 AC receptacles per PDU (1 per segment), IEC 320 C19 12A receptacle type
- 3 circuit breakers

These PDU options are available to receive power from the site AC power source:

- 208 V ac, 3-phase delta for North America and Japan
- 200 to 240 V ac, single phase for North America and Japan
- 380 to 415 V ac, 3-phase Wye for International
- 200 to 240 V ac, single phase for International
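A quick receptacle budget helps keep redundant cords balanced across the two PDUs. This sketch checks only the C13 count described above; the per-enclosure cord counts in the example are placeholders, not figures from this guide.

    # Check a cabinet's C13 cord count against one PDU's receptacle budget
    # (36 IEC 320 C13 receptacles, 12 per load segment, as listed above).
    # One cord of each redundant pair is assumed to go to each PDU.
    C13_PER_SEGMENT = 12
    SEGMENTS = 3

    def fits_one_pdu(c13_cords):
        return c13_cords <= C13_PER_SEGMENT * SEGMENTS

    # Placeholder example: 8 blade elements + 1 IOAM + 4 FCDMs,
    # one cord each on this PDU.
    print(fits_one_pdu(8 + 1 + 4))   # True (13 of 36 receptacles used)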
This illustration shows the AC power feed cables on PDUs for AC feed at the top of the cabinet.

[Drawing; callouts: AC power cords to the AC power source or site UPS; power distribution unit (PDU).]
If your system includes the optional rackmounted HP R5500 XR UPS, the modular cabinet will have one PDU located on the rear left side and four extension bars on the rear right side. The PDU and extension bars are oriented inward, facing the components within the rack. To provide redundancy, components are plugged into the left-side PDU and the extension bars. Each extension bar is plugged into the UPS.
This illustration shows the AC power feed cables for the PDU and UPS for AC power feed from the bottom of the cabinet when the optional UPS and ERM are installed.

[Drawing; callouts: extension bars installed along the rear right side instead of a PDU when a UPS is installed in the modular cabinet; power distribution unit (PDU); extended runtime module (ERM); uninterruptible power supply (UPS); connection to the AC power source.]
If the modular cabinet includes the optional rackmounted HP R5500 XR UPS, up to four blade elements can be installed in a single footprint. To reduce ambiguity in identifying proper cable connections to the blade element, an identification convention uses a number (1, 2, 3, or 4) to refer to each blade element. These IDs reference the appropriate blade element for proper connection of the fiber-optic cables.
HP Integrity rx2620 Server Description
The HP Integrity rx2620 server is 2U high, weighs 56 pounds (25 kilograms), is rackmountable, has redundant AC power feeds, and provides front-to-rear cooling. Each HP Integrity rx2620 server contains:

Component: One processor element (PE)
Description: A single 1.3 GHz Intel® Itanium® microprocessor with 3 MB of cache and its associated 4 GB or 8 GB of memory.
This illustration shows the front and rear views of an HP Integrity rx2620 server.

[Front-view callouts: hard drives; front panel and status LEDs; DVD drive. Rear-view callouts: VGA port; AC power receptacles (AC 1, AC 2); system lock; console/remote/UPS and console/serial port A; PCI slot 2; LVD/SE SCSI; 10/100 management LAN; Gb LAN A; 10/100/1000 LAN; USB ports; ToC button; X and Y connectors; serial port B; locator button and LED.]
This table describes the front panel buttons and indicator LEDs shown in the preceding illustration:

Reference   LED/Button               State    Meaning
1           Locator LED and button   On/Off   The locator button and LED are used to help locate this server within a rack of servers.
2           Diagnostic LEDs          On/Off   The four diagnostic LEDs operate in conjunction with the system LED to provide diagnostic information about the system.
ServerNet PCI Adapter Card
Each HP Integrity rx2620 server contains one ServerNet PCI adapter card (installed in PCI slot 2) to provide ServerNet connectivity. This illustration shows the rear of an HP Integrity rx2620 server equipped with a ServerNet PCI adapter card.

[Drawing; callouts: ServerNet fabric connections; PCI adapter card.]
I/O Adapter Module (IOAM) Enclosure and I/O Adapters

- Up to six ServerNet I/O adapters are supported; a minimum of two FCSAs and two G4SAs must be installed.
- Fibre Channel ServerNet adapters (FCSAs) are used for communicating with storage devices such as a Fibre Channel disk module, tape devices, or an Enterprise Storage System (ESS) disk.
- Gigabit Ethernet 4-port ServerNet adapters (G4SAs) are used for Ethernet connectivity.

FCSAs and G4SAs are supported for installation in slots 2.3, 2.4, 2.5, 3.3, 3.4, and 3.5.
Details about the maintenance switch that are specific to an Integrity NonStop NS1000 server are presented here.

UPS and ERM (Optional)
This illustration shows the location of a UPS and an ERM in a rack.

Note. The AC input power cord for the R5500 XR UPS is routed to exit the modular cabinet at either the top or bottom rear corner of the cabinet, depending on what is ordered for the site power feed, and the large output receptacle is unused.

[Drawing; callouts: ERM-UPS cable; extended runtime module (ERM); uninterruptible power supply (UPS); connections to the extension bars; unused cable; connection to the AC power source.]
Enterprise Storage System (Optional)
High availability and a fault-tolerant configuration for one IOAM enclosure and pairs of FCSAs are similar to the configurations required for Fibre Channel disk drives, as explained in IOAM Enclosure and Disk Storage Considerations on page 6-9.
Component Location and Identification
Topics discussed in this subsection are:

- Terminology (5-19)
- Rack and Offset Physical Location (5-19)
- Blade Element Group-Module-Slot Numbering (5-20)
- IOAM Enclosure Group-Module-Slot Numbering (5-24)
- Fibre Channel Disk Module (FCDM) Group-Module-Slot Numbering (5-25)

The location and identification of a component in an Integrity NonStop NS1000 server is similar to that of other Integrity NonStop NS-series servers.
Blade Element Group-Module-Slot Numbering

- Group:
  - In OSM Service Connection displays, groups 400 through 407 relate to blade complexes 0 through 7. Each blade complex includes a blade element and its associated processor. Example: group 403 = blade complex 3.
  - In the OSM Low-Level Link, group 400 relates to all blade complexes.
These tables show the default numbering for the blade elements of an Integrity NonStop NS1000 system when the blade elements are powered on and functioning.

Note. In OSM, if a blade element is not present or is powered off, processors might be renumbered. For example, if processor 3 has been removed, processor 4 becomes processor 3 in OSM displays.
The form of the GMS numbering for a blade element displayed in the OSM Service Connection is:

401.1.3 — the blade group number comprising blade element components (401), the blade element module (1), and the blade element slot (3; for example, a power supply).

401.101 — the blade group number comprising blade element components (401) and the blade element module entity for a processor (101; for example, processor 1).
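These strings are easy to take apart mechanically. A minimal sketch, following the two forms just described:

    # Split an OSM-style group-module-slot string such as "401.1.3"
    # (group 401, module 1, slot 3). Two-part strings such as "401.101"
    # identify a processor and carry no slot.
    def parse_gms(text):
        parts = [int(p) for p in text.split(".")]
        if len(parts) == 2:
            group, module = parts
            return {"group": group, "module": module, "slot": None}
        if len(parts) == 3:
            group, module, slot = parts
            return {"group": group, "module": module, "slot": slot}
        raise ValueError(f"unexpected GMS string: {text!r}")

    print(parse_gms("401.1.3"))   # {'group': 401, 'module': 1, 'slot': 3}
    print(parse_gms("401.101"))   # {'group': 401, 'module': 101, 'slot': None}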
This illustration shows the physical GMS numbering for the front and top views of a blade element.

[Drawing (front bezel removed); callouts: memory fan (slot 18); PCI fan (slot 19); system fans (slots 16 and 17); power supplies (slots 3 and 4; slot 4 removed in this example).]
IOAM Enclosure Group-Module-Slot Numbering
An Integrity NonStop NS1000 system supports only one IOAM enclosure, identified as group 100:

IOAM Group   Module                             Slot      Item                           Port
100*         2 (X ServerNet), 3 (Y ServerNet)   1, 2      4PSEs                          1-n, where n is the port number on the adapter or extender
                                                3, 4, 5   ServerNet adapters             1-n
                                                14        ServerNet switch logic board   1-4
                                                15, 18    Power supplies                 -
                                                16, 17    Fans                           -

* In the OSM Service Connection, the …
Fibre Channel Disk Module (FCDM) Group-Module-Slot Numbering

IOAM group 100; IOAM module 2 (X fabric) or 3 (Y fabric); IOAM slots 3-5; FCSA controller ports 1 and 2. FCDM shelf: 1-4 if daisy-chained; 1 if a single disk module.

FCDM Slot   Item
0           FCDM
1-14        Disk drive bays
89          Transceiver A1
90          Transceiver A2
91          Transceiver B1
92          Transceiver B2
93          Left FC-AL board
94          Right FC-AL board
95          Left power supply
96          Right power supply
6 System Configuration Guidelines

This section provides configuration guidelines for an Integrity NonStop NS1000 system:

- Enclosure Locations in Cabinets (6-3)
- Internal ServerNet Interconnect Cabling (6-4)
- IOAM Enclosure and Disk Storage Considerations (6-9)
- Fibre Channel Devices (6-10)
- G4SAs to Networks (6-21)
- Default Naming Conventions (6-24)
- PDU Strapping Configurations (6-25)

Integrity NonStop NS1000 systems use a flexible modular architecture.
This example shows one possible system configuration with eight blade elements in a 42U modular cabinet.

[Rack-elevation drawing; callouts: IOAM enclosure; processors 7 through 0; system console; FCDMs; ServerNet switch boards; 4-port ServerNet extenders.]
Enclosure Locations in Cabinets
This table provides details about the location of Integrity NonStop NS1000 server enclosures and components within a cabinet. The enclosure location refers to the U location on the rack where the lower edge of the enclosure resides, such as the bottom of an HP Integrity rx2620 server at 28U.
Internal ServerNet Interconnect Cabling
This subsection includes:

- Cable Labeling (6-4)
- Internal Interconnect Cables (6-5)
- Dedicated Service LAN Cables (6-6)
- Cable Length Restrictions (6-6)
- Internal Cable Product IDs (6-6)
- Blade Elements to IOAM Enclosure (6-7)
- FCSA to Fibre Channel Disk Modules (6-9)
- FCSA to Tape Devices (6-9)

For general information about internal ServerNet interconnect cabling, refer to the NonStop NS-Series Planning Guide.
System Configuration Guidelines Internal Interconnect Cables Each label conveys this information: N1 Identifies the node number. R1 Identifies the rack number within the node. Un Identifies the offset that is the physical location of the component within the rack. n is the lowest U number on the rack that the component occupies. nn.nn Identifies the slot location and port connection of the component. Near Refers to the information for this end of this cable.
Fiber-optic cables use either LC or SC connectors at one or both ends.

[Illustrations: an LC fiber-optic cable connector pair; an SC fiber-optic cable connector pair.]
Blade Elements to IOAM Enclosure
Fiber-optic cables provide communication between the ServerNet PCI adapter card in each blade element and the 4PSEs in the IOAM enclosure. The FCSAs and G4SAs in the IOAM enclosure then provide Fibre Channel and high-speed Ethernet links to storage and communication LANs.

Blade Element to IOAM Enclosure and Processor IDs
Each blade element contains one processor element.
This cabling diagram illustrates the default configuration and connections for an 8-processor Integrity NonStop NS1000 system. Four 4PSEs are required: two for the X fabric and two for the Y fabric. The diagram is not for use in installing or cabling the system. For instructions on connecting the cables, see the NonStop NS-Series Hardware Installation Manual.
FCSA to Fibre Channel Disk Modules
Fibre Channel disk modules (FCDMs) can be connected directly to FCSAs in an IOAM enclosure (see Blade Elements to IOAM Enclosure on page 6-7), with these exceptions:

- Only configurations with one IOAM enclosure are supported.
- A maximum of 16 FCDMs can be connected in the Integrity NonStop NS1000 system, because only one IOAM enclosure containing a maximum of four FCSAs is supported.
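These limits, together with the shelf limit of four FCDMs per daisy chain given in the GMS table in Section 5, can be checked with a few lines. A minimal sketch:

    # Validate an FCDM layout against the NS1000 limits described above:
    # at most four FCSAs in the single IOAM enclosure, at most 16 FCDMs
    # in total, and at most four FCDMs per daisy chain.
    MAX_FCSAS = 4
    MAX_FCDMS = 16
    MAX_CHAIN = 4

    def check_layout(fcsa_count, chains):
        """chains: FCDM count per daisy chain (1 = a stand-alone FCDM)."""
        problems = []
        if fcsa_count > MAX_FCSAS:
            problems.append(f"too many FCSAs: {fcsa_count} > {MAX_FCSAS}")
        if sum(chains) > MAX_FCDMS:
            problems.append(f"too many FCDMs: {sum(chains)} > {MAX_FCDMS}")
        problems += [f"chain of {n} FCDMs exceeds {MAX_CHAIN}"
                     for n in chains if n > MAX_CHAIN]
        return problems

    print(check_layout(4, [4, 4, 4, 4]) or "layout OK")   # layout OK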
Fibre Channel Devices
This subsection describes Fibre Channel devices and covers these topics:

- Factory-Default Locations for Disk Volumes (6-13)
- Configurations for Fibre Channel Devices (6-13)
- Configuration Restrictions for Fibre Channel Devices (6-13)
- Recommendations for Fibre Channel Device Configuration (6-14)
- Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (6-15)

The only Fibre Channel device used internally in the Integrity NonStop NS1000 system is the Fibre Channel disk module (FCDM).
This illustration shows an FCSA with its indicators and ports (used and not used) in Integrity NonStop NS1000 systems.

[Drawing; callouts: Fibre Channel ports 1 and 2, each with 2 Gb and 1 Gb indicators; Ethernet ports C and D, not available for FCSA.]
This illustration shows the locations of the hardware in the Fibre Channel disk module as well as the Fibre Channel port connectors at the back of the enclosure.

[Rear-view callouts: FC-AL ports A2 and B2; EMU; FC-AL ports A1 and B1. Front-view callouts: disk drive bays 1-14.]

Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop (FC-AL) cables.
Factory-Default Locations for Disk Volumes
This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate Fibre Channel disk modules.

[Front-view callouts: $SYSTEM (bay 1), $DSMSCM (bay 2), $AUDIT (bay 3), $OSS (bay 4).]

FCSA location and cable connections vary depending on the various controller and Fibre Channel disk module combinations.
Recommendations for Fibre Channel Device Configuration
These recommendations apply to FCSA and Fibre Channel disk module configurations:

- The primary Fibre Channel disk module connects to FCSA F-SAC 1.
- The mirror Fibre Channel disk module connects to FCSA F-SAC 2.
- FC-AL port A1 is the incoming port from an FCSA or from another Fibre Channel disk module.
- Daisy-chain configurations follow the same configuration restrictions and rules that apply to configurations that are not daisy-chained. (See Daisy-Chain Configurations on page 6-18.)
- Fibre Channel disk modules containing mirrored volumes must be installed in separate daisy chains.
Two FCSAs, Two FCDMs, One IOAM Enclosure
This illustration shows example cable connections between two FCSAs and the primary and mirror Fibre Channel disk modules.

[Drawing; callouts: mirror FCDM; primary FCDM; Fibre Channel cables; FCSAs.]
Four FCSAs, Four FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the four FCSAs and the two sets of primary and mirror Fibre Channel disk modules.

[Drawing; callouts: mirror FCDM 2; primary FCDM 2; mirror FCDM 1; primary FCDM 1; FCSAs.]
Daisy-Chain Configurations
When planning for possible use of daisy-chained disks, consider:

Daisy-chained disks recommended for: cost-sensitive storage and applications using low-bandwidth disk I/O.
Daisy-chained disks not recommended for: many volumes in a large Fibre Channel loop.
Requirements for a daisy chain¹: …
This illustration shows an example of cable connections between the two FCSAs and four Fibre Channel disk modules in a single daisy-chain configuration. A single IOAM enclosure, two FCSAs, and four Fibre Channel disk modules with ID expander does not provide fault-tolerant mirrored disk storage.
Four FCSAs, Three FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the four FCSAs and three Fibre Channel disk modules, with the primary and mirror drives split within each Fibre Channel disk module.

[Drawing; callouts: FCDM 3 (Primary 3, Mirror 1); FCDM 2 (Primary 2, Mirror 3); FCDM 1 (Primary 1, Mirror 2); IOAM enclosure; FCSAs.]
This illustration shows the factory-default locations for the configurations of four FCSAs and three Fibre Channel disk modules, where the primary system file disk volumes are in Fibre Channel disk module 1.

[Front-view callouts: $SYSTEM (bay 1), $DSMSCM (bay 2), $AUDIT (bay 3), $OSS (bay 4).]
This illustration shows a G4SA with its indicators and ports.

[Drawing; callouts: ejector latch; part number and tracking ID bar code; handle; PIFs A through D; for the fiber PIFs (C and D): link status and link activity LEDs and LC connectors; for the copper PIFs: link status and link activity LEDs and RJ-45 connectors (10/100/1000 Mbps and 10/100 Mbps); power LED.]
This illustration shows a conceptual example of copper and fiber-optic connectivity to the various LANs.

[Drawing; elements: operations LAN; IOAM enclosure; G4SA connections to an application LAN (10/100/1000 Mbps fiber), to an application LAN (10/100/1000 Mbps copper), and to the maintenance switch (10/100 Mbps copper) and then to the operations LAN.]
Default Naming Conventions
The Integrity NonStop NS1000 system implements default naming conventions in the same manner as other Integrity NonStop NS-series systems. With a few exceptions, default naming conventions are not necessary for the modular resources that make up an Integrity NonStop NS1000 system.
No TFTP or WANBOOT process is configured for new NonStop systems.

Note. Naming conventions or configurations for the dedicated service LAN TCP/IP are the same as the TCP/IP conventions used with G-series RVUs. The names are $ZTCP0 and $ZTCP1.

The OSM Service Connection provides the location of a resource by adding an identifying suffix to the names of all the system resources.
7 Example Configurations

This section shows example hardware component configurations in 42U-high modular cabinets for an Integrity NonStop NS1000 server. A number of other configurations are also possible because of the flexibility inherent in the NonStop value architecture and ServerNet network.

Note. Hardware configuration drawings in this section represent the physical arrangement of the modular enclosures but do not show PDUs.
Enclosure or Component          2-processor   4-processor   6-processor   8-processor
HP R5500 XR UPS                 1             1             2¹            2¹
Extended runtime module (ERM)   2             2             4¹            4¹

1. This configuration uses two 42U modular cabinets, with one UPS at offset 2U and up to two ERMs installed directly above the UPS in each cabinet.
2-Processor System
This 2-processor configuration in a 42U modular cabinet has a maximum of two blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and two Fibre Channel disk modules. [Rack elevation: two FCDMs at the top, the IOAM enclosure below them, configurable space, the system console, more configurable space, and two blade elements near the bottom.]

4-Processor System
This 4-processor configuration in a 42U modular cabinet has a maximum of four blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and two Fibre Channel disk modules. [Rack elevation: as above, with four blade elements.]

6-Processor System
This 6-processor configuration in a 42U modular cabinet has a maximum of six blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and four Fibre Channel disk modules. [Rack elevation: as above, with six blade elements.]

8-Processor System
This 8-processor configuration in a 42U modular cabinet has a maximum of eight blade elements (HP Integrity rx2620 servers) with one IOAM enclosure and four Fibre Channel disk modules. [Rack elevation: as above, with eight blade elements.]
2-Processor System With UPS and ERM
The UPS and ERM (two ERMs maximum) must reside in the bottom of the cabinet, with the UPS at cabinet offset 2U. The diagram shows one UPS at offset 2U and one ERM at offset 5U in a 2-processor Integrity NonStop NS1000 system.

4-Processor System With UPS and ERM
The UPS and ERM (two ERMs maximum) must reside in the bottom of the cabinet, with the UPS at cabinet offset 2U. The diagram shows one UPS at offset 2U and one ERM at offset 5U in a 4-processor Integrity NonStop NS1000 system.

6-Processor System With UPS and ERM
The UPS and ERM (two ERMs maximum) must reside in the bottom of each cabinet, with the UPS at cabinet offset 2U. The diagram shows one UPS at offset 2U and one ERM at offset 5U in each cabinet of the 6-processor Integrity NonStop NS1000 system.

8-Processor System With UPS and ERM
The UPS and ERM (two ERMs maximum) must reside in the bottom of each cabinet, with the UPS at cabinet offset 2U. The diagram shows one UPS at offset 2U and one ERM at offset 5U in each cabinet of the 8-processor Integrity NonStop NS1000 system.
Example U Locations for Modular Enclosures
This illustration lists the relative U location in a 42U modular cabinet of each modular enclosure in an example 2-processor Integrity NonStop NS1000 system.

[Rack elevation; callouts include the IOAM enclosure at U26, processors 1 and 0, the system console, and the primary and mirror FCDMs.]
Example of 2-Processor System Cabling
This illustration shows an example of a 2-processor system. This conceptual representation shows the simplified X and Y ServerNet cabling between the blade elements and the IOAM enclosure, and also the simplified cabling between the FCSAs and the Fibre Channel disk modules.
A Cables

Internal Cables
Available internal cables and their lengths are:

Cable Type   Connectors   Length (meters)   Length (feet)   Product ID
MMF          LC-LC        2                 7               M8900-02
                          5                 16              M8900-05
                          15                49              M8900-15
                          40                131             M8900-40
                          80                262             M8900-80
                          100               328             M8900100
                          125¹              410¹            M8900125
                          200¹              656¹            M8900200
                          250¹              820¹            M8900250
                          10                33              M8910-10
                          20                66              M8910-20
                          50                164             M8910-50
                          100               328             TBD
                          125¹              410¹            M8910125
                          3                 10              M8920-3
                          5                 16              M8920-5
                          10                33              M8920-10
                          30                98              M8920-30
                          50                164             M8920-50
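A lookup keyed on the table above avoids transcription errors when ordering. This sketch includes only the MMF LC-LC rows.

    # Look up an internal-cable product ID from the MMF LC-LC rows of the
    # table above, keyed by length in meters.
    MMF_LC_LC = {2: "M8900-02", 5: "M8900-05", 15: "M8900-15",
                 40: "M8900-40", 80: "M8900-80", 100: "M8900100",
                 125: "M8900125", 200: "M8900200", 250: "M8900250"}

    def product_id(length_m):
        try:
            return MMF_LC_LC[length_m]
        except KeyError:
            raise ValueError(f"no standard MMF LC-LC cable of {length_m} m")

    print(product_id(15))   # M8900-15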
Cable Length Restrictions
For a general description of cable length restrictions, refer to the NonStop NS-Series Planning Guide. Details about cable length restrictions that are specific to an Integrity NonStop NS1000 server are presented here.
B Control, Configuration, and Maintenance Tools

This section introduces the control, configuration, and maintenance tools used in Integrity NonStop NS-series systems:

- Support and Service Library (B-1)
- System Console (B-1)
- Maintenance Architecture (B-6)
- Dedicated Service LAN (B-7)
- OSM (B-15)
- System-Down OSM Low-Level Link (B-16)
- AC Power Monitoring (B-17)
- AC Power-Fail States (B-18)

Support and Service Library
See Support and Service Library on page C-1.
System Console Configurations
Some system console hardware, including the PC system unit, monitor, and keyboard, can be mounted in the cabinet. Other PCs are installed outside the cabinet and require separate provisions or furniture to hold the PC hardware. System consoles communicate with Integrity NonStop NS-series servers over a dedicated service local area network (LAN) or a secure operations LAN.
One System Console Managing One System (Setup Configuration)

[Network diagram; elements: DHCP/DNS server (optional); remote service provider reached by modem; secure operations LAN; primary system console; optional connection to a secure operations LAN (one or two connections); maintenance switch 1; IOAM enclosure with 4PSEs, FCSAs, G4SAs, and the 10/100 ENET ports on the ServerNet switch boards.]
Primary and Backup System Consoles Managing One System

[Network diagram; elements: DHCP/DNS server (optional); remote service providers reached by modem; secure operations LAN; primary and backup system consoles; optional connection to a secure operations LAN (one or two connections); maintenance switch 2; IOAM enclosure with 4PSEs, FCSAs, G4SAs (for example, the X-fabric G4SA in group 2, slot 5, port A), and the 10/100 ENET ports on the ServerNet switch boards.]
The dedicated service LAN is normally connected to the operations LAN using a single connection. If both sides of the dedicated service LAN connect directly to the operations LAN, you must:

- Enable Spanning Tree Protocol (STP) in switches or routers that are part of the operations LAN.
- Change the preconfigured IP address of the backup system console before you add it to the LAN.
Maintenance Architecture
This simplified illustration shows the two elements of the maintenance architecture plus the OSM maintenance console applications.

[Diagram; elements: connection to a remote support center; maintenance switch and maintenance LAN; OSM console; ServerNet fabric; the I/O and fabric functional element (IOAM with ME, G4SA, FCSA, and FCDM with EMU on an FC-AL loop); and the processor functional element (blade element with PE).]
Dedicated Service LAN
A dedicated service LAN provides connectivity between the OSM console running on a PC and the maintenance firmware in the system hardware. This dedicated service LAN uses a ProCurve 2524 Ethernet switch for connectivity between the ServerNet switch boards for the IOAM and the system console.
Fault-Tolerant Configuration
You can configure the dedicated service LAN as described in the OSM Migration Guide. HP recommends that you use a fault-tolerant LAN configuration. A fault-tolerant configuration includes these connections to two maintenance switches:

- Connect one system console to each maintenance switch.
- Connect one of the two ServerNet switch boards in the IOAM enclosure to each maintenance switch.
This illustration shows a fault-tolerant LAN configuration with two maintenance switches.

[Network diagram; elements: DHCP/DNS server (optional); remote service providers reached by modem; secure operations LAN; primary and backup system consoles; optional connection to a secure operations LAN (one or two connections); maintenance switches 1 and 2; IOAM enclosure with 4PSEs, FCSAs, G4SAs (for example, the X-fabric G4SA in module 2, slot 5), and the 10/100 ENET ports on the ServerNet switch boards.]
IP Addresses
Integrity NonStop NS1000 servers require Internet protocol (IP) addresses for these components that are connected to the dedicated service LAN:

- ServerNet switch boards in the IOAM enclosure
- Maintenance switches
- System consoles
- G4SAs
- UPSs (optional)

These components have default IP addresses that are preconfigured at the factory.
Whether or not the new system will receive dynamic IP addresses from a Dynamic Host Configuration Protocol (DHCP) server, it is recommended that the IP addresses be reconfigured as either:

- Static IP addresses
- Dynamically assigned IP addresses

Note. Be aware of possible conflicts with existing operations LANs. This guide cannot predict all possible configurations of existing LANs.
Static or Dynamic IP Addresses
Various components within the dedicated service LAN can have static or dynamic IP addresses.
System-Up Dedicated Service LAN
When the system is up and the operating system is running, the ME connects to the NonStop NS1000 system's dedicated service LAN using one of the PIFs on each of two G4SAs. This connection enables OSM Service Connection and OSM Notification Director communication for maintenance in a running system.
Dedicated Service LAN Links With One IOAM Enclosure
This illustration shows the dedicated service LAN cables connected to the G4SAs in slot 5 of both modules of an IOAM enclosure and to the maintenance switch.

[Drawing; callouts: maintenance switch; IOAM enclosure (group 110), modules 2 and 3; G4SA Ethernet PIF connectors D, C, B, and A; cable to the maintenance switch.]
Initial Configuration for a Dedicated Service LAN
New systems are shipped with an initial set of IP addresses configured. For a listing of these initial IP addresses, see IP Addresses on page B-10. Factory-default IP addresses for the G4SA are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual.
For information on how to install, configure, and start OSM server-based processes and components, see the OSM Migration Guide.
AC Power Monitoring
To support system operation through power transients, or an orderly system shutdown during a power failure, Integrity NonStop NS-series servers require either the optional HP model R5500 XR UPS (with one or two ERMs for additional battery power) or a user-supplied UPS installed in each modular cabinet, or a user-supplied site UPS.
AC Power-Fail States
These states occur when a power failure occurs and an optional HP model R5500 XR UPS is installed in each cabinet within the system:

System State   Description
NSK_RUNNING    The NonStop operating system is running normally.
RIDE_THRU      OSM has detected a power failure and begins timing the outage. AC power returning terminates RIDE_THRU and puts the operating system back into the NSK_RUNNING state.
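Only the two states above survive in this copy of the table, so the following model is deliberately incomplete: it sketches just the NSK_RUNNING/RIDE_THRU transition that the descriptions define.

    # Minimal model of the power-fail behavior described above: losing AC
    # moves NSK_RUNNING to RIDE_THRU (OSM starts timing the outage), and
    # AC returning moves RIDE_THRU back to NSK_RUNNING. The table's
    # remaining states are not modeled here.
    from enum import Enum

    class PowerState(Enum):
        NSK_RUNNING = "operating system running normally"
        RIDE_THRU = "power failure detected; outage being timed"

    def next_state(state, ac_present):
        if state is PowerState.NSK_RUNNING and not ac_present:
            return PowerState.RIDE_THRU
        if state is PowerState.RIDE_THRU and ac_present:
            return PowerState.NSK_RUNNING
        return state

    print(next_state(PowerState.NSK_RUNNING, ac_present=False).name)  # RIDE_THRU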
C Guide to Integrity NonStop NS-Series Server Manuals

These manuals support the Integrity NonStop NS-series systems:

Category: Reference
Purpose: Provide information about the manuals, the RVUs, and hardware that support NonStop NS-series servers
Title: NonStop Systems Introduction for H-Series RVUs

Category: Change planning and control
Purpose: Describe how to prepare for changes to software or hardware configurations
Titles: Managing Software Changes; H06.
Within these categories, where applicable, content might be further categorized according to server or enclosure type. Authorized service providers can also order the NTL Support and Service Library CD:

- HP employees: Subscribe at World on a Workbench (WOW). Subscribers automatically receive CD updates. Access the WOW order form at http://hps.knowledgemanagement.hp.com/wow/order.asp.
Safety and Compliance

This section contains three types of required safety and compliance statements:

- Regulatory compliance
- Waste Electrical and Electronic Equipment (WEEE)
- Safety

Regulatory Compliance Statements
The following regulatory compliance statements apply to the products documented by this guide.

FCC Compliance
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules.
Korea MIC Compliance

Taiwan (BSMI) Compliance

Japan (VCCI) Compliance
This is a Class A product based on the standard of the Voluntary Control Council for Interference by Information Technology Equipment (VCCI). If this equipment is used in a domestic environment, radio disturbance may occur, in which case the user may be required to take corrective actions.
European Union Notice
Products with the CE Marking comply with both the EMC Directive (89/336/EEC) and the Low Voltage Directive (73/23/EEC) issued by the Commission of the European Community.
SAFETY CAUTION
The following icon or caution statements may be placed on equipment to indicate the presence of potentially hazardous conditions:

DUAL POWER CORDS
CAUTION: "THIS UNIT HAS MORE THAN ONE POWER SUPPLY CORD. DISCONNECT ALL POWER SUPPLY CORDS TO COMPLETELY REMOVE POWER FROM THIS UNIT."
"ATTENTION: CET APPAREIL COMPORTE PLUS D'UN CORDON D'ALIMENTATION. DÉBRANCHER TOUS LES CORDONS D'ALIMENTATION AFIN DE COUPER COMPLÈTEMENT L'ALIMENTATION DE CET ÉQUIPEMENT."
HIGH LEAKAGE CURRENT
To reduce the risk of electric shock due to high leakage currents, a reliable grounded (earthed) connection should be checked before servicing the power distribution unit (PDU).
Glossary For a glossary of Integrity NonStop NS-series terms, see the NonStop System Glossary in the NonStop Technical Library (NTL).
Index

Numbers
4-port ServerNet extender card: connections 5-10; description 1-1; installation slots 5-14

A
AC current calculations 3-13
AC power: 200 to 240 V ac, single phase, 32A RMS 5-5; 200 to 240 V ac, single phase, 40A RMS 5-4; 208 V ac, 3-phase delta, 24A RMS 3-3, 5-3; 380 to 415 V ac, 3-phase Wye, 16A RMS 5-4; enclosure input specifications 3-5; input 3-2; power-fail monitoring B-17; power-fail states B-18; unstrapped PDU 6-25
AC power feed 5-6: bottom of cabinet 5-6; top of cabinet 5-7; with cabinet UPS 5-8, 5-9

D
daisy-chain disk configuration recommendations 6-15
dedicated service LAN B-7
default disk drive locations 6-13
default startup characteristics 4-11
dimensions: enclosures 3-9; modular cabinet 3-8; service clearances 3-8
disk drive configuration recommendations 6-14
documentation: NonStop NS-series servers C-1; packet 4-13; ServerNet adapter configuration 4-13
dust and microscopic particles 2-5
dynamic IP addresses B-11

E
electrical disturbances 2-2
electrical power loading 3-6
emergency power off (EPO) switches 2-2

F
Fibre Channel disk module 5-15, 6-10
flooring 2-5
forms for ServerNet adapter configuration 4-13
front panel, blade element indicator LEDs 5-13
FRU: blade element 4-3; fan for blade element 4-4; main memory 4-4; power supply 4-4
fuses, PDU 5-6, 5-7

G
G4SA: installation slots 5-15; network connections 6-21; service LAN PIF B-13
GMS for Fibre Channel disk module 5-25
grounding 2-3, 3-4

H
hardware configurations: examples 3-13; typical 7-2
heat calculation 2-5, 3-11
height in U, enclosures 3-8
hot spots 2-4

M
modular cabinet: 380 to 415 V ac input, 3-phase Wye, 16A RMS 5-4; physical specifications 3-9; weight 3-10

N
naming conventions 6-24
NonStop value architecture: described 4-9; overview 1-1; terms 4-3
NSVA — see NonStop value architecture

O
operating system load paths 4-11
operational space 2-7
OSM B-2, B-15, B-16

P
particulates, metallic 2-6
paths, operating system load 4-11
PDU: AC power feed 5-6; description 5-6; fuses 5-6, 5-7; receptacles 5-9; strapping configurations 6-25

S
ServerNet PCI adapter card 5-14
service clearances 3-8
service LAN B-7
specifications: assumptions 3-1; cabinet physical 3-9; cable 6-5; enclosure dimensions 3-9; heat 3-11; nonoperating temperature, humidity, altitude 3-12; operating temperature, humidity, altitude 3-11; weight 3-10
startup characteristics, default 4-11
static IP addresses B-11
SWAN concentrator restriction B-12
system configurations, examples 3-13
system console: configurations B-2; description B-1; overview 5-17
system disk location 4-11