ServerNet Cluster 6780 Planning and Installation Guide

Abstract
This guide describes installing or migrating to HP NonStop™ ServerNet switches (model 6780) in an HP NonStop ServerNet Cluster.

Product Version
N.A.

Supported Release Version Updates (RVUs)
This guide supports G06.22 and all subsequent RVUs until otherwise indicated in its replacement publication.
Document History

Part Number   Product Version   Published
527301-001    N.A.              September 2003
527301-002    N.A.              December 2003
527301-003    N.A.              March 2004
527301-004    N.A.
Contents

What's New in This Guide
About This Guide
1. ServerNet Cluster Overview
2. ServerNet Cluster Hardware Description
3. Planning for Installation and Migration
4. Preparing a System for Installation or Migration
5. Installing 6780 Switches
6. Connecting the Fiber-Optic Cables
7. Configuring Expand-Over-ServerNet Lines
8. Checking Operations
9. Changing a ServerNet Cluster
10. Troubleshooting
11. Starting and Stopping ServerNet Cluster Processes and Subsystems
A. Part Numbers
B.
C. ESD Guidelines
D. Specifications
E. Configuring MSGMON, SANMAN, and SNETMON
F.
G. Using the Long-Distance Option
Safety and Compliance
Glossary
Index
What's New in This Guide

Manual Information
ServerNet Cluster 6780 Planning and Installation Guide, part number 527301-004. This guide supports G06.22 and all subsequent RVUs until otherwise indicated in its replacement publication.

New and Changed Information
About This Guide

This guide describes installing or migrating to ServerNet clusters using 6780 switches and the layered topology. It assumes that you are familiar with HP NonStop S-series servers, the ServerNet protocol, and networking fundamentals. If you are not familiar with these concepts, refer to the NonStop S-Series Server Description Manual.

This table describes the sections and appendixes of this guide.
Section   Title            This section...
C         ESD Guidelines   Provides guidelines for preventing damage caused by electrostatic discharge (ESD) when working with electronic components.
D         Specifications   Provides additional information about the 6780 switch and the fiber-optic cables that are used in a ServerNet cluster.

Notation Conventions
General Syntax Notation

[ ] Brackets. Brackets enclose optional syntax items. For example:
TERM [\system-name.]$terminal-name
INT[ERRUPTS]
A group of items enclosed in brackets is a list from which you can choose one item or none. The items in the list may be arranged either vertically, with aligned brackets on each side of the list, or horizontally, enclosed in a pair of brackets and separated by vertical lines. For example:
FC [ num  ]
   [ -num ]
   [ text ]
K [ X | D ] address-1

{ } Braces.
Item Spacing. Spaces shown between items are required unless one of the items is a punctuation symbol such as a parenthesis or a comma. For example:
CALL STEPMOM ( process-id ) ;
If there is no space between two items, spaces are not permitted. In the following example, there are no spaces permitted between the period and any other items:
$process-name.#su-name

Line Spacing.
1 ServerNet Cluster Overview This section provides an overview of ServerNet clusters.
ServerNet Clusters

ServerNet clusters allow multiple multiprocessor nodes to work together and appear to client applications as one large processing entity. This interconnection technology uses the ServerNet protocol to pass information from one node to any other node in the cluster.

Note. For ServerNet clusters, a cluster is a collection of servers, or nodes, that can function either independently or collectively as a processing unit.
Cluster Switches

ServerNet clusters support two types of cluster switches, as shown in Table 1-1.
Network Topologies for ServerNet Clusters ServerNet Cluster Overview Network Topologies for ServerNet Clusters The network topology of a ServerNet cluster refers to the physical layout of components in the network and how those components are connected together.
Layered Topology

Figure 1-1, Layered Topology (Two Zones and Two Layers), shows two switch zones, each containing X and Y switch groups with two 6780 switch layers, serving nodes 1 through 32.

Cluster Switch Zone

A cluster switch zone consists of a pair of X and Y switch groups and the ServerNet nodes connected to them.
Cluster Switch Module

Within a 6780 cluster group, each 6780 switch module represents one 6780 switch. Each module is numbered one through four from bottom to top. The switch module number and switch layer number are always the same.

Cluster Switch Name

The cluster switch name consists of the fabric, zone, and layer. For example, the 6780 switch on the X fabric, zone 2, and layer 1 is named X21.
Message-System Traffic and Expand

Figure 1-2, Message Passing Over ServerNet, illustrates ordinary messages passing directly between the message systems of processors in \ALPHA and \BETA, and security-checked messages passing through the Expand line-handler (LH) processes.
Software Requirements for NonStop S-Series Servers ServerNet Cluster Overview Software Requirements for NonStop S-Series Servers Before adding a NonStop S-series server to a ServerNet cluster that uses the layered topology, you must install the required software. HP recommends that you install the latest product version updates (PVUs).
Table 1-3. Software Requirements for NonStop S-Series Servers (page 2 of 2)

SPRs Required                            Comments
ServerNet Management Driver (T2800G08)   A system generation and a system load are required to install this SPR.
SP firmware (T1089ABN)                   A system generation and a system load are required to install this SPR.
Hardware Required ServerNet Cluster Overview Hardware Required ServerNet clusters require switches, nodes, and fiber-optic cables to route the ServerNet packets and messages. For more information, refer to Hardware Installation and Migration Planning on page 3-5. Switches For the layered topology, 6780 switches are required. Depending on your configuration, specific PICs are required in each 6780 switch. Table 1-5.
Table 1-6. NonStop S-Series Server Components Required for Clustering (page 2 of 2)

Hardware component: Modular ServerNet expansion boards (MSEBs)
Required for layered topology: 4 to 128
Required for each node: 2 (one per external fabric)
Notes: The MSEBs must be installed in slot 51 and slot 52 of the group 01 system enclosure. MSEBs in any other system enclosure cannot be used for connections to the cluster switches.
2 ServerNet Cluster Hardware Description This section provides a description of the 6780 switches and related components. For part numbers and service categories, refer to Appendix A, Part Numbers.
Routers

Routers in a ServerNet cluster:
• Provide wormhole routing of ServerNet packets
• Receive data packets, check the packets for errors, interpret their destination addresses, and then route the packets out one of the output ports
• Route messages across ServerNet links within a node and across ServerNet clusters

Router-2 ASICs

The router-2 ASIC:
• Contains a 12-way crossbar switch.
Identifying a 6780 Switch

Numeric Selector Setting

The numeric selector setting loaded on the 6780 switch, shown in Table 2-1, determines the switch name, group number, module number, and the ServerNet node numbers supported by that switch.

Note. Optional long-distance connections require special numeric selector settings. For more information, see Appendix G, Using the Long-Distance Option.
6780 Switch Name

The switch name consists of the fabric, zone, and layer. For example, the switch on the X fabric, zone 2, and layer 1 is named X21.

6780 Group Number

A 6780 switch group number contains four digits. The first two digits are always 10 for 6780 switches, the third digit indicates the 6780 switch zone, and the fourth digit indicates the fabric (0 for the X fabric, 1 for the Y fabric).
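For example, applying this numbering rule: the switch named X21 (X fabric, zone 2) belongs to group 1020, and its Y-fabric counterpart Y21 belongs to group 1021. Every layer of a given switch group shares that group number, because the layer is carried in the module number rather than in the group number.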
Identifying Components in a 6780 Switch ServerNet Cluster Hardware Description Module Number The module number is always the same as the layer number. The module number corresponds to the fourth digit setting on the numeric selector. Identifying Components in a 6780 Switch Each switch module contains 18 slots: • • Slots 1 through 13 (Figure 2-2, Plug-In Cards, on page 2-6) are located in the rear of the switch, and contain up to 13 plug-in cards.
The 13 slots in the rear of each 6780 switch contain PICs, as shown in Figure 2-2, Plug-In Cards, which shows PIC slots 1 through 13.

6780 Switch Plug-In Cards (PICs)

The PICs share the common midplane interface and provide the physical interface for each ServerNet link.
6780 Switch Ports

The number of ports on each PIC varies depending on the PIC type, as described in Table 2-5. The quad MMF PICs contain four transceivers, one in each port. The dual SMF PICs contain only one transceiver each in ports 1 and 2.
Connections Between External Ports and Internal Routers

Table 2-6 shows the relationship between the switch ports located in the rear of the 6780 switch and the router ports inside the logic board. The internal routers that connect to an external port do not become enabled until a fiber-optic cable establishes a good connection to the external port.
Midplane

The midplane includes four ServerNet ports, I2C-based control and status, and environmental and power control.

Logic Board

The logic board takes 12 V power from the midplane and converts it into the lower voltage levels needed throughout the logic board and PICs.
ServerNet Cluster Hardware Description Logic Board Numeric Selector The four-digit numeric selector sets the configuration for each 6780 switch. For information about how to set the numeric selector, refer to Task 3: Set the Numeric Selector on page 5-4. A hard reset is required after you set the numeric selector. Ejectors The ejectors are used to remove the logic board from the switch module, and to secure the logic board in place.
Table 2-8 describes the logic board LCD display.
Logic Board Routers

Five routers are located inside the logic board. Some of the ports on these routers provide links to external ports, and others provide connections for the router interconnect or packetizer. Table 2-9 shows the external slot and external port where each port on the five internal routers connects.
Firmware, Configuration, and FPGA Images ServerNet Cluster Hardware Description Firmware, Configuration, and FPGA Images The firmware, configuration, and FPGA images are saved in flash memory on the logic board in the 6780 switch. Check that the correct versions of the firmware, configuration, and FPGA images are running on each 6780 switch after you install it. If a newer version of an image is available, update it on the logic board.
ServerNet Cluster Hardware Description Power Components Power Components These components support ServerNet clusters: • • Power Distribution Unit (PDU) on page 2-17 Uninterruptible Power Supply (UPS) on page 2-17 Power Distribution Unit (PDU) The 1U modular PDU contains one control unit, four extension bars, and an attached input cord. For one or two 6780 switches in a rack, a PDU is not required. One PDU is required for three or more switches in a rack.
ServerNet Cluster Hardware Description Nodes Nodes ServerNet clusters support any NonStop S-series server as a ServerNet node. NonStop K-series servers are not supported in ServerNet clusters. Identifying a Node Each node in a ServerNet cluster has a ServerNet node number that identifies its unique position in the cluster. Each ServerNet node has a system number, also known as the Expand node number, that identifies a system in an Expand network.
ServerNet Cluster Hardware Description NonStop S-Series Servers NonStop S-Series Servers NonStop S-series servers contain these components to support ServerNet clusters: • • • Service Processors (SPs) on page 2-20 Modular ServerNet Expansion Boards (MSEBs) on page 2-20 Node-Numbering Agent (NNA) Field-Programmable Gate Array (FPGA) Plug-in Card (PIC) on page 2-21 Service Processors (SPs) In a NonStop S-series server, SPs are components of the processor multifunction (PMF), PMF2, I/O multifunction (IOMF
Node-Numbering Agent (NNA) Field-Programmable Gate Array (FPGA) Plug-in Card (PIC)

Port 6 of the MSEBs in slots 51 and 52 of the group 01 system enclosure must contain a single-mode, fiber-optic NNA FPGA PIC. An NNA FPGA PIC cannot be installed in other ports of the MSEBs or in any other system enclosures. See Figure 2-6, NNA FPGA PIC Installed in Port 6, on page 2-21.
Figure 2-7, Duplex SC Connector and Receptacle for MSEB Connections, shows the keys on the connector body.
Light-Emitting Diodes (LEDs)

Several components required for ServerNet clusters provide LEDs to help you monitor the cluster.

LED Types

The types of LEDs include:
• Power-On LED on page 2-23
• Fault LED on page 2-23
• Link-Alive LED on page 2-23

Power-On LED

This LED lights when the component is powered on.

Fault LED

When lit steadily (not flashing and not off), this LED indicates that a component is not in a fully functional state.
ServerNet Cluster Hardware Description LEDs on the 6780 Switch LEDs on the 6780 Switch The 6780 switch logic board, fans, power supplies, PICs, and ports contain LEDs. Figure 2-8 describes the LEDs on the 6780 switch. Figure 2-8.
ServerNet Cluster Hardware Description LEDs on MSEBs in NonStop S-Series Servers LEDs on MSEBs in NonStop S-Series Servers In NonStop S-series servers, the MSEB includes LEDs for monitoring the connection to the ServerNet cluster. Figure 2-9 on page 2-25 describes the LEDs on the MSEB. Only PIC 6 on the MSEB is shown. Figure 2-9.
ServerNet Cluster Hardware Description Links Links Figure 2-10 shows the basic model for a link between a cluster switch and either another switch or a node. A link is the entire communications path between two devices. The intermediate fiber-optic connectors might be part of a patch panel or splices between distinct fiber-optic cable segments. Figure 2-10.
Electric Code Regulations for Fiber-Optic Cables

WARNING. You are required to comply with the National Electrical Code (NEC) Article 725 for installations in the United States and with any local regulations for fiber-optic cables. Neither plenum-rated cables nor riser-rated cables are designed to be used in conduits.
Connectors

Two types of connectors are supported for ServerNet clusters using 6780 switches.

LC Connectors

The Lucent connector (LC) (Figure 2-11) holds a single fiber in a 1.25 mm ceramic ferrule, half the size of the standard SC ferrule. The connector body is made of molded plastic and features a square front profile. Two LC connectors clipped together form a duplex LC.
3 Planning for Installation and Migration This section provides instructions on how to plan the installation or migration to a ServerNet cluster that uses the layered topology and 6780 switches.
Planning Checklist

Before you begin, HP strongly recommends that you complete the planning steps to ensure that the installation or migration is successful, proceeds quickly, and requires little long-term maintenance.

Note. Some planning checks are optional, but others must be performed in advance; otherwise, you cannot complete the procedure.

Table 3-1 shows the Planning Checklist.
Table 3-1. Planning Checklist (page 2 of 3)

Major Planning Step: For the best fault tolerance, check that uninterruptible power supplies are available to support the 6780 switches.
Table 3-1. Planning Checklist (page 3 of 3)

Major Planning Step: Check that there is enough space for servicing in front of and behind each 19-inch rack. (For more information: Floor Space for the Racks on page 3-12)

Plan for the Fiber-Optic Cables
Major Planning Step: Check that each 6780 switch is no more than 80 meters (measured by cable length) from the connecting port on the node.
Planning for Installation and Migration Software Installation Planning Software Installation Planning Planning for software includes identifying and installing all required software on all nodes in the ServerNet cluster. Note. G06.21 and later functionality includes support for the layered topology but is also backward compatible with the star, split-star, and tri-star topologies. For more information, refer to Software Requirements for NonStop S-Series Servers on page 1-8.
Task 2: Plan for the System Consoles

For system consoles connected to nodes in a ServerNet cluster, HP recommends this LAN configuration for the best OSM performance:
• For every 10 nodes in the ServerNet cluster, at least one primary system console and one backup system console on a private LAN.
• No more than 10 nodes included within one subnet on a private LAN.
Figure 3-1, Ethernet LAN Serving Multiple Nodes, shows nodes \A, \B, \C, and \D connected to one Ethernet LAN and joined by the external ServerNet X and Y fabrics in a ServerNet cluster.
Planning for Installation and Migration Task 3: Plan for the 6780 Switches Task 3: Plan for the 6780 Switches Each 6780 switch weighs approximately 75 pounds. The switches are either preinstalled in a provided rack or shipped separately in a box. Plan the Number of Layers and Zones Note. Each zone must have the same number of layers.
Table 3-2 shows the minimum number of zones and layers required for a particular number of nodes.

Table 3-2. Number of Layers and Zones

Maximum Number of Nodes    Number of Zones    Number of Layers
8                          1                  1
16                         2                  1
                           1                  2
24                         3                  1
                           1                  3
32                         2                  2
                           1                  4
48                         3                  2
                           2                  3
64                         2                  4
72                         3                  3
96                         3                  4

Plan the Number of 6780 Switches

The number of 6780 switches you need depends on the number of zones and layers, as shown in Table 3-3.
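As Table 3-2 suggests, each 6780 switch provides eight node-connection ports (dual SMF PICs in slots 6 through 9), and each zone-layer combination requires two switches, one per external fabric. A cluster can therefore grow to 8 × (number of zones) × (number of layers) nodes. For example, a two-zone, two-layer cluster uses eight 6780 switches (X11, X12, X21, X22 and their Y-fabric counterparts) and supports up to 32 nodes, matching the layered topology shown in Figure 1-1.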
Plan the PICs Needed for Each 6780 Switch

Blank PICs are installed if other PICs are not present.
1. If you have two or three zones, decide whether to use two dual SMF PICs or two quad MMF PICs for the zone interconnect.
2. Decide on the number of dual SMF PICs for the connections between a 6780 switch and the NonStop S-series servers. You must have at least the minimum number stated in Table 3-4 (see the example following this list).
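For example (assuming, per 6780 Switch Ports in Section 2, that each dual SMF PIC provides two node-connection ports): a switch that must connect six NonStop S-series servers needs at least three dual SMF PICs in slots 6 through 9, and a switch fully populated with four dual SMF PICs in those slots supports up to eight directly connected nodes.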
Planning for Installation and Migration Task 3: Plan for the 6780 Switches Plan the Fabric, Zone Number, and Layer Number for Each 6780 Switch Use Table 3-5, 6780 Switch Settings to plan the name (which includes the fabric, zone, and layer) and numeric selector setting for each switch. Note. Optional long-distance connections require special numeric selector settings. For more information, see Appendix G, Using the Long-Distance Option. Table 3-5.
Task 4: Plan for the Racks

You can order the 6780 switches already installed in an HP 10642 rack. Table 3-6 shows the dimensions of the HP 10642 rack.

Table 3-6. Dimensions of an HP 10642 Rack

Dimensions                            Height              Depth         Width
Rack (without packaging materials)    78.7 inches (42U)   39.4 inches   23.62 inches
                                      2000 mm             1000 mm       600 mm
                                      200 cm              100 cm        60 cm
Planning for Installation and Migration Task 5: Plan for the Power Requirements Task 5: Plan for the Power Requirements Check that you order the appropriate parts for the best fault tolerance. Check that the power cords are long enough to reach between the 6780 switch and either the UPS, PDU, or external power source.
UPS

For 6780 switches, a UPS is optional but recommended. You can use any UPS that meets the 6780 switch power requirements for all switches powered from that UPS. Refer to Table 3-7 on page 3-13. One UPS option to support the switches is the HP UPS R3000 XR. For part numbers, refer to Uninterruptible Power Supply (UPS) on page A-3.

Task 6: Plan the Location of the Hardware
Planning the Location of the 6780 Switches

To reduce cabling errors, HP recommends that you install the X-fabric and Y-fabric switches with some distance between them. Do not install switches for different fabrics in the same rack, because there is a higher potential for miscabling. Each switch must be located so that:
• The power cords can reach the UPS, PDU, or external power source.
Task 7: Plan for the Fiber-Optic Cables

Table 3-9. Cables Required for Clustering

Cable type: SMF cables with SC-to-LC connectors
Number of cables: Based on how many NonStop S-series servers you plan for the ServerNet cluster, determine the number of cables needed. Refer to Table 3-10 on page 3-16.
Cable length: Based on the distance between each ServerNet node and the switch it connects to, determine the length needed for the cables.
Table 3-11 on page 3-17 describes the required fiber-optic cables.
Planning for Installation and Migration Task 8: Plan to Migrate the ServerNet Nodes From 6770 Switches Task 8: Plan to Migrate the ServerNet Nodes From 6770 Switches Skip this step if you are not migrating from a ServerNet cluster with 6770 switches. 1. Determine the current ServerNet node numbers for each node using either SCF or OSM. Refer to Checking the ServerNet Node Numbers on page 8-12. 2. Using the Migrating Nodes Form on page B-3: a.
Task 10: Plan the Expand-Over-ServerNet Lines

You must plan for:
• Configuration of Expand-over-ServerNet line-handler processes and lines for each node in the ServerNet cluster
• Compatibility with other types of Expand lines, including changing the time factors for other line types if necessary

Refer to the Expand Configuration and Management Manual for more information about:
• Configuring Expand-over-ServerNet lines
$NCP uses these criteria to select the best-path route to a specific node:
• The route must have the lowest TF (time factor) of all possible routes.
• If two or more routes have the same TF, the route that has the lowest hop count (HC), that is, the fewest intervening nodes, is selected.
• Each path between two nodes is one hop. For example, a route that includes one passthrough node has an HC of 2.
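A worked example with illustrative, hypothetical time factors: suppose node \A can reach node \D either directly over one Expand-over-ServerNet line with a TF of 10, or through passthrough node \B over two lines whose TFs total 8. $NCP selects the route through \B because its total TF (8) is lower, even though its hop count (2) is higher than the direct route's hop count of 1. If both routes instead had a total TF of 10, $NCP would select the direct route because of its lower hop count.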
Planning for Installation and Migration Migration Examples Migration Examples These examples begin with all nodes in the ServerNet cluster running the G06.21 or a later RVU and the tri-star topology. The original cluster can also be configured as a star or split-star topology. Following the migration, the cluster uses the layered topology. Figure 3-2 shows the existing tri-star topology. The cluster uses ServerNet II Switches, which are the routing components of a 6770 switch.
In most cases, migration to the layered topology can be done online. Refer to Example: Online Migration Using the Existing ServerNet Node Numbers on page 3-23. However, if the X3/Y3 6770 switches and the X1/Y1 6770 switches are more than 100 meters apart, the migration from a tri-star topology must be done offline, because the ServerNet node numbers must be changed.
Example: Online Migration Using the Existing ServerNet Node Numbers

Figure 3-3, After the Migration: Example of Upgrading to a Layered Topology ServerNet Cluster (X fabric only), shows nodes \A through \F connected to 6780 switches X11, X12, X21, and X22, with layer cables within each zone, zone cables between the zones, and system consoles A, B, and C serving the cluster.
Planning for Installation and Migration Example: Offline Migration Requiring Changes to the ServerNet Node Numbers Example: Offline Migration Requiring Changes to the ServerNet Node Numbers This migration changes the ServerNet node numbers from 17 through 24 to 65 through 72. The ServerNet node numbers 1 through 8 and 9 through 16 remain unchanged. Figure 3-4 shows the cluster after the migration.
4 Preparing a System for Installation or Migration This section provides instructions to prepare both systems that are not currently in a ServerNet cluster and existing ServerNet nodes currently connected to 6770 switches. You must perform all the required tasks before adding or migrating the node to a ServerNet cluster using 6780 switches.
Task 3: Check and Install the Required Software

Time constraints unique to each production environment might make it difficult to install the required software on all systems at once. At least one node for each zone is required during the installation of the 6780 switches.
Table 4-2. Minimum NonStop S-Series Server Software VPROC Versions (page 2 of 2)

Software Component    Minimum VPROC Version
SNETMON and MSGMON    T0294G08_11AUG03_07JUL03_AAL
SANMAN                T0502G08_11AUG03_12AUG03_AAL
SP Firmware           T1089G06^26AUG03^23JUL03^ABK

Installing the Client Software

On system consoles connected to ServerNet nodes, install all client applications as described in the NonStop System Console Installer Guide.
Preparing a System for Installation or Migration Task 4: Check Operations and the Required Processes Task 4: Check Operations and the Required Processes 1. Check that these processes are configured and started. These processes might already be configured when you receive a new system: a. Before a system joins a ServerNet cluster, MSGMON, SANMAN, and SNETMON must be configured using the correct symbolic names and started. To check these processes, refer to Checking MSGMON, SANMAN, and SNETMON on page 8-13.
Task 5: Check and Install the Hardware

1. Check that all nodes in the ServerNet cluster have the same time. At a TACL prompt:
   > TIME
2. Use the SCF interface to the Kernel subsystem to check the system name and system number and to view the time zone offset. At an SCF prompt:
   -> INFO SUBSYS $ZZKRN
Preparing a System for Installation or Migration Checking NonStop S-series Servers 4. To install MSEBs, refer to Installing the Required Hardware in NonStop S-Series Servers on page 4-10. Figure 4-1.
When installed in an MSEB and viewed on the MSEB faceplate, the SMF fiber PIC, MMF fiber PIC, and SMF NNA PIC are identical in appearance, as shown in Figure 4-3, SEB and MSEB Connectors. Therefore, you must remove the MSEB and visually check the PIC installed on the MSEB common base board.
Figure 4-4 compares an NNA PIC and an ECL PIC.

Installing the Required Hardware in NonStop S-Series Servers

If MSEBs are not installed in the NonStop S-series server to be added as a ServerNet node, you must install them before adding the node to the cluster.
Preparing a System for Installation or Migration Installing the Required Hardware in NonStop SSeries Servers Replace SEB or MSEB Guided Procedure You can use a guided replacement procedure to replace SEBs installed in slots 51 and 52 of group 01 with MSEBs, and to install a PIC in an MSEB. The online help for the guided procedure tells you when to connect ServerNet cables to the PICs in the MSEB. From a system console connected to the node you are adding: 1. Log on to the OSM Service Connection. 2.
5 Installing 6780 Switches This section provides instructions to install 6780 switches in a ServerNet cluster that uses the layered topology. Task 1: Inventory the Hardware 5-2 Task 2: Install the Hardware 5-2 Installing a Rack With Preinstalled Components 5-2 Installing Components Into an EIA Standard Rack 5-2 Task 3: Set the Numeric Selector 5-4 Task 4: Connect the Power Cords 5-5 Note. The time required to complete the installation varies with the hardware to be installed.
Task 1: Inventory the Hardware

Check that you have the required number of 6780 switches to build your cluster. The number of switches you need depends on how many nodes, zones, and layers you have planned. For more information, refer to Task 3: Plan for the 6780 Switches on page 3-8.

Task 2: Install the Hardware

The 6780 switches and related components are either shipped preinstalled in an HP 10642 rack or shipped separately.
Installing Components Into an EIA Standard Rack

Quantity   Description
2          Front rails
2          Rear rails
10         M6 Torx screws
4          Nylon ties for power cords

2. Attach the front rails to the rack:
   a. Place each front rail against the inside front of the rack.
   b. Check that the placement of the M6 screws is correct.
   c. Use two M6 screws to attach the rail to the rack. Center each M6 screw in its hole and fasten tightly.
3. Attach the rear rail to the rack:
Install a Cable-Management Assembly

For each switch:
1. Verify that you have all parts for the cable-management assembly:

   Quantity   Description
   1          Cable-management assembly, including tray, 2 vertical radius guides (VRGs), and 13 cable-management cartridges
   4          M6 Torx screws
   1          Cable labels
   4          Velcro ties for the fiber-optic cables

2. Verify that you have a T30 Torx screwdriver.
Task 4: Connect the Power Cords

For each switch:
1. Connect one AC power cord to the left AC power inlet on the switch.
2. Use two nylon cable ties to secure the power cord to the rear rail.
3. Cut the excess from the cable ties.
4. If using a power distribution unit:
   a. Connect the other end of the power cord to an extension bar.
   b. Connect the power cord on the extension bar to the PDU control unit.
   c. Power on the PDU control unit.
6 Connecting the Fiber-Optic Cables This section provides instructions to connect the cables between switches, and between the switches and nodes.
Summary of Tasks

The procedure to install the fiber-optic cables varies depending on whether you are migrating from a ServerNet cluster with 6770 switches or installing a new ServerNet cluster.

Installing a New ServerNet Cluster With 6780 Switches

1. Select a fabric to install.
2. Connect one node to one 6780 switch in one group. Refer to Connecting the Cables Between a Node and a 6780 Switch on page 6-18.
One at a time, disconnect the cable from the NonStop S-series server to the 6770 switch on only one fabric.

Note. Do not disconnect both cables. When removing a node from only one fabric, you do not need to stop the Expand-over-ServerNet lines or the ServerNet cluster subsystem. You cannot reuse the cables from the 6770 switches when connecting 6780 switches.

4. Connect one node to one 6780 switch in one group.
Connecting the Fiber-Optic Cables Connecting the Layer Cables Connecting the Layer Cables To connect the fiber-optic cables between layers of 6780 switches in a group: 1. Inventory the hardware. For more information, refer to Task 7: Plan for the FiberOptic Cables on page 3-15. a. Check that you have the correct number of layer cables. b. Check that you have the correct type of cables in the required lengths. 2. Label both ends of each fiber-optic cable. Refer to Layer Cable Connections on page 6-5.
Connecting the Fiber-Optic Cables Layer Cable Connections Layer Cable Connections Each layer cable connects to another switch in the same fabric and group with the same PIC slot and port number.
Connecting the Fiber-Optic Cables Layer Cable Connections Figure 6-2 on page 6-6 shows X-fabric cable connections for three layers in zone 1. Connections for the Y fabric and for zones 2 and 3 are the same. With three layers in each zone, these slots contain blank PICs: • • • Slot 13 in layer 1 Slot 12 in layer 2 Slot 11 in layer 3 Figure 6-2.
Figure 6-3 on page 6-7 shows X-fabric cable connections for four layers in zone 1 (switches X11 through X14, using slots 11, 12, and 13, ports 1 through 4). Connections for the Y fabric and for zones 2 and 3 are the same.
Connecting the Fiber-Optic Cables Layer Cable Connections The connections between layers in switch zone 1 are shown in Table 6-1. Connections between layers in switch zone 2 and 3 are the same. Table 6-1.
Connect the Layer Cables

1. From slots 11, 12, and 13, ports 1 through 4, of each switch on the specified fabric, remove the black plugs from the ports that you will be connecting to. Leave the plugs in the remaining ports to protect them from dust.
2. Remove the dust caps from the MMF LC-to-LC fiber-optic cables.
3. One cable at a time:
Connecting the Fiber-Optic Cables Connecting the Zone Cables Connecting the Zone Cables This subsection describes connecting the fiber-optic cables between 6780 switches in different zones. 1. Inventory the hardware. For more information, refer to Task 7: Plan for the FiberOptic Cables on page 3-15. a. Check that you have the correct number of zone cables. b. Check that you have the correct type of cables in the required lengths. 2. Label both ends of each fiber-optic cable.
Two-Zone ServerNet Cluster

Note. Connections between zones 1 and 3 or between zones 2 and 3 are not shown. Two-zone ServerNet clusters built between these zones support only two cables between the zones and are less fault tolerant than a ServerNet cluster between zones 1 and 2.

Figure 6-4 illustrates the connections between zones 1 and 2. Only the X fabric and layer 1 are shown, but the cabling for the Y fabric and layers 2 through 4 is the same.
Table 6-2, X Fabric Connections Between Two Zones, lists the X-fabric cable connections between the two zones.
Table 6-3, Y Fabric Connections Between Two Zones, lists the Y-fabric cable connections between the two zones.
Connecting the Fiber-Optic Cables Zone Cable Connections Three-Zone ServerNet Cluster Figure 6-5 illustrates the connections between three zones. Only X fabric and layer 1 are shown, but the cabling for the Y fabric and layers 2 through 4 is the same.
Table 6-4, X Fabric Connections for a Three-Zone ServerNet Cluster, lists the X-fabric cable connections among the three zones.
Table 6-5, Y Fabric Connections for a Three-Zone ServerNet Cluster, lists the Y-fabric cable connections among the three zones.
Connect the Zone Cables

To connect a zone cable:
1. From slot 2 or 3, port 1 or 2, of each switch on the specified fabric, remove the black plug from the port you are connecting to. Leave the plugs in the remaining ports to protect them from dust.
2. Remove the dust cap from the MMF or SMF fiber-optic cable connector.
3. One cable at a time, connect the cable ends. Refer to Zone Cable Connections on page 6-10.
Connecting the Cables Between a Node and a 6780 Switch

This subsection describes connecting each node to a 6780 switch.

Alerts
• Do not connect the cables between the 6780 switch and the node until instructed to do so.
• During the procedure to connect the cables between the switches, you must connect one node for each group to a switch.
Connecting the Fiber-Optic Cables Task 3: Inspect the Cables 5. For NonStop S-series servers: a. Place the first label on the LC connector side of the SMF cable. b. Place the second label on the SC connector side of the SMF cable. Task 3: Inspect the Cables 1. Remove the dust caps from each cable connector. 2. Inspect each fiber-optic LC cable connector. 3.
Connecting the Fiber-Optic Cables Task 5: Connect a Cable to the Node Task 5: Connect a Cable to the Node To connect the cable to the node: 1. Align the keys on the connector with the key slots on the receptacle. 2. Insert the connector into the receptacle, squeezing the connector gently between your thumb and forefinger as you insert it. Push the connector straight into the receptacle until the connector clicks into place. Check that the connector is fully mated to the receptacle.
Figure 6-6, Inserting a Fiber-Optic Cable Connector Into an MSEB Receptacle, shows the keys on the connector body.

Figure 6-7, Effect of Uneven Fiber Insertion on Link Alive, shows that a good connector with evenly inserted fibers produces link alive, while defective connectors with unevenly inserted fibers do not.
Task 6: Check the Link-Alive LEDs

Check the link-alive LED near each port, at both the switch port and the port where the cable connects to the NonStop server. Both LEDs should light within a few seconds after the connector is inserted. Wait for the LEDs to stop flashing. If the link-alive LEDs do not light, refer to Green Link-Alive LED Is Not Lit on page 10-2.
7 Configuring Expand-Over-ServerNet Lines This section contains a summary of configuring Expand-over-ServerNet lines using automatic line-handler generation, OSM, or SCF.
Using Automatic Line-Handler Generation

Automatic line-handler generation is enabled by default. As soon as you connect the fiber-optic cables to a node in a ServerNet cluster, Expand-over-ServerNet line-handler processes and lines are automatically configured between that node and all other nodes in the cluster. The naming convention for each line-handler process is $SCexpand-node-number, such as $SC035.
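As an illustration (the node number 035 is hypothetical), you can list the automatically generated lines from an SCF prompt; Expand-over-ServerNet lines are device type 63, subtype 4 (see Checking the Status of an Expand-Over-ServerNet Line in Section 8):

-> LISTDEV TYPE 63,4

In this naming scheme, the line to a remote node whose Expand node number is 035 appears as $SC035 in the resulting list.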
Configuring Expand-Over-ServerNet Lines Rule 1: Configure the Primary and Backup LineHandler Processes in Different Processor Rule 1: Configure the Primary and Backup Line-Handler Processes in Different Processor Enclosures Whenever possible, configure the primary and backup Expand-over-ServerNet linehandler processes in different processor enclosures. This rule applies to all systems except for two-processor systems, which have only one processor enclosure. Rule 1 takes precedence over Rules 2 and 3.
Expand-Over-ServerNet Line-Handler Process Example

This SCF example shows recommended modifier values for an Expand-over-ServerNet line-handler process:

>SCF
->ADD PROFILE $ZZWAN.#PEXPSSN, FILE $SYSTEM.SYS01.PEXPSSN, &
AFTERMAXRETRIES_DOWN
->ADD DEVICE $ZZWAN.#SC001, CPU 2, ALTCPU 5, PROFILE PEXPSSN, &
IOPOBJECT $SYSTEM.SYSTEM.
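After the profile and device are added, the line-handler process and line are typically started before use; the authoritative start procedures are in Section 11 and in the Expand Configuration and Management Manual. A hedged sketch, continuing the $SC001 example above:

->START DEVICE $ZZWAN.#SC001
->STATUS LINE $SC001

STATUS LINE lets you confirm that the new line comes up once the remote node is reachable.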
8 Checking Operations This section provides procedures to check the operations of components in the ServerNet cluster.
Checking the Operation of the ServerNet Cluster

From each node, check the operation of the ServerNet cluster after these procedures:
• Installing or migrating a ServerNet cluster
• Replacing a switch or logic board
• Performing a hard reset of the logic board

To check the operation of the ServerNet cluster:
1. Use SCF or the OSM Service Connection to check the external fabric.
Checking the External Fabric for All Nodes

Perform this procedure after installing or migrating a ServerNet cluster.

Using SCF

Note. You must configure remote passwords before you can use the STATUS SUBNET $ZZSCL, PROBLEMS command to gather information about remote nodes.

Use SCF to check that direct ServerNet communication is possible on both fabrics between all nodes in the ServerNet cluster.
Checking Operations Checking for Problems Between Nodes Checking for Problems Between Nodes Use SCF to check that direct ServerNet communication is possible on both fabrics between all nodes in the ServerNet cluster. This command queries the SNETMON processes in all nodes and displays any connectivity problems. You do not have to issue separate commands for each node. 1. Log on to a node: -> STATUS SUBNET $ZZSCL, PROBLEMS 2.
Checking Operations Checking the Power to Each Switch Checking the Power to Each Switch Perform this procedure after you install or replace a switch: 1. Power off the primary power rail on the switch: • • If the primary power rail is connected to a PDU, switch off the power breaker that connects to the primary power rail. If the primary power rail is connected directly to an external power source, disconnect the power cable from the external power source. 2. Check that the switch is still powered on.
5. Check the numeric selector setting loaded on each switch, as described in Checking the Numeric Selector Setting on page 8-6.

Checking the Numeric Selector Setting

For each switch, the numeric selector should be set as shown in Table 8-1. To check the numeric selector setting loaded on the logic board, you can use SCF, the OSM Service Connection, or the logic board liquid-crystal display (LCD).
Using SCF

1. From an SCF prompt, type (supplying the zone, layer, and fabric of the switch you are checking):
   -> INFO SWITCH $ZZSMN, ZONE zone, LAYER layer, &
   -> FABRIC fabric
2. In the display, check the Load Num Selector and Config Tag attributes. Check that the setting is correct for this switch. Refer to Task 3: Set the Numeric Selector on page 5-4.

Using OSM

1. Log on to the OSM Service Connection.
2. From the tree pane:
   a. Double-click the ServerNet cluster resource.
Checking Operations Checking for a Mixed Globally Unique ID (GUID) Using OSM 1. Log on to the OSM Service Connection. 2. From the tree pane: a. Expand the ServerNet cluster resource. b. Expand the External ServerNet Fabric for the X or Y fabric to display the Switch Group. c. Expand the Switch Group to display the switch module. d. Select the Switch Module. 3. Click the Attributes tab to display the Switch attributes. 4. Check the Logical Globally Unique ID attribute. 5.
Checking the Switch Configuration, Firmware, and FPGA Images

The versions of the logic board firmware, configuration, and FPGA images running on the 6780 switch logic board should match the VPROC file versions on the systems. To check, perform this procedure after you install or replace any of the following:
• A switch
• A logic board
• A new RVU
• SPRs for the logic board firmware, configuration, or FPGA
Using SCF to Check the Switch Images

1. Log on to any node.
2. From an SCF prompt, type (supplying the zone, layer, and fabric of the switch you are checking):
   -> INFO SWITCH $ZZSMN, ZONE zone, LAYER layer, &
   -> FABRIC fabric
3. In the SCF display, check that both the Image A VPROC and the Image B VPROC match the VPROC shown in Table 8-2.
Checking Operations Checking the Operation of Each Node Using the LCD to Check the Switch Images 1. Open the rack door. 2. Open the front door of the 6780 switch. 3. Use the up or down buttons on the logic board front panel to scroll to the Firmware, FPGA, and Configuration images running on the logic board. 4. Check the Firmware, FPGA, and Configuration images. Refer to Liquid Crystal Display (LCD) on page 2-11.
Checking Operations Checking That Automatic Line-Handler Generation Is Enabled 2. Use the OSM Service Connection to determine the version of firmware downloaded to the service processors: a. From the display menu, select Multi-resource actions. The Multi-Resource Actions dialog box is displayed. b. From the Resource Type drop-down list, select SP. c. From the Action drop-down list, select SP Firmware Update. d. From the Filter by drop-down list, check that No filter selected is selected. e.
Checking MSGMON, SANMAN, and SNETMON

Perform this procedure:
• Before adding a node to a ServerNet cluster
• If you cannot start the ServerNet cluster subsystem
• If you cannot connect to a remote node
• If you cannot display information about the ServerNet cluster in the OSM Service Connection

To avoid conflicts with OSM and the guided procedures, you must configure MSGMON, SANMAN, and SNETMON with the SCF symbolic names $ZZKRN.#MSGMON, $ZZKRN.#ZZSMN, and $ZZKRN.#ZZSCL.
5. Check that the processes are started:
   • Using SCF, either:
     - Check the status of each process using the required SCF symbolic name:
       -> STATUS PROCESS $ZZKRN.#ZZSMN
       -> STATUS PROCESS $ZZKRN.#ZZSCL
       -> STATUS PROCESS $ZZKRN.#MSGMON
     - Or check the status of all generic processes:
       -> STATUS PROCESS $ZZKRN.*
     In the display, check the status of MSGMON, SANMAN, and SNETMON.
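If one of these generic processes is stopped or not configured, you can inspect and start it through the SCF Kernel subsystem; the full configuration procedure is in Appendix E. A hedged sketch, shown here for MSGMON only:

-> INFO PROCESS $ZZKRN.#MSGMON, DETAIL
-> START PROCESS $ZZKRN.#MSGMON

INFO PROCESS displays the configured program file and startup attributes, and START PROCESS brings the process to the STARTED state reported by STATUS PROCESS.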
4. From the Attributes tab, check the ServerNet Cluster State attribute.

Using SCF:
1. At an SCF prompt, type:
   -> STATUS SUBSYS $ZZSCL
2. If the ServerNet cluster subsystem is not in a STARTED state, refer to Starting the ServerNet Cluster Subsystem on page 11-4.
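Section 11 describes the start procedure in detail. As a hedged sketch, starting the ServerNet cluster subsystem from SCF generally looks like this, after which the STATUS SUBSYS command shown above should report a STARTED state:

-> START SUBSYS $ZZSCL
-> STATUS SUBSYS $ZZSCL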
Checking Operations Checking Communications With a Remote Node d. Check for an alarm on the External_ServerNet_X_Fabric or the External_ServerNet_Y_Fabric. If no alarm is present, the action completed successfully. If an alarm is present (an alarm bell appears next to the object), click the object to select it. For more information about the alarm, see the repair actions. Table 8-4 describes the scope of the Node Connectivity ServerNet Path Test action. Table 8-4.
Checking Operations Checking the Internal ServerNet X and Y Fabrics h. If the action failed, click Show detail for more information. Checking the Internal ServerNet X and Y Fabrics Perform this procedure if a problem occurs on an internal fabric on any node in the ServerNet cluster. You can use the OSM Service Connection or SCF to check the internal ServerNet fabrics. 1. Using the OSM Service Connection: a. Check for alarms and repair actions for the Internal Fabric resource. Refer to OSM online help. b.
Checking the Operation of Expand Processes and Lines

$NCP and $ZEXP must be running before you add a node to a ServerNet cluster. The Expand-over-ServerNet line-handler processes and lines are configured and started after the node is added.

Checking $NCP and $ZEXP

Perform this procedure before connecting the fiber-optic cable from a node to a 6780 switch.

Process          TACL Name   Required SCF Symbolic Name
Expand manager   $ZEXP       $ZZKRN.
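As a quick check that both processes are running before you connect the cables, you can display them from a TACL prompt (a hedged sketch; the process names are those listed above):

> STATUS $ZEXP
> STATUS $NCP

If either process is not running, configure and start it before adding the node to the cluster.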
Checking the Status of an Expand-Over-ServerNet Line

Perform this procedure after you have added a node to a ServerNet cluster or migrated a ServerNet cluster, or if you are having problems with an Expand-over-ServerNet line.
1. To list the Expand lines:
   -> LISTDEV TYPE 63,4
2. In the display, check all lines of type 63, 4. The naming convention for Expand-over-ServerNet lines is $SCexpand-node-number.
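To look more closely at one line from the list, you can query it by name; a hedged sketch, using the illustrative line name $SC035 from Section 7:

-> STATUS LINE $SC035
-> INFO LINE $SC035

STATUS LINE reports the current state of the line, and INFO LINE reports its configured attributes. If the line does not start, see the troubleshooting procedures in Section 10.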
9 Changing a ServerNet Cluster This section provides procedures to make changes to an already-installed ServerNet Cluster or a node in a cluster.
Changing a ServerNet Cluster OSM Actions OSM Actions Use these OSM actions to make changes to the ServerNet cluster. Update Topology Action The Update Topology action disables alarms on the ServerNet cluster during the migration. This action can be performed from any node in the current ServerNet cluster. To update the topology: 1. Log on to the OSM Service Application 2. From the tree pane, right-click ServerNet Cluster, and select Actions. 3.
Changing a ServerNet Cluster Removing a Node From a ServerNet Cluster 4. Click Perform Action. The guided procedure is displayed. 5. Before you run the procedure, review the online help topic “Read Before Using.” This topic contains important information about software requirements that must be met. Removing a Node From a ServerNet Cluster To remove a node from a ServerNet cluster: 1. Record the Expand-Over-ServerNet lines to be stopped. Refer to Expand Lines Form on page B-9. 2.
Changing a ServerNet Cluster Removing Switches From a ServerNet Cluster Removing Switches From a ServerNet Cluster Removing a zone or layer of switches: • • • Can be done online Does not change the topology Does not change the ServerNet node numbers that remain in the ServerNet cluster To remove a zone or layer of switches: 1. Select the zone or layer to be removed. You can select the zone or layer with the fewest number of nodes or one that is least critical to your application. 2.
Changing a ServerNet Cluster Adding a Node to a ServerNet Cluster 12. If you plan to use the removed zone or layer as an independently functioning ServerNet cluster: a. Restart the ServerNet Cluster Subsystem on the nodes you have removed. Refer to Starting the ServerNet Cluster Subsystem on page 11-4. b. Restart the Expand-over-ServerNet lines in the new ServerNet cluster. Refer to Starting the Expand-Over-ServerNet Line-Handler Processes and Lines on page 11-5. c. Restart applications.
Task 3: Connect the Cables

1. Connect the cable from the node to an unused port on the PICs in slots 6 through 9 of the 6780 switch. Check that the fiber-optic cables:
   • Connect to the same PIC slot number and port number on the X and Y switches.
   • Connect to the same zone and layer of switches (for example, X11 and Y11, or X21 and Y21).
2. Check for link alive at both ends of each cable.
Changing a ServerNet Cluster Task 6: Check Operations Task 6: Check Operations 1. Check that the Expand-over-ServerNet lines have started. Refer to Checking the Status of Expand-Over-ServerNet Line-Handler Processes on page 8-18. 2. Using the OSM Service Connection, check that both external fabrics to the node are Up. 3. If the fabrics do not come up, refer to Starting the External ServerNet Fabric on page 11-4. 4. If necessary, restart your applications.
Changing a ServerNet Cluster Adding a Switch Layer to a ServerNet Cluster Adding a Switch Layer to a ServerNet Cluster Note. All zones must have the same number of layers. Adding a switch layer to a ServerNet cluster: • • • • Adds a switch layer Can be done online Does not change the topology Does not change the ServerNet node numbers of the current ServerNet nodes Task 1: Prepare to Add the Switches 1. Review the Planning Checklist in Planning Checklist on page 3-2. 2.
Changing a ServerNet Cluster Task 5: Resume Normal Operations 4. If a ServerNet port LED does not remain lit continuously after 60 seconds, refer to Green Link-Alive LED Is Not Lit on page 10-2. Task 5: Resume Normal Operations 1. Use the OSM Service Connection to enable the OSM alarms that were suppressed by the Update Topology action: a. Right-click ServerNet Cluster, and select Actions. b. From the Available Actions drop-down list, select Reanalyze. c. Click Perform action.
Changing a ServerNet Cluster Task 2: Connect the Cables Between Layers 4. Perform the Update Topology action. Refer to the Update Topology Action on page 9-2. This action disables OSM alarms. Task 2: Connect the Cables Between Layers If there is more than one layer in the zone: 1. Label the layer cables as described in Layer Cable Connections on page 6-5. 2. Connect the cables between the layers in the zone to be added. Task 3: Check Operations Refer to Checking the Operation of Each Switch on page 8-4.
Use Table 9-1 when disconnecting from zone 1.
Table 9-1. Disconnecting the Fiber-Optic Cables From Zone 1
Layer  Switch    PIC  Port
1      X11/Y11   2    2
                 3    2
2      X12/Y12   2    2
                 3    2
3      X13/Y13   2    2
                 3    2
4      X14/Y14   2    2
                 3    2
Use Table 9-2 when disconnecting from zone 2.
Table 9-2.
Figure 9-2. Cable Connections to Zone 3 (the figure shows the cable connections among switches X11, X21, and X31, using slot 2 and slot 3, ports 1 and 2, on each switch)
Task 6: Check Operations
1. Check for mixed GUID errors. Refer to Checking for a Mixed Globally Unique ID (GUID) on page 8-8.
2. Check the operation of each switch in the new zone that was added.
Task 10: Reenable OSM Alarms
Use the OSM Service Connection to enable the OSM alarms that were suppressed by the Update Topology action:
1. Right-click ServerNet Cluster, and select Actions.
2. From the Available Actions drop-down list, select Reanalyze.
3. Click Perform action.
Moving a Node
Use this procedure to move a node to another port on the same switch pair or to another switch pair.
Changing the Hardware in a Node Connected to a ServerNet Cluster
If you make changes to the hardware in the NonStop server that provides the connections to the ServerNet cluster, the node might lose connectivity to the cluster, or the node’s connections to the cluster might not be in a fault-tolerant state for a short time.
10 Troubleshooting This section provides procedures to troubleshoot and recover from basic problems that might occur when installing, changing, or migrating to a ServerNet cluster that uses the layered topology and 6780 switches.
Symptoms
During installation and migration, these symptoms might occur:
• Green Power-On LED on a Switch Is Not Lit on page 10-2
• Green Link-Alive LED Is Not Lit on page 10-2
• Yellow Fault LED on a Switch Is Lit on page 10-3
• Yellow Fault LED on a Switch Is Blinking on page 10-3
• An OSM Alarm on the External Fabric Is Generated on page 10-3
• ServerNet Node Numbers Are Not Consistent on page 10-4
• ServerNet Remote Node Name Does Not Display on page 10-4
Yellow Fault LED on a Switch Is Lit
A yellow fault LED that is lit but not blinking indicates an error. The CRU might be incorrectly seated in the slot, or the CRU might have been inserted too slowly.
1. Wait a few minutes to see if the fault LED clears. The fault LED is normally on while a CRU is undergoing a power-on self-test (POST).
2. If the problem continues, completely remove the CRU from its slot.
3.
ServerNet Node Numbers Are Not Consistent
1. Check that the ServerNet node numbers are consistent. Refer to Checking That the ServerNet Node Numbers Are Consistent on page 8-15.
2. If the ServerNet node numbers used by the NonStop server port and the switch port are not consistent, log on to the OSM Service Connection. For example, the ServerNet node numbers on MSEB port 6 in a NonStop S-series server and on the switch port should be the same.
3.
Recovery Operations
To recover from problems that might occur during installation or migration:
• Enabling Automatic Expand-Over-ServerNet Line-Handler Generation on page 10-5
• Reseating a Fiber-Optic Cable on page 10-5
• Correcting a Mixed Globally Unique ID (GUID) on page 10-6
• Restoring Connectivity to a Node on page 10-7
• Switching the SANMAN Primary and Backup Processes on page 10-8
• Switching the SNETMON Primary and Backup Processes on page 10-8
• Configuring the Expand-Over-ServerNet Line-Handler Processes and Lines
Correcting a Mixed Globally Unique ID (GUID)
A mixed GUID can occur when the 6780 switches have been miscabled. To correct this problem:
1. Check that the fiber-optic cables are correctly labeled.
2. When a mixed GUID occurs on these ports on a 6780 switch, disconnect the cables with a mixed GUID error from all of the ports indicated in Table 10-1. Do not reconnect any of the cables until after all of the indicated cables are disconnected.
Table 10-1.
Restoring Connectivity to a Node
A switch hard reset or another procedure, a failure, or a replacement can cause the loss of connectivity on a ServerNet cluster fabric. Direct ServerNet connectivity is automatically restored after an interval of approximately 25 seconds times the number of remote nodes in the cluster. If connectivity is not restored:
1.
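For example (an illustrative calculation, not a guaranteed figure): if the affected node has 8 remote nodes in the cluster, allow approximately 25 seconds × 8 = 200 seconds, or a little over 3 minutes, for automatic restoration before proceeding with the recovery steps.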
Switching the SANMAN Primary and Backup Processes
If connectivity to a node is not restored after Steps 1 through 4 in Restoring Connectivity to a Node on page 10-7, switch the primary and backup processes. This forces a takeover of the SANMAN primary process but is not invasive: the SANMAN process continues to run, and any ServerNet connectivity that is already up does not go down.
Using OSM
1. Log on to the OSM Service Connection.
2. From the tree pane, right-click the ServerNet Cluster resource, and select Actions to display the Actions dialog box.
3. From the Available Actions drop-down list, select Switch SNETMON process pair.
4. Click Perform action.
Fallback Procedures
Note. Fallback to previous versions of the 6780 switch firmware, configuration, and FPGA is not necessary.
Before falling back to software that does not meet the minimum requirements (Hardware Installation and Migration Planning on page 3-5) for the layered topology and the 6780 switches, remove the node from the ServerNet cluster. Refer to Removing a Node From a ServerNet Cluster on page 9-3.
11 Starting and Stopping ServerNet Cluster Processes and Subsystems This section provides procedures to stop or start ServerNet cluster processes and subsystems.
Stopping the ServerNet Cluster Subsystem
When the ServerNet cluster subsystem is stopped, the node informs other nodes that it is leaving the cluster. Stopping the ServerNet cluster subsystem destroys interprocessor connectivity between the node that receives the commands and all other nodes over both external fabrics. The ServerNet cluster monitor process $ZZKRN.
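As a hedged sketch (assuming the ServerNet cluster subsystem is managed through the SCF SUBSYS object named $ZZSCL, as used elsewhere in this guide; verify the object name against your configuration), stopping the subsystem and confirming its state from an SCF prompt might look like this:
-> STOP SUBSYS $ZZSCL
-> STATUS SUBSYS $ZZSCL
Expect the status to show the subsystem in a stopped state before you continue with the removal or fallback procedure that directed you here.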
Stopping Expand-Over-ServerNet Lines
Stopping Expand-over-ServerNet lines is required during these procedures:
• Removing a Node From a ServerNet Cluster on page 9-3
• Removing Switches From a ServerNet Cluster on page 9-4
• Moving a Node on page 9-13
Use these SCF commands to identify and stop Expand-over-ServerNet lines (a hedged example follows this procedure):
1.
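As a hedged example only: the line names $SC001 and $SC002 below are hypothetical placeholders, so substitute the Expand-over-ServerNet line names recorded on your Expand Lines Form. LISTDEV TYPE 63 lists the Expand line-handler devices so that you can identify the lines, and ABORT LINE takes each line down:
-> LISTDEV TYPE 63
-> ABORT LINE $SC001
-> ABORT LINE $SC002
Record each line you take down so that it can be restarted later with a START LINE command.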
Starting the ServerNet Cluster Subsystem
You can also configure the ServerNet cluster subsystem to start automatically. Refer to Appendix E, Configuring MSGMON, SANMAN, and SNETMON, for more information.
1.
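As a minimal sketch (again assuming the subsystem is represented by the SCF SUBSYS object $ZZSCL), starting the subsystem and checking that the node has joined the cluster might look like this:
-> START SUBSYS $ZZSCL
-> STATUS SUBSYS $ZZSCL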
Starting MSGMON
If $ZZKRN.#MSGMON is configured but not started, use SCF to start it. This command starts a copy of MSGMON on every available processor on the node. At an SCF prompt:
-> START PROCESS $ZZKRN.#MSGMON
Starting SNETMON
If $ZZKRN.#ZZSCL is configured but not started, use SCF to start it. At an SCF prompt:
-> START PROCESS $ZZKRN.#ZZSCL
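SANMAN can be started the same way. As a hedged sketch (assuming the SAN manager process is configured as the generic process $ZZKRN.#ZZSMN, as described in Appendix E), at an SCF prompt:
-> START PROCESS $ZZKRN.#ZZSMN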
A Part Numbers This appendix lists the part numbers for hardware associated with a ServerNet cluster.
Switch Rack
Part Number  Description
527136       HP 10642 Rack
Switch
Part Number  Description
524392       6780 HP ServerNet Switch, including:
             • Rack mount kit
             • Cable management assembly
             • 6780 HP ServerNet Switch bezel
524393       6780 HP ServerNet Switch
524939       6780 HP ServerNet Switch bezel
526272       Rack mount kit, including:
             • Rails: left front, left rear, right front, and right rear
             • 10 Torx screws, M6
             • 4 black nylon ties
526272       Cable management assembly, including:
Switch Components
Part Number  Description                         Slot
523978       Logic board                         14
523793       Fan                                 16 and 17
523808       Power Supply                        15 and 18
524986       Blank PIC                           1 through 13
523811       Maintenance PIC                     1
523810       Router interconnect PIC             4, 5, and 10
523809       Quad Multimode (MMF) fiber PIC      2 and 3, 11 through 13 (Zone or Layer)
524984       Dual Single-mode fiber (SMF) PIC    2 and 3, 6 through 9 (Zone or Node, ports 1 and 2 populated)
Uninterruptible Power Supply (UPS)
Part Number  Description
527133
Power Distribution Unit (PDU)
Part Number  Description                                  Voltage  Amps  Area
527135       PDU, attached input power cord NEMA L6-30P   HV       24    North America, Asia
527296       PDU, attached input power cord IEC 309       HV       32    International
Power Cords
Power Cords for Connecting a 6780 Switch to a PDU or UPS
Part Number  Length  Description  Voltage  Amps  Area
527182       4.
Power Cords for Connecting a 6780 Switch Directly to an External Power Source
Part Number  Length                Description     Voltage  Amps  Area
T24242       2.77 meters (9 feet)  C13 receptacle  125      10    North America
522835       2.
Power Cords for Connecting a 6780 Switch Directly to an External Power Source (continued)
Part Number  Length      Description     Voltage  Amps  Area
T32342       4.5 meters  C13 receptacle  250      10    Continental Europe
T24495       2.5 meters  C13 receptacle  250      10    Denmark
T65383       4.5 meters  C13 receptacle  250      10    Denmark
T24494       2.5 meters  C13 receptacle  250      10    Italy
T65382       4.5 meters  C13 receptacle  250      10    Italy
T24493       2.5 meters  C13 receptacle  250      10    Switzerland
T65381       4.
Fiber-Optic Cables
Table A-1. Duplex SMF With LC to SC Connectors for Node Connections
Part Number  Length                  Description
522748       10 meters (32.8 feet)   Riser-rated
526082       40 meters (131 feet)    Ruggedized plenum-rated
526083       80 meters (262.5 feet)  Ruggedized plenum-rated
Table A-2. Duplex MMF With LC to LC Connectors for Layer or Zone Connections
Part Number  Length                  Description
522745       2 meters (6.6 feet)     Riser-rated
522517       10 meters (32.
Modular ServerNet Expansion Board (MSEB)
B Blank Planning Forms This appendix contains blank copies of planning forms to aid in installation, migration, or changes to the ServerNet cluster.
Switch Planning Form Table B-1.
Migrating Nodes Form Table B-2.
ServerNet Node Number Forms Use the ServerNet node number form to record the name, numbers, and locations of the servers to be clustered. On the 6780 switch, PICs 6, 7, 8, and 9, ports 3 and 4 are not supported for connections to NonStop S-series servers. Table B-3.
Table B-4.
Table B-5.
Table B-6.
Firmware, Configuration, and FPGA Planning Form Table B-7. Switch Logic Board Downloadable Images File Version on $System.
Expand Lines Form To complete the form: 1. Use the OSM Service Connection to identify the system name and Expand node number of the affected node. 2. Record the system name. 3. For the affected node, list the Expand-over-ServerNet lines to all other nodes. 4. For each other node, list the Expand-over-ServerNet line to the affected node. On the affected node... On all other nodes...
C ESD Guidelines
Observe these ESD guidelines when servicing electronic components:
• Obtain an electrostatic discharge (ESD) protection kit and follow the directions that come with the kit. You can purchase an ESD kit from HP (T99247-A00) or from a local electronics store.
• Ensure that your ESD wriststrap has a built-in series resistor and that the kit includes an antistatic table mat.
Figure C-1. Using ESD Protection When Servicing CRUs (figure callouts: system enclosure, appearance side; ESD wriststrap with grounding clip, clipped to the door latch stud; ESD floor mats; ESD antistatic table mat connected to a soft ground of 1 megohm minimum to 10 megohm maximum; 15-foot straight ground cord clipped to the screw on a grounded outlet cover)
D Specifications
This appendix lists specifications for components used in a ServerNet cluster.
6780 Switch Specifications D-1
Fiber-Optic Cable Specifications D-2
Single-Mode Fiber-Optic (SMF) Cables D-2
Multimode Fiber-Optic (MMF) Cables D-3
6780 Switch Specifications
Table D-1 describes the power requirements, operating environment requirements, and weight of a 6780 switch.
Table D-1. 6780 Switch Specification
Characteristic
AC input voltage    100-240 V AC, 50 Hz/60 Hz, 3.
Specifications Fiber-Optic Cable Specifications Fiber-Optic Cable Specifications If you plan to use any fiber-optic cables not provided by HP, use this information to help you choose the correct cable. The fiber-optic cables conform to the IEEE 802.3z (Gigabit Ethernet) specification. Single-Mode Fiber-Optic (SMF) Cables HP supports customer-provided SMF cables for use with the 6780 switches if certain requirements are met as shown in Table D-2 and Table D-3.
Multimode Fiber-Optic (MMF) Cables
HP supports customer-provided MMF cables for use with the 6780 switches if certain requirements are met, as shown in Table D-4.
Table D-4. MMF Fiber-Optic Requirements
Description                      Requirements
Fiber-optic specification        Corning MMF Infinitor 600
Nominal wavelength               850 nm
Core/cladding diameter           50/125 micrometers
Cable attenuation (max.)         0.5 dB/km
Connector insertion loss         0.
E Configuring MSGMON, SANMAN, and SNETMON
This appendix provides an example of how to configure MSGMON, SANMAN, and SNETMON as generic processes in the system configuration database. You can either run the ZSCCONF macro or create your own SCF command file.
TACL Macro
Note. Before using the macro:
• You must log on using the super ID (255, 255) in order to run the macro successfully.
• Do not run the macro on a node that is currently a member of a ServerNet cluster. MSGMON, SANMAN, and SNETMON will be aborted, and the connection to the cluster will be lost temporarily.
• The macro is intended as an example and might not be appropriate for all systems.
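A hedged illustration of invoking the macro follows; the file location $SYSTEM.ZSERVICE.ZSCCONF shown here is a hypothetical placeholder, so substitute the subvolume and file name where ZSCCONF is actually installed on your system:
> LOGON SUPER.SUPER
> RUN $SYSTEM.ZSERVICE.ZSCCONF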
SCF Command File
You can add these processes by creating your own SCF command file. You must configure these processes:
• To start automatically at system load and to be persistent (that is, to restart automatically if stopped abnormally). You must set the AUTORESTART attribute to a nonzero value.
• To run under the super group user ID. (By default, the USERID attribute is set to the user ID of the current SCF session.)
To add these processes (a configuration sketch for MSGMON follows this procedure):
1.
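As a hedged sketch of Task 1 (Configure MSGMON), modeled on the SNETMON example shown later in this appendix: the attribute values below (CPU list, priority, and program file location) are illustrative assumptions rather than required settings, so adjust them to your system. An SCF command to add MSGMON as a generic process might look like this:
-> ADD PROCESS $ZZKRN.#MSGMON, &
     AUTORESTART 10, &
     CPU ALL, &
     HOMETERM $ZHOME, &
     OUTFILE $ZHOME, &
     PRIORITY 199, &
     PROGRAM $SYSTEM.SYSTEM.MSGMON, &
     SAVEABEND ON, &
     STARTMODE SYSTEM, &
     STOPMODE SYSMSG
Setting CPU ALL reflects the requirement that a copy of MSGMON run in every processor on the node; STARTMODE SYSTEM and a nonzero AUTORESTART satisfy the automatic-start and persistence requirements listed above.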
Task 2: Configure SANMAN
The ServerNet SAN manager process must be configured:
• To have the process name $ZZSMN (set the NAME attribute to $ZZSMN). For this process to work correctly with OSM and the guided procedures, the required symbolic name is $ZZKRN.#ZZSMN.
• So that the $ZPM persistence manager stops the process by sending it an internal system message. (Set the STOPMODE attribute to SYSMSG.
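As a hedged sketch of Task 2, again modeled on the SNETMON example below: only the symbolic name $ZZKRN.#ZZSMN, the NAME attribute $ZZSMN, and STOPMODE SYSMSG come from the requirements stated above; the CPU list, priority, and program file location are illustrative assumptions. The SANMAN configuration might look like this:
-> ADD PROCESS $ZZKRN.#ZZSMN, &
     AUTORESTART 10, &
     CPU FIRSTOF (00, 01, 02, 03), &
     HOMETERM $ZHOME, &
     NAME $ZZSMN, &
     OUTFILE $ZHOME, &
     PRIORITY 199, &
     PROGRAM $SYSTEM.SYSTEM.SANMAN, &
     SAVEABEND ON, &
     STARTMODE SYSTEM, &
     STOPMODE SYSMSG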
To configure SNETMON, add this SCF command with these attributes to your command file:
-> ADD PROCESS $ZZKRN.#ZZSCL, &
     AUTORESTART 10, &
     PRIORITY 199, &
     PROGRAM $SYSTEM.SYSTEM.SNETMON, &
     CPU FIRSTOF (02, 05, 06, 03, 07, 04), &
     HOMETERM $ZHOME, &
     OUTFILE $ZHOME, &
     NAME $ZZSCL, &
     SAVEABEND ON, &
     STARTMODE SYSTEM, &
     STOPMODE SYSMSG, &
     STARTUPMSG "CPU-LIST cpu-list"
Task 4: Start MSGMON, SANMAN, and SNETMON
Note. When $ZZKRN.
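As a hedged sketch of Task 4 (assuming the symbolic names $ZZKRN.#MSGMON, $ZZKRN.#ZZSMN, and $ZZKRN.#ZZSCL configured in the preceding tasks), the processes can then be started from an SCF prompt or from the same command file with START PROCESS commands; the MSGMON and SNETMON commands also appear in Section 11:
-> START PROCESS $ZZKRN.#MSGMON
-> START PROCESS $ZZKRN.#ZZSMN
-> START PROCESS $ZZKRN.#ZZSCL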
F Updating the 6780 Switch Logic Board Firmware, Configuration, and FPGA Images This appendix provides procedures to update the logic board firmware, configuration, and FPGA images on a 6780 Switch.
Alerts
• HP recommends that you have a spare switch on site or have ready access to a spare switch before starting any procedure that includes a firmware, configuration, or FPGA change.
• Ignore any alarms that occur during the firmware, configuration, or FPGA update.
• After a power-on or a hard reset of the switch, the 6780 switch starts running the configuration and FPGA images.
Using the OSM Service Connection
To prevent alarms during sensitive operations involving the switch hardware or firmware, the OSM ServerNet cluster and switch incident analysis (IA) software undergoes a 3-minute rest period after you perform these actions using OSM:
• Firmware Update action
• Configuration Update action
• FPGA Update action
• Hard Reset action
In general, alarms are not created
Updating the 6780 Switch Logic Board Firmware, Configuration, and FPGA Images Task 2: Update the 6780 Switch Logic Board Configuration Task 2: Update the 6780 Switch Logic Board Configuration If the correct version of the logic board configuration is currently running on the switch, skip this step. From the OSM Service Connection, use the Update Configuration action to download the configuration to the switch: 1.
Updating the 6780 Switch Logic Board Firmware, Configuration, and FPGA Images Task 3: Download the 6780 Switch Logic Board FPGA Task 3: Download the 6780 Switch Logic Board FPGA If the correct version of the logic board FPGA is currently running on the switch, skip this step. From the OSM Service Connection, use the Update FPGA action, which downloads the FPGA to the switch: 1. Log on to the OSM Service Connection. 2. From the tree pane: a. Expand the ServerNet Cluster. b.
Updating the 6780 Switch Logic Board Firmware, Configuration, and FPGA Images Using the Multi-Resource Action Dialog Box From the OSM Service Connection Using the Multi-Resource Action Dialog Box From the OSM Service Connection 1. Log on to the OSM Service Connection. 2. From the Display menu, select Multi-Resource Actions. The Multi-Resource Actions dialog box appears. 3. From the Resource Type drop-down list, select Switch Logic Board. 4. From the Action drop-down list, select Firmware Update. 5.
G Using the Long-Distance Option This appendix provides information on the long-distance option between zones. Starting with the G06.22 RVU, the HP NonStop ServerNet Switch (model 6780) can support significantly longer zone-to-zone connections in a ServerNet cluster. Software, firmware, and configuration files enable you to have zone-to-zone connections between 5 and 15 kilometers (km). To accommodate the longer distances, the supporting software sets the links to a capacity of 50 megabytes per second.
Hardware Requirements
To use the long-distance option, your cluster must meet these hardware requirements:
• All processors on all NonStop S-series servers in the cluster must be S76000, S86000, or later.
• The cluster must be built with 6780 switches.
• Zone-to-zone connections between 5 and 15 km can use either single-mode fiber-optic cables or Dense Wave Division Multiplexers (DWDMs).
Software Requirements
On all NonStop S-series servers in a cluster that uses the long-distance option between any 6780 switches in the cluster, you must install:
• The G06.22 or later RVU.
• The T2790AAC configuration image for the 6780 switch, or a superseding SPR. T2790AAC is available only as a restricted SPR.
Table G-1 lists the minimum software that supports the long-distance option on NonStop S-series servers.
Using the Long-Distance Option Bandwidth Considerations Bandwidth Considerations With the long-distance option, overall bandwidth is reduced due to end-to-end transmission latency and lower link transmission rates. Depending on your application profile, this limit might affect cluster performance. Although each 6780 switch can support up to eight nodes, you should consider the number of processors supported by each long-distance link between the switches.
Using the Long-Distance Option Hardware Installation Setting the Numeric Selectors Follow the guidelines in Task 3: Set the Numeric Selector on page 5-4, but do not use the numeric selector settings described there. Instead, set the numeric selectors for the switches to the values shown in Table G-2. These settings are reserved for long-distance configurations and can be used only if all the required software components in Table G-1 are installed. Note.
Making the Zone-to-Zone Connections
When you use DWDMs as part of a zone-to-zone connection, the cables between the switches and the DWDMs must be single-mode fiber-optic cables. To determine the type of connections to be used between the DWDMs, refer to the documentation for your DWDM. If DWDMs are not used, you must use single-mode fiber-optic cables for the zone-to-zone connections.
Safety and Compliance Regulatory Compliance Statements The following warning and regulatory compliance statements apply to the products documented by this manual. FCC Compliance This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment.
Taiwan (BSMI) Compliance
Japan (VCCI) Compliance
This is a Class A product based on the standard of the Voluntary Control Council for Interference by Information Technology Equipment (VCCI). If this equipment is used in a domestic environment, radio disturbance may occur, in which case the user may be required to take corrective actions.
Safety and Compliance Regulatory Compliance Statements DECLARATION OF CONFORMITY Supplier Name: HP COMPUTER CORPORATION Supplier Address: HP Computer Corporation, NonStop Enterprise Division 10300 North Tantau Ave Cupertino, CA 95014 USA Represented in the EU By: Hewlett Packard Company P.O.
Safety and Compliance Consumer Safety Statements Consumer Safety Statements Customer Installation and Servicing of Equipment The following statements pertain to safety issues regarding customer installation and servicing of equipment described in this manual. Do not remove the covers of an HP ServerNet switch. CAUTION: This unit has more than one power supply cord. Disconnect all power supply cords to completely remove power from this unit. Attention: Cet appareil comporte plus d'un cordon d'alimentation.
Glossary 6770 switch. See HP NonStop Cluster Switch (model 6770). 6780 switch. See HP NonStop ServerNet Switch (model 6780). action. An operation that can be performed on a selected resource. adapter. See ServerNet adapter. alternate path. A path not enabled as the preferred path. An alternate path can become a primary path when a primary path is disabled. Contrast with primary path. appearance side.
Glossary boot enclosure, one I/O enclosure, or one processor enclosure with one or more I/O enclosures attached. boot. A synonym for load. Load is the preferred term used in this and other NonStop S-series system manuals. See load. cable channel. A cable management conduit that protects the cables that run between two system enclosures in a double-high stack.
Glossary cluster cluster. (1) A collection of servers, or nodes, that can function either independently or collectively as a processing unit. See also ServerNet cluster. (2) A term used to describe a system in a Fiber Optic Extension (FOX) ring. More specifically, a FOX cluster is a collection of processors and I/O devices functioning as a logical group. In FOX nomenclature, the term is synonymous with system or node. cluster number. A number that uniquely identifies a node in a FOX ring.
Glossary command file command file. An EDIT file that contains a series of commands and serves as a source of command input. communications line. A two-way link consisting of processing equipment, I/O devices, protocol conventions, and cables that connect a computer to other computers. communications subsystem. The combination of data communications hardware and software processes that function together as an integrated unit to provide services and access to wide and local area networks. configuration.
Glossary cyclic redundancy check (CRC) Class 2, and Class 3 according to the risk of causing a system outage if the documented replacement procedure is not followed correctly and how much CRU-replacement training or experience is advisable. See also field-replaceable unit (FRU). cyclic redundancy check (CRC). The most widely used error detection code for ensuring the integrity of transmitted data.
Glossary domain domain. A set of objects over which control or ownership is maintained. Types of domains include power domains and service processor (SP) domains. Domain Name System (DNS). A system that defines a hierarchical, yet distributed, database of information about hosts on a network.
Glossary EMI incurred during the procedure is discharged safely to the enclosure instead of to electrical components within the enclosure. EMI. See electromagnetic interference (EMI). emitter-coupled logic (ECL). A logic that expresses digital signals in differential negative voltage levels, from -8 volts to -1.8 volts. NonStop S-series servers that contain ServerNet expansion boards (SEBs) use ECL cables. An ECL plug-in card (PIC) allows the modular SEB (MSEB) to use ECL ServerNet cables. EMS.
Glossary Expand line-handler process Expand line-handler process. A process pair that handles incoming and outgoing Expand messages and packets. An Expand line-handler process handles direct links and also binds to other processes using the Network Access Method (NAM) interface to support Expand-over-X.25, Expand-over-FOX, Expand-over-ServerNet, Expand-over-TCP/IP, and Expand-over-SNA links. See also Expand-over-ServerNet line-handler process (Expand/SvNet). Expand network.
Glossary external system area network manager process (SANMAN) external system area network manager process (SANMAN). (1) A Guardian process with the name $ZZSMN that provides management access to the external ServerNet X and Y fabrics. (2) A Windows NT process that configures and maintains ServerNet switches within a Windows NT cluster. fabric. A complex set of interconnections through which there can be multiple and (to the user) unknown paths from point to point.
Glossary file transfer protocol (FTP) name is $DISK.SUBVOL.MYFILE. For files that are network accessible, the node name precedes the volume name: \NODE.$DISK.SUBVOL.MYFILE. file transfer protocol (FTP). A data communications protocol that is used for transferring files between systems. filler panel. A blank faceplate that is installed in place of a ServerNet adapter to ensure proper ventilation within a system enclosure. FIR. See FRU information record (FIR). FIRINIT.
Glossary GCSC GCSC. See Global Customer Support Center (GCSC). generic process. A process created and managed by the Kernel subsystem. Also known as a system-managed process. A common characteristic of a generic process is persistence. See also persistence. gigabyte (GB). A unit of measurement equal to 1,073,741,824 bytes (1024 megabytes). See also kilobyte (KB), megabyte (MB), and terabyte (TB). Global Customer Support Center (GCSC).
Glossary HP NonStop ServerNet Switch (model 6780) uninterruptible power supply (UPS), and AC transfer switch, and it can be packaged in a switch enclosure or in a 19-inch rack. See also HP NonStop ServerNet Switch (model 6780). HP NonStop ServerNet Switch (model 6780). The cluster switch used in the layered topology. The 6780 switch consists of a switch logic board, a midplane, plug-in cards, power supplies, and fans. See also HP NonStop Cluster Switch (model 6770). IBC. See in-band control (IBC).
Glossary IOMF CRU enclosure is identical to a processor enclosure except that it contains I/O multifunction (IOMF) CRUs instead of processor multifunction (PMF) CRUs. IOMF CRU. See I/O multifunction (IOMF) CRU. I/O multifunction (IOMF) CRU.
Glossary line line. The specific hardware path over which data is transmitted or received. A line can also have a process name associated with it that identifies an input/output process (IOP) or logical device associated with that specific hardware path. line-handler process. See Expand line-handler process or Expand-over-ServerNet line-handler process (Expand/SvNet). linker. The process or server that invokes the message system to deliver a message to some other process or server. listener.
Glossary Measure Measure. A tool used for monitoring the performance of a system. Measure can be used to check the performance of a ServerNet cluster. megabyte (MB). A unit of measurement equal to 1,048,576 bytes (1024 kilobytes). See also gigabyte (GB), kilobyte (KB), and terabyte (TB). message monitor process (MSGMON). A helper process for SNETMON that runs in each processor on every node of a ServerNet cluster. MSGMON is started by $ZPM (the persistence manager process).
Glossary MSEB port MSEB port. A connector on modular ServerNet expansion boards (MSEBs) used for ServerNet links. An MSEB has four fixed serial-copper-based ports and six plug-in card (PIC) slots that accept a variety of connection media. See also SEB port. MSGMON. See message monitor process (MSGMON). MSP. See master service processor (MSP). multifunction I/O board (MFIOB).
Glossary node node. (1) A uniquely identified computer system connected to one or more other computer systems in a network. See also Expand node, ServerNet node, and system. (2) An endpoint in a ServerNet fabric, such as a processor or ServerNet addressable controller (SAC). node number. A number used to identify a member system in a network. The node number is usually unique for each system in the network. See also node, ServerNet node number, and Expand node number. node-numbering agent (NNA).
Glossary online online. Used to describe tasks that can be performed while the HP NonStop™ operating system and system utilities are operational. Contrast with offline. operator message. A message, intended for an operator, that describes a significant event on an HP NonStop™ server. An operator message is the displayed-text form of an Event Management Service (EMS) event message. OSM Event Viewer. A component of the OSM software.
Glossary persistent process persistent process. A process that must always be either waiting, ready, or executing. These processes are usually controlled by a monitor process that checks on the status of persistent processes and restarts them if necessary. physical interface (PIF). The hardware components that connect a system node to a network. PIC. See plug-in card (PIC). PIF. See physical interface (PIF). ping. A utility used to verify connections to one or more remote hosts.
Glossary process process. A program that has been submitted to the operating system for execution, or a program that is currently running in the computer. process ID. A number that uniquely identifies a process. It consists of the processor (CPU) number and the process identification number (PIN). process identification number (PIN). A number that uniquely identifies a process running in a processor. The same number can exist in other processors in the same system. See also process ID. processor.
Glossary remote notification remote notification. A form of remote support. Remote notification, or dial-out, allows the OSM package to notify a service provider, such as the Global Customer Support Center (GCSC), of pending hardware and software problems. See also remote interprocessor communication (RIPC). remote processor. A processor in a node other than the node running the ServerNet cluster monitor (SNETMON) process reporting status about the processor. remote node.
Glossary ServerNet messages to the clients or requesters. A server process is a running instance of a server program. ServerNet. A communications protocol developed by HP. See also ServerNet I and ServerNet II. ServerNet I. A communications protocol that features 50 megabytes/second speed, 6-port ServerNet routers, 8b/9b encoding, and a 64-byte maximum packet size. See also ServerNet II. ServerNet II.
Glossary ServerNet cluster services ServerNet cluster services. The functions necessary to allow a node to join, participate in, or leave a ServerNet Cluster.
Glossary ServerNet node ServerNet node. A system (node) in a ServerNet cluster. See also node and ServerNet cluster. ServerNet node number. A number that identifies a member system in a ServerNet cluster. Each node in a ServerNet cluster has a unique ServerNet node number. The ServerNet node number is a simplified expression of the ServerNet node routing ID that determines the node to which a ServerNet packet is routed.
Glossary ServerNet switch ServerNet switch. A point-to-point networking device that connects ServerNet nodes to a single fabric (X or Y) of the ServerNet communications network. The ServerNet switch routes ServerNet packets between these nodes. service processor (SP). A physical component of the processor multifunction (PMF) CRU or I/O multifunction (IOMF) CRU that controls environmental and maintenance functions (including system load functions) in the enclosure.
Glossary SMN SMN. The mnemonic name for the external system area network manager process (SANMAN). See system area network manager process (SANMAN). SNETMON. See ServerNet cluster monitor process (SNETMON). soft reset. An action performed on a cluster switch that restarts the firmware on the switch but does not interfere with ServerNet passthrough data traffic. software product revision (SPR). A method of releasing incremental software updates on NonStop systems.
Glossary store and forward routing store and forward routing. A form of message routing whereby a router must receive an entire packet or message before it can start to forward the packet or message to the next router. Contrast with wormhole routing. switch group. See cluster switch group. switch layer. See cluster switch layer. switch layer number. See cluster switch layer number. switch logic board. See cluster switch logic board. switch rack. See cluster switch rack. switch zone.
Glossary summary report summary report. A brief informational listing of status or configuration information provided by the Subsystem Control Facility (SCF) STATUS or INFO command. Contrast with detailed report. super group. The group of user IDs that have 255 as the group number. This group has special privileges. Many utilities have commands or functions that can be executed only by a member of the super group. super-group user. A user who can read, write, execute, and purge most files on the system.
Glossary system expansion system expansion. The process of making a target system larger by adding enclosures to it. The enclosures being added can be either new enclosures or enclosures from a donor system. Contrast with system reduction. system image tape (SIT). A tape that can be used to perform a system load on a system if the system subvolume has become corrupted on both $SYSTEM disks. The tape contains a minimum set of software necessary to bring up and run the system.
Glossary tetrahedral topology tetrahedral topology. A topology of NonStop S-series servers in which the ServerNet connections between the processor enclosures form a tetrahedron. See also topology. TF. See time factor (TF). time factor (TF). A number assigned to a line, path, or route to indicate efficiency in transporting data. The lower the time factor, the more efficient the line, path, or route. See also super time factors (STFs). topology.
Glossary volume (OSS) environment normally uses the scalar view of this user ID, also known as the UID, which is the value (group-number * 256) + user-number. For example, the scalar view of the super ID is (255 * 256) + 255 = 65535. volume. A logical disk drive, which can be one or two physical disk drives. In a NonStop system, volumes have names that begin with a dollar sign ($), such as $DATA. See also mirrored disk or volume. WAN. See wide area network (WAN). wide area network (WAN).
Glossary $ZZSMN $ZZSMN. The process name for the external system area network manager process (SANMAN). See also external system area network manager process (SANMAN).
Index A D Add Node to ServerNet Cluster action 6-2 Adding a node to a cluster 9-5/9-7 ALGORITHM modifier 8-18 Automatic fail-over of ServerNet links 1-6 Dial-out 9-3 B Bend radius 6-3, 6-19, D-2, D-3 C Cables disconnecting 9-3 fiber-optic, requirements D-2, D-3 lengths 3-17 media 2-20 moving fiber-optic 9-13 Cabling considerations 3-15 Checklist for installation planning 3-2 Cluster defined 1-2 removing a zone or layer 9-4, 9-8, 9-9 switch 1-4 topologies 1-4 Configuration, switch checking 8-9 downloadi
Index G Fiber-optic cables (continued) moving 9-13 part numbers A-3, A-4, A-7 requirements D-2, D-3 supported lengths D-2, D-3 Firmware, switch checking 8-9 compatibility with software F-2 downloading F-3, F-5 file name 2-16 reference information F-2 running 8-9 version procedure information 1-9, 4-3 Floor space requirements 3-14 FPGA 2-21 G Group Connectivity ServerNet Path Test 8-17 Guided procedure adding a node 9-5, 9-8, 9-9 configuring ServerNet node 9-2 installing PICs 4-11 replacing a SEB with an
Index O O Online expansion 9-14 OSM, alarms when disconnecting cables 9-3 P Part numbers A-1/A-7 PIC installation 4-11 installed in port 6 2-21 NNA FPGA 2-21 Planning checklist 3-2/3-3 hardware 3-5 power 3-13 Plug-in cards (PICs) double-wide 2-6 installing a NNA FPGA 4-8 SEB and MSEB connectors 4-9 switch 2-6 Power requirements 3-13 Public LAN 3-6 R Rack dimensions 3-12 Rack mount kit part number A-2 Remote node, checking communications with 8-16 Remote passwords 7-2 Removing a node 6-2, 9-3 Reset, soft
Index T Super time factors 3-21 Switch component part numbers A-3 components 2-2 connections between layers 3-17 defined 2-2, 3-12 enclosure 3-12 enclosure dimensions 3-12 floor space for servicing 3-12 installing 5-1 LEDs 2-11, 2-23 location 3-14, 3-15 number required for clustering 3-9 packaging 3-8 power requirements 3-13, D-1 Symbolic names 8-13 System console 3-6 name 4-6 number 4-6 Z Zone cables, connecting 6-17 ZPMCONF macro E-2 ZSCCONF macro E-2 Special Characters $NCP 4-5, 8-18 $NCP ALGORITHM m