ServerNet Cluster 6780 Planning and Installation Guide (G06.28+, H06.05+)
Table Of Contents
- What’s New in This Guide
- About This Guide
- 1 ServerNet Cluster Overview
- 2 ServerNet Cluster Hardware Description
- 3 Planning for Installation and Migration
- Planning Checklist
- Software Installation Planning
- Hardware Installation and Migration Planning
- Task 1: Plan for the ServerNet Nodes
- Task 2: Plan for the System Consoles
- Task 3: Plan for the 6780 Switches
- Task 4: Plan for the Racks
- Task 5: Plan for the Power Requirements
- Task 6: Plan the Location of the Hardware
- Task 7: Plan for the Fiber-Optic Cables
- Task 8: Plan to Migrate the ServerNet Nodes From 6770 Switches
- Task 9: Plan the ServerNet Node Numbers
- Task 10: Plan the Expand-Over-ServerNet Lines
- Migration Examples
- 4 Preparing a System for Installation or Migration
- 5 Installing 6780 Switches
- 6 Connecting the Fiber-Optic Cables
- Summary of Tasks
- Handling the Fiber-Optic Cables
- Connecting the Layer Cables
- Connecting the Zone Cables
- Connecting the Cables Between a Node and a 6780 Switch
- Alerts
- Task 1: Double-Check the Required Software and Hardware
- Task 2: Label the Cables That Connect to the Node
- Task 3: Inspect the Cables
- Task 4: Connect a Cable to the Switch
- Task 5: Connect a Cable to the Node
- Task 6: Check the Link-Alive LEDs
- Task 7: Check Operations
- Task 8: Finish Connecting the Fiber-Optic Cables
- Routing the Fiber-Optic Cables
- 7 Configuring Expand-Over-ServerNet Lines
- Using Automatic Line-Handler Generation
- Using the OSM Service Connection
- Using SCF
- Rule 1: Configure the Primary and Backup Line-Handler Processes in Different Processor Enclosures
- Rule 2: For Nodes With 6 or More Processors, Avoid Configuring the Line-Handler Processes in Proc...
- Rule 3: For Nodes With More Than 10 Processors, Avoid Configuring the Line-Handler Processes in P...
- Expand-Over-ServerNet Line-Handler Process Example
- 8 Checking Operations
- Checking the Operation of the ServerNet Cluster
- Checking the Operation of Each Switch
- Checking the Power to Each Switch
- Checking the Switch Components
- Checking the Numeric Selector Setting
- Checking the Globally Unique ID (GUID)
- Checking for a Mixed Globally Unique ID (GUID)
- Checking the Fiber-Optic Cable Connections to the Switch Port
- Checking the Switch Configuration, Firmware, and FPGA Images
- Checking the Operation of Each Node
- Checking the Service Processor (SP) Firmware
- Checking That Automatic Line-Handler Generation Is Enabled
- Checking the ServerNet Node Numbers
- Checking MSGMON, SANMAN, and SNETMON
- Checking for Alarms on Each Node
- Checking the ServerNet Cluster Subsystem
- Checking That the ServerNet Node Numbers Are Consistent
- Checking Communications Between a Local Node and a Switch
- Checking Communications With a Remote Node
- Checking the Internal ServerNet X and Y Fabrics
- Checking the Operation of Expand Processes and Lines
- 9 Changing a ServerNet Cluster
- OSM Actions
- Removing a Node From a ServerNet Cluster
- Removing Switches From a ServerNet Cluster
- Adding a Node to a ServerNet Cluster
- Adding a Switch Layer to a ServerNet Cluster
- Adding a Switch Zone to a ServerNet Cluster
- Task 1: Prepare to Add the Switches
- Task 2: Connect the Cables Between Layers
- Task 3: Check Operations
- Task 4: Disconnect the Cables Between Zones
- Task 5: Connect the Cables Between Zones
- Task 6: Check Operations
- Task 7: Connect the Additional Nodes
- Task 8: Check Operations
- Task 9: Repeat Tasks 2 Through 8 for the Other Fabric
- Task 10: Reenable OSM Alarms
- Moving a Node
- Changing the Hardware in a Node Connected to a ServerNet Cluster
- 10 Troubleshooting
- Symptoms
- Recovery Operations
- Enabling Automatic Expand-Over-ServerNet Line-Handler Generation
- Reseating a Fiber-Optic Cable
- Correcting a Mixed Globally Unique ID (GUID)
- Restoring Connectivity to a Node
- Switching the SANMAN Primary and Backup Processes
- Switching the SNETMON Primary and Backup Processes
- Configuring the Expand-Over-ServerNet Line-Handler Processes and Lines
- Starting Required Processes and Subsystems
- Fallback Procedures
- 11 Starting and Stopping ServerNet Cluster Processes and Subsystems
- A Part Numbers
- B Blank Planning Forms
- C ESD Guidelines
- D Specifications
- E Configuring MSGMON, SANMAN, and SNETMON
- F Updating the 6780 Switch Logic Board Firmware, Configuration, and FPGA Images
- G Using the Long-Distance Option
- Safety and Compliance
- Glossary
- Index

Glossary
ServerNet Cluster 6780 Planning and Installation Guide—527301-005
tetrahedral topology. A topology of NonStop S-series servers in which the ServerNet
connections between the processor enclosures form a tetrahedron. See also topology.
TF. See time factor (TF).
time factor (TF). A number assigned to a line, path, or route to indicate efficiency in
transporting data. The lower the time factor, the more efficient the line, path, or route.
See also super time factors (STFs).
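As an illustration of how a time factor guides route selection (lower TF wins), here is a minimal sketch; the route names and TF values are hypothetical examples, not taken from this guide:

```python
# Illustrative only: choose the most efficient route by lowest time factor (TF).
# Route names and TF values below are hypothetical, not from this guide.
routes = {
    "line-A": 8,  # higher TF: less efficient
    "line-B": 3,  # lowest TF: most efficient
    "line-C": 5,
}

def best_route(tf_by_route):
    """Return the route with the lowest time factor."""
    return min(tf_by_route, key=tf_by_route.get)

print(best_route(routes))  # prints "line-B" (lowest TF of 3)
```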
topology. The physical layout of components that define a local area network (LAN), wide
area network (WAN), or ServerNet communications network. See also layered
topology, star topology, split-star topology, and tri-star topology.
topology branch. A processor enclosure and the I/O enclosures attached to it.
Transmission Control Protocol/Internet Protocol (TCP/IP). A set of layered
communications protocols for connecting workstations and larger systems.
tri-star topology. A network topology that uses up to three cluster switches for each
external fabric. External routing is implemented between the three star groups of a
ServerNet cluster. (A star group consists of the eight nodes attached to one set of
cluster switches.) The star groups are joined by two-lane links. Introduced with the
G06.14 RVU, the tri-star topology supports up to 24 nodes. A tri-star topology requires
HP NonStop Cluster Switches (model 6770). See also star topology and split-star
topology.
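The 24-node maximum follows directly from the figures in the definition; a trivial arithmetic sketch:

```python
# Per the tri-star definition above: up to three star groups per external
# fabric, each star group consisting of eight nodes.
star_groups = 3
nodes_per_star_group = 8

max_nodes = star_groups * nodes_per_star_group
print(max_nodes)  # prints 24, the stated maximum for a tri-star topology
```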
UCME. See uncorrectable memory error (UCME).
unattended site. A computer environment where no operator resides on site and the only
access is from a central monitoring station.
uncorrectable memory error (UCME). An error caused by incorrect data at a particular
memory location. The cause of the error is such that the error is not automatically
corrected by the system, and memory replacement is required. Contrast with
correctable memory error (CME).
uninterruptible power supply (UPS). A source of power, external to a device, capable of
supplying continuous power to the device in the event of a power failure.
unplanned outage. Time during which a system is not capable of doing useful work
because of an unplanned interruption. Unplanned interruptions can include failures
caused by faulty hardware, operator error, or disaster.
UPS. See uninterruptible power supply (UPS).
user ID. A user ID within a NonStop system. The Guardian environment normally uses the
structured view of this user ID, which consists of either the group-number,
user-number pair of values or the group-name.user-name pair of values. For
example, the structured view of the super ID is (255, 255). The Open System Services