ServerNet Cluster 6780 Planning and Installation Guide (G06.28+, H06.05+)
Table Of Contents
- What’s New in This Guide
- About This Guide
- 1 ServerNet Cluster Overview
- 2 ServerNet Cluster Hardware Description
- 3 Planning for Installation and Migration
- Planning Checklist
- Software Installation Planning
- Hardware Installation and Migration Planning
- Task 1: Plan for the ServerNet Nodes
- Task 2: Plan for the System Consoles
- Task 3: Plan for the 6780 Switches
- Task 4: Plan for the Racks
- Task 5: Plan for the Power Requirements
- Task 6: Plan the Location of the Hardware
- Task 7: Plan for the Fiber-Optic Cables
- Task 8: Plan to Migrate the ServerNet Nodes From 6770 Switches
- Task 9: Plan the ServerNet Node Numbers
- Task 10: Plan the Expand-Over-ServerNet Lines
- Migration Examples
- 4 Preparing a System for Installation or Migration
- 5 Installing 6780 Switches
- 6 Connecting the Fiber-Optic Cables
- Summary of Tasks
- Handling the Fiber-Optic Cables
- Connecting the Layer Cables
- Connecting the Zone Cables
- Connecting the Cables Between a Node and a 6780 Switch
- Alerts
- Task 1: Double-Check the Required Software and Hardware
- Task 2: Label the Cables That Connect to the Node
- Task 3: Inspect the Cables
- Task 4: Connect a Cable to the Switch
- Task 5: Connect a Cable to the Node
- Task 6: Check the Link-Alive LEDs
- Task 7: Check Operations
- Task 8: Finish Connecting the Fiber-Optic Cables
- Routing the Fiber-Optic Cables
- 7 Configuring Expand-Over-ServerNet Lines
- Using Automatic Line-Handler Generation
- Using the OSM Service Connection
- Using SCF
- Rule 1: Configure the Primary and Backup Line-Handler Processes in Different Processor Enclosures
- Rule 2: For Nodes With 6 or More Processors, Avoid Configuring the Line-Handler Processes in Proc...
- Rule 3: For Nodes With More Than 10 Processors, Avoid Configuring the Line-Handler Processes in P...
- Expand-Over-ServerNet Line-Handler Process Example
- 8 Checking Operations
- Checking the Operation of the ServerNet Cluster
- Checking the Operation of Each Switch
- Checking the Power to Each Switch
- Checking the Switch Components
- Checking the Numeric Selector Setting
- Checking the Globally Unique ID (GUID)
- Checking for a Mixed Globally Unique ID (GUID)
- Checking the Fiber-Optic Cable Connections to the Switch Port
- Checking the Switch Configuration, Firmware, and FPGA Images
- Checking the Operation of Each Node
- Checking the Service Processor (SP) Firmware
- Checking That Automatic Line-Handler Generation Is Enabled
- Checking the ServerNet Node Numbers
- Checking MSGMON, SANMAN, and SNETMON
- Checking for Alarms on Each Node
- Checking the ServerNet Cluster Subsystem
- Checking That the ServerNet Node Numbers Are Consistent
- Checking Communications Between a Local Node and a Switch
- Checking Communications With a Remote Node
- Checking the Internal ServerNet X and Y Fabrics
- Checking the Operation of Expand Processes and Lines
- 9 Changing a ServerNet Cluster
- OSM Actions
- Removing a Node From a ServerNet Cluster
- Removing Switches From a ServerNet Cluster
- Adding a Node to a ServerNet Cluster
- Adding a Switch Layer to a ServerNet Cluster
- Adding a Switch Zone to a ServerNet Cluster
- Task 1: Prepare to Add the Switches
- Task 2: Connect the Cables Between Layers
- Task 3: Check Operations
- Task 4: Disconnect the Cables Between Zones
- Task 5: Connect the Cables Between Zones
- Task 6: Check Operations
- Task 7: Connect the Additional Nodes
- Task 8: Check Operations
- Task 9: Repeat Tasks 2 Through 8 for the Other Fabric
- Task 10: Reenable OSM Alarms
- Moving a Node
- Changing the Hardware in a Node Connected to a ServerNet Cluster
- 10 Troubleshooting
- Symptoms
- Recovery Operations
- Enabling Automatic Expand-Over-ServerNet Line-Handler Generation
- Reseating a Fiber-Optic Cable
- Correcting a Mixed Globally Unique ID (GUID)
- Restoring Connectivity to a Node
- Switching the SANMAN Primary and Backup Processes
- Switching the SNETMON Primary and Backup Processes
- Configuring the Expand-Over-ServerNet Line-Handler Processes and Lines
- Starting Required Processes and Subsystems
- Fallback Procedures
- 11 Starting and Stopping ServerNet Cluster Processes and Subsystems
- A Part Numbers
- B Blank Planning Forms
- C ESD Guidelines
- D Specifications
- E Configuring MSGMON, SANMAN, and SNETMON
- F Updating the 6780 Switch Logic Board Firmware, Configuration, and FPGA Images
- G Using the Long-Distance Option
- Safety and Compliance
- Glossary
- Index

Glossary
cluster. (1) A collection of servers, or nodes, that can function either independently or
collectively as a processing unit. See also ServerNet cluster. (2) A term used to
describe a system in a Fiber Optic Extension (FOX) ring. More specifically, a FOX
cluster is a collection of processors and I/O devices functioning as a logical group. In
FOX nomenclature, the term is synonymous with system or node.
cluster number. A number that uniquely identifies a node in a FOX ring. This number can
range from 1 through 14. See also node number.
cluster switch. An assembly that routes ServerNet messages across an external fabric of a
ServerNet cluster. See HP NonStop Cluster Switch (model 6770) and HP NonStop
ServerNet Switch (model 6780).
cluster switch group. Within an external ServerNet fabric, all of the 6780 switches that
belong to the same cluster switch zone. The cluster switches within a group are
connected via four vertical tetrahedrons. A cluster switch group can consist of up to
four 6780 switches. All of the cluster switches that form a cluster switch group typically
are installed in the same cluster switch rack. See also cluster switch layer and cluster
switch zone.
cluster switch layer. The topological position of a cluster switch within a cluster switch
group. Each cluster switch group consists of four layers, numbered 1 through 4 from
bottom to top. A cluster switch layer is equivalent to a cluster switch and provides
connections for up to eight ServerNet nodes. See also cluster switch layer number.
cluster switch layer number. A number in the range 1 through 4 that identifies the position
of a cluster switch within a cluster switch group. See also cluster switch group.
cluster switch logic board. A circuit board that provides switching logic for the HP
NonStop ServerNet Switch (model 6780). The logic board (LB) has a front panel for
operator and maintenance functions.
cluster switch rack. A mechanical frame consisting of or based on a 19-inch rack that
supports the hardware necessary for a cluster switch group.
cluster switch zone. A pair of X-fabric and Y-fabric cluster switch groups and the ServerNet
nodes connected to them. The zone always contains two cluster switch groups with
the cluster switches on each fabric connected by intrazone cables. A cluster switch
zone can support up to 32 nodes.
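The capacity figures in the definitions above fit together arithmetically: each cluster switch layer connects up to eight ServerNet nodes, a cluster switch group holds four layers, and a zone pairs an X-fabric group with a Y-fabric group serving the same nodes. A minimal sketch of that arithmetic (the function name and structure are illustrative, not part of the product):

```python
# Capacity relations stated in the glossary:
#   - a cluster switch layer connects up to 8 ServerNet nodes
#   - a cluster switch group consists of 4 layers
#   - a zone pairs an X-fabric and a Y-fabric group, both serving
#     the SAME nodes, so the pair does not double the node count
NODES_PER_LAYER = 8
LAYERS_PER_GROUP = 4

def zone_capacity(populated_layers: int = LAYERS_PER_GROUP) -> int:
    """Maximum nodes a cluster switch zone supports with the given
    number of populated layers."""
    if not 1 <= populated_layers <= LAYERS_PER_GROUP:
        raise ValueError("a cluster switch group has 1 to 4 layers")
    return populated_layers * NODES_PER_LAYER

print(zone_capacity())   # 32, matching the stated zone maximum
print(zone_capacity(1))  # 8, a zone with a single layer per fabric
```

Because both fabrics of a zone connect to the same set of nodes, only one group's layer count enters the calculation; the second group provides the redundant fabric, not additional capacity.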
CME. See correctable memory error (CME).
cold load. A synonym for system load (or, for a single processor, load). System load
and load are the preferred terms in HP NonStop™ system publications.
command. A demand for action by, or information from, a subsystem, or the operation
demanded by an operator or application. A command is typically conveyed as an
interprocess message from an application to a subsystem.