NonStop S-Series System Expansion and Reduction Guide
Table Of Contents
- NonStop S-Series System Expansion and Reduction Guide
- What’s New in This Guide
- About This Guide
- 1 The Resizing Process
- 2 Planning System Expansion
- 3 Planning System Reduction
- 4 Reducing a System Online
- 1. Prepare the Donor System for Reduction
- 2. Record Information About the Donor System
- 3. Prepare Both ServerNet Fabrics
- 4. Inventory Enclosures to Be Removed
- 5. Prepare and Stop Devices and Processes
- 6. Ensure Devices and Processes Are Stopped
- 7. Delete Devices and Processes If Necessary
- 8. Prepare Enclosures for Removal
- 9. Finish the Reduction
- 10. Remove Other Cables From Powered-Off Enclosures
- 11. Physically Remove Enclosures From the System
- Adding Enclosures to Another System
- 5 Reducing a System Offline
- 6 Expanding a System Online
- Preparation for Online Expansion
- 1. Prepare Target System for Expansion
- 2. Record Information About Target System
- 3. Prepare Target System for Addition of Block
- 4. Save Current Target System Configuration
- 5. Copy SP Firmware File From the Target System to the System Console
- 6. Finish Gathering Information
- 7. Connect a System Console to the Enclosure
- 8. Change Group Number of Enclosure to 01
- 9. Power On Enclosure
- 10. Verify Connection Between System Console and Enclosure
- 11. Configure System Console and Enclosure
- 12. Verify SP Firmware Is Compatible
- 13. Update SP Firmware in Enclosure If Necessary
- 14. Configure Topology of Enclosure If Necessary
- 15. Power Off Enclosure
- 16. Repeat Steps 6 Through 15 If Necessary
- 17. Assemble Enclosures Into a Block
- 18. Change Group Numbers of Block to Fit Target System
- 19. Disconnect System Console From Block
- 20. Power On Added Block
- 21. Cable Block to Target System
- 22. Verify Resized Target System
- 23a. Update Firmware and Code in Block (Using TSM)
- 23b. Update Firmware and Code in Block (Using OSM)
- 24. Reload Processors in Block If Necessary
- 25. Verify Operations in Added Block
- 26. Configure CRUs in Added Block
- 7 Troubleshooting
- A Common System Operations
- Determine the Processor Type
- Determine the ServerNet Fabric Status
- Determine the Product Versions of the OSM Client Software
- Determine the Product Version of the TSM Client Software
- Move the System Console
- Stop the OSM or TSM Low-Level Link
- Start a Startup TACL Session
- Start the OSM or TSM Low-Level Link
- Start the OSM Service Connection or TSM Service Application
- B ServerNet Cabling
- C Checklists and Worksheets
- D Stopping Devices and Processes
- Safety and Compliance
- Glossary
- Index

Glossary
G-Series Common Glossary
CLIP. See communications line interface processor (CLIP).
cluster. (1) A collection of servers, or nodes, that can function either independently or collectively as a processing unit. See also ServerNet cluster. (2) A term used to describe a system in a Fiber Optic Extension (FOX) ring. More specifically, a FOX cluster is a collection of processors and I/O devices functioning as a logical group. In FOX nomenclature, the term is synonymous with system or node.
cluster number. A number that uniquely identifies a node in a Fiber Optic Extension (FOX) ring. This number is in the range 1 through 14. See also node number.
cluster switch. See HP NonStop™ Cluster Switch (model 6770) and HP NonStop™
ServerNet Switch (model 6780).
cluster switch enclosure. An enclosure provided by HP for housing the subcomponents of
an HP NonStop™ Cluster Switch. The subcomponents include the ServerNet II Switch,
the AC transfer switch, and the uninterruptible power supply (UPS). A cluster switch
enclosure resembles, but is half the height of, a standard HP NonStop S-series system
enclosure.
cluster switch group. Within an external ServerNet fabric, all the cluster switches that belong to the same cluster switch zone. A cluster switch group can consist of up to four 6780 switches, each representing one cluster switch layer. All of the cluster switches that form a cluster switch group typically are installed in the same cluster switch rack.
cluster switch layer. The topological cluster switch position within a cluster switch group. Each cluster switch group can contain up to four layers, numbered 1 to 4 from bottom to top. A cluster switch layer consists of a pair of cluster switches (X and Y) and provides connections for up to eight ServerNet nodes. Layers within a group are interconnected by intragroup cables. When all four layers are present, the intragroup cables are configured as a vertical tetrahedron. See also cluster switch layer number.
cluster switch layer number. A number in the range 1 through 4 that identifies the position of a cluster switch within a cluster switch group. See also cluster switch group.
cluster switch logic board. A circuit board that provides switching logic for the HP NonStop™ ServerNet Switch (model 6780). The logic board (LB) has a front panel for operator and maintenance functions and is a Class-3 CRU.
cluster switch rack. A mechanical frame consisting of or based on a 19-inch rack that supports the hardware necessary for a cluster switch group.
cluster switch zone. A pair of X-fabric and Y-fabric cluster switch groups and the ServerNet nodes connected to them. Up to three zones are possible. The zones, if more than one, are interconnected by interzone cables, with each cluster switch layer cabled separately from the other layers.
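The zone, group, and layer entries above describe a strict containment hierarchy: a zone pairs an X-fabric group with a Y-fabric group, a group holds up to four layers, and each layer (a pair of switches) connects up to eight ServerNet nodes. As an illustration only, this hierarchy and its stated limits can be sketched as a small data model; the class and field names here are invented for the example and do not come from the guide or any HP software.

```python
from dataclasses import dataclass, field

# Limits taken directly from the glossary entries above.
MAX_LAYERS_PER_GROUP = 4   # layers numbered 1 (bottom) to 4 (top)
MAX_NODES_PER_LAYER = 8    # ServerNet nodes connected per layer

@dataclass
class ClusterSwitchLayer:
    # A layer is a pair of cluster switches (one X, one Y) at the same
    # position in their respective groups.
    number: int
    node_numbers: list = field(default_factory=list)

    def __post_init__(self):
        if not 1 <= self.number <= MAX_LAYERS_PER_GROUP:
            raise ValueError("layer number must be 1 through 4")
        if len(self.node_numbers) > MAX_NODES_PER_LAYER:
            raise ValueError("a layer connects at most 8 ServerNet nodes")

@dataclass
class ClusterSwitchGroup:
    # All switches of one fabric ("X" or "Y") in one zone, typically
    # installed in the same cluster switch rack.
    fabric: str
    layers: list = field(default_factory=list)

    def add_layer(self, layer: ClusterSwitchLayer) -> None:
        if len(self.layers) >= MAX_LAYERS_PER_GROUP:
            raise ValueError("a group holds at most 4 layers")
        self.layers.append(layer)

@dataclass
class ClusterSwitchZone:
    # A zone is a pair of X-fabric and Y-fabric groups plus their nodes.
    x_group: ClusterSwitchGroup
    y_group: ClusterSwitchGroup

zone = ClusterSwitchZone(ClusterSwitchGroup("X"), ClusterSwitchGroup("Y"))
zone.x_group.add_layer(ClusterSwitchLayer(number=1, node_numbers=[1, 2]))
print(len(zone.x_group.layers))  # → 1
```

The validation in `__post_init__` and `add_layer` simply encodes the numeric limits the glossary states (four layers per group, eight nodes per layer); it is a mnemonic for the terminology, not a configuration tool.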
CME. See correctable memory error (CME).