Expanding or Reducing a Node in a ServerNet Cluster
Like any NonStop S-series server, a node in a ServerNet cluster can be expanded or
reduced (enclosures can be added or removed) while the server is online. However, if
online expansion requires changes to the MSEBs in the group 01 enclosure, the
node’s connections to the cluster might not be in a fault-tolerant state for a short time.
To expand or reduce a node in a ServerNet cluster, refer to the NonStop S-Series
System Expansion and Reduction Guide.
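While the MSEB work is in progress, you can check whether the node's cluster connections have returned to a fault-tolerant state. As a minimal sketch, assuming the ServerNet cluster monitor process has its conventional name, $ZZSCL, enter the following at a TACL prompt:

   == Assumes $ZZSCL is the ServerNet cluster monitor process
   SCF STATUS SUBNET $ZZSCL

The resulting display reports the local node's ServerNet connectivity to the other member nodes; the connections are fault tolerant only when both the external X fabric and the external Y fabric show good paths.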
Splitting a Large Cluster Into Multiple Smaller Clusters
ServerNet clusters that have more than one cluster switch per fabric (clusters using the split-star or tri-star topologies) can be split into smaller clusters that are valid subsets of those topologies:

This topology . . .   Can be split into . . .
Split-star            Two clusters having up to eight nodes each and using one
                      cluster switch per fabric.
Tri-star              One of the following:
                      • Three clusters having up to eight nodes each and using
                        one cluster switch per fabric.
                      • Two clusters: a cluster having up to eight nodes and
                        using one cluster switch per fabric, and a cluster
                        having up to 16 nodes and using two cluster switches
                        per fabric.

Any valid subset of the split-star or tri-star topology can function independently as a cluster, if necessary.
Splitting a cluster:
• Can be done online.
• Does not require installing additional cluster switches.
• Does not change the topology or the ServerNet node numbers used by the star
  groups that are split. If you need to change the topology, refer to Section 4,
  “Upgrading a ServerNet Cluster.”
Use the following steps to split a split-star topology or a tri-star topology:
1. Select one of the star groups for a complete shutdown of ServerNet cluster
   services (a sketch of the shutdown command follows these steps). This can be
   the star group with the fewest nodes or the star group that is least critical
   to your application.
2. In all nodes of the cluster, stop any applications that depend on ServerNet
   cluster connectivity to the nodes in the star group that will be shut down.
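The commands for the shutdown itself depend on your configuration, but ServerNet cluster services are controlled through the SCF interface to the ServerNet cluster subsystem. As a minimal sketch, assuming the ServerNet cluster monitor process has its conventional name, $ZZSCL, cluster services on one node can be stopped from a TACL prompt:

   == Assumes $ZZSCL is the ServerNet cluster monitor process
   SCF STOP SUBSYS $ZZSCL

Repeating this command on each node in the selected star group removes those nodes from the cluster; each node continues to run as a standalone server.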