Managing HP Serviceguard for Linux, Seventh Edition, July 2007

Cluster and Package Maintenance
Managing the Cluster and Nodes
This section describes the following tasks:
“Starting the Cluster When all Nodes are Down” on page 242
“Adding Previously Configured Nodes to a Running Cluster” on page 244
“Removing Nodes from Participation in a Running Cluster” on page 245
“Halting the Entire Cluster” on page 246
“Automatically Restarting the Cluster” on page 246
Starting the cluster means running the cluster daemon on one or more of
the nodes in a cluster. You use different Serviceguard commands to start
the cluster depending on whether all nodes are currently down (that is,
no cluster daemons are running), or whether you are starting the cluster
daemon on an individual node.
Note the distinction that is made in this chapter between adding an
already configured node to the cluster and adding a new node to the
cluster configuration. An already configured node is one that is already
entered in the cluster configuration file; a new node is added to the
cluster by modifying the cluster configuration file.
NOTE Manually starting or halting the cluster or individual nodes does not
require access to the quorum server, if one is configured. The quorum
server is used only when tie-breaking is needed following a cluster
partition.
Starting the Cluster When all Nodes are Down
You can use Serviceguard Manager, or the cmruncl command as
described in this section, to start the cluster when all cluster nodes are
down. Particular command options can be used to start the cluster under
specific circumstances.
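As a sketch of what such invocations look like, the commands below start the cluster on all configured nodes, or on a subset of them; the node names node1 and node2 are placeholders, not names from this manual. Consult the cmruncl manpage for the options supported by your Serviceguard version.

```shell
# Start the cluster daemon on every node in the cluster configuration.
# -v produces verbose output so you can watch each node join.
cmruncl -v

# Alternatively, start the cluster on a subset of the configured nodes;
# node1 and node2 are placeholder hostnames.
cmruncl -v -n node1 -n node2

# Confirm that the cluster and its nodes are up.
cmviewcl -v
```

After cmruncl completes, cmviewcl should report the cluster status as up and list the running nodes.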