Managing HP Serviceguard A.11.20.10 for Linux, December 2012

7.2 Managing the Cluster and Nodes
This section describes the following tasks:
Starting the Cluster When all Nodes are Down (page 203)
Adding Previously Configured Nodes to a Running Cluster (page 204)
Removing Nodes from Participation in a Running Cluster (page 204)
Halting the Entire Cluster (page 204)
Automatically Restarting the Cluster (page 205)
Halting a Node or the Cluster while Keeping Packages Running (page 205)
In Serviceguard A.11.16 and later, these tasks can be performed by non-root users with the
appropriate privileges. See Controlling Access to the Cluster (page 152) for more information about
configuring access.
You can use Serviceguard Manager or the Serviceguard command line to start or stop the cluster,
or to add or halt nodes. Starting the cluster means running the cluster daemon on one or more of
the nodes in a cluster. You use different Serviceguard commands to start the cluster depending on
whether all nodes are currently down (that is, no cluster daemons are running), or whether you
are starting the cluster daemon on an individual node.
Note the distinction that is made in this chapter between adding an already configured node to
the cluster and adding a new node to the cluster configuration. An already configured node is one
that is already entered in the cluster configuration file; a new node is added to the cluster by
modifying the cluster configuration file.
NOTE: Manually starting or halting the cluster or individual nodes does not require access to the
quorum server, if one is configured. The quorum server is only used when tie-breaking is needed
following a cluster partition.
7.2.1 Starting the Cluster When all Nodes are Down
You can use Serviceguard Manager, or the cmruncl command as described in this section, to
start the cluster when all cluster nodes are down. Particular command options can be used to start
the cluster under specific circumstances.
The -v option produces the most informative output. The following command starts all nodes
configured in the cluster and performs a full check of LAN connectivity among them:
cmruncl -v
Using the -w none option allows the cluster to start more quickly, but skips the connectivity
test. The following starts all nodes configured in the cluster without a connectivity check:
cmruncl -v -w none
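The start-then-verify sequence can be sketched as a small shell function. This is an illustration only: the start_and_verify name is not part of Serviceguard, and treating a non-zero exit from cmviewcl (Serviceguard's status command) as failure is an assumption you should verify against your release.

```shell
#!/bin/sh
# Sketch: start all configured nodes without the connectivity check,
# then display cluster and node status. start_and_verify is a
# hypothetical wrapper, not a Serviceguard command.
start_and_verify() {
    cmruncl -v -w none || return 1   # fast start, no connectivity check
    cmviewcl -v                      # show detailed cluster status
}
```

In practice you would run start_and_verify on one node and inspect the cmviewcl output to confirm that every configured node has joined the cluster.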
The -n option specifies a particular group of nodes. Without this option, all nodes will be started.
The following example starts the locally configured cluster only on ftsys9 and ftsys10. (Use
this form of the command only when you are sure that the cluster is not already running on
any node.)
cmruncl -v -n ftsys9 -n ftsys10
CAUTION: HP Serviceguard cannot guarantee data integrity if you try to start a cluster with the
cmruncl -n command while a subset of the cluster's nodes is already running a cluster. If the
network connection between nodes is down, using cmruncl -n might result in a second cluster
forming, and this second cluster might start up the same applications that are already running on
the other cluster. The result could be two applications overwriting each other's data on the disks.
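A defensive wrapper can reduce the risk described above by refusing to start a node subset when a cluster already appears to be running. This is a sketch only: safe_start_subset is a hypothetical helper, and the assumption that cmviewcl exits non-zero when no cluster daemon is running should be checked against your Serviceguard version.

```shell
#!/bin/sh
# Hypothetical helper (not part of Serviceguard): start the cluster on a
# subset of nodes only after checking that no cluster is already visible
# from this node. Assumes cmviewcl exits non-zero when the cluster is down.
safe_start_subset() {
    if cmviewcl >/dev/null 2>&1; then
        echo "Refusing to start: a cluster appears to be running already." >&2
        return 1
    fi
    # Build one -n option per node, as in the manual's example.
    args="-v"
    for node in "$@"; do
        args="$args -n $node"
    done
    cmruncl $args
}
```

For example, safe_start_subset ftsys9 ftsys10 would invoke cmruncl -v -n ftsys9 -n ftsys10 only after the cmviewcl check passes. Note that this check runs on a single node and cannot detect a partitioned cluster that is unreachable over the network, so it supplements, rather than replaces, the caution above.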