systems, while maintaining a third VM Host system for backup. Should one or more active
applications fail, they can fail over to the backup VM Host system, which has the capacity to run
all the guests at the same time if necessary (see the capacity sketch following this list). The
backup system might deliver reduced performance, but the applications would resume operation
after a hardware failure.
3. Virtual-physical cluster consists of a VM Host system running Serviceguard and an HP Integrity
server or nPar that is not running Integrity VM but is running Serviceguard. The application
packages running on the physical server can fail over to the VM Host. This is an efficient backup
configuration, because the VM Host can provide virtual machines that serve as adoptive/standby
systems for many Serviceguard clusters.
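The sizing consideration in configuration 2 above can be made concrete with a rough capacity check.
The sketch below is illustrative only, using hypothetical guest memory and vCPU figures that do not
come from this paper; it simply verifies that a single backup VM Host could hold every guest at once,
accepting that CPU may be oversubscribed (hence the limited performance noted above).

# Illustrative only: hypothetical guest and backup-host sizes, not taken from this paper.
# Checks whether a single backup VM Host can hold every guest at the same time.

guests = {
    # name: (memory_gb, vcpus)
    "guest1": (8, 2),
    "guest2": (16, 4),
    "guest3": (8, 2),
}
backup_host = {"memory_gb": 48, "cpus": 8}

total_mem = sum(mem for mem, _ in guests.values())
total_vcpus = sum(vcpus for _, vcpus in guests.values())

fits = total_mem <= backup_host["memory_gb"]
print(f"Guest memory {total_mem} GB vs host {backup_host['memory_gb']} GB -> "
      f"{'fits' if fits else 'does not fit'}")
# vCPUs may be oversubscribed on the backup host, which is why performance may be
# limited while all guests run there after a failover.
print(f"vCPU oversubscription ratio: {total_vcpus / backup_host['cpus']:.2f}")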
Cluster configuration considerations
Using the information from the preceding sections, we can now assess the impacts and potential
issues that arise from using partitions (nPartitions or vPars) or virtual machines as part of a
Serviceguard cluster. From a Serviceguard perspective, an OS instance running in a partition or
virtual machine is treated no differently from an OS instance running on a non-partitioned, physical
node. Thus, partitioning or using virtual machines does not alter the basic Serviceguard configuration
rules described in Chapter 6 of the HP 9000 Enterprise Servers Configuration Guide and in the
Serviceguard for Linux Order and Configuration Guide; details can be obtained through your local
HP Sales Representative.
An example of these existing configuration requirements is the need for dual communication paths
to both storage and networks. The use of partitioning or virtual machines does, however, introduce
configuration situations that call for additional requirements. These are discussed below.
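As a simple illustration of the dual-path requirement above (this is not a Serviceguard utility, and the
per-node inventory format is hypothetical), the following sketch flags any node that has fewer than
two network paths or fewer than two storage paths:

# Illustrative check for the dual-communication-path requirement.
# The node inventory below is hypothetical; Serviceguard does not consume this format.

nodes = {
    "node1": {"network_paths": ["lan0", "lan1"], "storage_paths": ["fc0", "fc1"]},
    "node2": {"network_paths": ["lan0"], "storage_paths": ["fc0", "fc1"]},
}

for name, paths in nodes.items():
    for kind in ("network_paths", "storage_paths"):
        if len(paths[kind]) < 2:
            print(f"{name}: only {len(paths[kind])} {kind.replace('_', ' ')} -- "
                  f"redundant paths are required")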
Quorum arbitration requirements
As previously mentioned, existing Serviceguard configuration rules for non-partitioned, physical
systems require the use of a cluster lock only in the case of a two-node cluster. This requirement
protects against failures that leave exactly 50% of the prior membership running. Clusters with more
than two nodes do not have this as a strict requirement because of the independent failure
assumption: each node is assumed to fail independently, so a single failure cannot remove more than
one node. However, this assumption is no longer valid when dealing with partitions and virtual
machines. Cluster configurations that contain OS instances running within a partition or virtual
machine must be analyzed to determine the impact on cluster membership of a complete failure of
any hardware component that supports more than one partition or virtual machine.
Rule 1. Configurations in which a single failure could cause the loss of more than 50% of the
membership are not supported. These include configurations in which a majority of the nodes are
partitions or virtual machines within a single hardware cabinet. This implies that when there are two
cabinets, the partitions or virtual machines must be divided symmetrically between them.
For example, given three systems as shown in figure 3, creating a five-node cluster with three nPars
(or hard partitions) in one system and no partitioning in the other two would not be supported,
because the failure of the partitioned system would represent the loss of more than 50% of the
quorum (3 out of 5 nodes). Alternatively, the cluster would be supported if the two systems without
nPartitions each contained two vPars or virtual machines, resulting in a seven-node cluster. In this
case, the failure of any partitioned system (within a single hardware cabinet) would not represent
more than 50% of the quorum (at most 3 out of 7 nodes).
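The arithmetic behind Rule 1 can be checked mechanically. The sketch below is illustrative only and
is not a Serviceguard tool: given an assumed mapping of cluster nodes to hardware cabinets, it tests
whether the loss of any single cabinet would remove more than half of the membership, reproducing
the five-node and seven-node outcomes described above.

# Illustrative quorum check for Rule 1: a configuration is not supported if a single
# cabinet failure could remove more than 50% of the cluster membership.

def survives_single_cabinet_failure(node_to_cabinet):
    """Return True if no single cabinet holds more than half of the nodes."""
    total = len(node_to_cabinet)
    for cabinet in set(node_to_cabinet.values()):
        lost = sum(1 for c in node_to_cabinet.values() if c == cabinet)
        if lost * 2 > total:  # more than 50% of the prior membership lost
            return False
    return True

# Five-node cluster: three nPars in cabinet A, one unpartitioned system each in
# cabinets B and C -- not supported (cabinet A takes out 3 of 5 nodes).
five_node = {"npar1": "A", "npar2": "A", "npar3": "A", "sys1": "B", "sys2": "C"}

# Seven-node cluster: the systems in cabinets B and C each carry two vPars or
# virtual machines -- supported (any one cabinet takes out at most 3 of 7 nodes).
seven_node = {"npar1": "A", "npar2": "A", "npar3": "A",
              "vm1": "B", "vm2": "B", "vm3": "C", "vm4": "C"}

print(survives_single_cabinet_failure(five_node))   # False
print(survives_single_cabinet_failure(seven_node))  # True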
Exception: All cluster nodes are running within partitions in a single cabinet (the so-called
"cluster in a box" configuration). This configuration is supported as long as users understand and accept