HP Serviceguard Cluster Configuration for HP-UX 11i or Linux Partitioned Systems, April 2009

interface cards for the heartbeat LANs. Assuming that combination Fibre Channel/network cards are
not used, each partition would require a minimum of four interface cards. To support a 2 partition
cluster-in-a-box the system would need to have a total of eight I/O slots.
The use of “combination” cards that combine both network and storage can help in some situations.
However, redundant paths for a particular device must be split across separate interface cards (for
example, using multiple ports on the same network interface card for the heartbeat LANs is not
supported).
For Integrity VM, the following types of storage units can be used as virtual storage by virtual machine
packages:
• Files inside logical volumes (LVM, VxVM, Veritas cluster volume manager (CVM))
• Files on Cluster File System (CFS)
• Raw logical volumes (LVM, VxVM, CVM)
• Whole disks
The following storage types are not supported:
• Files outside logical volumes
• Disk partitions
When LVM, VxVM, or CVM disk groups are used for guest storage, each guest must have its own set
of LVM volume groups or VxVM (or CVM) disk groups. For CFS, all storage units are available to all
running guests at the same time. For configurations where Serviceguard runs on the virtual machine,
the storage units must be whole disks.
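As an illustration, storage of each supported type might be attached to a guest using the Integrity VM commands. The guest name and device paths below are hypothetical, and the exact resource syntax should be verified against the Integrity VM documentation (hpvmresources(5)):

```
# Attach a raw LVM logical volume (hypothetical volume group vg_vm1)
hpvmmodify -P guest1 -a disk:scsi::lv:/dev/vg_vm1/rlvol1

# Attach a file-backed virtual disk (the file resides inside a
# logical volume or on CFS)
hpvmmodify -P guest1 -a disk:scsi::file:/vmstore/guest1/disk1.img

# Attach a whole disk (required when Serviceguard runs inside the guest)
hpvmmodify -P guest1 -a disk:scsi::disk:/dev/rdisk/disk7
```

Recall that with LVM, VxVM, or CVM backing stores, the volume groups or disk groups used here must be dedicated to this guest.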
Latency considerations for vPars
As mentioned previously, there is a latency issue unique to vPars that must be considered when
configuring a Serviceguard cluster to use vPars.
There are certain operations performed by one vPars partition (such as initializing the boot disk
during bootup) that can induce delays in other vPars partitions within the same nPartition or node. The
net result to Serviceguard is the loss of cluster heartbeats if the delay exceeds the configured
NODE_TIMEOUT (pre A.11.19) or MEMBER_TIMEOUT (A.11.19 or later) parameter.
If heartbeats are not received within NODE_TIMEOUT (pre-A.11.19), the cluster begins the cluster
re-formation protocol and, provided the delay is within the failover time, the delayed node simply
rejoins the cluster. This results in cluster re-formation messages appearing in the syslog(1m) file along
with diagnostic messages from the Serviceguard cluster monitor (cmcld) describing the length of the
delay. For this reason, it is recommended that clusters containing nodes running in a vPars partition
be carefully tested using representative workloads to determine a NODE_TIMEOUT (pre-A.11.19)
value that eliminates cluster re-formations caused by vPars interactions.
If heartbeats are not received within MEMBER_TIMEOUT (A.11.19 or later), the delayed node is
removed from the cluster and restarts. Choosing an appropriate MEMBER_TIMEOUT value is therefore
even more important with vPars, to avoid node failures caused by latency. For this reason, it is
recommended that clusters containing nodes running in a vPars partition be carefully tested using
representative workloads to determine a MEMBER_TIMEOUT value that eliminates unnecessary
failovers caused by vPars interactions.
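For example, the timeout is set in the cluster configuration ASCII file (generated with cmquerycl and applied with cmapplyconf). The fragment below is a sketch, not a complete configuration; the cluster name is hypothetical and the value shown is only illustrative (MEMBER_TIMEOUT is specified in microseconds in A.11.19 and later):

```
CLUSTER_NAME    vpar_cluster

# A.11.19 or later: single membership timeout parameter, in microseconds.
# 14 seconds shown here; increase it if testing under representative
# workloads shows vPars-induced delays approaching this value.
MEMBER_TIMEOUT  14000000
```

On pre-A.11.19 clusters the analogous parameter in this file is NODE_TIMEOUT.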
Note:
Tuning the timeout in this way does not eliminate the cmcld diagnostic messages that record
delays greater than certain thresholds.