HP Integrity Virtual Machines 4.2: Installation, Configuration, and Administration

memory, in addition to storage and network I/O connections, to handle their workloads. Any
initial performance problems with a virtual machine can be compounded when application
workloads are failed over to it by Serviceguard in response to a failure in one of the other cluster
members.
11.2.6 Availability
Integrity VM instances are not highly available in VMs as Serviceguard Nodes
configurations. A failure of a VM is equivalent to a node failure in a Serviceguard
cluster. Running Serviceguard within the VM provides high availability for the
applications running in the VM. One shortcoming of VMs as Serviceguard Nodes
configurations is that the adoptive failover VMs must be running, consuming some
VM Host resources that could otherwise be used by VMs that are not part of the
Serviceguard cluster. Consider using the Integrity VM dynamic memory allocation
feature to better manage adoptive VM memory usage during application failovers.
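As a sketch of how dynamic memory might be configured for an adoptive VM from the
VM Host, the following uses the documented `hpvmmodify` dynamic memory options;
the VM name `adoptive1` and the memory sizes are illustrative assumptions, and the
exact option set available depends on your Integrity VM version:

```shell
# Enable dynamic memory on an adoptive VM so it runs with a small
# footprint until a failover occurs (VM name and sizes are examples).
hpvmmodify -P adoptive1 -x ram_dyn_type=any \
    -x ram_dyn_min=1024M -x ram_dyn_max=8192M \
    -x ram_dyn_target_start=2048M

# Review the VM configuration, including dynamic memory settings.
hpvmstatus -P adoptive1 -V
```

After a failover, the guest's memory allocation can then be grown toward
`ram_dyn_max` to accommodate the adopted workload.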
NOTE: HP recommends not using a VMs as Serviceguard Nodes configuration with virtual
machines on the same VM Host (cluster-in-a-box) for mission-critical or
business-critical applications, because the physical VM Host is a single point of
failure (SPOF). If the physical system fails, the entire cluster also fails.
11.2.7 Storage Considerations
An important distinction between VMs as Serviceguard Packages and VMs as
Serviceguard Nodes configurations is that VMs as Serviceguard Nodes configurations
limit the choice of backing store depending on how the storage is used on the node.
The guest root or system disks (those not used by applications that can fail over)
can be of any supported backing store type. Shared storage disks (those used by
failover applications on more than one node) must be whole-disk VM backing stores.
VMs as Serviceguard Nodes configurations support only whole-disk VM backing stores
for shared storage because:
•  It is not possible to set timeouts on logical volumes or file systems presented
   as backing stores to the VM. Errors generated by these types of backing stores
   are not passed through the virtualization layers from the VM Host to the VM,
   so Serviceguard running in the VM cannot react to these conditions.
•  Disk I/O performance and the speed at which outstanding I/O requests complete
   before a VM node failure can affect cluster reformation time. For more
   information about handling outstanding I/O requests during a VM node failure,
   see Usage Considerations.
•  Data used by applications protected by Serviceguard packages must reside on
   shared storage that is physically connected to all nodes in the cluster; it
   can be placed in LVM or VxVM logical volumes or on a cluster file system (CFS)
   accessible by the VM.
•  The storage for the application data presented to the VM guest by the VM Host
   must be whole disks, so that the logical volume and file system structures on
   this storage can be accessed by the other nodes in the cluster during a
   Serviceguard package failover.
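A whole-disk backing store can be presented to a guest with `hpvmmodify` using a
disk resource specification; the following is a sketch in which the VM name
`sgnode1` and the device path `/dev/rdisk/disk10` are illustrative assumptions:

```shell
# Present a whole physical disk (not a logical volume or a file) to
# the guest as an AVIO storage device. The same physical disk must be
# visible to every VM Host whose VM is a node in the cluster.
hpvmmodify -P sgnode1 -a disk:avio_stor::disk:/dev/rdisk/disk10
```

Because the guest sees the whole disk, the LVM or VxVM structures the
application creates on it remain directly accessible to the other cluster
nodes during a package failover.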
11.2.8 Limitations Associated with These Configurations
Online migration of VMs as Serviceguard Nodes is not supported at this time, because
the guest freeze time causes a loss of cluster node heartbeats in the migrating VM.
This triggers a cluster reformation and the removal of the VM from the cluster. To
avoid accidental online migration of VMs as Serviceguard nodes, HP recommends that
you use the following command to disable online migration of the VM:
# hpvmmodify -P Serviceguard-node-vm-name -x online_migration=disabled
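As a sketch, the setting can be reviewed afterward in the verbose VM status output;
the VM name `sgnode1` is an illustrative assumption, and whether the attribute is
shown in the verbose listing depends on your Integrity VM version:

```shell
# Inspect the VM's verbose status and look for the online migration
# attribute (VM name is an example).
hpvmstatus -P sgnode1 -V | grep -i migration
```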
The following limitations apply:
11.2 VMs as Serviceguard Nodes 191