Designing High Availability Solutions with HP Serviceguard and HP Integrity Virtual Machines

Networks
For VM as package configurations:
Three LAN connections are recommended: one LAN for a dedicated Serviceguard heartbeat for the VM host and a
primary/standby LAN pair for VM guests, which are monitored by Serviceguard on the VM host.
HP Auto-Port Aggregation (APA) is supported and can be used to provide network bandwidth scalability, load
balancing between the physical links, automatic fault detection, and HA recovery. (Note that it is important when
using APA to have at least two physical NICs configured to avoid a single point of failure for the Serviceguard
cluster heartbeat connections.) Serviceguard also has a network monitor that provides network failure detection
options for identifying failed network cards based on inbound and outbound message counts and failing over to
configured standby LANs.
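For illustration, the VM host's cluster ASCII configuration might declare these LANs along the following lines (interface names and IP addresses are hypothetical; a standby interface is listed without an IP address):

```
# Fragment of a Serviceguard cluster configuration file for one VM host
NODE_NAME vmhost1
  NETWORK_INTERFACE lan0        # dedicated Serviceguard heartbeat LAN
    HEARTBEAT_IP 192.168.10.1
  NETWORK_INTERFACE lan1        # primary LAN for VM guest traffic
    STATIONARY_IP 10.1.1.1
  NETWORK_INTERFACE lan2        # standby for lan1; no IP is assigned
```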
The vswitch monitor watches the Serviceguard network monitor and, when required, automatically moves the vswitch
configuration between the primary and standby physical network interfaces. (A vswitch is a virtual device that
accepts network traffic from one or more VMs and directs it to an associated port on the physical network interface
card, or NIC, used by a VM guest.) The vswitch monitor, vswitchmon, is installed as part of the Integrity VM product
and requires no user configuration. The SG-IVS toolkit also includes a vswitch monitor, vswitchmgr; vswitchmon and
vswitchmgr can coexist on the same VM host, running in parallel.
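The vswitch itself is created and started on the VM host with the hpvmnet command. As a brief sketch (the switch and interface names here are hypothetical, and the exact options should be confirmed against the hpvmnet manpage for your Integrity VM release):

```
# Create a vswitch named "vmlan1" backed by physical interface lan1,
# then boot it; run on the VM host.
hpvmnet -c -S vmlan1 -n 1
hpvmnet -b -S vmlan1

# Display vswitch status
hpvmnet -S vmlan1
```

The vswitch monitor then moves this vswitch between the primary NIC and its configured standby automatically; no additional per-vswitch setup is required.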
For VM as node configurations:
Three LAN connections are recommended: one LAN for a dedicated Serviceguard heartbeat for the VM guest and
a primary/standby LAN pair for the VM guest that is monitored by Serviceguard on the VM guest.
Linux channel bonding for making network connections highly available is currently not supported in Linux VM
guests. Instead, on the VM host NICs, use APA LACP_AUTO mode to provide switch port-level redundancy and APA
LAN Monitor mode to provide switch/hub-level redundancy, making Linux guest networking highly available.
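As an illustrative fragment (port names are hypothetical; verify the variable names against your APA release documentation), LACP_AUTO mode is configured per port in /etc/rc.config.d/hp_apaportconf:

```
# /etc/rc.config.d/hp_apaportconf fragment: place two VM host NICs into
# an LACP-negotiated aggregate for switch port-level redundancy.
HP_APAPORT_INTERFACE_NAME[0]=lan1
HP_APAPORT_CONFIG_MODE[0]=LACP_AUTO
HP_APAPORT_INTERFACE_NAME[1]=lan2
HP_APAPORT_CONFIG_MODE[1]=LACP_AUTO
```

LAN Monitor failover groups, by contrast, are defined in /etc/lanmon/lanconfig.ascii and activated with the lanapplyconf command.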
For either VM as package or VM as node configurations, your application availability and network performance
requirements should be used to determine whether VMs should share physical network ports or be assigned their
own dedicated ports.
Storage protection
Disk storage protection should be performed on the VM host. Implementing a storage protection solution (for
example, RAID mirroring) for the physical storage on the host automatically protects the storage used by the VMs,
eliminates the need to implement the same solution in each VM, and minimizes virtualization overhead.
Multipathing solutions should also be implemented on the VM host, as they are not supported within VM guests (note
this also applies to Native Multipathing with HP-UX 11i v3 guests). Only the primary paths to virtual disks for VMs
can be used; secondary paths are not permitted. Logical volumes used as virtual disks can provide their own
multipathing capabilities (for example, LVM PVlinks, VxVM DMP). HP Secure Path and EMC PowerPath are two other
supported multipathing options when using HP or EMC disk arrays on HP-UX 11i v2. HP-UX 11i v3, which includes a
native multipathing storage stack, is supported with Integrity VM B.04.00 (and later) hosts and eliminates the need for
using alternative multipathing solutions. Use of legacy device special files (DSFs) to define virtual storage is
deprecated starting in Integrity VM B.04.30. Use of native multipathing is strongly encouraged by HP.
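To illustrate the DSF distinction (the device paths and VM name here are hypothetical), a virtual disk on an HP-UX 11i v3 host should be attached using an agile (persistent) DSF, which the native multipathing stack maps across all physical paths, rather than a legacy per-path DSF:

```
# Deprecated: a legacy DSF names one physical path (c#t#d# form)
hpvmmodify -P vm1 -a disk:scsi::disk:/dev/rdsk/c5t0d1

# Preferred on 11i v3 hosts: an agile DSF managed by native multipathing
hpvmmodify -P vm1 -a disk:scsi::disk:/dev/rdisk/disk10
```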
Performance
With both VM as package and VM as node configurations, Serviceguard moves a workload (that is, a VM or
application) to a failover node in the cluster in the event of a failure. The cluster design should ensure that every
failover node has sufficient system resources to run its existing workloads in addition to the workload being failed
over. If a node with an existing workload does not have sufficient capacity to handle a failover workload,
several options can be considered such as using WLM and TiCAP on the failover node or implementing a standby
node for the failover workload.
There are several other areas to consider when implementing VMs to achieve the best possible performance. The
Best Practices for Integrity Virtual Machines white paper (available at www.hp.com/go/hpux-hpvm-docs) contains
additional information on the following VM configuration recommendations.