Technical white paper

Using HP Serviceguard for Linux with VMware virtual machines

Table of contents
About this paper
Introduction
Scope
Considering virtual machine configuration
Using VMware NIC teaming to avoid single point of failure
Configuring shared storage
Installing VMware guest tools
Install VMware Tools
About vminfo
Installing vminfo
About sg_persist
Installing sg_persist on Red Hat
Serviceguard on VM guests
Cluster configuration options
About this paper
Virtual machine technology is a powerful capability that can reduce costs and power usage while improving utilization of resources. HP is also applying virtualization to other aspects of the data center and uniting virtual and physical resources to create an environment suitable for deploying mission-critical applications. HP Serviceguard for Linux is certified for deployment on Linux virtual machines created on VMware ESX server on industry-standard HP ProLiant servers.
Scope
This document describes how to configure Serviceguard for Linux clusters using physical machines and VMware virtual machines running on ESX server, so as to provide high availability for applications. As new versions of ESX server or Linux distributions are certified, they will be listed in the Serviceguard for Linux certification matrix at hp.com/info/sglx → HP Serviceguard for Linux Certification Matrix.
When NIC teaming is configured in fault-tolerant mode and one of the underlying physical NICs fails or its cable is unplugged, ESX Server detects the fault condition and automatically moves traffic to another NIC in the team. This eliminates any single physical NIC as a single point of failure and makes the overall network connection fault tolerant. This feature requires beacon monitoring (see document 1) to be enabled on both the physical switch and the ESX Server NIC team.
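Before testing NIC failover, it is worth confirming which physical NICs are up and which are bound to each vSwitch team. A minimal sketch using the standard ESX service console tools follows; it defaults to a dry run (printing the commands instead of executing them) so it can be reviewed off-host, since these commands only exist on an ESX server.

```shell
# Hedged sketch: verify NIC team membership from the ESX service console.
# DRY_RUN=1 (the default) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run esxcfg-nics -l       # link state and speed of each physical NIC
run esxcfg-vswitch -l    # which physical NICs are teamed on each vSwitch
```

Set DRY_RUN=0 only when running on the ESX host itself.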
Now you can add the device by clicking the Add button above the hardware listing, as shown in figure 2. In the next screen, select Hard Disk, as shown in figure 3. Click the Raw Device Mappings radio button in the next screen, as shown in figure 4. If the RDM option is disabled, no free LUN is available for mapping. If LUNs are exposed to the ESX server but none appear as available for mapping, you may need to reboot the ESX server.
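Before resorting to a reboot, on some ESX versions a storage rescan of the relevant adapter from the service console may be enough to make newly presented LUNs visible. The sketch below defaults to a dry run; the adapter name vmhba1 is an example, not taken from this paper.

```shell
# Hedged sketch: rescan a storage adapter for new LUNs before rebooting.
# vmhba1 is an example adapter name; DRY_RUN=1 prints instead of executing.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run esxcfg-rescan vmhba1
```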
Figure 4. Select Raw Device Mappings In the next screen, as shown in figure 5, select the target LUN. Figure 5. Select LUN In the next screen, select a datastore to hold the LUN mapping file, as shown in figure 6. Virtual machines running on the same ESX server can map to this device as they would to any other virtual disk. This is useful in cluster-in-a-box configurations.
Figure 6. Select Datastore The next option is to select the compatibility mode. You should select Physical, as shown in figure 7. This allows the guest OS to access the LUN directly. Figure 7. Compatibility Mode Virtual machines do not support physical HBAs, so LUNs are attached to virtual SCSI controllers. In the next screen (shown in figure 8) the drop-down list shows SCSI (0:0), SCSI (0:1), …, SCSI (0:15).
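The same physical-mode RDM mapping file selected through the VI client above can also be created from the ESX service console with vmkfstools. The sketch below defaults to a dry run; the device path and datastore path are examples, not taken from this paper.

```shell
# Hedged sketch: create a physical (pass-through) RDM mapping file with
# vmkfstools; -z corresponds to the Physical compatibility mode in figure 7.
# Device and datastore paths below are examples.
DRY_RUN=${DRY_RUN:-1}
DEVICE=/vmfs/devices/disks/vmhba1:0:3:0
MAPFILE=/vmfs/volumes/datastore1/sgnode1/shared_lun.vmdk

cmd="vmkfstools -z $DEVICE $MAPFILE"
if [ "$DRY_RUN" = 1 ]; then
  echo "$cmd"
else
  $cmd
fi
```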
Figure 8. Advanced options Now click Next to verify your selections, as shown in figure 9. Figure 9.
Click Finish on this step. This takes you to the Virtual Machine Properties screen, shown in figure 10. For the newly added hard disk, you can see that Physical is selected under Compatibility mode. This allows virtual disks to be shared between virtual machines on any server. Figure 10. VM Properties For more details on SAN configuration options, refer to the following documents: • VMware Infrastructure 3.
Install VMware Tools
To install VMware Tools onto a Linux guest from the Virtual Infrastructure (VI) client, you must be running the X Window System on the host console. VMware's details on the latest version of vSphere can be found in its Installing and Configuring VMware Tools document; the latest edition is available at http://www.vmware.com/support/pubs. The steps below apply to the vSphere Client and Web Client of vSphere 4.x and 5.0 and ESX/ESXi 4.x hosts.
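After choosing Install VMware Tools in the client (which attaches the Tools ISO to the guest's virtual CD-ROM), the usual command-line flow inside the Linux guest can be sketched as follows. The mount point and the version string in the tarball name are examples; the block defaults to a dry run so the flow can be reviewed first.

```shell
# Hedged sketch of the typical VMware Tools install flow on a Linux guest.
# Assumes the Tools ISO is already attached via "Install VMware Tools".
# The tarball version shown is an example; DRY_RUN=1 prints the commands.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run mount /dev/cdrom /mnt/cdrom
run tar -xzf /mnt/cdrom/VMwareTools-8.6.0-425873.tar.gz -C /tmp
run /tmp/vmware-tools-distrib/vmware-install.pl --default
run umount /mnt/cdrom
```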
About vminfo
Serviceguard running on VMs uses the vminfo command to get information about the virtualization platform. When invoked (vminfo -M), it returns the ESX host name and the default timeout value. A symbolic link, cmvminfo, is created for this command in the $SGSBIN directory.

Installing vminfo
The vminfo command is included in the Serviceguard RPMs from A.11.19 onwards. On Serviceguard version A.11.18, you must install vminfo separately. See the details below to get the RPMs for your system.
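The lookup described above can be sketched as a small shell wrapper that prefers the cmvminfo symbolic link when it exists. The $SGSBIN default of /usr/local/cmcluster/bin is an assumption about the install location, not taken from this paper.

```shell
# Hedged sketch: build the command line used to query platform information.
# The SGSBIN default below is an assumed install path.
SGSBIN=${SGSBIN:-/usr/local/cmcluster/bin}

vminfo_cmd() {
  # Prefer the cmvminfo symbolic link if present; fall back to vminfo.
  if [ -x "$SGSBIN/cmvminfo" ]; then
    echo "$SGSBIN/cmvminfo -M"
  else
    echo "$SGSBIN/vminfo -M"
  fi
}

# Prints the command that, run on a VM node, would return the ESX host
# name and the default timeout value described above.
vminfo_cmd
```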
Cluster configuration options
A Serviceguard cluster that includes virtual machine nodes can consist of:
• Virtual machines on the same host (cluster in a box; not recommended)
• Virtual machines on separate hosts
• Virtual machine and physical nodes
• All of the above
If a cluster is configured with multiple virtual machines running on the same host, together with virtual machines running on other hosts or physical servers, you need to be aware of the possibility of data corruption if an application fails
Figure 12. HP Serviceguard cluster of virtual machine guests on different physical nodes
A Serviceguard cluster consisting of VM guests on separate physical servers is shown in figure 12. In this configuration, Serviceguard provides HA for applications against failure of the physical node, the VMware ESX hypervisor, the VM guest, and the application itself. A failed application can be restarted on the same virtual machine guest or failed over to another virtual machine guest on a different physical node.
Migrating legacy packages to a VM cluster
Depending on your needs, legacy packages can be migrated to modular packages before deploying them on a cluster that includes VMs. The cmmigratepkg command can be used to migrate a legacy package as follows:
# cmmigratepkg -p legacy_pkg -o mod_pkg1.conf
If the legacy package was created on SG 11.18.02, you will get the following warning message:
Warning: at line: 533 function persist_reservation is not a Serviceguard function.
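Because the warning above is triggered by the persist_reservation customization from SG 11.18.02, it can be useful to scan the legacy package files beforehand so the warning is expected rather than surprising. A minimal sketch follows; the control-script file name is an example.

```shell
# Hedged sketch: check a legacy package control script for the
# persist_reservation customization before running cmmigratepkg.
# The file name below is an example.
CONF=legacy_pkg.cntl

if [ -f "$CONF" ] && grep -q persist_reservation "$CONF"; then
  MSG="persist_reservation found: expect a cmmigratepkg warning"
else
  MSG="no persist_reservation customization found"
fi
echo "$MSG"
```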
Summary
This guide describes best practices for deploying HP Serviceguard in a typical VMware ESX environment. It does not attempt to duplicate the strategies and best practices of other HP or VMware technical white papers. The strategies and best practices offered here are presented at a high level to provide general knowledge. Where appropriate, you are referred to specific white papers that provide more detailed information.

For more information
1.