
Read Policy for Mirrored Volumes
VxVM offers different read policies for mirrored volumes. The default policy is “select”, which reads in
a round-robin fashion if none of the plexes are striped. In an EDC, it can be beneficial to read from
a specific plex that is local to the data center where the reading node resides, thus reducing the load
on the inter-site storage link.
With the VxVM 5.0 site-awareness feature, the read policy can be set to siteread, which directs
each node to read from a volume’s plex that is stored on a disk at the same site as the node—tagged
with the same siteid as the node. This feature does not exist with VxVM/CVM 4.1.
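As an illustration, the VxVM 5.0 site configuration and the siteread policy might be set along the
following lines; the disk group dgEDC and volume vol1 match the examples in this paper, the site
names DC1 and DC2 denote the two data centers, and the complete Remote Mirror setup (including
tagging each disk with its site using vxdisk settag) is only sketched here, so consult the VxVM 5.0
documentation for the full procedure:
On each node in DC1: vxdctl set site=DC1
On each node in DC2: vxdctl set site=DC2
Once per disk group: vxdg -g dgEDC addsite DC1 and vxdg -g dgEDC addsite DC2
After the disks have been tagged with their site names: vxvol -g dgEDC rdpol siteread vol1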
With VxVM 4.1, the read policy can be set to prefer a specific plex of a volume, but that setting is
then binding for all cluster nodes. With concurrently accessed CVM volumes, half of
the EDC nodes read from the local plex and the other half read from the remote plex. However, if
only one of the nodes accesses the volume or does the majority of the reads, setting its local plex to
the preferred plex makes sense. To automate setting the read policy for applications that run on
different cluster nodes, the following commands are examples of what you can include in the
Serviceguard package startup script:
For the nodes in DC1: vxvol -g dgEDC rdpol prefer vol1 vol1-01
For the nodes in DC2: vxvol -g dgEDC rdpol prefer vol1 vol1-02
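To make the same package start logic work on any node, the startup script can branch on the local
node name. The following fragment is a minimal sketch; the node names and their assignment to the
two data centers are hypothetical:

# Prefer the plex that is local to the data center of the node running the package.
# node1 and node2 are assumed to reside in DC1, node3 and node4 in DC2.
case $(hostname) in
  node1|node2) vxvol -g dgEDC rdpol prefer vol1 vol1-01 ;;
  node3|node4) vxvol -g dgEDC rdpol prefer vol1 vol1-02 ;;
esac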
Hot Relocation Not Recommended for Shared Volumes in an EDC
In a local Serviceguard cluster, mirroring is used to protect against individual disk failures—in an
EDC, mirroring is used to protect against the failure of an entire highly available storage system. If a
disk associated with a mirrored volume fails, VxVM can automatically restore redundancy of the
volume, as long as one complete plex exists. This is achieved by VxVM’s hot relocation feature.
Subdisks are relocated to designated spare disks or to free space within the disk group, and the data
from the subdisks of the remaining plex is copied to the new subdisks.
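For example, a disk is designated as a hot relocation spare with vxedit (the disk media name below is
a placeholder): vxedit -g dgEDC set spare=on diskDC1spare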
In an EDC, the VxVM hot relocation feature can be counterproductive. If the inter-site FC link fails, or
a complete site fails, this feature automatically relocates the subdisks of the failed plex to the storage
system in the surviving data center. This results in a configuration in which both plexes of a volume
are stored on disks in the same data center, compromising the ability to survive a data center failure
at a later time.
For EDC configurations, it is highly recommended that you switch off VxVM’s hot relocation feature. A
convenient way to switch off hot relocation for all VxVM/CVM volumes is to prevent the vxrelocd
daemon from starting. This can be achieved by commenting out the entry (nohup vxrelocd root &)
that invokes the vxrelocd daemon in the startup file (/sbin/rc2.d/S95vxvm-recover). You
should verify these configuration changes after installing patches or upgrading to a new version of
VxVM to ensure hot relocation stays disabled.
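For example, after an update you can quickly confirm that the entry is still commented out and that
the daemon is not running (the second command should show no vxrelocd process apart from the
grep itself):
grep vxrelocd /sbin/rc2.d/S95vxvm-recover
ps -ef | grep vxrelocd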
Note that preventing vxrelocd from starting will disable the hot relocation feature for all
VxVM/CVM volumes on the system. The VxVM Administration Guide provides additional information
on how to use the hot relocation feature in a more granular way.
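As an illustration of that granularity, individual disks can be excluded from being used as hot
relocation targets; the disk media name below is a placeholder, so verify the option against your
VxVM version’s documentation:
vxedit -g dgEDC set nohotuse=on diskDC1_01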