Managing HP Serviceguard A.11.20.10 for Linux, December 2012

IMPORTANT: Although cross-subnet topology can be implemented on a single site, it is most
commonly used by extended-distance clusters and Metrocluster. For more information about such
clusters, see the following documents at http://www.hp.com/go/linux-serviceguard-docs:
•  Understanding and Designing Serviceguard Disaster Recovery Architectures
•  HP Serviceguard Extended Distance Cluster for Linux Deployment Guide
•  Building Disaster Recovery Serviceguard Solutions Using Metrocluster with 3PAR Remote Copy for Linux B.01.00.00
•  Building Disaster Recovery Serviceguard Solutions Using Metrocluster with Continuous Access XP P9000 for Linux B.01.00.00
•  Building Disaster Recovery Serviceguard Solutions Using Metrocluster with Continuous Access EVA P6000 for Linux B.01.00.00
2.3 Redundant Disk Storage
Each node in a cluster has its own root disk, but each node may also be physically connected to several other disks in such a way that more than one node can access the data and programs associated with a package it is configured to run. This access is provided by the Logical Volume Manager (LVM). A volume group must be active on no more than one node at a time, but when the package is moved, the volume group can be activated by the adoptive node.
NOTE: As of release A.11.16.07, Serviceguard for Linux provides functionality similar to HP-UX
exclusive activation. This feature is based on LVM2 hosttags, and is available only for Linux
distributions that officially support LVM2.
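The hosttags mechanism the note refers to can be illustrated with an lvm.conf fragment. This is a sketch of the underlying LVM2 configuration only, not a file Serviceguard generates; the volume group names (vg00, vg_pkg1) and the tag name (node1) are assumptions for the example.

```
# /etc/lvm/lvm.conf (fragment) -- illustrative only
tags {
    hosttags = 1    # also read a per-host config (lvm_<hostname>.conf) on each node
}
activation {
    # Activate only the root volume group and any volume group
    # carrying this node's tag; all other shared VGs stay inactive here.
    volume_list = [ "vg00", "@node1" ]
}
```

With this in place, a shared volume group can be activated only on the node whose tag it carries (for example, one set with vgchange --addtag and removed with vgchange --deltag); Serviceguard manages the tags itself, which is how it achieves activation behavior similar to HP-UX exclusive activation.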
All of the disks in the volume group owned by a package must be connected to the original node
and to all possible adoptive nodes for that package.
Shared disk storage in Serviceguard Linux clusters is provided by disk arrays, which have redundant
power and the capability for connections to multiple nodes. Disk arrays use RAID modes to provide
redundancy.
2.3.1 Supported Disk Interfaces
The following interfaces are supported by Serviceguard for disks that are connected to two or more
nodes (shared data disks):
•  FibreChannel
For information on configuring multipathing, see “Multipath for Storage” (page 78).
2.3.2 Disk Monitoring
You can configure monitoring for disks and configure packages to be dependent on the monitor.
For each package, you define a package service that monitors the disks that are activated by that
package. If a disk failure occurs on one node, the monitor will cause the package to fail, with the
potential to fail over to a different node on which the same disks are available.
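The manual does not prescribe a particular monitor script, so the following is a minimal sketch of what such a package service might look like. The device names, the check interval, and the dd-based read check are all assumptions for illustration; the only property Serviceguard relies on is that the service process exits non-zero when a disk fails.

```shell
#!/bin/sh
# Illustrative disk-monitor sketch (not an HP-supplied tool).
# DEVICES and INTERVAL are assumed values for the example.
DEVICES="${DEVICES:-/dev/vg_pkg1/lvol1}"   # volumes this package activates
INTERVAL="${INTERVAL:-30}"                 # seconds between health checks

# Return 0 while every monitored device still answers a small read,
# non-zero as soon as one does not.
check_devices() {
    for dev in $DEVICES; do
        dd if="$dev" of=/dev/null bs=4k count=1 >/dev/null 2>&1 || {
            echo "disk monitor: read failed on $dev" >&2
            return 1
        }
    done
    return 0
}

# Run as the package's monitoring service: loop until a check fails.
# Serviceguard treats the non-zero exit as a service failure, which
# fails the package and allows it to fail over to another node.
monitor_loop() {
    while check_devices; do
        sleep "$INTERVAL"
    done
    exit 1
}
```

The script would be configured as a service in the package, so that the exit of monitor_loop drives the failover behavior described above.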
2.3.3 Sample Disk Configurations
Figure 5 shows a two-node cluster. Each node has one mirrored root disk and one package for which it is the primary node. Resources have been allocated to each node so that each node can adopt the package from the other node. Each package has one disk volume group assigned to it, and the logical volumes in that volume group are mirrored.