Managing HP Serviceguard A.11.20.20 for Linux, March 2014

IMPORTANT: Although cross-subnet topology can be implemented on a single site, it is most
commonly used by extended-distance clusters and Metrocluster. For more information about such
clusters, see the following documents at http://www.hp.com/go/linux-serviceguard-docs:
•  Understanding and Designing Serviceguard Disaster Recovery Architectures
•  HP Serviceguard Extended Distance Cluster for Linux Deployment Guide
•  Building Disaster Recovery Serviceguard Solutions Using Metrocluster with 3PAR Remote Copy for Linux
•  Building Disaster Recovery Serviceguard Solutions Using Metrocluster with Continuous Access XP P9000 for Linux
•  Building Disaster Recovery Serviceguard Solutions Using Metrocluster with Continuous Access EVA P6000 for Linux
2.3 Redundant Disk Storage
Each node in a cluster has its own root disk, but each node may also be physically connected to several other disks in such a way that more than one node can access the data and programs associated with a package it is configured for. This access is provided by the Logical Volume Manager (LVM). A volume group can be activated by no more than one node at a time, but when the package is moved, the volume group can be activated by the adoptive node.
NOTE: As of release A.11.16.07, Serviceguard for Linux provides functionality similar to HP-UX
exclusive activation. This feature is based on LVM2 hosttags, and is available only for Linux
distributions that officially support LVM2.
All of the disks in the volume group owned by a package must be connected to the original node
and to all possible adoptive nodes for that package.
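As a sketch of the LVM2 hosttag mechanism that underlies this exclusive activation (Serviceguard normally performs these steps itself; the volume group name vgpkg and host tag @node1 below are hypothetical, and the commands require root):

```shell
# In /etc/lvm/lvm.conf on each node, restrict activation to volume
# groups carrying this host's tag, for example:
#   volume_list = [ "rootvg", "@node1" ]

# On the node that currently owns the package's storage, tag the
# volume group with the local host tag and activate it:
vgchange --addtag @node1 vgpkg
vgchange -a y vgpkg

# Before the adoptive node can activate the volume group, the owning
# node deactivates it and releases the tag:
vgchange -a n vgpkg
vgchange --deltag @node1 vgpkg
```

Because volume_list rejects any volume group that carries neither a listed name nor the local host tag, a node that has not claimed the tag cannot activate the shared volume group, which is what prevents concurrent activation.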
Shared disk storage in Serviceguard Linux clusters is provided by disk arrays, which have redundant
power and the capability for connections to multiple nodes. Disk arrays use RAID modes to provide
redundancy.
2.3.1 Supported Disk Interfaces
The following interfaces are supported by Serviceguard for disks that are connected to two or more
nodes (shared data disks):
•  FibreChannel
•  iSCSI
For information on configuring multipathing, see “Multipath for Storage ” (page 82).
2.3.1.1 Using iSCSI LUNs as Shared Storage
The following guidelines apply when iSCSI LUNs are used as shared storage:
•  iSCSI storage can be configured over a channel bond. For more information about channel bonding, see “Implementing Channel Bonding (Red Hat)” (page 140) or “Implementing Channel Bonding (SUSE)” (page 142).
•  iSCSI storage is supported with software initiators.
NOTE: Ensure that the iSCSI daemon is persistent across reboots.
•  Configuring multiple paths from different networks to the iSCSI LUN is not supported.
•  The LAN over which iSCSI storage is configured is treated like any other LAN that is part of the cluster.
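For example, on a systemd-based distribution the open-iscsi initiator services can be made persistent across reboots as follows (the unit names iscsid and iscsi are the usual open-iscsi ones but may differ by distribution, and the target IQN and portal address shown are hypothetical):

```shell
# Enable the iSCSI initiator daemon and session-restore service so
# they start at boot (unit names may vary by distribution):
systemctl enable iscsid
systemctl enable iscsi

# On older init-script based systems, the equivalent would be:
#   chkconfig iscsid on
#   chkconfig iscsi on

# Setting node.startup to automatic for a discovered target makes the
# initiator log in to it again after a reboot (IQN and portal are
# placeholders for illustration):
iscsiadm -m node -T iqn.2014-03.com.example:target1 -p 192.0.2.10 \
    --op update -n node.startup -v automatic
```

With node.startup set to automatic, the iscsi service re-establishes the session at boot, so the shared LUNs are visible before Serviceguard tries to activate the package's volume groups.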