Understanding and Designing Serviceguard Disaster Recovery Architectures

There must be less than 200 milliseconds of round-trip latency on the links between the data
centers. This latency requirement applies to both the heartbeat network and the Fibre Channel
data links.
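As a quick sanity check of the inter-site latency, you can ping a node in the remote data center
over the heartbeat network and review the reported round-trip times against the 200 millisecond
limit (the node name below is hypothetical):

    # ping dc2-node1

For a formal qualification, measurements taken with a dedicated network test tool under
representative load are more meaningful than ping alone.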
Fibre Channel Direct Fabric Attach (DFA) is recommended over Fibre Channel Arbitrated Loop
configurations, due to the superior performance of DFA, especially as the distance increases.
Any combination of the following Fibre Channel capable disk arrays may be used: HP Storage
Disk Array XP, HP Storage Enterprise Virtual Array (EVA), HP Storage 1500cs Modular Smart
Array (Active/Active controller version only), HP Storage 1000 Modular Smart Array
(Active/Active controller version only), or EMC Symmetrix Disk Arrays. Verify with the HP
Storage Division (SWD) that the combination of disk arrays you plan to connect to the same
Host Bus Adapter is supported.
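To see which arrays and LUNs are actually presented through each Fibre Channel Host Bus Adapter
on a node, you can list the hardware paths with ioscan and query an individual HBA with fcmsutil.
This is only a sketch; the HBA device file shown is an assumption for your system:

    # ioscan -fnC fc
    # ioscan -fnC disk
    # fcmsutil /dev/fcd0

The ioscan output shows which disk devices sit behind each HBA hardware path, which is useful when
confirming your planned combination against the SWD support statements.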
1. The Data Link Provider Interface (DLPI) is an industry-standard definition for message
communications to a STREAMS-based network interface driver. The DLPI resides at layer 2, the
data link layer, of the OSI Reference Model.
2. WDM stands for Wavelength Division Multiplexing. There are two WDM technology solutions:
CWDM (Coarse Wavelength Division Multiplexing) and DWDM (Dense Wavelength Division
Multiplexing). CWDM is similar to DWDM but is less expensive, has fewer channels, is less
expandable, and works over distances of up to 100 km.
3. By separately routed network path, we mean a completely independent, physically separate
path, such that the failure of any component in one network path will not result in a network partition
between any nodes in the cluster. In the case of fault-tolerant WDM boxes, there may be a single
common WDM box in the data center. However, separate fibers should still be routed independently
between the WDM boxes in each data center.
The use of Mirrordisk/UX with LVM or SLVM, or software mirroring with VxVM or CVM, is
required to mirror the application data between the primary data centers. Devices with
Active/Passive controllers are not supported with VxVM or CVM mirroring; therefore, only
LVM or Shared LVM with Mirrordisk/UX is supported for mirroring between the data centers
with these devices.
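For LVM with Mirrordisk/UX, a common way to keep one complete copy of the data in each data
center is to place each array's disks in their own physical volume group (PVG) and to create the
logical volumes with PVG-strict allocation and one mirror copy. The following is a minimal sketch
only; the volume group name, device files, minor number, and size are assumptions for illustration:

    # pvcreate /dev/rdsk/c10t0d1
    # pvcreate /dev/rdsk/c20t0d1
    # mkdir /dev/vg_appdata
    # mknod /dev/vg_appdata/group c 64 0x010000
    # vgcreate -g pvg_dc1 /dev/vg_appdata /dev/dsk/c10t0d1
    # vgextend -g pvg_dc2 /dev/vg_appdata /dev/dsk/c20t0d1
    # lvcreate -s g -m 1 -L 4096 -n lv_appdata /dev/vg_appdata

With PVG-strict allocation (-s g) and one mirror copy (-m 1), each copy of the data resides in a
different PVG, so the loss of one array still leaves a complete copy on the array in the other
data center.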
See Table 3 (page 49), Table 4 (page 49) and Table 5 (page 51) for the distances and
number of nodes supported with different volume managers, Serviceguard/SGeRAC versions,
and HP-UX versions.
VxVM/CVM mirroring is supported in Extended Clusters for distances of up to 100 kilometers.
CFS relies on CVM mirroring. See “Special requirements and recommendations for
using VxVM, CVM and CFS in Extended Clusters” below for more information on using these in
Extended Clusters.
An Extended Cluster may contain any combination of physical nodes, nPar nodes, vPar nodes,
and HP Integrity Virtual Machine (HPVM) nodes. For more information on configuration of
nPar, vPar, and HPVM nodes in clusters, see HP Serviceguard Cluster Configuration for HP-UX
11i or Linux Partitioned Systems and Designing High Availability Solutions using HP Integrity
Virtual Machines, available at: www.hp.com/go/hpux-serviceguard-docs.
Special Requirements and Recommendations for using LVM and SLVM in
Extended Clusters
For LVM volume groups using Mirrordisk/UX mirroring, you must activate the volume groups
in the package without LVM quorum (using the vgchange -q n option) after failures involving
the loss of one of the disk arrays. This is because LVM does not allow a volume group to be
activated unless more than 50 percent of the disks in the volume group are present. Note
that activating LVM volume groups without quorum may cause data loss in some rare
scenarios involving multiple failures, so this option should be used carefully. In some cases,
it may be better to require LVM quorum in the package configuration, which will result in the
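As an illustration of the quorum override (the volume group name is hypothetical), the volume
group can be activated exclusively without the quorum check as follows:

    # vgchange -a e -q n /dev/vg_appdata

In a legacy-style package control script, the same choice is typically expressed through the
VGCHANGE variable, for example:

    VGCHANGE="vgchange -a e -q n"

As noted above, this bypasses one of LVM's safeguards and should be used carefully.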