Understanding and Designing Serviceguard Disaster Recovery Architectures

Extended Distance Cluster with two Data Centers
Configurations with two data centers have the following additional requirements:
• To maintain cluster quorum after the loss of an entire data center, you must configure dual cluster lock disks (one in each data center); a configuration sketch follows this list. Because cluster lock disks are supported only for clusters of up to four nodes, the cluster can contain only two or four nodes. Serviceguard does not support dual lock LUNs, so lock LUNs cannot be used in this configuration. When dual cluster lock disks are used, there is a possibility of split-brain syndrome (where the nodes in each data center form two separate clusters, each with exactly one half of the cluster nodes) if all communication between the two data centers is lost while all nodes continue to run. The Serviceguard Quorum Server prevents the possibility of split brain; however, the Quorum Server must reside in a third site. Therefore, a three data center cluster is the preferable solution for preventing split brain, and it is the only solution if dual cluster lock disks cannot be used or if the cluster must have more than four nodes.
• Two data center configurations are not supported if SONET is used for the cluster interconnects between the Primary data centers.
• There must be an equal number of nodes (one or two) in each data center.
• To protect against the possibility of a split cluster inherent in using dual cluster lock, at least two, and preferably three, independent paths between the two data centers must be used for heartbeat and cluster lock I/O. Specifically, the path from the first data center to the cluster lock at the second data center must be different from the path from the second data center to the cluster lock at the first data center. Preferably, at least one of the paths for heartbeat traffic should be different from each of the paths for cluster lock I/O.
• Routing cannot be used for the networks between the data centers, except in Cross-Subnet configurations.
• Mirrordisk/UX mirroring for LVM and VxVM mirroring are supported for clusters of two or four nodes. However, the dual cluster lock devices can be configured only in LVM Volume Groups.
• CVM 3.5, 4.1, 5.0, or 5.0.1 mirroring is supported for Serviceguard and EC RAC clusters using CVM or CFS. However, the dual cluster lock devices must still be configured in LVM Volume Groups. Because cluster lock disks are supported only for up to four nodes, the cluster can contain only two or four nodes.
• Mirrordisk/UX mirroring for Shared LVM volume groups is supported for EC RAC clusters containing two nodes. Using LVM version 2.0, which is available on HP-UX 11i v3, with SLVM volume groups is supported for EC RAC clusters containing two or four nodes.
See Figure 24 (page 48) for an example of a two-node Extended Distance Cluster configuration.
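The dual cluster lock described in the first requirement is expressed in the cluster configuration file as a first and a second lock volume group defined at the cluster level, with each node providing a path to both lock disks. The excerpt below is only a minimal sketch of that layout, assuming a two-node cluster; the cluster name, node names, volume group names, IP addresses, and device file names are illustrative placeholders, not values taken from this document:

    CLUSTER_NAME             edc_cluster
    FIRST_CLUSTER_LOCK_VG    /dev/vglock1        # lock disk located in data center 1
    SECOND_CLUSTER_LOCK_VG   /dev/vglock2        # lock disk located in data center 2

    NODE_NAME                node_dc1            # node in data center 1
      NETWORK_INTERFACE      lan0
        HEARTBEAT_IP         192.10.10.1
      FIRST_CLUSTER_LOCK_PV  /dev/dsk/c1t1d0     # this node's path to the lock disk in data center 1
      SECOND_CLUSTER_LOCK_PV /dev/dsk/c2t1d0     # this node's path to the lock disk in data center 2

    NODE_NAME                node_dc2            # node in data center 2
      NETWORK_INTERFACE      lan0
        HEARTBEAT_IP         192.10.10.2
      FIRST_CLUSTER_LOCK_PV  /dev/dsk/c3t1d0
      SECOND_CLUSTER_LOCK_PV /dev/dsk/c4t1d0

Consistent with the requirements above, both lock volume groups are LVM volume groups, and the inter-site links used for lock I/O in each direction should be physically independent of each other.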
Extended Distance Cluster configurations with Two Data Centers and a
Third Location
Configurations with two data centers and a third location have the following additional requirements:
• The third location, also known as the Arbitrator data center, can contain either Arbitrator nodes or a Quorum Server node (a Quorum Server configuration sketch follows this list).
• If Arbitrator nodes are used, there must be an equal number of nodes (1–7) in each Primary data center, and the third location can contain one or two Arbitrator nodes. The Arbitrator nodes are standard Serviceguard nodes configured in the cluster; however, they are not allowed to be connected to the shared disks in either of the Primary data centers. Arbitrator nodes are used as tie breakers to maintain cluster quorum when all communication between the two Primary data centers is lost. The data center containing the Arbitrator nodes must be located separately from the nodes in the Primary data centers. If each of the Primary data centers
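If the third location hosts a Quorum Server instead of Arbitrator nodes, the Quorum Server is named in the cluster configuration file of the two-data-center cluster rather than being configured as a cluster node. The lines below are a hedged sketch only; the host name and timing values are illustrative placeholders rather than recommendations from this document:

    # Quorum Server running at the third location (host name is an example)
    QS_HOST                 qs-site3
    QS_POLLING_INTERVAL     300000000       # interval, in microseconds, at which the Quorum Server is checked
    QS_TIMEOUT_EXTENSION    2000000         # optional additional time, in microseconds, allowed for Quorum Server responses

A configuration template containing these parameters can be generated with cmquerycl -q <Quorum Server host>, and the Quorum Server must be installed, running, and authorized to serve the cluster nodes at the third site before the configuration is applied.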