Designing Disaster Tolerant High Availability Clusters, 10th Edition, March 2003 (B7660-90013)

Building an Extended Distance Cluster Using MC/ServiceGuard
Two Data Center Architecture
The two-data-center architecture is based on a standard
MC/ServiceGuard configuration with half of the nodes in one data center
and the other half in the other data center. The data centers can be
located in the same building or in separate buildings, within the
distance limits of Fibre Channel technology. Configurations with two
data centers have the following requirements:
• There must be an equal number of nodes (1 or 2) in each data center.
• In order to maintain cluster quorum after the loss of an entire data
center, you must configure dual cluster lock disks (one in each data
center); see the example configuration fragment at the end of this
section. Since cluster lock disks are supported only in clusters of up
to 4 nodes, the cluster can contain only 2 or 4 nodes. The
MC/ServiceGuard Quorum Server cannot be used in place of dual cluster
lock disks, because the Quorum Server must reside in a third data
center. Therefore, a three data center cluster is a preferable solution
if dual cluster lock disks cannot be used, or if the cluster must have
more than 4 nodes.
• To protect against the possibility of a split cluster, which is
inherent in dual cluster lock configurations, at least two (preferably
three) independent paths between the two data centers must be used for
heartbeat and cluster lock I/O. Specifically, the path from the first
data center to the cluster lock in the second data center must be
different from the path from the second data center to the cluster lock
in the first data center. Preferably, at least one of the paths used
for heartbeat traffic should be different from each of the paths used
for cluster lock I/O.
• There can be separate networking and Fibre Channel links between the
two data centers, or both networking and Fibre Channel can be carried
over DWDM links between the two data centers. See the section "Network
and Data Replication Links Between the Data Centers" below for more
detail.
• Fibre Channel Direct Fabric Attach (DFA) is recommended over Fibre
Channel Arbitrated Loop configurations because of the superior
performance of DFA, especially as the distance increases. Therefore,
Fibre Channel switches are preferred over Fibre Channel hubs.
• Any combination of the following Fibre Channel-capable disk arrays
may be used: Model FC30, HP StorageWorks FC10, HP
StorageWorks FC60, HP StorageWorks Virtual Arrays, HP
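
The following is a minimal sketch of the cluster configuration (ASCII)
file entries that implement the dual cluster lock and redundant
heartbeat requirements above, for a two-node cluster with one node in
each data center. All node names, volume group names, device file
names, and IP addresses are hypothetical; in practice you would
generate the file with cmquerycl and then edit it to match your own
hardware and network layout.

   # Two-node extended distance cluster: node "dcA1" in data center A,
   # node "dcB1" in data center B. Names, devices, and addresses below
   # are illustrative only.
   CLUSTER_NAME             extdist_cluster

   # Dual cluster lock: one lock volume group/disk in each data center.
   FIRST_CLUSTER_LOCK_VG    /dev/vglock_dcA
   SECOND_CLUSTER_LOCK_VG   /dev/vglock_dcB

   NODE_NAME                dcA1
     NETWORK_INTERFACE      lan0
       HEARTBEAT_IP         192.10.1.1     # heartbeat subnet 1
     NETWORK_INTERFACE      lan1
       HEARTBEAT_IP         192.10.2.1     # heartbeat subnet 2, separate path
     FIRST_CLUSTER_LOCK_PV  /dev/dsk/c4t0d0   # lock disk in data center A
     SECOND_CLUSTER_LOCK_PV /dev/dsk/c5t0d0   # lock disk in data center B

   NODE_NAME                dcB1
     NETWORK_INTERFACE      lan0
       HEARTBEAT_IP         192.10.1.2
     NETWORK_INTERFACE      lan1
       HEARTBEAT_IP         192.10.2.2
     FIRST_CLUSTER_LOCK_PV  /dev/dsk/c4t1d0
     SECOND_CLUSTER_LOCK_PV /dev/dsk/c5t1d0

   # Timing values (in microseconds) are illustrative; extended
   # distances may require larger values than the defaults.
   HEARTBEAT_INTERVAL       1000000
   NODE_TIMEOUT             8000000

Because each node lists one cluster lock disk in each data center, the
surviving data center can still acquire a cluster lock and re-form the
cluster after the complete loss of the other site, which is the purpose
of the dual cluster lock requirement described above.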