Understanding and Designing Serviceguard Disaster Recovery Architectures

contains a single node, only one Arbitrator node is allowed. Cluster lock disks are not supported
in this configuration. Arbitrator nodes are not supported if CVM or CFS is used in the cluster.
If a Quorum Server node is used, there must be an equal number of nodes (1–8) in each
Primary data center. The third location can contain a single Serviceguard Quorum Server
node (running HP-UX or Linux), with a separate power circuit. The Quorum Server may not
be on the same subnet as the cluster nodes, but network routing must be configured so that
all nodes in the cluster can contact the Quorum Server via separate physical routes. Before the
release of Quorum Server A.03.00, only one IP address could be configured for a Quorum
Server. To make the Quorum Server more highly available, it is recommended that you run it
in its own Serviceguard cluster, or that you configure the LAN used for the Quorum Server IP
address with at least two LAN interface cards using APA (Automatic Port Aggregation)
LAN_MONITOR mode, to improve availability in the event of a LAN failure. Beginning with Quorum Server
revision A.03.00, cluster nodes can communicate with the Quorum Server on a primary and
an alternate subnet, for improved tolerance to network failures between the Quorum Server
and the cluster nodes. This functionality is supported in Serviceguard A.11.17 on HP-UX 11i
v2 with patch PHSS_35427 or later; in Serviceguard A.11.18 with patch
PHSS_36997 or later (11i v2) or PHSS_36998 or later (11i v3); and in Serviceguard
A.11.19. In addition, for Serviceguard A.11.17 you must apply the Cluster Object Manager
(COM) patch PHSS_35372 or later, to use Serviceguard Manager to manage a cluster that
uses more than one subnet for communication with the Quorum Server.
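For reference, the primary and alternate Quorum Server addresses are specified in the cluster configuration (ASCII) file with the QS_HOST and QS_ADDR parameters. The sketch below illustrates the relevant entries; the cluster name, hostnames, and timing values are hypothetical examples, not recommendations:

```
# Excerpt from a cluster configuration (ASCII) file.
# QS_HOST names the Quorum Server on the primary subnet;
# QS_ADDR (supported with Quorum Server A.03.00 or later)
# names it on the alternate subnet.
CLUSTER_NAME            cluster_dc1_dc2
QS_HOST                 qs-primary        # hypothetical hostname
QS_ADDR                 qs-alternate     # hypothetical hostname
QS_POLLING_INTERVAL     300000000        # microseconds
QS_TIMEOUT_EXTENSION    2000000          # microseconds
```

After editing the file, the configuration would be verified and distributed with cmcheckconf and cmapplyconf as usual.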
Routing cannot be used for the networks between the data centers, except in Cross-Subnet
configurations. Routing to the third location is allowed if a Quorum Server is used at that site.
MirrorDisk/UX mirroring for LVM, and VxVM mirroring, are supported for clusters of up to 16
nodes.
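As an illustration of how MirrorDisk/UX mirroring is typically laid out across the two data centers, the following sketch creates a volume group with one physical volume group (PVG) per site and a PVG-strict mirrored logical volume, so that each data center holds a complete copy of the data. The device files, PVG names, and sizes are hypothetical:

```
# Hypothetical device files; one disk per data center for brevity.
pvcreate /dev/rdisk/disk10                    # disk in data center 1
pvcreate /dev/rdisk/disk20                    # disk in data center 2
vgcreate -g dc1 /dev/vgdata /dev/disk/disk10  # PVG for data center 1
vgextend -g dc2 /dev/vgdata /dev/disk/disk20  # PVG for data center 2
# One mirror copy (-m 1) with PVG-strict allocation (-s g): the
# mirror copies are kept on different physical volume groups,
# that is, on different sites.
lvcreate -m 1 -s g -L 1024 -n lvol_data /dev/vgdata
```

PVG-strict allocation is what guarantees that a site failure leaves one complete copy of the logical volume intact at the surviving data center.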
CVM 3.5, 4.1, 5.0, or 5.0.1 mirroring is supported for Serviceguard and EC RAC clusters
containing 2, 4, 6, or 8 nodes (and 10, 12, 14, or 16 nodes with CVM 5.0 or 5.0.1 and
Serviceguard A.11.19, SG SMS A.02.01, A.02.01.01, or SG SMS A.03.00). In CVM and
CFS configurations, Arbitrator nodes are not supported, and a Quorum Server node must be
used.
MirrorDisk/UX mirroring for Shared LVM (SLVM) volume groups is supported for EC RAC clusters
containing 2 nodes. Using LVM version 2.0, which is available on HP-UX 11i v3, with SLVM
volume groups is supported for EC RAC clusters containing 2 or 4 nodes.
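A version 2.0 volume group for SLVM might be created as in the following sketch (hypothetical names, device files, and sizes). The -V option selects the volume group version, and the -s and -S values (extent size and maximum volume group size) are required when creating version 2.x groups:

```
# Create a version 2.0 volume group (HP-UX 11i v3).
vgcreate -V 2.0 -s 256 -S 16t /dev/vgrac /dev/disk/disk30
# Mark the volume group as cluster-aware, then activate it in
# shared mode on each EC RAC cluster node.
vgchange -c y /dev/vgrac
vgchange -a s /dev/vgrac
```

Shared activation (vgchange -a s) is what allows the SLVM volume group to be active on multiple EC RAC cluster nodes at the same time.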
See Figure 25 (page 63) for an example of an Extended Distance Cluster configuration in two
data centers with a third location.
62 Extended Distance Cluster Configurations