Understanding and Designing Serviceguard Disaster Recovery Architectures

only with SG SMS A.02.00, A.02.01, or A.02.01.01. CVM/CFS 5.0.1 are available
only with SG SMS A.03.00 on HP-UX 11i v3. Beginning with SG SMS A.02.01, CVM
5.0/CFS 5.0 mirroring is supported for distances of up to 100 kilometers for 2, 4, 6, 8,
10, 12, 14, or 16 node clusters on HP-UX 11i v2 or 11i v3. Standalone CVM 5.0 (without
SG SMS) is also supported.
Recommendations and Requirements for EC RAC Configurations with
Oracle RAC 10g or 11g
Oracle 10g Release 2 and later supports up to two copies of the Oracle Cluster Registry (OCR)
and up to three vote disks. For EDC, each copy of the OCR and each vote disk must be
physically mirrored between the two data centers. The mirrored OCR and vote disks ensure
that Oracle Clusterware has access to local physical copies during Oracle Clusterware cluster
reformation.
For EC RAC configurations, HP recommends that you maintain local storage for the Oracle
Clusterware and Oracle Database binaries and HOME directories, to reduce inter-site traffic.
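The mirroring requirement above follows from Oracle Clusterware's quorum rule: a node must have access to a strict majority of the configured vote disks to remain in the cluster. The following sketch illustrates that arithmetic; it is an illustrative calculation, not text from this document, and `max_votedisk_failures` is a hypothetical helper name.

```python
# Illustrative sketch: Oracle Clusterware requires each node to access a
# strict majority (more than half) of the configured vote disks.
def max_votedisk_failures(total_votedisks: int) -> int:
    """Number of vote disks that can be lost while a strict majority remains."""
    return (total_votedisks - 1) // 2

# With the maximum of three vote disks, one copy can be lost and a
# majority of two still remains accessible:
print(max_votedisk_failures(3))  # -> 1
```

This is why each vote disk is physically mirrored between the two data centers: after a site failure, the surviving site must still see a majority of physical copies.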
TCP/IP Network and Fibre Channel Data Links between the Data Centers
There are three supported configurations for the interconnections between the data centers.
Separate Links for TCP/IP Networking and Fibre Channel Data
The maximum distance between the data centers for this type of configuration is currently
limited by the maximum distance supported for the networking type or Fibre Channel link type
being used, whichever is shorter.
Ethernet switches can support varying distances for the inter-switch link between the data
centers, depending on the type of GBIC and fiber cabling used. Inter-switch distances of
up to 100 kilometers are supported in Extended Clusters. Check with the network switch vendor
for the distances supported for the inter-switch link and the hardware and cabling requirements.
There can be a maximum of 500 meters between the Fibre Channel switches in the two data
centers if shortwave GBICs are used. This distance can be increased to 10 kilometers by
using longwave Fibre Channel GBICs in the switches, and to 80 kilometers if Finisar
(long haul) GBICs are used for the Inter-Switch Links (ISLs) between the Fibre Channel
switches. WDM links can also be used for the connection between the Fibre Channel
switches in the two Primary data centers, and can provide ISL connections of up to
100 kilometers in length.
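These distance limits translate directly into replication latency, since every kilometer of ISL adds propagation delay to each synchronous write. The sketch below is a rough back-of-the-envelope illustration, not a figure from this document; the 5 microseconds-per-kilometer value is an assumption based on light traveling at roughly two-thirds of c in optical fiber.

```python
# Assumption: one-way propagation delay in optical fiber is roughly
# 5 microseconds per kilometer (light travels at about 2/3 c in glass).
FIBER_DELAY_US_PER_KM = 5.0

def round_trip_delay_ms(distance_km: float) -> float:
    """Approximate round-trip propagation delay over the ISL, in milliseconds."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM / 1000.0

# At the 100 km maximum, each synchronous write incurs about 1 ms of
# round-trip propagation delay, before any switch or array latency:
print(round_trip_delay_ms(100))  # -> 1.0
```

This is one reason longer ISLs demand careful attention to data replication performance, as discussed later in this section.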
There must be at least two TCP/IP networking links between each Primary data center, routed
along geographically different paths, to prevent the “backhoe problem.” For example, the
backhoe problem can occur when all cables are routed through a single trench and a tractor
on a construction job severs them, disabling all communications between the data centers.
Only a single network link needs to be routed from each Primary data center to the Arbitrator
data center. However, to survive the loss of the network link between a Primary data center
and the Arbitrator data center, the network routing must be configured so that a Primary data
center can also reach the Arbitrator via a route that passes through the other Primary data
center.
There must be at least two Fibre Channel data links between each data center, routed along
geographically different paths. In three-data-center configurations, no Fibre Channel data
links are required for the Arbitrator data center.
Redundant Fibre Channel switches are required in each data center, unless the switch offers
built-in redundancy.
See the SWD Streams documents available at www.hp.com/storage/spock for the supported Fibre
Channel switches. An Extended Fabric license may be required if the inter-switch link (ISL)
between the switches is greater than 10 kilometers. For optimum data replication performance,