Understanding and Designing Serviceguard Disaster Recovery Architectures

it is recommended that you tune the buffer credits appropriately for the inter-switch link (ISL) used for data replication between the data centers.
If CVM or CFS is in use and all data replication links between the data centers are lost
while the network links remain functional, all mirror copies that the CVM master cannot
contact are likely to be detached from the disk group. As a result, applications running on
nodes that are not in the same data center as the CVM master node may hang, because their
local mirror copies are detached and the remote mirror copies are unreachable. SLVM
handles this scenario differently: it arbitrates (during which time all writes to the
shared volumes temporarily hang) and allows only one of the nodes to continue writing to
the shared volumes; writes on the other node continue to hang until the data replication
links are re-established.
Common WDM Links for both TCP/IP Networking and Fibre Channel Data
The maximum distance supported between the data centers for DWDM and CWDM
configurations is 100 kilometers.
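The 100-kilometer limit matters for synchronous replication latency. The sketch below is a rule-of-thumb estimate (not from this document): it assumes the common approximation of about 5 microseconds of one-way propagation delay per kilometer of fiber.

```python
# Propagation delay over fiber is roughly 5 microseconds per kilometer
# (light travels at about 2/3 of c in glass). This estimates the extra
# round-trip latency the inter-site link adds to every synchronous
# replication write. The constant is an approximation, not a spec value.
FIBER_DELAY_US_PER_KM = 5.0  # approximate one-way delay per km

def round_trip_delay_us(distance_km: float) -> float:
    """One round trip over the inter-site fiber, in microseconds."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM

# At the 100 km maximum, each synchronous write pays about 1 ms
# of propagation delay on top of normal I/O service time.
print(round_trip_delay_us(100))  # -> 1000.0 microseconds (1 ms)
```

This is why shorter inter-site distances give noticeably better write performance for synchronous replication.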
Both TCP/IP networking traffic and Fibre Channel data can be carried through the same WDM box.
WDM hardware is typically designed to be fault tolerant, so it is acceptable to use a single
WDM box (in each data center) for the links between the data centers. However, for the
highest availability, HP recommends redundant WDM boxes (in each data center) for the links
between the data centers. If you are using a single WDM box for the links between the data
centers, you must ensure that the WDM box itself contains no single points of failure (SPOFs),
and you must configure the box's redundant standby fiber link feature. If the WDM box supports
multiple active WDM links, that feature can be used instead of the redundant standby feature.
At least two fiber optic links [4] are required between the Primary data centers, with each
fiber link routed along a geographically different path to prevent the "backhoe problem."
Only one fiber link is required from each Primary data center to the Arbitrator data center;
however, to survive the loss of a link between a Primary data center and the Arbitrator data
center, the network routing must be configured so that a Primary data center can also reach
the nodes in the Arbitrator data center via a route passing through the other Primary data
center.
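The routing requirement above can be sketched as a reachability check over the three-site topology. The site names (DC1, DC2, ARB) are hypothetical illustrations, not names used by Serviceguard:

```python
from collections import deque

# Hypothetical three-site topology: two Primary data centers (DC1, DC2)
# and the Arbitrator site (ARB). Each entry is a bidirectional fiber route.
links = {
    ("DC1", "DC2"),  # redundant link pair between the Primary sites
    ("DC1", "ARB"),  # single link from each Primary to the Arbitrator
    ("DC2", "ARB"),
}

def reachable(src, dst, links):
    """Breadth-first search over the undirected site graph."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for a, b in links:
            nxt = b if a == node else a if b == node else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Lose the direct DC1-ARB link: DC1 must still reach the Arbitrator
# through DC2, which is exactly what the routing must be set up to allow.
degraded = links - {("DC1", "ARB")}
print(reachable("DC1", "ARB", degraded))  # -> True, via DC2
```

If routing only knew the direct DC1-ARB path, the degraded check would fail and the loss of that one link would cut DC1 off from the Arbitrator.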
The network switches can be 100Base-T (TX or FX), 1000Base-T (TX or FX), or 10 Gigabit
Ethernet. The connections between the network switches and the WDM boxes must be fiber
optic.
Direct Fabric Attach mode must be used for the Fibre Channel switch ports connected to the
WDM link. Redundant Fibre Channel switches are required in each data center, unless the
switch offers built-in redundancy.
See the SWD Streams documents available at www.hp.com/storage/spock for supported Fibre
Channel switches. An Extended Fabric license may be required if the ISL between the
switches is longer than 10 kilometers. For optimum data replication performance, tune the
buffer credits appropriately for the inter-switch links (ISLs) used for data replication
between the data centers.
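A common starting point for sizing ISL buffer credits is to keep one round trip's worth of full-size frames in flight. The sketch below is a rule-of-thumb estimate under stated assumptions (approximate fiber delay, maximum Fibre Channel frame size, 8b/10b encoding); it is not any vendor's official formula, and the values should be validated against the switch documentation:

```python
import math

# Rule-of-thumb BB_credit sizing for a long-distance ISL. To keep the
# link fully utilized, enough credits must be outstanding to cover one
# round trip: credits ~= round-trip time / time to serialize one frame.
# All constants are approximations for illustration only.
FIBER_DELAY_US_PER_KM = 5.0   # approximate one-way propagation delay per km
FULL_FRAME_BYTES = 2148       # largest FC frame, headers included (approx.)

def bb_credits_needed(distance_km: float, link_gbps: float) -> int:
    round_trip_us = 2 * distance_km * FIBER_DELAY_US_PER_KM
    # 8b/10b encoding puts ~10 line bits on the wire per data byte;
    # link_gbps * 1000 converts Gbit/s to bits per microsecond.
    serialize_us = FULL_FRAME_BYTES * 10 / (link_gbps * 1000)
    return math.ceil(round_trip_us / serialize_us)

print(bb_credits_needed(100, 2))  # -> 94 credits for 100 km at 2 Gbit/s
```

Note that the estimate grows linearly with both distance and link speed, which is why long, fast ISLs are the cases that exhaust default switch credit allocations.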
It is also possible to use separate network links combined with WDM links for Fibre Channel
data, or WDM links for networking combined with Fibre Channel links for data; however, using
the WDM links for both networking and Fibre Channel data is usually much more cost
effective.
[4] DWDM requires dark fiber; CWDM can use multimode fiber.
WDM Hardware Requirements:
HP does not recommend or certify DWDM or CWDM equipment from any specific vendor. The
customer is responsible for the selection and maintenance of any DWDM or CWDM equipment.