HP Tru64 UNIX and TruCluster Server Version 5.1B-5 Patch Summary and Release Notes (March 2009)

The cluster root (/), /usr, and /var file systems must be located within the same
site.
All SAN-attached storage must be shared and directly accessible from all nodes
at both sites via the SAN, not over the cluster inter-site connection.
The storage must be configured with remote data replication software (such as XP
Continuous Access). Data replication is required so that the site that does not
contain the cluster file systems can boot following a disaster event.
A reboot of all nodes at the surviving site is required after any disaster event
that requires activation of the secondary replicated volumes. You must shut down
the system, reconfigure the storage as necessary (for example, perform an XP
takeover), and reboot the system. The expected quorum votes and other parameters
may need to be modified before the system can boot successfully.
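The reason the expected-votes setting must be adjusted can be sketched with the quorum arithmetic (this sketch assumes the standard TruCluster formula, quorum = (expected votes + 2) / 2, truncated; the four-member, one-vote-per-member cluster used here is a hypothetical example):

```python
# Quorum arithmetic for a two-site cluster after losing one site.
# Assumes the standard TruCluster formula:
#   quorum votes = (expected votes + 2) // 2  (integer division)

def quorum(expected_votes: int) -> int:
    """Votes required for the cluster to form and remain operational."""
    return (expected_votes + 2) // 2

# Hypothetical four-member cluster: one vote per member, two members per site.
expected_votes = 4
print(quorum(expected_votes))                     # 3 votes required

# After a site disaster, only the 2 members at the surviving site remain.
# With expected_votes still set to 4, they cannot reach the 3-vote quorum,
# so the surviving site cannot boot until expected_votes is lowered.
surviving_votes = 2
print(surviving_votes >= quorum(expected_votes))  # False

expected_votes = 2                                # adjusted for surviving site
print(surviving_votes >= quorum(expected_votes))  # True
```

This is why a manual adjustment of the expected quorum votes is part of the recovery procedure: the surviving half of the cluster can never, on its own, reach a quorum computed from the full cluster's expected votes.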
If a site disaster occurs that involves multiple failures, high availability is
lost. Therefore, procedures must be in place for manually rebooting the
surviving site. The surviving site then operates as a normal cluster with
minimal or no data loss.
The cluster interconnect supports a single, combined span of up to 100 km, using
three switches and two 50 km segments.
The configuration must have at least one physical subnet to which all cluster
members and the default cluster alias belong.
The cluster must have an extended, dedicated cluster interconnect, connecting
all cluster members, that serves as a private communication channel between
them. The interconnect must be shielded from any traffic that is not part of
cluster communication, in accordance with the requirements for the LAN-based
cluster interconnect in the TruCluster Cluster Hardware Configuration manual.
Figure A-1 shows an example cluster interconnect configuration for an Enhanced
Distance Cluster. The nodes at Data Center 1 are connected to a switch that is
linked to an intermediate switch over a fiber link of up to 50 km. From the
intermediate switch, another fiber link of up to 50 km connects to a third
switch, to which the remaining two nodes at Data Center 2 are attached, thereby
establishing an overall inter-site distance of up to 100 km.
254 Setting Up an Enhanced Distance Cluster