Designing Disaster Tolerant High Availability Clusters, 10th Edition, March 2003 (B7660-90013)

Physical Data Replication for ContinentalClusters Using Continuous Access XP
Preparing the Continental Cluster for Data Replication
package. For example, a device group name of db-payroll is easily
associated with the database for the payroll application, whereas a
device group name of group1 would be harder to relate to any
particular application.
Edit the Raid Manager configuration file (horcm0.conf in the above
example) to include the devices and device group used by the
application package. Only one device group may be specified for all
of the devices that belong to a single application package. These
devices are specified in the field HORCM_DEV.
Also complete the HORCM_INST field, supplying the names of only
those hosts that are attached to the XP disk array that is remote from
the disk array directly attached to this host. For example, with the
continental cluster shown in Figure 6-1 (node 1 and node 2 in the
primary cluster and nodes 3 and 4 in the recovery cluster), you would
specify only nodes 3 and 4 in the HORCM_INST field in a file you are
creating on node 1 on the primary cluster. Node 1 would have
previously been specified in the HORCM_MON field.
Figure 6-1 Continental Cluster (packages A and B run on nodes 1 and 2 in the primary cluster; each package's data is replicated over CA links from PVOLs on the local XP disk array to SVOLs on the remote XP disk array attached to nodes 3 and 4 in the recovery cluster)
See file /opt/cmcluster/toolkit/SGCA/Samples-CC/horcm0.conf.<sys-name> for an example.
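Putting these pieces together, a horcm0.conf edited on node 1 might look like the following sketch. The command device path, port numbers, target IDs, and LU numbers are illustrative only and depend on your hardware; the db-payroll group name, the disk names, and the horcm0 service name are assumptions carried over from the examples above:

```
HORCM_MON
#ip_address     service    poll(10ms)   timeout(10ms)
node1           horcm0     1000         3000

HORCM_CMD
#dev_name
/dev/rdsk/c0t0d1

HORCM_DEV
#dev_group      dev_name   port#   TargetID   LU#
db-payroll      disk1      CL1-A   0          1
db-payroll      disk2      CL1-A   0          2

HORCM_INST
#dev_group      ip_address   service
db-payroll      node3        horcm0
db-payroll      node4        horcm0
```

Note that HORCM_DEV lists a single device group (db-payroll) for all of the package's devices, and HORCM_INST names only nodes 3 and 4, the hosts attached to the remote XP disk array.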
11. Restart the Raid Manager instance so that it reads the new
information in the configuration file. Use the commands:
# horcmshutdown.sh <instance-#>
# horcmstart.sh <instance-#>