HP Serviceguard for Linux Version A.11.19 Deployment Guide, September 2012

c. Deactivate the volume group:
umount /dev/vgws/lvol1
vgchange -a n vgws
vgchange --deltag $(uname -n) vgws
2. Repeat the steps on the second server.
You should see the content from each server in the test file.
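The hand-off test above can be collected into a single shell function, run on one node and then repeated on the other. This is a sketch only: the activation half (--addtag, -a y) is assumed to mirror the deactivation commands shown in step c, and the /mnt mount point and /mnt/testfile name are hypothetical stand-ins for the values used in the earlier steps.

```shell
# Sketch of the volume-group hand-off test; run on one node, then the other.
# Assumptions: vgws/lvol1 as above; /mnt and /mnt/testfile are hypothetical,
# and the activation commands mirror the deactivation shown in step c.
vg_handoff_test() {
    node=$(uname -n)
    vgchange --addtag "$node" vgws   # tag the VG for this node
    vgchange -a y vgws               # activate the volume group
    mount /dev/vgws/lvol1 /mnt
    cat /mnt/testfile                # content written by the other node
    echo "written on $node" >> /mnt/testfile
    umount /dev/vgws/lvol1
    vgchange -a n vgws               # deactivate the volume group
    vgchange --deltag "$node" vgws   # release this node's tag
}
# On a cluster node: vg_handoff_test
```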
Configure the Cluster
In the next steps, you will create the cluster, define the node membership, configure the cluster
heartbeat and cluster lock LUN device.
1. From an internet browser such as Internet Explorer, invoke HP System Management
Homepage, https://[hostname]:2381. For example:
https://eve.cup.hp.com:2381
2. Log in (use the root user and the password set during installation of the operating system).
3. Go to the Tools tab.
4. Click the “Serviceguard Manager” link to launch the Serviceguard Manager.
5. Click the “Create Cluster” button on the right.
NOTE: You may see the following message: “There is no cluster configured.”
6. In the Create Cluster window, enter the Cluster Name (for example, Test) and
select the checkboxes for both nodes (for example, adam and eve).
7. Go to the Network tab.
In the “Subnets” section, enter, for example:
Subnet: 16.89.84.128, Type: Heartbeat
In the “Select Subnet Configuration” section, enter, for example:
Node Network Address
adam bond0 16.89.84.245
eve bond0 16.89.84.247
8. Go to the Lock tab.
For the Cluster Lock Type, select “Lock Lun”.
Enter the Lock Lun Path for each node, for example:
Node Lock Lun Path
adam /dev/mapper/mpath0p1
eve /dev/mapper/mpath0p1
Select Finish.
NOTE: When using Device Mapper Multipath, the path to the cluster Lock LUN, for example
/dev/mapper/mpath0p1, must be the same on each node.
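Because the lock LUN path must be identical on each node, a quick preflight check can be sketched as below. check_lock_lun is a hypothetical helper, not a Serviceguard command; run it with the same path on every node before applying the configuration.

```shell
# Sketch: confirm the lock LUN path is present before applying the
# configuration. check_lock_lun is a hypothetical helper; run it with the
# same path (for example /dev/mapper/mpath0p1) on each node.
check_lock_lun() {
    path=$1
    if [ -e "$path" ]; then
        echo "OK: $path present on $(uname -n)"
    else
        echo "MISSING: $path on $(uname -n)" >&2
        return 1
    fi
}
# On each node: check_lock_lun /dev/mapper/mpath0p1
```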
9. Select Check Configuration and look for any errors.
NOTE: You may get a warning about the default NODE_TIMEOUT value. This warning can
be ignored here, but refer to the documentation when finalizing your cluster.
10. Select Apply Configuration. Select “OK” in the pop-up dialog box.
11. To verify the cluster configuration, run the following options from the Administration menu of the
HP Serviceguard Manager Summary page to test that each node can run the cluster in the
event that the other node fails:
a. Administration -> Run Cluster (on both nodes)
b. Administration -> Halt Node (select adam)
c. Administration -> Run Node (on adam)
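The same failover test can also be driven from the command line. The sketch below assumes the standard Serviceguard commands (cmruncl, cmhaltnode, cmrunnode, cmviewcl) and the example node names adam and eve from this guide.

```shell
# Sketch: command-line equivalent of the failover test in step 11.
# Assumes the Serviceguard CLI (cmruncl, cmhaltnode, cmrunnode, cmviewcl)
# and the example node names from this guide.
sg_failover_test() {
    cmruncl            # a. start the cluster on both nodes
    cmviewcl -v        # confirm both adam and eve are running
    cmhaltnode adam    # b. halt node adam; the cluster stays up on eve
    cmviewcl -v        # eve should still be running the cluster
    cmrunnode adam     # c. rejoin adam to the running cluster
}
# On a cluster node: sg_failover_test
```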