Data Protector Cluster Cell Manager Configuration and Integration on RHCS

Stopping dps...
[stop] script:DP_services
[stop] clusterfs:SharedDisk
[stop] ip:10.10.1.9
[stop] service:dps
Stop of dps complete
1. After creating the resource group and the DP services, enable the dps service according to its failover domain rules by executing the following command:
$ clusvcadm -e dps -F
2. Check the status of the nodes and the DP service by executing the clustat command (a manual relocation check is sketched after the output below):
$ clustat
Cluster Status for new_cluster @ Mon Jun 20 16:26:46 2011
Member Status: Quorate

 Member Name                    ID   Status
 ------ ----                    ---- ------
 dpi00182                       1    Online, Local, rgmanager
 dpi00181                       2    Online, rgmanager

 Service Name          Owner (Last)          State
 ------- ----          ----- ------          -----
 service:dps           dpi00182              started
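
A quick way to confirm that failover works is to manually relocate the dps service to the second node and check the status again. The following is only a sketch, not part of the documented procedure; it assumes the node name dpi00181 from the clustat output above and uses the standard clusvcadm relocate option:

$ clusvcadm -r dps -m dpi00181
$ clustat

After the relocation, clustat should list dpi00181 as the owner of service:dps. Relocate the service back with the same command and the original node name once the check is complete.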
Testing the start/stop order of Data Protector resources
Before deploying the Data Protector cluster services in a production environment, it is advisable to test the start and stop operations on the DP services in test mode. To do this, execute the rg_test command with the test option (the start test is shown below; a corresponding stop test is sketched at the end of this section).
Note: Any errors reported during the start and stop operations should be fixed while the DP services are still running in test mode.
$ rg_test test cluster.conf start service dps
Running in test mode.
Starting dps...
<debug> 10.10.1.9 already configured
<debug> mount -t gfs /dev/mapper/DP_Grp-DP_Vol /FileShare
<info> Executing /opt/omni/sbin/omnisv start
HP Data Protector services successfully started.
Start of dps complete
$ /opt/omni/sbin/omnisv status
ProcName Status [PID]
===============================
rds : Active [11421]
crs : Active [11436]
mmd : Active [11434]
kms : Active [11435]
omnitrig: Active
uiproxy : Active [11442]
Sending of traps disabled.
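
The start test above leaves the DP services running under rg_test. To exercise the stop side in the same way, the rg_test command can be run with the stop action; this is a sketch based on the start example and on the resource release order shown at the beginning of this section:

$ rg_test test cluster.conf stop service dps

The output should report the resources being released in the reverse of the start order and end with "Stop of dps complete", after which omnisv status should no longer show the Data Protector processes as Active.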
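
If the goal is only to display the computed start and stop order of the dps resources without actually executing them, rgmanager's rg_test also provides a noop action. Treat the following as an assumption to verify against your rgmanager release rather than part of the documented procedure:

$ rg_test noop cluster.conf start service dps
$ rg_test noop cluster.conf stop service dps

These commands print the resource ordering for the service without starting or stopping anything on the node.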