Appendix E: HP MC/SG Cluster Installation Supplement
The information in this topic pertains to installing and configuring OVNM when it is used with an HPOM
Management Server on an MC/SG (Mission Critical/ServiceGuard) cluster.
HPOM installed on an MC/SG cluster provides high availability in the event that the primary node (running the HPOM
Management Server) fails. The HPOM Management Server is installed on the shared disk /var/opt/OV/share,
which is shared between the two nodes of the MC/SG cluster. Being installed on a shared disk enables the
Backup Node to restart the HPOM Management Server in the event of a failure.
The OVNM SPI includes a sessionctl file and a SAF file. The sessionctl file is used for maintaining session information
with NonStop Servers. The SAF file is used to store NonStop events when communication is lost between the
OVNM SPI and the HPOM Management Server.
Refer to section 10.2 Broadcast Mode for more details.
To guarantee delivery of NonStop messages without loss of any event message in the event of a node
failure, the OVNM SPI is installed on the same shared disk as HPOM. This allows the Backup Node to start OVNM
when HPOM is restarted.
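How OVNM is actually restarted together with HPOM depends on how the HPOM ServiceGuard package is set up at the site. The following is a sketch of one possible approach only; the use of the customer-defined command section of the package control script, and the ovnm_start.sh path shown, are assumptions and must be adapted to the local package configuration.
# Sketch: start the OVNM SPI from the HPOM MC/SG package control script
# so that a failover to the Backup Node also restarts OVNM.
function customer_defined_run_cmds
{
    # Start the OVNM SPI after HPOM has been started on the adoptive node
    # (adjust the path to the actual OVNM installation directory).
    /var/opt/OV/ovnm/bin/ovnm_start.sh
}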
OVNM SPI: Installation Considerations on HPOM Cluster
The OVNM software supplied in ovnm_hpux-risc_depot.tar is configured to install the OVNM SPI in the
/opt/OV/ directory. This is the standard directory used for a non-cluster installation.
The ovnm_hpux-risc_depot.tar file contains a script called custom_install.sh, which is located
in the directory into which the tar file is extracted, for example /var/spool/sw.
To change the default OVNM installation directory to HPOM's shared disk /var/opt/OV/share, the
custom_install.sh script must be run before performing swinstall.
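The overall sequence is therefore: extract the tar file, run custom_install.sh, then install the modified depot with swinstall. The following is a sketch only; the depot file name inside the archive and the prompts issued by custom_install.sh are assumptions, so follow the on-screen instructions of the script itself.
# Sketch of the installation sequence (the depot file name is a placeholder).
cd /var/spool/sw                         # directory chosen for the depot
tar -xvf ovnm_hpux-risc_depot.tar        # extracts the depot and custom_install.sh
./custom_install.sh                      # relocate OVNM to /var/opt/OV/share when prompted
swinstall -s /var/spool/sw/<OVNM depot>  # install the modified depot (substitute the actual depot name)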
When custom_install.sh is executed, it makes the following changes to the OVNM package (a quick spot-check of these changes is sketched after the list):
1. The configured executable install directory /opt/OV/ is changed to the newly defined directory, for example
/var/opt/OV/share.
2. The path to the NSCMD utility, which is used by the OVNM NonStop Tools, is changed in the OVNM Application
Templates.
3. The supplied OVNM utility scripts are changed to reflect the new OVNM_HOME_DIR.
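Before running swinstall, the relocation can be spot-checked. The following is a sketch only; which files custom_install.sh actually modifies is an assumption, so treat this as a rough check rather than a definitive verification.
# Sketch: list files under the depot directory that now reference the shared-disk path.
cd /var/spool/sw
find . -type f -exec grep -l '/var/opt/OV/share' {} \; 2>/dev/null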
Refer to Installing OVNM on HPOM UNIX for details.
OVNM SPI Configuration Information
When configuring the OVNM SPI on an MC/SG cluster, it is recommended that the OVNM SPI be configured to run
in broadcast mode and have message queuing enabled.
During a new installation, the ovnm_configure.sh script is executed. When asked for the mode in which the
HPOM Management Server is to run, select broadcast. Refer to 6-3 Configuring OVNM on HPOM UNIX
for details.
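The following sketch shows the invocation only; the script location shown is taken from the ovnm_start.sh example below and the exact prompt wording is an assumption, so follow the prompts that ovnm_configure.sh actually displays.
# Sketch: run the configuration script and choose broadcast mode when prompted.
cd /var/opt/OV/ovnm/bin   # OVNM bin directory (adjust to your OVNM_HOME_DIR)
./ovnm_configure.sh       # select broadcast as the HPOM Management Server mode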
The msgqueuing parameter is set in the ovnm_start.sh script, which is used to start the OVNM SPI. The
following is an example of setting the msgqueuing parameter in /var/opt/OV/ovnm/bin/ovnm_start.sh:
$OVNM_HOME_DIR/ovnm_spi -node -msgqueuing > /dev/null 2>&1 &
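After editing the script and starting the SPI, a quick way to confirm that the process is running with message queuing enabled is to check the process list. This is a sketch only; the process name is taken from the start line above.
# Verify that ovnm_spi was started with the -msgqueuing parameter.
ps -ef | grep '[o]vnm_spi'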
Refer to the chapter Starting OVNM on HPOM UNIX for a list of all available OVNM SPI parameters.