HP Serviceguard Enterprise Cluster Master Toolkit User Guide HP Part Number: 5900-2131 Published: December 2011 Edition: 3
© Copyright 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents
1 Introduction
    Overview
2 Using the Oracle Toolkit in an HP Serviceguard Cluster
    Overview
    Supported versions
3 Using the Sybase ASE Toolkit in a Serviceguard Cluster on HP-UX
    Overview
    Sybase Information
    Setting up the Application
    Apache Web Server Maintenance
7 Using Tomcat Toolkit in a HP Serviceguard Cluster
    Tomcat Package Configuration Overview
        Local Configuration
        Shared Configuration
1 Introduction The Enterprise Cluster Master Toolkit (ECMT) is a set of templates and scripts that allows the configuration of Serviceguard packages for internet servers as well as for third-party Database Management Systems. This unified set of high availability tools is released on HP-UX 11i v2 and 11i v3. Each toolkit is a set of scripts written for an individual application, used to start, stop, and monitor that application.
Table 1 Toolkit Name/Application Extension and Application Name
Toolkit Name/Application Extension: Application Name
Serviceguard Extension for RAC (SGeRAC): Oracle Real Application Clusters
Serviceguard Extension for SAP (SGeSAP): SAP
Serviceguard Extension for Oracle E-Business Suite (SGeEBS): Oracle E-Business Suite
Serviceguard toolkit for Oracle Data Guard (ODG): Oracle Data Guard
HA NFS Toolkit: Network File System (NFS)
HP VM Guest (configured as a Serviceguard Package): HP VM Guest
To packag
2 Using the Oracle Toolkit in an HP Serviceguard Cluster This chapter describes the High Availability Oracle Toolkit designed for use in a HP Serviceguard environment. This chapter covers the basic steps for configuring an Oracle instance in a HP-UX cluster, and is intended for users who want to integrate an Oracle Database Server with Serviceguard.
package). For more information, see the section “Configuring and packaging Oracle single-instance database to co-exist with SGeRAC packages” (page 53). Support For Oracle Database without ASM Setting Up The Application To set up the application, it is assumed that Oracle is installed in /home/oracle on all the cluster nodes by the 'oracle' database administrator user, and that shared storage is configured. 1.
If you need help creating, importing, or managing the volume group or disk group and filesystem, see Building an HA Cluster Configuration in the Serviceguard user manual available at http://www.hp.com/go/hpux-serviceguard-docs-> HP Serviceguard. • Configuring shared file system using CFS The shared file system can be a CFS mounted file system.
NOTE: If you opted to store the configuration information on a local disk and propagate the information to all nodes, ensure that the pfile/spfile, the password file, and all control files and data files are on shared storage. For this setup, you will need to set up symbolic links to the pfile and the password file from /home/oracle/dbs as follows:
ln -s /ORACLE_TEST0/dbs/initORACLE_TEST0.ora \
${ORACLE_HOME}/dbs/initORACLE_TEST0.ora
ln -s /ORACLE_TEST0/dbs/orapwORACLE_TEST0.
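The linking step above can be sketched as a small shell function. This is an illustration only; the function name `link_db_files` and its staging arguments are invented for this example, while the SID and paths follow the document's ORACLE_TEST0 example.

```shell
# link_db_files SHARED_DBS ORACLE_HOME SID
# Creates symbolic links for the pfile and the password file from the
# shared file system into ${ORACLE_HOME}/dbs, as described above.
link_db_files() {
  shared=$1; ohome=$2; sid=$3
  mkdir -p "${ohome}/dbs"
  ln -sf "${shared}/init${sid}.ora" "${ohome}/dbs/init${sid}.ora"
  ln -sf "${shared}/orapw${sid}"    "${ohome}/dbs/orapw${sid}"
}

# Example, using the document's SID:
# link_db_files /ORACLE_TEST0/dbs /home/oracle ORACLE_TEST0
```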
NOTE: This setup is not supported if Oracle 10g Release 1 is configured with LVM or VxVM; in that case, local configuration is recommended. The above configuration is supported with Oracle 10g Release 2 and Oracle 11g, subject to the condition that Oracle's Automatic Storage Management (ASM) is not configured on that node.
Setting Up the Toolkit Toolkit Overview It is assumed that users have used swinstall to properly install both Serviceguard and the Enterprise Cluster Master Toolkit (referred to as the ECMT), which includes the scripts for Oracle. After installing the toolkit, six scripts and a README file will be in the /opt/cmcluster/toolkit/oracle directory. Two more scripts and one file, used only for modular packages, will also be installed.
Table 2 Legacy Package Scripts (continued)
Script Name: Description
Alert Notification Script (SGAlert.sh): This script is used to send an e-mail to the e-mail address specified by the value of the ALERT_MAIL_ID package attribute, whenever there are critical problems with the package.
Interface Script (toolkit.sh): This script is the interface between the Serviceguard package control script and the Oracle toolkit.
Table 3 Variable or Parameter Name in haoracle.conf file
Table 3 Variable or Parameter Name in haoracle.conf file (continued) NOTE: Setting MAINTENANCE_FLAG to yes, and touching the oracle.debug file in the package directory will put the package in toolkit maintenance mode. Serviceguard A.11.19 release has a new feature which allows individual components of the package to be maintained while the package is still up. This feature is called Package Maintenance mode and is available only for modular packages.
Table 3 Variable or Parameter Name in haoracle.conf file (continued) TIME_OUT The amount of time, in seconds, to wait for the Oracle abort to complete before killing the Oracle processes defined in MONITOR_PROCESSES. The TIME_OUT variable is used as protection against a worst-case scenario where a hung instance prevents the package halt script from completing, therefore preventing the standby node from starting the instance.
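The TIME_OUT behavior described above, waiting a bounded time for a graceful shutdown and then killing the remaining processes, can be sketched generically. This is an illustration of the pattern, not the toolkit's actual halt code; the function name and its arguments are invented for this example.

```shell
# stop_with_timeout PID TIME_OUT
# Request a graceful stop, wait up to TIME_OUT seconds, then force-kill.
stop_with_timeout() {
  pid=$1; time_out=$2
  elapsed=0
  kill -TERM "$pid" 2>/dev/null        # request a graceful shutdown
  while kill -0 "$pid" 2>/dev/null && [ "$elapsed" -lt "$time_out" ]; do
    sleep 1
    elapsed=$((elapsed + 1))
  done
  if kill -0 "$pid" 2>/dev/null; then  # still running after TIME_OUT
    kill -KILL "$pid" 2>/dev/null      # protect against a hung instance
  fi
}
```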
Support for multiple listeners This feature provides support for multiple listeners with the Enterprise Cluster Master Oracle Toolkit. It enables you to configure: 1. Single service to monitor all the listeners together: This is the default behavior. If one of the configured listeners fails, it does not impact the service. However, if all the configured listeners fail, the service fails, leading to failover of the package to an alternate node.
NOTE: Configuring services using both approaches in a single package is not supported. You must either configure all the listeners using a single service, or use a separate service for each of them. Ensure that the elements in the LISTENER_RESTART array and the LISTENER_PASS array correspond to those in the LISTENER_NAME array in the package configuration file.
The default value for TIMEOUT is 30 seconds. This should not be confused with the TIME_OUT package attribute or the service_halt_timeout attribute. ACTION: This attribute defines the action to be taken if a database hang is detected. Currently, this attribute can take one of the following values: • log - Log a message. A message is logged to the package log every time a hang is detected.
If you are using LVM or VxVM Follow the instructions in the chapter Building an HA Cluster Configuration in the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard to create a logical volume infrastructure on a shared disk. The disk must be available to all clustered nodes that will be configured to run this database instance. Create a file system to hold the necessary configuration information and symbolic links to the Oracle executables.
If you are using VxVM, unmount and deport the disk group:
$ umount /ORACLE_TEST0
$ vxdg deport /dev/vx/dsk/DG0_ORACLE_TEST0
Repeat this step on all other cluster nodes to be configured to run the package to ensure Oracle can be brought up and down successfully. • Create the Serviceguard package using the legacy method. The following steps describe how to create the Serviceguard package using the legacy method.
If you are using VxVM • DISK GROUPS Define the disk groups that are used by the Oracle instance. File systems associated with these disk groups are defined as follows: VXVM_DG[0]=/dev/vx/dsk/DG00_${SID_NAME} For example: VXVM_DG[0]=/dev/vx/dsk/DG00_ORACLE_TEST0 Define the file systems which are used by the Oracle instance. NOTE: One of these file systems must be the shared file system/logical volume containing the Oracle Home configuration information ($ORACLE_HOME).
Edit the customer_defined_run_cmds function to execute the toolkit.sh script with the start option. In the example below, the line /etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh start was added, and the ":" null command line deleted. For example:
function customer_defined_run_cmds
{
# Start the Oracle database.
/etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh start
}
SERVICE_NAME ORACLE_TEST0
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
If the listener should also be monitored, another service must be added:
SERVICE_NAME LSNR_0
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
NOTE: If listener monitoring is not intended, do not create a new service.
The run and halt scripts are (typically) identified as the control script, as follows:
RUN_SCRIPT /etc/cmcluster/pkg/ORACLE_TEST0/ORACLE_TEST0.
NOTE: In case of a modular package, the user need not specify the parameter values in the haoracle.conf file. The toolkit populates haoracle.conf on its own. For example: Edit the haoracle.conf script as indicated by the comments in that script.
# cmmakepkg -m ecmt/oracle/oracle pkg.conf where, 'ecmt/oracle/oracle' is the ECMT Oracle toolkit module name. pkg.conf is the name of the package configuration file. 2. Configure the following Serviceguard parameters in the pkg.conf file: package_name — Set to any name desired. package_type — Set to failover. Edit the service parameters if necessary. The service parameters are preset to: service_name oracle_service service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.
provides the necessary I/O fencing mechanism and also the multipathing capability not present in HP-UX 11i v2. When using ASM with Oracle single-instance database versions earlier than 11gR2, the ASM file descriptors were kept open on ASM disk group members even after the disk groups had been dismounted. Oracle has released patches which address the ASM descriptor issue and meet the Serviceguard requirement for supporting ASM. Note that these patches are not required for Oracle 11gR2 or later versions.
NOTE: For information on the proposed framework for ASM integration with Serviceguard, please refer to the whitepaper High Availability Support for Oracle ASM with Serviceguard available at: www.hp.com/go/hpux-serviceguard-docs —> HP Enterprise Cluster Master Toolkit . The Oracle toolkit uses Multi-Node Package (MNP) and the package dependency feature to integrate Oracle ASM with HP Serviceguard.
Figure 1 Oracle database storage hierarchy without and with ASM Why ASM over LVM? As mentioned above, we require ASM disk group members in Serviceguard configurations to be raw logical volumes managed by LVM. We leverage existing HP-UX capabilities to provide multipathing for LVM logical volumes, using either the PV Links feature, or separate products such as HP StorageWorks Secure Path that provide multipathing for specific types of disk arrays.
• should not span multiple PVs, and
• should not share a PV with other LVs.
The idea is that ASM provides the mirroring, striping, slicing, and dicing functionality as needed, and LVM supplies the multipathing functionality not provided by ASM. Figure 2 indicates this one-to-one mapping between LVM PVs and LVs used as ASM disk group members. Further, the default retry behavior of LVM could result in an I/O operation on an LVM LV taking an indefinitely long period of time.
1. Create the volume group with the two PVs, incorporating the two physical paths for each (choosing hh to be the next hexadecimal number that is available on the system, after the volume groups that are already configured).
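On HP-UX, the volume group and logical volume setup described in this procedure might look like the following sketch. All device file names, sizes, the volume group name, and the minor number hh are placeholders; this is a hedged illustration of the described steps, not the manual's exact command listing.

```shell
# Hedged HP-UX sketch: two PVs, two physical paths each (PV links),
# one LV per PV, bad block relocation off, bounded I/O timeout.
pvcreate -f /dev/rdsk/c4t0d1            # first PV, primary path (placeholder)
pvcreate -f /dev/rdsk/c4t0d2            # second PV, primary path (placeholder)
mkdir /dev/vg_asm
mknod /dev/vg_asm/group c 64 0xhh0000   # hh: next free hexadecimal minor number
vgcreate /dev/vg_asm /dev/dsk/c4t0d1 /dev/dsk/c4t0d2
vgextend /dev/vg_asm /dev/dsk/c6t0d1 /dev/dsk/c6t0d2   # alternate (PV link) paths
lvcreate -r n -L 4096 -n lvol1 /dev/vg_asm             # -r n: no bad block relocation
lvchange -t 60 /dev/vg_asm/lvol1                       # bound LV I/O timeout (seconds)
```

These commands require HP-UX and root privileges, so they are shown as an operations fragment rather than a runnable example.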
Serviceguard support for ASM on HP-UX 11i v3 onwards This document describes how to configure Oracle ASM database with Serviceguard for high availability using the ECMT Oracle toolkit. Look at the ECMT support matrix available at http:// www.hp.com/go/hpux-serviceguard-docs -> HP Enterprise Cluster Master Toolkit for the supported versions of ECMT, Oracle, and Serviceguard. Note that for a database failover, each database should store its data in its own disk group.
On ASM instance failure, all dependent database instances will be brought down and will be started on the adoptive node. How Toolkit Starts, Stops and Monitors the Database instance The Toolkit failover package for the database instance provides start and stop functions for the database instance and has a service for checking the status of the database instance. There will be a separate package for each database instance. Each database package has a simple dependency on the ASM package.
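The monitoring behavior described above can be sketched as a generic polling check. This is an illustration of the pattern, not the toolkit's actual service code; the function name `monitor_once` and the process name patterns are invented for this example.

```shell
# monitor_once PATTERN...
# Returns 0 if every named process pattern is found running, 1 otherwise.
monitor_once() {
  for proc in "$@"; do
    ps -ef | grep "$proc" | grep -v grep >/dev/null || return 1
  done
  return 0
}

# A service loop would call this every MONITOR_INTERVAL seconds and exit
# (failing the package service) when a monitored process disappears, e.g.:
# while monitor_once ora_pmon_${SID_NAME} ora_dbw0_${SID_NAME}; do
#   sleep ${MONITOR_INTERVAL}
# done
```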
Figure 4 ASM environment when DB1 fails on node 2 Serviceguard Toolkit Internal File Structure HP provides a set of scripts for the framework proposed for ASM integration with Serviceguard. The ECMT Oracle scripts contain the instance-specific logic to start, stop, and monitor both the ASM and the database instance. These scripts support both the legacy and the modular method of packaging. Although the legacy method of packaging is still supported, it is deprecated and will not be supported in the future.
Figure 5 Internal file structure for legacy packages Modular packages use the package configuration file for the ASM or database instance on the Serviceguard specific side. The package configuration parameters are stored in the Serviceguard configuration database at cmapplyconf time, and are used by the package manager in its actions on behalf of this package.
ASM File Descriptor Release When an ASM disk group is dismounted on a node in the Serviceguard cluster, it is expected that the ASM instance closes the related descriptors of files opened on the raw volumes underlying the members of that ASM disk group. However, there may be a possibility that processes of the ASM instance and client processes to the ASM instance may not close the descriptors.
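Checking which processes still hold files open on the underlying volumes is the kind of verification this section describes. On HP-UX, fuser(1M) on the raw volume serves this purpose; the /proc scan below is a portable, illustrative sketch of the same idea, with the function name invented for this example.

```shell
# holders_of FILE
# Print the PIDs of processes that currently hold FILE open
# (Linux-style /proc scan; on HP-UX, fuser(1M) would be used instead).
holders_of() {
  target=$1
  for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "$target" ]; then
      echo "$fd" | cut -d/ -f3    # the <pid> component of /proc/<pid>/fd/<n>
    fi
  done | sort -u
}
```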
Assume that the Serviceguard cluster, ASM instance and one or more database instances are already installed and configured. • Halt the ASM and database instances. • Configure the ASM MNP using the ECMT Oracle scripts provided by HP following the instructions in the README file in the scripts bundle. • Start the ASM instance on all the configured nodes by invoking cmrunpkg on the ASM MNP.
NOTE: If the Oracle database is running in a cluster where SGeRAC packages are also running, then the Oracle database must be disabled from being started automatically by the Oracle Clusterware.
Table 6 Variables or Parameters in haoracle.conf file (continued) Parameter Name Description ASM_HOME The home directory where ASM is installed. This parameter must be set for both the ASM and the ASM database instance packages. ASM_USER The user name of the Oracle ASM administrator. This parameter must be set for both ASM instance and ASM database instance packages. ASM_SID The ASM session name. This uniquely identifies an ASM instance.
Table 6 Variables or Parameters in haoracle.conf file (continued) Parameter Name Description NOTE: If the Oracle database package is dependent on the SGeRAC clusterware multi-node package (OC MNP), then the Oracle database package will automatically go into toolkit maintenance mode if the SGeRAC OC MNP is put into toolkit maintenance mode. To put the SGeRAC OC MNP into toolkit maintenance mode, its MAINTENANCE_FLAG attribute must be set to yes, and a file oc.
ASM Package Configuration Example Oracle Legacy Package Configuration Example 1. ASM Multi-Node Package Setup and Configuration NOTE: This package must not be created if SGeRAC packages are created in the same cluster. Create your own ASM package directory under /etc/cmcluster and copy over the scripts in the bundle. Log in as root:
# mkdir /etc/cmcluster/asm_package_mnp
# cd /etc/cmcluster/asm_package_mnp
Then copy the framework scripts provided to this location:
cp /opt/cmcluster/toolkit/oracle/* .
SERVICE_CMD[0]="/etc/cmcluster/asm_package_mnp/toolkit.sh monitor"
SERVICE_RESTART[0]="-r 2"
Add in the customer_defined_run_cmds function:
/etc/cmcluster/asm_package_mnp/toolkit.sh start
Add in the customer_defined_halt_cmds function:
if [ "$SG_HALT_REASON" = "user_halt" ]; then
    reason="user"
else
    reason="auto"
fi
/etc/cmcluster/asm_package_mnp/toolkit.
MONITOR_PROCESSES[3]=ora_smon_${SID_NAME} MONITOR_PROCESSES[4]=ora_lgwr_${SID_NAME} MONITOR_PROCESSES[5]=ora_reco_${SID_NAME} MONITOR_PROCESSES[6]=ora_rbal_${SID_NAME} MONITOR_PROCESSES[7]=ora_asmb_${SID_NAME} MAINTENANCE_FLAG=yes MONITOR_INTERVAL=30 TIME_OUT=30 KILL_ASM_FOREGROUNDS=yes PARENT_ENVIRONMENT=yes CLEANUP_BEFORE_STARTUP=no USER_SHUTDOWN_MODE=abort OC_TKIT_DIR=/etc/cmcluster/crs # This attribute is needed only when this toolkit is used in an SGeRAC cluster.
If a listener service is configured in the package configuration file, set the following parameters: SERVICE_NAME[1]="ORACLE_LSNR_SRV" SERVICE_CMD[1]="/etc/cmcluster/db1_package/toolkit.sh monitor_listener" SERVICE_RESTART[1]="-r 2" Configure the Package IP and the SUBNET. Add in the customer_defined_run_cmds function: /etc/cmcluster/db1_package/toolkit.
service_name oracle_service service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_monitor" service_restart none service_fail_fast_enabled no service_halt_timeout 300 service_name oracle_listener_service service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_monitor_listener" service_restart none service_fail_fast_enabled no service_halt_timeout 300 Uncomment the second set of service parameters in the package configuration file which are used to monitor the listener.
DEPENDENCY_NAME asm_dependency
DEPENDENCY_CONDITION =up
DEPENDENCY_LOCATION same_node
Since LVM logical volumes are used in disk groups, specify the name(s) of the volume groups on which the ASM disk groups reside, for the attribute "vg". Configure the ip_subnet and ip_address parameters. Configure the toolkit parameter TKIT_DIR. This parameter is synonymous with the legacy package directory (for example, /etc/cmcluster/dg1_package).
package_type - Set to "failover". Edit the service parameters if necessary. The service parameters are preset to:
service_name oracle_service
service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_monitor"
service_restart none
service_fail_fast_enabled no
service_halt_timeout 300
service_name oracle_listener_service
service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.
7) Apply the package configuration.
# cmapplyconf -P db1pkg.conf
This command applies the package configuration to the cluster. It also creates the toolkit configuration directory defined by the package attribute TKIT_DIR on all target nodes, if it is not already present, and then creates the toolkit configuration file in it with the values specified in the db1pkg.conf file.
8) 9) Open the haoracle.conf file generated in the package directory (TKIT_DIR).
3. • Create a new package ASCII file and control script or edit the existing package ASCII and control scripts in the database package directory. Configure the parameters as mentioned for the database package. • Copy the scripts from the package directory to all the configured nodes. • Remove the older database package configuration if a new package for the database instance is being configured. • Apply the database package configuration. • Run the database package.
Refer to the whitepaper Migrating Packages from Legacy to Modular Style, October 2007, and also the whitepaper on modular package support in Serviceguard at http://www.hp.com/go/hpux-serviceguard-docs -> HP Enterprise Cluster Master Toolkit. • To add the values from the older toolkit configuration file to this modular package configuration file, issue the following command:
# cmmakepkg -i modular.ascii -m ecmt/oracle/oracle -t modular1.ascii
• "modular1.
NOTE: If using this configuration, the 'PFILE' parameter in the haoracle.conf configuration file should be set to the specific pfile on a given host. For example, the PFILE in haoracle.conf on node1 should be set to /ORACLE_TEST0/dbs/initORACLE_TEST0.ora.node1. Error Handling On startup, the Oracle shell script will check for the existence of the init${SID_NAME}.ora or spfile${SID_NAME}.ora file in the shared ${ORACLE_HOME}/dbs directory.
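The per-node pfile naming described in the note above can be sketched as a small helper. This is an illustration only; the function name `node_pfile` is invented, and the directory layout follows the document's ORACLE_TEST0 example.

```shell
# node_pfile SID
# Print the host-specific pfile path described above, e.g. on node1:
# /ORACLE_TEST0/dbs/initORACLE_TEST0.ora.node1
node_pfile() {
  sid=$1
  echo "/${sid}/dbs/init${sid}.ora.$(hostname)"
}
```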
By default, Local OS Authentication is enabled for Oracle 10g and 11g, with the default value "LOCAL_OS_AUTHENTICATION_listener_name = ON". The absence of this parameter in the LISTENER.ORA file implies the feature is enabled. If it has been disabled, it can be re-enabled by commenting out or removing "LOCAL_OS_AUTHENTICATION_listener_name = OFF" from LISTENER.ORA, or by setting it to "LOCAL_OS_AUTHENTICATION_listener_name = ON". NOTE: toolkit.
NOTE: If the package fails during maintenance (for example, the node crashes), the package will not automatically fail over to an adoptive node. It is the responsibility of the user to start the package on an adoptive node. See the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard for more details.
1. 2. ORA_CRS_HOME: When using ECMT oracle toolkit in a coexistence environment, this attribute must be set to Oracle CRS HOME. OC_TKIT_DIR: When using ECMT Oracle in a coexistence environment, this attribute must be set to the SGeRAC Toolkit’s Oracle Clusterware (OC) package directory. Figure 7 describes the various package dependencies between the single-instance Oracle database package created using ECMT and the Oracle RAC packages created using SGeRAC.
service_fail_fast_enabled no
service_halt_timeout 300
If the listener is configured, uncomment the second set of service parameters, which are used to monitor the listener. Edit the dependency parameters as shown:
dependency_name oc_dependency
dependency_condition =up
dependency_location same_node
Since LVM logical volumes are used in disk groups, specify the name(s) of the volume groups on which the ASM disk groups reside, for the parameter "vg".
Configuring a legacy failover package for an Oracle database using ASM in a Coexistence Environment To configure an ECMT legacy package for an Oracle database: 1. Log in as the Oracle administrator and run the following command to set the database management policy to manual: $ORACLE_HOME/bin/srvctl modify database -d -y MANUAL 2. Create your own database package directory under /etc/cmcluster and copy over the files shipped in the bundle.
PACKAGE_NAME - Set to any name desired. PACKAGE_TYPE - Set to failover. RUN_SCRIPT /etc/cmcluster/db1_package/db1pkg.cntl HALT_SCRIPT /etc/cmcluster/db1_package/db1pkg.
ECMT Oracle Toolkit Maintenance Mode To put the single-instance ECMT Oracle toolkit package in maintenance mode, the package attribute MAINTENANCE_FLAG must be set to ‘yes’ and a file named ‘oracle.debug’ must exist in the package directory. In a coexistence environment, the single-instance database will go into toolkit maintenance mode if any of the following conditions is met: 1. If the MAINTENANCE_FLAG is set to ‘yes’ in the ECMT package configuration for single-instance Oracle and the oracle.debug file exists in the package directory.
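The maintenance-mode condition described above can be expressed as a small predicate. This is an illustrative sketch, not the toolkit's actual code; the function name `in_maintenance` is invented for this example.

```shell
# in_maintenance MAINTENANCE_FLAG PKG_DIR
# Succeeds (exit 0) when maintenance mode is in effect: the flag is
# "yes" AND the oracle.debug file exists in the package directory.
in_maintenance() {
  flag=$1; pkg_dir=$2
  [ "$flag" = "yes" ] && [ -f "$pkg_dir/oracle.debug" ]
}
```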
To support ASM for the EBS DB tier, migrate from non-ASM storage to ASM storage. The following steps are for migrating the EBS DB from non-ASM storage to ASM storage: NOTE: While performing the migration procedure, a scheduled downtime is needed for the application.
1. Halt the SGeEBS APPS tier packages on all nodes that currently use the DB tier package whose database storage needs to be migrated.
2. Ensure that approximately 200 GB of free disk space is available to create the ASM disk groups for the EBS database.
3.
3 Using the Sybase ASE Toolkit in a Serviceguard Cluster on HP-UX This chapter describes the High Availability Sybase Adaptive Server Enterprise (ASE) Toolkit designed for use in a Serviceguard environment. This document covers the basic steps to configure a Sybase ASE instance in Serviceguard Cluster, and is intended for users who want to integrate a Sybase ASE Server with Serviceguard in HP-UX environments.
If the choice is to store the configuration files on a local disk, the configuration must be replicated to local disks on all nodes configured to run the package. Here the Sybase ASE binaries reside on local storage. If any change is made to the configuration files, the files must be copied to all nodes. The user is responsible for ensuring that the systems remain synchronized. Shared Configuration This is the recommended configuration.
NOTE: The dataserver and the monitor server both need to be installed on the same disk, because the monitor server depends on the presence of the dataserver. 2. Make sure that the 'sybase' user has the same user ID and group ID on all nodes in the cluster. Create the user or group with the following commands, and verify the uid/gid by editing the /etc/passwd file:
# groupadd sybase
# useradd -g sybase -d -p sybase
3.
/dev/vg02_SYBASE0/lvol1 #Logical volume Sybase ASE data /dev/vg02_SYBASE0/lvol2 #Logical volume Sybase ASE data See the Sybase ASE documentation to determine which format is more appropriate for the setup environment that user prefers.
Table 7 Sybase ASE attributes (continued)
Sybase ASE Attributes: Description
ASE_SERVER: Name of the Sybase ASE instance set during installation or configuration of the ASE. This uniquely identifies an ASE instance.
ALERT_MAIL_ID: Sends an e-mail message to the specified e-mail address when packages fail. This e-mail is generated only when packages fail, and not when a package is halted by the operator.
1. On package start-up, it starts the Sybase ASE instance and launches the monitor process.
2. On package halt, it stops the Sybase ASE instance and the monitor process.
This script also contains the functions for monitoring the Sybase ASE instance. By default, only the 'dataserver' process of ASE is monitored. This process is contained in the variable MONITOR_PROCESSES. Sybase Package Configuration Example • Package Setup and Configuration 1.
The following is an example of specifying Sybase ASE specific variables: ecmt/sybase/sybase/TKIT_DIR /tmp/SYBASE0 ecmt/sybase/sybase/SYBASE /home/sybase ecmt/sybase/sybase/SYBASE_ASE ASE-15_0 ecmt/sybase/sybase/SYBASE_OCS OCS-15_0 ecmt/sybase/sybase/SYBASE_ASE_ADMIN sybase ecmt/sybase/sybase/SALOGIN sa ecmt/sybase/sybase/SAPASSWD somepasswd NOTE: Keep this commented if the password for the administrator is not set. Along with other package attributes, this password is also stored in the Cluster Database.
to determine when a package has exceeded its restart limit as defined by the "service_restart" parameter in the package control script. To reset the restart counter, execute the following command:
cmmodpkg [-v] [-n node_name] -R -s service_name package_name
After setting up the Serviceguard environment, each clustered Sybase ASE instance should have the following files in the toolkit-specific directories: /etc/cmcluster/scripts/ecmt/sybase/tkit_module.
For more information on configuring security for a Serviceguard cluster, see the Securing Serviceguard, March 2009 whitepaper available at http://www.hp.com/go/ hpux-serviceguard-docs —>HP Serviceguard . Serviceguard allows the Role Based Access feature to be switched off, in which case only the root user will be able to view the package attributes.
do for setting up a single-instance of ASE for failover in a Serviceguard cluster. Consult Sybase ASE documentation for a detailed description on how to setup ASE in a cluster. • Sybase ASE interfaces file For setting up a single-instance of ASE in a Serviceguard cluster, the ASE instance should be available at the same IP address across all nodes in the cluster. To achieve this, the interfaces file of the ASE instance, available at the $SYBASE directory of that instance, should be edited.
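Editing the interfaces file so the ASE instance listens at the same address across all nodes can be sketched with a simple substitution. This is an illustration only; the function name, file contents, host name, and IP address below are placeholders, not Sybase's documented procedure.

```shell
# set_interfaces_ip FILE OLD NEW
# Rewrite the host field of an ASE interfaces file entry, replacing the
# node-local host name with the package's relocatable address.
set_interfaces_ip() {
  ifile=$1; old=$2; new=$3
  sed "s/$old/$new/g" "$ifile" > "$ifile.tmp" && mv "$ifile.tmp" "$ifile"
}

# e.g. set_interfaces_ip $SYBASE/interfaces node1.example.com 10.0.0.5
```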
following procedure should be used: NOTE: The example assumes that the package name is SYBASE0, the package directory is /opt/cmcluster/pkg/SYBASE0, and the SYBASE_HOME is configured as /SYBASE0. • Disable the failover of the package through cmmodpkg command: $ cmmodpkg -d SYBASE0 • Pause the monitor script. Create an empty file /sybase.debug as shown below: $ touch /sybase.
4 Using the DB2 database Toolkit in a Serviceguard Cluster in HP-UX DB2 is an RDBMS product from IBM. This chapter describes the High Availability toolkit for DB2 V9.1, V9.5, and V9.7, designed to be used in a Serviceguard environment. This chapter covers the basic steps to configure DB2 instances in a Serviceguard cluster. For more information on the support matrix, see the compatibility matrix available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard.
the Serviceguard user manual available at http://www.hp.com/go/hpux-serviceguard-docs -> HP Serviceguard.
2. Make sure that the minimum hardware and software prerequisites are met before the product installation is initiated. The latest installation requirements are listed on the IBM web site: http://www-306.ibm.com/software/data/db2/9/sysreqs.html.
3. Use the DB2 Setup Wizard or the db2install script to install the database server, deferring the instance creation.
4.
2 node2 0
3 node2 1
9. To enable communication between the partitions, edit the /etc/services, /etc/hosts, and .rhosts files with the required entries. Edit the /etc/hosts file by adding the IP address and hostname of the DB2 server. For example:
[payroll_inst@node1 ~]> vi /etc/hosts
10.0.0.1 DBNODE.domainname.com DBNODE
Edit the ~/.rhosts file for the instance owner. For example:
[payroll_inst@node1 ~]> vi /home/payroll_inst/.rhosts
node1 payroll_inst
node2 payroll_inst
10.
NOTE: In case of multiple physical and logical partition configuration of DB2, the number of ports added in the services file has to be sufficient for the number of partitions created in the current node as well as the number of partitions created on the other nodes. This is to ensure that enough ports are available for all partitions to startup on a single node if all packages managing different partitions are started on that node.
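Reserving enough /etc/services ports for all partitions to start on one node can be sketched as a small generator. This is an illustration only; the service-name format, instance name, base port, and partition count are placeholders, so check the entries DB2 actually expects for your instance before editing /etc/services.

```shell
# gen_db2_ports INSTANCE BASE_PORT NPARTS
# Emit one /etc/services-style line per partition, starting at BASE_PORT,
# so that every partition can start on a single node.
gen_db2_ports() {
  inst=$1; base=$2; nparts=$3
  i=0
  while [ "$i" -lt "$nparts" ]; do
    echo "DB2_${inst}_${i} $((base + i))/tcp"
    i=$((i + 1))
  done
}

# e.g. gen_db2_ports payroll_inst 60000 4 >> /etc/services  (as root)
```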
NOTE: In a Serviceguard cluster environment, the fault monitor facility provided by DB2 must be turned off. Fault monitor facility is a sequence of processes that work together to ensure that DB2 instance is running. Fault monitor facility is specifically designed for non-clustered environments and has the flexibility to be turned off if the DB2 instance is running in a cluster environment.
For legacy packages, there will be one user configuration script (hadb2.conf) and three functional scripts (toolkit.sh, hadb2.sh, and hadb2.mon) which work with each other to integrate DB2 with the Serviceguard package control scripts. For modular packages, there is an Attribute Definition File (ADF), a Toolkit Module Script (tkit_module.sh), and a Toolkit Configuration File Generator Script (tkit_gen.
Table 10 Variables in hadb2.conf File (continued) Variable Name Description MONITOR_INTERVAL The time interval in seconds between the checks to ensure that the DB2 database is running. Default value is 30 seconds. TIME_OUT The amount of time, in seconds, to wait for the DB2 shutdown to complete before killing the DB2 processes defined in MONITOR_PROCESSES.
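The way MONITOR_INTERVAL and TIME_OUT interact can be sketched as follows. This is an illustration only, not the toolkit's actual hadb2.mon; a background sleep stands in for a monitored DB2 daemon:

```shell
MONITOR_INTERVAL=1   # seconds between health checks
TIME_OUT=2           # seconds to wait for a normal shutdown

# Stand-in daemon, started detached so the kernel reaps it when it dies.
( sleep 30 & echo $! > ./demo.pid )
DB2_PID=$(cat ./demo.pid)

# One monitoring pass, as the monitor would do every MONITOR_INTERVAL seconds.
if kill -0 "$DB2_PID" 2>/dev/null; then STATUS=running; else STATUS=down; fi
sleep "$MONITOR_INTERVAL"

# Shutdown: request a normal stop, then escalate after TIME_OUT seconds.
kill "$DB2_PID" 2>/dev/null
waited=0
while kill -0 "$DB2_PID" 2>/dev/null && [ "$waited" -lt "$TIME_OUT" ]; do
    sleep 1
    waited=$((waited + 1))
done
# Forceful halt only if the normal shutdown did not finish in time.
if kill -0 "$DB2_PID" 2>/dev/null; then
    kill -9 "$DB2_PID"
fi
```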
After waiting for a few minutes, check for the existence of DB2 processes (there should be several, identified by "db2"):
ps -ef | grep db2
Bring the database down, unmount the file system, and deactivate the volume group:
./toolkit.sh stop
umount /mnt/payroll
vgchange -a n /dev/vg0_payroll
Repeat this step on all other cluster nodes that will be configured to run the package, to ensure that DB2 can be brought up and down successfully on each of them.
Creating Serviceguard package using Modular method.
• Create the Serviceguard package using Modular method. Follow these steps to create Serviceguard package using Modular method:
1. Create a directory for the package:
#mkdir /etc/cmcluster/pkg/db2_pkg/
2. Copy the toolkit template and script files from the db2 directory:
#cd /etc/cmcluster/pkg/db2_pkg/
#cp /opt/cmcluster/toolkit/db2/* ./
3. Create a configuration file (pkg.conf) as follows:
#cmmakepkg -m ecmt/db2/db2 pkg.conf
4. Edit the package configuration file (pkg.conf).
MAINTENANCE_FLAG yes
Set MONITOR_INTERVAL, in seconds, to specify how often the partition or instance is monitored. For example:
MONITOR_INTERVAL 30
Set the TIME_OUT variable to the time that the toolkit must wait for a normal shutdown to complete before initiating a forceful halt of the application. For example:
TIME_OUT 30
Set the monitored_subnet variable to the subnet that is monitored for the package.
5. Use the cmcheckconf command to check the validity of the specified configuration. For example:
#cmcheckconf -P pkg.conf
6. If the cmcheckconf command does not report any errors, use the cmapplyconf command to add the package to the Serviceguard environment. For example:
#cmapplyconf -P pkg.conf
Creating Serviceguard package using legacy method.
• Create the Serviceguard package using legacy method. Follow these steps to create Serviceguard package using legacy method:
1. Create a directory for the package:
mkdir /etc/cmcluster/pkg/db2_pkg/
2. Copy the toolkit files from the db2 directory:
cd /etc/cmcluster/pkg/db2_pkg/
cp /opt/cmcluster/toolkit/db2/* ./
3. Create a configuration file (pkg.conf) and a package control script (pkg.cntl) as follows:
cmmakepkg -p pkg.conf
cmmakepkg -s pkg.cntl
NOTE: There should be one set of configuration and control script files for each DB2 instance.
test_return 52
}
The Serviceguard package configuration file (pkg.conf): The package configuration file is created with cmmakepkg -p, and should be placed in the following location: /etc/cmcluster/pkg/db2_pkg/ For example: /etc/cmcluster/pkg/db2_pkg/pkg.conf Edit the configuration file as indicated by the comments in that file. The package name must be unique within the cluster. For clarity, use the name of the database instance to name the package.
Table 12 DB2 Package Files (continued) File Name Description hadb2.sh Main shell script of the toolkit. hadb2.mon Monitors the health of the application. hadb2.conf Toolkit DB2 configuration file. toolkit.sh Interface between pkg.cntl and hadb2.sh. Adding the Package to the Cluster After the setup is complete, add the package to the Serviceguard cluster and start it up:
cmapplyconf -P pkg.conf
The message "Starting DB2 toolkit monitoring again after maintenance" appears in the Serviceguard package control script log.
• Enable the package failover:
$ cmmodpkg -e db2_payroll
If the package fails during maintenance (for example, the node crashes), the package will not automatically fail over to an adoptive node. It is the responsibility of the user to start the package on an adoptive node. For more information, see the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs.
5 Using MySQL Toolkit in an HP Serviceguard Cluster This chapter describes the MySQL Toolkit for use in the HP Serviceguard environment. It is intended for users who want to configure the MySQL Database Server application in an HP Serviceguard cluster environment using the MySQL Toolkit. This toolkit supports the Enterprise MySQL Database Server Application 5.0.56 and later.
The following three files are also installed; they are used only for the modular method of packaging. The following Attribute Definition File (ADF) is installed in /etc/cmcluster/modules/ecmt/mysql. Table 14 ADF File in Modular Package in MySQL File Name Description mysql.1 For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute TKIT_DIR which is analogous to the package directory in the legacy method of packaging.
This is the recommended configuration. Here, the configuration and database files are on shared disks, visible to all nodes. Since the storage is shared, there is no additional work to ensure that all nodes have the same configuration at any point in time. To run MySQL in an HP Serviceguard environment: • Each node must have the same version of the MySQL Database Server software installed. • Each node that will be configured to run the package must have access to the configuration files.
an HA Cluster Configuration chapter of the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard.
2. Following the instructions in the documentation for MySQL, create a database on the lvol in /MySQL_1. This information may be viewed online at http://www.mysql.com/documentation/mysql/bychapter/manual_Tutorial.html#Creating_database.
3. Copy the configuration file /etc/my.cnf to /MySQL_1/my.cnf.
4. Modify /MySQL_1/my.cnf.
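The copy-and-modify steps above can be sketched as follows. The parameter names shown (datadir, port, socket) and their new values are illustrative assumptions, not toolkit requirements, and local files stand in for /etc/my.cnf and /MySQL_1/my.cnf:

```shell
# Create a sample my.cnf stand-in, then derive the per-instance copy.
SRC=./etc_my.cnf            # stand-in for /etc/my.cnf
DST=./MySQL_1_my.cnf        # stand-in for /MySQL_1/my.cnf
cat > "$SRC" <<'EOF'
[mysqld]
datadir=/var/lib/mysql
port=3306
socket=/var/lib/mysql/mysql.sock
EOF
# Point the instance at the shared file system with a unique port/socket.
sed -e 's|^datadir=.*|datadir=/MySQL_1|' \
    -e 's|^port=.*|port=3307|' \
    -e 's|^socket=.*|socket=/MySQL_1/mysql.sock|' "$SRC" > "$DST"
```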
MySQL Configuration File (my.cnf) The following parameters are contained in the configuration file /etc/my.cnf. This file must be copied to the file system on the shared storage (in our example, /etc/my.cnf would be copied to /MySQL_1/my.cnf). The parameters then need to be set manually, with unique values for each DB instance configured. Table 18 Parameters in MySQL Configuration File (my.cnf)
Table 19 User Variables in hamysql.conf file (continued) File name Description NOTE: If DATA_DIRECTORY is used, my.cnf MUST reside in the DATA_DIRECTORY location. This directory is also used as the data directory for this instance of the database server. PID_FILE="/var/run/mysql/mysqld.pid" This is the path where PID file for the MySQL daemon is created for the Parent PID. If this variable is defined, it overrides the "pid-file" defined in the MySQL configuration file my.cnf.
Table 21 Package Control Script Parameters Parameter Name [control script parameters] Control script Description VG vgMySQL # VG created for this package LV /dev/vgMySQL/lvol1 # Logical vol created in VG FS /MySQL_1 # File system for DB FS_TYPE "ext2" # FS type is "Extended 2" FS_MOUNT_OPT "-o rw" # mount with read/write options SUBNET "192.70.183.0" # Package Subnet IP "192.70.183.171" # Relocatable IP #The service name must be the same as defined in the package #configuration file.
#cd /etc/cmcluster/pkg/mysql_pkg/
#cp /opt/cmcluster/toolkit/mysql/* ./
3. Create a configuration file (pkg.conf) as follows:
#cmmakepkg -m ecmt/mysql/mysql pkg.conf
4. Edit the package configuration file.
NOTE: MySQL toolkit configuration parameters in the package configuration file are prefixed by ecmt/mysql/mysql when used in Serviceguard A.11.19.00 or later. For example: /etc/cmcluster/pkg/mysql_pkg/pkg.conf The configuration file should be edited as indicated by the comments in that file.
7. Ensure that both the root and mysql users have read, write, and execute permissions for the package directory.
8. Distribute the package directory to all nodes in the cluster.
9. Apply the Serviceguard package configuration using the command: cmapplyconf -P MySQL1.conf
10. Enable package switching for the MySQL package using:
cmmodpkg -e -n node1 -n node2 mysql_1
cmmodpkg -e mysql_1
11. The package should now be running. If it is not, start the package by issuing the cmrunpkg command.
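The permission requirement in step 7 can be verified with a quick sketch; the directory name below is a stand-in, and in practice you would also chown the directory so the mysql user's group owns it:

```shell
# Grant read/write/execute on the package directory to owner and group,
# as both the root and mysql users need full access.
PKG_DIR=./mysql_pkg_demo    # stand-in for /etc/cmcluster/pkg/mysql_pkg
mkdir -p "$PKG_DIR"
chmod 775 "$PKG_DIR"
PERMS=$(ls -ld "$PKG_DIR" | cut -c1-10)
echo "$PERMS"
```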
NOTE: If the package fails during maintenance (for example, the node crashes), the package will not automatically fail over to an adoptive node. It is the responsibility of the user to start the package on an adoptive node. For more details, see the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard. This feature is enabled only when the configuration variable MAINTENANCE_FLAG is set to "yes" in the MySQL toolkit configuration file.
6 Using an Apache Toolkit in an HP Serviceguard Cluster This chapter describes the toolkit that integrates and runs HP Apache in the HP Serviceguard environment. This chapter is intended for users who want to install, configure, and execute the Apache web server application in a Serviceguard clustered environment. It is assumed that users of this document are familiar with Serviceguard and the Apache web server, including installation, configuration, and execution.
Table 23 Files in Apache Toolkit (continued) File Name Description toolkit.sh Interface between the package control script and the Apache Toolkit main shell script. SGAlert.sh Generates the alert mail on package failure. The following three files, listed in Table 24 (page 98), are also installed; they are used only for the modular method of packaging. The following Attribute Definition File (ADF) is installed in /etc/cmcluster/modules/ecmt/apache.
Apache Package Configuration Overview Apache starts up by reading the httpd.conf file from the "conf" sub-directory of the SERVER_ROOT directory which is configured in the toolkit user configuration file hahttp.conf. Configuration rules include the following: • Each node must have the same version of the HP-UX based Apache Web Server. • Each node must have the same SERVER_ROOT directory where identical copies of the configuration file for each instance are placed.
Configuring the Apache Web Server with Serviceguard To manage an Apache Web Server with Serviceguard, the default Apache configuration must be modified.
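A minimal sketch of such a modification: rewrite the Listen and PidFile directives so the instance binds to the package's relocatable IP and keeps its own PID file. The IP, port, and file paths are placeholders drawn from this chapter's examples, and a local file stands in for the real $SERVER_ROOT/conf/httpd.conf:

```shell
CONF=./httpd_demo.conf          # stand-in for $SERVER_ROOT/conf/httpd.conf
printf 'Listen 80\nPidFile /var/run/httpd.pid\n' > "$CONF"
REL_IP=192.70.183.171           # relocatable package IP
# Bind the instance to the relocatable IP and give it a unique PID file.
sed -e "s|^Listen .*|Listen ${REL_IP}:80|" \
    -e 's|^PidFile .*|PidFile /var/run/httpd_s1.pid|' \
    "$CONF" > "${CONF}.pkg"
```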
Shared Configuration To configure a shared file system which is managed by LVM, create volume group(s) and logical volume(s) on the shared disks and construct a new file system for each logical volume for the Apache Web Server document root (and server root). Static web data such as web pages with no data update features may reside on local disk. However, all web data that needs to be shared must reside on shared storage.
node startup and shutdown, create a one-node package for each node that runs an Apache instance. Active - Passive In an active-passive configuration, an instance of Apache Web Server can run on only one node at any time. A package of this configuration is a typical failover package. Active-passive support on CFS comes with a caution or limitation: when an Apache instance is up on one node, no attempt should be made to start the same instance of Apache on any other node.
Setting up the package The following procedures include the steps to configure a Serviceguard package running the Apache instance, which includes customizing the Serviceguard package configuration file and package control script. (See the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard for more detailed instructions on cluster configuration.)
To configure the dependency of the Apache package, set the following configurable parameters in the package configuration file:
DEPENDENCY_NAME http1_dependency
DEPENDENCY_CONDITION SG-CFS-MP-1 = up
DEPENDENCY_LOCATION SAME_NODE
2. Create a Serviceguard package control file with the command cmmakepkg -s http_pkg.cntl. The package control file must be edited as indicated by the comments in that file, and customized to your environment.
NOTE: If CFS mounted file systems are used, then volume groups, logical volumes, and file systems must not be configured in the package control script; instead, a dependency on the SG CFS packages must be configured.
3. Configure the Apache user configuration file hahttp.conf as explained in the next section.
4. Copy this package configuration directory to all other package nodes.
Use the same procedure to create multiple Apache packages (multiple Apache instances) that will be running on the cluster.
Table 26 Configuration Variables (continued) Configuration Variables Description server root directory is /opt/hpws22/apache. However, to have multiple instances running in a cluster, set a value for this variable. PID_FILE (for example, PID_FILE="/var/run/httpd_s1.pid") This variable holds the Process ID file path of the Apache server instance. Each Apache instance must have its own PID file that keeps the main process ID of the running Apache server instance.
NOTE: Before working on the toolkit configuration, the package directory (for example, /etc/cmcluster/pkg/http_pkg1) must be created and all toolkit scripts copied to the package directory.
1. Edit the Apache Toolkit user configuration file. In the package directory, edit the user configuration file (hahttp.conf) as indicated by the comments in that file. For example:
SERVER_ROOT="/shared/apache1/httpd"
PID_FILE="/var/run/httpd1.pid"
SSL="yes"
MAINTENANCE_FLAG="yes"
2.
Set the TKIT_DIR variable to the path of the package directory. For example, TKIT_DIR /etc/cmcluster/pkg/apache_pkg.
5. Use the cmcheckconf command to check the validity of the specified configuration. For example:
#cmcheckconf -P pkg.conf
6. If the cmcheckconf command does not report any errors, use the cmapplyconf command to add the package to the Serviceguard environment. For example:
#cmapplyconf -P pkg.conf
A message "Starting Apache toolkit monitoring again after maintenance" appears in the Serviceguard package control script log.
◦ Enable the package failover:
cmmodpkg -e http_pkg1
NOTE: If the package fails during maintenance (for example, the node crashes), it will not automatically fail over to an adoptive node. It is the responsibility of the user to start the package on an adoptive node. See the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs.
7 Using Tomcat Toolkit in an HP Serviceguard Cluster This chapter describes the toolkit that integrates and runs HP Tomcat in the HP Serviceguard environment. It is intended for users who want to install, configure, and execute the Tomcat servlet engine application in a Serviceguard clustered environment. It is assumed that users of this document are familiar with Serviceguard and the Tomcat servlet engine, including installation, configuration, and execution.
Table 28 ADF File for Modular Method of Packaging File Name Description tomcat.1 For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute TKIT_DIR which is analogous to the package directory in the legacy method of packaging. The ADF is used to generate a modular package ASCII template file. The following files are located in /etc/cmcluster/scripts/ecmt/tomcat after installation.
Tomcat Package Configuration Overview Tomcat starts up by reading the server.xml file from the conf sub-directory of the CATALINA_BASE directory which is configured in the toolkit user configuration file hatomcat.conf. The configuration rules include the following: • Each node must have the same version of the HP-UX based Tomcat Servlet Engine. • Each node must have the same CATALINA_BASE directory where identical copies of the configuration file for each instance are placed.
configuration or a combination of both. CATALINA_BASE needs to be unique for each Tomcat instance. Configuring the Tomcat server with Serviceguard To manage a Tomcat Server with Serviceguard, the default Tomcat configuration needs to be modified.
a. Create a Volume Group "vg01" for the shared storage.
b. Create a Logical Volume "lvol1" on the volume group "vg01".
c. Construct a new file system on the Logical Volume "lvol1".
d. Create a directory named "/shared/tomcat_1" on a local disk.
e. Repeat this step on all nodes configured to run the package.
f. Mount the device "/dev/vg01/lvol1" on "/shared/tomcat_1".
g. Copy all files from "/opt/hpws22/tomcat/conf" to "/shared/tomcat_1/conf".
h. Create a directory "logs" under "/shared/tomcat_1/".
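The directory and copy steps (d through h) can be sketched with throwaway local directories standing in for the real volume group and mount point; all paths here are stand-ins:

```shell
SRC_CONF=./opt_tomcat_conf         # stands in for /opt/hpws22/tomcat/conf
SHARED=./shared_tomcat_1           # stands in for /shared/tomcat_1
mkdir -p "$SRC_CONF"
printf '<Server port="8005"></Server>\n' > "$SRC_CONF/server.xml"
mkdir -p "$SHARED"                 # step d: create the mount point
cp -R "$SRC_CONF" "$SHARED/conf"   # step g: copy the conf files
mkdir -p "$SHARED/logs"            # step h: create the logs directory
```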
NOTE: As mentioned before, under the shared configuration you can choose to put the Tomcat binaries on a shared file system as well. This can be configured by two methods. To create a shared configuration for the Tomcat Server on the shared file system mounted at /mnt/tomcat:
a. Method 1
1) Create the shared storage that will be used to store the Tomcat files for all nodes configured to run the Tomcat package. Once that storage has been configured, create the mount point for that shared storage on these nodes.
on a file system "/shared/tomcat_1" directory, that resides on a logical volume "lvol1" in a shared volume group "/dev/vg01". Here, it is assumed that the user has already determined the Serviceguard cluster configuration, including the cluster name, node names, heartbeat IP addresses, and so on. See the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard for more details.
Example 1 For example:
LVM:
VG[0]="vg01"
LV[0]="/dev/vg01/lvol1"
FS[0]="/shared/tomcat_1"
FS_TYPE[0]="vxfs"
FS_MOUNT_OPT[0]="-o rw"
VxVM:
VXVM_DG[0]="DG_00"
LV[0]="/dev/vx/dsk/DG_00/LV_00"
FS[0]="/shared/tomcat_1"
FS_TYPE[0]="vxfs"
FS_MOUNT_OPT[0]="-o rw"
IP[0]="192.168.0.1"
SUBNET[0]="192.168.0.0"
#The service name must be the same as defined in the package
#configuration file.
SERVICE_NAME[0]="tomcat1_monitor"
SERVICE_CMD[0]="/etc/cmcluster/pkg/tomcat_pkg1/toolkit.sh monitor"
#mkdir /etc/cmcluster/pkg/tomcat_pkg/ 2. Copy the toolkit template and script files from tomcat directory. #cd /etc/cmcluster/pkg/tomcat_pkg/ #cp /opt/cmcluster/toolkit/tomcat/* ./ 3. Create a configuration file (pkg.conf) as follows. #cmmakepkg -m ecmt/tomcat/tomcat pkg.conf 4. Edit the package configuration file. NOTE: Tomcat toolkit configuration parameters in the package configuration file have been prefixed by ecmt/tomcat/tomcat when used in Serviceguard A.11.19.00 or later.
Table 30 Legacy Package Scripts (continued) Script Name Description user configuration data. This file will be included (that is, sourced) by the toolkit main script hatomcat.sh. Main Script (hatomcat.sh) This script contains a list of internal-use variables and functions that support the start and stop of a Tomcat instance. This script will be called by the Toolkit Interface Script to do the following: • On package start, it starts the Tomcat server instance.
Table 31 User Configuration Variables (continued) User Configuration Variables Description tomcat are configured, this port needs to be unique for each instance. The default value is 8081. MONITOR_INTERVAL (for example, MONITOR_INTERVAL=5) Specify a time interval in seconds for monitoring the Tomcat instance. The monitor process checks the Tomcat daemons at this interval to validate that they are running. The default value is 5 seconds.
If Tomcat does not start for some other reason, the Tomcat toolkit script halts the package on that node and tries it on another node. To troubleshoot why Tomcat has not started correctly, examine the Tomcat error log files, which are available in the $CATALINA_BASE/logs directory. Tomcat Server Maintenance There might be situations when the Tomcat Server has to be taken down for maintenance.
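The log check described in the troubleshooting paragraph above can be scripted. The catalina.out file name and the SEVERE marker are assumptions (log names and formats vary by Tomcat version), and a local directory stands in for the real CATALINA_BASE:

```shell
CATALINA_BASE=./catalina_demo   # stand-in for the real CATALINA_BASE
mkdir -p "$CATALINA_BASE/logs"
printf 'INFO: Server startup\nSEVERE: BindException: Address already in use\n' \
    > "$CATALINA_BASE/logs/catalina.out"
# Count startup errors across all log files under $CATALINA_BASE/logs.
ERRORS=$(cat "$CATALINA_BASE/logs"/* | grep -c 'SEVERE')
echo "SEVERE errors found: $ERRORS"
```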
Configuring Apache web server with Tomcat in a single package NOTE: This section contains details on configuring Apache web server with Tomcat in a single package only for the legacy method of packaging. For configuring Apache and Tomcat in a single package using the modular method of packaging, see whitepaper Modular package support in Serviceguard for Linux and ECM Toolkits available at http://www.hp.com/go/ hpux-serviceguard-docs —>HP Serviceguard Enterprise Cluster Master Toolkit.
Example 2 For example:
VG[0]="vg01"
LV[0]="/dev/vg01/lvol1"
FS[0]="/share/pkg_1"
FS_MOUNT_OPT[0]="-o rw"
FS_UMOUNT_OPT[0]=""
FS_FSCK_OPT[0]=""
FS_TYPE[0]="vxfs"
Configure two services, one each for the Tomcat and Apache instances:
SERVICE_NAME[0]="tomcat_pkg1.monitor"
SERVICE_CMD[0]="/etc/cmcluster/pkg/tomcat_pkg1/toolkit.sh monitor"
SERVICE_RESTART[0]=""
SERVICE_NAME[1]="http_pkg1.monitor"
SERVICE_CMD[1]="/etc/cmcluster/pkg/http_pkg1/toolkit.sh monitor"
8 Using SAMBA Toolkit in a Serviceguard Cluster This chapter describes the High Availability SAMBA Toolkit for use in the Serviceguard environment. The chapter is intended for users who want to install and configure the SAMBA toolkit in a Serviceguard cluster. Readers should be familiar with Serviceguard configuration as well as with HP CIFS Server application concepts and installation/configuration procedures.
NOTE: This toolkit supports the following HP Serviceguard versions:
◦ A.11.19
◦ A.11.
The following three files are also installed; they are used only for the modular method of packaging. The Attribute Definition File (ADF) is installed in /etc/cmcluster/modules/ecmt/samba. Table 33 Attribute Definition File (ADF) File Name Description samba.1 For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute TKIT_DIR which is analogous to the package directory in the legacy method of packaging.
In a typical local configuration, identical copies of the HP CIFS Server configuration files reside in exactly the same locations on the local file system on each node. All HP CIFS Server file systems are shared between the nodes. It is the responsibility of the toolkit administrator to maintain identical copies of the HP CIFS Server components on all nodes. If the shared file system allows only read operations, then local configuration is easy to maintain.
netbios name = smb1 interfaces = XXX.XXX.XXX.XXX/xxx.xxx.xxx.xxx bind interfaces only = yes log file = /var/opt/samba/smb1/logs/log.%m lock directory = /var/opt/samba/smb1/locks pid directory = /var/opt/samba/smb1/locks Replace the "XXX.XXX.XXX.XXX/xxx.xxx.xxx.xxx" with one (space separated) relocatable IP address and subnet mask for the Serviceguard package. Copy the workgroup line from the /etc/opt/samba/smb.conf file.
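The per-instance [global] section above can be generated with a short sketch; the relocatable IP and netmask values are placeholders, and the output file is a local stand-in for the instance's smb.conf:

```shell
REL_IP=192.70.183.172           # relocatable package IP (placeholder)
MASK=255.255.255.0              # subnet mask (placeholder)
# Write the per-instance [global] section with the substituted interface.
cat > ./smb1_global.conf <<EOF
[global]
   netbios name = smb1
   interfaces = ${REL_IP}/${MASK}
   bind interfaces only = yes
   log file = /var/opt/samba/smb1/logs/log.%m
   lock directory = /var/opt/samba/smb1/locks
   pid directory = /var/opt/samba/smb1/locks
EOF
```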
f. Copy all the necessary files, depending on the configuration.
g. Unmount "/shared/smb1".
3. Using CFS
To configure a Samba package in a CFS environment, the SG CFS packages need to be running in order for the Samba package to access CFS mounted file systems. See your Serviceguard manual for information on how to configure SG CFS packages. Create a directory /shared/smb1 on all cluster nodes. Mount the CFS file system on /shared/smb1 using the CFS packages.
$ cd /etc/cmcluster/smb1 $ cp /opt/cmcluster/toolkit/samba/* . #copy to $PWD To create both the package configuration (smb_pkg.conf) and package control (smb_pkg.cntl) files, cd to the package directory (for example, cd /etc/cmcluster/smb1) 1. Create a package configuration file with the command cmmakepkg -p. The package configuration file must be edited as indicated by the comments in that file. The package name must be unique within the cluster.
/etc/cmcluster/smb1/toolkit.sh start
test_return 51
}
4. Edit the customer_defined_halt_cmds function in the package control script to execute the toolkit.sh script with the stop option. In the example below, the line /etc/cmcluster/smb1/toolkit.sh stop was added, and the ":" null command line deleted. EXAMPLE:
function customer_defined_halt_cmds
{
# Stop the HP CIFS Server.
/etc/cmcluster/smb1/toolkit.sh stop
test_return 51
}
5. Configure the user configuration file hasmb.conf
Table 35 Legacy Package Scripts (continued) Script Name Description script ( hasmb.sh) and will constantly monitor two HP CIFS Server daemons, smbd and nmbd. Interface Script (toolkit.sh) This script is an interface between a package control script and the toolkit main script (hasmb.sh ). Creating Serviceguard package using Modular method. Follow the steps below to create Serviceguard package using Modular method: 1. Create a directory for the package. #mkdir /etc/cmcluster/pkg/samba_pkg/ 2.
#cmcheckconf -P pkg.conf 6. If the cmcheckconf command does not report any errors, use the cmapplyconf command to add the package into Serviceguard environment. For Example: #cmapplyconf -P pkg.conf Toolkit User Configuration All the user configuration variables are kept in a single file in shell script format.
Table 36 User Configuration Variables (continued) Configuration Variables Description NOTE: Setting MAINTENANCE_FLAG to "yes" and touching the samba.debug file in the package directory will put the package in toolkit maintenance mode. Serviceguard A.11.19 release has a new feature which allows individual components of the package to be maintained while the package is still up. This feature is called Package Maintenance mode and is available only for modular packages.
CIFS Server Maintenance Mode There might be situations, when a CIFS Server instance has to be taken down for maintenance purposes like changing configuration, without having the instance to migrate to standby node. The following procedure should be implemented: NOTE: The example assumes that the package name is SMB_1, package directory is /etc/ cmcluster/pkg/SMB_1. • Disable the failover of the package through cmmodpkg command. $ cmmodpkg -d SMB_1 • Pause the monitor script.
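The pause/resume handshake works because the monitor checks for a debug file in the package directory, as described earlier for the samba.debug file. A sketch (the directory is a local stand-in for /etc/cmcluster/pkg/SMB_1, and MAINTENANCE_FLAG is assumed set to "yes"):

```shell
PKG_DIR=./SMB_1_demo            # stand-in for /etc/cmcluster/pkg/SMB_1
mkdir -p "$PKG_DIR"
touch "$PKG_DIR/samba.debug"    # pause monitoring for maintenance
# What the monitor loop would check on each pass:
if [ -f "$PKG_DIR/samba.debug" ]; then MODE=maintenance; else MODE=normal; fi
rm -f "$PKG_DIR/samba.debug"    # resume monitoring when maintenance is done
```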
these instances there may be cases where the application will need to be restarted and the files reopened, as a switchover is a logical shutdown and restart of the CIFS Server. • File Locks File locks are not preserved during failover, and applications are not advised about any lost file locks. • Print Jobs If a failover occurs when a print job is in process, the job may be printed twice or not at all, depending on the job state at the time of the failover.
• HP CIFS Server as a Master Browser If HP CIFS Server is configured as the domain master browser (that is, the domain master support parameter is set to "yes"), the database will be stored in the /var/opt/samba/ locks/browse.tdb file. HP does not recommend doing this in an HA configuration. However, if the CIFS Server is configured as the domain master browser, /var/opt/samba/ locks/browse.tdb should be set as a symbolic link to browse.tdb on the shared file system.
9 Support and other resources Information to collect before contacting HP Be sure to have the following information available before you contact HP: • Software product name • Hardware product model number • Operating system type and version • Applicable error message • Third-party hardware or software • Technical support registration number (if applicable) How to contact HP Use the following methods to contact HP technical support: • In the United States, see the Customer Service / Contact HP U
HP authorized resellers For the name of the nearest HP authorized reseller, see the following sources: • In the United States, see the HP U.S. service locator web site: http://www.hp.com/service_locator • In other locations, see the Contact HP worldwide web site: http://welcome.hp.com/country/us/en/wwcontact.html Documentation feedback HP welcomes your feedback. To make comments and suggestions about product documentation, send a message to: docsfeedback@hp.
[] In command syntax statements, these characters enclose optional content. {} In command syntax statements, these characters enclose required content. | The character that separates items in a linear list of choices. ... Indicates that the preceding element can be repeated one or more times. WARNING An alert that calls attention to important information that, if not understood or followed, results in personal injury.