Managing HP Serviceguard for Linux, Tenth Edition, September 2012

4. Create a $SGCONF/cmclnodelist file on all nodes that you intend to configure
into the cluster, and allow access by all cluster nodes. See “Allowing Root Access
to an Unconfigured Node” (page 158).
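The cmclnodelist file is a plain text file, with one entry per line naming a host and the user permitted to issue configuration commands from it. A minimal sketch, assuming two prospective nodes (the host names ftsys9 and ftsys10 are placeholders):

```
# Hypothetical $SGCONF/cmclnodelist entries; host names are examples.
ftsys9   root
ftsys10  root
```

Copying the same file to every prospective node ensures that each node grants access to all the others.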
NOTE: HP recommends that you also make the name service itself highly available,
either by using multiple name servers or by configuring the name service into a
Serviceguard package.
Ensuring Consistency of Kernel Configuration
Make sure that the kernel configurations of all cluster nodes are consistent with the
expected behavior of the cluster during failover. In particular, if you change any kernel
parameters on one cluster node, they may also need to be changed on other cluster
nodes that can run the same packages.
Enabling the Network Time Protocol
HP strongly recommends that you enable Network Time Protocol (NTP) services on each
node in the cluster. The use of NTP, which runs as a daemon process on each system,
ensures that the system time on all nodes is consistent, resulting in consistent timestamps
in log files and consistent behavior of message services. This ensures that applications
running in the cluster are correctly synchronized. The NTP services daemon, xntpd,
should be running on all nodes before you begin cluster configuration. The NTP
configuration file is /etc/ntp.conf.
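A minimal /etc/ntp.conf sketch, assuming two hypothetical time servers; the server names below are placeholders, and your site's actual NTP sources will differ:

```
# Hypothetical /etc/ntp.conf fragment; server names are examples.
server ntp1.example.com
server ntp2.example.com
driftfile /etc/ntp/drift
```

Pointing all cluster nodes at the same set of time servers keeps their clocks converging on a common time.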
Implementing Channel Bonding (Red Hat)
This section applies to Red Hat installations. If you are using a SUSE distribution, skip
ahead to the next section.
Channel bonding of LAN interfaces is implemented by the use of the bonding driver,
which is installed in the kernel at boot time. With this driver installed, the networking
software recognizes bonding definitions that are created in the /etc/sysconfig/
network-scripts directory for each bond. For example, the file named ifcfg-bond0
defines bond0 as the master bonding unit, and the ifcfg-eth0 and ifcfg-eth1
scripts define each individual interface as a slave.
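As an illustration, the master and slave definitions might look like the following. This is a hedged sketch for a Red Hat system of this era; the IP address, netmask, and bonding options are example values, not prescribed settings:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (master; values are examples)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (slave; ifcfg-eth1 is analogous)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```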
Bonding can be defined in different modes. Mode 0, which is used for load balancing,
uses all slave devices within the bond in parallel for data transmission. This can be done
when the LAN interface cards are connected to an Ethernet switch, with the ports on the
switch configured as Fast EtherChannel trunks. Two switches should be cabled together
as an HA grouping to allow package failover.
For high availability, in which one slave serves as a standby for the bond and the other
slave transmits data, install the bonding module in mode 1. This is most appropriate for
dedicated heartbeat connections that are cabled through redundant network hubs or
switches that are cabled together.
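The bonding mode is typically selected through the bonding driver's module options. A hedged sketch of the two modes discussed here, using the /etc/modprobe.conf syntax common on Red Hat systems of this vintage (the miimon link-monitoring interval of 100 ms is an example value):

```
# Mode 1 (active-backup): one slave carries traffic, the other stands by.
alias bond0 bonding
options bond0 mode=1 miimon=100

# Mode 0 (balance-rr): all slaves transmit in parallel for load balancing.
# options bond0 mode=0 miimon=100
```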
162 Building an HA Cluster Configuration