Managing HP Serviceguard for Linux, Eighth Edition, March 2008

Building an HA Cluster Configuration
Preparing Your Systems
Implementing Channel Bonding (Red Hat)
This section applies to Red Hat installations. If you are using a SUSE
distribution, skip ahead to the next section.
Channel bonding of LAN interfaces is implemented by the bonding
driver, which is loaded into the kernel at boot time. With this
driver loaded, the networking software recognizes bonding definitions
that are created in the /etc/sysconfig/network-scripts directory for
each bond. For example, the file named ifcfg-bond0 defines bond0 as
the master bonding unit, and the ifcfg-eth0 and ifcfg-eth1 scripts
define each individual interface as a slave.
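As a sketch of what these scripts might look like (the IP address,
netmask, and interface names here are illustrative, and exact
parameters vary by Red Hat release), ifcfg-bond0 could contain:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (master)
DEVICE=bond0
IPADDR=192.168.1.10        # illustrative address; use your own
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
```

and each slave script, such as ifcfg-eth0, could contain:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0  (slave)
DEVICE=eth0
MASTER=bond0               # enslave this interface to bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
```

The slave scripts carry no IP address of their own; the address is
assigned to the bond.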
Bonding can be configured in different modes. Mode 0, which is used
for load balancing, transmits data over all slave devices in the bond
in parallel. This configuration requires that the LAN interface cards
be connected to an Ethernet switch, such as an HP ProCurve switch,
with the switch ports configured as Fast EtherChannel trunks. Two
switches should be cabled together as an HA grouping to allow package
failover.
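On the Red Hat releases current at this writing, the bonding mode is
typically selected when the driver is loaded, via an entry in
/etc/modprobe.conf. A minimal sketch for mode 0 might look like the
following (the miimon link-monitoring interval of 100 ms is a common
illustrative value, not a requirement):

```shell
# /etc/modprobe.conf  -- load the bonding driver for bond0 in mode 0
alias bond0 bonding
options bond0 mode=0 miimon=100
```

Some releases instead accept a BONDING_OPTS line in ifcfg-bond0;
consult your distribution's documentation for which mechanism applies.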
For high availability, in which one slave transmits data and the
other serves as a standby for the bond, install the bonding module in
mode 1. This mode is most appropriate for dedicated heartbeat
connections that are cabled through redundant network hubs or
switches that are cabled together.
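A mode 1 (active-backup) configuration can be sketched the same way;
the primary= option naming the preferred active slave is optional and
the interface name here is illustrative:

```shell
# /etc/modprobe.conf  -- bond0 in active-backup mode for HA
alias bond0 bonding
options bond0 mode=1 miimon=100 primary=eth0
```

With this configuration, eth1 carries traffic only if the link on
eth0 fails.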
For more information on network bonding, make sure you have installed
the kernel-doc rpm, then see:
/usr/share/doc/kernel-doc-<version>/Documentation/networking/bonding.txt
NOTE: HP recommends that you do the bonding configuration from the
system console, because you will need to restart networking from the
console when the configuration is done.