Managing HP Serviceguard for Linux, Sixth Edition, August 2006

Building an HA Cluster Configuration
Implementing Channel Bonding (Red Hat)
Use the procedures included in this section to implement channel
bonding on Red Hat installations. If you are using a SuSE distribution,
skip ahead to the next section.
Channel bonding of LAN interfaces is implemented by means of the
bonding driver, which is loaded into the kernel at boot time. With this
driver loaded, the networking software recognizes the bonding definitions
created in the /etc/sysconfig/network-scripts directory for
each bond. For example, the file named ifcfg-bond0 defines bond0 as
the master bonding unit, and the ifcfg-eth0 and ifcfg-eth1 scripts
define the individual interfaces as slaves.
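As an illustration, the master and slave files might look like the
following sketch. The device names, IP address, and netmask shown here
are assumptions; substitute the values appropriate to your cluster.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (master definition)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no

# /etc/sysconfig/network-scripts/ifcfg-eth0 (slave definition)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
```

The ifcfg-eth1 file is identical to ifcfg-eth0 except that its DEVICE
line reads DEVICE=eth1.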
Bonding can be configured in different modes. Mode 0, used for load
balancing, transmits data over all slave devices in the bond in
parallel. This requires that the LAN interface cards be
connected to an Ethernet switch such as the HP ProCurve switch, with
the ports on the switch configured as Fast EtherChannel trunks. Two
switches should be cabled together as an HA grouping to allow package
failover.
For high availability, in which one slave transmits data and the other
serves as a standby for the bond, install the bonding module in
mode 1. This mode is most appropriate for dedicated heartbeat
connections that are cabled through redundant network hubs or switches
that are themselves cabled together.
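The bonding mode is selected when the module is loaded, via options in
the module configuration file for the release. The following is a
sketch only; the alias name bond0 and the miimon link-monitoring
interval of 100 ms are illustrative assumptions, not required values.

```
# Red Hat 3 (/etc/modules.conf), mode 0 for load balancing:
alias bond0 bonding
options bond0 mode=0 miimon=100

# Red Hat 4 (/etc/modprobe.conf), mode 1 for a dedicated
# heartbeat connection with a standby slave:
alias bond0 bonding
options bond0 mode=1 miimon=100
```

The miimon option enables MII link monitoring so that the driver can
detect a failed slave and, in mode 1, fail over to the standby.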
For more information on bonding, see the kernel networking
documentation:
Red Hat 3:
/usr/src/linux-2.4/Documentation/networking/bonding.txt
Red Hat 4:
/usr/share/doc/kernel-2.6.9/Documentation/networking/bonding.txt
NOTE HP recommends that you do the bonding configuration from the system
console, because you will need to restart networking from the console
when the configuration is done.