
2 Understanding Hardware Configurations for Serviceguard for Linux
This chapter gives a broad overview of how the server hardware components operate with
Serviceguard for Linux. The following topics are presented:
•   Redundant Cluster Components (page 25)
•   Redundant Network Components (page 25)
•   Redundant Disk Storage (page 29)
•   Redundant Power Supplies (page 30)
Refer to the next chapter for information about Serviceguard software components.
2.1 Redundant Cluster Components
In order to provide a high level of availability, a typical cluster uses redundant system components,
for example, two or more SPUs and two or more independent disks. Redundancy eliminates single
points of failure. In general, the more redundancy, the greater your access to applications, data,
and supportive services in the event of a failure. In addition to hardware redundancy, you need
software support to enable and control the transfer of your applications to another SPU or network
after a failure. Serviceguard provides this support as follows:
•   In the case of LAN failure, the Linux bonding facility provides a standby LAN, or Serviceguard
    moves packages to another node.
•   In the case of SPU failure, your application is transferred from a failed SPU to a functioning
    SPU automatically and in a minimal amount of time.
•   For software failures, an application can be restarted on the same node or another node with
    minimum disruption.
Serviceguard also gives you the advantage of easily transferring control of your application to
another SPU in order to bring the original SPU down for system administration, maintenance, or
version upgrades.
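For example, a package can typically be moved to an adoptive node, and automatic switching
re-enabled afterwards, with the Serviceguard commands cmhaltpkg, cmrunpkg, and cmmodpkg. The
package name pkg1 and node name node2 below are placeholders only; later chapters describe the
full procedures.

    # Halt the package on its current node; this also disables automatic
    # switching for the package.
    cmhaltpkg pkg1

    # Run the package on the adoptive node node2.
    cmrunpkg -n node2 pkg1

    # Re-enable automatic package switching.
    cmmodpkg -e pkg1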
The maximum number of nodes supported in a Serviceguard Linux cluster is 16; the actual number
depends on the storage configuration. For example, a package that accesses data over a
Fibre Channel connection can be configured to fail over among 16 nodes, while SCSI disk arrays
are typically limited to four nodes.
A package that does not use data from shared storage can be configured to fail over to as many
nodes as you have configured in the cluster (up to the maximum of 16), regardless of disk
technology. For instance, a package that runs only local executables, and uses only local data,
can be configured to fail over to all nodes in the cluster.
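As a sketch, the nodes a package can fail over to are listed with node_name entries in its package
configuration file (generated with cmmakepkg); the package and node names below are illustrative.
A single node_name * entry allows the package to run on any node in the cluster.

    package_name      pkg1
    package_type      failover
    # Nodes on which the package can run, in order of preference.
    # Use a single "node_name *" entry to allow any cluster node.
    node_name         node1
    node_name         node2
    node_name         node3
    node_name         node4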
2.2 Redundant Network Components
To eliminate single points of failure for networking, each subnet accessed by a cluster node is
required to have redundant network interfaces. Redundant cables are also needed to protect
against cable failures. Each interface card is connected to a different cable and hub or switch.
Network interfaces can share IP addresses through a process known as channel bonding.
See “Implementing Channel Bonding (Red Hat)” (page 140) or “Implementing Channel Bonding
(SUSE)” (page 142).
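As an abbreviated illustration only (the interface names, bonding options, and IP address are
examples; see the sections referenced above for the complete procedures), a Red Hat style
active-backup bond that lets two physical interfaces share one IP address might look like this:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes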
Serviceguard supports a maximum of 30 network interfaces per node. For this purpose an interface
is defined as anything represented as a primary interface in the output of ifconfig, so the total
of 30 can comprise any combination of physical LAN interfaces or bonding interfaces. (A node
can have more than 30 such interfaces, but only 30 can be part of the cluster configuration.)
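To estimate how many interfaces count toward this limit, you can inspect the output of ifconfig.
The following sketch assumes the classic net-tools output format, in which each interface stanza
starts in column one and IP aliases carry a colon in the name (for example, eth0:1):

    # Count primary interfaces, excluding IP aliases (names containing ':')
    # and the loopback interface.
    ifconfig -a | awk '/^[^ \t]/ && $1 !~ /:/ && $1 != "lo" {n++} END {print n+0}'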