
2 Understanding Hardware Configurations for Serviceguard for Linux
This chapter gives a broad overview of how the server hardware components operate
with Serviceguard for Linux. The following topics are presented:
• Redundant Cluster Components
• Redundant Network Components
• Redundant Disk Storage
• Redundant Power Supplies
Refer to the next chapter for information about Serviceguard software components.
Redundant Cluster Components
To provide a high level of availability, a typical cluster uses redundant system
components, for example, two or more SPUs and two or more independent disks.
Redundancy eliminates single points of failure. In general, the more redundancy, the
greater your access to applications, data, and supporting services in the event of a
failure. In addition to hardware redundancy, you need software support to enable and
control the transfer of your applications to another SPU or network after a failure.
Serviceguard provides this support as follows:
• In the case of LAN failure, the Linux bonding facility provides a standby LAN
  (see the configuration sketch after this list), or Serviceguard moves packages to
  another node.
• In the case of SPU failure, your application is transferred from a failed SPU to a
  functioning SPU automatically and in a minimal amount of time.
• For software failures, an application can be restarted on the same node or another
  node with minimum disruption.
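As an illustration, on a Red Hat-style Linux system of this era the bonding driver
can be configured in active-backup mode through interface files such as the
following. The device names, address, and file paths are illustrative assumptions;
consult your distribution's documentation for the exact locations and syntax.

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
    DEVICE=bond0
    IPADDR=192.168.1.10          # illustrative address
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    # mode=1 (active-backup): one NIC carries traffic, the other stands by;
    # miimon=100 checks link state every 100 ms
    BONDING_OPTS="mode=1 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- a slave (repeat for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

With this arrangement, a failure of the active NIC is handled inside the bond and
is invisible to Serviceguard; only if the whole bond loses connectivity does package
failover come into play.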
Serviceguard also makes it easy to transfer control of your application to another
SPU so that you can bring the original SPU down for system administration,
maintenance, or version upgrades.
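For example, an administrator might use the Serviceguard command-line interface to
move a package off a node before planned maintenance. The package and node names
here are hypothetical:

    cmhaltpkg pkg1             # halt the package on its current node
    cmrunpkg -n node2 pkg1     # start the package on the alternate node
    cmmodpkg -e pkg1           # re-enable automatic package switching

Once maintenance is complete, the original node can rejoin the cluster and the
package can be moved back in the same way.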
The maximum number of nodes supported in a Serviceguard Linux cluster is 16; the
actual number depends on the storage configuration. For example, a package that
accesses data over a FibreChannel connection can be configured to fail over among 16
nodes, while SCSI disk arrays are typically limited to four nodes.
A package that does not use data from shared storage can be configured to fail over to
as many nodes as you have configured in the cluster (up to the maximum of 16),
regardless of disk technology. For instance, a package that runs only local executables
and uses only local data can be configured to fail over to all nodes in the cluster.
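As a sketch, a legacy-style package configuration file names the eligible failover
nodes explicitly, in order of preference; a package with no shared-storage dependency
could instead use the wildcard to allow any cluster node. All names below are
hypothetical:

    PACKAGE_NAME    pkg1
    PACKAGE_TYPE    FAILOVER
    # Failover is limited to the nodes named here, tried in this order
    NODE_NAME       node1
    NODE_NAME       node2
    # For a package using only local data, "NODE_NAME *" allows any node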