
Configurations in which some nodes have access to the file system and others do not are
not supported. This does not require that all nodes actually mount the GFS/GFS2 file
system itself.
No-single-point-of-failure hardware configuration
Clusters can include a dual-controller RAID array, multiple bonded network channels,
multiple paths between cluster members and storage, and redundant uninterruptible power
supply (UPS) systems to ensure that no single failure results in application downtime or
loss of data.
Alternatively, a low-cost cluster can be set up to provide less availability than a no-single-
point-of-failure cluster. For example, you can set up a cluster with a single-controller RAID
array and only a single Ethernet channel.
Certain low-cost alternatives, such as host RAID controllers, software RAID without cluster
support, and multi-initiator parallel SCSI configurations, are not compatible with or
appropriate for use as shared cluster storage.
Data integrity assurance
To ensure data integrity, only one node can run a cluster service and access cluster-
service data at a time. The use of power switches in the cluster hardware configuration
enables a node to power-cycle another node before restarting that node's HA services
during a failover process. This prevents two nodes from simultaneously accessing the
same data and corrupting it. It is strongly recommended that fence devices (hardware or
software solutions that remotely power, shut down, and reboot cluster nodes) be used to
guarantee data integrity under all failure conditions. Watchdog timers provide an
alternative way to ensure correct operation of HA service failover.
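For illustration, fencing is typically defined in /etc/cluster/cluster.conf. The following is a
minimal sketch assuming an APC network power switch; the device name, IP address,
credentials, and outlet port are placeholders, not values from this guide:

    <fencedevices>
            <!-- Placeholder APC power switch used as the fence device -->
            <fencedevice agent="fence_apc" name="apc" ipaddr="192.168.1.100"
                         login="admin" passwd="password"/>
    </fencedevices>

    <clusternode name="node1.example.com" nodeid="1" votes="1">
            <fence>
                    <method name="1">
                            <!-- Outlet 1 on the switch powers this node -->
                            <device name="apc" port="1"/>
                    </method>
            </fence>
    </clusternode>

With a configuration along these lines, a surviving node can direct the power switch to
power-cycle the failed node's outlet before its HA services are relocated.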
Ethernet channel bonding
Cluster quorum and node health are determined by the communication of messages
among cluster nodes via Ethernet. In addition, cluster nodes use Ethernet for a variety of
other critical cluster functions (for example, fencing). With Ethernet channel bonding,
multiple Ethernet interfaces are configured to behave as one, reducing the risk of a single
point of failure in the typical switched Ethernet connection among cluster nodes and other
cluster hardware.
Red Hat Enterprise Linux 5 supports bonding mode 1 only. It is recommended that you wire
each node's slaves to the switches in a consistent manner, with each node's primary device
wired to switch 1 and each node's backup device wired to switch 2.
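As a sketch of how bonding mode 1 (active-backup) is commonly configured on Red Hat
Enterprise Linux 5, the bonding driver is aliased in /etc/modprobe.conf and each slave
interface is enslaved in its ifcfg file; the interface names and IP address below are
assumptions for this example:

    # /etc/modprobe.conf
    alias bond0 bonding
    options bonding mode=1 miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=10.0.0.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1, the backup slave)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Here mode=1 selects active-backup operation, so only one slave carries traffic at a time,
and miimon=100 checks link state every 100 milliseconds to trigger failover to the backup
slave.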
2.2. Compatible Hardware
Before configuring Red Hat Cluster software, make sure that your cluster uses appropriate hardware
(for example, supported fence devices, storage devices, and Fibre Channel switches). Refer to the
Red Hat Hardware Catalog at https://hardware.redhat.com/ for the most current hardware
compatibility information.
2.3. Enabling IP Ports
Before deploying a Red Hat Cluster, you must enable certain IP ports on the cluster nodes and on
computers that run luci (the Conga user interface server). The following sections identify the IP
ports to be enabled:
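As a brief illustration of enabling a single port, assuming the default iptables firewall is in
use (TCP port 11111, used by ricci, is shown here only as an example; refer to the port
tables in the sections that follow for the complete list):

    # Allow incoming TCP traffic on port 11111 (ricci), then save the rule
    iptables -I INPUT -p tcp --dport 11111 -j ACCEPT
    service iptables save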