
This information is needed when you create the mirrored disk configuration using LVM.
In addition, it is useful to gather as much information as possible about your disk
configuration.
You can obtain information about available disks by using the following commands;
your system may provide other utilities as well.
ls /dev/cciss/c*d* (Smart Array cluster storage)
ls /dev/hd* (non-SCSI/FibreChannel disks)
ls /dev/sd* (SCSI and FibreChannel disks)
du
df
mount
vgdisplay -v
lvdisplay -v
See the manpages for these commands for information about specific usage. Issue the
commands from all nodes after installing the hardware and rebooting the system; the
information will be useful when you do the LVM and cluster configuration.
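For example, you might capture this information on each node and save it for later
reference when filling out the worksheets. The following is a minimal sketch; the output
file name is arbitrary and not part of the product:
  # Run on each node: record disk devices, mounted filesystems, and LVM configuration
  {
    hostname
    ls /dev/cciss/c*d* /dev/hd* /dev/sd* 2>/dev/null
    df
    mount
    vgdisplay -v
    lvdisplay -v
  } > /tmp/$(hostname)_disk_config.txt 2>&1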
Hardware Configuration Worksheet
The hardware configuration worksheet (page 323) will help you organize and record
your specific cluster hardware configuration. Make as many copies as you need.
Power Supply Planning
There are two sources of power you will have to consider in your cluster design: line
power and uninterruptible power supplies (UPS). Loss of a power circuit should not bring
down the cluster.
Frequently, servers, mass storage devices, and other hardware have two or three separate
power supplies, so they can survive the loss of power to one or more power supplies or
power circuits. If a device has redundant power supplies, connect each power supply
to a separate power circuit. This way the failure of a single power circuit will not cause
the complete failure of any critical device in the cluster. For example, if each device in
a cluster has three power supplies, you will need a minimum of three separate power
circuits to eliminate electrical power as a single point of failure for the cluster. In the case
of hardware with only one power supply, no more than half of the nodes should be on
a single power source. If a power source supplies exactly half of the nodes, it must not
also supply the cluster lock LUN or quorum server, or the cluster will not be able to re-form
after a failure. For example, in a four-node cluster of single-power-supply nodes, put two
nodes on each of two circuits and power the cluster lock LUN or quorum server from a
third circuit. See “Cluster Lock Planning” (page 96) for more information.
To provide a high degree of availability in the event of power failure, use a separate
UPS at least for each node’s SPU and for the cluster lock disk (if any). If you use a quorum
server, or quorum server cluster, make sure each quorum server node has a power source
separate from that of every cluster it serves.