
You can obtain information about available disks by using the following commands; your system
may provide other utilities as well.
ls /dev/cciss/c*d* (Smart Array cluster storage)
ls /dev/hd* (non-SCSI/FibreChannel disks)
ls /dev/sd* (SCSI and FibreChannel disks)
du
df
mount
vgdisplay -v
lvdisplay -v
See the manpages for these commands for details on their usage. Issue the commands from all
nodes after installing the hardware and rebooting the system. The information will be useful when
you configure LVM and the cluster.
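For example, a minimal sequence you might run on each node to collect this information is sketched below; the device paths shown are typical, but the devices present on your system may differ:

# List the block devices the kernel currently recognizes
ls /dev/cciss/c*d* /dev/hd* /dev/sd* 2>/dev/null
# Show filesystem usage and current mounts
df
mount
# Show LVM volume group and logical volume details
vgdisplay -v
lvdisplay -v

Saving the output from each node gives you a baseline to compare against the LVM and cluster configuration you create later.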
4.3.5 Hardware Configuration Worksheet
The hardware configuration worksheet (page 273) will help you organize and record your specific
cluster hardware configuration. Make as many copies as you need.
4.4 Power Supply Planning
There are two sources of power for your cluster that you will have to consider in your design:
line power and uninterruptible power supplies (UPS). The loss of a single power circuit should not
bring down the cluster.
Frequently, servers, mass storage devices, and other hardware have two or three separate power
supplies, so they can survive the loss of power to one or more power supplies or power circuits.
If a device has redundant power supplies, connect each power supply to a separate power circuit.
This way the failure of a single power circuit will not cause the complete failure of any critical
device in the cluster. For example, if each device in a cluster has three power supplies, you will
need a minimum of three separate power circuits to eliminate electrical power as a single point
of failure for the cluster. In the case of hardware with only one power supply, no more than half
of the nodes should be on a single power source. If a power source supplies exactly half of the
nodes, it must not also supply the cluster lock LUN or quorum server, or the cluster will not be able
to re-form after a failure. See “Cluster Lock Planning” (page 80) for more information.
To provide a high degree of availability in the event of power failure, use a separate UPS at least
for each node’s SPU and for the cluster lock disk (if any). If you use a quorum server, or quorum
server cluster, make sure each quorum server node has a power source separate from that of every
cluster it serves. If you use software mirroring, make sure power supplies are not shared among
different physical volume groups; this allows you to set up mirroring between physical disks that
are not only on different I/O buses, but also connected to different power supplies.
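For example, one way to set up such mirroring with LVM is sketched below; the device names /dev/sdc1 and /dev/sdd1 are placeholders for two disks that are on different I/O buses and connected to different power supplies, and the volume group and logical volume names are illustrative only:

# Prepare the two disks as LVM physical volumes
pvcreate /dev/sdc1 /dev/sdd1
# Put both disks into one volume group
vgcreate vg_mirror /dev/sdc1 /dev/sdd1
# Create a mirrored logical volume with one copy of the data on each disk
# (--mirrorlog core keeps the mirror log in memory rather than on a third disk)
lvcreate --mirrors 1 --mirrorlog core --size 10G --name lv_mirror vg_mirror /dev/sdc1 /dev/sdd1

Because each copy of the data resides on a disk with its own power source, the mirror remains available if either power circuit fails.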
To prevent confusion, label each hardware unit and power supply unit clearly with a different unit
number. Indicate on the Power Supply Worksheet the specific hardware units you are using and
the power supply to which they will be connected. Enter the following label information on the
worksheet:
Host Name    Enter the host name for each SPU.
Disk Unit    Enter the disk drive unit number for each disk.
Tape Unit    Enter the tape unit number for each backup device.
Other Unit   Enter the number of any other unit.