HP XC System Software Installation Guide Version 3.1

Deciding on the Method to Achieve Quorum for Serviceguard Clusters
In a Serviceguard configuration, each availability set becomes its own two-node Serviceguard cluster, and each Serviceguard cluster requires
some form of quorum. The quorum acts as a tie breaker in the Serviceguard cluster running on each
availability set. If connectivity is lost between the nodes of the Serviceguard cluster, the node that can
access the quorum continues to run the cluster and the other node is considered down.
The quorum can be either a quorum server or a lock LUN; you must configure one or the other on every
availability set. You can configure a lock LUN as the tie breaker only if the head node and the other node
in the availability set are both connected to the same shared storage (for instance, an MSA) and both are
able to access the same partition.
Configuring a Quorum Server
You can select any node in the HP XC system that is not participating in
any availability set to serve as the quorum server, even a compute node. You can use the same quorum
server for one or more availability sets. If you configure a quorum server, you must have previously
installed the qs-A.02.00.03-0.product.redhat.ia64.rpm RPM. Later, the cluster_config
utility prompts you to supply the node name of the quorum server; there is nothing you need to do now.
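If you are not sure whether the quorum server RPM is already installed, you can query the RPM database on the intended quorum server node. The package name shown here is inferred from the RPM file name above and may differ slightly on your system:
# rpm -qa | grep ^qs
If the query returns no output, install the RPM from the distribution media before proceeding.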
Configuring a Lock LUN
If you intend to use a lock LUN instead of a quorum server to achieve quorum,
enter the following command to create the lock LUN now, before running the cluster_config utility
later in the system configuration process.
In the following fdisk session, /dev/sdb is the full path to the disk on the MSA, and partition 1 on that disk
is created and configured as the lock LUN:
# /sbin/fdisk /dev/sdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 83
Command (m for help): w
When you run the cluster_config utility, it prompts you to supply the name of the lock LUN, and
you must supply the full path with partition (for example, /dev/sdb1, where 1 is the partition number).
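Before running the cluster_config utility, you can confirm that the lock LUN partition was created correctly. For example, listing the partition table (the device name /dev/sdb follows the example above; substitute your own disk path):
# /sbin/fdisk -l /dev/sdb
The listing should show /dev/sdb1 with system ID 83 (Linux). Running the same command on the other node in the availability set confirms that both nodes can access the same partition, which is a requirement for using a lock LUN.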
2.5.1.2 HP Scalable Visual Array
The HP Scalable Visual Array (SVA) is a scalable visualization solution that brings the power of parallel
computing to bear on many demanding visualization challenges. SVA can be a specialized, standalone
system consisting entirely of visualization nodes, or it can be integrated into a larger HP Cluster Platform
system such as HP XC and share a single interconnect with the compute nodes and a storage system.
If SVA was not installed at the factory, and you want to integrate SVA into the HP XC system, you must
have the SVA distribution media in your possession. Install the RPMs now by following the instructions
in the SVA documentation:
http://www.docs.hp.com/en/highperfcomp.html
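If you are installing the SVA RPMs from local distribution media, the sequence typically resembles the following. The mount point and RPM paths shown here are illustrative only; use the exact paths and package names given in the SVA documentation:
# mount /dev/cdrom /media/cdrom
# rpm -Uvh /media/cdrom/RPMS/*.rpm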
2.5.2 Install Third-Party Software Products
An HP XC system supports the use of several third-party software products. Use of these products is
optional; whether to purchase and install them is your decision, depending on your software
requirements.
IMPORTANT: Network connectivity is not established until the cluster_prep utility is run. Thus, if a
third-party software product is available only through Web-based downloads, you must wait until network
connectivity is established before you can download and install it.
Potentially important software that is not bundled with the HP XC software includes the Intel Fortran and
C compilers, The Portland Group PGI compiler, and the TotalView Debugger.
2.5 Task 4: Install Additional Software from Local Distribution Media 41