
The next step is to configure the LSF alias on the HP XC system. The alias prevents LSF from
being hard-wired to any one node, so that the LSF service can fail over to another node if the
current node becomes compromised (hung or crashed). HP XC provides the infrastructure to
monitor the LSF node and fail over the LSF daemons to another node if necessary.
The selected IP address and host name must not be in use, but they must be known on the
external network. Our example uses the host name xclsf with an IP address of 16.32.2.140,
and the head node of the HP XC system is xc-head.
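Because the host name must already be known on the external network, you can first confirm
that it resolves. The following lookup with the standard host command is illustrative only,
assuming the lab.example.com domain shown in the ping output below:
# host xclsf
xclsf.lab.example.com has address 16.32.2.140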
Use the ping command to verify that the selected external host name is not currently in use:
# ping xclsf
PING xclsf.lab.example.com (IP address) 56(84) bytes of data.
From xc128.lab.example.com (IP address) icmp_seq=0 Destination Host Unreachable
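The same test can be applied to the selected IP address, which must also not be in use. A
sketch of the check against the example address, which should likewise report the destination
as unreachable:
# ping 16.32.2.140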
Next, configure controllsf (which manages LSF setup on HP XC) with the new alias:
# controllsf set virtual hostname xclsf
Confirm that the alias is set:
# controllsf show
LSF is currently shut down, and assigned to node .
Failover is disabled.
Head node is preferred.
The primary LSF host node is xc128.
SLURM affinity is enabled.
The virtual hostname is "xclsf".
A.8 Starting LSF on the HP XC System
At this point, lsadmin reconfig followed by badmin reconfig can be run within the existing
LSF cluster (on plain in our example) to update LSF with the latest configuration changes. A
subsequent lshosts or bhosts command displays the new HP XC "node", although lshosts
reports its type as UNKNOWN and bhosts reports it as unavailable until LSF is started on the
HP XC system.
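The output of these commands varies by LSF version, so only the command lines are shown
here; both are standard LSF administration commands, run on the existing LSF node (plain in
our example):
# lsadmin reconfig
# badmin reconfig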
LSF can now be started on the HP XC system:
# controllsf start
This command sets up the virtual LSF alias on the appropriate node and then starts the LSF
daemons. It also creates a $LSF_ENVDIR/hosts file (in our example, $LSF_ENVDIR is
/shared/lsf/conf). LSF uses this hosts file to map the LSF alias to the actual host name
of the node in the HP XC system that is running LSF. See the Platform LSF documentation for
information on hosts files.
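As an illustration only, an entry in this hosts file might resemble the following. The file uses
the same format as /etc/hosts; the actual contents are generated by controllsf start, and
the entry below assumes that node xc128 currently holds the alias:
# cat /shared/lsf/conf/hosts
16.32.2.140   xc128.lab.example.com   xc128 xclsf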
When the LSF daemons have started up and synchronized their data with the rest of the LSF
cluster, the lshosts and bhosts commands display all the nodes with their appropriate values
and indicate that they are ready for use:
$ lshosts
HOST_NAME type    model    cpuf ncpus maxmem maxswp server RESOURCES
plain     LINUX86 PC1133   23.1     2   248M  1026M    Yes ()
xclsf     SLINUX6 Intel_EM 60.0   256  3456M      -    Yes (slurm)
$ bhosts
HOST_NAME STATUS JL/U MAX NJOBS RUN SSUSP USUSP RSV
plain     ok     -      2     0   0     0     0   0
xclsf     ok     -    256     0   0     0     0   0
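As a final check, a small test job can be submitted through LSF to the HP XC resources. The
following sketch assumes that the default queue accepts interactive jobs; it uses the standard
bsub options -I (interactive) and -n (slot count) with srun, the SLURM launcher used by
LSF-HPC on HP XC, and the job ID and dispatch messages shown are illustrative:
$ bsub -I -n1 srun hostname
Job <102> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on xclsf>>
xc128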