
Standard LSF-HPC is installed and configured on all nodes of the HP XC system by default.
The LSF RPM places the LSF tar files from Platform Computing in the
/opt/hptc/lsf/files/lsf/ directory. During the operation of the cluster_config
utility, Standard LSF-HPC is installed in the /opt/hptc/lsf/top directory. On completion,
the conf and work directories are moved to the /hptc_cluster/lsf directory to ensure:
•   A single set of Standard LSF-HPC configuration files for the HP XC system
•   One common working space for preserving and accessing accounting and event data
The log directory is moved to /var/lsf so that per-node LSF daemon logging is stored locally
and is unaffected by updateimage operations. However, the logs are lost during a reimage
operation. The LSF directory containing the binary files remains in /opt/hptc/lsf/top;
this directory is imaged to all the other nodes.
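For orientation, the resulting layout can be inspected directly on any node. The following
listing is a sketch; the exact contents vary with the installed LSF version:

    $ ls /hptc_cluster/lsf        # shared by all nodes: one configuration, one workspace
    conf  work
    $ ls /opt/hptc/lsf/top        # LSF binaries; imaged to all other nodes
    $ ls /var/lsf                 # node-local daemon logs; lost on reimage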
Also during the operation of the cluster_config utility, HP XC nodes without the compute
role are configured to remain closed, with 0 job slots available for use. This is done by editing
the Host section of the lsb.hosts file to set MXJ (the maximum number of job slots) to
zero (0) for these hosts, as shown in the sketch after this paragraph. You can run LSF
commands from these hosts, but no jobs run on them.
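The following is a minimal sketch of such a Host section; the host name n15 is hypothetical,
and the remaining columns are left at their defaults:

    Begin Host
    HOST_NAME    MXJ   r1m    pg    ls    tmp   DISPATCH_WINDOW   # Keywords
    default      !     ()     ()    ()    ()    ()                # MXJ = number of CPUs
    n15          0     ()     ()    ()    ()    ()                # non-compute node: closed
    End Host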
The LSF environment is set up automatically for the user on login, so LSF commands and their
manpages are readily accessible. The profile.lsf and cshrc.lsf source files are copied
from the /hptc_cluster/lsf/conf directory to the /opt/hptc/lsf/top/env directory,
which is specific to each node. Then the /etc/profile.d/lsf.sh and
/etc/profile.d/lsf.csh files, which reference the appropriate source file upon login, are
created.
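In practice, the generated /etc/profile.d/lsf.sh can amount to little more than sourcing
the node-local copy of profile.lsf. The following is an illustrative sketch, not the exact
file that cluster_config creates:

    # /etc/profile.d/lsf.sh -- executed by login shells on every node (sketch)
    # Source the node-specific LSF environment file so that LSF commands
    # and their manpages are on the user's PATH and MANPATH.
    . /opt/hptc/lsf/top/env/profile.lsf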
Finally, Standard LSF-HPC is configured to start when the HP XC system boots: a soft link is
created from /etc/init.d/lsf to the lsf_daemons startup script provided by Standard
LSF-HPC. All of this configuration optimizes the installation of Standard LSF-HPC on HP XC.
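You can confirm the result and, if necessary, start the daemons by hand; this sketch assumes
the usual System V service conventions and that lsf_daemons accepts a start argument:

    # ls -l /etc/init.d/lsf       # verify the soft link to lsf_daemons
    # service lsf start           # start the LSF daemons manually (normally done at boot)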
The following LSF commands are particularly useful (brief examples follow this list):
•   The bhosts command is useful for viewing LSF batch host information.
•   The lshosts command provides static resource information.
•   The lsload command provides dynamic resource information.
•   The bsub command is used to submit jobs to LSF.
•   The bjobs command provides information on batch jobs.
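For example, the following session submits a small job and then inspects it; the script name
./myjob, the slot count, and the job ID shown are hypothetical:

    $ bsub -n 4 -o out.%J ./myjob      # submit ./myjob, requesting 4 job slots
    Job <101> is submitted to default queue <normal>.
    $ bjobs 101                        # show the status of job 101
    $ bhosts                           # batch host status and job slot usage
    $ lshosts                          # static resources (type, model, ncpus, memory)
    $ lsload                           # dynamic load indices (CPU, memory, I/O)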
For more information on using Standard LSF-HPC on the HP XC system, see the Platform LSF
documentation available on the HP XC documentation disk.
16.2 LSF-HPC with SLURM
The Platform Load Sharing Facility for High Performance Computing (LSF-HPC with SLURM)
product is installed and configured as an embedded component of the HP XC system during
installation. This product has been integrated with SLURM to provide a comprehensive
high-performance workload management solution for the HP XC system.
This section describes the features of LSF-HPC with SLURM that differentiate it from Standard
LSF-HPC. These topics include integration with SLURM, job starter scripts, the SLURM lsf
partition, the SLURM external scheduler, and LSF-HPC with SLURM failover.
See “Troubleshooting” (page 245) for information on LSF-HPC with SLURM troubleshooting.
See “Installing LSF-HPC with SLURM into an Existing Standard LSF Cluster” (page 289) for
information on extending the LSF-HPC with SLURM cluster.