16.1 Standard LSF
Standard LSF is installed and configured on all nodes of the HP XC system by default.
The LSF RPM places the LSF tar files from Platform Computing Inc. in the /opt/hptc/lsf/files/lsf/ directory. Standard LSF is installed in the /opt/hptc/lsf/top directory during the operation of the cluster_config utility. On completion, the conf and work directories are moved to the /hptc_cluster/lsf directory to ensure:
• A single set of Standard LSF configuration files for the HP XC system.
• One common working space for preserving and accessing accounting and event data.
The log directory is moved to /var/lsf so that per-node LSF daemon logging is stored locally and is unaffected by updateimage operations. However, the logs are lost during a reimage operation. The LSF directory containing the binary files remains in /opt/hptc/lsf/top; it is imaged to all the other nodes.
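The resulting layout can be confirmed from any node. The following commands are only a verification sketch that lists the directories named above; they make no changes:

    ls /hptc_cluster/lsf/conf /hptc_cluster/lsf/work   # shared configuration and working space
    ls /var/lsf                                        # node-local LSF daemon logs
    ls /opt/hptc/lsf/top                               # LSF binaries, imaged to all nodes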
Also during the operation of the cluster_config utility, the HP XC nodes without the compute
role are configured to remain closed with 0 job slots available for use. This is done by editing
the Hosts section of the lsb.hosts file and configuring these hosts with MXJ (or Maximum
Job Slots) set to zero (0). You can run LSF commands from these hosts, but no jobs run on
them.
NOTE: Nodes without the compute role are closed with 0 job slots available for use.
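The following fragment is a minimal sketch of how such an entry can appear in the Hosts section of the lsb.hosts file; the host name n15 is illustrative, and the remaining columns are left at their defaults:

    Begin Host
    HOST_NAME   MXJ   r1m   pg    ls    tmp   DISPATCH_WINDOW   # Keywords
    n15         0     ()    ()    ()    ()    ()                # non-compute node, closed
    End Host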
The LSF environment is set up automatically for the user on login, so LSF commands and their manpages are readily accessible. The profile.lsf and cshrc.lsf source files are copied from the /hptc_cluster/lsf/conf directory to the /opt/hptc/lsf/top/env directory, which is specific to each node. The /etc/profile.d/lsf.sh and /etc/profile.d/lsf.csh files, which reference the appropriate source file on login, are then created.
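As a rough illustration of this mechanism (the files are generated during configuration, so the actual contents may differ), /etc/profile.d/lsf.sh could simply source the Bourne-shell environment file:

    # /etc/profile.d/lsf.sh -- illustrative sketch only; the generated file may differ
    if [ -f /opt/hptc/lsf/top/env/profile.lsf ]; then
        . /opt/hptc/lsf/top/env/profile.lsf
    fi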
Finally, Standard LSF is configured to start when the HP XC system boots. A soft link is created from /etc/init.d/lsf to the lsf_daemons startup script provided by Standard LSF.
All of this configuration adapts the Standard LSF installation to the HP XC system.
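The boot-time setup can be verified with standard tools. The commands below are only a verification sketch; whether the service is registered with chkconfig depends on the configuration performed by cluster_config:

    ls -l /etc/init.d/lsf     # should show a soft link to the lsf_daemons startup script
    chkconfig --list lsf      # lists the run levels in which LSF starts, if registered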
The following LSF commands are particularly useful; a short usage sketch follows this list:
• The bhosts command is useful for viewing LSF batch host information.
• The lshosts command provides static resource information.
• The lsload command provides dynamic resource information.
• The bsub command is used to submit jobs to LSF.
• The bjobs command provides information on batch jobs.
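The following command lines are a minimal usage sketch; the executable name ./my_app and the requested number of job slots are illustrative:

    bsub -n 4 -o output.%J ./my_app   # submit my_app, requesting 4 job slots; write output to output.<jobID>
    bjobs                             # list the submitting user's batch jobs
    bhosts                            # show batch host status and job slot usage
    lsload                            # show dynamic load information for each host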
For more information on using Standard LSF on the HP XC system, see the Platform LSF
documentation available on the HP XC documentation disk.
16.2 LSF with SLURM
The Platform LSF with SLURM product is installed and configured as an embedded component
of the HP XC system during installation. This product has been integrated with SLURM to
provide a comprehensive high-performance workload management solution for the HP XC
system.
This section describes the aspects of the LSF with SLURM product that differentiate it from Standard LSF.
These topics include integration with SLURM, job starter scripts, the SLURM lsf partition, the
SLURM external scheduler, and LSF with SLURM failover.
See “Troubleshooting” (page 247) for information on LSF with SLURM troubleshooting.
See “Installing LSF with SLURM into an Existing Standard LSF Cluster” (page 291) for information
on extending the LSF with SLURM cluster.
16.2.1 Integration of LSF with SLURM
The LSF component of the LSF with SLURM product acts primarily as the workload scheduler
and node allocator running on top of SLURM. The SLURM component provides a job execution