HP XC System Software Installation Guide Version 4.0

a. Go to Appendix J (page 239) to determine the type of customizations that are available
or required. For instance, if you installed and configured SVA, certain SLURM
customizations are required.
b. Use the text editor of your choice to edit the SLURM configuration file:
/hptc_cluster/slurm/etc/slurm.conf
c. Use the information in Appendix J (page 239) to customize the SLURM configuration
according to your requirements.
d. If you make changes to the slurm.conf file, save your changes and exit the text editor.
e. Update the SLURM daemons with this new information:
# scontrol reconfig
If some nodes are reported as being in the down state, see Section 14.6 (page 183) for more
information.
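After running scontrol reconfig, you can script a quick check for down nodes. The sketch below filters for the "down" state; on a live system you would feed it real output from sinfo --noheader --format="%N %T", but here it parses sample output (the node names and states are invented for illustration) so the filtering logic is self-contained.

```shell
# Hypothetical post-reconfig check: list nodes that SLURM reports as down.
# On a live system, replace the sample text with:
#   sinfo --noheader --format="%N %T"
sample_sinfo_output='n1 idle
n2 down
n3 allocated
n4 down'

# Print only the nodes whose state is "down".
echo "$sample_sinfo_output" | awk '$2 == "down" { print $1 }'
```

Any nodes printed by this check are candidates for the troubleshooting steps in Section 14.6 (page 183).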
7.1.2 Perform LSF Postconfiguration Tasks
Follow this procedure to set up the LSF environment and enable LSF failover (if you assigned
the resource_management role to two or more nodes). Omit this task if you did not configure
LSF.
1. Begin this procedure as the root user on the head node.
2. Set up the LSF environment by sourcing the LSF profile file:
# . /opt/hptc/lsf/top/conf/profile.lsf
3. Verify that the LSF profile file has been sourced by finding an LSF command:
# which lsid
/opt/hptc/lsf/top/7.0/linux2.6-glibc2.3-x86_64-slurm/bin/lsid
This sample output was obtained from an HP ProLiant server. Thus, the directory name
linux2.6-glibc2.3-x86_64-slurm is included in the path (the string x86_64 signifies
a Xeon- or Opteron-based architecture). The string ia64 is included in the directory name
for HP Integrity servers. The string slurm exists in the path only if LSF with SLURM is
configured.
4. If your hardware configuration contains servers with multicore CPUs (dual-core or
quad-core) and you are using standard LSF, you must add the following entry to the
/opt/hptc/lsf/conf/lsf.conf configuration file for LSF to recognize multiple
cores. Otherwise, omit this step and the next.
LSF_ENABLE_DUALCORE=Y
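One way to make this edit safe to repeat is to append the entry only when it is not already present. This is a sketch, not part of the guide's procedure; the LSF_CONF variable points at the file named above, and you would adjust it if your layout differs.

```shell
# Sketch: add LSF_ENABLE_DUALCORE=Y to lsf.conf only if no such entry
# exists yet, so rerunning the script never duplicates the line.
LSF_CONF=/opt/hptc/lsf/conf/lsf.conf

if ! grep -q '^LSF_ENABLE_DUALCORE=' "$LSF_CONF"; then
    echo 'LSF_ENABLE_DUALCORE=Y' >> "$LSF_CONF"
fi
```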
5. Complete this step if the system is configured with LSF with SLURM. Otherwise, omit
this step.
a. If you assigned the resource_management role to two or more nodes and want
to enable LSF failover, enter the following command:
# controllsf enable failover
b. Determine the node on which the LSF daemons are running:
# controllsf show current
LSF is currently running on node n32, and assigned to node n32
c. Log in to the node that is running LSF:
# ssh n32
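Steps b and c above can be scripted so the node name never has to be retyped. The sketch below extracts the node from the sample status line shown in this guide; the sed expression is an assumption about that message format, so verify it against your system's actual controllsf output before relying on it.

```shell
# Sketch: pull the node name out of "controllsf show current" output so
# the ssh step can be scripted. The parsed message format matches the
# sample in this guide.
status='LSF is currently running on node n32, and assigned to node n32'

lsf_node=$(echo "$status" | sed -n 's/^LSF is currently running on node \([^,]*\),.*/\1/p')
echo "$lsf_node"

# On a live system you would instead run:
#   status=$(controllsf show current)
#   ssh "$lsf_node"
```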
6. Restart the LIMs on all hosts:
# lsadmin reconfig
Checking configuration files ...
No errors found.
124 Finalizing the Configuration