
c. Set the file permissions:
# chmod 555 /opt/hptc/lsf/etc/slsf
d. Create the appropriate soft link to the file:
# ln -s /opt/hptc/lsf/etc/slsf /etc/init.d/slsf
e. Enable the file:
# chkconfig --add slsf
# chkconfig --list slsf
slsf      0:off   1:off   2:off   3:on    4:on    5:on    6:off
f. Edit the /opt/hptc/systemimager/etc/chkconfig.map file to add the following
line to enable this new "service" on all nodes in the HP XC system:
slsf 0:off 1:off 2:off 3:on 4:on 5:on 6:off
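You can verify that the entry was added correctly (an optional check; the output shown is illustrative):
# grep slsf /opt/hptc/systemimager/etc/chkconfig.map
slsf 0:off 1:off 2:off 3:on 4:on 5:on 6:off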
8. Update the node roles and reimage:
a. If improved availability is in effect, enter the transfer_from_avail command:
# transfer_from_avail
b. Use the stopsys command to shut down the other nodes of the HP XC system.
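For example (assuming the default invocation is appropriate for your system; see the stopsys manpage for options):
# stopsys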
c. Change directory to /opt/hptc/config/sbin.
d. Execute the cluster_config utility.
Select Modify Nodes. Remove the compute and resource_management role
assignments for the fat nodes. Ensure that there is at least one resource management
role remaining in the HP XC system (HP recommends two resource management
nodes).
Do not reinstall LSF.
e. When the cluster_config utility completes, edit the
/hptc_cluster/slurm/etc/slurm.conf file to remove the names of the fat nodes
from:
• The NodeName parameter assignment
• The PartitionName parameter assignment
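For example, if the fat nodes are xc5 and xc6 (hypothetical node names used only for illustration), the relevant lines might change from:
NodeName=xc[1-10] Procs=2
PartitionName=lsf Nodes=xc[1-10]
to:
NodeName=xc[1-4,7-10] Procs=2
PartitionName=lsf Nodes=xc[1-4,7-10]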
f. Run the following command to update SLURM with the new information:
# scontrol reconfig
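To confirm that the fat nodes are no longer part of the partition, you can display the partition information (an optional check; step 11 performs the full verification):
# scontrol show partition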
9. Use the startsys command to restart the HP XC system.
The nodes are reimaged after the startsys command completes.
Note:
Only the nodes on which role changes were made are reimaged.
The standard LSF binaries, the slsf script, and its soft link are not on the thin nodes. For information on using the updateclient command to update the thin nodes with these latest file changes, see Chapter 11: Distributing Software Throughout the System (page 139).
The thin nodes do not need to be updated with these files to complete this procedure; updating them is a matter of maintaining consistency among all the nodes in the HP XC system. The thin nodes can be brought up to date with these changes at a later time.
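A sketch of such an update, run on a thin node, might look like the following (the server and image names here are placeholders, and the exact options are described in Chapter 11):
# updateclient -server headnode -image base_image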
10. If improved availability is in effect, enter the transfer_to_avail command:
# transfer_to_avail
11. Use the sinfo and lshosts commands to verify the SLURM nodes and partitions and the LSF
hosts, respectively:
# sinfo
PARTITION AVAIL  TIMELIMIT NODES  STATE NODELIST
lsf          up   infinite     6   idle xc[7-120]
# lshosts
HOST_NAME type model cpuf ncpus maxmem maxswp server RESOURCES