
Weight
The scheduling priority of the node. Nodes with a lower Weight value are allocated before
nodes with a higher Weight value, all else being equal.
To change the configuration of a set of nodes, locate and edit the line in the slurm.conf file
that begins with the following text:
NodeName=
Multiple node sets are allowed on the HP XC system; the initial configuration specifies a single
node set.
Consider a system that has 512 nodes, and all those nodes are in the same partition. SLURM
partitions are discussed in “Configuring SLURM Partitions”. The system's slurm.conf file
contains the following line:
NodeName=n[1-512] Procs=2 # Set by the SLURM configuration process
This might be reset as follows:
NodeName=DEFAULT Procs=2 TmpDisk=1024 RealMemory=4000
NodeName=n[1-15] RealMemory=1000 Weight=8
NodeName=n[16-127,129] RealMemory=2000 Weight=16
NodeName=n[128,130-512] Feature="noswap" Procs=4 Weight=50
This reset accomplishes the following after SLURM reconfiguration:
The assignments on the NodeName=DEFAULT line specify that, unless otherwise specified, all
nodes have two processors, 1 GB (1,024 MB) of temporary disk space, and 4,000 MB of real
memory.
Nodes 1–15 are allocated before any of the others because their Weight is the lowest.
Nodes 128 and 130–512 are allocated last, unless a user requests large memory, the locally
defined noswap feature, or nodes with four processors.
If you make any changes, be sure to run the scontrol reconfigure command to update
SLURM with these new settings.
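For example, after editing slurm.conf, you might apply and then verify the new settings with
commands similar to the following; the sinfo format string shown is only one possible way to
display each node's processor count, memory, temporary disk space, and scheduling weight:
# scontrol reconfigure
# sinfo --Node -o "%N %c %m %d %w"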
15.2.4 Configuring SLURM Partitions
Nodes are grouped together in partitions. A node can belong to only one partition.
A SLURM job cannot be scheduled to run across partitions.
Only the superuser (root) and the SLURM system administrator (SlurmUser) are allowed to
allocate resources for any other user.
Your HP XC system is configured initially with all compute nodes in a single SLURM partition,
called lsf. In some situations, you might want to remove some nodes from the lsf partition
and manage them directly with SLURM, submitting jobs to those nodes with the srun
--partition=partition-name command.
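For example, assume that nodes n[505-512] are removed from the Nodes= list of the lsf
partition and placed in a SLURM-managed partition arbitrarily named debug. The partition
definition in slurm.conf might look like the following:
PartitionName=debug Nodes=n[505-512]
An ordinary user could then run a job on those nodes with a command such as:
$ srun --partition=debug -n 4 hostname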
LSF manages only one partition. If present, the LSF partition must have these characteristics
specified:
PartitionName=lsf
RootOnly=YES
Shared=FORCE
Assigning YES to the RootOnly= characteristic means that only the superuser (root) can create
allocations for normal user jobs.
The Shared=FORCE characteristic ensures that more than one job can run on the same node.
LSF-HPC with SLURM uses this facility to support preemption and scheduling multiple serial
jobs on the same node (node sharing). The FORCE value makes all nodes in the partition available
for sharing and gives users no means of disabling it.
Do not configure the MaxNodes, MaxTime, or MinNodes parameters in an LSF partition; these
parameters conflict with LSF scheduling decisions.
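Taken together, a complete lsf partition definition in slurm.conf might look like the following
line; the Nodes= range assumes the initial configuration in which all 512 nodes of the earlier
example belong to the lsf partition, and MaxNodes, MaxTime, and MinNodes are deliberately
left unset:
PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=n[1-512]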