
sometime in the future, depending on resource availability and
batch system scheduling policies.
Batch job submissions typically provide instructions on I/O
management, such as the files from which to read input and
the file names in which to collect output.
By default, LSF-HPC jobs are batch jobs. The output is e-mailed
to the user, which requires that e-mail be set up properly.
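For example, a submission such as the following (the application
name and output file name are placeholders) runs as an LSF-HPC
batch job; the -o option collects the output in a file instead of
sending it by e-mail:

    $ bsub -n4 -o myjob.out srun ./my_app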
SLURM batch jobs are submitted with the srun -b command.
By default, the output is written to
$CWD/slurm-SLURMjobID.out from the node on which the
batch job was launched.
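For example, a submission such as the following (the application
name is a placeholder) runs as a SLURM batch job, and its output
is collected in slurm-SLURMjobID.out in the current working
directory:

    $ srun -b -n2 ./my_app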
Use Ctrl-C at any time to terminate the job.
Interactive batch job A job submitted to LSF-HPC or SLURM that maintains I/O
connections with the terminal from which the job was
submitted. The job is also subject to resource availability and
scheduling policies, so it may pause before starting. While the
job runs, its output is displayed on the terminal, and the user
can provide input if the job allows it.
By default, SLURM jobs are interactive. Interactive LSF-HPC
jobs are submitted with the bsub -I command.
Use Ctrl-C at any time to terminate the job.
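For example, the following command submits an interactive LSF-HPC
job that runs hostname on four processors; the processor count is
only illustrative:

    $ bsub -I -n4 srun hostname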
Serial job A job that requests only one slot and does not specify any of
the following constraints:
mem
tmp
mincpus
nodes
Serial jobs are allocated a single CPU on a shared node with
minimal capacities that satisfies the other allocation criteria.
LSF-HPC always tries to run multiple serial jobs on the same
node, one CPU per job. Parallel jobs and serial jobs cannot run
on the same node.
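For example, a submission such as the following (the application
name is a placeholder) requests a single slot and no additional
constraints, so it is scheduled as a serial job:

    $ bsub -n1 ./my_serial_app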
Pseudo-parallel job A job that requests only one slot but specifies any of these
constraints:
mem
tmp
nodes=1
mincpus > 1
Pseudo-parallel jobs are allocated one node for their exclusive
use.
NOTE: Do NOT rely on this feature to provide node-level
allocation for small jobs in job scripts. Use the SLURM[nodes]
specification instead, along with the mem, tmp, and mincpus
allocation options.
LSF-HPC considers this job type a parallel job because the
job requests explicit node resources. LSF-HPC does not monitor
these additional resources, so it cannot schedule any other jobs
on the node without risking resource contention. Therefore, the
node is allocated to the pseudo-parallel job for its exclusive use.
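For example, a submission along the following lines requests
explicit node-level allocation as recommended in the preceding
note; the -ext external scheduler option form and the application
name are assumptions used for illustration:

    $ bsub -n2 -ext "SLURM[nodes=1]" srun ./my_app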