
Integrated Lights Out See iLO.
interconnect A hardware component that provides high-speed connectivity between the nodes in the HP
XC system. It is used by parallel applications for message passing and remote memory
access.
interconnect
module
A module in an HP BladeSystem server. The interconnect module provides the physical I/O
ports for the server blades and can be either a switch, with connections to each of the server
blades and some number of external ports, or a pass-through module, with individual
external ports for each of the server blades.
See also server blade.
interconnect
network
The private network within the HP XC system that is used primarily for user file access and
for communications within applications.
Internet address A unique 32-bit number that identifies a host's connection to an Internet network. An Internet
address is commonly represented as a network number and a host number and takes a form
similar to the following: 192.0.2.0.
IPMI Intelligent Platform Management Interface. A self-contained hardware technology available
on HP ProLiant DL145 G1 servers that enables remote management of any node within a system.
L
Linux Virtual Server See LVS.
load file A file containing the names of multiple executables that are to be launched simultaneously by
a single command.
Load Sharing Facility See LSF-HPC with SLURM.
local storage Storage that is available or accessible from one node in the HP XC system.
LSF execution
host
The node on which LSF runs. A user's job is submitted to the LSF execution host. Jobs are
launched from the LSF execution host and are executed on one or more compute nodes.
LSF master host The overall LSF coordinator for the system. The master load information manager (LIM) and
master batch daemon (mbatchd) run on the LSF master host. Each system has one master host
to do all job scheduling and dispatch. If the master host goes down, another LSF server in the
system becomes the master host.
LSF-HPC with
SLURM
Load Sharing Facility for High Performance Computing integrated with SLURM. The batch
system resource manager on an HP XC system that is integrated with SLURM. LSF-HPC with
SLURM places a job in a queue and allows it to run when the necessary resources become
available. LSF-HPC with SLURM manages just one resource: the total number of processors
designated for batch processing.
LSF-HPC with SLURM can also run interactive batch jobs and interactive jobs. An LSF interactive
batch job allows you to interact with the application while still taking advantage of LSF-HPC
with SLURM scheduling policies and features. An LSF-HPC with SLURM interactive job is run
without using LSF-HPC with SLURM batch processing features but is dispatched immediately
by LSF-HPC with SLURM on the LSF execution host.
See also LSF execution host.
LVS Linux Virtual Server. Provides a centralized login capability for system users. LVS handles
incoming login requests and directs them to a node with a login role.
M
Management Processor See MP.
master host See LSF master host.