pairs. Each additional OSS increases the existing networking throughput, while each additional OST
increases the storage capacity. Figure 1 shows the relationship of the MDS, MDT, MGS, OSS and OST
components of a typical Lustre configuration. Clients in the figure are the HPC cluster’s compute
nodes.
Figure 1: Lustre-based storage solution components
A parallel file system, such as Lustre, delivers performance and scalability by distributing data
(“striping” data) across multiple Object Storage Targets (OSTs), allowing multiple compute nodes to
efficiently access the data simultaneously. A key design consideration of Lustre is the separation of
metadata access from I/O data access to improve overall system performance.
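As an illustration of how striping is controlled in practice, the sketch below drives the standard lfs
client utility from Python; the directory path, stripe count, and stripe size are assumptions chosen
for the example rather than values prescribed by this solution.

    import subprocess

    # Stripe files created under /mnt/lustre/scratch across 4 OSTs in 1 MiB
    # chunks; the path and layout values are illustrative only and should be
    # chosen to match the OST count and workload of the deployed solution.
    subprocess.run(
        ["lfs", "setstripe", "-c", "4", "-S", "1m", "/mnt/lustre/scratch"],
        check=True,
    )

    # Report the resulting layout (stripe count, stripe size, OST indices).
    layout = subprocess.run(
        ["lfs", "getstripe", "/mnt/lustre/scratch"],
        check=True, capture_output=True, text=True,
    )
    print(layout.stdout)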
The Lustre client software is installed on the compute nodes to allow access to data stored on the
Lustre file system. To the clients, the file system appears as a single namespace that can be mounted
for access. This single mount point provides a simple starting point for application data access, and
allows access via native client operating system tools for easier administration.
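As a sketch of what this single mount point looks like from a client node, the example below issues
the standard Lustre client mount; the MGS NID mgs01@o2ib, the file system name lustre, and the mount
point /mnt/lustre are placeholder values for illustration only.

    import subprocess

    # Mount the Lustre file system at a single mount point on a compute node.
    # "mgs01@o2ib" (the MGS NID), the file system name "lustre", and the
    # mount point are placeholders; substitute the values of the actual site.
    subprocess.run(
        ["mount", "-t", "lustre", "mgs01@o2ib:/lustre", "/mnt/lustre"],
        check=True,
    )

    # Once mounted, the namespace is reachable through ordinary OS tools.
    listing = subprocess.run(
        ["ls", "/mnt/lustre"], capture_output=True, text=True, check=True,
    )
    print(listing.stdout)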
Lustre includes an enhanced storage network protocol, Lustre Networking, referred to as LNet. LNet can
leverage the capabilities of the underlying network fabric. For example, when the Dell HPC Lustre
Storage solution uses InfiniBand as the network connecting the clients, MDSs, and OSSs, LNet enables
Lustre to take advantage of the RDMA capabilities of the InfiniBand fabric, providing faster I/O
transport and lower latency than conventional networking protocols.
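As a minimal sketch of how LNet is pointed at the InfiniBand fabric, the snippet below writes the
conventional lnet module option and then lists the node's network identifiers; the interface name ib0
and the file path /etc/modprobe.d/lustre.conf are assumptions, and lctl is assumed to be installed
with the Lustre client software.

    import pathlib
    import subprocess

    # Tell LNet to use the o2ib LND (the RDMA-capable LNet driver) on the
    # InfiniBand interface ib0. The interface name and config file path are
    # assumptions; adjust them to the actual fabric-facing interface.
    pathlib.Path("/etc/modprobe.d/lustre.conf").write_text(
        "options lnet networks=o2ib0(ib0)\n"
    )

    # After the lnet module is reloaded, list this node's LNet identifiers;
    # an NID ending in @o2ib0 confirms the RDMA transport is configured.
    nids = subprocess.run(
        ["lctl", "list_nids"], capture_output=True, text=True, check=True,
    )
    print(nids.stdout)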
To summarize, the elements of the Lustre file system are as follows:
Metadata Target (MDT): Stores the location of “stripes” of data, file names, time stamps, etc.
Management Target (MGT): Stores management data such as configuration and registry.
Metadata Storage Server (MDS): Manages the MDT, providing Lustre clients access to files.