same file. The overhead encountered comes from threads contending for Lustre's file locks and from the
resulting serialized writes. See Appendix A for examples of the commands used to run these benchmarks.
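As a hedged illustration of why the shared-file case carries this overhead (IOR is used here only as a representative MPI I/O benchmark; the paths, block size and transfer size are placeholders, and the actual commands are given in Appendix A), the single-shared-file and file-per-process access patterns differ only in IOR's -F flag:

    # N-to-1: all tasks write to one shared file and contend for Lustre file locks
    mpirun -np 64 --hostfile ./clients ior -a POSIX -w -b 4g -t 1m -o /mnt/lustre/ior_shared
    # N-to-N: -F gives each task its own file, removing the shared-file lock contention
    mpirun -np 64 --hostfile ./clients ior -a POSIX -w -b 4g -t 1m -F -o /mnt/lustre/ior_fpp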
Each set of tests was executed on a range of clients to test the scalability of the solution. The number
of simultaneous physical clients involved in each test varied from a single client to 64 clients. Up to 64
threads, the thread count equals the number of physical clients, each running a single thread. Thread
counts above 64 were achieved by increasing the number of threads per client across all clients. For
instance, for 128 threads, each of the 64 clients ran two threads.
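To make that placement concrete (a sketch only, not the Appendix A commands: Open MPI launcher syntax is assumed, and the host file and test file paths are placeholders), a 128-thread run launches two benchmark tasks on each of the 64 clients listed in the host file:

    # ./clients lists the 64 compute clients; --map-by node spreads the 128 MPI ranks round-robin, two per client
    mpirun -np 128 --hostfile ./clients --map-by node ior -a POSIX -w -b 4g -t 1m -o /mnt/lustre/ior_test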
The test environment for the solution has a single MDS pair and a single OSS pair with a total of 960TB
of raw disk space. The OSS pair contains two PowerEdge R630s, each with 256GB of memory, two
12Gbps SAS controllers and a single Mellanox ConnectX-3 FDR HCA. Consult the Dell Storage for HPC
with Intel EE for Lustre Configuration Guide for details of cabling and expansion card locations. The
MDS pair has an identical configuration: each server has 256GB of memory, a Mellanox ConnectX-3 FDR HCA
and dual 12Gbps SAS controllers.
The InfiniBand fabric consists of a 32-port Mellanox M3601Q QDR InfiniBand switch for the client
cluster and a 36-port Mellanox SX6025 FDR InfiniBand switch for the Dell Storage for HPC with Intel EE
for Lustre Solution servers. Three ports from the M3601Q switch are connected to the SX6025 switch.
Table 2 details the software and hardware components used in the test environment.