Eight Zenith nodes to a single F800 node: ~40.06 Gbits/sec.
Twenty Zenith nodes to four F800 nodes: ~154.29 Gbits/sec.
The iperf results demonstrate that the bandwidth between a single Zenith node and a single F800 node is approximately 9.9 Gbits/sec
(or ~1.2 GB/sec), the maximum bandwidth between the Zenith nodes and a single F800 node is approximately 40 Gbits/sec (or ~5 GB/sec), and the
aggregate bandwidth between the Zenith nodes and the F800 cluster is approximately 154.29 Gbits/sec (or ~19 GB/sec). Similar
results were obtained for the H600:
A single Zenith node to a single H600 node: ~9.9 Gbits/sec.
A single H600 node to a Zenith node: ~9.8 Gbits/sec.
Eight Zenith nodes to a single H600 node: ~39.81 Gbits/sec.
Twenty Zenith nodes to four H600 nodes: ~120.83 Gbits/sec.
The iperf results demonstrate that the bandwidth between a single Zenith node and a single H600 node is approximately 9.9 Gbits/sec
(or ~1.2 GB/sec), the maximum bandwidth between the Zenith nodes and a single H600 node is approximately 40 Gbits/sec (or ~5 GB/sec), and the
aggregate bandwidth between the Zenith nodes and the H600 cluster is approximately 120.83 Gbits/sec (or ~15 GB/sec).
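These point-to-point measurements are typically taken by running an iperf server on one node and an iperf client on the other. The commands below are an illustrative sketch only: the address is a placeholder, and the test duration and number of parallel streams are assumptions, not the options used in the lab.
An example iperf server command (on the Isilon node):
iperf -s
An example iperf client command (on the Zenith node):
iperf -c <Isilon_node_IP> -t 60 -P 4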
I/O Tests
IOR v3.0.1 is the preferred I/O benchmark in the Dell HPC Innovation Labs for several reasons:
• Developed by Lawrence Livermore National Laboratory to evaluate I/O performance
• IOR scales much better than Iozone for multiple clients
• Can be used to measure I/O performance via
  o multiple access patterns
  o different storage configurations
  o several file sizes
• Can be used to evaluate the performance of different parallel I/O interfaces
  o POSIX, MPI-IO, HDF5, and NETCDF
• IOR makes “apples to apples” comparisons with other filesystems possible
Before executing any IOR commands, the iotest directory was first deleted and then recreated. Before and between IOR test runs, the
NFS share was unmounted and then remounted on each compute node used in the tests. NFSv3 was used for all tests.
An example mount command:
mount -o tcp,vers=3 -t nfs <NFS_IP>:/ifs/iotest /mnt/F800
An example unmount command:
umount /mnt/F800
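Because the share is remounted on every compute node between runs, this step lends itself to a simple loop. The sketch below is illustrative only: the hosts file and the use of ssh are assumptions, not the tooling used in the lab.
# Remount the NFS share on each compute node between IOR runs (illustrative).
# "hosts" is a hypothetical file listing the compute node names, one per line.
for host in $(cat hosts); do
    ssh $host "umount /mnt/F800; mount -o tcp,vers=3 -t nfs <NFS_IP>:/ifs/iotest /mnt/F800"
done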
Sequential I/O Performance Tests (N-to-N)
The F800 and H600 storage systems were benchmarked using the default OneFS configuration, with endurant cache enabled, multi-writer enabled, and coalescer disabled.
For sequential I/O tests with IOR, a block size of 1024 KB was used. There were seven test cases (1-client, 2-client, 4-client, 8-client, 16-client,
32-client, and 64-client). For each test, IOR generated individual threads on all the compute nodes for reading/writing data,
and the total workload was 2 TB. For example, in the 1-client test case, one thread was generated on one compute node to
read/write a 2 TB file. In the 2-client case, two compute nodes were used and each client node generated one thread to
read/write a 1 TB file concurrently. The IOR commands used for these tests are listed in the Appendix. Figure 4 illustrates the
sequential write performance.
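The authoritative command lines are those in the Appendix; the line below is only a rough sketch of the 2-client, file-per-process case described above. The MPI launcher, host file, and file path are assumptions, and the 1024 KB value is mapped here to IOR's transfer size (-t), which is one reading of the block size noted above.
An example (illustrative) IOR command for the 2-client case:
mpirun -np 2 -machinefile hosts ior -a POSIX -F -w -r -t 1024k -b 1024g -o /mnt/F800/iotest/ior_file
With -F (file per process), each of the two clients reads/writes its own 1 TB file (-b 1024g), giving the 2 TB total workload.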