Summary
Sequential I/O Performance
  o N-to-N tests
    - The F800 had better sequential read and write performance than the NSS-7.0-HA and H600
    - F800 and H600 write performance was up to 400% better than NSS-7.0-HA
    - F800 write performance was up to 20% better than H600
    - F800 read performance was up to 251% better than NSS-7.0-HA
    - F800 read performance was up to 111% better than H600
    - The H600 and NSS-7.0-HA had similar sequential read performance
  o N-to-1 tests
    - For write tests, the F800 and H600 had similar performance
    - For read tests, the F800 had ~300% better performance than the H600
Random I/O Performance (N-to-N)
  o F800 had up to 250% better random write performance than H600
  o H600 had up to 571% better random write performance than NSS-7.0-HA
  o F800 had up to 550% better random read performance than H600
  o H600 had up to 15x better random read performance than NSS-7.0-HA at 64 clients
  o F800 had 7x better random read performance than Lustre at 64 clients
  o F800 had 3x better random write performance than Lustre at 64 clients
The F800 and H600 consistently outperformed the NSS-7.0-HA and demonstrated good I/O performance scalability with both
sequential and random workloads. Furthermore, the comparison between Lustre, NSS, and the F800 showed that the F800 is the
best choice for random-I/O-intensive workloads, while Lustre is superior for sequential-intensive workloads. While Lustre sequential I/O
performance was better than the F800's by 25-75%, the F800's random I/O performance was 300-700% better than Lustre's. A case can
therefore be made that the F800 is the best overall choice for a high-performance file system, particularly if the workload has a significant
random I/O component. If features like backup, snapshots, and multi-protocol (SMB/NFS/HDFS) support are required in addition to a mixed
HPC workload, then Isilon is a better choice than Lustre or the NSS-7.0-HA. However, if the HPC workload is mixed and includes MPI-
based or other applications that require low-latency interconnects (InfiniBand or Omni-Path) in order to scale well, then Lustre is the
better choice.
REFERENCES
1. All benchmark tests were run on the Zenith cluster in the Dell HPC Innovation Lab in Round Rock, TX. Zenith ranked #292 on the
Top500 list as of November 2017: https://top500.org/list/2017/11/?page=3
2. iperf: https://sourceforge.net/projects/iperf/
3. IOR: https://github.com/LLNL/ior
4. IOzone: http://www.iozone.org/
5. Dell HPC Lustre Storage Solution: Dell HPC Lustre Storage with IEEL 3.0
6. Stable Writes: https://community.emc.com/community/products/isilon/blog/2016/10/18/stable-writes (accessed January 24, 2018)
APPENDIX
iperf commands:
# On each F800 node the iperf command was:
iperf -s -w 2M -l 1M
# On each Zenith client node the iperf command was:
iperf -c <F800_IP> -w 2M -l 1M -N -t 30
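The paper does not show how the per-node client invocations were coordinated. A minimal sketch of one way to launch them concurrently is below; the round-robin assignment of clients to F800 nodes is an assumption, the <F800_IPx> placeholders stand in for the actual node addresses, and the hosts file is taken to be the same one passed to mpirun in the IOR commands.
# Hypothetical launcher: one iperf client per Zenith node, round-robined
# across the F800 nodes. The F800_IPS values are placeholders, not the lab's IPs.
F800_IPS=(<F800_IP1> <F800_IP2> <F800_IP3> <F800_IP4>)
i=0
while read -r node; do
  ip=${F800_IPS[$((i % ${#F800_IPS[@]}))]}   # round-robin clients across F800 nodes
  ssh -n "$node" "iperf -c $ip -w 2M -l 1M -N -t 30" &
  i=$((i + 1))
done < hosts
wait   # block until every 30-second client run has finished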
IOR sequential write (N-to-N) commands:
#1-client write
mpirun --allow-run-as-root -np 1 -npernode 1 -hostfile hosts -nolocal /home/xin/bin/ior -a POSIX -v -i 1 -d 3 -e -F -k -o /mnt/nfs/test -w -s 1 -t 1m -b 2048g
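Only the 1-client write is quoted above. A hedged reconstruction of the companion runs follows, assuming the read test swaps -r for -w to reread the files kept by -k, and assuming the 2 TiB aggregate is divided evenly across processes as the client count grows; neither command is quoted from the paper.
#1-client read (assumed counterpart: -r instead of -w, rereading the files kept by -k)
mpirun --allow-run-as-root -np 1 -npernode 1 -hostfile hosts -nolocal /home/xin/bin/ior -a POSIX -v -i 1 -d 3 -e -F -k -o /mnt/nfs/test -r -s 1 -t 1m -b 2048g
#Hypothetical 64-client write, if the 2 TiB aggregate is split evenly (-b 32g per process)
mpirun --allow-run-as-root -np 64 -npernode 1 -hostfile hosts -nolocal /home/xin/bin/ior -a POSIX -v -i 1 -d 3 -e -F -k -o /mnt/nfs/test -w -s 1 -t 1m -b 32g
Dropping -F (file-per-process) from either command would produce the N-to-1 case, with all processes sharing a single file.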