Dell HPC NFS Storage Solution High Availability Configurations with Large Capacities
Section 5.4 describes these functionality tests and their results. Functionality testing was similar to the work done in previous versions of the solution (4).
A 64-node HPC cluster was used to generate the I/O workload for testing the performance of the NSS-HA. The performance of the 144TB and 288TB solutions was measured on this test bed for both InfiniBand-based and Ethernet-based clients. Details of the test bed are provided in Section 5.2, and results of the performance study are presented in Section 6.
5.2. Test bed
The test bed used to evaluate the NSS-HA functionality and performance is shown in Figure 7. A 64-node HPC compute cluster was used to provide I/O traffic to the NSS.
For the NSS-HA, two PowerEdge R710 servers were used as the NFS servers. Both servers were connected to shared PowerVault MD3200 SAS storage extended with PowerVault MD1200 arrays (Figure 7 shows a 144TB solution with four MD storage arrays). A PowerConnect 5424 Gigabit Ethernet switch served as the private HA cluster network between the servers.
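The shared-storage and private-network wiring described above can be sanity-checked from either NFS server. The commands below are a generic sketch, not taken from this paper: the peer hostname is hypothetical, and the output layout depends on the local multipath configuration.

```shell
# Verify that both SAS paths from this server to the shared
# MD3200/MD1200 storage are visible to the multipath layer
multipath -ll

# Confirm the private HA cluster link to the peer NFS server
# (the hostname "nfs-server-2-private" is illustrative)
ping -c 3 nfs-server-2-private
```

Running these checks on both servers before cluster configuration helps confirm that each node can reach the shared storage and its HA partner.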
The NFS servers were connected to the compute cluster via InfiniBand or 10 Gigabit Ethernet.
Complete configuration details are provided in Table 5, Table 6, Table 7 and Table 8.
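On the client side, the NSS export would be mounted over whichever fabric connects the compute cluster to the NFS servers. A minimal sketch follows; the server names, export path, and mount options are assumptions for illustration, not details from this paper.

```shell
# Mount the NSS export over IPoIB (InfiniBand fabric);
# "nss-ha-ib" and "/scratch" are hypothetical names
mount -t nfs -o rw,hard nss-ha-ib:/scratch /mnt/scratch

# Equivalent mount over the 10 Gigabit Ethernet fabric
mount -t nfs -o rw,hard nss-ha-10g:/scratch /mnt/scratch
```

In an HA configuration the clients would typically mount a floating (virtual) IP address that fails over between the two NFS servers, so the mount survives a server failure.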