I/O cluster details

Table 5. I/O cluster configuration
CLIENTS: 64 PowerEdge M420 blade servers, 32 blades in each of two PowerEdge M1000e chassis.
CHASSIS CONFIGURATION: Two PowerEdge M1000e chassis, each with 32 blades, two Mellanox M4001F FDR10 I/O modules, and two PowerConnect M6220 I/O switch modules.
INFINIBAND FABRIC (for I/O traffic): Each PowerEdge M1000e chassis has two Mellanox M4001F FDR10 I/O module switches. Each FDR10 I/O module has four uplinks to a rack-level Mellanox SX6025 FDR switch, for a total of 16 uplinks. The FDR rack switch has a single FDR link to the NFS server.
ETHERNET FABRIC (for cluster deployment and management): Each PowerEdge M1000e chassis has two PowerConnect M6220 Ethernet switch modules. Each M6220 switch module has one link to a rack-level PowerConnect 5224 switch, and there is one link from the rack PowerConnect switch to an Ethernet interface on the cluster master node.
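
As a rough sanity check on this fabric, the sketch below computes the uplink count and the bandwidth funnel along the I/O path. The link rates are assumptions based on nominal InfiniBand data rates (FDR10 at roughly 40 Gb/s and FDR at roughly 56 Gb/s per 4x port); delivered throughput will be lower in practice.

```python
# Back-of-the-envelope oversubscription check for the I/O fabric above.
# Link rates are assumed nominal data rates, not measured throughput:
# FDR10 ~= 40 Gb/s and FDR ~= 56 Gb/s per 4x port.
FDR10_GBPS = 40.0
FDR_GBPS = 56.0

chassis = 2                # PowerEdge M1000e chassis
modules_per_chassis = 2    # Mellanox M4001F FDR10 I/O modules per chassis
uplinks_per_module = 4     # uplinks from each module to the SX6025 switch
clients = 64               # M420 blades, one FDR10 port each

uplinks = chassis * modules_per_chassis * uplinks_per_module  # 2*2*4 = 16
client_bw = clients * FDR10_GBPS   # aggregate bandwidth at the client edge
uplink_bw = uplinks * FDR10_GBPS   # aggregate bandwidth of the 16 uplinks
server_bw = FDR_GBPS               # the single FDR link to the NFS server

print("uplinks to rack switch : %d" % uplinks)
print("edge -> uplink ratio   : %.1f:1" % (client_bw / uplink_bw))
print("uplink -> server ratio : %.1f:1" % (uplink_bw / server_bw))
```

Under these assumptions, the single FDR link into the NFS server is by far the narrowest point in the path, which is why the measurements in this paper focus on the NFS server rather than the fabric.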
I/O compute node configuration
CLIENT: PowerEdge M420 blade server
PROCESSORS: Dual Intel Xeon E5-2470 @ 2.30 GHz
MEMORY: 48 GB (6 x 8 GB 1600 MT/s RDIMMs)
INTERNAL DISK: 1 x 50 GB SATA SSD
INTERNAL RAID CONTROLLER: PERC H310 Embedded
CLUSTER ADMINISTRATION INTERCONNECT: Broadcom NetXtreme II BCM57810
I/O INTERCONNECT: Mellanox ConnectX-3 FDR10 mezzanine card
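
To confirm that each blade matches this configuration, a minimal inventory sketch like the one below can be run on every client. It reads only standard Linux /proc files, so it works on the RHEL 6 image described in the next table; the expected values in the comments come from the table above.

```python
# Minimal node-inventory sketch: report the CPU model and total memory of
# the local node so each M420 blade can be checked against the table above
# (expected: Intel Xeon E5-2470, ~48 GB). Reads standard /proc files only.

def cpu_model():
    # The first "model name" line in /proc/cpuinfo identifies the processor.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()
    return "unknown"

def mem_total_gb():
    # MemTotal in /proc/meminfo is reported in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024.0 ** 2)
    return 0.0

print("CPU    : %s" % cpu_model())
print("Memory : %.1f GB" % mem_total_gb())
```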
I/O cluster software and firmware
BIOS: 1.3.5
iDRAC: 1.23.23 (Build 1)
OPERATING SYSTEM: Red Hat Enterprise Linux (RHEL) 6.2
KERNEL: 2.6.32-220.el6.x86_64
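
A similar sketch can verify that a client is running the listed OS, kernel, and BIOS versions. The sysfs path used for the BIOS version is the standard DMI location on Linux; the iDRAC firmware is managed out of band and is not checked here.

```python
# Sketch: compare a client's kernel and BIOS versions against the versions
# listed above. /sys/class/dmi/id/bios_version is the standard sysfs DMI
# attribute; iDRAC firmware is queried out of band and is not checked here.
import platform

EXPECTED = {
    "KERNEL": "2.6.32-220.el6.x86_64",
    "BIOS": "1.3.5",
}

def first_line(path):
    try:
        with open(path) as f:
            return f.readline().strip()
    except IOError:
        return "unavailable"

actual = {
    "KERNEL": platform.release(),
    "BIOS": first_line("/sys/class/dmi/id/bios_version"),
}

print("OS     : %s" % first_line("/etc/redhat-release"))
for name in ("KERNEL", "BIOS"):
    status = "OK" if actual[name] == EXPECTED[name] else "MISMATCH"
    print("%-6s : %s (expected %s) %s"
          % (name, actual[name], EXPECTED[name], status))
```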