
Understanding the Role of Dell EMC Isilon SmartConnect in Genomics
Workloads
Kihoon Yoon, Dell EMC HPC Innovation Lab, November 2016
Coming together with EMC has opened many new opportunities for the Dell EMC HPC Team to develop
high-performance computing and storage solutions for the Life Sciences. Our lab recently stood up a
‘starter’ 3-node Dell EMC Isilon X410 cluster. As a loyal user of the Isilon X210 in a previous role, I
couldn’t wait to start profiling genomics applications using the X410 with the Dell EMC HPC System for
Life Sciences.
Because our Isilon X410 storage cluster is currently fixed at the 3-node minimum, we aren’t set up yet
to evaluate the scalability of the X410 with genomics workflows. We will tackle this work once our lab
receives additional X nodes and the new Isilon All-Flash node (formerly project Nitro).
In the meantime, I wanted to understand how the Isilon storage behaves relative to other storage
solutions and decided to focus on the role of Isilon SmartConnect.
Through a single host name, SmartConnect enables client connection load balancing and dynamic
network file system (NFS) failover and failback of client connections across storage nodes to provide
optimal utilization of the Isilon cluster resources.
Without the need to install client-side drivers, administrators can easily manage a large and growing
number of clients and ensure that, in the event of a system failure, in-flight reads and writes complete
successfully.
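SmartConnect accomplishes this through DNS: the site name server delegates the SmartConnect zone to the cluster, which answers each lookup with the IP address of a node chosen by the configured balancing policy, so nothing has to be installed on the clients. The short Python sketch below illustrates the idea from the client side. The zone name isilon.example.com is a hypothetical placeholder, and local resolver caching can mask the rotation, so treat this as a quick sanity check rather than a measurement tool.

import socket
from collections import Counter

# Hypothetical SmartConnect zone name; substitute the zone delegated to
# your Isilon cluster in the site DNS.
ZONE_NAME = "isilon.example.com"
ATTEMPTS = 20

def sample_resolutions(name, attempts):
    """Resolve the zone name repeatedly and tally which node IPs come back.

    With round-robin balancing, successive lookups should rotate through
    the pool of node interface addresses.
    """
    seen = Counter()
    for _ in range(attempts):
        # gethostbyname returns a single A record per query, which is how
        # SmartConnect steers each new client connection toward one node.
        seen[socket.gethostbyname(name)] += 1
    return seen

if __name__ == "__main__":
    for ip, count in sample_resolutions(ZONE_NAME, ATTEMPTS).items():
        print(f"{ip}: {count} of {ATTEMPTS} lookups")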
Traditional storage systems with two-way failover typically sustain a minimum 50 percent degradation in
performance when a storage head fails, as all clients must fail over to the remaining head. With Isilon
SmartConnect, clients are evenly distributed across all remaining nodes in the cluster during failover,
helping to ensure minimal performance impact.
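A quick back-of-the-envelope calculation shows why cluster-wide redistribution matters. The sketch below assumes clients generate roughly equal load and that a node's burden scales with the share of clients it serves; the numbers are illustrative, not measured results.

def surviving_node_load(total_nodes, failed_nodes=1):
    """Per-node load after a failure, relative to the pre-failure per-node load.

    Assumes clients were spread evenly before the failure and are
    redistributed evenly across the surviving nodes, which is the
    behavior SmartConnect aims for.
    """
    survivors = total_nodes - failed_nodes
    return total_nodes / survivors

# Two-head system: the lone survivor absorbs the entire workload.
print(f"2-head system: {surviving_node_load(2):.2f}x load per surviving head")

# Scale-out cluster: the per-node bump shrinks as the cluster grows.
for n in (3, 8, 16):
    print(f"{n}-node cluster: {surviving_node_load(n):.2f}x load per surviving node")

For a 3-node cluster the gap is modest (1.5x versus 2x on a surviving node), but it widens with every node added to the cluster.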
To test this concept, I ran the GATK pipeline, varying the number of samples and compute nodes, both
with and without SmartConnect enabled on the Isilon storage cluster.
The configuration of our current lab environment and the whole human genome sequencing data used for
this evaluation are listed below.
Table 1. System configuration, software, and data

Dell EMC HPC System for Life Sciences
Server: 40 x PowerEdge C6320
Processor: 2 x Intel Xeon E5-2697 v4, 18 cores per socket, 2.3 GHz
Memory: 128 GB at 2400 MT/s
Interconnect: 10GbE NIC and switch for accessing Isilon & Intel Omni-Path fabric
Software:
