Dell PowerEdge C4130 Performance with
K80 GPUs - HPL
Authors: Saeed Iqbal and Mayura Deshmukh
There is an ever-increasing demand for compute power. This demand has pushed server designs toward higher hardware-accelerator density. However, most such designs have a fixed, standard system configuration, which may not deliver maximum performance across all application classes. The latest high-density design from Dell, the PowerEdge C4130, offers up to four GPUs in a 1U form factor. The PowerEdge C4130 is also unique in offering a configurable system design, potentially making it a better fit for a wider variety of extreme HPC applications.
This blog presents a performance characterization of the C4130 on HPL: we report the performance achieved, the power consumption, and the performance per watt of various system configurations.
The latest HPC-focused Tesla-series general-purpose GPU released by NVIDIA is the Tesla K80. From the HPC perspective, its most important improvement is 1.87 TFLOPS of double-precision compute capacity, about 30% more than the K40, the previous Tesla card. The K80's auto-boost feature automatically provides additional performance when additional power headroom is available. The card contains two internal GPUs based on the GK210 architecture, with a total of 4,992 cores, a 73% improvement over the K40. The K80 has a total memory of 24 GB, divided equally between the two internal GPUs; this is 100% more memory capacity than the K40. The memory bandwidth of the K80 is improved to 480 GB/s. The rated power consumption of a single K80 is a maximum of 300 watts.
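The improvement percentages above can be cross-checked from the raw spec numbers. The sketch below does so; the K40 figures (1.43 TFLOPS double precision, 2,880 cores, 12 GB) are assumed from NVIDIA's published specifications and are not stated in this blog.

```python
# Sanity-check the K80-vs-K40 improvements quoted in the text.
# K40 numbers are assumptions taken from NVIDIA's published specs.
K80 = {"tflops_dp": 1.87, "cores": 4992, "mem_gb": 24, "bw_gbs": 480}
K40 = {"tflops_dp": 1.43, "cores": 2880, "mem_gb": 12, "bw_gbs": 288}

def pct_gain(new, old):
    """Percentage improvement of new over old."""
    return 100.0 * (new - old) / old

print(f"DP compute gain: {pct_gain(K80['tflops_dp'], K40['tflops_dp']):.0f}%")  # ~31%
print(f"Core count gain: {pct_gain(K80['cores'], K40['cores']):.0f}%")          # ~73%
print(f"Memory gain:     {pct_gain(K80['mem_gb'], K40['mem_gb']):.0f}%")        # 100%
```

These work out to roughly 31%, 73%, and 100%, consistent with the "about 30%", "73%", and "100%" figures quoted in the text.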
The C4130 offers five configurations, “A” through “E”. Since the GPUs provide the bulk of the compute horsepower, the configurations can be divided into two groups based on expected performance: the first group of three configurations, “A”, “B” and “C”, with four GPUs each, and the second group of two configurations, “D” and “E”, with two GPUs each. The first two quad-GPU configurations have an internal PCIe switch module. The details of the various configurations are shown in Table 1 and the block diagram (Figure 1) below:
Table 1: C4130 Configurations

Configuration | GPUs | CPUs | Switch Module (SW) | GPU/CPU ratio | Comments
A             |  4   |  1   |         Y          |       4       | Single CPU, optimized for peer-to-peer communication
B             |  4   |  2   |         Y          |       2       | Dual CPUs, optimized for peer-to-peer communication
C             |  4   |  2   |         N          |       2       | Dual CPUs, balanced with four GPUs
D             |  2   |  2   |         N          |       1       | Dual CPUs, balanced with two GPUs
E             |  2   |  1   |         N          |       2       | Single CPU, balanced with two GPUs
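Since HPL performance is dominated by the GPUs, a useful baseline for each configuration is its theoretical aggregate GPU peak and maximum GPU power draw. The sketch below derives both from the per-K80 figures quoted earlier (1.87 TFLOPS double precision, 300 W maximum); achieved HPL numbers will of course be lower than these peaks.

```python
# Theoretical aggregate GPU peak (double precision) and maximum GPU
# power draw for each C4130 configuration in Table 1, using the
# per-K80 figures quoted in the text.
K80_TFLOPS_DP = 1.87
K80_MAX_WATTS = 300

gpus_per_config = {"A": 4, "B": 4, "C": 4, "D": 2, "E": 2}

for name, gpus in gpus_per_config.items():
    peak_tflops = gpus * K80_TFLOPS_DP
    max_watts = gpus * K80_MAX_WATTS
    print(f"Config {name}: {peak_tflops:.2f} TFLOPS peak, up to {max_watts} W GPU power")
```

For the quad-GPU configurations this gives a 7.48 TFLOPS double-precision GPU peak against up to 1,200 W of GPU power, which frames the performance-per-watt comparison that follows.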