
Ready Solutions Engineering Test Results
Skylake memory study
Authors: Joseph Stanfield, Garima Kochhar, Donald Russell, and Bruce Wagner.
HPC Engineering and System Performance Analysis Teams, HPC Innovation Lab, January 2018
To function efficiently in an HPC environment, a cluster of compute nodes must work in tandem to process complex data and achieve
the desired results. The user expects each node to perform at its peak as an individual system, as well as part of an intricate
group of nodes processing data in parallel. Efficient cluster performance starts with good single-system performance. With
that in mind, we evaluated the impact of different memory configurations on single-node memory bandwidth using the
STREAM benchmark. The servers used here support the latest Intel Skylake processors (the Intel Xeon Scalable Processor Family) and are from
the Dell EMC 14th generation (14G) server product line.
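
STREAM is John McCalpin's standard memory bandwidth benchmark (https://www.cs.virginia.edu/stream/). As a rough illustration of what its Triad kernel measures, the sketch below implements a Triad-style loop in C with OpenMP; it is not the official benchmark, and the array size, repetition count, scalar, and build flags are our own illustrative choices.

    /*
     * Triad-style bandwidth probe: a minimal sketch in the spirit of the
     * STREAM Triad kernel, not the official STREAM benchmark
     * (https://www.cs.virginia.edu/stream/).
     *
     * Build (assumed): gcc -O3 -fopenmp triad.c -o triad
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N      80000000L   /* ~640 MB per array, far larger than any cache */
    #define NTIMES 10
    #define SCALAR 3.0

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) { fprintf(stderr, "allocation failed\n"); return 1; }

        /* Parallel first-touch initialization so memory pages land on the
         * NUMA node of the thread that will later access them. */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

        double best = 0.0;
        for (int k = 0; k < NTIMES; k++) {
            double t0 = omp_get_wtime();
            #pragma omp parallel for
            for (long i = 0; i < N; i++)
                a[i] = b[i] + SCALAR * c[i];   /* Triad: two loads, one store */
            double t = omp_get_wtime() - t0;
            /* three arrays of N doubles move per iteration */
            double gbs = 3.0 * N * sizeof(double) / t / 1e9;
            if (gbs > best) best = gbs;
        }
        printf("best Triad bandwidth: %.1f GB/s\n", best);
        free(a); free(b); free(c);
        return 0;
    }

Thread placement matters for this kind of measurement: one would typically set OMP_PROC_BIND and OMP_PLACES so that threads, and therefore first-touch page placement, are spread evenly across the populated memory channels of each socket.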
Fewer than 6 DIMMs per socket
The Skylake processor has an integrated memory controller, like previous-generation Xeons, but now supports six memory channels
per socket. This is an increase from the four memory channels found in the previous-generation Xeon E5-2600 v3 and E5-2600 v4
processors. Different Dell EMC server models offer different numbers of memory slots depending on server density, but all servers offer at
least one memory module slot on each of the six memory channels per socket.
For applications that are sensitive to memory bandwidth and require predictable performance, configuring memory to match the underlying
architecture is an important consideration. For optimal memory performance, all six memory channels of a CPU should be populated
with memory modules (DIMMs), and populated identically. This is called a balanced memory configuration. In a balanced
configuration all DIMMs are accessed uniformly and the full complement of memory channels is available to the application. An
unbalanced memory configuration leads to lower memory performance, as some channels are unused or used unequally. Even
worse, an unbalanced memory configuration can lead to unpredictable memory performance, depending on how the system fractures the
memory space into multiple regions and how Linux maps out these memory domains.
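
One quick way to sanity-check a node's DIMM population on Linux is to count the installed memory devices reported in the SMBIOS tables. The hedged sketch below parses the output of dmidecode -t 17 (the "Memory Device" records); it assumes the dmidecode tool is installed and the program runs as root, and since slot-to-channel mapping is vendor-specific it only counts filled versus empty slots.

    /*
     * Hedged sketch: count installed vs. empty DIMM slots from SMBIOS
     * "Memory Device" (type 17) records via dmidecode. Assumes dmidecode
     * is installed and this runs as root.
     *
     * Build (assumed): gcc -O2 dimmcheck.c -o dimmcheck ; run: sudo ./dimmcheck
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *p = popen("dmidecode -t 17", "r");
        if (!p) { perror("popen"); return 1; }

        char line[512];
        int installed = 0, empty = 0;
        while (fgets(line, sizeof line, p)) {
            /* dmidecode indents fields with a tab: "\tSize: 32 GB" or
             * "\tSize: No Module Installed" */
            if (strncmp(line, "\tSize:", 6) == 0) {
                if (strstr(line, "No Module Installed"))
                    empty++;
                else
                    installed++;
            }
        }
        pclose(p);

        printf("DIMM slots: %d installed, %d empty\n", installed, empty);
        if (installed % 6 != 0)
            printf("warning: installed DIMM count is not a multiple of six; "
                   "a Skylake socket's six channels may be populated unevenly\n");
        return 0;
    }

On a balanced dual-socket Skylake node, one would expect the installed count to be a multiple of twelve, split evenly between the two sockets.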
Figure 1. Relative memory bandwidth with different numbers of DIMMs on one socket. PowerEdge C6420, Intel Xeon Platinum 8176, 32 GB 2666 MT/s DIMMs.
Figure 1 shows the drop in performance when all six memory channels of a 14G server are not populated. Using all six memory
channels per socket is the best configuration, and will give the most predictable performance. Note that populating five DIMMs (relative
bandwidth 0.36) actually performs worse than populating four (0.69), illustrating the unpredictable penalty of an unbalanced
configuration. This data was collected using the Intel
Relative memory bandwidth of one socket with different numbers of DIMMs [higher is better]:

    DIMMs populated (one per memory channel):   1     2     3     4     5     6
    Relative memory bandwidth:                  0.17  0.35  0.51  0.69  0.36  1.00
