
BIOS characterization for HPC with Intel Skylake processor
Ashish Kumar Singh, Dell EMC HPC Innovation Lab, August 2017
This blog discusses the impact of the different BIOS tuning options available on Dell EMC 14th generation PowerEdge
servers with the Intel Xeon® Processor Scalable Family (architecture codenamed “Skylake”) for some HPC
benchmarks and applications. A brief description of the Skylake processor, BIOS options and HPC applications is
provided below.
Skylake is a new 14nm “tock” processor in the Intel “tick-tock” series, which has the same process technology as the
previous generation but with a new microarchitecture. Skylake requires a new CPU socket that is available with the
Dell EMC 14th generation PowerEdge servers. Skylake processors are available in two configurations: with
an integrated Omni-Path fabric, or without. The Omni-Path fabric supports network bandwidth up to 100 Gb/s.
The Skylake processor supports up to 28 cores, six DDR4 memory channels with speeds up to 2666 MT/s, and
additional vectorization power with the AVX512 instruction set. Intel has also introduced a new cache-coherent
interconnect named “Ultra Path Interconnect” (UPI), which replaces Intel® QPI for connecting multiple CPU sockets.
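As a rough illustration of what those memory specifications imply, the theoretical peak memory bandwidth per socket can be estimated from the channel count and transfer rate. The short Python sketch below assumes the figures quoted above (six channels at 2666 MT/s, 8 bytes per DDR4 transfer); sustained bandwidth measured with a benchmark such as STREAM will be lower.

    # Back-of-the-envelope peak memory bandwidth per Skylake socket.
    # Assumes the figures quoted above; real, sustained bandwidth is lower.
    channels = 6
    transfer_rate = 2666e6      # transfers per second (2666 MT/s)
    bytes_per_transfer = 8      # 64-bit DDR4 channel width

    peak_bw_gbs = channels * transfer_rate * bytes_per_transfer / 1e9
    print(f"Theoretical peak memory bandwidth: {peak_bw_gbs:.0f} GB/s per socket")
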
Skylake offers a new, more powerful AVX512 vectorization technology that provides 512-bit vectors. The Skylake
CPUs include models that support two 512-bit Fused Multiply-Add (FMA) units to deliver 32 Double Precision (DP)
FLOPS/cycle and models with a single 512-bit FMA unit that is capable of 16 DP FLOPS/cycle. More details on
AVX512 are described in the Intel programming reference. With 32 FLOPS/cycle, Skylake doubles the compute
capability of the previous generation, Intel Xeon E5-2600 v4 processors (“Broadwell”).
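The 32 FLOPS/cycle figure follows directly from the vector width and FMA count: a 512-bit register holds eight 64-bit doubles, each FMA counts as two floating-point operations, and two FMA units complete per cycle. The Python sketch below walks through that arithmetic and extrapolates a per-socket peak; the core count and AVX512 clock are illustrative assumptions, since AVX512 code typically runs below the base frequency.

    # Peak double-precision FLOP rate arithmetic for Skylake, as described above.
    vector_bits   = 512
    dp_bits       = 64
    fma_units     = 2          # CPU models with two 512-bit FMA units
    flops_per_fma = 2          # one multiply plus one add

    dp_per_vector   = vector_bits // dp_bits                     # 8 doubles per vector
    flops_per_cycle = dp_per_vector * flops_per_fma * fma_units  # 32 DP FLOPS/cycle

    cores      = 28            # assumption: top-bin core count
    avx512_ghz = 2.0           # assumption: AVX512 frequency, lower than base clock
    peak_gflops = cores * avx512_ghz * flops_per_cycle
    print(f"{flops_per_cycle} DP FLOPS/cycle, ~{peak_gflops:.0f} GFLOP/s per socket")
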
Skylake processors are supported in the Dell EMC PowerEdge 14th generation servers. The new processor
architecture allows different tuning knobs, which are exposed in the server BIOS menu. In addition to existing
options for performance and power management, the new servers also introduce a clustering mode called Sub-NUMA
Clustering (SNC). On CPU models that support SNC, enabling SNC is akin to splitting the single socket into
two NUMA domains, each with half the physical cores and half the memory of the socket. If this sounds familiar, it is
similar in utility to the Cluster-on-Die (COD) option that was available in E5-2600 v3 and v4 processors, as described here.
SNC is implemented differently from COD, and these changes improve remote socket access in Skylake when
compared to the previous generation. At the Operating System level, a dual socket server with SNC enabled will
display four NUMA domains. Two of the domains will be closer to each other (on the same socket), and the other two
will be farther away, across the UPI link to the remote socket. This can be seen using OS tools like numactl -H.
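As an illustration, the NUMA layout the OS reports can also be checked without numactl by reading the Linux sysfs topology, as in the Python sketch below. This is a minimal example, assuming a standard Linux sysfs layout; on a dual-socket server with SNC enabled it should list four nodes, each with half the cores of one socket.

    # Count the NUMA nodes the OS exposes and list the CPUs in each one.
    # With SNC enabled on a dual-socket Skylake server, expect four nodes.
    import glob
    import os

    nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"),
                   key=lambda p: int(p.rsplit("node", 1)[-1]))
    print(f"NUMA nodes visible to the OS: {len(nodes)}")
    for node in nodes:
        with open(os.path.join(node, "cpulist")) as f:
            print(f"  {os.path.basename(node)}: CPUs {f.read().strip()}")
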
