
Eeny Meeny Miney Mo, should I go with 20 cores?
Nishanth Dandapanthula, June 2014
Intel’s Xeon E5-2600 v2 product family processors (architecture code named Ivy Bridge) have been
available in the server market for a few months now. Ivy Bridge processor based systems provide better
performance than systems based on previous generation processor families such as Sandy Bridge (Xeon E5-
2600) and Westmere (Xeon X5600). This can be attributed to several factors, such as increased core
counts enabled by the 22 nm process technology, higher clock rates, higher system memory speeds and a
larger last level cache. This performance improvement is shown in several studies, referenced in 1, 2 and 3.
So, once the decision to move to a new platform or new processor technology has been made, what next?
How should these new systems be configured? There are many choices for the processor itself: different
options with different core counts, processor frequencies, TDPs and, of course, prices. Which processor
model is optimal for a specific workload? This blog provides quantitative data and analysis to help answer
this question by comparing the performance and power profile of different processor models across a
variety of HPC applications.
Once a particular processor SKU has been chosen, another important decision remains: the choice
between single rank and dual rank memory modules. As mentioned in studies 4 and 5, the choice and
configuration of memory modules impacts the performance and power consumption of a system. This
blog also describes a performance comparison between single rank and dual rank memory modules for
several HPC applications. These studies were done in the Dell engineering lab in December 2013, and all
results presented are actual measured results.
Table 1 describes the configuration of the HPC test cluster and the benchmarks used for this analysis. Six
different Ivy Bridge processor models were studied, along with two types of memory modules, single rank
and dual rank.
BIOS options were set to the Maximum Performance profile; details are in Table 1. The four-node cluster
was interconnected with InfiniBand. All of the results shown in this blog were obtained by fully
subscribing the four servers, i.e., all cores on all servers were in use.
Table 1: Test bed configuration and benchmarks

Servers: 4 x Dell PowerEdge C6220 II sleds in one chassis

Processors per server (one model per test configuration):
  2 x E5-2643 v2 @ 3.5 GHz, 6 cores, 130 W
  2 x E5-2667 v2 @ 3.3 GHz, 8 cores, 130 W
  2 x E5-2680 v2 @ 2.8 GHz, 10 cores, 115 W
  2 x E5-2670 v2 @ 2.5 GHz, 10 cores, 115 W
  2 x E5-2690 v2 @ 3.0 GHz, 10 cores, 130 W
  2 x E5-2697 v2 @ 2.7 GHz, 12 cores, 130 W

Memory per server: 1 DPC, 8 x 16 GB 1866 MT/s dual rank
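
Since every benchmark run fully subscribes all four servers, the total process count changes with the processor model under test. The short Python sketch below is illustrative only and not part of the original study; it computes the fully subscribed rank count for each SKU in Table 1, assuming four servers with two sockets each, and prints a generic launch command as an example.

# Illustrative sketch: fully subscribed rank counts for the Table 1 SKUs.
# Assumes 4 servers and 2 sockets per server; core counts are from Table 1.
# The mpirun invocation shown is generic; the actual launcher and flags used
# in the study are not specified in this blog.

SERVERS = 4
SOCKETS_PER_SERVER = 2

# Processor model -> cores per socket (from Table 1)
sku_cores = {
    "E5-2643 v2": 6,
    "E5-2667 v2": 8,
    "E5-2680 v2": 10,
    "E5-2670 v2": 10,
    "E5-2690 v2": 10,
    "E5-2697 v2": 12,
}

for sku, cores in sku_cores.items():
    ranks = SERVERS * SOCKETS_PER_SERVER * cores
    print(f"{sku}: {ranks} ranks, e.g. mpirun -np {ranks} -hostfile hosts ./app")

For example, the 12-core E5-2697 v2 configuration runs with 4 x 2 x 12 = 96 ranks, while the 6-core E5-2643 v2 configuration runs with 48.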
