Figure 3: The topology of a compute node
Table 2: PowerEdge C4140 Configurations
Component                  Details
Server Model               PowerEdge C4140
Processor                  2 x Intel Xeon Gold 6148 CPU @ 2.40GHz
Memory                     24 x 16GB DDR4 2666MT/s DIMMs (384GB total)
Local Disks                120GB SSD, 1.6TB NVMe
I/O & Ports                Network daughter card with 2 x 10GE + 2 x 1GE
Network Adapter            1 x InfiniBand EDR adapter
GPU                        4 x V100-SXM2 16GB
Out of Band Management     iDRAC9 Enterprise with Lifecycle Controller
Power Supplies             2000W hot-plug Redundant Power Supply Unit (PSU)
2.2.1 GPU
The NVIDIA Tesla V100 is the latest data center GPU available to accelerate Deep Learning. Powered by the NVIDIA Volta architecture, it enables data scientists, researchers, and engineers to tackle challenges that were once difficult. With 640 Tensor Cores, the Tesla V100 is the first GPU to break the 100 teraflops (TFLOPS) barrier of Deep Learning performance.
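
The 100 TFLOPS figure can be sanity-checked from the Tensor Core count and the boost clocks listed in Table 3 below. The short Python sketch that follows is illustrative and not part of the original guide; it assumes each Volta Tensor Core performs one 4x4x4 matrix fused multiply-add per clock, i.e. 64 multiply-adds or 128 FLOPs.

# Back-of-the-envelope peak Tensor Core throughput (illustrative sketch).
# Assumption: each Volta Tensor Core performs one 4x4x4 matrix FMA per clock,
# i.e. 64 multiply-adds = 128 FLOPs per clock.

TENSOR_CORES = 640
FLOPS_PER_CORE_PER_CLOCK = 128  # 64 FMAs x 2 FLOPs each

def peak_tensor_tflops(boost_clock_mhz):
    """Peak mixed-precision Tensor Core throughput in TFLOPS."""
    return TENSOR_CORES * FLOPS_PER_CORE_PER_CLOCK * boost_clock_mhz * 1e6 / 1e12

print("V100-SXM2 @ 1530 MHz: %.0f TFLOPS" % peak_tensor_tflops(1530))  # ~125
print("V100-PCIe @ 1380 MHz: %.0f TFLOPS" % peak_tensor_tflops(1380))  # ~113

At the SXM2 boost clock this works out to roughly 125 TFLOPS of mixed-precision throughput, consistent with the claim of breaking the 100 TFLOPS barrier.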
Table 3: V100-SXM2 vs V100-PCIe
Description                  V100-PCIe    V100-SXM2
CUDA Cores                   5120         5120
GPU Max Clock Rate (MHz)     1380         1530