Reference Guide

System Building Blocks
Dell EMC Ready Solutions for HPC Digital Manufacturing with AMD EPYC™ Processors—ANSYS®
Performance
Additionally, if more compute capability is required for each simulation run, two BBBs can be directly coupled together via a high-speed network cable, such as InfiniBand or Ethernet, without the need for an additional high-speed switch (a BBB Couplet). BBBs provide a simple framework for customers to incrementally grow the size and power of an HPC cluster by purchasing individual BBBs or BBB Couplets, or by combining individual BBBs and/or Couplets with a high-speed switch into a single monolithic system.
We did not carry out any explicit performance testing on BBB configurations for this paper. For Linux-based systems, single-node and two-node Couplet BBB clusters with InfiniBand would perform comparably to the two-node 7452-based CBB benchmarks reported below. For Windows-based clusters, using InfiniBand for node-to-node connectivity in a two-node Couplet requires complex setup and administration, likely beyond the intended scope for most customers seeking a Windows-based solution. We recommend that Windows-based two-node Couplets be networked with high-speed Ethernet, such as 25GbE. Our experience with Windows for HPC workloads indicates that the performance differential between Windows and Linux can be highly variable and problem dependent, making standard benchmarks of limited value as an indication of projected performance. Customers seeking the highest level of performance and the potential for cluster expansion are advised to use Linux as the operating system.
2.4 Storage
Dell EMC offers a wide range of HPC storage solutions. For a general overview of the entire HPC solution portfolio, please visit www.dellemc.com/hpc. There are typically three tiers of storage for HPC: scratch storage, operational storage, and archival storage, which differ in size, performance, and persistence.
Scratch storage tends to persist only for the duration of a single simulation. It may be used to hold temporary data that cannot reside in the compute system’s main memory because of insufficient physical memory capacity. HPC applications are considered “I/O bound” when access to storage impedes the progress of the simulation. For these workloads, the most cost-effective solution is typically to provide sufficient direct-attached local storage on the compute nodes. Where an application requires a shared file system across the compute cluster, a high-performance shared file system may be better suited than relying on local direct-attached storage. In general, direct-attached local storage offers the best overall price/performance and is considered best practice for most CAE simulations. For this reason, local storage is included in the recommended configurations with appropriate performance and capacity for a wide range of production workloads. If anticipated workload requirements exceed the performance and capacity of the recommended local storage configurations, scratch storage should be sized appropriately for the workload.
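As a rough illustration of sizing scratch storage to the workload, the sketch below estimates per-node scratch capacity from peak per-job temporary data and expected job concurrency. All figures and the headroom fraction are hypothetical placeholders, not recommendations from this guide:

```python
# Hypothetical sizing sketch: estimate usable local scratch capacity for one
# compute node. The inputs below are illustrative placeholders only.

def scratch_capacity_gb(per_job_scratch_gb, concurrent_jobs_per_node, headroom=0.25):
    """Return a suggested usable scratch capacity (GB) for one node.

    per_job_scratch_gb       -- peak temporary data a single simulation writes
    concurrent_jobs_per_node -- jobs expected to run on the node at once
    headroom                 -- extra fraction reserved for growth and spikes
    """
    return per_job_scratch_gb * concurrent_jobs_per_node * (1 + headroom)

# Example: solver jobs peaking at 400 GB of scratch, two jobs per node,
# with 25% headroom -> 1000 GB of usable local scratch.
print(scratch_capacity_gb(400, 2))  # 1000.0
```

In practice the peak per-job figure is best measured by monitoring scratch usage during representative production runs rather than estimated from model size alone.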
Operational storage is typically defined as storage used to maintain results over the duration of a project, along with other data, such as home directories, that may be accessed daily for an extended period of time. This data typically consists of simulation input and results files, which may be transferred from scratch storage or produced by users analyzing the data, often remotely. Because this data may persist for an extended period, some or all of it may be backed up at a regular interval, with the interval chosen by balancing the cost of archiving the data against the cost of regenerating it if necessary.

Archival data is assumed to persist for the very long term, and data integrity is considered critical. For many modest HPC systems, using the existing enterprise archival data storage may make the most sense, since the performance of archival storage tends not to impede HPC activities. Our experience working with customers indicates that there is no ‘one size fits all’ operational and archival storage solution. Many customers rely on their corporate enterprise storage for archival purposes and instantiate a high-performance operational storage system dedicated to their HPC environment.