White Papers

HPC in an OpenStack Environment
Joseph Stanfield and Nishanth Dandapanthula, June 2014
Introduction
As the concept of cloud computing continues to expand its market reach, many companies have
discovered the advantage of encapsulating configuration details in a virtual setting, allowing users to
build a customized environment and run it in the cloud whenever computational resources are needed.
Cloud computing would also seem to go hand in hand with a production HPC environment, offering
virtually unlimited storage along with instantly available and scalable resources. OpenStack is the open
source cloud computing platform used in this study.
Applications in the HPC domain have massive requirements in terms of CPU, memory, I/O, and
interconnect. Traditionally, HPC applications have been run on physical clusters, but with the trend
moving toward cloud computing and virtualization, we wanted to see how these applications fare in a
virtualized environment. In theory, the ability to scale out available resources on a per-user basis would
boost productivity and lower the total cost of ownership of the cluster. But how does the performance
of virtual machines (VMs) compare to that of bare metal servers (BMs)?
In this blog, we set out to compare the performance of a physical server with a bare metal installation
against that of a virtual machine, using a single node in similar environments with identical resources.
The bare metal machine is a physical server with just a minimal OS installed. The VM runs on a
hypervisor on this same bare metal machine and is given all the cores and memory of the bare metal
system, so both have the same configuration.
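In OpenStack terms, this amounts to defining a flavor that matches the host's resources. A minimal sketch with the nova client follows; the flavor name, image name, and the 16-core / 64 GB sizing are illustrative placeholders, not our actual test bed specifications.

    # Define a flavor granting one VM all of the host's cores and memory
    # (the name and sizing below are illustrative placeholders)
    nova flavor-create bm-equivalent auto 65536 100 16
    # Boot a VM with that flavor from a RHEL 6.5 image
    nova boot --flavor bm-equivalent --image rhel-6.5 vm-node1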
Consider a scenario with multiple project development needs, where users require a range of custom
platforms for their individual projects. A whole server, or multiple servers, may be needed for various
reasons, such as application development, beta testing of code, or sharing a stable and uniform
platform among collaborators. An administrator would be able to easily deploy an environment
tailored to each user without having to re-provision the entire server farm for each project. Once the
user is done, the VM's data, or the VM itself, can be archived for future use.
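With the nova client, for instance, archiving an instance amounts to snapshotting it into an image that can be relaunched on demand; the instance and image names below are hypothetical.

    # Snapshot the user's instance into an image for later reuse
    nova image-create vm-node1 projectA-archive
    # Relaunch the archived environment when the project resumes
    nova boot --flavor bm-equivalent --image projectA-archive vm-node1-restored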
We study the differences in performance and the overhead arising from the use of VMs when compared
to BMs in an HPC space. We present analytical results and weigh the pros and cons of each approach.
This is the first in a series of blogs in which we will evaluate virtual machines, Linux containers, and bare
metal servers, along with their respective tuning options, from the perspective of applications in the HPC
domain. In future posts, we will expand this study at scale by introducing the interconnect component.
The test bed has a head node and a compute node, both with bare metal installations of Red Hat
Enterprise Linux 6.5. We installed RDO OpenStack on the head node and used that to add the compute
node to the resource pool. The VMs are deployed solely on the compute node. The details of the test
bed and the configuration are described below.
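For reference, a minimal sketch of such an RDO deployment with Packstack is shown below; the host IP addresses are placeholders, and the exact release RPM URL depends on the OpenStack version being installed.

    # On the head node: enable the RDO repository and install Packstack
    # (the release RPM URL is a placeholder; it varies by OpenStack release)
    sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm
    sudo yum install -y openstack-packstack
    # Deploy the control services on the head node and register the
    # compute node (first IP = head node, second IP = compute node)
    packstack --install-hosts=192.168.1.10,192.168.1.11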
