
By default, libvirt provisions guests using the hypervisor's default policy. For most hypervisors, the
policy is to run guests on any available processing core or CPU. There are times when an explicit
policy may be better, in particular for systems with a NUMA (Non-Uniform Memory Access)
architecture. A guest on a NUMA system should be pinned to a processing core so that its memory
allocations are always local to the node it is running on. This avoids cross-node memory transfers,
which have lower bandwidth and can significantly degrade performance.
Even on non-NUMA systems, some form of explicit placement across the host's sockets, cores and
hyperthreads may be more efficient.
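For example, a guest's virtual CPUs can be pinned to specific physical CPUs at runtime with the
virsh vcpupin command. The following is a minimal sketch; the guest name guest1 and the CPU
numbers are hypothetical and should be chosen to match the host topology identified below.
# virsh vcpupin guest1 0 0
# virsh vcpupin guest1 1 1
Each command pins one virtual CPU (the second argument) to a physical CPU or CPU list (the third
argument). The virsh vcpuinfo command can be used to verify the resulting affinity, and the
pinning can be made persistent with the cpuset attribute of the <vcpu> element in the guest's
XML definition.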
Identifying CPU and NUMA topology
The first step in deciding what policy to apply is to determine the host’s memory and CPU topology.
The virsh nodeinfo command provides information about how many sockets, cores and
hyperthreads are attached to a host.
# virsh nodeinfo
CPU model: x86_64
CPU(s): 8
CPU frequency: 1000 MHz
CPU socket(s): 2
Core(s) per socket: 4
Thread(s) per core: 1
NUMA cell(s): 1
Memory size: 8179176 kB
This system has eight CPUs in two sockets; each socket has four cores with one thread per core
(2 sockets × 4 cores × 1 thread = 8 CPUs).
The system has a NUMA architecture. NUMA topologies are more complex than this summary
suggests and require more data to interpret accurately; use the virsh capabilities command to
get a detailed view of the CPU configuration.
# virsh capabilities
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
    </cpu>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <cpus num='4'>
            <cpu id='0'/>
            <cpu id='1'/>
            <cpu id='2'/>
            <cpu id='3'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='4'>
            <cpu id='4'/>
            <cpu id='5'/>
            <cpu id='6'/>
            <cpu id='7'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>
</capabilities>
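Once the NUMA cells and their CPUs are known, check how much free memory each cell has before
placing a guest, since a guest's memory should come from the same cell its CPUs are pinned to.
The virsh freecell command reports free memory per NUMA cell; in this sketch the cell number
and the value shown are illustrative.
# virsh freecell 0
0: 2203620 kB
If cell 0 has enough free memory for the guest, pinning the guest to CPUs 0-3 keeps both its
processors and its memory allocations local to that node.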