
</cells>
</topology>
<secmodel>
<model>selinux</model>
<doi>0</doi>
</secmodel>
</host>
[ Additional XML removed ]
</capabilities>
The output shows two NUMA nodes (also known as NUMA cells), each containing four logical CPUs
(four processing cores). This system has two sockets, so we can infer that each socket is a
separate NUMA node. For a guest with four virtual CPUs, it would be optimal to lock the guest to
physical CPUs 0 to 3 or 4 to 7, to avoid accessing non-local memory, which is significantly slower
than accessing local memory.
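For example, this kind of pinning can be expressed in the guest's libvirt XML definition with the cpuset attribute of the vcpu element. The following is a minimal sketch; the guest name and vCPU count are illustrative:
<domain type='kvm'>
<name>guest1</name>
<!-- Pin all four virtual CPUs to physical CPUs 0-3 (NUMA node 0) -->
<vcpu cpuset='0-3'>4</vcpu>
[ Additional XML removed ]
</domain>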
If a guest requires eight virtual CPUs, then, as each NUMA node has only four physical CPUs, better
utilization may be obtained by running a pair of four-vCPU guests and splitting the work
between them, rather than by using a single eight-vCPU guest. Running a guest across multiple
NUMA nodes significantly degrades performance for both physical and virtualized tasks.
Decide which NUMA node can run the guest
Locking a guest to a particular NUMA node offers no benefit if that node does not have sufficient free
memory for that guest. libvirt stores information on the free memory available on each node. Use the
virsh freecell command to display the free memory on all NUMA nodes.
# virsh freecell
0: 2203620 kB
1: 3354784 kB
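The free memory of a single node can also be queried by passing the cell number to virsh freecell; the output below is illustrative:
# virsh freecell 1
1: 3354784 kB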
If a guest requires 3 GB of RAM, the guest should be run on NUMA node (cell) 1. A 3 GB allocation is
roughly 3,145,728 kB, which fits within the 3,354,784 kB free on node 1 but exceeds the 2,203,620 kB
free on node 0.
Lock a guest to a NUMA node or physical CPU set
Once you have determined which node to run the guest on, refer to the capabilities data (the output
of the virsh capabilities command) for the NUMA topology.
1. Extract from the virsh capabilities output:
<topology>
<cells num='2'>
<cell id='0'>
<cpus num='4'>
<cpu id='0'/>
<cpu id='1'/>
<cpu id='2'/>
<cpu id='3'/>
</cpus>
</cell>
<cell id='1'>
<cpus num='4'>
<cpu id='4'/>
<cpu id='5'/>
<cpu id='6'/>
<cpu id='7'/>
</cpus>
</cell>
</cells>
</topology>
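With the topology confirmed, the guest can also be pinned to node 1's physical CPUs at runtime with virsh vcpupin, which maps one virtual CPU (the second argument) to a physical CPU list (the third argument). The guest name guest1 below is a placeholder:
# virsh vcpupin guest1 0 4
# virsh vcpupin guest1 1 5
# virsh vcpupin guest1 2 6
# virsh vcpupin guest1 3 7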