The following output contains a vmx entry indicating an Intel processor with the Intel VT
extensions:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush
dts acpi mmx fxsr sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl
vmx est tm2 cx16 xtpr lahf_lm
The following output contains an svm entry indicating an AMD processor with the AMD-V
extensions:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush
mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16
lahf_lm cmp_legacy svm cr8legacy ts fid vid ttp tm stc
If any output is received, the processor has the hardware virtualization extensions. However, in some circumstances manufacturers disable the virtualization extensions in the BIOS.
The "flags:" content may appear multiple times in the output, once for each hyperthread, core or CPU on the system.
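For example, the flags can be checked directly with standard tools; this is only a sketch, assuming a Linux host with /proc mounted and GNU grep available. The second command counts how many logical processors report the extensions:
# grep -E 'vmx|svm' /proc/cpuinfo
# grep -c -E 'vmx|svm' /proc/cpuinfo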
If the extensions do not appear in the output, or if full virtualization does not work, the virtualization extensions may have been disabled in the BIOS; refer to Procedure 34.1, “Enabling virtualization extensions in BIOS”.
3. For users of the KVM hypervisor
If the kvm package is installed, verify as an additional check that the kvm modules are loaded in the kernel:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, the kvm hardware virtualization modules are loaded and your system meets the requirements.
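If neither module is listed, the module matching the host's processor vendor can usually be loaded manually. This is a sketch assuming the standard kvm_intel and kvm_amd module names shipped with the kernel:
# modprobe kvm_intel
or, on AMD hardware:
# modprobe kvm_amd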
Additional output
If the libvirt package is installed, the virsh command can output a full list of virtualization system
capabilities. Run virsh capabilities as root to receive the complete list.
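The capabilities output is XML. As a brief sketch (assuming libvirt is installed and the libvirtd service is running; the file name capabilities.xml is only an example), it can be saved to a file and searched with standard tools, for instance for the host CPU topology:
# virsh capabilities > capabilities.xml
# grep '<topology' capabilities.xml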
24.4. Setting KVM processor affinities
This section covers setting processor and processing core affinities with libvirt and KVM guests.
By default, libvirt provisions guests using the hypervisor's default policy. For most hypervisors,
the policy is to run guests on any available processing core or CPU. There are times when an
explicit policy may be better, in particular for systems with a NUMA (Non-Uniform Memory Access)
architecture. A guest on a NUMA system should be pinned to a processing core so that its memory
allocations are always local to the node it is running on. This avoids cross-node memory transfers, which have less bandwidth and can significantly degrade performance.
On non-NUMA systems some form of explicit placement across the host's sockets, cores and hyperthreads may be more efficient.
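As a sketch of the kind of explicit placement described above (the domain name guest1 and the physical CPU numbers are hypothetical examples), a running guest's virtual CPUs can be pinned with virsh vcpupin, and a persistent placement can be recorded in the guest's XML configuration with the cpuset attribute of the vcpu element:
# virsh vcpupin guest1 0 4
# virsh vcpupin guest1 1 5
<vcpu cpuset='4-5'>2</vcpu>
The virsh vcpupin commands bind virtual CPUs 0 and 1 of guest1 to physical CPUs 4 and 5; the XML line applies the same placement to a two-vCPU guest every time it is started.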