Red Hat Enterprise Linux 6 Virtualization Guide
Guide to Virtualization on Red Hat Enterprise Linux 6
Edition 1
Copyright © 2008, 2009, 2010 Red Hat, Inc. The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/.
Preface
Welcome to the Red Hat Enterprise Linux 6 Virtualization Guide. This guide covers all aspects of using and managing virtualization products included with Red Hat Enterprise Linux 6.
This book is divided into seven parts:
• System Requirements
• Installation
• Configuration
• Administration
• Reference
• Troubleshooting
• Appendixes
Key terms and concepts used throughout this book are covered in the Glossary.
Press Ctrl+Alt+F2 to switch to the first virtual terminal. Press Ctrl+Alt+F1 to return to your X-Windows session.
The first paragraph highlights the particular keycap to press. The second highlights two key combinations (each a set of three keycaps with each set pressed simultaneously).
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold.
Publican is a DocBook publishing system.
1.2. Pull-quote Conventions
Terminal output and source code listings are set off visually from the surrounding text. Output sent to a terminal is set in mono-spaced roman and presented thus:

books        books_tests  Desktop   Desktop1   documentation  downloads
drafts       images       mss       notes      photos         scripts
stuff        svgs         svn

Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.
2. We need your feedback
If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you. Submit a report in Bugzilla: http://bugzilla.redhat.com/ against the Red_Hat_Enterprise_Linux product. When submitting a bug report, be sure to refer to the correct component: doc-Virtualization_Guide and version number: 6. If you have a suggestion for improving the documentation, try to be as specific as possible when describing it.
Chapter 1. Introduction This chapter introduces various virtualization technologies, applications and features and explains how they work. The purpose of this chapter is to assist Red Hat Enterprise Linux users in understanding the basics of virtualization. 1.1. What is virtualization? Virtualization is a broad computing term for running software, usually operating systems, concurrently and isolated from other programs on one system.
KSM
Kernel SamePage Merging (KSM) is used by the KVM hypervisor to allow KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication. For more information on KSM, refer to Chapter 21, KSM.
• Emulated software devices.
• Para-virtualized devices.
• Physically shared devices.
These hardware devices all appear as physically attached hardware devices to the virtualized guest but the device drivers work in different ways.
1.4.1. Virtualized and emulated devices
The KVM hypervisor implements many core devices for virtualized guests in software. These emulated hardware devices are crucial for virtualizing operating systems.
Emulated sound devices
Two emulated sound devices are available:
• The ac97 device emulates an Intel 82801AA AC97 Audio compatible sound card.
• The es1370 device emulates an ENSONIQ AudioPCI ES1370 sound card.
Emulated network drivers
There are four emulated network drivers available for network devices:
• The e1000 driver emulates an Intel E1000 network adaptor (Intel 82540EM, 82573L, 82544GC).
• The ne2k_pci driver emulates a Novell NE2000 network adaptor.
Para-virtualized block driver
The para-virtualized block driver is a driver for all storage devices supported by the hypervisor attached to the virtualized guest (except for floppy disk drives, which must be emulated).
The para-virtualized clock
Guests using the Time Stamp Counter (TSC) as a clock source may suffer timing issues. KVM works around hosts that do not have a constant Time Stamp Counter by providing guests with a para-virtualized clock.
in the PCI configuration space as multiple functions, each device has its own configuration space complete with Base Address Registers (BARs). SR-IOV uses two new PCI functions:
• Physical Functions (PFs)
• Virtual Functions (VFs)
For more information on SR-IOV, refer to Chapter 13, SR-IOV.
NPIV
N_Port ID Virtualization (NPIV) is a function available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs.
Storage volumes are presented to virtualized guests as local storage devices regardless of the underlying hardware. For more information on storage and virtualization refer to Part V, “Virtualization storage topics”.
1.6. Virtualization security features
SELinux
SELinux was developed by the US National Security Agency and others to provide Mandatory Access Control (MAC) for Linux. All processes and files are given a type and access is limited by fine-grained controls.
Offline migration
An offline migration suspends the guest, then moves an image of the guest's memory to the destination host. The guest is resumed on the destination host and the memory the guest used on the source host is freed.
Live migration
Live migration is the process of migrating a running guest from one physical host to another physical host. For more information on migration refer to Chapter 18, KVM live migration.
Part I. Requirements and limitations System requirements, support restrictions and limitations for virtualization with Red Hat Enterprise Linux 6 These chapters outline the system requirements, support restrictions, and limitations of virtualization on Red Hat Enterprise Linux 6.
Chapter 2. System requirements This chapter lists system requirements for successfully running virtualized guest operating systems with Red Hat Enterprise Linux 6. Virtualization is available for Red Hat Enterprise Linux 6 on the Intel 64 and AMD64 architecture. The KVM hypervisor is provided with Red Hat Enterprise Linux 6. For information on installing the virtualization packages, read Chapter 5, Installing the virtualization packages. Minimum system requirements • 6GB free disk space • 2GB of RAM.
• GFS2 clustered file systems, and
• Fibre Channel-based LUNs
• SRP devices (SCSI RDMA Protocol), the block export protocol used in Infiniband and 10GbE iWARP adapters.
File-based guest storage
File-based guest images should be stored in the /var/lib/libvirt/images/ folder. If you use a different directory you must add the directory to the SELinux policy. Refer to Section 16.2, “SELinux and virtualization” for details.
Chapter 3. KVM compatibility
The KVM hypervisor requires a processor with the Intel-VT or AMD-V virtualization extensions. Note that this list is not complete. Help us expand it by sending in a bug with anything you get working. To verify whether your processor supports the virtualization extensions and for information on enabling the virtualization extensions if they are disabled, refer to Section 24.3, “Verifying virtualization extensions”.
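A quick way to check for the extensions from a shell is to search /proc/cpuinfo for the vmx (Intel VT) or svm (AMD-V) CPU flags:

$ grep -E 'svm|vmx' /proc/cpuinfo

If the command produces no output, the extensions are either absent or disabled in the BIOS.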
Chapter 4. Virtualization limitations
This chapter covers additional support and product limitations of the virtualization packages in Red Hat Enterprise Linux 6.
4.1. General limitations for virtualization
Other limitations
For the list of all other limitations and issues affecting virtualization, read the Red Hat Enterprise Linux 6 Release Notes. The Red Hat Enterprise Linux 6 Release Notes cover the current new features, known issues, and limitations as they are updated or discovered.
Para-virtualized devices
Para-virtualized devices, which use the virtio drivers, are PCI devices. Presently, guests are limited to a maximum of 32 PCI devices. Some PCI devices are critical for the guest to run and these devices cannot be removed. The default, required devices are:
• the host bridge,
• the ISA bridge and USB bridge (the USB and ISA bridges are the same device),
• the graphics card (using either the Cirrus or qxl driver), and
• the memory balloon device.
Application limitations
Applications with high I/O throughput requirements should use the para-virtualized drivers for fully virtualized guests. Without the para-virtualized drivers, certain applications may be unstable under heavy I/O loads. The following applications should be avoided because of their high I/O requirements:
• kdump server
• netdump server
You should carefully evaluate database applications before running them on a virtualized guest.
Part II. Installation Virtualization installation topics These chapters cover setting up the host and installing virtualized guests with Red Hat Enterprise Linux 6. It is recommended to read these chapters carefully to ensure successful installation of virtualized guest operating systems.
Chapter 5. Installing the virtualization packages
Before you can use virtualization, the virtualization packages must be installed on your computer. Virtualization packages can be installed either during the installation sequence or after installation using the yum command and the Red Hat Network (RHN). The KVM hypervisor uses the default Red Hat Enterprise Linux kernel with the kvm kernel module.
5.1. Installing KVM with a new Red Hat Enterprise Linux installation
Select the Virtual Host server role to install a platform for virtualized guests. Alternatively, select the Customize Now radio button to specify individual packages.
4. Select the Virtualization package group. This selects the KVM hypervisor, virt-manager, libvirt and virt-viewer for installation.
5. Customize the packages (if required)
Customize the Virtualization group if you require other virtualization packages.
Press the Close button then the Next button to continue the installation.
Note
You require a valid RHN virtualization entitlement to receive updates for the virtualization packages.
Installing KVM packages with Kickstart files
This section describes how to use a Kickstart file to install Red Hat Enterprise Linux with the KVM hypervisor packages.
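As a minimal sketch, a Kickstart file can pull in the virtualization packages through its %packages section. The group names below are assumptions based on the package groups described in this chapter and may vary between releases:

%packages
@virtualization
@virtualization-client
@virtualization-platform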
5.2. Installing KVM packages on an existing Red Hat Enterprise Linux system
This section describes the steps for installing the KVM hypervisor on a working Red Hat Enterprise Linux 6 or newer system.
Adding packages to your list of Red Hat Network entitlements
This section describes how to enable Red Hat Network (RHN) entitlements for the virtualization packages.
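Once the entitlements are enabled, the packages described below can be installed with yum. A sketch of a typical command (the exact package set may vary; the names match the packages described in this chapter):

# yum install qemu-kvm virt-manager libvirt libvirt-python python-virtinst libvirt-client virt-viewer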
libvirt-python
The libvirt-python package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API.
virt-manager
virt-manager, also known as Virtual Machine Manager, provides a graphical tool for administering virtual machines. It uses the libvirt-client library as the management API.
Chapter 6. Virtualized guest installation overview After you have installed the virtualization packages on the host system you can create guest operating systems. This chapter describes the general processes for installing guest operating systems on virtual machines. You can create guests using the New button in virt-manager or use the command line interface virt-install. Both methods are covered by this chapter.
• Uses LVM partitioning
• Is a plain QEMU guest
• Uses virtual networking
• Boots from PXE
• Uses VNC server/viewer

# virt-install \
   --network network:default \
   --name rhel5support \
   --ram=756 \
   --file=/var/lib/libvirt/images/rhel5support.img \
   --file-size=6 \
   --vnc \
   --cdrom=/dev/sr0

Refer to man virt-install for more examples.
Creating guests with virt-manager Figure 6.1. Virtual Machine Manager window 4. New VM wizard The New VM wizard breaks down the guest creation process into five steps: 1. Naming the guest and choosing the installation type 2. Locating and configuring the installation media 3. Configuring memory and CPU options 4. Configuring the guest's storage 5.
Chapter 6. Virtualized guest installation overview Figure 6.2. Step 1 Type in a virtual machine name and choose an installation type: Local install media (ISO image or CDROM) This method uses a CD-ROM, DVD, or image of an installation disk (e.g. .iso). Network Install (HTTP, FTP, or NFS) Network installing involves the use of a mirrored Red Hat Enterprise Linux or Fedora installation tree to install a guest. The installation tree must be accessible through either HTTP, FTP, or NFS.
Network Boot (PXE)
This method uses a Preboot eXecution Environment (PXE) server to install the guest. Setting up a PXE server is covered in the Deployment Guide. To install via network boot, the guest must have a routable IP address or shared network device. For information on the required networking configuration for PXE installation, refer to Chapter 10, Network Configuration.
Figure 6.4. Import existing disk image (configuration)
Important
It is recommended that you use the default directory for virtual machine images, /var/lib/libvirt/images/. If you are using a different location, make sure it is added to your SELinux policy and relabeled before you continue with the installation. Refer to Section 16.2, “SELinux and virtualization” for details on how to do this.
Creating guests with virt-manager Figure 6.5. Network Install (configuration) Click Forward to continue. 7. Configure CPU and memory The next step involves configuring the number of CPUs and amount of memory to allocate to the virtual machine. The wizard shows the number of CPUs and amount of memory you can allocate; configure these settings and click Forward.
Figure 6.6. Configuring CPU and Memory
8. Configure storage
Assign a physical storage device (Block device) or a file-based image (File). File-based images should be stored in /var/lib/libvirt/images/ to satisfy default SELinux permissions.
Creating guests with virt-manager Figure 6.7. Configuring virtual storage If you chose to import an existing disk image during the first step, virt-manager will skip this step. Assign sufficient space for your virtualized guest and any applications the guest requires, then click Forward to continue. 9.
Figure 6.8. Verifying the configuration
If you prefer to further configure the virtual machine's hardware first, check the Customize configuration before install box before clicking Finish. Doing so will open another wizard (Figure 6.9, “Virtual hardware configuration”) that will allow you to add, remove, and configure the virtual machine's hardware settings.
Figure 6.9. Virtual hardware configuration
After configuring the virtual machine's hardware, click Apply. virt-manager will then create the guest with your specified hardware settings.
This concludes the general process for creating guests with virt-manager. Chapter 6, Virtualized guest installation overview contains step-by-step instructions for installing a variety of common operating systems.
6.4. Installing guests with PXE
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

Warning
The line, TYPE=Bridge, is case-sensitive. It must have uppercase 'B' and lower case 'ridge'.
b. Start the new bridge by restarting the network service. The ifup installation command can start the individual bridge but it is safer to test that the entire network restarts properly.
# service network restart
c. There are no interfaces added to the new bridge yet.
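A physical interface is typically added to the bridge by pointing its configuration file (for example, /etc/sysconfig/network-scripts/ifcfg-eth0) at the bridge and restarting the network. A sketch, assuming the bridge from this procedure is named installation and the physical device is eth0 (both names are placeholders):

DEVICE=eth0
ONBOOT=yes
BRIDGE=installation

Restart the network and confirm the interface is attached to the bridge:

# service network restart
# brctl show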
# service iptables restart
Disable iptables on bridges
Alternatively, prevent bridged traffic from being processed by iptables rules. In /etc/sysctl.conf append the following lines:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

Reload the kernel parameters configured with sysctl.
# sysctl -p /etc/sysctl.conf
4. Restart libvirt before the installation
Restart the libvirt daemon.
2. Select the bridge
Select Shared physical device and select the bridge created in the previous procedure.
3. Start the installation
The installation is ready to start.
A DHCP request is sent and, if a valid PXE server is found, the guest installation process will start.
Chapter 7. Installing Red Hat Enterprise Linux 6 as a virtualized guest
This chapter covers how to install Red Hat Enterprise Linux 6 as a fully virtualized guest on Red Hat Enterprise Linux 6. This procedure assumes that the KVM hypervisor and all other required packages are installed and the host is configured for virtualization. For more information on installing the virtualization packages, refer to Chapter 5, Installing the virtualization packages.
7.1. Creating a Red Hat Enterprise Linux 6 guest with local installation media
Figure 7.1. The main virt-manager window
Press the create new virtualized guest button (see Figure 7.2, “The create new virtualized guest button”) to start the new virtualized guest wizard.
Figure 7.2. The create new virtualized guest button
The Create a new virtual machine window opens.
3. Name the virtualized guest
Guest names can contain letters, numbers and the following characters: '_', '.' and '-'.
Creating a Red Hat Enterprise Linux 6 guest with local installation media Figure 7.3. The Create a new virtual machine window - Step 1 Press Forward to continue. 4. Select the installation media Select the installation ISO image location or a DVD drive with the installation disc inside. This example uses an ISO file image of the Red Hat Enterprise Linux 6.0 installation DVD image.
Chapter 7. Installing Red Hat Enterprise Linux 6 as a virtualized guest Figure 7.4. The Locate ISO media volume window Image files and SELinux For ISO image files and guest storage images, the recommended directory to use is the /var/lib/libvirt/images/ directory. Any other location may require additional configuration for SELinux, refer to Section 16.2, “SELinux and virtualization” for details. Select the operating system type and version which match the installation media you have selected.
Creating a Red Hat Enterprise Linux 6 guest with local installation media Figure 7.5. The Create a new virtual machine window - Step 2 Press Forward to continue. 5. Set RAM and virtual CPUs Choose appropriate values for the virtualized CPUs and RAM allocation. These values affect the host's and guest's performance. Memory and virtualized CPUs can be overcommitted, for more information on overcommitting refer to Chapter 20, Overcommitting with KVM.
Chapter 7. Installing Red Hat Enterprise Linux 6 as a virtualized guest Figure 7.6. The Create a new virtual machine window - Step 3 Press Forward to continue. 6. Storage Enable and assign storage for the Red Hat Enterprise Linux 6 guest. Assign at least 5GB for a desktop installation or at least 1GB for a minimal installation. Migration Live and offline migrations require guests to be installed on shared network storage.
Creating a Red Hat Enterprise Linux 6 guest with local installation media Figure 7.7. The Create a new virtual machine window - Step 4 b. With a storage pool Select Select managed or other existing storage to use a storage pool.
Figure 7.8. The Locate or create storage volume window
i. Press the browse button to open the storage pool browser.
ii. Select a storage pool from the Storage Pools list.
iii. Optional: Press the New Volume button to create a new storage volume. Enter the name of the new storage volume.
iv. Press the Choose Volume button to select the volume for the virtualized guest.
Creating a Red Hat Enterprise Linux 6 guest with local installation media Figure 7.9. The Create a new virtual machine window - Step 4 Press Forward to continue. 7. Verify and finish Verify there were no errors made during the wizard and everything appears as expected. Select the Customize configuration before install check box to change the guest's storage or network devices, to use the para-virtualized drivers or, to add additional devices.
Figure 7.10. The Create a new virtual machine window - Step 5
Press Finish to continue into the Red Hat Enterprise Linux installation sequence. For more information on installing Red Hat Enterprise Linux 6 refer to the Red Hat Enterprise Linux 6 Installation Guide.
A Red Hat Enterprise Linux 6 guest is now created from an ISO installation disc image.
7.2. Creating a Red Hat Enterprise Linux 6 guest with a network installation tree
Procedure 7.2. Creating a Red Hat Enterprise Linux 6 guest with virt-manager
1. Optional: Preparation
Prepare the storage environment for the virtualized guest. For more information on preparing storage, refer to Part V, “Virtualization storage topics”.
Note
Various storage types may be used for storing virtualized guests.
Figure 7.11. The main virt-manager window
Press the create new virtualized guest button (see Figure 7.12, “The create new virtualized guest button”) to start the new virtualized guest wizard.
Figure 7.12. The create new virtualized guest button
The Create a new virtual machine window opens.
3. Name the virtualized guest
Guest names can contain letters, numbers and the following characters: '_', '.' and '-'.
Figure 7.13. The Create a new virtual machine window - Step 1
Press Forward to continue.
7.3. Creating a Red Hat Enterprise Linux 6 guest with PXE
Procedure 7.3. Creating a Red Hat Enterprise Linux 6 guest with virt-manager
1. Optional: Preparation
Prepare the storage environment for the virtualized guest. For more information on preparing storage, refer to Part V, “Virtualization storage topics”.
2. Open virt-manager and start the wizard
Open virt-manager by executing the virt-manager command as root or opening Applications -> System Tools -> Virtual Machine Manager.
Figure 7.14. The main virt-manager window
Press the create new virtualized guest button (see Figure 7.15, “The create new virtualized guest button”) to start the new virtualized guest wizard.
Figure 7.15. The create new virtualized guest button
Creating a Red Hat Enterprise Linux 6 guest with PXE Choose the installation method from the list of radio buttons. Figure 7.16. The Create a new virtual machine window - Step 1 Press Forward to continue.
Chapter 8. Installing Red Hat Enterprise Linux 6 as a para-virtualized guest on Red Hat Enterprise Linux 5 This section describes how to install Red Hat Enterprise Linux 6 as a para-virtualized guest on Red Hat Enterprise Linux 5. Para-virtualization is only available for Red Hat Enterprise Linux 5 hosts. Red Hat Enterprise Linux 6 uses the PV-opts features of the Linux kernel to appear as a compatible Xen para-virtualized guest.
Chapter 8. Installing Red Hat Enterprise Linux 6 as a para-virtualized guest on Red Hat Enterprise Linux 5 The graphical console opens showing the initial boot phase of the guest: After your guest has completed its initial boot, the standard installation process for Red Hat Enterprise Linux 6 starts. For most systems the default answers are acceptable. Refer to the Red Hat Enterprise Linux 6 Installation Guide for more information on installing Red Hat Enterprise Linux 6. 8.2.
3. Start the new virtual machine wizard
Pressing the New button starts the virtual machine creation wizard. Press Forward to continue.
4. Name the virtual machine
Provide a name for your virtualized guest. The following punctuation characters are permitted: '_', '.' and '-'.
Press Forward to continue.
5. Choose a virtualization method
Select Xen para-virtualized as the virtualization method.
Using virt-manager Press Forward to continue. 6. Select the installation method Red Hat Enterprise Linux can be installed using one of the following methods: • local install media, either an ISO image or physical optical media. • Select Network install tree if you have the installation tree for Red Hat Enterprise Linux hosted somewhere on your network via HTTP, FTP or NFS. • PXE can be used if you have a PXE server configured for booting Red Hat Enterprise Linux installation media.
Press Forward to continue.
7. Locate installation media
Select the ISO image location or CD-ROM or DVD device. This example uses an ISO file image of the Red Hat Enterprise Linux installation DVD.
a. Press the Browse button.
b. Browse to the location of the ISO file and select the ISO image. Press Open to confirm your selection.
c. The file is selected and ready to install.
Using virt-manager Press Forward to continue. Image files and SELinux For ISO image files and guest storage images it is recommended to use the /var/lib/ libvirt/images/ directory. Any other location may require additional configuration for SELinux, refer to Section 16.2, “SELinux and virtualization” for details. 8. Storage setup Assign a physical storage device (Block device) or a file-based image (File).
Chapter 8. Installing Red Hat Enterprise Linux 6 as a para-virtualized guest on Red Hat Enterprise Linux 5 Press Forward to continue. Migration Live and offline migrations require guests to be installed on shared network storage. For information on setting up shared storage for guests refer to Part V, “Virtualization storage topics”. 9. Network setup Select either Virtual network or Shared physical device.
Using virt-manager Press Forward to continue. 10. Memory and CPU allocation The Memory and CPU Allocation window displays. Choose appropriate values for the virtualized CPUs and RAM allocation. These values affect the host's and guest's performance. Virtualized guests require sufficient physical memory (RAM) to run efficiently and effectively. Choose a memory value which suits your guest operating system and application requirements. Remember, guests use physical RAM.
Chapter 8. Installing Red Hat Enterprise Linux 6 as a para-virtualized guest on Red Hat Enterprise Linux 5 Press Forward to continue. 11. Verify and start guest installation Verify the configuration.
Press Finish to start the guest installation procedure.
12. Installing Red Hat Enterprise Linux
Complete the Red Hat Enterprise Linux installation sequence. The installation sequence is covered by the Red Hat Enterprise Linux 6 Installation Guide. Refer to Red Hat Documentation for the Red Hat Enterprise Linux 6 Installation Guide.
Chapter 9. Installing a fully-virtualized Windows guest Red Hat Enterprise Linux 6 supports the installation of any Microsoft Windows operating system as a fully virtualized guest. This chapter describes how to create a fully virtualized guest using the command-line (virt-install), launch the operating system's installer inside the guest, and access the installer through virt-viewer. To install a Windows operating system on the guest, use the virt-viewer tool.
Important
All image files should be stored in /var/lib/libvirt/images/. Other directory locations for file-based images are prohibited by SELinux. If you run SELinux in enforcing mode, refer to Section 16.2, “SELinux and virtualization” for more information on installing guests.
You can also run virt-install interactively.
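A sketch of a non-interactive virt-install command for a Windows guest (the guest name, ISO path, and sizes below are placeholders, not values from this guide):

# virt-install \
   --name=windowsguest \
   --ram=1024 \
   --cdrom=/var/lib/libvirt/images/windows-install.iso \
   --disk path=/var/lib/libvirt/images/windowsguest.img,size=12 \
   --vnc --os-type=windows

Once the guest boots from the installation ISO, the Windows installer can be accessed with virt-viewer as described above.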
Part III. Configuration Configuring virtualization in Red Hat Enterprise Linux 6 These chapters cover configuration procedures for various advanced virtualization tasks. These tasks include adding network and storage devices, enhancing security, improving performance, and using the para-virtualized drivers on fully virtualized guests.
Chapter 10. Network Configuration
This chapter provides an introduction to the common networking configurations used by libvirt-based applications. For additional information, consult the libvirt network architecture documentation: http://libvirt.org/intro.html.
Red Hat Enterprise Linux 6 supports the following networking setups for virtualization:
• virtual networks using Network Address Translation (NAT)
• directly allocated physical devices using PCI passthrough or SR-IOV.
libvirt adds iptables rules which allow traffic to and from guests attached to the virbr0 device in the INPUT, FORWARD, OUTPUT and POSTROUTING chains. libvirt then attempts to enable the ip_forward parameter. Some other applications may disable ip_forward, so the best option is to add the following to /etc/sysctl.conf:

net.ipv4.ip_forward = 1

Guest configuration
Once the host configuration is complete, a guest can be connected to the virtual network based on its name.
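The state of the default NAT-based network can be checked with the standard virsh network subcommands (the output shown is illustrative):

# virsh net-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes

If the default network is inactive, start it and mark it to start automatically:

# virsh net-start default
# virsh net-autostart default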
Creating the bridge
Create or edit the following two network configuration files. These steps can be repeated (with different names) for additional network bridges.
1. Change to the network scripts directory
Change to the /etc/sysconfig/network-scripts directory:
# cd /etc/sysconfig/network-scripts
2. Modify a network interface to make a bridge
Edit the network script for the network device you are adding to the bridge.
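A sketch of the resulting pair of files, assuming the physical device is eth0 and the bridge is named br0 (the device names and MAC address are placeholders):

/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
HWADDR=AA:BB:CC:DD:EE:FF
ONBOOT=yes
BRIDGE=br0

/etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0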
5. Configure iptables
Configure iptables to allow all traffic to be forwarded across the bridge.

# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
# service iptables save
# service iptables restart

Disable iptables on bridges
Alternatively, prevent bridged traffic from being processed by iptables rules. In /etc/sysctl.conf append the following lines:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
Chapter 11. KVM Para-virtualized Drivers Para-virtualized drivers are available for virtualized Windows guests running on KVM hosts. These para-virtualized drivers are included in the virtio package. The virtio package supports block (storage) devices and network interface controllers. Para-virtualized drivers enhance the performance of fully virtualized guests. With the para-virtualized drivers guest I/O latency decreases and throughput increases to near bare-metal levels.
Note
To use the network device driver only, load the virtio, virtio_net and virtio_pci modules. To use the block device driver only, load the virtio, virtio_ring, virtio_blk and virtio_pci modules.
Modified initrd files
The virtio package modifies the initrd RAM disk file in the /boot directory. The original initrd file is saved to /boot/initrd-kernel-version.img.virtio.orig.
modprobe virtio_pci

Reboot the guest to load the kernel modules.
Adding the para-virtualized drivers to the initrd RAM disk
This procedure covers loading the para-virtualized driver modules with the kernel on a Red Hat Enterprise Linux 3.9 or newer guest by including the modules in the initrd RAM disk. The mkinitrd tool configures the initrd RAM disk to load the modules.
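A sketch of such a mkinitrd invocation (the kernel version string is a placeholder and must match the guest's running kernel):

# mkinitrd --with=virtio --with=virtio_ring --with=virtio_blk \
   --with=virtio_net --with=virtio_pci \
   /boot/initrd-2.4.21-60.EL.virtio.img 2.4.21-60.EL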
Chapter 11. KVM Para-virtualized Drivers 11.2. Installing the KVM Windows para-virtualized drivers This section covers the installation process for the KVM Windows para-virtualized drivers. The KVM para-virtualized drivers can be loaded during the Windows installation or installed after the guest is installed.
3. Select the device type
This opens a wizard for adding the new device. Select Storage from the dropdown menu.
Chapter 11. KVM Para-virtualized Drivers Click the Forward button to proceed. 4. Select the ISO file Select Select managed or other existing storage and set the file location of the para-virtualized drivers .iso image file. The default location for the latest version of the drivers is /usr/share/ virtio-win/virtio-win.iso. Change the Device type to IDE cdrom and click the Forward button to proceed.
Installing the drivers on an installed Windows guest 5. Finish adding virtual hardware Press the Finish button to complete the wizard.
6. Reboot
Reboot or start the guest to begin using the driver disc. Virtualized IDE devices require a restart for the guest to recognize the new device.
Once the CD-ROM with the drivers is attached and the guest has started, proceed with Procedure 11.2, “Windows installation”.
Procedure 11.2. Windows installation
1. Open My Computer
On the Windows guest, open My Computer and select the CD-ROM drive.
2. Select the correct installation files
There are four files available on the disc. Select the drivers you require for your guest's architecture:
• the para-virtualized block device driver (RHEV-Block.msi for 32-bit guests or RHEV-Block64.msi for 64-bit guests),
• the para-virtualized network device driver (RHEV-Network.msi for 32-bit guests or RHEV-Network64.msi for 64-bit guests),
• or both the block and network device drivers.
Press Install to continue.
b. Confirm the exception
Windows may prompt for a security exception. Press Yes if it is correct.
c. Finish
Press Finish to complete the installation.
4. Install the network device driver
a. Start the network device driver installation
Double click RHEV-Network.msi or RHEV-Network64.msi.
Press Next to continue.
b. Performance setting
This screen configures advanced TCP settings for the network driver. TCP timestamps and TCP window scaling can be enabled or disabled. The default is 1, with window scaling enabled.
TCP window scaling is covered by IETF RFC 1323. The RFC defines a method of increasing the receive window size to a size greater than the default maximum of 65,535 bytes, up to a new maximum of 1 gigabyte (1,073,741,824 bytes).
Press Next to continue.
c. Confirm the exception
Windows may prompt for a security exception. Press Yes if it is correct.
d. Finish
Press Finish to complete the installation.
5. Reboot
Reboot the guest to complete the driver installation.
Change an existing device to use the para-virtualized drivers (Section 11.3, “Using KVM para-virtualized drivers for existing devices”) or install a new device using the para-virtualized drivers (Section 11.4, “Using KVM para-virtualized drivers for new devices”).
11.2.2. Installing drivers during the Windows installation
2. Creating the guest with virsh
This method attaches the para-virtualized driver floppy disk to a Windows guest before the installation. If the guest is created from an XML definition file with virsh, use the virsh define command, not the virsh create command.
a. Create, but do not start, the guest. Refer to Chapter 30, Managing guests with virsh for details on creating guests with the virsh command.
b. Attach the driver floppy disk to the guest's XML definition.
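A minimal sketch of the element such a step adds to the guest's XML definition (the fda target name is an assumption; the .vfd path matches the driver floppy referenced elsewhere in this chapter):

<disk type='file' device='floppy'>
  <source file='/usr/share/virtio-win/virtio-drivers.vfd'/>
  <target dev='fda'/>
</disk>

The modified definition can then be loaded with virsh define before starting the guest.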
Press the Finish button to continue.
b. Add the new device
Select Storage from the Hardware type list. Click Forward to continue.
c. Select the driver disk
Select Select managed or existing storage.
Set the location to /usr/share/virtio-win/virtio-drivers.vfd. Change Device type to Floppy disk.
Press the Forward button to continue.
d. Confirm the new device
Click the Finish button to confirm the device setup and add the device to the guest.
Press the green tick button to add the new device.
4. Creating the guest with virt-install
Append the following parameter exactly as listed below to add the driver disk to the installation with the virt-install command:

--disk path=/usr/share/virtio-win/virtio-drivers.vfd,device=floppy

5. During the installation, additional steps are required to install drivers, depending on the type of Windows guest.
a. Windows Server 2003
Press Enter to continue the installation.
b. Windows Server 2008
Install the guest as described by Section 9.1, “Using virt-install to create a guest”. When the installer prompts you for the driver, click on Load Driver, point the installer to Drive A: and pick the driver that suits your guest operating system and architecture.
11.3. Using KVM para-virtualized drivers for existing devices
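Switching an existing device to the para-virtualized drivers amounts to editing the guest's XML definition so that the disk uses the virtio bus. A minimal sketch of the kind of change involved (a sketch with placeholder device names, not the guide's full procedure): a disk target defined as

<target dev='hda' bus='ide'/>

is changed to

<target dev='vda' bus='virtio'/>

using virsh edit guestname, after which the guest is restarted so the device is driven by the para-virtualized block driver.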
Press Forward to continue.
2. Select the storage device and driver
Create a new disk image or select a storage pool volume. Set the Device type to Virtio Disk to use the para-virtualized drivers.
Press Forward to continue.
3. Finish the procedure
Confirm the details for the new device are correct.
Press Finish to complete the procedure.
Procedure 11.5. Adding a network device using the para-virtualized network driver
1. Select hardware type
Select Network as the Hardware type.
Press Forward to continue.
2. Select the network device and driver
Select the network device from the Host device list. Create a custom MAC address or use the one provided. Set the Device model to virtio to use the para-virtualized drivers.
Press Forward to continue.
3. Finish the procedure
Confirm the details for the new device are correct.
Press Finish to complete the procedure.
Once all new devices are added, reboot the guest. Windows guests may not recognize the devices until the guest is rebooted.
Chapter 12. PCI passthrough This chapter covers using PCI passthrough with KVM. Certain hardware platforms allow virtualized guests to directly access various hardware devices and components. This process in virtualization is known as passthrough. Passthrough is known as device assignment in some of the KVM documentation and the KVM code. The KVM hypervisor supports attaching PCI devices on the host system to virtualized guests.
Chapter 12. PCI passthrough 3. Ready to use Reboot the system to enable the changes. Your system is now PCI passthrough capable. Procedure 12.2. Preparing an AMD system for PCI passthrough • Enable AMD IOMMU extensions The AMD IOMMU extensions are required for PCI passthrough with Red Hat Enterprise Linux. The extensions must be enabled in the BIOS. Some system manufacturers disable these extensions by default. AMD systems only require that the IOMMU is enabled in the BIOS.
Record the PCI device number; the number is needed in other steps.
$ readlink /sys/bus/pci/devices/0000\:00\:1a.7/driver
../../../bus/pci/drivers/ehci_hcd
7. Detach the device:
$ virsh nodedev-dettach pci_8086_3a6c
8. Verify it is now under the control of pci_stub:
$ readlink /sys/bus/pci/devices/0000\:00\:1a.7/driver
../../../bus/pci/drivers/pci-stub
9. Set a sebool to allow the management of the PCI device from the guest:
$ setsebool -P virt_manage_sysfs 1
10.
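The remaining step attaches the detached device to a guest. A minimal sketch of the <hostdev> element added to the guest's XML definition (the address values correspond to the example device 00:1a.7 and are placeholders for the device recorded earlier):

<hostdev mode='subsystem' type='pci'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1a' function='0x7'/>
  </source>
</hostdev>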
Adding a PCI device with virt-manager pci_0000_00_1a_7 pci_0000_00_1b_0 pci_0000_00_1c_0 Tip: determining the PCI device Comparing lspci output to lspci -n (which turns off name resolution) output can assist in deriving which device has which device identifier code. Record the PCI device number; the number is needed in other steps. 2. Detach the PCI device Detach the device from the system. # virsh nodedev-dettach pci_8086_3a6c Device pci_8086_3a6c dettached 3.
Chapter 12. PCI passthrough 4. Add the new device Select Physical Host Device from the Hardware type list. Click Forward to continue. 5. Select a PCI device Select an unused PCI device. Note that selecting PCI devices presently in use on the host causes errors. In this example a PCI to USB interface device is used.
Adding a PCI device with virt-manager 6. Confirm the new device Click the Finish button to confirm the device setup and add the device to the guest.
Chapter 12. PCI passthrough The setup is complete and the guest can now use the PCI device. 12.3. PCI passthrough with virt-install To use PCI passthrough with the virt-install parameter, use the additional --host-device parameter. 1. Identify the PCI device Identify the PCI device designated for passthrough to the guest. The virsh nodedev-list command lists all devices attached to the system.
pci_0000_00_02_0
pci_0000_00_02_1
pci_0000_00_03_0
pci_0000_00_03_2
pci_0000_00_03_3
pci_0000_00_19_0
pci_0000_00_1a_0
pci_0000_00_1a_1
pci_0000_00_1a_2
pci_0000_00_1a_7
pci_0000_00_1b_0
pci_0000_00_1c_0
Tip: determining the PCI device
Comparing lspci output to lspci -n (which turns off name resolution) output can assist in deriving which device has which device identifier code.
2. Add the device
Attach the device to the guest by passing its identifier to the --host-device parameter of virt-install.
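A sketch of such a command (the guest name, image path, and installation source are placeholders; the device name is one of the identifiers recorded above):

# virt-install \
   --name=pci-guest \
   --ram=1024 \
   --disk path=/var/lib/libvirt/images/pci-guest.img,size=8 \
   --cdrom=/var/lib/libvirt/images/rhel6.iso \
   --vnc \
   --host-device=pci_0000_00_19_0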
Chapter 13. SR-IOV 13.1. Introduction The PCI-SIG (PCI Special Interest Group) developed the Single Root I/O Virtualization (SR-IOV) specification. The SR-IOV specification is a standard for a type of PCI passthrough which natively shares a single device to multiple guests. SR-IOV reduces hypervisor involvement by specifying virtualization compatible memory spaces, interrupts and DMA streams. SR-IOV improves device performance for virtualized guests. Figure 13.1.
Chapter 13. SR-IOV appears as a network card in the same way as a normal network card would appear to an operating system. The SR-IOV drivers are implemented in the kernel. The core implementation is contained in the PCI subsystem, but there must also be driver support for both the Physical Function (PF) and Virtual Function (VF) devices. With an SR-IOV capable device one can allocate VFs from a PF.
4. Activate Virtual Functions
The max_vfs parameter of the igb module allocates the maximum number of Virtual Functions. The max_vfs parameter causes the driver to spawn Virtual Functions, up to the value of the parameter. For this particular card the valid range is 0 to 7.
Remove the module to change the variable.
# modprobe -r igb
Restart the module with max_vfs set to 1 or any number of Virtual Functions up to the maximum supported by your device.
# modprobe igb max_vfs=7
5. Make the Virtual Functions persistent
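A sketch of recording the module option so it survives a reboot (the configuration file name is an example):

# echo "options igb max_vfs=7" >> /etc/modprobe.d/igb.conf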
Chapter 13. SR-IOV pci_0000_0b_10_2 pci_0000_0b_10_3 pci_0000_0b_10_4 pci_0000_0b_10_5 pci_0000_0b_10_6 pci_0000_0b_11_7 pci_0000_0b_11_1 pci_0000_0b_11_2 pci_0000_0b_11_3 pci_0000_0b_11_4 pci_0000_0b_11_5 The serial numbers for the Virtual Functions and Physical Functions should be in the list. 8. Get device details with virsh The pci_0000_0b_00_0 is one of the Physical Functions and pci_0000_0b_10_0 is the first corresponding Virtual Function for that Physical Function.
Device pci_0000_0b_10_0 dettached
10. Add the Virtual Function to the guest
a. Shut down the guest.
b. Use the output from the virsh nodedev-dumpxml pci_8086_10ca_0 command to calculate the values for the configuration file. Convert slot and function values to hexadecimal values (from decimal) to get the PCI bus addresses. Append "0x" to the beginning of the output to tell the computer that the value is a hexadecimal number.
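A worked example of the conversion: a Virtual Function at bus 11, slot 16, function 0 (decimal) yields the following attribute values:

$ printf "bus='0x%02x' slot='0x%02x' function='0x%x'\n" 11 16 0
bus='0x0b' slot='0x10' function='0x0'

These values are then used in the <address> element of the <hostdev> entry added to the guest's XML definition.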
Troubleshooting SR-IOV
Error starting the guest
When starting a configured virtual machine, an error is reported as follows:

# virsh start test
error: Failed to start domain test
error: internal error unable to start guest: char device redirected to /dev/pts/2
get_real_device: /sys/bus/pci/devices/0000:03:10.0/config: Permission denied
init_assigned_device: Error: Couldn't get real device (03:10.0)!
Failed to initialize assigned device host=03:10.0
Chapter 14. KVM guest timing management Virtualization poses various challenges for guest time keeping. Guests using the Time Stamp Counter (TSC) as a clock source may suffer timing issues as some CPUs do not have a constant Time Stamp Counter. Guests without accurate timekeeping may have issues with some networked applications and processes as the guest will run faster or slower than the actual time and fall out of synchronization.
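Whether a host's processors provide a constant TSC can be checked from the constant_tsc flag in /proc/cpuinfo:

$ grep constant_tsc /proc/cpuinfo

If the flag is present, the TSC does not vary with CPU frequency changes.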
If the CPU lacks the constant_tsc bit, disable all power management features (BZ#513138). Each system has several timers it uses to keep time. The TSC can be unstable on the host; instability is sometimes caused by cpufreq changes, deep C states, or migration to a host with a faster TSC. Deep C sleep states can stop the TSC. To prevent the kernel using deep C states, append processor.max_cstate=1 to the kernel boot options in the grub.conf file.
Using the Real-Time Clock with Windows Server 2003 and Windows XP guests
Windows uses both the Real-Time Clock (RTC) and the Time Stamp Counter (TSC). For Windows guests the Real-Time Clock can be used instead of the TSC for all time sources, which resolves guest timing issues.
To enable the Real-Time Clock for the PMTIMER clock source (the PMTIMER usually uses the TSC), add the following line to the Windows boot settings. Windows boot settings are stored in the boot.ini file.
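A sketch of such a boot.ini entry, assuming the standard /usepmtimer switch (the partition path and description are placeholders):

multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /usepmtimer

After editing boot.ini, reboot the Windows guest for the change to take effect.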
Part IV. Administration Administering virtualized systems These chapters contain information for administering host and virtualized guests using tools included in Red Hat Enterprise Linux 6.
Chapter 15. Server best practices The following tasks and tips can assist you with securing and ensuring reliability of your Red Hat Enterprise Linux host. • Run SELinux in enforcing mode. Set SELinux to run in enforcing mode with the setenforce command. # setenforce 1 • Remove or disable any unnecessary services such as AutoFS, NFS, FTP, HTTP, NIS, telnetd, sendmail and so on. • Only add the minimum number of user accounts needed for platform management on the server and remove unnecessary user accounts.
Chapter 16. Security for virtualization When deploying virtualization technologies on your corporate infrastructure, you must ensure that the host cannot be compromised. The host is a Red Hat Enterprise Linux system that manages the system, devices, memory and networks as well as all virtualized guests. If the host is insecure, all guests in the system are vulnerable. There are several ways to enhance security on systems using virtualization.
Chapter 16. Security for virtualization # lvcreate -n NewVolumeName -L 5G volumegroup 2. Format the NewVolumeName logical volume with a file system that supports extended attributes, such as ext3. # mke2fs -j /dev/volumegroup/NewVolumeName 3. Create a new directory for mounting the new logical volume. This directory can be anywhere on your file system. It is advised not to put it in important system directories (/etc, /var, /sys) or in home directories (/home or /root).
16.3. SELinux
This section contains topics to consider when using SELinux with your virtualization deployment. When you deploy system changes or add devices, you must update your SELinux policy accordingly. To configure an LVM volume for a guest, you must modify the SELinux context for the respective underlying block device and volume group.
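A sketch of the commands involved, using the logical volume created in the previous section (virt_image_t is the SELinux type used for guest images; the -f -b options mark the rule as applying to a block device):

# semanage fcontext -a -t virt_image_t -f -b /dev/volumegroup/NewVolumeName
# restorecon /dev/volumegroup/NewVolumeName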
Chapter 16. Security for virtualization • Enabling IP forwarding (net.ipv4.ip_forward = 1) is also required for shared bridges and the default bridge. Note that installing libvirt enables this variable so it will be enabled when the virtualization packages are installed unless it was manually disabled.
Chapter 17. sVirt sVirt is a technology included in Red Hat Enterprise Linux 6 that integrates SELinux and virtualization. sVirt applies Mandatory Access Control (MAC) to improve security when using virtualized guests. The main reasons for integrating these technologies are to improve security and harden the system against bugs in the hypervisor that might be used as an attack vector aimed toward the host or to another virtualized guest.
Chapter 17. sVirt 17.1. Security and Virtualization When services are not virtualized, machines are physically separated. Any exploit is usually contained to the affected machine, with the obvious exception of network attacks. When services are grouped together in a virtualized environment, extra vulnerabilities emerge in the system.
sVirt labeling
system_u:object_r:svirt_image_t:s0:c87,c520 image1
The following table outlines the different labels that can be assigned when using sVirt:
Table 17.1. sVirt labels
Type: Virtualized guest processes
SELinux Context: system_u:system_r:svirt_t:MCS1
Description: MCS1 is a randomly selected MCS field. Currently approximately 500,000 labels are supported.
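The process label can be observed on a running host; for example (illustrative output, the PID and category pair will differ for each guest):

# ps -eZ | grep qemu
system_u:system_r:svirt_t:s0:c87,c520 27950 ?  00:00:17 qemu-kvm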
Chapter 18. KVM live migration This chapter covers migrating guests running on a KVM hypervisor to another KVM host. Migration is the term for the process of moving a virtualized guest from one host to another. Migration is a key feature of virtualization as software is completely separated from hardware. Migration is useful for: • Load balancing - guests can be moved to hosts with lower usage when a host becomes overloaded.
Chapter 18. KVM live migration • Two or more Red Hat Enterprise Linux systems of the same version with the same updates. • Both systems must have the appropriate ports open. • Both systems must have identical network configurations. All bridging and network configurations must be exactly the same on both hosts. • Shared storage must mount at the same location on source and destination systems. The mounted directory name must be identical.
Locations must be the same on source and destination
Whichever directory is chosen for the guests must be exactly the same on the source and destination hosts. This applies to all types of shared storage. The directory must be the same or the migration will fail.
18.3. Live KVM migration with virsh
A guest can be migrated to another host with the virsh command.
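The general form, using the guest and destination host from the example that follows (the --live option requests a live rather than offline migration):

# virsh migrate --live RHEL4 qemu+ssh://test2.example.com/system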
4. Verify the guest has arrived at the destination host
From the destination system, test2.example.com, verify the RHEL4 guest is running:

[root@test2 ~]# virsh list
Id Name                 State
----------------------------------
10 RHEL4                running

The live migration is now complete.
Other networking methods
libvirt supports a variety of networking methods including TLS/SSL, unix sockets, SSH, and unencrypted TCP.
Migrating with virt-manager Figure 18.1. Add Connection virt-manager now displays the newly connected host in the list of available hosts.
Chapter 18. KVM live migration Figure 18.2. Connected Host 2. Add a storage pool to both hosts Both hosts must be connected to the same storage pool. Create the storage pool on both hosts using the same network storage device. Using a storage pool ensures both servers have identical storage configurations. This procedure uses a NFS server. a. Open the storage tab On the Edit menu, click Host Details, the Host Details window appears. Click the Storage tab.
Migrating with virt-manager Figure 18.3. Storage tab b. Add a storage pool with the same NFS to the source and target hosts. Add a new storage pool. In the lower left corner of the window, click the + button. The Add a New Storage Pool window appears. Enter the following details: • Name: Enter the name of the storage pool. • Type: Select netfs: Network Exported Directory.
Chapter 18. KVM live migration Figure 18.4. Add a new Storage Pool Press Forward to continue. c. Specify storage pool details Enter the following details: • Format: Select the storage type. This must be NFS or iSCSI for live migrations. • Host Name: Enter the IP address or fully-qualified domain name of the storage server.
Figure 18.5. Storage pool details
Press the Finish button to add the storage pool.
d. Verify the new storage pool was added successfully
The new storage pool should be visible in the Storage tab.
Chapter 18. KVM live migration Figure 18.6. New storage pool in the storage tab Complete these steps on both hosts before proceeding. 3. Optional: Add a volume to the storage pool Add a volume to the storage pool or create a new virtualized guest on the storage pool. If your storage pool already has virtualized guests, you can skip this step. a. Create a new volume in the shared storage pool, click New Volume. Enter the details, then click Create Volume.
Migrating with virt-manager Figure 18.7. Add a storage volume b. Create a new virtualized guest on the new volume Create a new virtualized guest that uses the new volume. For information on creating virtualized guests, refer to Part II, “Installation”.
Chapter 18. KVM live migration Figure 18.8. New virtualized guest The Virtual Machine window appears.
Migrating with virt-manager Figure 18.9. Virtual Machine window 4. Migrate the virtualized guest From the main virt-manager screen, right-click on the virtual machine and select Migrate.... The Migrate the virtual machine window appears.
Figure 18.10. Migrate the virtual machine
Select the destination host from the list. Select Migrate offline to disable live migration and do an offline migration. Select advanced options if required. For a standard migration, none of these settings should be modified. Press Migrate to confirm and migrate the virtualized guest.
Migrating with virt-manager 5. A status bar tracks the progress of the migration. Once the migration is complete the virtualized guest will appear in the list of virtualized guests on the destination. Figure 18.11.
Chapter 19. Remote management of virtualized guests This section explains how to remotely manage your virtualized guests using ssh or TLS and SSL. 19.1. Remote management with SSH The ssh package provides an encrypted network protocol which can securely send management functions to remote virtualization servers. The method described uses the libvirt management connection securely tunneled over an SSH connection to manage the remote machines.
Chapter 19. Remote management of virtualized guests $ ssh-keygen -t rsa 3. Copying the keys to the remote hosts Remote login without a password, or with a passphrase, requires an SSH key to be distributed to the systems being managed. Use the ssh-copy-id command to copy the key to the root user at the system address provided (in the example, root@example.com). $ ssh-copy-id -i ~/.ssh/id_rsa.pub root@example.com root@example.com's password: Now try logging into the machine with ssh root@example.com and check the .ssh/authorized_keys file to make sure unexpected keys have not been added.
Transport modes TLS/SSL access for virt-manager The libvirt Wiki contains complete details on how to configure TLS/SSL access: http://wiki.libvirt.org/page/TLSSetup To enable SSL and TLS for VNC, refer to the libvirt Wiki: http://wiki.libvirt.org/page/VNCTLSSetup. It is necessary to place the Certificate Authority Certificate, Client Certificate, and Client Certificate Private Key in the following locations: • The Certificate Authority Certificate should be placed in /etc/pki/CA/cacert.pem.
Chapter 19. Remote management of virtualized guests Remote URIs A Uniform Resource Identifier (URI) is used by virsh and libvirt to connect to a remote host. URIs can also be used with the --connect parameter for the virsh command to execute single commands or migrations on remote hosts.
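For example, assuming a hypothetical remote host named host2.example.com, a single command can be run over an SSH tunnel, or over TLS by omitting the transport (TLS is the default for remote URIs):
$ virsh --connect qemu+ssh://root@host2.example.com/system list --all
$ virsh --connect qemu://host2.example.com/system list --all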
Transport modes Name Transport mode Description Example usage hostname, port number, username and extra parameters from the remote URI, but in certain very complex cases it may be better to supply the name explicitly. command ssh and ext The external command. For ext transport this is required. For ssh the default is ssh. The PATH is searched for the command.
Chapter 19. Remote management of virtualized guests Name Transport mode Description Example usage server checks of the client's certificate or IP address you must change the libvirtd configuration. no_tty ssh If set to a non-zero value, this stops ssh from asking for a password if it cannot log in to the remote machine automatically (for using ssh-agent or similar). Use this when you do not have access to a terminal - for example in graphical programs which use libvirt.
Chapter 20. Overcommitting with KVM The KVM hypervisor supports overcommitting CPUs and overcommitting memory. Overcommitting is allocating more virtualized CPUs or memory than there are physical resources on the system. With CPU overcommit, under-utilized virtualized servers or desktops can run on fewer servers which saves power and money. Overcommitting memory Most operating systems and applications do not use 100% of the available RAM all the time. This behavior can be exploited with KVM.
Chapter 20. Overcommitting with KVM Configuring swap for overcommitting memory The swap partition is used for swapping underused memory to the hard drive to speed up memory performance. The default size of the swap partition is calculated from the physical RAM of the host. 1 Red Hat Knowledgebase has an article on safely and efficiently determining the size of the swap partition. The swap partition must be large enough to provide virtual memory for all guests and the host system.
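Before overcommitting, it is worth reviewing the host's current memory and swap figures. A quick check might look like the following:
# free -m
# swapon -s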
core processor. Overcommitting symmetric multiprocessing guests beyond the physical number of processing cores will cause significant performance degradation. Assigning guests VCPUs up to the number of physical cores is appropriate and works as expected. For example, running virtualized guests with four VCPUs on a quad core host. Guests with less than 100% loads should function effectively in this setup.
Chapter 21. KSM The concept of shared memory is common in modern operating systems. For example, when a program is first started it shares all of its memory with the parent program. When either the child or parent program tries to modify this memory, the kernel allocates a new memory region, copies the original contents and allows the program to modify this new region. This is known as copy on write. KSM is a new Linux feature which uses this concept in reverse.
Chapter 21. KSM The KSM tuning service The ksmtuned service does not have any options. The ksmtuned service loops and adjusts ksm. The ksmtuned service is notified by libvirt when a virtualized guest is created or destroyed. # service ksmtuned start Starting ksmtuned: [ OK ] The ksmtuned service can be tuned with the retune parameter. The retune parameter instructs ksmtuned to run tuning functions manually. The /etc/ksmtuned.conf file is the configuration file for the ksmtuned service.
pages_volatile Number of volatile pages. run Whether the KSM process is running. sleep_millisecs Sleep milliseconds. KSM tuning activity is stored in the /var/log/ksmtuned log file if the DEBUG=1 line is added to the /etc/ksmtuned.conf file. The log file location can be changed with the LOGFILE parameter. Changing the log file location is not advised and may require special configuration of SELinux settings. The /etc/sysconfig/ksm file can manually set some or all pages used by KSM as not swappable.
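These parameters are exposed under the /sys/kernel/mm/ksm/ directory. As a sketch, the sharing statistics can be read, and the KSM thread started, directly from that directory (the ksm and ksmtuned services normally manage the run value, so manual changes are best kept to testing):
# cat /sys/kernel/mm/ksm/pages_shared
# cat /sys/kernel/mm/ksm/pages_sharing
# echo 1 > /sys/kernel/mm/ksm/run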
Chapter 22. Advanced virtualization administration This chapter covers advanced administration tools for fine tuning and controlling virtualized guests and host system resources. Note This chapter is a work in progress. Refer back to this document at a later date. 22.1. Guest scheduling KVM guests function as Linux processes. By default, KVM guests are prioritized and scheduled with the Linux Completely Fair Scheduler.
Chapter 23. Migrating to KVM from other hypervisors using virt-v2v The virt-v2v command converts guests from a foreign hypervisor to run on KVM, managed by libvirt. The virt-v2v command can currently convert Red Hat Enterprise Linux 4, Red Hat Enterprise Linux 5, Windows Vista, Windows 7, Windows Server 2003 and Windows Server 2008 virtualized guests running on Xen, KVM and VMware ESX. The virt-v2v command enables para-virtualized (virtio) drivers in the converted guest if possible.
Chapter 23. Migrating to KVM from other hypervisors using virt-v2v Figure 23.2. The storage tab Click the plus sign (+) button to add a new storage pool.
Preparing to convert a virtualized guest Figure 23.3. Adding a storage pool 2. Create local network interfaces. The local machine must have an appropriate network to which the converted virtualized guest can connect. This is likely to be a bridge interface. A bridge interface can be created using standard tools on the host. Since version 0.8.3, virt-manager can also create and manage bridges. 3. Specify network mappings in virt-v2v.conf. This step is optional, and is not required for most use cases.
Chapter 23. Migrating to KVM from other hypervisors using virt-v2v virt-v2v/software/. virt-v2v will display an error similar to Example 23.1, “Missing Package error” if software it depends upon for a particular conversion is not available. Example 23.1. Missing Package error virt-v2v: Installation failed because the following files referenced in the configuration file are required, but missing: rhel/5/kernel-2.6.18-128.el5.x86_64.rpm rhel/5/ecryptfs-utils-56-8.el5.x86_64.rpm rhel/5/ecryptfs-utils-56-8.el5.
Converting virtualized guests 1. From the Red Hat Enterprise Virtualization Manager, log in to Red Hat Network 2. Click on Download Software 3. Select the Red Hat Enterprise Virtualization (x86-64) channel 4. Select the Red Hat Enterprise Virt Manager for Desktops (v.2 x86) or Red Hat Enterprise Virt Manager for Desktops (v.2 x86_64) channel, as appropriate for your subscription. 5. Download the Guest Tools ISO for 2.2 and save it locally 2.
Chapter 23. Migrating to KVM from other hypervisors using virt-v2v virt-v2v -i libvirtxml -op pool --bridge brname vm-name.xml virt-v2v -op pool --network netname vm-name virt-v2v -ic esx://esx.example.com/?no_verify=1 -op pool --bridge brname vm-name Parameters -i input Specifies the input method to obtain the guest for conversion. The default is libvirt. Supported options are: • libvirt Guest argument is the name of a libvirt domain.
Converting a local Xen virtualized guest --version Display version number and exit. 23.2.2. Converting a local Xen virtualized guest Ensure that the virtualized guest's XML is available locally, and that the storage referred to in the XML is available locally at the same paths. To convert the virtualized guest from an XML file, run: virt-v2v -i libvirtxml -op pool --bridge brname vm-name.
Chapter 23. Migrating to KVM from other hypervisors using virt-v2v Authenticating to the ESX server Connecting to the ESX server will require authentication. virt-v2v supports password authentication when connecting to ESX. It reads passwords from $HOME/.netrc. The format of this file is described in the netrc(5) man page. An example entry is: machine esx.example.com login root password s3cr3t .netrc permissions The .netrc file must have a permission mask of 0600 to be read correctly by virt-v2v.
Running converted virtualized guests 23.3. Running converted virtualized guests On successful completion, virt-v2v will create a new libvirt domain for the converted virtualized guest with the same name as the original virtualized guest. It can be started as usual using libvirt tools, for example virt-manager. Guest network configuration virt-v2v cannot currently reconfigure a guest's network configuration.
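For example, assuming the original guest was named vm-name as in the earlier conversion examples, the converted guest can be started and inspected with the usual libvirt tools:
# virsh start vm-name
# virsh dominfo vm-name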
Chapter 23. Migrating to KVM from other hypervisors using virt-v2v
Para-virtualized driver type    Driver module
Storage                         virtio_blk
Network                         virtio_net
In addition, initrd will preload the virtio_pci driver.
Other drivers
Display                         cirrus
Block                           Virtualized IDE
Network                         Virtualized e1000
23.4.2. Configuration changes for Windows virtualized guests Warning Before converting Windows virtualized guests, ensure that the libguestfs-winsupport and virtio-win packages are installed on the host running virt-v2v.
Configuration changes for Windows virtualized guests Note The Guest Tools ISO must be uploaded using the ISO Uploader for this step to succeed. See Preparing to convert a virtualized guest running Windows for instructions. 3. CDUpgrader detects the Guest Tools CD and installs all the virtio drivers from it, including a reinstall of the virtio block drivers.
Chapter 24. Miscellaneous administration tasks This chapter contains useful hints and tips to improve virtualization performance, scale and stability. 24.1. Automatically starting guests This section covers how to make virtualized guests start automatically during the host system's boot phase. This example uses virsh to set a guest, TestServer, to automatically start when the host boots. # virsh autostart TestServer Domain TestServer marked as autostarted The guest now automatically starts with the host.
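Automatic startup can be turned off again with the --disable flag:
# virsh autostart --disable TestServer
Domain TestServer unmarked as autostarted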
Chapter 24. Miscellaneous administration tasks Image conversion is also useful for getting a smaller image when using a format which can grow, such as qcow or cow. The empty sectors are detected and suppressed from the destination image. Getting image information The info parameter displays information about a disk image. The format for the info option is as follows: # qemu-img info [-f format] filename This gives information about the disk image filename.
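As an illustrative example (the image paths are placeholders), a raw image can be converted to the qcow2 format and then inspected:
# qemu-img convert -f raw -O qcow2 /var/lib/libvirt/images/guest.img /var/lib/libvirt/images/guest.qcow2
# qemu-img info /var/lib/libvirt/images/guest.qcow2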
Setting KVM processor affinities • The following output contains a vmx entry indicating an Intel processor with the Intel VT extensions: flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm • The following output contains an svm entry indicating an AMD processor with the AMD-V extensions: flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscal
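Flag listings such as these can be produced by filtering /proc/cpuinfo, for example:
# grep -E 'vmx|svm' /proc/cpuinfo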
Chapter 24. Miscellaneous administration tasks Identifying CPU and NUMA topology The first step in deciding what policy to apply is to determine the host's memory and CPU topology. The virsh nodeinfo command provides information about how many sockets, cores and hyperthreads are attached to a host.
Setting KVM processor affinities The output shows two NUMA nodes (also known as NUMA cells), each containing four logical CPUs (four processing cores). This system has two sockets, therefore it can be inferred that each socket is a separate NUMA node. For a guest with four virtual CPUs, it would be optimal to lock the guest to physical CPUs 0 to 3, or 4 to 7, to avoid accessing non-local memory, which is significantly slower than accessing local memory.
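The free memory in each NUMA cell can be checked before placing a guest; the virsh freecell command reports the free memory for the host or, with a cell number, for a single NUMA cell:
# virsh freecell
# virsh freecell 1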
Chapter 24. Miscellaneous administration tasks b. Locate where the guest's virtual CPU count is specified. Find the vcpus element: <vcpus>4</vcpus> The guest in this example has four CPUs. c. Add a cpuset attribute with the CPU numbers for the relevant NUMA cell: <vcpus cpuset='4-7'>4</vcpus> 4. Save the configuration file and restart the guest. The guest has been locked to CPUs 4 to 7.
Generating a new unique MAC address To lock the virtual CPUs to the second NUMA node (CPUs four to seven), run the following commands.
# virsh vcpupin guest1 0 4
# virsh vcpupin guest1 1 5
# virsh vcpupin guest1 2 6
# virsh vcpupin guest1 3 7
The virsh vcpuinfo command confirms the change in affinity.
# virsh vcpuinfo guest1
VCPU:           0
CPU:            4
State:          running
CPU time:       32.2s
CPU Affinity:   ----y---
VCPU:           1
CPU:            5
State:          running
CPU time:       16.9s
CPU Affinity:   -----y--
VCPU:           2
CPU:            6
State:          running
CPU time:       11.
Chapter 24. Miscellaneous administration tasks
    mac = [ 0x00, 0x16, 0x3e,
        random.randint(0x00, 0x7f),
        random.randint(0x00, 0xff),
        random.randint(0x00, 0xff) ]
    return ':'.join(map(lambda x: "%02x" % x, mac))
#
print randomMAC()
Another method to generate a new MAC for your guest You can also use the built-in modules of python-virtinst to generate a new MAC address and UUID for use in a guest configuration file: # echo 'import virtinst.util ; print\ virtinst.util.uuidToString(virtinst.util.
Very Secure ftpd For more information on overcommitting with KVM, refer to Chapter 20, Overcommitting with KVM. Warning: turning off swap Virtual memory allows a Linux system to use more memory than there is physical RAM on the system. Underused processes are swapped out, which allows active processes to use memory, improving memory utilization. Disabling swap reduces memory utilization as all processes are stored in physical RAM. If swap is turned off, do not overcommit guests.
Chapter 24. Miscellaneous administration tasks 4. Use the chkconfig --list vsftpd command to verify the vsftpd daemon is enabled to start during system boot: $ chkconfig --list vsftpd vsftpd 0:off 1:off 2:off 3:on 4:on 5:on 6:off 5. Use the service vsftpd start command to start the vsftpd service: $ service vsftpd start Starting vsftpd for vsftpd: [ OK ] 24.8.
Virtual machine timer management with libvirt Without the acpid package, the Red Hat Enterprise Linux 6 guest does not shut down when the virsh shutdown command is executed. The virsh shutdown command is designed to gracefully shut down virtualized guests. Using virsh shutdown is easier and safer for system administration. Without the graceful shutdown provided by the virsh shutdown command, a system administrator must log in to a virtualized guest manually or send the Ctrl-Alt-Del key combination to each guest.
Chapter 24. Miscellaneous administration tasks Table 24.1. Offset attribute values Value Description utc The guest clock will be synchronized to UTC when booted. localtime The guest clock will be synchronized to the host's configured timezone when booted, if any. timezone The guest clock will be synchronized to a given timezone, specified by the timezone attribute. variable The guest clock will be synchronized to an arbitrary offset from UTC.
Virtual machine timer management with libvirt Table 24.2. name attribute values Value Description platform The master virtual time source which may be used to drive the policy of other time sources. pit Programmable Interval Timer - a timer with periodic interrupts. rtc Real Time Clock - a continuously running timer with periodic interrupts. hpet High Precision Event Timer - multiple timers with periodic interrupts. tsc Time Stamp Counter - counts the number of ticks since reset, no interrupts.
Chapter 24. Miscellaneous administration tasks Value Description paravirt Native + para-virtualized. • present Used to override the default set of timers visible to the guest. For example, to enable or disable the HPET. Table 24.6. present attribute values Value Description yes Force this timer to be visible to the guest. no Force this timer to not be visible to the guest. Example 24.5.
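Combining these attributes, a guest's clock configuration in the libvirt XML might look like the following sketch (the timer names and the present value follow the tables above; the tickpolicy values shown are illustrative):
<clock offset='utc'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
  <timer name='hpet' present='no'/>
</clock>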
Part V. Virtualization storage topics Introduction to storage administration for virtualization These chapters contain information about the storage used in a virtualized environment. The chapters explain the concepts of storage pools and volumes, provide detailed configuration procedures and cover other relevant storage topics.
Chapter 25. Storage concepts This chapter introduces the concepts used for describing and managing storage devices. Local storage Local storage is directly attached to the host server. Local storage includes local directories, directly attached disks, and LVM volume groups on local storage devices. Networked storage Networked storage covers storage devices shared over a network using standard protocols.
Chapter 25. Storage concepts 25.2. Volumes Storage pools are divided into storage volumes. Storage volumes are an abstraction of physical partitions, LVM logical volumes, file-based disk images and other storage types handled by libvirt. Storage volumes are presented to virtualized guests as local storage devices regardless of the underlying hardware.
Volumes Name: Type: Capacity: Allocation: firstimage block 20.00 GB 20.00 GB virsh provides commands for converting between a volume name, volume path, or volume key: vol-name Returns the volume name when provided with a volume path or volume key. # virsh vol-name /dev/guest_images/firstimage firstimage # virsh vol-name Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr vol-path Returns the volume path when provided with a volume key, or a storage pool identifier and volume name.
Chapter 26. Storage pools 26.1. Creating storage pools 26.1.1. Dedicated storage device-based storage pools This section covers dedicating storage devices to virtualized guests. Security issues with dedicated disks Guests should not be given write access to whole disks or block devices (for example, /dev/sdb). Use partitions (for example, /dev/sdb1) or LVM volumes.
Chapter 26. Storage pools The device parameter with the path attribute specifies the device path of the storage device. This example uses the device /dev/sdb. The file system target parameter with the path sub-parameter (here, /dev) determines the location on the host file system to attach volumes created with this storage pool. For example, sdb1, sdb2, sdb3.
Partition-based storage pools 5. Turn on autostart Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts. # virsh pool-autostart guest_images_disk Pool guest_images_disk marked as autostarted # virsh pool-list --all Name State Autostart ----------------------------------------default active yes guest_images_disk active yes 6.
Chapter 26. Storage pools b. Click on the Storage tab of the Host Details window.
Partition-based storage pools 2. Create the new storage pool a. Add a new pool (part 1) Press the + button (the add pool button). The Add a New Storage Pool wizard appears. Choose a Name for the storage pool. This example uses the name guest_images_fs. Change the Type to fs: Pre-Formatted Block Device.
Chapter 26. Storage pools Press the Forward button to continue. b. Add a new pool (part 2) Change the Target Path, Format, and Source Path fields. Target Path Enter the location to mount the source device for the storage pool in the Target Path field. If the location does not already exist, virt-manager will create the directory. Format Select a format from the Format list. The device is formatted with the selected format.
Partition-based storage pools The storage pool is now created, close the Host Details window. 26.1.2.2. Creating a partition-based storage pool using virsh This section covers creating a partition-based storage pool with the virsh command. Security warning Do not use this procedure to assign an entire disk as a storage pool (for example, /dev/sdb). Guests should not be given write access to whole disks or block devices. Only use this method to assign partitions (for example, /dev/sdb1) to storage pools.
Chapter 26. Storage pools The directory /guest_images is used in this example. # virsh pool-define-as guest_images_fs fs - - /dev/sdc1 - "/guest_images" Pool guest_images_fs defined The new pool and mount points are now created. 2. Verify the new pool List the present storage pools. # virsh pool-list --all Name State Autostart ----------------------------------------default active yes guest_images_fs inactive no 3.
Directory-based storage pools guest_images_fs 6. active yes Verify the storage pool Verify the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running. Verify there is a "lost+found" directory in the mount point on the file system, indicating the device is mounted. # virsh pool-info guest_images_fs Name: guest_images_fs UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0 State: running Capacity: 458.39 GB Allocation: 197.91 MB Available: 458.
Chapter 26. Storage pools total 8 drwx------. 2 root root 4096 May 28 13:57 . dr-xr-xr-x. 26 root root 4096 May 28 13:57 .. 2. Configure SELinux file contexts Configure the correct SELinux context for the new directory. # semanage fcontext -a -t virt_image_t /guest_images 3. Open the storage pool settings a. In the virt-manager graphical interface, select the host from the main window. Open the Edit menu and select Host Details. b. Click on the Storage tab of the Host Details window.
Directory-based storage pools 4. Create the new storage pool a. Add a new pool (part 1) Press the + button (the add pool button). The Add a New Storage Pool wizard appears. Choose a Name for the storage pool. This example uses the name guest_images_dir. Change the Type to dir: Filesystem Directory.
Chapter 26. Storage pools Press the Forward button to continue. b. Add a new pool (part 2) Change the Target Path field. This example uses /guest_images. Verify the details and press the Finish button to create the storage pool. 5. Verify the new storage pool The new storage pool appears in the storage list on the left after a few seconds. Verify the size is reported as expected, 36.41 GB Free in this example. Verify the State field reports the new storage pool as Active. Select the storage pool.
Directory-based storage pools The storage pool is now created; close the Host Details window. 26.1.3.2. Creating a directory-based storage pool with virsh 1. Create the storage pool definition Use the virsh pool-define-as command to define a new storage pool. There are two options required for creating directory-based storage pools: • The name of the storage pool. This example uses the name guest_images_dir. All further virsh commands used in this example use this name. • The path to a file system directory for storing virtualized guest image files. This example uses the /guest_images directory.
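Following the pattern used for the file system pool earlier in this chapter, the definition command likely takes the following form (the dir type takes no source device, so the source fields are left as dashes):
# virsh pool-define-as guest_images_dir dir - - - - "/guest_images"
Pool guest_images_dir defined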
Chapter 26. Storage pools 3. Create the local directory Use the virsh pool-build command to build the directory-based storage pool. virsh pool-build sets the required permissions and SELinux settings for the directory and creates the directory if it does not exist. # virsh pool-build guest_images_dir Pool guest_images_dir built # ls -la /guest_images total 8 drwx------. 2 root root 4096 May 30 02:44 . dr-xr-xr-x. 26 root root 4096 May 30 02:44 ..
LVM-based storage pools A directory-based storage pool is now available. 26.1.4. LVM-based storage pools This chapter covers using LVM volume groups as storage pools. LVM-based storage groups provide flexibility of Warning LVM-based storage pools require a full disk partition. This partition will be formatted and all data presently stored on the disk device will be erased. Back up the storage device before commencing the procedure. 26.1.4.1.
Chapter 26. Storage pools e. Select the size of the partition. In this example the entire disk is allocated by pressing Enter. Last cylinder or +size or +sizeM or +sizeK (2-400, default 400): f. Set the type of partition by pressing t. Command (m for help): t g. Choose the partition you created in the previous steps. In this example, the partition number is 1. Partition number (1-4): 1 h. Enter 8e for a Linux LVM partition. Hex code (type L to list codes): 8e i. Write changes to disk and quit.
LVM-based storage pools b. Click on the Storage tab of the Host Details window.
Chapter 26. Storage pools 3. Create the new storage pool a. Start the Wizard Press the + button (the add pool button). The Add a New Storage Pool wizard appears. Choose a Name for the storage pool. We use guest_images_lvm for this example.
LVM-based storage pools Press the Forward button to continue. b. Add a new pool (part 2) Fill in the Target Path and Source Path fields, then tick the Build Pool check box. • Use the Target Path field either to select an existing LVM volume group or as the name for a new volume group. The default format is /dev/storage_pool_name. This example uses a new volume group named /dev/guest_images_lvm.
Chapter 26. Storage pools Press the Yes button to proceed to erase all data on the storage device and create the storage pool. 4. Verify the new storage pool The new storage pool will appear in the list on the left after a few seconds. Verify the details are what you expect, 465.76 GB Free in our example. Also verify the State field reports the new storage pool as Active. It is generally a good idea to have the Autostart check box enabled, to ensure the storage pool starts automatically with libvirtd.
iSCSI-based storage pools Pool guest_images_lvm defined # virsh pool-build guest_images_lvm Pool guest_images_lvm built # virsh pool-start guest_images_lvm Pool guest_images_lvm started # vgs VG #PV #LV #SN Attr VSize VFree libvirt_lvm 1 0 0 wz--n- 465.76g 465.
Chapter 26. Storage pools Procedure 26.3. Creating an iSCSI target 1. Install the required packages Install the scsi-target-utils package and all dependencies. # yum install scsi-target-utils 2. Start the tgtd service The tgtd service hosts SCSI targets and uses the iSCSI protocol to host targets. Start the tgtd service and make the service persistent after restarting with the chkconfig command. # service tgtd start # chkconfig tgtd on 3.
iSCSI-based storage pools c. Configure SELinux file contexts Configure the correct SELinux context for the new image and directory. # restorecon -R /var/lib/tgtd The new file-based image, virtimage2.img, is ready to use for iSCSI. 5. Create targets Targets can be created by adding an XML entry to the /etc/tgt/targets.conf file. The target attribute requires an iSCSI Qualified Name (IQN). The IQN is in the format: iqn.yyyy-mm.reversed-domain-name:optional-identifier
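The target entry added to /etc/tgt/targets.conf likely resembles the following sketch, reusing the IQN from this example; the first backing-store line is a hypothetical LVM volume, and the directory path for the file-based image created above is an assumption:
<target iqn.2010-05.com.example.server1:trial1>
    backing-store /dev/virtstore/virtimage1
    backing-store /var/lib/tgtd/virtualization/virtimage2.img
    write-cache off
</target>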
Chapter 26. Storage pools # service iptables restart 8. Verify the new targets View the new targets to ensure the setup was successful with the tgt-admin --show command. # tgt-admin --show Target 1: iqn.2010-05.com.example.
iSCSI-based storage pools # iscsiadm -d2 -m node --login iscsiadm: Max file limits 1024 1024 Logging in to [iface: default, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260] Login to [iface: default, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260] successful. Detach the device. # iscsiadm -d2 -m node --logout iscsiadm: Max file limits 1024 1024 Logging out of session [sid: 2, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.
Chapter 26. Storage pools c. Open the Edit menu and select Host Details. d. Click on the Storage tab of the Host Details window.
iSCSI-based storage pools 2. Add a new pool (part 1) Press the + button (the add pool button). The Add a New Storage Pool wizard appears. Choose a name for the storage pool, change the Type to iscsi, and press Forward to continue.
Chapter 26. Storage pools 3. Add a new pool (part 2) Enter the target path for the device, the host name of the target and the source path (the IQN). The Format option is not available as formatting is handled by the guests. It is not advised to edit the Target Path. The default target path value, /dev/disk/by-path/, adds the drive path to that folder. The target path should be the same on all hosts for migration. Enter the hostname or IP address of the iSCSI target. This example uses server1.example.com.
iSCSI-based storage pools The device element path attribute must contain the IQN for the iSCSI server. With a text editor, create an XML file for the iSCSI storage pool. This example uses an XML definition named trial1.xml, beginning: <name>trial1</name> <uuid>afcc5367-6770-e151-bcb3-847bc36c5e28</uuid>
Chapter 27. Volumes i. Write changes to disk and quit. Command (m for help): w Command (m for help): q j. Format the new partition with the ext3 file system. # mke2fs -j /dev/sdb1 7. Mount the disk on the guest. # mount /dev/sdb1 /myfiles The guest now has an additional virtualized file-based storage device. 27.3.2.
Deleting and removing volumes Block device security - disk labels The host should not use disk labels to identify file systems in the fstab file, the initrd file or on the kernel command line. Doing so presents a security risk if less privileged users, such as virtualized guests, have write access to whole partitions or LVM volumes. A virtualized guest could write a disk label belonging to the host to its own block device storage.
Chapter 28. Miscellaneous storage topics 28.1. Creating a virtualized floppy disk controller Floppy disk controllers are required for a number of older operating systems, especially for installing drivers. Presently, physical floppy disk devices cannot be accessed from virtualized guests. However, creating and accessing floppy disk images from virtualized floppy drives should work. This section covers creating a virtualized floppy device. An image file of a floppy disk is required.
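A blank 1.44 MB image can be created with dd (the path below is a placeholder); alternatively, dd if=/dev/fd0 of=floppy.img copies a physical floppy to an image file on a system with a floppy drive:
# dd if=/dev/zero of=/var/lib/libvirt/images/floppy.img bs=512 count=2880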
Chapter 28. Miscellaneous storage topics 28.2. Configuring persistent storage in Red Hat Enterprise Linux 6 This section is for systems with external or networked storage; for example, Fibre Channel, iSCSI, or SRP based storage devices. It is recommended to configure persistent device names on those hosts. This assists live migration as well as providing consistent device names and storage for multiple virtualized systems.
Configuring persistent storage in Red Hat Enterprise Linux 6 KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="1IET_00010003", NAME="rack4row16lun3" The udev daemon now searches all devices named /dev/sd* for a matching UUID in the rules. When a matching device is connected to the system the device is assigned the name from the rule.
Chapter 28. Miscellaneous storage topics
multipath {
    wwid  1IET_00010004
    alias oramp1
}
multipath {
    wwid  1IET_00010005
    alias oramp2
}
multipath {
    wwid  1IET_00010006
    alias oramp3
}
multipath {
    wwid  1IET_00010007
    alias oramp4
}
}
Multipath devices are created in the /dev/mapper directory. The above example will create 4 LUNs named /dev/mapper/oramp1, /dev/mapper/oramp2, /dev/mapper/oramp3 and /dev/mapper/oramp4. 4. Enable the multipathd daemon to start at system boot.
Accessing data from a guest disk image `- 9:0:0:7 sdbo 68:32 active ready running 28.3. Accessing data from a guest disk image There are various methods for accessing the data from guest image files. One common method is to use the kpartx tool, covered by this section, to mount the guest file system as a loop device which can then be accessed. The kpartx command creates device maps from partition tables. Each guest storage image has a partition table embedded in the file.
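As a sketch, kpartx can list and then map the partitions embedded in a guest image (the image path is a placeholder); the mapped partitions appear as /dev/mapper/loop0p1 and so on, which the following steps then mount:
# kpartx -l /var/lib/libvirt/images/guest1.img
# kpartx -a /var/lib/libvirt/images/guest1.img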
Chapter 28. Miscellaneous storage topics 4. Mount the loop device to a directory. If required, create the directory. This example uses /mnt/guest1 for mounting the partition. # mkdir /mnt/guest1 # mount /dev/mapper/loop0p1 /mnt/guest1 -o loop,ro 5. The files are now available for reading in the /mnt/guest1 directory. Read or copy the files. 6. Unmount the device so the guest image can be reused by the guest. If the device is mounted the guest cannot access the image and therefore cannot start.
Accessing data from a guest disk image # mount /dev/VolGroup00/LogVol00 /mnt/guestboot 6. The files are now available for reading in the /mnt/guestboot directory. Read or copy the files. 7. Unmount the device so the guest image can be reused by the guest. If the device is mounted the guest cannot access the image and therefore cannot start. # umount /mnt/guestboot 8. Disconnect the volume group VolGroup00 # vgchange -an VolGroup00 9. Disconnect the image file from the partition mappings.
Chapter 29. N_Port ID Virtualization (NPIV) N_Port ID Virtualization (NPIV) is a function available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs. NPIV provides similar functionality for Host Bus Adaptors (HBAs) that SR-IOV provides for network interfaces. With NPIV, virtualized guests can be provided with a virtual Fibre Channel initiator to Storage Area Networks (SANs).
Chapter 29. N_Port ID Virtualization (NPIV) 29.1.2. Verify NPIV is used on the HBA Output the data from the kernel on the port nodes of the HBA. Example 29.1.
Verify NPIV is used on the HBA Adding the virtual HBA with virsh This procedure covers creating virtual HBA devices on a host with virsh. This procedure requires a compatible HBA device. 1. List available HBAs Find the node device name of the HBA with the virtual adapters.
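The HBA node devices can typically be listed and inspected with the virsh node device commands (scsi_host5 is a placeholder device name):
# virsh nodedev-list --cap=scsi_host
# virsh nodedev-dumpxml scsi_host5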
Chapter 29. N_Port ID Virtualization (NPIV) WWNN and WWPN validation Libvirt does not validate the WWPN or WWNN values; invalid WWNs are rejected by the kernel and libvirt reports the failure. The error reported by the kernel is similar to the following: # virsh nodedev-create badwwn.xml error: Failed to create node device from badwwn.
Part VI. Virtualization reference guide Virtualization commands, system tools, applications and additional systems reference These chapters provide detailed descriptions of virtualization commands, system tools, and applications included in Red Hat Enterprise Linux 6. These chapters are designed for users requiring information on advanced functionality and other features.
Chapter 30. Managing guests with virsh virsh is a command line interface tool for managing guests and the hypervisor. The virsh command-line tool is built on the libvirt management API and operates as an alternative to the qemu-kvm command and the graphical virt-manager application. The virsh command can be used in read-only mode by unprivileged users or, with root access, with full administration functionality. The virsh command is ideal for scripting virtualization administration.
Chapter 30. Managing guests with virsh Command Description setvcpus Changes number of virtual CPUs assigned to a guest. vcpuinfo Displays virtual CPU information about a guest. vcpupin Controls the virtual CPU affinity of a guest. domblkstat Displays block device statistics for a running guest. domifstat Displays network interface statistics for a running guest. attach-device Attach a device to a guest, using a device definition in an XML file.
Command Description the XML definition for the storage pool without creating the storage pool. pool-destroy Permanently destroys a storage pool in libvirt. The raw data contained in the storage pool is not changed and can be recovered with the pool-create command. pool-delete Destroys the storage resources used by a storage pool. This operation cannot be recovered. The storage pool still exists after this command but all data is deleted. pool-dumpxml Prints the XML definition for a storage pool.
Chapter 30. Managing guests with virsh This command outputs the guest's XML configuration file to standard out (stdout). You can save the data by piping the output to a file. An example of piping the output to a file called guest.xml: # virsh dumpxml GuestID > guest.xml This file, guest.xml, can recreate the guest (refer to Editing a guest's configuration file). You can edit this XML configuration file to configure additional devices or to deploy additional guests. Refer to Section 33.
Suspending a guest Suspend a guest with virsh: # virsh suspend {domain-id, domain-name or domain-uuid} When a guest is in a suspended state, it consumes system RAM but not processor resources. Disk and network I/O does not occur while the guest is suspended. This operation is immediate and the guest can be restarted with the resume (Resuming a guest) option.
Chapter 30. Managing guests with virsh You can control the behavior of the rebooting guest by modifying the on_reboot element in the guest's configuration file. Forcing a guest to stop Force a guest to stop with the virsh command: # virsh destroy {domain-id, domain-name or domain-uuid} This command does an immediate ungraceful shutdown and stops the specified guest. Using virsh destroy can corrupt guest file systems. Use the destroy option only when the guest is unresponsive.
used memory: 512000 kb Displaying host information To display information about the host: # virsh nodeinfo An example of virsh nodeinfo output:
CPU model:            x86_64
CPU(s):               8
CPU frequency:        2895 MHz
CPU socket(s):        2
Core(s) per socket:   2
Thread(s) per core:   2
NUMA cell(s):         1
Memory size:          1046528 kB
This displays the node information and the machines that support the virtualization process.
Chapter 30. Managing guests with virsh
 2 Domain010            inactive
 3 Domain9600           crashed
The output from virsh list is categorized as one of the six states (listed below). • The running state refers to guests which are currently active on a CPU. • Guests listed as blocked are blocked, and are not running or runnable. This is caused by a guest waiting on I/O (a traditional wait state) or guests in a sleep mode. • The paused state lists domains that are paused.
# virsh setvcpus {domain-name, domain-id or domain-uuid} count The new count value cannot exceed the amount specified when the guest was created. Configuring memory allocation To modify a guest's memory allocation with virsh: # virsh setmem {domain-id or domain-name} count You must specify the count in kilobytes. The new count value cannot exceed the amount you specified when you created the guest. Values lower than 64 MB are unlikely to work with most guest operating systems.
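For example, for a hypothetical running guest named guest1, the virtual CPU count could be set to 2 and the memory allocation to 512 MB (524288 KB):
# virsh setvcpus guest1 2
# virsh setmem guest1 524288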
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) This section describes the Virtual Machine Manager (virt-manager) windows, dialog boxes, and various GUI controls. virt-manager provides a graphical view of hypervisors and guests on your system and on remote machines. You can use virt-manager to define virtualized guests.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) Figure 31.1. Starting virt-manager Alternatively, virt-manager can be started remotely using ssh with X forwarding, as demonstrated in the following command: ssh -X host's address [remotehost]# virt-manager Using ssh to manage virtual machines and hosts is discussed further in Section 19.1, “Remote management with SSH”. 31.2. The Virtual Machine Manager main window This main window displays all the running guests and resources used by guests.
The virtual hardware details window Figure 31.2. Virtual Machine Manager main window 31.3. The virtual hardware details window The virtual hardware details window displays information about the virtual hardware configured for the virtualized guest. Virtual hardware resources can be added, removed and modified in this window. To access the virtual hardware details window, click on the icon in the toolbar.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) Figure 31.3. The virtual hardware details icon Clicking the icon displays the virtual hardware details window.
Virtual Machine graphical console Figure 31.4. The virtual hardware details window 31.4. Virtual Machine graphical console This window displays a virtualized guest's graphical console. Virtualized guests use different techniques to export their local virtual framebuffers, but both technologies use VNC to make them available to the Virtual Machine Manager's console window.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) Figure 31.5. Graphical console window A note on security and VNC VNC is considered insecure by many security experts; however, several changes have been made to enable the secure usage of VNC for virtualization on Red Hat Enterprise Linux. The guest machines only listen to the local host's loopback address (127.0.0.1).
Adding a remote connection 31.5. Adding a remote connection This procedure covers how to set up a connection to a remote system using virt-manager. 1. To create a new connection open the File menu and select the Add Connection... menu item. 2. The Add Connection wizard appears. Select the hypervisor. For Red Hat Enterprise Linux 6 systems select QEMU/KVM. Select Local for the local system or one of the remote connection options and click Connect.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) Figure 31.7. Remote host in the main virt-manager window 31.6. Displaying guest details You can use the Virtual Machine Manager to view activity information for any virtual machines on your system.
Displaying guest details 1. In the Virtual Machine Manager main window, highlight the virtual machine that you want to view. Figure 31.8.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) 2. From the Virtual Machine Manager Edit menu, select Virtual Machine Details. Figure 31.9. Displaying the virtual machine details On the Virtual Machine window, select Overview from the navigation pane on the left hand side. The Overview view shows a summary of configuration details for the virtualized guest.
Displaying guest details Figure 31.10. Displaying guest details overview 3. Select Performance from the navigation pane on the left hand side. The Performance view shows a summary of guest performance, including CPU and Memory usage.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) Figure 31.11.
Displaying guest details 4. Select Processor from the navigation pane on the left hand side. The Processor view allows you to view or change the current processor allocation. Figure 31.12.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) 5. Select Memory from the navigation pane on the left hand side. The Memory view allows you to view or change the current memory allocation. Figure 31.13.
Performance monitoring 6. Each virtual disk attached to the virtual machine is displayed in the navigation pane. Click on a virtual disk to modify or remove it. Figure 31.14. Displaying disk configuration 7. Each virtual network interface attached to the virtual machine is displayed in the nagivation pane. Click on a virtual network interface to modify or remove it. Figure 31.15. Displaying network configuration 31.7.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) 1. From the Edit menu, select Preferences. Figure 31.16. Modifying guest preferences The Preferences window appears.
Displaying CPU usage 2. From the Stats tab specify the time in seconds or stats polling options. Figure 31.17. Configuring performance monitoring 31.8.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) 1. From the View menu, select Graph, then the CPU Usage check box. Figure 31.18. Selecting CPU usage 2. The Virtual Machine Manager shows a graph of CPU usage for all virtual machines on your system. Figure 31.19. Displaying CPU usage 31.9.
Displaying Network I/O 1. From the View menu, select Graph, then the Disk I/O check box. Figure 31.20. Selecting Disk I/O 2. The Virtual Machine Manager shows a graph of Disk I/O for all virtual machines on your system. Figure 31.21. Displaying Disk I/O 31.10.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) 1. From the View menu, select Graph, then the Network I/O check box. Figure 31.22. Selecting Network I/O 2. The Virtual Machine Manager shows a graph of Network I/O for all virtual machines on your system. Figure 31.23. Displaying Network I/O 31.11.
Managing a virtual network 1. From the Edit menu, select Host Details. Figure 31.24.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) 2. This will open the Host Details menu. Click the Virtual Networks tab. Figure 31.25. Virtual network configuration 3. All available virtual networks are listed on the left-hand box of the menu. You can edit the configuration of a virtual network by selecting it from this box and editing as you see fit. 31.12.
Creating a virtual network 1. Open the Host Details menu (refer to Section 31.11, “Managing a virtual network”) and click the Add Network button, identified by a plus sign (+) icon. Figure 31.26. Virtual network configuration This will open the Create a new virtual network window. Click Forward to continue.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) Figure 31.27.
Creating a virtual network 2. Enter an appropriate name for your virtual network and click Forward. Figure 31.28.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) 3. Enter an IPv4 address space for your virtual network and click Forward. Figure 31.29.
Creating a virtual network 4. Define the DHCP range for your virtual network by specifying a Start and End range of IP addresses. Click Forward to continue. Figure 31.30.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) 5. Select how the virtual network should connect to the physical network. Figure 31.31. Connecting to physical network If you select Forwarding to physical network, choose whether the Destination should be Any physical device or a specific physical device. Also select whether the Mode should be NAT or Routed. Click Forward to continue.
Creating a virtual network 6. You are now ready to create the network. Check the configuration of your network and click Finish. Figure 31.32.
Chapter 31. Managing guests with the Virtual Machine Manager (virt-manager) 7. The new virtual network is now available in the Virtual Network tab of the Host Details window. Figure 31.33.
Chapter 32. libvirt configuration reference This chapter provides a reference for various parameters of libvirt XML configuration files. Table 32.1. libvirt configuration files Item Description pae Specifies the physical address extension configuration data. apic Specifies the advanced programmable interrupt controller configuration data. memory Specifies the memory size in megabytes. vcpus Specifies the number of virtual CPUs.
Chapter 33. Creating custom libvirt scripts This section provides some information which may be useful to programmers and system administrators intending to write custom scripts to make their lives easier by using libvirt. Chapter 24, Miscellaneous administration tasks is recommended reading for programmers thinking of writing new applications which use libvirt. 33.1. Using XML configuration files with virsh virsh can handle XML configuration files.
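As a minimal sketch, a guest described in an XML file (guest.xml is a placeholder) can be registered persistently with the define command, or started as a transient guest with create:
# virsh define guest.xml
# virsh create guest.xml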
Part VII. Troubleshooting Introduction to troubleshooting and problem solving The following chapters provide information to assist you in troubleshooting issues you may encounter using virtualization. Important note on virtualization issues Your particular problem may not appear in this book due to ongoing development which creates and fixes bugs. For the most up to date list of known bugs, issues and bug fixes read the Red Hat Enterprise Linux 6 Release Notes for your version and hardware architecture.
Chapter 34. Troubleshooting This chapter covers common problems and solutions for Red Hat Enterprise Linux 6 virtualization issues. This chapter gives you, the reader, the background needed to identify where problems with virtualization technologies lie. Troubleshooting takes practice and experience which are difficult to learn from a book. It is recommended that you experiment and test virtualization on Red Hat Enterprise Linux 6 to develop your troubleshooting skills.
Chapter 34. Troubleshooting
# brctl showmacs virtbr0
port-no    mac-addr           local?    aging timer
1          fe:ff:ff:ff:ff:    yes       0.00
2          fe:ff:ff:fe:ff:    yes       0.00
# brctl showstp virtbr0
virtbr0
bridge-id               8000.fefffffffff
designated-root         8000.fefffffffff
root-port                  0    path-cost                  0
max-age                20.00    bridge-max-age         20.00
hello-time              2.00    bridge-hello-time       2.00
forward-delay           0.00    bridge-forward-delay    0.00
aging-time            300.01
hello-timer             1.43    tcn-timer               0.00
topology-change-timer   0.00    gc-timer                0.
kvm_stat
halt_exits                 14050     259
halt_wakeup                 4496     203
host_state_reload        1638354   24893
hypercalls                     0       0
insn_emulation           1093850    1909
insn_emulation_fail            0       0
invlpg                     75569       0
io_exits                 1596984   24509
irq_exits                  21013     363
irq_injections             48039    1222
irq_window                 24656     870
largepages                     0       0
mmio_exits                 11873       0
mmu_cache_miss             42565       8
mmu_flooded                14752       0
mmu_pde_zapped             58730       0
mmu_pte_updated                6       0
mmu_pte_write             138795       0
mmu_recycled                   0       0
mmu_shadow_zapped          40358       0
mmu_unsync                   793       0
nmi_injections                 0       0
nmi_window                     0       0
pf_fixed                  697731    3150
Chapter 34. Troubleshooting irq_exits Number of guest exits due to external interrupts. irq_injections Number of interrupts sent to guests. irq_window Number of guest exits from an outstanding interrupt window. largepages Number of large pages currently in use. mmio_exits Number of guest exits due to memory mapped I/O (MMIO) accesses. mmu_cache_miss Number of KVM MMU shadow pages created. mmu_flooded Detection count of excessive write operations to an MMU page.
Log files request_irq Number of guest interrupt window request exits. signal_exits Number of guest exits due to pending signals from the host. tlb_flush Number of tlb_flush operations performed by the hypervisor. Note The output information from the kvm_stat command is exported by the KVM hypervisor as pseudo files located in the /sys/kernel/debug/kvm/ directory. 34.3. Log files KVM uses various log files. All the log files are standard ASCII files, and accessible with a text editor.
Chapter 34. Troubleshooting initrd /initrd-2.6.32-36.x86-64.img Reboot the guest. On the host, access the serial console with the following command: # virsh console You can also use virt-manager to display the virtual text console. In the guest console window, select Serial Console from the View menu. 34.5. Virtualization log files • /var/log/libvirt/qemu/GuestName.log If you encounter any errors with the Virtual Machine Manager, you can review the generated data in the virt-manager.
KVM networking performance 2. Enabling the virtualization extensions in BIOS Note: BIOS steps Many of the steps below may vary depending on your motherboard, processor type, chipset and OEM. Refer to your system's accompanying documentation for the correct information on configuring your system. a. Open the Processor submenu The processor settings menu may be hidden in the Chipset, Advanced CPU Configuration or Northbridge. b. Enable Intel Virtualization Technology (also known as Intel VT).
Chapter 34. Troubleshooting 3. Find the network interface section of the configuration. This section resembles the snippet below: [output truncated] 4. Change the type attribute of the model element from 'rtl8139' to 'virtio'. This will change the driver from the rtl8139 driver to the virtio driver. [output truncated] 5. Save the changes and exit the text editor. 6.
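After the change in step 4, the interface section resembles the following sketch (the source network name is illustrative):
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>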
Appendix A. Additional resources To learn more about virtualization and Red Hat Enterprise Linux, refer to the following resources. A.1. Online resources • http://www.libvirt.org/ is the official website for the libvirt virtualization API. • http://virt-manager.et.redhat.com/ is the project website for the Virtual Machine Manager (virt-manager), the graphical application for managing virtual machines. • Open Virtualization Center http://www.openvirtualization.com • Red Hat Documentation http://www.
Glossary This glossary is intended to define the terms used in this Installation Guide. Bare-metal The term bare-metal refers to the underlying physical architecture of a computer. Running an operating system on bare-metal is another way of referring to running an unmodified version of the operating system on the physical hardware. An example of an operating system running on bare metal is a normally installed operating system. Full virtualization KVM uses full, hardware-assisted virtualization.
Glossary can run multiple, unmodified virtualized guest Windows and Linux operating systems. The KVM hypervisor in Red Hat Enterprise Linux is managed with the libvirt API and tools built for libvirt, virt-manager and virsh. KVM is a set of Linux kernel modules which manage devices, memory and management APIs for the Hypervisor module itself. Virtualized guests are run as Linux processes and threads which are controlled by these modules.
Security Enhanced Linux Short for Security Enhanced Linux, SELinux uses Linux Security Modules (LSM) in the Linux kernel to provide a range of minimum privilege required security policies. Single Root I/O Virtualization SR-IOV is a standard for a type of PCI passthrough which natively shares a single device to multiple guests. SR-IOV enables a Single Root Function (for example, a single Ethernet port), to appear as multiple, separate, physical devices.
Glossary operating systems. Software virtualization is significantly slower than hardware-assisted virtualization or para-virtualization. Software virtualization, in the form of QEMU or Bochs, works in Red Hat Enterprise Linux, although it is slow. Red Hat Enterprise Linux supports hardware-assisted, full virtualization with the KVM hypervisor. Virtualized CPU A system has a number of virtual CPUs (VCPUs) relative to the number of physical processor cores.
Appendix B. Revision History Revision Mon Oct 04 2010 6.0-35 Review for 6.0 release. Scott Radvan sradvan@redhat.com Revision Thu Sep 09 2010 6.0-25 Resolves BZ#621740. Christopher Curran ccurran@redhat.com Revision Fri Sep 03 2010 Christopher Curran ccurran@redhat.com 6.0-24 Updated para-virtualized driver usage procedures. BZ#621740. Revision Tue May 25 2010 6.0-23 New storage content BZ#536816. Christopher Curran ccurran@redhat.
Appendix B. Revision History Beta version released.
Appendix C. Colophon This manual was written in the DocBook XML v4.3 format. This book is based on the original work of Jan Mark Holzer, Justin Clift and Chris Curran. This book is edited and maintained by Scott Radvan. Other writing credits go to: • Daniel Berrange contributed various sections on libvirt. • Don Dutile contributed technical editing for the para-virtualized drivers section. • Barry Donahue contributed technical editing for the para-virtualized drivers section.