vSphere Resource Management Update 1 ESXi 6.0 vCenter Server 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, see http://www.vmware.com/support/pubs.
vSphere Resource Management You can find the most up-to-date technical documentation on the VMware Web site at: http://www.vmware.com/support/ The VMware Web site also provides the latest product updates. If you have comments about this documentation, submit your feedback to: docfeedback@vmware.com Copyright © 2006–2016 VMware, Inc. All rights reserved. Copyright and trademark information. VMware, Inc. 3401 Hillview Ave. Palo Alto, CA 94304 www.vmware.com 2 VMware, Inc.
Contents

About vSphere Resource Management
Updated Information
1 Getting Started with Resource Management
   Resource Types
   Resource Providers
   Resource Consumers
   Goals of Resource Management
2 Configuring Resource Allocation Settings
   Resource Allocation Shares
   Resource Allocation Reservation
   Resource Allocation Limit
   Resource Allocation Settings Suggestions
   Edit Resource Settings
   Changing Resource Allocation Settings—Example
   Admission Control
3 CPU Virtualization Basics
6 Administering Memory Resources
   How ESXi Hosts Allocate Memory
   Memory Reclamation
   Using Swap Files
   Sharing Memory Across Virtual Machines
   Memory Compression
   Measuring and Differentiating Types of Memory Usage
   Memory Reliability
   About System Swap
7 View Graphics Information
8 Managing Storage I/O Resources
   Storage I/O Control Requirements
   Storage I/O Control Resource Shares and Limits
   Set Storage I/O Control Resource Shares and Limits
   Enable Storage I/O Control
   Set Storage I/O Control Threshold Value
12 Creating a Datastore Cluster
   Initial Placement and Ongoing Balancing
   Storage Migration Recommendations
   Create a Datastore Cluster
   Enable and Disable Storage DRS
   Set the Automation Level for Datastore Clusters
   Setting the Aggressiveness Level for Storage DRS
   Datastore Cluster Requirements
   Adding and Removing Datastores from a Datastore Cluster
13 Using Datastore Clusters to Manage Storage Resources
   Using Storage DRS Maintenance Mode
   Applying Storage DRS Recommendations
   No Compatible Hard Affinity Host
   No Compatible Soft Affinity Host
   Soft Rule Violation Correction Disallowed
   Soft Rule Violation Correction Impact
17 DRS Troubleshooting Information
   Cluster Problems
   Host Problems
   Virtual Machine Problems
Index
About vSphere Resource Management

vSphere Resource Management describes resource management for VMware ESXi® and vCenter Server® environments. This documentation focuses on the following topics.
Updated Information

This vSphere Resource Management documentation is updated with each release of the product or when necessary. This table provides the update history of vSphere Resource Management.

Revision     Description
001903-01    Added "Storage DRS Integration with Storage Profiles."
001903-00    Initial release.
Getting Started with Resource Management 1 To understand resource management, you must be aware of its components, its goals, and how best to implement it in a cluster setting. Resource allocation settings for a virtual machine (shares, reservation, and limit) are discussed, including how to set them and how to view them. Admission control, the process whereby resource allocation settings are validated against existing resources, is also explained.
Resource Consumers Virtual machines are resource consumers. The default resource settings assigned during creation work well for most machines. You can later edit the virtual machine settings to allocate a share-based percentage of the total CPU, memory, and storage I/O of the resource provider or a guaranteed reservation of CPU and memory.
Configuring Resource Allocation Settings 2 When available resource capacity does not meet the demands of the resource consumers (and virtualization overhead), administrators might need to customize the amount of resources that are allocated to virtual machines or to the resource pools in which they reside. Use the resource allocation settings (shares, reservation, and limit) to determine the amount of CPU, memory, and storage resources provided for a virtual machine.
The following table shows the default CPU and memory share values for a virtual machine. For resource pools, the default CPU and memory share values are the same, but must be multiplied as if the resource pool were a virtual machine with four virtual CPUs and 16 GB of memory.

Table 2-1. Share Values
Setting   CPU share values               Memory share values
High      2000 shares per virtual CPU    20 shares per megabyte of configured virtual machine memory.
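Table 2-1 shows only the High values; the Shares option described later in this chapter specifies a 1:2:4 Low:Normal:High ratio, so the sketch below derives the Normal and Low per-unit values from that ratio. This is an illustrative model of how the defaults scale with VM size, not ESXi's allocation code.

```python
# Per-unit share values: High comes from Table 2-1; Normal and Low are
# derived from the documented 1:2:4 Low:Normal:High ratio (an assumption
# for illustration).
CPU_SHARES_PER_VCPU = {"low": 500, "normal": 1000, "high": 2000}
MEM_SHARES_PER_MB = {"low": 5, "normal": 10, "high": 20}

def default_shares(num_vcpus, memory_mb, level="normal"):
    """Return (cpu_shares, memory_shares) for a VM at the given share level."""
    return (CPU_SHARES_PER_VCPU[level] * num_vcpus,
            MEM_SHARES_PER_MB[level] * memory_mb)

# A 2-vCPU, 4096-MB VM set to High gets 4000 CPU shares and 81920 memory shares.
```

The same arithmetic applies to resource pools, treated as a 4-vCPU, 16-GB virtual machine.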
In most cases, it is not necessary to specify a limit. There are benefits and drawbacks: n Benefits — Assigning a limit is useful if you start with a small number of virtual machines and want to manage user expectations. Performance deteriorates as you add more virtual machines. You can simulate having fewer resources available by specifying a limit. n Drawbacks — You might waste idle resources if you specify a limit.
5 Edit the Memory Resources.
  Option   Description
  Shares   Memory shares for this resource pool with respect to the parent's total. Sibling resource pools share resources according to their relative share values, bounded by the reservation and limit. Select Low, Normal, or High, which specify share values respectively in a 1:2:4 ratio. Select Custom to give each virtual machine a specific number of shares, which expresses a proportional weight.
If you select the cluster's Resource Reservation tab and click CPU, you should see that shares for VM-QA are twice that of the other virtual machine. Also, because the virtual machines have not been powered on, the Reservation Used fields have not changed. Admission Control When you power on a virtual machine, the system checks the amount of CPU and memory resources that have not yet been reserved.
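The admission control check described above can be sketched as a simple predicate: power-on is allowed only if the host's unreserved capacity covers the virtual machine's reservations. The memory-overhead parameter is included as an assumption for illustration; the function name and units are hypothetical.

```python
def admission_check(unreserved_cpu_mhz, unreserved_mem_mb,
                    vm_cpu_reservation_mhz, vm_mem_reservation_mb,
                    vm_mem_overhead_mb=0):
    """Return True if the host has enough unreserved CPU and memory to
    guarantee the virtual machine's reservations (plus any memory
    virtualization overhead); otherwise the power-on is denied."""
    return (unreserved_cpu_mhz >= vm_cpu_reservation_mhz and
            unreserved_mem_mb >= vm_mem_reservation_mb + vm_mem_overhead_mb)

# A host with 2000 MHz and 4096 MB unreserved admits a VM reserving
# 1000 MHz and 2048 MB (+100 MB overhead), but rejects one reserving 3000 MHz.
```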
CPU Virtualization Basics 3 CPU virtualization emphasizes performance and runs directly on the processor whenever possible. The underlying physical resources are used whenever possible and the virtualization layer runs instructions only as needed to make virtual machines operate as if they were running directly on a physical machine. CPU virtualization is not the same thing as emulation. ESXi does not use emulation to run virtual CPUs. With emulation, all operations are run in software by an emulator.
vSphere Resource Management Hardware-Assisted CPU Virtualization Certain processors provide hardware assistance for CPU virtualization. When using this assistance, the guest can use a separate mode of execution called guest mode. The guest code, whether application code or privileged code, runs in the guest mode. On certain events, the processor exits out of guest mode and enters root mode.
Administering CPU Resources 4 You can configure virtual machines with one or more virtual processors, each with its own set of registers and control structures. When a virtual machine is scheduled, its virtual processors are scheduled to run on physical processors. The VMkernel Resource Manager schedules the virtual CPUs on physical CPUs, thereby managing the virtual machine’s access to physical CPU resources. ESXi supports virtual machines with up to 128 virtual CPUs.
vSphere Resource Management n Use advanced settings under certain circumstances. n Use the vSphere SDK for scripted CPU allocation. n Use hyperthreading. Multicore Processors Multicore processors provide many advantages for a host performing multitasking of virtual machines. Intel and AMD have each developed processors which combine two or more processor cores into a single integrated circuit (often called a package or socket).
While hyperthreading does not double the performance of a system, it can increase performance by better utilizing idle resources leading to greater throughput for certain important workload types. An application running on one logical processor of a busy core can expect slightly more than half of the throughput that it obtains while running alone on a non-hyperthreaded processor.
vSphere Resource Management 3 Ensure that hyperthreading is enabled for the ESXi host. a Browse to the host in the vSphere Web Client navigator. b Click the Manage tab and click Settings. c Under System, click Advanced System Settings and select VMkernel.Boot.hyperthreading. Hyperthreading is enabled if the value is true. 4 Under Hardware, click Processors to view the number of Logical processors. Hyperthreading is enabled.
4 Under Scheduling Affinity, select physical processor affinity for the virtual machine. Use '-' for ranges and ',' to separate values. For example, "0, 2, 4-7" would indicate processors 0, 2, 4, 5, 6 and 7. 5 Select the processors where you want the virtual machine to run and click OK. Potential Issues with CPU Affinity Before you use CPU affinity, you might need to consider certain issues.
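The affinity string format described in step 4 ('-' for ranges, ',' to separate values) can be parsed as follows. This is a hypothetical helper for illustration, not part of any VMware tool.

```python
def parse_affinity(spec):
    """Parse a scheduling-affinity string such as "0, 2, 4-7" into a sorted
    list of physical processor numbers."""
    cpus = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (int(x) for x in part.split("-"))
            cpus.update(range(lo, hi + 1))  # ranges are inclusive
        else:
            cpus.add(int(part))
    return sorted(cpus)

# parse_affinity("0, 2, 4-7") yields [0, 2, 4, 5, 6, 7], matching the example.
```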
vSphere Resource Management Table 4‑1. CPU Power Management Policies (Continued) Power Management Policy Description Low Power The VMkernel aggressively uses available power management features to reduce host energy consumption at the risk of lower performance. Custom The VMkernel bases its power management policy on the values of several advanced configuration parameters. You can set these parameters in the vSphere Web Client Advanced Settings dialog box.
Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Manage tab and click Settings. 3 Under System, select Advanced System Settings. 4 In the right pane, you can edit the power management parameters that affect the Custom policy. Power management parameters that affect the Custom policy have descriptions that begin with In Custom policy. All other power parameters affect all power management policies.
Memory Virtualization Basics 5 Before you manage memory resources, you should understand how they are being virtualized and used by ESXi. The VMkernel manages all physical RAM on the host. The VMkernel dedicates part of this managed physical RAM for its own use. The rest is available for use by virtual machines. The virtual and physical memory space is divided into blocks called pages. When physical memory is full, the data for virtual pages that are not present in physical memory are stored on disk.
vSphere Resource Management After a virtual machine consumes all of the memory within its reservation, it is allowed to retain that amount of memory and this memory is not reclaimed, even if the virtual machine becomes idle. Some guest operating systems (for example, Linux) might not access all of the configured memory immediately after booting.
less memory than it would when running on physical machines. As a result, higher levels of overcommitment can be supported efficiently. The amount of memory saved by memory sharing depends on the workload: nearly identical machines might free up more memory, while a more diverse workload might result in a significantly lower percentage of memory savings.
vSphere Resource Management n The dashed arrows show the mapping from guest virtual memory to machine memory in the shadow page tables also maintained by the VMM. The underlying processor running the virtual machine uses the shadow page table mappings. Software-Based Memory Virtualization ESXi virtualizes guest physical memory by adding an extra level of address translation. n The VMM maintains the combined virtual-to-machine page mappings in the shadow page tables.
Chapter 5 Memory Virtualization Basics Performance Considerations When you use hardware assistance, you eliminate the overhead for software memory virtualization. In particular, hardware assistance eliminates the overhead required to keep shadow page tables in synchronization with guest page tables. However, the TLB miss latency when using hardware assistance is significantly higher. By default the hypervisor uses large pages in hardware assisted modes to reduce the cost of TLB misses.
Administering Memory Resources 6 Using the vSphere Web Client you can view information about and make changes to memory allocation settings. To administer your memory resources effectively, you must also be familiar with memory overhead, idle memory tax, and how ESXi hosts reclaim memory. When administering memory resources, you can specify memory allocation. If you do not customize memory allocation, the ESXi host uses defaults that work well in most situations.
vSphere Resource Management ESXi memory virtualization adds little time overhead to memory accesses. Because the processor's paging hardware uses page tables (shadow page tables for software-based approach or two level page tables for hardware-assisted approach) directly, most memory accesses in the virtual machine can execute without address translation overhead. The memory space overhead has two components. n A fixed, system-wide overhead for the VMkernel.
Memory Tax for Idle Virtual Machines If a virtual machine is not actively using all of its currently allocated memory, ESXi charges more for idle memory than for memory that is in use. This is done to help prevent virtual machines from hoarding idle memory. The idle memory tax is applied in a progressive fashion. The effective tax rate increases as the ratio of idle memory to active memory for the virtual machine rises.
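A toy model of a progressive idle-memory tax is sketched below: idle pages are charged at a premium that grows with the idle fraction, so a VM hoarding idle memory is penalized more heavily than one actively using its allocation. The 0.75 maximum rate and the linear ramp are assumptions for illustration only, not ESXi's exact formula.

```python
def charged_memory(active_mb, idle_mb, max_tax_rate=0.75):
    """Return the 'charged' memory for allocation purposes: active memory at
    face value, plus idle memory at a progressive premium. The effective tax
    rate ramps linearly with the idle:total ratio (an illustrative choice)."""
    total = active_mb + idle_mb
    if total == 0:
        return 0.0
    effective_rate = max_tax_rate * (idle_mb / total)
    return active_mb + idle_mb * (1.0 + effective_rate)

# A VM with 500 MB active and 500 MB idle is charged 1187.5 MB, while a VM
# actively using all 1000 MB is charged exactly 1000 MB.
```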
vSphere Resource Management Memory Balloon Driver The memory balloon driver (vmmemctl) collaborates with the server to reclaim pages that are considered least valuable by the guest operating system. The driver uses a proprietary ballooning technique that provides predictable performance that closely matches the behavior of a native system under similar memory constraints.
n It is functioning properly, but maximum balloon size is reached. Standard demand-paging techniques swap pages back in when the virtual machine needs them. Swap File Location By default, the swap file is created in the same location as the virtual machine's configuration file, which could either be on a VMFS datastore, a vSAN datastore or a VVol datastore. On a vSAN datastore or a VVol datastore, the swap file is created as a separate vSAN or VVol object.
Host-local swap is now enabled for the standalone host. Swap Space and Memory Overcommitment You must reserve swap space for any unreserved virtual machine memory (the difference between the reservation and the configured memory size) on per-virtual machine swap files. This swap reservation is required to ensure that the ESXi host is able to preserve virtual machine memory under any circumstances. In practice, only a small fraction of the host-level swap space might be used.
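The swap sizing rule above (configured memory size minus reservation) can be expressed directly; the function name is hypothetical.

```python
def swap_file_size_mb(configured_mem_mb, reservation_mb):
    """Per-VM swap file must cover the unreserved portion of configured
    memory, so the host can always preserve virtual machine memory."""
    return max(configured_mem_mb - reservation_mb, 0)

# A 4096-MB VM with a 1024-MB reservation needs a 3072-MB swap file;
# fully reserving its memory (4096 MB) eliminates the swap file entirely.
```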
Setting an alternative swapfile location might cause migrations with vMotion to complete more slowly. For best vMotion performance, store the swapfile in the same directory as the other virtual machine files rather than on a local datastore. If the virtual machine is stored on a local datastore, storing the swapfile with the other virtual machine files will not improve vMotion. Prerequisites Required privilege: Host machine.Configuration.
4 Next to Swap file location, click Edit.
5 Select where to store the swapfile.
  Option                        Description
  Virtual machine directory     Stores the swapfile in the same directory as the virtual machine configuration file.
  Datastore specified by host   Stores the swapfile in the location specified in the host configuration. If the swapfile cannot be stored on the datastore that the host specifies, the swapfile is stored in the same folder as the virtual machine.
6 Click OK.
Chapter 6 Administering Memory Resources Memory Compression ESXi provides a memory compression cache to improve virtual machine performance when you use memory overcommitment. Memory compression is enabled by default. When a host's memory becomes overcommitted, ESXi compresses virtual pages and stores them in memory.
vSphere Resource Management Measuring and Differentiating Types of Memory Usage The Performance tab of the vSphere Web Client displays a number of metrics that can be used to analyze memory usage. Some of these memory metrics measure guest physical memory while other metrics measure machine memory. For instance, two types of memory usage that you can examine using performance metrics are guest physical memory and machine memory.
A similar result is obtained when determining Memory Shared and Memory Shared Common for the host. n Memory Shared for the host is the sum of each virtual machine's Memory Shared. Calculate this by looking at each virtual machine's guest physical memory and counting the number of blocks that have arrows to machine memory blocks that themselves have more than one arrow pointing at them.
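The block-and-arrow counting described above can be modeled with reference counts: Memory Shared counts every guest page mapped to a machine page that has more than one mapping, while Memory Shared Common counts each such machine page once. The data layout is an assumption for illustration.

```python
from collections import Counter

def shared_metrics(vm_page_maps):
    """vm_page_maps: one dict per VM mapping guest page -> machine page.
    Returns (memory_shared_pages, memory_shared_common_pages) for the host."""
    # Reference count per machine page across all VMs.
    refs = Counter(mp for pages in vm_page_maps for mp in pages.values())
    # Memory Shared: guest pages whose machine page has >1 mapping.
    shared = sum(1 for pages in vm_page_maps for mp in pages.values()
                 if refs[mp] > 1)
    # Memory Shared Common: distinct machine pages with >1 mapping.
    common = sum(1 for n in refs.values() if n > 1)
    return shared, common

# Two VMs each mapping one guest page to machine page "M1": shared = 2,
# common = 1, so page sharing saves shared - common = 1 page of machine memory.
```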
ESXi automatically determines where the system swap should be stored; this is the Preferred swap file location. This decision can be aided by selecting a certain set of options. The system selects the best possible enabled option. If none of the options is feasible, system swap is not activated. The available options are: n Datastore - Allow the use of the datastore specified. Note that a vSAN datastore or a VVol datastore cannot be specified for system swap files.
View Graphics Information 7 You can access information about host graphics hardware capability for multiple virtual machines. You can view information about the graphics card and view the virtual machines that use the graphics card. Virtual machines are listed only if they are turned on and if the graphics card is of the shared type. Prerequisites Verify that the virtual machines are turned on. Procedure 1 In the vSphere Web Client, navigate to the host. 2 Click the Manage tab and click Settings.
Managing Storage I/O Resources 8 vSphere Storage I/O Control allows cluster-wide storage I/O prioritization, which allows better workload consolidation and helps reduce extra costs associated with over provisioning. Storage I/O Control extends the constructs of shares and limits to handle storage I/O resources.
vSphere Resource Management n Storage I/O Control does not support datastores with multiple extents. n Before using Storage I/O Control on datastores that are backed by arrays with automated storage tiering capabilities, check the VMware Storage/SAN Compatibility Guide to verify whether your automated tiered storage array has been certified to be compatible with Storage I/O Control.
Procedure 1 Browse to the datastore in the vSphere Web Client navigator. 2 Under the Monitor tab, click the Performance tab. 3 From the View drop-down menu, select Performance. For more information, see the vSphere Monitoring and Performance documentation. Set Storage I/O Control Resource Shares and Limits Allocate storage I/O resources to virtual machines based on importance by assigning a relative amount of shares to the virtual machine.
vSphere Resource Management Under Datastore Capabilities, Storage I/O Control is enabled for the datastore. Set Storage I/O Control Threshold Value The congestion threshold value for a datastore is the upper limit of latency that is allowed for a datastore before Storage I/O Control begins to assign importance to the virtual machine workloads according to their shares. You do not need to adjust the threshold setting in most environments.
Storage DRS Integration with Storage Profiles Storage Policy Based Management (SPBM) allows you to specify the policy for a virtual machine, which is enforced by Storage DRS. A datastore cluster can have a set of datastores with different capability profiles. If the virtual machines have storage profiles associated with them, Storage DRS can enforce placement based on underlying datastore capabilities.
Managing Resource Pools 9 A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped into hierarchies and used to hierarchically partition available CPU and memory resources. Each standalone host and each DRS cluster has an (invisible) root resource pool that groups the resources of that host or cluster. The root resource pool does not appear because the resources of the host (or cluster) and the root resource pool are always the same.
vSphere Resource Management Why Use Resource Pools? Resource pools allow you to delegate control over resources of a host (or a cluster), but the benefits are evident when you use resource pools to compartmentalize all resources in a cluster. Create multiple resource pools as direct children of the host or cluster and configure them. You can then delegate control over the resource pools to other individuals or organizations. Using resource pools can result in the following benefits.
Create a Resource Pool You can create a child resource pool of any ESXi host, resource pool, or DRS cluster. NOTE If a host has been added to a cluster, you cannot create child resource pools of that host. If the cluster is enabled for DRS, you can create child resource pools of the cluster. When you create a child resource pool, you are prompted for resource pool attribute information.
Example: Creating Resource Pools Assume that you have a host that provides 6GHz of CPU and 3GB of memory that must be shared between your marketing and QA departments. You also want to share the resources unevenly, giving one department (QA) a higher priority. This can be accomplished by creating a resource pool for each department and using the Shares attribute to prioritize the allocation of resources.
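Under contention, sibling pools receive capacity in proportion to their shares. The sketch below applies that rule to the 6GHz example; the specific share numbers (8000 for a High pool, 4000 for a Normal pool) are assumed for illustration.

```python
def divide_by_shares(capacity, shares_by_pool):
    """Split contended capacity among sibling pools in proportion to their
    share values (reservations and limits are ignored in this sketch)."""
    total = sum(shares_by_pool.values())
    return {name: capacity * s / total for name, s in shares_by_pool.items()}

# 6 GHz contended between RP-QA (assumed 8000 shares) and RP-Marketing
# (assumed 4000 shares): QA receives 4 GHz, Marketing 2 GHz.
allocation = divide_by_shares(6000, {"RP-QA": 8000, "RP-Marketing": 4000})
```

This 2:1 split is exactly the "higher priority for QA" outcome the example aims for.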
n Under Monitor, the information displayed in the Resource Reservation tab about the resource pool's reserved and unreserved CPU and memory resources changes to reflect the reservations associated with the virtual machine (if any). NOTE If a virtual machine has been powered off or suspended, it can be moved but overall available resources (such as reserved and unreserved CPU and memory) for the resource pool are not affected.
vSphere Resource Management Remove a Resource Pool You can remove a resource pool from the inventory. Procedure 1 In the vSphere Web Client, right-click the resource pool and Select Delete. A confirmation dialog box appears. 2 Click Yes to remove the resource pool. Resource Pool Admission Control When you power on a virtual machine in a resource pool, or try to create a child resource pool, the system performs additional admission control to ensure the resource pool’s restrictions are not violated.
Expandable Reservations Example 2 This example shows how a resource pool with expandable reservations works. Assume the following scenario, as shown in the figure. n Parent pool RP-MOM has a reservation of 6GHz and one running virtual machine VM-M1 that reserves 1GHz. n You create a child resource pool RP-KID with a reservation of 2GHz and with Expandable Reservation selected.
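The expandable-reservation admission logic in this scenario can be sketched as a recursive check: a pool first uses its own unreserved capacity, and an expandable pool may then borrow the remainder from its ancestors. This is a minimal model of the decision only; it does not update the parent's bookkeeping after a successful power-on.

```python
def can_reserve(request_mhz, pool):
    """pool: {'reservation', 'used', 'expandable', 'parent'} in MHz.
    Returns True if the reservation can be satisfied locally or, for an
    expandable pool, by borrowing the shortfall from its ancestors."""
    local = pool["reservation"] - pool["used"]
    if request_mhz <= local:
        return True
    if pool["expandable"] and pool["parent"] is not None:
        return can_reserve(request_mhz - max(local, 0), pool["parent"])
    return False

# The scenario above: RP-MOM has 6 GHz reserved with 1 GHz in use;
# RP-KID has 2 GHz and Expandable Reservation selected.
rp_mom = {"reservation": 6000, "used": 1000, "expandable": False, "parent": None}
rp_kid = {"reservation": 2000, "used": 0, "expandable": True, "parent": rp_mom}
# A 2-GHz VM in RP-KID fits locally; a 3-GHz VM succeeds by borrowing 1 GHz
# from RP-MOM; a 9-GHz VM fails because RP-MOM cannot cover the shortfall.
```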
10 Creating a DRS Cluster A cluster is a collection of ESXi hosts and associated virtual machines with shared resources and a shared management interface. Before you can obtain the benefits of cluster-level resource management you must create a cluster and enable DRS. Depending on whether or not Enhanced vMotion Compatibility (EVC) is enabled, DRS behaves differently when you use vSphere Fault Tolerance (vSphere FT) virtual machines in your cluster. Table 10‑1.
vSphere Resource Management Admission Control and Initial Placement When you attempt to power on a single virtual machine or a group of virtual machines in a DRS-enabled cluster, vCenter Server performs admission control. It checks that there are enough resources in the cluster to support the virtual machine(s). If the cluster does not have sufficient resources to power on a single virtual machine, or any of the virtual machines in a group power-on attempt, a message appears.
Chapter 10 Creating a DRS Cluster Example: Group Power On The user selects three virtual machines in the same data center for a group power-on attempt. The first two virtual machines (VM1 and VM2) are in the same DRS cluster (Cluster1), while the third virtual machine (VM3) is on a standalone host. VM1 is in automatic mode and VM2 is in manual mode.
vSphere Resource Management n If the cluster and virtual machines involved are all fully automated, vCenter Server migrates running virtual machines between hosts as needed to ensure efficient use of cluster resources. NOTE Even in an automatic migration setup, users can explicitly migrate individual virtual machines, but vCenter Server might move those virtual machines to other hosts to optimize cluster resources. By default, automation level is specified for the whole cluster.
Chapter 10 Creating a DRS Cluster DRS Cluster Requirements Hosts that are added to a DRS cluster must meet certain requirements to use cluster features successfully. Shared Storage Requirements A DRS cluster has certain shared storage requirements. Ensure that the managed hosts use shared storage. Shared storage is typically on a SAN, but can also be implemented using NAS shared storage. See the vSphere Storage documentation for information about other shared storage.
vSphere Resource Management Configure EVC from the Cluster Settings dialog box. The hosts in a cluster must meet certain requirements for the cluster to use EVC. For information about EVC and EVC requirements, see the vCenter Server and Host Management documentation. n CPU compatibility masks – vCenter Server compares the CPU features available to a virtual machine with the CPU features of the destination host to determine whether to allow or disallow migrations with vMotion.
3 Enter a name for the cluster.
4 Select DRS and vSphere HA cluster features.
  Option                         Description
  To use DRS with this cluster   a Select the DRS Turn ON check box.
                                 b Select an automation level and a migration threshold.
  To use HA with this cluster    a Select the vSphere HA Turn ON check box.
                                 b Select whether to enable host monitoring and admission control.
                                 c If admission control is enabled, specify a policy.
                                 d Select a VM Monitoring option.
vSphere Resource Management Create a DRS Cluster When you add a host to a DRS cluster, the host’s resources become part of the cluster’s resources. In addition to this aggregation of resources, with a DRS cluster you can support cluster-wide resource pools and enforce cluster-level resource allocation policies. The following cluster-level resource management capabilities are also available.
Chapter 10 Creating a DRS Cluster 6 (Optional) Select the vSphere HA Turn ON check box to enable vSphere HA. vSphere HA allows you to: n Enable host monitoring. n Enable admission control. n Specify the type of policy that admission control should enforce. n Adjust the monitoring sensitivity of virtual machine monitoring. 7 If appropriate, enable Enhanced vMotion Compatibility (EVC) and select the mode it should operate in. 8 Click OK to complete cluster creation.
vSphere Resource Management 9 Click OK. NOTE Other VMware products or features, such as vSphere vApp and vSphere Fault Tolerance, might override the automation levels of virtual machines in a DRS cluster. Refer to the product-specific documentation for details. Disable DRS You can turn off DRS for a cluster. When DRS is disabled, the cluster’s resource pool hierarchy and affinity rules are not reestablished when DRS is turned back on. If you disable DRS, the resource pools are removed from the cluster.
Using DRS Clusters to Manage Resources 11 After you create a DRS cluster, you can customize it and use it to manage resources. To customize your DRS cluster and the resources it contains you can configure affinity rules and you can add and remove hosts and virtual machines. When a cluster’s settings and resources have been defined, you should ensure that it is and remains a valid cluster. You can also use a valid DRS cluster to manage power resources and interoperate with vSphere HA.
vSphere Resource Management 3 Select a cluster. 4 Click OK to apply the changes. 5 Select what to do with the host’s virtual machines and resource pools. n Put this host’s virtual machines in the cluster’s root resource pool vCenter Server removes all existing resource pools of the host and the virtual machines in the host’s hierarchy are all attached to the root.
Chapter 11 Using DRS Clusters to Manage Resources Adding Virtual Machines to a Cluster You can add a virtual machine to a cluster in a number of ways. n When you add a host to a cluster, all virtual machines on that host are added to the cluster. n When a virtual machine is created, the New Virtual Machine wizard prompts you for the location to place the virtual machine. You can select a standalone host or a cluster and you can select any resource pool inside the host or cluster.
vSphere Resource Management 4 Select a datastore and click Next. 5 Click Finish. If the virtual machine is a member of a DRS cluster rules group, vCenter Server displays a warning before it allows the migration to proceed. The warning indicates that dependent virtual machines are not migrated automatically. You have to acknowledge the warning before migration can proceed.
n If the host is part of an automated DRS cluster, virtual machines are migrated to different hosts when the host enters maintenance mode. 3 If applicable, click Yes. The host is in maintenance mode until you select Exit Maintenance Mode. Remove a Host from a Cluster You can remove hosts from a cluster. Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Right-click the host and select Enter Maintenance Mode.
When considering cluster validity scenarios, you should understand these terms.
Reservation
   A fixed, guaranteed allocation for the resource pool input by the user.
Reservation Used
   The sum of the reservation or reservation used (whichever is larger) for each child resource pool, added recursively.
Unreserved
   This nonnegative number differs according to resource pool type.
   n Nonexpandable resource pools: Reservation minus reservation used.
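The recursive definitions of Reservation Used and Unreserved can be sketched directly from the terms above. The dict-based pool representation is an assumption for illustration; for expandable pools, the parent's unreserved capacity is passed in rather than walked up a tree.

```python
def reservation_used(pool):
    """Sum of max(reservation, reservation used) over children, recursively.
    A leaf pool's 'used' stands in for its powered-on VM reservations."""
    children = pool.get("children", [])
    if not children:
        return pool.get("used", 0)
    return sum(max(c["reservation"], reservation_used(c)) for c in children)

def unreserved(pool, parent_unreserved=0):
    """Nonexpandable: reservation minus reservation used (floored at zero).
    Expandable: the same quantity plus the parent's unreserved capacity."""
    base = pool["reservation"] - reservation_used(pool)
    if pool.get("expandable"):
        return base + parent_unreserved
    return max(base, 0)

# A pool reserving 4 GHz with 3 GHz in use has 1 GHz unreserved; a fully
# used 3-GHz expandable pool can still report the parent's 1 GHz as available.
```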
Chapter 11 Using DRS Clusters to Manage Resources n RP2 was created with a reservation of 4GHz. Two virtual machines of 1GHz and 2GHz are powered on (Reservation Used: 3GHz). 1GHz remains unreserved. n RP3 was created with a reservation of 3GHz. One virtual machine with 3GHz is powered on. No resources for powering on additional virtual machines are available. The following figure shows an example of a valid cluster with some resource pools (RP1 and RP3) using reservation type Expandable. Figure 11‑2.
vSphere Resource Management Overcommitted DRS Clusters A cluster becomes overcommitted (yellow) when the tree of resource pools and virtual machines is internally consistent but the cluster does not have the capacity to support all resources reserved by the child resource pools. There will always be enough resources to support all running virtual machines because, when a host becomes unavailable, all its virtual machines become unavailable.
Chapter 11 Using DRS Clusters to Manage Resources Invalid DRS Clusters A cluster enabled for DRS becomes invalid (red) when the tree is no longer internally consistent, that is, resource constraints are not observed. The total amount of resources in the cluster does not affect whether the cluster is red. A cluster can be red, even if enough resources exist at the root level, if there is an inconsistency at a child level.
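The green, yellow (overcommitted), and red (invalid) cluster states described above follow a simple decision order, sketched below. This is an illustrative model of the documented rules, not the DRS implementation.

```python
def cluster_color(tree_consistent, total_reservation, cluster_capacity):
    """Classify a DRS cluster per the descriptions above.

    red:    the resource pool tree is internally inconsistent
            (resource constraints are not observed)
    yellow: the tree is consistent, but reserved capacity exceeds
            what the cluster can supply (overcommitted)
    green:  otherwise
    """
    if not tree_consistent:
        return "red"
    if total_reservation > cluster_capacity:
        return "yellow"
    return "green"

print(cluster_color(True, 16, 12))   # yellow
print(cluster_color(False, 10, 12))  # red
print(cluster_color(True, 10, 12))   # green
```

Note that the red check comes first: a cluster can be red even with ample capacity at the root, because consistency of the tree is evaluated before capacity.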
vSphere Resource Management Managing Power Resources The vSphere Distributed Power Management (DPM) feature allows a DRS cluster to reduce its power consumption by powering hosts on and off based on cluster resource utilization. vSphere DPM monitors the cumulative demand of all virtual machines in the cluster for memory and CPU resources and compares this to the total available resource capacity of all hosts in the cluster.
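The demand-versus-capacity comparison that vSphere DPM performs can be sketched as follows. This is a simplified model: the `headroom` margin is an illustrative safety buffer, not a documented vSphere DPM parameter, and real DPM evaluates demand history rather than a single sample.

```python
def dpm_action(vm_demands_ghz, host_capacities_ghz, headroom=0.2):
    """Compare cumulative VM demand against total powered-on host capacity."""
    demand = sum(vm_demands_ghz)
    capacity = sum(host_capacities_ghz)
    if demand > capacity:
        return "power on hosts"          # demand exceeds capacity
    if len(host_capacities_ghz) > 1 and demand < capacity * (1 - headroom):
        return "consider powering off a host"  # sustained excess capacity
    return "no action"

print(dpm_action([10, 8], [16, 16, 16]))  # consider powering off a host
print(dpm_action([30, 30], [16, 16]))     # power on hosts
```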
Chapter 11 Using DRS Clusters to Manage Resources 4 Click Edit. 5 Enter the following information. n User name and password for a BMC account. (The user name must have the ability to remotely power the host on.) n IP address of the NIC associated with the BMC, as distinct from the IP address of the host. The IP address should be static or a DHCP address with infinite lease. n MAC address of the NIC associated with the BMC. 6 Click OK.
vSphere Resource Management 5 For any host that fails to exit standby mode successfully, perform the following steps. a Select the host in the vSphere Web Client navigator and select the Manage tab. b Under Power Management, click Edit to adjust the power management policy. After you do this, vSphere DPM does not consider that host a candidate for being powered off.
Chapter 11 Using DRS Clusters to Manage Resources The threshold is configured under Power Management in the cluster’s Settings dialog box. Each level you move the vSphere DPM Threshold slider to the right allows the inclusion of one more lower level of priority in the set of recommendations that are executed automatically or appear as recommendations to be manually executed.
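The slider behavior described above amounts to filtering recommendations by priority. The sketch below is illustrative only (names invented for the example); each step of the slider to the right raises the threshold and admits one more, lower, priority level.

```python
def auto_executed(recommendations, threshold):
    """Filter DPM recommendations by priority.

    recommendations: list of (description, priority) pairs, where
    priority 1 is the highest. Only recommendations at or above the
    threshold's priority level are executed automatically.
    """
    return [desc for desc, priority in recommendations if priority <= threshold]

recs = [("power off host-a", 1), ("power off host-b", 3), ("power off host-c", 5)]
print(auto_executed(recs, threshold=1))  # ['power off host-a']
print(auto_executed(recs, threshold=3))  # ['power off host-a', 'power off host-b']
```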
vSphere Resource Management Table 11‑2. vCenter Server Events (Continued) Event Type Event Name Exiting Standby mode (about to power on the host) DrsExitingStandbyModeEvent Successfully exited Standby mode (power on succeeded) DrsExitedStandbyModeEvent For more information about creating and editing alarms, see the vSphere Monitoring and Performance documentation.
Chapter 11 Using DRS Clusters to Manage Resources 3 Click Settings, and click DRS Groups. 4 In the DRS Groups section, click Add. 5 In the Create DRS Group dialog box, type a name for the group. 6 Select Host DRS Group from the Type drop-down box and click Add. 7 Click the check box next to a host to add it. Continue this process until all desired hosts have been added. 8 Click OK.
vSphere Resource Management Create a VM-VM Affinity Rule You can create VM-VM affinity rules to specify whether selected individual virtual machines should run on the same host or be kept on separate hosts. NOTE If you use the vSphere HA Specify Failover Hosts admission control policy and designate multiple failover hosts, VM-VM affinity rules are not supported. Procedure 1 Browse to the cluster in the vSphere Web Client navigator. 2 Click the Manage tab. 3 Click Settings and click DRS Rules.
Chapter 11 Using DRS Clusters to Manage Resources Create a VM-Host Affinity Rule You can create VM-Host affinity rules to specify whether or not the members of a selected virtual machine DRS group can run on the members of a specific host DRS group. Prerequisites Create the virtual machine and host DRS groups to which the VM-Host affinity rule applies. Procedure 1 Browse to the cluster in the vSphere Web Client navigator. 2 Click the Manage tab. 3 Click Settings and click DRS Rules. 4 Click Add.
vSphere Resource Management When you create a VM-Host affinity rule, its ability to function in relation to other rules is not checked. It is therefore possible to create a rule that conflicts with other rules you are using. When two VM-Host affinity rules conflict, the older one takes precedence and the newer rule is disabled. DRS tries to satisfy only enabled rules; disabled rules are ignored.
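The older-rule-wins behavior can be modeled as a walk over the rules in creation order. This is an illustrative sketch with a toy conflict test (same VM group required to run on two different host groups); the field names are invented for the example and are not vSphere API properties.

```python
def conflicts(a, b):
    # Toy conflict test: the same VM group is required to run on two
    # different host groups.
    return a["vm_group"] == b["vm_group"] and a["host_group"] != b["host_group"]

def apply_precedence(rules):
    """Walk rules oldest-first; a newer rule that conflicts with an
    already-active older rule is disabled. DRS tries to satisfy only
    the rules that remain enabled."""
    active = []
    for rule in sorted(rules, key=lambda r: r["created"]):
        rule["enabled"] = not any(conflicts(rule, earlier) for earlier in active)
        if rule["enabled"]:
            active.append(rule)
    return {r["name"]: r["enabled"] for r in rules}

rules = [
    {"name": "newer", "created": 2, "vm_group": "web", "host_group": "rack-b"},
    {"name": "older", "created": 1, "vm_group": "web", "host_group": "rack-a"},
]
print(apply_precedence(rules))  # {'newer': False, 'older': True}
```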
Creating a Datastore Cluster 12 A datastore cluster is a collection of datastores with shared resources and a shared management interface. Datastore clusters are to datastores what clusters are to hosts. When you create a datastore cluster, you can use vSphere Storage DRS to manage storage resources. NOTE Datastore clusters are referred to as storage pods in the vSphere API. When you add a datastore to a datastore cluster, the datastore's resources become part of the datastore cluster's resources.
vSphere Resource Management Initial Placement and Ongoing Balancing Storage DRS provides initial placement and ongoing balancing recommendations to datastores in a Storage DRS-enabled datastore cluster. Initial placement occurs when Storage DRS selects a datastore within a datastore cluster on which to place a virtual machine disk.
Chapter 12 Creating a Datastore Cluster Enable and Disable Storage DRS Storage DRS allows you to manage the aggregated resources of a datastore cluster. When Storage DRS is enabled, it provides recommendations for virtual machine disk placement and migration to balance space and I/O resources across the datastores in the datastore cluster. When you enable Storage DRS, you enable the following functions. n Space load balancing among datastores within a datastore cluster.
vSphere Resource Management 5 Click OK. Setting the Aggressiveness Level for Storage DRS The aggressiveness of Storage DRS is determined by specifying thresholds for space used and I/O latency. Storage DRS collects resource usage information for the datastores in a datastore cluster. vCenter Server uses this information to generate recommendations for placement of virtual disks on datastores.
Chapter 12 Creating a Datastore Cluster 2 (Optional) Set Storage DRS thresholds. You set the aggressiveness level of Storage DRS by specifying thresholds for used space and I/O latency. n Use the Utilized Space slider to indicate the maximum percentage of consumed space allowed before Storage DRS is triggered. Storage DRS makes recommendations and performs migrations when space use on the datastores is higher than the threshold.
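The threshold check described above can be sketched as a simple predicate. The 80% space and 15 ms latency defaults used here are the commonly cited Storage DRS defaults, but verify them against your vSphere version; the function itself is an illustrative model, not the Storage DRS implementation.

```python
def storage_drs_triggered(used_gb, capacity_gb, latency_ms,
                          space_threshold_pct=80, latency_threshold_ms=15):
    """Storage DRS makes recommendations or performs migrations when
    either the space-used threshold or the I/O latency threshold is
    exceeded on a datastore."""
    space_pct = 100.0 * used_gb / capacity_gb
    return space_pct > space_threshold_pct or latency_ms > latency_threshold_ms

print(storage_drs_triggered(850, 1000, 5))   # True  -- 85% used exceeds 80%
print(storage_drs_triggered(500, 1000, 20))  # True  -- 20 ms exceeds 15 ms
print(storage_drs_triggered(500, 1000, 5))   # False -- below both thresholds
```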
vSphere Resource Management Adding and Removing Datastores from a Datastore Cluster You can add datastores to and remove datastores from an existing datastore cluster. You can add to a datastore cluster any datastore that is mounted on a host in the vSphere Web Client inventory, with the following exceptions: n All hosts attached to the datastore must be ESXi 5.0 and later. n The datastore cannot be in more than one data center in the same instance of the vSphere Web Client.
Using Datastore Clusters to Manage Storage Resources 13 After you create a datastore cluster, you can customize it and use it to manage storage I/O and space utilization resources.
vSphere Resource Management No CD-ROM image files are stored on the datastore. There are at least two datastores in the datastore cluster. Procedure 1 Browse to the datastore in the vSphere Web Client navigator. 2 Right-click the datastore and select Enter Storage DRS Maintenance Mode. A list of recommendations appears for datastore maintenance mode migration. 3 (Optional) On the Placement Recommendations tab, deselect any recommendations you do not want to apply.
Chapter 13 Using Datastore Clusters to Manage Storage Resources Applying Storage DRS Recommendations Storage DRS collects resource usage information for all datastores in a datastore cluster. Storage DRS uses the information to generate recommendations for virtual machine disk placement on datastores in a datastore cluster. Storage DRS recommendations appear on the Storage DRS tab in the vSphere Web Client datastore view.
vSphere Resource Management Change Storage DRS Automation Level for a Virtual Machine You can override the datastore cluster-wide automation level for individual virtual machines. You can also override default virtual disk affinity rules. Procedure 1 Browse to the datastore cluster in the vSphere Web Client navigator. 2 Click the Manage tab and click Settings. 3 Under VM Overrides, select Add. 4 Select a virtual machine.
Chapter 13 Using Datastore Clusters to Manage Storage Resources 5 Expand DRS Automation. a Select an automation level. b Set the Migration threshold. Use the Migration slider to select the priority level of vCenter Server recommendations that adjust the cluster's load balance. c Select whether to enable Virtual Machine Automation. Overrides for individual virtual machines can be set from the VM Overrides page. 6 Expand Power Management. a Select an automation level. b Set the DPM threshold.
vSphere Resource Management n Datastore Cluster B has an inter-VM anti-affinity rule. When you move a virtual disk out of Datastore Cluster A and into Datastore Cluster B, any rule that applied to the virtual disk for a given virtual machine in Datastore Cluster A no longer applies. The virtual disk is now subject to the inter-VM anti-affinity rule in Datastore Cluster B. n Datastore Cluster B has an intra-VM anti-affinity rule.
Chapter 13 Using Datastore Clusters to Manage Storage Resources Create Intra-VM Anti-Affinity Rules You can create a VMDK anti-affinity rule for a virtual machine that indicates which of its virtual disks must be kept on different datastores. VMDK anti-affinity rules apply to the virtual machine for which the rule is defined, not to all virtual machines. The rule is expressed as a list of virtual disks that are to be separated from one another.
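The violation condition for a VMDK anti-affinity rule is that two disks named in the rule share a datastore. The sketch below models that check; the disk and datastore names are invented for the example.

```python
def violates_vmdk_anti_affinity(disk_to_datastore, rule_disks):
    """The rule is violated when any two disks named in the rule land
    on the same datastore. disk_to_datastore maps disk name to the
    datastore currently holding it."""
    placements = [disk_to_datastore[d] for d in rule_disks]
    return len(set(placements)) < len(placements)

placement = {"vmdk0": "ds1", "vmdk1": "ds1", "vmdk2": "ds2"}
print(violates_vmdk_anti_affinity(placement, ["vmdk0", "vmdk1"]))  # True
print(violates_vmdk_anti_affinity(placement, ["vmdk0", "vmdk2"]))  # False
```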
vSphere Resource Management n If the virtual machine's virtual disk violates the rule, Storage DRS makes migration recommendations to correct the error or reports the violation as a fault if it cannot make a recommendation that will correct the error. When you add a datastore to a datastore cluster that is enabled for Storage DRS, the VMDK affinity rule is disabled for any virtual machine that has virtual disks on that datastore if it also has virtual disks on other datastores.
Chapter 13 Using Datastore Clusters to Manage Storage Resources Storage vMotion Compatibility with Datastore Clusters A datastore cluster has certain vSphere Storage vMotion® requirements. n The host must be running a version of ESXi that supports Storage vMotion. n The host must have write access to both the source datastore and the destination datastore. n The host must have enough free memory resources to accommodate Storage vMotion. n The destination datastore must have sufficient disk space.
Using NUMA Systems with ESXi 14 ESXi supports memory access optimization for Intel and AMD Opteron processors in server architectures that support NUMA (non-uniform memory access). After you understand how ESXi NUMA scheduling is performed and how the VMware NUMA algorithms work, you can specify NUMA controls to optimize the performance of your virtual machines.
vSphere Resource Management The high latency of remote memory accesses can leave the processors under-utilized, constantly waiting for data to be transferred to the local node, and the NUMA connection can become a bottleneck for applications with high-memory bandwidth demands. Furthermore, performance on such a system can be highly variable.
Chapter 14 Using NUMA Systems with ESXi ESXi 5.0 and later includes support for exposing virtual NUMA topology to guest operating systems. For more information about virtual NUMA control, see “Using Virtual NUMA,” on page 110. VMware NUMA Optimization Algorithms and Settings This section describes the algorithms and settings used by ESXi to maximize application performance while still maintaining resource guarantees.
vSphere Resource Management When initial placement, dynamic rebalancing, and intelligent memory migration work in conjunction, they ensure good memory performance on NUMA systems, even in the presence of changing workloads. When a major workload change occurs, for instance when new virtual machines are started, the system takes time to readjust, migrating virtual machines and memory to new locations.
Chapter 14 Using NUMA Systems with ESXi When the number of virtual CPUs and the amount of memory used grow proportionately, you can use the default values. For virtual machines that consume a disproportionally large amount of memory, you can override the default values in one of the following ways: n Increase the number of virtual CPUs, even if this number of virtual CPUs is not used. See “Change the Number of Virtual CPUs,” on page 111.
vSphere Resource Management Table 14‑1. Advanced Options for Virtual NUMA Controls (Continued) Option Description Default Value numa.autosize.once When you create a virtual machine template with these settings, the settings are guaranteed to remain the same every time you subsequently power on the virtual machine. The virtual NUMA topology will be reevaluated if the configured number of virtual CPUs on the virtual machine is modified. TRUE numa.vcpu.
Chapter 14 Using NUMA Systems with ESXi Manual NUMA placement might interfere with ESXi resource management algorithms, which distribute processor resources fairly across a system. For example, if you manually place 10 virtual machines with processor-intensive workloads on one node, and manually place only 2 virtual machines on another node, it is impossible for the system to give all 12 virtual machines equal shares of system resources.
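The 10-versus-2 imbalance above can be made concrete with a small calculation. This is an illustrative model (the node capacity figure is invented): with manual placement the scheduler cannot rebalance, so each virtual machine's share is simply its node's capacity divided by the number of machines pinned there.

```python
def per_vm_share(node_capacity_ghz, pinned_vms_per_node):
    """Per-VM CPU share on each node when VMs are manually pinned.
    Unequal shares mean the NUMA scheduler cannot give all VMs equal
    resources."""
    return {node: node_capacity_ghz / count
            for node, count in pinned_vms_per_node.items()}

# The 10-versus-2 example: shares diverge instead of being equal.
shares = per_vm_share(24.0, {"node0": 10, "node1": 2})
print(shares)  # {'node0': 2.4, 'node1': 12.0}
```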
vSphere Resource Management 4 In the vSphere Web Client, turn on CPU affinity for processors 4, 5, 6, and 7. Then, you want this virtual machine to run only on node 1. 1 In the vSphere Web Client inventory panel, select the virtual machine and select Edit Settings. 2 Select Options and click Advanced. 3 Click the Configuration Parameters button. 4 In the vSphere Web Client, set memory affinity for the NUMA node to 1.
Advanced Attributes 15 You can set advanced attributes for hosts or individual virtual machines to help you customize resource management. In most cases, adjusting the basic resource allocation settings (reservation, limit, shares) or accepting default settings results in appropriate resource allocation. However, you can use advanced attributes to customize resource management for a host or a specific virtual machine.
vSphere Resource Management Advanced Memory Attributes You can use the advanced memory attributes to customize memory resource usage. Table 15‑1. Advanced Memory Attributes Attribute Description Default Mem.ShareForceSalting Mem.ShareForceSalting 0: Inter-virtual machine Transparent Page Sharing (TPS) behavior is still retained. The value of VMX option sched.mem.pshare.salt is ignored even if present. Mem.ShareForceSalting 1: By default the salt value is taken from sched.mem.pshare.salt.
Chapter 15 Advanced Attributes Table 15‑1. Advanced Memory Attributes (Continued) Attribute Description Default LPage.LPageDefragRateTotal Maximum number of large page defragmentation attempts per second. Accepted values range from 1 to 10240. 256 LPage.LPageAlwaysTryForNPT Try to allocate large pages for nested page tables (called 'RVI' by AMD or 'EPT' by Intel).
vSphere Resource Management Set Advanced Virtual Machine Attributes You can set advanced attributes for a virtual machine. Procedure 1 Find the virtual machine in the vSphere Web Client inventory. a To find a virtual machine, select a data center, folder, cluster, resource pool, or host. b Click the Related Objects tab and click Virtual Machines. 2 Right-click the virtual machine and select Edit Settings. 3 Click VM Options. 4 Expand Advanced.
Chapter 15 Advanced Attributes Table 15‑3. Advanced Virtual Machine Attributes (Continued) Attribute Description Default sched.swap.persist Specifies whether the virtual machine’s swap files should persist or be deleted when the virtual machine is powered off. By default, the system creates the swap file for a virtual machine when the virtual machine is powered on, and deletes the swap file when the virtual machine is powered off. False sched.swap.
vSphere Resource Management Table 15‑4. Advanced NUMA Attributes (Continued) Attribute Description numa.nodeAffinity Constrains the set of NUMA nodes on which a virtual machine's virtual CPU and memory can be scheduled. NOTE When you constrain NUMA node affinities, you might interfere with the ability of the NUMA scheduler to rebalance virtual machines across NUMA nodes for fairness. Specify NUMA node affinity only after you consider the rebalancing issues. numa.mem.
Chapter 15 Advanced Attributes 2 Click the Manage tab and click Settings. 3 Under System, select Licensing. 4 Under Features, verify Reliable Memory is displayed. What to do next You can look up how much memory is considered reliable by using the ESXCLI hardware memory get command.
Fault Definitions 16 DRS faults indicate the reasons that prevent the generation of DRS actions (or the recommendation of those actions in manual mode). The DRS faults are defined within this section.
vSphere Resource Management Virtual Machine is Pinned This fault occurs when DRS cannot move a virtual machine because DRS is disabled on it. That is, the virtual machine is "pinned" on its registered host. Virtual Machine not Compatible with any Host This fault occurs when DRS cannot find a host that can run the virtual machine.
Chapter 16 Fault Definitions Host has Insufficient Number of Physical CPUs for Virtual Machine This fault occurs when the host hardware does not have enough physical CPUs (hyperthreads) to support the number of virtual CPUs in the virtual machine. Host has Insufficient Capacity for Each Virtual Machine CPU This fault occurs when the host does not have enough CPU capacity for running the virtual machine.
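The physical-CPU fault condition is a direct comparison, sketched here as an illustrative predicate (the function name is invented; this is not DRS code):

```python
def insufficient_pcpus_fault(host_hyperthreads, vm_vcpus):
    """The fault fires when the virtual machine has more virtual CPUs
    than the host has physical CPUs (hyperthreads)."""
    return vm_vcpus > host_hyperthreads

print(insufficient_pcpus_fault(host_hyperthreads=8, vm_vcpus=16))   # True: fault
print(insufficient_pcpus_fault(host_hyperthreads=32, vm_vcpus=16))  # False
```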
vSphere Resource Management Soft Rule Violation Correction Impact The non-mandatory VM/Host DRS affinity rule violation is not corrected because correcting it impacts performance.
DRS Troubleshooting Information 17 This information describes vSphere® Distributed Resource Scheduler (DRS) problems for particular categories: cluster, host, and virtual machine problems. This chapter includes the following topics: n “Cluster Problems,” on page 127 n “Host Problems,” on page 130 n “Virtual Machine Problems,” on page 133 Cluster Problems Cluster problems can prevent DRS from performing optimally or from reporting faults.
vSphere Resource Management n vMotion is not enabled or set up for the hosts in the cluster. Solution Address the problem that is causing the load imbalance. Cluster is Yellow The cluster is yellow due to a shortage of resources. Problem If the cluster does not have enough resources to satisfy the reservations of all resource pools and virtual machines, but does have enough resources to satisfy the reservations of all running virtual machines, DRS continues to run and the cluster is yellow.
Chapter 17 DRS Troubleshooting Information No Hosts are Powered Off When Total Cluster Load is Low Hosts are not powered off when the total cluster load is low. Problem Hosts are not powered off when the total cluster load is low because extra capacity is needed for HA failover reservations. Cause Hosts might not be powered off for the following reasons: n The MinPoweredOn{Cpu|Memory}Capacity advanced options settings need to be met.
vSphere Resource Management Cause DRS never performs vMotion migrations when one or more of the following issues is present on the cluster. n DRS is disabled on the cluster. n The hosts do not have shared storage. n The hosts in the cluster do not contain a vMotion network. n DRS is manual and no one has approved the migration. DRS seldom performs vMotion when one or more of the following issues is present on the cluster: n Loads are unstable, or vMotion takes a long time, or both.
Chapter 17 DRS Troubleshooting Information Total Cluster Load Is High The total cluster load is high. Problem When the total cluster load is high, DRS does not power on the host. Cause The following are possible reasons why DRS does not power on the host: n VM/VM DRS rules or VM/Host DRS rules prevent the virtual machine from being moved to this host. n Virtual machines are pinned to their current hosts, hence DRS cannot move these virtual machines to hosts in standby mode to balance the load.
vSphere Resource Management DRS Does Not Evacuate a Host Requested to Enter Maintenance or Standby Mode DRS does not evacuate a host requested to enter maintenance mode or standby mode. Problem When you attempt to put a host into maintenance or standby mode, DRS does not evacuate the host as expected. Cause vSphere HA is enabled and evacuating this host might violate HA failover capacity. Solution There is no solution.
Chapter 17 DRS Troubleshooting Information Cause This may be because of problems with vMotion, DRS, or host compatibility. The following are the possible reasons: n vMotion is not configured or enabled on this host. n DRS is disabled for the virtual machines on this host. n Virtual machines on this host are not compatible with any other hosts. n No other hosts have sufficient resources for any virtual machines on this host.
vSphere Resource Management Cluster is Overloaded The cluster on which the virtual machine is running might have insufficient resources. Also, the virtual machine's share value is such that other virtual machines are granted proportionally more of the resources. To determine whether the demand is larger than the capacity, check the cluster statistics. Host is Overloaded To determine if the host's resources are oversubscribed, check the host statistics.
Chapter 17 DRS Troubleshooting Information When a VM/VM DRS rule or VM/Host DRS rule is violated, it might be because DRS cannot move some or all of the virtual machines in the rule. The reservation of the virtual machine or other virtual machines in the affinity rule, or their parent resource pools, might prevent DRS from locating all virtual machines on the same host. Solution n Check the DRS faults panel for faults associated with affinity rules.
vSphere Resource Management n The DRS automation level of the virtual machine is manual and the user does not approve the migration recommendation. n DRS will not move fault tolerance-enabled virtual machines. Solution Address the issue that prevents DRS from moving the virtual machine.