Dell EqualLogic Best Practices Series

Sizing and Best Practices for Deploying VMware View 4.5 on VMware vSphere 4.1 with Dell EqualLogic Storage

A Dell Technical Whitepaper

This document has been archived and will no longer be maintained or updated. For more information, go to the Storage Solutions Technical Documents page on Dell TechCenter or contact support.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. © 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
1 Introduction
Virtual Desktop Infrastructure (VDI) solutions are gaining a considerable foothold. In addition to the traditional benefits of server virtualization, VDI solutions can provide significant additional cost savings through streamlined implementation and ease of management. In VDI environments the storage infrastructure must be carefully designed and sized to meet I/O performance requirements while supporting efficient capacity utilization.
2 Virtual Desktop Infrastructures
Desktop virtualization is emerging as an important strategy for organizations seeking to reduce the cost and complexity of managing an expanding variety of client desktops, laptops, netbooks, and mobile handheld devices. In a VMware View based VDI environment, user desktops are hosted as virtual machines in a centralized infrastructure. The user interface to the desktop virtual machine is transmitted over a network to an end-user's client device.
When designing storage systems for a VDI deployment you must take all of these considerations into account. The storage platform will need to meet the performance demands generated by utilization spikes and be able to cost-effectively scale to meet capacity requirements.
2.4 The VMware View Solution VMware View is one of the leading VDI solutions in the market today. It includes a complete suite of tools for delivering desktops as a secure, managed service from a centralized infrastructure.
effectively balance peak load capabilities and scale-out storage capacity requirements for VDI environments.
3 VMware View Infrastructure and Test Configuration
The core VMware View infrastructure components used in our test configuration are shown in Figure 1 below.
Figure 1: Test Configuration Functional Components
We added the following components to the test system configuration to simulate a realistic VDI workload:
• RAWC Controller: provides the RAWC system configuration and management GUI for desktop workload simulation
• RAWC Session Launchers: automate launch of View VDI client sessions
• Microsoft SQL Server: provides the database for vCenter and View Composer
• Microsoft Exchange 2010: email server for Outlook clients running on each desktop
We used Dell PowerEdge M610 blade servers and Dell PowerConnect M6220/M6348 Ethernet blade switches within a Dell PowerEdge M1000e Modular Blade Enclosure as the host platform.
The size of the parent VM image used in our desktop pools was 15GB. We allocated one vCPU and 1GB of memory for each desktop VM. A linked-clone desktop pool was assigned to each ESX cluster.
3.1 Test Infrastructure: Component Design Details
Table 1 below provides an overview of the components used to build our test configuration.
Servers: 3 x Dell PowerEdge M610 blade servers, each with:
• 2 x quad-core Intel Xeon E5620 2.4GHz processors
• vCenter Performance Monitor: performance monitoring and capture at the ESX host
• ESXTOP: performance monitoring and capture at the ESX host
Table 1: Test Components
The PowerConnect M6348 switch modules and the PowerConnect 6224 top-of-rack switches were dedicated to the iSCSI SAN. The PowerConnect M6220 switch modules were dedicated to the server LAN. Figure 3 shows the overall topology and the connection paths used by each M610 blade server (only one server is shown).
Server LAN Configuration:
• Each PowerEdge M610 server has an onboard Broadcom 5709 dual-port 1GbE NIC.
• Dual PowerConnect M6220 switches were installed in fabric A of the blade chassis. We connected the onboard NICs to each of the M6220 switches.
• The two M6220 switches were inter-connected using a 2 x 10GbE LAG.
SAN Configuration:
• Each PowerEdge M610 server included two Broadcom NetXtreme II 5709 quad-port NIC mezzanine cards. We assigned one card to fabric B and the other to fabric C.
A physical NIC uplink was exclusively assigned to each port.
Figure 4: ESX vSwitch Configuration
We used VLANs to segregate network traffic into different classes (tagged packets) within the Server LAN. VLAN and port group assignments for the Server LAN (vSwitch0) were assigned as shown in Table 2. Figure 5 shows the logical connection paths for vSwitch0 and vSwitchISCSI.
Figure 5: ESX vSwitch Connection Paths In our configuration we used the software iSCSI initiator provided by the ESX host. To take advantage of EqualLogic-aware multi-path I/O, the EqualLogic Multipathing Extension Module (MEM) for VMware vSphere was installed on each ESX host. Note: For detailed information on using the EqualLogic Multipathing Extension Module, see the following publication: Configuring and Installing the EqualLogic Multipathing Extension Module for VMware vSphere 4.
• Maintain less than 20ms disk latency under any VDI workload.
• Provision, on average, a minimum of 2GB of storage per linked clone delta disk.
3.4 VMware View Configuration
The VMware View configuration settings used during all performance characterization tests are shown in Table 3 below.
3.4.1 Using Linked Clones
You can realize significant storage space savings and increased efficiencies in client VM provisioning and administration when using linked clones. To set up a linked clone pool, you first create a "parent" VM image with the required OS, settings and software installed. View Composer will coordinate with vCenter to create a pool of linked clones, using the parent VM image as the base image for the clones.
VM, RAWC also automates launching of applications such as Word, Excel, Outlook, Adobe Reader and others, based on specific criteria such as typing speed, number of emails and number of pages to change. We customized the application simulation settings to achieve our desired user load profile. Note: For more information on the Reference Architecture Workload Simulator, see the following VMware Information Guide: Workload Considerations for Virtual Desktop Reference Architectures: www.vmware.
The results in Table 4 above show the following:
• For both persistent and non-persistent desktops, we comfortably scaled up to 1014 desktop VMs using a single PS6000XVS array for storage (test cases A and B above).
• Non-persistent desktops created a peak load of 8600 IOPS during the login storm. The PS6000XVS was able to meet this I/O demand with performance headroom to spare.
Figure 6: Login storm I/O performance detail: PS6000XVS hosting 1014 non-persistent desktops
Figure 7 shows the I/O performance measured at the individual disk drives within the storage system at the same login storm peak I/O point. The data in the table below the chart shows that during peak load approximately 90% of the array IOPS were served by the SSD drives. During the login storm most of the read I/O is targeted at the replica image. As this happens, the replica image data becomes "hot".
Figure 8: Steady state I/O performance detail: PS6000XVS hosting 1014 desktops Figure 9 shows the I/O performance measured at the individual disk drives within the storage system at the same measurement point as in Figure 8. The data table below the chart shows that approximately 85% of the array IOPS were served by the SSD drives and the remaining 15% served by the 15K SAS drives in the array.
4.2.3 Host performance in the View Client ESX cluster
During the test, we measured CPU, memory, network and disk performance on all of the View Client cluster ESX hosts. The performance of one of the ESX hosts is presented here. It is representative of all the ESX hosts under test. The results shown in the figures in this section were captured using VMware vCenter. Figure 10 shows the CPU utilization for one of the ESX hosts during the test.
Figure 11: Memory utilization for one ESX View Client Cluster Host during 1014 non-persistent desktop test Figure 12 shows disk latencies for read and write operations, as measured from the ESX host, averaged for the seven volumes under test. As shown in the figure, disk latency stayed well within 20ms on average. Figure 13 shows the iSCSI read and write network data rates during the same test period.
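The latency comparison above can be sketched as a small script. This is only an illustration of the check applied to such measurements; the per-volume latency values below are invented for the example, not measured data from these tests:

```python
def within_latency_target(latencies_ms, target_ms=20.0):
    """Average per-volume disk latencies and compare to a design target.

    Returns the average latency and whether it meets the target.
    """
    avg = sum(latencies_ms) / len(latencies_ms)
    return avg, avg < target_ms

# Hypothetical average latencies (ms) for seven VMFS volumes under test.
avg, ok = within_latency_target([6.2, 7.8, 5.1, 9.4, 8.0, 6.6, 7.3])
print(ok)  # → True
```

A real check would feed in latencies captured from vCenter or ESXTOP per datastore.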
Figure 13: iSCSI network utilization for one ESX View Client Cluster Host during 1014 non-persistent desktop test
5 Sizing Guidelines for EqualLogic SANs
Client desktop usage in enterprise environments follows typical patterns or phases. For example, at the beginning of the workday most employees log in to their desktops within a relatively small time frame. During this time "login storms" can be expected. After the login storm, periods of high and low steady state application activities will occur.
• Allocate space equal to the memory size of the VM for swap usage.
• Allocate space for linked clone delta files. The size of the delta file allocation is determined by the amount of change that occurs between the desktop VM and the base image. Determine the maximum change expected (as a percentage of the base image size) and use that value for the delta file allocation. (Note: The setting for storage overcommit level will also affect the capacity calculation.)
• PDisk: space allocated for the persistent disk per VM; for non-persistent desktops PDisk = 0.
• Log: space allocated for VM log files (if VM logging is enabled); 0 (no logging).
• OF%: overhead factor; 15%.
Calculation:
Non-persistent: (1000 * (1GB + 15GB * 0.075) + 2 * 2 * 15GB) * 1.15 = 2513GB
Persistent: (1000 * (1GB + 15GB * 0.05 + 0.5GB) + 2 * 2 * 15GB) * 1.15 = 2484GB
(1) For non-persistent desktops, this will include OS image changes, user profile data and application data. The linked clone delta disk for non-persistent desktops will incur a higher change rate (%C) as compared to persistent desktops.
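The capacity formula above can be written as a short script. This is a minimal sketch with names of our own choosing; it assumes swap space equal to VM memory and two replica volumes each allocated at twice the base image size, and it reproduces the non-persistent worked example:

```python
def linked_clone_capacity_gb(n_vms, vm_mem_gb, base_gb, change_pct,
                             pdisk_gb=0.0, log_gb=0.0,
                             n_replicas=2, overhead=0.15):
    """Estimate VMFS capacity (GB) for a linked-clone desktop pool."""
    # Per-VM space: swap (equal to VM memory) + linked clone delta disk
    # (change_pct of the base image) + persistent disk + log files.
    per_vm = vm_mem_gb + base_gb * change_pct + pdisk_gb + log_gb
    # Each replica volume is allocated twice the base image size.
    replica_space = n_replicas * 2 * base_gb
    return (n_vms * per_vm + replica_space) * (1.0 + overhead)

# Non-persistent example: 1000 VMs, 1GB memory (swap), 15GB base image,
# 7.5% delta change, two replicas, 15% overhead factor.
print(round(linked_clone_capacity_gb(1000, 1, 15, 0.075)))  # → 2513
```

Changing `change_pct`, `pdisk_gb` and the other parameters covers the persistent-desktop case with the same formula.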
5.4 Performance Considerations In general, the design considerations specific to VDI performance focus on supporting the workloads generated by a typical desktop session lifecycle (boot, login, work activities and logoff). Quantifying the average I/O workload generated by the user profile (task worker, knowledge worker etc.) is important for performance and sizing considerations.
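A rough aggregate-IOPS estimate along these lines can be sketched as below. The per-user IOPS figure, read/write mix and headroom multiplier are assumptions chosen for illustration, not values measured in these tests:

```python
def array_iops_required(n_users, iops_per_user, read_fraction,
                        peak_multiplier=1.0):
    """Rough front-end IOPS estimate for a pool of desktop users.

    iops_per_user: assumed steady-state average for the user profile
    (task worker, knowledge worker, etc.).
    peak_multiplier: headroom for boot/login utilization spikes.
    """
    total = n_users * iops_per_user * peak_multiplier
    reads = total * read_fraction
    writes = total - reads
    return total, reads, writes

# Hypothetical knowledge-worker profile: 10 IOPS/user, 50% reads,
# 2x headroom for spikes.
total, reads, writes = array_iops_required(1000, 10, 0.5, 2.0)
print(int(total))  # → 20000
```

The resulting figure would then be compared against the measured capability of the target array model.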
6 Best Practices Each of the component layers in the system stack shown in Figure 14 requires careful design and configuration to ensure optimal system performance. Figure 14: Component Stack Within each layer of the stack you must consider each of the following design goals: Availability Ensuring high availability is a critical design guideline for each component in the solution. Redundant sub-components and connection paths must be maintained to avoid single points of failure.
users to have their desktop and application settings follow them between different virtual desktops. Folders like Desktop, Documents, Pictures, Music, Videos, Favorites, Contacts, Downloads and Links should be redirected to a user's home directory on a network file share. By following these practices, login speeds will improve and a reset of a user's profile will not cause document data loss.
The infrastructure, including the storage platform, should be designed to handle the I/O requirements during these periods. Organizations should study the arrival rate (number of user connections per minute) and the corresponding IOPS requirements when designing the VDI solution. Our test results showed that using the EqualLogic PS6000XVS (a hybrid SSD/SAS storage array with automatic hot-data migration between device tiers) enabled us to support much higher peak I/O workloads while still supporting VM storage density requirements.
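The relationship between arrival rate and peak IOPS can be sketched with a simple overlap model: if logins arrive at a steady rate and each generates an I/O burst for a fixed duration, the number of concurrent logins is rate times duration (Little's law). All numbers below are hypothetical illustrations, not measurements from these tests:

```python
def login_storm_iops(arrival_rate_per_min, login_duration_s,
                     iops_per_login):
    """Estimate peak IOPS during a login storm.

    Concurrent logins = arrival rate (per second) * login duration;
    each active login is assumed to generate iops_per_login.
    """
    concurrent = arrival_rate_per_min / 60.0 * login_duration_s
    return concurrent * iops_per_login

# Hypothetical: 120 logins/min, 30 s per login, 100 IOPS per active login.
print(int(login_storm_iops(120, 30, 100)))  # → 6000
```

This kind of estimate gives a first-order peak figure to check against the array's measured login-storm capability.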
• Align guest desktop OS vmdk image files to 64K block boundaries (Windows 7 performs this automatically).
6.3 ESX Host Configuration
The following steps are recommended for configuring the ESX host:
• Install the EqualLogic Multipathing Extension Module (MEM) for vSphere 4.1. MEM is supported with either the ESX software iSCSI initiator or any supported iSCSI hardware initiator on the host.
• Make sure that the server NIC ports and storage NIC ports are connected such that any single component failure in the SAN will not disable access to any storage array volumes.
• Enable flow control on both the server NICs and the switch ports connecting to server and storage ports.
• We recommend enabling jumbo frames on the server NIC ports and the switch ports.
• linked clones. With View 4.5, only one replica volume datastore needs to be specified per View pool and ESX cluster.
• When using persistent desktops, if user data needs to be backed up, host the persistent disks assigned to each VM on a separate VMFS datastore. This datastore can then be backed up independently to protect user data under its own SLA requirements.
Appendix A Test System Component Details
The tests were conducted using the following component firmware levels.
• EqualLogic arrays: v5.0.2
• Network switches:
PowerConnect 6248: 3.2.07
PowerConnect M6220: 3.1.5.2
PowerConnect M6348: 3.1.5.2
• Broadcom BCM5709 quad-port 1GbE I/O mezzanine: 5.2.7, A10
• M610 server BIOS: 2.2.3
• M1000e chassis firmware:
Chassis Management Controller (CMC): 2.30
Integrated Dell Remote Access Controller (iDRAC6): 3.02
iKVM: 01.00.01.