vSphere Storage
Update 1
Modified 20 MAR 2018
VMware vSphere 6.5
VMware ESXi 6.5
vCenter Server 6.5
You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

If you have comments about this documentation, submit your feedback to docfeedback@vmware.com

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

Copyright © 2009–2018 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents

About vSphere Storage
Updated Information

1 Introduction to Storage
  Traditional Storage Virtualization Models
  Software-Defined Storage Models
  vSphere Storage APIs

2 Getting Started with a Traditional Storage Model
  Types of Physical Storage
  Supported Storage Adapters
  Datastore Characteristics

3 Overview of Using ESXi with a SAN
  ESXi and SAN Use Cases
  Specifics of Using SAN Storage with ESXi
  ESXi Hosts and Multiple Storage Arrays
  Making LUN Decisions

7 Booting ESXi from Fibre Channel SAN
  Boot from SAN Benefits
  Requirements and Considerations when Booting from Fibre Channel SAN
  Getting Ready for Boot from SAN
  Configure Emulex HBA to Boot from SAN
  Configure QLogic HBA to Boot from SAN

8 Booting ESXi with Software FCoE
  Requirements and Considerations for Software FCoE Boot
  Best Practices for Software FCoE Boot
  Set Up Software FCoE Boot
  Troubleshooting Boot from Software FCoE for an ESXi Host

  Configure Independent Hardware iSCSI Adapter for SAN Boot
  iBFT iSCSI Boot Overview

13 Best Practices for iSCSI Storage
  Preventing iSCSI SAN Problems
  Optimizing iSCSI SAN Storage Performance
  Checking Ethernet Switch Statistics

14 Managing Storage Devices
  Storage Device Characteristics
  Understanding Storage Device Naming
  Storage Rescan Operations
  Identifying Device Connectivity Problems
  Edit Configuration File Parameters
  Enable or Disable the Locator LED on Storage Devices

  Configuring VMFS Pointer Block Cache

18 Understanding Multipathing and Failover
  Failovers with Fibre Channel
  Host-Based Failover with iSCSI
  Array-Based Failover with iSCSI
  Path Failover and Virtual Machines
  Managing Multiple Paths
  VMware Multipathing Module
  Path Scanning and Claiming
  Managing Storage Paths and Multipathing Plug-Ins
  Scheduling Queues for Virtual Machine I/Os

19 Raw Device Mapping
  About Raw Device Mapping

  Virtual Volumes Architecture
  Virtual Volumes and VMware Certificate Authority
  Snapshots and Virtual Volumes
  Before You Enable Virtual Volumes
  Configure Virtual Volumes
  Provision Virtual Machines on Virtual Volumes Datastores
  Virtual Volumes and Replication
  Best Practices for Working with vSphere Virtual Volumes

23 Filtering Virtual Machine I/O
  About I/O Filters
  Using Flash Storage Devices with Cache I/O Filters
  System Requirements for I/O Filters
About vSphere Storage

vSphere Storage describes virtualized and software-defined storage technologies that VMware ESXi™ and VMware vCenter Server® offer, and explains how to configure and use these technologies.

Intended Audience

This information is for experienced system administrators who are familiar with virtual machine and storage virtualization technologies, data center operations, and SAN storage concepts.
Updated Information

This vSphere Storage documentation is updated with each release of the product or when necessary.

This table provides the update history of vSphere Storage.

Revision      Description
20 MAR 2018   Minor revisions.
16 JAN 2018   Minor revisions.
04 OCT 2017   Minor revisions.
EN-002637-00  Initial release.
Introduction to Storage 1 vSphere supports various storage options and functionalities in traditional and software-defined storage environments. A high-level overview of vSphere storage elements and aspects helps you plan a proper storage strategy for your virtual data center.
Internet SCSI (iSCSI)

Internet SCSI (iSCSI) is a SAN transport that can use Ethernet connections between computer systems, or ESXi hosts, and high-performance storage systems. To connect to the storage systems, your hosts use hardware iSCSI adapters or software iSCSI initiators with standard network adapters. See Chapter 10 Using ESXi with iSCSI SAN.

Storage Device or LUN

In the ESXi context, the terms device and LUN are used interchangeably.
vSphere Storage Software-Defined Storage Models In addition to abstracting underlying storage capacities from VMs, as traditional storage models do, software-defined storage abstracts storage capabilities. With the software-defined storage model, a virtual machine becomes a unit of storage provisioning and can be managed through a flexible policy-based mechanism. The model involves the following vSphere technologies.
vSphere Storage vSphere APIs for Storage Awareness Also known as VASA, these APIs, either supplied by third-party vendors or offered by VMware, enable communications between vCenter Server and underlying storage. Through VASA, storage entities can inform vCenter Server about their configurations, capabilities, and storage health and events. In return, VASA can deliver VM storage requirements from vCenter Server to a storage entity and ensure that the storage layer meets the requirements.
Getting Started with a Traditional Storage Model 2

Setting up your ESXi storage in traditional environments includes configuring your storage systems and devices, enabling storage adapters, and creating datastores.
Figure 2‑1. Local Storage

In this example of a local storage topology, the ESXi host uses a single connection to a storage device. On that device, you can create a VMFS datastore, which you use to store virtual machine disk files. Although this storage configuration is possible, it is not a best practice.
vSphere Storage In addition to traditional networked storage that this topic covers, VMware supports virtualized shared storage, such as vSAN. vSAN transforms internal storage resources of your ESXi hosts into shared storage that provides such capabilities as High Availability and vMotion for virtual machines. For details, see the Administering VMware vSAN documentation. Note The same LUN cannot be presented to an ESXi host or multiple hosts through different storage protocols.
vSphere Storage In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches and storage arrays, using a Fibre Channel adapter. LUNs from a storage array become available to the host. You can access the LUNs and create datastores for your storage needs. The datastores use the VMFS format. For specific information on setting up the Fibre Channel SAN, see Chapter 4 Using ESXi with Fibre Channel SAN.
Figure 2‑3. iSCSI Storage

In the left example, the host uses the hardware iSCSI adapter to connect to the iSCSI storage system. In the right example, the host uses a software iSCSI adapter and an Ethernet NIC to connect to the iSCSI storage. iSCSI storage devices from the storage system become available to the host.
Figure 2‑4. NFS Storage

For specific information on setting up NFS storage, see Understanding Network File System Datastores.

Shared Serial Attached SCSI (SAS)

Stores virtual machines on direct-attached SAS storage systems that offer shared access to multiple hosts. This type of access permits multiple hosts to access the same VMFS datastore on a LUN.
Figure 2‑5. Target and LUN Representations

In this illustration, three LUNs are available in each configuration. In one case, the host connects to one target, but that target has three LUNs that can be used. Each LUN represents an individual storage volume. In the other example, the host detects three different targets, each having one LUN.
Figure 2‑6. Virtual machines accessing different types of storage

The figure shows a single ESXi host that uses a local SCSI device, a Fibre Channel HBA connected through the SAN to a Fibre Channel array, software and hardware iSCSI adapters connected through the LAN to an iSCSI array, and an Ethernet NIC connected through the LAN to a NAS appliance, with VMFS and NFS datastores on the corresponding storage.

Note This diagram is for conceptual purposes only. It is not a recommended configuration.
vSphere Storage Table 2‑1. Storage Device Information (Continued) Storage Device Information Description Capacity Total capacity of the storage device. Owner The plug-in, such as the NMP or a third-party plug-in, that the host uses to manage paths to the storage device. For details, see Managing Multiple Paths. Hardware Acceleration Information about whether the storage device assists the host with virtual machine management operations. The status can be Supported, Not Supported, or Unknown.
vSphere Storage Icon Description Attach the selected device to the host. Change the display name of the selected device. Turn on the locator LED for the selected devices. Turn off the locator LED for the selected devices. Mark the selected devices as flash disks. Mark the selected devices as HDD disks. 6 Use tabs under Device Details to access additional information and modify properties for the selected device. Tab Description Properties View device properties and characteristics.
vSphere Storage Icon Description Turn off the locator LED for the selected devices. Mark the selected devices as flash disks. Mark the selected devices as HDD disks. Mark the selected devices as local for the host. Mark the selected devices as remote for the host. Erase partitions on the selected devices. Comparing Types of Storage Whether certain vSphere functionality is supported might depend on the storage technology that you use.
vSphere Storage Supported Storage Adapters Storage adapters provide connectivity for your ESXi host to a specific storage unit or network. ESXi supports different classes of adapters, including SCSI, iSCSI, RAID, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Ethernet. ESXi accesses the adapters directly through device drivers in the VMkernel. Depending on the type of storage you use, you might need to enable and configure a storage adapter on your host.
vSphere Storage 6 Use tabs under Adapter Details to access additional information and modify properties for the selected adapter. Tab Description Properties Review general adapter properties that typically include a name and model of the adapter and unique identifiers formed according to specific storage standards. For iSCSI and FCoE adapters, use this tab to configure additional properties, for example, authentication. Devices View storage devices the adapter can access.
vSphere Storage Table 2‑4. Datastore Information (Continued) Datastore Information Applicable Datastore Type Description Device Backing VMFS Information about underlying storage, such as a storage device on which the datastore is deployed (VMFS), server and folder (NFS), or disk groups (vSAN). NFS vSAN Protocol Endpoints Virtual Volumes Information about corresponding protocol endpoints. See Protocol Endpoints. Extents VMFS Individual extents that the datastore spans and their capacity.
vSphere Storage Table 2‑4. Datastore Information (Continued) Datastore Information Applicable Datastore Type Description Connectivity with Hosts VMFS Hosts where the datastore is mounted. NFS Virtual Volumes Multipathing VMFS Virtual Volumes Path selection policy the host uses to access storage. For more information, see Chapter 18 Understanding Multipathing and Failover. Display Datastore Information Access the Datastores view with the vSphere Web Client navigator.
vSphere Storage 4 Use tabs to access additional information and modify datastore properties. Tab Description Getting Started View introductory information and access basic actions. Summary View statistics and configuration for the selected datastore. Monitor View alarms, performance data, resource allocation, events, and other status information for the datastore. Configure View and modify datastore properties. Menu items that you can see depend on the datastore type.
Overview of Using ESXi with a SAN 3 Using ESXi with a SAN improves flexibility, efficiency, and reliability. Using ESXi with a SAN also supports centralized management, failover, and load balancing technologies. The following are benefits of using ESXi with a SAN: n You can store data securely and configure multiple paths to your storage, eliminating a single point of failure. n Using a SAN with ESXi systems extends failure resistance to the server.
vSphere Storage n Making LUN Decisions n Selecting Virtual Machine Locations n Third-Party Management Applications n SAN Storage Backup Considerations ESXi and SAN Use Cases When used with a SAN, ESXi can benefit from multiple vSphere features, including Storage vMotion, Distributed Resource Scheduler (DRS), High Availability, and so on.
vSphere Storage When you use SAN storage with ESXi, the following considerations apply: n You cannot use SAN administration tools to access operating systems of virtual machines that reside on the storage. With traditional tools, you can monitor only the VMware ESXi operating system. You use the vSphere Web Client to monitor virtual machines. n The HBA visible to the SAN administration tools is part of the ESXi system, not part of the virtual machine.
vSphere Storage You might want more, smaller LUNs for the following reasons: n Less wasted storage space. n Different applications might need different RAID characteristics. n More flexibility, as the multipathing policy and disk shares are set per LUN. n Use of Microsoft Cluster Service requires that each cluster disk resource is in its own LUN. n Better performance because there is less contention for a single volume.
vSphere Storage Selecting Virtual Machine Locations When you are working on optimizing performance for your virtual machines, storage location is an important factor. Depending on your storage needs, you might select storage with high performance and high availability, or storage with lower performance. Storage can be divided into different tiers depending on several factors: n High Tier. Offers high performance and high availability.
vSphere Storage If you run the SAN management software on a virtual machine, you gain the benefits of a virtual machine, including failover with vMotion and VMware HA. Because of the additional level of indirection, however, the management software might not see the SAN. In this case, you can use an RDM. Note Whether a virtual machine can run management software successfully depends on the particular storage system.
vSphere Storage Using Third-Party Backup Packages You can use third-party backup solutions to protect system, application, and user data in your virtual machines. The Storage APIs - Data Protection that VMware offers can work with third-party products. When using the APIs, third-party software can perform backups without loading ESXi hosts with the processing of backup tasks.
Using ESXi with Fibre Channel SAN 4 When you set up ESXi hosts to use FC SAN storage arrays, special considerations are necessary. This section provides introductory information about how to use ESXi with an FC SAN array.
vSphere Storage Generally, a single path from a host to a LUN consists of an HBA, switch ports, connecting cables, and the storage controller port. If any component of the path fails, the host selects another available path for I/O. The process of detecting a failed path and switching to another is called path failover. Ports in Fibre Channel SAN In the context of this document, a port is the connection from a device into the SAN.
vSphere Storage Using Zoning with Fibre Channel SANs Zoning provides access control in the SAN topology. Zoning defines which HBAs can connect to which targets. When you configure a SAN by using zoning, the devices outside a zone are not visible to the devices inside the zone. Zoning has the following effects: n Reduces the number of targets and LUNs presented to a host. n Controls and isolates paths in a fabric.
6 Depending on a port the HBA uses to connect to the fabric, one of the SAN switches receives the request. The switch routes the request to the appropriate storage device.
Configuring Fibre Channel Storage 5 When you use ESXi systems with SAN storage, specific hardware and system requirements exist. This chapter includes the following topics: n ESXi Fibre Channel SAN Requirements n Installation and Setup Steps n N-Port ID Virtualization ESXi Fibre Channel SAN Requirements In preparation for configuring your SAN and setting up your ESXi system to use SAN storage, review the requirements and recommendations.
vSphere Storage n You cannot use multipathing software inside a virtual machine to perform I/O load balancing to a single physical LUN. However, when your Microsoft Windows virtual machine uses dynamic disks, this restriction does not apply. For information about configuring dynamic disks, see Set Up Dynamic Disk Mirroring. Setting LUN Allocations This topic provides general information about how to allocate LUNs when your ESXi works with SAN.
vSphere Storage n ESXi supports 16 GB end-to-end Fibre Channel connectivity. Installation and Setup Steps This topic provides an overview of installation and setup steps that you need to follow when configuring your SAN environment to work with ESXi. Follow these steps to configure your ESXi SAN environment. 1 Design your SAN if it is not already configured. Most existing SANs require only minor modification to work with ESXi. 2 Check that all SAN components meet requirements.
vSphere Storage When a virtual machine has a WWN assigned to it, the virtual machine’s configuration file (.vmx) is updated to include a WWN pair. The WWN pair consists of a World Wide Port Name (WWPN) and a World Wide Node Name (WWNN). When that virtual machine is powered on, the VMkernel instantiates a virtual port (VPORT) on the physical HBA which is used to access the LUN. The VPORT is a virtual HBA that appears to the FC fabric as a physical HBA.
vSphere Storage ESXi with NPIV supports the following items: n NPIV supports vMotion. When you use vMotion to migrate a virtual machine it retains the assigned WWN. If you migrate an NPIV-enabled virtual machine to a host that does not support NPIV, VMkernel reverts to using a physical HBA to route the I/O. n If your FC SAN environment supports concurrent I/O on the disks from an active-active array, the concurrent I/O to two different NPIV ports is also supported.
vSphere Storage What to do next Register newly created WWNs in the fabric. Modify WWN Assignments You can modify WWN assignments for a virtual machine with an RDM. Typically, you do not need to change existing WWN assignments on your virtual machine. In certain circumstances, for example, when manually assigned WWNs are causing conflicts on the SAN, you might need to change or remove WWNs. Prerequisites Make sure to power off the virtual machine if you want to edit the existing WWNs.
Configuring Fibre Channel over Ethernet 6 To access Fibre Channel storage, an ESXi host can use the Fibre Channel over Ethernet (FCoE) protocol. The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage. The host can use 10 Gbit lossless Ethernet to deliver Fibre Channel traffic.
vSphere Storage For the software FCoE adapter, you must properly configure networking and then activate the adapter. Note The number of software FCoE adapters you activate corresponds to the number of physical NIC ports. ESXi supports a maximum of four software FCoE adapters on one host. Configuration Guidelines for Software FCoE When setting up your network environment to work with ESXi software FCoE, follow the guidelines and best practices that VMware offers.
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click Actions > Add Networking. 3 Select VMkernel Network Adapter, and click Next. 4 Select New standard switch to create a vSphere Standard switch. 5 Under Unclaimed Adapters, select the network adapter (vmnic#) that supports FCoE and click Assign. Make sure to assign the adapter to Active Adapters. 6 Enter a network label.
5 On the Add Software FCoE Adapter dialog box, select an appropriate vmnic from the drop-down list of physical network adapters. Only those adapters that are not yet used for FCoE traffic are listed.

6 Click OK.

The software FCoE adapter appears on the list of storage adapters. After you activate the software FCoE adapter, you can view its properties. If you do not use the adapter, you can remove it from the list of adapters.
Booting ESXi from Fibre Channel SAN 7 When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk. ESXi supports booting through a Fibre Channel host bus adapter (HBA) or a Fibre Channel over Ethernet (FCoE) converged network adapter (CNA).
vSphere Storage n Improved management. Creating and managing the operating system image is easier and more efficient. n Better reliability. You can access the boot disk through multiple paths, which protects the disk from being a single point of failure. Requirements and Considerations when Booting from Fibre Channel SAN Your ESXi boot configuration must meet specific requirements. Table 7‑1.
vSphere Storage Configure SAN Components and Storage System Before you set up your ESXi host to boot from a SAN LUN, configure SAN components and a storage system. Because configuring the SAN components is vendor-specific, refer to the product documentation for each item. Procedure 1 Connect network cable, referring to any cabling guide that applies to your setup. Check the switch wiring, if there is any. 2 Configure the storage array.
vSphere Storage Because changing the boot sequence in the BIOS is vendor-specific, refer to vendor documentation for instructions. The following procedure explains how to change the boot sequence on an IBM host. Procedure 1 Power on your system and enter the system BIOS Configuration/Setup Utility. 2 Select Startup Options and press Enter. 3 Select Startup Sequence Options and press Enter. 4 Change the First Startup Device to [CD-ROM]. You can now install ESXi.
vSphere Storage 2 3 To configure the adapter parameters, press ALT+E at the Emulex prompt and follow these steps. a Select an adapter (with BIOS support). b Select 2. Configure This Adapter's Parameters. c Select 1. Enable or Disable BIOS. d Select 1 to enable BIOS. e Select x to exit and Esc to return to the previous menu. To configure the boot device, follow these steps from the Emulex main menu. a Select the same adapter. b Select 1. Configure Boot Devices.
vSphere Storage 5 6 7 Set the BIOS to search for SCSI devices. a In the Host Adapter Settings page, select Host Adapter BIOS. b Press Enter to toggle the value to Enabled. c Press Esc to exit. Enable the selectable boot. a Select Selectable Boot Settings and press Enter. b In the Selectable Boot Settings page, select Selectable Boot. c Press Enter to toggle the value to Enabled. Select the Boot Port Name entry in the list of storage processors (SPs) and press Enter.
Booting ESXi with Software FCoE 8 ESXi supports boot from FCoE capable network adapters. When you install and boot ESXi from an FCoE LUN, the host can use a VMware software FCoE adapter and a network adapter with FCoE capabilities. The host does not require a dedicated FCoE HBA. You perform most configurations through the option ROM of your network adapter. The network adapters must support one of the following formats, which communicate parameters about an FCoE boot device to VMkernel.
vSphere Storage n Support ESXi open FCoE stack. n Contain FCoE boot firmware which can export boot information in FBFT format or FBPT format. Considerations n You cannot change software FCoE boot configuration from within ESXi. n Coredump is not supported on any software FCoE LUNs, including the boot LUN. n Multipathing is not supported at pre-boot. n Boot LUN cannot be shared with other hosts even on shared storage.
vSphere Storage Configure Software FCoE Boot Parameters To support a software FCoE boot process, a network adapter on your host must have a specially configured FCoE boot firmware. When you configure the firmware, you enable the adapter for the software FCoE boot and specify the boot LUN parameters. Procedure u In the option ROM of the network adapter, specify software FCoE boot parameters. These parameters include a boot target, boot LUN, VLAN ID, and so on.
vSphere Storage What to do next If needed, you can rename and modify the VMware_FCoE_vSwitch that the installer automatically created. Make sure that the Cisco Discovery Protocol (CDP) mode is set to Listen or Both. Troubleshooting Boot from Software FCoE for an ESXi Host If the installation or boot of ESXi from a software FCoE LUN fails, you can use several troubleshooting methods. Problem When you install or boot ESXi from FCoE storage, the installation or the boot process fails.
Best Practices for Fibre Channel Storage 9 When using ESXi with Fibre Channel SAN, follow recommendations to avoid performance problems. The vSphere Web Client offers extensive facilities for collecting performance information. The information is graphically displayed and frequently updated. You can also use the resxtop or esxtop command-line utilities. The utilities provide a detailed look at how ESXi uses resources. For more information, see the vSphere Resource Management documentation.
vSphere Storage n Ensure that the Fibre Channel HBAs are installed in the correct slots in the host, based on slot and bus speed. Balance PCI bus load among the available buses in the server. n Become familiar with the various monitor points in your storage network, at all visibility points, including host's performance charts, FC switch statistics, and storage performance statistics. n Be cautious when changing IDs of the LUNs that have VMFS datastores being used by your ESXi host.
vSphere Storage To improve the array performance in the vSphere environment, follow these general guidelines: n When assigning LUNs, remember that several hosts might access the LUN, and that several virtual machines can run on each host. One LUN used by a host can service I/O from many different applications running on different operating systems. Because of this diverse workload, the RAID group containing the ESXi LUNs typically does not include LUNs used by other servers that are not running ESXi.
vSphere Storage n When allocating LUNs or RAID groups for ESXi systems, remember that multiple operating systems use and share that resource. The LUN performance required by the ESXi host might be much higher than when you use regular physical machines. For example, if you expect to run four I/O intensive applications, allocate four times the performance capacity for the ESXi LUNs.
Using ESXi with iSCSI SAN 10 You can use ESXi in conjunction with a storage area network (SAN), a specialized high-speed network that connects computer systems to high-performance storage subsystems. Using ESXi together with a SAN provides storage consolidation, improves reliability, and helps with disaster recovery. To use ESXi effectively with a SAN, you must have a working knowledge of ESXi systems and SAN concepts.
vSphere Storage Generally, a single path from a host to a LUN consists of an iSCSI adapter or NIC, switch ports, connecting cables, and the storage controller port. If any component of the path fails, the host selects another available path for I/O. The process of detecting a failed path and switching to another is called path failover. For more information on multipathing, see Chapter 18 Understanding Multipathing and Failover.
n naming-authority is the reverse syntax of the Internet domain name of the naming authority. For example, the iscsi.vmware.com naming authority can have the iSCSI qualified name form of iqn.1998-01.com.vmware.iscsi. The name indicates that the vmware.com domain name was registered in January of 1998, and iscsi is a subdomain, maintained by vmware.com.

n unique name is any name you want to use, for example, the name of your host.
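Putting the parts together, a complete iSCSI qualified name might look like the following example. The host alias at the end is a placeholder, and the adapter name in the command is only an example; substitute the name of your own adapter.

iqn.1998-01.com.vmware.iscsi:esxi-host-01

To view the iSCSI name and alias that an adapter currently uses:

esxcli iscsi adapter get --adapter=vmhba65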
vSphere Storage This type of adapter can be a card that presents a standard network adapter and iSCSI offload functionality for the same port. The iSCSI offload functionality depends on the host's network configuration to obtain the IP, MAC, and other parameters used for iSCSI sessions. An example of a dependent adapter is the iSCSI licensed Broadcom 5709 NIC. Independent Hardware iSCSI Adapter Implements its own networking and iSCSI configuration and management interfaces.
vSphere Storage Host-based iSCSI initiators establish connections to each target. Storage systems with a single target containing multiple LUNs have traffic to all the LUNs on a single connection. With a system that has three targets with one LUN each, a host uses separate connections to the three LUNs.
vSphere Storage Discovery A discovery session is part of the iSCSI protocol. It returns the set of targets you can access on an iSCSI storage system. The two types of discovery available on ESXi are dynamic and static. Dynamic discovery obtains a list of accessible targets from the iSCSI storage system. Static discovery can access only a particular target by target name and address. For more information, see Configuring Discovery Addresses for iSCSI Adapters.
vSphere Storage Enabling header and data digests does require additional processing for both the initiator and the target and can affect throughput and CPU use performance. Note Systems that use the Intel Nehalem processors offload the iSCSI digest calculations, as a result, reducing the impact on performance. For information on enabling header and data digests, see Configuring Advanced Parameters for iSCSI.
Configuring iSCSI Adapters and Storage 11 Before ESXi can work with a SAN, you must set up your iSCSI adapters and storage. The following table lists the iSCSI adapters (vmhbas) that ESXi supports and indicates whether VMkernel networking configuration is required. Table 11‑1. Supported iSCSI Adapters iSCSI Adapter (vmhba) Description VMkernel Networking Software Uses standard NICs to connect your host to a remote iSCSI target on the IP network.
vSphere Storage n Configuring Discovery Addresses for iSCSI Adapters n Configuring CHAP Parameters for iSCSI Adapters n Configuring Advanced Parameters for iSCSI n iSCSI Session Management ESXi iSCSI SAN Requirements To work properly with a SAN, your ESXi host must meet several requirements. n Verify that your ESXi systems support the SAN storage hardware and firmware. For an up-to-date list, see VMware Compatibility Guide. n Configure your system to have only one VMFS datastore for each LUN.
vSphere Storage n vMotion and VMware DRS. When you use vCenter Server and vMotion or DRS, make sure that the LUNs for the virtual machines are provisioned to all hosts. This configuration provides the greatest freedom in moving virtual machines. n Active-active versus active-passive arrays. When you use vMotion or DRS with an active-passive SAN storage device, make sure that all hosts have consistent paths to all storage processors. Not doing so can cause path thrashing when a vMotion migration occurs.
vSphere Storage Procedure 1 View Independent Hardware iSCSI Adapters View an independent hardware iSCSI adapter and verify that it is correctly installed and ready for configuration. 2 Modify General Properties for iSCSI Adapters You can change the default iSCSI name and alias assigned to your iSCSI adapters. For the independent hardware iSCSI adapters, you can also change the default IP settings.
vSphere Storage Adapter Information Description Model Model of the adapter. iSCSI Name Unique name formed according to iSCSI standards that identifies the iSCSI adapter. You can edit the iSCSI name. iSCSI Alias A friendly name used instead of the iSCSI name. You can edit the iSCSI alias. IP Address Address assigned to the iSCSI HBA. Targets Number of targets accessed through the adapter. Devices All storage devices or LUNs the adapter can access.
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Configure tab. 3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure. 4 Under Adapter Details, click the Network Settings tab and click Edit. 5 In the IPv4 settings section, disable IPv6 or select the method to obtain IP addresses. Note The automatic DHCP option and static option are mutually exclusive. 6 Option Description No IPv4 settings Disable IPv4.
vSphere Storage Prerequisites Required privilege: Host.Configuration.Storage Partition Configuration Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Configure tab. 3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure. 4 Under Adapter Details, click the Targets tab. 5 Configure the discovery method. Discovery Method Dynamic Discovery Description a Click Dynamic Discovery and click Add.
vSphere Storage n If you use a third-party virtual switch, for example Cisco Nexus 1000V DVS, disable automatic pinning. Use manual pinning instead, making sure to connect a VMkernel adapter (vmk) to an appropriate physical NIC (vmnic). For information, refer to your virtual switch vendor documentation. n The Broadcom iSCSI adapter performs data reassembly in hardware, which has a limited buffer space.
vSphere Storage View Dependent Hardware iSCSI Adapters View a dependent hardware iSCSI adapter to verify that it is correctly loaded. If installed, the dependent hardware iSCSI adapter (vmhba#) appears on the list of storage adapters under such category as, for example, Broadcom iSCSI Adapter. If the dependent hardware adapter does not appear on the list of storage adapters, check whether it needs to be licensed. See your vendor documentation.
vSphere Storage 5 (Optional) Modify the following general properties. Option Description iSCSI Name Unique name formed according to iSCSI standards that identifies the iSCSI adapter. If you change the name, make sure that the name you enter is worldwide unique and properly formatted. Otherwise, certain storage devices might not recognize the iSCSI adapter. iSCSI Alias A friendly name you use instead of the iSCSI name. If you change the iSCSI name, it is used for new iSCSI sessions.
vSphere Storage Configure Dynamic or Static Discovery for iSCSI With dynamic discovery, each time the initiator contacts a specified iSCSI storage system, it sends the SendTargets request to the system. The iSCSI system responds by supplying a list of available targets to the initiator. In addition to the dynamic discovery method, you can use static discovery and manually enter information for the targets. When you set up static or dynamic discovery, you can only add new iSCSI targets.
vSphere Storage n Avoid hard coding the name of the software adapter, vmhbaXX, in the scripts. It is possible for the name to change from one ESXi release to another. The change might cause failures of your existing scripts if they use the hardcoded old name. The name change does not affect the behavior of the iSCSI software adapter. Configure the Software iSCSI Adapter The software iSCSI adapter configuration workflow includes these steps.
vSphere Storage 3 Under Storage, click Storage Adapters, and click the Add icon ( ). 4 Select Software iSCSI Adapter and confirm that you want to add the adapter. The software iSCSI adapter (vmhba#) is enabled and appears on the list of storage adapters. After enabling the adapter, the host assigns the default iSCSI name to it. If you need to change the default name, follow iSCSI naming conventions. What to do next Select the adapter and use the Adapter Details section to complete configuration.
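If you prefer the command line, you can also enable the software iSCSI adapter with esxcli. The following is a minimal sketch; the adapter name that the host assigns (vmhba65 in this example) varies by host.

Enable the software iSCSI adapter:

esxcli iscsi software set --enabled=true

Verify that the adapter is enabled and find its vmhba name:

esxcli iscsi software get
esxcli iscsi adapter list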
vSphere Storage Configuring the network connection involves creating a virtual VMkernel adapter for each physical network adapter. You then associate the VMkernel adapter with an appropriate iSCSI adapter. This process is called port binding. For information, see Setting Up iSCSI Network. Configure Dynamic or Static Discovery for iSCSI With dynamic discovery, each time the initiator contacts a specified iSCSI storage system, it sends the SendTargets request to the system.
vSphere Storage Prerequisites Required privilege: Host.Configuration.Storage Partition Configuration Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Configure tab. 3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure. 4 Under Adapter Details, click the Properties tab. 5 Click Disable and confirm that you want to disable the adapter. The status indicates that the adapter is disabled. 6 Reboot the host.
vSphere Storage If you change the iSCSI name, it is used for new iSCSI sessions. For existing sessions, the new settings are not used until you log out and log in again. Setting Up iSCSI Network Software and dependent hardware iSCSI adapters depend on the VMkernel networking. If you use the software or dependent hardware iSCSI adapters, you must configure connections for the traffic between the iSCSI component and the physical network adapters.
vSphere Storage The iSCSI adapter and physical NIC connect through a virtual VMkernel adapter, also called the virtual network adapter or the VMkernel port. You create a VMkernel adapter (vmk) on a vSphere switch (vSwitch) using 1:1 mapping between each virtual and physical network adapter. One way to achieve the 1:1 mapping when you have multiple NICs, is to designate a separate vSphere switch for each virtual-to-physical adapter pair.
Table 11‑2. Networking Configuration for iSCSI

iSCSI Adapters               VMkernel Adapters (Ports)    Physical Adapters (NICs)
Software iSCSI
vmhbaX2                      vmk1                         vmnic1
                             vmk2                         vmnic2
Dependent Hardware iSCSI
vmhbaX3                      vmk1                         vmnic1
vmhbaX4                      vmk2                         vmnic2

Requirements for iSCSI Port Binding

You can use multiple VMkernel adapters bound to iSCSI to have multiple paths to an iSCSI array that broadcasts a single IP address.
VMkernel Ports            Target Portals      iSCSI Sessions
2 bound VMkernel ports    2 target portals    4 sessions (2 x 2)
4 bound VMkernel ports    1 target portal     4 sessions (4 x 1)
2 bound VMkernel ports    4 target portals    8 sessions (2 x 4)

Note Make sure that all target portals are reachable from all VMkernel ports when port binding is used. Otherwise, iSCSI sessions might fail to create. As a result, the rescan operation might take longer than expected.
vSphere Storage In this example, all initiator ports and the target portal are configured in the same subnet. The target is reachable through all bound ports. You have four VMkernel ports and one target portal, so total of four paths are created. Without the port binding, only one path is created. Example 2. Multiple paths with VMkernel ports in different subnets You can create multiple paths by configuring multiple ports and target portals on different IP subnets.
In this figure, vmk1 (192.168.1.1/24) connects through vmnic1 and vmk2 (192.168.1.2/24) connects through vmnic2 to an IP network that reaches SP/Controller A Port 0 (10.115.179.1/24) and SP/Controller B Port 0 (10.115.179.2/24).

Use the following command:

# esxcli network ip route ipv4 add -gateway 192.168.1.253 -network 10.115.179.0/24

Example 2. Using static routes to create multiple paths

In this configuration, you use static routing when using different subnets. You cannot use the port binding with this configuration.
To see gateway information per VMkernel port, use the following command:

# esxcli network ip interface ipv4 address list
Name  IPv4 Address    IPv4 Netmask   IPv4 Broadcast  Address Type  Gateway
----  --------------  -------------  --------------  ------------  --------------
vmk0  10.115.155.122  255.255.252.0  10.115.155.255  DHCP          10.115.155.253
vmk1  10.115.179.209  255.255.252.0  10.115.179.255  DHCP          10.115.179.253
vmk2  10.115.179.146  255.255.252.0  10.115.179.255  DHCP          10.115.179.
vSphere Storage Create a Single VMkernel Adapter for iSCSI Connect the VMkernel, which runs services for iSCSI storage, to a physical network adapter. Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click Actions > Add Networking. 3 Select VMkernel Network Adapter, and click Next. 4 Select New standard switch to create a vSphere Standard switch. 5 Click the Add adapters icon, and select the network adapter (vmnic#) to use for iSCSI.
vSphere Storage 2 Click the Configure tab. 3 Under Networking, click Virtual switches, and select the vSphere switch that you want to modify from the list. 4 Connect additional network adapters to the switch. a Click the Add host networking icon. b Select Physical Network Adapters, and click Next. c Make sure that you are using the existing switch, and click Next. d Click the Add adapters icon, and select one or more network adapters (vmnic#) to use for iSCSI.
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Configure tab. 3 Under Networking, click Virtual switches, and select the vSphere switch that you want to modify from the list. 4 On the vSwitch diagram, select the VMkernel adapter and click the Edit Settings icon. 5 On the Edit Settings wizard, click Teaming and Failover and select Override under Failover Order.
vSphere Storage 3 Under Storage, click Storage Adapters, and select the software or dependent iSCSI adapter to configure from the list. 4 Under Adapter Details, click the Network Port Binding tab and click the Add icon ( ). 5 Select a VMkernel adapter to bind with the iSCSI adapter. Note Make sure that the network policy for the VMkernel adapter is compliant with the binding requirements. You can bind the software iSCSI adapter to one or more VMkernel adapters.
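The same binding can also be created with esxcli. The sketch below assumes a software iSCSI adapter named vmhba65 and a compliant VMkernel adapter vmk1; substitute your own names.

Bind the VMkernel adapter vmk1 to the iSCSI adapter:

esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1

List the VMkernel adapters currently bound to the adapter:

esxcli iscsi networkportal list --adapter=vmhba65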
vSphere Storage n If VMkernel adapters are on the same subnet, they must connect to a single vSwitch. n If you migrate VMkernel adapters to a different vSphere switch, move associated physical adapters. n Do not make configuration changes to iSCSI-bound VMkernel adapters or physical network adapters. n Do not make changes that might break association of VMkernel adapters and physical network adapters.
Table 11‑3. Support of Jumbo Frames

Type of iSCSI Adapters        Jumbo Frames Support
Software iSCSI                Supported
Dependent Hardware iSCSI      Supported. Check with vendor.
Independent Hardware iSCSI    Supported. Check with vendor.

Enable Jumbo Frames for Software and Dependent Hardware iSCSI

To enable Jumbo Frames for software and dependent hardware iSCSI adapters in the vSphere Web Client, change the default value of the maximum transmission units (MTU) parameter.
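As an alternative to the vSphere Web Client, the MTU can be raised from the ESXi Shell. This sketch assumes a standard switch named vSwitch1 and a VMkernel adapter vmk1, and that every physical switch port in the path is already configured for Jumbo Frames; the target IP address is a placeholder.

Raise the MTU on the standard switch and on the iSCSI VMkernel adapter:

esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

Confirm end-to-end Jumbo Frame connectivity to the iSCSI target. A payload of 8972 bytes plus headers produces a 9000-byte frame, and -d prevents fragmentation:

vmkping -I vmk1 -d -s 8972 10.0.0.10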
vSphere Storage Configuring Discovery Addresses for iSCSI Adapters You need to set up target discovery addresses, so that the iSCSI adapter can determine which storage resource on the network is available for access. The ESXi system supports these discovery methods: Dynamic Discovery Also known as SendTargets discovery. Each time the initiator contacts a specified iSCSI server, the initiator sends the SendTargets request to the server.
vSphere Storage 3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure. 4 Under Adapter Details, click the Targets tab. 5 Configure the discovery method. Discovery Method Dynamic Discovery Description a Click Dynamic Discovery and click Add. b Enter the IP address or DNS name of the storage system and click OK. c Rescan the iSCSI adapter.
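Dynamic and static targets can also be added with esxcli. The adapter name, portal address, and target name below are placeholders for your environment.

Add a dynamic (SendTargets) discovery address:

esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=10.0.0.10:3260

Add a static target explicitly by name and address:

esxcli iscsi adapter discovery statictarget add --adapter=vmhba65 --address=10.0.0.10:3260 --name=iqn.1998-01.com.example:storage.lun1

Rescan the adapter so that the new targets are discovered:

esxcli storage core adapter rescan --adapter=vmhba65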
vSphere Storage ESXi supports CHAP authentication at the adapter level. In this case, all targets receive the same CHAP name and secret from the iSCSI initiator. For software and dependent hardware iSCSI adapters, ESXi also supports per-target CHAP authentication, which allows you to configure different credentials for each target to achieve greater level of security.
vSphere Storage Table 11‑4. CHAP Security Level (Continued) CHAP Security Level Description Supported Use unidirectional CHAP The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. Software iSCSI Dependent hardware iSCSI Independent hardware iSCSI Use bidirectional CHAP The host and the target support bidirectional CHAP.
vSphere Storage n To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator name and enter a name in the Name text box. 5 Enter an outgoing CHAP secret to be used as part of authentication. Use the same secret that you enter on the storage side. 6 If configuring bidirectional CHAP, specify incoming CHAP credentials. Make sure to use different secrets for the outgoing and incoming CHAP. 7 Click OK. 8 Rescan the iSCSI adapter.
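Adapter-level CHAP can also be configured from the command line. The following is a sketch only; the CHAP name and secret are placeholders, the adapter name is an example, and the command requires unidirectional CHAP for all targets reached through that adapter.

esxcli iscsi adapter auth chap set --adapter=vmhba65 --direction=uni --level=required --authname=chap_user --secret=chap_secret

Review the resulting CHAP settings:

esxcli iscsi adapter auth chap get --adapter=vmhba65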
vSphere Storage 5 Specify the outgoing CHAP name. Make sure that the name you specify matches the name configured on the storage side. n To set the CHAP name to the iSCSI adapter name, select Use initiator name. n To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator name and enter a name in the Name text box. 6 Enter an outgoing CHAP secret to be used as part of authentication. Use the same secret that you enter on the storage side.
vSphere Storage Configuring Advanced Parameters for iSCSI You might need to configure additional parameters for your iSCSI initiators. For example, some iSCSI storage systems require ARP (Address Resolution Protocol) redirection to move iSCSI traffic dynamically from one port to another. In this case, you must activate the ARP redirection on your host. The following table lists advanced iSCSI parameters that you can configure using the vSphere Web Client.
vSphere Storage Table 11‑5. Additional Parameters for iSCSI Initiators (Continued) Advanced Parameter Description Configurable On No-Op Timeout Specifies the amount of time, in seconds, that can lapse before your host receives a NOP-In message. The iSCSI target sends the message in response to the NOP-Out request. When the no-op timeout limit is exceeded, the initiator ends the current session and starts a new one.
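The advanced parameters can also be inspected and, where supported, changed per adapter with esxcli. The adapter name, key, and value below are examples only; do not change advanced settings unless you are directed to do so by VMware support or your storage vendor.

List the advanced iSCSI parameters and their current values for an adapter:

esxcli iscsi adapter param get --adapter=vmhba65

Change a parameter, for example the NoopOutTimeout value:

esxcli iscsi adapter param set --adapter=vmhba65 --key=NoopOutTimeout --value=30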
vSphere Storage iSCSI Session Management To communicate with each other, iSCSI initiators and targets establish iSCSI sessions. You can review and manage iSCSI sessions using vSphere CLI. By default, software iSCSI and dependent hardware iSCSI initiators start one iSCSI session between each initiator port and each target port. If your iSCSI initiator or target has more than one port, your host can have multiple sessions established.
vSphere Storage Procedure u To list iSCSI sessions, run the following command: esxcli --server=server_name iscsi session list The command takes these options: Option Description -A|--adapter=str The iSCSI adapter name, for example, vmhba34. -s|--isid=str The iSCSI session identifier. -n|--name=str The iSCSI target name, for example, iqn.X. Add iSCSI Sessions Use the vCLI to add an iSCSI session for a target you specify or to duplicate an existing session.
vSphere Storage In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported. For a list of connection options, see Getting Started with vSphere CommandLine Interfaces. Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces.
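As a sketch of what the add operation looks like, the following commands start a session to a specified target and duplicate an existing session. The adapter name, target name, and session identifier are placeholders.

Start a new session to a target through the adapter vmhba65:

esxcli --server=server_name iscsi session add --adapter=vmhba65 --name=iqn.1998-01.com.example:storage.lun1

Duplicate an existing session by also supplying its session identifier (ISID):

esxcli --server=server_name iscsi session add --adapter=vmhba65 --name=iqn.1998-01.com.example:storage.lun1 --isid=00023d000001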
Booting from iSCSI SAN 12 When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk. You can use boot from the SAN if you do not want to handle maintenance of local storage or have diskless hardware configurations, such as blade systems. ESXi supports different methods of booting from the iSCSI SAN. Table 12‑1.
vSphere Storage n Configure proper ACLs on your storage system. n The boot LUN must be visible only to the host that uses the LUN. No other host on the SAN is permitted to see that boot LUN. n n If a LUN is used for a VMFS datastore, multiple hosts can share the LUN. Configure a diagnostic partition. n With the independent hardware iSCSI only, you can place the diagnostic partition on the boot LUN.
vSphere Storage Configure Independent Hardware iSCSI Adapter for SAN Boot If your ESXi host uses an independent hardware iSCSI adapter, such as QLogic HBA, you can configure the adapter to boot from the SAN. This procedure discusses how to enable the QLogic iSCSI HBA to boot from the SAN. For more information and more up-to-date details about QLogic adapter configuration settings, see the QLogic website.
vSphere Storage 3 Select Primary Boot Device Settings. a Enter the discovery Target IP and Target Port. b Configure the Boot LUN and iSCSI Name parameters. n If only one iSCSI target and one LUN are available at the target address, leave Boot LUN and iSCSI Name blank. After your host reaches the target storage system, these text boxes are populated with appropriate information. n c 4 If more than one iSCSI target and LUN are available, supply values for Boot LUN and iSCSI Name. Save changes.
vSphere Storage 3 After the successful connection, the iSCSI boot firmware writes the networking and iSCSI boot parameters in to the iBFT. The firmware stores the table in the system memory. Note The system uses this table to configure its own iSCSI connection and networking and to start up. 4 The BIOS boots the boot device. 5 The VMkernel starts loading and takes over the boot operation. 6 Using the boot parameters from the iBFT, the VMkernel connects to the iSCSI target.
vSphere Storage 3 Install ESXi to iSCSI Target When setting up your host to boot from iBFT iSCSI, install the ESXi image to the target LUN. 4 Boot ESXi from iSCSI Target After preparing the host for an iBFT iSCSI boot and copying the ESXi image to the iSCSI target, perform the actual boot. Configure iSCSI Boot Parameters To begin an iSCSI boot process, a network adapter on your host must have a specially configured iSCSI boot firmware.
vSphere Storage Install ESXi to iSCSI Target When setting up your host to boot from iBFT iSCSI, install the ESXi image to the target LUN. Prerequisites n Configure iSCSI boot firmware on your boot NIC to point to the target LUN that you want to use as the boot LUN. n Change the boot sequence in the BIOS so that iSCSI precedes the DVD-ROM. n If you use Broadcom adapters, set Boot to iSCSI target to Disabled. Procedure 1 Insert the installation media in the CD/DVD-ROM drive and restart the host.
vSphere Storage Shared iSCSI and Management Networks Configure the networking and iSCSI parameters on the first network adapter on the host. After the host boots, you can add secondary network adapters to the default port group. Isolated iSCSI and Management Networks When you configure isolated iSCSI and management networks, follow these guidelines to avoid bandwidth problems. n Your isolated networks must be on different subnets.
vSphere Storage Problem A loss of network connectivity occurs after you delete a port group. Cause When you specify a gateway in the iBFT-enabled network adapter during ESXi installation, this gateway becomes the system's default gateway. If you delete the port group associated with the network adapter, the system's default gateway is lost. This action causes the loss of network connectivity. Solution Do not set an iBFT gateway unless it is required.
Best Practices for iSCSI Storage 13 When using ESXi with the iSCSI SAN, follow recommendations that VMware offers to avoid problems. Check with your storage representative if your storage system supports Storage API - Array Integration hardware acceleration features. If it does, refer to your vendor documentation to enable hardware acceleration support on the storage system side. For more information, see Chapter 24 Storage Hardware Acceleration.
vSphere Storage n Change LUN IDs only when VMFS datastores deployed on the LUNs have no running virtual machines. If you change the ID, virtual machines running on the VMFS datastore might fail. After you change the ID of the LUN, you must rescan your storage to reset the ID on your host. For information on using the rescan, see Storage Rescan Operations. n If you change the default iSCSI name of your iSCSI adapter, make sure that the name you enter is worldwide unique and properly formatted.
vSphere Storage Each server application must have access to its designated storage with the following conditions: n High I/O rate (number of I/O operations per second) n High throughput (megabytes per second) n Minimal latency (response times) Because each application has different requirements, you can meet these goals by selecting an appropriate RAID group on the storage system.
vSphere Storage Figure 13‑1. Single Ethernet Link Connection to Storage When systems read data from storage, the storage responds with sending enough data to fill the link between the storage systems and the Ethernet switch. It is unlikely that any single system or virtual machine gets full use of the network speed. However, this situation can be expected when many systems share one storage device. When writing data to storage, multiple systems or virtual machines might attempt to fill their links.
vSphere Storage If the transactions are large and multiple servers are sending data through a single switch port, an ability to buffer can be exceeded. In this case, the switch drops the data it cannot send, and the storage system must request a retransmission of the dropped packet. For example, if an Ethernet switch can buffer 32 KB, but the server sends 256 KB to the storage device, some of the data is dropped.
vSphere Storage Using VLANs or VPNs does not provide a suitable solution to the problem of link oversubscription in shared configurations. VLANs and other virtual partitioning of a network provide a way of logically designing a network. However, they do not change the physical capabilities of links and trunks between switches. When storage traffic and other network traffic share physical connections, oversubscription and lost packets might become possible.
Managing Storage Devices 14

Manage local and networked storage devices that your ESXi host has access to.
vSphere Storage Table 14‑1. Storage Device Information (Continued) Storage Device Information Description Drive Type Information about whether the device is a flash drive or a regular HDD drive. For information about flash drives, see Chapter 15 Working with Flash Devices. Transport Transportation protocol your host uses to access the device. The protocol depends on the type of storage being used. See Types of Physical Storage. Capacity Total capacity of the storage device.
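The same device information is available from the ESXi Shell. This is a minimal sketch; the device identifier in the second command is a placeholder.

List all storage devices visible to the host, including their type, transport, and size:

esxcli storage core device list

Show the details of a single device by its identifier:

esxcli storage core device list --device=naa.600508e000000000abcdef0123456789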
vSphere Storage Icon Description Refresh information about storage adapters, topology, and file systems. Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores. Detach the selected device from the host. Attach the selected device to the host. Change the display name of the selected device. Turn on the locator LED for the selected devices. Turn off the locator LED for the selected devices. Mark the selected devices as flash disks.
vSphere Storage Icon Description Attach the selected device to the host. Change the display name of the selected device. Turn on the locator LED for the selected devices. Turn off the locator LED for the selected devices. Mark the selected devices as flash disks. Mark the selected devices as HDD disks. Mark the selected devices as local for the host. Mark the selected devices as remote for the host. Erase partitions on the selected devices.
vSphere Storage vmhbaAdapter:CChannel:TTarget:LLUN n vmhbaAdapter is the name of the storage adapter. The name refers to the physical adapter on the host, not to the SCSI controller used by the virtual machines. n CChannel is the storage channel number. Software iSCSI adapters and dependent hardware adapters use the channel number to show multiple paths to the same target. n TTarget is the target number.
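For example, the runtime name vmhba1:C0:T3:L1 refers to LUN 1 on target 3, reached through storage adapter vmhba1 and channel 0. The path listing below is a sketch with a placeholder device identifier; its output includes the runtime name of each path to the device.

esxcli storage core path list --device=naa.600508e000000000abcdef0123456789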
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Configure tab. 3 Under Storage, click Storage Devices. 4 Select the device to rename and click Rename. 5 Change the device name to a friendly name. Storage Rescan Operations When you perform storage management tasks or make changes in the SAN configuration, you might need to rescan your storage.
Procedure 1 In the vSphere Web Client object navigator, browse to a host, a cluster, a data center, or a folder that contains hosts. 2 From the right-click menu, select Storage > Rescan Storage. 3 Specify the extent of the rescan. Option Description Scan for New Storage Devices Rescan all adapters to discover new storage devices. If new devices are discovered, they appear in the device list.
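A rescan can also be started from the command line of an individual host. The following is a sketch only; the adapter name vmhba1 is a placeholder:
esxcli storage core adapter rescan --all
esxcli storage core adapter rescan --adapter=vmhba1
The first command rescans all storage adapters on the host, and the second rescans a single adapter.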
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Configure tab. 3 Under System, click Advanced System Settings. 4 In the Advanced System Settings table, select Disk.MaxLUN and click the Edit icon. 5 Change the existing value to the value of your choice, and click OK. The value you enter specifies the LUN ID that is after the last one you want to discover. For example, to discover LUN IDs from 1 through 100, set Disk.MaxLUN to 101.
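If you prefer the command line, the same parameter can typically be changed with esxcli. The following sketch sets Disk.MaxLUN so that LUN IDs 1 through 100 are discovered, and then displays the resulting value; verify the option path on your host before applying it:
esxcli system settings advanced set -o /Disk/MaxLUN -i 101
esxcli system settings advanced list -o /Disk/MaxLUN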
vSphere Storage The vSphere Web Client displays the following information for the device: n The operational state of the device changes to Lost Communication. n All paths are shown as Dead. n Datastores on the device are not available. If no open connections to the device exist, or after the last connection closes, the host removes the PDL device and all paths to the device. You can disable the automatic removal of paths by setting the advanced host parameter Disk.AutoremoveOnPDL to 0.
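As an illustration, an equivalent command-line change might look as follows. The option path /Disk/AutoremoveOnPDL is assumed here; confirm it on your ESXi build before use. The first command disables the automatic removal of PDL devices, and the second restores the default behavior:
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 1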
Task Description
Migrate virtual machines from the device you plan to detach. See the vCenter Server and Host Management documentation.
Unmount the datastore deployed on the device. See Unmount Datastores.
Detach the storage device. See Detach Storage Devices.
For an iSCSI device with a single LUN per target, delete the static target entry from each iSCSI HBA that has a path to the storage device. See Remove Dynamic or Static iSCSI Targets.
vSphere Storage 4 Select the detached storage device and click the Attach icon. The device becomes accessible. Recovering from PDL Conditions An unplanned permanent device loss (PDL) condition occurs when a storage device becomes permanently unavailable without being properly detached from the ESXi host. The following items in the vSphere Web Client indicate that the device is in the PDL state: n The datastore deployed on the device is unavailable.
vSphere Storage By default, the APD timeout is set to 140 seconds. This value is typically longer than most devices require to recover from a connection loss. If the device becomes available within this time, the host and its virtual machine continue to run without experiencing any problems. If the device does not recover and the timeout ends, the host stops its attempts at retries and stops any non-virtual machine I/O. Virtual machine I/O continues retrying.
vSphere Storage The timeout period begins immediately after the device enters the APD state. After the timeout ends, the host marks the APD device as unreachable. The host stops its attempts to retry any I/O that is not coming from virtual machines. The host continues to retry virtual machine I/O. By default, the timeout parameter on your host is set to 140 seconds.
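The timeout is typically exposed as the Misc.APDTimeout advanced setting. As a hedged example, the following commands display the current value and then raise it to 180 seconds; validate the option path and the supported range for your environment before making the change:
esxcli system settings advanced list -o /Misc/APDTimeout
esxcli system settings advanced set -o /Misc/APDTimeout -i 180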
vSphere Storage Device Connectivity Problems and High Availability When a device enters a Permanent Device Loss (PDL) or an All Paths Down (APD) state, vSphere High Availability (HA) can detect connectivity problems and provide automated recovery for affected virtual machines. vSphere HA uses VM Component Protection (VMCP) to protect virtual machines running on a host in a vSphere HA cluster against accessibility failures.
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Configure tab. 3 Under Storage, click Storage Devices. 4 From the list of storage devices, select one or more disks and enable or disable the locator LED indicator. Option Description Enable Click the Turns on the locator LED icon. Disable Click the Turns off the locator LED icon.
Working with Flash Devices 15 In addition to the regular storage hard disk drives (HDDs), ESXi supports flash storage devices. Unlike the regular HDDs that are electromechanical devices containing moving parts, the flash devices use semiconductors as their storage medium and have no moving parts. Typically, the flash devices are resilient and provide faster access to data. To detect flash devices, ESXi uses an inquiry mechanism based on T10 standards.
Table 15‑1. Using Flash Devices with ESXi
vSAN: vSAN requires flash devices. For more information, see the Administering VMware vSAN documentation.
VMFS Datastores: You can create VMFS datastores on flash devices. Use the datastores for the following purposes:
n Store virtual machines. Certain guest operating systems can identify virtual disks stored on these datastores as flash virtual disks. See Identifying Flash Virtual Disks.
Virtual Flash Resource (VFFS)
However, ESXi might not recognize certain storage devices as flash devices when their vendors do not support automatic flash device detection. In other cases, certain devices might not be detected as local, and ESXi marks them as remote. When devices are not recognized as local flash devices, they are excluded from the list of devices offered for vSAN or the virtual flash resource. Marking these devices as local flash makes them available for vSAN and the virtual flash resource.
vSphere Storage 2 Click the Configure tab. 3 Under Storage, click Storage Devices. 4 From the list of storage devices, select one or several remote devices to mark as local and click the All Actions icon. 5 Click Mark as Local, and click Yes to save your changes. Monitor Flash Devices You can monitor certain critical flash device parameters, including Media Wearout Indicator, Temperature, and Reallocated Sector Count, from an ESXi host. Use the esxcli command to monitor flash devices.
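For example, the SMART attributes of a flash device can be read with the following command; the device identifier is a placeholder for the ID of your device:
esxcli storage core device smart get -d=flash_device_ID
The output typically includes parameters such as Media Wearout Indicator, Temperature, and Reallocated Sector Count.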
vSphere Storage Prerequisites Note the number of days passed since the last reboot of your ESXi host. For example, ten days. Procedure 1 Obtain the total number of blocks written to the flash device since the last reboot. Run the esxcli storage core device stats get -d=device_ID command. For example: ~ # esxcli storage core device stats get -d t10.xxxxxxxxxxxxxxx Device: t10.
vSphere Storage The following vSphere functionalities require the virtual flash resource: n Virtual machine read cache. See Chapter 16 About VMware vSphere Flash Read Cache. n Host swap cache. See Configure Host Swap Cache with Virtual Flash Resource. n I/O caching filters, if required by your vendors. See Chapter 23 Filtering Virtual Machine I/O. Before setting up the virtual flash resource, make sure that you use devices approved by the VMware Compatibility Guide.
vSphere Storage 3 Under Virtual Flash, select Virtual Flash Resource Management and click Add Capacity. 4 From the list of available flash devices, select one or more devices to use for the virtual flash resource and click OK. Under certain circumstances, you might not be able to see flash devices on the list. For more information, see the Troubleshooting Flash Devices section in the vSphere Troubleshooting documentation. The virtual flash resource is created.
4 Select the setting to change and click the Edit button.
VFLASH.VFlashResourceUsageThreshold: The system triggers the Host vFlash resource usage alarm when virtual flash resource usage exceeds this threshold.
VFLASH.MaxResourceGBForVmCache: An ESXi host stores Flash Read Cache metadata in RAM. The default limit of total virtual machine cache size on the host is 2 TB. You can reconfigure this setting. You must restart the host for the new setting to take effect.
vSphere Storage 5 To enable the host swap cache on a per-datastore basis, select the Allocate space for host cache check box. By default, maximum available space is allocated for the host cache. 6 (Optional) To change the host cache size, select Custom size and make appropriate adjustments. 7 Click OK. Configure Host Swap Cache with Virtual Flash Resource You can reserve a certain amount of virtual flash resource for host swap cache. Prerequisites Set up a virtual flash resource.
About VMware vSphere Flash Read Cache 16 Flash Read Cache™ can accelerate virtual machine performance by using host-resident flash devices as a cache. You can reserve a Flash Read Cache for any individual virtual disk. The Flash Read Cache is created only when a virtual machine is powered on. It is discarded when a virtual machine is suspended or powered off. When you migrate a virtual machine, you can migrate the cache.
vSphere Storage This chapter includes the following topics: n DRS Support for Flash Read Cache n vSphere High Availability Support for Flash Read Cache n Configure Flash Read Cache for a Virtual Machine n Migrate Virtual Machines with Flash Read Cache DRS Support for Flash Read Cache DRS supports virtual flash as a resource. DRS manages virtual machines with Flash Read Cache reservations. Every time DRS runs, it displays the available virtual flash capacity reported by the ESXi host.
Procedure 1 Navigate to the virtual machine. 2 Right-click the virtual machine and select Edit Settings. 3 On the Virtual Hardware tab, expand Hard disk to view the disk menu items. 4 To enable Flash Read Cache for the virtual machine, enter a value in the Virtual Flash Read Cache text box. 5 Click Advanced to specify the following parameters.
Reservation: Select a cache size reservation.
Block Size: Select a block size.
6 Click OK.
4 Specify a migration setting for all virtual disks configured with virtual Flash Read Cache. This migration parameter does not appear if you change only the datastore and do not change the host.
Flash Read Cache Migration Settings:
Always migrate the cache contents: Virtual machine migration proceeds only if all of the cache contents can be migrated to the destination host.
Working with Datastores 17 Datastores are logical containers, analogous to file systems, that hide specifics of physical storage and provide a uniform model for storing virtual machine files. Datastores can also be used for storing ISO images, virtual machine templates, and floppy images.
vSphere Storage Table 17‑1. Types of Datastores Datastore Type Description VMFS (version 3, 5, and 6) Datastores that you deploy on block storage devices use the vSphere Virtual Machine File System (VMFS) format. VMFS is a special high-performance file system format that is optimized for storing virtual machines. See Understanding VMFS Datastores. NFS (version 3 and 4.1) An NFS client built into ESXi uses the Network File System (NFS) protocol over TCP/IP to access a designated NFS volume.
vSphere Storage You can increase the capacity of the datastore while the virtual machines are running on the datastore. This ability lets you add new space to your VMFS datastores as your virtual machine requires it. VMFS is designed for concurrent access from multiple physical machines and enforces the appropriate access controls on the virtual machine files. Versions of VMFS Datastores Several versions of the VMFS file system have been released since its introduction.
Table 17‑3. Comparing VMFS5 and VMFS6 (Continued)
Support for virtual machines with large capacity virtual disks, or disks greater than 2 TB: VMFS5 Yes, VMFS6 Yes
Support of small files of 1 KB: VMFS5 Yes, VMFS6 Yes
Default use of ATS-only locking mechanisms on storage devices that support ATS (see VMFS Locking Mechanisms): VMFS5 Yes, VMFS6 Yes
Block size: VMFS5 Standard 1 MB, VMFS6 Standard 1 MB
Default snapshots: VMFS5 VMFSsparse for virtual disks smaller than 2 TB.
vSphere Storage Device Sector Formats and VMFS Versions ESXi supports storage devices with traditional and advanced sector formats. In storage, a sector is a subdivision of a track on a storage disk or device. Each sector stores a fixed amount of data. Traditional 512n storage devices have been using a native 512-bytes sector size. In addition, due to the increasing demand for larger capacities, the storage industry has introduced advanced formats, such as 512-byte emulation, or 512e.
vSphere Storage When you run multiple virtual machines, VMFS provides specific locking mechanisms for the virtual machine files. As a result, the virtual machines can operate safely in a SAN environment where multiple ESXi hosts share the same VMFS datastore. In addition to the virtual machines, the VMFS datastores can store other files, such as the virtual machine templates and ISO images.
vSphere Storage VMFS Metadata Updates A VMFS datastore holds virtual machine files, directories, symbolic links, RDM descriptor files, and so on. The datastore also maintains a consistent view of all the mapping information for these objects. This mapping information is called metadata. Metadata is updated each time you perform datastore or virtual machine management operations.
vSphere Storage ATS+SCSI Mechanism A VMFS datastore that supports the ATS+SCSI mechanism is configured to use ATS and attempts to use it when possible. If ATS fails, the VMFS datastore reverts to SCSI reservations. In contrast with the ATS locking, the SCSI reservations lock an entire storage device while an operation that requires metadata protection is performed. After the operation completes, VMFS releases the reservation and other operations can continue.
vSphere Storage Table 17‑4. VMFS Locking Information (Continued) Fields Values Descriptions ATS upgrade pending The datastore is in the process of an online upgrade to the ATS-only mode. ATS downgrade pending The datastore is in the process of an online downgrade to the ATS+SCSI mode. ATS Compatible Indicates whether the datastore can be or cannot be configured for the ATS-only mode. ATS Upgrade Modes Indicates the type of upgrade that the datastore supports.
vSphere Storage Prepare for an Upgrade to ATS-Only Locking You must perform several steps to prepare your environment for an online or offline upgrade to ATS-only locking. Procedure 1 Upgrade all hosts that access the VMFS5 datastore to the newest version of vSphere. 2 Determine whether the datastore is eligible for an upgrade of its current locking mechanism by running the esxcli storage vmfs lockmode list command. The following sample output indicates that the datastore is eligible for an upgrade.
2 For an online upgrade, perform additional steps.
a Close the datastore on all hosts that have access to the datastore, so that the hosts can recognize the change. You can use one of the following methods:
n Unmount and mount the datastore.
n Put the datastore into maintenance mode and exit maintenance mode.
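To switch a datastore to ATS-only locking from the command line, the esxcli storage vmfs lockmode set command is typically used. The following is a hedged sketch; the volume label my_datastore is a placeholder, and the option names should be verified against your ESXi build:
esxcli storage vmfs lockmode set --ats --volume-label=my_datastore
Running esxcli storage vmfs lockmode list afterward shows whether the datastore now reports the ATS-only mode.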
VMFSsparse is implemented on top of VMFS. The VMFSsparse layer processes I/Os issued to a snapshot VM. Technically, VMFSsparse is a redo-log that starts empty immediately after a VM snapshot is taken. The redo-log can expand to the size of its base vmdk if the entire vmdk is rewritten with new data after the snapshot is taken. This redo-log is a file in the VMFS datastore. Upon snapshot creation, the base vmdk attached to the VM is changed to the newly created sparse vmdk.
NFS Protocols and ESXi ESXi supports NFS protocols version 3 and 4.1. To support both versions, ESXi uses two different NFS clients. Comparing Versions of NFS Clients The following table lists capabilities that NFS versions 3 and 4.1 support. Characteristics NFS version 3 NFS version 4.
vSphere Storage NFS 4.1 and Fault Tolerance Virtual machines on NFS v4.1 support the new Fault Tolerance mechanism introduced in vSphere 6.0. Virtual machines on NFS v4.1 do not support the old, legacy Fault Tolerance mechanism. In vSphere 6.0, the newer Fault Tolerance mechanism can accommodate symmetric multiprocessor (SMP) virtual machines with up to four vCPUs. Earlier versions of vSphere used a different technology for Fault Tolerance, with different requirements and characteristics.
vSphere Storage n NFS Security With NFS 3 and NFS 4.1, ESXi supports the AUTH_SYS security. In addition, for NFS 4.1, the Kerberos security mechanism is supported. n NFS Multipathing While NFS 3 with ESXi does not provide multipathing support, NFS 4.1 supports multiple paths. n NFS and Hardware Acceleration Virtual disks created on NFS datastores are thin-provisioned by default.
vSphere Storage n ESXi supports Layer 2 and Layer 3 Network switches. If you use Layer 3 switches, ESXi hosts and NFS storage arrays must be on different subnets and the network switch must handle the routing information. n Configure a VMkernel port group for NFS storage. You can create the VMkernel port group for IP storage on an existing virtual switch (vSwitch) or on a new vSwitch. The vSwitch can be a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS).
vSphere Storage NFS 3 uses one TCP connection for I/O. As a result, ESXi supports I/O on only one IP address or hostname for the NFS server, and does not support multiple paths. Depending on your network infrastructure and configuration, you can use the network stack to configure multiple connections to the storage targets. In this case, you must have multiple datastores, each datastore using separate network connections between the host and the storage. NFS 4.
vSphere Storage Supported services, including NFS, are described in a rule set configuration file in the ESXi firewall directory /etc/vmware/firewall/. The file contains firewall rules and their relationships with ports and protocols. The behavior of the NFS Client rule set (nfsClient) is different from other rule sets. For more information about firewall configurations, see the vSphere Security documentation.
vSphere Storage Verify Firewall Ports for NFS Clients To enable access to NFS storage, ESXi automatically opens firewall ports for the NFS clients when you mount an NFS datastore. For troubleshooting reasons, you might need to verify that the ports are open. Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Configure tab. 3 Under System, click Security Profile, and click Edit. 4 Scroll down to an appropriate version of NFS to make sure that the port is opened.
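The same check can be performed from the ESXi command line. As a sketch, the following commands show whether the NFS client rule sets are enabled and which ports they allow. The rule set IDs nfsClient and nfs41Client are the usual names for NFS 3 and NFS 4.1, but confirm them on your host:
esxcli network firewall ruleset list --ruleset-id=nfsClient
esxcli network firewall ruleset list --ruleset-id=nfs41Client
esxcli network firewall ruleset rule list --ruleset-id=nfsClient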
vSphere Storage The ESXi implementation of Kerberos for NFS 4.1 provides two security models, krb5 and krb5i, that offer different levels of security. n Kerberos for authentication only (krb5) supports identity verification. n Kerberos for authentication and data integrity (krb5i), in addition to identity verification, provides data integrity services. These services help to protect the NFS traffic from tampering by checking data packets for any potential modifications.
vSphere Storage Prerequisites n Familiarize yourself with the guidelines in NFS Storage Guidelines and Requirements. n For details on configuring NFS storage, consult your storage vendor documentation. n If you use Kerberos, make sure that AES256-CTS-HMAC-SHA1-96 or AES128-CTS-HMAC-SHA1-96 are enabled on the NAS server. Procedure 1 On the NFS server, configure an NFS volume and export it to be mounted on the ESXi hosts.
vSphere Storage Procedure 1 Configure DNS for NFS 4.1 with Kerberos When you use NFS 4.1 with Kerberos, you must change the DNS settings on ESXi hosts. The settings must point to the DNS server that is configured to hand out DNS records for the Kerberos Key Distribution Center (KDC). For example, use the Active Directory server address if AD is used as a DNS server. 2 Configure Network Time Protocol for NFS 4.1 with Kerberos If you use NFS 4.
Procedure 1 Select the host in the vSphere inventory. 2 Click the Configure tab. 3 Under System, select Time Configuration. 4 Click Edit and set up the NTP server.
a Select Use Network Time Protocol (Enable NTP client).
b Set the NTP Service Startup Policy.
c To synchronize with the NTP server, enter its IP addresses.
d Click Start or Restart in the NTP Service Status section.
5 Click OK.
The host synchronizes with the NTP server.
vSphere Storage Creating Datastores You use the New Datastore wizard to create your datastores. Depending on the type of your storage and storage needs, you can create a VMFS, NFS, or Virtual Volumes datastore. A vSAN datastore is automatically created when you enable vSAN. For information, see the Administering VMware vSAN documentation. You can also use the New Datastore wizard to manage VMFS datastore copies. n Create a VMFS Datastore VMFS datastores serve as repositories for virtual machines.
5 Select the device to use for your datastore. Important The device you select must not have any values displayed in the Snapshot Volume column. If a value is present, the device contains a copy of an existing VMFS datastore. For information on managing datastore copies, see Managing Duplicate VMFS Datastores.
6 Specify the datastore version.
VMFS6: This option is the default for 512e storage devices. The ESXi hosts of version 6.
vSphere Storage Create an NFS Datastore You can use the New Datastore wizard to mount an NFS volume. Prerequisites n Set up NFS storage environment. n If you plan to use Kerberos authentication with the NFS 4.1 datastore, make sure to configure the ESXi hosts for Kerberos authentication. Procedure 1 In the vSphere Web Client navigator, select Global Inventory Lists > Datastores. 2 Click the New Datastore icon.
vSphere Storage Create a Virtual Volumes Datastore You use the New Datastore wizard to create a Virtual Volumes datastore. Procedure 1 In the vSphere Web Client navigator, select Global Inventory Lists > Datastores. 2 Click the New Datastore icon. 3 Specify the placement location for the datastore. 4 Select VVol as the datastore type. 5 From the list of storage containers, select a backing storage container and type the datastore name.
vSphere Storage ESXi can detect the VMFS datastore copy and display it in the vSphere Web Client. You can mount the datastore copy with its original UUID or change the UUID. The process of changing the UUID is called the datastore resignaturing. Whether you select resignaturing or mounting without resignaturing depends on how the LUNs are masked in the storage environment. If your hosts can see both copies of the LUN, then resignaturing is the optimal method.
vSphere Storage When you perform datastore resignaturing, consider the following points: n Datastore resignaturing is irreversible. n After resignaturing, the storage device replica that contained the VMFS copy is no longer treated as a replica. n A spanned datastore can be resignatured only if all its extents are online. n The resignaturing process is fault tolerant. If the process is interrupted, you can resume it later.
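If you prefer the command line, detected VMFS copies can be listed and resignatured with esxcli. The following is a hedged sketch; the volume label is a placeholder, and the commands should be verified on your ESXi version:
esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot resignature --volume-label=copied_datastore
The first command lists the unresolved VMFS copies visible to the host, and the second assigns a new UUID to the selected copy.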
vSphere Storage n Dynamically add the extent. The datastore can span over up to 32 extents with the size of each extent of more than 2 TB, yet appear as a single volume. The spanned VMFS datastore can use any or all its extents at any time. It does not need to fill up a particular extent before using the next one. Note Datastores that support only the hardware assisted locking, also called the atomic test and set (ATS) mechanism, cannot span over non-ATS devices.
vSphere Storage 6 Set the capacity for the extent. The minimum extent size is 1.3 GB. By default, the entire free space on the storage device is available. 7 Click Next. 8 Review the proposed layout and the new configuration of your datastore, and click Finish. Administrative Operations for Datastores After creating datastores, you can perform several administrative operations on the datastores. Certain operations, such as renaming a datastore, are available for all types of datastores.
vSphere Storage 2 Right-click the datastore to rename, and select Rename. 3 Type a new datastore name. The vSphere Web Client enforces a 42 character limit for the datastore name. The new name appears on all hosts that have access to the datastore. Unmount Datastores When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you specify. The datastore continues to appear on other hosts, where it remains mounted.
vSphere Storage A VMFS datastore that has been unmounted from all hosts remains in inventory, but is marked as inaccessible. You can use this task to mount the VMFS datastore to a specified host or multiple hosts. If you have unmounted an NFS or a Virtual Volumes datastore from all hosts, the datastore disappears from the inventory. To mount the NFS or Virtual Volumes datastore that has been removed from the inventory, use the New Datastore wizard.
vSphere Storage Use Datastore Browser Use the datastore file browser to manage contents of your datastores. You can browse folders and files that are stored on the datastore. You can also use the browser to upload files and perform administrative tasks on your folders and files. Procedure 1 Open the datastore browser. a Display the datastore in the inventory. b Right-click the datastore and select Browse Files ( ). 2 Explore the contents of the datastore by navigating to existing folders and files.
vSphere Storage Procedure 1 Open the datastore browser. a Display the datastore in the inventory. b Right-click the datastore and select Browse Files ( ). 2 (Optional) Create a folder to store the file. 3 Select the target folder and click the Upload a file to the datastore icon ( ). 4 Locate the item to upload on the local computer and click Open. 5 Refresh the datastore file browser to see the uploaded file on the list.
Procedure 1 Open the datastore browser. a Display the datastore in the inventory. b Right-click the datastore and select Browse Files ( ). 2 Browse to an object you want to copy, either a folder or a file. 3 Select the object and click the Copy selection to a new location ( ) icon. 4 Specify the destination location. 5 (Optional) Select the Overwrite files and folders with matching names at the destination check box. 6 Click OK.
vSphere Storage Procedure 1 Open the datastore browser. a Display the datastore in the inventory. b Right-click the datastore and select Browse Files ( ). 2 Browse to an object you want to rename, either a folder or a file. 3 Select the object and click the Rename selection icon. 4 Specify the new name and click OK. Inflate Thin Virtual Disks If you created a virtual disk in the thin format, you can convert the thin disk to a virtual disk in thick provision format.
vSphere Storage Turn Off Storage Filters When you perform VMFS datastore management operations, vCenter Server uses default storage protection filters. The filters help you to avoid storage corruption by retrieving only the storage devices that can be used for a particular operation. Unsuitable devices are not displayed for selection. You can turn off the filters to view all devices. Prerequisites Before you change the device filters, consult with the VMware support team.
Table 17‑6. Storage Filters
config.vpxd.filter.vmfsFilter (VMFS Filter): Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with another VMFS datastore or to be used as an RDM.
config.vpxd.filter.rdmFilter (RDM Filter): Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server.
c Click Edit Configuration next to Configuration Parameters.
d Click Add Row and add the following parameters:
Name: scsi#.returnNoConnectDuringAPD, Value: True
Name: scsi#.returnBusyOnNoConnectStatus, Value: False
e Click OK.
Collecting Diagnostic Information for ESXi Hosts on a Storage Device During a host failure, ESXi must be able to save diagnostic information to a preconfigured location for diagnostic and technical support purposes.
vSphere Storage n If a host that uses a shared diagnostic partition fails, reboot the host and extract log files immediately after the failure. Otherwise, the second host that fails before you collect the diagnostic data of the first host might save the core dump. Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Right-click the host, and select Add Diagnostic Partition. If you do not see this menu item, the host already has a diagnostic partition.
vSphere Storage What to do next To manage the host’s diagnostic partition, use the vCLI commands. See vSphere Command-Line Interface Concepts and Examples. Set Up a File as Core Dump Location If the size of your available core dump partition is insufficient, you can configure ESXi to use a file for diagnostic information. Typically, a core dump partition of 2.5 GB is created during ESXi installation. For upgrades from ESXi 5.0 and earlier, the core dump partition is limited to 100 MB.
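Before activating a core dump file, you typically create one on a VMFS datastore. The following is a hedged sketch; the datastore name datastore1 is a placeholder, and the available options should be checked on your host:
esxcli system coredump file add --datastore=datastore1 --auto
You can list the dump files that exist on the host with esxcli system coredump file list before activating one with the set command.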
vSphere Storage 3 Activate the core dump file for the host: esxcli system coredump file set The command takes the following options: Option Description --path | -p The path of the core dump file to use. The file must be pre-allocated. --smart | -s This flag can be used only with --enable | -e=true. It causes the file to be selected using the smart selection algorithm.
2 Remove the file from the VMFS datastore: esxcli system coredump file remove --file | -f file_name The command takes the following options: Option Description --file | -f Enter the name of the dump file to be removed. If you do not enter the name, the command removes the default configured core dump file. --force | -F Deactivate and unconfigure the dump file being removed. This option is required if the file has not been previously deactivated and is active.
#esxcli storage vmfs extent list
The Device Name and Partition columns in the output identify the device. For example:
Volume Name 1TB_VMFS5 2 XXXXXXXX XXXXXXXX Device Name naa.00000000000000000000000000000703 Partition
3 Check for VMFS errors. Provide the absolute path to the device partition that backs the VMFS datastore, and provide a partition number with the device name. For example:
# voma -m vmfs -f check -d /vmfs/devices/disks/naa.
vSphere Storage Table 17‑7. VOMA Command Options (Continued) Command Option Description -v | --version Display the version of VOMA. -h | --help Display the help message for the VOMA command. For more details, see the VMware Knowledge Base article 2036767. Configuring VMFS Pointer Block Cache You can use advanced VMFS parameters to configure the pointer block cache.
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Configure tab. 3 Under System, click Advanced System Settings. 4 In Advanced System Settings, select the appropriate item. 5 Click the Edit button and change the value. 6 Click OK. Obtain Information for VMFS Pointer Block Cache You can get information about VMFS pointer block cache use. This information helps you understand how much space the pointer block cache consumes.
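From the command line, pointer block cache statistics are typically available through the esxcli storage vmfs pbcache namespace. This is a hedged example; verify that the namespace exists on your ESXi build:
esxcli storage vmfs pbcache get
The output reports values such as the current size of the pointer block cache and the cache capacity misses, which you can use when deciding whether to adjust the cache parameters.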
Understanding Multipathing and Failover 18 To maintain a constant connection between a host and its storage, ESXi supports multipathing. With multipathing, you can use more than one physical path that transfers data between the host and an external storage device. If a failure of any element in the SAN network, such as an adapter, switch, or cable, occurs, ESXi can switch to another viable physical path. This process of path switching to avoid failed components is known as path failover.
vSphere Storage In the following illustration, multiple physical paths connect each server with the storage device. For example, if HBA1 or the link between HBA1 and the FC switch fails, HBA2 takes over and provides the connection. The process of one HBA taking over for another is called HBA failover. Figure 18‑1.
Figure 18‑2. Host-Based Path Failover Hardware iSCSI and Failover With hardware iSCSI, the host typically has two or more hardware iSCSI adapters. The host uses the adapters to reach the storage system through one or more switches.
vSphere Storage Array-Based Failover with iSCSI Some iSCSI storage systems manage path use of their ports automatically and transparently to ESXi. When using one of these storage systems, your host does not see multiple ports on the storage and cannot choose the storage port it connects to. These systems have a single virtual port address that your host uses to initially communicate.
Figure 18‑4. Port Reassignment With this form of array-based failover, you can have multiple paths to the storage only if you use multiple ports on the ESXi host. These paths are active-active. For additional information, see iSCSI Session Management. Path Failover and Virtual Machines A path failover occurs when the active path to a LUN is changed from one path to another.
vSphere Storage 4 Double-click TimeOutValue. 5 Set the value data to 0x3c (hexadecimal) or 60 (decimal) and click OK. After you make this change, Windows waits at least 60 seconds for delayed disk operations to finish before it generates errors. 6 Reboot guest OS for the change to take effect. Managing Multiple Paths To manage storage multipathing, ESXi uses a collection of Storage APIs, also called the Pluggable Storage Architecture (PSA).
vSphere Storage n Handles physical path discovery and removal. n Provides logical device and physical path I/O statistics. As the Pluggable Storage Architecture illustration shows, multiple third-party MPPs can run in parallel with the VMware NMP. When installed, the third-party MPPs replace the behavior of the NMP and take control of the path failover and the load-balancing operations for the storage devices. Figure 18‑5.
vSphere Storage VMware SATPs Storage Array Type Plug-Ins (SATPs) run in conjunction with the VMware NMP and are responsible for array-specific operations. ESXi offers a SATP for every type of array that VMware supports. It also provides default SATPs that support non-specific active-active and ALUA storage arrays, and the local SATP for direct-attached devices.
vSphere Storage The policy is displayed in the client as the Most Recently Used (VMware) path selection policy. VMW_PSP_FIXED The host uses the designated preferred path, if it has been configured. Otherwise, it selects the first working path discovered at system boot time. If you want the host to use a particular preferred path, specify it manually. Fixed is the default policy for most active-active storage devices.
vSphere Storage The claim rules are numbered. For each physical path, the host runs through the claim rules starting with the lowest number first. The attributes of the physical path are compared to the path specification in the claim rule. If there is a match, the host assigns the MPP specified in the claim rule to manage the physical path. This continues until all physical paths are claimed by corresponding MPPs, either third-party multipathing plug-ins or the native multipathing plug-in (NMP).
vSphere Storage fc.adapterID-fc.targetID-naa.deviceID Note When you use the host profiles editor to edit paths, specify all three parameters that describe a path, adapter ID, target ID, and device ID. View Datastore Paths Review the paths that connect to storage devices backing your datastores. Procedure 1 In the vSphere Web Client navigator, select Global Inventory Lists > Datastores. 2 Click the datastore to display its information. 3 Click the Configure tab.
vSphere Storage By default, VMware supports the following path selection policies. If you have a third-party PSP installed on your host, its policy also appears on the list. Fixed (VMware) The host uses the designated preferred path, if it has been configured. Otherwise, it selects the first working path discovered at system boot time. If you want the host to use a particular preferred path, specify it manually. Fixed is the default policy for most active-active storage devices.
vSphere Storage 6 Select a path policy. By default, VMware supports the following path selection policies. If you have a third-party PSP installed on your host, its policy also appears on the list. n Fixed (VMware) n Most Recently Used (VMware) n Round Robin (VMware) 7 For the fixed policy, specify the preferred path. 8 Click OK to save your settings and exit the dialog box. Disable Storage Paths You can temporarily disable paths for maintenance or other reasons.
vSphere Storage n When the system searches the SATP rules to locate a SATP for a given device, it searches the driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules are searched. If no match occurs, NMP selects a default SATP for the device. n If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no claim rule match occurs for this device.
Procedure List the multipathing claim rules by running the esxcli --server=server_name storage core claimrule list --claimrule-class=MP command.
vSphere Storage Display Multipathing Modules Use the esxcli command to list all multipathing modules loaded into the system. Multipathing modules manage physical paths that connect your host with storage. In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported.
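As an illustration, the following command lists the multipathing plug-ins loaded on the host; the --plugin-class=MP argument restricts the output to multipathing modules:
esxcli --server=server_name storage core plugin list --plugin-class=MP
Typical output includes the NMP and any third-party MPPs installed on the host.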
vSphere Storage Display NMP Storage Devices Use the esxcli command to list all storage devices controlled by the VMware NMP and display SATP and PSP information associated with each device. In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported. For a list of connection options, see Getting Started with vSphere CommandLine Interfaces.
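For example, the following commands list the devices claimed by the NMP and then, as a follow-on, change the path selection policy of one device to Round Robin. The device identifier is a placeholder:
esxcli --server=server_name storage nmp device list
esxcli --server=server_name storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR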
vSphere Storage Procedure 1 To define a new claim rule, run the following command: esxcli --server=server_name storage core claimrule add The command takes the following options: Option Description -A|--adapter= Indicate the adapter of the paths. -u|--autoassign The system auto assigns a rule ID. -C|--channel= Indicate the channel of the paths. -c|--claimrule-class= Indicate the claim rule class. Valid values are: MP, Filter, VAAI. -d|--device= Indicate the device UID.
vSphere Storage # esxcli --server=server_name storage core claimrule load After you run the esxcli --server=server_name storage core claimrule list command, you can see the new claim rule appearing on the list. The following output indicates that the claim rule 500 has been loaded into the system and is active.
vSphere Storage 2 Remove the claim rule from the system. esxcli --server=server_name storage core claimrule load This step removes the claim rule from the Runtime class. Mask Paths You can prevent the host from accessing storage devices or LUNs or from using individual paths to a LUN. Use the esxcli commands to mask the paths. When you mask paths, you create claim rules that assign the MASK_PATH plug-in to the specified paths. In the procedure, --server=server_name specifies the target server.
vSphere Storage Example: Masking a LUN In this example, you mask the LUN 20 on targets T1 and T2 accessed through storage adapters vmhba2 and vmhba3.
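The masking itself is done by adding MASK_PATH claim rules for the paths to the LUN and then loading and running the rules. The following is a hedged sketch only; the rule numbers and the adapter, channel, target, and LUN values are placeholders that must match your environment, and additional rules are needed for any remaining adapter and target combinations:
esxcli --server=server_name storage core claimrule add -P MASK_PATH -r 109 -t location -A vmhba2 -C 0 -T 1 -L 20
esxcli --server=server_name storage core claimrule add -P MASK_PATH -r 110 -t location -A vmhba3 -C 0 -T 2 -L 20
esxcli --server=server_name storage core claimrule load
esxcli --server=server_name storage core claimrule run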
vSphere Storage Procedure 1 Delete the MASK_PATH claim rule. esxcli --server=server_name storage core claimrule remove -r rule# 2 Verify that the claim rule was deleted correctly. esxcli --server=server_name storage core claimrule list 3 Reload the path claiming rules from the configuration file into the VMkernel. esxcli --server=server_name storage core claimrule load 4 Run the esxcli --server=server_name storage core claiming unclaim command for each path to the masked storage device.
vSphere Storage Procedure 1 To add a claim rule for a specific SATP, run the esxcli --server=server_name storage nmp satp rule add command. The command takes the following options. Option Description -b|--boot This rule is a system default rule added at boot time. Do not modify esx.conf or add to a host profile. -c|--claim-option=string Set the claim option string when adding a SATP claim rule. -e|--description=string Set the claim rule description when adding a SATP claim rule.
vSphere Storage Scheduling Queues for Virtual Machine I/Os By default, vSphere provides a mechanism that creates scheduling queues for every virtual machine file. Each file, for example .vmdk, gets its own bandwidth controls. This mechanism ensures that I/O for a particular virtual machine file goes into its own separate queue and avoids interfering with I/Os from other files. This capability is enabled by default. To turn it off, adjust the VMkernel.Boot.
vSphere Storage In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported. For a list of connection options, see Getting Started with vSphere CommandLine Interfaces. Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces.
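If the per-file scheduling mechanism must be changed from the command line, the corresponding kernel setting can typically be adjusted with esxcli. This is a hedged sketch; the setting name isPerFileSchedModelActive is assumed and should be verified against your ESXi build, and the change may require a host reboot to take effect:
esxcli system settings kernel set --setting=isPerFileSchedModelActive --value=FALSE
esxcli system settings kernel list --option=isPerFileSchedModelActive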
Raw Device Mapping 19 Raw device mapping (RDM) provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem. The following topics contain information about RDMs and provide instructions on how to create and manage RDMs.
vSphere Storage Typically, you use VMFS datastores for most virtual disk storage. On certain occasions, you might use raw LUNs or logical disks located in a SAN. For example, you might use raw LUNs with RDMs in the following situations: n When SAN snapshot or other layered applications run in the virtual machine. The RDM enables backup offloading systems by using features inherent to the SAN.
vSphere Storage File System Operations Makes it possible to use file system utilities to work with a mapped volume, using the mapping file as a proxy. Most operations that are valid for an ordinary file can be applied to the mapping file and are redirected to operate on the mapped device. Snapshots Makes it possible to use virtual machine snapshots on a mapped volume. Snapshots are not available when the RDM is used in physical compatibility mode.
vSphere Storage SAN Management Agents Makes it possible to run some SAN management agents inside a virtual machine. Similarly, any software that needs to access a device by using hardware-specific SCSI commands can be run in a virtual machine. This kind of software is called SCSI target-based software. When you use SAN management agents, select a physical compatibility mode for the RDM.
vSphere Storage n If you use vMotion to migrate virtual machines with RDMs, make sure to maintain consistent LUN IDs for RDMs across all participating ESXi hosts. n Flash Read Cache does not support RDMs in physical compatibility. Virtual compatibility RDMs are supported with Flash Read Cache. Raw Device Mapping Characteristics An RDM is a special mapping file in a VMFS volume that manages metadata for its mapped device.
vSphere Storage Raw Device Mapping with Virtual Machine Clusters Use an RDM with virtual machine clusters that require access to the same raw LUN for failover scenarios. The setup is similar to that of a virtual machine cluster that accesses the same virtual disk file, but an RDM replaces the virtual disk file. Figure 19‑3.
vSphere Storage Create Virtual Machines with RDMs When you give your virtual machine direct access to a raw SAN LUN, you create an RDM disk that resides on a VMFS datastore and points to the LUN. You can create the RDM as an initial disk for a new virtual machine or add it to an existing virtual machine. When creating the RDM, you specify the LUN to be mapped and the datastore on which to put the RDM. Although the RDM disk file has the same.
vSphere Storage 10 Select a compatibility mode. Option Description Physical Allows the guest operating system to access the hardware directly. Physical compatibility is useful if you are using SAN-aware applications on the virtual machine. However, a virtual machine with a physical compatibility RDM cannot be cloned, made into a template, or migrated if the migration involves copying the disk.
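RDM mapping files can also be created from the command line with vmkfstools, which some administrators use for scripting. The following is a hedged sketch; the device ID and the mapping file paths are placeholders:
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/rdm_physical.vmdk
vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/rdm_virtual.vmdk
The -z option creates a physical compatibility mapping and -r creates a virtual compatibility mapping; the resulting .vmdk file is then added to the virtual machine as an existing disk.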
Software-Defined Storage and Storage Policy Based Management 20 Storage Policy Based Management (SPBM) is a major element of your software-defined storage environment. It is a storage policy framework that provides a single unified control panel across a broad range of data services and storage solutions. The framework helps to align storage with application demands of your virtual machines.
Illustration: SPBM provides a single control panel (UI, API/SDK, CLI) that spans vSphere APIs for IO Filtering (VAIO) and I/O filter vendors, vSAN, Virtual Volumes, and traditional VMFS and NFS storage from storage vendors. SPBM offers the following mechanisms: n Advertisement of storage capabilities and data services that storage arrays and other entities, such as I/O filters, offer.
vSphere Storage Working with Virtual Machine Storage Policies The entire process of creating and managing storage policies typically includes several steps. Whether you must perform a specific step might depend on the type of storage or data services that your environment offers. Step Description Populate the VM Storage Policies interface with appropriate data.
vSphere Storage Storage Capabilities and Data Services Certain datastores, for example, Virtual Volumes and vSAN, are represented by the storage providers. Through the storage providers, the datastores can advertise their capabilities in the VM Storage Policy interface. The lists of datastore capabilities, data services, and other characteristics with ranges of values populate the VM Storage Policy interface.
vSphere Storage Similar to the storage capabilities and characteristics, all tags associated with the datastores appear in the VM Storage Policies interface. You can use the tags when you define the tag-based placement rules. Use Storage Providers to Populate the VM Storage Policies Interface For entities represented by storage (VASA) providers, verify that an appropriate provider is registered.
vSphere Storage Assign Tags to Datastores Use tags to encode information about a datastore. The tags are helpful when your datastore is not represented by a storage provider and does not advertise its services in the VM Storage Polices interface. You can also use the tags to indicate a property that is not communicated through a storage provider, such as a geographical location or administrative group. You can apply a new tag that contains general storage information to a datastore.
c Specify the properties for the tag. See the following example.
Name: Texas
Description: Datastore located in Texas
Category: Storage Location
d Click OK.
3 Apply the tag to the datastore.
a Browse to the datastore in the vSphere Web Client navigator.
b Right-click the datastore, and select Tags & Custom Attributes > Assign Tag.
c From the Categories drop-down menu, select the Storage Location category.
vSphere Storage vSAN Default Storage Policy When you do not select any vSAN policy, the system applies the default storage policy to all virtual machine objects that are provisioned on a vSAN datastore. The default vSAN policy that VMware provides has the following characteristics: n You cannot delete the policy. n The policy is editable. To edit the policy, you must have the storage policy privileges that include the view and update privileges.
vSphere Storage Change the Default Storage Policy for a Datastore For Virtual Volumes and vSAN datastores, VMware provides storage policies that are used as the default during the virtual machine provisioning. You can change the default storage policy for a selected Virtual Volumes or vSAN datastore. Note A storage policy that contains replication rules should not be specified as a default storage policy. Otherwise, the policy prevents you from selecting replication groups.
vSphere Storage About Datastore-Specific and Common Rule Sets After the VM Storage Policies interface is populated with the appropriate data, you can start defining your storage policies. A basic element of a VM storage policy is a rule. Each individual rule is a statement that describes a single requirement for virtual machine storage and data services. Within the policy, rules are grouped in collections of rules. Two types of collections exist, regular rule sets and common rule sets.
Illustration: a virtual machine storage policy contains common rules (rule 1 and rule 2) together with rule set 1, rule set 2, or rule set 3, each holding its own rules.
About Rules Rules are the basic elements of a VM storage policy. Each individual rule is a statement that describes a single requirement for virtual machine storage and data services.
vSphere Storage Data Service Rules Unlike the placement rules, the data service rules do not define storage placement and storage requirements for the virtual machine. Instead, these rules activate specific data services for the virtual machine, for example, caching and replication. Storage systems or other entities can provide these services. They can also be installed on your hosts and vCenter Server. You include the data service rules in the storage policy components.
vSphere Storage When you work with the components, follow these guidelines: n Each component can include only one set of rules. All characteristics in this rule set belong to a single provider of the data services. n If the component is referenced in the VM storage policy, you cannot delete the component. Before deleting the component, you must remove it from the storage policy or delete the storage policy.
vSphere Storage View Storage Policy Components After you create a storage policy component, you can view its details and perform other management tasks. Procedure 1 From the vSphere Web Client Home, click Policies and Profiles > VM Storage Policies. 2 Click the Storage Policy Components tab. 3 View the details for the selected storage policy component. a Select the component. b Switch between the following tabs. Menu Item Description Content Display rules and their related values.
vSphere Storage 6 If a VM storage policy that is assigned to a virtual machine references the policy component you edit, reapply the storage policy to the virtual machine. Menu Item Description Manually later If you select this option, the compliance status for all virtual disks and virtual machine home objects associated with the storage policy changes to Out of Date. To update configuration and compliance, manually reapply the storage policy to all associated entities.
vSphere Storage Procedure 1 Start VM Storage Policy Creation Process To define a virtual machine storage policy, use the Create New VM Storage Policy wizard. 2 Define Common Rules for a VM Storage Policy On the Common rules page, specify which data services to include in the VM storage policy. The data services are provided by software components that are installed on your ESXi hosts and vCenter Server.
vSphere Storage Prerequisites n For information about encrypting your virtual machines, see the vSphere Security documentation. n For information about I/O filters, see Chapter 23 Filtering Virtual Machine I/O. n For information about storage policy components, see About Storage Policy Components. Procedure 1 Enable common rules by selecting Use common rules in the VM storage policy.
vSphere Storage 2 Define placement rules. Placement rules request a specific storage entity as a destination for the virtual machine. They can be capability-based or tag-based. Capability-based rules are based on data services that storage entities such as vSAN and Virtual Volumes advertise through storage (VASA) providers. Tag-based rules reference tags that you assign to datastores.
vSphere Storage 4 (Optional) To define another rule set, click Add another rule set and repeat Step 2 through Step 3. Multiple rule sets allow a single policy to define alternative storage placement parameters, often from several storage providers. 5 Click Next. Finish VM Storage Policy Creation You can review the list of datastores that are compatible with the VM storage policy and change any storage policy settings.
vSphere Storage Edit or Clone a VM Storage Policy If storage requirements for virtual machines and virtual disks change, you can modify the existing storage policy. You can also create a copy of the existing VM storage policy by cloning it. While cloning, you can optionally select to customize the original storage policy. Prerequisites Required privilege: StorageProfile.View Procedure 1 From the vSphere Web Client Home, click Policies and Profiles > VM Storage Policies.
vSphere Storage This topic describes how to assign the VM storage policy when you create a virtual machine. For information about other deployment methods that include cloning, deployment from a template, and so on, see the vSphere Virtual Machine Administration documentation. You can apply the same storage policy to the virtual machine configuration file and all its virtual disks.
vSphere Storage Change Storage Policy Assignment for Virtual Machine Files and Disks If your storage requirements for the applications on the virtual machine change, you can edit the storage policy that was originally applied to the virtual machine. You can edit the storage policy for a powered-off or powered-on virtual machine. When changing the VM storage policy assignment, you can apply the same storage policy to the virtual machine configuration file and all its virtual disks.
5 If you use Virtual Volumes policy with replication, configure the replication group. Replication groups indicate which VMs and virtual disks must be replicated together to a target site.
a Click Configure to open the Configure VM Replication Groups page.
b Specify the replication group.
Assign the same replication group to all virtual machine objects: Select Common replication group and select a preconfigured or automatic group from the drop-down menu.
vSphere Storage Compliance Status Description Compliant The datastore that the virtual machine or virtual disk uses has the required storage capabilities. Noncompliant The datastore supports specified storage requirements, but cannot currently satisfy the virtual machine storage policy. For example, the status might become Not Compliant when physical resources for the datastore are unavailable or exhausted.
vSphere Storage Compliance Status Description Out of Date The status indicates that the policy has been edited, but the new requirements have not been communicated to the datastore where the virtual machine objects reside. To communicate the changes, reapply the policy to the objects that are out of date. Not Applicable This storage policy references datastore capabilities that are not supported by the datastore where the virtual machine resides.
vSphere Storage Reapply Virtual Machine Storage Policy After you edit a storage policy that is already associated with a virtual machine object, you must reapply the policy. By reapplying the policy, you communicate new storage requirements to the datastore where the virtual machine object resides. Prerequisites The compliance status for a virtual machine is Out of Date. The status indicates that the policy has been edited, but the new requirements have not been communicated to the datastore.
Using Storage Providers 21 A storage provider is a software component that is offered by VMware or developed by a third party through vSphere APIs for Storage Awareness (VASA). The storage provider can also be called VASA provider. The storage providers integrate with various storage entities that include external physical storage and storage abstractions, such as vSAN and Virtual Volumes. Storage providers can also support software solutions, for example, I/O filters.
vSphere Storage Built-in Storage Providers Built-in storage providers are offered by VMware. Typically, they do not require registration. For example, the storage providers that support I/O filters become registered automatically. Third-Party Storage Providers When a third party offers a storage provider, you typically must register the provider. An example of such a provider is the Virtual Volumes provider. You use the vSphere Web Client to register and manage each storage provider component.
vSphere Storage You reference these data services when you define storage requirements for virtual machines and virtual disks in a storage policy. Depending on your environment, the SPBM mechanism ensures appropriate storage placement for a virtual machine or enables specific data services for virtual disks. For details, see Creating and Managing VM Storage Policies. n Storage status. This category includes reporting about status of various storage entities.
vSphere Storage When you upgrade a storage provider to a later VASA version, you must unregister and reregister the provider. After registration, vCenter Server can detect and use the functionality of the later VASA version. Note If you use vSAN, the storage providers for vSAN are registered and appear on the list of storage providers automatically. vSAN does not support manual registration of storage providers. See the Administering VMware vSAN documentation.
vSphere Storage Procedure 1 Browse to vCenter Server in the vSphere Web Client navigator. 2 Click the Configure tab, and click Storage Providers. 3 In the Storage Providers list, view the storage providers registered with vCenter Server. The list shows general information including the name of the storage provider, its URL and status, version of VASA APIs, storage entities the provider represents, and so on.
vSphere Storage Refresh Storage Provider Certificates vCenter Server warns you when a certificate assigned to a storage provider is about to expire. You can refresh the certificate to continue using the provider. If you fail to refresh the certificate before it expires, vCenter Server discontinues using the provider. Procedure 1 Browse to vCenter Server in the vSphere Web Client navigator. 2 Click the Configure tab, and click Storage Providers.
Working with Virtual Volumes 22 The Virtual Volumes functionality changes the storage management paradigm from managing space inside datastores to managing abstract storage objects handled by storage arrays.
vSphere Storage The Virtual Volumes functionality helps to improve granularity. It helps you differentiate virtual machine services at the level of individual applications by offering a new approach to storage management. Rather than arranging storage around features of a storage system, Virtual Volumes arranges storage around the needs of individual virtual machines, making storage virtual-machine centric.
vSphere Storage n Binding and Unbinding Virtual Volumes to Protocol Endpoints At the time of creation, a virtual volume is a passive entity and is not immediately ready for I/O. To access the virtual volume, ESXi or vCenter Server sends a bind request. n Virtual Volumes Datastores A Virtual Volumes (VVol) datastore represents a storage container in vCenter Server and the vSphere Web Client.
vSphere Storage For example, the following SQL server has six virtual volumes: n Config-VVol n Data-VVol for the operating system n Data-VVol for the database n Data-VVol for the log n Swap-VVol when powered on n Snapshot-VVol By using different virtual volumes for different VM components, you can apply and manipulate storage policies at the finest granularity level.
vSphere Storage A storage container is a part of the logical storage fabric and is a logical unit of the underlying hardware. The storage container logically groups virtual volumes based on management and administrative needs. For example, the storage container can contain all virtual volumes created for a tenant in a multitenant deployment, or a department in an enterprise deployment.
vSphere Storage Protocol endpoints are managed per array. ESXi and vCenter Server assume that all protocol endpoints reported for an array are associated with all containers on that array. For example, if an array has two containers and three protocol endpoints, ESXi assumes that virtual volumes on both containers can be bound to all three protocol endpoints.
vSphere Storage You can use the Virtual Volumes datastores with traditional VMFS and NFS datastores and with vSAN. Note The size of a virtual volume must be a multiple of 1 MB, with a minimum size of 1 MB. As a result, all virtual disks that you provision on a Virtual Volumes datastore must be an even multiple of 1 MB. If the virtual disk you migrate to the Virtual Volumes datastore is not an even multiple of 1 MB, extend the disk to the nearest even multiple of 1 MB.
vSphere Storage Like any block-based LUNs, the protocol endpoints are discovered using standard LUN discovery commands. The ESXi host periodically rescans for new devices and asynchronously discovers block-based protocol endpoints. The protocol endpoint can be accessible by multiple paths. Traffic on these paths follows well-known path selection policies, as is typical for LUNs. On SCSI-based disk arrays, at VM creation time, ESXi makes a virtual volume and formats it as VMFS.
vSphere Storage [Figure: vSphere Virtual Volumes architecture. In the data center, VM storage policies (Bronze, Silver, Gold) and the Storage Monitoring Service place virtual machines on Virtual datastore 1 and Virtual datastore 2, which correspond to storage container 1 and storage container 2 on the storage array. Protocol endpoints carry I/O over Fibre Channel, FCoE, iSCSI, or NFS, and the VASA provider communicates with vSphere through the VASA APIs.] Virtual volumes are objec
vSphere Storage The ESXi hosts have no direct access to the virtual volumes storage. Instead, the hosts access the virtual volumes through an intermediate point in the data path, called the protocol endpoint. The protocol endpoints establish a data path on demand from the virtual machines to their respective virtual volumes. The protocol endpoints serve as a gateway for direct in-band I/O between ESXi hosts and the storage system.
vSphere Storage Snapshots and Virtual Volumes Snapshots preserve the state and data of a virtual machine at the time you take the snapshot. Snapshots are useful when you must revert repeatedly to the same virtual machine state, but you do not want to create multiple virtual machines. Virtual volumes snapshots serve many purposes. You can use them to create a quiesced copy for backup or archival purposes, or to create a test and rollback environment for applications.
vSphere Storage Prepare Storage System for Virtual Volumes To prepare your storage system environment for Virtual Volumes, follow these guidelines. For additional information, contact your storage vendor. n The storage system or storage array that you use must support Virtual Volumes and integrate with the vSphere components through vSphere APIs for Storage Awareness (VASA). The storage array must support thin provisioning and snapshotting. n The Virtual Volumes storage provider must be deployed.
vSphere Storage 4 Click Edit and set up the NTP server. a Select Use Network Time Protocol (Enable NTP client). b Set the NTP Service Startup Policy. c Enter the IP addresses of the NTP server to synchronize with. d Click Start or Restart in the NTP Service Status section. 5 Click OK. The host synchronizes with the NTP server. Configure Virtual Volumes To configure your Virtual Volumes environment, follow several steps. Prerequisites Follow the guidelines in Before You Enable Virtual Volumes.
vSphere Storage Register Storage Providers for Virtual Volumes Your Virtual Volumes environment must include storage providers, also called VASA providers. Typically, third-party vendors develop storage providers through the VMware APIs for Storage Awareness (VASA). Storage providers facilitate communication between vSphere and the storage side. You must register the storage provider in vCenter Server to be able to work with Virtual Volumes.
vSphere Storage Procedure 1 In the vSphere Web Client navigator, select Global Inventory Lists > Datastores. 2 Click the New Datastore icon. 3 Specify the placement location for the datastore. 4 Select VVol as the datastore type. 5 From the list of storage containers, select a backing storage container and type the datastore name. Make sure to use a name that does not duplicate another datastore name in your data center environment.
vSphere Storage 5 Use tabs under Protocol Endpoint Details to access additional information and modify properties for the selected protocol endpoint. Tab Description Properties View the item properties and characteristics. For SCSI (block) items, view and edit multipathing policies. Paths (SCSI protocol endpoints only) Display paths available for the protocol endpoint. Disable or enable a selected path. Change the Path Selection Policy. Datastores Display a corresponding Virtual Volumes datastore.
vSphere Storage After you provision the virtual machine, you can perform typical VM management tasks. For information, see the vSphere Virtual Machine Administration documentation. For troubleshooting information, see the vSphere Troubleshooting documentation. Procedure 1 Define a VM Storage Policy for Virtual Volumes VMware provides a default No Requirements storage policy for Virtual Volumes. If you need, you can create a custom storage policy compatible with Virtual Volumes.
vSphere Storage 6 On the Rule Set page, define placement rules. a From the Storage Type drop-down menu, select a target storage entity, for example, Virtual Volumes. b From the Add rule drop-down menu, select a capability and specify its value. For example, you can specify the number of read operations per second for the Virtual Volumes objects. You can include as many rules as you need for the selected storage entity.
vSphere Storage You can assign the Virtual Volumes storage policy during an initial deployment of a virtual machine, or when performing other virtual machine operations, such as cloning or migrating. This topic describes how to assign the Virtual Volumes storage policy when you create a new virtual machine. For information about other VM provisioning methods, see the vSphere Virtual Machine Administration documentation.
vSphere Storage What to do next If storage placement requirements for the configuration file or the virtual disks change, you can later modify the virtual policy assignment. See Change Storage Policy Assignment for Virtual Machine Files and Disks. Change Default Storage Policy for a Virtual Volumes Datastore For virtual machines provisioned on Virtual Volumes datastores, VMware provides a default No Requirements policy. You cannot edit this policy, but you can designate a newly created policy as default.
vSphere Storage By assigning the replication policy during VM provisioning, you request replication services for your virtual machine. After that, the array takes over the management of all replication schedules and processes.
vSphere Storage vSphere Requirements n Use the vCenter Server and ESXi versions that support Virtual Volumes storage replication. vCenter Server and ESXi hosts that are older than the 6.5 release do not support replicated Virtual Volumes storage. Any attempts to create a replicated VM on an incompatible host fail with an error. For information, see VMware Compatibility Guide.
vSphere Storage For example, provision a VM with two disks, one associated with replication group Anaheim:B, the second associated with replication group Anaheim:C. SPBM validates the provisioning because both disks are replicated to the same target fault domains. [Figure: The source fault domain "Anaheim" contains replication groups Anaheim:A, Anaheim:B, and Anaheim:C, which replicate to groups Boulder:A, Boulder:B, and Boulder:C in the target fault domain "Boulder".]
vSphere Storage Now provision a VM with two disks, one associated with replication group Anaheim:B, the second associated with replication group Anaheim:D. This configuration is invalid. Both replication groups replicate to the New-York fault domain; however, only one replicates to the Boulder fault domain. [Figure: Replication groups in the source fault domain "Anaheim" and their target groups in the fault domains "Boulder" and "New-York"; not all of the source groups replicate to "Boulder".]
vSphere Storage Replication Guidelines and Considerations When you use replication with Virtual Volumes, specific considerations apply. n You can apply the replication storage policy only to a configuration virtual volume and a data virtual volume. Other VM objects inherit the replication policy in the following way: n The memory virtual volume inherits the policy of the configuration virtual volume. n The digest virtual volume inherits the policy of the data virtual volume.
vSphere Storage n Best Practices for Storage Container Provisioning Follow these best practices when provisioning storage containers on the vSphere Virtual Volumes array side. n Best Practices for vSphere Virtual Volumes Performance To ensure optimal vSphere Virtual Volumes performance results, follow these recommendations. Guidelines and Limitations in Using vSphere Virtual Volumes For the best experience with vSphere Virtual Volumes functionality, you must follow specific guidelines.
vSphere Storage n Host profiles that contain Virtual Volumes datastores are vCenter Server specific. After you extract this type of host profile, you can attach it only to hosts and clusters managed by the same vCenter Server as the reference host. Best Practices for Storage Container Provisioning Follow these best practices when provisioning storage containers on the vSphere Virtual Volumes array side.
vSphere Storage If your environment uses LUN IDs that are greater than 1023, change the number of scanned LUNs through the Disk.MaxLUN parameter. See Change the Number of Scanned Storage Devices. Best Practices for vSphere Virtual Volumes Performance To ensure optimal vSphere Virtual Volumes performance results, follow these recommendations.
vSphere Storage Ensuring that Storage Provider Is Available To access vSphere Virtual Volumes storage, your ESXi host requires a storage provider (VASA provider). To ensure that the storage provider is always available, follow these guidelines: n Do not migrate a storage provider VM to Virtual Volumes storage. n Back up your storage provider VM. n When appropriate, use vSphere HA or Site Recovery Manager to protect the storage provider VM.
Filtering Virtual Machine I/O 23 I/O filters are software components that can be installed on ESXi hosts and can offer additional data services to virtual machines. The filters process I/O requests, which move between the guest operating system of a virtual machine and virtual disks. The I/O filters can be offered by VMware or created by third parties through vSphere APIs for I/O Filtering (VAIO).
vSphere Storage n NFS 3 n NFS 4.1 n Virtual Volumes (VVol) n vSAN Types of I/O Filters VMware provides certain categories of I/O filters that are installed on your ESXi hosts. In addition, VMware partners can create the I/O filters through the vSphere APIs for I/O Filtering (VAIO) developer program. The I/O filters can serve multiple purposes. The supported types of filters include the following: n Replication.
vSphere Storage CIM Provider If VMware partners develop the I/O filters, the partners can provide an optional component that configures and manages I/O filter plug-ins. vSphere Web Client Plug-In When developing I/O filters, VMware partners can include this optional plug-in. The plug-in provides vSphere administrators with methods for communication with an I/O filter CIM provider to receive monitoring information about the I/O filter status.
vSphere Storage Each Virtual Machine Executable (VMX) component of a virtual machine contains a Filter Framework that manages the I/O filter plug-ins attached to the virtual disk. The Filter Framework invokes filters when the I/O requests move between the guest operating system and the virtual disk. Also, the filter intercepts any I/O access towards the virtual disk that happens outside of a running VM. The filters run sequentially in a specific order.
vSphere Storage [Figure: A caching I/O filter in the VM I/O path on the ESXi host uses a cache backed by the Virtual Flash Resource (VFFS), which aggregates local flash storage devices (SSDs).] To set up a virtual flash resource, you use flash devices that are connected to your host. To increase the capacity of your virtual flash resource, you can add more flash drives.
vSphere Storage n Web server to host partner packages for filter installation. The server must remain available after initial installation. When a new host joins the cluster, the server pushes appropriate I/O filter components to the host. Configure I/O Filters in the vSphere Environment To set up data services that the I/O filters provide for your virtual machines, follow several steps. Prerequisites n Create a cluster that includes at least one ESXi host. n VMware offers the I/O filters.
vSphere Storage Prerequisites n Required privileges: Host.Configuration.Query patch. n Verify that the I/O filter solution integrates with vSphere ESX Agent Manager and is certified by VMware. Procedure u Run the installer that the vendor provided. The installer deploys the appropriate I/O filter extension on vCenter Server and the filter components on all hosts within a cluster. A storage provider, also called a VASA provider, is automatically registered for every ESXi host in the cluster.
vSphere Storage Configure Virtual Flash Resource for Caching I/O Filters If your caching I/O filter uses local flash devices, configure a virtual flash resource, also known as VFFS volume. You configure the resource on your ESXi host before activating the filter. Prerequisites To determine whether the virtual flash resource must be enabled, check with your I/O filter vendor. Procedure 1 In the vSphere Web Client, navigate to the host. 2 Click the Configure tab.
vSphere Storage The I/O filter capabilities are displayed on the Common rules page of the VM Storage Policies wizard. The policy that enables I/O filters must include common rules. However, adding placement rules is optional. Depending on the I/O filters installed in your environment, the data services can belong to various categories, including caching, replication, and so on. By referencing the specific category in the storage policy, you request the service for your virtual machine.
vSphere Storage c Define rules for the data service category by specifying an appropriate provider and values for the rules. Or select the data service from the list of predefined components. Option Description Component Name This option is available if you have predefined storage policy components in your database. If you know which component to use, select it from the list to add to the VM storage policy. See all Review all components available for the category.
vSphere Storage You can assign the I/O filter policy during an initial deployment of a virtual machine. This topic describes how to assign the policy when you create a new virtual machine. For information about other deployment methods, see the vSphere Virtual Machine Administration documentation. Note You cannot change or assign the I/O filter policy when migrating or cloning a virtual machine. Prerequisites Verify that the I/O filter is installed on the ESXi host where the virtual machine runs.
vSphere Storage When you work with I/O filters, the following considerations apply: n vCenter Server uses ESX Agent Manager (EAM) to install and uninstall I/O filters. As an administrator, never invoke EAM APIs directly for EAM agencies that are created or used by vCenter Server. All operations related to I/O filters must go through VIM APIs. If you accidentally modify an EAM agency that was created by vCenter Server, you must revert the changes.
vSphere Storage Prerequisites n Required privileges: Host.Config.Patch. Procedure 1 To upgrade the filter, run the vendor-provided installer. During the upgrade, vSphere ESX Agent Manager automatically places the hosts into maintenance mode. The installer identifies any existing filter components and removes them before installing the new filter components.
vSphere Storage n If your virtual machine has a snapshot tree associated with it, you cannot add, change, or remove the I/O filter policy for the virtual machine. For information about troubleshooting I/O filters, see the vSphere Troubleshooting documentation. Migrating Virtual Machines with I/O Filters When you migrate a virtual machine with I/O filters, specific considerations apply.
Storage Hardware Acceleration 24 The hardware acceleration functionality enables the ESXi host to integrate with compliant storage systems. The host can offload certain virtual machine and storage management operations to the storage systems. With the storage hardware assistance, your host performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth. Block storage devices, Fibre Channel and iSCSI, and NAS devices support the hardware acceleration.
vSphere Storage Hardware Acceleration Requirements The hardware acceleration functionality works only if you use an appropriate host and storage array combination. Table 24‑1.
vSphere Storage n Hardware assisted locking, also called atomic test and set (ATS). Supports discrete virtual machine locking without use of SCSI reservations. This operation allows disk locking per sector, instead of the entire LUN as with SCSI reservations. Check with your vendor for the hardware acceleration support. Certain storage arrays require that you activate the support on the storage side. On your host, the hardware acceleration is enabled by default.
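As a quick check on the host side, you can list the advanced settings that control the block primitives. This is a sketch only; it assumes the standard option paths for the XCOPY, WRITE SAME, and ATS primitives, and a value of 1 in the output means the primitive is enabled:
# esxcli --server=server_name system settings advanced list -o /DataMover/HardwareAcceleratedMove
# esxcli --server=server_name system settings advanced list -o /DataMover/HardwareAcceleratedInit
# esxcli --server=server_name system settings advanced list -o /VMFS3/HardwareAcceleratedLocking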
vSphere Storage You can use several esxcli commands to query storage devices for the hardware acceleration support information. For the devices that require the VAAI plug-ins, the claim rule commands are also available. For information about esxcli commands, see Getting Started with vSphere Command-Line Interfaces.
vSphere Storage In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces. Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
vSphere Storage Procedure u Run the esxcli --server=server_name storage core device vaai status get -d=device_ID command. If a VAAI plug-in manages the device, the output shows the name of the plug-in attached to the device. The output also shows the support status for each T10 SCSI based primitive, if available. Output appears in the following example: # esxcli --server=server_name storage core device vaai status get -d naa.XXXXXXXXXXXX4c naa.
vSphere Storage Add Hardware Acceleration Claim Rules To configure the hardware acceleration for a new array, add two claim rules, one for the VAAI filter and another for the VAAI plug-in. For the new claim rules to be active, you first define the rules and then load them into your system. This procedure is for those block storage devices that do not support T10 SCSI commands and instead use the VAAI plug-ins. In the procedure, --server=server_name specifies the target server.
vSphere Storage # esxcli --server=server_name storage core claimrule add --claimrule-class=VAAI --plugin=VMW_VAAIP_T10 --type=vendor --vendor=IBM --autoassign # esxcli --server=server_name storage core claimrule load --claimrule-class=Filter # esxcli --server=server_name storage core claimrule load --claimrule-class=VAAI # esxcli --server=server_name storage core claimrule run --claimrule-class=Filter Delete Hardware Acceleration Claim Rules Use the esxcli command to delete existing hardware acceleration c
vSphere Storage Typically, when you create a virtual disk on an NFS datastore, the NAS server determines the allocation policy. The default allocation policy on most NAS servers is thin and does not guarantee backing storage to the file. However, the reserve space operation can instruct the NAS device to use vendor-specific mechanisms to reserve space for a virtual disk. As a result, you can create thick virtual disks on the NFS datastore. n Native Snapshot Support.
vSphere Storage 3 Install the VIB package: esxcli --server=server_name software vib install -v|--viburl=URL The URL specifies the URL to the VIB package to install. http:, https:, ftp:, and file: are supported. 4 Verify that the plug-in is installed: esxcli --server=server_name software vib list 5 Restart your host for the installation to take effect. Uninstall NAS Plug-Ins To uninstall a NAS plug-in, remove the VIB package from your host.
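The following command is a sketch of the removal step, assuming plugin_name is a placeholder for the VIB name that the esxcli software vib list command reports. As with installation, restart the host afterward for the change to take effect:
esxcli --server=server_name software vib remove -n|--vibname=plugin_name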
vSphere Storage Prerequisites This topic discusses how to update a VIB package using the esxcli command. For more details, see the vSphere Upgrade documentation. Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
vSphere Storage The VMFS data mover does not leverage hardware offloads and instead uses software data movement when one of the following occurs: n The source and destination VMFS datastores have different block sizes. n The source file type is RDM and the destination file type is non-RDM (regular file). n The source VMDK type is eagerzeroedthick and the destination VMDK type is thin. n The source or destination VMDK is in sparse or hosted format. n The source virtual machine has a snapshot.
Thin Provisioning and Space Reclamation 25 vSphere supports two models of storage provisioning, thick provisioning and thin provisioning. Thick provisioning It is a traditional model of storage provisioning. With thick provisioning, a large amount of storage space is provided in advance in anticipation of future storage needs. However, the space might remain unused, causing underutilization of storage capacity.
vSphere Storage ESXi supports thin provisioning for virtual disks. With the disk-level thin provisioning feature, you can create virtual disks in a thin format. For a thin virtual disk, ESXi provisions the entire space required for the disk’s current and future activities, for example 40 GB. However, the thin disk uses only as much storage space as the disk needs for its initial operations. In this example, the thin-provisioned disk occupies only 20 GB of storage.
vSphere Storage You can use Storage vMotion or cross-host Storage vMotion to transform virtual disks from one format to another. Thick Provision Lazy Zeroed Creates a virtual disk in a default thick format. Space required for the virtual disk is allocated when the disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand later on first write from the virtual machine. Virtual machines do not read stale data from the physical device.
vSphere Storage 4 On the Customize Hardware page, click the Virtual Hardware tab. 5 Click the New Hard Disk triangle to expand the hard disk options. 6 (Optional) Adjust the default disk size. With a thin virtual disk, the disk size value shows how much space is provisioned and guaranteed to the disk. At the beginning, the virtual disk might not use the entire provisioned space. The actual storage use value can be less than the size of the virtual disk. 7 Select Thin Provision for Disk Provisioning.
vSphere Storage What to do next If your virtual disk is in the thin format, you can inflate it to its full size. Inflate Thin Virtual Disks If you created a virtual disk in the thin format, you can convert the thin disk to a virtual disk in thick provision format. You use the datastore browser to inflate the virtual disk. Prerequisites n Make sure that the datastore where the virtual machine resides has enough space. n Make sure that the virtual disk is thin. n Remove snapshots.
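As an alternative to the datastore browser, you can inflate a thin disk from the ESXi Shell with vmkfstools. This is a sketch only; the disk path is a placeholder, and the virtual machine that uses the disk should be powered off:
vmkfstools --inflatedisk /vmfs/volumes/myVMFS/VMName/disk.vmdk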
vSphere Storage For information on setting alarms, see the vCenter Server and Host Management documentation. If your virtual machines require more space, the datastore space is allocated on a first come first served basis. When the datastore runs out of space, you can add more physical storage and increase the datastore. See Increase VMFS Datastore Capacity. ESXi and Array Thin Provisioning You can use thin-provisioned storage arrays with ESXi.
vSphere Storage The following sample flow demonstrates how the ESXi host and the storage array interact to generate breach of space and out-of-space warnings for a thin-provisioned LUN. The same mechanism applies when you use Storage vMotion to migrate virtual machines to the thin-provisioned LUN. 1 Using storage-specific tools, your storage administrator provisions a thin LUN and sets a soft threshold limit that, when reached, triggers an alert. This step is vendor-specific.
vSphere Storage The following thin provisioning status indicates that the storage device is thin-provisioned. # esxcli --server=server_name storage core device list -d naa.XXXXXXXXXXXX4c naa.XXXXXXXXXXXX4c Display Name: XXXX Fibre Channel Disk(naa.XXXXXXXXXXXX4c) Size: 20480 Device Type: Direct-Access Multipath Plugin: NMP --------------------Thin Provisioning Status: yes Attached Filters: VAAI_FILTER VAAI Status: supported --------------------- An unknown status indicates that a storage device is thick.
vSphere Storage [Figure: Space reclamation path from virtual machines and the VMFS datastore on the ESXi host to physical disk blocks on the storage array.] The command can also originate directly from the guest operating system. Both VMFS5 and VMFS6 datastores can provide support for the unmap command that proceeds from the guest operating system. However, the level of support is limited on VMFS5. Depending on the type of your VMFS datastore, you use different methods to configure space reclamation for the datastore and your virtual machines.
vSphere Storage The operation helps the storage array to reclaim unused free space. Unmapped space can then be used for other storage allocation requests and needs. Asynchronous Reclamation of Free Space on VMFS6 Datastore On VMFS6 datastores, ESXi supports the automatic asynchronous reclamation of free space. VMFS6 can run the unmap command to release free storage space in the background on thin-provisioned storage arrays that support unmap operations.
vSphere Storage Manual Reclamation of Free Space on VMFS5 Datastore VMFS5 and earlier file systems do not unmap free space automatically, but you can use the esxcli storage vmfs unmap command to reclaim space manually. When you use the command, keep in mind that it might send many unmap requests at a time. This action can lock some of the resources during the operation.
vSphere Storage Procedure 1 Browse to the datastore in the vSphere Web Client navigator. 2 Select Edit Space Reclamation from the right-click menu. 3 Modify the space reclamation setting. Option Description None Select this option if you want to disable the space reclamation operations for the datastore. Low (default) Reclaim space at a low rate. 4 Click OK to save the new settings. The modified value for the space reclamation priority appears on the General page for the datastore.
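You can also view or change the setting from the command line. The following commands are a sketch, assuming my_datastore is a placeholder for the VMFS6 datastore label:
# esxcli --server=server_name storage vmfs reclaim config get --volume-label=my_datastore
# esxcli --server=server_name storage vmfs reclaim config set --volume-label=my_datastore --reclaim-priority=none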
vSphere Storage Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell. Procedure u To reclaim unused storage blocks on the thin-provisioned device, run the following command: esxcli --server=server_name storage vmfs unmap The command takes these options: Option Description -l|--volume-label=volume_label The label of the VMFS volume to unmap.
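For example, the following invocation is a sketch that reclaims free blocks on a datastore labeled my_datastore, processing 100 blocks per iteration; the label and the number of blocks are placeholders:
# esxcli --server=server_name storage vmfs unmap -l my_datastore -n 100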
vSphere Storage VMFS6 processes the unmap request from the guest OS only when the space to reclaim equals 1 MB or is a multiple of 1 MB. If the space is less than 1 MB or is not aligned to 1 MB, the unmap requests are not processed. Space Reclamation for VMFS5 Virtual Machines Typically, the unmap command that generates from the guest operation system on VMFS5 cannot be passed directly to the array. You must run the esxcli storage vmfs unmap command to trigger unmaps for the array.
Using vmkfstools 26 vmkfstools is one of the ESXi Shell commands for managing VMFS volumes, storage devices, and virtual disks. You can perform many storage operations using the vmkfstools command. For example, you can create and manage VMFS datastores on a physical partition, or manipulate virtual disk files stored on VMFS or NFS datastores. Note After you make a change using vmkfstools, the vSphere Web Client might not be updated immediately. Use a refresh or rescan operation from the client.
vSphere Storage Table 26‑1. vmkfstools Command Arguments (Continued) Argument Description device Specifies devices or logical volumes. This argument uses a path name in the ESXi device file system. The path name begins with /vmfs/devices, which is the mount point of the device file system. Use the following formats when you specify different types of devices: path n /vmfs/devices/disks for local or SAN-based disks. n /vmfs/devices/lvm for ESXi logical volumes.
vSphere Storage You can specify the -v suboption with any vmkfstools option. If the output of the option is not suitable for use with the -v suboption, vmkfstools ignores -v. Note Because you can include the -v suboption in any vmkfstools command line, -v is not included as a suboption in the option descriptions. File System Options File system options allow you to create and manage VMFS datastores. These options do not apply to NFS. You can perform many of these tasks through the vSphere Web Client.
vSphere Storage You can specify the following suboptions with the -C option. n -S|--setfsname - Define the volume label of the VMFS datastore you are creating. Use this suboption only with the -C option. The label you specify can be up to 128 characters long and cannot contain any leading or trailing blank spaces. Note vCenter Server supports the 80 character limit for all its entities. If a datastore name exceeds this limit, the name gets shortened when you add this datastore to vCenter Server.
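For example, the following command is a sketch that creates a VMFS6 datastore labeled my_vmfs on the first partition of a target device; disk_ID is a placeholder for the device identifier:
vmkfstools -C vmfs6 -S my_vmfs /vmfs/devices/disks/disk_ID:1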
vSphere Storage You must specify the full path name for the head and span partitions, for example /vmfs/devices/disks/disk_ID:1. Each time you use this option, you add an extent to the VMFS datastore, so that the datastore spans multiple partitions. Caution When you run this option, you lose all data that previously existed on the SCSI device you specified in span_partition.
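For example, the following command is a sketch that extends an existing datastore across an additional partition; span_disk_ID and head_disk_ID are placeholders for the new extent device and the head device of the datastore:
vmkfstools -Z /vmfs/devices/disks/span_disk_ID:1 /vmfs/devices/disks/head_disk_ID:1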
vSphere Storage Supported Disk Formats When you create or clone a virtual disk, you can use the -d|--diskformat suboption to specify the format for the disk. Choose from the following formats: n zeroedthick (default) – Space required for the virtual disk is allocated during creation. Any data remaining on the physical device is not erased during creation, but is zeroed out on demand on first write from the virtual machine. The virtual machine does not read stale data from disk.
vSphere Storage This option creates a virtual disk at the specified path on a datastore. Specify the size of the virtual disk. When you enter the value for size, you can indicate the unit type by adding a suffix of k (kilobytes), m (megabytes), or g (gigabytes). The unit type is not case-sensitive. vmkfstools interprets either k or K to mean kilobytes. If you do not specify a unit type, vmkfstools defaults to bytes. You can specify the following suboptions with the -c option.
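For example, the following command is a sketch that creates a 40-GB virtual disk in the thin format; the -d suboption selects the disk format, and the path is a placeholder:
vmkfstools -c 40g -d thin /vmfs/volumes/myVMFS/myDisk.vmdk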
vSphere Storage Follow this example: vmkfstools --eagerzero /vmfs/volumes/myVMFS/VMName/disk.vmdk Removing Zeroed Blocks Use the vmkfstools command to remove zeroed blocks. -K|--punchzero This option deallocates all zeroed out blocks and leaves only those blocks that were allocated previously and contain valid data. The resulting virtual disk is in thin format. Deleting a Virtual Disk Use the vmkfstools command to delete a virtual disk file at the specified path on the VMFS volume.
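The following examples are sketches of both operations against a placeholder disk path, assuming the -K (punchzero) and -U (deletevirtualdisk) flags described above:
vmkfstools -K /vmfs/volumes/myVMFS/VMName/disk.vmdk
vmkfstools -U /vmfs/volumes/myVMFS/VMName/disk.vmdk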
vSphere Storage By default, ESXi uses its native methods to perform the cloning operations. If your array supports the cloning technologies, you can off-load the operations to the array. To avoid the ESXi native cloning, specify the -N|--avoidnativeclone option. Example: Example for Cloning or Converting a Virtual Disk This example illustrates cloning the contents of a master virtual disk from the templates repository to a virtual disk file named myOS.vmdk on the myVMFS file system.
vSphere Storage n After you extend the disk, you might need to update the file system on the disk. As a result, the guest operating system recognizes the new size of the disk and can use it. Upgrading Virtual Disks This option converts the specified virtual disk file from ESX Server 2 formats to the ESXi format. Use this option to convert virtual disks of type LEGACYSPARSE, LEGACYPLAIN, LEGACYVMFS, LEGACYVMFS_SPARSE, and LEGACYVMFS_RDM.
vSphere Storage When specifying the device parameter, use the following format: /vmfs/devices/disks/disk_ID For example, vmkfstools -z /vmfs/devices/disks/disk_ID my_rdm.vmdk Listing Attributes of an RDM Use the vmkfstools command to list the attributes of a raw disk mapping. The attributes help you identify the storage device to which your RDM files maps. -q|--queryrdm my_rdm.vmdk This option prints the name of the raw disk RDM.
vSphere Storage Checking Disk Chain for Consistency Use the vmkfstools command to check the entire snapshot chain. You can determine if any of the links in the chain are corrupted or any invalid parent-child relationships exist. -e|--chainConsistent Storage Device Options You can use the device options of the vmkfstools command to perform administrative tasks for physical storage devices.
vSphere Storage When entering the device parameter, use the following format: /vmfs/devices/disks/disk_ID:P Breaking Device Locks Use the vmkfstools command to break the device lock on a particular partition. -B|--breaklock device When entering the device parameter, use the following format: /vmfs/devices/disks/disk_ID:P You can use this command when a host fails in the middle of a datastore operation, such as expand the datastore, add an extent, or resignature.
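For example, the following command is a sketch that breaks the lock on the first partition of a device; disk_ID is a placeholder:
vmkfstools -B /vmfs/devices/disks/disk_ID:1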