vSphere Storage
17 APR 2018
VMware vSphere 6.7
VMware ESXi 6.7
vCenter Server 6.7
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
If you have comments about this documentation, submit your feedback to docfeedback@vmware.com
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
Copyright © 2009–2018 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents

About vSphere Storage

1 Introduction to Storage
  Traditional Storage Virtualization Models
  Software-Defined Storage Models
  vSphere Storage APIs

2 Getting Started with a Traditional Storage Model
  Types of Physical Storage
  Supported Storage Adapters
  Datastore Characteristics
  Using Persistent Memory

3 Overview of Using ESXi with a SAN
  ESXi and SAN Use Cases
  Specifics of Using SAN Storage with ESXi
  ESXi Hosts and Multiple Storage Arrays
  Making LUN Decisions

7 Booting ESXi from Fibre Channel SAN
  Boot from SAN Benefits
  Requirements and Considerations when Booting from Fibre Channel SAN
  Getting Ready for Boot from SAN
  Configure Emulex HBA to Boot from SAN
  Configure QLogic HBA to Boot from SAN

8 Booting ESXi with Software FCoE
  Requirements and Considerations for Software FCoE Boot
  Set Up Software FCoE Boot
  Troubleshooting Boot from Software FCoE for an ESXi Host

9 Best Practices for Fibre Channel Storage

13 Best Practices for iSCSI Storage
  Preventing iSCSI SAN Problems
  Optimizing iSCSI SAN Storage Performance
  Checking Ethernet Switch Statistics

14 Managing Storage Devices
  Storage Device Characteristics
  Understanding Storage Device Naming
  Storage Rescan Operations
  Identifying Device Connectivity Problems
  Enable or Disable the Locator LED on Storage Devices
  Erase Storage Devices

15 Working with Flash Devices
  Using Flash Devices wi

18 Understanding Multipathing and Failover
  Failovers with Fibre Channel
  Host-Based Failover with iSCSI
  Array-Based Failover with iSCSI
  Path Failover and Virtual Machines
  Pluggable Storage Architecture and Path Management
  Viewing and Managing Paths
  Using Claim Rules
  Scheduling Queues for Virtual Machine I/Os

19 Raw Device Mapping
  About Raw Device Mapping
  Raw Device Mapping Characteristics
  Create Virtual Machines with RDMs
  Mana

  Configure Virtual Volumes
  Provision Virtual Machines on Virtual Volumes Datastores
  Virtual Volumes and Replication
  Best Practices for Working with vSphere Virtual Volumes
  Troubleshooting Virtual Volumes

23 Filtering Virtual Machine I/O
  About I/O Filters
  Using Flash Storage Devices with Cache I/O Filters
  System Requirements for I/O Filters
  Configure I/O Filters in the vSphere Environment
  Enable I/O Filter Data Services on Virtual Disks
  Managing I/O
About vSphere Storage
vSphere Storage describes virtualized and software-defined storage technologies that VMware ESXi™ and VMware vCenter Server® offer, and explains how to configure and use these technologies.
Intended Audience
This information is for experienced system administrators who are familiar with virtual machine and storage virtualization technologies, data center operations, and SAN storage concepts.
Introduction to Storage 1 vSphere supports various storage options and functionalities in traditional and software-defined storage environments. A high-level overview of vSphere storage elements and aspects helps you plan a proper storage strategy for your virtual data center.
Internet SCSI
Internet SCSI (iSCSI) is a SAN transport that can use Ethernet connections between computer systems, or ESXi hosts, and high-performance storage systems. To connect to the storage systems, your hosts use hardware iSCSI adapters or software iSCSI initiators with standard network adapters. See Chapter 10 Using ESXi with iSCSI SAN.
Storage Device or LUN
In the ESXi context, the terms device and LUN are used interchangeably.
vSphere Storage Software-Defined Storage Models In addition to abstracting underlying storage capacities from VMs, as traditional storage models do, software-defined storage abstracts storage capabilities. With the software-defined storage model, a virtual machine becomes a unit of storage provisioning and can be managed through a flexible policy-based mechanism. The model involves the following vSphere technologies.
vSphere Storage vSphere APIs for Storage Awareness Also known as VASA, these APIs, either supplied by third-party vendors or offered by VMware, enable communications between vCenter Server and underlying storage. Through VASA, storage entities can inform vCenter Server about their configurations, capabilities, and storage health and events. In return, VASA can deliver VM storage requirements from vCenter Server to a storage entity and ensure that the storage layer meets the requirements.
Getting Started with a Traditional Storage Model 2 Setting up your ESXi storage in traditional environments includes configuring your storage systems and devices, enabling storage adapters, and creating datastores.
Figure 2‑1. Local Storage
In this example of a local storage topology, the ESXi host uses a single connection to a storage device. On that device, you can create a VMFS datastore, which you use to store virtual machine disk files. Although this storage configuration is possible, it is not a best practice.
vSphere Storage In addition to traditional networked storage that this topic covers, VMware supports virtualized shared storage, such as vSAN. vSAN transforms internal storage resources of your ESXi hosts into shared storage that provides such capabilities as High Availability and vMotion for virtual machines. For details, see the Administering VMware vSAN documentation. Note The same LUN cannot be presented to an ESXi host or multiple hosts through different storage protocols.
vSphere Storage In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches and storage arrays, using a Fibre Channel adapter. LUNs from a storage array become available to the host. You can access the LUNs and create datastores for your storage needs. The datastores use the VMFS format. For specific information on setting up the Fibre Channel SAN, see Chapter 4 Using ESXi with Fibre Channel SAN.
Figure 2‑3. iSCSI Storage
In the left example, the host uses the hardware iSCSI adapter to connect to the iSCSI storage system. In the right example, the host uses a software iSCSI adapter and an Ethernet NIC to connect to the iSCSI storage. iSCSI storage devices from the storage system become available to the host.
Figure 2‑4. NFS Storage
For specific information on setting up NFS storage, see Understanding Network File System Datastores.
Shared Serial Attached SCSI (SAS)
Stores virtual machines on direct-attached SAS storage systems that offer shared access to multiple hosts. This type of access permits multiple hosts to access the same VMFS datastore on a LUN.
Figure 2‑5. Target and LUN Representations
In this illustration, three LUNs are available in each configuration. In one case, the host connects to one target, but that target has three LUNs that can be used. Each LUN represents an individual storage volume. In the other example, the host detects three different targets, each having one LUN.
Figure 2‑6. Virtual machines accessing different types of storage
Note This diagram is for conceptual purposes only. It is not a recommended configuration.
vSphere Storage Table 2‑1. Storage Device Information (Continued) Storage Device Information Description LUN Logical Unit Number (LUN) within the SCSI target. The LUN number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0). Type Type of device, for example, disk or CD-ROM. Drive Type Information about whether the device is a flash drive or a regular HDD drive.
vSphere Storage 5 Use the icons to perform basic storage management tasks. Availability of specific icons depends on the device type and configuration. 6 Icon Description Refresh Refresh information about storage adapters, topology, and file systems. Rescan Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores. Detach Detach the selected device from the host. Attach Attach the selected device to the host.
vSphere Storage Icon Description Refresh Refresh information about storage adapters, topology, and file systems. Rescan Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores. Detach Detach the selected device from the host. Attach Attach the selected device to the host. Rename Change the display name of the selected device. Turn On LED Turn on the locator LED for the selected devices. Turn Off LED Turn off the locator LED for the selected devices.
Table 2‑3. vSphere Features Supported by Storage (Continued)
iSCSI — Boot VM: Yes; vMotion: Yes; Datastore: VMFS; RDM: Yes; VM Cluster: Yes; VMware HA and DRS: Yes; Storage APIs - Data Protection: Yes
NAS over NFS — Boot VM: Yes; vMotion: Yes; Datastore: NFS 3 and NFS 4.1; RDM: No; VM Cluster: No; VMware HA and DRS: Yes; Storage APIs - Data Protection: Yes
Note Local storage supports a cluster of virtual machines on a single host (also known as a cluster in a box). A shared virtual disk is required.
vSphere Storage 3 Under Storage, click Storage Adapters. 4 Use the icons to perform storage adapter tasks. Availability of specific icons depends on the storage configuration. Icon Description Add Software Adapter Add a storage adapter. Applies to software iSCSI and software FCoE. Refresh Refresh information about storage adapters, topology, and file systems on the host. Rescan Storage Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores.
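If you prefer the ESXi command line to the vSphere Client, a roughly equivalent view of the host's adapters is available through esxcli. This is a minimal sketch; adapter names such as vmhba0 vary from host to host.

# List all storage adapters (HBAs) that the host detects,
# including their drivers, link state, and unique identifiers.
esxcli storage core adapter list

# Rescan every adapter for newly added storage devices or VMFS datastores.
esxcli storage core adapter rescan --all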
vSphere Storage Table 2‑4. Datastore Information Datastore Information Applicable Datastore Type Description Name VMFS Editable name that you assign to a datastore. For information on renaming a datastore, see Change Datastore Name. NFS vSAN Virtual Volumes Type VMFS NFS vSAN Virtual Volumes File system that the datastore uses. For information about VMFS and NFS datastores and how to manage them, see Chapter 17 Working with Datastores.
vSphere Storage Table 2‑4. Datastore Information (Continued) Datastore Information Applicable Datastore Type Description Hardware Acceleration VMFS Information on whether the underlying storage entity supports hardware acceleration. The status can be Supported, Not Supported, or Unknown. For details, see Chapter 24 Storage Hardware Acceleration. NFS vSAN Virtual Volumes Note NFS 4.1 does not support Hardware Acceleration.
vSphere Storage 3 To view specific datastore details, click a selected datastore. 4 Use tabs to access additional information and modify datastore properties. Tab Description Summary View statistics and configuration for the selected datastore. Monitor View alarms, performance data, resource allocation, events, and other status information for the datastore. Configure View and modify datastore properties. Menu items that you can see depend on the datastore type.
The PMem datastore is used to store virtual NVDIMM devices and traditional virtual disks of a VM. The VM home directory with the vmx and vmware.log files cannot be placed on the PMem datastore.
PMem Access Modes
ESXi exposes persistent memory to a VM in two different modes. PMem-aware VMs can have direct access to persistent memory. Traditional VMs can use fast virtual disks stored on the PMem datastore.
(Figure: a PMem-aware VM accesses an NVDIMM device in direct-access mode, while a traditional VM uses a virtual disk in virtual disk mode with a PMem storage policy. Both are backed by the PMem datastore on persistent memory.)
For information about how to configure and manage VMs with NVDIMMs or virtual persistent memory disks, see the vSphere Resource Management documentation.
Monitor PMem Datastore Statistics
You can use the vSphere Client and the esxcli command to review the capacity of the PMem datastore and some of its other attributes.
vSphere Storage /vmfs/volumes/5xxx... ds01-102 /vmfs/volumes/59ex... ds02-102 /vmfs/volumes/59bx... /vmfs/volumes/pmem:5ax... PMemDS-56ax... VMware, Inc. 5xxx... 59ex... 59bx... pmem:5a0x...
Overview of Using ESXi with a SAN 3 Using ESXi with a SAN improves flexibility, efficiency, and reliability. Using ESXi with a SAN also supports centralized management, failover, and load balancing technologies. The following are benefits of using ESXi with a SAN: n You can store data securely and configure multiple paths to your storage, eliminating a single point of failure. n Using a SAN with ESXi systems extends failure resistance to the server.
vSphere Storage n Making LUN Decisions n Selecting Virtual Machine Locations n Third-Party Management Applications n SAN Storage Backup Considerations ESXi and SAN Use Cases When used with a SAN, ESXi can benefit from multiple vSphere features, including Storage vMotion, Distributed Resource Scheduler (DRS), High Availability, and so on.
vSphere Storage When you use SAN storage with ESXi, the following considerations apply: n You cannot use SAN administration tools to access operating systems of virtual machines that reside on the storage. With traditional tools, you can monitor only the VMware ESXi operating system. You use the vSphere Client to monitor virtual machines. n The HBA visible to the SAN administration tools is part of the ESXi system, not part of the virtual machine.
vSphere Storage You might want more, smaller LUNs for the following reasons: n Less wasted storage space. n Different applications might need different RAID characteristics. n More flexibility, as the multipathing policy and disk shares are set per LUN. n Use of Microsoft Cluster Service requires that each cluster disk resource is in its own LUN. n Better performance because there is less contention for a single volume.
vSphere Storage Selecting Virtual Machine Locations When you are working on optimizing performance for your virtual machines, storage location is an important factor. Depending on your storage needs, you might select storage with high performance and high availability, or storage with lower performance. Storage can be divided into different tiers depending on several factors: n High Tier. Offers high performance and high availability.
vSphere Storage If you run the SAN management software on a virtual machine, you gain the benefits of a virtual machine, including failover with vMotion and VMware HA. Because of the additional level of indirection, however, the management software might not see the SAN. In this case, you can use an RDM. Note Whether a virtual machine can run management software successfully depends on the particular storage system.
vSphere Storage Using Third-Party Backup Packages You can use third-party backup solutions to protect system, application, and user data in your virtual machines. The Storage APIs - Data Protection that VMware offers can work with third-party products. When using the APIs, third-party software can perform backups without loading ESXi hosts with the processing of backup tasks.
Using ESXi with Fibre Channel SAN 4 When you set up ESXi hosts to use FC SAN storage arrays, special considerations are necessary. This section provides introductory information about how to use ESXi with an FC SAN array.
vSphere Storage Generally, a single path from a host to a LUN consists of an HBA, switch ports, connecting cables, and the storage controller port. If any component of the path fails, the host selects another available path for I/O. The process of detecting a failed path and switching to another is called path failover. Ports in Fibre Channel SAN In the context of this document, a port is the connection from a device into the SAN.
vSphere Storage Using Zoning with Fibre Channel SANs Zoning provides access control in the SAN topology. Zoning defines which HBAs can connect to which targets. When you configure a SAN by using zoning, the devices outside a zone are not visible to the devices inside the zone. Zoning has the following effects: n Reduces the number of targets and LUNs presented to a host. n Controls and isolates paths in a fabric.
6 Depending on the port the HBA uses to connect to the fabric, one of the SAN switches receives the request. The switch routes the request to the appropriate storage device.
Configuring Fibre Channel Storage 5 When you use ESXi systems with SAN storage, specific hardware and system requirements exist. This chapter includes the following topics: n ESXi Fibre Channel SAN Requirements n Installation and Setup Steps n N-Port ID Virtualization ESXi Fibre Channel SAN Requirements In preparation for configuring your SAN and setting up your ESXi system to use SAN storage, review the requirements and recommendations.
vSphere Storage n You cannot use multipathing software inside a virtual machine to perform I/O load balancing to a single physical LUN. However, when your Microsoft Windows virtual machine uses dynamic disks, this restriction does not apply. For information about configuring dynamic disks, see Set Up Dynamic Disk Mirroring. Setting LUN Allocations This topic provides general information about how to allocate LUNs when your ESXi works with SAN.
n ESXi supports 16 Gb end-to-end Fibre Channel connectivity.
Installation and Setup Steps
This topic provides an overview of installation and setup steps that you need to follow when configuring your SAN environment to work with ESXi. Follow these steps to configure your ESXi SAN environment.
1 Design your SAN if it is not already configured. Most existing SANs require only minor modification to work with ESXi.
2 Check that all SAN components meet requirements.
vSphere Storage When a virtual machine has a WWN assigned to it, the virtual machine’s configuration file (.vmx) is updated to include a WWN pair. The WWN pair consists of a World Wide Port Name (WWPN) and a World Wide Node Name (WWNN). When that virtual machine is powered on, the VMkernel instantiates a virtual port (VPORT) on the physical HBA which is used to access the LUN. The VPORT is a virtual HBA that appears to the FC fabric as a physical HBA.
vSphere Storage ESXi with NPIV supports the following items: n NPIV supports vMotion. When you use vMotion to migrate a virtual machine it retains the assigned WWN. If you migrate an NPIV-enabled virtual machine to a host that does not support NPIV, VMkernel reverts to using a physical HBA to route the I/O. n If your FC SAN environment supports concurrent I/O on the disks from an active-active array, the concurrent I/O to two different NPIV ports is also supported.
vSphere Storage 4 Option Description Generate new WWNs Generate new WWNs, overwriting any existing WWNs. The WWNs of the HBA are not affected. Specify the number of WWNNs and WWPNs. A minimum of two WWPNs are required to support failover with NPIV. Typically only one WWNN is created for each virtual machine. Remove WWN assignment Remove the WWNs assigned to the virtual machine. The virtual machine uses the HBA WWNs to access the storage LUN. Click OK to save your changes.
Configuring Fibre Channel over Ethernet 6 To access Fibre Channel storage, an ESXi host can use the Fibre Channel over Ethernet (FCoE) protocol. The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage. The host can use 10 Gbit lossless Ethernet to deliver Fibre Channel traffic.
vSphere Storage VMware supports two categories of NICs with the software FCoE adapters. NICs With Partial FCoE Offload The extent of the offload capabilities might depend on the type of the NIC. Generally, the NICs offer Data Center Bridging (DCB) and I/O offload capabilities. NICs Without FCoE Offload Any NICs that offer Data Center Bridging (DCB) and have a minimum speed of 10 Gbps. The network adapters are not required to support any FCoE offload capabilities.
vSphere Storage n Do not move a network adapter port from one vSwitch to another when FCoE traffic is active. If you make this change, reboot your host afterwards. n If you changed the vSwitch for a network adapter port and caused a failure, moving the port back to the original vSwitch resolves the problem. Set Up Networking for Software FCoE Before you activate the software FCoE adapters, create VMkernel network adapters for all physical FCoE NICs installed on your host.
vSphere Storage Add Software FCoE Adapters You must activate software FCoE adapters so that your host can use them to access Fibre Channel storage. The number of software FCoE adapters you can activate corresponds to the number of physical FCoE NIC ports on your host. ESXi supports the maximum of four software FCoE adapters on one host. Prerequisites Set up networking for the software FCoE adapter. Procedure 1 Navigate to the host. 2 Click the Configure tab.
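The same activation can be scripted with esxcli. A minimal sketch, assuming vmnic4 is an FCoE-capable NIC; the NIC name is a placeholder and varies per host.

# List physical NICs that can be used for software FCoE.
esxcli fcoe nic list

# Activate a software FCoE adapter on the selected NIC.
esxcli fcoe nic discover --nic-name=vmnic4

# Verify that the new FCoE adapter (vmhba#) appears.
esxcli fcoe adapter list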
Booting ESXi from Fibre Channel SAN 7 When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk. ESXi supports booting through a Fibre Channel host bus adapter (HBA) or a Fibre Channel over Ethernet (FCoE) converged network adapter (CNA).
vSphere Storage n Improved management. Creating and managing the operating system image is easier and more efficient. n Better reliability. You can access the boot disk through multiple paths, which protects the disk from being a single point of failure. Requirements and Considerations when Booting from Fibre Channel SAN Your ESXi boot configuration must meet specific requirements. Table 7‑1.
vSphere Storage Configure SAN Components and Storage System Before you set up your ESXi host to boot from a SAN LUN, configure SAN components and a storage system. Because configuring the SAN components is vendor-specific, refer to the product documentation for each item. Procedure 1 Connect network cable, referring to any cabling guide that applies to your setup. Check the switch wiring, if there is any. 2 Configure the storage array.
vSphere Storage Because changing the boot sequence in the BIOS is vendor-specific, refer to vendor documentation for instructions. The following procedure explains how to change the boot sequence on an IBM host. Procedure 1 Power on your system and enter the system BIOS Configuration/Setup Utility. 2 Select Startup Options and press Enter. 3 Select Startup Sequence Options and press Enter. 4 Change the First Startup Device to [CD-ROM]. You can now install ESXi.
2 To configure the adapter parameters, press ALT+E at the Emulex prompt and follow these steps.
a Select an adapter (with BIOS support).
b Select 2. Configure This Adapter's Parameters.
c Select 1. Enable or Disable BIOS.
d Select 1 to enable BIOS.
e Select x to exit and Esc to return to the previous menu.
3 To configure the boot device, follow these steps from the Emulex main menu.
a Select the same adapter.
b Select 1. Configure Boot Devices.
5 Set the BIOS to search for SCSI devices.
a In the Host Adapter Settings page, select Host Adapter BIOS.
b Press Enter to toggle the value to Enabled.
c Press Esc to exit.
6 Enable the selectable boot.
a Select Selectable Boot Settings and press Enter.
b In the Selectable Boot Settings page, select Selectable Boot.
c Press Enter to toggle the value to Enabled.
7 Select the Boot Port Name entry in the list of storage processors (SPs) and press Enter.
Booting ESXi with Software FCoE 8 ESXi supports booting from FCoE capable network adapters. Only NICs with partial FCoE offload support the boot capabilities with the software FCoE. If you use the NICs without FCoE offload, the software FCoE boot is not supported. When you install and boot ESXi from an FCoE LUN, the host can use a VMware software FCoE adapter and a network adapter with FCoE capabilities. The host does not require a dedicated FCoE HBA.
vSphere Storage n The network adapter must have the following capabilities: n Be FCoE capable. n Support ESXi open FCoE stack. n Contain FCoE boot firmware which can export boot information in FBFT format or FBPT format. Considerations n You cannot change software FCoE boot configuration from within ESXi. n Coredump is not supported on any software FCoE LUNs, including the boot LUN. n Multipathing is not supported at pre-boot.
vSphere Storage Procedure u In the option ROM of the network adapter, specify software FCoE boot parameters. These parameters include a boot target, boot LUN, VLAN ID, and so on. Because configuring the network adapter is vendor-specific, review your vendor documentation for instructions. Install and Boot ESXi from Software FCoE LUN When you set up your system to boot from a software FCoE LUN, you install the ESXi image to the target LUN. You can then boot your host from that LUN.
vSphere Storage Troubleshooting Boot from Software FCoE for an ESXi Host If the installation or boot of ESXi from a software FCoE LUN fails, you can use several troubleshooting methods. Problem When you install or boot ESXi from FCoE storage, the installation or the boot process fails. The FCoE setup that you use includes a VMware software FCoE adapter and a network adapter with partial FCoE offload capabilities.
Best Practices for Fibre Channel Storage 9 When using ESXi with Fibre Channel SAN, follow recommendations to avoid performance problems. The vSphere Client offers extensive facilities for collecting performance information. The information is graphically displayed and frequently updated. You can also use the resxtop or esxtop command-line utilities. The utilities provide a detailed look at how ESXi uses resources. For more information, see the vSphere Resource Management documentation.
vSphere Storage n Ensure that the Fibre Channel HBAs are installed in the correct slots in the host, based on slot and bus speed. Balance PCI bus load among the available buses in the server. n Become familiar with the various monitor points in your storage network, at all visibility points, including host's performance charts, FC switch statistics, and storage performance statistics. n Be cautious when changing IDs of the LUNs that have VMFS datastores being used by your ESXi host.
vSphere Storage To improve the array performance in the vSphere environment, follow these general guidelines: n When assigning LUNs, remember that several hosts might access the LUN, and that several virtual machines can run on each host. One LUN used by a host can service I/O from many different applications running on different operating systems. Because of this diverse workload, the RAID group containing the ESXi LUNs typically does not include LUNs used by other servers that are not running ESXi.
vSphere Storage n When allocating LUNs or RAID groups for ESXi systems, remember that multiple operating systems use and share that resource. The LUN performance required by the ESXi host might be much higher than when you use regular physical machines. For example, if you expect to run four I/O intensive applications, allocate four times the performance capacity for the ESXi LUNs.
Using ESXi with iSCSI SAN 10 ESXi can connect to external SAN storage using the Internet SCSI (iSCSI) protocol. In addition to traditional iSCSI, ESXi also supports iSCSI Extensions for RDMA (iSER). When the iSER protocol is enabled, the host can use the same iSCSI framework, but replaces the TCP/IP transport with the Remote Direct Memory Access (RDMA) transport. About iSCSI SAN iSCSI SANs use Ethernet connections between hosts and high-performance storage subsystems.
vSphere Storage The client, called iSCSI initiator, operates on your ESXi host. It initiates iSCSI sessions by issuing SCSI commands and transmitting them, encapsulated into the iSCSI protocol, to an iSCSI server. The server is known as an iSCSI target. Typically, the iSCSI target represents a physical storage system on the network. The target can also be a virtual iSCSI SAN, for example, an iSCSI target emulator running in a virtual machine.
iSCSI Naming Conventions
iSCSI uses a special unique name to identify an iSCSI node, either target or initiator. iSCSI names are formatted in two different ways. The most common is the IQN format. For more details on iSCSI naming requirements and string profiles, see RFC 3721 and RFC 3722 on the IETF website.
iSCSI Qualified Name Format
The iSCSI Qualified Name (IQN) format takes the form iqn.yyyy-mm.naming-authority:unique-name, for example, iqn.1998-01.com.vmware:host-name.
vSphere Storage Software iSCSI Adapter A software iSCSI adapter is a VMware code built into the VMkernel. Using the software iSCSI adapter, your host can connect to the iSCSI storage device through standard network adapters. The software iSCSI adapter handles iSCSI processing while communicating with the network adapter. With the software iSCSI adapter, you can use iSCSI technology without purchasing specialized hardware.
iSER differs from traditional iSCSI as it replaces the TCP/IP data transfer model with the Remote Direct Memory Access (RDMA) transport. Using the direct data placement technology of RDMA, the iSER protocol can transfer data directly between the memory buffers of the ESXi host and storage devices. This method eliminates unnecessary TCP/IP processing and data copying, and can also reduce latency and the CPU load on the storage device.
vSphere Storage The types of storage that your host supports include active-active, active-passive, and ALUA-compliant. Active-active storage system Supports access to the LUNs simultaneously through all the storage ports that are available without significant performance degradation. All the paths are always active, unless a path fails. Active-passive storage system A system in which one storage processor is actively providing access to a given LUN.
vSphere Storage Access Control Access control is a policy set up on the iSCSI storage system. Most implementations support one or more of three types of access control: n By initiator name n By IP address n By the CHAP protocol Only initiators that meet all rules can access the iSCSI volume. Using only CHAP for access control can slow down rescans because the ESXi host can discover all targets, but then fails at the authentication step.
2 Device drivers in the virtual machine’s operating system communicate with the virtual SCSI controllers.
3 The virtual SCSI controller forwards the commands to the VMkernel.
4 The VMkernel performs the following tasks.
a Locates an appropriate virtual disk file in the VMFS volume.
b Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
Configuring iSCSI Adapters and Storage 11 Before ESXi can work with iSCSI SAN, you must set up your iSCSI environment. The process of preparing your iSCSI environment involves the following steps: Step Details Set up iSCSI storage For information, see your storage vendor documentation.
vSphere Storage n iSCSI Session Management ESXi iSCSI SAN Recommendations and Restrictions To work properly with iSCSI SAN, your ESXi environment must follow specific recommendations. In addition, several restrictions exist when you use ESXi with iSCSI SAN. iSCSI Storage Recommendations n Verify that your ESXi host supports the iSCSI SAN storage hardware and firmware. For an up-to-date list, see VMware Compatibility Guide.
vSphere Storage The independent hardware iSCSI adapter does not require VMkernel networking. You can configure network parameters, such as an IP address, subnet mask, and default gateway on the independent hardware iSCSI adapter. All types of iSCSI adapters support IPv4 and IPv6 protocols. iSCSI Adapter (vmhba) Description VMkernel Networking Adapter Network Settings Independent Hardware iSCSI Adapter Third-party adapter that offloads the iSCSI and network processing and management from your host.
vSphere Storage Prerequisites n Verify whether the adapter must be licensed. n Install the adapter on your ESXi host. For information about licensing, installation, and firmware updates, see vendor documentation. The process of setting up the independent hardware iSCSI adapter includes these steps. Step Description View Independent Hardware iSCSI Adapters View an independent hardware iSCSI adapter and verify that it is correctly installed and ready for configuration.
vSphere Storage Adapter Information Description Model Model of the adapter. iSCSI Name Unique name formed according to iSCSI standards that identifies the iSCSI adapter. You can edit the iSCSI name. iSCSI Alias A friendly name used instead of the iSCSI name. You can edit the iSCSI alias. IP Address Address assigned to the iSCSI HBA. Targets Number of targets accessed through the adapter. Devices All storage devices or LUNs the adapter can access.
vSphere Storage 7 Option Description Override Link-local address for IPv6 Override the link-local IP address by configuring a static IP address. Static IPv6 addresses a Click Add to add a new IPv6 address. b Enter the IPv6 address and subnet prefix length, and click OK. In the DNS settings section, provide IP addresses for a preferred DNS server and an alternate DNS server. You must provide both values.
vSphere Storage Step Description Set Up CHAP for Target You can also configure different CHAP credentials for each discovery address or static target. Enable Jumbo Frames for Networking If your iSCSI environment supports Jumbo Frames, enable them for the adapter. Dependent Hardware iSCSI Considerations When you use dependent hardware iSCSI adapters with ESXi, certain considerations apply.
vSphere Storage What to do next Although the dependent iSCSI adapter is enabled by default, to make it functional, you must set up networking for the iSCSI traffic and bind the adapter to the appropriate VMkernel iSCSI port. You then configure discovery addresses and CHAP parameters. Determine Association Between iSCSI and Network Adapters You create network connections to bind dependent iSCSI and physical network adapters.
vSphere Storage Step Description Activate or Disable the Software iSCSI Adapter Activate your software iSCSI adapter so that your host can use it to access iSCSI storage. Modify General Properties for iSCSI or iSER Adapters If needed, change the default iSCSI name and alias assigned to your adapter. Configure Port Binding for iSCSI or iSER Configure connections for the traffic between the iSCSI component and the physical network adapters.
vSphere Storage 3 Enable or disable the adapter. Option Description Enable the software iSCSI adapter a Under Storage, click Storage Adapters, and click the Add icon. b Select Software iSCSI Adapter and confirm that you want to add the adapter. The software iSCSI adapter (vmhba#) is enabled and appears on the list of storage adapters. After enabling the adapter, the host assigns the default iSCSI name to it. You can now complete the adapter configuration.
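If you manage hosts from the command line, the software iSCSI adapter can also be enabled with esxcli. A minimal sketch; the resulting vmhba number is assigned by the host.

# Enable the software iSCSI initiator on the host.
esxcli iscsi software set --enabled=true

# Confirm that software iSCSI is enabled.
esxcli iscsi software get

# List iSCSI adapters; the new software adapter appears as vmhba#.
esxcli iscsi adapter list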
vSphere Storage Step Description Set Up CHAP for iSCSI or iSER Adapter If your environment uses the Challenge Handshake Authentication Protocol (CHAP), configure it for your adapter. Set Up CHAP for Target You can also configure different CHAP credentials for each discovery address or static target. Enable Jumbo Frames for Networking If your environment supports Jumbo Frames, enable them for the adapter. Enable the VMware iSER Adapter Use the esxcli command to enable the VMware iSER adapter.
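As a sketch, enabling the iSER adapter from the ESXi Shell typically looks like the following. Run it on a host with an RDMA-capable NIC and verify the exact namespace against your ESXi 6.7 build.

# Enable the VMware iSER adapter on the host.
esxcli rdma iser add

# Verify that the iSER adapter (vmhba#) is listed together with the iSCSI adapters.
esxcli iscsi adapter list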
vSphere Storage Prerequisites Required privilege: Host .Configuration.Storage Partition Configuration Procedure 1 Navigate to the host. 2 Click the Configure tab. 3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure. 4 Click the Properties tab, and click Edit in the General panel. 5 (Optional) Modify the following general properties. Option Description iSCSI Name Unique name formed according to iSCSI standards that identifies the iSCSI adapter.
(Figure: the binding path runs from the iSCSI or iSER adapter (vmhba) through a VMkernel adapter (vmk) and a vSwitch to a physical NIC (vmnic), and across the IP network to the iSCSI or iSER storage.)
Follow these rules when configuring the port binding:
n You can connect the software iSCSI adapter with any physical NICs available on your host.
n The dependent iSCSI adapters must be connected only to their own physical NICs.
n You must connect the iSER adapter only to the RDMA-capable network adapter.
Figure 11‑1. 1:1 Adapter Mapping on Separate vSphere Standard Switches
An alternative is to add all NICs and VMkernel adapters to a single vSphere switch. In this case, you must override the default network setup and make sure that each VMkernel adapter maps to only one corresponding active physical adapter.
Figure 11‑2.
vSphere Storage Best Practices for Configuring Networking with Software iSCSI When you configure networking with software iSCSI, consider several best practices. Software iSCSI Port Binding You can bind the software iSCSI initiator on the ESXi host to a single or multiple VMkernel ports, so that iSCSI traffic flows only through the bound ports. When port binding is configured, the iSCSI initiator creates iSCSI sessions from all bound ports to all configured target portals. See the following examples.
(Figure: four VMkernel ports, vmk1 through vmk4, with addresses 192.168.0.1/24 through 192.168.0.4/24, bound to vmnic1 through vmnic4, all on the same subnet as the single target portal 192.168.0.10/24.)
In this example, all initiator ports and the target portal are configured in the same subnet. The target is reachable through all bound ports. You have four VMkernel ports and one target portal, so a total of four paths are created. Without the port binding, only one path is created.
Example 2.
Path 1 — vmk1 and Port0 of Controller A
Path 2 — vmk1 and Port0 of Controller B
Path 3 — vmk2 and Port1 of Controller A
Path 4 — vmk2 and Port2 of Controller B
Routing with Software iSCSI
You can use the esxcli command to add static routes for your iSCSI traffic. After you configure static routes, initiator and target ports in different subnets can communicate with each other.
Example 1.
You configure vmk1 and vmk2 in separate subnets, 192.168.1.0 and 192.168.2.0. Your target portals are also in separate subnets, 10.115.155.0 and 10.115.179.0. You can add the static route for 10.115.155.0 from vmk1. Make sure that the gateway is reachable from vmk1.
# esxcli network ip route ipv4 add -gateway 192.168.1.253 -network 10.115.155.0/24
You then add the static route for 10.115.179.0 from vmk2. Make sure that the gateway is reachable from vmk2.
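After adding the routes, it is worth confirming that they are present. A brief sketch using the same esxcli namespace as above:

# List the IPv4 routing table and confirm that the static routes
# for the target subnets point to the expected gateways.
esxcli network ip route ipv4 list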
vSphere Storage If you use a vSphere distributed switch with multiple uplink ports, for port binding, create a separate distributed port group per each physical NIC. Then set the team policy so that each distributed port group has only one active uplink port. For detailed information on distributed switches, see the vSphere Networking documentation. Procedure 1 Create a Single VMkernel Adapter for iSCSI Connect the VMkernel, which runs services for iSCSI storage, to a physical network adapter.
vSphere Storage 7 Specify the IP settings. 8 Review the information and click Finish. You created the virtual VMkernel adapter (vmk#) for a physical network adapter (vmnic#) on your host. What to do next If your host has one physical network adapter for iSCSI traffic, you must bind the virtual adapter that you created to the iSCSI adapter. If you have multiple network adapters, create additional VMkernel adapters and then perform iSCSI binding.
vSphere Storage c Make sure that you are using the existing switch, and click Next. d Complete configuration, and click Finish. What to do next Change the network policy for all VMkernel adapters, so that only one physical network adapter is active for each VMkernel adapter. You can then bind the VMkernel adapters to the appropriate iSCSI adapters.
vmk1 — Active Adapters: vmnic1; Unused Adapters: vmnic2
vmk2 — Active Adapters: vmnic2; Unused Adapters: vmnic1
What to do next
After you perform this task, bind the VMkernel adapters to the appropriate iSCSI adapters.
Bind iSCSI and VMkernel Adapters
Bind an iSCSI adapter with a VMkernel adapter.
Prerequisites
Create a virtual VMkernel adapter for each physical network adapter on your host.
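Port binding can also be configured with esxcli instead of the client. A minimal sketch, assuming the software iSCSI adapter is vmhba33 and the VMkernel adapters are vmk1 and vmk2; these names are placeholders.

# Bind each compliant VMkernel adapter to the iSCSI adapter.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Review the resulting port bindings.
esxcli iscsi networkportal list --adapter=vmhba33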
vSphere Storage Procedure 1 Navigate to the host. 2 Click the Configure tab. 3 Under Storage, click Storage Adapters, and select the appropriate iSCSI adapter from the list. 4 Click the Network Port Binding tab and click the View Details icon. 5 Review the VMkernel adapter information by switching between available tabs. Managing iSCSI Network Special considerations apply to network adapters, both physical and VMkernel, that are associated with an iSCSI adapter.
Solution
Follow the steps in Change Network Policy for iSCSI to set up the correct network policy for the iSCSI-bound VMkernel adapter.
Using Jumbo Frames with iSCSI
ESXi supports the use of Jumbo Frames with iSCSI. Jumbo Frames are Ethernet frames with a size that exceeds 1500 bytes. The maximum transmission unit (MTU) parameter is typically used to measure the size of Jumbo Frames.
vSphere Storage 5 On the Properties page, change the MTU parameter. This step sets the MTU for all physical NICs on that standard switch. Set the MTU value to the largest MTU size among all NICs connected to the standard switch. ESXi supports the MTU size of up to 9000 Bytes. Enable Jumbo Frames for Independent Hardware iSCSI To enable Jumbo Frames for independent hardware iSCSI adapters in the vSphere Client, change the default value of the maximum transmission units (MTU) parameter.
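For software and dependent hardware iSCSI, the same MTU change can also be made from the command line. A minimal sketch, assuming the iSCSI traffic uses standard switch vSwitch1 and VMkernel adapter vmk1; both names are placeholders.

# Set the MTU on the standard switch that carries iSCSI traffic.
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Set the matching MTU on the iSCSI-bound VMkernel adapter.
esxcli network ip interface set --interface-name=vmk1 --mtu=9000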
vSphere Storage The ESXi system supports these discovery methods: Dynamic Discovery Also known as SendTargets discovery. Each time the initiator contacts a specified iSCSI server, the initiator sends the SendTargets request to the server. The server responds by supplying a list of available targets to the initiator. The names and IP addresses of these targets appear on the Static Discovery tab.
vSphere Storage 5 Configure the discovery method. Discovery Method Description Dynamic Discovery a Click Dynamic Discovery and click Add. b Enter the IP address or DNS name of the storage system and click OK. c Rescan the iSCSI adapter. After establishing the SendTargets session with the iSCSI system, your host populates the Static Discovery list with all newly discovered targets. Static Discovery a Click Static Discovery and click Add.
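Discovery addresses can also be configured with esxcli. A minimal sketch, assuming adapter vmhba33, a storage system at 10.0.0.10:3260, and an example target IQN; all values are placeholders.

# Add a dynamic (SendTargets) discovery address.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.10:3260

# Or add a static target explicitly.
esxcli iscsi adapter discovery statictarget add --adapter=vmhba33 --address=10.0.0.10:3260 --name=iqn.2018-04.com.example:storage1

# Rescan the adapter to pick up the newly discovered targets.
esxcli storage core adapter rescan --adapter=vmhba33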
vSphere Storage Selecting CHAP Authentication Method ESXi supports unidirectional CHAP for all types of iSCSI initiators, and bidirectional CHAP for software and dependent hardware iSCSI. Before configuring CHAP, check whether CHAP is enabled at the iSCSI storage system. Also, obtain information about the CHAP authentication method the system supports. If CHAP is enabled, configure it for your initiators, making sure that the CHAP authentication credentials match the credentials on the iSCSI storage.
vSphere Storage Set Up CHAP for iSCSI or iSER Adapter When you set up CHAP name and secret at the iSCSI adapter level, all targets receive the same parameters from the adapter. By default, all discovery addresses or static targets inherit CHAP parameters that you set up at the adapter level. The CHAP name cannot exceed 511 alphanumeric characters and the CHAP secret cannot exceed 255 alphanumeric characters.
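Adapter-level CHAP can also be set through esxcli. A minimal sketch, assuming adapter vmhba33 and unidirectional CHAP; the adapter name, CHAP name, and secret are placeholders, and you can confirm the available options with esxcli iscsi adapter auth chap set --help.

# Configure unidirectional (one-way) CHAP at the adapter level.
esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=chapuser01 --secret=chapSecretValue

# Display the resulting CHAP settings.
esxcli iscsi adapter auth chap get --adapter=vmhba33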
vSphere Storage 8 Rescan the iSCSI adapter. If you change the CHAP parameters, they are used for new iSCSI sessions. For existing sessions, new settings are not used until you log out and log in again. Set Up CHAP for Target If you use software and dependent hardware iSCSI adapters, you can configure different CHAP credentials for each discovery address or static target. The CHAP name cannot exceed 511 and the CHAP secret 255 alphanumeric characters.
vSphere Storage 7 If configuring bidirectional CHAP, specify incoming CHAP credentials. Make sure to use different secrets for the outgoing and incoming CHAP. 8 Click OK. 9 Rescan the iSCSI adapter. If you change the CHAP parameters, they are used for new iSCSI sessions. For existing sessions, new settings are not used until you log out and login again. Configuring Advanced Parameters for iSCSI You might need to configure additional parameters for your iSCSI initiators.
Table 11‑3. Additional Parameters for iSCSI Initiators (Continued)
DefaultTimeToWait — Minimum time in seconds to wait before attempting a logout or an active task reassignment after an unexpected connection termination or reset.
DefaultTimeToRetain — Maximum time in seconds, during which reassigning the active task is still possible after a connection termination or reset.
vSphere Storage 4 5 Configure advanced parameters. n To configure advanced parameters at the adapter level, under Adapter Details, click the Advanced Options tab and click Edit. n Configure advanced parameters at the target level. a Click the Targets tab and click either Dynamic Discovery or Static Discovery. b From the list of available targets, select a target to configure and click Advanced Options. Enter any required values for the advanced parameters you want to modify.
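The same parameters can be inspected and changed with esxcli. A minimal sketch, assuming adapter vmhba33; use the keys reported by the get command, for example DefaultTimeToWait from Table 11‑3.

# List the advanced parameters and their current values for the adapter.
esxcli iscsi adapter param get --adapter=vmhba33

# Change a single parameter at the adapter level.
esxcli iscsi adapter param set --adapter=vmhba33 --key=DefaultTimeToWait --value=2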
vSphere Storage Procedure u To list iSCSI sessions, run the following command: esxcli iscsi session list The command takes these options: Option Description -A|--adapter=str The iSCSI adapter name, for example, vmhba34. -s|--isid=str The iSCSI session identifier. -n|--name=str The iSCSI target name, for example, iqn.X. Add iSCSI Sessions Use the vCLI to add an iSCSI session for a target you specify or to duplicate an existing session.
Procedure
u To remove a session, run the following command:
esxcli iscsi session remove
The command takes these options:
-A|--adapter=str — The iSCSI adapter name, for example, vmhba34. This option is required.
-s|--isid=str — The ISID of a session to remove. You can find it by listing all sessions.
-n|--name=str — The iSCSI target name, for example, iqn.X.
What to do next
Rescan the iSCSI adapter.
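As a usage sketch, the session commands from this chapter can be combined as follows; the adapter name vmhba34 and the target IQN are placeholders.

# List current sessions to find the target name or ISID you want to act on.
esxcli iscsi session list --adapter=vmhba34

# Add one more session to the same target (duplicates an existing session).
esxcli iscsi session add --adapter=vmhba34 --name=iqn.2018-04.com.example:storage1

# Remove the sessions established with that target, then rescan the adapter.
esxcli iscsi session remove --adapter=vmhba34 --name=iqn.2018-04.com.example:storage1
esxcli storage core adapter rescan --adapter=vmhba34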
Booting from iSCSI SAN 12 When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk. You can use boot from the SAN if you do not want to handle maintenance of local storage or have diskless hardware configurations, such as blade systems. ESXi supports different methods of booting from the iSCSI SAN. Table 12‑1.
vSphere Storage n Configure proper ACLs on your storage system. n The boot LUN must be visible only to the host that uses the LUN. No other host on the SAN is permitted to see that boot LUN. n n If a LUN is used for a VMFS datastore, multiple hosts can share the LUN. Configure a diagnostic partition. n With the independent hardware iSCSI only, you can place the diagnostic partition on the boot LUN.
vSphere Storage Configure Independent Hardware iSCSI Adapter for SAN Boot If your ESXi host uses an independent hardware iSCSI adapter, such as QLogic HBA, you can configure the adapter to boot from the SAN. This procedure discusses how to enable the QLogic iSCSI HBA to boot from the SAN. For more information and more up-to-date details about QLogic adapter configuration settings, see the QLogic website.
vSphere Storage 3 Select Primary Boot Device Settings. a Enter the discovery Target IP and Target Port. b Configure the Boot LUN and iSCSI Name parameters. n If only one iSCSI target and one LUN are available at the target address, leave Boot LUN and iSCSI Name blank. After your host reaches the target storage system, these text boxes are populated with appropriate information. n c 4 If more than one iSCSI target and LUN are available, supply values for Boot LUN and iSCSI Name. Save changes.
vSphere Storage 3 After the successful connection, the iSCSI boot firmware writes the networking and iSCSI boot parameters in to the iBFT. The firmware stores the table in the system memory. Note The system uses this table to configure its own iSCSI connection and networking and to start up. 4 The BIOS boots the boot device. 5 The VMkernel starts loading and takes over the boot operation. 6 Using the boot parameters from the iBFT, the VMkernel connects to the iSCSI target.
vSphere Storage 3 Install ESXi to iSCSI Target When setting up your host to boot from iBFT iSCSI, install the ESXi image to the target LUN. 4 Boot ESXi from iSCSI Target After preparing the host for an iBFT iSCSI boot and copying the ESXi image to the iSCSI target, perform the actual boot. Configure iSCSI Boot Parameters To begin an iSCSI boot process, a network adapter on your host must have a specially configured iSCSI boot firmware.
vSphere Storage Install ESXi to iSCSI Target When setting up your host to boot from iBFT iSCSI, install the ESXi image to the target LUN. Prerequisites n Configure iSCSI boot firmware on your boot NIC to point to the target LUN that you want to use as the boot LUN. n Change the boot sequence in the BIOS so that iSCSI precedes the DVD-ROM. n If you use Broadcom adapters, set Boot to iSCSI target to Disabled. Procedure 1 Insert the installation media in the CD/DVD-ROM drive and restart the host.
vSphere Storage Shared iSCSI and Management Networks Configure the networking and iSCSI parameters on the first network adapter on the host. After the host boots, you can add secondary network adapters to the default port group. Isolated iSCSI and Management Networks When you configure isolated iSCSI and management networks, follow these guidelines to avoid bandwidth problems. n Your isolated networks must be on different subnets.
vSphere Storage Problem A loss of network connectivity occurs after you delete a port group. Cause When you specify a gateway in the iBFT-enabled network adapter during ESXi installation, this gateway becomes the system's default gateway. If you delete the port group associated with the network adapter, the system's default gateway is lost. This action causes the loss of network connectivity. Solution Do not set an iBFT gateway unless it is required.
Best Practices for iSCSI Storage 13 When using ESXi with the iSCSI SAN, follow recommendations that VMware offers to avoid problems. Check with your storage representative if your storage system supports Storage API - Array Integration hardware acceleration features. If it does, refer to your vendor documentation to enable hardware acceleration support on the storage system side. For more information, see Chapter 24 Storage Hardware Acceleration.
vSphere Storage n Change LUN IDs only when VMFS datastores deployed on the LUNs have no running virtual machines. If you change the ID, virtual machines running on the VMFS datastore might fail. After you change the ID of the LUN, you must rescan your storage to reset the ID on your host. For information on using the rescan, see Storage Rescan Operations. n If you change the default iSCSI name of your iSCSI adapter, make sure that the name you enter is worldwide unique and properly formatted.
vSphere Storage Each server application must have access to its designated storage with the following conditions: n High I/O rate (number of I/O operations per second) n High throughput (megabytes per second) n Minimal latency (response times) Because each application has different requirements, you can meet these goals by selecting an appropriate RAID group on the storage system.
vSphere Storage Figure 13‑1. Single Ethernet Link Connection to Storage When systems read data from storage, the storage responds with sending enough data to fill the link between the storage systems and the Ethernet switch. It is unlikely that any single system or virtual machine gets full use of the network speed. However, this situation can be expected when many systems share one storage device. When writing data to storage, multiple systems or virtual machines might attempt to fill their links.
vSphere Storage If the transactions are large and multiple servers are sending data through a single switch port, an ability to buffer can be exceeded. In this case, the switch drops the data it cannot send, and the storage system must request a retransmission of the dropped packet. For example, if an Ethernet switch can buffer 32 KB, but the server sends 256 KB to the storage device, some of the data is dropped.
vSphere Storage Using VLANs or VPNs does not provide a suitable solution to the problem of link oversubscription in shared configurations. VLANs and other virtual partitioning of a network provide a way of logically designing a network. However, they do not change the physical capabilities of links and trunks between switches. When storage traffic and other network traffic share physical connections, oversubscription and lost packets might become possible.
Managing Storage Devices 14 Manage local and networked storage devices that your ESXi host has access to.
vSphere Storage Table 14‑1. Storage Device Information (Continued) Storage Device Information Description Operational State Indicates whether the device is attached or detached. For details, see Detach Storage Devices. LUN Logical Unit Number (LUN) within the SCSI target. The LUN number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0). Type Type of device, for example, disk or CD-ROM.
vSphere Storage 5 Use the icons to perform basic storage management tasks. Availability of specific icons depends on the device type and configuration. 6 Icon Description Refresh Refresh information about storage adapters, topology, and file systems. Rescan Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores. Detach Detach the selected device from the host. Attach Attach the selected device to the host.
vSphere Storage Icon Description Refresh Refresh information about storage adapters, topology, and file systems. Rescan Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores. Detach Detach the selected device from the host. Attach Attach the selected device to the host. Rename Change the display name of the selected device. Turn On LED Turn on the locator LED for the selected devices. Turn Off LED Turn off the locator LED for the selected devices.
vSphere Storage 4K Native Format with Software Emulation Another advanced format that ESXi supports is the 4Kn sector technology. In the 4Kn devices, both physical and logical sectors are 4096 bytes (4 KiB) in length. The device does not have an emulation layer, but exposes its 4Kn physical sector size directly to ESXi. ESXi detects and registers the 4Kn devices and automatically emulates them as 512e. The device is presented to upper layers in ESXi as 512e.
vSphere Storage Device Identifiers Depending on the type of storage, the ESXi host uses different algorithms and conventions to generate an identifier for each storage device. SCSI INQUIRY identifiers. The host uses the SCSI INQUIRY command to query a storage device. The host uses the resulting data, in particular the Page 83 information, to generate a unique identifier. Device identifiers that are based on Page 83 are unique across all hosts, persistent, and have one of the following formats: n naa.
vSphere Storage n LLUN is the LUN number that shows the position of the LUN within the target. The LUN number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0). For example, vmhba1:C0:T3:L1 represents LUN1 on target 3 accessed through the storage adapter vmhba1 and channel 0. Legacy Identifier In addition to the SCSI INQUIRY or mpx. identifiers, ESXi generates an alternative legacy name for each device. The identifier has the following format: vml.
vSphere Storage When you perform VMFS datastore management operations, such as creating a VMFS datastore or RDM, adding an extent, and increasing or deleting a VMFS datastore, your host or the vCenter Server automatically rescans and updates your storage. You can disable the automatic rescan feature by turning off the Host Rescan Filter. See Turn Off Storage Filters. In certain cases, you need to perform a manual rescan.
vSphere Storage Perform Adapter Rescan When you make changes in your SAN configuration and these changes are isolated to storage accessed through a specific adapter, perform rescan for only this adapter. Procedure 1 Navigate to the host. 2 Click the Configure tab. 3 Under Storage, click Storage Adapters, and select the adapter to rescan from the list. 4 Click the Rescan Adapter icon. Change the Number of Scanned Storage Devices The range of scanned LUN IDs for an ESXi host can be from 0 to 16,383.
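The same rescans can be triggered from the ESXi Shell or vCLI. A minimal sketch, in which vmhba1 is a hypothetical adapter name:
esxcli storage core adapter rescan --adapter=vmhba1
To rescan every adapter on the host in a single operation, use the --all option instead:
esxcli storage core adapter rescan --all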
vSphere Storage Identifying Device Connectivity Problems When your ESXi host experiences a problem while connecting to a storage device, the host treats the problem as permanent or temporary depending on certain factors. Storage connectivity problems are caused by a variety of reasons.
vSphere Storage If the device returns from the PDL condition, the host can discover it, but treats it as a new device. Data consistency for virtual machines on the recovered device is not guaranteed. Note When a device fails without sending appropriate SCSI sense codes or an iSCSI login rejection, the host cannot detect PDL conditions. In this case, the host continues to treat the device connectivity problems as APD even when the device fails permanently.
vSphere Storage Detach Storage Devices Safely detach a storage device from your host. You might need to detach the device to make it inaccessible to your host, when, for example, you perform a hardware upgrade on the storage side. Prerequisites n The device does not contain any datastores. n No virtual machines use the device as an RDM disk. n The device does not contain a diagnostic partition or a scratch partition. Procedure 1 Navigate to the host. 2 Click the Configure tab.
vSphere Storage n Operational state of the device changes to Lost Communication. n All paths are shown as Dead. n A warning about the device being permanently inaccessible appears in the VMkernel log file. To recover from the unplanned PDL condition and remove the unavailable device from the host, perform the following tasks. Task Description Power off and unregister all virtual machines that are running on the datastore affected by the PDL condition. See vSphere Virtual Machine Administration.
vSphere Storage Even though the device and datastores are unavailable, virtual machines remain responsive. You can power off the virtual machines or migrate them to a different datastore or host. If later the device paths become operational, the host can resume I/O to the device and end the special APD treatment. Disable Storage APD Handling The storage all paths down (APD) handling on your ESXi host is enabled by default.
vSphere Storage Procedure 1 Navigate to the host. 2 Click the Configure tab. 3 Under System, click Advanced System Settings. 4 In the Advanced System Settings table, select the Misc.APDTimeout parameter and click the Edit icon. 5 Change the default value. You can enter a value between 20 and 99999 seconds. Verify the Connection Status of a Storage Device Use the esxcli command to verify the connection status of a particular storage device.
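A minimal command-line sketch for checking the state of one device; naa.XXXXXXXXXXXXXXXX is a placeholder for your device identifier:
esxcli storage core device list -d naa.XXXXXXXXXXXXXXXX
In the output, the Status field shows whether the device is connected (for example, on) or whether the host currently treats it as being in an APD or PDL condition.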
vSphere Storage Procedure 1 Navigate to the host. 2 Click the Configure tab. 3 Under Storage, click Storage Devices. 4 From the list of storage devices, select one or more disks and enable or disable the locator LED indicator. Option Description Enable Click the Turns on the locator LED icon. Disable Click the Turns off the locator LED icon. Erase Storage Devices Certain functionalities, such as vSAN or virtual flash resource, require that you use clean devices.
Working with Flash Devices 15 In addition to the regular storage hard disk drives (HDDs), ESXi supports flash storage devices. Unlike the regular HDDs that are electromechanical devices containing moving parts, the flash devices use semiconductors as their storage medium and have no moving parts. Typically, the flash devices are resilient and provide faster access to data. To detect flash devices, ESXi uses an inquiry mechanism based on T10 standards.
Table 15‑1. Using Flash Devices with ESXi
vSAN: vSAN requires flash devices. For more information, see the Administering VMware vSAN documentation.
VMFS Datastores: You can create VMFS datastores on flash devices. Use the datastores for the following purposes:
n Store virtual machines. Certain guest operating systems can identify virtual disks stored on these datastores as flash virtual disks. See Identifying Flash Virtual Disks.
Virtual Flash Resource (VFFS):
vSphere Storage However, ESXi might not recognize certain storage devices as flash devices when their vendors do not support automatic flash device detection. In other cases, certain devices might not be detected as local, and ESXi marks them as remote. When devices are not recognized as the local flash devices, they are excluded from the list of devices offered for vSAN or virtual flash resource. Marking these devices as local flash makes them available for vSAN and virtual flash resource.
vSphere Storage 2 Click the Configure tab. 3 Under Storage, click Storage Devices. 4 From the list of storage devices, select one or several remote devices and click the Mark as Local icon. 5 Click Yes to save your changes. Monitor Flash Devices You can monitor certain critical flash device parameters, including Media Wearout Indicator, Temperature, and Reallocated Sector Count, from an ESXi host. Use the esxcli command to monitor flash devices.
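As a sketch of this kind of monitoring, the following command reads the wear-related parameters from a single device; the device identifier is a placeholder:
esxcli storage core device smart get -d naa.XXXXXXXXXXXXXXXX
When the device reports them, the output lists parameters such as Media Wearout Indicator, Temperature, and Reallocated Sector Count together with their current and threshold values.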
vSphere Storage Procedure 1 Obtain the total number of blocks written to the flash device since the last reboot. Run the esxcli storage core device stats get -d=device_ID command. For example: ~ # esxcli storage core device stats get -d t10.xxxxxxxxxxxxxxx Device: t10.xxxxxxxxxxxxxxx Successful Commands: xxxxxxx Blocks Read: xxxxxxxx Blocks Written: 629145600 Read Operations: xxxxxxxx The Blocks Written item in the output shows the number of blocks written to the device since the last reboot.
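As a rough worked example, assuming 512-byte logical blocks, 629145600 blocks multiplied by 512 bytes is about 322 GB (300 GiB) written since the last reboot. Dividing that figure by the number of days since the reboot gives an average daily write volume that you can compare against the endurance rating published by the device vendor.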
vSphere Storage n I/O caching filters, if required by your vendors. See Chapter 23 Filtering Virtual Machine I/O. Before setting up the virtual flash resource, make sure that you use devices approved by the VMware Compatibility Guide. Considerations for Virtual Flash Resource When you configure a virtual flash resource to be used by ESXi hosts and virtual machines, several considerations apply. n You can have only one virtual flash resource, also called a VFFS volume, on a single ESXi host.
vSphere Storage 4 From the list of available flash devices, select one or more devices to use for the virtual flash resource and click OK. Under certain circumstances, you might not be able to see flash devices on the list. For more information, see Marking Storage Devices. The virtual flash resource is created. The Device Backing area lists all devices that you use for the virtual flash resource.
4 Select the setting to change and click the Edit button.
VFLASH.VFlashResourceUsageThreshold: The system triggers the Host vFlash resource usage alarm when virtual flash resource usage exceeds this threshold.
VFLASH.MaxResourceGBForVmCache: An ESXi host stores Flash Read Cache metadata in RAM. The default limit of total virtual machine cache size on the host is 2 TB. You can reconfigure this setting. You must restart the host for the new setting to take effect.
4 Select the flash datastore in the list and click the Edit virtual flash host swap cache properties icon. 5 Allocate appropriate space for the host cache. 6 Click OK. Configure Host Swap Cache with Virtual Flash Resource You can reserve a certain amount of virtual flash resource for host swap cache. Note This task is available only in the vSphere Web Client. Prerequisites 1 Set up a virtual flash resource. See Set Up Virtual Flash Resource.
About VMware vSphere Flash Read Cache 16 Flash Read Cache™ can accelerate virtual machine performance by using host-resident flash devices as a cache. You can reserve a Flash Read Cache for any individual virtual disk. The Flash Read Cache is created only when a virtual machine is powered on. It is discarded when a virtual machine is suspended or powered off. When you migrate a virtual machine, you can migrate the cache.
vSphere Storage DRS Support for Flash Read Cache DRS supports virtual flash as a resource. DRS manages virtual machines with Flash Read Cache reservations. Every time DRS runs, it displays the available virtual flash capacity reported by the ESXi host. Each host supports one virtual flash resource. DRS selects a host that has sufficient available virtual flash capacity to start a virtual machine.
4 To enable Flash Read Cache for the virtual machine, enter a value in the Virtual Flash Read Cache text box. 5 Click Advanced to specify the following parameters. Note This option is available only in the vSphere Web Client. Reservation: Select a cache size reservation. Block Size: Select a block size. 6 Click OK.
5 If you have multiple virtual disks with Flash Read Cache, you can adjust the migration setting for each individual disk. a Click Advanced. b Select a virtual disk for which you want to modify the migration setting. c From the drop-down menu in the Virtual Flash Read Cache Migration Setting column, select an appropriate option. 6 Complete your migration configuration and click Finish.
Working with Datastores 17 Datastores are logical containers, analogous to file systems, that hide specifics of physical storage and provide a uniform model for storing virtual machine files. Datastores can also be used for storing ISO images, virtual machine templates, and floppy images.
vSphere Storage Table 17‑1. Types of Datastores Datastore Type Description VMFS (version 5 and 6) Datastores that you deploy on block storage devices use the vSphere Virtual Machine File System (VMFS) format. VMFS is a special high-performance file system format that is optimized for storing virtual machines. See Understanding VMFS Datastores. NFS (version 3 and 4.1) An NFS client built into ESXi uses the Network File System (NFS) protocol over TCP/IP to access a designated NFS volume.
vSphere Storage You can increase the capacity of the datastore while the virtual machines are running on the datastore. This ability lets you add new space to your VMFS datastores as your virtual machine requires it. VMFS is designed for concurrent access from multiple physical machines and enforces the appropriate access controls on the virtual machine files. Versions of VMFS Datastores Several versions of the VMFS file system have been released since its introduction.
Table 17‑3. Comparing VMFS5 and VMFS6 (Continued)
Support for virtual machines with large capacity virtual disks, or disks greater than 2 TB: VMFS5 Yes, VMFS6 Yes.
Support of small files of 1 KB: VMFS5 Yes, VMFS6 Yes.
Default use of ATS-only locking mechanisms on storage devices that support ATS (see VMFS Locking Mechanisms): VMFS5 Yes, VMFS6 Yes.
Block size: VMFS5 standard 1 MB, VMFS6 standard 1 MB.
Default snapshots: VMFS5 VMFSsparse for virtual disks smaller than 2 TB and SEsparse for larger disks, VMFS6 SEsparse.
vSphere Storage VMFS Datastores as Repositories ESXi can format SCSI-based storage devices as VMFS datastores. VMFS datastores primarily serve as repositories for virtual machines. Note Always have only one VMFS datastore for each LUN. You can store multiple virtual machines on the same VMFS datastore. Each virtual machine, encapsulated in a set of files, occupies a separate single directory.
vSphere Storage You can distribute virtual machines across different physical servers. That means you run a mix of virtual machines on each server, so that not all experience high demand in the same area at the same time. If a server fails, you can restart virtual machines on another physical server. If the failure occurs, the on-disk lock for each virtual machine is released. For more information about VMware DRS, see the vSphere Resource Management documentation.
vSphere Storage ATS-Only Mechanism For storage devices that support T10 standard-based VAAI specifications, VMFS provides ATS locking, also called hardware assisted locking. The ATS algorithm supports discrete locking per disk sector. All newly formatted VMFS5 and VMFS6 datastores use the ATS-only mechanism if the underlying storage supports it, and never use SCSI reservations. When you create a multi-extent datastore where ATS is used, vCenter Server filters out non-ATS devices.
Table 17‑4. VMFS Locking Information
Locking Modes: Indicates the locking configuration of the datastore. Possible values:
ATS-only: The datastore is configured to use the ATS-only locking mode.
ATS+SCSI: The datastore is configured to use the ATS mode. If ATS fails or is not supported, the datastore can revert to SCSI.
ATS upgrade pending: The datastore is in the process of an online upgrade to the ATS-only mode.
Procedure 1 Prepare for an Upgrade to ATS-Only Locking You must perform several steps to prepare your environment for an online or offline upgrade to ATS-only locking. 2 Upgrade Locking Mechanism to the ATS-Only Type If a VMFS datastore is ATS-only compatible, you can upgrade its locking mechanism from ATS+SCSI to ATS-only. Prepare for an Upgrade to ATS-Only Locking You must perform several steps to prepare your environment for an online or offline upgrade to ATS-only locking.
Procedure 1 Perform an upgrade of the locking mechanism by running the following command: esxcli storage vmfs lockmode set -a|--ats -l|--volume-label=VMFS label -u|--volume-uuid=VMFS UUID. 2 For an online upgrade, perform additional steps. a Close the datastore on all hosts that have access to the datastore, so that the hosts can recognize the change. You can use one of the following methods: n Unmount and mount the datastore.
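A minimal sketch of the step 1 command with the label option filled in; DS1 is a hypothetical datastore label:
esxcli storage vmfs lockmode set --ats --volume-label=DS1
You can confirm the resulting locking mode afterward with:
esxcli storage vmfs lockmode list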
Sparse disks use the copy-on-write mechanism, in which the virtual disk contains no data, until the data is copied there by a write operation. This optimization saves storage space. Depending on the type of your datastore, delta disks use different sparse formats.
VMFSsparse: used on VMFS5 for virtual disks smaller than 2 TB; not applicable to VMFS6.
SEsparse: used on VMFS5 for virtual disks larger than 2 TB; used on VMFS6 for all disks.
vSphere Storage VMFS5 Datastores You cannot upgrade a VMFS5 datastore to VMFS6. If you have a VMFS5 datastore in your environment, create a VMFS6 datastore and migrate virtual machines from the VMFS5 datastore to VMFS6. VMFS3 Datastores ESXi no longer supports VMFS3 datastores. The ESXi host automatically upgrades VMFS3 to VMFS5 when mounting existing datastores. The host performs the upgrade operation in the following circumstances: n At the first boot after an upgrade to ESXi 6.
vSphere Storage 3 Repeat the upgrade operation. See Upgrade a VMFS3 Datastore to VMFS5. Upgrade a VMFS3 Datastore to VMFS5 After you mount the VMFS3 datastore that failed to upgrade automatically, upgrade it to VMFS5. Procedure 1 Navigate to the VMFS3 datastore. 2 Select Upgrade to VMFS5 from the right-click menu. 3 Verify that the hosts accessing the datastore support VMFS5. 4 Click OK to start the upgrade. 5 Perform a rescan on all hosts that are associated with the datastore.
Table: NFS Protocol Characteristics, comparing NFS version 3 and NFS version 4.1.
vSphere Storage NFS Upgrades When you upgrade ESXi from a version earlier than 6.5, existing NFS 4.1 datastores automatically begin supporting functionalities that were not available in the previous ESXi release. These functionalities include Virtual Volumes, hardware acceleration, and so on. ESXi does not support automatic datastore conversions from NFS version 3 to NFS 4.1. If you want to upgrade your NFS 3 datastore, the following options are available: n Create the NFS 4.
n NFS and Hardware Acceleration Virtual disks created on NFS datastores are thin-provisioned by default. To be able to create thick-provisioned virtual disks, you must use hardware acceleration that supports the Reserve Space operation. n NFS Datastores When you create an NFS datastore, make sure to follow specific guidelines. NFS Server Configuration When you configure NFS servers to work with ESXi, follow the recommendations of your storage vendor.
vSphere Storage n If you use multiple ports for NFS traffic, make sure that you correctly configure your virtual switches and physical switches. n NFS 3 and NFS 4.1 support IPv6. NFS File Locking File locking mechanisms are used to restrict access to data stored on a server to only one user or process at a time. NFS 3 and NFS 4.1 use incompatible file locking mechanisms. NFS 3 locking on ESXi does not use the Network Lock Manager (NLM) protocol. Instead, VMware provides its own locking protocol.
NFS 4.1 provides multipathing for servers that support session trunking. When trunking is available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is not supported. NFS and Hardware Acceleration Virtual disks created on NFS datastores are thin-provisioned by default. To be able to create thick-provisioned virtual disks, you must use hardware acceleration that supports the Reserve Space operation. NFS 3 and NFS 4.
vSphere Storage The behavior of the NFS Client rule set (nfsClient) is different from other rule sets. For more information about firewall configurations, see the vSphere Security documentation. NFS Client Firewall Behavior The NFS Client firewall rule set behaves differently than other ESXi firewall rule sets. ESXi configures NFS Client settings when you mount or unmount an NFS datastore. The behavior differs for different versions of NFS.
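You can also inspect the resulting rule sets from the command line. A minimal sketch; the rule set names, typically nfsClient for NFS 3 and nfs41client for NFS 4.1, can vary by release:
esxcli network firewall ruleset list
The Enabled column in the output shows whether each NFS rule set is currently open.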
vSphere Storage Procedure 1 Navigate to the host. 2 Click the Configure tab. 3 Under System, click Firewall, and click Edit. 4 Scroll down to an appropriate version of NFS to make sure that the port is open. Using Layer 3 Routed Connections to Access NFS Storage When you use Layer 3 (L3) routed connections to access NFS storage, consider certain requirements and restrictions.
vSphere Storage n Kerberos for authentication and data integrity (krb5i), in addition to identity verification, provides data integrity services. These services help to protect the NFS traffic from tampering by checking data packets for any potential modifications. Kerberos supports cryptographic algorithms that prevent unauthorized users from gaining access to NFS traffic. The NFS 4.
Procedure 1 On the NFS server, configure an NFS volume and export it to be mounted on the ESXi hosts. a Note the IP address or the DNS name of the NFS server and the full path, or folder name, for the NFS share. For NFS 4.1, you can collect multiple IP addresses or DNS names to use the multipathing support that the NFS 4.1 datastore provides. b If you plan to use Kerberos authentication with NFS 4.1, specify the Kerberos credentials to be used by ESXi for authentication.
vSphere Storage 3 Enable Kerberos Authentication in Active Directory If you use NFS 4.1 storage with Kerberos, you must add each ESXi host to an Active Directory domain and enable Kerberos authentication. Kerberos integrates with Active Directory to enable single sign-on and provides an extra layer of security when used across an insecure network connection. What to do next After you configure your host for Kerberos, you can create an NFS 4.1 datastore with Kerberos enabled. Configure DNS for NFS 4.
c To synchronize with the NTP server, enter its IP addresses. d Click Start or Restart in the NTP Service Status section. 5 Click OK. The host synchronizes with the NTP server. Enable Kerberos Authentication in Active Directory If you use NFS 4.1 storage with Kerberos, you must add each ESXi host to an Active Directory domain and enable Kerberos authentication.
vSphere Storage n Create a VMFS Datastore VMFS datastores serve as repositories for virtual machines. You can set up VMFS datastores on any SCSI-based storage devices that the host discovers, including Fibre Channel, iSCSI, and local storage devices. n Create an NFS Datastore You can use the New Datastore wizard to mount an NFS volume. n Create a Virtual Volumes Datastore You use the New Datastore wizard to create a Virtual Volumes datastore.
7 Define configuration details for the datastore. a Specify partition configuration.
Use all available partitions: Dedicates the entire disk to a single VMFS datastore. If you select this option, all file systems and data currently stored on this device are destroyed.
Use free space: Deploys a VMFS datastore in the remaining free space of the disk.
4 Enter the datastore parameters.
Datastore name: The system enforces a 42-character limit for the datastore name.
Folder: The mount point folder name.
Server: The server name or IP address. You can use IPv6 or IPv4 formats. With NFS 4.1, you can add multiple IP addresses or server names if the NFS server supports trunking. The ESXi host uses these values to achieve multipathing to the NFS server mount point.
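Equivalent mounts can also be created with esxcli. A minimal sketch in which the server addresses, export path, and datastore names are placeholders:
esxcli storage nfs add --host=192.0.2.10 --share=/export/vms --volume-name=NFS3_DS
esxcli storage nfs41 add --hosts=192.0.2.10,192.0.2.11 --share=/export/vms --volume-name=NFS41_DS
The --hosts option in the nfs41 namespace accepts a comma-separated list of addresses when the server supports trunking.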
vSphere Storage What to do next After you create the Virtual Volumes datastore, you can perform such datastore operations as renaming the datastore, browsing datastore files, unmounting the datastore, and so on. You cannot add the Virtual Volumes datastore to a datastore cluster. Managing Duplicate VMFS Datastores When a storage device contains a VMFS datastore copy, you can mount the datastore with the existing signature or assign a new signature.
vSphere Storage n The resignaturing process is fault tolerant. If the process is interrupted, you can resume it later. n You can mount the new VMFS datastore without a risk of its UUID conflicting with UUIDs of any other datastore from the hierarchy of device snapshots. Mount a VMFS Datastore Copy Use datastore resignaturing if you want to retain the data stored on the VMFS datastore copy. If you do not need to resignature the VMFS datastore copy, you can mount it without changing its signature.
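The same operations are available through esxcli. A minimal sketch, where the label is a placeholder for the detected datastore copy:
esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot mount -l snapshot_datastore_label
esxcli storage vmfs snapshot resignature -l snapshot_datastore_label
Use mount to keep the existing signature, or resignature to assign a new one.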
vSphere Storage n Dynamically add the extent. The datastore can span over up to 32 extents with the size of each extent of more than 2 TB, yet appear as a single volume. The spanned VMFS datastore can use any or all its extents at any time. It does not need to fill up a particular extent before using the next one. Increase VMFS Datastore Capacity You can dynamically increase the capacity of a VMFS datastore.
vSphere Storage Administrative Operations for Datastores After creating datastores, you can perform several administrative operations on the datastores. Certain operations, such as renaming a datastore, are available for all types of datastores. Others apply to specific types of datastores. n Change Datastore Name You can change the name of an existing datastore. n Unmount Datastores When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you specify.
vSphere Storage Unmount Datastores When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you specify. The datastore continues to appear on other hosts, where it remains mounted. Do not perform any configuration operations that might result in I/O to the datastore while the unmounting is in progress. Note Make sure that the datastore is not used by vSphere HA Heartbeating. vSphere HA Heartbeating does not prevent you from unmounting the datastore.
vSphere Storage If you have unmounted an NFS or a Virtual Volumes datastore from all hosts, the datastore disappears from the inventory. To mount the NFS or Virtual Volumes datastore that has been removed from the inventory, use the New Datastore wizard. A datastore of any type that is unmounted from some hosts while being mounted on others, is shown as active in the inventory. Procedure 1 Navigate to the datastore.
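An unmount can also be performed from the command line. A minimal sketch with a hypothetical datastore label:
esxcli storage filesystem list
esxcli storage filesystem unmount --volume-label=DS1
The list command shows the mounted state of each datastore before and after the operation.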
vSphere Storage Use Datastore Browser Use the datastore file browser to manage contents of your datastores. You can browse folders and files that are stored on the datastore. You can also use the browser to upload files and perform administrative tasks on your folders and files. Procedure 1 Open the datastore browser. a Display the datastore in the inventory. b Right-click the datastore and select Browse Files. 2 Explore the contents of the datastore by navigating to existing folders and files.
Prerequisites Required privilege: Datastore.Browse Datastore Procedure 1 Open the datastore browser. a Display the datastore in the inventory. b Right-click the datastore and select Browse Files. 2 (Optional) Create a folder to store the file or folder. 3 Upload the file or folder.
Upload a file: a Select the target folder and click Upload Files.
Upload a folder (available only in the vSphere Client):
vSphere Storage Move or Copy Datastore Folders or Files Use the datastore browser to move or copy folders or files to a new location, either on the same datastore or on a different datastore. Note Virtual disk files are moved or copied without format conversion. If you move a virtual disk to a datastore that belongs to a host different from the source host, you might need to convert the virtual disk. Otherwise, you might not be able to use the disk. You cannot copy VM files across vCenter Servers.
vSphere Storage Inflate Thin Virtual Disks If you created a virtual disk in the thin format, you can change the format to thick. You use the datastore browser to inflate the thin virtual disk. Prerequisites n Make sure that the datastore where the virtual machine resides has enough space. n Make sure that the virtual disk is thin. n Remove snapshots. n Power off your virtual machine. Procedure 1 Navigate to the folder of the virtual disk you want to inflate. a Navigate to the virtual machine.
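As an alternative to the datastore browser, the inflate operation can be run from the ESXi Shell with vmkfstools. A minimal sketch; the datastore and virtual machine names in the path are placeholders:
vmkfstools --inflatedisk /vmfs/volumes/DS1/vm1/vm1.vmdk
The command allocates the full provisioned size of the disk on the datastore, so make sure enough free space is available before you run it.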
Procedure 1 Browse to the vCenter Server instance. 2 Click the Configure tab. 3 Under Settings, click Advanced Settings, and click Edit. 4 Specify the filter to turn off. In the Name and Value text boxes at the bottom of the page, enter the appropriate information.
config.vpxd.filter.vmfsFilter: False
config.vpxd.filter.rdmFilter: False
config.vpxd.filter.sameHostsAndTransportsFilter: False
config.vpxd.filter.hostRescanFilter: False
Table 17‑6. Storage Filters (Continued)
config.vpxd.filter.sameHostsAndTransportsFilter (Same Hosts and Transports Filter): Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility. Prevents you from adding the following LUNs as extents: n LUNs not exposed to all hosts that share the original VMFS datastore.
config.vpxd.filter.hostRescanFilter (Host Rescan Filter):
d Click Add Row and add the following parameters:
scsi#.returnNoConnectDuringAPD: True
scsi#.returnBusyOnNoConnectStatus: False
e Click OK.
Collecting Diagnostic Information for ESXi Hosts on a Storage Device During a host failure, ESXi must be able to save diagnostic information to a preconfigured location for diagnostic and technical support purposes.
vSphere Storage n If a host that uses a shared diagnostic partition fails, reboot the host and extract log files immediately after the failure. Otherwise, the second host that fails before you collect the diagnostic data of the first host might save the core dump. Procedure 1 Navigate to the host. 2 Right-click the host, and select Add Diagnostic Partition. If you do not see this menu item, the host already has a diagnostic partition. 3 Specify the type of diagnostic partition.
vSphere Storage Set Up a File as Core Dump Location If the size of your available core dump partition is insufficient, you can configure ESXi to use a file for diagnostic information. Typically, a core dump partition of 2.5 GB is created during ESXi installation. For upgrades from ESXi 5.0 and earlier, the core dump partition is limited to 100 MB. For this type of upgrade, during the boot process the system might create a core dump file on a VMFS datastore.
vSphere Storage 3 Activate the core dump file for the host: esxcli system coredump file set The command takes the following options: Option Description --path | -p The path of the core dump file to use. The file must be pre-allocated. --smart | -s This flag can be used only with --enable | -e=true. It causes the file to be selected using the smart selection algorithm.
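As a brief sketch of the whole sequence, with a hypothetical datastore and file name, you might first create the file and then activate it with the options described above:
esxcli system coredump file add --datastore=DS1 --file=coredump1
esxcli system coredump file set --smart --enable=true
esxcli system coredump file list
The list command confirms which dump file is configured and active.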
2 Remove the file from the VMFS datastore: esxcli system coredump file remove --file | -f file_name The command takes the following options: Option Description --file | -f Enter the name of the dump file to be removed. If you do not enter the name, the command removes the default configured core dump file. --force | -F Deactivate and unconfigure the dump file being removed. This option is required if the file has not been previously deactivated and is active.
#esxcli storage vmfs extent list
The Device Name and Partition columns in the output identify the device. For example:
Volume Name 1TB_VMFS5, Device Name naa.00000000000000000000000000000703, Partition 3
Check for VMFS errors. Provide the absolute path to the device partition that backs the VMFS datastore, and provide a partition number with the device name. For example: # voma -m vmfs -f check -d /vmfs/devices/disks/naa.
vSphere Storage Table 17‑7. VOMA Command Options (Continued) Command Option Description -v | --version Display the version of VOMA. -h | --help Display the help message for the VOMA command. For more details, see the VMware Knowledge Base article 2036767. Configuring VMFS Pointer Block Cache Pointer blocks, also called indirection blocks, are file system resources that contain addresses to VMFS file blocks.
vSphere Storage Obtain Information for VMFS Pointer Block Cache You can get information about VMFS pointer block cache use. This information helps you understand how much space the pointer block cache consumes. You can also determine whether you must adjust the minimum and maximum sizes of the pointer block cache. Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces.
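A minimal sketch of the command that reports these statistics; exact output fields can vary by release:
esxcli storage vmfs pbcache get
The output includes the current pointer block cache size and usage, which you can compare against the configured minimum and maximum values.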
vSphere Storage 4 In Advanced System Settings, select the appropriate item. Option Description VMFS3.MinAddressableSpaceTB Minimum size of all open files that VMFS cache guarantees to support. VMFS3.MaxAddressableSpaceTB Maximum size of all open files that VMFS cache supports before eviction starts. 5 Click the Edit button and change the value. 6 Click OK.
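The same parameters can be changed from the command line. A minimal sketch; the value 64 is only an example and must suit your environment:
esxcli system settings advanced set -o /VMFS3/MaxAddressableSpaceTB -i 64
esxcli system settings advanced list -o /VMFS3/MaxAddressableSpaceTB
The list command shows the current and default values for the option.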
Understanding Multipathing and Failover 18 To maintain a constant connection between a host and its storage, ESXi supports multipathing. With multipathing, you can use more than one physical path that transfers data between the host and an external storage device. If a failure of any element in the SAN network, such as an adapter, switch, or cable, occurs, ESXi can switch to another viable physical path. This process of path switching to avoid failed components is known as path failover.
vSphere Storage In the following illustration, multiple physical paths connect each server with the storage device. For example, if HBA1 or the link between HBA1 and the FC switch fails, HBA2 takes over and provides the connection. The process of one HBA taking over for another is called HBA failover. Figure 18‑1.
Figure 18‑2. Host-Based Path Failover. The diagram shows a hardware iSCSI host with HBA1 and HBA2 and a software iSCSI host with a software adapter, NIC1, and NIC2, all connecting through the IP network to the SP of the iSCSI storage.
Hardware iSCSI and Failover With hardware iSCSI, the host typically has two or more hardware iSCSI adapters. The host uses the adapters to reach the storage system through one or more switches.
vSphere Storage Array-Based Failover with iSCSI Some iSCSI storage systems manage path use of their ports automatically and transparently to ESXi. When using one of these storage systems, your host does not see multiple ports on the storage and cannot choose the storage port it connects to. These systems have a single virtual port address that your host uses to initially communicate.
Figure 18‑4. Port Reassignment. The diagram shows the virtual port address 10.0.0.1 being reassigned so that the storage port that answers on 10.0.0.2 also answers on 10.0.0.1.
With this form of array-based failover, you can have multiple paths to the storage only if you use multiple ports on the ESXi host. These paths are active-active. For additional information, see iSCSI Session Management. Path Failover and Virtual Machines A path failover occurs when the active path to a LUN is changed from one path to another.
vSphere Storage 4 Double-click TimeOutValue. 5 Set the value data to 0x3c (hexadecimal) or 60 (decimal) and click OK. After you make this change, Windows waits at least 60 seconds for delayed disk operations to finish before it generates errors. 6 Reboot guest OS for the change to take effect. Pluggable Storage Architecture and Path Management This topic introduces the key concepts behind the ESXi storage multipathing.
vSphere Storage The MPP claim rules are ordered. Lower rule numbers have preference over higher rule numbers. The NMP claim rules are not ordered. Table 18‑1. Multipathing Acronyms Acronym Definition PSA Pluggable Storage Architecture NMP Native Multipathing Plug-in. Generic VMware multipathing module. PSP Path Selection Plug-in. Handles path selection for a given device. SATP Storage Array Type Plug-in. Handles path failover for a given storage array. MPP (third-party) Multipathing Plug-in.
vSphere Storage As the Pluggable Storage Architecture illustration shows, multiple third-party MPPs can run in parallel with the VMware NMP or HPP. When installed, the third-party MPPs can replace the behavior of the native modules. The MPPs can take control of the path failover and the load-balancing operations for the specified storage devices. Figure 18‑5.
vSphere Storage VMware NMP Flow of I/O When a virtual machine issues an I/O request to a storage device managed by the NMP, the following process takes place. 1 The NMP calls the PSP assigned to this storage device. 2 The PSP selects an appropriate physical path on which to issue the I/O. 3 The NMP issues the I/O request on the path selected by the PSP. 4 If the I/O operation is successful, the NMP reports its completion. 5 If the I/O operation reports an error, the NMP calls the appropriate SATP.
vSphere Storage Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell. Procedure u To list all storage devices, run the following command: esxcli storage nmp device list Use the --device | -d=device_ID parameter to filter the output of this command to show a single device. Example: Displaying NMP Storage Devices # esxcli storage nmp device list mpx.
vSphere Storage Each PSP enables and enforces a corresponding path selection policy. VMW_PSP_MRU — Most Recently Used (VMware) The Most Recently Used (VMware) policy is enforced by VMW_PSP_MRU. It selects the first working path discovered at system boot time. When the path becomes unavailable, the host selects an alternative path. The host does not revert to the original path when that path becomes available. The Most Recently Used policy does not use the preferred path setting.
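If you prefer to change the assignment from the command line, a minimal sketch with a placeholder device identifier follows; the PSP name must be one of the policies available on the host:
esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR
esxcli storage nmp device list -d naa.XXXXXXXXXXXXXXXX
The second command confirms the Path Selection Policy now associated with the device.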
vSphere Storage Generally, the NMP determines which SATP to use for a specific storage device and associates the SATP with the physical paths for that storage device. The SATP implements the tasks that include the following: n Monitors the health of each physical path. n Reports changes in the state of each physical path. n Performs array-specific actions necessary for storage fail-over. For example, for active-passive devices, it can activate passive paths.
Example: Displaying SATPs for the Host
# esxcli storage nmp satp list
Name and Default PSP:
VMW_SATP_MSA: VMW_PSP_MRU
VMW_SATP_ALUA: VMW_PSP_MRU
VMW_SATP_DEFAULT_AP: VMW_PSP_MRU
VMW_SATP_SVC: VMW_PSP_FIXED
VMW_SATP_EQL: VMW_PSP_FIXED
VMW_SATP_INV: VMW_PSP_FIXED
VMW_SATP_EVA: VMW_PSP_FIXED
VMW_SATP_ALUA_CX: VMW_PSP_RR
VMW_SATP_SYMM: VMW_PSP_RR
VMW_SATP_CX: VMW_PSP_MRU
VMW_SATP_LSI: VMW_PSP_MRU
VMW_SATP_DEFAULT_AA: VMW_PSP_FIXED
VMW_SATP_LOCAL: VMW_PSP_FIXED
Description: Placeholder (plugin not loaded)
vSphere Storage n SCSI-3 persistent reservations or any shared devices. n 4Kn devices with software emulation. You cannot use the HPP to claim these devices. HPP Best Practices To achieve the fastest throughput from a high-speed storage device, follow these recommendations. n Use the vSphere version that supports the HPP, such as vSphere 6.7. n Use the HPP for high-speed local flash devices. n Do not activate the HPP for HDDs, slower flash devices, or remote storage.
This sample command instructs the HPP to claim all devices with the vendor NVMe. Modify this rule to claim the devices you specify. Make sure to follow these recommendations: n For the rule ID parameter, use a number within the 1–49 range to make sure that the HPP claim rule precedes the built-in NMP rules. The default NMP rules 50–54 are reserved for locally attached storage devices. n Use the --force-reserved option.
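A representative form of such a rule, shown only as a sketch because exact options can vary by release; the rule ID 10 and the vendor string are examples:
esxcli storage core claimrule add -r 10 -t vendor -V NVMe -P HPP --force-reserved
esxcli storage core claimrule load
After the rule is loaded, it applies to devices that are claimed later, for example after a reboot or an explicit claim rule run.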
2 Verify that the latency threshold is set: esxcli storage core device latencythreshold list
naa.55cd2e404c1728aa: Latency Sensitive Threshold 0 milliseconds
naa.500056b34036cdfd: Latency Sensitive Threshold 0 milliseconds
naa.55cd2e404c172bd6: Latency Sensitive Threshold 50 milliseconds
3 Monitor the status of the latency sensitive threshold. Check VMkernel logs for the following entries: n Latency Sensitive Gatekeeper turned on for device device.
vSphere Storage The module that owns the device becomes responsible for managing the multipathing support for the device. By default, the host performs a periodic path evaluation every five minutes and assigns unclaimed paths to the appropriate module. For the paths managed by the NMP module, a second set of claim rules is used. These rules assign an SATP and PSP modules to each storage device and determine which Storage Array Type Policy and Path Selection Policy to apply.
vSphere Storage Procedure 1 Navigate to the datastore. 2 Click the Configure tab. 3 Click Connectivity and Multipathing. 4 Select a host to view multipathing details for its devices. 5 Under Multipathing Policies, review the module that owns the device, such as NMP. You can also see the Path Selection Policy and Storage Array Type Policy assigned to the device.
vSphere Storage 6 Select a path policy. By default, VMware supports the following path selection policies. If you have a third-party PSP installed on your host, its policy also appears on the list. n Fixed (VMware) n Most Recently Used (VMware) n Round Robin (VMware) 7 For the fixed policy, specify the preferred path. 8 To save your settings and exit the dialog box, click OK. Disable Storage Paths You can temporarily disable paths for maintenance or other reasons.
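Paths can also be disabled from the command line. A minimal sketch in which the runtime path name is a placeholder:
esxcli storage core path list
esxcli storage core path set --state=off --path=vmhba1:C0:T0:L0
Setting --state=active re-enables the path.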
vSphere Storage The rules fall into these categories: Core Claim Rules These claim rules determine which multipathing module, the NMP, HPP, or a third-party MPP, claims the specific device. SATP Claim Rules Depending on the device type, these rules assign a particular SATP submodule that provides vendor-specific multipathing management to the device. You can use the esxcli commands to add or change the core and SATP claim rules.
vSphere Storage By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you want to unmask these devices. n List Multipathing Claim Rules for the Host Use the esxcli command to list available multipathing claim rules. Claim rules indicate whether the NMP, HPP, or a third-party MPP manages a given physical path.
vSphere Storage n You can use the MASK_PATH module to hide unused devices from your host. By default, the PSA claim rule 101 masks Dell array pseudo devices with a vendor string DELL and a model string Universal Xport. n The Rule Class column in the output describes the category of a claim rule. It can be MP (multipathing plug-in), Filter, or VAAI. n The Class column shows which rules are defined and which are loaded. The file parameter in the Class column indicates that the rule is defined.
vSphere Storage Option Description -c|--claimrule-class= Claim rule class to use in this operation. You can specify MP (default), Filter, or VAAI. To configure hardware acceleration for a new array, add two claim rules, one for the VAAI filter and another for the VAAI plug-in. See Add Hardware Acceleration Claim Rules for detailed instructions. -d|--device= UID of the device. Valid only when --type is device. -D|--driver= Driver for the HBA of the paths to use.
vSphere Storage Option Description -t|--type= Type of matching to use for the operation. Valid values are the following. Required. n -V|--vendor= vendor n location n driver n transport n device n target Vendor of the paths to use. Valid only if --type is vendor. Valid values are values of the vendor string from the SCSI inquiry string. Run vicfg-scsidevs -l on each device to see vendor string values. --wwnn= World-Wide Node Number for the target.
vSphere Storage Option Description -p|--path= If --type is path, this option indicates the unique path identifier (UID) or the runtime name of a path to run claim rules on. -T|--target= If --type is location, value of the SCSI target number for the paths to run claim rules on. To run claim rules on paths with any target number, omit this option. -t|--type= Type of claim to perform.
vSphere Storage Procedure 1 Delete a claim rule from the set of claim rules. esxcli storage core claimrule remove Note By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you want to unmask these devices. The command takes the following options: Option Description -c|--claimrule-class= Indicate the claim rule class (MP, Filter, VAAI). -P|--plugin= Indicate the plug-in. -r|--rule= Indicate the rule ID.
vSphere Storage 5 If a claim rule for the masked path exists, remove the rule. esxcli storage core claiming unclaim 6 Run the path claiming rules. esxcli storage core claimrule run After you assign the MASK_PATH plug-in to a path, the path state becomes irrelevant and is no longer maintained by the host. As a result, commands that display the masked path's information might show the path state as dead.
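Putting the masking steps together, a minimal sketch that masks one LUN by its location; the rule number and the adapter, channel, target, and LUN values are placeholders:
esxcli storage core claimrule add -r 500 -t location -A vmhba1 -C 0 -T 0 -L 20 -P MASK_PATH
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t location -A vmhba1 -C 0 -T 0 -L 20
esxcli storage core claimrule run
After the rules run, the masked paths are no longer presented to the host.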
vSphere Storage Procedure 1 Delete the MASK_PATH claim rule. esxcli storage core claimrule remove -r rule# 2 Verify that the claim rule was deleted correctly. esxcli storage core claimrule list 3 Reload the path claiming rules from the configuration file into the VMkernel. esxcli storage core claimrule load 4 Run the esxcli storage core claiming unclaim command for each path to the masked storage device.
Option Description -f|--force Force claim rules to ignore validity checks and install the rule anyway. -h|--help Show the help message. -M|--model=string Set the model string when adding a SATP claim rule. Vendor/Model rules are mutually exclusive with driver rules. -o|--option=string Set the option string when adding a SATP claim rule. -P|--psp=string Set the default PSP for the SATP claim rule. -O|--psp-option=string Set the PSP options for the SATP claim rule.
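A minimal example of adding such a rule; the SATP name, vendor string, and model string are placeholders for your array:
esxcli storage nmp satp rule add --satp VMW_SATP_INV --vendor NewVend --model NewMod
The rule typically takes effect when the matching paths are claimed again, for example after a rescan or a host reboot.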
vSphere Storage If you turn off the per file I/O scheduling model, your host reverts to a legacy scheduling mechanism. The legacy scheduling maintains only one I/O queue for each virtual machine and storage device pair. All I/Os between the virtual machine and its virtual disks are moved into this queue. As a result, I/Os from different virtual disks might interfere with each other in sharing the bandwidth and affect each other's performance.
Raw Device Mapping 19 Raw device mapping (RDM) provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem. The following topics contain information about RDMs and provide instructions on how to create and manage RDMs.
vSphere Storage Typically, you use VMFS datastores for most virtual disk storage. On certain occasions, you might use raw LUNs or logical disks located in a SAN. For example, you might use raw LUNs with RDMs in the following situations: n When SAN snapshot or other layered applications run in the virtual machine. The RDM enables backup offloading systems by using features inherent to the SAN.
vSphere Storage File System Operations Makes it possible to use file system utilities to work with a mapped volume, using the mapping file as a proxy. Most operations that are valid for an ordinary file can be applied to the mapping file and are redirected to operate on the mapped device. Snapshots Makes it possible to use virtual machine snapshots on a mapped volume. Snapshots are not available when the RDM is used in physical compatibility mode.
vSphere Storage SAN Management Agents Makes it possible to run some SAN management agents inside a virtual machine. Similarly, any software that needs to access a device by using hardware-specific SCSI commands can be run in a virtual machine. This kind of software is called SCSI target-based software. When you use SAN management agents, select a physical compatibility mode for the RDM.
vSphere Storage n If you use vMotion to migrate virtual machines with RDMs, make sure to maintain consistent LUN IDs for RDMs across all participating ESXi hosts. n Flash Read Cache does not support RDMs in physical compatibility. Virtual compatibility RDMs are supported with Flash Read Cache. Raw Device Mapping Characteristics An RDM is a special mapping file in a VMFS volume that manages metadata for its mapped device.
vSphere Storage Raw Device Mapping with Virtual Machine Clusters Use an RDM with virtual machine clusters that require access to the same raw LUN for failover scenarios. The setup is similar to that of a virtual machine cluster that accesses the same virtual disk file, but an RDM replaces the virtual disk file. Figure 19‑3.
Create Virtual Machines with RDMs When you give your virtual machine direct access to a raw SAN LUN, you create an RDM disk that resides on a VMFS datastore and points to the LUN. You can create the RDM as an initial disk for a new virtual machine or add it to an existing virtual machine. When creating the RDM, you specify the LUN to be mapped and the datastore on which to put the RDM. Although the RDM disk file has the same .vmdk extension as a regular virtual disk file, the RDM contains only mapping information. The actual virtual disk data is stored directly on the LUN.
vSphere Storage c Select a compatibility mode. Option Description Physical Allows the guest operating system to access the hardware directly. Physical compatibility is useful if you are using SAN-aware applications on the virtual machine. However, a virtual machine with a physical compatibility RDM cannot be cloned, made into a template, or migrated if the migration involves copying the disk.
Storage Policy Based Management 20 Within a software-defined data center, Storage Policy Based Management (SPBM) plays a major role by helping to align storage with application demands of your virtual machines. It provides a storage policy framework that serves as a single unified control panel across a broad range of data services and storage solutions. As an abstraction layer, SPBM abstracts storage services delivered by Virtual Volumes, vSAN, I/O filters, or other storage entities.
vSphere Storage This chapter includes the following topics: n Virtual Machine Storage Policies n Workflow for Virtual Machine Storage Policies n Populating the VM Storage Policies Interface n About Rules and Rule Sets n Creating and Managing VM Storage Policies n About Storage Policy Components n Storage Policies and Virtual Machines n Default Storage Policies Virtual Machine Storage Policies Virtual machine storage policies are essential to virtual machine provisioning through SPBM.
vSphere Storage Step Description Populate the VM Storage Policies interface with appropriate data. The VM Storage Policies interface is populated with information about datastores and data services that are available in your storage environment. This information is obtained from storage providers and datastore tags. n For entities represented by storage providers, verify that an appropriate provider is registered. Entities that use the storage provider include vSAN, Virtual Volumes, and I/O filters.
vSphere Storage This information is obtained from storage providers, also called VASA providers. Another source is datastore tags. Storage Capabilities and Services Certain datastores, for example, Virtual Volumes and vSAN, are represented by the storage providers. Through the storage providers, the datastores can advertise their capabilities in the VM Storage Policy interface.
vSphere Storage n Data services the I/O filters provide. Prerequisites Register the storage providers that require manual registration. For more information, see the appropriate documentation: n Administering VMware vSAN n Chapter 22 Working with Virtual Volumes n Chapter 23 Filtering Virtual Machine I/O Procedure 1 Browse to the vCenter Server instance. 2 Click the Configure tab, and click Storage Providers.
d Specify the category properties. See the following example.
Category Name: Storage Location
Description: Category for tags related to location of storage
Tags Per Object: Many tags
Associable Object Types: Datastore and Datastore Cluster
e Click OK.
2 Create a storage tag.
a On the Tags tab, click Tags.
b Click the Add Tag icon.
c Specify the properties for the tag. See the following example.
vSphere Storage About Rules and Rule Sets After the VM Storage Policies interface is populated with the appropriate data, you can start creating your storage policies. Creating a policy involves defining specific storage placement rules and rules to configure data services. Rules The rule is a basic element of the VM storage policy. Each individual rule is a statement that describes a single requirement for virtual machine storage and data services.
vSphere Storage Placement Rules: TagBased Tag-based rules reference datastore tags. These rules can define the VM placement, for example, request as a target all datastores with the VMFSGold tag. You can also use the tag-based rules to fine-tune your VM placement request further. For example, exclude datastores with the Palo Alto tag from the list of your Virtual Volumes datastores. See Create a VM Storage Policy for Tag-Based Placement.
Figure: a virtual machine storage policy combines rules for host based services, such as rule 1 and rule 2, with one or more datastore specific rule sets, for example Datastore Specific Rules Set 1 with rules 1_1 through 1_3, Set 2 with rules 2_1 and 2_2, and Set 3 with rules 3_1 through 3_3, where the rule sets are alternatives joined by or.
Creating and Managing VM Storage Policies To create and manage storage policies for your virtual machines, you use the VM Storage Policies interface.
vSphere Storage 2 Define Common Rules for a VM Storage Policy On the Common rules page, specify which data services to include in the VM storage policy. The data services are provided by software components that are installed on your ESXi hosts and vCenter Server. The VM storage policy that includes common rules activates specified data services for the virtual machine. 3 Create Storage-Specific Rules for a VM Storage Policy Use the Rule Set page to define storage placement rules.
n For information about I/O filters, see Chapter 23 Filtering Virtual Machine I/O. n For information about storage policy components, see About Storage Policy Components. Procedure 1 Enable common rules by selecting Use common rules in the VM storage policy. 2 Click the Add component icon and select a data service category from the drop-down menu, for example, Replication.
vSphere Storage 2 Define placement rules. Placement rules request a specific storage entity as a destination for the virtual machine. They can be capability-based or tag-based. Capability-based rules are based on data services that storage entities such as vSAN and Virtual Volumes advertise through storage (VASA) providers. Tag-based rules reference tags that you assign to datastores.
vSphere Storage 4 (Optional) To define another rule set, click Add another rule set and repeat Step 2 through Step 3. Multiple rule sets allow a single policy to define alternative storage placement parameters, often from several storage providers. 5 Click Next. Finish VM Storage Policy Creation You can review the list of datastores that are compatible with the VM storage policy and change any storage policy settings.
vSphere Storage Procedure 1 2 Open the Create VM Storage Policy wizard. a Click Menu > Policies and Profiles. b Under Policies and Profiles, click VM Storage Policies c Click Create VM Storage Policy. Enter the policy name and description, and click Next. Option Action vCenter Server Select the vCenter Server instance. Name Enter the name of the storage policy. Description Enter the description of the storage policy.
vSphere Storage Create a VM Storage Policy for Virtual Volumes To define the VM storage policy in the vSphere Client, use the Create VM Storage Policy wizard. In this task, you create a custom storage policy compatible with Virtual Volumes. When you define the VM storage policy for Virtual Volumes, you create rules to configure storage and data services provided by the VVols datastore. The rules are applied when the VM is placed on the VVols datastore.
vSphere Storage 4 On the Virtual Volumes rules page, define storage placement rules for the target VVols datastore. a Click the Placement tab and click Add Rule. b From the Add Rule drop-down menu, select available capability and specify its value. For example, you can specify the number of read operations per second for the Virtual Volumes objects. You can include as many rules as you need for the selected storage entity.
vSphere Storage Create a VM Storage Policy for Tag-Based Placement Tag-based rules reference the tags that you assign to the datastores and can filter the datastores to be used for placement of the VMs. To define tag-based placement in the vSphere Client, use the Create VM Storage Policy wizard. Prerequisites n Make sure that the VM Storage Policies interface is populated with information about storage entities and data services that are available in your storage environment.
vSphere Storage Edit or Clone a VM Storage Policy If storage requirements for virtual machines and virtual disks change, you can modify the existing storage policy. You can also create a copy of the existing VM storage policy by cloning it. While cloning, you can optionally select to customize the original storage policy. Prerequisites Required privilege: StorageProfile.View Procedure 1 Navigate to the storage policy.
vSphere Storage The component describes one type of service from one service provider. The services can vary depending on the providers that you use, but generally belong in one of the following categories. n Compression n Caching n Encryption n Replication When you create the storage policy component, you define the rules for one specific type and grade of service.
vSphere Storage Procedure 1 Open the New Storage Policy Component dialog box. Option Description In the vSphere Web Client a From the vSphere Web Client Home, click Policies and Profiles > VM Storage Policies. b Click the Storage Policy Component tab. In the vSphere Client a Click Menu > Policies and Profiles. b Under Policies and Profiles, click Storage Policy Components. 2 Click Create Storage Policy Component. 3 Select the vCenter Server instance.
vSphere Storage Procedure 1 Navigate to the storage policy component to edit or clone. Option Description In the vSphere Web Client a From the vSphere Web Client Home, click Policies and Profiles > VM Storage Policies. b Click the Storage Policy Components tab. In the vSphere Client 2 a Click Menu > Policies and Profiles. b Under Policies and Profiles, click Storage Policy Components. Select the component and click one of the following icons.
vSphere Storage This topic describes how to assign the VM storage policy when you create a virtual machine. For information about other deployment methods that include cloning, deployment from a template, and so on, see the vSphere Virtual Machine Administration documentation. You can apply the same storage policy to the virtual machine configuration file and all its virtual disks.
vSphere Storage 4 Complete the virtual machine provisioning process. After you create the virtual machine, the Summary tab displays the assigned storage policies and their compliance status. What to do next If storage placement requirements for the configuration file or the virtual disks change, you can later modify the virtual policy assignment.
4 Specify the VM storage policy for your virtual machine.
To apply the same storage policy to all virtual machine objects (in the vSphere Web Client): a Select the policy from the VM storage policy drop-down menu. b Click Apply to all.
To apply the policy to an individual object: a Select the object, for example, VM home. b In the VM Storage Policy column, select the policy from the drop-down menu.
vSphere Storage 4 View the compliance status. n Compliant - The datastore that the virtual machine or virtual disk uses has the storage capabilities compatible with the policy requirements. n Noncompliant - The datastore that the virtual machine or virtual disk uses does not have the storage capabilities compatible with the policy requirements. You can migrate the virtual machine files and virtual disks to compliant datastores.
vSphere Storage 2 Navigate to the noncompliant storage policy. Option Description In the vSphere Web Client a From the vSphere Web Client Home, click Policies and Profiles > VM Storage Policies. b Click the Storage Policy tab. In the vSphere Client a Click Menu > Policies and Profiles. b Under Policies and Profiles, click VM Storage Policies. 3 Display the list of compatible datastores for the noncompliant storage policy.
vSphere Storage 5 Check the compliance status. n Compliant - The datastore that the virtual machine or virtual disk uses has the storage capabilities that the policy requires. n Noncompliant - The datastore that the virtual machine or virtual disk uses does not have the storage capabilities that the policy requires. When you cannot bring the noncompliant datastore into compliance, migrate the files or virtual disks to a compatible datastore.
vSphere Storage Change the Default Storage Policy for a Datastore For Virtual Volumes and vSAN datastores, VMware provides storage policies that are used as the default during the virtual machine provisioning. You can change the default storage policy for a selected Virtual Volumes or vSAN datastore. Note A storage policy that contains replication rules should not be specified as a default storage policy. Otherwise, the policy prevents you from selecting replication groups.
Using Storage Providers 21 A storage provider is a software component that is offered by VMware or developed by a third party through vSphere APIs for Storage Awareness (VASA). The storage provider can also be called VASA provider. The storage providers integrate with various storage entities that include external physical storage and storage abstractions, such as vSAN and Virtual Volumes. Storage providers can also support software solutions, for example, I/O filters.
vSphere Storage Both persistence storage providers and data service providers can belong to one of these categories. Built-in Storage Providers Built-in storage providers are offered by VMware. Typically, they do not require registration. For example, the storage providers that support vSAN or I/O filters are built-in and become registered automatically. Third-Party Storage Providers When a third party offers a storage provider, you typically must register the provider.
vSphere Storage You reference these data services when you define storage requirements for virtual machines and virtual disks in a storage policy. Depending on your environment, the SPBM mechanism ensures appropriate storage placement for a virtual machine or enables specific data services for virtual disks. For details, see Creating and Managing VM Storage Policies. n Storage status. This category includes reporting about status of various storage entities.
vSphere Storage When you upgrade a storage provider to a later VASA version, you must unregister and reregister the provider. After registration, vCenter Server can detect and use the functionality of the later VASA version. Note If you use vSAN, the storage providers for vSAN are registered and appear on the list of storage providers automatically. vSAN does not support manual registration of storage providers. See the Administering VMware vSAN documentation.
vSphere Storage Procedure 1 Navigate to vCenter Server. 2 Click the Configure tab, and click Storage Providers. 3 In the Storage Providers list, view the storage providers registered with vCenter Server. The list shows general information including the name of the storage provider, its URL and status, version of VASA APIs, storage entities the provider represents, and so on. 4 To display additional details, select a specific storage provider or its component from the list.
Working with Virtual Volumes 22 Virtual Volumes virtualizes SAN and NAS devices by abstracting physical hardware resources into logical pools of capacity. The Virtual Volumes functionality changes the storage management paradigm from managing space inside datastores to managing abstract storage objects handled by storage arrays.
vSphere Storage uses these datastores as virtual machine storage. Typically, the datastore is the lowest granularity level at which data management occurs from a storage perspective. However, a single datastore contains multiple virtual machines, which might have different requirements. With the traditional approach, it is difficult to meet the requirements of an individual virtual machine. The Virtual Volumes functionality helps to improve granularity.
vSphere Storage n Protocol Endpoints Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate. ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their respective virtual volumes.
vSphere Storage Snapshot-VVol A virtual memory volume to hold the contents of virtual machine memory for a snapshot. Thick-provisioned. Other A virtual volume for specific features. For example, a digest virtual volume is created for Content-Based Read Cache (CBRC). Example: Types of Virtual Volumes Typically, a VM creates a minimum of three virtual volumes, data-VVol, config-VVol, and swap-VVol. The maximum depends on how many virtual disks and snapshots reside on the VM.
vSphere Storage The storage provider delivers information from the underlying storage container. The storage container capabilities appear in vCenter Server and the vSphere Client. Then, in turn, the storage provider communicates virtual machine storage requirements, which you can define in the form of a storage policy, to the storage layer. This integration process ensures that a virtual volume created in the storage layer meets the requirements outlined in the policy.
vSphere Storage Each virtual volume is bound to a specific protocol endpoint. When a virtual machine on the host performs an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. Typically, a storage system requires just a few protocol endpoints. A single protocol endpoint can connect to hundreds or thousands of virtual volumes. On the storage side, a storage administrator configures protocol endpoints, one or several per storage container.
vSphere Storage After vCenter Server discovers storage containers exported by storage systems, you must mount them as Virtual Volumes datastores. The Virtual Volumes datastores are not formatted in a traditional way like, for example, VMFS datastores. You must still create them because all vSphere functionalities, including FT, HA, DRS, and so on, require the datastore construct to function properly.
vSphere Storage Virtual Volumes and Storage Protocols A virtual volumes-based storage system provides protocol endpoints that are discoverable on the physical storage fabric. ESXi hosts use the protocol endpoints to connect to virtual volumes on the storage. Operation of the protocol endpoints depends on storage protocols that expose the endpoints to ESXi hosts. Virtual Volumes supports NFS version 3 and 4.1, iSCSI, Fibre Channel, and FCoE.
vSphere Storage Virtual volumes on NAS devices support the same NFS Remote Procedure Calls (RPCs) that ESXi hosts use when connecting to NFS mount points. On NAS devices, a config-VVol is a directory subtree that corresponds to a config-VVol ID. The config-VVol must support directories and other operations that are necessary for NFS. Virtual Volumes Architecture An architectural diagram provides an overview of how all components of the Virtual Volumes functionality interact with each other.
vSphere Storage Virtual volumes are objects exported by a compliant storage system and typically correspond one-to-one with a virtual machine disk and other VM-related files. A virtual volume is created and manipulated out-of-band, not in the data path, by a VASA provider. A VASA provider, or a storage provider, is developed through vSphere APIs for Storage Awareness.
vSphere Storage 3 After receiving and validating the CSR, the SMS presents it to the VMCA on behalf of the VASA provider, requesting a CA signed certificate. The VMCA can be configured to function as a standalone CA, or as a subordinate to an enterprise CA. If you set up the VMCA as a subordinate CA, the VMCA signs the CSR with the full chain. 4 The signed certificate with the root certificate is passed to the VASA provider.
vSphere Storage (The accompanying figure shows a base VVol with read and write access inside a storage container.) For information about creating and managing snapshots, see the vSphere Virtual Machine Administration documentation. Before You Enable Virtual Volumes To work with Virtual Volumes, you must make sure that your storage and vSphere environment are set up correctly. Prepare Storage System for Virtual Volumes To prepare your storage system environment for Virtual Volumes, follow these guidelines. For additional information, contact your storage vendor.
vSphere Storage Synchronize vSphere Storage Environment with a Network Time Server If you use Virtual Volumes, configure Network Time Protocol (NTP) to make sure all ESXi hosts on the vSphere network are synchronized. Procedure 1 Navigate to the host. 2 Click the Configure tab. 3 Under System, select Time Configuration. 4 Click Edit and set up the NTP server. 5 a Select Use Network Time Protocol (Enable NTP client). b Set the NTP Service Startup Policy.
vSphere Storage 3 Review and Manage Protocol Endpoints ESXi hosts use a logical I/O proxy, called protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate. Protocol endpoints are exported, along with associated storage containers, by the storage system through a storage provider. Protocol endpoints become visible in the vSphere Client after you map a storage container to a Virtual Volumes datastore.
vSphere Storage 5 Specify the security method. Action Description Direct vCenter Server to the storage provider certificate Select the Use storage provider certificate option and specify the certificate's location. Use a thumbprint of the storage provider certificate If you do not guide vCenter Server to the provider certificate, the certificate thumbprint is displayed. You can check the thumbprint and approve it. vCenter Server adds the certificate to the truststore and proceeds with the connection.
vSphere Storage Review and Manage Protocol Endpoints ESXi hosts use a logical I/O proxy, called protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate. Protocol endpoints are exported, along with associated storage containers, by the storage system through a storage provider. Protocol endpoints become visible in the vSphere Client after you map a storage container to a Virtual Volumes datastore.
vSphere Storage 6 Select a path policy. The path policies available for your selection depend on the storage vendor support. n Fixed (VMware) n Most Recently Used (VMware) n Round Robin (VMware) 7 For the fixed policy, specify the preferred path. 8 To save your settings and exit the dialog box, click OK. Provision Virtual Machines on Virtual Volumes Datastores You can provision virtual machines on a Virtual Volumes datastore.
vSphere Storage Virtual Volumes and Replication Virtual Volumes supports replication and disaster recovery. With the array-based replication, you can offload replication of virtual machines to your storage array and use full replication capabilities of the array. You can replicate a single VM object, such as a virtual disk. You can also group several VM objects or virtual machines to replicate them as a single unit. Array-based replication is policy driven.
vSphere Storage Storage Requirements Implementation of Virtual Volumes replication depends on your array and might be different for storage vendors. Generally, the following requirements apply to all vendors. n The storage arrays that you use to implement replication must be compatible with Virtual Volumes. n The arrays must integrate with the version of the storage (VASA) provider compatible with Virtual Volumes replication.
vSphere Storage If no preconfigured groups are available, Virtual Volumes can use an automatic method. With the automatic method, Virtual Volumes creates a replication group on demand and associates this group with a Virtual Volumes object being provisioned. If you use the automatic replication group, all components of a virtual machine are assigned to the group. You cannot mix preconfigured and automatic replication groups for components of the same virtual machine.
vSphere Storage Now provision a VM with two disks, one associated with replication group Anaheim: B, the second associated with replication group Anaheim: D. This configuration is invalid. Both replication groups replicate to the New-York fault domain, but only one replicates to the Boulder fault domain. (The accompanying figure shows the replication groups in the source fault domain Anaheim and the corresponding groups in the target fault domains Boulder and New-York.)
vSphere Storage Replication Guidelines and Considerations When you use replication with Virtual Volumes, specific considerations apply. n You can apply the replication storage policy only to a configuration virtual volume and a data virtual volume. Other VM objects inherit the replication policy in the following way: n The memory virtual volume inherits the policy of the configuration virtual volume. n The digest virtual volume inherits the policy of the data virtual volume.
vSphere Storage n Best Practices for Storage Container Provisioning Follow these best practices when provisioning storage containers on the vSphere Virtual Volumes array side. n Best Practices for vSphere Virtual Volumes Performance To ensure optimal vSphere Virtual Volumes performance results, follow these recommendations. Guidelines and Limitations in Using vSphere Virtual Volumes For the best experience with vSphere Virtual Volumes functionality, you must follow specific guidelines.
vSphere Storage n Host profiles that contain Virtual Volumes datastores are vCenter Server specific. After you extract this type of host profile, you can attach it only to hosts and clusters managed by the same vCenter Server as the reference host. Best Practices for Storage Container Provisioning Follow these best practices when provisioning storage containers on the vSphere Virtual Volumes array side.
vSphere Storage If your environment uses LUN IDs that are greater than 1023, change the number of scanned LUNs through the Disk.MaxLUN parameter. See Change the Number of Scanned Storage Devices. Best Practices for vSphere Virtual Volumes Performance To ensure optimal vSphere Virtual Volumes performance results, follow these recommendations.
vSphere Storage Ensuring that Storage Provider Is Available To access vSphere Virtual Volumes storage, your ESXi host requires a storage provider (VASA provider). To ensure that the storage provider is always available, follow these guidelines: n Do not migrate a storage provider VM to Virtual Volumes storage. n Back up your storage provider VM. n When appropriate, use vSphere HA or Site Recovery Manager to protect the storage provider VM.
vSphere Storage Table 22‑1. esxcli storage vvol commands (Continued) n esxcli storage vvol storagecontainer list - List all available storage containers. n esxcli storage vvol storagecontainer abandonedvvol scan - Scan the specified storage container for abandoned VVols. n esxcli storage vvol vasacontext get - Show the VASA context (VC UUID) associated with the host. n esxcli storage vvol vasaprovider list - List all storage (VASA) providers associated with the host.
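For example, to confirm that a host can see its Virtual Volumes storage, you might run the storagecontainer and vasaprovider commands from the table; the names in the output depend entirely on your environment.
esxcli storage vvol storagecontainer list
esxcli storage vvol vasaprovider list
The first command lists the storage containers that back the Virtual Volumes datastores mounted on the host, and the second lists the storage (VASA) providers that the host communicates with.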
vSphere Storage On virtual datastores, all other large-sized files, such as virtual disks, memory snapshots, swap, and digest, are stored as separate virtual volumes. Config-VVols are created as 4-GB virtual volumes. Generic content of the config-VVol usually consumes only a fraction of this 4-GB allocation, so config-VVols are typically thin-provisioned to conserve backing space.
vSphere Storage Solution To avoid problems when migrating VMs with memory snapshots across virtual and nonvirtual datastores, use hardware version 11. Follow these guidelines when migrating version 10 or earlier VMs with memory snapshots: n Migrating a version 10 or earlier VM with memory snapshots to a virtual datastore is not supported. The only workaround is to remove all snapshots. Upgrading the hardware version does not solve this problem.
Filtering Virtual Machine I/O 23 I/O filters are software components that can be installed on ESXi hosts and can offer additional data services to virtual machines. The filters process I/O requests, which move between the guest operating system of a virtual machine and virtual disks. The I/O filters can be offered by VMware or created by third parties through vSphere APIs for I/O Filtering (VAIO).
vSphere Storage Datastore Support I/O filters can support all datastore types including the following: n VMFS n NFS 3 n NFS 4.1 n Virtual Volumes (VVol) n vSAN Types of I/O Filters VMware provides certain categories of I/O filters that are installed on your ESXi hosts. In addition, VMware partners can create the I/O filters through the vSphere APIs for I/O Filtering (VAIO) developer program. The I/O filters can serve multiple purposes.
vSphere Storage The basic components of I/O filtering include the following: VAIO Filter Framework A combination of user world and VMkernel infrastructure provided by ESXi. With the framework, you can add filter plug-ins to the I/O path to and from virtual disks. The infrastructure includes an I/O filter storage provider (VASA provider). The provider integrates with the Storage Policy Based Management (SPBM) system and exports filter capabilities to vCenter Server.
vSphere Storage Each Virtual Machine Executable (VMX) component of a virtual machine contains a Filter Framework that manages the I/O filter plug-ins attached to the virtual disk. The Filter Framework invokes filters when the I/O requests move between the guest operating system and the virtual disk. Also, the filter intercepts any I/O access towards the virtual disk that happens outside of a running VM. The filters run sequentially in a specific order.
vSphere Storage VM I/O Path Cache Filter I/O Path Virtual Machine Cache Virtual Flash Resource (VFFS) SSD SSD SSD SSD SSD SSD Flash Storage Devices Flash Storage Devices ESXi To set up a virtual flash resource, you use flash devices that are connected to your host. To increase the capacity of your virtual flash resource, you can add more flash drives.
vSphere Storage n Web server to host partner packages for filter installation. The server must remain available after initial installation. When a new host joins the cluster, the server pushes appropriate I/O filter components to the host. Configure I/O Filters in the vSphere Environment To set up data services that the I/O filters provide for your virtual machines, follow several steps. Prerequisites n Create a cluster that includes at least one ESXi host.
vSphere Storage A storage provider, also called a VASA provider, is automatically registered for every ESXi host in the cluster. Successful auto-registration of the I/O filter storage providers triggers an event at the host level. If the storage providers fail to auto-register, the system raises alarms on the hosts. View I/O Filters and Storage Providers You can review I/O filters available in your environment and verify that the I/O filter providers appear as expected and are active.
vSphere Storage Procedure 1 Navigate to the host. 2 Click the Configure tab. 3 Under Virtual Flash, select Virtual Flash Resource Management and click Add Capacity. 4 From the list of available flash drives, select one or more drives to use for the virtual flash resource and click OK. The virtual flash resource is created. The Device Backing area lists all the drives that you use for the virtual flash resource.
vSphere Storage Prerequisites Verify that the I/O filter is installed on the ESXi host where the virtual machine runs. Procedure 1 Start the virtual machine provisioning process and follow the appropriate steps. 2 Assign the same storage policy to all virtual machine files and disks. a On the Select storage page, select a storage policy from the VM Storage Policy drop-down menu. b Select the datastore from the list of compatible datastores and click Next.
vSphere Storage n When a new host joins the cluster that has I/O filters, the filters installed on the cluster are deployed on the host. vCenter Server registers the I/O filter storage provider for the host. Any cluster changes become visible in the VM Storage Policies interface of the vSphere Client. n When you move a host out of a cluster or remove it from vCenter Server, the I/O filters are uninstalled from the host. vCenter Server unregisters the I/O filter storage provider.
vSphere Storage Procedure 1 To upgrade the filter, run the vendor-provided installer. During the upgrade, vSphere ESX Agent Manager automatically places the hosts into maintenance mode. The installer identifies any existing filter components and removes them before installing the new filter components. 2 Verify that the I/O filter components are properly uninstalled from your ESXi hosts: esxcli software vib list After the upgrade, vSphere ESX Agent Manager places the hosts back into operational mode.
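For example, assuming the partner's filter VIB name contains the string iofilter, you might confirm which filter components remain on a host by narrowing the VIB list output; the string is a placeholder, not an actual VIB name.
esxcli software vib list | grep -i iofilter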
vSphere Storage If you use Storage vMotion to migrate a virtual machine with I/O filters, a destination datastore must be connected to hosts with compatible I/O filters installed. You might need to migrate a virtual machine with I/O filters across different types of datastores, for example between VMFS and Virtual Volumes. If you do so, make sure that the VM storage policy includes rule sets for every type of datastore you are planning to use.
vSphere Storage Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell. Procedure 1 Install the VIBs by running the following command: esxcli software vib install --depot path_to_VMware_vib_ZIP_file Options for the install command allow you to perform a dry run, specify a specific VIB, bypass acceptance-level verification, and so on.
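For example, the --dry-run option previews the changes without applying them; the depot path below is a placeholder for the offline bundle that your vendor provides.
esxcli software vib install --depot /vmfs/volumes/datastore1/partner-iofilter-bundle.zip --dry-run
esxcli software vib install --depot /vmfs/volumes/datastore1/partner-iofilter-bundle.zip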
Storage Hardware Acceleration 24 The hardware acceleration functionality enables the ESXi host to integrate with compliant storage systems. The host can offload certain virtual machine and storage management operations to the storage systems. With the storage hardware assistance, your host performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth. Block storage devices, Fibre Channel and iSCSI, and NAS devices support the hardware acceleration.
vSphere Storage Hardware Acceleration Requirements The hardware acceleration functionality works only if you use an appropriate host and storage array combination. Table 24‑1.
vSphere Storage n Hardware assisted locking, also called atomic test and set (ATS). Supports discrete virtual machine locking without use of SCSI reservations. This operation allows disk locking per sector, instead of the entire LUN as with SCSI reservations. Check with your vendor for the hardware acceleration support. Certain storage arrays require that you activate the support on the storage side. On your host, the hardware acceleration is enabled by default.
vSphere Storage You can use several esxcli commands to query storage devices for the hardware acceleration support information. For the devices that require the VAAI plug-ins, the claim rule commands are also available. For information about esxcli commands, see Getting Started with vSphere Command-Line Interfaces.
vSphere Storage Procedure u Run the esxcli storage core device list -d=device_ID command. The output shows the hardware acceleration, or VAAI, status that can be unknown, supported, or unsupported.
# esxcli storage core device list -d naa.XXXXXXXXXXXX4c
naa.XXXXXXXXXXXX4c
   Display Name: XXXX Fibre Channel Disk(naa.
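To check the support status of the individual VAAI primitives for the same device, you can also query the VAAI status namespace; the device identifier below is the placeholder from the preceding example.
esxcli storage core device vaai status get -d naa.XXXXXXXXXXXX4c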
vSphere Storage Procedure 1 To list the filter claim rules, run the esxcli storage core claimrule list --claimrule-class=Filter command. In this example, the filter claim rules specify devices that the VAAI_FILTER filter claims.
vSphere Storage Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell. Procedure 1 Define a new claim rule for the VAAI filter by running the esxcli storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER command.
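For example, assuming you want the filter to claim all devices from one array vendor, you might add a vendor-based rule and then load it; the vendor string VendorX is a placeholder, and --autoassign lets the system assign the next available rule ID.
esxcli storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER --type=vendor --vendor=VendorX --autoassign
esxcli storage core claimrule load --claimrule-class=Filter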
vSphere Storage Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell. Procedure u Use the following command and enter the XCOPY options: esxcli storage core claimrule add --claimrule-class=VAAI For information about the options that the command takes, see Add Multipathing Claim Rules.
vSphere Storage Hardware Acceleration on NAS Devices With the hardware acceleration, ESXi hosts can integrate with NAS devices and use several hardware operations that NAS storage provides. The hardware acceleration uses vSphere APIs for Array Integration (VAAI) to facilitate communications between the hosts and storage devices. The VAAI NAS framework supports both versions of NFS storage, NFS 3 and NFS 4.1.
vSphere Storage 2 Set the host acceptance level: esxcli software acceptance set --level=value The host acceptance level must be the same or less restrictive than the acceptance level of any VIB you want to add to the host. The value can be one of the following: n VMwareCertified n VMwareAccepted n PartnerSupported n CommunitySupported 3 Install the VIB package: esxcli software vib install -v|--viburl=URL The URL specifies the URL to the VIB package to install.
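For example, the following sequence might be used to accept partner-signed packages and then install a vendor NAS plug-in; the URL is a placeholder for the location that your vendor provides.
esxcli software acceptance set --level=PartnerSupported
esxcli software vib install -v http://web_server/vendor-nas-plugin.vib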
vSphere Storage Prerequisites This topic discusses how to update a VIB package using the esxcli command. For more details, see the vSphere Upgrade documentation. Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell. Procedure 1 Upgrade to a new plug-in version: esxcli software vib update -v|--viburl=URL The URL specifies the URL to the VIB package to update.
vSphere Storage n The source or destination VMDK is in sparse or hosted format. n The source virtual machine has a snapshot. n The logical address and transfer length in the requested operation are not aligned to the minimum alignment required by the storage device. All datastores created with the vSphere Client are aligned automatically. n The VMFS has multiple LUNs or extents, and they are on different arrays. Hardware cloning between arrays, even within the same VMFS datastore, does not work.
Thin Provisioning and Space Reclamation 25 vSphere supports two models of storage provisioning, thick provisioning and thin provisioning. Thick provisioning A traditional model of storage provisioning. With thick provisioning, a large amount of storage space is provided in advance in anticipation of future storage needs. However, the space might remain unused, causing underutilization of storage capacity.
vSphere Storage ESXi supports thin provisioning for virtual disks. With the disk-level thin provisioning feature, you can create virtual disks in a thin format. For a thin virtual disk, ESXi provisions the entire space required for the disk’s current and future activities, for example 40 GB. However, the thin disk uses only as much storage space as the disk needs for its initial operations. In this example, the thin-provisioned disk occupies only 20 GB of storage.
vSphere Storage You can use Storage vMotion or cross-host Storage vMotion to transform virtual disks from one format to another. Thick Provision Lazy Zeroed Creates a virtual disk in a default thick format. Space required for the virtual disk is allocated when the disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand later on first write from the virtual machine. Virtual machines do not read stale data from the physical device.
vSphere Storage Procedure 1 2 Create a virtual machine. a Right-click any inventory object that is a valid parent object of a virtual machine, such as a data center, folder, cluster, resource pool, or host, and select New Virtual Machine. b Select Create a new virtual machine and click Next. c Follow the steps required to create a virtual machine. Configure the thin virtual disk. a On the Customize Hardware page, click the Virtual Hardware tab.
vSphere Storage Procedure 1 Right-click the virtual machine and select Edit Settings. 2 Click the Virtual Hardware tab. 3 Click the Hard Disk triangle to expand the hard disk options. The Type text box shows the format of your virtual disk. What to do next If your virtual disk is in the thin format, you can inflate it to its full size. Inflate Thin Virtual Disks If you created a virtual disk in the thin format, you can change the format to thick.
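Assuming ESXi Shell access and a build of vmkfstools that provides the --inflatedisk option, the conversion from thin to thick can also be performed from the command line; the disk path below is a placeholder.
vmkfstools --inflatedisk /vmfs/volumes/myVMFS/VMName/disk.vmdk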
vSphere Storage Handling Datastore Over-Subscription Because the provisioned space for thin disks can be greater than the committed space, a datastore over-subscription can occur. As a result, the total provisioned space for the virtual machine disks on the datastore can be greater than the actual capacity. Over-subscription is possible because usually not all virtual machines with thin disks need the entire provisioned datastore space simultaneously.
vSphere Storage n Use storage systems that support T10-based vSphere Storage APIs - Array Integration (VAAI), including thin provisioning and space reclamation. For information, contact your storage provider and check the VMware Compatibility Guide. Monitoring Space Use The thin provision integration functionality helps you to monitor the use of space on thin-provisioned LUNs and to avoid running out of space.
vSphere Storage The following thin provisioning status indicates that the storage device is thin-provisioned.
# esxcli storage core device list -d naa.XXXXXXXXXXXX4c
naa.XXXXXXXXXXXX4c
   Display Name: XXXX Fibre Channel Disk(naa.XXXXXXXXXXXX4c)
   Size: 20480
   Device Type: Direct-Access
   Multipath Plugin: NMP
   ---------------------
   Thin Provisioning Status: yes
   ---------------------
An unknown status indicates that a storage device is thick.
vSphere Storage (The accompanying figure shows how the unmap command issued for VMs on a VMFS datastore on the ESXi host releases physical disk blocks on the storage array.) The command can also originate directly from the guest operating system. Both VMFS5 and VMFS6 datastores can support the unmap command that proceeds from the guest operating system. However, the level of support is limited on VMFS5. Depending on the type of your VMFS datastore, you use different methods to configure space reclamation for the datastore and your virtual machines.
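On VMFS5 datastores, where automatic reclamation is not available, you can reclaim free space manually with the esxcli unmap command; the datastore label below is a placeholder.
esxcli storage vmfs unmap --volume-label=my_VMFS5_datastore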
vSphere Storage The operation helps the storage array to reclaim unused free space. Unmapped space can be then used for other storage allocation requests and needs. Asynchronous Reclamation of Free Space on VMFS6 Datastore On VMFS6 datastores, ESXi supports the automatic asynchronous reclamation of free space. VMFS6 can run the unmap command to release free storage space in the background on thin-provisioned storage arrays that support unmap operations.
vSphere Storage Space reclamation priority settings and how they can be configured: n None - Disables the unmap operations for the datastore. Configuration: vSphere Client, esxcli command. n Low (default) - Sends the unmap command at a less frequent rate, 25–50 MB per second. Configuration: vSphere Client, esxcli command. n Medium - Sends the command at twice the low rate, 50–100 MB per second. Configuration: esxcli command. n High - Sends the command at three times the low rate, over 100 MB per second. Configuration: esxcli command.
vSphere Storage 4 On the Partition configuration page, specify the space reclamation parameters. The parameters define granularity and the priority rate at which space reclamation operations are performed. You can also use this page to disable space reclamation for the datastore. Option Description Block size The block size on a VMFS datastore defines the maximum file size and the amount of space the file occupies. VMFS6 supports the block size of 1 MB.
vSphere Storage Use the ESXCLI Command to Change Space Reclamation Parameters You can change the default space reclamation priority, granularity, and other parameters. Procedure u Use the following command to set space reclamation parameters. esxcli storage vmfs reclaim config set The command takes these options: Option Description -b|--reclaim-bandwidth Space reclamation fixed bandwidth in MB per second. -g|--reclaim-granularity Minimum granularity of automatic space reclamation in bytes.
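For example, assuming the command also accepts the -l|--volume-label and -p|--reclaim-priority options in your build, the following commands turn automatic reclamation off and back on at the default low priority for a datastore labeled my_datastore.
esxcli storage vmfs reclaim config set --volume-label=my_datastore --reclaim-priority=none
esxcli storage vmfs reclaim config set --volume-label=my_datastore --reclaim-priority=low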
vSphere Storage Example: Obtaining Parameters for VMFS6 Space Reclamation You can also use the esxcli storage vmfs reclaim config get -l=VMFS_label|-u=VMFS_uuid command to obtain information for the space reclamation configuration.
vSphere Storage Space Reclamation for VMFS6 Virtual Machines VMFS6 generally supports automatic space reclamation requests that generate from the guest operating systems, and passes these requests to the array. Many guest operating systems can send the unmap command and do not require any additional configuration. The guest operating systems that do not support the automatic unmaps might require user intervention.
Using vmkfstools 26 vmkfstools is one of the ESXi Shell commands for managing VMFS volumes, storage devices, and virtual disks. You can perform many storage operations using the vmkfstools command. For example, you can create and manage VMFS datastores on a physical partition, or manipulate virtual disk files stored on VMFS or NFS datastores. Note After you make a change using vmkfstools, the vSphere Client might not be updated immediately. Use a refresh or rescan operation from the client.
vSphere Storage Table 26‑1. vmkfstools Command Arguments (Continued) Argument Description device Specifies devices or logical volumes. This argument uses a path name in the ESXi device file system. The path name begins with /vmfs/devices, which is the mount point of the device file system. Use the following formats when you specify different types of devices: n /vmfs/devices/disks for local or SAN-based disks. n /vmfs/devices/lvm for ESXi logical volumes. path
vSphere Storage You can specify the -v suboption with any vmkfstools option. If the output of the option is not suitable for use with the -v suboption, vmkfstools ignores -v. Note Because you can include the -v suboption in any vmkfstools command line, -v is not included as a suboption in the option descriptions. File System Options File system options allow you to create and manage VMFS datastores. These options do not apply to NFS. You can perform many of these tasks through the vSphere Client.
vSphere Storage You can specify the following suboptions with the -C option. n -S|--setfsname - Define the volume label of the VMFS datastore you are creating. Use this suboption only with the -C option. The label you specify can be up to 128 characters long and cannot contain any leading or trailing blank spaces. Note vCenter Server supports the 80 character limit for all its entities. If a datastore name exceeds this limit, the name gets shortened when you add this datastore to vCenter Server.
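For example, assuming a partition already exists on the target device, the following command creates a VMFS6 datastore labeled my_vmfs; the device identifier is a placeholder.
vmkfstools -C vmfs6 -S my_vmfs /vmfs/devices/disks/disk_ID:1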
vSphere Storage You must specify the full path name for the head and span partitions, for example /vmfs/devices/disks/disk_ID:1. Each time you use this option, you add an extent to the VMFS datastore, so that the datastore spans multiple partitions. Caution When you run this option, you lose all data that previously existed on the SCSI device you specified in span_partition.
vSphere Storage Supported Disk Formats When you create or clone a virtual disk, you can use the -d|--diskformat suboption to specify the format for the disk. Choose from the following formats: n zeroedthick (default) – Space required for the virtual disk is allocated during creation. Any data remaining on the physical device is not erased during creation, but is zeroed out on demand on first write from the virtual machine. The virtual machine does not read stale data from disk.
vSphere Storage This option creates a virtual disk at the specified path on a datastore. Specify the size of the virtual disk. When you enter the value for size, you can indicate the unit type by adding a suffix of k (kilobytes), m (megabytes), or g (gigabytes). The unit type is not case-sensitive. vmkfstools interprets either k or K to mean kilobytes. If you do not specify a unit type, vmkfstools defaults to bytes. You can specify the following suboptions with the -c option.
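For example, assuming the -d|--diskformat suboption described earlier, the following command creates a 2-GB thin-provisioned virtual disk; the datastore path and file name are placeholders.
vmkfstools -c 2048m -d thin /vmfs/volumes/myVMFS/MyDisk.vmdk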
vSphere Storage Follow this example: vmkfstools --eagerzero /vmfs/volumes/myVMFS/VMName/disk.vmdk Removing Zeroed Blocks Use the vmkfstools command to remove zeroed blocks. -K|--punchzero This option deallocates all zeroed out blocks and leaves only those blocks that were allocated previously and contain valid data. The resulting virtual disk is in thin format. Deleting a Virtual Disk Use the vmkfstools command to delete a virtual disk file at the specified path on the VMFS volume.
vSphere Storage By default, ESXi uses its native methods to perform the cloning operations. If your array supports the cloning technologies, you can off-load the operations to the array. To avoid the ESXi native cloning, specify the -N|--avoidnativeclone option. Example: Example for Cloning or Converting a Virtual Disk This example illustrates cloning the contents of a master virtual disk from the templates repository to a virtual disk file named myOS.vmdk on the myVMFS file system.
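A command of the following form would perform that clone; the templates path and the gold-master.vmdk file name are placeholders based on the example description.
vmkfstools -i /vmfs/volumes/myVMFS/templates/gold-master.vmdk /vmfs/volumes/myVMFS/myOS.vmdk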
vSphere Storage n After you extend the disk, you might need to update the file system on the disk. As a result, the guest operating system recognizes the new size of the disk and can use it. Upgrading Virtual Disks This option converts the specified virtual disk file from ESX Server 2 formats to the ESXi format. Use this option to convert virtual disks of type LEGACYSPARSE, LEGACYPLAIN, LEGACYVMFS, LEGACYVMFS_SPARSE, and LEGACYVMFS_RDM.
vSphere Storage When specifying the device parameter, use the following format: /vmfs/devices/disks/disk_ID For example, vmkfstools -z /vmfs/devices/disks/disk_ID my_rdm.vmdk Listing Attributes of an RDM Use the vmkfstools command to list the attributes of a raw disk mapping. The attributes help you identify the storage device to which your RDM files maps. -q|--queryrdm my_rdm.vmdk This option prints the name of the raw disk RDM.
vSphere Storage Checking Disk Chain for Consistency Use the vmkfstools command to check the entire snapshot chain. You can determine if any of the links in the chain are corrupted or any invalid parent-child relationships exist. -e|--chainConsistent Storage Device Options You can use the device options of the vmkfstools command to perform administrative tasks for physical storage devices.
vSphere Storage When entering the device parameter, use the following format: /vmfs/devices/disks/disk_ID:P Breaking Device Locks Use the vmkfstools command to break the device lock on a particular partition. -B|--breaklock device When entering the device parameter, use the following format: /vmfs/devices/disks/disk_ID:P You can use this command when a host fails in the middle of a datastore operation, such as expand the datastore, add an extent, or resignature.
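For example, if a host failed while expanding the datastore backed by the first partition of a device, the stale lock might be released with a command of this form; the device identifier is a placeholder.
vmkfstools -B /vmfs/devices/disks/disk_ID:1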