vSphere Storage
Update 1
ESXi 6.0
vCenter Server 6.0
This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, see http://www.vmware.com/support/pubs.
You can find the most up-to-date technical documentation on the VMware Web site at: http://www.vmware.com/support/
The VMware Web site also provides the latest product updates.
If you have comments about this documentation, submit your feedback to: docfeedback@vmware.com
Copyright © 2009–2016 VMware, Inc. All rights reserved. Copyright and trademark information.
VMware, Inc. 3401 Hillview Ave. Palo Alto, CA 94304 www.vmware.com
Contents
About vSphere Storage 9
Updated Information 11
1 Introduction to Storage 13
Storage Virtualization 13
Types of Physical Storage 14
Target and Device Representations 17
Storage Device Characteristics 18
Supported Storage Adapters 20
Datastore Characteristics 21
How Virtual Machines Access Storage 24
Comparing Types of Storage 24
2 Overview of Using ESXi with a SAN 27
ESXi and SAN Use Cases 28
Specifics of Using SAN Storage with ESXi 28
ESXi Hosts and Multiple Storage Arrays 29
Making LUN Decisions
6 Booting ESXi from Fibre Channel SAN 49
Boot from SAN Benefits 49
Boot from Fibre Channel SAN Requirements and Considerations 50
Getting Ready for Boot from SAN 50
Configure Emulex HBA to Boot from SAN 52
Configure QLogic HBA to Boot from SAN 53
7 Booting ESXi with Software FCoE 55
Requirements and Considerations for Software FCoE Boot 55
Best Practices for Software FCoE Boot 56
Set Up Software FCoE Boot 56
Troubleshooting Installation and Boot from Software FCoE 57
8 Best Practices for Fibre Channel Storage 59
Checking Ethernet Switch Statistics 119
13 Managing Storage Devices 121
Storage Device Characteristics 121
Understanding Storage Device Naming 123
Storage Refresh and Rescan Operations 124
Identifying Device Connectivity Problems 126
Edit Configuration File Parameters 131
Enable or Disable the Locator LED on Storage Devices 132
14 Working with Flash Devices 133
Using Flash Devices with vSphere 134
Marking Storage Devices 134
Monitor Flash Devices 136
Best Practices for Flash Devices 136
About Virtual Flash Resource
18 Raw Device Mapping 205
About Raw Device Mapping 205
Raw Device Mapping Characteristics 208
Create Virtual Machines with RDMs 210
Manage Paths for a Mapped LUN 211
19 Working with Virtual Volumes 213
Virtual Volumes Concepts 214
Guidelines when Using Virtual Volumes 217
Virtual Volumes and Storage Protocols 218
Virtual Volumes Architecture 219
Virtual Volumes and VMware Certificate Authority 220
Before You Enable Virtual Volumes 221
Configure Virtual Volumes 221
Provision Virtual Machi
Storage Provider Requirements and Considerations 278
Storage Status Reporting 279
Register Storage Providers 279
Securing Communication with Storage Providers 280
View Storage Provider Information 280
Unregister Storage Providers 280
Update Storage Providers 281
26 Using vmkfstools 283
vmkfstools Command Syntax 283
vmkfstools Options 284
Index
About vSphere Storage
vSphere Storage describes storage options available to VMware ESXi® and explains how to configure your ESXi system so that it can use and manage different types of storage. In addition, vSphere Storage explicitly concentrates on Fibre Channel and iSCSI storage area networks (SANs) as storage options and discusses specifics of using ESXi in Fibre Channel and iSCSI environments.
Updated Information
This vSphere Storage documentation is updated with each release of the product or when necessary. This table provides the update history of vSphere Storage.
Revision Description
EN-001799-06 “Use Datastore Browser,” on page 169 has been updated to include more details.
EN-001799-05
EN-001799-04 “I/O Filter Guidelines and Best Practices,” on page 252 has been updated to include a statement about the I/O filters and snapshot trees.
Introduction to Storage 1 This introduction describes storage options available in vSphere and explains how to configure your host so that it can use and manage different types of storage.
vSphere Storage Other storage virtualization capabilities that vSphere provides include Virtual SAN, Virtual Flash, Virtual Volumes, and policy-based storage management. For information about Virtual SAN, see the Administering VMware Virtual SAN. Types of Physical Storage The ESXi storage management process starts with storage space that your storage administrator preallocates on different storage systems.
Chapter 1 Introduction to Storage However, if you use a cluster of hosts that have just local storage devices, you can implement Virtual SAN. Virtual SAN transforms local storage resources into software-defined shared storage and allows you to use features that require shared storage. For details, see the Administering VMware Virtual SAN documentation. Networked Storage Networked storage consists of external storage systems that your ESXi host uses to store virtual machine files remotely.
vSphere Storage In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches and storage arrays, using a Fibre Channel adapter. LUNs from a storage array become available to the host. You can access the LUNs and create datastores for your storage needs. The datastores use the VMFS format. For specific information on setting up the Fibre Channel SAN, see Chapter 3, “Using ESXi with Fibre Channel SAN,” on page 35.
Chapter 1 Introduction to Storage For specific information on setting up the iSCSI SAN, see Chapter 9, “Using ESXi with iSCSI SAN,” on page 63. Network-attached Storage (NAS) Stores virtual machine files on remote file servers accessed over a standard TCP/IP network. The NFS client built into ESXi uses Network File System (NFS) protocol version 3 and 4.1 to communicate with the NAS/NFS servers. For network connectivity, the host requires a standard network adapter.
Figure 1‑5. Target and LUN Representations
In this illustration, three LUNs are available in each configuration. In one case, the host sees one target, but that target has three LUNs that can be used. Each LUN represents an individual storage volume. In the other example, the host sees three different targets, each having one LUN.
Chapter 1 Introduction to Storage Table 1‑1. Storage Device Information (Continued) Storage Device Information Description Partition Format A partition scheme used by the storage device. It could be of a master boot record (MBR) or GUID partition table (GPT) format. The GPT devices can support datastores greater than 2TB. For more information, see “VMFS Datastores and Storage Disk Formats,” on page 147. Partitions Primary and logical partitions, including a VMFS datastore, if configured.
vSphere Storage Supported Storage Adapters Storage adapters provide connectivity for your ESXi host to a specific storage unit or network. ESXi supports different classes of adapters, including SCSI, iSCSI, RAID, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Ethernet. ESXi accesses the adapters directly through device drivers in the VMkernel. Depending on the type of storage you use, you might need to enable and configure a storage adapter on your host.
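To see which storage adapters your host detected and which drivers they use, you can also query the host from the ESXi Shell or vCLI. This is a minimal sketch; the adapter names in the output depend on your hardware.
esxcli storage core adapter list
# Lists each vmhba with its driver, link state, and UID, such as a Fibre Channel WWN or an iSCSI IQN.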
Chapter 1 Introduction to Storage Datastore Characteristics Datastores are logical containers, analogous to file systems, that hide specifics of each storage device and provide a uniform model for storing virtual machine files. You can display all datastores available to your hosts and analyze their properties. Datastores are added to vCenter Server in the following ways: n You can create a VMFS5 datastore, an NFS version 3 or 4.1 datastore, or a virtual datastore using the New Datastore wizard.
vSphere Storage Table 1‑3. Datastore Information (Continued) Datastore Information Applicable Datastore Type Description Storage I/O Control VMFS NFS Information on whether cluster-wide storage I/O prioritization is enabled. See the vSphere Resource Management documentation. Hardware Acceleration VMFS NFS Virtual SAN VVOL Information on whether the underlying storage entity supports hardware acceleration. The status can be Supported, Not Supported, or Unknown.
Chapter 1 Introduction to Storage 4 Use tabs to access additional information and modify datastore properties. Tab Description Getting Started View introductory information and access basic actions. Summary View statistics and configuration for the selected datastore. Monitor View alarms, performance data, resource allocation, events, and other status information for the datastore. Manage View and modify datastore properties, alarm definitions, tags, and permissions.
vSphere Storage How Virtual Machines Access Storage When a virtual machine communicates with its virtual disk stored on a datastore, it issues SCSI commands. Because datastores can exist on various types of physical storage, these commands are encapsulated into other forms, depending on the protocol that the ESXi host uses to connect to a storage device. ESXi supports Fibre Channel (FC), Internet SCSI (iSCSI), Fibre Channel over Ethernet (FCoE), and NFS protocols.
Chapter 1 Introduction to Storage Table 1‑4. Networked Storage that ESXi Supports (Continued) Technology Protocols Transfers Interface iSCSI IP/SCSI Block access of data/LUN n n NAS IP/NFS File (no direct LUN access) iSCSI HBA or iSCSI-enabled NIC (hardware iSCSI) Network adapter (software iSCSI) Network adapter The following table compares the vSphere features that different types of storage support. Table 1‑5.
Overview of Using ESXi with a SAN 2 Using ESXi with a SAN improves flexibility, efficiency, and reliability. Using ESXi with a SAN also supports centralized management, failover, and load balancing technologies. The following are benefits of using ESXi with a SAN: n You can store data securely and configure multiple paths to your storage, eliminating a single point of failure. n Using a SAN with ESXi systems extends failure resistance to the server.
vSphere Storage n “SAN Storage Backup Considerations,” on page 32 ESXi and SAN Use Cases When used with a SAN, ESXi can benefit from multiple vSphere features, including Storage vMotion, Distributed Resource Scheduler (DRS), High Availability, and so on.
Chapter 2 Overview of Using ESXi with a SAN ESXi Hosts and Multiple Storage Arrays An ESXi host can access storage devices presented from multiple storage arrays, including arrays from different vendors. When you use multiple arrays from different vendors, the following considerations apply: n If your host uses the same Storage Array Type Plugin (SATP) for multiple arrays, be careful when you need to change the default Path Selection Policy (PSP) for that SATP. The change will apply to all arrays.
vSphere Storage Use the Predictive Scheme to Make LUN Decisions When setting up storage for ESXi systems, before creating VMFS datastores, you must decide on the size and number of LUNs to provision. You can experiment using the predictive scheme. Procedure 1 Provision several LUNs with different storage characteristics. 2 Create a VMFS datastore on each LUN, labeling each datastore according to its characteristics.
Chapter 2 Overview of Using ESXi with a SAN Not all applications need to be on the highest-performance, most-available storage—at least not throughout their entire life cycle. Note If you need some of the functionality of the high tier, such as snapshots, but do not want to pay for it, you might be able to achieve some of the high-performance characteristics in software. For example, you can create snapshots in software.
vSphere Storage File-Based (VMFS) Solution When you use an ESXi system in conjunction with a SAN, you must decide whether file-based tools are more suitable for your particular situation. When you consider a file-based solution that uses VMware tools and VMFS instead of the array tools, be aware of the following points: n Using VMware tools and VMFS is better for provisioning. One large LUN is allocated and multiple .vmdk files can be placed on that LUN.
Chapter 2 Overview of Using ESXi with a SAN n Overall impact on SAN environment, storage performance (while backing up), and other applications. n Identification of peak traffic periods on the SAN (backups scheduled during those peak periods can slow the applications and the backup process). n Time to schedule all backups within the data center. n Time it takes to back up an individual application. n Resource availability for archiving data; usually offline media access (tape).
Using ESXi with Fibre Channel SAN 3 When you set up ESXi hosts to use FC SAN storage arrays, special considerations are necessary. This section provides introductory information about how to use ESXi with a FC SAN array.
vSphere Storage Ports in Fibre Channel SAN In the context of this document, a port is the connection from a device into the SAN. Each node in the SAN, such as a host, a storage device, or a fabric component has one or more ports that connect it to the SAN. Ports are identified in a number of ways. WWPN (World Wide Port Name) A globally unique identifier for a port that allows certain applications to access the port.
Chapter 3 Using ESXi with Fibre Channel SAN With ESXi hosts, use a single-initiator zoning or a single-initiator-single-target zoning. The latter is a preferred zoning practice. Using the more restrictive zoning prevents problems and misconfigurations that can occur on the SAN. For detailed instructions and best zoning practices, contact storage array or switch vendors.
Configuring Fibre Channel Storage 4 When you use ESXi systems with SAN storage, specific hardware and system requirements exist. This chapter includes the following topics: n “ESXi Fibre Channel SAN Requirements,” on page 39 n “Installation and Setup Steps,” on page 40 n “N-Port ID Virtualization,” on page 41 ESXi Fibre Channel SAN Requirements In preparation for configuring your SAN and setting up your ESXi system to use SAN storage, review the requirements and recommendations.
vSphere Storage Setting LUN Allocations This topic provides general information about how to allocate LUNs when your ESXi works in conjunction with SAN. When you set LUN allocations, be aware of the following points: Storage provisioning To ensure that the ESXi system recognizes the LUNs at startup time, provision all LUNs to the appropriate HBAs before you connect the SAN to the ESXi system. VMware recommends that you provision all LUNs to all ESXi HBAs at the same time.
Chapter 4 Configuring Fibre Channel Storage 3 Perform any necessary storage array modification. Most vendors have vendor-specific documentation for setting up a SAN to work with VMware ESXi. 4 Set up the HBAs for the hosts you have connected to the SAN. 5 Install ESXi on the hosts. 6 Create virtual machines and install guest operating systems. 7 (Optional) Set up your system for VMware HA failover or for using Microsoft Clustering Services. 8 Upgrade or modify your environment as needed.
vSphere Storage For information, see the VMware Compatibility Guide and refer to your vendor documentation. n Use HBAs of the same type, either all QLogic or all Emulex. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs. n If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual machine. This is required to support multipathing even though only one path at a time will be active.
Chapter 4 Configuring Fibre Channel Storage 5 Deselect the Temporarily Disable NPIV for this virtual machine check box. 6 Select Generate new WWNs. 7 Specify the number of WWNNs and WWPNs. A minimum of 2 WWPNs are needed to support failover with NPIV. Typically only 1 WWNN is created for each virtual machine. The host creates WWN assignments for the virtual machine.
Configuring Fibre Channel over Ethernet 5 To access Fibre Channel storage, an ESXi host can use the Fibre Channel over Ethernet (FCoE) protocol. The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage, but can use 10Gbit lossless Ethernet to deliver Fibre Channel traffic.
vSphere Storage Configuration Guidelines for Software FCoE When setting up your network environment to work with ESXi software FCoE, follow the guidelines and best practices that VMware offers. Network Switch Guidelines Follow these guidelines when you configure a network switch for software FCoE environment: n On the ports that communicate with your ESXi host, disable the Spanning Tree Protocol (STP).
Chapter 5 Configuring Fibre Channel over Ethernet 6 Enter a network label. Network label is a friendly name that identifies the VMkernel adapter that you are creating, for example, FCoE. 7 Specify a VLAN ID and click Next. Because FCoE traffic requires an isolated network, make sure that the VLAN ID you enter is different from the one used for regular networking on your host. For more information, see the vSphere Networking documentation.
Booting ESXi from Fibre Channel SAN 6 When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk. ESXi supports booting through a Fibre Channel host bus adapter (HBA) or a Fibre Channel over Ethernet (FCoE) converged network adapter (CNA).
vSphere Storage Boot from Fibre Channel SAN Requirements and Considerations Your ESXi boot configuration must meet specific requirements. Table 6‑1. Boot from SAN Requirements Requirement Description ESXi system requirements Follow vendor recommendation for the server booting from a SAN. Adapter requirements Enable and correctly configure the adapter, so it can access the boot LUN. See your vendor documentation.
Chapter 6 Booting ESXi from Fibre Channel SAN 2 Configure the storage array. a From the SAN storage array, make the ESXi host visible to the SAN. This process is often referred to as creating an object. b From the SAN storage array, set up the host to have the WWPNs of the host’s adapters as port names or node names. c Create LUNs. d Assign LUNs. e Record the IP addresses of the switches and storage arrays. f Record the WWPN for each SP.
vSphere Storage Configure Emulex HBA to Boot from SAN Configuring the Emulex HBA BIOS to boot from SAN includes enabling the BootBIOS prompt and enabling BIOS. Procedure 1 Enable the BootBIOS Prompt on page 52 When you configure the Emulex HBA BIOS to boot ESXi from SAN, you need to enable the BootBIOS prompt. 2 Enable the BIOS on page 52 When you configure the Emulex HBA BIOS to boot ESXi from SAN, you need to enable BIOS.
Chapter 6 Booting ESXi from Fibre Channel SAN g Select 1. WWPN. (Boot this device using WWPN, not DID). h Select x to exit and Y to reboot. 4 Boot into the system BIOS and move Emulex first in the boot controller sequence. 5 Reboot and install on a SAN LUN. Configure QLogic HBA to Boot from SAN This sample procedure explains how to configure the QLogic HBA to boot ESXi from SAN. The procedure involves enabling the QLogic HBA BIOS, enabling the selectable boot, and selecting the boot LUN.
10 If any remaining storage processors show in the list, press C to clear the data.
11 Press Esc twice to exit and press Enter to save the setting.
Booting ESXi with Software FCoE 7 ESXi supports boot from FCoE capable network adapters. When you install and boot ESXi from an FCoE LUN, the host can use a VMware software FCoE adapter and a network adapter with FCoE capabilities. The host does not require a dedicated FCoE HBA. You perform most configurations through the option ROM of your network adapter. The network adapters must support one of the following formats, which communicate parameters about an FCoE boot device to VMkernel.
vSphere Storage n Coredump is not supported on any software FCoE LUNs, including the boot LUN. n Multipathing is not supported at pre-boot. n Boot LUN cannot be shared with other hosts even on shared storage. Best Practices for Software FCoE Boot VMware recommends several best practices when you boot your system from a software FCoE LUN. n Make sure that the host has access to the entire boot LUN. The boot LUN cannot be shared with other hosts even on shared storage.
Chapter 7 Booting ESXi with Software FCoE Install and Boot ESXi from Software FCoE LUN When you set up your system to boot from a software FCoE LUN, you install the ESXi image to the target LUN. You can then boot your host from that LUN. Prerequisites n Configure the option ROM of the network adapter to point to a target LUN that you want to use as the boot LUN. Make sure that you have information about the bootable LUN.
n Use the esxcli command to verify whether the boot LUN is present.
esxcli conn_options hardware bootdevice list
Best Practices for Fibre Channel Storage 8 When using ESXi with Fibre Channel SAN, follow best practices that VMware offers to avoid performance problems. The vSphere Web Client offers extensive facilities for collecting performance information. The information is graphically displayed and frequently updated. You can also use the resxtop or esxtop command-line utilities. The utilities provide a detailed look at how ESXi uses resources in real time.
vSphere Storage n Become familiar with the various monitor points in your storage network, at all visibility points, including host's performance charts, FC switch statistics, and storage performance statistics. n Be cautious when changing IDs of the LUNs that have VMFS datastores being used by your ESXi host. If you change the ID, the datastore becomes inactive and its virtual machines fail. You can resignature the datastore to make it active again.
Chapter 8 Best Practices for Fibre Channel Storage Tuning statically balanced storage arrays is a matter of monitoring the specific performance statistics (such as I/O operations per second, blocks per second, and response time) and distributing the LUN workload to spread the workload across all the SPs. Note Dynamic load balancing is not currently supported with ESXi. Server Performance with Fibre Channel You must consider several factors to ensure optimal server performance.
Using ESXi with iSCSI SAN 9 You can use ESXi in conjunction with a storage area network (SAN), a specialized high-speed network that connects computer systems to high-performance storage subsystems. Using ESXi together with a SAN provides storage consolidation, improves reliability, and helps with disaster recovery. To use ESXi effectively with a SAN, you must have a working knowledge of ESXi systems and SAN concepts.
vSphere Storage Ports in the iSCSI SAN A single discoverable entity on the iSCSI SAN, such as an initiator or a target, represents an iSCSI node. Each node has one or more ports that connect it to the SAN. iSCSI ports are end-points of an iSCSI session. Each node can be identified in a number of ways. IP Address Each iSCSI node can have an IP address associated with it so that routing and switching equipment on your network can establish the connection between the server and storage.
Chapter 9 Using ESXi with iSCSI SAN The 16-hexadecimal digits are text representations of a 64-bit number of an IEEE EUI (extended unique identifier) format. The top 24 bits are a company ID that IEEE registers with a particular company. The lower 40 bits are assigned by the entity holding that company ID and must be unique. iSCSI Initiators To access iSCSI targets, your host uses iSCSI initiators.
vSphere Storage Figure 9‑1. Target Compared to LUN Representations target LUN LUN LUN target target target LUN LUN LUN storage array storage array Three LUNs are available in each of these configurations. In the first case, the host detects one target but that target has three LUNs that can be used. Each of the LUNs represents individual storage volume. In the second case, the host detects three different targets, each having one LUN.
Chapter 9 Using ESXi with iSCSI SAN Discovery A discovery session is part of the iSCSI protocol, and it returns the set of targets you can access on an iSCSI storage system. The two types of discovery available on ESXi are dynamic and static. Dynamic discovery obtains a list of accessible targets from the iSCSI storage system, while static discovery can only try to access one particular target by target name and address.
vSphere Storage How Virtual Machines Access Data on an iSCSI SAN ESXi stores a virtual machine's disk files within a VMFS datastore that resides on a SAN storage device. When virtual machine guest operating systems issue SCSI commands to their virtual disks, the SCSI virtualization layer translates these commands to VMFS file operations.
10 Configuring iSCSI Adapters and Storage Before ESXi can work with a SAN, you must set up your iSCSI adapters and storage. To do this, you must first observe certain basic requirements and then follow best practices for installing and setting up hardware or software iSCSI adapters to access the SAN. The following table lists the iSCSI adapters (vmhbas) that ESXi supports and indicates whether VMkernel networking configuration is required. Table 10‑1.
vSphere Storage n “Configuring CHAP Parameters for iSCSI Adapters,” on page 98 n “Configuring Advanced Parameters for iSCSI,” on page 102 n “iSCSI Session Management,” on page 103 ESXi iSCSI SAN Requirements You must meet several requirements for your ESXi host to work properly with a SAN. n Verify that your SAN storage hardware and firmware combinations are supported in conjunction with ESXi systems. For an up-to-date list, see VMware Compatibility Guide.
Chapter 10 Configuring iSCSI Adapters and Storage Network Configuration and Authentication Before your ESXi host can discover iSCSI storage, the iSCSI initiators must be configured and authentication might have to be set up. n For software iSCSI and dependent hardware iSCSI, networking for the VMkernel must be configured. You can verify the network configuration by using the vmkping utility. With software iSCSI and dependent iSCSI, IPv4 and IPv6 protocols are supported.
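As an example of verifying VMkernel connectivity to an iSCSI target with vmkping, the following is a sketch with a hypothetical VMkernel interface and target address:
vmkping -I vmk1 10.10.10.100
# Add -s 8972 -d to test jumbo-frame paths with non-fragmented packets if your iSCSI network uses an MTU of 9000.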
vSphere Storage What to do next If required, configure CHAP parameters and jumbo frames. View Independent Hardware iSCSI Adapters View an independent hardware iSCSI adapter to verify that it is correctly installed and ready for configuration. After you install an independent hardware iSCSI adapter on a host, it appears on the list of storage adapters available for configuration. You can view its properties. Prerequisites Required privilege: Host.Configuration.
Chapter 10 Configuring iSCSI Adapters and Storage Edit Network Settings for Hardware iSCSI After you install an independent hardware iSCSI adapter, you might need to change its default network settings so that the adapter is configured properly for the iSCSI SAN. Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Manage tab, and click Storage. 3 Click Storage Adapters, and select the adapter (vmhba#) to configure.
Prerequisites
Required privilege: Host.Configuration.Storage Partition Configuration
Procedure
1 Browse to the host in the vSphere Web Client navigator.
2 Click the Manage tab, and click Storage.
3 Click Storage Adapters and select the iSCSI adapter to configure from the list.
4 Under Adapter Details, click the Targets tab.
5 Configure the discovery method.
Option Description
Dynamic Discovery
a Click Dynamic Discovery and click Add.
b Type the IP address or DNS name of the storage system and click OK.
c Rescan the iSCSI adapter.
After establishing the SendTargets session with the iSCSI system, your host populates the Static Discovery list with all newly discovered targets.
Static Discovery
a Click Static Discovery and click Add.
Chapter 10 Configuring iSCSI Adapters and Storage Flow control manages the rate of data transmission between two nodes to prevent a fast sender from overrunning a slow receiver. For best results, enable flow control at the end points of the I/O path, at the hosts and iSCSI storage systems. To enable flow control for the host, use the esxcli system module parameters command. For details, see the VMware knowledge base article at http://kb.vmware.
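As a hedged illustration only, module parameters are inspected and changed with the following commands; the module and parameter names are placeholders, because the exact values depend on your NIC driver and the knowledge base article that applies to it.
esxcli system module parameters list -m driver_module
esxcli system module parameters set -m driver_module -p "parameter_name=value"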
What to do next
Although the dependent iSCSI adapter is enabled by default, to make it functional, you must set up networking for the iSCSI traffic and bind the adapter to the appropriate VMkernel iSCSI port. You then configure discovery addresses and CHAP parameters.
Modify General Properties for iSCSI Adapters
You can change the default iSCSI name and alias assigned to your iSCSI adapters. For independent hardware iSCSI adapters, you can also change the default IP settings.
Chapter 10 Configuring iSCSI Adapters and Storage Create Network Connections for iSCSI Configure connections for the traffic between the software or dependent hardware iSCSI adapters and the physical network adapters. The following tasks discuss the iSCSI network configuration with a vSphere standard switch. If you use a vSphere distributed switch with multiple uplink ports, for port binding, create a separate distributed port group per each physical NIC.
vSphere Storage You created the virtual VMkernel adapter (vmk#) for a physical network adapter (vmnic#) on your host. What to do next If your host has one physical network adapter for iSCSI traffic, you must bind the virtual adapter that you created to the iSCSI adapter. If you have multiple network adapters, create additional VMkernel adapters and then perform iSCSI binding. The number of virtual adapters must correspond to the number of physical adapters on the host.
Change Network Policy for iSCSI
If you use a single vSphere standard switch to connect multiple VMkernel adapters to multiple network adapters, set up network policy so that only one physical network adapter is active for each VMkernel adapter. By default, for each VMkernel adapter on the vSphere standard switch, all network adapters appear as active. You must override this setup, so that each VMkernel adapter maps to only one corresponding active physical adapter.
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Manage tab, and click Storage. 3 Click Storage Adapters and select the software or dependent iSCSI adapter to configure from the list. 4 Under Adapter Details, click the Network Port Binding tab and click Add. 5 Select a VMkernel adapter to bind with the iSCSI adapter. Note Make sure that the network policy for the VMkernel adapter is compliant with the binding requirements.
5 Configure the discovery method.
Option Description
Dynamic Discovery
a Click Dynamic Discovery and click Add.
b Type the IP address or DNS name of the storage system and click OK.
c Rescan the iSCSI adapter.
After establishing the SendTargets session with the iSCSI system, your host populates the Static Discovery list with all newly discovered targets.
Static Discovery
a Click Static Discovery and click Add.
b Enter the target’s information and click OK.
c Rescan the iSCSI adapter.
vSphere Storage Prerequisites Required privilege: Host.Configuration.Storage Partition Configuration Note If you boot from iSCSI using the software iSCSI adapter, the adapter is enabled and the network configuration is created at the first boot. If you disable the adapter, it is reenabled each time you boot the host. Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Manage tab, and click Storage. 3 Click Storage Adapters, and click Add.
Chapter 10 Configuring iSCSI Adapters and Storage Create Network Connections for iSCSI Configure connections for the traffic between the software or dependent hardware iSCSI adapters and the physical network adapters. The following tasks discuss the iSCSI network configuration with a vSphere standard switch. If you use a vSphere distributed switch with multiple uplink ports, for port binding, create a separate distributed port group per each physical NIC.
vSphere Storage You created the virtual VMkernel adapter (vmk#) for a physical network adapter (vmnic#) on your host. What to do next If your host has one physical network adapter for iSCSI traffic, you must bind the virtual adapter that you created to the iSCSI adapter. If you have multiple network adapters, create additional VMkernel adapters and then perform iSCSI binding. The number of virtual adapters must correspond to the number of physical adapters on the host.
Change Network Policy for iSCSI
If you use a single vSphere standard switch to connect multiple VMkernel adapters to multiple network adapters, set up network policy so that only one physical network adapter is active for each VMkernel adapter. By default, for each VMkernel adapter on the vSphere standard switch, all network adapters appear as active. You must override this setup, so that each VMkernel adapter maps to only one corresponding active physical adapter.
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Manage tab, and click Storage. 3 Click Storage Adapters and select the software or dependent iSCSI adapter to configure from the list. 4 Under Adapter Details, click the Network Port Binding tab and click Add. 5 Select a VMkernel adapter to bind with the iSCSI adapter. Note Make sure that the network policy for the VMkernel adapter is compliant with the binding requirements.
5 Configure the discovery method.
Option Description
Dynamic Discovery
a Click Dynamic Discovery and click Add.
b Type the IP address or DNS name of the storage system and click OK.
c Rescan the iSCSI adapter.
After establishing the SendTargets session with the iSCSI system, your host populates the Static Discovery list with all newly discovered targets.
Static Discovery
a Click Static Discovery and click Add.
b Enter the target’s information and click OK.
c Rescan the iSCSI adapter.
vSphere Storage 3 Click Storage Adapters, and select the adapter (vmhba#) to configure. 4 Under Adapter Details, click the Properties tab and click Edit in the General panel. 5 To change the default iSCSI name for your adapter, enter the new name. Make sure the name you enter is worldwide unique and properly formatted or some storage devices might not recognize the iSCSI adapter. 6 (Optional) Enter the iSCSI alias. The alias is a name that you use to identify the iSCSI adapter.
Figure 10‑2. 1:1 adapter mapping on separate vSphere standard switches (vSwitch1: vmnic1 paired with iSCSI1/vmk1; vSwitch2: vmnic2 paired with iSCSI2/vmk2)
An alternative is to add all NICs and VMkernel adapters to a single vSphere standard switch. In this case, you must override the default network setup and make sure that each VMkernel adapter maps to only one corresponding active physical adapter.
Chapter 10 Configuring iSCSI Adapters and Storage Guidelines for Using iSCSI Port Binding in ESXi You can use multiple VMkernel adapters bound to iSCSI to have multiple paths to an iSCSI array that broadcasts a single IP address. When you use port binding for multipathing, follow these guidelines: n iSCSI ports of the array target must reside in the same broadcast domain and IP subnet as the VMkernel adapters.
vSphere Storage Create a Single VMkernel Adapter for iSCSI Connect the VMkernel, which runs services for iSCSI storage, to a physical network adapter. Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click Actions > Add Networking. 3 Select VMkernel Network Adapter, and click Next. 4 Select New standard switch to create a vSphere standard switch. 5 Click the Add adapters icon, and select the network adapter (vmnic#) to use for iSCSI.
Chapter 10 Configuring iSCSI Adapters and Storage c Make sure that you are using the existing switch, and click Next. d Click the Add adapters icon, and select one or more network adapters (vmnic#) to use for iSCSI. With dependent hardware iSCSI adapters, select only those NICs that have a corresponding iSCSI component. e 5 Complete configuration, and click Finish. Create iSCSI VMkernel adapters for all physical network adapters that you added.
VMkernel Adapter (vmk#) Physical Network Adapter (vmnic#)
vmk1 Active Adapters: vmnic1; Unused Adapters: vmnic2
vmk2 Active Adapters: vmnic2; Unused Adapters: vmnic1
What to do next
After you perform this task, bind the virtual VMkernel adapters to the software iSCSI or dependent hardware iSCSI adapters.
Bind iSCSI and VMkernel Adapters
Bind an iSCSI adapter with a VMkernel adapter.
Prerequisites
Create a virtual VMkernel adapter for each physical network adapter on your host.
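The binding can also be performed from the ESXi Shell with the esxcli iscsi networkportal commands. This is a sketch with hypothetical adapter and VMkernel names.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal list --adapter=vmhba33
# The list command confirms which VMkernel adapters are currently bound to the iSCSI adapter.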
Managing iSCSI Network
Special considerations apply to network adapters, both physical and VMkernel, that are associated with an iSCSI adapter. After you create network connections for iSCSI, an iSCSI indicator on a number of Networking dialog boxes becomes enabled. This indicator shows that a particular virtual or physical network adapter is iSCSI-bound.
vSphere Storage n To set up and verify physical network switches for Jumbo Frames, consult your vendor documentation. The following table explains the level of support that ESXi provides to Jumbo Frames. Table 10‑3. Support of Jumbo Frames Type of iSCSI Adapters Jumbo Frames Support Software iSCSI Supported Dependent Hardware iSCSI Supported. Check with vendor. Independent Hardware iSCSI Supported. Check with vendor.
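For software iSCSI, enabling Jumbo Frames typically means raising the MTU on both the standard switch and the VMkernel adapter. A hedged example with placeholder names:
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# The physical switch ports and the storage system must also be configured for the same MTU end to end.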
Chapter 10 Configuring iSCSI Adapters and Storage Configuring Discovery Addresses for iSCSI Adapters You need to set up target discovery addresses, so that the iSCSI adapter can determine which storage resource on the network is available for access. The ESXi system supports these discovery methods: Dynamic Discovery Also known as SendTargets discovery. Each time the initiator contacts a specified iSCSI server, the initiator sends the SendTargets request to the server.
5 Configure the discovery method.
Option Description
Dynamic Discovery
a Click Dynamic Discovery and click Add.
b Type the IP address or DNS name of the storage system and click OK.
c Rescan the iSCSI adapter.
After establishing the SendTargets session with the iSCSI system, your host populates the Static Discovery list with all newly discovered targets.
Static Discovery
a Click Static Discovery and click Add.
b Enter the target’s information and click OK.
c Rescan the iSCSI adapter.
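The same discovery addresses can be configured from the ESXi Shell; the adapter name, address, and target IQN below are placeholders.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.100:3260
esxcli iscsi adapter discovery statictarget add --adapter=vmhba33 --address=10.10.10.100:3260 --name=iqn.2016-01.com.example:storage1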
Chapter 10 Configuring iSCSI Adapters and Storage ESXi supports the following CHAP authentication methods: Unidirectional CHAP In unidirectional CHAP authentication, the target authenticates the initiator, but the initiator does not authenticate the target. Bidirectional CHAP In bidirectional CHAP authentication, an additional level of security enables the initiator to authenticate the target. VMware supports this method for software and dependent hardware iSCSI adapters only.
n Required privilege: Host.Configuration.Storage Partition Configuration
Procedure
1 Display storage adapters and select the iSCSI adapter to configure.
2 Under Adapter Details, click the Properties tab and click Edit in the Authentication panel.
3 Specify authentication method.
n None
n Use unidirectional CHAP if required by target
n Use unidirectional CHAP unless prohibited by target
n Use unidirectional CHAP
n Use bidirectional CHAP. To configure bidirectional CHAP, you must select this option.
4 Specify the outgoing CHAP name.
3 From the list of available targets, select a target to configure and click Authentication.
4 Deselect Inherit settings from parent and specify authentication method.
n None
n Use unidirectional CHAP if required by target
n Use unidirectional CHAP unless prohibited by target
n Use unidirectional CHAP
n Use bidirectional CHAP. To configure bidirectional CHAP, you must select this option.
5 Specify the outgoing CHAP name.
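CHAP can also be set from the ESXi Shell with the esxcli iscsi adapter auth chap set command. The following is a hedged sketch for adapter-level unidirectional CHAP; the names and secret are placeholders, and the exact options differ when you configure discovery addresses or individual targets.
esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=chap_user --secret=chap_secret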
vSphere Storage Configuring Advanced Parameters for iSCSI You might need to configure additional parameters for your iSCSI initiators. For example, some iSCSI storage systems require ARP (Address Resolution Protocol) redirection to move iSCSI traffic dynamically from one port to another. In this case, you must activate ARP redirection on your host. The following table lists advanced iSCSI parameters that you can configure using the vSphere Web Client.
Chapter 10 Configuring iSCSI Adapters and Storage Table 10‑5. Additional Parameters for iSCSI Initiators (Continued) Advanced Parameter Description Configurable On ARP Redirect Allows storage systems to move iSCSI traffic dynamically from one port to another. ARP is required by storage systems that do arraybased failover. Software iSCSI Dependent Hardware iSCSI Independent Hardware iSCSI Delayed ACK Allows systems to delay acknowledgment of received data packets.
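A hedged command-line equivalent for viewing and changing a single advanced parameter looks like the following; the adapter name, key, and value are placeholders.
esxcli iscsi adapter param get --adapter=vmhba33
esxcli iscsi adapter param set --adapter=vmhba33 --key=DelayedAck --value=false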
vSphere Storage You can also establish a session to a specific target port. This can be useful if your host connects to a singleport storage system that, by default, presents only one target port to your initiator, but can redirect additional sessions to a different target port. Establishing a new session between your iSCSI initiator and another target port creates an additional path to the storage system.
Procedure
u To add or duplicate an iSCSI session, run the following command:
esxcli --server=server_name iscsi session add
The command takes these options:
Option Description
-A|--adapter=str The iSCSI adapter name, for example, vmhba34. This option is required.
-s|--isid=str The ISID of a session to duplicate. You can find it by listing all sessions.
-n|--name=str The iSCSI target name, for example, iqn.X.
What to do next
Rescan the iSCSI adapter.
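For example, to duplicate an existing session to a target, a hedged invocation with hypothetical adapter, ISID, and target name might look like this:
esxcli --server=server_name iscsi session add -A vmhba34 -s 00023d000001 -n iqn.2016-01.com.example:storage1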
Booting from iSCSI SAN 11 When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk. You can use boot from the SAN if you do not want to handle maintenance of local storage or have diskless hardware configurations, such as blade systems. ESXi supports different methods of booting from the iSCSI SAN. Table 11‑1.
vSphere Storage n Configure a diagnostic partition. n With independent hardware iSCSI only, you can place the diagnostic partition on the boot LUN. If you configure the diagnostic partition in the boot LUN, this LUN cannot be shared across multiple hosts. If a separate LUN is used for the diagnostic partition, it can be shared by multiple hosts. n If you boot from SAN using iBFT, you cannot set up a diagnostic partition on a SAN LUN.
Chapter 11 Booting from iSCSI SAN 2 Use the BIOS to set the host to boot from the CD/DVD-ROM drive first. 3 During server POST, press Crtl+q to enter the QLogic iSCSI HBA configuration menu. 4 Select the I/O port to configure. By default, the Adapter Boot mode is set to Disable. 5 6 Configure the HBA. a From the Fast!UTIL Options menu, select Configuration Settings > Host Adapter Settings.
vSphere Storage When you first boot from iSCSI, the iSCSI boot firmware on your system connects to an iSCSI target. If login is successful, the firmware saves the networking and iSCSI boot parameters in the iBFT and stores the table in the system's memory. The system uses this table to configure its own iSCSI connection and networking and to start up. The following list describes the iBFT iSCSI boot sequence. 1 When restarted, the system BIOS detects the iSCSI boot firmware on the network adapter.
Chapter 11 Booting from iSCSI SAN 3 Install ESXi to iSCSI Target on page 111 When setting up your host to boot from iBFT iSCSI, install the ESXi image to the target LUN. 4 Boot ESXi from iSCSI Target on page 112 After preparing the host for an iBFT iSCSI boot and copying the ESXi image to the iSCSI target, perform the actual boot. Configure iSCSI Boot Parameters To begin an iSCSI boot process, a network adapter on your host must have a specially configured iSCSI boot firmware.
vSphere Storage n If you use Broadcom adapters, set Boot to iSCSI target to Disabled. Procedure 1 Insert the installation media in the CD/DVD-ROM drive and restart the host. 2 When the installer starts, follow the typical installation procedure. 3 When prompted, select the iSCSI LUN as the installation target. The installer copies the ESXi boot image to the iSCSI LUN. 4 After the system restarts, remove the installation DVD.
Chapter 11 Booting from iSCSI SAN n Use the first physical network adapter for the management network. n Use the second physical network adapter for the iSCSI network. Make sure to configure the iBFT. n After the host boots, you can add secondary network adapters to both the management and iSCSI networks. Change iBFT iSCSI Boot Settings If settings, such as the IQN name, IP address, and so on, change on the iSCSI storage or your host, update the iBFT.
vSphere Storage Solution 114 1 Use the vSphere Web Client to connect to the ESXi host. 2 Re-configure the iSCSI and networking on the host to match the iBFT parameters. 3 Perform a rescan. VMware, Inc.
Best Practices for iSCSI Storage 12 When using ESXi with the iSCSI SAN, follow best practices that VMware offers to avoid problems. Check with your storage representative if your storage system supports Storage API - Array Integration hardware acceleration features. If it does, refer to your vendor documentation for information on how to enable hardware acceleration support on the storage system side. For more information, see Chapter 23, “Storage Hardware Acceleration,” on page 259.
vSphere Storage If there are no running virtual machines on the VMFS datastore, after you change the ID of the LUN, you must use rescan to reset the ID on your host. For information on using rescan, see “Storage Refresh and Rescan Operations,” on page 124. n If you need to change the default iSCSI name of your iSCSI adapter, make sure the name you enter is worldwide unique and properly formatted.
Chapter 12 Best Practices for iSCSI Storage Because each application has different requirements, you can meet these goals by choosing an appropriate RAID group on the storage system. To achieve performance goals, perform the following tasks: n Place each LUN on a RAID group that provides the necessary performance levels. Pay attention to the activities and resource utilization of other LUNS in the assigned RAID group.
vSphere Storage When writing data to storage, multiple systems or virtual machines might attempt to fill their links. As Dropped Packets shows, when this happens, the switch between the systems and the storage system has to drop data. This happens because, while it has a single connection to the storage device, it has more traffic to send to the storage system than a single link can carry.
Chapter 12 Best Practices for iSCSI Storage Configuration changes to avoid this problem involve making sure several input Ethernet links are not funneled into one output link, resulting in an oversubscribed link. When a number of links transmitting near capacity are switched to a smaller number of links, oversubscription is a possibility.
Managing Storage Devices 13 Manage local and networked storage devices that your ESXi host has access to.
vSphere Storage Table 13‑1. Storage Device Information (Continued) Storage Device Information Description Owner The plug-in, such as the NMP or a third-party plug-in, that the host uses to manage paths to the storage device. For details, see “Managing Multiple Paths,” on page 188. Hardware Acceleration Information about whether the storage device assists the host with virtual machine management operations. The status can be Supported, Not Supported, or Unknown.
Chapter 13 Managing Storage Devices 3 Click Storage Adapters. All storage adapters installed on the host are listed under Storage Adapters. 4 Select the adapter from the list and click the Devices tab. Storage devices that the host can access through the adapter are displayed. Understanding Storage Device Naming Each storage device, or LUN, is identified by several names.
vSphere Storage Legacy Identifier In addition to the SCSI INQUIRY or mpx. identifiers, for each device, ESXi generates an alternative legacy name. The identifier has the following format: vml.number The legacy identifier includes a series of digits that are unique to the device and can be derived in part from the Page 83 information, if it is available. For nonlocal devices that do not support Page 83 information, the vml. name is used as the only available unique identifier.
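To see which identifiers your host assigned, you can list the devices from the ESXi Shell. This is a sketch; your output depends on the attached storage.
esxcli storage core device list
# Each device entry shows its naa., eui., t10., or mpx. identifier together with the display name and other attributes.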
Chapter 13 Managing Storage Devices n Change the path masking on a host. n Reconnect a cable. n Change CHAP settings (iSCSI only). n Add or remove discovery or static addresses (iSCSI only). n Add a single host to the vCenter Server after you have edited or removed from the vCenter Server a datastore shared by the vCenter Server hosts and the single host. Important If you rescan when a path is unavailable, the host removes the path from the list of paths to the device.
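From the command line, a manual rescan can be issued for one adapter or for all adapters; the adapter name below is a placeholder.
esxcli storage core adapter rescan --all
esxcli storage core adapter rescan --adapter=vmhba33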
vSphere Storage You can modify the Disk.MaxLUN parameter depending on your needs. For example, if your environment has a smaller number of storage devices with LUN IDs from 0 through 100, you can set the value to 101 to improve device discovery speed on targets that do not support REPORT_LUNS. Lowering the value can shorten the rescan time and boot time. However, the time to rescan storage devices might depend on other factors, including the type of storage system and the load on the storage system.
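As a hedged example, the same parameter can be read and changed from the ESXi Shell:
esxcli system settings advanced list --option=/Disk/MaxLUN
esxcli system settings advanced set --option=/Disk/MaxLUN --int-value=101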
Chapter 13 Managing Storage Devices The vSphere Web Client displays the following information for the device: n The operational state of the device changes to Lost Communication. n All paths are shown as Dead. n Datastores on the device are grayed out. The host automatically removes the PDL device and all paths to the device if no open connections to the device exist, or after the last connection closes. You can disable the automatic removal of paths by setting the advanced host parameter Disk.
vSphere Storage 5 Perform any necessary reconfiguration of the storage device by using the array console. 6 Reattach the storage device. See “Attach Storage Devices,” on page 128. 7 Mount the datastore and restart the virtual machines. See “Mount Datastores,” on page 168. Detach Storage Devices Safely detach a storage device from your host. You might need to detach the device to make it inaccessible to your host, when, for example, you perform a hardware upgrade on the storage side.
Chapter 13 Managing Storage Devices 2 Unmount the datastore. See “Unmount Datastores,” on page 168. 3 Perform a rescan on all ESXi hosts that had access to the device. See “Perform Storage Rescan,” on page 125. Note If the rescan is not successful and the host continues to list the device, some pending I/O or active references to the device might still exist.
vSphere Storage Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Manage tab, and click Settings. 3 Under System, click Advanced System Settings. 4 Under Advanced System Settings, select the Misc.APDHandlingEnable parameter and click the Edit icon. 5 Change the value to 0. If you disabled the APD handling, you can reenable it when a device enters the APD state.
Chapter 13 Managing Storage Devices 2 Check the connection status in the Status: field. n on - Device is connected. n dead - Device has entered the APD state. The APD timer starts. n dead timeout - The APD timeout has expired. n not connected - Device is in the PDL state.
vSphere Storage Enable or Disable the Locator LED on Storage Devices Use the locator LED to identify specific storage devices, so that you can locate them among other devices. You can turn the locator LED on or off. Procedure 132 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Manage tab, and click Storage. 3 Click Storage Devices. 4 From the list of storage devices, select one or more disks and enable or disable the locator LED indicator.
Working with Flash Devices 14 In addition to regular storage hard disk drives (HDDs), vSphere supports flash storage devices. Unlike the regular HDDs that are electromechanical devices containing moving parts, flash devices use semiconductors as their storage medium and have no moving parts. Typically, flash devices are very resilient and provide faster access to data. To detect flash devices, ESXi uses an inquiry mechanism based on T10 standards.
vSphere Storage Using Flash Devices with vSphere In your vSphere environment, you can use flash devices for a variety of functionalities. Table 14‑1. Using Flash Devices with vSphere Functionality Description Virtual SAN Virtual SAN requires flash devices. For more information, see the Administering VMware Virtual SAN documentation. VMFS Datastores You can create VMFS datastores on flash devices. Use the datastores for the following purposes: n Store virtual machines.
Chapter 14 Working with Flash Devices However, ESXi might not recognize certain storage devices as flash devices when their vendors do not support automatic flash device detection. In other cases, certain non-SATA SAS flash devices might not be detected as local. When devices are not recognized as local flash, they are excluded from the list of devices offered for Virtual SAN or virtual flash resource. Marking these devices as local flash makes them available for virtual SAN and virtual flash resource.
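One hedged command-line way to tag such a device uses a claim rule for the VMW_SATP_LOCAL plug-in; the device identifier is a placeholder, and you should verify the exact options for your release before using them.
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.xxxxxxxxxxxxxxxx --option=enable_ssd
esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
# Use enable_local instead of, or in addition to, enable_ssd when a device must also be treated as local.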
vSphere Storage Monitor Flash Devices You can monitor certain critical flash device parameters, including Media Wearout Indicator, Temperature, and Reallocated Sector Count, from an ESXi host. Use the esxcli command to monitor flash devices. In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported.
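The exact steps vary by release, but a hedged example of querying the statistics for one device from the ESXi Shell looks like this; the device identifier is a placeholder.
esxcli storage core device smart get -d naa.xxxxxxxxxxxxxxxx
# The output includes parameters such as Media Wearout Indicator, Temperature, and Reallocated Sector Count when the device reports them.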
Chapter 14 Working with Flash Devices 2 Calculate the total number of writes and convert to GB. One block is 512 bytes. To calculate the total number of writes, multiply the Blocks Written value by 512, and convert the resulting value to GB. In this example, the total number of writes since the last reboot is approximately 322 GB. 3 Estimate the average number of writes per day in GB. Divide the total number of writes by the number of days since the last reboot.
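As a worked example with hypothetical numbers: if the device reports 675,000,000 Blocks Written and the host has been up for 30 days, the total is 675,000,000 × 512 bytes, or roughly 345 GB, which averages about 11.5 GB of writes per day. If, for example, the vendor rates the device for 20 GB of sustained writes per day over 5 years, such a workload is within the rated endurance.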
vSphere Storage n After you set up a virtual flash resource, the total available capacity can be used and consumed by both ESXi hosts as host swap cache and virtual machines as read cache. n You cannot choose individual flash devices to be used for either swap cache or read cache. All flash devices are combined into a single flash resource entity. Set Up Virtual Flash Resource You can set up a virtual flash resource or add capacity to existing virtual flash resource.
Chapter 14 Working with Flash Devices Virtual Flash Advanced Settings You can change advanced options for virtual flash. Procedure 1 In the vSphere Web Client, navigate to the host. 2 Click the Manage tab and click Settings. 3 Under System, click Advanced System Settings. 4 Select the setting to change and click the Edit button. 5 Option Description VFLASH.
vSphere Storage 3 Click Host Cache Configuration. 4 Select the datastore in the list and click the Allocate space for host cache icon. 5 To enable the host swap cache on a per-datastore basis, select the Allocate space for host cache check box. By default, maximum available space is allocated for host cache. 6 (Optional) To change the host cache size, select Custom size and make appropriate adjustments. 7 Click OK.
About VMware vSphere Flash Read Cache 15 Flash Read Cache™ lets you accelerate virtual machine performance through the use of host resident flash devices as a cache. You can reserve a Flash Read Cache for any individual virtual disk. The Flash Read Cache is created only when a virtual machine is powered on, and it is discarded when a virtual machine is suspended or powered off. When you migrate a virtual machine you have the option to migrate the cache.
vSphere Storage DRS Support for Flash Read Cache DRS supports virtual flash as a resource. DRS manages virtual machines with Flash Read Cache reservations. Every time DRS runs, it displays the available virtual flash capacity reported by the ESXi host. Each host supports one virtual flash resource. DRS selects a host that has sufficient available virtual flash capacity to start a virtual machine.
Chapter 15 About VMware vSphere Flash Read Cache 6 Click Advanced to specify the following parameters. Option / Description: Reservation - Select a cache size reservation. Block Size - Select a block size. 7 Click OK. Migrate Virtual Machines with Flash Read Cache When you migrate a powered on virtual machine from one host to another, you can specify whether or not to migrate Flash Read Cache contents with the virtual disks.
vSphere Storage n 144 Make sure that the VM Hardware panel displays correct Virtual Flash Read Cache information for each virtual disk. VMware, Inc.
Working with Datastores 16 Datastores are logical containers, analogous to file systems, that hide specifics of physical storage and provide a uniform model for storing virtual machine files. Datastores can also be used for storing ISO images, virtual machine templates, and floppy images. Depending on the storage you use, datastores can be of the following types: n VMFS datastores that are backed by the Virtual Machine File System format. See “Understanding VMFS Datastores,” on page 146.
vSphere Storage n “Configuring VMFS Pointer Block Cache,” on page 179 Understanding VMFS Datastores To store virtual disks, ESXi uses datastores, which are logical containers that hide specifics of physical storage from virtual machines and provide a uniform model for storing virtual machine files. Datastores that you deploy on block storage devices use the vSphere VMFS format, a special high-performance file system format that is optimized for storing virtual machines.
Chapter 16 Working with Datastores n Support of small files of 1KB. n Ability to open any file located on a VMFS5 datastore in a shared mode by a maximum of 32 hosts. n Scalability improvements on storage devices that support hardware acceleration. For information, see Chapter 23, “Storage Hardware Acceleration,” on page 259. n Default use of ATS-only locking mechanisms on storage devices that support ATS.
vSphere Storage When you run multiple virtual machines, VMFS provides specific locking mechanisms for virtual machine files, so that virtual machines can operate safely in a SAN environment where multiple ESXi hosts share the same VMFS datastore. In addition to virtual machines, the VMFS datastores can store other files, such as virtual machine templates and ISO images.
Chapter 16 Working with Datastores n Changing a file's attributes n Powering a virtual machine on or off n Creating or deleting a VMFS datastore n Expanding a VMFS datastore n Creating a template n Deploying a virtual machine from a template n Migrating a virtual machine with vMotion When metadata changes are made in a shared storage environment, VMFS uses special locking mechanisms to protect its data and prevent multiple hosts from concurrently writing to the metadata.
vSphere Storage Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell. Procedure u To display information related to VMFS locking mechanisms, run the following command: esxcli --server=server_name storage vmfs lockmode list. The table lists items that the output of the command might include. Table 16‑2.
Chapter 16 Working with Datastores Procedure 1 Prepare for an Upgrade to ATS-Only on page 151 You must perform several steps to prepare your environment for an online or offline upgrade to ATS-only locking. 2 Upgrade Locking Mechanism to ATS-Only on page 151 If a VMFS datastore is ATS-only compatible, you can upgrade its locking mechanism from ATS+SCSI to ATS-only.
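As a sketch, assuming the lockmode namespace is available in your build, the upgrade itself is typically performed with a command similar to the following, where my_datastore is a placeholder volume label:
esxcli --server=server_name storage vmfs lockmode set --ats -l my_datastore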
vSphere Storage 2 For an online upgrade, perform additional steps. a Close the datastore on all hosts that have access to the datastore, so that the hosts can recognize the change. You can use one of the following methods: n Unmount and mount the datastore. n Put the datastore into maintenance mode and exit maintenance mode.
Chapter 16 Working with Datastores n Fault Tolerance (FT) and Host Profiles Note NFS 4.1 does not support legacy Fault Tolerance. n ISO images, which are presented as CD-ROMs to virtual machines n Virtual machine snapshots n Virtual machines with large capacity virtual disks, or disks greater than 2TB. Virtual disks created on NFS datastores are thin-provisioned by default, unless you use hardware acceleration that supports the Reserve Space operation. NFS 4.1 does not support hardware acceleration.
vSphere Storage NFS Datastore Guidelines n To use NFS 4.1, upgrade your vSphere environment to version 6.x. You cannot mount an NFS 4.1 datastore to hosts that do not support version 4.1. n You cannot use different NFS versions to mount the same datastore. NFS 3 and NFS 4.1 clients do not use the same locking protocol. As a result, accessing the same virtual disks from two incompatible clients might result in incorrect behavior and cause data corruption. n NFS 3 and NFS 4.
Chapter 16 Working with Datastores n NFS 4.1 does not support hardware acceleration. This limitation does not allow you to create thick virtual disks on NFS 4.1 datastores. n NFS 4.1 supports the Kerberos authentication protocol to secure communication with the NFS server. For more information, see “Using Kerberos Credentials for NFS 4.1,” on page 158. n NFS 4.1 uses share reservations as a locking mechanism. n NFS 4.1 supports inbuilt file locking. n NFS 4.
vSphere Storage The behavior of the NFS Client rule set (nfsClient) is different from other rule sets. When the NFS Client rule set is enabled, all outbound TCP ports are open for the destination hosts in the list of allowed IP addresses. The NFS 4.1 rule set opens outgoing connections to destination port 2049, which is the port named in the specification for version 4.1 protocol. The outgoing connections are open for all IP addresses at the time of the first mount.
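As an illustration, assuming you manage the rule sets from the ESXi Shell rather than the vSphere Web Client, the NFS-related rule sets can be checked and enabled with commands similar to the following:
esxcli network firewall ruleset list | grep -i nfs
esxcli network firewall ruleset set --ruleset-id=nfsClient --enabled=true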
Chapter 16 Working with Datastores 4 Scroll down to an appropriate version of NFS to make sure that the port is opened. Using Layer 3 Routed Connections to Access NFS Storage When you use Layer 3 (L3) routed connections to access NFS storage, consider certain requirements and restrictions. Ensure that your environment meets the following requirements: n Use Cisco's Hot Standby Router Protocol (HSRP) in IP routers.
vSphere Storage 3 If you plan to use Kerberos authentication with the NFS 4.1 datastore, configure the ESXi hosts for Kerberos authentication. Make sure that each host that mounts this datastore is a part of an Active Directory domain and its NFS authentication credentials are set. What to do next You can now create an NFS datastore on the ESXi hosts. Using Kerberos Credentials for NFS 4.1 With NFS version 4.1, ESXi supports the Kerberos authentication mechanism.
Chapter 16 Working with Datastores 3 Enable Kerberos Authentication in Active Directory on page 160 If you use NFS 4.1 storage with Kerberos, you must add each ESXi host to an Active Directory domain and enable Kerberos authentication. Kerberos integrates with Active Directory to enable single sign-on and provides an additional layer of security when used across an insecure network connection. What to do next After you configure your host for Kerberos, you can create an NFS 4.
vSphere Storage Enable Kerberos Authentication in Active Directory If you use NFS 4.1 storage with Kerberos, you must add each ESXi host to an Active Directory domain and enable Kerberos authentication. Kerberos integrates with Active Directory to enable single sign-on and provides an additional layer of security when used across an insecure network connection. Prerequisites Set up an AD domain and a domain administrator account with the rights to add hosts to the domain.
Chapter 16 Working with Datastores Prerequisites Install and configure any adapters that your storage requires. Rescan the adapters to discover newly added storage devices. Procedure 1 In the vSphere Web Client navigator, select vCenter Inventory Lists > Datastores. 2 Click the Create a New Datastore icon. 3 Type the datastore name and if required, select the placement location for the datastore. The vSphere Web Client enforces a 42-character limit for the datastore name.
vSphere Storage 5 Specify an NFS version. n NFS 3 n NFS 4.1 Important If multiple hosts access the same datastore, you must use the same protocol on all hosts. 6 Type the server name or IP address and the mount point folder name. With NFS 4.1, you can add multiple IP addresses or server names if the server supports trunking. The host uses these values to achieve multipathing to the NFS server mount point. You can use IPv4 or IPv6 addresses for NFS 3 and non-Kerberos NFS 4.1.
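If you prefer the command line, an NFS datastore can also be mounted from the ESXi Shell; the server names, share path, and volume names below are placeholders, and the nfs41 namespace accepts a comma-separated list of server addresses for multipathing:
esxcli storage nfs add --host=nfs-server.example.com --share=/export/vms --volume-name=nfs_ds
esxcli storage nfs41 add --hosts=192.0.2.1,192.0.2.2 --share=/export/vms --volume-name=nfs41_ds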
Chapter 16 Working with Datastores In addition to LUN snapshotting and replication, the following storage device operations might cause ESXi to mark the existing datastore on the device as a copy of the original datastore: n LUN ID changes n SCSI device type changes, for example, from SCSI-2 to SCSI-3 n SPC-2 compliancy enablement ESXi can detect the VMFS datastore copy and display it in the vSphere Web Client.
vSphere Storage n After resignaturing, the storage device replica that contained the VMFS copy is no longer treated as a replica. n A spanned datastore can be resignatured only if all its extents are online. n The resignaturing process is crash and fault tolerant. If the process is interrupted, you can resume it later.
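For reference, unresolved VMFS copies can also be listed and resignatured from the ESXi Shell with commands similar to the following, where the volume label is a placeholder:
esxcli --server=server_name storage vmfs snapshot list
esxcli --server=server_name storage vmfs snapshot resignature -l copied_datastore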
Chapter 16 Working with Datastores Table 16‑3. Comparing Upgraded and Newly Formatted VMFS5 Datastores (Continued)
Characteristics | Upgraded VMFS5 | Formatted VMFS5
Partition format | MBR. Conversion to GPT happens only after you expand the datastore to a size larger than 2TB. | GPT
Datastore limits | Retains limits of VMFS3 datastore.
vSphere Storage n Dynamically add a new extent. The datastore can span over up to 32 extents with the size of each extent of more than 2TB, yet appear as a single volume. The spanned VMFS datastore can use any or all of its extents at any time. It does not need to fill up a particular extent before using the next one. Note Datastores that use only hardware assisted locking, also called the atomic test and set (ATS) mechanism, cannot span over non-ATS devices.
Chapter 16 Working with Datastores Administrative Operations for Datastores After creating datastores, you can perform several administrative operations on the datastores. Certain operations, such as renaming a datastore, are available for all types of datastores. Others apply to specific types of datastores. n Change Datastore Name on page 167 You can change the name of an existing datastore.
vSphere Storage Unmount Datastores When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you specify. The datastore continues to appear on other hosts, where it remains mounted. Do not perform any configuration operations that might result in I/O to the datastore while the unmount is in progress. Note Make sure that the datastore is not used by vSphere HA heartbeating. vSphere HA heartbeating does not prevent you from unmounting the datastore.
Chapter 16 Working with Datastores 2 Right-click the datastore to mount and select one of the following options: n Mount Datastore n Mount Datastore on Additional Hosts Whether you see one or another option depends on the type of datastore you use. 3 Select the hosts that should access the datastore. Remove VMFS Datastores You can delete any type of VMFS datastore, including copies that you have mounted without resignaturing.
vSphere Storage Procedure 1 Open the datastore browser. a Display the datastore in the inventory. b Right-click the datastore and select Browse Files. 2 Explore the contents of the datastore by navigating to existing folders and files. 3 Perform administrative tasks by using the icons and options. Icons and Options Descriptions Install the Client Integration plug-in or Upload a file to the datastore. See “Upload Files to Datastores,” on page 170. Create a folder on the datastore.
Chapter 16 Working with Datastores Procedure 1 Open the datastore browser. a Display the datastore in the inventory. b Right-click the datastore and select Browse Files. 2 (Optional) Create a folder to store the file. 3 Select the target folder and click the Upload a file to the datastore icon ( ). 4 Locate the item to upload on the local computer and click Open. 5 Refresh the datastore file browser to see the uploaded file on the list.
vSphere Storage Procedure 1 Open the datastore browser. a Display the datastore in the inventory. b Right-click the datastore and select Browse Files. 2 Browse to an object you want to move, either a folder or a file. 3 Select the object and click the Move selection to a new location icon. 4 Specify the destination location. 5 (Optional) Select Overwrite files and folders with matching names at the destination. 6 Click OK.
Chapter 16 Working with Datastores c Click the Related Objects tab and click Datastores. The datastore that stores the virtual machine files is listed. d Select the datastore and click the Navigate to the datastore file browser icon. The datastore browser displays contents of the datastore. 2 Open the virtual machine folder and browse to the virtual disk file that you want to convert. The file has the .vmdk extension and is marked with the virtual disk icon.
vSphere Storage Storage Filtering vCenter Server provides storage filters to help you avoid storage device corruption or performance degradation that might be caused by an unsupported use of storage devices. These filters are available by default. Table 16‑4. Storage Filters Filter Name Description config.vpxd.filter.vmfsFilter (VMFS Filter) Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by vCenter Server.
Chapter 16 Working with Datastores d Click Add Row and add the following parameters: Name / Value: scsi#.returnNoConnectDuringAPD = True, scsi#.returnBusyOnNoConnectStatus = False. e Click OK. Collecting Diagnostic Information for ESXi Hosts on a Storage Device During a host failure, ESXi must be able to save diagnostic information to a preconfigured location for diagnostic and technical support purposes.
vSphere Storage 3 Specify the type of diagnostic partition. Option Description Private local Creates the diagnostic partition on a local disk. This partition stores fault information only for your host. Private SAN storage Creates the diagnostic partition on a non-shared SAN LUN. This partition stores fault information only for your host. Shared SAN storage Creates the diagnostic partition on a shared SAN LUN.
Chapter 16 Working with Datastores Procedure 1 Create a VMFS datastore core dump file by running the following command: esxcli system coredump file add The command takes the following options, but they are not required and can be omitted: 2 Option Description --datastore | -d datastore_UUID or datastore_name If not provided, the system selects a datastore of sufficient size. --file | -f file_name If not provided, the system specifies a unique name for the core dump file.
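A minimal sketch of the full sequence, assuming the datastore and file names shown are placeholders, might look like the following; the set --smart --enable step activates the file as the dump location, and list verifies the result:
esxcli system coredump file add -d datastore1 -f vmkdump_file
esxcli system coredump file set --smart --enable true
esxcli system coredump file list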
vSphere Storage Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Chapter 16 Working with Datastores #esxcli storage vmfs extent list The Device Name and Partition columns in the output identify the device. In this example, the volume 1TB_VMFS5 is backed by the device naa.600508e000000000b367477b3be3d703, partition 3. 2 Run VOMA to check for VMFS errors. Provide the absolute path to the device partition that backs the VMFS datastore, and provide a partition number with the device name. For example: # voma -m vmfs -f check -d /vmfs/devices/disks/naa.600508e000000000b367477b3be3d703:3
vSphere Storage You can configure the minimum and maximum sizes of the pointer block cache on each ESXi host. When the size of the pointer block cache approaches the configured maximum size, an eviction mechanism removes some pointer block entries from the cache. Base the maximum size of the pointer block cache on the working size of all open virtual disk files that reside on VMFS datastores. All VMFS datastores on the host use a single pointer block cache.
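As an example, assuming the defaults do not suit your workload, the minimum and maximum cache sizes are controlled by the VMFS3.MinAddressableSpaceTB and VMFS3.MaxAddressableSpaceTB advanced settings; from the ESXi Shell they can be adjusted with commands similar to the following, where the values are placeholders:
esxcli system settings advanced set -o /VMFS3/MinAddressableSpaceTB -i 10
esxcli system settings advanced set -o /VMFS3/MaxAddressableSpaceTB -i 32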
Chapter 16 Working with Datastores Procedure u To obtain or reset the pointer block cache statistics, use the following command: esxcli storage vmfs pbcache Option Description get Get VMFS pointer block cache statistics. reset Reset the VMFS pointer block cache statistics.
Understanding Multipathing and Failover 17 To maintain a constant connection between a host and its storage, ESXi supports multipathing. Multipathing is a technique that lets you use more than one physical path that transfers data between the host and an external storage device. In case of a failure of any element in the SAN network, such as an adapter, switch, or cable, ESXi can switch to another physical path, which does not use the failed component.
vSphere Storage Figure 17‑1. Multipathing and Failover with Fibre Channel (diagram: two hosts, each with two HBAs, connect through two switches to storage processors SP1 and SP2 on the storage array) Similarly, if SP1 fails or the links between SP1 and the switches break, SP2 takes over and provides the connection between the switch and the storage device. This process is called SP failover. VMware ESXi supports both HBA and SP failovers with its multipathing capability.
Chapter 17 Understanding Multipathing and Failover Figure 17‑2. Host-Based Path Failover (diagram: host 1 uses hardware iSCSI with HBA1 and HBA2, host 2 uses software iSCSI with a software adapter over NIC1 and NIC2, and both reach the SP of the iSCSI storage across the IP network) Failover with Hardware iSCSI With hardware iSCSI, the host typically has two or more hardware iSCSI adapters available, from which the storage system can be reached using one or more switches.
vSphere Storage Array-Based Failover with iSCSI Some iSCSI storage systems manage path use of their ports automatically and transparently to ESXi. When using one of these storage systems, your host does not see multiple ports on the storage and cannot choose the storage port it connects to. These systems have a single virtual port address that your host uses to initially communicate.
Chapter 17 Understanding Multipathing and Failover Figure 17‑4. Port Reassignment (diagram: after a port failure, the address 10.0.0.1 is reassigned to the surviving storage port, which then responds on both 10.0.0.1 and 10.0.0.2) With this form of array-based failover, you can have multiple paths to the storage only if you use multiple ports on the ESXi host. These paths are active-active. For additional information, see “iSCSI Session Management,” on page 103.
vSphere Storage 6 Reboot guest OS for the change to take effect. Managing Multiple Paths To manage storage multipathing, ESXi uses a collection of Storage APIs, also called the Pluggable Storage Architecture (PSA). The PSA is an open, modular framework that coordinates the simultaneous operation of multiple multipathing plug-ins (MPPs).
Chapter 17 Understanding Multipathing and Failover Figure 17‑5. Pluggable Storage Architecture (diagram: within the VMkernel, the pluggable storage architecture coordinates third-party MPPs alongside the VMware NMP, which in turn contains VMware and third-party SATPs and PSPs) The multipathing modules perform the following operations: n Manage physical path claiming and unclaiming. n Manage creation, registration, and deregistration of logical devices. n Associate physical paths with logical devices.
vSphere Storage After the NMP determines which SATP to use for a specific storage device and associates the SATP with the physical paths for that storage device, the SATP implements the tasks that include the following: n Monitors the health of each physical path. n Reports changes in the state of each physical path. n Performs array-specific actions necessary for storage fail-over. For example, for active-passive devices, it can activate passive paths.
Chapter 17 Understanding Multipathing and Failover 2 The PSP selects an appropriate physical path on which to issue the I/O. 3 The NMP issues the I/O request on the path selected by the PSP. 4 If the I/O operation is successful, the NMP reports its completion. 5 If the I/O operation reports an error, the NMP calls the appropriate SATP. 6 The SATP interprets the I/O command errors and, when appropriate, activates the inactive paths.
vSphere Storage Disabled The path is disabled and no data can be transferred. Dead The software cannot connect to the disk through this path. If you are using the Fixed path policy, you can see which path is the preferred path. The preferred path is marked with an asterisk (*) in the Preferred column. For each path you can also display the path's name. The name includes parameters that describe the path: adapter ID, target ID, and device ID.
Chapter 17 Understanding Multipathing and Failover Setting a Path Selection Policy For each storage device, the ESXi host sets the path selection policy based on the claim rules. By default, VMware supports the following path selection policies. If you have a third-party PSP installed on your host, its policy also appears on the list. Fixed (VMware) The host uses the designated preferred path, if it has been configured. Otherwise, it selects the first working path discovered at system boot time.
vSphere Storage 8 Click OK to save your settings and exit the dialog box. Disable Storage Paths You can temporarily disable paths for maintenance or other reasons. You disable a path using the Paths panel. You have several ways to access the Paths panel, from a datastore, a storage device, or an adapter view. This task explains how to disable a path using a storage device view. Procedure 1 Browse to the host in the vSphere Web Client navigator. 2 Click the Manage tab, and click Storage.
Chapter 17 Understanding Multipathing and Failover n By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you want to unmask these devices. List Multipathing Claim Rules for the Host Use the esxcli command to list available multipathing claim rules. Claim rules indicate which multipathing plug-in, the NMP or any third-party MPP, manages a given physical path.
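For example, the current rules are displayed with the following command:
esxcli --server=server_name storage core claimrule list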
vSphere Storage n Any paths not described in the previous rules are claimed by NMP. n The Rule Class column in the output describes the category of a claim rule. It can be MP (multipathing plug-in), Filter, or VAAI. n The Class column shows which rules are defined and which are loaded. The file parameter in the Class column indicates that the rule is defined. The runtime parameter indicates that the rule has been loaded into your system.
Chapter 17 Understanding Multipathing and Failover Display NMP Storage Devices Use the esxcli command to list all storage devices controlled by the VMware NMP and display SATP and PSP information associated with each device. In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported.
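For example, to list the devices together with their associated SATP and PSP, run a command similar to the following; adding -d device_ID typically limits the output to a single device:
esxcli --server=server_name storage nmp device list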
vSphere Storage 2 Option Description -D|--driver= Indicate the driver of the paths to use in this operation. -f|--force Force claim rules to ignore validity checks and install the rule anyway. --if-unset= Execute this command if this advanced user variable is not set to 1. -i|--iqn= Indicate the iSCSI Qualified Name for the target to use in this operation. -L|--lun= Indicate the LUN of the paths to use in this operation.
Chapter 17 Understanding Multipathing and Failover Delete Multipathing Claim Rules Use the esxcli commands to remove a multipathing PSA claim rule from the set of claim rules on the system. In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces.
vSphere Storage 2 Assign the MASK_PATH plug-in to a path by creating a new claim rule for the plug-in. esxcli --server=server_name storage core claimrule add -P MASK_PATH 3 Load the MASK_PATH claim rule into your system. esxcli --server=server_name storage core claimrule load 4 Verify that the MASK_PATH claim rule was added correctly. esxcli --server=server_name storage core claimrule list 5 If a claim rule for the masked path exists, remove the rule.
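As a concrete sketch with placeholder values, masking LUN 20 that is reachable through adapter vmhba2, channel 0, target 1 might use a location-based rule such as the following before the load step described above:
esxcli --server=server_name storage core claimrule add -r 500 -t location -A vmhba2 -C 0 -T 1 -L 20 -P MASK_PATH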
Chapter 17 Understanding Multipathing and Failover Procedure 1 Delete the MASK_PATH claim rule. esxcli --server=server_name storage core claimrule remove -r rule# 2 Verify that the claim rule was deleted correctly. esxcli --server=server_name storage core claimrule list 3 Reload the path claiming rules from the configuration file into the VMkernel.
vSphere Storage Option Description -M|--model=string Set the model string when adding a SATP claim rule. Vendor/Model rules are mutually exclusive with driver rules. -o|--option=string Set the option string when adding a SATP claim rule. -P|--psp=string Set the default PSP for the SATP claim rule. -O|--psp-option=string Set the PSP options for the SATP claim rule. -s|--satp=string The SATP for which a new rule will be added.
Chapter 17 Understanding Multipathing and Failover 5 Select one of the following options: n To disable the per file scheduling mechanism, change the value to No. Note After you turn off the per file I/O scheduling model, your host reverts to a legacy scheduling mechanism that uses a single I/O queue. The host maintains the single I/O queue for each virtual machine and storage device pair. All I/Os between the virtual machine and its virtual disks stored on the storage device are moved into this queue.
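For reference, the same setting can also be toggled from the ESXi Shell; a sketch, assuming the kernel setting name is available on your build:
esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE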
Raw Device Mapping 18 Raw device mapping (RDM) provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem. The following topics contain information about RDMs and provide instructions on how to create and manage RDMs.
vSphere Storage For example, you need to use raw LUNs with RDMs in the following situations: n When SAN snapshot or other layered applications run in the virtual machine. The RDM better enables scalable backup offloading systems by using features inherent to the SAN. n In any MSCS clustering scenario that spans physical hosts — virtual-to-virtual clusters as well as physical-to-virtual clusters.
Chapter 18 Raw Device Mapping Snapshots Makes it possible to use virtual machine snapshots on a mapped volume. Snapshots are not available when the RDM is used in physical compatibility mode. vMotion Lets you migrate a virtual machine with vMotion. The mapping file acts as a proxy to allow vCenter Server to migrate the virtual machine by using the same mechanism that exists for migrating virtual disk files. Figure 18‑2.
vSphere Storage n Replication software Such software uses a physical compatibility mode for RDMs so that the software can access SCSI devices directly. Various management products are best run centrally (not on the ESXi machine), while others run well on the virtual machines. VMware does not certify these applications or provide a compatibility matrix. To find out whether a SAN management application is supported in an ESXi environment, contact the SAN management software provider.
Chapter 18 Raw Device Mapping In physical mode, the VMkernel passes all SCSI commands to the device, with one exception: the REPORT LUNs command is virtualized so that the VMkernel can isolate the LUN to the owning virtual machine. Otherwise, all physical characteristics of the underlying hardware are exposed. Physical mode is useful to run SAN management agents or other SCSI target-based software in the virtual machine.
vSphere Storage Table 18‑1. Features Available with Virtual Disks and Raw Device Mappings (Continued)
ESXi Features | Virtual Disk File | Virtual Mode RDM | Physical Mode RDM
Snapshots | Yes | Yes | No
Distributed Locking | Yes | Yes | Yes
Clustering | Cluster-in-a-box only | Cluster-in-a-box, cluster-across-boxes | Physical-to-virtual clustering, cluster-across-boxes
SCSI Target-Based Software | No | No | Yes
VMware recommends that you use virtual disk files for the cluster-in-a-box type of clustering.
Chapter 18 Raw Device Mapping 10 Select a compatibility mode. Option / Description: Physical - Allows the guest operating system to access the hardware directly. Physical compatibility is useful if you are using SAN-aware applications on the virtual machine. However, a virtual machine with a physical compatibility RDM cannot be cloned, made into a template, or migrated if the migration involves copying the disk.
Working with Virtual Volumes 19 The Virtual Volumes functionality changes the storage management paradigm from managing space inside datastores to managing abstract storage objects handled by storage arrays. With Virtual Volumes, an individual virtual machine, not the datastore, becomes a unit of storage management, while storage hardware gains complete control over virtual disk content, layout, and management. Historically, vSphere storage management used a datastore-centric approach.
vSphere Storage Virtual Volumes Concepts With Virtual Volumes, abstract storage containers replace traditional storage volumes based on LUNs or NFS shares. In vCenter Server, the storage containers are represented by virtual datastores. Virtual datastores remove artificial boundaries of traditional datastores and are used to store virtual volumes, objects that encapsulate virtual machine files. Watch the video to learn more about different components of the Virtual Volumes functionality.
Chapter 19 Working with Virtual Volumes n A configuration virtual volume, or a home directory, represents a small directory that contains metadata files for a virtual machine. The files include a .vmx file, descriptor files for virtual disks, log files, and so forth. The configuration virtual volume is formatted with a file system. When ESXi uses SCSI protocol to connect to storage, configuration virtual volumes are formatted with VMFS.
vSphere Storage After you register a storage provider associated with the storage system, vCenter Server discovers all configured storage containers along with their storage capability profiles, protocol endpoints, and other attributes. A single storage container can export multiple capability profiles. As a result, virtual machines with diverse needs and different storage policy settings can be a part of the same storage container.
Chapter 19 Working with Virtual Volumes Virtual Volumes and VM Storage Policies A virtual machine that runs on a virtual datastore requires a VM storage policy. A VM storage policy is a set of rules that contains placement and quality of service requirements for a virtual machine. The policy enforces appropriate placement of the virtual machine within Virtual Volumes storage and guarantees that storage can satisfy virtual machine requirements.
vSphere Storage Virtual Volumes and Storage Protocols The Virtual Volumes functionality supports Fibre Channel, FCoE, iSCSI, and NFS. Storage transports expose protocol endpoints to ESXi hosts. When the SCSI-based protocol is used, the protocol endpoint represents a LUN defined by a T10-based LUN WWN. For the NFS protocol, the protocol endpoint is a mount point, such as IP address or DNS name and a share name.
Chapter 19 Working with Virtual Volumes Virtual Volumes Architecture An architectural diagram provides an overview of how all components of the Virtual Volumes functionality interact with each other.
vSphere Storage A VASA provider, or a storage provider, is developed through vSphere APIs for Storage Awareness. The storage provider enables communication between the vSphere stack — ESXi hosts, vCenter server, and the vSphere Web Client — on one side, and the storage system on the other. The VASA provider runs on the storage side and integrates with vSphere Storage Monitoring Service (SMS) to manage all aspects of Virtual Volumes storage.
Chapter 19 Working with Virtual Volumes Before You Enable Virtual Volumes To work with Virtual Volumes, you must make sure that your storage and vSphere environment are set up correctly. Follow these guidelines to prepare your storage system environment for Virtual Volumes. For additional information, contact your storage vendor. n The storage system or storage array that you use must be able to support Virtual Volumes and integrate with vSphere through vSphere APIs for Storage Awareness (VASA).
vSphere Storage Procedure 1 Register Storage Providers for Virtual Volumes on page 222 Your Virtual Volumes environment must include storage providers, also called VASA providers. Typically, third-party vendors develop storage providers through the VMware APIs for Storage Awareness (VASA). Storage providers facilitate communication between vSphere and the storage side. You must register the storage provider in vCenter Server to be able to work with Virtual Volumes.
Chapter 19 Working with Virtual Volumes 4 (Optional) To direct vCenter Server to the storage provider certificate, select the Use storage provider certificate option and specify the certificate's location. If you do not select this option, a thumbprint of the certificate is displayed. You can check the thumbprint and approve it. 5 Click OK to complete the registration. vCenter Server discovers and registers the Virtual Volumes storage provider.
vSphere Storage 5 Use tabs under Protocol Endpoint Details to access additional information and modify properties for the selected protocol endpoint. Tab Description Properties View the item properties and characteristics. For SCSI (block) items, view and edit multipathing policies. Paths (SCSI protocol endpoints only) Display paths available for the protocol endpoint. Disable or enable a selected path. Change the Path Selection Policy. Datastores Display a corresponding virtual datastore.
Chapter 19 Working with Virtual Volumes 3 Change Default Storage Policy for a Virtual Datastore on page 226 For virtual machines provisioned on virtual datastores, VMware provides a default No Requirements policy. You cannot edit this policy, but you can designate a newly created policy as default. Define a VM Storage Policy for Virtual Volumes You can create a VM storage policy compatible with a virtual datastore. Prerequisites Verify that the Virtual Volumes storage provider is available and active.
vSphere Storage 2 Assign the same storage policy to all virtual machine files and disks. a On the Select Storage page, select the storage policy compatible with Virtual Volumes, for example VVols Silver, from the VM Storage Policy drop-down menu. b Select the virtual datastore from the list of available datastores and click Next. The datastore becomes the destination storage resource for the virtual machine configuration file and all virtual disks. 3 Change the storage policy for the virtual disk.
Virtual Machine Storage Policies 20 Virtual machine storage policies are essential to virtual machine provisioning. These policies help you define storage requirements for the virtual machine and control which type of storage is provided for the virtual machine, how the virtual machine is placed within the storage, and which data services are offered for the virtual machine. When you define a storage policy, you specify storage requirements for applications that run on virtual machines.
vSphere Storage Understanding Virtual Machine Storage Policies Virtual machine storage policies capture storage characteristics that virtual machine home files and virtual disks require to run applications within the virtual machine. You can create several storage policies to define the types and classes of storage requirements. Each storage policy is not only a set of constraints that apply simultaneously.
Chapter 20 Virtual Machine Storage Policies n Rules Based on Tags on page 229 Rules based on tags reference datastore tags that you associate with specific datastores. You can apply more than one tag to a datastore. Common Rules Common rules are based on data services that are generic for all types of storage and do not depend on a datastore.
vSphere Storage About Datastore-Specific and Common Rule Sets A storage policy can include one or several rule sets that describe requirements for virtual machine storage resources. It can also include common rules. If common rules are not available in your environment or not defined, you can create a policy that includes datastore-specific rule sets. To define a policy, one rule set is required. Additional rule sets are optional.
Chapter 20 Virtual Machine Storage Policies 4 Apply the VM storage policy to a virtual machine. You can apply the storage policy when deploying the virtual machine or configuring its virtual disks. See “Assign Storage Policies to Virtual Machines,” on page 237. 5 Change the storage policy for virtual machine home files or virtual disks. See “Change Storage Policy Assignment for Virtual Machine Files and Disks,” on page 238.
vSphere Storage 7 (Optional) Create a category: a Select New Category. b Specify the category options. Category Property / Example: Category Name - Storage; Category Description - Category for tags related to storage; Cardinality - Many tags per object; Associable Object Types - Datastore and Datastore Cluster. 8 Click OK. The new tag is assigned to the datastore and appears on the datastore's Summary tab in the Tags pane.
Chapter 20 Virtual Machine Storage Policies 5 Finish VM Storage Policy Creation on page 234 You can review the list of datastores that are compatible with the VM storage policy and change any storage policy settings. What to do next You can apply this storage policy to virtual machines. If you use object-based storage, such as Virtual SAN and Virtual Volumes, you can designate this storage policy as the default.
vSphere Storage Procedure 1 On the Rule Set page, select a storage provider, for example, Virtual SAN or Virtual Volumes, from the Rules based on data services drop-down menu. The page expands to show data services provided by the storage resource. 2 Select a data service to include and specify its value. Verify that the values you provide are within the range of values that the data services profile of the storage resource advertises.
Chapter 20 Virtual Machine Storage Policies 3 Click Finish. The VM storage policy appears in the list. Delete a Virtual Machine Storage Policy You can delete a storage policy if you are not using it for any virtual machine or virtual disk. Procedure 1 From the vSphere Web Client Home, click Policies and Profiles > VM Storage Policies. 2 In the VM Storage Policies interface, select a policy to delete and click the Delete a VM Storage Policy icon ( ). 3 Click Yes.
vSphere Storage Storage Policies and Virtual Machines After you define a VM storage policy, you can apply it to a virtual machine. You apply the storage policy when provisioning the virtual machine or configuring its virtual disks. Depending on its type and configuration, the policy might serve different purposes. It can select the most appropriate datastore for the virtual machine and enforce the required level of service, or it can enable specific data services for the virtual machine and its disks.
Chapter 20 Virtual Machine Storage Policies User-Defined Default Policies for Virtual Machine Storage You can create a VM storage policy that is compatible with Virtual SAN or Virtual Volumes, and designate this policy as the default for Virtual SAN and virtual datastores. The user-defined default policy replaces the default storage policy that VMware provided. Each Virtual SAN and virtual datastore can have only one default policy at a time.
vSphere Storage 2 Assign the same storage policy to all virtual machine files and disks. a On the Select Storage page, select a storage policy from the VM Storage Policy drop-down menu. Based on its configuration, the storage policy separates all datastores into compatible and incompatible sets. If the policy references data services offered by a specific storage entity, for example, Virtual Volumes, the compatible list includes datastores that represent only that type of storage.
Chapter 20 Virtual Machine Storage Policies 5 6 Specify the VM storage policy for your virtual machine. Option Description Apply the same storage policy to all virtual machine objects. a b Select the policy from the VM storage policy drop-down menu. Click Apply to all. Apply different storage policies to the VM home object and virtual disks. a b Select the object. Select the policy from the VM storage policy drop-down menu for the object. Click OK.
vSphere Storage If the status is Out of Date, reapply the policy to the objects. See “Reapply Virtual Machine Storage Policy,” on page 241. Check Compliance for a VM Storage Policy You can check whether a virtual machine uses a datastore that is compatible with the storage requirements specified in the VM storage policy. Prerequisites Verify that the virtual machine has a storage policy that is associated with it. Procedure 1 In the vSphere Web Client, browse to the virtual machine.
Chapter 20 Virtual Machine Storage Policies Prerequisites Verify that the VM Storage Policy Compliance field on the virtual machine Summary tab displays the Not Compliant status. Procedure 1 In the vSphere Web Client, browse to the virtual machine. 2 Click the Summary tab. The VM Storage Policy Compliance panel on the VM Storage Policies pane shows the Not Compliant status. 3 Click the policy link in the VM Storage Policies panel.
Filtering Virtual Machine I/O 21 vSphere APIs for I/O Filtering (VAIO) provide a framework that allows third parties to create software components called I/O filters. The filters can be installed on ESXi hosts and can offer additional data services to virtual machines by processing I/O requests that move between the guest operating system of a virtual machine and virtual disks.
vSphere Storage n Replication. Replicates all write I/O operations to an external target location, such as another host or cluster. Note You can install several filters from the same category, such as caching, on your ESXi host. However, you can have only one filter from the same category per virtual disk. I/O Filtering Components Several components are involved in the I/O filtering process.
Chapter 21 Filtering Virtual Machine I/O Virtual Machine GuestOS I/O Path Filter 1 vCenter Server Filter 2 3rd Party CIM Provider 3rd Party Web Client Extension Plugin Filter N VAIO Filter Framework I/O Path Virtual Disk Each Virtual Machine Executable (VMX) component of a virtual machine contains a Filter Farmwork that manages the I/O filter plug-ins attached to the virtual disk. The Filter Framework invokes filters when I/O requests move between the guest operating system and the virtual disk.
vSphere Storage Using Flash Storage Devices with Cache I/O Filters A cache I/O filter can use a local flash device to cache virtual machine data. If your caching I/O filter uses local flash devices, before activating the filter, you need to configure a virtual flash resource, also known as VFFS volume, on your ESXi host. While processing the virtual machine read I/Os, the filter creates a virtual machine cache and places it on the VFFS volume.
Chapter 21 Filtering Virtual Machine I/O Deploy and Configure I/O Filters in the vSphere Environment You can install the I/O filters in your vSphere environment and then enable data services that the filters provide on your virtual machines. Prerequisites VMware partners create I/O filters through the vSphere APIs for I/O Filtering (VAIO) developer program and distribute them as filter packages.
vSphere Storage 2 Verify that the I/O filter components are properly installed on your ESXi hosts: esxcli --server=server_name software vib list The filter appears on the list of VIB packages. View I/O Filter Storage Providers After you deploy I/O filters, a storage provider, also called a VASA provider, is automatically registered for every ESXi host in the cluster. You can verify that the I/O filter storage providers appear as expected and are active.
Chapter 21 Filtering Virtual Machine I/O 4 From the list of available flash drives, select one or more drives to use for the virtual flash resource and click OK. The virtual flash resource is created. The Device Backing area lists all the drives that you use for the virtual flash resource. Enable I/O Filter Data Services on Virtual Disks Enabling data services that I/O filters provide is a two-step process.
vSphere Storage 5 On the Common Rules page, specify the I/O filter services to activate for the virtual machine. You can combine I/O filters from different categories, such as replication and caching, in one storage policy. Or create different policies for each category. You can use only one filter from the same category, for example caching, per storage policy. a Select Use common rules in the storage policy. b From the Add rule drop-down menu, select an I/O filter category.
Chapter 21 Filtering Virtual Machine I/O Assign the I/O Filter Policy to Virtual Machines To activate data services that I/O filters provide, associate the I/O filter policy with virtual disks. You can assign the policy when you create or edit a virtual machine. You can assign the I/O filter policy during an initial deployment of a virtual machine. This topic describes how to assign the policy when you create a new virtual machine.
vSphere Storage Uninstall I/O Filters from a Cluster You can uninstall I/O filters deployed in an ESXi host cluster. Prerequisites n Required privileges: Host.Config.Patch. Procedure 1 Uninstall the I/O filter by running the installer that your vendor provides. During uninstallation, vSphere ESX Agent Manager automatically places the hosts into maintenance mode. If the uninstallation is successful, the filter and any related components are removed from the hosts.
Chapter 21 Filtering Virtual Machine I/O n Cloning or migration of a virtual machine with I/O filter policy from one host to another requires the destination host to have a compatible filter installed. This requirement applies to migrations initiated by an administrator or by such functionalities as HA or DRS. n When you convert a template to a virtual machine, and the template is configured with I/O filter policy, the destination host must have the compatible I/O filter installed.
VMkernel and Storage 22 The VMkernel is a high-performance operating system that runs directly on the ESXi host. The VMkernel manages most of the physical resources on the hardware, including memory, physical processors, storage, and networking controllers. To manage storage, VMkernel has a storage subsystem that supports several Host Bus Adapters (HBAs) including parallel SCSI, SAS, Fibre Channel, FCoE, and iSCSI.
vSphere Storage Figure 22‑1.
Chapter 22 VMkernel and Storage n Array Thin Provisioning APIs. Help to monitor space use on thin-provisioned storage arrays to prevent out-of-space conditions, and to perform space reclamation. See “Array Thin Provisioning and VMFS Datastores,” on page 273. n Storage APIs - Storage Awareness. These vCenter Server-based APIs enable storage arrays to inform the vCenter Server about their configurations, capabilities, and storage health and events. See Chapter 25, “Using Storage Providers,” on page 277.
Storage Hardware Acceleration 23 The hardware acceleration functionality enables the ESXi host to integrate with compliant storage arrays and offload specific virtual machine and storage management operations to storage hardware. With the storage hardware assistance, your host performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth. The hardware acceleration is supported by block storage devices, Fibre Channel and iSCSI, and NAS devices.
vSphere Storage Hardware Acceleration Requirements The hardware acceleration functionality works only if you use an appropriate host and storage array combination. Table 23‑1. Hardware Acceleration Storage Requirements
ESXi | Block Storage Devices | NAS Devices
ESXi version 6.0 | Support T10 SCSI standard, or block storage plug-ins for array integration (VAAI) | Support NAS plug-ins for array integration
Note NFS 4.1 does not support hardware acceleration.
Chapter 23 Storage Hardware Acceleration In addition to hardware acceleration support, ESXi includes support for array thin provisioning. For information, see “Array Thin Provisioning and VMFS Datastores,” on page 273. Disable Hardware Acceleration for Block Storage Devices On your host, the hardware acceleration for block storage devices is enabled by default. You can use the vSphere Web Client advanced settings to disable the hardware acceleration operations.
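As an illustration, the same operations can also be disabled from the ESXi Shell by setting the related advanced options to 0; treat this as a sketch and verify the option names on your host:
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0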
vSphere Storage Procedure u Run the esxcli --server=server_name storage core plugin list --plugin-class=value command. For value, enter one of the following options: n Type VAAI to display plug-ins. The output of this command is similar to the following example: #esxcli --server=server_name storage core plugin list --plugin-class=VAAI Plugin name Plugin class VMW_VAAIP_EQL VAAI VMW_VAAIP_NETAPP VAAI VMW_VAAIP_CX VAAI n Type Filter to display the Filter.
Chapter 23 Storage Hardware Acceleration Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell. Procedure u Run the esxcli --server=server_name storage core device vaai status get -d=device_ID command. If the device is managed by a VAAI plug-in, the output shows the name of the plug-in attached to the device.
vSphere Storage Add Hardware Acceleration Claim Rules To configure hardware acceleration for a new array, you need to add two claim rules, one for the VAAI filter and another for the VAAI plug-in. For the new claim rules to be active, you first define the rules and then load them into your system. This procedure is for those block storage devices that do not support T10 SCSI commands and instead use the VAAI plug-ins. In the procedure, --server=server_name specifies the target server.
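A hedged sketch of the typical sequence, using MyVendor and VMW_VAAIP_EXAMPLE as placeholder vendor and plug-in names that you must replace with the values your storage vendor supplies:
esxcli --server=server_name storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER --type=vendor --vendor=MyVendor --autoassign
esxcli --server=server_name storage core claimrule add --claimrule-class=VAAI --plugin=VMW_VAAIP_EXAMPLE --type=vendor --vendor=MyVendor --autoassign
esxcli --server=server_name storage core claimrule load --claimrule-class=Filter
esxcli --server=server_name storage core claimrule load --claimrule-class=VAAI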
Chapter 23 Storage Hardware Acceleration Delete Hardware Acceleration Claim Rules Use the esxcli command to delete existing hardware acceleration claim rules. In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces.
vSphere Storage In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces. Prerequisites Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with vSphere Command-Line Interfaces.
Chapter 23 Storage Hardware Acceleration Update NAS Plug-Ins Upgrade hardware acceleration NAS plug-ins on your host when a storage vendor releases a new plug-in version. In the procedure, --server=server_name specifies the target server. The specified target server prompts you for a user name and password. Other connection options, such as a configuration file or session file, are supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces.
vSphere Storage n The source file type is RDM and the destination file type is non-RDM (regular file). n The source VMDK type is eagerzeroedthick and the destination VMDK type is thin. n The source or destination VMDK is in sparse or hosted format. n The source virtual machine has a snapshot. n The logical address and transfer length in the requested operation are not aligned to the minimum alignment required by the storage device.
Storage Thick and Thin Provisioning 24 vSphere supports two models of storage provisioning, thick provisioning and thin provisioning. Thick provisioning It is a traditional model of storage provisioning. With thick provisioning, a large amount of storage space is provided in advance in anticipation of future storage needs. However, the space might remain unused, causing underutilization of storage capacity.
vSphere Storage Figure 24‑1. Thick and thin virtual disks (diagram: two virtual machines on one datastore, one with a thick virtual disk whose full provisioned capacity is consumed on the datastore and one with a thin virtual disk that consumes only the capacity actually used) About Virtual Disk Provisioning Policies When you perform certain virtual machine management operations, such as creating a virtual disk, cloning a virtual machine to a template, or migrating a virtual machine, you can specify a provisioning policy for the virtual disk file.
Chapter 24 Storage Thick and Thin Provisioning Thin provisioning is the fastest method to create a virtual disk because it creates a disk with just the header information. It does not allocate or zero out storage blocks. Storage blocks are allocated and zeroed out when they are first accessed. Note If a virtual disk supports clustering solutions such as Fault Tolerance, do not make the disk thin. You can manually inflate the thin disk, so that it occupies the entire provisioned space.
vSphere Storage Procedure 1 In the vSphere Web Client, browse to the virtual machine. 2 Double-click the virtual machine and click the Summary tab. 3 Review the storage usage information in the upper right area of the Summary tab. Determine the Disk Format of a Virtual Machine You can determine whether your virtual disk is in thick or thin format. Procedure 1 In the vSphere Web Client, browse to the virtual machine. 2 Right-click the virtual machine and select Edit Settings.
Chapter 24 Storage Thick and Thin Provisioning 3 Right-click the virtual disk file and select Inflate. Note The option might not be available if the virtual disk is thick or when the virtual machine is running. The inflated virtual disk occupies the entire datastore space originally provisioned to it.
vSphere Storage n Storage array has appropriate firmware that supports T10-based Storage APIs - Array Integration (Thin Provisioning). For information, contact your storage provider and check the HCL. Space Usage Monitoring The thin provision integration functionality helps you to monitor the space usage on thin-provisioned LUNs and to avoid running out of space.
Chapter 24 Storage Thick and Thin Provisioning Multipath Plugin: NMP --------------------Thin Provisioning Status: yes Attached Filters: VAAI_FILTER VAAI Status: supported --------------------- An unknown status indicates that a storage device is thick. Note Some storage systems present all devices as thin-provisioned no matter whether the devices are thin or thick. Their thin provisioning status is always yes. For details, check with your storage vendor.
Using Storage Providers 25 A storage provider is a software component that is either offered by vSphere or is developed by a third party through the vSphere APIs for Storage Awareness (VASA). Storage providers integrate with a variety of storage entities that include external physical storage and storage abstractions, such as Virtual SAN and Virtual Volumes. Storage providers can also support software solutions, for example, I/O filters developed through vSphere APIs for I/O Filtering.
vSphere Storage Storage Providers and Storage Data Representation vCenter Server and ESXi communicate with the storage provider to obtain information that the storage provider collects from underlying physical and software-defined storage, or from available I/O filters. vCenter Server can then display the storage data in the vSphere Web Client. Information that the storage provider supplies can be divided into the following categories: n Storage data services and capabilities.
Chapter 25 Using Storage Providers Storage Status Reporting If you use storage providers, the vCenter Server can collect status characteristics for physical storage devices and display this information in the vSphere Web Client. The status information includes events and alarms. n Events indicate important changes in the storage configuration. Such changes might include creation and deletion of a LUN, or a LUN becoming inaccessible due to LUN masking.
vSphere Storage Securing Communication with Storage Providers To communicate with a storage provider, the vCenter Server uses a secure SSL connection. The SSL authentication mechanism requires that both parties, the vCenter Server and the storage provider, exchange SSL certificates and add them to their truststores. The vCenter Server can add the storage provider certificate to its truststore as part of the storage provider installation.
Update Storage Providers
The vCenter Server periodically updates storage data in its database. The updates are partial and reflect only those storage changes that storage providers communicate to the vCenter Server. When needed, you can perform a full database synchronization for the selected storage provider.

Procedure
1 Browse to vCenter Server in the vSphere Web Client navigator.
2 Click the Manage tab, and click Storage Providers.
26 Using vmkfstools

vmkfstools is one of the ESXi Shell commands for managing VMFS volumes and virtual disks. You can perform many storage operations using the vmkfstools command. For example, you can create and manage VMFS datastores on a physical partition, or manipulate virtual disk files stored on VMFS or NFS datastores.

Note After you make a change using vmkfstools, the vSphere Web Client might not be updated immediately. You need to use a refresh or rescan operation from the client.
Table 26-1. vmkfstools command arguments (Continued)

Argument    Description
device      Specifies devices or logical volumes. This argument uses a path name in the ESXi device file
            system. The path name begins with /vmfs/devices, which is the mount point of the device file
            system. Use the following formats when you specify different types of devices:
            - /vmfs/devices/disks for local or SAN-based disks.
            - /vmfs/devices/lvm for ESXi logical volumes.
File System Options
File system options allow you to create and manage VMFS datastores. These options do not apply to NFS. You can perform many of these tasks through the vSphere Web Client.

Listing Attributes of a VMFS Volume
Use the vmkfstools command to list attributes of a VMFS volume.

-P --queryfs
-h --humanreadable

When you use this option on any file or directory that resides on a VMFS volume, the option lists the attributes of the specified volume.
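As a brief illustration, assuming a VMFS datastore mounted at /vmfs/volumes/my_vmfs (the datastore name is reused from the creation example below), the volume attributes could be listed with a command of this form:

vmkfstools -P -h /vmfs/volumes/my_vmfs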
Example for Creating a VMFS File System
This example illustrates creating a new VMFS5 datastore named my_vmfs on the naa.ID:1 partition. The file block size is 1 MB.

vmkfstools -C vmfs5 -S my_vmfs /vmfs/devices/disks/naa.ID:1

Extending an Existing VMFS Volume
Use the vmkfstools command to add an extent to a VMFS volume.
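As a hedged sketch of the spanning syntax, assuming naa.ID2:1 is an unused partition to add and naa.ID:1 is the head partition of the existing my_vmfs volume (both identifiers are placeholders):

vmkfstools -Z /vmfs/devices/disks/naa.ID2:1 /vmfs/devices/disks/naa.ID:1

The first argument names the extent being added; the second names the head partition of the volume being extended.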
Virtual Disk Options
Virtual disk options allow you to set up, migrate, and manage virtual disks stored in VMFS and NFS file systems. You can also perform most of these tasks through the vSphere Web Client.

Supported Disk Formats
When you create or clone a virtual disk, you can use the -d --diskformat suboption to specify the format for the disk. Choose from the following formats:
- zeroedthick (default) – Space required for the virtual disk is allocated during creation.
You can specify the following suboptions with the -c option.
- -a specifies the controller that a virtual machine uses to communicate with the virtual disks. You can choose between BusLogic, LSI Logic, IDE, LSI Logic SAS, and VMware Paravirtual SCSI.
- -d specifies disk formats.
- -W specifies whether the virtual disk is a file on a VMFS or NFS datastore, or an object on a Virtual SAN or Virtual Volumes datastore.
- --policyFile fileName specifies the VM storage policy for the disk.
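A minimal sketch of how these suboptions might be combined; the size, format, adapter value, and target path below are illustrative assumptions, not values from this document:

vmkfstools -c 2048m -d thin -a lsilogic /vmfs/volumes/my_vmfs/my_vm/my_disk.vmdk

This would create a 2 GB thin-provisioned virtual disk presented through an LSI Logic controller; adjust the values for your environment.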
Renaming a Virtual Disk
This option renames a virtual disk file at the specified path on the VMFS volume. You must specify the original file name or file path oldName and the new file name or file path newName.

-E --renamevirtualdisk oldName newName

Cloning or Converting a Virtual Disk or RDM
Use the vmkfstools command to create a copy of a virtual disk or raw disk you specify. A non-root user is not allowed to clone a virtual disk or an RDM.
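The clone operation is invoked with the -i --clonevirtualdisk suboption, which takes a source path followed by a destination path. A hedged sketch, reusing the template and destination names that appear in the example below:

vmkfstools -i /vmfs/volumes/myVMFS/templates/gold-master.vmdk /vmfs/volumes/myVMFS/myOS.vmdk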
You can configure a virtual machine to use this virtual disk by adding lines to the virtual machine configuration file, as in the following example:

scsi0:0.present = TRUE
scsi0:0.fileName = /vmfs/volumes/myVMFS/myOS.vmdk

If you want to convert the format of the disk or change the adapter type, use the -d|--diskformat and the -a|--adaptertype suboptions. For example:

vmkfstools -i /vmfs/volumes/myVMFS/templates/gold-master.vmdk /vmfs/volumes/myVMFS/myOS.
Creating a Virtual Compatibility Mode Raw Device Mapping
This option creates a Raw Device Mapping (RDM) file on a VMFS volume and maps a raw LUN to this file. After this mapping is established, you can access the LUN as you would a normal VMFS virtual disk. The file length of the mapping is the same as the size of the raw LUN it points to.
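A minimal sketch of the mapping syntax, assuming /vmfs/devices/disks/naa.ID is the raw device and my_rdm.vmdk is the mapping file to create (both names are placeholders):

vmkfstools -r /vmfs/devices/disks/naa.ID my_rdm.vmdk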
Checking and Repairing Virtual Disks
Use this option to check or repair a virtual disk in case of an unclean shutdown.

-x --fix [check|repair]

Checking Disk Chain for Consistency
With this option, you can check the entire disk chain. You can determine whether any of the links in the chain are corrupted or any invalid parent-child relationships exist.

-e --chainConsistent

Storage Device Options
Device options allow you to perform administrative tasks for physical storage devices.