Table Of Contents
- 1 Introduction
- 2 Fibre Channel switch zoning
- 3 Host initiator settings
- 4 Modifying queue depth and timeouts
- 4.1 Host bus adapter queue depth
- 4.2 Storage driver queue depth and timeouts
- 4.3 Adjusting settings for permanent device loss conditions
- 4.4 Modifying the VMFS queue depth for virtual machines (DSNRO)
- 4.5 Adaptive queue depth
- 4.6 Modifying the guest operating system queue depth
- 4.7 Setting operating system disk timeouts
- 5 Guest virtual SCSI adapter selection
- 6 Mapping volumes to an ESXi server
- 6.1 Basic volume mapping concepts
- 6.2 Basic SC Series volume mappings
- 6.3 Multipathed volume concepts
- 6.4 Multipathed SC Series volumes
- 6.5 Configuring the VMware iSCSI software initiator for a single path
- 6.6 Configuring the VMware iSCSI software initiator for multipathing
- 6.7 iSCSI port multi-VLAN configuration recommendations
- 6.8 Configuring the FCoE software initiator for multipathing
- 6.9 VMware multipathing policies
- 6.10 Multipathing using a fixed path selection policy
- 6.11 Multipathing using a round robin path selection policy
- 6.12 Asymmetric logical unit access (ALUA) for front-end SAS
- 6.13 Unmapping volumes from an ESXi host
- 6.14 Mapping volumes from multiple arrays
- 6.15 Multipathing resources
- 7 Boot from SAN
- 8 Volume creation and sizing
- 9 Volume mapping layout
- 10 Raw device mapping (RDM)
- 11 Data Progression and storage profile selection
- 12 Thin provisioning and virtual disks
- 13 Extending VMware volumes
- 14 Snapshots (replays) and virtual machine backups
- 15 Replication and remote recovery
- 16 VMware storage features
- A Determining the appropriate queue depth for an ESXi host
- B Deploying vSphere client plug-ins
- C Configuring Dell Storage Manager VMware integrations
- D Host and cluster settings
- E Additional resources
VMware storage features
68 Dell EMC SC Series: Best Practices with VMware vSphere | 2060-M-BP-V
works at the VMFS level. If a large file is deleted from within a VMDK, the space is not returned to the
page pool unless the VMDK itself is deleted.
Note: In most patch levels of ESXi, the dead space reclamation primitive must be invoked manually. See the
article, Using esxcli in vSphere 5.5 and 6.0 to reclaim VMFS deleted blocks on thin-provisioned LUNs, in the
VMware Knowledge Base.
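For those ESXi versions, manual reclamation is performed with the esxcli storage vmfs unmap command; a minimal sketch, assuming a datastore labeled Datastore1 (the label is a placeholder, substitute your own from esxcli storage filesystem list):

```shell
# Run manual dead space reclamation (UNMAP) against a VMFS datastore.
# "Datastore1" is a placeholder datastore label.
esxcli storage vmfs unmap --volume-label=Datastore1

# Optionally control how many VMFS blocks are reclaimed per iteration
# (the default is 200); larger values finish sooner but generate more I/O.
esxcli storage vmfs unmap --volume-label=Datastore1 --reclaim-unit=100
```

The command can also target a datastore by UUID with --volume-uuid instead of --volume-label.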
With vSphere 6.5, the default space-reclamation behavior differs from earlier versions. On VMFS-6
formatted datastores, space reclamation is enabled by default on all datastores, and the reclamation
process is invoked automatically by the ESXi hosts. The automatic space-reclamation process operates
asynchronously at low priority and is not immediate. In addition, certain pages are not freed back into the
SC Series page pool until after the daily Data Progression cycle has completed.
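The automatic reclamation settings for a VMFS-6 datastore can be inspected, or disabled, from the ESXi shell with the esxcli storage vmfs reclaim config commands; a sketch, again using a placeholder datastore label:

```shell
# Show the current automatic UNMAP settings for a VMFS-6 datastore
# ("Datastore1" is a placeholder label).
esxcli storage vmfs reclaim config get --volume-label=Datastore1

# Disable automatic reclamation on that datastore; set the priority
# back to "low" to restore the default behavior.
esxcli storage vmfs reclaim config set --volume-label=Datastore1 --reclaim-priority=none
```

These commands change only the per-datastore reclamation settings; they do not trigger a reclamation pass themselves.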
With vSphere 6.7, the maximum rate at which automatic space reclamation occurs can be increased using
the Space Reclamation Settings window (see Figure 38). The default rate of 100 MB/s is intended to
minimize impact on VM I/O. Higher-performing arrays, such as Dell EMC SC All-Flash arrays, can operate
at a higher automatic space-reclamation rate with minimal impact to VM I/O. However, a higher rate may
affect VM I/O or other I/O served, depending on the configuration and the overall load on the array. When
adjusting this value, the recommended starting point is 500 MB/s; the setting can then be increased or
decreased as the load permits.
For SC Series hybrid flash arrays or arrays with all spinning disks, it is recommended not to alter the default
rate at which automatic space reclamation occurs.
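In addition to the vSphere client window, the 6.7 esxcli namespace allows the reclamation method and rate to be set per datastore; a sketch using the 500 MB/s starting point discussed above (the datastore label is a placeholder):

```shell
# Switch a datastore to a fixed reclamation rate of 500 MB/s
# ("Datastore1" is a placeholder label).
esxcli storage vmfs reclaim config set --volume-label=Datastore1 \
    --reclaim-method=fixed --bandwidth=500

# Revert to the default priority-based method (low priority, 100 MB/s cap).
esxcli storage vmfs reclaim config set --volume-label=Datastore1 \
    --reclaim-method=priority
```

Using the fixed method pins reclamation at the configured bandwidth rather than letting the host throttle it, so it is best suited to the all-flash configurations described above.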
Figure 38: vSphere 6.7 client (HTML5) space reclamation settings