Dell EMC SC Series: Best Practices with VMware vSphere

Abstract

This document provides best practices for integrating VMware® vSphere® 5.x through 7.x hosts with Dell EMC™ SC Series storage.
Revisions

July 2016: Initial release; combined the vSphere 5.x and 6.x best practice documents and added SCOS 7.1 updates
September 2016: Minor revisions and corrections
October 2016: Changed Disk.AutoremoveOnPDL to reflect current VMware guidance
January 2017: Updated for vSphere 6.5 changes; added appendix D summarizing all host settings
February 2017: Updated Linux guest disk timeout recommendations in section 4.7
1 Introduction

This document provides configuration examples, tips, recommended settings, and other storage guidelines for integrating VMware® vSphere® hosts with Dell EMC™ SC Series storage. It also answers many frequently asked questions about how VMware interacts with SC Series features such as Dynamic Capacity (thin provisioning), Data Progression (automated tiering), and Remote Instant Replay (replication).
2 Fibre Channel switch zoning

Zoning Fibre Channel switches for an ESXi host is like zoning any other server connected to the SC Series array. The fundamental points are explained in this section.

2.1 Single initiator multiple target zoning

Each Fibre Channel zone created should have a single initiator (HBA port) and multiple targets (SC Series front-end ports). Each HBA port requires its own Fibre Channel zone that contains itself and the SC Series front-end ports.
2.4 Virtual ports

If the SC Series array is configured to use virtual port mode, include all the front-end virtual ports within each fault domain in the zone with each ESXi initiator. See Figure 1.
3 Host initiator settings

Ensure the initiator settings are configured in the ESXi host according to Appendix A: Required Adapter and Server OS Settings in the latest Dell Storage Compatibility Matrix.
4 Modifying queue depth and timeouts

Queue depth is defined as the number of disk transactions that can be in flight between an initiator and a target. The initiator is an ESXi host HBA port or iSCSI initiator, and the target is the SC Series front-end port. Since any given target can have multiple initiators sending it data, the initiator queue depth is used to throttle the number of transactions. Throttling transactions keeps the target from becoming flooded with I/O.
Caution: Before running the following commands, refer to the latest VMware documentation.

4.2.1 Fibre Channel HBAs

For each of these adapters, the driver queue depth and timeouts are set through the driver module parameters, followed by a host reboot for the changes to take effect.
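As an illustration only (driver module names and parameter names differ between HBA models and ESXi releases, and the values shown are examples rather than Dell-published requirements), the commands might look like this:

# Confirm which FC driver module is in use before changing parameters
esxcli system module list | grep -iE "qlnativefc|lpfc"

# QLogic native FC driver: set the per-LUN queue depth (example value)
esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=255"

# Emulex driver: set the per-LUN queue depth (example value)
esxcli system module parameters set -m lpfc -p "lpfc_lun_queue_depth=254"

Note that the -p option replaces the module's entire parameter string, so any existing parameters should be included in the same command, and a host reboot is required before the change takes effect.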
4.2.2 Software iSCSI initiator

Similarly, for the software iSCSI initiator, complete the following steps:

1. Set the queue depth to 255 (example shown):

esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=255
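After the host reboots, the parameter can be confirmed with a quick check such as the following (a simple verification step, not taken from the original document):

# Confirm the software iSCSI LUN queue depth parameter
esxcli system module parameters list -m iscsi_vmk | grep iscsivmk_LunQDepth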
4.3 Adjusting settings for permanent device loss conditions

When using vSphere High Availability (HA), it is a best practice to modify the following host settings to handle permanent device loss (PDL) conditions.

1. From within the host advanced system settings, apply the following modifications:
- VMkernel.Boot.
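The Disk.AutoremoveOnPDL setting summarized in appendix D.1 can also be applied and verified from the ESXi command line, for example:

# Allow devices in a PDL state to be automatically removed (see appendix D.1)
esxcli system settings advanced set -o "/Disk/AutoremoveOnPDL" -i 1

# Verify the current value
esxcli system settings advanced list -o "/Disk/AutoremoveOnPDL"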
Example queue utilization with the DSNRO set to 32.

Note: The DSNRO limit does not apply to volumes mapped as raw device mappings (RDMs). Each RDM will have its own queue.

The DSNRO setting can be modified on a per-datastore basis using the command line:

esxcli storage core device set -d naa.XXXXXXXXXX -O <value of 1-256>
Example PowerCLI script (dsnro.ps1):

# Connect to vCenter. Change server, user, and password credentials below
Connect-VIServer -Server 'vCenter_Server_IP_or_FQDN' -User 'administrator@vsphere.
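For administrators who prefer to apply the change directly on a host from the ESXi shell instead of PowerCLI, a minimal sketch follows; the DSNRO value of 64 and the COMPELNT vendor match are illustrative assumptions, so adjust them for your environment:

# Sketch: apply a DSNRO value of 64 to every SC Series (COMPELNT) device on this host
for dev in $(esxcli storage core device list | awk '/^naa\./{d=$1} /Vendor: COMPELNT/{print d}'); do
   esxcli storage core device set -d "$dev" -O 64
done

# Spot-check one device afterward
esxcli storage core device list -d naa.XXXXXXXXXX | grep -i outstanding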
Modifying queue depth and timeouts 4.5 Adaptive queue depth At times of high congestion, VMware has an adaptive queue depth algorithm that can be enabled with the QFullSampleSize and QFullThreshold variables. These variables aid in relieving the congestion by dynamically reducing and increasing the logical unit number (LUN) queue depth. Due to the architecture of SC Series storage, enabling these settings is not recommended unless under the guidance of Dell Support.
Modifying queue depth and timeouts b. For LSI Logic SAS (LSI_SAS): Windows Registry Editor Version 5.00 [HKLM\SYSTEM\CurrentControlSet\Services\LSI_SAS\Parameters\Device] "DriverParameter"="MaximumTargetQueueDepth=128;" ; The semicolon is required at the end of the queue depth value "MaximumTargetQueueDepth"=dword:00000080 ; 80 hex is equal to 128 in decimal Registry setting for the LSI Logic SAS vSCSI Adapter c.
Modifying queue depth and timeouts 4.7.1 Windows 1. Back up the registry. 2. Using the Registry Editor, modify the following key. Windows Registry Editor Version 5.00 [HKLM\SYSTEM\CurrentControlSet\Services\Disk] "TimeOutValue"=dword:0000003c ; 3c in hex is equal to 60 seconds in decimal Registry key to set the Windows disk timeout 3. Reboot the virtual machine. Note: This registry value is automatically set when installing VMware Tools.
5 Guest virtual SCSI adapter selection

When creating a new virtual machine, there are four types of virtual SCSI (vSCSI) controllers to choose from. Based on the operating system selected, vSphere will automatically recommend and select the SCSI controller that is best suited for that particular operating system. The best practice is to follow this recommendation. The nuances of each adapter are described in the following subsections.

vSCSI adapter selection
6 Mapping volumes to an ESXi server

Within the SC Series, mapping is the process of presenting a volume to a host. The following subsections describe basic concepts on how vSphere treats different scenarios.

6.1 Basic volume mapping concepts

When sharing volumes between ESXi hosts for vMotion, HA, and DRS, for consistency it is recommended that each volume is mapped to clustered ESXi hosts using the same LUN.
Mapping volumes to an ESXi server In the mapping wizard, the system can auto select the LUN number, or a preferred LUN number can be manually specified from the advanced settings screen shown in Figure 8. Manually specifying a LUN in the advanced settings screen This advanced option allows administrators who already have a LUN numbering scheme to continue using it. However, if a LUN is not manually specified, the system automatically selects a LUN for each volume incrementally starting at LUN 1.
Keep in mind that when a volume uses multiple paths, both ESXi initiators receive their paths from the same controller, through different front-end ports.
Multipathing to an ESXi host is automatic when the server object has more than one HBA or iSCSI initiator port assigned to it. In other words, the advanced options only need to be used when a volume should not be multipathed to the server.

Advanced server mapping options (Advanced mapping options for an ESXi host):

Function: Select LUN
Description: Clear the Use next available LUN option to manually specify the LUN.
Mapping volumes to an ESXi server 6.5 Configuring the VMware iSCSI software initiator for a single path Although it is not recommended, for instances where iSCSI multipathing cannot be configured, the steps required for a single-path iSCSI configuration are as follows. From within the VMware vSphere Client: 1. In the ESXi host Security Profile > ESXi firewall, enable the Software iSCSI Client. 2. Add a VMkernel port to a virtual switch assigned to the physical NIC for iSCSI (see Figure 11).
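A command-line equivalent of this single-path setup is sketched below; the adapter name (vmhba##) and the SC Series iSCSI control port address are placeholders that must be replaced with values from your environment:

# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Add the SC Series iSCSI control port as a dynamic discovery (send targets) address
esxcli iscsi adapter discovery sendtarget add -A vmhba## -a <control_port_IP>

# Rescan the adapter to discover mapped volumes
esxcli storage core adapter rescan -A vmhba##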
Mapping volumes to an ESXi server 6.6 Configuring the VMware iSCSI software initiator for multipathing To configure the VMware iSCSI software initiator for multipathing, see sections “Configuring Software iSCSI Adapter” and “Multiple Network Adapters in iSCSI Configuration” in the vSphere Storage Guide at VMware vSphere documentation.
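As a brief sketch of the port binding step described in that guide (the VMkernel port names and the adapter name below are placeholders, and each bound VMkernel port must have only one active uplink):

# Bind each iSCSI VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba## -n vmk1
esxcli iscsi networkportal add -A vmhba## -n vmk2

# Verify the bound ports
esxcli iscsi networkportal list -A vmhba##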
Mapping volumes to an ESXi server For example, with subnet 192.168.0.x and subnet mask 255.255.255.0, the vSwitch would look like Figure 13. iSCSI ports and NICs on a single vSwitch/single subnet (VMkernel port binding allowed) Note: Using port binding is not recommended when the VMkernel ports are on different networks as shown in Figure 14, because it may cause long rescan times and other problems. See Considerations for using software iSCSI port binding in ESX/ESXi in the VMware Knowledge Base.
Mapping volumes to an ESXi server 6.7 iSCSI port multi-VLAN configuration recommendations For SC Series systems with iSCSI capabilities, SCOS 6.5 and later has multiple-VLAN support for diverse network configurations and multitenant environments. This multi-VLAN support allows iSCSI storage I/O to be separated between vSwitches for customers using software iSCSI initiators within the guest.
Mapping volumes to an ESXi server Configuration of VLANs within the Dell Storage Manager (Enterprise Manager) client Note: For added flexibility, Jumbo frames can be enabled on a per-fault-domain basis. 6.8 Configuring the FCoE software initiator for multipathing When using the ESXi software FCoE, volumes are assigned to the VMW_SATP_LOCAL by default. In some instances, this default policy will cause paths to go unclaimed.
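To confirm which SATP and PSP have claimed a software FCoE device, and whether any of its paths remain unclaimed, the device can be inspected from the CLI (the device identifier is a placeholder):

# Show the SATP, PSP, and working paths for a specific device
esxcli storage nmp device list -d naa.xxxxxxxxxxxx

# List the SATP claim rules currently in effect
esxcli storage nmp satp rule list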
Mapping volumes to an ESXi server 6.9.1 Round robin policy for standard volumes The round robin path selection policy uses automatic path selection and load balancing to rotate I/O through all paths. Round robin load balancing does not aggregate the storage link bandwidth. It merely distributes the load for the datastore volume in bursts evenly and sequentially across paths in an alternating fashion.
Mapping volumes to an ESXi server 6.9.1.1 Setting round robin to be the default path selection policy The round robin path selection policy (PSP) should be set to the default using the following command. After creating the claim rule and rebooting, all SC Series volumes and protocol endpoints will acquire this policy. esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V COMPELNT -P VMW_PSP_RR -o disable_action_OnRetryErrors -e "Dell EMC SC Series Claim Rule" -O "policy=iops;iops=3" Caution: With ESXi 6.
~ # esxcli storage nmp device set --device=naa.xxx --psp=VMW_PSP_RR

3. Set the IOPS of the device to 3.

~ # esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxx --type=iops --iops=3

Note: Monitor the vSphere host processor because changing these variables can affect its performance. To automatically set the IOPS path change condition for all volumes mapped to an ESXi host, the claim rule can be modified by adding -O "policy=iops;iops=3".
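After the claim rule has been added or modified, it can be verified, and the per-device round robin configuration confirmed, with commands such as the following (device identifier is a placeholder):

# Confirm the COMPELNT claim rule is present
esxcli storage nmp satp rule list | grep -i COMPELNT

# Confirm the IOPS limit applied to a specific device
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxx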
Mapping volumes to an ESXi server When using the fixed policy, if a path fails, all the datastores using it as their preferred path will fail over to the secondary path. When service resumes, the datastores will resume I/O on their preferred path. Here is an example of using the fixed policy: 1. HBA1 loses connectivity; HBA2 takes over its connections. 2. HBA1 resumes connectivity; HBA2 will fail its connections back to HBA1. Example of fixed datastore path selection policy set with a preferred path 6.9.
Mapping volumes to an ESXi server Volume: "LUN20-vm-storage" → Mapped to ESX1/HBA2 -as- LUN 20 (Standby) This example would cause all I/O for both volumes to be transferred over HBA1.
Mapping volumes to an ESXi server proxy standard volume traffic between controllers, the paths to the second controller will be set to the standby state, and only change during a controller failover. Note: Because of the ALUA protocol addition, the vSphere storage array type plug-in (SATP) module settings, such as configuring round robin as the default PSP, will need to be changed on each host when upgrading to SCOS 6.6 and later.
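When validating ALUA behavior after such an upgrade, the path states (active, active unoptimized, or standby) for a volume can be listed per host, for example (device identifier is a placeholder):

# List all paths to a device along with their group state
esxcli storage nmp path list -d naa.xxx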
3. Once the datastore has been successfully unmounted, select Detach on the disk device.

Detaching a storage device

4. Repeat step 1 through step 3 for each host the volume is presented to.
5. Within the DSM Client, unmap the volume.
6. Within the vSphere client, rescan the adapters to ensure that the disk has been removed.

Note: Graceful removal of volumes from an ESXi host is done automatically when using the Dell SC Series vSphere client plug-in.
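If the graphical workflow or plug-in is unavailable, the unmount, detach, and rescan steps can also be performed from the ESXi shell; the datastore label and device identifier below are placeholders:

# Unmount the datastore from this host
esxcli storage filesystem unmount -l Datastore_Name

# Detach (set offline) the underlying device
esxcli storage core device set -d naa.xxx --state=off

# After the volume is unmapped on the array, rescan to clean up
esxcli storage core adapter rescan --all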
7 Boot from SAN

Booting ESXi hosts from SAN yields both advantages and disadvantages. Sometimes, such as with blade servers that do not have internal disk drives, booting from SAN may be the only option. However, many ESXi hosts can have internal mirrored drives, providing the flexibility of choice. The benefits of booting from SAN are obvious: it alleviates the need for internal drives and allows snapshots (replays) to be taken of the boot volume.
Boot from SAN Once advanced mapping is enabled, there are a few options that need to be changed for the initial installation, as shown in Figure 22. Advanced mapping screen for configuring boot from SAN Map volume using LUN 0: Checked Maximum number of paths allowed: Single-path Once the ESXi host is running correctly, the second path can then be added to the boot volume by modifying the mapping.
8 Volume creation and sizing

Administrators are tasked with complex decisions such as determining the best volume size, number of virtual machines per datastore, and file system versions for their environment.

8.1 Volume sizing and the 64 TB limit

Although the maximum size of a volume that can be presented to ESXi is 64 TB, the general recommendation is to start with smaller and more manageable initial datastore sizes and expand them as needed.
The most common indicator that a datastore has too many virtual machines is the queue depth of the datastore regularly exceeding set limits and increasing disk latency. Remember that if the driver module is set to a queue depth of 256, the maximum queue depth of each datastore is also 256. This means that if there are 16 virtual machines on a datastore, all heavily driving a queue depth of 32 (16 x 32 = 512), they are essentially overdriving the disk queues by double.
Volume creation and sizing Figure 23 shows an example of a fully aligned partition in the SC Series where one guest I/O will only access necessary disk sectors. Fully aligned partition in SC Series storage Figure 24 shows an example of an unaligned partition in a traditional SAN where alignment can improve performance.
Volume creation and sizing 8.4 VMFS file systems and block sizes Within ESXi, it is recommended to use VMFS-6, which is the recommended file system format selectable from the vSphere client when creating a datastore. 8.4.1 VMFS-3 If there are any remaining VMFS-3 datastores in the environment, it is recommended they be retired, and virtual machines be migrated to the latest-version VMFS-5/6 formatted datastore.
9 Volume mapping layout

In addition to volume sizing, another important factor to consider is the placement of files and virtual machine data.

9.1 Multiple virtual machines per volume

One of the most common techniques in virtualization is to place more than one virtual machine on each volume. Encapsulation of virtual machines within datastores results in higher consolidation ratios.
Volume mapping layout remaining hosts. The sudden reduction in overall memory could cause a sudden increase in paging activity that could overload the datastore, causing a storage performance decrease. Second, that datastore could become a single point of failure. Operating systems are not very tolerant of unexpected disk drive removal.
Volume mapping layout Virtual machine placement (with RDMs) Timesaver: To help organize the LUN layout for ESXi clusters, some administrators prefer to store their layout in a spreadsheet. Not only does this help to design their LUN layout in advance, but it also improves organization as the clusters grow larger. Note: Many factors may influence architecting storage regarding the placement of virtual machines.
Volume mapping layout 9.2 One virtual machine per volume Although creating one volume for each virtual machine is not a common technique, there are both advantages and disadvantages that will be discussed below. Keep in mind that deciding to use this technique should be based on business-related factors and may not be appropriate for all circumstances. Using a 1:1 virtual machine-to-datastore ratio should be the exception, not the rule.
10 Raw device mapping (RDM)

A raw device mapping (RDM) is used to map a volume directly to a virtual machine. When an RDM set to physical compatibility mode is mapped to a virtual machine, the operating system writes directly to the volume, bypassing the VMFS file system. There are several distinct advantages and disadvantages to using RDMs, but in most cases, using VMFS datastores is recommended over RDMs.
11 Data Progression and storage profile selection

Data Progression migrates inactive data to the lower tier of inexpensive storage while keeping the most active data on the highest tier of fast storage, as shown in Figure 26. This works to the advantage of VMware datastores because multiple virtual machines are kept on a single volume.
Data Progression and storage profile selection The following is an advanced example of virtual machine RAID groupings in which forcing volumes into different profiles is wanted.
Data Progression and storage profile selection Within a VMware environment, this new functionality may change the storage profile strategies previously implemented in the environment. Administrators should review all the new storage profile selections, such as the ones shown in Figure 27, to see if improvements can be made to their existing tiering strategies. New storage profile available with SCOS 6.
Data Progression and storage profile selection 11.2.1 Data Reduction Input Within the advanced volume settings, the Data Reduction Input setting can be specified to control which pages are eligible for data reduction. Advanced volume settings showing the Data Reduction Input selection All Snapshot Pages: All frozen pages in the lowest tier of the system that are part of a snapshot, are eligible for data reduction.
12 Thin provisioning and virtual disks

Dell SC Series thin provisioning allows less storage to be consumed for virtual machines, saving upfront storage costs. This section describes the relationship that this feature has with virtual machine storage.

12.1 Virtual disk formats

In ESXi, VMFS can store virtual disks using one of the four different formats described in the following sections.

Virtual disk format selection
Thin provisioning and virtual disks 12.1.3 Thin provisioned (thin) The logical space required for the virtual disk is not allocated during creation, but it is allocated on demand during the first write issued to the block. Like thick disks, this format will also zero out the block before writing data, inducing extra I/O, and an additional amount of write latency. 12.1.
Thin provisioning and virtual disks those files. Although Windows reports only 5 GB in-use, thin provisioning has assigned those blocks to that volume, so the array will still report 15 GB of data used. When Windows deletes a file, it merely removes the entry in the file allocation table, and there are no onboard mechanisms for the SC Series to determine if an allocated block is still in use by the operating system.
13 Extending VMware volumes

Within an ESXi host, there are three ways to extend or grow storage. This section provides the general steps required. Additional information can be found in the vSphere Storage Guide, section "Increasing VMFS Datastore Capacity", and in the vSphere Virtual Machine Administration Guide, section "Virtual Disk Configuration", located within the VMware vSphere documentation.
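One of these methods, growing an existing VMFS datastore after the underlying SC Series volume has been expanded, can also be sketched from the ESXi shell; the device and partition identifiers are placeholders, the underlying partition may first need to be resized (for example, with partedUtil), and the vSphere client workflow is generally simpler:

# Rescan so the host sees the new volume size
esxcli storage core adapter rescan --all

# Grow the VMFS file system into the newly added capacity (same partition given twice)
vmkfstools --growfs /vmfs/devices/disks/naa.xxx:1 /vmfs/devices/disks/naa.xxx:1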
Extending VMware volumes 13.1.2 Adding a new extent to an existing datastore This legacy functionality was used with previous versions of vSphere to concatenate multiple volumes to create VMFS-3 datastores larger than 2 TB. Since the maximum datastore size with VMFS-5 has been increased to 64 TB, the use of extents is no longer necessary.
14 Snapshots (replays) and virtual machine backups

Backup and recovery are important to any virtualized infrastructure. This section discusses several common techniques to improve the robustness of virtualized environments, such as using snapshots (replays) and virtual machine backups.

14.1 Backing up virtual machines

The key to any good backup strategy is to test the backup and verify the results.
Snapshots (replays) and virtual machine backups • • Dell Storage PowerShell SDK: This scripting tool also allows scripting for many of the same storage tasks using the Microsoft PowerShell scripting language. Dell Compellent Command Utility (CompCU): A Java-based scripting tool that allows scripting for many of the SC Series tasks (such as taking snapshots).
Snapshots (replays) and virtual machine backups Once the datastore has been resignatured, the snap datastore is accessible. The datastores tab showing the snapshot datastore The recovery datastore is now designated with snap-xxxxxxxx-originalname. From this tab, the datastore can be browsed to perform the recovery using one of the methods listed below. Note: All prior tasks in this section can be automated by using the recovery functionality in the SC Series vSphere Client Plug-ins.
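Where the resignature needs to be performed from the command line rather than the vSphere client, the snapshot volume can be listed and resignatured as shown below (the volume label is a placeholder for the original datastore name):

# List unresolved VMFS snapshot volumes visible to this host
esxcli storage vmfs snapshot list

# Resignature the copy so it can be mounted alongside the original datastore
esxcli storage vmfs snapshot resignature -l Original_Datastore_Name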
Snapshots (replays) and virtual machine backups 14.2.3 Recovering an entire virtual machine To recover an entire virtual machine from the snap datastore: 1. Browse to the virtual machine configuration file (*.vmx). 2. Right-click the file and select Register VM. 3. Use the wizard to add the virtual machine into inventory. Caution: To prevent name or IP address conflicts when powering on the newly recovered virtual machine, power off or use an isolated network or private vSwitch.
15 Replication and remote recovery

SC Series replication in coordination with the vSphere line of products can provide a robust disaster recovery solution. Because each replication method affects recovery differently, choosing the correct method to meet business requirements is important. This section provides a brief summary of the different options.
Replication and remote recovery Asynchronous replications usually have more flexible bandwidth requirements, making it the most common replication method. Another benefit of asynchronous replication is that snapshots are transferred to the destination volume, allowing for checkpoints at the source system and destination system. 15.
Replication and remote recovery data centers. Because of the nature of the proxied I/O, any disruption to the link or primary Live Volume causes the secondary Live Volume datastore to become unavailable as well. If for any reason the primary Live Volume goes down permanently, the administrators need to perform a recovery on the secondary Live Volume from the last known good snapshot. The DSM replication disaster recovery wizard is designed to help with this type of recovery. 15.4.
Replication and remote recovery definitions can be created. The critical volume could get 80 Mb of the bandwidth, and the lower priority volume could get 20 Mb of the bandwidth. 15.6 Virtual machine recovery at a DR site When recovering virtual machines at the disaster recovery site, the same general steps as outlined in the section 14.2 should be followed.
16 VMware storage features

The vSphere platform has several features that correspond with SC Series features. This section details considerations that should be made about features such as Storage I/O Controls (SIOC), storage distributed resource scheduler (SDRS), and VAAI.

16.1 Storage I/O Controls (SIOC)

SIOC is a feature that was introduced in ESX/ESXi 4.1 to help VMware administrators regulate storage performance and provide fairness across hosts sharing a volume.
As shown in Figure 36, the default setting for the congestion threshold is 30 milliseconds of latency or 90 percent of peak throughput.

Setting the congestion threshold

As a best practice, leave this setting at the default value unless under the guidance of VMware or Dell Support.
VMware storage features 16.2 Storage distributed resource scheduler (SDRS) SDRS is a feature introduced in ESXi 5.0 that automatically load balances virtual machines within a datastore cluster based on capacity or performance. When creating SDRS datastore clusters with SC Series storage, remember a few guidelines: • • • Group datastores with similar disk characteristics, such as replicated, non-replicated, storage profile, application, or performance. Use SDRS for initial placement based on capacity.
VMware storage features 16.3 vStorage APIs for array integration (VAAI) ESXi 5.x introduced five primitives that the ESXi host can use to offload specific storage functionality to the array. These primitives are all enabled by default in vSphere 6.x because they are based on new T10 standardized SCSI-3 commands. Caution: Using VAAI requires multiple software version prerequisites from both VMware and Dell.
VMware storage features works at the VMFS level. If a large file is deleted from within a VMDK, the space would not be returned to the pagepool unless the VMDK itself was deleted. Note: In most patch levels of ESXi, the dead space reclamation primitive must be invoked manually. See the article, Using esxcli in vSphere 5.5 and 6.0 to reclaim VMFS deleted blocks on thin-provisioned LUNs, in the VMware Knowledge Base. With vSphere 6.
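For reference, a manual reclaim of the type described in that article is invoked with the esxcli unmap namespace; the datastore label and reclaim unit below are examples:

# Reclaim dead space on a thin-provisioned VMFS-5 volume, 200 blocks per pass
esxcli storage vmfs unmap -l Datastore_Name -n 200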
VMware storage features 16.3.5 Automatic dead space reclamation For automatic space reclamation to work, there are several requirements: • • • • The host must be running ESXi 6.5 and vCenter 6.5 or later. The datastore must be formatted with VMFS-6. The VMFS-6 datastore volumes must be stored within a 512k pagepool. VMware only supports automatic unmap on array block sizes less than 1 MB. The SC Series array must be running a SCOS version that supports VAAI UNMAP.
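On a qualifying VMFS-6 datastore, the automatic reclamation configuration can be viewed or adjusted per datastore; the label below is a placeholder:

# Show the current space reclamation configuration
esxcli storage vmfs reclaim config get -l Datastore_Name

# Adjust the reclamation priority (none disables automatic unmap)
esxcli storage vmfs reclaim config set -l Datastore_Name --reclaim-priority=low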
Adding a VASA 2.0 storage provider to vCenter

The SC Series VASA 2.0 provider is an integrated component of the DSM Data Collector, and supersedes the VASA 1.0 provider built into the CITV or DSITV appliance.

16.5 Virtual Volumes (vVols)

vVols with vSphere 6.x offer a new approach to managing storage that delivers more granular storage capabilities down to the VMDK level.
A Determining the appropriate queue depth for an ESXi host

Adjusting the queue depth on ESXi hosts is a complicated subject. On one hand, increasing it can remove bottlenecks and help to improve performance (if there are enough back-end disks to handle the incoming requests). However, if set improperly, the ESXi hosts could overdrive the controller front-end ports or the back-end disks and potentially make the performance worse.
A.2 iSCSI

With 1 Gb SC Series front-end ports, leave the queue depth set to default and only increase it if necessary. With 10 Gb SC Series front-end ports, higher values are typically used; refer to the protocol-specific host settings summarized in appendix D.
Determining the appropriate queue depth for an ESXi host If the LOAD is consistently greater than 1.00, and the latencies are still acceptable, the back-end disks have available IOPS so increasing the queue depth may make sense. However, if the LOAD is consistently less than 1.00, and the performance and latencies are acceptable, then there is usually no need to adjust the queue depth. In Figure 40, the device queue depth is set to 32. Three of the four volumes consistently have a LOAD above 1.00.
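The LOAD, ACTV, and DQLEN statistics referenced in this appendix come from esxtop; assuming shell access to the host, they can be viewed as follows:

# Launch esxtop and switch to the disk device view
esxtop
# Press 'u' for the disk device screen, then 'f' to toggle the queue statistics fields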
B Deploying vSphere client plug-ins

With SC Series storage, multiple vSphere client plug-ins are available to aid in administration and management of the arrays. Depending on the version, these plug-ins allow administrators to provision datastores, take snapshots, create replications, and even report usage and performance statistics directly from within the vSphere client.
C Configuring Dell Storage Manager VMware integrations

With DSM, the Data Collector can be configured to gather storage statistics and perform basic storage administration functions with vCenter. This functionality is also used to configure and enable vVols. To add vCenter credentials into the Data Collector, enter the Servers viewer screen, right-click Servers, and select Register Server.
D Host and cluster settings

This section summarizes the host settings for each of the storage protocols. At the time of this writing, these settings were valid for vSphere 7.x running on SCOS 6.6 and later. For each setting, see the referenced section number to determine if the setting is applicable for your environment.
Host and cluster settings • All protocols: HA Cluster Settings (section 4.3) esxcli system settings advanced set -o "/Disk/AutoremoveOnPDL" -i 1 • All protocols: Advanced options HA Cluster setting (section 4.3) das.maskCleanShutdownEnabled = True D.2 Optional settings These settings are optional but are listed because they solve certain corner-case performance issues. Settings in this section should be tested to ascertain if the settings improve function.
E Additional resources

E.1 Technical support and resources

Dell.com/support is focused on meeting customer needs with proven services and support. Storage technical documents and videos provide expertise that helps to ensure customer success on Dell EMC storage platforms.

E.2 VMware support

For VMware support, see the following resources:
• VMware vSphere Documentation
• VMware Knowledge Base
• VMware.