Administration best practices
Dell EMC PowerVault ME4 Series and Microsoft Hyper-V | 3921-BP-WS
Note the following:
• The active/optimized paths are associated with the ME4 Series storage controller head that owns the
volume. The active/unoptimized paths are associated with the other controller head.
• If each controller has four front-end (FE) transport paths configured (as shown in Figure 12), each
mapped volume should list eight total paths: four optimized and four unoptimized.
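The path arithmetic above can be sketched as a quick sanity check. This is an illustrative calculation only, not output from any Dell EMC tool; the port counts are the example values from the text:

```python
# Illustrative sketch: derive the expected MPIO path count for a volume
# mapped through a dual-controller ME4 Series array. Port counts are
# example values, not queried from real hardware.

def expected_paths(fe_ports_per_controller: int, controllers: int = 2):
    """Return (total, optimized, unoptimized) path counts.

    Active/optimized paths come from the controller head that owns the
    volume; active/unoptimized paths come from the other controller head.
    """
    optimized = fe_ports_per_controller                       # owning head
    unoptimized = fe_ports_per_controller * (controllers - 1)  # other head(s)
    return optimized + unoptimized, optimized, unoptimized

total, opt, unopt = expected_paths(4)
print(total, opt, unopt)  # 8 4 4 -- matches the example in the text
```

If a mapped volume reports fewer paths than this calculation predicts, a cabling, zoning, or mapping problem on one of the FE ports is the usual cause.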
Best practices recommendations include the following:
• Do not change MPIO registry settings (such as time-out values) on the Windows or Hyper-V host
unless directed to by ME4 Series documentation, or by Dell EMC support to solve a specific
problem.
• Configure all available FE ports on an ME4 Series array (when it is connected to a SAN) to use your
preferred transport to optimize throughput and maximize performance.
• If using a direct-connect option for iSCSI, SAS, or FC, configure each host to use at least two
matching ports (one from each controller head) to provide MPIO and failover protection against a
single-path or controller-head failure.
• Verify that current versions of software are installed (such as OS, boot code, firmware, and drivers)
for all components in the data path:
- ME4 Series arrays
- Data switches
- HBAs, NICs, converged network adapters (CNAs)
• Verify that all hardware is supported per the latest version of the Dell EMC Hardware Compatibility
Matrix.
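The version-verification step above can be automated in spirit. The following is a hedged sketch assuming an inventory already collected by other means; the component names, version strings, and minimums are invented for illustration, and the authoritative source remains the Dell EMC compatibility documentation:

```python
# Hedged sketch: flag data-path components whose installed version falls
# below an assumed minimum-supported table. All names and versions here
# are hypothetical; consult the Dell EMC Hardware Compatibility Matrix
# for real values.

MIN_SUPPORTED = {            # hypothetical minimum versions
    "me4_firmware": (2, 0, 0),
    "hba_driver":   (9, 2, 7),
    "switch_os":    (8, 4, 2),
}

def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def out_of_date(installed: dict) -> list:
    """Return the components whose installed version is below minimum."""
    return [name for name, minimum in MIN_SUPPORTED.items()
            if parse_version(installed.get(name, "0")) < minimum]

installed = {"me4_firmware": "2.1.0", "hba_driver": "9.1.0",
             "switch_os": "8.4.2"}
print(out_of_date(installed))  # ['hba_driver']
```

Tuple comparison handles mixed-length versions correctly (for example, `(9, 1)` sorts below `(9, 2, 7)`), which is why the versions are parsed rather than compared as strings.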
3.4.4 Guest VMs and in-guest iSCSI
ME4 Series storage supports in-guest iSCSI to present block storage volumes directly to guest VMs. The
setup and configuration are essentially the same as for a physical host server, except that the VM is using
virtual hardware. Follow the guidance in the ME4 Series Administrator’s Guide to optimize iSCSI settings,
such as Jumbo frames.
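To see why jumbo frames matter for in-guest iSCSI, a rough payload-efficiency comparison helps. The header sizes below are typical assumptions (Ethernet framing 18 B, IPv4 20 B, TCP 20 B, iSCSI basic header segment 48 B), not measurements from ME4 Series hardware:

```python
# Rough payload-efficiency comparison for standard vs. jumbo frames on an
# iSCSI network. Header sizes are common textbook values, used here only
# to illustrate why larger MTUs reduce per-packet overhead.

ETH, IP, TCP, ISCSI_BHS = 18, 20, 20, 48  # bytes of overhead per full frame

def payload_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes carrying SCSI data in one full frame."""
    data = mtu - IP - TCP - ISCSI_BHS   # SCSI data bytes inside the frame
    wire = mtu + ETH                    # total bytes on the wire
    return data / wire

print(f"1500 MTU: {payload_efficiency(1500):.1%}")  # 1500 MTU: 93.0%
print(f"9000 MTU: {payload_efficiency(9000):.1%}")  # 9000 MTU: 98.8%
```

The gain looks modest per frame, but jumbo frames also cut the number of packets (and therefore interrupts and CPU work) for a given transfer by roughly a factor of six, which is where most of the practical benefit comes from. Jumbo frames must be enabled end to end, on the guest, host, switches, and array ports, or fragmentation will erase the benefit.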
3.4.5 Direct-attached in-guest iSCSI storage use cases
Although ME4 Series arrays support in-guest iSCSI volumes mapped to guest VMs, direct-attached storage
for guest VMs is not recommended as a best practice unless there is a specific use case that requires it.
Typical use cases include:
• Situations where a workload has very high I/O requirements, and the performance gain over using a
virtual hard disk is important. Direct-attached disks bypass the host server file system. This reduces
host CPU overhead for managing guest VM I/O. For many workloads, there will be no notable
difference in performance between direct-attached and virtual hard disks.
• VM clustering on legacy platforms that predate support for shared virtual hard disks, which became
available with Windows Server 2012 R2 Hyper-V and was enhanced in Hyper-V 2016.
• When troubleshooting I/O performance on a volume that must be isolated from all other servers
and workloads.
• When there is a need to create custom snapshot or replication policies or profiles on ME4 Series
storage for a specific data volume.
• When a single data volume presented to a guest VM will exceed the maximum size for a VHD (2 TB)
or VHDX (64 TB).
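The size-limit use case above reduces to a simple threshold decision. The sketch below encodes only the 2 TB VHD and 64 TB VHDX limits quoted in the text; a real sizing decision would also weigh the other use cases in this list (performance, clustering, snapshot policies):

```python
# Sketch: pick a disk presentation for a guest VM data volume based on the
# format size limits given in the text (VHD 2 TB, VHDX 64 TB). Thresholds
# only; this ignores the performance and policy considerations above.

TB = 1024 ** 4
VHD_MAX, VHDX_MAX = 2 * TB, 64 * TB

def disk_choice(volume_bytes: int) -> str:
    """Return the smallest-footprint option able to hold the volume."""
    if volume_bytes <= VHD_MAX:
        return "VHD or VHDX"
    if volume_bytes <= VHDX_MAX:
        return "VHDX"
    return "direct-attached (in-guest iSCSI) volume"

print(disk_choice(10 * TB))   # VHDX
print(disk_choice(100 * TB))  # direct-attached (in-guest iSCSI) volume
```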