Administration best practices
Dell EMC PowerVault ME4 Series and Microsoft Hyper-V | 3921-BP-WS
There are also disadvantages to using direct-attached storage for guest VMs:
- The ability to perform native Hyper-V snapshots is lost. However, the ability to leverage ME4 Series snapshots of the underlying volume is unaffected.
- Complexity increases, requiring more management overhead.
- VM mobility is reduced because a dependency on the physical hardware layer is created.
Note: Legacy environments that are using direct-attached disks for guest VM clustering should consider
switching to shared virtual hard disks, particularly when migrating to Windows Server 2016 Hyper-V.
3.4.6 Guest VMs and pass-through disks
A block-based pass-through disk is a special type of Hyper-V disk that is mapped to a Hyper-V host or cluster,
and then is passed through directly to a Hyper-V guest VM. The Hyper-V host or cluster has visibility to a
pass-through disk but does not have I/O access. Hyper-V keeps it in a reserved state because only the guest
VM has I/O access.
Using pass-through disks is a legacy design that is discouraged unless a specific use case requires it. Pass-through disks are no longer necessary in most cases because of feature enhancements in newer releases of Hyper-V (generation 2 guest VMs, the VHDX format, and shared VHDs in Windows Server 2016 Hyper-V). Use cases for pass-through disks are similar to the list provided for direct-attached iSCSI storage in section 3.4.5.
Reasons to avoid using pass-through disks include the following:
- The ability to perform native Hyper-V snapshots is lost, as with direct-attached storage.
- Using a pass-through disk as a boot volume on a guest VM prevents the use of a differencing disk.
- VM mobility is reduced by creating a dependency on the physical layer.
- Many LUNs may be presented to hosts or cluster nodes, which can become unmanageable and impractical at larger scale.
3.4.7 ME4 Series and Hyper-V server clusters
When mapping shared volumes (quorum disks, cluster disks, or cluster shared volumes) to multiple hosts,
make sure that the volume is mapped to all nodes in the cluster using a consistent LUN number. Leverage
host groups on the ME4 Series array to simplify the task of mapping a consistent LUN number to multiple
hosts.
As a best practice and a time-saving tip, configure the nodes in a cluster identically with regard to the number of disks and LUNs. That way, when new storage LUNs are mapped, the next available LUN ID is the same on all hosts, avoiding the need to change LUN IDs later to make them consistent.
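As an illustrative sketch only (not a Dell EMC tool), the consistency check described above can be automated: given an inventory of volume-to-LUN mappings collected from each cluster node (for example, via WMI queries or the array CLI), a short script can flag any volume whose LUN ID differs across nodes. All names and data below are hypothetical.

```python
# Hypothetical inventory: node name -> {volume name: LUN ID}, as it might be
# collected from each cluster node. The data here is illustrative only.
mappings = {
    "node1": {"quorum": 0, "csv01": 1, "csv02": 2},
    "node2": {"quorum": 0, "csv01": 1, "csv02": 3},  # csv02 is inconsistent
}

def find_inconsistent_luns(mappings):
    """Return the volumes whose LUN ID is not identical on every node."""
    by_volume = {}
    for node, volumes in mappings.items():
        for volume, lun in volumes.items():
            # Collect the set of LUN IDs seen for each volume across nodes.
            by_volume.setdefault(volume, set()).add(lun)
    # A volume mapped consistently has exactly one LUN ID in its set.
    return sorted(v for v, luns in by_volume.items() if len(luns) > 1)

print(find_inconsistent_luns(mappings))  # ['csv02']
```

Running such a check after mapping new volumes, or before forming a cluster, catches mismatches while they are still cheap to correct.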
3.4.8 Volume design considerations for ME4 Series storage
One design consideration that is often unclear is how many guest VMs to place on an ME4 Series Hyper-V volume, or cluster shared volume (CSV) when Hyper-V hosts are clustered. While many-to-one and one-to-one strategies both have advantages, a many-to-one strategy is a good starting point in most scenarios and can be adjusted for specific use cases.