3.7 Placement of Active Directory domain controllers
It is a best practice to avoid configuring a Microsoft Active Directory® (AD) domain controller as a Hyper-V guest VM on a Hyper-V cluster when the cluster service requires AD authentication to start.
Consider this scenario: A service outage takes the cluster offline (including the domain controller VM). When
attempting to recover, unless there is another domain controller available outside of the affected cluster, the
cluster service will not start because it cannot authenticate.
Note: This order dependency can be avoided with Windows Server 2016 Hyper-V because the cluster service uses certificates for authentication instead of AD. With Windows Server 2016, Hyper-V clusters can also consist of nodes that are members of workgroups or of different domains.
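As a minimal sketch of that capability (the node and cluster names below are placeholders; this assumes Windows Server 2016 or newer nodes that are not domain joined, each with a matching local administrator account, a primary DNS suffix, and the LocalAccountTokenFilterPolicy registry value set to 1), a workgroup cluster can be created with a DNS administrative access point:

# Run from one of the nodes; HVCLUSTER01, HVNODE1, and HVNODE2 are example names only.
New-Cluster -Name HVCLUSTER01 -Node HVNODE1, HVNODE2 -AdministrativeAccessPoint DNS -NoStorage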
Encountering this scenario may be a service-affecting event depending on how long it takes to recover. It may
be necessary to manually recover the domain controller VM to a standalone Hyper-V host outside of the
cluster, or to another cluster.
This situation can be avoided by doing the following:
- Configure at least one domain controller as a physical server booting from local disk.
- Place virtualized domain controllers on standalone Hyper-V hosts or on individual cluster nodes if there is an AD dependency for cluster services.
- Use Hyper-V Replica (Windows Server 2012 and newer) to ensure that the guest VM can be recovered on another host (see the sketch after this list).
- Place virtualized backup domain controllers on separate clusters, so that a service-affecting event with any one cluster does not result in all domain controllers becoming unavailable. This does not protect against a site outage that takes all the clusters (and therefore all the virtualized AD servers) offline.
- Leverage Windows Server 2016 Hyper-V, which does not have an AD dependency to authenticate cluster services.
3.8 Queue depth best practices for Hyper-V
Queue depth is defined as the total number of disk transactions that can be outstanding at any one time between an initiator (a port on the host server) and a target (a port on the storage array). The initiator is typically a Windows Server HBA FC port or iSCSI initiator, and the target is an FC or iSCSI port on the SAN array (in this case, the ME4 Series array). Since any given target port can have multiple initiator ports sending it data, the initiator queue depth is generally used to throttle the number of transactions any given initiator can have outstanding to a target, to keep the target from becoming flooded. When flooding happens, transactions are queued, which can cause higher latencies and degraded performance for the affected workloads.
3.8.1 When to change queue depth
A commonly asked question is what the best-practice queue depth settings are for Windows Server hosts and Hyper-V cluster nodes. On a Windows Server host, queue depth is a function of the Microsoft storport.sys driver and the vendor-specific miniport driver for the FC HBA, iSCSI NIC, or CNA.
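As an illustration only, the effective queue depth setting can often be inspected in the miniport driver's registry parameters. The service name, key path, and parameter syntax below are assumptions based on a QLogic FC miniport driver; consult the HBA vendor's documentation for the values that apply to a given adapter.

# Vendor-specific path (assumed example for a QLogic FC HBA):
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\ql2300\Parameters\Device'
# Displays the driver parameter string, which may include a queue depth value such as qd=254:
Get-ItemProperty -Path $key -Name DriverParameter -ErrorAction SilentlyContinue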
In many cases, there is no need to change the default queue depth unless there is a specific use case where changing it is known to improve performance. For example, if a storage array is connected to a
small number of Windows Server Hyper-V cluster nodes hosting a large block sequential read application
workload, increasing the queue depth setting may be very beneficial. However, if the storage array has many