Deployment Guide

If you are using a read-only domain controller (RODC) at the remote site, connectivity to the central management
infrastructure with a writable domain controller is mandatory during deployment of the Azure Stack HCI cluster.
NOTE: Dell does not support expanding a two-node cluster to a larger cluster size. A three-node cluster tolerates,
at most, the simultaneous failure of a single node and a single drive. If the deployment requires future expansion
and better fault tolerance, consider starting with at least a four-node cluster.
NOTE: For recommended server and network switch placement in the racks, port mapping on the top-of-rack (ToR) and
out-of-band (OOB) switches, and details about configuring the ToR and OOB switches, see
https://infohub.delltechnologies.com/section-assets/network-integration-and-host-network-configuration-options.
This deployment guide provides instructions and PowerShell commands for manually deploying an Azure Stack HCI cluster. For
information about configuring host networking and creating an Azure Stack HCI cluster by using System Center Virtual Machine Manager
(VMM), see https://infohub.delltechnologies.com/section-assets/vmm-wiki.
Solution integration and network connectivity
Each of the variants in Dell EMC Solutions for Azure Stack HCI supports a specific type of network connectivity. The type of network
connectivity determines the solution integration requirements.
For information about all possible topologies within both fully converged and nonconverged solution integration,
including switchless storage networking, and about host operating system network configuration, see
https://infohub.delltechnologies.com/section-assets/network-integration-and-host-network-configuration-options.
For switchless storage networking, ensure that the server cabling is completed according to the instructions in the
"Full Mesh Switchless Storage Network Cabling Instructions" section of the network configuration options wiki at
https://infohub.delltechnologies.com/section-assets/network-integration-and-host-network-configuration-options.
For sample switch configurations for these network connectivity options, see
https://infohub.delltechnologies.com/t/sample-network-switch-configuration-files-2/.
Fully converged network connectivity
In the fully converged network configuration, both storage and management/virtual machine (VM) traffic use the same set of network
adapters. The adapters are configured with Switch Embedded Teaming (SET). In this network configuration, when using RoCE, you must
configure data center bridging (DCB).
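The following PowerShell sketch shows one way to build this configuration. The adapter and vNIC names (NIC1, NIC2,
Management, Storage1, Storage2) are placeholders rather than values from this guide; use the names, VLANs, and
addressing defined in the network configuration options wiki for your deployment.

# Create a SET team that carries both storage and management/VM traffic.
# Adapter names are placeholders; substitute your physical adapter names.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add host vNICs for management and the two storage (SMB) networks.
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Storage1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Storage2"

# Enable RDMA on the storage vNICs and pin each one to a physical adapter
# so that storage traffic is spread deterministically across both ports.
Enable-NetAdapterRdma -Name "vEthernet (Storage1)","vEthernet (Storage2)"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "Storage1" -PhysicalNetAdapterName "NIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "Storage2" -PhysicalNetAdapterName "NIC2"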
The following table shows when to configure DCB based on the chosen network card and switch topology:
Table 1. DCB configuration based on network card and switch topology
Network card on node    Fully converged switch topology*    Nonconverged switch topology
Mellanox (RoCE)         DCB (required)                      No DCB
QLogic (iWARP)          DCB (not required)                  No DCB
* For simplicity, all fully converged sample switch configurations have DCB configured for both Mellanox and QLogic.
However, for QLogic (iWARP), DCB is not required except in high-performance configurations such as all-NVMe.
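When DCB is required, as with Mellanox (RoCE) in a fully converged topology, the host-side configuration resembles
the following sketch. The priority value (3) and the 50 percent bandwidth reservation are common example values, not
mandates from this guide, and they must match the DCB configuration on the ToR switches.

# Install the DCB feature on the host.
Install-WindowsFeature -Name Data-Center-Bridging

# Classify SMB Direct (port 445) traffic into priority 3.
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for the storage priority.
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for storage traffic with ETS and ignore DCBX from the switch.
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Set-NetQosDcbxSetting -Willing $false

# Apply QoS/DCB settings on the physical storage adapters (placeholder names).
Enable-NetAdapterQos -Name "NIC1","NIC2"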
Nonconverged network connectivity
In the nonconverged network configuration, storage traffic uses a dedicated set of network adapters either in a SET configuration or as
physical adapters. A separate set of network adapters is used for management, VM, and other traffic classes. In this connectivity method,
DCB configuration is optional because storage traffic has its own dedicated fabric.
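As a minimal sketch, assuming placeholder adapter names and addresses, a nonconverged host might keep storage on
dedicated RDMA-enabled physical adapters while teaming a separate pair of adapters for management and VM traffic:

# SET team for management and VM traffic only.
New-VMSwitch -Name "MgmtSwitch" -NetAdapterName "MgmtNIC1","MgmtNIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Storage adapters remain outside the team: enable RDMA and assign
# static IP addresses on the two dedicated storage subnets.
Enable-NetAdapterRdma -Name "StorageNIC1","StorageNIC2"
New-NetIPAddress -InterfaceAlias "StorageNIC1" -IPAddress 172.16.1.10 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "StorageNIC2" -IPAddress 172.16.2.10 -PrefixLength 24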
The switchless storage networking deployment model also implements nonconverged network connectivity, with storage
traffic carried over direct full-mesh connections between the nodes instead of network switches.