Microsoft HCI Solutions from Dell Technologies Deployment Guide Part Number: H17977.
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2019–2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents
Chapter 1: Introduction
  Document overview
  Audience and scope
Chapter 4: References
  Dell Technologies documentation
  Microsoft documentation
Appendix A: Persistent Memory for Windows Server HCI
  Configuring persistent memory for Windows Server HCI
1 Introduction

Topics:
• Document overview
• Audience and scope
• Known issues

Document overview
This deployment guide provides an overview of Microsoft HCI Solutions from Dell Technologies, guidance on how to integrate solution components, and instructions for preparing and deploying the solution infrastructure.
● Deploying and configuring a Windows Server Core operating system Hyper-V infrastructure

Known issues
Before starting the cluster deployment, see Dell EMC Solutions for Microsoft Azure Stack HCI - Known Issues.
2 Solution Overview

Topics:
• Solution introduction
• Deployment models
• Solution integration and network connectivity

Solution introduction
Microsoft HCI Solutions from Dell Technologies include various configurations of AX nodes. These AX nodes power the primary compute cluster that is deployed as an HCI. The HCI uses a flexible solution architecture rather than a fixed component design.
Figure 1. Switchless storage networking Scalable infrastructure The scalable offering within Microsoft HCI Solutions from Dell Technologies encompasses various AX node configurations. In this Windows Server HCI solution, as many as 16 AX nodes power the primary compute cluster. The following figure illustrates one of the flexible solution architectures.
Figure 2. Scalable solution architecture Microsoft HCI Solutions from Dell Technologies do not include management infrastructure components such as a cluster for hosting management VMs and services such as Microsoft Active Directory, Domain Name System (DNS), Windows Server Update Services (WSUS), and Microsoft System Center components such as Operations Manager (SCOM).
If you are using an RODC at the remote site, connectivity to the central management infrastructure with a writeable domain controller is mandatory during deployment of the Azure Stack HCI cluster. NOTE: Dell Technologies does not support expansion of a two-node cluster to a larger cluster size. A three-node cluster provides fault-tolerance only for simultaneous failure of a single node and a single drive.
Nonconverged network connectivity
In the nonconverged network configuration, storage traffic uses a dedicated set of network adapters, either in a SET configuration or as physical adapters. A separate set of network adapters is used for management, VM, and other traffic classes. In this connectivity method, DCB configuration is optional for QLogic (iWARP) adapters but mandatory for Mellanox (RoCE) adapters.
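As an illustrative sketch, a SET team carrying the management and VM traffic classes in this configuration might be created as follows; the switch name and adapter names are placeholders, not values from this guide:

```powershell
# Hedged sketch: create a SET-enabled virtual switch for management and VM
# traffic, leaving the storage adapters dedicated to SMB traffic. The switch
# and adapter names are placeholders for the values reported by Get-NetAdapter.
New-VMSwitch -Name 'ManagementSwitch' `
    -NetAdapterName 'NIC1', 'NIC2' `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $true
```

Switch Embedded Teaming (the `-EnableEmbeddedTeaming` flag) lets the team and the virtual switch be created in one step, without a separate LBFO team.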
3 Solution Deployment

Topics:
• Introduction to solution deployment
• Deployment prerequisites
• Predeployment configuration
• Operating system deployment
• Installing roles and features
• Verifying firmware and software compliance with the support matrix
• Updating out-of-box drivers
• Changing the hostname
• Configuring host networking
• Joining cluster nodes to an Active Directory domain
• Deploying and configuring a host cluster
• Best practices and recommendations
• Recommended next steps

Deployment
Table 2. Management services

Management service                       Purpose                                                                               Required/optional
Active Directory                         User authentication                                                                   Required
Domain Name System                       Name resolution                                                                       Required
Windows Software Update Service (WSUS)   Local source for Windows updates                                                      Optional
SQL Server                               Database back end for System Center VMM and System Center Operations Manager (SCOM)  Optional

Predeployment configuration
Before deploying AX nodes, complete the required predeployment configuration tasks.
Configuring BIOS settings including the IPv4 address for iDRAC
Perform these steps to configure the IPv4 address for iDRAC. You can also perform these steps to configure any additional BIOS settings.

Steps
1. During the system boot, press F12.
2. At the System Setup Main Menu, select iDRAC Settings.
3. Under iDRAC Settings, select Network.
4. Under IPV4 SETTINGS, at Enable IPv4, select Enabled.
5. Enter the static IPv4 address details.
6. Click Back, and then click Finish.
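As an alternative to the System Setup UI, the same iDRAC IPv4 settings can typically be applied with the racadm utility. The attribute names below follow iDRAC9 conventions, and the address values are placeholders, so verify them against the iDRAC RACADM guide for your firmware:

```powershell
# Hedged sketch: set a static iDRAC IPv4 address with racadm (iDRAC9-style
# attribute names; the address, netmask, and gateway values are placeholders).
racadm set iDRAC.IPv4.Enable 1          # enable IPv4 on the iDRAC NIC
racadm set iDRAC.IPv4.DHCPEnable 0      # switch from DHCP to static addressing
racadm set iDRAC.IPv4.Address 192.168.10.21
racadm set iDRAC.IPv4.Netmask 255.255.255.0
racadm set iDRAC.IPv4.Gateway 192.168.10.1
```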
NOTE: The command output that is shown in the subsequent sections might show only Mellanox ConnectX-4 LX adapters as physical adapters. The output is shown only as an example. NOTE: For the PowerShell commands in this section and subsequent sections that require a network adapter name, run the Get-NetAdapter cmdlet to retrieve the correct value for the associated physical network port.
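For example, the physical adapter names referenced by later commands can be listed as follows; the columns shown are standard Get-NetAdapter properties, and the actual names depend on the installed hardware:

```powershell
# List physical network adapters to find the names used in later commands.
Get-NetAdapter -Physical | Sort-Object Name |
    Format-Table Name, InterfaceDescription, LinkSpeed, Status
```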
Installing roles and features Deployment and configuration of a Windows Server 2016, Windows Server 2019, or Azure Stack HCI operating system cluster requires enabling specific operating system roles and features.
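A typical set of roles and features for such a cluster can be enabled as in the following sketch; confirm the exact feature list against the deployment steps for your operating system before running it:

```powershell
# Enable the Hyper-V, failover clustering, and data center bridging features
# commonly required for a Storage Spaces Direct cluster; -Restart reboots the
# node to complete the Hyper-V installation.
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Data-Center-Bridging `
    -IncludeAllSubFeatures -IncludeManagementTools -Restart
```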
($_.Name -like "*Broadcom*") -or ($_.Name -like "*marvell*") } 2. Update the out-of-box drivers to the required versions, if necessary. For the latest Dell Technologies supported versions of system components, see the Support Matrix for Microsoft HCI Solutions. Download the driver installers from https://www.dell.com/support or by using the Dell EMC Azure Stack HCI Solution Catalog. NOTE: The QLogic FastLinQ adapter does not have an in-box driver in Windows Server 2016.
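Once the correct driver packages have been downloaded and extracted, they can usually be staged with the built-in pnputil tool; the extraction path below is a placeholder for wherever the installers were unpacked:

```powershell
# Hedged sketch: install all extracted driver INF packages under a staging
# folder, searching subdirectories (the path is a placeholder).
pnputil.exe /add-driver "C:\Drivers\*.inf" /subdirs /install
```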
NOTE: The host operating system network configuration must be complete before you join cluster nodes to the Active Directory domain. Joining cluster nodes to an Active Directory domain Before you can create a cluster, the cluster nodes must be a part of an Active Directory domain. NOTE: Connecting to Active Directory Domain Services by using the host management network might require routing to the Active Directory network. Ensure that this routing is in place before joining cluster nodes to the domain.
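A node can be joined to the domain with Add-Computer; the domain name in this sketch is a placeholder:

```powershell
# Join this node to the Active Directory domain and restart to complete the
# operation (the domain name and the prompted account are placeholders).
$cred = Get-Credential -Message 'Domain join account'
Add-Computer -DomainName 'corp.example.com' -Credential $cred -Restart
```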
Enabling Storage Spaces Direct After you create the cluster, run the Enable-ClusterS2D cmdlet to configure Storage Spaces Direct on the cluster. Do not run the cmdlet in a remote session; instead, use the local console session. Run the Enable-ClusterS2D cmdlet as follows: Enable-ClusterS2D -Verbose The Enable-ClusterS2D cmdlet generates an HTML report of all configurations and includes a validation summary.
$currentPageFile = Get-WmiObject -Class Win32_PageFileSetting
if ($currentPageFile.Name -eq $pageFilePath) {
    # The page file is already at the desired path; update only its size.
    $currentPageFile.InitialSize = $initialSize
    $currentPageFile.MaximumSize = $maximumSize
    $currentPageFile.Put()
}
else {
    # Remove the existing page file setting and re-create it at the new path.
    $currentPageFile.Delete()
    Set-WmiInstance -Class Win32_PageFileSetting -Arguments @{
        Name        = $pageFilePath
        InitialSize = $initialSize
        MaximumSize = $maximumSize
    }
}

Configuring a cluster witness
A cluster witness must be configured for a two-node cluster.
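For example, a file share witness can be configured with Set-ClusterQuorum; the UNC path below is a placeholder for an SMB share that is hosted outside the cluster:

```powershell
# Configure a file share witness for the cluster quorum (the share path is a
# placeholder; the share must not reside on the cluster itself).
Set-ClusterQuorum -FileShareWitness '\\mgmt-fs01\ClusterWitness'
```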
Enable jumbo frames Enabling jumbo frames specifically on the interfaces supporting the storage network might help improve the overall read/write performance of the Azure Stack HCI cluster. An end-to-end configuration of jumbo frames is required to take advantage of this technology. However, support for jumbo frame sizes varies among software, NIC, and switch vendors. The lowest value within the data path determines the maximum frame size that is used for that path.
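Jumbo frames on the storage-facing adapters can typically be enabled through the adapter's advanced properties. In the following sketch, the adapter names and the 9014-byte value are illustrative; the value that actually applies must be supported end to end by the NICs and switches:

```powershell
# Enable jumbo frames on the storage-facing adapters (adapter names and the
# 9014-byte value are illustrative; verify vendor-supported sizes first).
Set-NetAdapterAdvancedProperty -Name 'SLOT 3 Port 1', 'SLOT 3 Port 2' `
    -RegistryKeyword '*JumboPacket' -RegistryValue 9014
```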
4 References

Topics:
• Dell Technologies documentation
• Microsoft documentation

Dell Technologies documentation
These links provide more information from Dell Technologies:
● iDRAC documentation
● Support Matrix for Microsoft HCI Solutions
● Operations Guide—Managing and Monitoring the Solution Infrastructure Life Cycle

Microsoft documentation
The following link provides more information about Storage Spaces Direct: Storage Spaces Direct overview
A Appendix A: Persistent Memory for Windows Server HCI

Topics:
• Configuring persistent memory for Windows Server HCI
• Configuring Windows Server HCI persistent memory hosts

Configuring persistent memory for Windows Server HCI
Intel Optane DC persistent memory is designed to improve overall data center system performance and lower storage latencies by placing storage data closer to the processor on nonvolatile media.
Configuring persistent memory BIOS settings
Configure the BIOS to enable persistent memory.

Steps
1. During system startup, press F12 to enter System BIOS.
2. Select BIOS Settings > Memory Settings > Persistent Memory.
3. Verify that System Memory is set to Non-Volatile DIMM.
4. Select Intel Persistent Memory.
   The Intel Persistent Memory page provides an overview of the server's Intel Optane DC persistent memory capacity and configuration.
5. Select Region Configuration.
Configuring Windows Server HCI persistent memory hosts
Three types of device objects are related to persistent memory on Windows Server 2019: the NVDIMM root device, physical INVDIMMs, and logical persistent memory disks. In Device Manager, physical INVDIMMs are displayed under Memory devices, while logical persistent memory disks are under Persistent memory disks. The NVDIMM root device is under System Devices. The scmbus.sys driver controls the NVDIMM root device, and the nvdimm.sys driver controls the physical NVDIMM devices.
Managing persistent memory using Windows PowerShell
Windows Server 2019 provides a PersistentMemory PowerShell module that enables user management of the persistent storage space.

PS C:\> Get-Command -Module PersistentMemory

CommandType  Name                           Version  Source
-----------  ----                           -------  ------
Cmdlet       Get-PmemDisk                   1.0.0.0  PersistentMemory
Cmdlet       Get-PmemPhysicalDevice         1.0.0.0  PersistentMemory
Cmdlet       Get-PmemUnusedRegion           1.0.0.0  PersistentMemory
Cmdlet       Initialize-PmemPhysicalDevice  1.0.0.0  PersistentMemory
DeviceId DeviceType                     HealthStatus OperationalStatus PhysicalLocation FirmwareRevision Persistent memory size Volatile memory size
-------- ----------                     ------------ ----------------- ---------------- ---------------- ---------------------- --------------------
1111     008906320000 INVDIMM device    Healthy      {Ok}              B11              102005395        126 GB                 0 GB
1121     008906320000 INVDIMM device    Healthy      {Ok}              B12              102005395        126 GB                 0 GB
121      008906320000 INVDIMM device    Healthy      {Ok}              A12              102005395        126 GB                 0 GB
21       008906320000 INVDIMM device    Healthy      {Ok}              A9               102005395        126 GB                 0 GB

2. Run Get-PmemUnusedRegion to verify that two unused Pmem regions are available, one region for each physical CPU:

PS C:\> Get-PmemUnusedRegion

RegionId TotalSizeInBytes DeviceId
-------- ---------------- --------
1
3
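The unused regions reported above can then be turned into persistent memory disks with the same PersistentMemory module. The following sketch pipes each unused region into New-PmemDisk; the atomicity choice shown is one option, not a recommendation from this guide:

```powershell
# Create a persistent memory disk from each unused region reported by
# Get-PmemUnusedRegion (AtomicityType None skips the block translation table).
Get-PmemUnusedRegion | New-PmemDisk -AtomicityType None

# Verify the resulting logical persistent memory disks.
Get-PmemDisk
```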