Dell EMC SC Series Storage with SAS Front-end Support for Microsoft Hyper-V

Abstract

This document describes how to configure Microsoft® Hyper-V® hosts equipped with supported SAS HBAs to access SAN storage on select Dell EMC™ SC Series arrays with SAS front-end ports.
Revisions

Date            Revision
October 2015    Initial release with support for Dell SCv2000
February 2016   Updated technical support contact information
July 2016       Updated to include support for Dell SC4020
October 2017    Updated to include support for Dell EMC SCv3000 and Dell EMC SC5020

Acknowledgments

Author: Marty Glaser
Table of contents

Revisions
Acknowledgments
Executive summary
1 Introduction
2 SAS FE host path configuration options
3 Configure Hyper-V hosts to access SC Series arrays with SAS FE ports
4 Create a Hyper-V cluster
5 Support for guest VMs with SAS pass-through disks
A Additional resources
Executive summary

Select Dell EMC™ SC Series arrays support serial-attached SCSI (SAS) front-end (FE) ports for connecting host servers equipped with a supported SAS host bus adapter (HBA) directly to SC Series array SAN storage. SAS FE is a simple, cost-effective transport option that is ideal for locations such as a branch office with a limited number of host servers.
1 Introduction

All SC Series arrays can be configured to support iSCSI or Fibre Channel (FC) for connecting host servers to SAN storage. These are flexible, robust, and highly scalable transport options that are the best choice for most SAN environments. For a location such as a small branch office, SAS FE is an attractive, simple, cost-effective transport option because it does not require additional switch hardware or support expertise.
Front and rear views of the SCv3000/SC5020 with SAS FE ports

For more information about the SC Series arrays discussed in this paper — including release notes, getting started guides, system deployment guides, and owner’s manuals — see the resources available at Dell Support.

1.2 Microsoft Windows Server clustering and Hyper-V

Microsoft Windows Server clustering and Hyper-V provide the foundation for creating highly available (HA) host server and VM configurations in Microsoft environments.
For small environments with a limited number of physical host servers, SAS FE connectivity provides performance and resiliency comparable to FC or iSCSI, but without the extra cost and complexity of additional hardware components. There are important scale and design factors to consider when choosing SAS FE instead of FC or iSCSI:

Scale: With SAS FE, the number of physical hosts per SC Series array is limited to a maximum of four MPIO hosts.
2 SAS FE host path configuration options

When an SC Series array is equipped with SAS FE ports, a total of eight ports (four per controller) are available to connect host servers. To support MPIO, each SC Series array supports a maximum of four host servers (two SAS ports per host).
With SAS FE, up to four host servers can be connected to each SC Series array in any combination of standalone hosts or cluster nodes. Figures 5 through 10 show several different MPIO cabling options for Hyper-V hosts and clusters. Each color in Figures 5 through 10 represents a separate SAS FE fault domain. Fault domains protect host servers against a single path or single controller failure. Each SAS FE fault domain consists of two SAS FE ports.
SCv3000/SC5020 with a 4-node MPIO Hyper-V cluster

SCv2000/SC4020 with two 2-node MPIO Hyper-V clusters

SCv3000/SC5020 with two 2-node MPIO Hyper-V clusters

SCv2000/SC4020 with a 3-node MPIO Hyper-V cluster and MPIO standalone host

SCv3000/SC5020 with a 3-node MPIO Hyper-V cluster and MPIO standalone host

SCv2000/SC4020 with a 2-node MPIO Hyper-V cluster with two SAS PCIe cards per host

SCv3000/SC5020 with a 2-node MPIO Hyper-V cluster with two SAS PCIe cards per host
3 Configure Hyper-V hosts to access SC Series arrays with SAS FE ports

This section provides instructions for configuring Microsoft hosts with SAS HBAs to access SAN storage on an SC Series array equipped with SAS FE ports. Step-by-step guidance is also provided for configuring a Hyper-V cluster to use cluster shared volumes (CSVs) presented to Hyper-V nodes with SAS FE.
Stage the host servers with a supported Windows Server OS version and patch them to the desired level. Windows Server 2008 R2 or newer is required to support SAS FE HBA drivers. Windows Server 2016 (with Desktop) is used in the examples shown in this document.
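The roles and features used later in this guide (Hyper-V, MPIO, and failover clustering) can be staged at the same time. A minimal PowerShell sketch, assuming the features are not yet installed and that an immediate restart is acceptable:

# Install the roles and features used in this guide, then restart
Install-WindowsFeature -Name Hyper-V, Multipath-IO, Failover-Clustering `
    -IncludeManagementTools -Restart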
1. While observing safe electrostatic discharge (ESD) precautions, power off the Windows host server and install a supported SAS HBA PCIe card into an available full- or half-height PCIe slot. In this example, a Dell 12Gb SAS HBA is installed in a full-height PCIe slot in a Dell PowerEdge R630 (13G) server.
2. Power on the Windows host and press F10 at boot to access the Dell Server Lifecycle Controller (LC).
3. After updating the firmware, reboot the server and log in to Windows. Launch Device Manager, and under Storage controllers, open the SAS HBA properties and note the driver version.

SAS HBA listed in Device Manager

4. Compare the driver version with the version that is available online for your server hardware and OS. If a newer driver is available, download and install it.
Note: The default driver installed by Windows may not be the correct driver or may be outdated. It is important to install the latest driver available from Dell EMC, and to verify that the HBA firmware is current whenever the Windows driver is updated.

5. Repeat steps 1–4 to install SAS HBAs in additional Windows host servers. Verify that the SAS HBA firmware and drivers are current on all hosts before continuing.
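As an alternative to checking each host in Device Manager, the installed storage-controller driver versions can be listed with PowerShell. A minimal sketch, assuming Windows Server 2012 R2 or newer (the PnpDevice module):

# List present storage controllers with their driver versions
Get-PnpDevice -Class SCSIAdapter -PresentOnly | ForEach-Object {
    $driver = Get-PnpDeviceProperty -InstanceId $_.InstanceId `
        -KeyName DEVPKEY_Device_DriverVersion
    '{0} : {1}' -f $_.FriendlyName, $driver.Data
}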
3. Set the number of fault domains to create to 4 (the maximum possible) and click Next.
4. Review the fault domain information. A matching port on each SAS controller is automatically paired to create each fault domain. Click Next.

Note: SAS FE ports must be assigned to a fault domain before the ports become available to host servers.

5. Allow the wizard to complete and click Finish.
7. Right-click a fault domain and select Edit to view additional information or to perform actions such as renaming the fault domain or assigning friendly names to each physical controller port.
8. As a best practice, modify the fault domain names and port names. This enables intuitive administration in later steps and makes troubleshooting easier.
3.4 Connect Windows host servers to the SC Series array with SAS cables

The following example provides step-by-step guidance for configuring a two-node MPIO Hyper-V cluster using two Dell PowerEdge R630 servers, two Dell 12Gb SAS HBAs, four SAS cables, and an SC5020 array equipped with SAS FE ports.

Configuration example with R630 hosts and SC5020

Modify these steps to fit the design of your environment.
3.5 Create server objects

1. In the DSM client, click Storage > Volumes and create folders and subfolders to logically group your volumes. Do the same under Storage > Servers to logically group your server objects. In this example, a simple tree is created for the objects associated with a Hyper-V cluster named MG-HV-SAS01.
3. Repeat step 2 and note the initiator WWNs for the other host server SAS ports. In this example, the initiator WWN for port S1351 Top ends in 4801.
4. Repeat the process for the second (bottom) controller. In this example, the initiator WWNs end in 4C01 and 4800.
- Host server S1350: Top = 4C00, Bottom = 4C01
- Host server S1351: Top = 4801, Bottom = 4800
5. Under Storage, right-click the desired Servers subfolder and click Create Server.
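The WWNs noted in steps 2 through 4 can also be cross-checked from the Windows side before running the Create Server wizard. A minimal sketch, assuming the built-in Storage module (the SAS value for ConnectionType is an assumption; adjust the filter if your HBA reports a different type):

# List local initiator ports and their addresses
Get-InitiatorPort |
    Where-Object { $_.ConnectionType -eq 'SAS' } |
    Select-Object NodeAddress, PortAddress, ConnectionType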
6. In the wizard, configure the following:
a. Provide a name for the host server. In this example, the server is named S1350.
b. Select the correct MPIO operating system from the drop-down list. Windows Server 2016 Hyper-V MPIO is used in this example.
c. Use the information from step 4 to determine the correct initiator SAS port (HBA). In this example, the WWN ending in 4C00 is correct for the MPIO host S1350.
d. Click OK.
9. Verify that the correct controller ports are listed for each host server and that they are up before continuing.

3.6 Create cluster server object on the storage array

To simplify managing cluster volumes on the SC Series array, create a server cluster object with the desired host servers as members of the cluster. In this example, the member servers are the two MPIO hosts, S1350 and S1351.
2. Click Add Server to Cluster, provide a name for the cluster, and add the desired hosts (in this example, S1350 and S1351).
3. The selected hosts are now listed below the cluster server object.

3.7 Create and map storage volumes to the server cluster

Now that the cluster object is created on the storage array, the next step is to create and map storage to the cluster object.
2. In this example, the first volume created is a quorum disk. Provide an intuitive name for the volume, and set a volume size and a snapshot profile. For Server, select the cluster server object. Configure the other volume settings as desired, and click OK.
3. Repeat steps 1 and 2 to create and map at least one additional data volume to the server cluster object.
4. Click the server cluster object and look under the Mappings tab to view the two new volumes along with the mapping details. Each volume will have two paths listed for each host in the cluster, for a total of four paths.
3.8 Configure MPIO on the host servers

Now that two volumes are mapped to both servers through the cluster server object, the next step is to enable MPIO on the Windows host servers.

1. Log in to the first Windows host and launch Disk Management. From the Action drop-down menu, select Rescan Disks. Because MPIO support for SAS devices is not yet enabled, each mapped volume appears twice (once per path).
2. To correct this so that only one instance of each disk is shown, launch MPIO Properties on the Windows host. Under the Discover Multi-Paths tab, select Add support for SAS devices and click Add.
3. When prompted, reboot the Windows host.
4. After rebooting, launch Disk Management again and verify that only one instance of each drive is displayed.
5. Initialize each disk and bring it online.
6. Format each volume.
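Steps 2 through 4 can also be scripted. A minimal sketch, assuming the Multipath-IO feature is already installed; Enable-MSDSMAutomaticClaim is the PowerShell equivalent of selecting Add support for SAS devices, and the reboot is still required:

# Claim SAS-attached devices for the Microsoft DSM, then reboot
Enable-MSDSMAutomaticClaim -BusType SAS
Restart-Computer

# After the reboot, rescan for the mapped volumes
Update-HostStorageCache

The same commands can be run against the remaining cluster nodes with Invoke-Command -ComputerName.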
8. Repeat steps 1–4 and step 7 for each additional host in the cluster. In this example, the second host is named S1351. It is not necessary to initialize or format the volumes (steps 5 and 6) on the other cluster nodes because these steps have already been completed on the first node.
4 Create a Hyper-V cluster

The two configured Windows hosts in this example are now ready to form a new Windows Hyper-V cluster. This document assumes that the reader is already familiar with creating Hyper-V clusters.
4. Select Run All Tests, click Next, and allow the validation to complete.
5. Examine the results by viewing the report. If any failures are displayed that prevent clustering, correct these deficiencies and then re-run cluster validation. It may be difficult to pass all tests; minor deficiencies will not prevent the nodes from being clustered.
6. In this example, all the tests pass.
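Cluster validation can also be run from PowerShell. A minimal sketch, assuming the FailoverClusters module and the example node names used in this document:

# Run the full validation test suite against both nodes
Test-Cluster -Node S1350, S1351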
4.2 Create a new Hyper-V cluster

1. If the nodes are suitable for clustering, enable Create the cluster now using the validated nodes on the Summary screen and click Finish to start the Create Cluster wizard. The Create Cluster wizard can also be run from the Actions pane in Failover Cluster Manager if not continuing from the previous section.
2. Provide a name for the new cluster along with an IP address. In this example, the cluster is named MG-HV-SAS01. Click Next.
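The equivalent PowerShell, assuming the example names from this document and a placeholder address (replace 10.0.0.100 with a valid static IP for your network):

# Create the cluster from the validated nodes
New-Cluster -Name MG-HV-SAS01 -Node S1350, S1351 -StaticAddress 10.0.0.100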
4.3 Convert the cluster disk to a cluster shared volume

As a final step after creating the Hyper-V cluster, convert the 500 GB cluster disk to a cluster shared volume before creating VM roles.

1. Open Failover Cluster Manager; the new cluster will appear in the left pane. Click each of the objects in the tree (Roles, Nodes, Storage, Networking, and Cluster Events) and view information about each one in the center pane.
2. Expand Storage and click Disks.
3. Right-click the data disk and select Add to Cluster Shared Volumes.
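The conversion can also be performed with PowerShell. A minimal sketch; the resource name Cluster Disk 2 is an assumption and should be replaced with the name reported by Get-ClusterResource:

# List the clustered disks, then convert the data disk to a CSV
Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'
Add-ClusterSharedVolume -Name 'Cluster Disk 2'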
5 Support for guest VMs with SAS pass-through disks

Use of pass-through (PT) disks is a legacy Hyper-V configuration that, while still supported, is discouraged by Microsoft and Dell EMC. If a guest VM has the MPIO feature installed, the AllowFullSCSICommandSet attribute must be changed from false to true for the guest VM to support SAS volumes that are presented as PT disks.
5.2 Windows Server 2012 R2 and newer

For MPIO-enabled guest VMs running on Windows Server 2012 R2 and newer, complete the following steps for each guest VM, substituting the name of the VM for Vm_Name. For guest VMs running on Windows Server 2008 R2 or 2012, see section 5.1.
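A minimal PowerShell sketch of this change, run from the Hyper-V host (it applies the setting to every drive attached to the VM; in a pass-through configuration, those are the PT disks the setting affects):

# Enable the full SCSI command set on the VM's attached drives
Get-VMHardDiskDrive -VMName Vm_Name |
    Set-VMHardDiskDrive -AllowFullSCSICommandSet $true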
A Additional resources

Dell.com/support is focused on meeting customer needs with proven services and support.

Dell EMC TechCenter is an online technical community where IT professionals have access to numerous resources for Dell EMC software, hardware, and services.

Storage Solutions Technical Documents on Dell TechCenter provide expertise that helps to ensure customer success on Dell EMC storage platforms.