Dell EMC PowerStore Host Configuration Guide August 2021 Rev.
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2020 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents

Additional Resources
Chapter 1: Introduction
Purpose
VMware Paravirtual SCSI Controllers
Virtual Disk Provisioning
Virtual Machine Guest OS Settings
Queue Depth
Solaris Host Parameter Settings
Configuring Solaris native multipathing
Preface As part of an improvement effort, revisions of the software and hardware are periodically released. Some functions that are described in this document are not supported by all versions of the software or hardware currently in use. The product release notes provide the most up-to-date information about product features. Contact your service provider if a product does not function properly or does not function as described in this document.
1 Introduction

Topics:
• Purpose

Purpose

This document provides guidelines and best practices for attaching and configuring external hosts to PowerStore systems, either on their own or in conjunction with other storage systems. It covers topics such as multipathing, zoning, and timeouts. It also references issues found in the field and notifies you of known issues.
2 Best Practices for Storage Connectivity

This chapter contains the following topics:

Topics:
• General SAN Guidelines
• Fibre Channel SAN Guidelines
• iSCSI SAN Guidelines

General SAN Guidelines

This section provides general guidelines for storage connectivity.

NOTE: This document mainly describes the storage-specific recommendations for PowerStore. Always consult the operating system documentation for up-to-date guidelines specific to your operating system.
Fibre Channel SAN Guidelines

This section describes the best practices for attaching hosts to a PowerStore cluster in a highly available, resilient, and optimal Fibre Channel SAN.

Recommended Configuration Values Summary

The following table summarizes the recommended configuration values related to Fibre Channel SAN (Validation / Impact / Severity / Refer to Section):

- Use two separate fabrics.
[Figure. Legend: 1. Host 2. PowerStore Appliance 3. Fibre Channel Switch 4. Node]

Recommended Zoning Configuration for Fibre Channel

Consider the following recommendations when setting up a Fibre Channel SAN infrastructure.
● You can zone the host to 1-4 appliances. It is recommended to zone the host to as many appliances as possible to allow volume migration to and from all appliances.
● Use two separate fabrics. Each fabric should be on a different physical FC switch for resiliency.
[Figure. Legend: 1. Host 2. PowerStore Appliance 3. FC SAN Switch 4. Node]

The following diagram describes a simple Fibre Channel connectivity with two (2) PowerStore appliances:

[Figure. Legend: 1. Host 2. PowerStore Appliance 3. FC SAN Switch 4. Node]
The following diagram describes a simple Fibre Channel connectivity with four (4) PowerStore appliances:

[Figure. Legend: 1. Host 2. PowerStore Appliance 3. FC SAN Switch 4. Node]

NOTE: Refer to your Fibre Channel switch user manual for implementation instructions.

iSCSI SAN Guidelines

This section details the best practices for attaching hosts to a PowerStore cluster in a highly available, resilient, and optimal iSCSI SAN.

Recommended Configuration Values Summary

The following table summarizes the recommended variables related to iSCSI SAN (Validation / Impact / Severity / Refer to Section):

- Each host should be connected to both nodes of each appliance.
● A host must be connected with at least one path to each node for redundancy.

Recommended Configuration for iSCSI

Consider the following recommendations when setting up an iSCSI SAN infrastructure.
● External hosts can be attached via iSCSI to a PowerStore cluster through either the embedded 4-port card or a SLIC:
○ Hosts connected via the first two ports of the 4-port card are connected using ToR switches (also used for PowerStore internal communication).
[Figure. Legend: 1. Host 2. PowerStore Appliance 3. ToR/iSCSI Switch 4. Node]

NOTE: For detailed information on connecting the PowerStore appliance to the ToR/iSCSI switch, refer to the PowerStore Network Planning Guide and the Network Configuration Guide for Dell PowerSwitch Series.

The following diagram describes a simple iSCSI connectivity with two (2) PowerStore appliances:

[Figure. Legend: 1. Host 2. PowerStore Appliance 3. ToR/iSCSI Switch 4. Node]

The following diagram describes a simple iSCSI connectivity with four (4) PowerStore appliances:

[Figure. Legend: 1. Host 2. PowerStore Appliance 3. ToR/iSCSI Switch 4. Node]

NOTE: Make sure to connect port 0 of each node to a different iSCSI switch.

NOTE: Refer to your iSCSI switch user manual for implementation instructions.
3 Host Configuration for VMware vSphere ESXi

This chapter contains the following topics:

Topics:
• Chapter Scope
• Recommended Configuration Values Summary
• Fibre Channel Configuration
• iSCSI Configuration
• vStorage API for System Integration (VAAI) Settings
• Setting the Maximum I/O
• Confirming UNMAP Priority
• Configuring VMware vSphere with PowerStore Storage in a Multiple Cluster Configuration
• Multipathing Software Configuration
• Post-Configuration Steps

Chapter Scope

This chapter provides guidance on configuring VMware vSphere ESXi hosts attached to a PowerStore cluster.
Recommended Configuration Values Summary

(Validation / Impact / Severity / Refer to Section)

- ESXi configuration: Keep the UNMAP priority for the host at the lowest possible value (the default value for ESXi 6.5). Impact: Stability & Performance. Severity: Mandatory. See: Confirming UNMAP Priority.
- Path selection policy: VMW_PSP_RR. Impact: Stability & Performance. Severity: Mandatory. See: Configuring vSphere Native Multipathing.
- Alignment: Guest OS virtual machines should be aligned. Impact: Storage efficiency & Performance. Severity: Warning. See: Disk Formatting.
- iSCSI configuration: Configure end-to-end Jumbo Frames.
NVMe over Fibre Channel Configuration on ESXi Hosts

For details on NVMe over Fibre Channel (NVMe-FC) configuration with ESXi hosts, see the VMware vSphere Storage document for the vSphere version running on the ESXi hosts (the About VMware NVMe storage section).

NOTE: NVMe-FC on ESXi hosts connected to PowerStore is currently not supported with Raw Device Mapping files (RDMs) and VMware vSphere Virtual Volumes (vVols).
iSCSI Configuration

This section describes the recommended configuration to apply when attaching hosts to a PowerStore cluster using iSCSI.

NOTE: This section applies only to iSCSI. If you are using only Fibre Channel with vSphere and PowerStore, go to Fibre Channel HBA Configuration.

NOTE: Be sure to review iSCSI SAN Guidelines before you proceed.
3. Make sure that both VMkernel interfaces are attached to the same vSwitch. 4. Override the default Network Policy for iSCSI. For details refer to VMware vSphere documentation. For example, with ESXi 6.5, refer to https://docs.vmware.com/en/VMware-vSphere/6.5/ com.vmware.vsphere.storage.doc/GUID-9C90F3F6-6095-427A-B20C-D46531E39D32.html 5. Configure port binding for each VMkernel interface as described in the VMware vSphere documentation. For example, with ESXi 6.5, refer to https://docs.vmware.
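The port-binding step above can be sketched as a staged command script. The adapter name (vmhba64) and the VMkernel interface names (vmk1, vmk2) are assumptions; substitute your own values and run the resulting commands on the ESXi host itself.

```shell
# Build the esxcli port-binding commands for two VMkernel interfaces.
# vmhba64, vmk1, and vmk2 are example names -- adjust to your host.
adapter="vmhba64"
: > /tmp/bind_iscsi.sh
for vmk in vmk1 vmk2; do
  echo "esxcli iscsi networkportal add -A ${adapter} -n ${vmk}" >> /tmp/bind_iscsi.sh
done
cat /tmp/bind_iscsi.sh
```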
esxcli iscsi adapter param set -A adapter_name -k LoginTimeout -v value_in_sec

Example

For example:

esxcli iscsi adapter param set -A vmhba64 -k LoginTimeout -v 30

No-Op Interval

Follow these steps to set the iSCSI No-Op interval.

About this task
The No-Op iSCSI settings (NoopInterval and NoopTimeout) are used to determine whether a path is dead when it is not the active path. iSCSI passively discovers whether such a path is dead using NoopTimeout.
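The No-Op settings can be applied in the same form as the LoginTimeout example above. This sketch stages the commands as a script; vmhba64 and the value 30 are assumptions, and the exact parameter key names can vary by ESXi release (recent versions expose them as NoopOutInterval and NoopOutTimeout), so list the available keys first with `esxcli iscsi adapter param get -A vmhba64`.

```shell
# Stage the No-Op parameter commands; run the script on the ESXi host.
# Key names (NoopOutInterval/NoopOutTimeout) and the value 30 are assumptions.
adapter="vmhba64"
: > /tmp/noop_cmds.sh
for key in NoopOutInterval NoopOutTimeout; do
  echo "esxcli iscsi adapter param set -A ${adapter} -k ${key} -v 30" >> /tmp/noop_cmds.sh
done
cat /tmp/noop_cmds.sh
```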
NOTE: These settings will enable ATS-only on supported VMFS datastores, as noted in VMware KB# 1021976 (https://kb.vmware.com/s/article/1021976).

Setting the Maximum I/O

Follow this guideline to set the maximum I/O request size for storage devices.

Disk.DiskMaxIOSize determines the maximum I/O request size that is passed to storage devices. With PowerStore, it is required to change this parameter from the default of 32767 (32 MB) to 1024 (1 MB).
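The Disk.DiskMaxIOSize change can be made with esxcli. This sketch stages the invocation as a script to run on the ESXi host; the follow-up "list" call verifies the new value.

```shell
# Stage the esxcli command that sets the 1 MB maximum I/O size,
# then a list command to confirm it. Run the script on the ESXi host.
cat > /tmp/set_diskmaxiosize.sh <<'EOF'
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 1024
esxcli system settings advanced list -o /Disk/DiskMaxIOSize
EOF
cat /tmp/set_diskmaxiosize.sh
```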
For reference, this table also includes the corresponding recommended settings when vSphere is connected to PowerStore storage only (Setting / Scope-Granularity / Multi-Storage Setting / PowerStore Only Setting):

- UCS FC Adapter Policy: Per vHBA; default; default
- Cisco nfnic lun_queue_depth_per_path: Global; default (32); default (32)
- Disk.SchedNumReqOutstanding: LUN; default; default
- Disk.SchedQuantum: Global; default; default
- Disk.
Configuring NMP Round Robin as the Default Pathing Policy for All PowerStore Volumes

Follow this procedure to configure NMP Round Robin as the default pathing policy for all PowerStore volumes using the ESXi command line.

About this task
NOTE: As of VMware ESXi version 6.7, Patch Release ESXi670-201912001, the SATP rule presented in this procedure is already integrated into the ESXi kernel.

NOTE: Use this method when no PowerStore volume is presented to the host.
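A claim rule of the kind this procedure describes can be sketched as follows. The vendor and model strings (DellEMC / PowerStore) are assumptions; confirm them against the output of `esxcli storage core device list` before adding the rule on the ESXi host.

```shell
# Stage a SATP claim rule that maps PowerStore volumes to VMW_SATP_ALUA
# with Round Robin (iops=1). Vendor/model strings are assumptions.
cat > /tmp/satp_rule.sh <<'EOF'
esxcli storage nmp satp rule add -c tpgs_on -e "PowerStore" \
    -M "PowerStore" -V "DellEMC" -P "VMW_PSP_RR" -O "iops=1" \
    -s "VMW_SATP_ALUA" -t vendor
EOF
cat /tmp/satp_rule.sh
```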
(naa.68ccf098003a54f16d2eddc3217da922) -Device: naa.68ccf09000000000c9f6d1acda1e4567 Device Display Name: DellEMC Fibre Channel RAID Ctlr (naa.68ccf09000000000c9f6d1acda1e4567) -Device: naa.68ccf098009c1cf3bfe0748a9183681a Device Display Name: DellEMC Fibre Channel Disk (naa.68ccf098009c1cf3bfe0748a9183681a) -Device: naa.68ccf09000000000c9f6d1acda1e4567 Device Display Name: DellEMC Fibre Channel RAID Ctlr (naa.68ccf09000000000c9f6d1acda1e4567) -Device: naa.
lastPathIndex=0: NumIOsPending=0,numBytesPending=0} -naa.68ccf09800e8fa24ea37a1bc49d9f6b8 Device Display Name: DellEMC Fibre Channel Disk (naa.
Presenting PowerStore Volumes to the ESXi Host

Specify ESXi as the operating system when presenting PowerStore volumes to the ESXi host.

NOTE: Using data reduction and/or encryption software on the host side will affect the PowerStore cluster data reduction.

NOTE: When using the iSCSI software initiator with ESXi and PowerStore storage, it is recommended to use only lower-case characters in the IQN to correctly present the PowerStore volumes to ESXi.
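The lower-case IQN recommendation above can be enforced with a simple normalization step before configuring the initiator. The IQN value here is only an example.

```shell
# Normalize a software-initiator IQN to lower case (example value).
iqn="iqn.1998-01.com.vmware:ESXi-Host-01"
iqn_lc=$(printf '%s' "$iqn" | tr '[:upper:]' '[:lower:]')
echo "$iqn_lc"
```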
Virtual Machine Guest OS Settings

This section details the recommended settings and considerations for the virtual machine guest OS.

● LUN Queue Depth - For optimal virtual machine operation, configure the virtual machine guest OS to use the default queue depth of the virtual SCSI controller. For details on adjusting the guest OS LUN queue depth, refer to VMware KB# 2053145 on the VMware website (https://kb.vmware.com/kb/2053145).
4 Host Configuration for Microsoft Windows

This chapter contains the following topics:

Topics:
• Recommended Configuration Values Summary
• Fibre Channel Configuration
• iSCSI Configuration
• Multipathing Software Configuration
• Post-Configuration Steps - Using the PowerStore system

Recommended Configuration Values Summary

The following table summarizes all used variables and their values when configuring hosts for Microsoft Windows.

NOTE: Unless indicated otherwise, use the default parameter values.
(Validation / Impact / Severity / Refer to Section)

- fsutil behavior set DisableDeleteNotify 0

Fibre Channel Configuration

This section describes the recommended configuration to apply when attaching hosts to a PowerStore cluster using Fibre Channel.

NOTE: This section applies only to FC. If you are using only iSCSI with Windows, go to iSCSI HBA Configuration.

NOTE: Before you proceed, review Fibre Channel SAN Guidelines.
Configuring Native Multipathing Using Microsoft Multipath I/O (MPIO) This topic describes configuring native multipathing using Microsoft Multipath I/O (MPIO). For optimal operation with PowerStore storage, configure the Round-Robin (RR) policy or the Least Queue Depth policy for MPIO for devices presented from PowerStore. Using these policies, I/O operations are balanced across all available paths.
Post-Configuration Steps - Using the PowerStore system This topic describes the post-configuration steps using the PowerStore system. After the host configuration is completed, you can use the PowerStore storage from the host. You can create, present, and manage volumes accessed from the host via PowerStore Manager, CLI, or REST API. Refer to the PowerStore Manager Online Help, CLI Reference Guide, or REST API Reference Guide for additional information.
5 Host Configuration for Linux

This chapter contains the following topics:

Topics:
• Recommended Configuration Values Summary
• Fibre Channel Configuration
• iSCSI Configuration
• Multipathing Software Configuration
• Post-Configuration Steps - Using the PowerStore system

Recommended Configuration Values Summary

The following table summarizes all used and recommended variables and their values when configuring hosts for Linux.

NOTE: Unless indicated otherwise, use the default parameter values.
(Validation / Impact / Severity / Refer to Section)

- Set node.session.timeo.replacement_timeout = 15
- Temporarily disable UNMAP during file system creation. Impact: Performance. Severity: Recommended. See: Creating a File System.
  ● When creating a file system using the mke2fs command, use the "-E nodiscard" parameter.
  ● When creating a file system using the mkfs.xfs command, use the "-K" parameter.
Setting Up Emulex NVMe HBA

Follow these steps to set up an Emulex NVMe HBA.

Steps
1. Access the Linux host as root.
2. Edit the /etc/modprobe.d/lpfc.conf configuration file with the following data:

options lpfc lpfc_lun_queue_depth=128 lpfc_sg_seg_cnt=256 lpfc_max_luns=65535 lpfc_enable_fc4_type=3

iSCSI Configuration

This section provides an introduction to the recommended configuration to apply when attaching hosts to a PowerStore cluster using iSCSI.

NOTE: This section applies only to iSCSI.
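Step 2 above can be staged safely before touching the live file. This sketch writes the settings to /tmp; copy the file into place as root, and note the module options typically only take effect after the initramfs is rebuilt and the host is rebooted.

```shell
# Stage the lpfc module options from step 2 in /tmp, then verify.
cat > /tmp/lpfc.conf <<'EOF'
options lpfc lpfc_lun_queue_depth=128 lpfc_sg_seg_cnt=256 lpfc_max_luns=65535 lpfc_enable_fc4_type=3
EOF
grep '^options lpfc' /tmp/lpfc.conf
```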
Configuring the PowerStore Cluster Disk Device with iSCSI Single Network Subnet Support

NOTE: This subsection applies only to a cluster running PowerStore OS version 1, or if you are using only a single network subnet for the iSCSI portals.

In PowerStore OS version 1, only a single network subnet is supported for the iSCSI target portals. By design, on various Linux distributions, only two network interfaces can be configured on the same network subnet.
Setting the Reverse Path Filtering to 2 (loose) on the relevant network interfaces makes them both accessible and routable. To apply this change, add the following lines to /etc/sysctl.conf:

net.ipv4.conf.eth2.rp_filter = 2
net.ipv4.conf.eth3.rp_filter = 2

NOTE: In this example, eth2 and eth3 are the network interfaces used for iSCSI. Make sure to substitute the relevant interfaces for your system.

To reload the configuration:

sysctl -p

To view the current Reverse Path Filtering configuration on the system:

sysctl -ar "\.
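The sysctl entries above can be generated for any set of interfaces. This sketch stages them in a /tmp file; eth2 and eth3 are the example iSCSI interfaces from the text, and applying the real /etc/sysctl.conf requires root plus `sysctl -p`.

```shell
# Generate loose reverse-path-filter entries for the iSCSI interfaces.
# eth2/eth3 are examples -- substitute your own interface names.
: > /tmp/sysctl-iscsi.conf
for nic in eth2 eth3; do
  echo "net.ipv4.conf.${nic}.rp_filter = 2" >> /tmp/sysctl-iscsi.conf
done
cat /tmp/sysctl-iscsi.conf
```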
The following parameters apply (Parameter / Description / Value):

- path_grouping_policy: Specifies the default path grouping policy to apply to PowerStore volumes. Paths are grouped by priorities assigned by the cluster; a higher priority (50) is set as Active/Optimized, and a lower priority (10) is set as Active/Non-Optimized. Value: group_by_prio
- path_checker: Specifies TEST UNIT READY as the default method used to determine the state of the paths. Value: tur
- detect_prio: If set to yes, multipath will try to detect whether the device supports ALUA. Value: yes
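The table values above can be assembled into an /etc/multipath.conf device stanza. This sketch stages it in /tmp; the vendor and product strings are assumptions, so verify the strings your devices actually report (for example with `multipathd show config`) before using them.

```shell
# Stage a multipath.conf device stanza from the documented parameters.
# "DellEMC"/"PowerStore" vendor-product strings are assumptions.
cat > /tmp/multipath-powerstore.conf <<'EOF'
devices {
    device {
        vendor                 "DellEMC"
        product                "PowerStore"
        path_grouping_policy   group_by_prio
        path_checker           tur
        detect_prio            yes
    }
}
EOF
cat /tmp/multipath-powerstore.conf
```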
When configuring DM-multipathing for PowerStore NVMe-FC devices on the Linux host, configure the multipath settings file /etc/multipath.conf:

devices {
    device {
        vendor
        product
        uid_attribute
        prio
        failback
        path_grouping_policy
#       path_checker
        path_selector
        detect_prio
        fast_io_fail_tmo
        no_path_retry
        rr_min_io_rq
    }
    # other devices
}
Post-Configuration Steps - Using the PowerStore system After the host configuration is completed, you can access the PowerStore system from the host. You can create, present, and manage volumes accessed from the host via PowerStore Manager, CLI, or REST API. Refer to the PowerStore Manager Online Help, CLI Reference Guide, or REST API Reference Guide for additional information.
To disable UNMAP during file system creation:
● When creating a file system using the mke2fs command, use the "-E nodiscard" parameter.
● When creating a file system using the mkfs.xfs command, use the "-K" parameter.

For more efficient data utilization and better performance, use the Ext4 file system with PowerStore cluster storage instead of Ext3. For details on converting to the Ext4 file system (from either Ext3 or Ext2), refer to https://ext4.wiki.kernel.org/index.php/UpgradeToExt4.
6 Host Configuration for AIX

This chapter contains the following topics:

Topics:
• Recommended Configuration Values Summary
• Fibre Channel Configuration
• Dell EMC AIX ODM Installation

Recommended Configuration Values Summary

The following table summarizes all used and recommended variables and their values when configuring hosts for AIX.

NOTE: Unless indicated otherwise, use the default parameter values.
(Validation / Impact / Severity / Refer to Section)

- dyntrk = yes

Fibre Channel Configuration

This section describes the recommended configuration to apply when attaching AIX hosts to a PowerStore cluster using Fibre Channel.

NOTE: When using Fibre Channel with PowerStore, address the FC Host Bus Adapter (HBA) issues described in this section for optimal performance.
Fast I/O Failure for Fibre Channel Devices This topic describes the Fast I/O Failure feature for FC devices and details the setting recommendations. AIX supports Fast I/O Failure for Fibre Channel devices after link events in a switched environment. When the FC adapter driver detects a link event, such as a lost link between a storage device and a switch, it waits for the fabric to stabilize (approximately 15 s).
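On AIX, Fast I/O Failure is controlled through the fc_err_recov attribute of the FC SCSI protocol device, and dynamic tracking through dyntrk (the dyntrk = yes value comes from the summary table above). This sketch stages a typical invocation; fscsi0 is an example device name, and -P defers the change to the next reboot.

```shell
# Stage chdev/lsattr commands for one FC protocol device on the AIX host.
# fscsi0 is an example; repeat for each fscsi device.
cat > /tmp/aix_fastfail.sh <<'EOF'
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes -P
lsattr -El fscsi0 -a fc_err_recov -a dyntrk
EOF
cat /tmp/aix_fastfail.sh
```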
Fibre Channel Adapter Device Driver Maximum I/O Size Set the max_xfer_size attribute for optimal AIX host operation over FC with PowerStore. Prerequisites The max_xfer_size FC HBA adapter device driver attribute for the fscsi device controls the maximum I/O size that the adapter device driver can handle. This attribute also controls a memory area the adapter uses for data transfers. For optimal AIX host operation over FC with PowerStore, perform the following steps: Steps 1.
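A typical invocation for the max_xfer_size attribute can be sketched as follows. The value 0x100000 (1 MB) is an assumption, not a value taken from this guide; use the size specified for your configuration, and note -P defers the change to the next reboot.

```shell
# Stage the chdev command that raises the fscsi transfer size on AIX.
# fscsi0 and the 0x100000 (1 MB) value are assumptions.
cat > /tmp/aix_xfer.sh <<'EOF'
chdev -l fscsi0 -a max_xfer_size=0x100000 -P
lsattr -El fscsi0 -a max_xfer_size
EOF
cat /tmp/aix_xfer.sh
```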
EMC.PowerStore.aix.rte 6.2.0.1 USR APPLY SUCCESS
EMC.PowerStore.fcp.MPIO.rte 6.2.0.1 USR APPLY SUCCESS

5. Run the following command to install the following filesets to support PowerPath (an RPQ for PowerPath is required for this configuration):

installp -ad . EMC.PowerStore.aix.rte EMC.PowerStore.fcp.rte

Installation Summary
------------------------
Name Level Part Event Result
----------------------------------------------------------------
EMC.PowerStore.aix.rte 6.2.0.1 USR APPLY SUCCESS
EMC.
7 Host Configuration for Solaris

This chapter contains the following topics:

Topics:
• Recommended Configuration Values Summary
• Fibre Channel Configuration
• Solaris Host Parameter Settings
• Post configuration steps - using the PowerStore system

Recommended Configuration Values Summary

The following table summarizes all used and recommended variables and their values when configuring hosts for the Solaris operating system.

NOTE: Unless indicated otherwise, use the default parameter values.
(Validation / Config File / Impact / Severity / Refer to Section)

- Maximum I/O size for ssd driver for Solaris 10, 11-11.3 (SPARC): ssd.conf; Stability; Mandatory; Updating ssd.conf configuration file
- sd.conf; Stability; Mandatory; Updating sd.conf configuration file
- ssd.conf; Stability; Recommended; Updating ssd.conf configuration file
- sd.conf; Stability; Recommended; Updating sd.conf configuration file
- Stability; Mandatory; Updating scsi_vhci.conf configuration file
Queue Depth

Queue depth is the number of SCSI commands (including I/O requests) that can be handled by a storage device at a given time. A queue depth can be set on either of the following:
● Initiator level - HBA queue depth
● LUN level - LUN queue depth

The LUN queue depth setting controls the number of outstanding I/O requests per single path. The HBA queue depth (also referred to as execution throttle) setting controls the number of outstanding I/O requests per HBA port.
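On Solaris, the LUN queue depth cap is commonly applied through an /etc/system tunable. This sketch stages such an entry in /tmp; the value 32 is an assumption (size it to your workload), ssd is the SPARC driver, and the sd driver uses sd:sd_max_throttle instead.

```shell
# Stage an /etc/system queue-depth entry (value 32 is an assumption).
: > /tmp/system.add
echo 'set ssd:ssd_max_throttle=32' >> /tmp/system.add
cat /tmp/system.add
```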
# cp -p /etc/driver/drv/scsi_vhci.conf /etc/driver/drv/scsi_vhci.conf_ORIG
# vi /etc/driver/drv/scsi_vhci.conf

Example
Below are the entries recommended for PowerStore storage.
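A scsi_vhci.conf entry of the kind this example refers to can be sketched in the failover-override format. Both the "DellEMC PowerStore" vendor/product string and the "f_sym" module here are assumptions; take the exact entry from the PowerStore documentation for your Solaris release.

```shell
# Stage a hypothetical scsi_vhci.conf failover-override entry in /tmp.
# Vendor/product string and module name are assumptions.
cat > /tmp/scsi_vhci.conf.add <<'EOF'
scsi-vhci-failover-override =
    "DellEMC PowerStore", "f_sym";
EOF
cat /tmp/scsi_vhci.conf.add
```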
The following parameters apply (Parameter / Description / Value):

- mpxio-disable: Specifies whether MPxIO is disabled. MPxIO can be enabled for Fibre Channel storage, or it can be disabled for a particular HBA. Value: no
- fp_offline_ticker: Used to prevent errors from being generated immediately for transient/brief connection interruptions, and should prevent any errors if the connections are restored before the fcp and fp delays expire.
Updating ssd.conf configuration file (Solaris 10 and 11.0-11.3 SPARC)

About this task
The ssd.conf host file is used to control options for SCSI disk storage devices.

Steps
1. Run the following command to verify the ssd.conf file location:

# ls /etc/driver/drv/

2.
# cp /kernel/drv/sd.conf /etc/driver/drv

3. Run the following commands to create a backup copy and modify the file:

# cp -p /etc/driver/drv/sd.conf /etc/driver/drv/sd.conf_ORIG
# vi /etc/driver/drv/sd.conf

Example
Below are the entries recommended for PowerStore storage.
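A per-array tuning entry of the kind this example refers to uses the sd-config-list format. The vendor/product string and the tunables shown (throttle-max, disksort) are assumptions, not values from this guide; use the entries Dell publishes for your Solaris release.

```shell
# Stage a hypothetical sd.conf sd-config-list entry in /tmp.
# Vendor/product string and tunable values are assumptions.
cat > /tmp/sd.conf.add <<'EOF'
sd-config-list = "DellEMC PowerStore", "throttle-max:32, disksort:false";
EOF
cat /tmp/sd.conf.add
```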
8 Host Configuration for HP-UX

This chapter contains the following topics:

Topics:
• Recommended Configuration Values Summary
• Fibre Channel Configuration
• HP-UX Host Parameter Settings
• Multipathing Software Configuration
• Post-Configuration Steps - Using the PowerStore System

Recommended Configuration Values Summary

The following table summarizes all used variables and their values when configuring hosts for HP-UX.

NOTE: Unless indicated otherwise, use the default parameter values.
(Validation / Impact / Severity / Refer to Section)

● To temporarily disable UNMAP for the targeted device on the host (prior to file system creation):

# vxdisk set reclaim=off "disk name"

● To re-enable UNMAP for the targeted device on the host (after file system creation):

# vxdisk reclaim "disk name"

Fibre Channel Configuration

This section describes the recommended configuration to apply when attaching hosts to a PowerStore cluster using Fibre Channel.
● Persistent through reboot - scsimgr save_attr -a escsi_maxphys= The set value is defined in 4KB increments.
Creating a file system

About this task
File system configuration and management are out of the scope of this document.

NOTE: Some file systems may require you to properly align the file system on the PowerStore volume. It is recommended to use the specified tools to optimally match your host with application requirements.

NOTE: Creating a file system with UNMAP enabled on a host connected to PowerStore may result in an increased amount of write I/O to the storage subsystem.
A Considerations for Boot from SAN with PowerStore

This appendix provides considerations for configuring boot from SAN with PowerStore.

Topics:
• Consideration for Boot from SAN with PowerStore

Consideration for Boot from SAN with PowerStore

NOTE: See your operating system documentation for general boot from SAN configuration.

NOTE: The current PowerStore version does not support mapping individual LUNs to a host under a host group.