Dell PowerVault MD3260i Series Storage Arrays Deployment Guide
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2013 Dell Inc. All Rights Reserved.
1 Introduction

This guide provides information about deploying Dell PowerVault MD3260i storage arrays. The deployment process includes:
• Hardware installation
• Modular Disk Storage Manager (MD Storage Manager) installation
• Initial system configuration
Other information provided includes system requirements, storage array organization, and utilities.
NOTE: For more information on product documentation, see support.dell.com/manuals.
One or more host servers attached to the storage array can access the data on the storage array. You can also establish multiple physical paths between the host(s) and the storage array so that loss of any single path (for example, through failure of a host server port) does not result in loss of access to data on the storage array.
2 Hardware Installation

Before using this guide, ensure that you review the instructions in the:
• Getting Started Guide — The Getting Started Guide that shipped with the storage array provides information for configuring the initial setup of the system.
• Dell PowerVault MD3260/3260i/3660i/3660f/3060e Storage Arrays Administrator's Guide — The Administrator's Guide provides information about important concepts you must know before setting up your storage solution.
Redundant And Non-Redundant Configurations Non-redundant configurations are configurations that provide only a single data path from a host to the storage array. This type of configuration is only recommended for non-critical data storage. Path failure from a failed or removed cable, a failed HBA, or a failed or removed RAID controller module results in loss of host access to storage on the storage array.
Figure 1. Eight Hosts With a Single Data Path

In the following figure, up to four servers are directly attached to the RAID controller modules. If the host server has a second Ethernet connection to the array, it can be attached to the iSCSI ports on the array's second controller. This configuration provides improved availability by allowing two separate physical paths for each host, which ensures full redundancy if one of the paths fails.
Figure 2. Four Hosts Connected to Two Controllers

In the following figure, up to four cluster nodes are directly attached to two RAID controller modules. Since each cluster node has redundant paths, loss of a single path still allows access to the storage array through the alternate path.
Figure 3. Four Cluster Nodes Connected to Two Controllers

Network-Attached Configurations

You can also cable the host servers to the RAID controller module iSCSI ports through industry-standard 1 Gb Ethernet switches. An iSCSI configuration that uses Ethernet switches is frequently referred to as an IP SAN.
The PowerVault MD3260i Series storage array can support up to 64 hosts simultaneously. This configuration supports single-path data configurations. The following figure shows up to 64 stand-alone servers attached (using multiple sessions) to a RAID controller module through a network. Hosts that have a second Ethernet connection to the network allow two separate physical paths for each host, which ensures full redundancy if one of the paths fails.
Figure 4.
Cabling PowerVault MD3060e Expansion Enclosures You can expand the capacity of your PowerVault MD3260i Series storage array by adding PowerVault MD3060e expansion enclosures. You can expand the physical disk pool to a maximum of 120 (or 180, if enabled using Premium Feature activation) physical disks using a maximum of two expansion enclosures. NOTE: Hot plug of MD3060e expansion enclosure is not recommended. Power on all MD3060e expansion enclosures before you power on the array enclosure.
3 Installing MD Storage Manager

The PowerVault MD Series resource media contains software and drivers for both Linux and Microsoft Windows operating systems. The root of the media contains a readme.txt file covering changes to the software, updates, fixes, patches, and other important data applicable to both Linux and Windows operating systems.
Modular Disk Configuration Utility The PowerVault MD Configuration Utility (MDCU) is an optional utility that provides a consolidated approach for configuring the management and iSCSI host ports, and creating sessions for the iSCSI modular disk storage arrays. It is recommended that you use PowerVault MDCU to configure iSCSI on each host server connected to the storage array. Graphical Installation (Recommended) The MD Storage Manager configures, manages, and monitors the storage array.
NOTE: The MD Storage Manager installer automatically installs the required drivers, firmware, and operating system patches/hotfixes to operate your storage array. These drivers and firmware are also available at dell.com/support. In addition, see the Support Matrix at dell.com/support/manuals for any additional settings and/or software required for your specific storage array. Console Installation NOTE: Console installation only applies to Linux systems that are not running a graphical environment.
4 Post Installation Tasks

Before using the storage array for the first time, complete a number of initial configuration tasks in the order shown. These tasks are performed using the MD Storage Manager.
NOTE: If Dynamic Host Configuration Protocol (DHCP) is not used, initial configuration using the management station must be performed on the same physical subnet as the storage array.
iSCSI Configuration Worksheet

The IPv4 Settings — Worksheet and IPv6 Settings — Worksheet help you plan your configuration. Recording host server and storage array IP addresses at a single location enables you to configure your setup faster and more efficiently. Guidelines For Configuring Your Network For iSCSI provides general network setup guidelines for both Windows and Linux environments. It is recommended that you review these guidelines before completing the worksheet.
Storage Array Configuration

Before a host iSCSI initiator and an iSCSI-based storage array can communicate, they must be configured with information such as which IP addresses and authentication method to use. Since iSCSI initiators establish connections with an already configured storage array, the first task is to configure your storage arrays to make them available for iSCSI initiators.
• For redundancy in a dual controller (duplex) configuration, ensure each host network interface is configured to connect to both storage array controllers. • For optimal load balancing, ensure each host network interface that is used for iSCSI traffic is configured to connect to each storage array controller. • It is recommended that each host network interface only establishes one iSCSI session per storage array controller.
5 Guidelines For Configuring Your Network For iSCSI

This section provides general guidelines for setting up your network environment and IP addresses for use with the iSCSI ports on your host server and storage array. Your specific network environment may require different or additional steps than shown here, so make sure you consult with your system administrator before performing this setup.
3. Restart network services using the following command:
/etc/init.d/network restart

Using A DNS Server

If you are using static IP addressing:
1. In the Control Panel, select Network connections or Network and Sharing Center, and then click Manage network connections.
2. Right-click the network connection you want to configure and select Properties.
NETWORKING=yes
HOSTNAME=mymachine.mycompany.com
2. Edit the configuration file for the connection you want to configure, either /etc/sysconfig/network-scripts/ifcfg-ethX (for Red Hat Enterprise Linux) or /etc/sysconfig/network/ifcfg-eth-id-XX:XX:XX:XX:XX (for SUSE Enterprise Linux):
BOOTPROTO=dhcp
Also, verify that an IP address and netmask are not defined.
3. Restart network services using the following command:
/etc/init.d/network restart
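For comparison with the DHCP case above, a minimal static-IP sketch of such a configuration file follows, assuming Red Hat Enterprise Linux, interface eth0, and placeholder addresses on one of the example iSCSI subnets used elsewhere in this guide; substitute values from your own worksheet.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- static addressing sketch
# (placeholder values; substitute addresses from your iSCSI worksheet)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.130.10
NETMASK=255.255.255.0
```

After editing the file, restart network services as shown in step 3 above for the change to take effect.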
6 Uninstalling MD Storage Manager

Uninstalling MD Storage Manager From Windows

Uninstall MD Storage Manager From Microsoft Windows Operating Systems Other Than Microsoft Windows Server 2008

Use the Change/Remove Program feature to uninstall the Modular Disk Storage Manager from Microsoft Windows operating systems other than Microsoft Windows Server 2008.

To uninstall the Modular Disk Storage Manager from Microsoft Windows Server 2008:
1. Double-click Programs and Features from the Control Panel.
Uninstalling MD Storage Manager From Linux By default, PowerVault MD Storage Manager is installed in the /opt/dell/mdstoragemanager directory. If another directory was used during installation, navigate to that directory before beginning the uninstallation procedure. 1. From the installation directory, open the Uninstall Dell MD Storage Software directory. 2. Run the file Uninstall Dell MD Storage Software.exe. 3. From the Uninstall window, click Next, and follow the instructions on the screen.
7 Appendix — Manual Configuration Of iSCSI

The following sections contain step-by-step instructions for configuring iSCSI on your storage array. However, before beginning, it is important to understand where each of these steps occurs in relation to your host server or storage array environment. The table below shows each iSCSI configuration step and where it occurs.
Automatic Storage Array Discovery 1. Launch MD Storage Manager. If this is the first storage array to be set up, the Add New Storage Array window is displayed. 2. Select Automatic and click OK. After discovery is complete, a confirmation screen is displayed. It may take several minutes for the discovery process to complete. Closing the discovery status window before the discovery process completes cancels the discovery process. 3. Click Close to close the screen. Manual Storage Array Discovery 1.
Task: Configure a storage array
Purpose: To create virtual disks and map them to hosts.

Step 2: Configure The iSCSI Ports On The Storage Array

By default, the iSCSI ports on the storage array are set to the following IPv4 settings:

Controller 0, Port 0: IP: 192.168.130.101 Subnet Mask: 255.255.255.0
Controller 0, Port 1: IP: 192.168.131.101 Subnet Mask: 255.255.255.0
Controller 0, Port 2: IP: 192.168.132.101 Subnet Mask: 255.255.255.0
Controller 0, Port 3: IP: 192.168.133.101 Subnet Mask: 255.255.255.0
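As a planning aid, the hypothetical shell loop below (not part of the array tooling) prints one host-side addressing suggestion per default controller 0 subnet; the host addresses (.10) are placeholders, while the array-port addresses are the factory defaults listed above.

```shell
# Print a hypothetical host-NIC addressing plan, one NIC per default
# array subnet. Host addresses (.10) are examples only; array ports
# are the controller 0 factory defaults listed above.
plan_subnet() {
  echo "subnet 192.168.$1.0/24: host NIC 192.168.$1.10 -> array port 192.168.$1.101"
}
for s in 130 131 132 133; do
  plan_subnet "$s"
done
```

Keeping each host NIC on the same subnet as exactly one array port, as sketched here, matches the one-session-per-controller-port guideline given earlier in this guide.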
For Windows Server 2003 Or Windows Server 2008 GUI Version
1. Click Start→ Programs→ Microsoft iSCSI Initiator or click Start→ All Programs→ Administrative Tools→ iSCSI Initiator.
2. Click the Discovery tab.
3. Under Target Portals, click Add and enter the IP address or DNS name of the iSCSI port on the storage array.
4. If the iSCSI storage array uses a custom TCP port, change the Port number. The default is 3260.
5. Click Advanced and set the following values on the General tab:
a) Edit or verify that the node.startup = manual line is disabled.
b) Edit or verify that the node.startup = automatic line is enabled. This enables automatic startup of the service at boot time.
c) Verify that the following time-out value in the file is set to 30:
node.session.timeo.replacement_timeout = 30
d) Save and close the /etc/iscsi/iscsid.conf file.
5. From the console, restart the iSCSI service with the following command:
service iscsi start
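Once the service is running, discovery and login on a RHEL/SLES host are performed with iscsiadm. The sketch below builds the two relevant command lines as strings purely for illustration; the portal address assumes the array's default controller 0, port 0 IP, so change it to match your configured iSCSI port before running the commands for real.

```shell
# Build (and display) the iscsiadm discovery and login commands used
# on a RHEL/SLES host. PORTAL assumes the array's factory-default
# controller 0, port 0 address; adjust it for your environment.
PORTAL="192.168.130.101:3260"
DISCOVER_CMD="iscsiadm -m discovery -t sendtargets -p ${PORTAL}"
LOGIN_CMD="iscsiadm -m node --login"
echo "${DISCOVER_CMD}"
echo "${LOGIN_CMD}"
```

The discovery command returns the target IQNs offered at that portal; the login command then establishes sessions to the discovered nodes.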
Target CHAP In target CHAP, the storage array authenticates all requests for access issued by the iSCSI initiator(s) on the host server using a CHAP secret. To set up target CHAP authentication, you must enter a CHAP secret on the storage array, then configure each iSCSI initiator on the host server to send that secret each time it attempts to access the storage array.
2. Select a CHAP setting:

CHAP Setting — Description
None — This is the default selection. If None is the only selection, the storage array allows an iSCSI initiator to log on without supplying any type of CHAP authentication.
None and CHAP — The storage array allows an iSCSI initiator to log on with or without CHAP authentication.
CHAP — If CHAP is selected and None is deselected, the storage array requires CHAP authentication before allowing access.

To configure a CHAP secret, select CHAP and select CHAP Secret.
3. If you are using mutual CHAP authentication, click the General tab and select Secret. At Enter a secure secret, enter the mutual CHAP secret you entered for the storage array.
4. Click the Discovery tab.
5. Under Target Portals, select the IP address of the iSCSI port on the storage array and click Remove.
The iSCSI port you configured on the storage array during target discovery disappears.
For Red Hat Enterprise Linux 5 Or 6, SUSE Linux Enterprise Server 10 Or 11
1. To enable CHAP (optional), the following line needs to be enabled in your /etc/iscsi/iscsid.conf file:
node.session.auth.authmethod = CHAP
2. To set a user name and password for CHAP authentication of the initiator by the target(s), edit the following lines:
node.session.auth.username =
node.session.auth.password =
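Filled in, the relevant /etc/iscsi/iscsid.conf section might look like the following sketch. The initiator name and secret shown are placeholders, not values from this guide; use the CHAP secret you configured on the storage array.

```shell
# /etc/iscsi/iscsid.conf -- CHAP sketch with placeholder credentials
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1994-05.com.redhat:myhost   # placeholder initiator name
node.session.auth.password = MySecretValue12                 # placeholder CHAP secret
```

Restart the iSCSI service after editing the file so the new authentication settings take effect.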
9. Go to Connected Targets. 10. Verify that the targets are connected and the status is true. Step 7: Connect To The Target Storage Array From The Host Server For Windows Server 2008 GUI Version 1. Click Start → Program → Microsoft iSCSI Initiator or Start → All Programs → Administrative Tools → iSCSI Initiator. 2. Click the Targets tab. If previous target discovery was successful, the IQN of the storage array should be displayed under Targets. 3. Click Log On. 4.
3. Log on to the target:
iscsicli PersistentLoginTarget Target_Name Report_To_PNP Target_Portal_Address TCP_Port_Number_Of_Target_Portal * * * Login_Flags * * * * * Username Password Authtype * Mapping_Count
where:
– Target_Name is the target name as displayed in the target list. Use the iscsicli ListTargets command to display the target list.
– Report_To_PNP is T, which exposes the LUN to the operating system as a storage device.
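By way of illustration only, a filled-in call might look like the sketch below. The IQN and portal address are placeholders, the asterisks accept the default for each remaining position in the parameter order shown above, and the command string is only echoed here, not executed; substitute the values reported by iscsicli ListTargets on your host.

```shell
# Assemble a hypothetical PersistentLoginTarget command line (echoed,
# not executed). All values are placeholders following the parameter
# order documented above; "*" accepts the default for that position.
TARGET_IQN="iqn.1984-05.com.dell:powervault.example"   # placeholder IQN
PORTAL="192.168.130.101"                               # array iSCSI port (factory default)
CMD="iscsicli PersistentLoginTarget ${TARGET_IQN} T ${PORTAL} 3260 * * * * * * * * * * * * * 0"
echo "${CMD}"
```

Run the assembled command from an elevated Windows command prompt; a persistent login is re-established automatically each time the host restarts.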
To review optimal network setup and configuration settings, see Guidelines For Configuring Your Network For iSCSI.

Step 8: (Optional) Set Up In-Band Management

Out-of-band management (see Step 1: Discover the Storage Array (Out-of-band Management Only)) is the recommended method for managing the storage array. However, to optionally set up in-band management, use the steps shown below.
8 Appendix — Using Internet Storage Naming Service

The Internet Storage Naming Service (iSNS) server, supported only in Microsoft Windows iSCSI environments, eliminates the need to manually configure each individual storage array with a specific list of initiators and target IP addresses. Instead, iSNS automatically discovers, manages, and configures all iSCSI devices in your environment. For more information on iSNS, including installation and configuration, see microsoft.com.
9 Appendix — Load Balancing

Windows Load Balance Policy

Multi-path drivers select the I/O path to a virtual disk through a specific RAID controller module. When the multi-path driver receives a new I/O, the driver tries to find a path to the current RAID controller module that owns the virtual disk. If that path cannot be found, the multi-path driver migrates the virtual disk ownership to the secondary RAID controller module.
The Computer Management window is displayed. 2. Click Device Manager to show the list of devices attached to the host. 3. Right-click the multi-path disk device for which you want to set load balance policies, then select Properties. 4. From the MPIO tab, select the load balance policy you want to set for this disk device. Changing The Load Balance Policy Using The Windows Server 2008 Disk Management Options 1. From the host desktop, right-click My Computer and select Manage.
Figure 5. Initiator Configuration Two sessions with one TCP connection are configured from the host to each controller (one session per port), for a total of four sessions. The multi-path failover driver balances I/O access across the sessions to the ports on the same controller. In a duplex configuration, with virtual disks on each controller, creating sessions using each of the iSCSI data ports of both controllers increases bandwidth and provides load balancing.
10 Appendix — Stopping And Starting iSCSI Services In Linux

To manually stop the iSCSI services in Linux, certain steps must be followed to maintain parallel processing between the storage array and the host server.
1. Stop all I/O.
2. Unmount all correlated file systems.
3. Stop the iSCSI service by running the following command:
/etc/init.
11 IPv4 Settings — Worksheet NOTE: If you need additional space for more than one host server, use an additional sheet. Static IP address (host server) Subnet A Default gateway (must be different for each NIC) iSCSI port 1 ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ iSCSI port 2 ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ iSCSI port 3 ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ iSCSI port 4 ___ . ___ . ___ . ___ ___ . ___ .
Static IP address (host server) Subnet A Default gateway (must be different for each NIC) iSCSI port 0, In 2 ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ iSCSI port 0, In 3 ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ Management port cntrl 0 ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ iSCSI port 1, In 0 ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ ___ . ___ . ___ . ___ iSCSI port 1, In 1 ___ . ___ . ___ . ___ ___ . ___ .
12 IPv6 Settings — Worksheet NOTE: If you need additional space for more than one host server, use an additional sheet. Host iSCSI port 1 Host iSCSI port 2 Link local IP address ___ . ___ . ___ . ___ Link local IP address ___ . ___ . ___ . ___ Routable IP address ___ . ___ . ___ . ___ Routable IP address ___ . ___ . ___ . ___ Subnet prefix ___ . ___ . ___ . ___ Subnet prefix ___ . ___ . ___ . ___ Gateway ___ . ___ . ___ . ___ Gateway ___ . ___ . ___ .
Router IP address ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____ iSCSI controller 0, In 1 IP address FE80 : 0000 : 0000 : 0000 : ____ : ____ : ____ : ____ Routable IP address 1 ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____ Routable IP address 2 ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____ Router IP address ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____ iSCSI controller 0, In 2 IP address FE80 : 0000 : 0000 : 0000 : ____ : ____ : ____ : ____ Routable IP address 1 _
iSCSI controller 1, In 3 IP address FE80 : 0000 : 0000 : 0000 : ____ : ____ : ____ : ____ Routable IP address 1 ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____ Routable IP address 2 ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____ Router IP address ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____