Dell™ PowerVault™ Modular Disk 3000i Systems Installation Guide
www.dell.com | support.dell.com
Notes, Notices
NOTE: A NOTE indicates important information that helps you make better use of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
____________________
Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved. Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Contents

1 Introduction
    System Requirements
        Management Station Hardware Requirements
    Introduction to Storage Arrays

2 Hardware Installation
    About the Enclosure Connections

Documentation for Windows Systems
    Viewing Resource CD Contents
    Installing the Manuals
Documentation for Linux Systems
    Viewing Resource CD Contents
    Installing the Manuals

Step 5: Configure CHAP Authentication on the Storage Array (optional)
    Configuring Target CHAP Authentication on the Storage Array
    Configuring Mutual CHAP Authentication on the Storage Array
Step 6: Configure CHAP Authentication on the Host Server (optional)
    If you are using Windows Server 2003 or Windows Server 2008 GUI version
Introduction This guide outlines the steps for configuring the Dell™ PowerVault™ Modular Disk 3000i (MD3000i). The guide also covers installing the MD Storage Manager software, installing and configuring the Microsoft® iSCSI and Linux initiators, and accessing documentation from the PowerVault MD3000i Resource CD. Other information provided includes system requirements, storage array organization, initial software startup and verification, and discussions of utilities and premium features.
Introduction to Storage Arrays A storage array includes various hardware components, such as physical disks, RAID controller modules, fans, and power supplies, gathered into enclosures. An enclosure containing physical disks accessed through RAID controller modules is called a RAID enclosure. One or more host servers attached to the storage array can access the data on the storage array.
Hardware Installation This chapter provides guidelines for planning the physical configuration of your Dell™ PowerVault™ MD3000i storage array and for connecting one or more hosts to the array. For complete information on hardware configuration, see the Dell™ PowerVault™ MD3000i Hardware Owner’s Manual. Storage Configuration Planning Consider the following items before installing your storage array: • Evaluate data storage needs and administrative requirements. • Calculate availability requirements.
Cabling the Enclosure You can connect up to 16 hosts and two expansion enclosures to the storage array. To plan your configuration, complete the following tasks: 1 Evaluate your data storage needs and administrative requirements. 2 Determine your hardware capabilities and how you plan to organize your data. 3 Calculate your requirements for the availability of your data. 4 Determine how you plan to back up your data. The iSCSI interface provides many versatile host-to-controller configurations.
Figure 2-1.
Figure 2-2. Up to Four Direct-Attached Servers, Single-Path Data, Dual Controllers (Duplex)
Callouts: 1 — standalone host server (up to four); 2 — Ethernet management port (2); 3 — MD3000i RAID enclosure (dual controllers); 4 — corporate, public, or private network (management traffic)

Dual-Path Data Configuration
In Figure 2-3, up to two servers are directly attached to the MD3000i RAID controller module.
Figure 2-3. One or Two Direct-Attached Servers (or Two-Node Cluster), Dual-Path Data, Dual Controllers (Duplex)
Callouts: 1 — standalone host server (one or two); 2 — two-node cluster; 3 — Ethernet management port (2); 4 — MD3000i RAID enclosure (dual controllers); 5 — corporate, public, or private network

Network-Attached Solutions
You can also cable your host servers to the MD3000i RAID controller iSCSI ports through an industry-standard 1-Gbps Ethernet switch on an IP storage area network (SAN).
Figure 2-4.
Figure 2-5. Up to 16 Dual SAN-Configured Servers, Dual-Path Data, Dual Controllers (Duplex)
Callouts: 1 — up to 16 standalone host servers; 2 — IP SAN (dual Gigabit Ethernet switches); 3 — Ethernet management port (2); 4 — MD3000i RAID enclosure (dual controllers); 5 — corporate, public, or private network

Attaching MD1000 Expansion Enclosures
One of the features of the MD3000i is the ability to add up to two MD1000 expansion enclosures for additional capacity.
Expanding with Previously Configured MD1000 Enclosures
Use this procedure if your MD1000 is currently attached directly to, and configured on, a Dell PERC 5/E system. Data from virtual disks created on a PERC 5 SAS controller cannot be directly migrated to an MD3000i or to an MD1000 expansion enclosure connected to an MD3000i.
6 Turn on attached units:
a Turn on the MD1000 expansion enclosure(s). Wait for the enclosure status LED to light blue.
b Turn on the MD3000i and wait for the status LED to indicate that the unit is ready:
• If the status LEDs light a solid amber, the MD3000i is still coming online.
• If the status LEDs are blinking amber, there is an error that can be viewed using the MD Storage Manager.
• If the status LEDs light a solid blue, the MD3000i is ready.
4 Turn on attached units:
a Turn on the MD1000 expansion enclosure(s). Wait for the enclosure status LED to light blue.
b Turn on the MD3000i and wait for the status LED to indicate that the unit is ready:
• If the status LEDs light a solid amber, the MD3000i is still coming online.
• If the status LEDs are blinking amber, there is an error that can be viewed using the MD Storage Manager.
• If the status LEDs light a solid blue, the MD3000i is ready.
Software Installation The MD3000i Resource CD contains all documentation pertinent to MD3000i hardware and MD Storage Manager software. It also includes software and drivers for both Linux and Microsoft® Windows® operating systems. The MD3000i Resource CD contains a readme.txt file covering changes to the software, updates, fixes, patches, and other important data applicable to both Linux and Windows operating systems. The readme.
Depending on whether you are using a Windows Server 2003 operating system or a Linux operating system, refer to the following steps for downloading and installing the iSCSI initiator. Installing the iSCSI Initiator on a Windows Host Server 1 Refer to the Dell™ PowerVault™ MD3000i Support Matrix on support.dell.com for the latest version and download location of the Microsoft iSCSI Software Initiator software. 2 From the host server, download the iSCSI Initiator software.
3 Select the iscsi-initiator-utils - iSCSI daemon and utility programs option. 4 Click Close, then Update. NOTE: Depending upon your installation method, the system will ask for the required source to install the package. Installing the iSCSI Initiator on a RHEL 5 System You can install the iSCSI initiator software on Red Hat Enterprise Linux 5 systems either during or after operating system installation.
Installing the iSCSI Initiator on a SLES 9 System
You can install the iSCSI initiator software on SUSE® Linux Enterprise Server (SLES) 9 SP3 systems either during or after operating system installation. To install the iSCSI initiator during SLES 9 installation:
1 At the YaST Installation Settings screen, click Change.
2 Click Software, then select Detailed Selection to see a complete list of packages.
3 Select Various Linux Tools, then select linux-iscsi.
4 Click Accept.
4 When the open-iscsi and yast2-iscsi-client modules are displayed, select them. 5 Click Accept. Installing MD Storage Software The MD3000i Storage Software provides the host-based storage agent, multipath driver, and MD Storage Manager application used to operate and manage the storage array solution. The MD Storage Manager application is installed on a host server to configure, manage, and monitor the storage array.
4 Click Next. 5 Accept the terms of the License Agreement, and click Next. The screen shows the default installation path. 6 Click Next to accept the path, or enter a new path and click Next. 7 Select an installation type: • Typical (Full installation) — This package installs both the management station and host software. It includes the necessary host-based storage agent, multipath driver, and MD Storage Manager software.
If you are reconfiguring a cluster node into a standalone host, double-click the MD3000i Cluster to Stand Alone.reg file located in the windows\utility directory of the MD3000i Resource CD. This merges the file into the host registry.
NOTE: These registry files set up the host for the correct failback operation.
3 At the CD main menu, type 2 and press Enter. The installation wizard appears. 4 Click Next. 5 Accept the terms of the License Agreement and click Next. 6 Select an installation type: • Typical (Full installation) — This package installs both the management station and host options. It includes the necessary host-based storage agent, multipath driver, and MD Storage Manager software.
Installing a Dedicated Management Station (Windows and Linux)
Optionally, you can manage your storage array over the network from a dedicated system attached to the array's Ethernet management port. If you choose this option, follow these steps to install MD Storage Manager on that dedicated system.
1 (Windows) From the CD main menu, select Install MD3000i Storage Software.
2 (Linux) From the CD main menu, type 2 and press Enter. The Installation Wizard appears.
3 Click Next.
Documentation for Windows Systems Viewing Resource CD Contents 1 Insert the CD. If autorun is disabled, navigate to the CD and double-click setup.exe. NOTE: On a server running Windows Server 2008 Core version, navigate to the CD and run the setup.bat utility. Only the MD3000i Readme can be viewed on Windows Server 2008 Core versions. Other MD3000i documentation cannot be viewed or installed.
Documentation for Linux Systems Viewing Resource CD Contents 1 Insert the CD. For some Linux distributions, a screen appears asking if you want to run the CD. Select Yes if the screen appears. If no screen appears, execute ./install.sh within the linux folder on the CD.
Installing the Manuals
1 Insert the CD, if necessary, and from the menu screen, type 5 and press Enter.
2 A screen appears showing the default location for installation. Press Enter to accept the path shown, or enter a different path and press Enter.
3 When installation is complete, press any key to return to the main menu.
4 To view the installed documents, open a browser window and navigate to the installation directory.
Array Setup and iSCSI Configuration To use the storage array, you must configure iSCSI on both the host server(s) and the storage array. Step-by-step instructions for configuring iSCSI are described in this section. However, before proceeding here, you must have already installed the Microsoft iSCSI initiator and the MD Storage Manager software. If you have not, refer to Software Installation and complete those procedures before attempting to configure iSCSI.
Table 4-1. Standard Terminology Used in iSCSI Configuration (continued)
iSCSI host port — The iSCSI port (two per controller) on the storage array.
iSNS (Microsoft Internet Storage Naming Service) — An automated discovery, management, and configuration tool used by some iSCSI devices.
management station — The system from which you manage your host server/storage array configuration.
storage array — The enclosure containing the storage data accessed by the host server.
Table 4-2. iSCSI Configuration Worksheet (IPv4 settings)
Host server iSCSI ports: A _____ B _____
Mutual CHAP secret: _____
Target CHAP secret: _____
MD3000i default addresses:
Controller 0 — In 0: 192.168.130.101; In 1: 192.168.131.101; management network port: 192.168.128.101
Controller 1 — In 0: 192.168.130.102; In 1: 192.168.131.102; management network port: 192.168.128.102
If you need additional space for more than one host server, use an additional sheet.
Table 4-3. iSCSI Configuration Worksheet (IPv6 settings)
Host server iSCSI ports: A _____ B _____
Mutual CHAP secret: _____
Target CHAP secret: _____
MD3000i — controller 0: _____ controller 1: _____
If you need additional space for more than one host server, use an additional sheet.
Configuring iSCSI on Your Storage Array
The following sections contain step-by-step instructions for configuring iSCSI on your storage array. Before beginning, however, it is important to understand where each of these steps occurs in relation to your host server/storage array environment. Table 4-4 below shows each specific iSCSI configuration step and where it occurs. Table 4-4. Host Server vs.
Step 1: Discover the Storage Array (Out-of-band management only)
Default Management Port Settings
By default, the storage array management ports are set to DHCP configuration. If a controller on your storage array cannot obtain an IP configuration from a DHCP server, it times out after 10 seconds and falls back to a default static IP address. The default IP configuration is:
Controller 0: IP: 192.168.128.101 Subnet Mask: 255.255.255.0
Controller 1: IP: 192.168.128.102 Subnet Mask: 255.255.255.0
Set Up the Array 1 When discovery is complete, the name of the first storage array found appears under the Summary tab in MD Storage Manager. 2 The default name for the newly discovered storage array is Unnamed. If another name appears, click the down arrow next to that name and choose Unnamed in the drop-down list. 3 Click the Initial Setup Tasks option to see links to the remaining post-installation tasks. For more information about each task, see the User’s Guide.
Table 4-5. Initial Storage Array Setup Tasks (continued)
Task: Set the management port IP addresses on each controller.
Purpose: To set the management port IP addresses to match your public network configuration. Although DHCP is supported, static IP addressing is recommended.
Information needed: In MD Storage Manager, select Initial Setup Tasks→ Configure Ethernet Management Ports, then specify the IP configuration for each management port on the storage array controllers.
Step 2: Configure the iSCSI Ports on the Storage Array
By default, the iSCSI ports on the storage array are set to the following IPv4 settings:
Controller 0, Port 0: IP: 192.168.130.101 Subnet Mask: 255.255.255.0 Port: 3260
Controller 0, Port 1: IP: 192.168.131.101 Subnet Mask: 255.255.255.0 Port: 3260
Controller 1, Port 0: IP: 192.168.130.102 Subnet Mask: 255.255.255.0 Port: 3260
Controller 1, Port 1: IP: 192.168.131.102 Subnet Mask: 255.255.255.0 Port: 3260
NOTE: No default gateway is set.
Step 3: Perform Target Discovery from the iSCSI Initiator This step identifies the iSCSI ports on the storage array to the host server. Select the set of steps in one of the following sections (Windows or Linux) that corresponds to your operating system. If you are using Windows Server 2003 or Windows Server 2008 GUI version 1 Click Start→ Programs→ Microsoft iSCSI Initiator or Start→ All Programs→ Administrative Tools→ iSCSI Initiator. 2 Click the Discovery tab.
If you are using Linux Server
Configuration of the iSCSI initiator for Red Hat® Enterprise Linux® version 4 and SUSE® Linux Enterprise Server 9 distributions is performed by modifying the /etc/iscsi.conf file, which is installed by default when you install MD Storage Manager. You can edit the file directly, or replace the default file with a sample file included on the MD3000i Resource CD.
To use the sample file included on the CD:
1 Save the default /etc/iscsi.conf file.
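For orientation, a minimal /etc/iscsi.conf discovery section might look like the sketch below. This is an illustrative fragment, not the CD's sample file: the four addresses are simply the array's default iSCSI port IPs, and a real file typically carries additional session parameters.

```shell
# Hypothetical /etc/iscsi.conf fragment: one DiscoveryAddress entry per
# iSCSI port on the storage array (array default port IPs shown)
DiscoveryAddress=192.168.130.101
DiscoveryAddress=192.168.131.101
DiscoveryAddress=192.168.130.102
DiscoveryAddress=192.168.131.102
```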
FirstBurstLength=262144
MaxBurstLength=16776192
6 Restart the iSCSI daemon by executing the following command from the console:
/etc/init.d/iscsi restart
7 Verify that the server can connect to the storage array by executing this command from a console:
iscsi -ls
If successful, an iSCSI session has been established to each iSCSI port on the storage array.
4 Edit the following entries in the /etc/iscsi/iscsid.conf file:
a Verify that the node.startup = manual line is disabled (commented out).
b Verify that the node.startup = automatic line is enabled. This enables automatic startup of the service at boot time.
c Verify that the following time-out value is set to 144:
node.session.timeo.replacement_timeout = 144
d Save and close the /etc/iscsi/iscsid.conf file.
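The checks in steps a–c can be scripted with grep before restarting the daemon. The sketch below runs against a throwaway temporary copy so it is safe to try anywhere; on a real host, point conf at /etc/iscsi/iscsid.conf instead.

```shell
# Sketch: verify iscsid.conf-style settings in a temporary copy
conf=$(mktemp)
cat > "$conf" <<'EOF'
node.startup = automatic
node.session.timeo.replacement_timeout = 144
EOF
# count matching lines; 1 means the setting is present exactly as expected
startup_ok=$(grep -c '^node.startup = automatic' "$conf")
timeout_ok=$(grep -c '^node.session.timeo.replacement_timeout = 144' "$conf")
echo "startup=automatic lines: $startup_ok; timeout=144 lines: $timeout_ok"
rm -f "$conf"
```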
Step 4: Configure Host Access This step specifies which host servers will access virtual disks on the storage array. You should perform this step: • before mapping virtual disks to host servers • any time you connect new host servers to the storage array 1 Launch MD Storage Manager. 2 Click on the Configure tab, then select Configure Host Access (Manual). 3 At Enter host name, enter the host server to be available to the storage array for virtual disk mapping.
Understanding CHAP Authentication Before proceeding to either Step 5: Configure CHAP Authentication on the Storage Array (optional) or Step 6: Configure CHAP Authentication on the Host Server (optional), it would be useful to gain an overview of how CHAP authentication works. What is CHAP? Challenge Handshake Authentication Protocol (CHAP) is an optional iSCSI authentication method where the storage array (target) authenticates iSCSI initiators on the host server.
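Because mistyped or wrong-length secrets are a common cause of CHAP login failures, it can help to sanity-check a candidate secret before entering it on either side. The sketch below assumes the 12-to-16-character secret range enforced by the Microsoft iSCSI initiator; confirm the limits that apply to your own initiator, and treat the secret value as a placeholder.

```shell
# Sketch: length-check a candidate CHAP secret (value is hypothetical)
secret="0123456789abcdef"
len=${#secret}
if [ "$len" -ge 12 ] && [ "$len" -le 16 ]; then
  verdict="ok"
else
  verdict="out of range"
fi
echo "CHAP secret length $len: $verdict"
```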
CHAP Definitions
To summarize the differences between target CHAP and mutual CHAP authentication, see Table 4-6.
Table 4-6. CHAP Types Defined
Target CHAP — Sets up accounts that iSCSI initiators use to connect to the target storage array. The target storage array then authenticates the iSCSI initiator.
Mutual CHAP — Applied in addition to target CHAP, mutual CHAP sets up an account that a target storage array uses to connect to an iSCSI initiator.
Step 5: Configure CHAP Authentication on the Storage Array (optional) If you are configuring CHAP authentication of any kind (either target-only or target and mutual), you must complete this step and Step 6: Configure CHAP Authentication on the Host Server (optional). If you are not configuring any type of CHAP, skip these steps and go to Step 7: Connect to the Target Storage Array from the Host Server. NOTE: If you choose to configure mutual CHAP authentication, you must first configure target CHAP.
Configuring Mutual CHAP Authentication on the Storage Array The initiator secret must be unique for each host server that connects to the storage array and must not be the same as the target CHAP secret. 1 From MD Storage Manager, click on the iSCSI tab, then select Enter Mutual Authentication Permissions. 2 Select an initiator on the host server and click the CHAP Secret. 3 Enter the Initiator CHAP secret, confirm it in Confirm initiator CHAP secret, and click OK.
Step 6: Configure CHAP Authentication on the Host Server (optional) If you configured CHAP authentication in Step 5: Configure CHAP Authentication on the Storage Array (optional), complete the following steps. If not, skip to Step 7: Connect to the Target Storage Array from the Host Server. Select the set of steps in one of the following sections (Windows or Linux) that corresponds to your operating system.
8 Click OK. If discovery session failover is desired, repeat step 5 and step 6 (in this step) for all iSCSI ports on the storage array. Otherwise, single-host port configuration is sufficient. NOTE: If the connection fails, make sure that all IP addresses are entered correctly. Mistyped IP addresses are a common cause of connection problems.
OutgoingPassword=0123456789abcdef
DiscoveryAddress=172.168.10.7
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e
OutgoingPassword=0123456789abcdef
DiscoveryAddress=172.168.10.8
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e
OutgoingPassword=0123456789abcdef
DiscoveryAddress=172.168.10.9
OutgoingUsername=iqn.1987-05.com.cisco:01.
If you are using RHEL 5 or SLES 10 SP1
1 To enable CHAP (optional), enable the following line in your /etc/iscsi/iscsid.conf file:
node.session.auth.authmethod = CHAP
2 To set a username and password for CHAP authentication of the initiator by the target(s), edit the following lines as shown:
node.session.auth.username = <iscsi_initiator_username>
node.session.auth.password = <CHAP_initiator_password>
discovery.sendtargets.auth.password = password_1
discovery.sendtargets.auth.username = iqn.1984-05.com.dell:powervault.123456
discovery.sendtargets.auth.password_in = test1234567890

If you are using SLES 10 SP1 via the GUI
1 Select Desktop→ YaST→ iSCSI Initiator.
2 Click Service Start, then select When Booting.
3 Select Discovered Targets, then select Discovery.
4 Enter the IP address of the port.
5 Click Next.
6 Select any target that is not logged in and click Log in.
Step 7: Connect to the Target Storage Array from the Host Server If you are using Windows Server 2003 or Windows Server 2008 GUI 1 Click Start→ Programs→ Microsoft iSCSI Initiator or Start→ All Programs→ Administrative Tools→ iSCSI Initiator. 2 Click the Targets tab. If previous target discovery was successful, the iqn of the storage array should be displayed under Targets. 3 Click Log On. 4 Select Automatically restore this connection when the system boots. 5 Select Enable multi-path.
If you are using Windows Server 2008 Core Version
1 Set the iSCSI initiator service to start automatically (if not already set):
sc \\<server_name> config msiscsi start= auto
2 Start the iSCSI service (if necessary):
sc start msiscsi
3 Log on to the target:
iscsicli PersistentLoginTarget <Target_Name> * * * * * * * * *
where <Target_Name> is the target name as displayed
To view active sessions to the target, use the following command: iscsicli SessionList To support storage array controller failover, the host server must be connected to at least one iSCSI port on each controller. Repeat step 3 for each iSCSI port on the storage array that you want to establish as a failover target. (The Target_Portal_Address will be different for each port you connect to). PersistentLoginTarget does not initiate a login to the target until after the system is rebooted.
SESSION STATUS : ESTABLISHED AT Wed May 9 18:20:28 CDT 2007
SESSION ID : ISID 00023d000002 TSIH 4
*******************************************************************************

Viewing the status of your iSCSI connections
In MD Storage Manager, clicking the iSCSI tab and then Configure iSCSI Host Ports shows the status of each iSCSI port you attempted to connect and the configuration state of all IP addresses.
Step 8: (Optional) Set Up In-Band Management
Out-of-band management (see Step 1: Discover the Storage Array (Out-of-band management only)) is the recommended method for managing the storage array. However, to optionally set up in-band management, use the steps shown below. The default iSCSI host port IPv4 addresses are shown below for reference:
Controller 0, Port 0: IP: 192.168.130.101
Controller 0, Port 1: IP: 192.168.131.101
Controller 1, Port 0: IP: 192.168.130.102
Controller 1, Port 1: IP: 192.168.131.102
Premium Features If you purchased premium features for your storage array, you can set them up at this point. Click Tools→ View/Enable Premium Features or View and Enable Premium Features on the Initial Setup Tasks dialog box to review the features available.
• Needs Upgrade — The storage array is running a level of firmware that is no longer supported by MD Storage Manager.
• Support Information Bundle — The Gather Support Information link on the Support tab saves all storage array data, such as profile and event log information, to a file that you can send if you seek technical assistance for problem resolution. It is helpful to generate this file before you contact Dell support with MD3000i-related issues.
Uninstalling Software
The following sections contain information on how to uninstall MD Storage Manager software from both host and management station systems.
Uninstalling From Windows
Use the Change/Remove Program feature to uninstall MD Storage Manager from Microsoft® Windows® operating systems other than Windows Server 2008:
1 From the Control Panel, double-click Add or Remove Programs.
2 Select MD Storage Manager from the list of programs.
3 From the Uninstall window, click Next and follow the on-screen instructions. 4 Select Yes to restart the system, then click Done. Uninstalling From Linux Use the following procedure to uninstall MD Storage Manager from a Linux system. 1 By default, MD Storage Manager is installed in the /opt/dell/mdstoragemanager directory. If another directory was used during installation, navigate to that directory before beginning the Uninstall procedure. 2 From the installation directory, type .
Guidelines for Configuring Your Network for iSCSI This section gives general guidelines for setting up your network environment and IP addresses for use with the iSCSI ports on your host server and storage array. Your specific network environment may require different or additional steps than shown here, so make sure you consult with your system administrator before performing this setup.
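A quick way to apply these guidelines is to confirm that a host port and the array port it will talk to mask to the same network address. The sketch below uses a hypothetical host address alongside one of the array's default iSCSI port IPs; substitute your own addresses and netmask.

```shell
# Sketch: AND two IPv4 addresses with a netmask, octet by octet, and
# compare the resulting network addresses
mask() {
  local IFS=. a1 a2 a3 a4 m1 m2 m3 m4
  read a1 a2 a3 a4 <<EOF
$1
EOF
  read m1 m2 m3 m4 <<EOF
$2
EOF
  echo "$((a1 & m1)).$((a2 & m2)).$((a3 & m3)).$((a4 & m4))"
}
host_net=$(mask 192.168.130.21 255.255.255.0)    # hypothetical host port
array_net=$(mask 192.168.130.101 255.255.255.0)  # array default In 0 port
if [ "$host_net" = "$array_net" ]; then match=yes; else match=no; fi
echo "host net: $host_net, array net: $array_net, same subnet: $match"
```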
4 Select Use the following IP address and enter the IP address, subnet mask, and default gateway addresses. If using a DNS server 1 On the Control Panel, select Network connections or Network and Sharing Center. Then click Manage network connections. 2 Right-click the network connection you want to configure and select Properties. 3 On the General tab (for a local area connection) or the Networking tab (for all other connections), select Internet Protocol (TCP/IP), and then click Properties.
NOTE: The server IP addresses must be configured for network communication on the same IP subnet as the storage array management and iSCSI ports.
Configuring TCP/IP on Linux using DHCP (root users only)
1 Edit the /etc/sysconfig/network file as follows:
NETWORKING=yes
HOSTNAME=mymachine.mycompany.com
2 Edit the configuration file for the connection you want to configure, either /etc/sysconfig/network-scripts/ifcfg-ethX (for RHEL) or /etc/sysconfig/network/ifcfg-eth-id-XX:XX:XX:XX:XX (for SUSE).
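As an illustration of the static (non-DHCP) case, a RHEL-style ifcfg file for a NIC dedicated to iSCSI traffic might look like the following. The device name and address are hypothetical, chosen to land on the same /24 as the array's default "In 0" ports; adjust them to your own network plan.

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth1 contents
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.130.21
NETMASK=255.255.255.0
ONBOOT=yes
```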
Index

A
alerts, 38
C
cabling, 9-10
  diagrams, 11
  direct attached, 10
  enclosure, 10
  redundancy and nonredundancy, 10
  single path, 10
CHAP, 45
  mutual, 45
  target, 45
cluster host, setting up, 24
cluster node, reconfiguring, 25
D
disk group, 8
documentation, 28
  manuals, 28, 30
E
enclosure connections, 9
event monitor, 24-25
H
hot spares, 8-9
I
initial storage array setup, 35
  alerts, 38
  password, 37
  renaming, 37
  set IP addresses, 37
installing dedicated management station, 27
iSCSI, management, 36
iSNS, 35
R
RAID, 8
RDAC MPP driver, 26
readme, 28-29
Recovery Guru, 59
Resource CD, 19, 25, 29-30
S
Snapshot Virtual Disk, 9, 59
status, 37, 60
status icons, 37, 59
storage array, 8
Storage Array Profile, 59
storage configuration and planning, 9
T
troubleshooting, 59
U
uninstalling, Windows, 61
V
virtual disk, 8-9
Virtual Disk Copy, 9, 59
Volume Shadow-copy Service, See VSS
VSS, 25
W
Windows, 19, 28, 61