Dell EMC PowerVault ME4 Series Storage System Deployment Guide July 2020 Rev.
Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. © 2018 – 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents

Chapter 1: Before you begin
Chapter 2: Mount the enclosures in the rack
Chapter 3: Connect to the management network
Chapter 4: Cable host servers to the storage system
Chapter 5: Connect power cables and power on the storage system
Chapter 6: Perform system and storage setup
Chapter 7: Perform host setup
Chapter 8: Troubleshooting and problem solving
1 Before you begin

This document describes initial hardware setup for Dell EMC PowerVault ME4 Series enclosures.

Topics:
• Unpack the enclosure
• Safety guidelines
• Installation checklist
• Planning for installation
• Preparing for installation
• Disk drive module
• Populating drawers with DDICs

Unpack the enclosure

Examine the packaging for crushes, cuts, water damage, or any other evidence of mishandling during transit.
Figure 2. Unpacking the 5U84 enclosure
1. Storage system enclosure
2. DDICs (Disk Drive in Carriers)
3. Documentation
4. Rackmount left rail (5U84)
5. Rackmount right rail (5U84)
6. Drawers

DDICs ship in a separate container and must be installed into the enclosure drawers during product installation. For rackmount installations, DDICs are installed after the enclosure is mounted in the rack. See Populating drawers with DDICs on page 14.
CAUTION: Do not try to lift the enclosure by yourself: • Fully configured 2U12 enclosures can weigh up to 32 kg (71 lb) • Fully configured 2U24 enclosures can weigh up to 30 kg (66 lb) • Fully configured 5U84 enclosures can weigh up to 135 kg (298 lb). An unpopulated enclosure weighs 46 kg (101 lb). • Use a minimum of two people to lift the 5U84 enclosure from the shipping box and install it in the rack.
Rack system safety precautions

The following safety requirements must be considered when the enclosure is mounted in a rack:
• The rack construction must support the total weight of the installed enclosures.
• The design should incorporate stabilizing features to prevent the rack from tipping or being pushed over during installation or in normal use.
• When loading a rack with enclosures, fill the rack from the bottom up; and empty the rack from the top down.
Table 1. Installation checklist (continued)

Step  Task                                        Where to find procedure
      Install the required host software.         See Linux hosts on page 47, VMware ESXi hosts on page 54, and Citrix XenServer hosts on page 60.
10    Perform the initial configuration tasks.3   See Using guided setup on page 32.

1. The environment in which the enclosure operates must be dust-free to ensure adequate airflow.
Preparing the site and host server

Before beginning the enclosure installation, verify that the site where you plan to install your storage system has the following:
• Power from an independent source or a rack power distribution unit with an Uninterruptible Power Supply (UPS) for each redundant power supply module. 2U enclosures use standard AC power, and the 5U84 enclosure requires high-line (high-voltage) AC power.
• A host computer configured with the appropriate software, BIOS, and drivers.
• Secure location of the carrier into and out of drive slots.
• Positive spring-loading of the drive/midplane connector.

The carrier can use this interface:
• Dual path direct dock Serial Attached SCSI.

The following figures display the supported drive carrier modules:

Figure 3. Dual path LFF 3.5" drive carrier module

Figure 4. Dual path SFF 2.5" drive carrier module

Figure 5. 2.5" to 3.5" hybrid drive carrier adapter
Figure 6. Blank drive carrier modules: 3.5" drive slot (left); 2.5" drive slot (right) DDIC in a 5U enclosure Each disk drive is installed in a DDIC that enables secure insertion of the disk drive into the drawer with the appropriate SAS carrier transition card. The DDIC features a slide latch button with directional arrow. The slide latch enables you to install and secure the DDIC into the disk slot within the drawer.
Figure 8. 2.5" drive in a 3.5" DDIC with a hybrid drive carrier adapter

Populating drawers with DDICs

The 5U84 enclosure does not ship with DDICs installed. Before populating drawers with DDICs, ensure that you adhere to the following guidelines:
• The minimum number of disks that are supported by the enclosure is 28, 14 in each drawer.
• DDICs must be added to disk slots in complete rows (14 disks at a time).
2 Mount the enclosures in the rack

This section describes how to unpack the ME4 Series Storage System equipment, prepare for installation, and safely mount the enclosures into the rack.

Topics:
• Rackmount rail kit
• Install the 2U enclosure
• Install the 5U84 enclosure
• Connect optional expansion enclosures

Rackmount rail kit

Rack mounting rails are available for use in 19-inch rack cabinets. The rails have been designed and tested for the maximum enclosure weight.
Figure 10. Secure brackets to the rail (left-hand rail shown for 2U)
1. Front rack post (square hole)
3. Left rail
5. Clamping screw (B)
7. Fastening screw (A)
9. Left rail position locking screw
11. Key: Rail kit fasteners used in rack-mount installation
The adjustment range of the rail kit from the front post to the rear post is 660 mm–840 mm. This range suits a one-meter deep rack within Rack Specification IEC 60297.

1. To facilitate access, remove the door from the rack.
2. Ensure that the preassembled rails are at their shortest length.
   NOTE: See the reference label on the rail.
3. Locate the rail location pins inside the front of the rack, and extend the length of the rail assembly to position the rear location pins.
Connect optional expansion enclosures ME4 Series controller enclosures support 2U12, 2U24, and 5U84 expansion enclosures. 2U12 and 2U24 expansion enclosures can be intermixed, however 2U expansion enclosures cannot be intermixed with 5U84 expansion enclosures in the same storage system. NOTE: To add expansion enclosures to an existing storage system, power down the controller enclosure before connecting the expansion enclosures.
Figure 13. Cabling connections between a 2U controller enclosure and 2U expansion enclosures
1. Controller module A (0A)
2. Controller module B (0B)
3. IOM (1A)
4. IOM (1B)
5. IOM (2A)
6. IOM (2B)
7. IOM (3A)
8. IOM (3B)
9. IOM (9A)
10. IOM (9B)

Figure 14. Cabling connections between a 5U controller enclosure and 5U expansion enclosures on page 19 shows the maximum cabling configuration for a 5U84 controller enclosure with 5U84 expansion enclosures (four enclosures including the controller enclosure).
Figure 15. Cabling connections between a 2U controller enclosure and 5U84 expansion enclosures
1. Controller module A (0A)
2. Controller module B (0B)
3. IOM (1A)
4. IOM (1B)
5. IOM (2A)
6. IOM (2B)
7. IOM (3A)
8. IOM (3B)

Label the back-end cables

Make sure to label the back-end SAS cables that connect the controller enclosure and the expansion enclosures.
3 Connect to the management network

Perform the following steps to connect a controller enclosure to the management network:
1. Connect an Ethernet cable to the network port on each controller module.
2. Connect the other end of each Ethernet cable to a network that your management host can access, preferably on the same subnet.

NOTE: If you connect the iSCSI and management ports to the same physical switches, Dell EMC recommends using separate VLANs.

Figure 16.
4 Cable host servers to the storage system

This section describes the different ways that host servers can be connected to a storage system.

Topics:
• Cabling considerations
• Connecting the enclosure to hosts
• Host connection

Cabling considerations

Host interface ports on ME4 Series controller enclosures can connect to respective hosts using direct-attach or switch-attach methods. Another important cabling consideration is cabling controller enclosures to enable the replication feature.
Alternatively, the ME4 Series enables you to set the CNC ports to use FC and iSCSI protocols in combination. When configuring a combination of host interface protocols, host ports 0 and 1 must be configured for FC, and host ports 2 and 3 must be configured for iSCSI. The CNC ports must use qualified SFP+ connectors and cables for the selected host interface protocol. For more information, see SFP+ transceiver for FC/iSCSI ports on page 93.
Figure 18. Two subnet switch example (IPv4)

Table 3. Two subnet switch example

No.  Device                  IP Address       Subnet
1    A0                      192.68.10.200    10
2    A1                      192.68.11.210    11
3    A2                      192.68.10.220    10
4    A3                      192.68.11.230    11
5    B0                      192.68.10.205    10
6    B1                      192.68.11.215    11
7    B2                      192.68.10.225    10
8    B3                      192.68.11.235    11
9    Switch A                N/A              N/A
10   Switch B                N/A              N/A
11   Host server 1, Port 0   192.68.10.20     10
12   Host server 1, Port 1   192.68.11.20     11
13   Host server 2, Port 0   192.68.10.
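The subnet layout in the table above can be checked mechanically: in this addressing scheme the third octet of each address (10 or 11) identifies the subnet. A minimal sketch, using example addresses from the table (the helper name is illustrative, not a Dell tool):

```shell
# Return the third octet of a dotted-quad IPv4 address; in the two-subnet
# layout above, this octet (10 or 11) identifies the subnet.
subnet_octet() {
  echo "$1" | cut -d. -f3
}

subnet_octet 192.68.10.200   # controller A port 0
subnet_octet 192.68.11.210   # controller A port 1
```

Alternating ports between the two subnets in this way gives each controller a path through both switches.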
Host connection ME4 Series controller enclosures support up to eight direct-connect server connections, four per controller module. Connect appropriate cables from the server HBAs to the controller module host ports as described in the following sections. 16 Gb Fibre Channel host connection To connect controller modules supporting FC host interface ports to a server HBA or switch, using the controller CNC ports, select a qualified FC SFP+ transceiver.
Connecting direct attach configurations A dual-controller configuration improves application availability. If a controller failure occurs, the affected controller fails over to the healthy partner controller with little interruption to data flow. A failed controller can be replaced without the need to shut down the storage system. NOTE: In the following examples, a single diagram represents CNC, SAS, and 10Gbase-T host connections for ME4 Series controller enclosures.
Figure 21. Connecting hosts: ME4 Series 5U direct attach – one server, one HBA, dual path
1. Server
2. Controller module in slot A
3. Controller module in slot B

Figure 22. Connecting hosts: ME4 Series 2U direct attach – two servers, one HBA per server, dual path
1. Server 1
2. Server 2
3. Controller module in slot A
4. Controller module in slot B

Figure 23. Connecting hosts: ME4 Series 5U direct attach – two servers, one HBA per server, dual path
1. Server 1
2. Server 2
3. Controller module in slot A
Figure 25. Connecting hosts: ME4 Series 5U direct attach – four servers, one HBA per server, dual path
1. Server 1
2. Server 2
3. Server 3
4. Server 4
5. Controller module A
6. Controller module B

Dual-controller module configurations – switch-attached

A switch-attached solution—or SAN—places a switch between the servers and the controller enclosures within the storage system.
Figure 27. Connecting hosts: ME4 Series 5U switch-attached – two servers, two switches
1. Server 1
2. Server 2
3. Switch A
4. Switch B
5. Controller module A
6. Controller module B

Label the front-end cables

Make sure to label the front-end cables to identify the controller module and host interface port to which each cable connects.
5 Connect power cables and power on the storage system

Before powering on the enclosure system, ensure that all modules are firmly seated in their correct slots. Verify that you have successfully completed the Installation checklist on page 9 instructions. Once you have completed steps 1–7, you can access the management interfaces using your web browser to complete the system setup.
Testing enclosure connections See Powering on on page 31. Once the power-on sequence succeeds, the storage system is ready to be connected as described in Connecting the enclosure to hosts on page 22. Grounding checks The enclosure system must be connected to a power source that has a safety electrical grounding connection.
6 Perform system and storage setup

The following sections describe how to set up a Dell EMC PowerVault ME4 Series storage system:

Topics:
• Record storage system information
• Using guided setup

Record storage system information

Use the System Information Worksheet on page 95 to record the information that you need to install the ME4 Series storage system.

Using guided setup

Upon completing the hardware installation, use PowerVault Manager to configure, provision, monitor, and manage the storage system.
The storage system displays the Welcome panel. The Welcome panel provides options for setting up and provisioning your storage system.
4. If the storage system is running G280 firmware:
   a. Click Get Started.
   b. Read the Commercial Terms of Sale and End User License Agreement, and click Accept.
   c. Type a new username for the storage system in the Username field.
   d. Type a password for the new username in the Password and Confirm Password fields.
   e. Click Apply and Continue.
6. Click Host Setup to access the Host Setup wizard and follow the prompts to continue provisioning your system by attaching hosts. For more information, see Host system requirements on page 41. Configuring system settings The System Settings panel provides options for you to quickly configure your system. Navigate the options by clicking the tabs on the left side of the panel. Tabs with a red asterisk next to them are required. To apply and save changes, click Apply.
IPv4 uses 32-bit addresses.
3. Select the type of IP address settings to use for each controller from the Source drop-down menu:
   • Select Manual to specify static IP addresses.
   • Select DHCP to allow the system to automatically obtain IP addresses from a DHCP server.
4. If you selected Manual, perform the following steps:
   a. Type the IP address, IP mask, and Gateway addresses for each controller.
   b. Record the IP addresses.
• To disable email notifications, clear the Enable Email Notifications check box. 4. If email notification is enabled, select the minimum severity for which the system should send email notifications: Critical (only); Error (and Critical); Warning (and Error and Critical); Resolved (and Error, Critical, and Warning); Informational (all). 5. If email notification is enabled, in one or more of the Email Address fields enter an email address to which the system should send notifications.
For a system with 4-port SFP+ controller modules (CNC), all host ports ship from the factory in Fibre Channel (FC) mode. However, the ports can be configured as a combination of FC or iSCSI ports. FC ports support use of qualified 16 Gb/s SFP transceivers. You can set FC ports to auto-negotiate the link speed or to use a specific link speed. iSCSI ports support use of qualified 10 Gb/s SFP transceivers.
Table 4. Options for iSCSI ports (continued) Enable Jumbo Frames Enables or disables support for jumbo frames. Allowing for 100 bytes of overhead, a normal frame can contain a 1400-byte payload whereas a jumbo frame can contain a maximum 8900-byte payload for larger data transfers. NOTE: Use of jumbo frames can succeed only if jumbo-frame support is enabled on all network components in the data path.
• Enable Authentication (CHAP). Enables or disables use of Challenge Handshake Authentication Protocol. Enabling or disabling CHAP in this panel updates the setting in the Configure CHAP panel (available in the Hosts topic by selecting Action > Configure CHAP). CHAP is disabled by default.
• Link Speed.
   ○ auto—Auto-negotiates the proper speed.
   ○ 1 Gb/s—This setting does not apply to 10 Gb/sec HBAs.
• Enable Jumbo Frames: Enables or disables support for jumbo frames.
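The payload figures quoted in the jumbo-frame note above follow from subtracting the 100-byte overhead allowance from the nominal frame sizes. A quick sketch of that arithmetic, assuming the usual nominal MTUs of 1500 (standard) and 9000 (jumbo) bytes:

```shell
# Payload available once a 100-byte overhead allowance is reserved,
# per the jumbo-frame note above. MTU values are the usual nominal ones.
OVERHEAD=100
normal_payload=$((1500 - OVERHEAD))   # 1400-byte payload in a normal frame
jumbo_payload=$((9000 - OVERHEAD))    # 8900-byte payload in a jumbo frame
echo "$normal_payload $jumbo_payload"
```

Remember that jumbo frames only help if every component in the data path has jumbo support enabled.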
○ Up to 32 pools per installed RAID controller and one disk group per pool
○ RAID levels 0, 1, 3, 5, 6, 10, 50, ADAPT, and NRAID
○ Adding individual disks to increase RAID capacity is supported for RAID 0, 3, 5, 6, 10, 50, and ADAPT disk groups
○ Configurable chunk size per disk group
○ Global, dedicated, and/or dynamic hot spares

NOTE: Dell EMC recommends using virtual storage.

NOTE: After you create a disk group using one storage type, the system will use that storage type for additional disk groups.
7 Perform host setup

This section describes how to perform host setup for Dell EMC PowerVault ME4 Series storage systems.

Dell EMC recommends performing host setup on only one host at a time. For a list of supported HBAs or iSCSI network adapters, see the Dell EMC PowerVault ME4 Series Storage System Support Matrix. For more information, see the topics about initiators, hosts, and host groups, and attaching hosts and volumes in the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide.
Attach FC hosts to the storage system Perform the following steps to attach FC hosts to the storage system: 1. Ensure that all HBAs have the latest supported firmware and drivers as described on Dell.com/support. For a list of supported FC HBAs, see the Dell EMC ME4 Series Storage System Support Matrix on Dell.com/support. 2. Use the FC cabling diagrams to cable the hosts to the storage system either by using switches or connecting the hosts directly to the storage system. 3.
Enable MPIO for the volumes on the Windows server Perform the following steps to enable MPIO for the volumes on the Windows server: 1. Open the Server Manager. 2. Select Tools > MPIO. 3. Click the Discover Multi-Paths tab. 4. Select DellEMC ME4 in the Device Hardware Id list. If DellEMC ME4 is not listed in the Device Hardware Id list: a. Ensure that there is more than one connection to a volume for multipathing. b. Ensure that Dell EMC ME4 is not already listed in the Devices list on the MPIO Devices tab.
Table 6. Example worksheet for host server with dual port iSCSI NICs (continued)

                              Management IP
ME4024 controller A port 1    172.2.101.128
ME4024 controller B port 1    172.2.201.129
ME4024 controller A port 3    172.2.103.128
ME4024 controller B port 3    172.2.203.129
Subnet Mask                   255.255.0.0

NOTE: The following instructions document IPv4 configurations with a dual switch subnet for network redundancy and failover. They do not cover IPv6 configuration.
4. Using the planning worksheet that you created in the Prerequisites section, type the IP address of a port on controller A that is on the first subnet and click OK. 5. Repeat steps 3-4 to add the IP address of a port on the second subnet that is from controller B . 6. Click the Targets tab, select a discovered target, and click Connect. 7. Select the Enable multi-path check box and click Advanced. The Advanced Settings dialog box opens.
Format volumes on the Windows server

Perform the following steps to format a volume on a Windows server:
1. Open Server Manager.
2. Select Tools > Computer Management.
3. Right-click Disk Management and select Rescan Disks.
4. Right-click on the new disk and select Online.
5. Right-click on the new disk again and select Initialize Disk. The Initialize Disk dialog box opens.
6. Select the partition style for the disk and click OK.
4. Type a host name in the Host Name field. 5. Using the information documented in step 4 of Attach SAS hosts to the storage system, select the SAS initiators for the host you are configuring, then click Next. 6. Group hosts together with other hosts in a cluster. • For cluster configurations, group hosts together so that all hosts within the group share the same storage. ○ If this host is the first host in the cluster, select Create a new host group, type a name for the host group, and click Next.
Attach hosts to the storage system

Perform the following steps to attach Fibre Channel hosts to the storage system:
1. Ensure that all HBAs have the latest supported firmware and drivers as described on the Dell Support portal. For a list of supported standard FC HBAs, see the Dell EMC PowerVault ME4 Series Storage System Support Matrix on the Dell website. For OEMs, contact your hardware provider.
2.
2. If no configuration exists, use the information that is listed from running the command in step 1 to copy a default template to the directory /etc. 3. If the DM multipath kernel driver is not loaded: a. Run the systemctl enable multipathd command to enable the service to run automatically. b. Run the systemctl start multipathd command to start the service. 4. Run the multipath command to load storage devices along with the configuration file. 5.
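The multipathd steps above can be collected into one short sequence. This is a sketch, not Dell tooling: the function below only prints the commands by default (pass `sudo` as the runner to execute them on the RHEL host), and it assumes the stock `device-mapper-multipath` package provides the `multipathd` service and `multipath` command:

```shell
# Sketch of the DM-multipath bring-up described above (RHEL).
# Pass a runner to execute for real, e.g.: dm_multipath_setup sudo
dm_multipath_setup() {
  run=${1:-echo}                     # default runner just prints the commands
  $run systemctl enable multipathd   # start the service automatically at boot
  $run systemctl start multipathd    # start the service now
  $run multipath                     # load storage devices per /etc/multipath.conf
}

dm_multipath_setup                   # dry-run: prints the three commands
```

After the service is running, `multipath -ll` is the usual way to confirm that each volume shows multiple active paths.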
Table 7. Example worksheet for single host server with dual port iSCSI NICs (continued)

                              Management IP
Server iSCSI NIC 1            172.2.96.46
ME4024 controller A port 1    172.2.101.128
ME4024 controller B port 1    172.2.201.129
ME4024 controller A port 3    172.2.103.128
ME4024 controller B port 3    172.2.203.129
Subnet Mask                   255.255.0.0

The following instructions document IPv4 configurations with a dual switch subnet for network redundancy and failover. They do not cover IPv6 configuration.
9. Repeat steps 1-8 for each NIC you are assigning IP addresses to (NIC1 and NIC2 in the planning worksheet you created in the “Prerequisites” section).
10. Select OK to exit network settings.
11. Select OK to exit YaST.

Configure the iSCSI initiators to connect to the ME4 Series storage system

For RHEL 7
1. From the server terminal or console, run the following iscsiadm command to discover targets (port A0):
   iscsiadm -m discovery -t sendtargets -p <IP address>
   where <IP address> is the IP address of the port.
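Discovery commands like the one above are repeated once per controller portal. A dry-run sketch that prints the command for each ME4024 portal listed in the example worksheet (the addresses are the worksheet examples — substitute your own, and run the printed commands as root):

```shell
# Print the iscsiadm discovery command for each ME4024 iSCSI portal.
# Addresses below come from the example worksheet; substitute your own.
print_discovery_cmds() {
  for portal in "$@"; do
    echo "iscsiadm -m discovery -t sendtargets -p $portal"
  done
}

print_discovery_cmds 172.2.101.128 172.2.201.129 172.2.103.128 172.2.203.129
```

Once all portals are discovered, `iscsiadm -m node --login` logs in to the discovered targets.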
7. On the Attach Volumes page, specify the name, size, and pool for each volume, and click Next. To add a volume, click Add Row. To remove a volume, click Remove.
   NOTE: Dell EMC recommends that you update the name with the hostname to better identify the volumes.
8. On the Summary page, review the host configuration settings, and click Configure Host. If the host is successfully configured, a Success dialog box is displayed.
9.
2. Use the SAS cabling diagrams to cable the host servers directly to the storage system.
3. Identify SAS HBA initiators to connect to the storage system by doing the following:
   a. Open a terminal session.
   b. Run the dmesg|grep scsi|grep slot command.
   c. Record the WWN numeric name.

Register the host and create and map volumes

1. Log in to the PowerVault Manager.
2. Access the Host Setup wizard by doing one of the following:
   • From the Welcome screen, click Host Setup.
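Steps 3b–3c above can be scripted: a pattern match pulls the WWN out of the dmesg output. In the sketch below, the sample line and the 0x5… address are illustrative only — they are not output that your HBA is guaranteed to produce:

```shell
# Extract a SAS WWN (0x followed by 16 hex digits) from dmesg-style text.
# The sample line below is illustrative, not real controller output.
sample='scsi 1:0:0:0: Direct-Access ... SAS address 0x500c0ff0401de53c slot 2'
echo "$sample" | grep -o '0x[0-9a-f]\{16\}'
```

On the host, the same pattern can be applied to the live output: `dmesg | grep scsi | grep slot | grep -o '0x[0-9a-f]\{16\}'`.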
3. Run the mkdir /mnt/VolA command to create a mount point for this file system with a referenced name, such as VolA.
4. Run the mount /dev/mapper/mpatha /mnt/VolA command to mount the file system.
5. Begin using the file system as any other directory to host applications or file services.
6. Repeat steps 1-5 for other provisioned volumes from the PowerVault Manager. For example, to /dev/mapper/mpathb, correlating to sg block devices /dev/sdc and /dev/sde.
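Steps 3–4 generalize to any multipath device. A dry-run sketch that prints the mkdir/mount commands per device (the device and mount-point names follow the VolA/mpatha example above and are illustrative; run the printed commands as root):

```shell
# Print the mount-point creation and mount commands for a multipath device,
# following the VolA/mpatha example above. Names are illustrative.
mount_cmds() {
  dev=$1; name=$2
  echo "mkdir /mnt/$name"
  echo "mount /dev/mapper/$dev /mnt/$name"
}

mount_cmds mpatha VolA
mount_cmds mpathb VolB
```

Adding matching entries to /etc/fstab (not shown) would make the mounts persistent across reboots.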
7. On the Attach Volumes page, specify the name, size, and pool for each volume, and click Next. To add a volume, click Add Row. To remove a volume, click Remove.
   NOTE: Dell EMC recommends that you update the name with the hostname to better identify the volumes.
8. On the Summary page, review the host configuration settings, and click Configure Host. If the host is successfully configured, a Success dialog box is displayed.
9.
Table 8. Example worksheet for single host server with dual port iSCSI NICs (continued)

                              Management IP
Subnet 1
Server iSCSI NIC 1            172.1.96.46
ME4024 controller A port 0    172.1.100.128
ME4024 controller B port 0    172.1.200.129
ME4024 controller A port 2    172.1.102.128
ME4024 controller B port 2    172.1.202.129
Subnet Mask                   255.255.0.0
Subnet 2
Server iSCSI NIC 1            172.2.96.46
ME4024 controller A port 1    172.2.101.128
ME4024 controller B port 1    172.2.201.129
ME4024 controller A port 3    172.2.103.
Configure the software iSCSI adapter on the ESXi host

Perform the following steps to configure a software iSCSI adapter on the ESXi host:

NOTE: If you plan to use VMware ESXi with 10GBase-T controllers, you must perform one of the following tasks:
• Update the controller firmware to the latest version posted on Dell.com/support before connecting the ESXi host to the ME4 Series storage system.
VMware Volume rescan and datastore creation Perform the following steps to rescan volumes and create datastores: 1. Log in to the VMware vCenter Server, then click the ESXi host that was configured in step 5 of Attach SAS hosts to the storage system on page 58. 2. On the Configure tab, select Storage > Storage Adapters, then select the software iSCSI adapter HBA and click the Rescan option. 3. Click OK on the Rescan Storage dialog box.
a. For cluster configurations, use the “Host groups” setting to group hosts in a cluster. • • If this host is the first host in the cluster, select Create a new host group, then provide a name and click Next. If this host is being added to a host group that exists, select Add to existing host group. Select the group from the dropdown list, then click Next. b. For stand-alone hosts, select the Do not group this host option, then click Next. 7.
2. On the Configure tab, select Storage Devices. 3. Perform a rescan of the storage devices. 4. Select the iSCSI disk (Dell EMC iSCSI disk) created in the Register the host and create and map volumes on page 54 procedure, then select the Properties tab below the screen. 5. Scroll down to select the Edit Multipathing option, then select Round Robin (VMware) from the drop-down list. 6. Click OK. 7.
5. Click OK. 6. Right-click the host, and select Exit Maintenance Mode. Repeat the previous steps for all the hosts in the pool. Register hosts and create volumes Perform the following steps to register hosts, and create volumes using the PowerVault Manager: 1. Log in to the PowerVault Manager. 2. Access the Host Setup wizard: • • From the Welcome screen, click Host Setup. From the Home topic, click Action > Host Setup. 3. Confirm that all the Fibre Channel prerequisites have been met, then click Next.
The new SR is displayed in the Resources pane, at the pool level.

iSCSI host server configuration for Citrix XenServer

The following sections describe how to configure iSCSI host servers running Citrix XenServer:

Prerequisites
• Complete the PowerVault Manager guided setup process and storage setup process.
• See the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful deployment.
Configure the software iSCSI adapter on the XenServer host

Perform the following steps to configure a software iSCSI adapter on a XenServer host:
1. Log in to XenCenter and select the XenServer host.
2. Select the pool in the Resources pane, and click the Networking tab.
3. Identify and document the network name that is used for iSCSI traffic.
4. Click Configure. The Configure IP Address dialog box is displayed.
5. Select Add IP address in the left pane.
7. Group hosts together with other hosts in a cluster. a. Select the host to add to the host group. b. Select Action > Add to Host Group. The Add to Host Group dialog box is displayed. c. Type a host group name or select a host group from the Host Group Select field and click OK. 8. Map the volumes to the host group. a. Click the Volumes topic, select the volume to map. If a volume does not exist, create a volume. b. Select Action > Map Volumes. c. d. e. f. The Map dialog box is displayed.
Attach SAS hosts to the storage system

Perform the following steps to attach SAS hosts to the storage system:
1. Ensure that all HBAs have the latest supported firmware and drivers as described on Dell.com/support. For a list of supported SAS HBAs, see the Dell EMC ME4 Series Storage System Support Matrix.
2. Use the SAS cabling diagrams to cable the hosts to the storage system either by using switches or connecting the hosts directly to the storage system.
3.
Create a Storage Repository on the volume Perform the following steps to create a Storage Repository (SR) on the volume at the pool level: 1. Log in to XenCenter and select the XenServer host. 2. Select the pool in the Resources pane. 3. Click New Storage. The New Storage Repository wizard opens. 4. Select Hardware HBA as the storage type and click Next. 5. Type a name for the new SR in the Name field. 6. Click Next. The wizard scans for available LUNs and then displays a page listing all the LUNs found. 7.
8 Troubleshooting and problem solving

These procedures are intended to be used only during initial configuration for verifying that hardware setup is successful. They are not intended to be used as troubleshooting procedures for configured systems using production data and I/O.

NOTE: For further troubleshooting help after setup, and when data is present, see Dell.com/support.
Table 10. Ops panel functions—2U enclosure front panel (continued) No. Indicator Status Blinking blue (2 Hz): Enclosure management is busy Constant amber: module fault present Blinking amber: logical fault (2 s on, 1 s off) 3 Unit identification display Green (seven-segment display: enclosure sequence) 4 Identity Blinking blue (0.25 Hz): system ID locator is activated Off: Normal state System power LED (green) LED displays green when system power is available.
Table 11. Ops panel functions – 5U enclosure front panel No.
2. Check the host activity LED. If there is activity, stop all applications that access the storage system. 3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives. • Solid – Cache contains data yet to be written to the disk. • Blinking – Cache data is being written to CompactFlash. • Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor. • Off – Cache is clean (no unwritten data). 4.
This step isolates the problem to the expansion cable or to the controller module expansion port. Is the expansion port status LED on? Yes – You now know that the expansion cable is good. Return the cable to the original port. If the expansion port status LED remains off, you have isolated the fault to the controller module expansion port. Replace the controller module. No – Proceed to the next step. 6. Move the expansion cable back to the original port on the controller enclosure. 7.
Table 13. Ops panel LED states (continued) System Power (Green/ Amber) Module Fault (Amber) Identity (Blue) LED display Associated LEDs/ Alarms Status On On X X No module LEDs Enclosure logical fault On Blink X X Module status LED on SBB module Unknown (invalid or mixed) SBB module type is installed, I2C bus failure (interSBB communications).
5U enclosure LEDs Use the LEDs on the 5U enclosure to help troubleshoot initial start-up problems. NOTE: When the 5U84 enclosure is powered on, all LEDs are lit for a short period to ensure that they are working. This behavior does not indicate a fault unless LEDs remain lit after several seconds. PSU LEDs The following table describes the LED states for the PSU: Table 14.
Table 16. Ops panel LED descriptions (continued) LED Status/description Drawer 0 Fault Amber indicates a disk, cable, or sideplane fault in drawer 0. Open the drawer and check DDICs for faults. Drawer 1 Fault Amber indicates a disk, cable, or sideplane fault in drawer 1. Open the drawer and check DDICs for faults. Drawer LEDs The following table describes the LEDs on the drawers: Table 17.
Table 18. DDIC LED descriptions (continued)
Fault LED (Amber) | Status/description*
Off | Storage system: Degraded (non-critical)
Blinking: 3 s on/1 s off | Storage system: Degraded (critical)
Off | Storage system: Quarantined
Blinking: 3 s on/1 s off | Storage system: Offline (dequarantined)
Off | Storage system: Reconstruction
Off | Processing I/O (whether from host or internal activity)
*If multiple conditions occur simultaneously, the LED state behaves as indicated in the previous table.
• If the previous actions do not resolve the fault, contact your supplier for assistance. Controller module replacement may be necessary.
IOM LEDs
Use the IOM LEDs on the face plate to monitor the status of an IOM. Table 20.
Table 21. Troubleshooting 2U alarm conditions (continued)
Status | Severity | Alarm
SBB interface module removed | Warning | None
Drive power control fault | Warning – no loss of disk power | S1
Drive power control fault | Fault – critical – loss of disk power | S1
Drive removed | Warning | None
Insufficient power available | Warning | None
For details about replacing modules, see the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
Table 24. Troubleshooting thermal alarm
Symptom | Cause | Recommended action
4. Check for excessive recirculation of heated air from rear to front. Use of the enclosure in a fully enclosed rack is not recommended.
5. If possible, shut down the enclosure and investigate the problem before continuing.
Troubleshooting 5U enclosures
Common problems that may occur with your 5U enclosure system. The Module Fault LED on the Ops panel, described in Figure 31.
Fault isolation methodology
ME4 Series Storage Systems provide many ways to isolate faults. This section presents the basic methodology that is used to locate faults within a storage system and to identify the pertinent CRUs affected. As noted in Using guided setup on page 32, use the PowerVault Manager to configure and provision the system upon completing the hardware installation. Configure and enable event notification so that you are notified when a problem occurs at or above the configured severity.
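Event notification can also be enabled from the CLI rather than the PowerVault Manager. A minimal session sketch, assuming the `set email-parameters` command of the ME4 Series CLI; the server address, domain, recipient address, and notification level shown are placeholders — verify the exact parameter names and accepted values in the CLI Reference Guide:

```shell
# Log in to a management controller (address is a placeholder).
ssh manage@10.0.0.2

# Send email notification for events at or above the chosen severity
# (all values below are example placeholders).
set email-parameters server 10.0.0.25 domain example.com notification-level warn email-list admin@example.com
```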
Performing basic steps You can use any of the available options that are described in the previous sections to perform the basic steps comprising the fault isolation methodology. Gather fault information When a fault occurs, gather as much information as possible. Doing so helps determine the correct action that is needed to remedy the fault.
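One practical way to gather fault information is to review the event log from the CLI. A session sketch, assuming SSH access as a user with the monitor or manage role; the address is a placeholder and the `show events` filters should be checked against the CLI Reference Guide:

```shell
# Connect to either management controller (address is a placeholder).
ssh manage@10.0.0.2

# List the most recent events to find the initial fault.
show events last 20

# Narrow the output to error-severity events when the log is noisy.
show events error
```

The oldest relevant event usually identifies the root cause; later events are often side effects of it.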
Correcting enclosure IDs
When installing a system with expansion enclosures attached, the enclosure IDs might not match the physical cabling order. This issue occurs if the controller was previously attached to enclosures in a different configuration and the controller attempts to preserve the previous enclosure IDs. To correct this condition, ensure that both controllers are up, and perform a rescan using the PowerVault Manager or the CLI.
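From the CLI, the rescan described above can be run as in the following sketch; the `rescan` command name is taken from the ME4 Series CLI, and the SSH address is a placeholder:

```shell
# Log in to either management controller (address is a placeholder).
ssh manage@10.0.0.2

# Force both storage controllers to re-discover attached enclosures,
# reassigning enclosure IDs to match the physical cabling order.
rescan
```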
• Solid – Cache contains data yet to be written to the disk.
• Blinking – Cache data is being written to CompactFlash in the controller module.
• Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
• Off – Cache is clean (no unwritten data).
4. Remove the SFP+ transceiver and host cable and inspect for damage.
5. Reseat the SFP+ transceiver and host cable.
• Off – Cache is clean (no unwritten data).
4. Remove the host cable and inspect for damage.
5. Reseat the host cable. Is the host link status LED on?
• Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the connections to ensure that a dirty connector is not interfering with the data path.
• No – Proceed to the next step.
6. Move the host cable to a port with a known good link status.
• No – Proceed to the next step.
7. Move the expansion cable back to the original port on the controller enclosure.
8. Move the expansion cable on the expansion enclosure to a known good port on the expansion enclosure. Is the host link status LED on?
• Yes – You have isolated the problem to the expansion enclosure port. Replace the IOM in the expansion enclosure.
• No – Proceed to the next step.
9.
A Cabling for replication
The following sections describe how to cable storage systems for replication:
Topics:
• Connecting two storage systems to replicate volumes
• Host ports and replication
• Example cabling for replication
• Isolating replication faults
Connecting two storage systems to replicate volumes
The replication feature performs asynchronous replication of block-level data from a volume in a primary system to a volume in a secondary system.
NOTE: ME4 Series 5U84 enclosures support dual-controller configurations only. ME4 Series 2U controller enclosures support single-controller and dual-controller configurations.
• If a partner controller module fails, the storage system fails over and runs on a single controller module until the redundancy is restored.
• In dual-controller module configurations, a controller module must be installed in each slot to ensure sufficient airflow through the enclosure during operation.
Series 5U storage systems for replication – multiple servers, one switch, and one network on page 87 shows the rear panel of two 5U84 enclosures with I/O and replication occurring on the same network. In this configuration, a virtual local area network (VLAN) and zoning could be employed to provide separate networks for iSCSI and FC. Create a VLAN or zone for I/O and a VLAN or zone for replication to isolate I/O traffic from replication traffic.
Figure 37. Connecting two ME4 Series 2U storage systems for replication – multiple servers, multiple switches, one network
1. 2U controller enclosures
2. Two switches (I/O)
3. Connection to host servers
4. Switch (Replication)
Figure 38. Connecting two ME4 Series 5U storage systems for replication – multiple servers, multiple switches, one network
1. 5U controller enclosures
2. Two switches (I/O)
3. Connection to host servers
4.
5. Ethernet WAN
Figure 40. Connecting two ME4 Series 5U storage systems for replication – multiple servers, multiple switches, two networks
1. 5U controller enclosures
2. Two switches (I/O)
3. Connection to host servers (network A)
4. Connection to host servers (network B)
5. Ethernet WAN
Isolating replication faults
Replication is a disaster-recovery feature that performs asynchronous replication of block-level data from a volume in a primary storage system to a volume in a secondary storage system.
• To initiate replication, use the replicate CLI command or, in the PowerVault Manager Replications topic, select Action > Replicate.
• Using the PowerVault Manager, monitor the storage system event logs for information about enclosure-related events and to determine any necessary recommended actions.
NOTE: These steps are a general outline of the replication setup.
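As a sketch of the CLI path mentioned above, a replication run can be started and then checked as follows; the replication-set name "RepSet1" is a placeholder, and the exact arguments of `replicate` and `show replication-sets` should be confirmed in the CLI Reference Guide:

```shell
# Start replicating the primary volume's current data to the secondary
# system ("RepSet1" is a placeholder replication-set name).
replicate RepSet1

# Check the progress and result of the most recent replication run.
show replication-sets RepSet1
```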
Table 27. Diagnostics for replication setup – Creating a replication set (continued)
Answer | Possible reasons | Action
No | On controller enclosures equipped with iSCSI host interface ports, replication set creation fails due to use of CHAP. | If using CHAP, see the topics about configuring CHAP and working in replications within the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide.
Table 29. Diagnostics for replication setup: Checking for a successful replication (continued)
Answer | Possible reasons | Action
No | Communication link is down | Review event logs for indicators of a specific fault in a host or replication data path component.
B SFP+ transceiver for FC/iSCSI ports
This section describes how to install the small form-factor pluggable (SFP+) transceivers ordered with the ME4 Series FC/iSCSI controller module.
Locate the SFP+ transceivers
Locate the SFP+ transceivers that shipped with the controller enclosure, which look similar to the generic SFP+ transceiver shown in the following figure:
Figure 41. Install an SFP+ transceiver into the ME4 Series FC/iSCSI controller module
1. CNC-based controller module face
3.
6. Connect a qualified fiber-optic interface cable into the duplex jack of the SFP+ transceiver. If you do not plan to use the SFP+ transceiver immediately, reinsert the plug into the duplex jack of the SFP+ transceiver to keep its optics free of dust.
Verify component operation
View the port Link Status/Link Activity LED on the controller module face plate. A green LED indicates that the port is connected and the link is up.
C System Information Worksheet Use the system information worksheet to record the information that is needed to install the ME4 Series Storage System. ME4 Series Storage System information Gather and record the following information about the ME4 Series storage system network and the administrator user: Table 30. ME4 Series Storage System network Item Information Service tag Management IPv4 address (ME4 Series Storage System management address) _____ . _____ . _____ .
Table 32. iSCSI Subnet 1 (continued) Item Information IPv4 address for storage controller module B: port 0 _____ . _____ . _____ . _____ IPv4 address for storage controller module A: port 2 _____ . _____ . _____ . _____ IPv4 address for storage controller module B: port 2 _____ . _____ . _____ . _____ Table 33. iSCSI Subnet 2 Item Information Subnet mask _____ . _____ . _____ . _____ Gateway IPv4 address _____ . _____ . _____ .
Table 35. WWNs in fabric 1 (continued) Item FC switch port Information FC switch port Information WWN of storage controller A: port 2 WWN of storage controller B: port 2 WWNs of server HBAs: Table 36.
D Setting network port IP addresses using the CLI port and serial cable
You can manually set the static IP addresses for each controller module. Alternatively, you can specify that IP addresses should be set automatically for both controllers through communication with a Dynamic Host Configuration Protocol (DHCP) server. In DHCP mode, the network port IP address, subnet mask, and gateway are obtained from a DHCP server. If a DHCP server is not available, the current network addresses are not changed.
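Once a CLI session is established over the serial connection, static addresses are set with the `set network-parameters` command. A sketch using example addresses for both controller modules; adapt the addresses to your network and verify the parameter names in the CLI Reference Guide:

```shell
# Set a static management IP address on controller module A
# (all addresses shown are examples).
set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a

# Repeat for controller module B.
set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b

# Confirm the new settings on both controllers.
show network-parameters
```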
Figure 42. Connecting a USB cable to the CLI port
3. Start a terminal emulator and configure it to use the display settings in Table 37. Terminal emulator display settings on page 99 and the connection settings in Table 38. Terminal emulator connection settings on page 99.
Table 37. Terminal emulator display settings
Parameter | Value
Terminal emulation mode | VT-100 or ANSI (for color support)
Font | Terminal
Translations | None
Columns | 80
Table 38.
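On a Linux or macOS host, a terminal emulator such as `screen` can serve this purpose. A sketch; the device name `/dev/ttyACM0` and the 115200 baud rate are assumptions — confirm the device that enumerates on your host and the rate given in the connection settings table:

```shell
# Identify the emulated serial device created when the USB cable is attached.
ls /dev/ttyACM* /dev/ttyUSB*

# Open a session on the device (device name and baud rate are assumptions;
# adjust to match your host and the connection settings table).
screen /dev/ttyACM0 115200
```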
If you are connecting to a storage system with G280 firmware that has been deployed:
a. Type the username of a user with the manage role at the login prompt and press Enter.
b. Type the password for the user at the Password prompt and press Enter.
7.
Mini-USB Device Connection
The following sections describe the connection to the mini-USB port:
Emulated serial port
When a computer is connected to a controller module using a mini-USB serial cable, the controller presents an emulated serial port to the computer. The name of the emulated serial port is displayed using a custom vendor ID and product ID. Serial port configuration is unnecessary.
Known issues with the CLI port and mini-USB cable on Microsoft Windows
When using the CLI port and cable for setting network port IP addresses, be aware of the following known issue on Windows:
Problem: The computer might encounter issues that prevent the terminal emulator software from reconnecting after the controller module restarts or the USB cable is unplugged and reconnected.
Workaround: To restore a connection that stopped responding when the controller module was restarted:
1.