Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide July 2020 Rev.
Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. © 2018 – 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents

Chapter 1: Getting started ...................................................................... 10
    New user setup ................................................................................. 10
    Configure and provision a new storage system ..............................
    Create disk groups and pools ........................................................... 33
    Open the guided disk group and pool creation wizard ................... 34
    Attaching hosts and volumes in the Host Setup wizard .................
Configuring advanced settings ............................................................... 65
    Changing disk settings ...................................................................... 65
    Changing system cache settings .....................................................
Managing spares ..................................................................................... 85
    Global spares ..................................................................................... 86
    Dedicated spares ..............................................................................
Manually initiate replication from the Volumes topic .......................... 104
Schedule a replication from the Volumes topic .................................... 104
Manage replication schedules from the Volumes topic ......................
Schedule a replication from the Replications topic .............................. 125
Stopping a replication ............................................................................ 126
    Stop a replication .............................................................................
External details for connUnitPortTable ................................................ 148
Configure SNMP event notification in the PowerVault Manager ....... 148
SNMP management ............................................................................... 148
Using FTP and SFTP .............................................................................
1 Getting started PowerVault Manager is a web-based interface for configuring, monitoring, and managing the storage system. Each controller module in the storage system contains a web server, which is accessed when you sign in to the PowerVault Manager. You can access all functions from either controller in a dual-controller system. If one controller becomes unavailable, you can continue to manage the storage system from the partner controller.
a. Click Get Started.
b. Read the Commercial Terms of Sale and End User License Agreement (EULA), and click Accept.
c. Specify a new user name and password for the system, and click Apply and Continue.

The Welcome panel that is displayed provides options to set up and provision your system. For more information about using these options, see Guided setup on page 32.

NOTE: If you are unable to use the 10.0.0.
Tips for using PowerVault Manager

The following list contains tips for using PowerVault Manager:
• Do not use the Back, Forward, Reload, or Refresh buttons in the browser. PowerVault Manager has a single page on which content changes as you perform tasks and automatically updates to show current data.
• A red asterisk (*) identifies a required setting.
• As you set options in action panels, PowerVault Manager informs you whether a value is invalid or a required option is not set.
• In the filter field, enter the text to find. As you type, only items that contain the specified text remain shown. Because a filter is active, the icon changes.
• Previous search terms are listed below the field. Previous search terms that match displayed values are shown in bold. If the filter list has an entry for the text you want to find, select that entry.
• To show all items in the column, click the filter icon and select All. To clear all filters and show all items, click Clear Filters.
NOTE: For best security, Sign out when you are ready to end your session. Do not close the browser window unless you are certain that it is the only browser instance.

Sign in

1. In the web browser address field, type https:// followed by the controller IP address, and press Enter. The Sign In page opens. If the Sign In page is not displayed, verify that you have typed the correct IP address.
   NOTE: HTTPS is enabled by default. To enable HTTP, see Enable or disable system-management settings.
2.
All disks in a disk group must be the same type: SSD, enterprise SAS, or midline SAS. However, a disk group can contain different models of disks, and disks with different capacities and sector formats. If you mix disks with different capacities, the smallest disk determines the logical capacity of all other disks in the disk group, for all RAID levels except ADAPT.
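The smallest-disk rule above can be expressed as a short sketch. This is a hypothetical helper for illustration, not firmware code, and the GB sizes are examples:

```python
# Illustrative sketch of the smallest-disk rule: for all RAID levels except
# ADAPT, the smallest disk bounds the usable capacity of every member disk.
def usable_member_capacity(disk_sizes_gb, raid_is_adapt=False):
    """Return the capacity the group can use from each member disk."""
    if raid_is_adapt:
        return list(disk_sizes_gb)  # ADAPT can use each disk's full capacity
    smallest = min(disk_sizes_gb)
    return [smallest] * len(disk_sizes_gb)

# Mixing a 900 GB disk with two 1200 GB disks wastes 300 GB on each larger disk.
print(usable_member_capacity([1200, 1200, 900]))  # [900, 900, 900]
```

With ADAPT the same mix wastes nothing, which is why ADAPT is the exception called out above.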
Only a single read-cache disk group may exist within a pool. Increasing the size of read cache within a pool requires the user to remove the read-cache disk group, and then re-add a larger read-cache disk group. It is possible to have a read-cache disk group that consists of one or two disks with a non-fault tolerant RAID level. For more information on read cache, see About SSD read cache.
Table 2. RAID level comparison (continued)
Table 3. Number of disks per RAID level to optimize virtual disk group performance (continued)

RAID level    Number of disks (data and parity)
6             4 total (2 data disks, 2 parity disks); 6 total (4 data disks, 2 parity disks); 10 total (8 data disks, 2 parity disks)
10            4–16 total
ADAPT         12–128 total

Table 4. Linear disk group expansion by RAID level

RAID level    Expansion capability                        Maximum disks
NRAID         Cannot expand.                              1
0, 3, 5, 6    You can add from 1 to 4 disks at a time.    16
1             Cannot expand.
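The recommended member counts in Table 3 can be checked with a small sketch. The structure and names here are assumptions for illustration; the firmware enforces its own limits (see Table 2):

```python
# Recommended virtual disk group member counts from Table 3 (illustrative).
RECOMMENDED = {
    "6": {4, 6, 10},               # RAID 6: 2+2, 4+2, or 8+2 disks
    "10": set(range(4, 17)),       # RAID 10: 4-16 disks total
    "ADAPT": set(range(12, 129)),  # ADAPT: 12-128 disks total
}

def is_recommended_count(raid_level: str, disk_count: int) -> bool:
    return disk_count in RECOMMENDED.get(raid_level, set())

print(is_recommended_count("6", 10))       # True (8 data + 2 parity)
print(is_recommended_count("ADAPT", 129))  # False (above the 128-disk limit)
```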
Gauging the percentage of life remaining for SSDs An SSD can be written and erased a limited number of times. Through the SSD Life Left disk property, you can gauge the percentage of disk life remaining. This value is polled every 5 minutes. When the value decreases to 20%, an event is logged with Informational severity. This event is logged again with Warning severity when the value decreases to 5%, 2% or 1%, and 0%.
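The logging thresholds above can be summarized as a simple mapping. This is an assumed sketch of the described behavior, not the actual event engine:

```python
# Event severity logged as the SSD Life Left value crosses the documented
# thresholds: Informational at 20%; Warning at 5%, 2%, 1%, and 0%.
def ssd_life_event(percent_left: int):
    if percent_left in (5, 2, 1, 0):
        return "Warning"
    if percent_left == 20:
        return "Informational"
    return None  # no event is logged at other values

print(ssd_life_event(20))  # Informational
print(ssd_life_event(2))   # Warning
```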
Because data is characterized every five seconds and moved to the appropriate storage device, no fixed rule is used to determine which SSDs are used. For this reason, using SSDs with the same DWPD values is advised. About SSD read cache Unlike tiering, where a single copy of specific blocks of data resides in either spinning disks or SSDs, the Read Flash Cache (RFC) feature uses one SSD read-cache disk group per pool as a read cache for frequently accessed data only.
Virtual pools and disk groups The volumes within a virtual pool are allocated virtually (separated into fixed size pages, with each page allocated randomly from somewhere in the pool) and thinly (meaning that they initially exist as an entity but don't have any physical storage allocated to them). They are also allocated on-demand (as data is written to a page, it is allocated).
Linear volumes Linear volumes make use of a method of storing user data in sequential, fully allocated physical blocks. Mapping between the logical data presented to hosts and the physical location where it is stored is fixed, or static. About volume cache options You can set options that optimize reads and writes performed for each volume. It is recommended that you use the default settings.
• The Disabled option turns off read-ahead cache. This is useful if the host is triggering read ahead for what are random accesses. This can happen if the host breaks up the random I/O into two smaller reads, triggering read ahead. About thin provisioning Thin provisioning is a virtual storage feature that allows a system administrator to overcommit physical storage resources. This allows the host system to operate as though it has more storage available than is actually allocated to it.
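The overcommitment described above is simple arithmetic; a hedged sketch with illustrative sizes:

```python
# Thin provisioning lets the capacity presented to hosts exceed the
# physical capacity actually backing the pool.
def overcommit_ratio(volume_sizes_gb, physical_pool_gb):
    """Ratio of presented capacity to physical space in the pool."""
    return sum(volume_sizes_gb) / physical_pool_gb

# Three 10 TB thin volumes backed by a 10 TB pool are 3x overcommitted.
print(overcommit_ratio([10_000, 10_000, 10_000], 10_000))  # 3.0
```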
• Performance – This setting prioritizes volume data to the higher tiers of service. If no space is available, lower performing tier space is used. Volume data moves into higher performing tiers based on the frequency of access and available space in the tiers.
  NOTE: The Performance affinity setting does not require an SSD tier and uses the highest performance tier available.
• Archive – This setting prioritizes the volume data to the lowest tier of service.
CHAP definitions. This information may be useful in configuring CHAP entries for new hosts. This information becomes visible when an iSCSI discovery session is established, because the storage system does not require discovery sessions to be authenticated. CHAP authentication must succeed for normal sessions to move to the full feature phase.

About volume mapping

Mappings between a volume and one or more initiators, hosts, or host groups enable hosts to view and access the volume.
The system treats a snapshot like any other volume. The snapshot can be mapped to hosts with read-only access, read-write access, or no access, depending on the purpose of the snapshot. Snapshots use the rollback feature, which replaces the data of a source volume or snapshot with the data of a snapshot that was created from it. Snapshots also use the reset snapshot feature, which enables you to replace the data in a snapshot with the current data in the source volume.
About reconstruction If one or more disks fail in a disk group and spares of the appropriate size (same or larger) and type (same as the failed disks) are available, the storage system automatically uses the spares to reconstruct the disk group. Disk group reconstruction does not require I/O to be stopped, so volumes can continue to be used while reconstruction is in progress. If no spares are available, reconstruction does not start automatically.
Historical performance statistics for disks, pools, and tiers are displayed in graphs for ease of analysis. Historical statistics focus on disk workload. You can view historical statistics to determine whether I/O is balanced across pools and to identify disks that are experiencing errors or are performing poorly. The system samples historical statistics for disks every quarter hour and retains these samples for 6 months.
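The sampling figures above imply the rough number of samples retained per disk. This is a back-of-envelope check; 6 months is taken here as about 182 days, which is an assumption:

```python
# One historical sample every quarter hour, retained for roughly 6 months.
SAMPLES_PER_DAY = 24 * 4   # 96 samples per day
RETENTION_DAYS = 182       # assumed approximation of 6 months
retained_samples = SAMPLES_PER_DAY * RETENTION_DAYS
print(retained_samples)    # 17472
```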
For information about the procedures to update firmware in controller modules, expansion modules, and disk drives, see Updating firmware on page 59. That topic also describes how to use the activity progress interface to view detailed information about the progress of a firmware-update operation. About managed logs As the storage system operates, it records diagnostic data in several types of log files.
SupportAssist data The data that SupportAssist sends does not provide technical support with the information that is needed to connect to an ME4 Series array, because passwords are not transmitted.
If DNS server functionality is operational and reachable by the controller's nslookup service, the FQDN for each controller is also shown. If nslookup output is not available, the domain name will show '-'. NOTE: DNS settings are limited to SMTP server configuration for email notification only.
2 Working in the Home topic The Home topic provides options to set up and configure your system and manage tasks, and displays an overview of the storage managed by the system. The content presented depends on the completion of all required actions in the Welcome panel. The standard Home topic is hidden by the Welcome panel until all required actions are complete.
Provisioning disk groups and pools The Storage Setup wizard guides you through each step of the process, including creating disk groups and pools in preparation for attaching hosts and volumes. NOTE: You can cancel the wizard at any time, but changes that are made in completed steps are saved. Access the Storage Setup wizard from the Welcome panel or by choosing Action > Storage Setup. When you access the wizard, you must select the storage type for your environment.
Open the guided disk group and pool creation wizard

1. Access Storage Setup by doing one of the following:
   • From the Welcome panel, click Storage Setup.
   • From the Home topic, click Action > Storage Setup.
2. Follow the on-screen directions to provision your system.
the volume will reside. Follow the instructions in the wizard to create the volumes shown in the table. Be sure to balance volume ownership between controllers. Once you are ready to move to the next step, click Next. Configuration summary The summary displays the host configuration you defined in the wizard. If you are happy with the setup, finish the process by selecting Configure Host.
• Current data throughput (MB/s) for all ports, calculated over the interval since these statistics were last requested or reset. Capacity information The Capacity block shows two color-coded bars. The lower bar represents the physical capacity of the system, showing the capacity of disk groups, spares, and unused disk space, if any. The upper bar identifies how the capacity is allocated and used. The upper bar shows the reserved, allocated, and unallocated space for the system.
The number of volumes and virtual snapshots for the pool owned by the controller appears above the top horizontal bar for both virtual and linear storage. Hover the cursor anywhere in a storage block to display the Storage Information panel. The Storage Information panel only contains information for the type of storage that you are using. Table 6.
CAUTION: This type of operation must be performed offline. Removing a virtual disk group or pool while the system is online may result in corruption and possible data loss. The system must be powered off before any disks are removed.

If the pool conflict was unexpected—for example, you did not realize that there was a previous pool on the disks of the old system and data that is contained on the disks is no longer needed:

1. Remove the disks that were from the old system out of the new system.
2.
• In the banner, click the System Date/Time Bar panel and select Set Date and Time.
• In the Welcome panel, select System Settings > Date and Time.
2. If checked, clear the Network Time Protocol (NTP) check box.
3. To set the Date value, enter the current date in the format YYYY-MM-DD.
4. To set the Time value, enter two-digit values for the hour and minutes and select either AM, PM, or 24H (24-hour clock).
5.
The following options apply only to a standard user:
• Roles. Select one or more of the following roles:
  ○ Monitor. Enables the user to view but not change system status and settings. This is enabled by default and cannot be disabled.
  ○ Manage. Enables the user to change system settings.
• Interfaces. Select one or more of the following interfaces:
  ○ WBI. Enables access to the PowerVault Manager.
  ○ CLI. Enables access to the command-line interface.
  ○ SMI-S.
• To save your settings and continue configuring your system, click Apply.
• To save your settings and close the panel, click Apply and Close.

A confirmation panel appears.

5. Click OK to save your changes. Otherwise, click Cancel.

Create a user from an existing user

1. Log in as a user with the manage role and perform one of the following:
   • In the Home topic, select Action > System Settings, then click the Managing Users tab.
   • In the System topic, select Action > System Settings.
5. Click OK to save your changes. Otherwise, click Cancel. If you clicked OK, the user is removed, the table is updated, and any sessions associated with that user name are terminated.

NOTE: The system requires at least one user with the manage role to exist.

Configuring network ports on controller modules

If you used the default 10.0.0.2/10.0.0.
4. If you selected Manual, perform the following steps:
   a. Type the IP address, IP mask, and Gateway addresses for each controller.
   b. Record the IP addresses.
   NOTE: The following IP addresses are reserved for internal use by the storage system: 169.254.255.1, 169.254.255.2, 169.254.255.3, 169.254.255.4, and 127.0.0.1. Because these addresses are routable, do not use them anywhere in your network.
5.
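The reserved addresses in the note above can be screened before assigning controller IPs. This is a client-side convenience sketch using Python's standard ipaddress module, not anything the array itself provides:

```python
import ipaddress

# Addresses reserved for internal use by the storage system (from the note).
RESERVED = {
    "169.254.255.1", "169.254.255.2", "169.254.255.3",
    "169.254.255.4", "127.0.0.1",
}

def is_reserved(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)  # raises ValueError for malformed input
    return str(ip) in RESERVED

print(is_reserved("169.254.255.2"))  # True
print(is_reserved("10.0.0.2"))       # False
```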
• The name is not case-sensitive.
• The name must start with a letter and end with a letter or digit.
• The name can include letters, numbers, or hyphens; no periods.

4. Enter up to three network addresses for each controller in the DNS Servers fields. The resolver queries the network in the order that is listed until reaching a valid destination address. Any valid setting is treated as enabling DNS resolution for the system.
5.
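The naming rules listed above can be captured in a validator. The regular expression is illustrative; the system performs its own checks:

```python
import re

# Starts with a letter, ends with a letter or digit, and contains only
# letters, digits, and hyphens (no periods); case does not matter.
HOSTNAME_RE = re.compile(r"^[A-Za-z]([A-Za-z0-9-]*[A-Za-z0-9])?$")

def valid_host_name(name: str) -> bool:
    return bool(HOSTNAME_RE.match(name))

print(valid_host_name("me4-array-01"))  # True
print(valid_host_name("1bad"))          # False (starts with a digit)
print(valid_host_name("no.periods"))    # False (contains a period)
```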
• Activity Progress Reporting. Provides access to the activity progress interface using HTTP port 8081. This mechanism reports whether a firmware update or partner firmware update operation is active and shows the progress through each step of the operation. When the update operation completes, the status indicates either successful completion or an error if the operation failed.
• In-band SES Capability.
3. Select the Email tab.
4. In the SMTP Server address field, enter the network address of the SMTP mail server to use for the email messages.
5. In the Sender Domain field, enter a domain name, which is joined with an @ symbol to the sender name to form the “from” address for remote notification. The domain name can have a maximum of 255 bytes. Because this name is used as part of an email address, do not include spaces or the following: \ " : ; < > ( ) If the domain name is not valid, some email servers fail to process the mail.
Test email notification settings

Perform the following steps to test email notification settings:

1. Configure your system to send email notifications.
2. Click Test Email. A test notification is sent to the notification email addresses.
3. Verify that the test notification reached the intended recipient.
4. Click OK. If there was an error in sending a test notification, event 611 is displayed in the confirmation.
4. Set the managed log option:
   • To enable managed logs, select the Enable Managed Logs check box.
   • To disable managed logs, clear the Enable Managed Logs check box.
5. If the managed logs option is enabled, type the email address of the log-collection system in the Email destination address field. The email address must use the format user-name@domain-name and can have a maximum of 320 bytes. For example: LogCollector@mydomain.com.
6.
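The address constraints above (user-name@domain-name, at most 320 bytes) reduce to a simple check. The helper below is hypothetical, for illustration only:

```python
# Minimal validation of a log-collection email destination address.
def valid_log_destination(addr: str) -> bool:
    if len(addr.encode("utf-8")) > 320:
        return False
    user, sep, domain = addr.partition("@")
    return bool(sep and user and domain)

print(valid_log_destination("LogCollector@mydomain.com"))  # True
print(valid_log_destination("no-at-sign"))                 # False
```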
3. Verify that the test notification reached the intended location. 4. Click OK. If there was an error in sending a test notification, event 611 is displayed in the confirmation. Configuring SupportAssist SupportAssist sends configuration and diagnostic information from an ME4 Series storage system to technical support. When enabled, you agree to allow the feature to remotely monitor the storage system, collect diagnostic information, and transmit the data to a remote support server.
• State – Operational status of SupportAssist on the ME4 Series storage system.
• Operation Mode – Operational mode of SupportAssist on the ME4 Series storage system.
• Last Logs Send Status – Status of the last attempt to send ME4 Series storage system logs to SupportAssist.
• Last Logs Send Time – Date and time of the last attempt to send ME4 Series storage system logs to SupportAssist.
• Last Event Send Status – Status of the last attempt to send ME4 Series storage system events to SupportAssist.
Changing host port settings

You can configure controller host-interface settings for all ports, except on systems with a 4-port SAS controller module. To enable the system to communicate with hosts, you must configure the host-interface options of the system.

NOTE: If the current settings are correct, port configuration is optional.

For a system with a 2-port SAS controller module, host ports can be configured to use standard cables.
• Gateway: For IPv4, gateway IP address for assigned port IP address.
• Default Router: For IPv6, default router for assigned port IP address.

3. In the Advanced Settings section of the panel, set the options that apply to all iSCSI ports:
• Enable Authentication (CHAP): Enables or disables the use of the Challenge Handshake Authentication Protocol (CHAP).
• IP Address: For IPv4 or IPv6, the port IP address. For corresponding ports in each controller, assign one port to one subnet and the other port to a second subnet. Ensure that each iSCSI host port in the storage system is assigned a different IP address. For example, in a system using IPv4:
  ○ Controller A port 2: 10.10.10.100
  ○ Controller A port 3: 10.11.10.120
  ○ Controller B port 2: 10.10.10.110
  ○ Controller B port 3: 10.11.10.130
• Netmask: For IPv4, subnet mask for assigned port IP address.
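The example addressing plan above (corresponding ports split across two subnets, every port IP unique) can be sanity-checked with the standard ipaddress module. The port names and the /24 prefix are assumptions for illustration:

```python
import ipaddress

def check_port_plan(port_ips, prefix=24):
    """Return the distinct subnets used, after checking IPs are unique."""
    addrs = list(port_ips.values())
    if len(addrs) != len(set(addrs)):
        raise ValueError("duplicate port IP address")
    nets = {ipaddress.ip_interface(f"{a}/{prefix}").network for a in addrs}
    return sorted(str(n) for n in nets)

plan = {"A2": "10.10.10.100", "A3": "10.11.10.120",
        "B2": "10.10.10.110", "B3": "10.11.10.130"}
print(check_port_plan(plan))  # ['10.10.10.0/24', '10.11.10.0/24']
```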
2. Select the schedule to modify. The schedule settings appear at the bottom of the panel. 3. If you want to replicate the last snapshot in the primary volume, select the Last Snapshot check box. The snapshot must exist at the time of the replication. This snapshot may have been created either manually or by scheduling the snapshot. NOTE: This option is unavailable when replicating volume groups. 4. Specify a date and a time in the future to specify when to run the scheduled task.
3 Working in the System topic

Topics:
• Viewing system components
• Systems Settings panel
• Resetting host ports
• Rescanning disk channels
• Clearing disk metadata
• Updating firmware
• Changing FDE settings
• Configuring advanced settings
• Using maintenance mode
• Restarting or shutting down controllers

Viewing system components

The System topic enables you to see information about each enclosure and its physical components in front, rear, and tabular views. Components vary by enclosure model.
• Power On Hours – Total number of hours that the disk has been powered on since it was manufactured. This value is updated in 30-minute increments.
• FDE State – FDE state of the disk. For more information about FDE states, see the Dell EMC PowerVault ME4 Series Storage System CLI Guide.
• FDE lock keys – FDE lock keys are generated from the FDE passphrase and manage locking and unlocking the FDE-capable disks in the system.
Table 9. Table view information (continued) Field Description Type Shows the component type: enclosure, disk, power supply, controller module, network port, host port, expansion port, CompactFlash card, or I/O module (expansion module). Enclosure Shows the enclosure ID. Location Shows the location of the component. • • • • Information For an enclosure, the location is shown in the format Rack rack-ID.shelf-ID. You can set the location through the CLI set enclosure command.
Table 9. Table view information (continued)

Field Description
  ○ Warning: The disk is present but the system is having communication problems with the disk LED processor. For disk and midplane types where this processor also controls power to the disk, power-on failure will result in the Error status.
  ○ Error: The disk is present but not detected by the expander.
  ○ Unknown: Initial status when the disk is first detected or powered on.
  ○ Not Present: The disk slot indicates that no disk is present.
Clearing disk metadata You can clear metadata from a leftover disk to make it available for use. CAUTION: Only use this command when all disk groups are online and leftover disks exist. Improper use of this command may result in data loss. Do not use this command when a disk group is offline and one or more leftover disks exist. If you are uncertain whether to use this command, contact technical support for assistance.
• If any unwritten cache data is present, the firmware update will not proceed. Before you can update the firmware, unwritten data must be removed from cache. For more information about the clear cache command, see the Dell EMC PowerVault ME4 Series Storage System CLI Guide.
• If a disk group is quarantined, contact technical support for help resolving the problem that is causing the component to be quarantined before updating the firmware.
If PFU is enabled, allow an additional 10 to 20 minutes for the partner controller to be updated.

5. Clear your web browser cache, then sign in to the PowerVault Manager. If PFU is still running on the controller you sign in to, a panel shows PFU progress and prevents you from performing other tasks until PFU is complete.

NOTE: If PFU is enabled for the system, after firmware update has completed on both controllers, check the system health.
2. Select the Update Disk Drives tab. This tab shows information about each disk drive in the system. 3. Select the disk drives to update. 4. Click File and select the firmware file to install. 5. Click OK. CAUTION: Do not power cycle enclosures or restart a controller during the firmware update. If the update is interrupted or there is a power failure, the disk drive might become inoperative. If this occurs, contact technical support. It typically takes several minutes for the firmware to load.
Table 10. Activity progress properties and values (continued) Property Value ○ 2 – The operation is in progress. The other properties will indicate the progress item (message, current, total, percent). ○ 10 or higher – The operation for this component completed with a failure. The code and message indicate the reason for the error. Message A textual message indicating the progress status or error condition.
NOTE: The FDE tabs are dynamic, and the Clear All FDE Keys option is not available on a secured system until the current passphrase is entered in the Current Passphrase field. (If you do not have a passphrase, the Clear All FDE Keys option is not displayed. If you have a passphrase but have not entered it, you can view but not access this option.) If there is no passphrase, set one using the procedure in Setting the passphrase. Clear lock keys Performing the steps to clear the lock keys: 1.
Set or change the import passphrase 1. In the System topic, select Action > Full Disk Encryption. The Full Disk Encryption panel opens with the FDE General Configuration tab selected. 2. Select the Set Import Lock Key ID tab. 3. In the Passphrase field, enter the passphrase associated with the displayed lock key. 4. Re-enter the passphrase. 5. Click Set. A dialog box will confirm the passphrase was changed successfully.
disk and is the same type: SATA SSD, SAS SSD, enterprise SAS, or midline SAS. If a spare or available compatible disk is already present, the dynamic spares feature uses that disk to start the reconstruction and the replacement disk can be used for another purpose.

Change the dynamic spares setting

1. In the System topic, select Action > Advanced Settings > Disk.
2. Select the Dynamic Spare Capability option to enable it, or clear it to disable it. The dynamic spares setting is enabled by default.
3.
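The compatibility rule above (at least as large as the smallest disk, and the same type) is easy to sketch. The helper below is hypothetical, not firmware logic:

```python
# A dynamic spare candidate must match the group's disk type and be at
# least as large as the smallest disk in the degraded disk group.
def spare_is_compatible(spare_size_gb, spare_type,
                        group_smallest_gb, group_type):
    return spare_size_gb >= group_smallest_gb and spare_type == group_type

print(spare_is_compatible(1200, "enterprise SAS", 900, "enterprise SAS"))  # True
print(spare_is_compatible(600, "midline SAS", 900, "midline SAS"))         # False
```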
Changing system cache settings The Cache tab provides options to change the synchronize-cache mode, missing LUN response, host control of the write-back cache setting, cache redundancy mode, and auto-write-through cache triggers and behaviors. Changing the synchronize-cache mode You can control how the storage system handles the SCSI SYNCHRONIZE CACHE command. Typically you can use the default setting.
Change auto-write-through cache triggers and behaviors

1. In the System topic, select Action > Advanced Settings > Cache.
2. In the Auto-Write Through Cache Trigger Conditions section, either select to enable or clear to disable the options:
   Controller Failure: Changes to write-through if a controller fails. In a dual-controller system this option is disabled by default. In Single Controller mode this option is grayed out.
When a scrub is complete, event 207 is logged and specifies whether errors were found and whether user action is required. Enabling background disk group scrub is recommended. NOTE: If you choose to disable background disk group scrub, you can still scrub a selected disk group by using Action > Disk Group Utilities. Configure background scrub for disk groups 1. In the System topic, choose Action > Advanced Settings > System Utilities. 2.
NOTE: Maintenance mode can also be manually enabled or disabled on an ME4 Series storage system.

Enable maintenance mode

Perform the following steps to manually enable maintenance mode on the ME4 Series storage system:

1. Perform one of the following actions to access the SupportAssist options:
   • In the Home topic, select Action > System Settings, then click the SupportAssist tab.
   • In the System topic, select Action > System Settings, then click the SupportAssist tab.
2. Select the Restart operation.
3. Select the controller type to restart: Management or Storage.
4. Select the controller module to restart: Controller A, Controller B, or both.
5. Click OK. A confirmation panel appears.
6. Click OK. A message is displayed that describes restart activity.

Shutting down controllers

Perform a shut down before you remove a controller module from an enclosure, or before you power off its enclosure for maintenance, repair, or a move.
4 Working in the Hosts topic

Topics:
• Viewing hosts
• Create an initiator
• Modify an initiator
• Delete initiators
• Add initiators to a host
• Remove initiators from hosts
• Remove hosts
• Rename a host
• Add hosts to a host group
• Remove hosts from a host group
• Rename a host group
• Remove host groups
• Configuring CHAP

Viewing hosts

The Hosts topic shows a tabular view of information about initiators, hosts, and host groups that are defined in the system.
• Access. Shows the type of access assigned to the mapping:
  ○ read-write—The mapping permits read and write access.
  ○ read-only—The mapping permits read access.
  ○ no-access—The mapping prevents access.
• LUN. Shows whether the mapping uses a single LUN or a range of LUNs (indicated by *).
• Ports. Lists the controller host ports to which the mapping applies. Each number represents corresponding ports on both controllers.

To display more information about a mapping, see Viewing map details.
• To use an existing host, select its name in the Host Select list.
• To create a host, enter a name for the host in the Host Select field. A host name is case sensitive and can have a maximum of 32 bytes. It cannot already exist in the system or include the following: " , . < \

4. Click OK. For the selected initiators, the Host value changes from -- to the specified host name.

Remove initiators from hosts

You can remove all except the last initiator from a host.
Rename a host group You can rename a host group. 1. In the Hosts topic, select a host group to rename. 2. Select Action > Rename Host Group. The Rename Host Group panel opens. 3. In the New Host Group Name field, enter a new name for the host group. A host group name is case sensitive and can have a maximum of 32 bytes. It cannot already exist in the system or include the following: " , . < \ If the name is used by another host group, you are prompted to enter a different name. 4. Click OK.
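The naming rules above apply to hosts and host groups alike: names are case sensitive, limited to 32 bytes, and cannot contain the characters " , . < \. A minimal client-side check can be sketched as follows; this helper is illustrative only and is not part of the PowerVault Manager or its CLI:

```python
# Illustrative pre-check of the documented naming rules:
# case-sensitive, at most 32 bytes, none of the characters " , . < \
FORBIDDEN_CHARS = set('",.<\\')

def is_valid_host_name(name: str) -> bool:
    """Return True if the name satisfies the constraints stated in this guide."""
    if not name or len(name.encode("utf-8")) > 32:
        return False
    return not (FORBIDDEN_CHARS & set(name))
```

The system additionally rejects names that already exist, which only it can verify.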
7. To use mutual CHAP:
• Select the Mutual CHAP check box.
• In the Mutual CHAP Name field, enter the IQN obtained in step 1. The value is case sensitive and can include a maximum of 223 bytes and the following: 0–9, lowercase a–z, hyphen, colon, and period.
• In the Mutual CHAP Secret field, enter a secret for the initiator to use to authenticate the target. The secret is case sensitive, can include 12–16 bytes, and must differ from the initiator secret.
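The constraints on the mutual CHAP name and secret can be expressed as a quick validation; this sketch is illustrative only (the PowerVault Manager enforces the same rules in the panel itself):

```python
import re

# Name: lowercase a-z, 0-9, hyphen, colon, period; at most 223 bytes.
_CHAP_NAME_RE = re.compile(r"^[0-9a-z\-:.]+$")

def valid_mutual_chap(name: str, secret: str, initiator_secret: str) -> bool:
    """Check the mutual CHAP rules stated above (illustrative helper)."""
    if not _CHAP_NAME_RE.match(name) or len(name.encode("utf-8")) > 223:
        return False
    if not 12 <= len(secret.encode("utf-8")) <= 16:
        return False
    return secret != initiator_secret  # secret must differ from the initiator secret
```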
5 Working in the Pools topic

Topics:
• Viewing pools
• Adding a disk group
• Modifying a disk group
• Removing disk groups
• Expanding a disk group
• Managing spares
• Create a volume
• Changing pool settings
• Verifying and scrubbing disk groups
• Removing a disk group from quarantine

Viewing pools

The Pools topic shows a tabular view of information about the pools and disk groups that are defined in the system, as well as information for the disks that each disk group contains.
Related Disk Groups table When you select a pool in the pools table, the disk groups for it appear in the Related Disk Groups table. For selected pools, the Related Disk Groups table shows the following information: Table 12. Disk Groups table Field Description Name Shows the name of the disk group. Health Shows the health of the disk group: OK, Degraded, Fault, N/A, or Unknown. Pool Shows the name of the pool to which the disk group belongs. RAID Shows the RAID level for the disk group.
Table 12. Disk Groups table (continued) Field Description Disks Shows the number of disks in the disk group. To see more information about a disk group, select the pool for the disk group in the pools table, then hover the cursor over the disk group in the Related Disk Groups table. The Disk Group Information panel opens and displays detailed information about the disk group. Table 13.
Table 15.
Adding virtual disk groups The system supports a maximum of two pools, one per controller module: A and B. You can add up to 16 virtual disk groups for each virtual pool. If a virtual pool does not exist, the system will automatically add it when creating the disk group. Once a virtual pool and disk group exist, volumes can be added to the pool. Once you add a virtual disk group, you cannot modify it.
Table 16. Disk group options (continued)
Option Description
RAID Level Select one of the following RAID levels when creating a virtual or linear disk group:
• RAID 1 – Requires 2 disks.
• RAID 5 – Requires 3–16 disks.
• RAID 6 – Requires 4–16 disks.
• RAID 10 – Requires 4–16 disks, with a minimum of two RAID-1 subgroups, each having two disks.
• RAID 50 (linear disk groups only) – Requires 6–32 disks, with a minimum of two RAID-5 subgroups, each having three disks.
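The disk-count limits from the table above can be captured in a small lookup for pre-validating a planned disk group; this sketch is illustrative only and omits levels not listed here (such as ADAPT):

```python
# Minimum and maximum disk counts per RAID level, as listed in Table 16.
RAID_DISK_LIMITS = {
    "RAID1": (2, 2),
    "RAID5": (3, 16),
    "RAID6": (4, 16),
    "RAID10": (4, 16),
    "RAID50": (6, 32),  # linear disk groups only
}

def disk_count_ok(level: str, disks: int) -> bool:
    """Return True if the disk count is valid for the given RAID level."""
    lo, hi = RAID_DISK_LIMITS[level]
    return lo <= disks <= hi
```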
NOTE: Disks that are already used or are not available for use are not populated in the table.
5. Click Add. If your disk group contains both 512n and 512e disks, a dialog box appears. Perform one of the following:
• To create the disk group, click Yes.
• To cancel the request, click No.
If the task succeeds, the new disk group appears in the Related Disk Groups table in the Pools topic.

Modifying a disk group

You can rename any virtual and read-cache disk group.
Unless a virtual pool consists exclusively of SSDs, if a virtual pool has more than one disk group and at least one volume that contains data, the system attempts to drain the disk group to be deleted by moving the volume data that it contains to other disk groups in the pool.
Adding single-ported disks to a disk group that contains dual-ported disks is supported. However, because single-ported disks are not fault-tolerant, a confirmation prompt will appear. NOTE: Expansion can take hours or days to complete, depending on the disk group's RAID level and size, disk speed, utility priority, and other processes running on the storage system. You can stop expansion only by deleting the disk group.
Global spares In the PowerVault Manager, you can designate a maximum of 64 global spares for disk groups that do not use the ADAPT RAID level. If a disk in any fault-tolerant virtual or linear disk group fails, a global spare—which must be the same size or larger and the same type as the failed disk—is automatically used to reconstruct the disk group. This is true of RAID 1, 5, 6, 10 for virtual disk groups and RAID 1, 3, 5, 6, 10, 50 for linear ones.
Add dedicated spares 1. In the Pools topic, select the linear pool for the disk group that you are modifying in the pools table. Then, select the disk group in the Related Disk Groups table. 2. Select Action > Manage Spares. The Manage Spares panel opens. 3. Check the Assign dedicated spares to the disk group box, then select the disk group in which you want the dedicated spare to reside. 4. In the Add New Spares section, click on available disks to select them. 5. Click Add Spares.
Verifying and scrubbing disk groups Verify a disk group If you suspect that a fault-tolerant, mirror or parity, disk group has a problem, run the Verify utility to check the disk group's integrity. For example, if you haven't checked the system for parity inconsistencies recently and are concerned about the disk health, verify its disk groups. The Verify utility analyzes the selected disk group to find and fix inconsistencies between its redundancy data and its user data.
3. Select Action > Disk Group Utilities. The Disk Group Utilities panel opens, showing the current job status. 4. Click Scrub Disk Group. A message confirms that the scrub has started. 5. Click OK. The panel shows the scrub's progress. Abort a disk group scrub 1. In the Pools topic, select the pool for the disk group that you are verifying in the pools table. Then, select the disk group in the Related Disk Groups table.
• During system operation, a disk group loses redundancy plus one more disk. For example, three disks are inaccessible in a RAID-6 disk group or two disks are inaccessible for other fault-tolerant RAID levels. The disk group will be automatically dequarantined if after 60 seconds the disk group status is FTOL, FTDN, or CRIT. Quarantine isolates the disk group from host access and prevents the system from changing the disk group status to OFFL.
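The quarantine trigger described above—losing one disk more than the level's redundancy allows—can be modeled as a simple check. This is an illustrative simplification (for example, RAID 10 and RAID 50 can sometimes survive multiple failures in different subgroups); it only mirrors the per-level counts stated in this guide:

```python
# Disk failures each fault-tolerant level can absorb, per the text above:
# RAID 6 tolerates 2 inaccessible disks; the other levels tolerate 1.
REDUNDANCY = {"RAID1": 1, "RAID3": 1, "RAID5": 1, "RAID6": 2, "RAID10": 1, "RAID50": 1}

def triggers_quarantine(level: str, inaccessible: int) -> bool:
    """True when the group has lost redundancy plus one more disk."""
    return inaccessible >= REDUNDANCY[level] + 1
```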
6 Working in the Volumes topic

Topics:
• Viewing volumes
• Creating a virtual volume
• Creating a linear volume
• Modifying a volume
• Copying a volume or snapshot
• Abort a volume copy
• Adding volumes to a volume group
• Removing volumes from a volume group
• Renaming a volume group
• Remove volume groups
• Rolling back a virtual volume
• Deleting volumes and snapshots
• Creating snapshots
• Resetting a snapshot
• Creating a replication set from the Volumes topic
• Initiating or scheduling a replication from the Volumes topic
Snapshots table in the Volumes topic

To see more information about a snapshot and any child snapshots taken of it, select the snapshot or volume that is associated with it in the volumes table. If it is not already selected, click the Snapshots tab. The snapshots and all related snapshots appear in the Snapshots table. The Snapshots table shows the following snapshot information. By default, the table shows 10 entries at a time.
• Name – Shows the name of the snapshot.
Replication Sets table in the Volumes topic

To see information about the replication set for a volume or volume group, select a volume in the volumes table. If it is not already selected, select the Replication Sets tab. The replication set appears in the Replication Sets table. The Replication Sets table shows the following information. By default, the table shows 10 entries at a time.
• Name – Shows the replication set name.
• Primary Volume – Shows the primary volume name.
○ Deleted – The schedule has been deleted.
• Task Type – Shows the type of schedule:
○ TakeSnapshot – The schedule creates a snapshot of a source volume.
○ ResetSnapshot – The schedule deletes the data in the snapshot and resets it to the current data in the volume from which the snapshot was created. The snapshot's name and other volume characteristics are not changed.
○ VolumeCopy – The schedule copies a source volume to a new volume.
4. Optional: Change the number of volumes to create. See the System configuration limits topic in the PowerVault Manager help for the maximum number of volumes supported per pool. 5. Optional: Specify a volume tier affinity setting to automatically associate the volume data with a specific tier, moving all volume data to that tier whenever possible. The default is No Affinity. For more information about the volume tier affinity feature, see About automated tiered storage. 6.
Modifying a volume You can change the name and cache settings for a volume. You can also expand a volume. If a virtual volume is not a secondary volume involved in replication, you can expand the size of the volume but not make it smaller. If a linear volume is neither the parent of a snapshot nor a primary or secondary volume, you can expand the size of the volume but not make it smaller. Because volume expansion does not require I/O to be stopped, the volume can continue to be used during expansion.
write data is not to be included in the copy, then you may safely leave the snapshot mounted. During a copy using snapshot modified data, the system takes the snapshot off line. Copy a virtual volume or snapshot Perform the following steps to copy a virtual volume or snapshot: 1. In the Volumes topic, select a virtual volume or snapshot. 2. Select Action > Copy Volume. The Copy Volume panel opens. 3. Optional: In the New Volume field, change the name for the new volume.
Removing volumes from a volume group You can remove volumes from a volume group. You cannot remove all volumes from a group. At least one volume must remain. Removing a volume from a volume group will ungroup the volumes but will not delete them. To remove all volumes from a volume group, see Removing volume groups. To see more information about a volume, hover the cursor over the volume in the table. Viewing volumes contains more details about the Volume Information panel that appears.
6. Click Yes to continue. Otherwise, click No. If you clicked Yes, the volume groups and their volumes are deleted and the volumes table is updated. Rolling back a virtual volume You can replace the data of a source volume or virtual snapshot with the data of a snapshot that was created from it. CAUTION: When you perform a rollback, the data that existed on the volume is replaced by the data on the snapshot. All data on the volume written since the snapshot was created is lost.
Creating snapshots You can create snapshots of selected virtual volumes or of virtual snapshots. You can create snapshots immediately or schedule snapshot creation. If the large pools feature is enabled, through use of the large-pools parameter of the set advanced-settings CLI command, the maximum number of volumes in a snapshot tree is limited to 9, base volume plus 8 snapshots.
Resetting a snapshot As an alternative to taking a new snapshot of a volume, you can replace the data in a standard snapshot with the current data in the source volume. The snapshot name and mappings are not changed. This feature is supported for all snapshots in a tree hierarchy. However, a virtual snapshot can only be reset to the parent volume or snapshot from which it was created. CAUTION: To avoid data corruption, unmount a snapshot from hosts before resetting the snapshot.
If a replication set is deleted, the internal snapshots created by the system for replication are also deleted. After the replication set is deleted, the primary and secondary volumes can be used like any other base volumes or volume groups. Primary volumes and volume groups The volume, volume group, or snapshot that will be replicated is called the primary volume or volume group. It can belong to only one replication set.
• If the replication set is deleted, any existing snapshots automatically created by snapshot history rules will not be deleted. You will be able to manage those snapshots like any other snapshots.
• Manually creating a snapshot will not increase the snapshot count associated with the snapshot history. Manually created snapshots are not managed by the snapshot history feature.
• The snapshot history feature generates a new name for the snapshot that it intends to create.
• If you selected the Scheduled check box, click OK. The Schedule Replications panel opens and you can set the options to create a schedule for replications. For more information on scheduling replications, see Initiating or scheduling a replication from the Volumes topic.
• Otherwise, you have the option to perform the first replication. Click Yes to begin the first replication, or click No to initiate the first replication later.
6. Specify a date and a time in the future to be the first instance when the scheduled task will run, and to be the starting point for any specified recurrence.
• To set the Date value, enter the current date in the format YYYY-MM-DD.
• To set the Time value, enter two-digit values for the hour and minutes and select either AM, PM, or 24H (24-hour clock). The minimum interval is one hour.
7. Optional: If you want the task to run more than once, select the Repeat check box.
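The Date and Time values above follow fixed formats (YYYY-MM-DD, and hh:mm with AM/PM when the 12-hour clock is used). If you script schedule creation, a helper like the following can render a timestamp into those fields; it is an illustrative sketch, not part of the product:

```python
from datetime import datetime

def schedule_fields(when: datetime) -> tuple:
    """Render a datetime into the Date, Time, and AM/PM values described above."""
    date = when.strftime("%Y-%m-%d")          # YYYY-MM-DD
    hour = when.hour % 12 or 12               # 12-hour clock: 0 -> 12
    meridiem = "AM" if when.hour < 12 else "PM"
    return date, "{:02d}:{:02d}".format(hour, when.minute), meridiem
```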
Delete a schedule from the Volumes topic Perform the following steps to delete a schedule from the Volumes topic: 1. Select Action > Manage Schedules. The Manage Schedules panel opens. 2. Select the schedule to delete. 3. Click Delete Schedule. A confirmation panel appears. 4. Click OK.
7 Working in the Mappings topic

Topics:
• Viewing mappings
• Mapping initiators and volumes
• View map details

Viewing mappings

The Mapping topic shows a tabular view of information about mappings that are defined in the system. By default, the table shows 20 entries at a time and is sorted first by host and second by volume. The mapping table shows the following information:
• Group.Host.Nickname. Identifies the initiators to which the mapping applies:
○ All Other Initiators.
redundancy mode is shown as Active-Active ULP. ULP uses the T10 Technical Committee of INCITS Asymmetric Logical Unit Access (ALUA) extensions, in SPC-3, to negotiate paths with aware host systems. Unaware host systems see all paths as being equal. If a host group or host is mapped to a volume or volume group, all of the initiators within that group will have an individual map to each volume that makes up the request.
Table 24. Available volume groups and volumes
Row description | Name | Type
A row with these values appears for a volume/snapshot that is grouped into a volume group. Select this row to apply map settings to all volumes/snapshots in this volume group. | volume-group-name.* | Group
A row with these values appears for each volume/snapshot. Select this row to apply map settings to this volume/snapshot.
4. Once the list is correct, to apply changes, click Apply or OK. A confirmation panel appears. To discard the changes instead of applying them, click Reset.
5. Click Yes to continue. Otherwise, click No. If you clicked Yes, the mapping changes are processed.
6. To close the panel, click Cancel.

Remove mappings

You can remove one or more selected mappings between initiators and volumes.
1. Perform one of the following:
• In the Mapping topic, select one or more mappings from the table.
8 Working in the Replications topic

Topics:
• About replicating virtual volumes in the Replications topic
• Viewing replications
• Querying a peer connection
• Creating a peer connection
• Modifying a peer connection
• Deleting a peer connection
• Creating a replication set from the Replications topic
• Modifying a replication set
• Deleting a replication set
• Initiating or scheduling a replication from the Replications topic
• Stopping a replication
• Suspending a replication
• Resuming a replication
• M
Using a volume group for a replication set enables you to make sure that multiple volumes are synchronized at the same time. When a volume group is replicated, snapshots of all of the volumes are created simultaneously. In doing so, it functions as a consistency group, ensuring consistent copies of a group of volumes. The snapshots are then replicated as a group. Even though the snapshots may differ in size, replication is not complete until all of the snapshots are replicated.
Figure 1. Process for initial replication
A – User view; B – Internal view; a – Primary system; b – Secondary system
Step 1: User initiates replication for the first time.
Step 2: Current primary volume contents replace S1 contents.
Step 3: S1 contents are fully replicated over the peer connection to counterpart S1, replacing S1 contents.
Step 4: S1 contents replace the secondary volume contents.
Figure 2. Process for subsequent replications
A – User view; B – Internal view; a – Primary system; b – Secondary system
Step 1: User initiates replication after the first replication has completed.
Step 2: S1 contents replace S2 contents.
Step 3: Current primary volume contents replace S1 contents.
Step 4: S1 contents replace the secondary volume contents.
Even though the internal snapshots are hidden from the user, they do consume snapshot space (and thus pool space) from the virtual pool. If the volume is the base volume for a snapshot tree, the count of maximum snapshots in the snapshot tree may include the internal snapshots for it even though they are not listed. Internal snapshots and internal volume groups count against system limits, but do not display.
NOTE: Using a volume group in a replication set ensures consistent simultaneous copies of the volumes in the volume group. This means that the state of all replicated volumes can be known when a disaster occurs since the volumes are synchronized to the same point in time. Accessing the data while keeping the replication set intact If you want to continue replicating changed data from the primary data center system, you will need to keep the replication set intact.
2. Create a peer connection between the backup system and the data center system, if necessary. 3. Create a replication set using the backup system’s volume or snapshot as the primary volume and the data center system as the secondary system. 4. Replicate the volume from the backup system to the data center system. Prepare the backup system for disaster recovery after the replication is complete 1. Delete the replication set. 2. Delete the volume on the backup system.
• Secondary Volume. Shows the secondary volume name. For replication sets that use volume groups, the secondary volume name is volume-group-name.* where .* signifies that the replication set contains more than one volume. If the volume is on the local system, the icon appears.
• Status. Shows the status of the replication set.
○ Not Ready – The replication set is not ready for replications because the system is still preparing the replication set.
2. If you did not select a peer connection from the Peer Connections table, enter the remote host port address to query in the text box. 3. Click OK. A processing dialog box appears while the remote port address is queried. If successful, detailed information about the remote system and controllers is displayed. An error message appears if the operation is unsuccessful.
replication operations, such as creating replication sets, initiating replications, or suspending replication operations. The system that does not have CHAP enabled will be unable to perform any replication operations, including modifying and deleting the peer connection. For full replication functionality for both systems, set up CHAP for a peer connection (see the following procedure).
NOTE: You can change protocols used in the peer connection between FC and iSCSI by modifying the peer connection to use the remote port address of the new protocol. 4. Enter the name and password of a user assigned a manage role on the remote system. 5. Click OK. The peer connection is modified and the Peer Connections table is updated. Deleting a peer connection You can delete a peer connection if there are no replication sets that belong to the peer connection.
Secondary volumes and volume groups When the replication set is created—either through the CLI or the PowerVault Manager—secondary volumes and volume groups are created automatically. Secondary volumes and volume groups cannot be mapped, moved, expanded, deleted, or participate in a rollback operation. Create a snapshot of the secondary volume or volume group and use the snapshot for mapping and accessing data.
○ low. Snapshots can be deleted. This parameter is unrelated to snapshot history, and because the default is never delete, snapshot history snapshots will normally not be affected in a low virtual memory situation. When this option is disabled, snapshot history will not be kept. If this option is disabled after a replication set has been established, any existing snapshots will be kept, but not updated.
• Discard. Discard the new replication request.
• Queue Latest. Take a snapshot of the primary volume and queue the new replication request. If the queue contained an older replication request, discard that older request. A maximum of one replication can be queued.
If the queue policy is set to Queue Latest and a replication is running and another is queued, you cannot change the queue policy to Discard. You must manually remove the queued replication before you can change the policy.
5.
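The queue-policy behavior described above can be sketched as a toy model: at most one replication is queued, Queue Latest replaces any older queued request, and Discard drops new requests while a replication runs. This model is illustrative only and is not the controller's implementation:

```python
class ReplicationQueue:
    """Toy model of the Discard / Queue Latest policies described above."""

    def __init__(self, policy="discard"):
        self.policy = policy      # "discard" or "queue-latest"
        self.running = None       # name of the replication in progress
        self.queued = None        # at most one queued replication

    def request(self, name):
        if self.running is None:
            self.running = name
        elif self.policy == "queue-latest":
            self.queued = name    # newer request replaces any older queued one
        # policy "discard": the new request is dropped while one is running

    def finish(self):
        """Complete the running replication and start the queued one, if any."""
        self.running, self.queued = self.queued, None
```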
NOTE: If you change the time zone of the secondary system in a replication set whose primary and secondary systems are in different time zones, you must restart the system to enable management interfaces to show proper time values for replication operations. If a replication fails, the system suspends the replication set. The replication operation will attempt to resume if it has been more than 10 minutes since the replication set was suspended.
• Either make sure the Time Constraint check box is cleared, which allows the schedule to run at any time, or select the check box to specify a time range within which the schedule should run.
• Either make sure the Date Constraint check box is cleared, which allows the schedule to run on any day, or select the check box to specify the days when the schedule should run.
8. Click OK. The schedule is created.

Stopping a replication

You can stop a replication on the primary system of a replication set.
Resume a replication NOTE: If CHAP is enabled on one system within a peer connection, be sure that CHAP is configured properly on the corresponding peer system before initiating this operation. For more information about configuring CHAP, see CHAP and replication. 1. In the Replications topic, select a replication set for which replications were suspended in the Replication Sets table. 2. Select Action > Resume Replication. 3. Click OK.
9 Working in the Performance topic

Topics:
• Viewing performance statistics
• Updating historical statistics
• Exporting historical performance statistics
• Resetting performance statistics

Viewing performance statistics

The Performance topic shows performance statistics for the following types of components: disks, disk groups, virtual pools, virtual tiers, host ports, controllers, and volumes. For more information about performance statistics, see About performance statistics.
Table 28. Historical performance System component Graph Description Disk, group, pool, tier Total IOPS Total number of read and write operations per second since the last sampling time. Disk, group, pool, tier Read IOPS Number of read operations per second since the last sampling time. Disk, group, pool, tier Write IOPS Number of write operations per second since the last sampling time.
Table 28. Historical performance (continued) System component Graph Description Tier Number of Page Moves In Number of pages moved into this tier from a different tier. Tier Number of Page Moves Out Number of pages moved out of this tier to other tiers. Tier Number of Page Rebalances Number of pages moved between disk groups in this tier to automatically load balance. Tier Number of Initial Allocations Number of pages that are allocated as a result of host writes.
Exporting historical performance statistics You can export historical performance statistics in CSV format to a file on the network. You can then import the data into a spreadsheet or other third-party application. The number of data samples downloaded is fixed at 100 to limit the size of the data file to be generated and transferred. The default is to retrieve all the available data, up to six months, aggregated into 100 samples. You can specify a different time range by specifying a start and end time.
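Because each export is capped at 100 samples, the effective sampling interval grows with the requested time range. A quick way to estimate the interval you will get for a given range (illustrative helper, not part of the product):

```python
from datetime import datetime, timedelta

def sample_interval(start, end, samples=100):
    """Estimate the interval between exported samples for a time range,
    given the fixed 100-sample cap described above."""
    return (end - start) / samples
```

For example, a one-day range yields one sample roughly every 14.4 minutes.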
10 Working in the banner and footer

Topics:
• Banner and footer overview
• Viewing system information
• Viewing certificate information
• Viewing connection information
• Viewing system date and time information
• Viewing user information
• Viewing health information
• Viewing event information
• Viewing capacity information
• Viewing host information
• Viewing tier information
• Viewing recent system activity

Banner and footer overview

The banner of the PowerVault Manager interface contains four panels t
Viewing certificate information By default, the system generates a unique SSL certificate for each controller. For the strongest security, you can replace the default system-generated certificate with a certificate issued from a trusted certificate authority. The Certificate Information panel shows information for the active SSL certificates that are stored on the system for each controller. Tabs A and B contain unformatted certificate text for each of the corresponding controllers.
Changing date and time settings You can change the storage system date and time, which appear in the date/time panel in the banner. It is important to set the date and time so that entries in system logs and notifications have correct time stamps. You can set the date and time manually or configure the system to use NTP to obtain them from a network-attached server. When NTP is enabled, and if an NTP server is available, the system time and date can be obtained from the NTP server.
Viewing user information The user panel in the banner shows the name of the signed-in user. Hover the cursor over this panel to display the User Information panel, which shows the roles, accessible interfaces, and session timeout for this user. The icon indicates that the panel has a menu. Click anywhere in the panel to change settings for the signed-in user (monitor role) or to manage all users (manage role). For more information on user roles and settings, see Managing users.
Viewing event information If you are having a problem with the system, review the event log before calling technical support. Information shown in the event log might enable you to resolve the problem. To view the event log, in the footer, click the events panel and select Show Event List. The Event Log Viewer panel opens. The panel shows a tabular view of the 1000 most recent events logged by either controller. All events are logged, regardless of notification settings.
When reviewing the event log, look for recent Critical, Error, or Warning events. For each, click the message to view additional information and recommended actions. Follow the recommended actions to resolve the problems.
Hover the cursor anywhere in the panel to display the Host I/O Information panel, which shows the current port IOPS and data throughput (MB/s) values for each controller. Viewing tier information The tier I/O panel in the footer shows a color-coded bar for each virtual pool (A, B, or both) that has active I/O. The bars are sized to represent the relative IOPS for each pool. Each bar contains a segment for each tier that has active I/O. The segments are sized to represent the relative IOPS for each tier.
A Other management interfaces

Topics:
• SNMP reference
• Using FTP and SFTP
• Using SMI-S
• Using SLP

SNMP reference

This appendix describes the Simple Network Management Protocol (SNMP) capabilities that Dell EMC storage systems support. This includes standard MIB-II, the FibreAlliance SNMP Management Information Base (MIB) version 2.2 objects, and enterprise traps. The storage systems can report their status through SNMP.
Enterprise traps Traps can be generated in response to events occurring in the storage system. These events can be selected by severity and by individual event type. A maximum of three SNMP trap destinations can be configured by IP address. Enterprise event severities are informational, minor, major, and critical. There is a different trap type for each of these severities. The trap format is represented by the enterprise traps MIB.
Table 30. FA MIB 2.
Table 30. FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitEventFilter Defines the event severity that will be logged by this connectivity unit. Settable only through the PowerVault Manager.
Table 30. FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitPortIndex Unique value for each connUnitPortEntry between 1 and connUnitNumPorts Unique value for each port, between 1 and the number of ports connUnitPortType Port type not-present (3), or n-port (5) for point-topoint topology, or l-port (6) connUnitPortFCClassCap Bit mask that specifies the classes of service capability of this port. If this is not applicable, returns all bits set to zero.
Table 30. FA MIB 2.
Table 31.
External details for connUnitSensorTable Table 32.
Table 32.
External details for connUnitPortTable Table 33. connUnitPortTable index and name values connUnitPortIndex connUnitPortName 0 Host Port 0 (Controller A) 1 Host Port 1 (Controller B) 2 Host Port 2 (Controller B) 3 Host Port 3 (Controller B) Configure SNMP event notification in the PowerVault Manager 1. Verify that the storage system’s SNMP service is enabled. See Enable or disable system-management settings. 2. Configure and enable SNMP traps. See Setting system notification settings. 3.
Download system logs Perform the following steps to download the system logs: 1. In the PowerVault Manager, prepare to use FTP/SFTP: a. Determine the network-port IP addresses of the system controllers. See Configuring controller network ports. b. Verify that the FTP/SFTP service is enabled on the system. See Enable or disable system-management settings. c. Verify that the user you will log in as has permission to use the FTP interface. See Adding, modifying, and deleting users. 2.
5. Enter: get managed-logs:log-type filename.zip where:
• log-type specifies the type of log data to transfer:
○ crash1, crash2, crash3, or crash4: One of the Storage Controller's four crash logs.
○ ecdebug: Expander Controller log.
○ mc: Management Controller log.
○ scdebug: Storage Controller log.
• filename is the file that contains the transferred data. Dell EMC recommends using a filename that identifies the system, controller, and date.
get managed-logs:scdebug Storage2-A_scdebug_2011_08_22.
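The download can also be scripted with a standard FTP client library. The sketch below uses Python's standard ftplib; the host address and credentials are placeholders, and treating the managed-logs name as an ordinary RETR target is an assumption based on the get syntax shown above (SFTP would require a third-party library such as paramiko):

```python
from ftplib import FTP

# Log types listed above.
_LOG_TYPES = {"crash1", "crash2", "crash3", "crash4", "ecdebug", "mc", "scdebug"}

def managed_log_source(log_type):
    """Build the source name used by the get command, e.g. managed-logs:scdebug."""
    if log_type not in _LOG_TYPES:
        raise ValueError("unknown log type: " + log_type)
    return "managed-logs:" + log_type

def fetch_managed_log(host, user, password, log_type, out_file):
    """Download one managed log over FTP (placeholder host and credentials)."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(out_file, "wb") as fh:
            # Equivalent to entering: get managed-logs:<log-type> <filename>
            ftp.retrbinary("RETR " + managed_log_source(log_type), fh.write)
```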
• date/time-range is optional and specifies the time range of data to transfer, in the format: start.yyyy-mm-dd.hh:mm.[AM| PM].end.yyyy-mm-dd.hh:mm.[AM|PM]. The string must contain no spaces.
• filename.csv is the file that contains the data. Dell EMC recommends using a filename that identifies the system, controller, and date.
get perf:start.2019-01-26.12:00.PM.end.2019-01-26.23:00.PM Storage2_A_20120126.csv
In FTP, wait for the message Operation Complete to appear.
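If you build the date/time-range argument in a script, a helper like the following renders two timestamps into the documented format; it is an illustrative sketch, and it renders the hour in 12-hour form per the stated hh:mm.[AM|PM] format:

```python
from datetime import datetime

def perf_time_range(start, end):
    """Build the start...end string in the format described above (no spaces)."""
    def part(label, when):
        hour = when.hour % 12 or 12
        meridiem = "AM" if when.hour < 12 else "PM"
        return "{}.{:%Y-%m-%d}.{:02d}:{:02d}.{}".format(
            label, when, hour, when.minute, meridiem)
    return part("start", start) + "." + part("end", end)
```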
• To ensure success of an online update, select a period of low I/O activity. This helps the update complete as quickly as possible and avoids disruptions to hosts and applications due to timeouts. Attempting to update a storage system that is processing a large, I/O-intensive batch job will likely cause hosts to lose connectivity with the storage system.

Updating controller module firmware

In a dual-controller system, both controller modules should run the same firmware version.
NOTE: If PFU is enabled for the system, after the firmware update has completed on both controllers, check the system health. If the system health is Degraded and the health reason indicates that the firmware version is incorrect, verify that you specified the correct firmware file. If this problem persists, contact technical support.

Updating expansion module firmware

An expansion enclosure can contain one or two expansion modules. Each expansion module contains an enclosure management processor (EMP).
You can update all disks or only specific disks. If you update all disks and the system contains more than one type of disk, the update is attempted on all disks in the system. The update succeeds only for disks whose type matches the file, and fails for disks of other types.

Prepare for update

1. Obtain the appropriate firmware file and download it to your computer or network.
2.
NOTE: If the update fails, verify that you specified the correct firmware file and try the update a second time. If it fails again, contact technical support.

7. If you are updating specific disks, repeat step 4 for each remaining disk to update.
8. Quit the FTP/SFTP session.
9. If the updated disks must be power cycled:
   a. Shut down both controllers by using the PowerVault Manager.
   b. Power cycle all enclosures as described in the Dell EMC PowerVault ME4 Series Storage System Deployment Guide.
10.
SMI-S replaces multiple disparate managed object models, protocols, and transports with a single object-oriented model for each type of component in a storage network. The specification was created by SNIA to standardize storage management solutions. SMI-S enables management applications to support storage devices from multiple vendors quickly and reliably because they are no longer proprietary. SMI-S detects and manages storage elements by type, not by vendor.
SMI-S implementation

SMI-S is implemented with the following components:
• CIM server (called a CIM Object Manager or CIMOM), which listens for WBEM requests (CIM operations over HTTP/HTTPS) from a CIM client, and responds.
• CIM provider, which communicates to a particular type of managed resource—for example, storage systems—and provides the CIMOM with information about them.
• enumerateInstanceNames
• associators
• associatorNames
• references
• referenceNames
• invokeMethod

SMI-S profiles

SMI-S is organized around profiles, which describe objects relevant for a class of storage subsystem. SMI-S includes profiles for arrays, FC HBAs, FC switches, and tape libraries. Profiles are registered with the CIM server and advertised to clients using SLP.

Table 34. Supported SMI-S profiles

Profile/subprofile/package    Description
Array profile                 Describes RAID array systems.
Table 34. Supported SMI-S profiles (continued)

Profile/subprofile/package           Description
Disk Sparing subprofile              Provides the ability to describe the current spare disk configuration, to allocate/de-allocate spare disks, and to clear the state of unavailable disk drives.
Object Manager Adapter subprofile    Allows the client to manage the Object Manager Adapters of an SMI Agent. In particular, it can be used to turn the indication service on and off.
Table 35. CIM alert indication events (continued)

FRU/Event category    Corresponding SMI-S class    Operational status values that would trigger alert conditions
SAS Port              SMI_SASTargetPort            Stopped, OK
iSCSI Port            SMI_ISCSIEthernetPort        Stopped, OK

Life cycle indications

The SMI-S interface provides CIM life cycle indications for changes in the physical and logical devices in the storage system.
Table 36. Life cycle indications (continued)

Profile or subprofile: Multiple Computer System
Element description and name: SELECT * FROM CIM_InstModification WHERE SourceInstance ISA CIM_ComputerSystem AND SourceInstance.OperationalStatus <> PreviousInstance.OperationalStatus
WQL or CQL: WQL

Profile or subprofile: Multiple Computer System
Element description and name: Send life cycle indication when a logical component degrades or upgrades the system. SELECT * FROM CIM_InstModification WHERE SourceInstance ISA CIM_RedundancySet AND SourceInstance.
2. In an SMI-S client:
   a. Subscribe using the SELECT * FROM CIM_InstCreation WHERE SourceInstance ISA CIM_LogicalFile filter.
   b. Subscribe using the SELECT * FROM CIM_InstDeletion WHERE SourceInstance ISA CIM_LogicalFile filter.

For more information about the managed logs feature, see About managed logs.

Testing SMI-S

Use an SMI-S certified client for SMI-S 1.5. Common clients include Microsoft System Center, IBM Tivoli, EMC CommandCenter, and CA Unicenter.
You can enable or disable the SLP service in the PowerVault Manager, as described in Enable or disable system-management settings on page 44, or by using the CLI set protocols command, as described in the Dell EMC PowerVault ME4 Series Storage System CLI Guide. If the SLP service is enabled, you can test it by using an open source tool, such as slptool from www.openslp.org. Table 39.
B Administering a log-collection system A log-collection system receives log data that is incrementally transferred from a storage system for which the managed logs feature is enabled, and is used to integrate that data for display and analysis. For information about the managed logs feature, see About managed logs. Over time, a log-collection system can receive many log files from one or more storage systems. The administrator organizes and stores these log files on the log-collection system.
In push mode, when the administrator receives an email with an attached ecdebug file from Storage1, the administrator would open the attachment and unzip it into the ecdebug subdirectory of the Storage1 directory. In pull mode, when the administrator receives notification that an SC debug log needs to be transferred from Storage2, the administrator would use the storage system’s FTP/SFTP interface to get the log and save it into the scdebug subdirectory of the Storage2 directory.
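The filing scheme described above (one directory per storage system, one subdirectory per log type) lends itself to automation on the log-collection system. The following is a minimal sketch under that assumption; the helper name is hypothetical, and it handles the unzip-into-subdirectory step common to both push and pull modes:

```python
import zipfile
from pathlib import Path

def store_log(root, system_name, log_type, archive_path):
    # Unzip a received log archive into <root>/<system>/<log type>/,
    # creating the directories on first use.
    dest = Path(root) / system_name / log_type
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)
    return dest
```

In the push-mode example above, an ecdebug attachment from Storage1 would land in the Storage1/ecdebug subdirectory; in pull mode, an SC debug log pulled from Storage2 would land in Storage2/scdebug.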
C Best practices

This appendix describes best practices for configuring and provisioning a storage system.

Topics:
• Pool setup
• RAID selection
• Disk count per RAID level
• Disk groups in a pool
• Tier setup
• Multipath configuration
• Physical port selection

Pool setup

In a storage system with two controller modules, try to balance the workload of the controllers. Each controller can own one virtual pool.
• Example 2: Consider a RAID-5 disk group with six disks. The equivalent of five disks now provide usable capacity. Assume the controller again uses a stripe unit of 512 KB. When a 4-MB page is pushed to the disk group, one stripe will contain a full page, but the controller must read old data and old parity from two of the disks in combination with the new data in order to calculate new parity. This is known as a read-modify-write, and it is a performance killer for sequential workloads.
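The arithmetic behind this example can be checked directly. The sketch below uses the 512-KB stripe unit and 4-MB page size from the text; the function name is illustrative:

```python
STRIPE_UNIT_KB = 512
PAGE_KB = 4096  # 4-MB virtual page

def page_vs_stripe(total_disks, parity_disks):
    # Compare the page size with the data stripe width to see whether
    # a page write maps cleanly onto full stripes or leaves a partial
    # stripe that forces a read-modify-write.
    data_disks = total_disks - parity_disks
    stripe_kb = data_disks * STRIPE_UNIT_KB
    full_stripes, leftover_kb = divmod(PAGE_KB, stripe_kb)
    return stripe_kb, full_stripes, leftover_kb

# RAID 5 with six disks (Example 2): 5 data disks give a 2560-KB
# stripe, so a 4-MB page leaves 1536 KB in a partial stripe. That
# covers only three of the five data disks, which is why old data and
# parity must be read back from the others (read-modify-write).
print(page_vs_stripe(6, 1))

# RAID 5 with five disks: 4 data disks give a 2048-KB stripe, so a
# 4-MB page maps to exactly two full-stripe writes with no remainder.
print(page_vs_stripe(5, 1))
```

This illustrates the usual rule of thumb of keeping the data-disk count a power of two so that pages divide evenly into stripes.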
Enabling MPIO on Windows

1. Start Server Manager if it is not already running.
2. In the Manage menu, select Add Roles and Features.
3. In the Add Roles and Features Wizard, select Role-based or feature-based installation.
4. Click Next.
5. Select the server from the pool and then click Next.
6. Click Next again to go to the feature selection window.
7. Select the Multipath I/O checkbox and then click Next.
8. Click Install.
9. When prompted, reboot the system.
D System configuration limits

The following table lists the system configuration limits for ME4 Series storage systems:

Table 42.
E Glossary of terms

The following table lists definitions of the terms used in ME4 Series publications:

Table 43. Glossary of ME4 Series terms

Term    Definition
2U12    An enclosure that is two rack units in height and can contain 12 disks.
2U24    An enclosure that is two rack units in height and can contain 24 disks.
5U84    An enclosure that is five rack units in height and can contain 84 disks.
AES     Advanced Encryption Standard.
AFA     All-flash array. A storage system that uses only SSDs, without tiering.
Table 43. Glossary of ME4 Series terms (continued)

Term    Definition
CNC     Converged Network Controller. A controller module whose host ports can be set to operate in FC or iSCSI mode, using qualified SFP and cable options. Changing the host-port mode is also known as changing the ports’ personality.
Table 43. Glossary of ME4 Series terms (continued)

Term         Definition
EMP          Enclosure management processor. An Expander Controller subsystem that provides SES data such as temperature, power supply and fan status, and the presence or absence of disks.
enclosure    A physical storage device that contains I/O modules, disk drives, and other FRUs. See also controller enclosure, expansion enclosure.
enclosure management processor    See EMP.
ESD          Electrostatic discharge.
ESM          Environmental Service Module.
Table 43. Glossary of ME4 Series terms (continued)

Term     Definition
IOPS     I/O operations per second.
IQN      iSCSI Qualified Name.
iSCSI    Internet SCSI.
iSNS     Internet Storage Name Service.
JBOD     “Just a bunch of disks.” See expansion enclosure.
LBA      Logical block address. The address used for specifying the location of a block of data.
Table 43. Glossary of ME4 Series terms (continued)

Term            Definition
network port    The Ethernet port on a controller module through which its Management Controller is connected to the network.
NTP             Network time protocol.
NV device       Nonvolatile device. The CompactFlash memory card in a controller module.
OID             Object Identifier. In SNMP, an identifier for an object in a MIB.
orphan data     See unwritable cache data.
Table 43. Glossary of ME4 Series terms (continued)

Term             Definition
quick rebuild    A virtual-storage feature that reduces the time that user data is less than fully fault-tolerant after a disk failure in a disk group. The quick-rebuild process rebuilds only data stripes that contain user data. Data stripes that have not been allocated to user data are rebuilt in the background.
RAID head        See controller enclosure.
RBOD             “RAID bunch of disks.” See controller enclosure.
Table 43. Glossary of ME4 Series terms (continued)

Term         Definition
SFTP         SSH File Transfer Protocol. A secure secondary interface for installing firmware updates, downloading logs, and installing security certificates and keys. All data sent between the client and server will be encrypted.
SHA          Secure Hash Algorithm.
shelf        See enclosure.
sideplane    A printed circuit board to which components connect longitudinally within an enclosure.
SLP          Service Location Protocol.
Table 43. Glossary of ME4 Series terms (continued)

Term    Definition
        • Performance, which uses SSDs (high speed)
        • Standard, which uses enterprise-class spinning SAS disks (10k/15k RPM, higher capacity)
        • Archive, which uses midline spinning SAS disks (<10k RPM, high capacity)
tier migration    The automatic movement of blocks of data, associated with a single virtual volume, between tiers based on the access patterns that are detected for the data on that volume.
tray              See enclosure.