Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide July 2021 Rev.
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2018 – 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents
Chapter 1: Getting started
Chapter 2: Working in the Home topic
Chapter 3: Working in the System topic
Chapter 4: Working in the Hosts topic
Chapter 5: Working in the Pools topic
Chapter 6: Working in the Volumes topic
Chapter 7: Working in the Mappings topic
Chapter 8: Working in the Replications topic
Chapter 9: Working in the Performance topic
Additional topics: SNMP reference; Using FTP and SFTP
1 Getting started PowerVault Manager is a web-based interface for configuring, monitoring, and managing the storage system. Each controller module in the storage system contains a web server, which is accessed when you sign in to the PowerVault Manager. You can access all functions from either controller in a dual-controller system. If one controller becomes unavailable, you can continue to manage the storage system from the partner controller.
● For an IPv6 network, type https://fd6e:23ce:fed3:19d1::1 to access controller module A. 4. If the storage system is running G275 firmware, sign in to the PowerVault Manager using the user name manage and password !manage. For more information about signing in, see Signing in and signing out on page 14. For more information about using these options, see Guided setup on page 34. If the storage system is running G280 firmware: a. Click Get Started. b.
NOTE:
● The help content in PowerVault Manager is not viewable if you use the Microsoft Edge browser that shipped with Windows 10.
● To see the help window, you must enable pop-up windows.
● To optimize the display, use a color monitor and set its color quality to the highest setting.
● To navigate beyond the Sign In page (with a valid user account):
○ For Internet Explorer, set the local-intranet security option to medium or medium-low.
1. In the first column to sort by, click its heading once or twice to reorder items. 2. In the second column to sort by, Shift+click its heading once or twice to reorder items. If you Shift+click a third time, the column is deselected. 3. Continue for each additional column to sort by. Using filters to find items with specified text To filter a multicolumn table, in the filter field above the table, enter the text to find. As you type, only items that contain the specified text remain shown.
or for one or more selected rows, and it can be displayed in row format or column format. The exported CSV file contains all of the data in the table, including information that is displayed in the hover panels.
1. Select one or more rows of data to export from a table that has an Export to CSV button.
2. Click Export to CSV. The Export Data to CSV panel opens.
3. Click All to export all of the data within the selected table, or click Selected to export only the selected rows.
4. Click Rows to export the data in row format, or click Columns to export the data in column format.
5. Click OK.
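If the exported data is processed outside of the PowerVault Manager, a short script can read it back in. The following is a minimal sketch, not part of the product: it assumes the table was exported in row format with a header row, and the file name and the Health column are examples only, so match the keys to the headers in your own export.

```python
import csv

# Read a CSV file exported from a PowerVault Manager table (hypothetical file name).
# Assumes row format: one header row followed by one row per table entry.
with open("exported_table.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Print one example column from each row; adjust the key to match your export's headers.
for row in rows:
    print(row.get("Health", ""))
```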
Some advantages of using virtual storage are: ● It allows performance to scale as the number of disks in the pool increases. ● It virtualizes physical storage, allowing volumes to share available resources in a highly efficient way. ● It allows a volume to be comprised of more than 16 disks. Virtual storage provides the foundation for data-management features such as thin provisioning, automated tiered storage, SSD read cache, and the quick rebuild feature.
Linear disk groups A linear disk group requires the specification of a set of disks, RAID level, disk group type, and a name. Whenever the system creates a linear disk group, it also creates an identically named linear pool at the same time. No further disk groups can be added to a linear pool. For maximum performance, all of the disks in a linear disk group must share the same classification, which is determined by disk type, size, and speed.
Table 2. RAID level comparison (continued)
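To make the capacity trade-offs behind the RAID level comparison concrete, the following sketch estimates usable capacity from disk count and per-disk size using standard mirroring and parity arithmetic. It is an illustration only, not an ME4-specific sizing tool, and it omits ADAPT because ADAPT usable capacity depends on the configured spare capacity.

```python
def usable_capacity(raid_level: str, disk_count: int, disk_size_tb: float) -> float:
    """Approximate usable capacity in TB for common RAID levels.

    Uses standard arithmetic only: mirroring (RAID 1/10) halves raw capacity,
    RAID 5 reserves one disk of parity, and RAID 6 reserves two.
    """
    if raid_level in ("RAID1", "RAID10"):
        return disk_count * disk_size_tb / 2  # mirrored: half the raw capacity
    parity_disks = {"NRAID": 0, "RAID0": 0, "RAID5": 1, "RAID6": 2}
    return (disk_count - parity_disks[raid_level]) * disk_size_tb


# Example: a 10-disk RAID 6 group of 1.8 TB disks yields about 14.4 TB usable.
print(usable_capacity("RAID6", 10, 1.8))
```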
All disks in an ADAPT disk group must be the same type (enterprise SAS, for example), and in the same tier, but can have different capacities. ADAPT is shown as a RAID level in the management interfaces. ADAPT disk groups use all available space to maintain fault tolerance, and data is spread evenly across all of the disks. When new data is added, new disks are added, or the system recognizes that data is not distributed across disks in a balanced way, it moves the data to maintain balance across the disk group.
Internal disk management SSDs use multiple algorithms to manage SSD endurance features. These include wear leveling, support for Unmap commands, and over-provisioning to minimize write amplification. Wear leveling Wear leveling is a technique for prolonging the service life of some kinds of erasable computer storage media, such as the flash memory used in SSDs.
When a read-cache group consists of one SSD, it automatically uses NRAID. When a read-cache group consists of two SSDs, it automatically uses RAID 0. For more information on SSDs, see About SSDs on page 19. About spares Spare disks are unused disks in your system that you designate to automatically replace a failed disk, restoring fault tolerance to disk groups in the system. Types of spares include: ● Dedicated spare. Reserved for use by a specific linear disk group to replace a failed disk.
NOTE: The physical capacity limit for a virtual pool is 512 TiB. When overcommit is enabled, the logical capacity limit is 1 PiB.
● When the overcommit feature is disabled, the host does not lose read or write access to the pool volumes when the pool reaches or exceeds the high threshold value.
● When the overcommit feature is enabled, the storage system sends the DATA PROTECT sense key with the additional sense Space allocation failed write protect to the host when the pool reaches or exceeds the high threshold value.
Linear volumes Linear volumes make use of a method of storing user data in sequential, fully allocated physical blocks. Mapping between the logical data presented to hosts and the physical location where it is stored is fixed, or static. About volume cache options You can set options that optimize reads and writes performed for each volume. It is recommended that you use the default settings.
● The Adaptive option works well for most applications: it enables adaptive read-ahead, which allows the controller to dynamically calculate the optimum read-ahead size for the current workload. ● The Stripe option sets the read-ahead size to one stripe. The controllers treat NRAID and RAID-1 disk groups internally as if they have a stripe size of 512 KB, even though they are not striped. ● Specific size options let you select an amount of data for all accesses.
When the status of a disk group in the Performance tier becomes critical (CRIT), the system automatically drains data from that disk group to disk groups using spinning disks in other tiers, provided that those disk groups can contain the data from the degraded disk group. This occurs because similar wear across the SSDs is likely, so more failures may be imminent. If a system only has one class of disk, no tiering occurs.
For ease of management, you can group 1 to 128 initiators that represent a server into a host. You can also group 1 to 256 hosts into a host group. Grouping enables you to perform mapping operations for all initiators in a host, or all initiators and hosts in a group, instead of for each initiator or host individually. An initiator must have a nickname to be added to a host, and an initiator can be a member of only one host. A host can be a member of only one group.
About operating with a single controller If you purchased a 2U controller enclosure with a single controller module, note that it does not offer redundant configuration and, in the case of controller failure, leaves the system at risk for data unavailability. For more information, see About data protection with a single controller. NOTE: If you are operating a system with a single controller, some functionality described in the documentation may be unavailable or not applicable to your system.
a modified snapshot cannot be reverted. If you want a virtual snapshot to provide the capability to revert the contents of the source volume or snapshot to when the snapshot was created, create a snapshot for this purpose and archive it so you do not change the contents. For snapshots, the reset snapshot feature is supported for all snapshots in a tree hierarchy. However, a snapshot can only be reset to the immediate parent volume or snapshot from which it was created.
4. The missing drive is placed back in its slot or the missing drive is detected and shows up. The status of the drive is LEFTOVER. 5. Metadata of the LEFTOVER drive is cleared and the drive joins the disk group. NOTE: If more than one drive in the disk group has a status of LEFTOVER, please contact technical support before proceeding with any action. 6. A copyback operation from the spare drive to the drive that joined the disk group begins. The status of the disk group is CPYBK. 7.
For more information about performance statistics, see Viewing performance statistics, Updating historical statistics, Exporting historical performance statistics, and Resetting performance statistics. About firmware updates Controller modules, expansion modules, and disk drives contain firmware that operates them. As newer firmware versions become available, they may be installed at the factory or at a customer maintenance depot, or they may be installed by storage-system administrators at customer sites.
● Management Controller (MC) log Each log-file type also contains system-configuration information. The capacity status of each log file is maintained, as well as the status of what data has already been transferred. Three capacity-status levels are defined for each log file: ● Need to transfer—The log file has filled to the threshold at which content needs to be transferred. This threshold varies for different log files.
About CloudIQ CloudIQ provides storage monitoring and proactive service, giving you information tailored to your needs, access to near real-time analytics, and the ability to monitor storage systems from anywhere at any time. CloudIQ simplifies storage monitoring and service by providing: ● Proactive serviceability that informs you about issues before they impact your environment.
A system and the FDE-capable disks in the system are initially unsecured but can be secured at any point. Until the system is secured, FDE-capable disks function exactly like disks that do not support FDE. Enabling FDE protection involves setting a passphrase and securing the system. Data that was present on the system before it was secured is accessible in the same way it was when it was unsecured.
2 Working in the Home topic The Home topic provides options to set up and configure your system and manage tasks, and displays an overview of the storage managed by the system. The content presented depends on the completion of all required actions in the Welcome panel. The standard Home topic is hidden by the Welcome panel until all required actions are complete.
6. Click Host Setup to access the Host Setup wizard and follow the prompts to continue provisioning your system by attaching hosts. For more information see Attaching hosts and volumes. Provisioning disk groups and pools The Storage Setup wizard guides you through each step of the process, including creating disk groups and pools in preparation for attaching hosts and volumes. NOTE: You can cancel the wizard at any time, but changes that are made in completed steps are saved.
Linear storage environments If you are operating in a linear storage environment, the Create Advanced Pools panel opens. Select Add Disk Groups and follow the instructions to manually create disk groups one at a time. Select Manage Spares and follow the instructions to manually select global spares. Click the icon for more information about options presented. Open the guided disk group and pool creation wizard 1.
Add and manage volumes in the Host Setup wizard The Volumes section of the wizard provides options for you to add and manage volumes. By default, the system presents one volume on each pool, with each volume size defaulting to 100 GB. The wizard lets you change the volume name and size and select the pool where the volume will reside. Follow the instructions in the wizard to create the volumes shown in the table. Be sure to balance volume ownership between controllers.
unallocated storage for the pool with the same information as the capacity top bar graph, but for the pool instead of the system. The bottom horizontal bar represents the size of the pool. The disk group utilization graph consists of a graph with vertical measurements. The size of each disk group in the virtual pool is proportionally represented by a horizontal section of the graph. Vertical shading for each disk group section represents the relative space allocated in that disk group.
This type of operation is not common, and you should consider your conflict resolution options carefully. To resolve this conflict, do either of the following: ● If the pool conflict was expected—for example, you want to access data on the disk group from pool A of the old system: 1. Unmount and unmap the LUNs from any host accessing volumes on the new system. 2. Stop I/O from hosts accessing any volumes on the new system and power down the new system. 3.
NTP server time is provided in the UTC time scale, which provides several options: ● To synchronize the times and logs between storage devices installed in multiple time zones, set all the storage devices to use UTC. ● To use the local time for a storage device, set its time zone offset. ● If a time server can provide local time rather than UTC, configure the storage devices to use that time server, with no further time adjustment.
and encryption. For information about configuring trap notifications, see Setting system notification settings. For information about the MIB, see SNMP reference. As a user with the manage role, you can modify or delete any user other than your current user. Users with the monitor role can change all settings for their own user except for user type and role. However, users with the monitor role can only view the settings for other users.
at least one uppercase character, one lowercase character, and one non-alphabetic character. A password can include printable UTF-8 characters except for a space or the following characters: " ' , < > \ ● Trap Host Address - If the account type is Trap Target, specify the network address of the host system that will receive SNMP traps. The value can be an IPv4 address, IPv6 address, or FQDN. Adding, modifying, and deleting users Add a user 1.
● To save your settings and close the panel, click Apply and Close. A confirmation panel appears. 5. Click OK to save your changes. Otherwise, click Cancel. Delete a user other than your current user 1. Log in as a user with the manage role and perform one of the following: ● In the Home topic, select Action > System Settings, then click the Manage Users tab. ● In the System topic, select Action > System Settings, then click the Manage Users tab.
● Controller A IP address: fd6e:23ce:fed3:19d1::1 ● Controller B IP address: fd6e:23ce:fed3:19d1::2 ● Gateway IP address: fd6e:23ce:fed3:19d1::3 CAUTION: Changing IP settings can cause management hosts to lose access to the storage system after the changes are applied in the confirmation step. After you set the type of controller network ports to use, you can configure domain names using the Domain Name Service (DNS). DNS accepts IPv4 and IPv6 address formats.
NOTE: The following IP addresses are reserved for internal use by the storage system: 169.254.255.1, 169.254.255.2, 169.254.255.3, 169.254.255.4, and 127.0.0.1. Because the storage system uses these addresses internally, do not use them anywhere in your network. 5. If you selected Auto, complete the remaining steps to allow the controllers to obtain IP addresses. 6. Click Apply. A confirmation panel appears. 7. Click OK. 8. Sign out and use the new IP address to access PowerVault Manager.
● Command Line Interface (CLI). An advanced-user interface that is used to manage the system and can be used to write scripts. SSH (secure shell) is enabled by default. The default port number for SSH is 22. Telnet is disabled by default, but you can enable it in the CLI. ● Storage Management Initiative Specification (SMI-S). Used for remote management of the system through your network. You can enable use of secure (encrypted) or unsecure (unencrypted) SMI-S: ○ Enable.
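The Command Line Interface described above can be scripted from a management host because SSH is enabled by default on port 22. The following is a minimal sketch using the third-party paramiko library (not part of the storage system). The address is a placeholder, the manage/!manage credentials are the G275 defaults mentioned earlier and should be replaced with your own, command syntax such as show disks should be confirmed against the CLI Guide, and some firmware versions may require an interactive shell (client.invoke_shell()) rather than exec_command.

```python
import paramiko

# Hypothetical management address; the default G275 credentials are shown only as an example.
HOST, USER, PASSWORD = "10.0.0.2", "manage", "!manage"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # convenient for a lab; pin host keys in production
client.connect(HOST, port=22, username=USER, password=PASSWORD)

# Run a single CLI command and print its output (see the CLI Guide for command syntax).
stdin, stdout, stderr = client.exec_command("show disks")
print(stdout.read().decode())

client.close()
```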
Setting system notification settings The Notifications tab provides options for you to set up and test several types of system notifications. ● Configuring SMTP settings. ● Sending notifications to email addresses when events occur in the system. ● Sending notifications to SNMP trap hosts. ● Enabling managed logs settings, which transfers log data to a log-collection system. For more information about the managed logs feature, see About managed logs.
Send email notifications Perform the following steps to send email notifications: 1. Perform one of the following to access the options in the Notifications tab: ● In the Home topic, select Action > System Settings, then click Notifications. ● In the System topic, select Action > System Settings, then click Notifications. ● In the footer, click the events panel and select Set Up Notifications. ● In the Welcome panel, select System Settings, and then click the Notifications tab. 2.
The default is public. 5. In the Write community field, enter the SNMP write password for your network. This string must differ from the read-community string. The value is case-sensitive and can have a maximum of 31 bytes. It can include any character except for the following: " < > The default is private. 6. In the Trap Host Address fields, enter the network addresses of hosts that are configured to receive SNMP traps. The values can be IPv4 addresses, IPv6 addresses, or FQDNs. 7.
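When checking that a configured trap host can receive notifications, it can help to confirm that trap packets actually arrive before involving a full SNMP manager. The sketch below is run on the trap host and only reports that datagrams arrived on UDP port 162 (the standard SNMP trap port, which usually requires elevated privileges to bind); it does not decode SNMP, so use an SNMP manager with the MIB to interpret trap contents.

```python
import socket

# Bind to the standard SNMP trap port on all interfaces (usually requires root/administrator rights).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 162))

print("Waiting for trap packets... press Ctrl+C to stop")
while True:
    data, (sender, port) = sock.recvfrom(4096)
    # Confirms arrival only; the payload is raw SNMP and is not decoded here.
    print(f"Received {len(data)} bytes from {sender}:{port}")
```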
Test managed logs notification settings Perform the following steps to test managed logs notification settings: 1. Configure your system to send a notification when managed logs are sent to the log collection system. 2. Click Test Managed Logs. A test event is sent to the log collection system. 3. Verify that the test notification reached the intended location. 4. Click OK. If there was an error in sending a test notification, event 611 is displayed in the confirmation.
In a dual-controller system, controller A is responsible for sending data to the SupportAssist server. If controller A is down, controller B sends data to the support server. Enable SupportAssist Perform the following steps to enable SupportAssist on an ME4 Series storage system: If the ME4 Series storage system does not have direct access to the Internet, configure a web proxy. See Configure SupportAssist to use a web proxy on page 53. 1.
● To manually place the system into maintenance mode, click Enable Maintenance. Placing the system into maintenance mode notifies SupportAssist not to create support tickets during planned system downtime. ● To manually remove the system from maintenance mode, click Disable Maintenance. ● To manually send storage system logs to SupportAssist, click Send Logs, and click Yes on the confirmation panel.
The number of ports that are displayed depends on the configuration of the system. CNC host ports can be configured as all FC or all iSCSI ports, or a combination of both. FC ports support use of qualified 8 Gb/s or 16 Gb/s SFPs. You can set FC ports to auto-negotiate the link speed or to use a specific link speed. iSCSI ports support use of qualified 1 Gb/s or 10 Gb/s SFPs, or qualified 10 Gb/s Direct Attach Copper (DAC) cables. iSCSI port speeds are auto-negotiated.
● Enable Jumbo Frames: Enables or disables support for jumbo frames. Allowing for 100 bytes of overhead, a normal frame can contain a 1400-byte payload whereas a jumbo frame can contain a maximum 8900-byte payload for larger data transfers. NOTE: Use of jumbo frames can succeed only if jumbo-frame support is enabled on all network components in the data path. ● iSCSI IP Version: Specifies whether IP values use Internet Protocol version 4 (IPv4) or version 6 (IPv6) format. IPv4 uses 32-bit addresses.
● Enable Authentication (CHAP): Enables or disables the use of the Challenge Handshake Authentication Protocol (CHAP). Enabling or disabling CHAP in this panel updates its setting in the Configure CHAP panel (available in the Hosts topic by selecting Action > Configure CHAP. CHAP is disabled by default. NOTE: CHAP records for iSCSI login authentication must be defined if CHAP is enabled. To create CHAP records, see Configuring CHAP. ● Link Speed: ○ Auto – Auto-negotiates the proper speed.
● To set the Time value, enter two-digit values for the hour and minutes and select either AM, PM, or 24H (24-hour clock). 5. If you want the task to run more than once, select the Repeat check box. ● Specify how often the task should repeat. Enter a number, and select the appropriate time unit. Replications can recur no less than 30 minutes apart. ● To allow the schedule to run without an end date, clear the End check box.
3 Working in the System topic Topics: • • • • • • • • • • Viewing system components Systems Settings panel Resetting host ports Rescanning disk channels Clearing disk metadata Updating firmware Changing FDE settings Configuring advanced settings Using maintenance mode Restarting or shutting down controllers Viewing system components The System topic enables you to see information about each enclosure and its physical components in front, rear, and tabular views. Components vary by enclosure model.
The following are descriptions of some Disk Information panel items: ● Power On Hours – Total number of hours that the disk has been powered on since it was manufactured. This value is updated in 30-minute increments. ● FDE State – FDE state of the disk. For more information about FDE states, see the Dell EMC PowerVault ME4 Series Storage System CLI Guide. ● FDE lock keys – FDE lock keys are generated from the FDE passphrase and manage locking and unlocking the FDE-capable disks in the system.
Table 9. Table view information Field Description Health Shows the health of the component: OK, Degraded, Fault, N/A, or Unknown. Type Shows the component type: enclosure, disk, power supply, controller module, network port, host port, expansion port, CompactFlash card, or I/O module (expansion module). Enclosure Shows the enclosure ID. Location Shows the location of the component. ● For an enclosure, the location is shown in the format Rack rack-ID.shelf-ID.
Table 9. Table view information (continued) Field Description
○ Error – The disk is present but not detected by the expander.
○ Unknown – Initial status when the disk is first detected or powered on.
○ Not Present – The disk slot indicates that no disk is present.
○ Unrecoverable – The disk is present but has unrecoverable errors.
○ Unavailable – The disk is present but cannot communicate with the expander.
○ Unsupported – The disk is present but is an unsupported type.
Clearing disk metadata You can clear metadata from a leftover disk to make it available for use. CAUTION: Only use this command when all disk groups are online and leftover disks exist. Improper use of this command may result in data loss. Do not use this command when a disk group is offline and one or more leftover disks exist. If you are uncertain whether to use this command, contact technical support for assistance.
updated. Any conditions that are detected are listed with their potential risks. For information about this command, see the Dell EMC PowerVault ME4 Series Storage System CLI Guide. ● If any unwritten cache data is present, the firmware update will not proceed. Before you can update the firmware, unwritten data must be removed from cache. For more information about the clear cache command, see the Dell EMC PowerVault ME4 Series Storage System CLI Guide.
expansion module enclosure management processor (EMP) to be updated. This task typically takes 2.5 minutes for each EMP in a drive enclosure. If the Storage Controller cannot be updated, the update operation is canceled. Verify that you specified the correct firmware file and repeat the update. If this problem persists, contact technical support. When firmware update on the local controller is complete, users are automatically signed out and the MC restarts.
You can specify to update all disks or only specific disks. If you specify to update all disks and the system contains more than one type of disk, the update will be attempted on all disks in the system. The update will only succeed for disks whose type matches the file, and will fail for disks of other types. Prepare to update disk-drive firmware 1. Follow the best practices in Best practices for firmware update. 2. Obtain the appropriate firmware file and download it to your computer or network. 3.
Table 10. Activity progress properties and values Property Value Time The date and time of the latest status update. Seconds The number of seconds this component has been active. Component The name of the object being processed. Status The status of the component representing its progress/completion state. ○ ACTIVE – The operation for this component is currently active and in progress. ○ OK – The operation for this component completed successfully and is now inactive.
The Full Disk Encryption panel opens with the FDE General Configuration tab selected. 2. Type a passphrase in the Passphrase field of the Set/Create Passphrase section. A passphrase is case-sensitive and can include 8–32 printable UTF-8 characters except for the following: , < > \ 3. Retype the passphrase in the Re-enter Passphrase field. 4. Perform one of the following: ● To secure the system now, click Secure, and then click Set. A dialog box confirms that the passphrase was changed successfully.
Repurposing the system You can repurpose a system to erase all data on the system and return its FDE state to unsecure. CAUTION: Repurposing a system erases all disks in the system and restores the FDE state to unsecure. Repurposing disks You can repurpose a disk that is no longer part of a disk group. Repurposing a disk resets the encryption key on the disk, deleting all data on the disk.
Change the SMART setting 1. In the System topic, select Action > Advanced Settings > Disk. 2. Set the SMART Configuration option to one of the following: ● Don’t Modify. Allows current disks to retain their individual SMART settings and does not change the setting for new disks added to the system. ● Enabled. Enables SMART for all current disks after the next rescan and automatically enables SMART for new disks added to the system. This option is the default. ● Disabled.
To configure a time period to suspend and resume DSD for all disks, see Scheduling drive spin down for available disks and global spares. DSD affects disk operations as follows: ● Spun-down disks are not polled for SMART events. ● Operations requiring access to disks may be delayed while the disks are spinning back up. Configure DSD for available disks and global spares 1. In the System topic, select Action > Advanced Settings > Disk. 2.
Change the synchronize-cache mode 1. In the System topic, select Action > Advanced Settings > Cache. 2. Set the Sync Cache Mode option to either: ● Immediate. Good status is returned immediately and cache content is unchanged. This is the default. ● Flush to Disk. Good status is returned only after all write-back data for the specified volume is flushed to disk. 3. Click Apply.
● Cache Power – Changes to write-through if cache backup power is not fully charged or fails. Enabled by default.
● CompactFlash – Changes to write-through if CompactFlash memory is not detected during POST, fails during POST, or fails while the controller is under operation. Enabled by default.
● Power Supply Failure – Changes to write-through if a power supply unit fails. Disabled by default.
● Fan Failure – Changes to write-through if a cooling fan fails. Disabled by default.
NOTE: If you choose to disable background disk group scrub, you can still scrub a selected disk group by using Action > Disk Group Utilities. Configure background scrub for disk groups 1. In the System topic, choose Action > Advanced Settings > System Utilities. 2. Set the options: ● Either select to enable, or clear to disable the Disk Group Scrub option. This option is enabled by default.
Using maintenance mode Enabling maintenance mode prevents SupportAssist from creating support tickets during planned system downtime. An ME4 Series storage system automatically enters maintenance mode during a user-initiated restart of a controller or during a firmware update. When the controller restart or firmware update is complete, the ME4 Series storage system automatically exits maintenance mode. NOTE: Maintenance mode can also be manually enabled or disabled on an ME4 Series storage system.
Perform a restart Perform the following steps to restart a controller: 1. Perform one of the following: ● In the banner, click the system panel and select Restart System. ● In the System topic, select Action > Restart System. The Controller Restart and Shut Down panel opens. 2. Select the Restart operation. 3. Select the controller type to restart: Management or Storage. 4. Select the controller module to restart: Controller A, Controller B, or both. 5. Click OK. A confirmation panel appears. 6. Click OK.
4 Working in the Hosts topic Topics: • • • • • • • • • • • • • Viewing hosts Create an initiator Modify an initiator Delete initiators Add initiators to a host Remove initiators from hosts Remove hosts Rename a host Add hosts to a host group Remove hosts from a host group Rename a host group Remove host groups Configuring CHAP Viewing hosts The Hosts topic shows a tabular view of information about initiators, hosts, and host groups that are defined in the system.
○ volume-group-name.*—The mapping applies to all volumes in this volume group. ● Access. Shows the type of access assigned to the mapping: ○ read-write—The mapping permits read and write access. ○ read-only—The mapping permits read access. ○ no-access—The mapping prevents access. ● LUN. Shows whether the mapping uses a single LUN or a range of LUNs (indicated by *). ● Ports. Lists the controller host ports to which the mapping applies. Each number represents corresponding ports on both controllers.
Add initiators to a host You can add existing named initiators to an existing host or to a new host. To add an initiator to a host, the initiator must be mapped with the same access, port, and LUN settings to the same volumes or volume groups as every other initiator in the host. 1. In the Hosts topic, select 1 through 128 named initiators to add to a host. 2. Select Action > Add to Host. The Add to Host panel opens. 3.
Remove hosts from a host group You can remove all except the last host from a host group. Removing a host from a host group will ungroup the host but will not delete it. 1. In the Hosts topic, select 1 through 256 hosts to remove from their host group. 2. Select Action > Remove from Host Group. The Remove from Host Group panel opens and lists the hosts to be removed. 3. Click OK. For the selected hosts, the Group value changes to --. Rename a host group You can rename a host group. 1.
NOTE: Enabling or disabling CHAP here will update its setting in the Advanced Settings tab in the Host Ports Settings panel. 4. Perform one of the following: ● To modify an existing record, select it. The record values appear in the fields below the CHAP records list for editing. You cannot edit the IQN. ● To add a new record, click New. 5. For a new record, in the Node Name (IQN) field, enter the IQN of the initiator.
5 Working in the Pools topic Topics: • • • • • • • • • • Viewing pools Adding a disk group Modifying a disk group Removing disk groups Expanding a disk group Managing spares Create a volume Changing pool settings Verifying and scrubbing disk groups Removing a disk group from quarantine Viewing pools The Pools topic shows a tabular view of information about the pools and disk groups that are defined in the system, as well as information for the disks that each disk group contains.
Related Disk Groups table When you select a pool in the pools table, the disk groups for it appear in the Related Disk Groups table. For selected pools, the Related Disk Groups table shows the following information: Table 12. Disk Groups table Field Description Name Shows the name of the disk group. Health Shows the health of the disk group: OK, Degraded, Fault, N/A, or Unknown. Pool Shows the name of the pool to which the disk group belongs. RAID Shows the RAID level for the disk group.
Table 12. Disk Groups table (continued) Field Description ● UP – Up. The disk group is online and does not have fault-tolerant attributes. Disks Shows the number of disks in the disk group. To see more information about a disk group, select the pool for the disk group in the pools table, then hover the cursor over the disk group in the Related Disk Groups table. The Disk Group Information panel opens and displays detailed information about the disk group.
Adding virtual disk groups The system supports a maximum of two pools, one per controller module: A and B. You can add up to 16 virtual disk groups for each virtual pool. If a virtual pool does not exist, the system will automatically add it when creating the disk group. Once a virtual pool and disk group exist, volumes can be added to the pool. Once you add a virtual disk group, you cannot modify it.
Table 16. Disk group options (continued) Option Description Assign to (optional, only appears for linear disk groups) For a system operating in Active-Active ULP mode, this option specifies the controller module to own the group. To let the system automatically load-balance groups between controller modules, select the Auto setting instead of Controller A or Controller B. RAID Level Select one of the following RAID levels when creating a virtual or linear disk group: ● RAID 1 – Requires 2 disks.
NOTE: The ADAPT RAID level does not have a dedicated spare option. 4. Select the disks that you want to add to the disk group from the table. NOTE: Disks that are already used or are not available for use are not populated in the table. 5. Click Add. If your disk group contains both 512n and 512e disks, a dialog box appears. Perform one of the following: ● To create the disk group, click Yes. ● To cancel the request, click No.
Removing disk groups You can delete a single disk group or select multiple disk groups and delete them in a single operation. By removing disk groups, you can also remove pools. Removing all disk groups within a pool will also trigger the automatic removal of the associated pool. If all disk groups for a pool have volumes assigned and are selected for removal, a confirmation panel will warn the user that the pool and all its volumes will be removed.
Table 17. Disk group expansion
● Linear – Expand available: Yes (excludes NRAID and RAID 1).
● Virtual – Expand available: No; add a new disk group to a virtual pool instead.
● ADAPT (virtual or linear) – Expand available: Yes.
When expanding a disk group, all disks in the disk group must be the same type (enterprise SAS, for example). Disk groups support a mix of 512n and 512e disks. However, for best performance, all disks should use the same sector format. For more information about disk groups, see About disk groups.
5. Click Modify. A confirmation panel appears. 6. Click Yes to continue. Otherwise click No. If you clicked Yes, the disk group expansion starts. 7. To close the confirmation panel, click OK. Managing spares The Manage Spares panel displays a list of current spares and lets you add and remove global spares for virtual and linear disk groups, and dedicated spares for linear disk groups. The options in the panel are dependent on the type of disk group selected.
4. To close the confirmation panel, click OK. Dedicated spares The Manage Spares panel consists of two sections. The top section lists the current spares in the system and includes information about each. The bottom section lists all the available disks that can be designated as spares and includes details about each disk. If you selected a linear disk group, this section displays disks that can be used as dedicated spares for the selected disk group. Click individual disks within the table to select them.
default is 75 percent. If the pool is not overcommitted, the event has an Informational severity. If the pool is overcommitted, the event has a Warning severity. ● High Threshold: When this percentage of virtual pool capacity has been used, event 462 is generated to alert the administrator to add capacity to the pool. This value is automatically calculated based on the available capacity of the pool minus 200 GB of reserved space. If the pool is not overcommitted, the event has an Informational severity.
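To illustrate how the automatically calculated high threshold relates to pool size, the sketch below applies the rule described above (available pool capacity minus 200 GB of reserved space) to a hypothetical pool capacity; the value reported by the system is authoritative.

```python
def high_threshold_percent(pool_capacity_gb: float, reserved_gb: float = 200.0) -> float:
    """Approximate high-threshold percentage: pool capacity minus the reserved space,
    expressed as a percentage of the pool. Illustrative only."""
    return (pool_capacity_gb - reserved_gb) / pool_capacity_gb * 100


# Example: a 10,000 GB pool gives a high threshold of roughly 98%.
print(round(high_threshold_percent(10_000), 1))
```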
4. Click Verify Disk Group. A message confirms that verification has started. 5. Click OK. The panel shows the progress of the disk group verification. Abort a disk group verification Perform the following steps to abort a disk group verification: 1. In the Pools topic, select the pool for the disk group that you are verifying in the pools table. 2. Select the disk group in the Related Disk Groups table. 3. Select Action > Disk Group Utilities.
NOTE: If the disk group is being scrubbed, but the Abort Scrub button is unavailable, a background scrub is in progress. To stop the background scrub, disable the Disk Group Scrub option as described in Configuring system utilities on page 72. 4. Click Abort Scrub. A message confirms that scrub has been aborted. 5. Click OK.
● If after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined. The inaccessible disks are marked as failed and the disk group status changes to critical (CRIT) or fault tolerant with a down disk (FTDN). If the inaccessible disks later come online, they are marked as leftover (LEFTOVR). ● The dequarantine command is used to manually remove a disk group from quarantine.
6 Working in the Volumes topic Topics: • • • • • • • • • • • • • • • • • Viewing volumes Creating a virtual volume Creating a linear volume Modifying a volume Copying a volume or snapshot Abort a volume copy Adding volumes to a volume group Removing volumes from a volume group Renaming a volume group Remove volume groups Rolling back a virtual volume Deleting volumes and snapshots Creating snapshots Resetting a snapshot Creating a replication set from the Volumes topic Initiating or scheduling a replicatio
NOTE: For more information about write policy and read-ahead size, see Modifying a volume. Snapshots table in the Volumes topic To see more information about a snapshot and any child snapshots taken of it, select the snapshot or volume that is associated with it in the volumes table. If it is not already selected, click the Snapshots tab. The snapshots and all related snapshots appear in the Snapshots table. The Snapshots table shows the following snapshot information.
Replication Sets table in the Volumes topic To see information about the replication set for a volume or volume group, select a volume in the volumes table. If it is not already selected, select the Replication Sets tab. The replication appears in the Replication Sets table. The Replication Sets table shows the following information. By default, the table shows 10 entries at a time. ● Name – Shows the replication set name. ● Primary Volume – Shows the primary volume name.
○ Deleted – The schedule has been deleted. ● Task Type – Shows the type of schedule: ○ TakeSnapshot – The schedule creates a snapshot of a source volume. ○ ResetSnapshot – The schedule deletes the data in the snapshot and resets it to the current data in the volume from which the snapshot was created. The snapshot's name and other volume characteristics are not changed. ○ VolumeCopy – The schedule copies a source volume to a new volume.
4. Optional: Change the number of volumes to create. See the System configuration limits topic in the PowerVault Manager help for the maximum number of volumes supported per pool. 5. Optional: Specify a volume tier affinity setting to automatically associate the volume data with a specific tier, moving all volume data to that tier whenever possible. The default is No Affinity. For more information about the volume tier affinity feature, see About automated tiered storage. 6.
Modifying a volume You can change the name and cache settings for a volume. You can also expand a volume. If a virtual volume is not a secondary volume involved in replication, you can expand the size of the volume but not make it smaller. If a linear volume is neither the parent of a snapshot nor a primary or secondary volume, you can expand the size of the volume but not make it smaller. Because volume expansion does not require I/O to be stopped, the volume can continue to be used during expansion.
To ensure the integrity of a copy of a virtual snapshot with modified data, unmount the snapshot or perform a system cache flush. The snapshot will not be available for read or write access until the copy is complete, at which time you can remount the snapshot. If modified write data is not to be included in the copy, then you may safely leave the snapshot mounted. During a copy using snapshot modified data, the system takes the snapshot off line.
Removing volumes from a volume group You can remove volumes from a volume group. You cannot remove all volumes from a group. At least one volume must remain. Removing a volume from a volume group will ungroup the volumes but will not delete them. To remove all volumes from a volume group, see Removing volume groups. To see more information about a volume, hover the cursor over the volume in the table. Viewing volumes contains more details about the Volume Information panel that appears.
2. In the Volumes topic, select a volume that belongs to each volume group that you want to remove. You can remove 1 through 100 volume groups at a time. 3. Select Action > Remove Volume Group. The Remove Volume Group panel opens and lists the volume groups to be removed. 4. Select the Delete Volumes check box. 5. Click OK. A confirmation panel appears. 6. Click Yes to continue. Otherwise, click No. If you clicked Yes, the volume groups and their volumes are deleted and the volumes table is updated.
Delete volumes and snapshots 1. Verify that hosts are not accessing the volumes and snapshots that you want to delete. 2. In the Volumes topic, select 1 through 100 items (volumes, snapshots, or both) to delete. 3. Select Action > Delete Volumes. The Delete Volumes panel opens with a list of the items to be deleted. 4. Click Delete. The items are deleted and the volumes table is updated. Creating snapshots You can create snapshots of selected virtual volumes or of virtual snapshots.
5. Click OK. ● If Scheduled is not selected, the snapshot is created. ● If Scheduled is selected, the schedule is created and can be viewed in the Manage Schedules panel. For information on modifying or deleting schedules through this panel, see Managing scheduled tasks. Resetting a snapshot As an alternative to taking a new snapshot of a volume, you can replace the data in a standard snapshot with the current data in the source volume. The snapshot name and mappings are not changed.
replications. By default, the secondary volume or volume group and infrastructure are created in the pool corresponding to the one for the primary volume or volume group (A or B). Optionally, you can select the other pool. A peer connection must be defined to create and use a replication set. A replication set can specify only one peer connection and pool. When creating a replication set, communication between the peer connection systems must be operational during the entire process.
● The snapshot retention count that you select must be greater than the number of existing snapshots in the replication set, regardless of whether snapshot history is enabled.
● If you select a snapshot retention count value that is less than the current number of snapshots, an error message is displayed. Thus, you must manually delete the excess snapshots before reducing the snapshot count setting.
● When the snapshot count is exceeded, the oldest unmapped snapshot will be discarded automatically.
● Modify the Snapshot Basename to change the snapshot name. The name is case sensitive and can have a maximum of 26 bytes. It cannot already exist in the system or include the following characters: " , < \ ● Set the Retention Priority to specify the snapshot retention priority. ● Optional: Select the Primary Volume Snapshot History check box to keep a snapshot history for the primary volume on the primary system 10. Optional: Select the Scheduled check box to schedule recurring replications. 11. Click OK.
Schedule a replication from the Volumes topic 1. In the Volumes topic, select a replication set in the Replication Sets table. 2. Select Action > Replicate. The Replicate panel opens. 3. Select the Schedule check box. 4. Enter a name for the replication schedule task. The name is case sensitive and can have a maximum of 32 bytes. It cannot already exist in the system or include the following: " , < \ 5.
● To allow the schedule to run on any day, clear the Date Constraint check box. To specify the days when the schedule can run, select the Date Constraint check box. 7. Click Apply. A confirmation panel appears. 8. Click OK. Delete a schedule from the Volumes topic Perform the following steps to delete a schedule from the Volumes topic: 1. Select Action > Manage Schedules. The Manage Schedules panel opens. 2. Select the schedule to delete. 3. Click Delete Schedule. A confirmation panel appears. 4. Click OK.
7 Working in the Mappings topic Topics: • • • Viewing mappings Mapping initiators and volumes View map details Viewing mappings The Mapping topic shows a tabular view of information about mappings that are defined in the system. By default, the table shows 20 entries at a time and is sorted first by host and second by volume. The mapping table shows the following information: ● Group.Host.Nickname. Identifies the initiators to which the mapping applies: ○ All Other Initiators.
The storage system uses Unified LUN Presentation (ULP), which can expose all LUNs through all host ports on both controllers. The interconnect information is managed in the controller firmware. ULP appears to the host as an active-active storage system where the host can choose any available path to access a LUN regardless of disk group ownership. When ULP is in use, the controllers' operating/redundancy mode is shown as Active-Active ULP.
Table 23. Available host groups, hosts, and initiators (continued) (columns: Row description, Group, Host, Nickname)
● Select a row to apply map settings to that initiator; the Nickname column shows the initiator ID or nickname.
The Available Volume Groups and Volumes table shows one or more of the following rows:
Table 24. Available volume groups and volumes (columns: Row description, Group, Name, Type)
● A row with Group set to volume-group-name appears for a volume/snapshot that is grouped into a volume group.
NOTE: When mapping a volume to a host with the Linux ext3 file system, specify read-write access. Otherwise, the file system will be unable to mount the volume and will report an error such as “unknown partition table.” ○ Ports. Port selections specify controller host ports through which initiators are permitted to access, or are prevented from accessing, the volume. Selecting a port number automatically selects the corresponding port in each controller.
● LUN. Shows whether the mapping uses a single LUN or a range of LUNs (indicated by *). By default, the table is sorted by this column. ● Ports. Lists the controller host ports to which the mapping applies. Each number represents corresponding ports on both controllers. 3. Click OK.
8 Working in the Replications topic Topics: • • • • • • • • • • • • • • About replicating virtual volumes in the Replications topic Viewing replications Querying a peer connection Creating a peer connection Modifying a peer connection Deleting a peer connection Creating a replication set from the Replications topic Modifying a replication set Deleting a replication set Initiating or scheduling a replication from the Replications topic Stopping a replication Suspending a replication Resuming a replication M
● A replication set for a volume group consumes two internal volume groups if the queue policy is set to Discard, or three if the queue policy is set to Queue Latest. Each internal volume group contains a number of volumes equal to the number of volumes in the base volume group. Internal snapshots and internal volume groups count against system limits, but do not display. Using a volume group for a replication set enables you to make sure that multiple volumes are synchronized at the same time.
Figure 1. Process for initial replication
Legend: A – User view; B – Internal view; a – Primary system; b – Secondary system
Step 1: User initiates replication for the first time.
Step 2: Current primary volume contents replace S1 contents.
Step 3: S1 contents are fully replicated over the peer connection to counterpart S1, replacing S1 contents.
Step 4: S1 contents replace the secondary volume contents.
Figure 2. Process for subsequent replications
Legend: A – User view; B – Internal view; a – Primary system; b – Secondary system
Step 1: User initiates replication after the first replication has completed.
Step 2: S1 contents replace S2 contents.
Step 3: Current primary volume contents replace S1 contents.
Step 4: S1 contents replace the secondary volume contents.
space used by the primary volume. At most, the two internal snapshots together for each volume may consume twice the amount of disk space as the primary volume from which they are snapped. Even though the internal snapshots are hidden from the user, they do consume snapshot space (and thus pool space) from the virtual pool. If the volume is the base volume for a snapshot tree, the count of maximum snapshots in the snapshot tree may include the internal snapshots for it even though they are not listed.
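As a rough planning aid, the worst case described above (the two internal snapshots for each replicated volume together consuming up to twice the space of the primary volume) can be estimated as shown below. The volume sizes are hypothetical, and actual snapshot usage depends on how much data changes between replications.

```python
def worst_case_internal_snapshot_space_gb(primary_volume_sizes_gb):
    """Upper bound on pool space the hidden replication snapshots may consume:
    up to twice the size of each replicated primary volume."""
    return sum(2 * size for size in primary_volume_sizes_gb)


# Example: a replication set covering volumes of 500, 250, and 100 GB.
print(worst_case_internal_snapshot_space_gb([500, 250, 100]))  # 1700 GB worst case
```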
Disaster recovery The replication feature supports manual disaster recovery only. It is not integrated with third-party disaster recovery software. Since replication sets of virtual volumes cannot reverse the direction of the replication, carefully consider how the replicated data will be accessed at the secondary backup site when a disaster occurs. NOTE: Using a volume group in a replication set ensures consistent simultaneous copies of the volumes in the volume group.
Manually transfer operations from the data center system to the backup system 1. Create a snapshot of the secondary volume, use a snapshot history snapshot, or delete the replication set. 2. Map the snapshot or the secondary volume, depending on the option that you choose in step 1, to hosts. Restore operations to the data center system 1. If the old primary volume still exists on the data center system, delete it.
Replication Sets table The Replication Sets table shows the following information. By default, the table shows 10 entries at a time. NOTE: If you change the time zone of the secondary system in a replication set whose primary and secondary systems are in different time zones, you must restart the system to enable management interfaces to show proper time values for replication operations. ● Name. Shows the replication set name. ● Primary Volume. Shows the primary volume name.
Querying a peer connection You can view information about systems you might use in a peer connection before creating the peer connection, or you can view information about systems currently in a peer connection before modifying the peer connection. Query a peer connection 1. In the Replications topic, do one of the following to display the Query Peer Connection panel: ● Select the peer connection to query in the Peer Connections table, then select Action > Query Peer Connection.
3. Enter the destination port address for the remote system. 4. Enter the name and password of a user with the manage role on the remote system. 5. Click OK. 6. If the task succeeds, click OK in the confirmation dialog. The peer connection is created and the Peer Connections table is updated. If the task does not succeed, the Create Peer Connection panel appears with errors in red text. Correct the errors, then click OK.
Changing the peer connection name will not affect the network connection so any running replications will not be interrupted. NOTE: Changing the remote port address will modify the network connection, which is permitted only if no replications are running and new replications are prevented from running. For the peer connection, stop any running replications and either suspend its replication sets or make sure its network connection is offline.
If a volume group is part of a replication set, volumes cannot be added to or deleted from the volume group. If a replication set is deleted, the internal snapshots created by the system for replication are also deleted. After the replication set is deleted, the primary and secondary volumes can be used like any other base volumes or volume groups. Primary volumes and volume groups The volume, volume group, or snapshot that will be replicated is called the primary volume or volume group.
● The snapshot number is incremented each time a replication is requested, whether or not the replication completes — for example, if the replication was queued and subsequently removed from the queue.
● If the replication set is deleted, any existing snapshots automatically created by snapshot history rules will not be deleted. You will be able to manage those snapshots like any other snapshots.
● If you selected the Scheduled check box, click OK. The Schedule Replications panel opens and you can set the options to create a schedule for replications. For more information on scheduling replications, see Initiating or scheduling a replication from the Replications topic. ● Otherwise, you have the option to perform the first replication. Click Yes to begin the first replication, or click No to initiate the first replication later.
snapshots like any other snapshots. For more information, see Maintaining replication snapshot history from the Replications topic. NOTE: If the peer connection is down and there is no communication between the primary and secondary systems, use the local-only parameter of the delete replication-set CLI command on both systems to delete the replication set. For more information, see the Dell EMC PowerVault ME4 Series Storage System CLI Guide. Delete a replication set 1.
Schedule a replication from the Replications topic 1. In the Replications topic, select a replication set from the Replication Sets table. 2. Select Action > Replicate. The Replicate panel opens. 3. Select the Schedule check box. 4. Type a name for the replication schedule task. The name is case sensitive and can have a maximum of 32 bytes. It cannot already exist in the system or include the following: " , < \ 5.
Suspending a replication You can suspend replication operations for a specified replication set, but only from its primary system. When you suspend a replication set, all replications in progress are paused and no new replications are allowed to occur. You can abort suspended replications.
NOTE: This option is unavailable when replicating volume groups. 5. Specify a date and a time in the future to be the first instance when the scheduled task will run, and to be the starting point for any specified recurrence. ● To set the Date value, enter the current date in the format YYYY-MM-DD. ● To set the Time value, enter two-digit values for the hour and minutes and select either AM, PM, or 24H (24-hour clock). 6. If you want the task to run more than once, select the Repeat check box.
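If you prepare schedule parameters in a script before entering them, the naming and date/time rules above are easy to check up front. The following sketch is illustrative only and is not part of the PowerVault Manager or the CLI; the example name and start time are made up.

```python
from datetime import datetime

FORBIDDEN = set('",<\\')  # characters a schedule name may not contain: " , < \

def valid_schedule_name(name: str) -> bool:
    """Apply the documented limits: at most 32 bytes and none of the forbidden characters."""
    return len(name.encode("utf-8")) <= 32 and not (set(name) & FORBIDDEN)

def parse_start(date_str: str, time_str: str, clock: str) -> datetime:
    """Parse a start date (YYYY-MM-DD) and time (hh:mm) with AM, PM, or 24H."""
    if clock in ("AM", "PM"):
        return datetime.strptime(f"{date_str} {time_str} {clock}", "%Y-%m-%d %I:%M %p")
    return datetime.strptime(f"{date_str} {time_str}", "%Y-%m-%d %H:%M")  # 24-hour clock

print(valid_schedule_name("Repl-Sales-Daily"))   # True
print(parse_start("2021-07-15", "11:30", "PM"))  # 2021-07-15 23:30:00
```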
9 Working in the Performance topic
Topics:
• Viewing performance statistics
• Updating historical statistics
• Exporting historical performance statistics
• Resetting performance statistics
Viewing performance statistics
The Performance topic shows performance statistics for the following types of components: disks, disk groups, virtual pools, virtual tiers, host ports, controllers, and volumes. For more information about performance statistics, see About performance statistics.
Table 28. Historical performance
● Total IOPS (disk, group, pool, tier): Total number of read and write operations per second since the last sampling time.
● Read IOPS (disk, group, pool, tier): Number of read operations per second since the last sampling time.
● Write IOPS (disk, group, pool, tier): Number of write operations per second since the last sampling time.
Table 28. Historical performance (continued)
… not cause any allocations. Pages are allocated as data is written.
● Number of Page Moves In (tier): Number of pages moved into this tier from a different tier.
● Number of Page Moves Out (tier): Number of pages moved out of this tier to other tiers.
● Number of Page Rebalances (tier): Number of pages moved between disk groups in this tier to automatically load balance.
Exporting historical performance statistics You can export historical performance statistics in CSV format to a file on the network. You can then import the data into a spreadsheet or other third-party application. The number of data samples downloaded is fixed at 100 to limit the size of the data file to be generated and transferred. The default is to retrieve all the available data, up to six months, aggregated into 100 samples. You can specify a different time range by specifying a start and end time.
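Because the exported statistics are a plain CSV file, they can be post-processed with any scripting tool. The sketch below is a generic example rather than a Dell EMC utility: the file name history.csv is a placeholder, and the script makes no assumption about which columns are present; it simply reports the peak value of every numeric column it finds.

```python
import csv

# Read the exported file and report the peak value of each numeric column.
peaks = {}
with open("history.csv", newline="") as f:
    for row in csv.DictReader(f):
        for column, value in row.items():
            try:
                number = float(value)
            except (TypeError, ValueError):
                continue  # skip non-numeric columns such as timestamps
            peaks[column] = max(number, peaks.get(column, number))

for column, peak in sorted(peaks.items()):
    print(f"{column}: {peak}")
```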
10 Working in the banner and footer
Topics:
• Banner and footer overview
• Viewing system information
• Viewing certificate information
• Viewing connection information
• Viewing system date and time information
• Viewing user information
• Viewing health information
• Viewing event information
• Viewing capacity information
• Viewing host information
• Viewing tier information
• Viewing recent system activity
Banner and footer overview
The banner of the PowerVault Manager interface contains four panels that show system information, connection information, system date and time information, and user information. The footer contains panels that show health, event, capacity, host, tier, and recent system activity information.
Viewing certificate information By default, the system generates a unique SSL certificate for each controller. For the strongest security, you can replace the default system-generated certificate with a certificate issued from a trusted certificate authority. The Certificate Information panel shows information for the active SSL certificates that are stored on the system for each controller. Tabs A and B contain unformatted certificate text for each of the corresponding controllers.
The icon indicates that the panel has a menu. Click anywhere in the panel to display a menu to change date and time settings. Changing date and time settings You can change the storage system date and time, which appear in the date/time panel in the banner. It is important to set the date and time so that entries in system logs and notifications have correct time stamps. You can set the date and time manually or configure the system to use NTP to obtain them from a network-attached server.
A confirmation panel appears. 6. Click Yes to save your changes. Otherwise, click No. Viewing user information The user panel in the banner shows the name of the signed-in user. Hover the cursor over this panel to display the User Information panel, which shows the roles, accessible interfaces, and session timeout for this user. The icon indicates that the panel has a menu. Click anywhere in the panel to change settings for the signed-in user (monitor role) or to manage all users (manage role).
● Otherwise, you are prompted to specify the file location and name. The default file name is store.zip. Change the name to identify the system, controller, and date. NOTE: The file must be uncompressed before the files it contains can be examined. The first file to examine for diagnostic data is store_yyyy_mm_dd__hh_mm_ss.logs. Viewing event information If you are having a problem with the system, review the event log before calling technical support.
○ Informational. A configuration or state change occurred, or a problem occurred that the system corrected. No action is required.
○ Resolved. A condition that caused an event to be logged has been resolved. No action is required.
● Date/Time. The date and time when the event occurred, shown in the format year-month-day hour:minutes:seconds. Time stamps have one-second granularity.
● ID. The event ID. The prefix A or B identifies the controller that logged the event.
● Code.
● Unallocated: The unallocated space for the disk groups, both total and by pool ● Uncommitted: For virtual disk groups, the uncommitted space in each pool (total space minus the allocated and unallocated space) and total uncommitted space Viewing host information The host I/O panel in the footer shows a pair of color-coded bars for each controller that has active I/O.
View notification history 1. Click the activity panel in the footer and select Notification History. The Notification History panel opens. 2. View activity notifications, using the navigation buttons. 3. Click Close when you are finished.
A Other management interfaces
Topics:
• SNMP reference
• Using FTP and SFTP
• Using SMI-S
• Using SLP
SNMP reference
This appendix describes the Simple Network Management Protocol (SNMP) capabilities that Dell EMC storage systems support. This includes standard MIB-II, the FibreAlliance SNMP Management Information Base (MIB) version 2.2 objects, and enterprise traps. The storage systems can report their status through SNMP.
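A quick way to confirm that the SNMP agent is reachable from a management host is to poll a standard MIB-II object such as sysDescr. The sketch below uses the third-party pysnmp library; the controller address and community string are placeholders, and the snippet should be treated as an illustration rather than a supported Dell EMC tool. Adjust it to whatever SNMP version and credentials are configured on your system.

```python
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, getCmd,
)

# Placeholder values: replace with your controller's network port address
# and the SNMP settings configured on the system.
CONTROLLER = "10.0.0.2"
COMMUNITY = "public"

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData(COMMUNITY, mpModel=0),          # SNMPv1; use mpModel=1 for v2c
        UdpTransportTarget((CONTROLLER, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication or error_status:
    print("SNMP query failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```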
Enterprise traps Traps can be generated in response to events occurring in the storage system. These events can be selected by severity and by individual event type. A maximum of three SNMP trap destinations can be configured by IP address. Enterprise event severities are informational, minor, major, and critical. There is a different trap type for each of these severities. The trap format is represented by the enterprise traps MIB.
Table 30. FA MIB 2.2 objects, descriptions, and values
Table 30. FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitRevsUnitId connUnitId of the connectivity unit that contains this revision table Same as connUnitId connUnitRevsIndex Unique value for each connUnitRevsEntry between 1 and connUnitNumRevs See External details for certain FA MIB 2.2 objects connUnitRevsRevId Vendor-specific string identifying a revision of a component of the connUnit String specifying the code version.
Table 30. FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitPortFCClassOp Bit mask that specifies the classes of service that are currently operational. If this is not applicable, returns all bits set to zero.
Table 33. connUnitPortTable index and name values (continued) connUnitPortIndex connUnitPortName 3 Host Port 3 (Controller B) Configure SNMP event notification in the PowerVault Manager 1. Verify that the storage system’s SNMP service is enabled. See Enable or disable system-management settings. 2. Configure and enable SNMP traps. See Setting system notification settings. 3. Optionally, configure a user account to receive SNMP traps. See Adding, modifying, and deleting users.
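Before relying on trap delivery, it can help to confirm that datagrams from the storage system actually reach the trap destination host. The sketch below is a bare-bones UDP listener for that check only; it does not decode the SNMP PDU (use a real SNMP manager or a library such as pysnmp for that), and binding to port 162 normally requires administrative privileges on the listening host.

```python
import socket

TRAP_PORT = 162   # standard SNMP trap port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", TRAP_PORT))
print(f"Listening for SNMP traps on UDP port {TRAP_PORT} ...")

while True:
    datagram, (source, _port) = sock.recvfrom(65535)
    # Only confirm arrival; decoding the trap contents requires an SNMP library.
    print(f"Received {len(datagram)} bytes from {source}")
```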
2. Open a Command Prompt (Windows) or a terminal window (UNIX) and go to the destination directory for the log file. 3. Type: sftp -P port controller-network-address or ftp controller-network-address. For example: sftp -P 1022 10.235.216.152 or ftp 10.1.0.9 4. Log in as a user that has permission to use the FTP/SFTP interface. 5. Type: get logs filename.zip where filename is the file that contains the logs. Dell EMC recommends using a filename that identifies the system, controller, and date.
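If you collect logs regularly, the interactive session above can be scripted. The following sketch uses the third-party paramiko library to perform the same get logs transfer over SFTP. The address, port, credentials, and output file name are placeholders, and the sketch assumes the FTP/SFTP service and user permissions described above are already configured.

```python
import paramiko

# Placeholder connection details; use your controller's network address,
# the SFTP port configured on the system (1022 in the example above),
# and a user with permission to use the FTP/SFTP interface.
HOST, PORT = "10.235.216.152", 1022
USER, PASSWORD = "manage", "password"

transport = paramiko.Transport((HOST, PORT))
try:
    transport.connect(username=USER, password=PASSWORD)
    sftp = paramiko.SFTPClient.from_transport(transport)
    # "logs" is the remote name used by the interactive "get logs" command;
    # the local name should identify the system, controller, and date.
    sftp.get("logs", "Storage2-A_logs_2021-07-01.zip")
    sftp.close()
finally:
    transport.close()
```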
○ crash1, crash2, crash3, or crash4: One of the Storage Controller’s four crash logs. ○ ecdebug: Expander Controller log. ○ mc: Management Controller log. ○ scdebug: Storage Controller log. ● filename is the file that contains the transferred data. Dell EMC recommends using a filename that identifies the system, controller, and date. get managed-logs:scdebug Storage2-A_scdebug_2011_08_22.zip In FTP, wait for the message Operation Complete to appear.
● date/time-range is optional and specifies the time range of data to transfer, in the format: start.yyyy-mm-dd.hh:mm.[AM|PM].end.yyyy-mm-dd.hh:mm.[AM|PM]. The string must contain no spaces. ● filename.csv is the file that contains the data. Dell EMC recommends using a filename that identifies the system, controller, and date. For example: get perf:start.2019-01-26.12:00.PM.end.2019-01-26.23:00.PM Storage2_A_20120126.csv In FTP, wait for the message Operation Complete to appear.
● To ensure success of an online update, select a period of low I/O activity. This helps the update complete as quickly as possible and avoids disruptions to hosts and applications due to timeouts. Attempting to update a storage system that is processing a large, I/O-intensive batch job will likely cause hosts to lose connectivity with the storage system. Updating controller module firmware In a dual-controller system, both controller modules should run the same firmware version.
8. Quit the FTP/SFTP session. 9. Clear your web browser cache, and then sign in to the PowerVault Manager. If PFU is running on the controller you sign in to, a dialog box shows PFU progress and prevents you from performing other tasks until PFU is complete. NOTE: If PFU is enabled for the system, after firmware update has completed on both controllers, check the system health.
10. Quit the FTP session. 11. Verify that each updated expansion module has the correct firmware version. Updating disk firmware You can update disk firmware by loading a firmware file obtained from your reseller. Disks can be updated from either controller. NOTE: Disks of the same model in the storage system must have the same firmware revision. You can specify to update all disks or only specific disks.
put firmware-file disk:enclosure-ID:slot-number For example: put AS10.bin disk:1:11 CAUTION: Do not power cycle enclosures, or restart a controller during the firmware update. If the update is interrupted or there is a power failure, the disk might become inoperative. If this issue occurs, contact technical support. It typically takes several minutes for the firmware to load. In FTP, wait for the message Operation Complete to appear. No messages are displayed in SFTP.
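When many disks need the same update, the put operation can be scripted, provided the cautions above are respected: let each disk finish before starting the next one, and do not interrupt power during the update. The sketch below is only an illustration; it assumes the controller's FTP service accepts a standard STOR of the firmware file to the disk:enclosure-ID:slot-number target, exactly as the interactive put command does, and the address, credentials, firmware file, and slot numbers are placeholders.

```python
from ftplib import FTP

CONTROLLER = "10.1.0.9"                  # placeholder controller network address
USER, PASSWORD = "manage", "password"    # placeholder credentials
FIRMWARE_FILE = "AS10.bin"               # firmware file obtained from your reseller
TARGETS = ["disk:1:11", "disk:1:12"]     # placeholder enclosure-ID:slot-number targets

ftp = FTP(CONTROLLER)
ftp.login(USER, PASSWORD)
for target in TARGETS:
    with open(FIRMWARE_FILE, "rb") as firmware:
        # Equivalent to typing "put AS10.bin disk:1:11" in an interactive FTP client.
        print(f"Updating {target} ...")
        response = ftp.storbinary(f"STOR {target}", firmware)
        print(response)                  # wait for the controller's completion response
ftp.quit()
```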
where key-file-name is the name of the security key file for your specific system. 7. Restart both Management Controllers to have the new security certificate take effect. Using SMI-S This appendix provides information for network administrators who are managing the storage system from a storage management application through the Storage Management Initiative Specification (SMI-S).
● Storage Enclosure subprofile (if expansion enclosures are attached)
● Disk Sparing subprofile
● Object Manager Adapter subprofile
● Thin Provisioning profile
● Pools from Volumes profile
The embedded SMI-S provider supports:
● HTTPS using SSL encryption on the default port 5989, or standard HTTP on the default port 5988. Both ports cannot be enabled at the same time.
● Implementation Namespace - root/smis
● Interop Namespace - root/interop
The embedded provider set includes the following providers:
● Instance Provider
● Association Provider
● Method Provider
● Indication Provider
The embedded provider supports the following CIM operations:
● getClass
● enumerateClasses
● enumerateClassNames
● getInstance
● enumerateInstances
● enumerateInstanceNames
● associators
● associatorNames
● references
● referenceNames
● invokeMethod
SMI-S profiles
SMI-S is organized around profiles, which describe the objects and capabilities that are relevant for a particular class of storage subsystem.
Table 34. Supported SMI-S profiles (continued)
● Extent Composition: Provides an abstraction of how it virtualizes exposable block storage elements from the underlying Primordial storage pool.
● Location subprofile: Models the location details of the product and its sub-components.
● Sensors profile: Specializes the DMTF Sensors profile.
● Software Inventory profile: Models installed and available software and firmware.
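A WBEM client can exercise the CIM operations listed above directly. The sketch below uses the third-party pywbem library to connect to the secure SMI-S port and enumerate instance names in the implementation namespace. The address, credentials, and class name are placeholders, and certificate verification is disabled here only to keep the illustration short; verify certificates in any real deployment.

```python
import pywbem

# Placeholder address and credentials for a user with SMI-S access.
URL = "https://10.0.0.2:5989"          # default secure SMI-S port
CREDENTIALS = ("manage", "password")

conn = pywbem.WBEMConnection(
    URL,
    CREDENTIALS,
    default_namespace="root/smis",     # implementation namespace listed above
    no_verification=True,              # illustration only; verify certificates in production
)

# enumerateInstanceNames is one of the CIM operations the provider supports.
for path in conn.EnumerateInstanceNames("CIM_StorageVolume"):
    print(path)
```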
In a dual-controller configuration, both controller A and B alert events are sent via controller A’s SMI-S provider. The event categories in the following table pertain to FRU assemblies and certain FRU components. Table 35.
Table 36. Life cycle indications (continued) Profile or subprofile Element description and name WQL or CQL Send life cycle indication when a create or delete operation completes for a volume, LUN, or snapshot. Masking and Mapping SELECT * FROM CIM_InstCreation WHERE SourceInstance ISA CIM_AuthorizedSubject Both Send life cycle indication when a host privilege is created or deleted.
Configure access to the SMI-S interface for other users 1. Log in as a user with the manage role that also has access to the SMI-S interface. 2. If the user does not exist, create the user using the following command: create user interfaces wbi,cli,smis,ftp roles manage username 3.
Table 38. Troubleshooting (continued) Problem Cause Solution SMI-S is not responding to client requests. SMI-S configuration may have become corrupted. Use the CLI command reset smis-configuration. For more information, see the Dell EMC PowerVault ME4 Series Storage System CLI Guide. Using SLP ME4 Series storage systems support Service Location Protocol (SLP, srvloc), which is a service discovery protocol that allows computers and other devices to find services in a LAN without prior configuration.
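On a Linux management host with OpenSLP installed, the slptool utility can browse whatever the system advertises. The sketch below just wraps slptool with subprocess; it makes no assumption about the specific ME4 Series service-type names: it first asks the network which service types exist and then lists the URLs for each.

```python
import subprocess

def slp_query(*args: str) -> list[str]:
    """Run slptool (OpenSLP) on the management host and return its output lines."""
    result = subprocess.run(
        ["slptool", *args], capture_output=True, text=True, check=True
    )
    return [line for line in result.stdout.splitlines() if line]

# Discover which service types are being advertised on the LAN, then list
# the URLs registered for each of them.
for service_type in slp_query("findsrvtypes"):
    print(service_type)
    for url in slp_query("findsrvs", service_type):
        print("  ", url)
```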
B Administering a log-collection system A log-collection system receives log data that is incrementally transferred from a storage system for which the managed logs feature is enabled, and is used to integrate that data for display and analysis. For information about the managed logs feature, see About managed logs. Over time, a log-collection system can receive many log files from one or more storage systems. The administrator organizes and stores these log files on the log-collection system.
Storing log files It is recommended to store log files hierarchically by storage-system name, log-file type, and date/time. Then, if historical analysis is required, the appropriate log-file segments can easily be located and can be concatenated into a complete record.
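The hierarchical layout described above is straightforward to automate on the log-collection system. The sketch below shows one possible filing scheme; the directory paths are placeholders, and the parsing assumes file names follow the system_logtype_date convention used in the transfer examples earlier in this guide (for example, Storage2-A_scdebug_2011_08_22.zip), with the date/time preserved in the file name itself.

```python
from pathlib import Path
import shutil

INCOMING = Path("/srv/logs/incoming")      # where transferred log files arrive (placeholder)
ARCHIVE = Path("/srv/logs/archive")        # root of the hierarchical store (placeholder)

def file_log(log_file: Path) -> Path:
    """File one log segment as archive/<system>/<log type>/<original name>."""
    system, log_type = log_file.stem.split("_")[:2]
    destination = ARCHIVE / system / log_type / log_file.name
    destination.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(log_file), destination)
    return destination

for incoming_file in sorted(INCOMING.glob("*.zip")):
    print("Stored", file_log(incoming_file))
```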
C Best practices
This appendix describes best practices for configuring and provisioning a storage system.
Topics:
• Pool setup
• RAID selection
• Disk count per RAID level
• Disk groups in a pool
• Tier setup
• Multipath configuration
• Physical port selection
Pool setup
In a storage system with two controller modules, try to balance the workload of the controllers. Each controller can own one virtual pool.
data disks and the one disk providing parity is the parity disk. In reality, the parity is distributed among all the disks, but conceiving of it in this way helps with the example. Note that the number of data disks is a power of two (2, 4, and 8). The controller uses a 512-KB stripe unit size when the number of data disks is a power of two. This results in a 4-MB page being evenly distributed across two stripes, which is ideal for performance. ● Example 2: Consider a RAID-5 disk group with six disks. In this case there are five data disks, which is not a power of two, so a 4-MB page cannot be distributed evenly across whole stripes, which is less efficient for performance.
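The arithmetic in these examples is easy to verify. The sketch below reproduces the reasoning for a 4-MB page and a configurable stripe-unit size; the 512-KB stripe unit comes from the text, and it is reused unchanged for the five-data-disk case purely to show why the page no longer divides evenly (the stripe unit the controller actually chooses in that case is not stated here).

```python
PAGE_KIB = 4 * 1024           # virtual pools allocate in 4-MB pages

def stripe_fit(data_disks: int, stripe_unit_kib: int = 512) -> str:
    """Report how a 4-MB page maps onto full stripes of a disk group."""
    full_stripe_kib = data_disks * stripe_unit_kib
    stripes, remainder = divmod(PAGE_KIB, full_stripe_kib)
    if remainder == 0:
        return f"{data_disks} data disks: page fills exactly {stripes} stripes (ideal)"
    return (f"{data_disks} data disks: page spans {stripes} full stripes "
            f"plus a partial stripe of {remainder} KB (not ideal)")

# Example 1 from the text: power-of-two data disk counts
for disks in (2, 4, 8):
    print(stripe_fit(disks))

# Example 2 from the text: RAID 5 with six disks, that is five data disks
print(stripe_fit(5))
```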
Multipath configuration
ME4 Series storage systems comply with the SCSI-3 standard for Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide optimal and non-optimal path information to the host during device discovery, but the operating system must be directed to use ALUA. Use one of the following procedures to direct Windows and Linux systems to use ALUA and enable MPIO.
Enabling MPIO on Windows
Physical port selection
In a system configured to use either all FC or all iSCSI ports, use the ports in the following order:
1. A0,B0
2. A2,B2
3. A1,B1
4. A3,B3
The reason for this order is that each pair of ports (A0,A1 or A2,A3) is connected to a dedicated CNC chip. If you are not using all four ports on a controller, it is best to use one port from each pair (A0,A2) to ensure better I/O balance on the front end.
D System configuration limits
The following table lists the system configuration limits for ME4 Series storage systems:
Table 42. System configuration limits
E Glossary of terms The following table lists definitions of the terms used in ME4 Series publications: Table 43. Glossary of ME4 Series terms Term Definition 2U12 An enclosure that is two rack units in height and can contain 12 disks. 2U24 An enclosure that is two rack units in height and can contain 24 disks. 5U84 An enclosure that is five rack units in height and can contain 84 disks. AES Advanced Encryption Standard. AFA All-flash array. A storage system that uses only SSDs, without tiering.
Table 43. Glossary of ME4 Series terms (continued) Term Definition CIMOM Common Information Model Object Manager A component in CIM that handles the interactions between management applications and providers. CNC Converged Network Controller A controller module whose host ports can be set to operate in FC or iSCSI mode, using qualified SFP and cable options. Changing the host-port mode is also known as changing the ports’ personality.
Table 43. Glossary of ME4 Series terms (continued) Term Definition dynamic spare An available compatible disk that is automatically assigned, if the dynamic spares option is enabled, to replace a failed disk in a disk group with a fault-tolerant RAID level. See also available disk, compatible disk, dedicated spare, global spare. EBOD Expanded Bunch of Disks. Expansion enclosure attached to a controller enclosure. EC Expander Controller.
Table 43. Glossary of ME4 Series terms (continued) Term Definition HBA Host bus adapter. A device that facilitates I/O processing and physical connectivity between a host and the storage system. host A user-defined group of initiators that represents a server. host group A user-defined group of hosts for ease of management, such as for mapping operations. host port A port on a controller module that interfaces to a host computer, either directly or through a network switch.
Table 43. Glossary of ME4 Series terms (continued) Term Definition volume, and a LUN that identifies the volume to the host system. See also default mapping, explicit mapping, masking. masking A volume-mapping setting that specifies no access to that volume by hosts. See also default mapping, explicit mapping. MC Management Controller.
Table 43. Glossary of ME4 Series terms (continued) Term Definition to each other, and they both maintain a peer connection with the other. Asynchronous replication of volumes may occur in either direction between peer systems configured in a peer connection. See also peer connection. PFU Partner firmware update. The automatic update of the partner controller when the user updates firmware on one controller. PGR Persistent group reservations.
Table 43. Glossary of ME4 Series terms (continued) Term Definition SBB Storage Bridge Bay. A specification that standardizes physical, electrical, and enclosure-management aspects of storage enclosure design. SC Storage Controller. A processor (located in a controller module) that is responsible for RAID controller functions. The SC is also referred to as the RAID controller. See also EC, MC. secondary system The storage system that contains a replication set’s secondary volume. See also primary system.
Table 43. Glossary of ME4 Series terms (continued) Term Definition SNIA Storage Networking Industry Association. An association regarding storage networking technology and applications. source volume A volume that has snapshots. Used as a synonym for parent volume. SSD Solid-state drive. SSH Secure Shell. A network protocol for secure data communication. SSL Secure Sockets Layer. A cryptographic protocol that provides security over the internet.
Table 43. Glossary of ME4 Series terms (continued) Term Definition virtual pool A container for volumes that is composed of one or more virtual disk groups. volume A logical representation of a fixed-size, contiguous span of storage that is presented to host systems for the purpose of storing data. volume copy An independent copy of the data in a linear volume. The capability to copy volumes makes use of snapshot functionality.