Dell PowerVault MD 34XX/38XX Series Storage Arrays Administrator's Guide
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Copyright © 2014 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents

1 Introduction
2 About Your MD Series Storage Array
3 Discovering And Managing Your Storage Array
4 Using iSCSI
5 Event Monitor
6 About Your Host
7 Disk Groups, Standard Virtual Disks, And Thin Virtual Disks
8 Disk Pools And Disk Pool Virtual Disks
9 Using SSD Cache
10 Premium Feature—Snapshot Virtual Disk
16 Management Firmware Downloads
20 Getting Help
1 Introduction

CAUTION: See the Safety, Environmental, and Regulatory Information document for important safety information before following any procedures listed in this document.
NOTE: The toolbar is available only in the EMW. • The tabs, beneath the toolbar — Tabs are used to group the tasks that you can perform on a storage array. • The status bar, beneath the tabs — The status bar shows status messages and status icons related to the storage array. NOTE: By default, the toolbar and status bar are not displayed. To view the toolbar or the status bar, select View → Toolbar or View → Status Bar.
Inheriting The System Settings

Use the Inherit System Settings option to import the operating system theme settings into the MD Storage Manager. Importing system theme settings affects the font type, font size, color, and contrast in the MD Storage Manager.
1. From the EMW, open the Inherit System Settings window in one of these ways:
– Select the Setup tab, and under Accessibility, click Inherit System Settings.
2. Select Inherit system settings for color and font.
3. Click OK.
– Hardware – Storage and copy services – Hosts and mappings – Information on storage capacity – Premium features • Performance tab — You can track a storage array’s key performance data and identify performance bottlenecks in your system.
NOTE: Always check for updates on dell.com/support/manuals and read the updates first because they often supersede information in other documents.
2 About Your MD Series Storage Array

This chapter describes the storage array concepts that help in configuring and operating the Dell MD Series storage arrays.

Physical Disks, Virtual Disks, And Disk Groups

Physical disks in your storage array provide the physical storage capacity for your data. Before you can begin writing data to the storage array, you must configure the physical storage capacity into logical components, called disk groups and virtual disks.
• Optimal, Hot Spare Standby: The physical disk in the indicated slot is configured as a hot spare.
• Optimal, Hot Spare in use: The physical disk in the indicated slot is in use as a hot spare within a disk group.
• Failed (Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby): The physical disk in the indicated slot has failed because of an unrecoverable error, an incorrect drive type or drive size, or because its operational state was set to failed.
• Degraded: The virtual disk continues to function properly, but performance may be affected and additional disk failures may result in data loss.
• Offline: A virtual disk with one or more member disks in an inaccessible (failed, missing, or offline) state. Data on the virtual disk is no longer accessible.
• Force online: The storage array forces a virtual disk that is in an Offline state to an Optimal state. If all the member physical disks are not available, the storage array forces the virtual disk to a Degraded state.
Storage Manager does not enforce the 120-physical-disk limit when you set up a RAID 0 or RAID 10 configuration. Exceeding the 120-physical-disk limit may cause your storage array to be unstable.

RAID Level Usage

To ensure best performance, you must select an optimal RAID level when you create a system physical disk.
RAID 10

CAUTION: Do not attempt to create virtual disk groups exceeding 120 physical disks in a RAID 10 configuration even if the premium feature is activated on your storage array. Exceeding the 120-physical-disk limit may cause your storage array to be unstable.

RAID 10, a combination of RAID 1 and RAID 0, uses disk striping across mirrored disks. It provides high data throughput and complete data redundancy.
A consistency check is similar to a background initialization. The difference is that background initialization cannot be started or stopped manually, while consistency check can. NOTE: It is recommended that you run data consistency checks on a redundant array at least once a month. This allows detection and automatic replacement of unreadable sectors.
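If you prefer to schedule the monthly check from a host rather than run it interactively, a consistency check can also be started with the SMcli command-line utility that installs with MD Storage Manager. The sketch below is illustrative only: the controller IP addresses, the virtual disk name, and the SMcli install path are assumptions, and the script-command syntax should be verified against the CLI guide for your array.

# Run a consistency check on one virtual disk (addresses and name are examples)
SMcli 192.168.128.101 192.168.128.102 -c "check virtualDisk [\"Data_VD1\"] consistency;"

# Example cron entry for a monthly check at 02:00 on the first of the month;
# the SMcli path below is an assumed default install location
0 2 1 * * /opt/dell/mdstoragemanager/client/SMcli 192.168.128.101 -c "check virtualDisk [\"Data_VD1\"] consistency;"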
RAID level. You can perform a RAID level migration while the system is still running and without rebooting, which maintains data availability.

Segment Size Migration

Segment size refers to the amount of data (in kilobytes) that the storage array writes on a physical disk in a virtual disk before writing data on the next physical disk. Valid values for the segment size are 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, and 256 KB. For example, with a 32 KB segment size in a four-disk RAID 0 disk group, a 128 KB write is divided into four 32 KB segments, one per physical disk.
If a redundant RAID controller module fails with an existing disk group process, the process on the failed controller is transferred to the peer controller. A transferred process is placed in a suspended state if there is an active disk group process on the peer controller. The suspended processes are resumed when the active process on the peer controller completes or is stopped.
• Any storage array different from the MD storage array you migrate to (for example, from an MD3460 storage array to an MD3860i storage array), the receiving storage array (MD3860i storage array in the example) does not recognize the migrating metadata and that data is lost. In this case, the receiving storage array initializes the physical disks and marks them as unconfigured capacity.
Host Server-To-Virtual Disk Mapping The host server attached to a storage array accesses various virtual disks on the storage array through its host ports. Specific virtual disk-to-LUN mappings to an individual host server can be defined. In addition, the host server can be part of a host group that shares access to one or more virtual disks. You can manually configure a host server-to-virtual disk mapping.
repository, which is used to save subsequent modifications made by the host application to the base virtual disk without affecting the referenced snapshot image. Snapshot images can be created manually or automatically by establishing a schedule that defines the date and time you want to create the snapshot image.
• Back up data. • Copy data from disk groups that use smaller-capacity physical disks to disk groups using greater capacity physical disks. • Restore snapshot virtual disk data to the source virtual disk. Virtual disk copy generates a full copy of data from the source virtual disk to the target virtual disk in a storage array. • Source virtual disk — When you create a virtual disk copy, a copy pair consisting of a source virtual disk and a target virtual disk is created on the same storage array.
Multi-Path Software Multi-path software (also referred to as the failover driver) is the software resident on the host server that provides management of the redundant data path between the host server and the storage array. For the multi-path software to correctly manage a redundant path, the configuration must have redundant iSCSI connections and cabling. The multi-path software identifies the existence of multiple paths to a virtual disk and establishes a preferred path to that disk.
Load Balancing A load balance policy is used to determine which path is used to process I/O. Multiple options for setting the load balance policies let you optimize I/O performance when mixed host interfaces are configured. You can choose one of these load balance policies to optimize I/O performance: • Round-robin with subset — The round-robin with subset I/O load balance policy routes I/O requests, in rotation, to each available data path to the RAID controller module that owns the virtual disks.
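On a Linux host that uses Device Mapper multipathing, the round-robin behavior corresponds to the path_selector setting in /etc/multipath.conf. The fragment below is a hedged sketch: the vendor and product strings, the RDAC handler, and the priority values are typical for MD-series arrays but should be checked against the settings documented for your specific model and operating system release.

# /etc/multipath.conf fragment (illustrative; verify values for your model)
devices {
    device {
        vendor                "DELL"
        product               "MD38xx"            # example product string
        path_grouping_policy  group_by_prio       # group paths by controller preference
        path_selector         "round-robin 0"     # rotate I/O across paths in the group
        prio                  rdac                # RDAC-aware path priority
        hardware_handler      "1 rdac"
        failback              immediate
    }
}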
The types of performance monitoring differ as follows:
• Real-time graphical: sampling interval of 5 seconds; displays a 5-minute rolling window; a maximum of 5 objects; data cannot be saved; starts automatically when the AMW opens and stops automatically when the AMW closes.
• Real-time textual: sampling interval of 5-3600 seconds; displays the most current value; no limit on the number of objects; data can be saved; starts and stops manually, and also stops when the View Real-time Textual Performance Monitor dialog closes or the AMW closes.
Performance Data and Implications for Performance Tuning

• Total I/Os: You might notice a disparity in the total I/Os (workload) of RAID controller modules. For example, the workload of one RAID controller module is heavy or is increasing over time while that of the other RAID controller module is lighter or more stable. In this case, you might want to change the RAID controller module ownership of one or more virtual disks to the RAID controller module with the lighter workload.
• IOs/sec: For an individual virtual disk, look at the current IOPS and the maximum IOPS. You should see higher rates for sequential I/O patterns than for random I/O patterns. Regardless of your I/O pattern, enable write caching to maximize the I/O rate and to shorten the application response time. For more information about read/write caching and performance, see the related topics listed at the end of this topic.
• MBs/sec: See IOs/sec.
• Cache hit %: Enabling cache read prefetch can increase the cache hit percentage for a sequential I/O workload.

Viewing Real-time Graphical Performance Monitor Data

You can view real-time graphical performance as a single graph or as a dashboard that shows six graphs on one screen. A real-time performance monitor graph plots a single performance metric over time for up to five objects. The x-axis of the graph represents time.
4. In the Select an object(s) list, select the objects for which you want to view performance data. You can select up to five objects to monitor on one graph. Use Ctrl-Click and Shift-Click to select multiple objects. Each object is plotted on a separate line on the graph. NOTE: If you do not see a line that you defined on the graph, it might be overlapping another line. 5. To save the changed portlet to the dashboard, click Save to Dashboard, and then click OK.
The following metrics are available for each object type:
• Total I/Os, IOs/sec, and MBs/sec: storage arrays, RAID controller modules, virtual disks, snapshot virtual disks, thin virtual disks, and disk groups or disk pools (not physical disks).
• I/O latency: virtual disks, snapshot virtual disks, thin virtual disks, and physical disks.
• Cache hit %: storage arrays, RAID controller modules, virtual disks, snapshot virtual disks, thin virtual disks, and physical disks (not disk groups or disk pools).

Viewing Real-time Textual Performance Monitor
Saving Real-time Textual Performance Data

Unlike real-time graphical performance monitoring, real-time textual performance monitoring lets you save the data. Saving the data saves only one set of data from the most recent sampling interval.
1. In the Array Management Window (AMW), do one of the following:
– Click the Performance tab, and then click the Launch real-time textual performance monitor link.
4. To confirm, click OK. To indicate that background performance monitoring is in progress, the Start link changes to Stop, and the system shows an In Progress icon next to the Stop link. NOTE: For accurate data, do not change the system date or time while using background performance monitor. If you must change the system date, stop and restart the background performance monitor. 5. To manually stop background performance monitoring, click the Stop link.
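As an alternative to the AMW procedures above, performance statistics can also be captured from a command line with SMcli. This is a sketch: the controller address and output file name are examples, and the command syntax should be confirmed in the CLI guide for your array.

# Save the storage array performance statistics to a CSV file
SMcli 192.168.128.101 -c "save storageArray performanceStats file=\"perfstats.csv\";"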
Values appear in thousands (K), beginning with 100K, until the number reaches 9999K, at which time it appears in millions (M). For amounts greater than 9999K but less than 100M, the value appears in tenths (for example, 12.3M).
1. In the Array Management Window (AMW), click the Performance tab.
2. Click the Launch background performance monitor link.
The View Current option is available only when performance monitoring is in progress. You can tell that background performance monitoring is in progress by the presence of the In Progress icon next to the Stop link.
Values appear in thousands (K), beginning with 100K, until the number reaches 9999K, at which time it appears in millions (M). For amounts greater than 9999K but less than 100M, the value appears in tenths (for example, 12.3M).
1. In the Array Management Window (AMW), click the Performance tab.
2. Click the Launch background performance monitor link.
The View Current Background Performance Monitor dialog appears.
3. Click the Launch saved background performance monitor link.
The Load Background Performance dialog appears.
replace a physical disk. The original physical disk’s name contains an asterisk indicating that it is invalid and no longer exists. The new physical disk has the same name without an asterisk.
3 Discovering And Managing Your Storage Array

You can manage a storage array in two ways:
• Out-of-band management
• In-band management

Out-Of-Band Management

In the out-of-band management method, data is separate from commands and events. Data travels through the host-to-controller interface, while commands and events travel through the management port Ethernet cables.
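The two management methods are also reflected in how the SMcli command-line utility installed with MD Storage Manager addresses an array. A minimal sketch, assuming example addresses and an example array name:

# Out-of-band: address the management Ethernet ports of both RAID controller modules
SMcli 192.168.128.101 192.168.128.102 -c "show storageArray profile;"

# In-band: address the host running the host-agent software and name the array
SMcli hostserver1 -n "MD_Array_1" -c "show storageArray profile;"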
Access Virtual Disk

Each RAID controller module in an MD Series storage array maintains a special virtual disk, called the access virtual disk. The host-agent software uses the access virtual disk to communicate management requests and event information between the storage management station and the RAID controller module in an in-band-managed storage array. The access virtual disk cannot be removed without deleting the entire virtual disk, virtual disk group, or virtual disk pair.
NOTE: It can take several minutes for the MD Storage Manager to connect to the specified storage array. To add a storage array manually: 1. In the EMW, select Edit → Add Storage Array. 2. Select the relevant management method: – Out-of-band management — Enter a DNS/Network name, IPv4 address, or IPv6 address for the RAID Controller Module in the storage array.
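A storage array can also be added from the command line with the SMcli -A option. The addresses and host name below are examples:

# Add an out-of-band managed array by its two controller management addresses
SMcli -A 192.168.128.101 192.168.128.102

# Add an in-band managed array through the host running the host-agent software
SMcli -A hostserver1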
• Configure Ethernet management ports — Configure the network parameters for the Ethernet management ports on the RAID controller modules if you are managing the storage array by using the out-of-band management connections. • View and enable premium features — Your MD Storage Manager may include premium features. View the premium features that are available and the premium features that are already started. You can start available premium features that are currently stopped.
Setting A Password

You can configure each storage array with a password to protect it from unauthorized access. The MD Storage Manager prompts for the password when an attempt is made to change the storage array configuration, such as when a virtual disk is created or deleted. View operations do not change the storage array configuration and do not require a password. You can create a new password or change an existing password.
To set a new password or change an existing password:
3. Type a comment. NOTE: The number of characters in the comment must not exceed 60 characters. 4. Click OK. This option updates the comment in the Table view and saves it in your local storage management station file system. The comment does not appear to administrators who are using other storage management stations. Removing Storage Arrays You can remove a storage array from the list of managed arrays if you no longer want to manage it from a specific storage management station.
To configure a failover alert delay: 1. In the AMW, on the menu bar, select Storage Array → Change → Failover Alert Delay. The Failover Alert Delay window is displayed. 2. In Failover alert delay, enter a value between 0 and 60 minutes. 3. Click OK. 4. If you have set a password for the selected storage array, the Enter Password dialog is displayed. Type the current password for the storage array. Changing The Cache Settings On The Storage Array To change the storage array cache settings: 1.
Configuring Alert Notifications The MD Storage Manager can send an alert for any condition on the storage array that requires your attention. Alerts can be sent as e-mail messages or as Simple Network Management Protocol (SNMP) trap messages. You can configure alert notifications either for all the storage arrays or a single storage array. To configure alert notifications: 1. For all storage arrays, in the EMW: a) b) c) d) Select the Setup tab. Select Configure Alerts. Select All storage arrays. Click OK.
3. In the Configure Alerts dialog, select the Mail Server tab and do the following: a) Type the name of the Simple Mail Transfer Protocol (SMTP) mail server. The SMTP mail server is the name of the mail server that forwards the e-mail alert to the configured e-mail addresses. b) In Email sender address, type the e-mail address of the sender. Use a valid e-mail address. The e-mail address of the sender (the network administrator) is displayed on each e-mail alert sent to the destination.
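The same mail-server and destination settings can be scripted with SMcli. The following is illustrative: the server name, addresses, and array IP are examples, and the options should be checked against the CLI guide for your release.

# Set the SMTP relay and the sender address used for e-mail alerts
SMcli -m smtp.example.com -F mdsm-alerts@example.com

# Add an e-mail alert destination for one managed storage array
SMcli -a email:storage-admin@example.com 192.168.128.101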
• The storage array
• The event monitor
1. Open the Configure Alerts dialog by performing one of these actions in the EMW:
– On the Devices tab, select a node, and then on the menu bar, select Edit → Configure Alerts. Go to step 3.
NOTE: This option enables you to set up alerts for all the storage arrays connected to the host.
– On the Setup tab, select Configure Alerts. Go to step 2.
• For more specific notifications, you can configure the alert destinations at the storage management station, host, and storage array levels.
1. Do one of the following, based on whether you want to configure alerts for a single storage array or for all storage arrays:
– Single storage array: In the Enterprise Management Window (EMW), select the Devices tab. Right-click the storage array for which you want to send alerts, and then select Configure Alerts.
3. Select the SNMP - Storage Array Origin Trap tab. The Configure Alerts dialog appears. The Configured communities table is populated with the currently configured community names and the Configured SNMP addresses table is populated with the currently configured trap destinations. NOTE: If the SNMP - Storage Array Origin Trap tab does not appear, this feature might not be available on your RAID controller module model. 4.
The learn cycle completes the following operations: • Discharges the battery to a predetermined threshold • Charges the battery back to full capacity A learn cycle starts automatically when you install a new battery module. Learn cycles for batteries in both RAID controller modules in a duplex system occur simultaneously. Learn cycles are scheduled to start automatically at regular intervals, at the same time and on the same day of the week. The interval between cycles is described in weeks.
4 Using iSCSI

NOTE: The following sections are relevant only to MDxx0i storage arrays that use the iSCSI protocol.

Changing The iSCSI Target Authentication

To change the iSCSI target authentication:
1. In the AMW, select the Setup tab.
2. Select Manage iSCSI Settings.
The Manage iSCSI Settings window is displayed and by default, the Target Authentication tab is selected.
3. To change the authentication settings, select:
– None — If you do not require initiator authentication.
3. Select the Remote Initiator Configuration tab. 4. Select an initiator in the Select an Initiator area. The initiator details are displayed. 5. Click CHAP Secret to enter the initiator CHAP permissions in the dialog that is displayed. 6. Click OK. 7. Click OK in the Manage iSCSI Settings window. For more information, see the online help topics. Creating CHAP Secrets When you set up an authentication method, you can choose to create a CHAP secret.
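On a Linux host that uses the open-iscsi initiator, the matching CHAP settings are applied with iscsiadm before logging in to the target. This is a sketch under stated assumptions: the target IQN, portal address, initiator name, and secret are placeholders, and the secret must match the one configured on the array.

TARGET="iqn.1984-05.com.dell:powervault.example"   # placeholder target IQN
PORTAL="192.168.130.101:3260"                      # placeholder portal address

iscsiadm -m node -T "$TARGET" -p "$PORTAL" -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T "$TARGET" -p "$PORTAL" -o update -n node.session.auth.username -v "iqn.1994-05.com.redhat:hostserver1"
iscsiadm -m node -T "$TARGET" -p "$PORTAL" -o update -n node.session.auth.password -v "myTargetChapSecret1"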
Changing The iSCSI Target Identification You cannot change the iSCSI target name, but you can associate an alias with the target for simpler identification. Aliases are useful because the iSCSI target names are not intuitive. Provide an iSCSI target alias that is meaningful and easy to remember. To change the iSCSI target identification: 1. In the AMW, select the Setup tab. 2. Select Manage iSCSI Settings. The Manage iSCSI Settings window is displayed. 3. Select the Target Configuration tab. 4.
Configuring The iSCSI Host Ports

The default method for configuring the iSCSI host ports, for IPv4 addressing, is DHCP. Always use this method unless your network does not have a DHCP server. If you use DHCP, it is advisable to assign static DHCP addresses (reservations) to the iSCSI ports to ensure continuous connectivity. For IPv6 addressing, the default method is Stateless auto-configuration. Always use this method for IPv6.
To configure the iSCSI host ports:
1. In the AMW, select the Setup tab.
2. Select Configure iSCSI Host Ports.
Advanced iSCSI Host Port Settings

NOTE: Configuring the advanced iSCSI host ports settings is optional.

Use the advanced settings for the individual iSCSI host ports to specify the TCP frame size, the virtual LAN, and the network priority.
• Virtual LAN (VLAN): A method of creating independent logical networks within a physical network. Several VLANs can exist within a network. VLAN 1 is the default VLAN.
4. To end the session: a) Select the session that you want to end, and then click End Session. The End Session confirmation window is displayed. b) Click Yes to confirm that you want to end the iSCSI session. NOTE: If you end a session, any corresponding connections terminate the link between the host and the storage array, and the data on the storage array is no longer available.
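Ending a session from the array side leaves the host initiator attempting to reconnect. To close the session cleanly from a Linux host using open-iscsi, log out of the target first; the IQN and portal below are placeholders:

# List active iSCSI sessions, then log out of the chosen target portal
iscsiadm -m session
iscsiadm -m node -T "iqn.1984-05.com.dell:powervault.example" -p "192.168.130.101:3260" --logout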
Table 2. Host Topology Actions

• Move a host or a host group:
1. Click the Host Mappings tab.
2. Select the host that you want to move, and then select Host Mappings → Move.
3. Select a host group to move the host to and click OK.
• Manually delete the host and the host group:
1. Click the Host Mappings tab.
2. Select the item that you want to remove and select Host Mappings → Remove.
• Rename the host or the host group:
1. Click the Host Mappings tab.
5 Event Monitor

An event monitor is provided with Dell PowerVault Modular Disk Storage Manager. The event monitor runs continuously in the background and monitors activity on the managed storage arrays. If the event monitor detects any critical problems, it can notify a host or remote system using e-mail, Simple Network Management Protocol (SNMP) trap messages, or both. For the most timely and continuous notification of events, enable the event monitor on a management station that runs 24 hours a day.
Linux

To enable the event monitor, at the command prompt, type SMmonitor start and press <Enter>. When the program startup begins, the following message is displayed:
SMmonitor started.
To disable the event monitor, start a terminal emulation application (console or xterm) and at the command prompt, type SMmonitor stop, and press <Enter>. When the program shutdown is complete, the following message is displayed:
Stopping Monitor process.
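Because the event monitor should run continuously, it can be useful to check for the process from a script (for example, from cron) and restart it if it has stopped. A minimal sketch, assuming SMmonitor is on the PATH:

#!/bin/sh
# Restart the event monitor if its process is not found
if pgrep -f SMmonitor >/dev/null 2>&1; then
    echo "SMmonitor is running"
else
    echo "SMmonitor is not running - starting it"
    SMmonitor start
fi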
6 About Your Host

Configuring Host Access

Dell PowerVault Modular Disk Storage Manager (MD Storage Manager) comprises multiple modules. One of these modules is the Host Context Agent, which is installed as part of the MD Storage Manager installation and runs continuously in the background. If the Host Context Agent is running on a host, that host and the host ports connected from it to the storage array are automatically detected by the MD Storage Manager.
Using The Host Mappings Tab In the Host Mappings tab, you can: • Define hosts and hosts groups • Add mappings to the selected host groups For more information, see the online help topics. Defining A Host You can use the Define Host Wizard in the AMW to define a host for a storage array. Either a known unassociated host port identifier or a new host port identifier can be added. A user label must be specified before the host port identifier may be added (the Add button is disabled until one is entered).
9. In the Host Group Question window, you can select: – Yes — This host shares access to the same virtual disks with other hosts. – No — This host does NOT share access to the same virtual disks with other hosts. 10. Click Next. 11. If you select: – Yes — The Specify Host Group window is displayed. – No — Go to step 13. 12. Enter the name of the host group or select an existing host group and click Next. The Preview window is displayed. 13. Click Finish.
3. Perform one of the following actions: – From the menu bar, select Host Mappings → Define → Host Group. – Right-click the storage array or the Default Group, and select Define → Host Group from the pop-up menu. The Define Host Group window is displayed. 4. Type the name of the new host group in Enter new host group name. 5. Select the appropriate hosts in the Select hosts to add area. 6. Click Add. The new host is added in the Hosts in group area.
Removing A Host Group To remove a host group: 1. In the AMW, select the Host Mappings tab, select the host group node in the object tree. 2. Perform one of these actions: – From the menu bar, select Host Mappings → Host Group → Remove. – Right-click the host group node, and select Remove from the pop-up menu. The Remove dialog is displayed. 3. Click Yes. The selected host group is removed.
To start or stop the Host Context Agent on Windows:
1. Do one of the following:
– Click Start → Settings → Control Panel → Administrative Tools → Services
– Click Start → Administrative Tools → Services
2. From the list of services, select Modular Disk Storage Manager Agent.
3. If the Host Context Agent is running, click Action → Stop, then wait approximately 5 seconds.
4. Click Action → Start.

I/O Data Path Protection

You can have multiple host-to-array connections for a host.
3. To manage the host port identifiers, in the Show host port identifiers associated with list:
– For a specific host, select the host from the list of hosts that are associated with the storage array.
– For all hosts, select All hosts from the list of hosts that are associated with the storage array.
4. If you are adding a new host port identifier, go to step 5. If you are managing an existing host port identifier, go to step 10.
5. Click Add.
The Add Host Port Identifier dialog is displayed.
7 Disk Groups, Standard Virtual Disks, And Thin Virtual Disks

Creating Disk Groups And Virtual Disks

Disk groups are created in the unconfigured capacity of a storage array, and virtual disks are created in the free capacity of a disk group or disk pool. The maximum number of physical disks supported in a disk group is 120 (180 with the premium feature activated). The hosts attached to the storage array read and write data to the virtual disks.
Creating Disk Groups NOTE: If you have not created disk groups for a storage array, the Disk Pool Automatic Configuration Wizard is displayed when you open the AMW. For more information on creating storage space from disk pools, see Disk Pools. NOTE: Thin-provisioned virtual disks can be created from disk pools. If you are not using disk pools, only standard virtual disks can be created. For more information, see Thin Virtual Disks.
6. For manual configuration, the Manual Physical Disk Selection window is displayed: a) Select the appropriate RAID level in Select RAID level. You can select RAID levels 0, 1/10, 5, and 6. Depending on your RAID level selection, the physical disks available for the selected RAID level are displayed in Unselected physical disks table. b) In the Unselected physical disks table, select the appropriate physical disks and click Add.
To create standard virtual disks:
1. In the AMW, select the Storage & Copy Services tab.
2. Select a Free Capacity node from an existing disk group and do one of the following:
– From the menu bar, select Storage → Virtual Disk → Create → Virtual Disk.
– Right-click the Free Capacity node and select Create Virtual Disk.
The Create Virtual Disk: Specify Parameters window is displayed.
3. Select the appropriate unit for capacity in Units and enter the capacity of the virtual disk in New virtual disk capacity.
To change the virtual disk modification priority:
1. In the AMW, select the Storage & Copy Services tab.
2. Select a virtual disk.
3. In the menu bar, select Storage → Virtual Disk → Change → Modification Priority.
The Change Modification Priority window is displayed.
4. Select one or more virtual disks. Move the Select modification priority slider bar to the desired priority.
NOTE: To select nonadjacent virtual disks, press <Ctrl> and click the appropriate virtual disks.
4. In the Cache Properties area, you can select: – Enable read caching – Enable write caching * Enable write caching without batteries — to permit write caching to continue even if the RAID controller module batteries are discharged completely, not fully charged, or are not present. * Enable write caching with mirroring — to mirror cached data across two redundant RAID controller modules that have the same cache size.
• The number of physical disks in the disk group • The number of physical disk ports • The processing power of the storage array RAID controller modules If you want this operation to complete faster, you can change the modification priority to the highest level, although this may decrease system I/O performance. To change the segment size of a virtual disk: 1. In the AMW, select the Storage & Copy Services tab and select a virtual disk. 2.
6. In the confirmation dialog, click Yes. A progress dialog is displayed, which indicates the number of virtual disks being changed. Thin Virtual Disks When creating virtual disks from a disk pool, you have the option to create thin virtual disks instead of standard virtual disks. Thin virtual disks are created with physical (or preferred) and virtual capacity, allowing flexibility to meet increasing capacity requirements.
enables you to limit the automatic growth of a virtual disk to an amount less than the defined virtual capacity. NOTE: Since less than full capacity is allocated when you create a thin virtual disk, insufficient free capacity may exist when certain operations are performed, such as snapshot images and snapshot virtual disks. If this occurs, an alert threshold warning is displayed.
Thin Virtual Disk States The following are the virtual disk states displayed in MD Storage Manager: • Optimal — Virtual disk is operating normally. • Full — Physical capacity of a thin virtual disk is full and no more host write requests can be processed. • Over Threshold — Physical capacity of a thin virtual disk is at or beyond the specified Warning Threshold percentage. The storage array status is shown as Needs Attention.
Rollback On Thin Virtual Disks Rollback operations are fully supported on thin virtual disks. A rollback operation restores the logical content of a thin virtual disk to match the selected snapshot image. There is no change to the consumed capacity of the thin virtual disk as a result of a rollback operation. Initializing A Thin Virtual Disk CAUTION: Possible loss of data – Initializing a thin virtual disk erases all data from the virtual disk.
4. Select Keep existing repository, and click Finish. The Confirm Initialization of Thin Virtual Disk window is displayed. 5. Read the warning and confirm if you want to initialize the thin virtual disk. 6. Type yes, and click OK. The thin virtual disk initializes. Initializing A Thin Virtual Disk With A Different Physical Capacity CAUTION: Initializing a thin virtual disk erases all data from the virtual disk. • You can create thin virtual disks only from disk pools, not from disk groups.
10. If you want to change the repository expansion policy or warning threshold, click View advanced repository settings. – Repository expansion policy – Select either Automatic or Manual. When the consumed capacity gets close to the physical capacity, you can expand the physical capacity. The MD storage management software can automatically expand the physical capacity or you can do it manually. If you select Automatic, you also can set a maximum expansion capacity.
9. Select a repository from the table. Existing repositories are placed at the top of the list. NOTE: The benefit of reusing an existing repository is that you can avoid the initialization process that occurs when you create a new one. 10. If you want to change the repository expansion policy or warning threshold, click View advanced repository settings. – Repository expansion policy – Select either Automatic or Manual.
to encrypt the data. A security capable physical disk works like any other physical disk until it is security enabled. Whenever the power is turned off and turned on again, all of the security enabled physical disks change to a security locked state. In this state, the data is inaccessible until the correct security key is provided by a RAID controller module. You can view the self encrypting disk status of any physical disk in the storage array from the Physical Disk Properties dialog.
• A security key is set up for the storage array. NOTE: The Secure Physical Disks option is inactive if these conditions are not true. The Secure Physical Disks option is inactive with a check mark to the left if the disk group is already security enabled. The Create a secure disk group option is displayed in the Create Disk Group Wizard–Disk Group Name and Physical Disk Selection dialog.
4. In New password, enter a string for the storage array password. If you are creating the storage array password for the first time, leave Current password blank.
Follow these guidelines for cryptographic strength when you create the storage array password:
– The password should be between eight and 30 characters long.
– The password should contain at least one uppercase letter.
– The password should contain at least one lowercase letter.
– The password should contain at least one number.
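One way to produce a candidate password that satisfies these guidelines is to generate it on a management host and verify each required character class before use. A hedged sketch using standard Linux tools:

# Generate a 16-character candidate from a restricted character set
PW=$(tr -dc 'A-Za-z0-9!@#%_' < /dev/urandom | head -c 16)

# Confirm it contains an uppercase letter, a lowercase letter, and a number
echo "$PW" | grep -q '[A-Z]' && echo "$PW" | grep -q '[a-z]' && echo "$PW" | grep -q '[0-9]' \
    && echo "Candidate password: $PW" \
    || echo "A required character class is missing - generate again"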
Changing A Security Key When you change a security key, a new security key is generated by the system. The new key replaces the previous key. You cannot view or read the key. However, a copy of the security key must be kept on some other storage medium for backup in case of system failure or for transfer to another storage array. A pass phrase that you provide encrypts and decrypts the security key for storage on other media.
To save the security key for the storage array, 1. In the AMW toolbar, select Storage Array → Security → Physical Disk Security → Save Key. The Save Security Key File - Enter Pass Phrase window is displayed. 2. Edit the default path by adding a file name to the end of the path or click Browse, navigate to the required folder and enter the name of the file. 3. In Pass phrase, enter a string for the pass phrase.
provision a physical disk. You can use the Secure Erase option if you want to remove all of the data on the physical disk and reset the physical disk security attributes. CAUTION: Possible loss of data access—The Secure Erase option removes all of the data that is currently on the physical disk. This action cannot be undone. Before you complete this option, make sure that the physical disk that you have selected is the correct physical disk.
4. Select the appropriate option, you can select: – View/change current hot spare coverage — to review hot spare coverage and to assign or unassign hot spare physical disks, if necessary. See step 5. – Automatically assign physical disks — to create hot spare physical disks automatically for the best hot spare coverage using available physical disks. – Manually assign individual physical disks — to create hot spare physical disks out of the selected physical disks on the Hardware tab.
• A standby hot spare is a physical disk that has been assigned as a hot spare and is available to take over for any failed physical disk.
• An in-use hot spare is a physical disk that has been assigned as a hot spare and is currently replacing a failed physical disk.

Hot Spare Drive Protection

You can use a hot spare physical disk for additional data protection from physical disk failures that occur in a RAID Level 1 or RAID Level 5 disk group.
Criteria for Enclosure Loss Protection, by RAID level:
• RAID level 5: Because RAID level 5 requires a minimum of three physical disks, enclosure loss protection cannot be achieved if your storage array has fewer than three expansion enclosures.
• RAID level 6: Because RAID level 6 requires a minimum of five physical disks, enclosure loss protection cannot be achieved if your storage array has fewer than five expansion enclosures.
Drawer Loss Protection requirements, by RAID level:
• RAID Level 1 and RAID Level 10: RAID Level 1 requires a minimum of two physical disks. Make sure that each physical disk in a replicated (mirrored) pair is located in a different drawer. Because only the two disks of a pair must be split across drawers, the disk group can still include more than two of its physical disks within the same drawer.
• Most hosts have 256 LUNs mapped per storage partition. The LUN numbering is from 0 through 255. If your operating system restricts LUNs to 127, and you try to map a virtual disk to a LUN that is greater than or equal to 127, the host cannot access it. • An initial mapping of the host group or host must be created using the Storage Partitioning Wizard before defining additional mappings. See Storage Partitioning. To create host to virtual disk mappings: 1. In the AMW, select the Host Mappings tab. 2.
To modify or remove host to virtual disk mapping:
1. In the AMW, select the Host Mappings tab.
2. In the Defined Mappings pane, perform one of these actions:
– Select a single virtual disk, and select Host Mappings → LUN Mapping → Change.
– Right-click the virtual disk, and select Change from the pop-up menu.
3. In the Host group or host list, select the appropriate host group or host.
By default, the drop-down list shows the current host group or the host associated with the selected virtual disk.
3. Click Yes to confirm the selection. Removing Host-To-Virtual Disk Mapping To remove the host to virtual disk mapping: 1. In the AMW, select the Host Mappings tab. 2. Select a virtual disk under Defined Mappings. 3. Perform one of these actions: – From the menu bar, select Host Mappings → LUN Mapping → Remove. 4. – Right-click the virtual disk, and select Remove from the pop-up menu. Click Yes to remove the mapping.
• You cannot cancel this operation after it begins. • The disk group must be in Optimal status before you can perform this operation. • Your data is available during this operation. • If you do not have enough capacity in the disk group to convert to the new RAID level, an error message is displayed, and the operation does not continue.
4. Delete the paths related to this device using the following command:
# echo 1 > /sys/block/sd_x/device/delete
Where sd_x is the SD node (disk device) returned by the multipath command. Repeat this command for all paths related to this device.
For example:
# echo 1 > /sys/block/sdf/device/delete
# echo 1 > /sys/block/sde/device/delete
5. Remove the mapping from MD Storage Manager, or delete the LUN if necessary.
6. If you want to map another LUN or increase volume capacity, perform this action from MD Storage Manager.
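The per-path deletion in step 4 can be scripted so that every SD node belonging to the multipath device is removed and the empty device-mapper map is then flushed. A sketch, assuming an example map name mpathb as reported by the multipath command:

MPDEV="mpathb"    # example multipath device name

# Delete every SCSI path (sd node) that belongs to the multipath device
for sd in $(multipath -ll "$MPDEV" | grep -o 'sd[a-z][a-z]*' | sort -u); do
    echo 1 > "/sys/block/$sd/device/delete"
done

# Flush the now path-less device-mapper map
multipath -f "$MPDEV"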
Group for any host is restricted to the limit imposed by the restricted host type. If a particular host with a non-restricted host type becomes part of a specific storage partition, you are able to change the mapping to a higher LUN. Storage Partitioning A storage partition is a logical entity consisting of one or more virtual disks that can be accessed by a single host or shared among hosts that are part of a host group.
Disk Group And Virtual Disk Expansion Adding free capacity to a disk group is achieved by adding unconfigured capacity on the array to the disk group. Data is accessible on disk groups, virtual disks, and physical disks throughout the entire modification operation. The additional free capacity can then be used to perform a virtual disk expansion on a standard or snapshot repository virtual disk. Disk Group Expansion To add free capacity to a disk group: 1.
The Total Unconfigured Capacity node, shown in the Storage & Copy Services tab, is a contiguous region of unassigned capacity on a defined disk group. When increasing virtual disk capacity, some or all of the free capacity may be used to achieve the required final capacity. Data on the selected virtual disk remains accessible while the process for increasing virtual disk capacity is in progress.
3. Back up the data on the virtual disks in the disk group. 4. Locate the disk group, and label the physical disks. 5. Place the disk group offline. 6. Obtain blank physical disk modules or new physical disks. On the target storage array, verify that: • The target storage array has available physical disk slots. • The target storage array supports the physical disks that you import. • The target storage array can support the new virtual disks.
• Virtual disk copy pairs • Snapshot virtual disks and snapshot repository virtual disks Storage Array Media Scan The media scan is a background operation that examines virtual disks to verify that data is accessible. The process finds media errors before normal read and write activity is disrupted and reports errors to the event log. NOTE: You cannot enable background media scans on a virtual disk comprised of Solid State Disks (SSDs).
8. Click OK. Suspending The Media Scan You cannot perform a media scan while performing another long-running operation on the disk drive such as reconstruction, copy-back, reconfiguration, virtual disk initialization, or immediate availability formatting. If you want to perform another long-running operation, you should suspend the media scan. NOTE: A background media scan is the lowest priority of the long-running operations. To suspend a media scan: 1.
8 Disk Pools And Disk Pool Virtual Disks

Disk pooling allows you to distribute data from each virtual disk randomly across a set of physical disks. Disk pooling provides RAID protection and consistent performance across a set of physical disks logically grouped together in the storage array. Although there is no limit on the maximum number of physical disks that can comprise a disk pool, each disk pool must have a minimum of 11 physical disks.
• You cannot change the segment size of the virtual disks in a disk pool. • You cannot export a disk pool from a storage array or import the disk pool to a different storage array. • You cannot change the RAID level of a disk pool. MD Storage Manager automatically configures disk pools as RAID level 6. • All physical disk types in a disk pool must be the same. • You can protect your disk pool with Self Encrypting Disk (SED), but the physical disk attributes must match.
7. To send alert notifications when the usable capacity of the disk pool is reaching a specified percentage, perform the following steps: a) Click View notification settings. b) Select the check box corresponding to a critical warning notification. You also can select the check box corresponding to an early warning notification. The early warning notification is available only after you select the critical warning notification. c) Select or type a value to specify a percentage of usable capacity.
physical disks to a storage array at the same time. This action enables the MD Storage Manager to recommend the best options for using the unconfigured capacity. You can review the options, and click Yes in the Automatic Configuration dialog to create one or more disk pools, or to add the unconfigured capacity to an existing disk pool, or both. If you click Yes, you also can create multiple equal-capacity virtual disks after the disk pool is created.
5. Click OK. Configuring Alert Notifications For A Disk Pool You can configure the MD storage manager to send alert notifications when the unconfigured (free) capacity of a disk pool is reaching a specified percentage. You can modify the alert notification settings after creating a disk pool. To configure alert notifications for a disk pool: 1. In AMW, select the Storage & Copy Services tab. 2. Select the disk pool. 3. From the menu bar, select Storage → Disk Pool → Change → Settings.
3. From the menu bar, select Storage → Disk Pool → Add Physical Disks (Capacity). The Add Physical Disks dialog is displayed. You can view information about: – The disk pool in the Disk Pool Information area. – The unassigned physical disks that can be added to the disk pool in the Select physical disks for addition area. NOTE: The RAID controller module firmware arranges the unassigned physical disk options with the best options listed at the top in the Select physical disks for addition area.
3. From the menu bar, select Storage → Disk Pool → Change → Settings.
The Change Disk Pool Settings dialog is displayed.
4. In the Modification Priorities area, move the slider bars to select a priority level. You can choose a priority level for:
– Degraded reconstruction
– Critical reconstruction
– Background operation
You can select one of the following priority levels: lowest, low, medium, high, or highest. The higher the priority level, the larger the impact on host I/O and system performance.
• Disk pools are configured only as RAID Level 6. • You cannot use this option on RAID Level 0 disk groups that have no consistency. • If you use this option on a RAID Level 1 disk group, the consistency check compares the data on the replicated physical disks. • If you perform this operation on a RAID Level 5 or RAID Level 6 disk group, the check inspects the parity information that is striped across the physical disks. The information about RAID Level 6 applies also to disk pools.
– A new Unconfigured Capacity node if one did not exist previously. • You cannot delete a disk pool that has any of these conditions: – The disk pool contains a repository virtual disk, such as a snapshot group repository virtual disk, a replication repository virtual disk, or a Consistency Group member repository virtual disk. You must delete the logical component that has the associated repository virtual disk in the disk pool before you can delete the disk pool.
Secure Disk Pools

You can create a secure disk pool from security-capable physical disks. The physical disks in a secure disk pool become security enabled. Read access from and write access to the physical disks is only available through a RAID controller module that is configured with the correct security key.
CAUTION: Possible loss of data access – When a disk pool is secured, the only way to remove security is to delete the disk pool.
accommodate additional write requests until the physical capacity is increased. However, on a thin virtual disk, MD Storage Manager can automatically expand physical capacity of a thin virtual disk. You can also do it manually using Storage → Virtual Disk → Increase Repository Capacity. If you select the automatic expansion option, you can also set a maximum expansion capacity.
11. Use the Preferred capacity box to indicate the initial physical capacity of the virtual disk and the Units list to indicate the specific capacity units to use (MB, GB, or TB). NOTE: The physical capacity is the amount of physical disk space that is currently reserved for write requests. The physical capacity must be at least 4 GB in size, and cannot be larger than 256 GB.
9 Using SSD Cache

The SSD Cache feature utilizes solid-state disk (SSD) physical disks to improve read-only performance in your storage array. SSD physical disks are logically grouped together to provide secondary cache for use with the primary cache in the RAID controller module memory. Using SSD Cache improves application throughput and response times and delivers sustained performance improvement across diverse workloads, especially high-IOP workloads.
• Whether you want to enable SSD cache on all eligible virtual disks currently mapped to hosts
• Whether to use SSD cache on existing virtual disks or when creating new virtual disks

SSD Cache Restrictions

The following restrictions apply to using the SSD Cache feature:
• SSD cache is not supported on Snapshot (Legacy) virtual disks or PiT-based snapshot images.
• If you import or export base virtual disks that are SSD cache enabled or disabled, the cached data is not imported or exported.
Viewing Physical Components Associated With An SSD Cache

To view the physical components associated with an SSD cache:
1. In the AMW, select the Storage & Copy Services tab.
2. In the tree view, select the SSD cache and do one of the following:
– From the menu bar, select Storage → SSD Cache → View Associated Physical Components.
– Right-click the SSD cache and select View Associated Physical Components.
– In the Table view for the SSD cache, click View Associated Physical Components.
3. Select the physical disk that you want to add and click Add. The following are not listed in the Add Physical Disks (Capacity) window: – Physical disk(s) in a non-optimal state. – Physical disks which are not SSD physical disks. – Physical disks not compatible with the physical disks currently in the SSD cache. Removing Physical Disks From An SSD Cache To remove physical disks from an SSD cache: 1. In the AMW, select the Storage & Copy Services tab. 2.
Renaming An SSD Cache To rename an SSD cache: 1. In the AMW, select the Storage & Copy Services tab. 2. In the tree view, select the SSD cache which you want to rename. 3. Do one of the following: – From the menu bar, select Storage → SSD Cache → Rename. – Right click on the SSD cache and select Rename. The Rename SSD Cache window is displayed. 4. Type a new name for the SSD cache and click OK. Deleting An SSD Cache To delete an SSD cache: 1. In the AMW, select the Storage & Copy Services tab. 2.
5. In View results, select one of the following options to choose the format in which you want to view the results:
– Response Time
– Cache Hit %
6. Click Start to run the performance modeling tool.
NOTE: Depending on the cache capacity and workload, it may take about 10 to 20 hours to fully populate the cache. There is valid information even after a run of a few minutes, but it takes a number of hours to obtain the most accurate predictions.
10 Premium Feature—Snapshot Virtual Disk

The following types of virtual disk snapshot premium features are supported on the MD storage array:
• Snapshot Virtual Disks using multiple point-in-time (PiT) groups
• Snapshot Virtual Disks (Legacy) using a separate repository for each snapshot

NOTE: This section describes the Snapshot Virtual Disk using PiT groups. If you are using the Snapshot Virtual Disk (Legacy) premium feature, see Premium Feature—Snapshot Virtual Disks (Legacy).
• Standard virtual disks • Thin provisioned virtual disks • Consistency groups To create a snapshot image, you must first create a snapshot group and reserve snapshot repository space for the virtual disk. The repository space is based on a percentage of the current virtual disk reserve. You can delete the oldest snapshot image in a snapshot group either manually or you can automate the process by enabling the Auto-Delete setting for the snapshot group.
• Snapshot virtual disks and snapshot groups cannot exist on the same base virtual disk. A snapshot group uses a repository to save all data for the snapshot images contained in the group. A snapshot image operation uses less disk space than a full physical copy because the data stored in the repository is only the data that has changed since the latest snapshot image. A snapshot group is created initially with one repository virtual disk.
consistency group, the system automatically creates a new snapshot group that corresponds to this member virtual disk. A consistency group repository must be created for each member virtual disk in a consistency group in order to save data for all snapshot images in the group. A consistency group snapshot image comprises multiple snapshot virtual disks. Its purpose is to provide host access to a snapshot image that has been taken for each member virtual disk at the same moment in time.
• If you attempt to create a snapshot image and either of the following conditions is present, the creation may remain in a Pending state: – The base virtual disk that contains this snapshot image is a member of a Remote Replication group. – The base virtual disk is currently synchronizing. When synchronization is complete, the snapshot image creation will complete. • You cannot create a snapshot image on a failed virtual disk or on a snapshot group designated as Reserved.
Canceling A Pending Snapshot Image Use the Cancel Pending Snapshot Image option to cancel a snapshot image that was put in a Pending state when you attempted to create the snapshot image for either a snapshot group or a consistency group.
To delete the snapshot image, do the following: 1. From the AMW, select the Storage & Copy Services tab. 2. Select the snapshot image that you want to delete from the snapshot group or consistency group and then select one of the following menu paths to delete the snapshot image: – Copy Services → Snapshot Image → Delete. – Copy Services → Consistency Group → Consistency Group Snapshot Image → Delete. The Confirm Delete window is displayed. 3.
The snapshot image creation operation completes as soon as the synchronization operation is complete. To cancel the pending snapshot image creation before the synchronization operation completes, do the following: 1. From the AMW, select either the snapshot group or consistency group that contains the pending snapshot image. 2. Do one of the following: – Copy Services → Snapshot Group → Create Snapshot Image Schedule. – Copy Services → Consistency Group → Consistency Group Image → Create/Edit Schedule.
4. Do one of the following: – If you want to disable the schedule, de-select Enable Snapshot Image Scheduling. – If you want to use a different existing schedule, click Import settings from existing schedule. The Import Schedule Settings dialog is displayed. Select the new schedule you want to import from the Existing schedules table and then click Import. – If you want to edit the schedule, modify the schedule settings. For more information on the schedule settings, see the online help. 5.
• You cannot start a rollback operation if the base virtual disk is a secondary virtual disk in a remote replication. However, if the base virtual disk is the primary virtual disk in a remote replication, you can start a rollback operation. Additionally, you cannot perform a role reversal in a remote replication if the primary virtual disk is participating in a rollback operation.
3. Click Resume. The following may occur depending on the error condition: – If the resume rollback operation is successful — You can view the progress of the rollback operation in the Properties pane when you select the base virtual disk or the consistency group member virtual disk in the Logical pane. – If the resume rollback operation is not successful — The rollback operation is paused again.
view the progress of the rollback operation for a snapshot image and its associated base virtual disk or consistency group member virtual disk. 1. From the AMW, select the Storage & Copy Services tab. 2. Select the storage array for which you want to display the operations in progress. The Operations in Progress window is displayed. 3.
the oldest snapshot image and sets the auto-delete limit to the maximum allowable snapshot limit for a snapshot group. • If the base virtual disk resides on a standard disk group, the repository members for any associated snapshot group can reside on either a standard disk group or a disk pool. If a base virtual disk resides on a disk pool, all repository members for any associated snapshot group must reside on the same disk pool as the base virtual disk.
Renaming A Snapshot Group Use the Rename Snapshot Group option to change the name of the snapshot group when the current name is no longer meaningful or applicable. Keep these guidelines in mind when you name a snapshot group: • A name can consist of letters, numbers, and the special characters underscore (_), hyphen (-), and pound (#). If you choose any other characters, an error message is displayed. You are prompted to choose another name. • Limit the name to 30 characters.
• Consistency group member’s snapshot virtual disk The conversion operation requires that a repository be provisioned to support write operations on the snapshot virtual disk. 1. From the AMW, select the Storage & Copy Services tab. 2. Select either a snapshot virtual disk or a consistency group member’s snapshot virtual disk and then select Copy Services → Snapshot Virtual disk → Convert to Read-Write. 3. Select how you wish to create the repository for the Read-Write snapshot virtual disk.
• You cannot create a consistency group on a failed virtual disk. • A consistency group contains one snapshot group for each virtual disk that is a member of the consistency group. You cannot individually manage a snapshot group that is associated with a consistency group. Instead you must perform the manage operations (create snapshot image, delete snapshot image or snapshot group, and rollback snapshot image) at the consistency group level.
6. Click Finish. In the navigation tree, the consistency group and its properties are displayed under the Consistency Groups node. Creating A Consistency Group Repository (Manually) During the creation of a consistency group, a consistency group repository is created to store the data for all the snapshot images contained in the group. A consistency group's repository is created initially with one individual repository virtual disk. Each virtual disk that belongs to a consistency group is referred to as a member virtual disk.
6. From the Repository candidates table, select the repository that you want to use for each member virtual disk in the consistency group. NOTE: Select a repository candidate that is closest to the capacity you specified. – The Repository candidates table shows both new and existing repositories that can be used for each member virtual disk in the consistency group, based on the value you specified for percentage or for preferred capacity.
• All existing snapshot images from the consistency group. • All existing snapshot virtual disks from the consistency group. • All the associated snapshot images that exist for each member virtual disk in the consistency group. • All the associated snapshot virtual disks that exist for each member virtual disk in the consistency group. • All associated repositories that exist for each member virtual disk in the consistency group (if selected). To delete a consistency group: 1.
If you decide to re-create the snapshot virtual disk or consistency group snapshot virtual disk, you must choose a snapshot image from the same base virtual disk. The following guidelines apply: • The Snapshot premium feature must be enabled on the storage array. • To add a new member virtual disk, the consistency group must contain fewer than the maximum number of allowable virtual disks (as defined by your configuration).
To remove a member virtual disk from a consistency group: 1. From the AMW, select the Storage & Copy Services tab. 2. Do one of the following: – Select the base virtual disk that you want to remove from the consistency group and then select Storage → Virtual disk → Remove From Consistency Group. – Select the consistency group from which you want to remove member virtual disks and then select Copy Services → Consistency Group → Remove Member Virtual Disks. 3.
Creating A Snapshot Virtual Disk 1. From the AMW, select the Storage & Copy Services tab. 2. Do one of the following: – Select a base virtual disk, and then select Copy Services → Snapshot Virtual disk → Create. The Select Existing Snapshot Image or New Snapshot Image window is displayed. – Select a base virtual disk, and then select Copy Services → Snapshot Image → Create Snapshot Virtual Disk. The Snapshot Virtual Disk Settings window is displayed. Go to step 4. 3.
6. Select how to grant host access to the snapshot virtual disk. Do one of the following: – Select Read Write and go to step 7. – Select Read Only and click Finish to create the snapshot virtual disk. Go to step 8. NOTE: Repositories are not required for Read Only snapshot virtual disks.
To create a snapshot virtual disk repository: 1. From the Snapshot Virtual Disk Settings window, select Manual and click Next to define the properties for the snapshot virtual disk repository. The Snapshot Virtual disk Repository Settings - Manual window is displayed. 2. Choose how you want to filter the repository candidates in the Repository candidates table, based on either a percentage of the base virtual disk capacity or a preferred capacity. For more information, see the online help topics.
If you decide to re-create the snapshot virtual disk or consistency group snapshot virtual disk, you must choose a snapshot image from the same base virtual disk. If you disable the snapshot virtual disk or consistency group snapshot virtual disk, the system performs the following actions: • Retains the World-Wide Name (WWN) for the snapshot virtual disk or consistency group snapshot virtual disk.
• The snapshot virtual disk or consistency group snapshot virtual disk must be in either an Optimal status or Disabled status. • For a consistency group snapshot virtual disk, all member snapshot virtual disks must be in a Disabled state before you can re-create the consistency group snapshot virtual disk. • You cannot re-create an individual member snapshot virtual disk; you can re-create only the overall consistency group snapshot virtual disk.
Deleting A Snapshot Virtual Disk Or Consistency Group Snapshot Virtual Disk Use the Delete Snapshot Virtual Disk option to delete a snapshot virtual disk or consistency group snapshot virtual disk that is no longer needed for backup or software application testing purposes.
that the snapshot virtual disk repository is larger than you need, you can reduce its size to free up space that is needed by other logical virtual disks.
5. In the Map to host drop-down, specify how you want to map the host for each snapshot virtual disk created for a selected member virtual disk. This map attribute is applied to every member virtual disk you select in the consistency group. For more information on the map attributes, see the online help topics.
create the repository automatically using the default settings or you can manually create the repository by defining the capacity settings for the repository. You are initially creating an overall repository with one individual repository virtual disk. However, the overall repository can contain multiple repository virtual disks in the future for expansion purposes.
7. To edit an individual repository candidate: a) Select the candidate from the Repository candidates table and click Edit to modify the capacity settings for the repository. b) Click OK. 8. In the % full box, define the percentage at which a warning is triggered when the capacity of a consistency group snapshot virtual disk repository reaches that value. 9. Click Finish to create the repository.
3. Type yes in the text box and then click Disable to disable the snapshot virtual disk. The snapshot virtual disk or consistency group snapshot virtual disk is displayed in the Logical pane with the Disabled Snapshot status icon. If you disabled a read-write snapshot virtual disk or consistency group snapshot virtual disk, its associated snapshot repository virtual disk does not change status.
Changing The Modification Priority Of An Overall Repository Virtual Disk Use the Modification Priority option to specify the modification priority setting for an overall repository virtual disk on a storage array.
5. Select either With consistency check or Without consistency check, and click OK. A consistency check scans the blocks in a RAID Level 5 or RAID Level 6 virtual disk and checks the consistency information for each block. A consistency check compares data blocks on RAID Level 1 mirrored physical disks. RAID Level 0 virtual disks have no data redundancy.
• Snapshot group • Snapshot virtual disk • Consistency group member virtual disk • Consistency group member snapshot virtual disk • Replicated pair Use this option when you receive a warning that the overall repository is in danger of becoming full. You can increase the repository capacity by performing one of these tasks: • Adding one or more existing repository virtual disks. • Creating a new repository virtual disk using free capacity that is available on a disk group or disk pool.
5. To add one or more existing repository virtual disks, perform the following steps: a) Select one or more repository virtual disks from the Eligible repository virtual disks table. Only eligible repository virtual disks that have the same DS settings as the associated base virtual disk are displayed. NOTE: You can click the Select all check box to add all the repository virtual disks displayed in the Eligible repository virtual disks table.
• You cannot increase or decrease the repository capacity for a snapshot virtual disk that is read-only because it does not have an associated repository. Only snapshot virtual disks that are read-write require a repository. • When you decrease capacity for a snapshot virtual disk or a consistency group member snapshot virtual disk, the system automatically transitions the virtual disk to a Disabled state. To decrease the overall repository capacity: 1.
CAUTION: Using the Revive option when there are still failures may cause data corruption or data loss, and the storage object will return to the Failed state. 1. From the AMW, select the Storage & Copy Services tab. 2. Select the storage object that you want to revive and then select one of the following menu paths (depending on the storage object you selected): – Copy Services → Snapshot Group → Advanced → Revive. – Copy Services → Snapshot Virtual Disk → Advanced → Revive. 3.
11 Premium Feature—Snapshot Virtual Disks (Legacy) The following types of virtual disk snapshot premium features are supported on the MD storage array: • Snapshot Virtual Disks using multiple point-in-time (PiT) groups • Snapshot Virtual Disks (Legacy) using a separate repository for each snapshot NOTE: This section describes the Snapshot Virtual Disk (Legacy) premium feature.
NOTE: Deleting a snapshot does not affect data on the source virtual disk. NOTE: The following host preparation sections also apply when using the snapshot feature through the CLI interface. Scheduling A Snapshot Virtual Disk When you create a snapshot virtual disk, you can choose whether the snapshot is created immediately or according to a schedule that you determine. This schedule can be a one-time snapshot creation or a snapshot creation that occurs at regular intervals.
• Snapshot schedules can be created when the snapshot virtual disk is initially created or can be added to existing snapshot virtual disks. Creating A Snapshot Virtual Disk Using The Simple Path You can choose the simple path to create a snapshot virtual disk if the disk group of the source virtual disk has the required amount of free space. A snapshot repository virtual disk requires a minimum of 8 MB free capacity.
• The following types of virtual disks are not valid source virtual disks: – Snapshot repository virtual disks – Snapshot virtual disks – Target virtual disks that are participating in a virtual disk copy • You cannot create a snapshot of a virtual disk that contains unreadable sectors. • You must satisfy the requirements of your host operating system for creating snapshot virtual disks.
Creating A Snapshot Virtual Disk Using The Advanced Path About The Advanced Path Use the advanced path to choose whether to place the snapshot repository virtual disk on free capacity or unconfigured capacity and to change the snapshot repository virtual disk parameters. You can select the advanced path regardless of whether you use free capacity or unconfigured capacity for the snapshot virtual disk.
If 8 MB of free capacity is not available in the disk group of the source virtual disk, the Create Snapshot Virtual Disks feature defaults to the advanced path. See Creating A Snapshot Virtual Disk Using The Advanced Path. NOTE: You can create concurrent snapshots of a source virtual disk on both the source disk group and on another disk group.
Creating The Snapshot Using The Advanced Path NOTE: Removing the drive letter of the associated virtual disk in Windows or unmounting the virtual drive in Linux helps to guarantee a stable copy of the drive for the Snapshot. Prepare the host server(s) as specified in Preparing Host Servers To Create The Snapshot Using The Advanced Path. To create a virtual disk snapshot using the advanced path: 1. Stop the host application accessing the source virtual disk, and unmount the source virtual disk. 2.
Specifying Snapshot Virtual Disk Names Choose a name that helps you associate the snapshot virtual disk and snapshot repository virtual disk with its corresponding source virtual disk. The following information is useful when naming virtual disks. By default, the snapshot name is shown in the Snapshot virtual disk name field as: <source-virtual-disk-name>—<sequence-number>, where sequence-number is the chronological number of the snapshot relative to the source virtual disk. For example, the second snapshot of a source virtual disk named Accounting defaults to Accounting—2.
• The controller that has ownership of this virtual disk is currently adding capacity to another virtual disk. Each controller can add capacity to only one virtual disk at a time. • No free capacity exists in the disk group. • No unconfigured capacity is available to add to the disk group. NOTE: You can add a maximum of two physical disks at one time to increase snapshot repository virtual disk capacity. To expand the snapshot repository virtual disk from MD Storage Manager: 1.
12. Either accept the final capacity, or enter or select the appropriate capacity in Increase capacity by. 13. Click OK. The Storage & Copy Services tab is updated. The snapshot repository virtual disk that is having its capacity increased shows a status of Operation in Progress. In addition, the snapshot repository virtual disk shows its original capacity and the total capacity being added. The Free Capacity node involved in the increase shows a reduction in capacity.
NOTE: If you do not intend to re-create the snapshot virtual disk at a later time, in the object tree, select the snapshot virtual disk, and select Virtual Disk → Delete to remove it. The associated snapshot repository virtual disk is also removed. See the online help topics for more information on removing a snapshot virtual disk. NOTE: The SMdevices utility displays the snapshot virtual disk in its output, even after the snapshot virtual disk is disabled. To disable a snapshot virtual disk: 1.
Re-Creating A Snapshot Virtual Disk After first preparing the host server(s), re-create a snapshot virtual disk. For more information, see Preparing Host Servers To Create The Snapshot Using The Simple Path or Preparing Host Servers To Create The Snapshot Using The Advanced Path. NOTE: This action invalidates the current snapshot. To re-create a snapshot virtual disk: 1. In the AMW, select the Storage & Copy Services tab and select a snapshot virtual disk. 2.
12 Premium Feature—Virtual Disk Copy NOTE: A virtual disk copy overwrites data on the target virtual disk. Before starting a virtual disk copy, ensure that you no longer need the data or back up the data on the target virtual disk. NOTE: If you ordered this feature, you received a Premium Feature Activation card that shipped in the same box as your Dell PowerVault MD Series storage array. Follow the directions on the card to obtain a key file and to enable the feature.
Using Virtual Disk Copy With Snapshot Or Snapshot (Legacy) Premium Feature After completion of the virtual disk copy of a snapshot (Legacy), the legacy snapshot is disabled. After completion of the virtual disk copy using a snapshot image, the snapshot image is deleted and the snapshot virtual disk is disabled. Snapshots created using older (Legacy) premium feature versions cannot be managed using newer snapshot premium feature options.
NOTE: If the snapshot virtual disk that is used as the copy source is active, the source virtual disk performance degrades due to copy-on-write operations. When the copy is complete, the snapshot is disabled and the source virtual disk performance is restored. Although the snapshot is disabled, the repository infrastructure and copy relationship remain intact.
NOTE: If you want to choose the base virtual disk of an older (legacy) snapshot virtual disk as your target virtual disk, you must first disable all snapshot (legacy) virtual disks that are associated with the base virtual disk. • A virtual disk participating in a modification operation cannot be selected as a source virtual disk or target virtual disk.
Before You Begin A virtual disk copy fails all snapshot virtual disks that are associated with the target virtual disk, if any exist. If you select a source virtual disk of a snapshot virtual disk, you must disable all of the snapshot virtual disks that are associated with the source virtual disk before you can select it as a target virtual disk. Otherwise, the source virtual disk cannot be used as a target virtual disk.
preferred RAID controller module of the source virtual disk. When the virtual disk copy is completed or is stopped, ownership of the target virtual disk is restored to its preferred RAID controller module. If ownership of the source virtual disk is changed during the virtual disk copy, ownership of the target virtual disk is also changed.
5. Right-click the selected source virtual disk and select Create → Virtual Disk Copy from the pop-up menu. The Select Copy Type wizard is displayed. 6. Select a copy type and click Next. NOTE: If you select Offline, the source virtual disk is not available for any I/O while the copy operation is in progress. The Select Target Virtual Disk window is displayed. 7. Select the appropriate target virtual disk and click Next. The Confirmation window is displayed. 8.
4. In the Copy Priority area, select the appropriate copy priority, depending on your system performance needs.
NOTE: There are five copy priority rates available:
– lowest
– low
– medium
– high
– highest
If the copy priority is set at the lowest rate, I/O activity is prioritized, and the virtual disk copy takes longer.
Stopping A Virtual Disk Copy You can stop a virtual disk copy operation that has an In Progress status, a Pending status, or a Failed status.
Before creating a new virtual disk copy for an existing copy pair, both the host server and the associated virtual disk you are recopying must be in the proper state. Perform the following steps to prepare your host server and virtual disk: 1. Stop all I/O activity to the source and target virtual disk. 2. Using your Windows system, flush the cache to both the source and the target virtual disk (if mounted). At the host prompt, type: SMrepassist -f <filename-identifier> and press <Enter>.
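For example, assuming the source and target virtual disks are mounted on the Windows host as hypothetical drives E: and F:, you might flush both file system caches as follows (a sketch; verify the exact argument format for your version of the SMrepassist utility):

SMrepassist -f E:
SMrepassist -f F: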
6. Set the copy priority. There are five copy priority rates available: lowest, low, medium, high, and highest. If the copy priority is set at the lowest rate, I/O activity is prioritized, and the virtual disk copy takes longer. If the copy priority is set to the highest priority rate, the virtual disk copy is prioritized, but I/O activity for the storage array might be affected. Removing Copy Pairs You can remove one or more virtual disk copies by using the Remove Copy Pairs option.
13 Device Mapper Multipath For Linux Overview The MD Series storage arrays use a Linux operating system software framework, known as Device Mapper (DM), to enable multipath capabilities on Linux Host Servers. The DM multipath functionality is provided by a combination of drivers and utilities. This chapter describes how to use those utilities to complete the process of enabling MD Series storage arrays on a Linux system.
Prerequisites The following tasks must be completed before proceeding. For more information about step 1 through step 3, see the storage array’s Deployment Guide. For more information about step 4, see Creating Virtual Disks. 1. Install the host software from the MD Series storage arrays resource DVD — Insert the Resource media in the system to start the installation of Modular Disk Storage Manager (MD Storage Manager) and Modular Disk Configuration Utility (MDCU). NOTE: Installation of Red Hat 5.
In the following command descriptions, <x> is used to indicate where a substitution must be made. On Red Hat Enterprise Linux systems, <x> is the number assigned to the device. On SUSE Linux Enterprise Server systems, <x> is the letter(s) assigned to the device. Scan For Newly Added Virtual Disks The rescan_dm_devs command scans the host server system looking for existing and newly added virtual disks mapped to the host server.
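As a minimal sketch, after mapping a new virtual disk to the host you might rescan and then list the resulting multipathing devices (both commands are run as root; rescan_dm_devs is installed by the MD host software):

# rescan_dm_devs
# multipath -ll

The multipath -ll output shows each multipathing device node (for example, mpathb) together with the underlying sd paths to each RAID controller module.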
Create A New fdisk Partition On A Multipath Device Node The fdisk command allows creation of partition space for a file system on the newly scanned and mapped virtual disks that have been presented to Device Mapper. To create a partition on the multipathing device node /dev/mapper/mpath<x>, for example, use the following command: # fdisk /dev/mapper/mpath<x> where mpath<x> is the multipathing device node on which you want to create the partition.
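For example, assuming a hypothetical device node mpathb, a typical sequence creates a primary partition, registers it with Device Mapper using kpartx (part of the standard multipath tools), and then creates a file system on it:

# fdisk /dev/mapper/mpathb (create a new primary partition, write it, and exit)
# kpartx -a /dev/mapper/mpathb (add the partition mapping, for example mpathbp1)
# mkfs -t ext4 /dev/mapper/mpathbp1 (create a file system on the new partition node)

The exact partition node name (mpathbp1 in this sketch) can vary between distributions; check /dev/mapper after running kpartx. The new partition can then be mounted as described in Mount A Device Mapper Partition.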
Mount A Device Mapper Partition Use the standard mount command to mount the Device Mapper partition, as shown below: # mount /dev/mapper/<partition_node> <mount_point> Ready For Use The virtual disks created on the MD Series storage array are now set up and ready to be used. Future reboots automatically find multipathing devices along with their partitions.
Command: multipath -f <multipath_dev_name>
Description: Flushes out the Device Mapper map for the specified multipathing device. Used if the underlying physical devices are deleted or unmapped.

Command: multipath -F
Description: Flushes out all unused multipathing device maps.

Command: rescan_dm_devs
Description: Dell-provided script. Forces a rescan of the host SCSI bus and aggregates multipathing devices as needed. Use this command when:
• LUNs are dynamically mapped to the hosts.
• New targets are added to the host.
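For example, if a virtual disk that was presented as the hypothetical device mpathc has been unmapped from the host, you might flush its stale device map and then rescan:

# multipath -f mpathc
# rescan_dm_devs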
Troubleshooting
Question: How can I check if multipathd is running?
Answer: Run the following command: # /etc/init.d/multipathd status
Question: Why does the multipath -ll command output not show any devices?
Answer: First verify whether the devices have been discovered. The command # cat /proc/scsi/scsi displays all the devices that are already discovered. Then verify that multipath.conf has been updated with the proper settings. After this, run multipath, and then run multipath -ll; the new devices must show up.
14 Configuring Asymmetric Logical Unit Access If your MD Series RAID storage array supports Asymmetric Logical Unit Access (ALUA), active-active throughput allows I/O to pass from a RAID controller module to a virtual disk that is not owned by the controller. Without ALUA, the host multipath driver is required to send data requests targeted to a specific virtual disk to the owning RAID controller module. If the controller module does not own the virtual disk, it rejects the request.
Enabling ALUA On VMware ESXi VMware ESXi 5.x does not have Storage Array Type Plug-in (SATP) claim rules automatically set to support ALUA on the MD Series storage arrays. To enable ALUA, you must manually add the claim rule. Manually Adding SATP Rule In ESXi 5.x To manually add the SATP rule in ESXi 5.x: 1. Run the following command: # esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V DELL -M array_PID -c tpgs_on where array_PID is your storage array model/product ID.
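As a sketch, for a hypothetical product ID of MD34xx the claim rule might be added and then verified as follows (substitute the product ID reported by your array):

# esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V DELL -M MD34xx -c tpgs_on
# esxcli storage nmp satp rule list | grep DELL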
Setting Round-Robin Load Balancing Policy On ESXi-Based Storage Arrays NOTE: Perform this procedure after you have enabled ALUA on VMware ESXi and verified that the host server is using ALUA for the MD storage array. For more information, see Enabling ALUA On VMware ESX/ESXi and Verifying If Host Server Is Using ALUA For MD Storage Array. To set a round-robin load balancing policy on your ESXi-based host server: 1. For ESXi 5.
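As a hedged sketch for ESXi 5.x, the following command makes round robin the default path selection policy for all devices claimed by the ALUA SATP (verify the option names against your ESXi release):

# esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA

Alternatively, the policy can be set per device with esxcli storage nmp device set --device=<device_id> --psp=VMW_PSP_RR.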
15 Premium Feature—Remote Replication The following types of Remote Replication are supported on the MD storage array: • Remote Replication — Standard asynchronous replication using point-in-time images to batch the resynchronization between the local and remote site. This type of replication is supported on both Fibre Channel and iSCSI storage arrays (but not between the two protocols). • Remote Replication (Legacy) — Synchronous (or full-write) replication that synchronizes local and remote site data in real-time.
Types Of Remote Replication The following are the types of Remote Replication premium features supported on your storage array: • Remote Replication — Also known as standard or asynchronous, it is supported on both iSCSI- and Fibre Channel-based storage arrays (both local and remote storage arrays must use the same data protocol) and requires a dual RAID controller configuration. • Remote Replication (Legacy) — Also known as synchronous or full-write, it is supported on Fibre Channel storage arrays only.
Remote Replication Requirements And Restrictions To use the standard Remote Replication premium feature, you must have:
• Two storage arrays with write access; both storage arrays must have sufficient space to replicate data between them.
• A dual-controller Fibre Channel or iSCSI configuration on each storage array (single-controller configurations are not supported).
Only RAID controller modules configured for Remote Replication can communicate with the reserved ports. The Remote Replication premium feature must be activated on both the local and remote storage arrays. NOTE: Perform the activation steps below on the local storage array first and then repeat them on the remote storage array. 1. In the AMW of the local storage array, select the Storage & Copy Services tab. 2. Select Copy Services → Remote Replication → Activate. 3.
Remote Replication Groups After the Remote Replication premium feature is successfully activated on both the local and remote storage arrays, you can create a Remote Replication group on the local storage array. This group contains at least one replicated virtual disk pair—one on the local storage array and one on the remote storage array. These disks serve as primary and secondary disks that share data synchronization settings to provide consistent backup between both storage arrays.
3. In Remote replication group name, enter a group name (30 characters maximum). 4. In the Choose the remote storage array drop-down, select a remote storage array. NOTE: If a remote storage array is not available, you cannot continue. Verify your network configuration or contact your network administrator. 5. In the Connection type drop-down, choose your data protocol (iSCSI or Fibre Channel only). 6.
Creating Replicated Pairs This procedure describes how to create the remote replicated pair on an existing remote replication group. To create a new Remote Replication group, see Creating a Remote Replication Group. 1. In the AMW of the local storage array, select the Storage & Copy Services tab. 2. Select Copy Services → Remote Replication → Remote Replication → Replication Group → Create Replication Pair. The Select Remote Replication Group window is displayed.
3. Do one of the following: – Select Automatic and select an existing disk pool or disk group from the table, then click Finish to automatically complete the replicated pair creation process with the default secondary virtual disk selection and repository settings. – Select Manual, then click Next to choose an existing virtual disk as the secondary virtual disk and define the repository parameters for the remote side of the remote replicated pair. The Remote Replicated pair is created.
16 Management Firmware Downloads Downloading RAID Controller And NVSRAM Packages A version number exists for each firmware file. The version number indicates whether the firmware is a major version or a minor version. You can use the Enterprise Management Window (EMW) to download and activate both the major firmware versions and the minor firmware versions. You can use the Array Management Window (AMW) to download and activate only the minor firmware versions. NOTE: Firmware versions are of the format aa.
3. To locate the directory in which the file to download resides, click Select File next to the Selected RAID controller module firmware file text box. 4. In the File Selection area, select the file to download. By default, only the downloadable files that are compatible with the current storage array configuration are displayed. When you select a file in the File Selection area of the dialog, applicable attributes (if any) of the file are displayed in the File Information area.
15. If you want to download the NVSRAM file with the RAID controller module firmware, select Download NVSRAM file with firmware in the Select files area. Any attributes of the firmware file are displayed in the Firmware file information area. The attributes indicate the version of the firmware file. Any attributes of the NVSRAM file are displayed in the NVSRAM file information area. The attributes indicate the version of the NVSRAM file. 16.
7. Perform one of these actions: – Select Tools → Upgrade RAID Controller Module Firmware. – Select the Setup tab, and click Upgrade RAID Controller Module Firmware. The Upgrade RAID Controller Module Firmware window is displayed. The Storage array pane lists the storage arrays. The Details pane shows the details of the storage array that is selected in the Storage array pane. 8. In the Storage array pane, select the storage array for which you want to download the NVSRAM firmware.
• RAID configuration information is stored in the physical disk firmware and is used to communicate with other RAID components. CAUTION: Risk of application errors—Downloading the firmware could cause application errors. Keep these important guidelines in mind when you download firmware to avoid the risk of application errors: • Downloading firmware incorrectly could result in damage to the physical disks or loss of data.
CAUTION: Risk of possible loss of data or risk of damage to the storage array—Downloading the expansion enclosure EMM firmware incorrectly could result in loss of data or damage to the storage array. Perform downloads only under the guidance of your Technical Support representative. CAUTION: Risk of making expansion enclosure EMM unusable—Do not make any configuration changes to the storage array while downloading expansion enclosure EMM firmware.
data from peer disks in the disk group and uses recovered data to correct the error. If the controller encounters an error while accessing a peer disk, it is unable to recover the data and affected sectors are added to the unreadable sector log maintained by the controller.
17 Firmware Inventory A storage array is made up of many components, which may include RAID controller modules, physical disks, and enclosure management modules (EMMs). Each of these components contains firmware. Some versions of the firmware are dependent on other versions of firmware. To capture information about all of the firmware versions in the storage array, view the firmware inventory.
18 System Interfaces Virtual Disk Service The Microsoft Virtual Disk Service (VDS) is a component of the Windows operating system. The VDS component utilizes third-party, vendor-specific software modules, known as providers, to access and configure third-party storage resources, such as MD Series storage arrays. The VDS component exposes a set of application programming interfaces (APIs) that provides a single interface for managing disks and other storage hardware.
• The number of snapshot virtual disks that can be created using a single snapshot set varies with the I/O load on the RAID controller modules. Under little or no I/O load, the number of virtual disks in a snapshot set must be limited to eight. Under high I/O loads, the limit must be three. • The snapshot virtual disks created in the MD Storage Manager are differential snapshots. Plex snapshots are not supported.
19 Storage Array Software Start-Up Routine Look and listen during the array’s start-up routine for the indications described in the table below. For a description of the front- and back-panel indicators, see About Your Storage Array.
Look/Listen for: Alert messages. Action: See your storage management documentation.
Look/Listen for: An unfamiliar. Action: See Getting Help.
Status: Unresponsive. Description: The storage management station cannot communicate with the storage array, or with one or both RAID controller modules in the storage array.
Status: Fixing. Description: A Needs Attention status has been corrected and the managed storage array is currently transitioning to an Optimal state.
Status: Unsupported. Description: The node is currently not supported by this version of MD Storage Manager.
Upgrade status, the Alert Disables status icon is displayed next to the parent node in the tree view. Setting an Alert at the Parent Node Level You can set alerts at any of the nodes in the Tree view. Setting an alert at a parent node level, such as at a host level, sets an alert for all child nodes.
Trace Buffers Trace information can be saved to a compressed file. The firmware uses the trace buffers to record processing activity, including exception conditions, that may be useful for debugging. Trace information is stored in the current buffer and can be moved to the flushed buffer after being retrieved. Because each RAID controller module has its own buffer, there may be more than one flushed buffer.
Collecting Physical Disk Data You can use the Collect Physical Disk Data option to collect log sense data from all the physical disks on your storage array. Log sense data consists of statistical information that is maintained by each of the physical disks in your storage array. Your Technical Support representative can use this information to analyze the performance of your physical disks and for troubleshooting problems that may exist.
support data collections do not occur. Suspending a schedule does not affect the automatic collection of support data during major event log (MEL) events. Resuming a schedule restarts the collection of support data on a scheduled basis. You can resume a suspended schedule at any time. 1. From the EMW, select Tools → Collect Support Data → Create/Edit Schedule. The Schedule Support Data Collection dialog is displayed. 2. In the Storage arrays table, select one or more storage arrays. 3.
Viewing The Event Log WARNING: Use this option only under the guidance of your Technical Support representative. To view the event log: 1. In the AMW, select Monitor → Reports → Event Log. The Event Log is displayed. By default, the summary view is displayed. 2. To view the details of each selected log entry, select View details. A detail pane is added to the event log that contains detailed information about the log item. You can view the details about a single log entry at a time. 3.
current configuration of the storage array. Create a new copy of the storage array profile if your configuration changes. 1. To open the storage array profile, in the AMW, perform one of the following actions: – Select Monitor → Reports → Storage Array Profile. – Select the Summary tab, and click View Storage Array Profile in the Monitor area. The Storage Array Profile dialog is displayed.
Viewing The Physical Associations You can use the Associated Physical Components option to view the physical components that are associated with source virtual disks, snapshot virtual disks, snapshot repository virtual disks, disk groups, unconfigured capacity, and free capacity in a storage array. To view the physical associations: 1. In the AMW, select a node in the Storage & Copy Services tab or in the object tree of the Host Mappings tab. 2. Click View Associated Physical Components.
11. If there is a cable or network accessibility problem, go to step 20; if not, go to step 12. 12. For an in-band managed storage array, make sure that the host is network accessible by using the ping command to verify that the host can be reached. Type one of these commands, and press <Enter>: – ping <host-name> – ping <host-IP-address> 13. If the verification is successful, go to step 14; if not, go to step 15. 14.
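As an example of the verification in step 12, for a hypothetical host named hostserver1 with IP address 192.168.1.10 (both example values), either form verifies reachability:

ping hostserver1
ping 192.168.1.10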
Locating A Physical Disk You can physically locate and identify one or more of the physical disks in an expansion enclosure by activating physical disk LEDs. To locate the physical disk: 1. Select the Hardware tab. 2. Select the physical disks that you want to locate. 3. Select Hardware → Blink → Physical Disk. The LEDs on the selected physical disks blink. 4. When you have located the physical disks, click OK. The LEDs stop blinking.
Capturing The State Information Use the Capture State Information option to capture information about the current state of your storage array and save the captured information to a text file. You can then send the captured information to your Technical Support representative for analysis. CAUTION: Potential to cause an unresponsive storage array – The Capture State option can cause a storage array to become unresponsive to both the host and the storage management station.
• Source virtual disk and snapshot virtual disk (for example, if the snapshot virtual disk has been removed). • Standard virtual disk and virtual disk copy (for example, if the virtual disk copy has been removed). Unidentified Devices An unidentified node or device occurs when the MD Storage Manager cannot access a new storage array. Causes for this error include network connection problems, the storage array is turned off, or the storage array does not exist.
4. If you have an out-of-band storage array, use the following procedure. Click Refresh after each step to make sure of the results: a) Make sure that the network can access the controllers by using the ping command. Use the following syntax: ping <controller-IP-address> If the network can access the controllers, continue to step b. If the network cannot access the controllers, skip to step c.
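As an example, assuming the controllers use hypothetical management addresses of 192.168.128.101 and 192.168.128.102 (substitute your controllers' actual addresses), you would verify each controller in turn:

ping 192.168.128.101
ping 192.168.128.102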
Starting The SMagent Software In Linux To start or restart the Host Context Agent software in Linux, enter the following command at the prompt: SMagent start The SMagent software may take a little time to initialize. The cursor is shown, but the terminal window does not respond. When the program starts, the following message is displayed: SMagent started. After the program completes the startup process, text similar to the following is displayed: Modular Disk Storage Manager Agent, Version 90.02.A6.
20 Getting Help Contacting Dell NOTE: If you do not have an active Internet connection, you can find contact information on your purchase invoice, packing slip, bill, or Dell product catalog. Dell provides several online and telephone-based support and service options. Availability varies by country and product, and some services may not be available in your area. To contact Dell for sales, technical support, or customer service issues: 1. Visit dell.com/support. 2. Select your support category. 3.