HP MSA 2040 SMU Reference Guide

Abstract

This guide is for use by storage administrators to manage an HP MSA 2040 storage system by using its web interface, Storage Management Utility (SMU).
© Copyright 2014 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Getting started
    Configuring and provisioning a new storage system
    Browser setup
    Signing in
Configuring system settings
    Changing the system date and time
    Changing host interface settings
    Changing network interface settings
Changing host mappings
Configuring CHAP
Modifying a schedule
Deleting schedules
Viewing information about a snapshot
    Snapshot properties
    Mapping properties
    Schedule properties
Viewing replication properties, addresses, and images for a volume
    Replication properties
    Replication addresses
    Replication images
Testing SMI-S
Troubleshooting
D Administering a log-collection system
    How log files are transferred and identified
Figures

1 Relationship between a master volume and its snapshots and snap pool
2 Rolling back a master volume
3 Creating a volume copy from a master volume or a snapshot
4 Intersite and intrasite replication sets
Tables

1 SMU communication status icons
2 Settings for default users
3 Example applications and RAID levels
4 RAID level comparison
1 Getting started The Storage Management Utility (SMU) is a web-based application for configuring, monitoring, and managing the storage system. Each controller module in the storage system contains a web server, which is accessed when you sign in to the SMU. In a dual-controller system, you can access all functions from either controller. If one controller becomes unavailable, you can continue to manage the storage system from the partner controller.
Tips for signing in and signing out • Do not include a leading zero in an IP address. For example, enter 10.1.4.33 not 10.1.4.033. • Multiple users can be signed in to each controller simultaneously. • For each active SMU session an identifier is stored in the browser. Depending on how your browser treats this session identifier, you might be able to run multiple independent sessions simultaneously.
Tips for using the help window • To display help for a component in the Configuration View panel, right-click the component and select Help. To display help for the content in the main panel, click either Help in the menu bar or the help icon in the upper right corner of the panel. • In the help window, click the table of contents icon to show or hide the Contents pane. • As the context in the main panel is changed, the corresponding help topic is displayed in the help window.
SNMPv3 user accounts have these options: • User Name. • Password. • SNMP User Type. Either: User Access, which allows the user to view the SNMP MIB; or Trap Target, which allows the user to receive SNMP trap notifications. Trap Target uses the IP address set with the Trap Host Address option. • Authentication Type. Either: MD5 authentication; SHA (Secure Hash Algorithm) authentication; or no authentication. Authentication uses the password set with the Password option. • Privacy Type.
When you create a vdisk you can use the default chunk size or one that better suits your application. The chunk size is the amount of contiguous data that is written to a disk before moving to the next disk. After a vdisk is created its chunk size cannot be changed. For example, if the host is writing data in 16-KB transfers, a 16-KB chunk size would be a good choice for random transfers because one host read would access exactly one disk in the volume.
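As an illustration, the relationship between chunk size and host I/O can be sketched with a simplified striping model (a Python sketch for illustration only, not a product API; real vdisks also rotate parity, which this ignores):

```python
def disk_for_offset(offset_bytes, chunk_size_bytes, num_disks):
    """Index of the data disk holding the chunk at this offset
    (simplified RAID-0-style striping; parity rotation is ignored)."""
    return (offset_bytes // chunk_size_bytes) % num_disks

# With a 16-KB chunk size, a chunk-aligned 16-KB host read starts and
# ends on the same disk, so it is served by exactly one disk:
chunk = 16 * 1024
assert disk_for_offset(0, chunk, 4) == disk_for_offset(chunk - 1, chunk, 4)
```

A transfer larger than the chunk size, or one that is not chunk-aligned, would span two or more disks, which is why matching the chunk size to the host's typical random transfer size is a reasonable starting point.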
TIP: A best practice is to designate spares for use if disks fail. Dedicating spares to vdisks is the most secure method, but it is also expensive to reserve spares for each vdisk. Alternatively, you can enable dynamic spares or assign global spares. Sparing rules for heterogeneous vdisks If you upgraded from an earlier release that did not distinguish between enterprise and midline SAS disks, you might have vdisks that contain both types of disks. These are called heterogeneous or mixed vdisks.
You can use a volume’s default name or change it to identify the volume’s purpose. For example, a volume used to store payroll information can be named Payroll. You can create vdisks with volumes by using the Provisioning Wizard, or you can create volumes manually.
About volume mapping Each volume has default host-access settings that are set when the volume is created; these settings are called the default mapping. The default mapping applies to any host that has not been explicitly mapped using different settings. Explicit mappings for a volume override its default mapping. Default mapping enables all attached hosts to see a volume using a specified LUN and access permissions set by the administrator.
About volume cache options

You can set options that optimize reads and writes performed for each volume.

Using write-back or write-through caching

CAUTION: Only disable write-back caching if you fully understand how the host operating system, application, and adapter move data. Disabling it incorrectly can degrade system performance.

You can change a volume’s write-back cache setting.
• No-mirror. In this mode each controller stops mirroring its cache metadata to the partner controller. This improves write I/O response time but at the risk of losing data during a failover. ULP behavior is not affected, with the exception that during failover any write data in cache will be lost. • Atomic write. Not supported.
The following figure shows how the data state of a master volume is preserved in the snap pool by two snapshots taken at different points in time. The dotted line used for the snapshot borders indicates that snapshots are logical volumes, not physical volumes as are master volumes and snap pools.
The following figure shows the difference between rolling back the master volume to the data that existed when a specified snapshot was created (preserved), and rolling back preserved and modified data.

[Figure: Rolling back a master volume — MasterVolume-1 and Snapshot-1, showing preserved data (Monday) and modified data (Tuesday)]

When you use the rollback feature, you can choose to exclude the modified data, which will revert the data on the master volume to the data preserved when the snapshot was taken.
About the Volume Copy feature Volume Copy enables you to copy a volume or a snapshot to a new standard volume. While a snapshot is a point-in-time logical copy of a volume, the volume copy service creates a complete “physical” copy of a volume within a storage system. It is an exact copy of a source volume as it existed at the time the volume copy operation was initiated, consumes the same amount of space as the source volume, and is independent from an I/O perspective.
Guidelines to keep in mind when performing a volume copy include: • The destination vdisk must be owned by the same controller as the source volume. • The destination vdisk must have free space that is at least as large as the amount of space allocated to the original volume. A new volume will be created using this free space for the volume copy. • The destination vdisk does not need to have the same attributes (such as disk type, RAID level) as the volume being copied.
NOTE: To create an NRAID, RAID-0, or RAID-3 vdisk, you must use the CLI create vdisk command. For more information on this command, see the CLI Reference Guide.
Table 4 RAID level comparison (continued)
The locale setting determines the character used for the decimal (radix) point, as shown below.

Table 7 Decimal (radix) point character by locale

Language                                   Character    Examples
English, Chinese, Japanese, Korean         Period (.)   146.81 GB; 3.0 Gbit/s
Dutch, French, German, Italian, Spanish    Comma (,)    146,81 GB; 3,0 Gbit/s

Related topics

• "About user accounts" (page 15)

About the system date and time

You can change the storage system’s date and time, which are displayed in the System Status panel.
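The radix convention in Table 7 amounts to a simple character substitution, sketched here in Python (illustrative only; the SMU applies the locale itself):

```python
def format_capacity(value_gb, radix):
    """Format a capacity value using the locale's decimal (radix) character."""
    return f"{value_gb:.2f} GB".replace(".", radix)

assert format_capacity(146.81, ".") == "146.81 GB"  # English and CJK locales
assert format_capacity(146.81, ",") == "146,81 GB"  # e.g. French or German
```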
About Configuration View icons The Configuration View panel uses the following icons to let you view physical and logical components of the storage system.
• During vdisk operation, if two disks fail and two compatible spares are available, the system uses both spares to reconstruct the vdisk. If one of the spares fails during reconstruction, reconstruction proceeds in “fail 2, fix 1” mode. If the second spare fails during reconstruction, reconstruction stops. When a disk fails, its Fault/UID LED is illuminated. When a spare is used as a reconstruction target, its Online/Activity LED is illuminated. For details about LED states, see your product’s User Guide.
• In pull mode, when log data has accumulated to a significant size, the system sends notifications via email, SNMP, or SMI-S to the log-collection system, which can then use FTP to transfer the appropriate logs from the storage system. The notification will specify the storage-system name, location, contact, and IP address and the log-file type (region) that needs to be transferred.
Disk-performance graphs include: • Data Transferred • Data Throughput • I/O • IOPS • Average Response Time • Average I/O Size • Disk Error Counters • Average Queue Depth Vdisk-performance graphs include: • Data Transferred • Data Throughput • Average Response Time You can save historical statistics in CSV format to a file for import into a spreadsheet or other third-party application. You can also reset historical statistics, which clears the retained data and continues to gather new samples.
• If the firmware in neither controller is associated with the proper midplane serial number, the newer firmware version is transferred to the partner controller. For information about the procedures to update firmware in controller modules, expansion modules, and disk drives, see "Updating firmware" (page 81). That topic also describes how to use the activity progress interface to view detailed information about the progress of a firmware-update operation.
2 Configuring the system Using the Configuration Wizard The Configuration Wizard helps you initially configure the system or change system configuration settings. The wizard guides you through the following steps. For each step you can view help by clicking the help icon in the wizard panel. As you complete steps they are highlighted at the bottom of the panel. If you cancel the wizard at any point, no changes are made.
CAUTION: Changing IP settings can cause management hosts to lose access to the storage system. To use DHCP to obtain IP values for network ports 1. Set the IP address source to DHCP. 2. Click Next to continue. To set static IP values for network ports 1. Determine the IP address, subnet mask, and gateway values to use for each controller. 2. Set the IP address source to manual. 3. Set the values for each controller. You must set a unique IP address for each network port. 4. Click Next to continue.
In-band management interfaces operate through the data path and can slightly reduce I/O performance. The in-band option is: • In-band SES Capability. Used for in-band monitoring of system status based on SCSI Enclosure Services (SES) data. If a service is disabled, it cannot be accessed. To allow specific users to access WBI, CLI, FTP or SMI-S, see "About user accounts" (page 15). To change management interface settings 1.
3. In the Managed Logs Notifications section, set the options: • Log Destination. The email address of the log-collection system. The email addresses must use the format user-name@domain-name and can have a maximum of 320 bytes. For example: LogCollector@MyDomain.com. • Include Logs. When the managed logs feature is enabled, this option activates “push” mode, which automatically attaches system log files to managed-logs email notifications that are sent to the log-collection system.
2. In the upper section of the panel, set the port-specific options: • IP Address. For IPv4 or IPv6, the port IP address. For corresponding ports in each controller, assign one port to one subnet and the other port to a second subnet. Ensure that each iSCSI host port in the storage system is assigned a different IP address. For example, in a system using IPv4: • Controller A port 3: 10.10.10.100 • Controller A port 4: 10.11.10.120 • Controller B port 3: 10.10.10.110 • Controller B port 4: 10.11.10.
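The subnet layout described above can be sanity-checked with Python's ipaddress module (a sketch; the /24 prefix length is an assumption, since the text does not state the subnet mask, and only the addresses shown above are used):

```python
import ipaddress

# Example port assignments from the text (IPv4; /24 prefix is assumed):
ports = {
    "A3": "10.10.10.100",
    "A4": "10.11.10.120",
    "B3": "10.10.10.110",
}

def subnet_of(ip, prefix=24):
    """Network containing this address, assuming the given prefix length."""
    return ipaddress.ip_network(f"{ip}/{prefix}", strict=False)

# Corresponding ports (port 3 on each controller) share one subnet...
assert subnet_of(ports["A3"]) == subnet_of(ports["B3"])
# ...port 4 sits on a second subnet, and every address is unique.
assert subnet_of(ports["A4"]) != subnet_of(ports["A3"])
assert len(set(ports.values())) == len(ports)
```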
• In Use. Either: • The number of user-created components that exist. • N/A. Not applicable. • Max Licensable. Either: • The number of user-created components that the maximum license supports. • N/A. Not applicable. • Expiration. One of the following: • Never. License doesn’t expire. • Number of days remaining for a temporary license. • Expired. Temporary license has expired and cannot be renewed. • N/A. No license installed.
• File Transfer Protocol (FTP). A secondary interface for installing firmware updates, downloading logs, and installing a license. • Simple Network Management Protocol (SNMP). Used for remote monitoring of the system through your network. • Service Debug. Used for technical support only. Enables or disables debug capabilities, including Telnet debug ports and privileged diagnostic user IDs. This is disabled by default.
To configure email notification for managed logs 1. In the Configuration View panel, right-click the system and select Configuration > Services > Email Notification. 2. In the main panel, set the options: • Log Destination. The email address of the log-collection system. The email addresses must use the format user-name@domain-name and can have a maximum of 320 bytes. For example: LogCollector@MyDomain.com. • Include Logs.
Configuring user accounts Adding users You can create either a general user that can access the WBI (SMU), CLI, FTP or SMI-S interfaces, or an SNMPv3 user that can access the MIB or receive trap notifications. SNMPv3 user accounts support SNMPv3 security features such as authentication and encryption. To add a general user 1. In the Configuration View panel, right-click the system and select Configuration > Users > Add New User. 2. In the main panel, set the options: • User Name.
• Password. A password is case sensitive and must contain 8–32 characters. A password cannot contain the following characters: angle brackets, backslash, comma, double quote, single quote, or space. If the password contains only printable ASCII characters then it must contain at least one uppercase character, one lowercase character, and one non-alphabetic character.
• Temperature Preference. Specifies the scale to use for temperature values: Celsius or Fahrenheit. • Auto Sign Out (minutes). Select the amount of time that the user’s session can be idle before the user is automatically signed out (2–720 minutes). The default is 30 minutes. • Locale. The user’s preferred display language, which overrides the system’s default display language.
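The password rules above can be pre-checked before submitting the form. The sketch below is a hypothetical helper, not part of the SMU, and encodes only the rules stated in the text:

```python
FORBIDDEN = set('<>\\,"\' ')  # angle brackets, backslash, comma, quotes, space

def password_ok(pw):
    """Client-side pre-check of the stated password rules; the system
    performs its own authoritative validation."""
    if not (8 <= len(pw) <= 32):
        return False
    if any(c in FORBIDDEN for c in pw):
        return False
    if all(c.isascii() and c.isprintable() for c in pw):
        # Printable-ASCII passwords need upper, lower, and non-alphabetic.
        if not any(c.isupper() for c in pw):
            return False
        if not any(c.islower() for c in pw):
            return False
        if not any(not c.isalpha() for c in pw):
            return False
    return True

assert password_ok("Passw0rd!")
assert not password_ok("short")           # under 8 characters
assert not password_ok("alllowercase1")   # no uppercase character
```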
Configuring system settings Changing the system date and time You can enter values manually for the system date and time, or you can set the system to use NTP as explained in "About the system date and time" (page 29). To use manual date and time settings 1. In the Configuration View panel, right-click the system and select Configuration > System Settings > Date, Time. The date and time options appear. 2. Set the options: • Time.
To change iSCSI host interface settings 1. In the Configuration View panel, right-click the system and select Configuration > System Settings > Host Interfaces. 2. In the Common Settings for iSCSI section of the panel, set the options that apply to all iSCSI ports: • Authentication (CHAP). Enables or disables use of Challenge Handshake Authentication Protocol. Disabled by default. NOTE: CHAP records for iSCSI login authentication must be defined if CHAP is enabled.
Changing network interface settings You can configure addressing parameters for each controller’s network port. You can set static IP values or use DHCP. In DHCP mode, network port IP address, subnet mask, and gateway values are obtained from a DHCP server if one is available. If a DHCP server is unavailable, current addressing is unchanged. You must have some means of determining what addresses have been assigned, such as the list of bindings on the DHCP server.
Configuring advanced settings Changing disk settings Configuring SMART Self-Monitoring Analysis and Reporting Technology (SMART) provides data that enables you to monitor disks and analyze why a disk failed. When SMART is enabled, the system checks for SMART events one minute after a restart and every five minutes thereafter. SMART events are recorded in the event log. To change the SMART setting 1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings > Disk.
Scheduling drive spin down for all disks For all disks that are configured to use drive spin down (DSD), you can configure a time period to suspend and resume DSD so that disks remain spun-up during hours of frequent activity. To configure DSD for a vdisk, see "Configuring drive spin down for a vdisk" (page 58). To configure DSD for available disks and global spares, see "Configuring drive spin down for available disks and global spares" (page 49).
Changing FDE general configuration Setting the passphrase You can set the FDE passphrase the system uses to write to and read from FDE-capable disks. From the passphrase, the system generates the lock key ID that is used to secure the FDE-capable disks. If the passphrase for a system is different from the passphrase associated with a disk, the system cannot access data on the disks. IMPORTANT: Be sure to record the passphrase as it cannot be recovered if lost. To set or change the passphrase 1.
3. Click Secure. Repurposing the system You can repurpose a system to erase all data on the system and return its FDE state to unsecure. CAUTION: Repurposing a system erases all disks in the system and restores the FDE state to unsecure. NOTE: If you want to repurpose more than one disk and the drive spin down (DSD) feature is enabled, disable DSD before repurposing the disks. You can re-enable it after the disks are repurposed.
To set or change the import passphrase 1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings > Full Disk Encryption and select the Set Import Lock Key ID tab. 2. In the Passphrase field, enter the passphrase associated with the displayed lock key. 3. Re-enter the passphrase. 4. Click Import Passphrase. A dialog box will confirm the passphrase was changed successfully.
Changing auto-write-through cache triggers and behaviors You can set conditions that cause (“trigger”) a controller to change the cache mode from write-back to write-through, as described in "About volume cache options" (page 21). You can also specify actions for the system to take when write-through caching is triggered. To change auto-write-through cache triggers and behaviors 1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings > Cache. 2.
TIP: If you choose to disable background vdisk scrub, you can still scrub a selected vdisk by using Tools > Media Scrub Vdisk (page 90). To configure background scrub for vdisks 1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings > System Utilities. 2. Set the options: • Either select (enable) or clear (disable) the Vdisk Scrub option. This option is enabled by default.
Configuring remote systems Adding a remote system You can add a management object to obtain information from a remote storage system. This allows a local system to track remote systems by their network-port IP addresses and cache their login credentials. The IP address can then be used in commands that need to interact with the remote system. To add a remote system 1. In the Configuration View panel, either: • Right-click the local system and select Configuration > Remote System > Add Remote System.
If a disk in the vdisk fails, a dedicated spare is automatically used to reconstruct the vdisk. A fault-tolerant vdisk other than RAID-6 becomes Critical when one disk fails. A RAID-6 vdisk becomes Degraded when one disk fails and Critical when two disks fail. After the vdisk’s parity or mirror data is completely written to the spare, the vdisk returns to fault-tolerant status.
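The fault states described above can be summarized in a small function (a simplified model; real status values also depend on spare availability, and the "Offline" result for excess failures is an assumption, not stated in the text):

```python
def vdisk_health(raid_level, failed_disks):
    """Simplified health state per the rules above: RAID-6 tolerates one
    failure as Degraded and two as Critical; other fault-tolerant levels
    become Critical after one failure."""
    if failed_disks == 0:
        return "Fault tolerant"
    if raid_level == "RAID-6":
        if failed_disks == 1:
            return "Degraded"
        if failed_disks == 2:
            return "Critical"
    elif failed_disks == 1:
        return "Critical"
    return "Offline"  # assumption: beyond what parity/mirroring can absorb

assert vdisk_health("RAID-6", 1) == "Degraded"
assert vdisk_health("RAID-5", 1) == "Critical"
```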
Configuring drive spin down for a vdisk The drive spin down (DSD) feature monitors disk activity within system enclosures and spins down inactive disks to conserve energy. For a specific vdisk, you can enable or disable DSD and set the period of inactivity after which the vdisk’s disks and dedicated spares automatically spin down. To configure a time period to suspend and resume DSD for all vdisks, see "Scheduling drive spin down for all disks" (page 50).
Configuring a snapshot Changing a snapshot’s name To change a snapshot’s name 1. In the Configuration View panel, right-click a snapshot and select Configuration > Modify Snapshot Name. 2. Enter a new name. A snapshot name is case sensitive; cannot already exist in a vdisk; cannot include a comma, double quote, angle bracket, or backslash; and can have a maximum of 32 bytes. 3. Click Modify Name. The new name appears in the Configuration View panel.
3 Provisioning the system Using the Provisioning Wizard The Provisioning Wizard helps you create a vdisk with volumes and to map the volumes to hosts. Before using this wizard, read documentation and Resource Library guidelines for your product to learn about vdisks, volumes, and mapping. Then plan the vdisks and volumes you want to create and the default mapping settings you want to use. The wizard guides you through the following steps.
• Assign to. If the system is operating in Active-Active ULP mode, optionally select a controller to be the preferred owner for the vdisk. Auto (the default) automatically assigns the owner to load-balance vdisks between controllers. If the system is operating in Single Controller mode, the Assign to setting is ignored and the system automatically load-balances vdisks in anticipation of the insertion of a second controller in the future. • RAID level. Select a RAID level for the vdisk.
Step 5: Setting the default mapping Specify default mapping settings to control whether and how hosts will be able to access the vdisk’s volumes. These settings include: • A logical unit number (LUN), used to identify a mapped volume to hosts. Both controllers share one set of LUNs. Each LUN can be assigned as the default LUN for only one volume in the storage system; for example, if LUN 5 is the default for Volume1, LUN5 cannot be the default LUN for any other volume.
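The uniqueness rule for default LUNs can be modeled with a small mapping class (an illustrative sketch, not the controller's implementation):

```python
class DefaultLunMap:
    """Track default-LUN assignments; each LUN may be the default
    for only one volume in the storage system."""
    def __init__(self):
        self._by_lun = {}

    def assign(self, lun, volume):
        owner = self._by_lun.get(lun)
        if owner is not None and owner != volume:
            raise ValueError(f"LUN {lun} is already the default for {owner}")
        self._by_lun[lun] = volume

m = DefaultLunMap()
m.assign(5, "Volume1")        # LUN 5 is now Volume1's default
try:
    m.assign(5, "Volume2")    # rejected: LUN 5 is taken
    conflict = False
except ValueError:
    conflict = True
assert conflict
```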
• Number of Sub-vdisks. For a RAID-10 or RAID-50 vdisk, optionally change the number of sub-vdisks that the vdisk should contain. • Chunk size. For RAID 5, 6, 10, or 50, optionally set the amount of contiguous data that is written to a vdisk member before moving to the next member of the vdisk. For RAID 50, this option sets the chunk size of each RAID-5 sub-vdisk. The chunk size of the RAID-50 vdisk is calculated as: configured-chunk-size x (subvdisk-members - 1).
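The RAID-50 chunk-size formula above can be expressed directly (units in KB; a hypothetical planning helper, not a product API):

```python
def raid50_vdisk_chunk_size(configured_chunk_kb, subvdisk_members):
    """Effective chunk size of a RAID-50 vdisk, per the formula above:
    configured-chunk-size x (subvdisk-members - 1). One member of each
    RAID-5 sub-vdisk holds parity, so it does not contribute."""
    return configured_chunk_kb * (subvdisk_members - 1)

# A RAID-50 vdisk built from 5-disk RAID-5 sub-vdisks with a 32-KB
# configured chunk size has an effective chunk size of 128 KB:
assert raid50_vdisk_chunk_size(32, 5) == 128
```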
To change the system’s global spares 1. In the Configuration View panel, right-click the system and select Provisioning > Manage Global Spares. The main panel shows information about available disks in the system. Existing spares are labeled GLOBAL SP. • In the Disk Sets table, the number of white slots in the Disks field shows how many spares you can add. • In the Graphical or Tabular view, only existing global spares and suitable available disks are selectable. 2.
To create a volume in a vdisk 1. In the Configuration View panel, right-click a vdisk and select Provisioning > Create Volume. 2. In the main panel, set the options: • Volume name. This field is populated with a default name, which you can change. A volume name is case sensitive; cannot already exist in a vdisk; cannot include a comma, double quote, angle bracket, or backslash; and can have a maximum of 32 bytes. • Size. Optionally change the default size, which is all free space in the vdisk.
NOTE: The system might be unable to delete a large number of volumes in a single operation. If you specified a large number of volumes for deletion, verify that all were deleted. If some of the specified volumes remain, repeat the deletion on those volumes. Changing default mapping for multiple volumes For all volumes in all vdisks or a selected vdisk, you can change the default access to those volumes by all hosts.
NOTE: You cannot map the secondary volume of a replication set. NOTE: When mapping a volume to a host using the Linux ext3 file system, specify read-write access; otherwise, the file system will be unable to mount/present/map the volume and will report an error such as “unknown partition table.” To explicitly map multiple volumes 1. In the Configuration View panel, right-click Vdisks or a vdisk and then select Provisioning > Map Volumes.
To delete the default mapping 1. Clear Map. 2. Click Apply. A message specifies whether the change succeeded or failed. 3. Click OK. Each mapping that uses the default settings is updated. Changing a volume’s explicit mappings CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes when the volumes are not in use. Be sure to unmount/unpresent/unmap a volume before changing the volume’s LUN. NOTE: You cannot map the secondary volume of a replication set.
Unmapping volumes You can delete all of the default and explicit mappings for multiple volumes. CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes when the volumes are not in use. Before changing a volume’s LUN, be sure to unmount/unpresent/unmap the volume. To unmap volumes 1. In the Configuration View panel, right-click Vdisks or a vdisk and then select Provisioning > Unmap Volumes. In the main panel, a table shows all the volumes for the selected vdisk. 2.
Creating a snapshot You can create a snapshot now or schedule the snapshot task. The first time a snapshot is created of a standard volume, the volume is converted to a master volume and a snap pool is created in the volume’s vdisk. The snap pool’s size is either 20% of the volume size or 5.37 GB, whichever is larger. The recommended minimum size for a snap pool is 50 GB. Before creating or scheduling snapshots, verify that the vdisk has enough free space to contain the snap pool.
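The initial snap-pool sizing rule above can be computed ahead of time when checking vdisk free space (a sketch; sizes in decimal GB, as the guide uses):

```python
def initial_snap_pool_size_gb(volume_size_gb):
    """Initial snap-pool size: 20% of the volume size or 5.37 GB,
    whichever is larger, per the rule above."""
    return max(volume_size_gb / 5, 5.37)

assert initial_snap_pool_size_gb(10) == 5.37    # 20% = 2 GB; floor wins
assert initial_snap_pool_size_gb(100) == 20.0   # 20% = 20 GB
```

Note that this is only the automatically created initial size; the text separately recommends a minimum of 50 GB for a snap pool.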
Deleting snapshots You can use the Delete Snapshots panel to delete standard and replication snapshots. When you delete a snapshot, all data uniquely associated with that snapshot is deleted and associated space in the snap pool is freed for use. Snapshots can be deleted in any order, irrespective of the order in which they were created. CAUTION: Deleting a snapshot removes its mappings and schedules and deletes its data.
3. Set the options: • Start Schedule. Specify a date and a time in the future to be the first instance when the scheduled task will run, and to be the starting point for any specified recurrence. • Date must use the format yyyy-mm-dd. • Time must use the format hh:mm followed by either AM, PM, or 24H (24-hour clock). For example, 13:00 24H is the same as 1:00 PM. • Recurrence. Specify the interval at which the task should run. Set the interval to at least 2 minutes. The default is 1 minute.
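The date and time formats above can be validated with Python's datetime module (a hypothetical pre-check, not part of the SMU):

```python
from datetime import datetime

def parse_schedule_start(date_str, time_str):
    """Parse a schedule start per the formats above: date as yyyy-mm-dd,
    time as hh:mm followed by AM, PM, or 24H."""
    d = datetime.strptime(date_str, "%Y-%m-%d").date()
    value, suffix = time_str.rsplit(" ", 1)
    if suffix == "24H":
        t = datetime.strptime(value, "%H:%M").time()
    elif suffix in ("AM", "PM"):
        t = datetime.strptime(f"{value} {suffix}", "%I:%M %p").time()
    else:
        raise ValueError(f"unrecognized time suffix: {suffix}")
    return datetime.combine(d, t)

# 13:00 24H and 1:00 PM denote the same time, as the text notes:
assert parse_schedule_start("2014-06-02", "13:00 24H") == \
       parse_schedule_start("2014-06-02", "1:00 PM")
```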
• With Modified Data. If the source volume is a snapshot, select this option to include the snapshot’s modified data in the copy. Otherwise, the copy will contain only the data that existed when the snapshot was created. 4. Click Copy the Volume. A confirmation dialog appears. 5. Click Yes to continue; otherwise, click No. If you clicked Yes and With Modified Data is selected and the snapshot has modified data, a second confirmation dialog appears. 6. Click Yes to continue; otherwise, click No.
Rolling back a volume You can roll back (revert) the data in a volume to the data that existed when a specified snapshot was created. You also have the option of including its modified data (data written to the snapshot since it was created). For example, you might want to take a snapshot, mount/present/map it for read/write, and then install new software on the snapshot for testing. If the software installation is successful, you can roll back the volume to the contents of the modified snapshot.
Creating a snap pool Before you can convert a standard volume to a master volume or create a master volume for snapshots, a snap pool must exist. A snap pool and its associated master volumes can be in different vdisks, but must be owned by the same controller. To create a snap pool 1. In the Configuration View panel, right-click a vdisk and select Provisioning > Create Snap Pool. 2. In the main panel set the options: • Snap Pool name. Optionally change the default name for the snap pool.
Removing hosts To remove hosts 1. Verify that the hosts you want to remove are not accessing volumes. 2. In the Configuration View panel, either: • Right-click the system or Hosts and then select Provisioning > Remove Hosts. • Right-click a host and select Provisioning > Remove Host. 3. In the main panel, select the hosts to remove. To select or clear all items, toggle the checkbox in the heading row. 4. Click Remove Host(s). A confirmation dialog appears. 5.
To create an explicit mapping 1. In the Maps for Host table, select the Default mapping to override. 2. Select Map. 3. Set the LUN and select the ports and access type. 4. Click Apply. A message specifies whether the change succeeded or failed. 5. Click OK. The mapping becomes Explicit with the new settings. To modify an explicit mapping 1. In the Maps for Host table, select the Explicit mapping to change. 2. Set the LUN and select the ports and access type. 3. Click Apply.
Modifying a schedule To modify a schedule 1. In the Configuration View panel, right-click the system or a volume or a snapshot and select Provisioning > Modify Schedule. In the main panel, a table shows each schedule. 2. In the table, select the schedule to modify. For information about schedule status values, see "Schedule properties" (page 108). 3. Set the options: • Snapshot Prefix. Optionally change the default prefix to identify snapshots created by this task.
Deleting schedules If a component has a scheduled task that you no longer want to occur, you can delete the schedule. When a component is deleted, its schedules are also deleted. To delete task schedules 1. In the Configuration View panel, right-click the system or a volume or a snapshot and select Provisioning > Delete Schedule. 2. In the main panel, select the schedule to remove. 3. Click Delete Schedule. A confirmation dialog appears. 4. Click Yes to continue; otherwise, click No.
4 Using system tools Updating firmware You can view the current versions of firmware in controller modules, expansion modules, and disks, and install new versions. To monitor the progress of a firmware-update operation by using the activity progress interface, see "Using the activity progress interface" (page 83) below. TIP: To ensure success of an online update, select a period of low I/O activity.
6. Click Install Controller-Module Firmware File. A dialog box shows firmware-update progress. The process starts by validating the firmware file: • If the file is invalid, verify that you specified the correct firmware file. If you did, try downloading it again from the source location. • If the file is valid, the process continues. CAUTION: Do not perform a power cycle or controller restart during a firmware update.
CAUTION: Do not perform a power cycle or controller restart during the firmware update. If the update is interrupted or there is a power failure, the module might become inoperative. If this occurs, contact technical support. The module’s FRU might need to be returned to the factory for reprogramming. It typically takes 4.5 minutes to update each EMP in a D2700 enclosure, or 2.5 minutes to update each EMP in an MSA 2040 or P2000 drive enclosure. Wait for a message that the code load has completed. 7.
To access the activity progress interface 1. Enable the Activity Progress Monitor service; see "Changing management interface settings" (page 40). 2. In a new tab in your web browser, enter a URL of the form: http://controller-address:8081/cgi-bin/content.cgi?mc=MC-identifier&refresh=true where: • controller-address is required and specifies the IP address of a controller network port.
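When polling several controllers, the documented URL form can be assembled programmatically. The sketch below only concatenates the pieces described above; the function name and defaults are illustrative assumptions:

```python
def activity_progress_url(controller_address, mc_identifier=None, refresh=True):
    """Build an activity-progress URL of the documented form:
    http://controller-address:8081/cgi-bin/content.cgi?mc=...&refresh=true

    controller_address is required (a controller network-port IP);
    mc_identifier and refresh are optional query parameters. This is a
    sketch inferred from the documented URL form, not an official API.
    """
    url = f"http://{controller_address}:8081/cgi-bin/content.cgi"
    params = []
    if mc_identifier:
        params.append(f"mc={mc_identifier}")
    if refresh:
        params.append("refresh=true")
    if params:
        url += "?" + "&".join(params)
    return url
```

The resulting URL can then be opened in a browser tab or fetched with any HTTP client to monitor firmware-update progress.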
Saving logs To help service personnel diagnose a system problem, you might be asked to provide system log data. Using the SMU, you can save log data to a compressed zip file.
For SAS, you can reset a port pair (either the first and second ports or the third and fourth ports). Resetting a SAS host port issues a COMINIT/COMRESET sequence and might reset other ports. To reset a host port 1. In the Configuration View panel, right-click the system and select Tools > Reset Host Port. 2. Select the port or port pair to reset. 3. Click Reset Host Port. Rescanning disk channels A rescan forces a rediscovery of disks and enclosures in the storage system.
If spares are available, and the health of the vdisk is Degraded, the vdisk will use them to start reconstruction. When reconstruction is complete, you can clear the leftover disk’s metadata. Clearing the metadata will change the disk’s health to OK and its How Used state to AVAIL, making the disk available for use in a new vdisk or as a spare.
Shutting down Shutting down the Storage Controller in a controller module ensures that a proper failover sequence is used, which includes stopping all I/O operations and writing any data in write cache to disk. If the Storage Controller in both controller modules is shut down, hosts cannot access the system’s data. Perform a shut down before removing a controller module or powering down the system.
Expanding a vdisk You can expand the capacity of a vdisk by adding disks to it, up to the maximum number of disks that the storage system supports. Host I/O to the vdisk can continue while the expansion proceeds. You can then create or expand a volume to use the new free space, which becomes available when the expansion is complete. You can expand only one vdisk at a time.
3. Click OK. The panel shows the verification’s progress. To abort vdisk verification 1. In the Configuration View panel, right-click a fault-tolerant vdisk and select Tools > Verify Vdisk. 2. Click Abort Verify Utility. A message confirms that verification has been aborted. 3. Click OK. Scrubbing a vdisk The system-level Vdisk Scrub option (see "Configuring background scrub for vdisks" (page 54)) automatically checks all vdisks for disk defects.
Examples of when quarantine can occur are: • At system power-up, a vdisk has fewer disks online than at the previous power-up. This may happen because a disk is slow to spin up or because an enclosure is not powered up. The vdisk will be automatically dequarantined if the inaccessible disks come online and the vdisk status becomes FTOL (fault tolerant and online), or if after 60 seconds the vdisk status is QTCR or QTDN.
To remove a vdisk from quarantine (if specified by the recommended action for event 172 or 485) 1. In the Configuration View panel, right-click a quarantined vdisk and select Tools > Dequarantine Vdisk. 2. Click Dequarantine Vdisk. Depending on the number of disks that remain active in the vdisk, its health might change to Degraded (RAID 6 only) and its status might change to FTOL, CRIT, or FTDN. For status descriptions, see "Vdisk properties" (page 102).
Resetting or saving historical disk-performance statistics Resetting historical disk-performance statistics You can reset (clear) all historical performance statistics for all disks. When you reset historical statistics, an event will be logged and new data samples will continue to be stored every quarter hour. To reset historical disk performance statistics 1. In the Configuration View panel, right-click the local system and select Tools > Reset or Save Disk Performance Statistics. 2.
5 Viewing system status Viewing information about the system In the Configuration View panel, right-click the system and select View > Overview. The System Overview table shows: • Health. OK, Degraded, Fault, or Unknown. • Component. System, Enclosures, Disks, Vdisks, Volumes, Schedules, Configuration Limits, Versions, Snap Pools, Snapshots, Licensed Features. • Count. • Capacity. • Storage Space. For descriptions of storage-space color codes, see "About storage-space color codes" (page 29).
• FDE Security Status • Unsecured. The system has not been secured with a passphrase. • Secured. The system has been secured with a passphrase. • Secured, Lock Ready. The system has been secured, and lock keys have been cleared. The system will become locked after the next power cycle. • Secured, Locked. The system disks are locked. Data cannot be accessed until the correct lock key is set. The System Redundancy table shows: • Controller Redundancy Mode. • Controller Redundancy Status.
• How Used Two values are listed together: the first is How Used and the second is Current Job. For example, for a disk used in a vdisk (VDISK) that is being scrubbed (VRSC), VDISK VRSC is displayed. • How Used • AVAIL: Available. • FAILED: The disk is unusable and must be replaced. Reasons for this status include: excessive media errors; SMART error; disk hardware failure; unsupported disk. • GLOBAL SP: Global spare. • LEFTOVR: Leftover.
• Free. Amount of free space remaining on the vdisk. • RAID. RAID level. • Status. • CRIT: Critical. The vdisk is online but isn't fault tolerant because some of its disks are down. • FTDN: Fault tolerant with down disks. The vdisk is online and fault tolerant, but some of its disks are down. • FTOL: Fault tolerant and online. • OFFL: Offline. Either the vdisk is using offline initialization, or its disks are down and data may be lost. • QTCR: Quarantined critical.
Snapshot properties When you select Snapshots in the System Overview table, a table shows each snapshot’s name; serial number; source volume; snap-pool name; amounts of snap data, unique data, and shared data; and vdisk name. • Snap data is the total amount of data associated with the specific snapshot (data copied from a source volume to a snapshot and data written directly to a snapshot). • Unique data is the amount of data that has been written to the snapshot since the last snapshot was taken.
Version properties When you select Versions in the System Overview table, a table shows the versions of firmware and hardware in each controller module. • Storage Controller CPU Type. • Bundle Version. • Build Date. • Storage Controller Code Version. • Storage Controller Code Baselevel. • Memory Controller FPGA Code Version. • Storage Controller Loader Code Version. • CAPI Version. • Management Controller Code Version. • Management Controller Loader Code Version. • Expander Controller Code Version.
When reviewing events, do the following: 1. For any critical, error, or warning events, click the message to view additional information and recommended actions. This information also appears in the Event Descriptions Reference Guide. Identify the primary events and any that might be the cause of the primary event. For example, an over-temperature event could cause a disk failure. 2. View the event log and locate other critical/error/warning events in the sequence for the controller that reported the event.
• STOP: The vdisk is stopped. • UNKN: Unknown. • UP: Up. The vdisk is online and does not have fault-tolerant attributes. • Disk Type. • SAS: Enterprise SAS. • SAS MDL: Midline SAS. • sSAS: SAS SSD. • Preferred Owner. Controller that owns the vdisk and its volumes during normal operation. • Current Owner. Either the preferred owner during normal operation or the partner controller when the preferred owner is offline. • Disks. Quantity of disks in the vdisk. • Spares.
• Serial Number. Vdisk serial number. • RAID. RAID level of the vdisk and all of its volumes. • Disks. Quantity of disks in the vdisk. • Spares. Quantity of dedicated spares in the vdisk. • Chunk Size. • For RAID levels except NRAID, RAID 1, and RAID 50, the configured chunk size for the vdisk. • For NRAID and RAID 1, chunk size has no meaning and is therefore shown as not applicable (N/A). • For RAID 50, the vdisk chunk size calculated as: configured-chunk-size x (subvdisk-members - 1).
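The RAID-dependent chunk-size display rules above can be summarized in a small function. This is a hedged sketch (function and parameter names are illustrative; sizes are in KB):

```python
def displayed_chunk_size(raid_level, configured_chunk_kb, subvdisk_members=None):
    """Return the chunk size shown for a vdisk, per the rules above:
    - NRAID and RAID 1: chunk size has no meaning, shown as N/A (None here).
    - RAID 50: configured-chunk-size x (subvdisk-members - 1).
    - Other RAID levels: the configured chunk size.
    """
    if raid_level in ("NRAID", "RAID1"):
        return None  # displayed as N/A
    if raid_level == "RAID50":
        return configured_chunk_kb * (subvdisk_members - 1)
    return configured_chunk_kb
```

For example, a RAID 50 vdisk with a 64-KB configured chunk size and five disks per sub-vdisk would display 64 x (5 - 1) = 256 KB.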
Vdisk performance When you select Vdisk in the Vdisk Overview table and click the Performance tab, the Performance Statistics panel shows three graphs of historical performance statistics for the vdisk: Data Transferred, Data Throughput, and Average Response Time. Data samples are taken every quarter hour and the graphs represent up to 50 samples. To specify a time range of samples to display, set the start and end values and click Update.
Disk properties When you select Disks in the Vdisk Overview table, a Disk Sets table and enclosure view appear. The Disk Sets table shows: • Total Space. Total storage space in the vdisk, followed by a color-coded measure of how the space is used. • Type. For RAID 10 or RAID 50, the sub-vdisk that the disk is in; for other RAID levels, the disk’s RAID level; or SPARE. • Disk Type. • SAS: Enterprise SAS. • SAS MDL: Midline SAS. • sSAS: SAS SSD. • Disks. Quantity of disks in the vdisk or sub-vdisk. • Size.
Volume properties When you select Volumes in the Vdisk Overview table, the Volumes table shows: • Name. Volume name. • Serial Number. Volume serial number. • Size. Volume size. • Vdisk Name. The name of the vdisk containing the volume. Snap-pool properties When you select Snap Pools in the Vdisk Overview table, the Snap Pools table shows: • The snap pool’s name, serial number, size, and free space. • The quantity of master volumes and snapshots associated with the snap pool.
• Status. Replication volume status: • Initializing: The initial (full) replication to the volume is in progress. • Online: The volume is online and is consistent with the last replicated image. • Inconsistent: The volume is online but is in an inconsistent state. A full replication is required to initialize it. • Replicating: The volume is online and replication is in progress. • Replicate-delay: The volume is online but the in-progress replication has been temporarily delayed; a retry is occurring.
Schedule properties If any schedules exist for this volume, when you select the Schedules component, the Schedules table shows each schedule’s name, specification, status, next run time, task type, task status, and task state. For the selected schedule, two tables appear. The Schedule Details table shows: • Schedule Name. Schedule name. • Schedule Specification. The schedule’s start time and recurrence or constraint settings. • Status. • Uninitialized: Schedule is not yet ready to run.
Replication addresses If any remote port addresses are associated with this volume, when you select the Replication Addresses component, the Replication Addresses table shows: • Connected Ports. • For a remote primary or secondary volume, this field shows the IDs of up to two host ports in the local system that are connected to the remote system. If two ports are connected but only one is shown, this indicates that a problem is preventing half the available bandwidth from being used.
• Shared Data. The amount of data that is potentially shared with other snapshots and the associated amount of space that will be freed if the snapshot is deleted. This represents the amount of data written directly to the snapshot. It also includes data copied from the source volume to the storage area for the oldest snapshot, since that snapshot does not share data with any other snapshot.
• Prefix. • Count. • Last Created. Viewing information about a snap pool In the Configuration View panel, right-click a snap pool and select View > Overview. The Snap Pool Overview table shows: • The capacity and space usage of the snap pool • The quantity of volumes using the snap pool • The quantity of snapshots in the snap pool For descriptions of storage-space color codes, see "About storage-space color codes" (page 29).
NOTE: The policies Delete Oldest Snapshot and Delete Snapshots do not apply business logic to the delete decision and may delete snapshots that are mounted/presented/mapped or modified. You may set retention priorities for a snap pool as a way of suggesting that some snapshots are more important than others, but these priorities do not ensure any specific snapshot is protected. For details about setting snap-pool thresholds and policies, see the CLI Reference Guide.
Host properties When you select Host in the Host Overview table, the Properties for Host table shows: • Host ID. WWPN or IQN. • Name. User-defined nickname for the host. • Discovered. If the host was discovered and its entry was automatically created, Yes. If the host entry was manually created, No. • Mapped. If volumes are mapped to the host, Yes; otherwise, No. • Profile. • Standard: Default profile. • HP-UX: The host uses Flat Space Addressing. • Host Type.
• Vendor. • Model. • Number of Disks. The number of disks installed in the enclosure. • Enclosure WWN. • Midplane Serial Number. • Part Number. • Manufacturing Date. • Manufacturing Location. • Revision. • EMP A Revision. Firmware revision of the Enclosure Management Processor in controller module A’s Expander Controller. • EMP B Revision. Firmware revision of the Enclosure Management Processor in controller module B’s Expander Controller. • EMP A Bus ID. • EMP B Bus ID. • EMP A Target ID.
• How Used. • AVAIL: Available. • FAILED: The disk is unusable and must be replaced. Reasons for this status include: excessive media errors; SMART error; disk hardware failure; unsupported disk. • GLOBAL SP: Global spare. • LEFTOVR: Leftover. • UNUSABLE: The disk cannot be used in a vdisk because the system is secured and the disk is not FDE-capable, or because the disk is locked to data access. • VDISK: Used in a vdisk. • VDISK SP: Spare assigned to a vdisk. • Type. • SAS: Enterprise SAS.
• FDE State. • Not Secured: The disk is not secured. • Not FDE-Capable: The disk is not FDE-capable. • Secured, Unlocked: The system is secured and the disk is unlocked. • Secured, Locked: The system is secured and the disk is locked to data access, preventing its use. • FDE Protocol Failure: A temporary state that can occur while the system is securing the disk.
To view summary performance data for a vdisk, use the Vdisk Overview panel as described on page 102. To view live (non-historical) performance statistics for one or more disks, in the CLI use the show disk-statistics command. Power supply properties When you select a power supply, a table shows: • Health. OK, Degraded, Fault, or Unknown. • Health Reason. If Health is not OK, this field shows the reason for the health state. • Health Recommendation.
Controller module properties When you select a controller module, a table shows: • Health. OK, Fault, or Unknown. • Health Reason. If Health is not OK, this field shows the reason for the health state. • Health Recommendation. If Health is not OK, this field shows recommended actions to take to resolve the health issue. • Status. • Controller ID. • Description. • CPLD Version. • Storage Controller Code Version. • Model. • Storage Controller CPU Type. • Serial Number. • Part Number. • Position. • Hardware Version.
• Not Present: The controller module is not installed or is down. • Disconnected: Either no I/O link is detected or the port is not cabled. • Ports. The port ID, which is the controller ID and port number. • Media. • FC(L): Fibre Channel-Arbitrated Loop (public or private). • FC(P): Fibre Channel Point-to-Point. • FC(-): Fibre Channel disconnected. • Target ID. The port WWN. • Configured Speed. Auto, 4Gb, 8Gb, or 16Gb (Gbit/s). • Actual Speed. Actual link speed in Gbit/s, or blank if not applicable.
• Gateway. For IPv4, gateway for assigned port IP address. • Default Router. For IPv6, default router for assigned port IP address. • Link-Local Address. For IPv6, the link-local address that is automatically generated from the MAC address and assigned to the port. • SFP Status. • OK • Not present: No SFP is inserted in this port. • Not compatible: The SFP in this port is not qualified for use in this system. When this condition is detected, event 464 is logged.
Controller module: expansion port properties When you select an expansion (Out) port, a table shows: • Health. OK, Degraded, Fault, N/A, or Unknown. • Health Reason. If Health is not OK, this field shows the reason for the health state. • Health Recommendation. If Health is not OK, this field shows recommended actions to take to resolve the health issue. • Status. • Name. Controller module: CompactFlash properties When you select a CompactFlash card in the Rear Tabular view, a table shows: • Health.
I/O module: In port properties When you select an In port, a table shows: • Health. OK, Degraded, Fault, N/A, or Unknown. • Health Reason. If Health is not OK, this field shows the reason for the health state. • Health Recommendation. If Health is not OK, this field shows recommended actions to take to resolve the health issue. • Status. • Name. I/O module: Out port properties When you select an Out port, a table shows: • Health. OK, Degraded, Fault, N/A, or Unknown. • Health Reason.
6 Using Remote Snap to replicate volumes About the Remote Snap replication feature Remote Snap is a licensed feature for disaster recovery. This feature performs asynchronous (batch) replication of block-level data from a volume on a local storage system to a volume that can be on the same system or on a second, independent system. This second system can be located at the same site as the first system or at a different site.
Figure 4 Intersite and intrasite replication sets Remote replication uses snapshot functionality to track the data to be replicated and to determine the differences in data updated on the master volume, minimizing the amount of data to be transferred. In order to perform a replication, a snapshot of the primary volume is taken, creating a point-in-time image of the data.
NOTE: Snapshot operations are I/O-intensive. Every write to a unique location in a master volume after a snapshot is taken will cause an internal read and write operation to occur in order to preserve the snapshot data. If you intend to create snapshots of, create volume copies of, or replicate volumes in a vdisk, ensure that the vdisk contains no more than four master volumes, snap pools, or both.
• Delta replications: Delta data is the “list” of 64-KB blocks that differs between the last snapshot replicated and the next snapshot to be replicated. This delta data is then replicated from the replication snapshot on the primary volume to the secondary volume. Once the initial replication has completed, all future replications for that replication set will be delta replications so long as sync points are maintained. Action 5 is a delta replication.
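The delta mechanism can be made concrete with a block-by-block comparison of two snapshot images. This is a conceptual sketch of how 64-KB deltas are identified, not the firmware's actual implementation:

```python
BLOCK_SIZE = 64 * 1024  # the 64-KB granularity described above

def delta_blocks(last_replicated, next_snapshot):
    """Return the indices of 64-KB blocks that differ between the last
    snapshot replicated and the next snapshot to be replicated (given
    here as byte strings). Only these blocks would need to be
    transferred in a delta replication; identical blocks are skipped.
    """
    deltas = []
    n = max(len(last_replicated), len(next_snapshot))
    for i in range(0, n, BLOCK_SIZE):
        if last_replicated[i:i + BLOCK_SIZE] != next_snapshot[i:i + BLOCK_SIZE]:
            deltas.append(i // BLOCK_SIZE)
    return deltas
```

In practice the system tracks changed blocks rather than re-scanning full images, but the result is the same: the transfer size is proportional to the changed data, not the volume size.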
• The reserve size is calculated as follows: • If the primary volume and the snap pool are each less than 500 GB, the reserve will be the same size as the primary volume. • If the primary volume is larger than 500 GB, the reserve size will be the maximum, 500 GB. • If the snap pool is larger than 500 GB, the reserve will be the same size as the snap pool.
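The reserve-size rules can be expressed as a small function. Because the list does not state which rule takes precedence when both the primary volume and the snap pool exceed 500 GB, the ordering below is an assumption:

```python
MAX_RESERVE_GB = 500  # the documented 500-GB threshold

def reserve_size_gb(primary_gb, snap_pool_gb):
    """Apply the reserve-size rules above, in the order listed.
    How the rules interact when both sizes exceed 500 GB is not
    spelled out in the guide, so the precedence here is an assumption.
    """
    if primary_gb < MAX_RESERVE_GB and snap_pool_gb < MAX_RESERVE_GB:
        return primary_gb           # both under 500 GB: match the primary
    if primary_gb > MAX_RESERVE_GB:
        return MAX_RESERVE_GB       # primary over 500 GB: cap at 500 GB
    if snap_pool_gb > MAX_RESERVE_GB:
        return snap_pool_gb         # snap pool over 500 GB: match the snap pool
    return MAX_RESERVE_GB           # boundary case: exactly 500 GB
```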
3. Map the new primary volume to hosts, as was the original primary volume. Figure 6 Example of primary-volume failure If the original primary volume becomes accessible, you can set it to be the primary volume again as described in the following process overview: 1. Take a snapshot of the original primary volume. This preserves the volume’s current data state for later comparison with the new primary volume. 2. Remove the volume’s mappings. 3. Set the original primary volume to be a secondary volume. 4.
• Suspending (page 134), resuming (page 134), or aborting (page 134) a replication • "Exporting a replication image to a snapshot" (page 137) • "Changing the primary volume for a replication set" (page 137) • "Viewing replication properties, addresses, and images for a volume" (page 139) • "Viewing information about a replication image" (page 141) • "Viewing information about a remote primary or secondary volume" (page 140) Using the Replication Setup Wizard If the system is licensed to use remote replicat
Step 3: Selecting the replication mode Select the replication mode, which specifies whether the replication destination is in the local system or a remote system. If you want to replicate to a remote system that hasn’t already been added to the local system, you can add it. Local replication is allowed only if the primary and secondary volumes are in vdisks owned by different controllers. To replicate within the local system 1. Select Local Replication. 2.
Replicating a volume If the system is licensed to use remote replication, you can create a replication set that uses the selected volume as the primary volume, and to immediately start or schedule replication. The primary volume can be a standard volume or a master volume. To create a replication set you must select a secondary system and a secondary vdisk or volume. The secondary system can be the local system, or a remote system added by using the Add Remote System panel.
3. Select the link type used between the two systems. 4. If you want to start replication now: a. Select the Initiate Replication and Now options. b. Optionally change the default replication image name. A name is case sensitive; cannot already exist in a vdisk; cannot include a comma, double quote, angle bracket, or backslash; and can have a maximum of 32 bytes. c. Continue with step 7. 5. If you want to schedule replication: a. Select the Initiate Replication and Scheduled options. b.
Replicating a snapshot If the system is licensed to use remote replication, you can replicate an existing primary snapshot that is mapped to a host. You can only replicate a snapshot of a volume that is already part of a replication set. If the selected snapshot hasn’t already been replicated to a secondary volume, each replication volume in the replication set is requested to replicate the snapshot data. Only snapshot preserved data is replicated; snapshot modified data is not replicated.
Suspending replication If the system is licensed to use remote replication, you can suspend the current replication operation for a selected replication volume. You must perform this task on the system that owns the secondary volume. Once suspended, the replication must be resumed or aborted to allow the replication volume to resume normal operation. To suspend replication 1. In the Configuration View panel, right-click a local replication volume and select Provisioning > Suspend Replication. 2.
NOTE: • It is recommended that the vdisk that you are moving contain only secondary volumes and their snap pools. You are allowed to move other volumes along with secondary volumes and their snap pools, but be sure that you are doing so intentionally. • If you intend to move a vdisk’s enclosure and you want to allow I/O to continue to the other enclosures, it is best if that enclosure is at the end of the chain of connected enclosures.
3. Click Yes to continue; otherwise, click No. If you clicked Yes, the stop operation begins. A message indicates whether the task succeeded or failed. If the stop operation succeeds, the vdisk’s health is shown as Unknown, its status is shown as STOP, and its subcomponents are no longer displayed in the Configuration View panel. 4. If the stop operation succeeded for the secondary volume’s vdisk and for its snap pool’s vdisk (if applicable), you can move the disks into the remote system.
To reattach a secondary volume 1. In the Configuration View panel, right-click the secondary volume and select Provisioning > Reattach Replication Volume. 2. In the main panel, click Reattach Replication Volume. A message indicates whether the task succeeded or failed. • If the task succeeds, the secondary volume’s status changes to “Establishing proxy” while it is establishing the connection to the remote (primary) system in preparation for replication; then the status changes to Online.
To change the secondary volume of a replication set to be its primary volume 1. On the secondary system, in the Configuration View panel, right-click the secondary volume and select Provisioning > Set Replication Primary Volume. 2. In the main panel, select the secondary volume in the list. 3. Click Set Replication Primary Volume. In the Configuration View panel, the volume’s designation changes from Secondary Volume to Primary Volume. NOTE: The offline primary volume remains designated a Primary Volume.
Viewing replication properties, addresses, and images for a volume In the Configuration View panel, right-click a volume and select View > Overview.
• Not Attempted. Communication has not been attempted to the remote volume. • Online. The volumes in the replication set have a valid connection but communication is not currently active. • Active. Communication is currently active to the remote volume. • Offline. No connection is available to the remote system. • Connection Time. Date and time of the last communication with the remote volume, or N/A.
Replication image properties When you select the Replication Images component a table shows replication image details including the image serial number and name, snapshot serial number and name, and image creation date/time. Viewing information about a replication image In the Configuration View panel, right-click a replication image and select View > Overview.
7 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
Typographic conventions Table 12 Document conventions Convention Element Blue text: Table 2 (page 6) Cross-reference links Blue, bold, underlined text Email addresses Blue, underlined text: http://www.hp.
8 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A SNMP reference This appendix describes the Simple Network Management Protocol (SNMP) capabilities that MSA 2040 storage systems support. This includes standard MIB-II, the FibreAlliance SNMP Management Information Base (MIB) version 2.2 objects, and enterprise traps. MSA 2040 storage systems can report their status through SNMP. SNMP provides basic discovery using MIB-II, more detailed status with the FA MIB 2.2, and asynchronous notification using enterprise traps.
FA MIB 2.2 SNMP behavior The FA MIB 2.2 objects are in compliance with the FibreAlliance MIB v2.2 Specification (FA MIB2.2 Spec). For a full description of this MIB, go to: www.emc.com/microsites/fibrealliance. FA MIB 2.2 is a subset of FA MIB 4.0, which is included with HP System Insight Manager (SIM) and other products. The differences are described in "FA MIB 2.2 and 4.0 differences" (page 158). FA MIB 2.
Table 13 FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitTable Includes the following objects as specified by the FA MIB2.
Table 13 FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitContact Settable: Contact information for this connectivity unit Default: Uninitialized Contact connUnitLocation Settable: Location information for this connectivity unit Default: Uninitialized Location connUnitEventFilter Defines the event severity that will be logged by this connectivity unit. Settable only through SMU.
Table 13 FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitPortTable Includes the following objects as specified by the FA MIB2.
Table 13 FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitEventTable Includes the following objects as specified by the FA MIB2.
Table 13 FA MIB 2.
Table 14 connUnitRevsTable index and description values (continued) connUnitRevsIndex connUnitRevsDescription 13 Firmware Revision for Expander (Controller A) 14 Firmware Revision for Expander (Controller B) 15 Hardware Revision for Controller A 16 Hardware Revision for Controller B External details for connUnitSensorTable Table 15 connUnitSensorTable index, name, type, and characteristic values connUnitSensorIndex connUnitSensorName connUnitSensorType connUnitSensor Characteristic 1 CPU Temp
Table 15 connUnitSensorTable index, name, type, and characteristic values (continued) connUnitSensorIndex connUnitSensorName connUnitSensorType connUnitSensor Characteristic 29 Power Supply 1 Voltage, 3.3V power-supply(5) power(9) 30 Power Supply 2 Voltage, 12V power-supply(5) power(9) 31 Power Supply 2 Voltage, 5V power-supply(5) power(9) 32 Power Supply 2 Voltage, 3.
Enterprise trap MIB
The following pages show the source for the HP enterprise traps MIB, msa2000traps.mib. This MIB defines the content of the SNMP traps that MSA 2040 storage systems generate.

-- ---------------------------------------------------------------------------
-- MSA2000 Array MIB for SNMP Traps
-- $Revision: 11692 $
-- Copyright (c) 2008 Hewlett-Packard Development Company, L.P.
-- Copyright (c) 2005-2008 Dot Hill Systems Corp.
-- Confidential computer software.
--#SUMMARY "Informational storage event # %d, type %d, description: %s"
--#ARGUMENTS {0,1,2}
--#SEVERITY INFORMATIONAL
--#TIMEINDEX 6
::= 3001

msaEventWarningTrap TRAP-TYPE
    ENTERPRISE hpMSA
    VARIABLES { connUnitEventId, connUnitEventType, connUnitEventDescr }
    DESCRIPTION
        "An event has been generated by the storage array.
FA MIB 2.2 and 4.0 differences
FA MIB 2.2 is a subset of FA MIB 4.0. Therefore, SNMP elements implemented in MSA 2040 systems can be accessed by a management application that uses FA MIB 4.0. The following tables are not implemented in 2.2:
• connUnitServiceScalars
• connUnitServiceTables
• connUnitZoneTable
• connUnitZoningAliasTable
• connUnitSnsTable
• connUnitPlatformTable
The following variables are not implemented in 2.2:
B Using FTP Although SMU is the preferred interface for downloading log data and historical disk-performance statistics, updating firmware, installing a license, and installing a security certificate, you can also use FTP to do these tasks. IMPORTANT: Do not attempt to do more than one of the operations in this appendix at the same time. They can interfere with each other and the operations may fail.
NOTE: You must uncompress a zip file before you can view the files it contains. To examine diagnostic data, first view store_yyyy_mm_dd__hh_mm_ss.logs. Transferring log data to a log-collection system If the log-management feature is configured in pull mode, a log-collection system can access the storage system’s FTP interface and use the get managed-logs command to retrieve untransferred data from a system log file.
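The diagnostic-data filename pattern above (store_yyyy_mm_dd__hh_mm_ss.logs) embeds the capture time. As a sketch of how a log-collection script might recover that timestamp when filing retrieved logs, here is a small Python helper; the parsing is based only on the filename pattern shown above.

```python
import re
from datetime import datetime

def parse_store_timestamp(filename: str) -> datetime:
    """Extract the capture time from a store_yyyy_mm_dd__hh_mm_ss.logs name."""
    m = re.fullmatch(
        r"store_(\d{4})_(\d{2})_(\d{2})__(\d{2})_(\d{2})_(\d{2})\.logs",
        filename,
    )
    if m is None:
        raise ValueError(f"not a store log filename: {filename!r}")
    return datetime(*(int(g) for g in m.groups()))

print(parse_store_timestamp("store_2014_03_01__10_22_50.logs"))
# prints: 2014-03-01 10:22:50
```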
Downloading historical disk-performance statistics You can access the storage system’s FTP interface and use the get perf command to download historical disk-performance statistics for all disks in the storage system. This command downloads the data in CSV format to a file, for import into a spreadsheet or other third-party application. The number of data samples downloaded is fixed at 100 to limit the size of the data file to be generated and transferred.
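Because the statistics arrive as CSV capped at 100 samples, a download script can parse and sanity-check them with Python's standard csv module before handing them to a spreadsheet or other tool. The column names in this sketch are hypothetical; only the CSV format and the fixed 100-sample limit come from the text above.

```python
import csv
import io

def load_perf_samples(csv_text: str) -> list:
    """Parse downloaded disk-performance CSV rows into dicts.

    The array limits each download to 100 data samples, so anything
    larger indicates a malformed or concatenated file.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    if len(rows) > 100:
        raise ValueError("more samples than the documented fixed limit of 100")
    return rows

# Hypothetical column names, for illustration only.
sample = "disk,reads,writes\n1.1,120,45\n1.2,98,60\n"
print(load_perf_samples(sample))
```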
Updating firmware You can update the versions of firmware in controller modules, expansion modules (in drive enclosures), and disks. TIP: To ensure success of an online update, select a period of low I/O activity. This helps the update complete as quickly as possible and avoids disruptions to hosts and applications due to timeouts. Attempting to update a storage system that is processing a large, I/O-intensive batch job will likely cause hosts to lose connectivity with the storage system.
6. Enter: ftp controller-network-address
   For example: ftp 10.1.0.9
7. Log in as an FTP user.
8. Enter: put firmware-file flash
   For example: put T230R01-01.bin flash

CAUTION: Do not perform a power cycle or controller restart during a firmware update. If the update is interrupted or there is a power failure, the module might become inoperative. If this occurs, contact technical support. The module might need to be returned to the factory for reprogramming.
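Steps 6 through 8 can also be scripted, for example with Python's standard ftplib. The host address, credentials, and firmware filename below are placeholders; the special flash target name comes from step 8, and it is an assumption that the array's FTP service accepts a scripted STOR exactly as it accepts the interactive put.

```python
from ftplib import FTP

def flash_command(firmware_file: str) -> str:
    """The interactive FTP command from step 8: put <firmware-file> flash."""
    return f"put {firmware_file} flash"

def update_controller_firmware(host: str, user: str, password: str,
                               firmware_path: str) -> None:
    """Sketch of steps 6-8: connect, log in, upload to the 'flash' target.

    Assumption: the array accepts a standard STOR to the pseudo-file
    name 'flash', mirroring the interactive 'put ... flash' command.
    Do not power cycle or restart the controller while this runs.
    """
    with FTP(host) as ftp:                    # step 6, e.g. ftp 10.1.0.9
        ftp.login(user, password)             # step 7: log in as an FTP user
        with open(firmware_path, "rb") as f:
            ftp.storbinary("STOR flash", f)   # step 8: put firmware-file flash

print(flash_command("T230R01-01.bin"))        # prints: put T230R01-01.bin flash
```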
Updating expansion-module firmware A drive enclosure can contain one or two expansion modules. Each expansion module contains an enclosure management processor (EMP). All modules of the same product model should run the same firmware version. You can update the firmware in each expansion-module EMP by loading a firmware file obtained from the HP web download site, http://www.hp.com/support.
It typically takes 4.5 minutes to update each EMP in a D2700 enclosure, or 2.5 minutes to update each EMP in an MSA 2040 or P2000 drive enclosure. Wait for a message that the code load has completed. NOTE: If the update fails, verify that you specified the correct firmware file and try the update a second time. If it fails again, contact technical support. 9. If you are updating specific expansion modules, repeat step 8 for each remaining expansion module that needs to be updated. 10. Quit the FTP session.
4. Either:
   • To update all disks of the type that the firmware applies to, enter: put firmware-file disk
   • To update specific disks, enter: put firmware-file disk:enclosure-ID:slot-number
     For example: put firmware-file disk:1:11

CAUTION: Do not power cycle enclosures or restart a controller during the firmware update. If the update is interrupted or there is a power failure, the disk might become inoperative. If this occurs, contact technical support.
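The two target forms in step 4 differ only in whether an enclosure ID and slot number are appended. A small helper can build the correct target string; the syntax is exactly the one shown in the step above.

```python
from typing import Optional

def disk_target(enclosure_id: Optional[int] = None,
                slot_number: Optional[int] = None) -> str:
    """Build the put target from step 4: 'disk' updates all applicable
    disks; 'disk:<enclosure-ID>:<slot-number>' updates one disk."""
    if enclosure_id is None and slot_number is None:
        return "disk"
    if enclosure_id is None or slot_number is None:
        raise ValueError("specify both enclosure ID and slot number, or neither")
    return f"disk:{enclosure_id}:{slot_number}"

print(disk_target())        # builds "disk"
print(disk_target(1, 11))   # builds "disk:1:11"
```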
Installing a security certificate The storage system supports use of unique certificates for secure data communications, to authenticate that the expected storage systems are being managed. Use of authentication certificates applies to the HTTPS protocol, which is used by the web server in each controller module. As an alternative to using the CLI to create a security certificate on the storage system, you can use FTP to install a custom certificate on the system.
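Before uploading a custom certificate, an installation script might first sanity-check that the file at least resembles a PEM certificate. This check is a generic sketch, not an HP-documented requirement, and the exact FTP target name for certificate installation is not shown in this excerpt.

```python
def looks_like_pem_certificate(data: bytes) -> bool:
    """Rough pre-upload check that a file contains a PEM certificate block.

    This only inspects the PEM markers; it does not validate the
    certificate itself or guarantee the array will accept it.
    """
    text = data.decode("ascii", errors="ignore")
    return ("-----BEGIN CERTIFICATE-----" in text
            and "-----END CERTIFICATE-----" in text)
```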
C Using SMI-S This appendix provides information for network administrators who are managing the storage system from a storage management application through the Storage Management Initiative Specification (SMI-S). SMI-S is a Storage Networking Industry Association (SNIA) standard that enables interoperable management for storage networks and storage devices.
• Software Inventory subprofile • Block Server Performance subprofile • Copy Services subprofile • Job Control subprofile • Storage Enclosure subprofile (if expansion enclosures are attached) • Disk Sparing subprofile • Object Manager Adapter subprofile The embedded SMI-S provider supports: • HTTPS using SSL encryption on the default port 5989, or standard HTTP on the default port 5988. Both ports cannot be enabled at the same time.
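The port rules above can be captured in a tiny helper for client scripts: HTTPS on the default port 5989 or plain HTTP on the default port 5988, never both at once. The default port numbers come from the list above; the helper itself is purely illustrative.

```python
def smis_url(host: str, secure: bool = True) -> str:
    """Build the SMI-S endpoint URL for the embedded provider.

    The array listens on the default secure port 5989 (HTTPS/SSL) or
    the default unsecure port 5988 (HTTP); only one can be enabled.
    """
    return f"https://{host}:5989" if secure else f"http://{host}:5988"

print(smis_url("10.1.0.9"))                 # prints: https://10.1.0.9:5989
print(smis_url("10.1.0.9", secure=False))   # prints: http://10.1.0.9:5988
```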
The embedded CIMOM can be configured either to listen to secure SMI-S queries from the clients on port 5989 and require credentials to be provided for all queries, or to listen to unsecure SMI-S queries from the clients on port 5988. This provider implementation complies with the SNIA SMI-S specification version 1.5.0. NOTE: Port 5989 and port 5988 cannot be enabled at the same time. The namespace details are given below.
Table 17 Supported SMI-S profiles (continued)
Access Points subprofile: Provides addresses of remote access points for management services.
Fan profile: Specializes the DMTF Fan profile by adding indications.
Power Supply profile: Specializes the DMTF Power Supply profile by adding indications.
CIM Alerts The implementation of alert indications allows a subscribing CIM client to receive events such as FC cable connects, Power Supply events, Fan events, Temperature Sensor events and Disk Drive events. If the storage system’s SMI-S interface is enabled, the system will send events as indications to SMI-S clients so that SMI-S clients can monitor system performance. For information about enabling the SMI-S interface, see "SMI-S configuration" (page 175).
Table 19 Life cycle indications (continued)

Fan (Both):
SELECT * FROM CIM_InstCreation WHERE SourceInstance ISA CIM_Fan
Send life cycle indication when a fan is powered on or off.

Job Control:
SELECT * FROM CIM_InstModification WHERE SourceInstance ISA CIM_ConcreteJob AND SourceInstance.OperationalStatus=17 AND SourceInstance.
SMI-S configuration In the default SMI-S configuration: • The secure SMI-S protocol is enabled, which is the recommended protocol for SMI-S. • The SMI-S interface is enabled for the manage user.
Troubleshooting
Table 21 provides solutions to common SMI-S problems.

Table 21 Troubleshooting
Problem: Unable to connect to the embedded SMI-S Array provider.
Cause: SMI-S protocol is not enabled.
Solution: Log in to the array as manage and type: set protocol smis enabled

Problem: HTTP Error (Invalid username/password or 401 Unauthorized).
Cause: User preferences are configurable for each user on the storage system.
D Administering a log-collection system A log-collection system receives log data that is incrementally transferred from a storage system whose managed logs feature is enabled, and is used to integrate that data for display and analysis. For information about the managed logs feature, see "About managed logs" (page 31). Over time, a log-collection system can receive many log files from one or more storage systems. The administrator organizes and stores these log files on the log-collection system.
Storing log files Store log files hierarchically by storage-system name, log-file type, and date/time. Then, if historical analysis is required, the appropriate log-file segments can easily be located and concatenated into a complete record.
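One way to sketch the recommended hierarchy in Python: build each transferred file's destination from the storage-system name, log-file type, and date/time. The root directory and the timestamp-based filename are assumptions for illustration; only the three-level hierarchy comes from the recommendation above.

```python
from datetime import datetime
from pathlib import PurePosixPath

def log_file_path(root: str, system_name: str, log_type: str,
                  when: datetime) -> PurePosixPath:
    """Place a transferred log file under <root>/<system>/<type>/,
    named by its date/time, per the layout recommended above."""
    stamp = when.strftime("%Y_%m_%d__%H_%M_%S")
    return PurePosixPath(root) / system_name / log_type / f"{stamp}.logs"

print(log_file_path("/var/msa-logs", "msa2040-1", "crash",
                    datetime(2014, 3, 1, 10, 22, 50)))
# prints: /var/msa-logs/msa2040-1/crash/2014_03_01__10_22_50.logs
```

With files laid out this way, all segments for one system and log type sort chronologically, so concatenating them into a complete record is a simple ordered merge.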
Glossary

2U12: An enclosure that is two rack units in height and can contain 12 disks.
2U24: An enclosure that is two rack units in height and can contain 24 disks.
Additional Sense Code/Additional Sense Code Qualifier: See ASC/ASCQ.
Advanced Encryption Standard: See AES.
AES: Advanced Encryption Standard. A specification for the encryption of data using a symmetric-key algorithm.
Air Management Sled: See AMS.
ALUA: Asymmetric Logical Unit Access.
AMS: For a 2U12 or 2U24 enclosure, Air Management Sled.
compatible disk: A disk that can be used to replace a failed member disk of a vdisk because it both has enough capacity and is of the same type (SAS SSD, enterprise SAS, or midline SAS) as the disk that failed. See also available disk, dedicated spare, dynamic spare, and global spare.
complex programmable logic device: See CPLD.
Configuration Application Programming Interface: See CAPI.
controller A (or B): A short way of referring to controller module A (or B).
DSD: Drive spin down. A power-saving feature that monitors disk activity in the storage system and spins down inactive disks based on user-selectable policies.
dual-port disk: A disk that is connected to both controllers so it has two data paths, achieving fault tolerance.
Dynamic Host Configuration Protocol: See DHCP.
dynamic spare: An available compatible disk that is automatically assigned, if the dynamic spares option is enabled, to replace a failed disk in a vdisk with a fault-tolerant RAID level.
host: An external port that the storage system is attached to. The external port may be a port in an I/O adapter in a server, or a port in a network switch. Product interfaces use the terms host and initiator interchangeably.
host port: A port on a controller module that interfaces to a host computer, either directly or through a network switch.
host bus adapter: See HBA.
image ID: A globally unique serial number that identifies the point-in-time image source for a volume.
masking: A volume-mapping setting that specifies no access to that volume by hosts. See also default mapping and explicit mapping.
master volume: A volume that is enabled for snapshots and has an associated snap pool.
MC: Management Controller. A processor (located in a controller module) that is responsible for human-computer interface and computer-computer interfaces, including the WBI, CLI, and FTP interfaces, and interacts with the Storage Controller. See also EC and SC.
remote replication: Asynchronous (batch) replication of block-level data from a volume in a primary system to a volume in one or more secondary systems by creating a replication snapshot of the primary volume and copying the snapshot data to the secondary systems via Fibre Channel or iSCSI links. The capability to perform remote replication is a licensed feature (Remote Snap).
remote syslog support: See syslog.
serial electrically erasable programmable ROM: See SEEPROM.
Service Location Protocol: See SLP.
SES: SCSI Enclosure Services. The protocol that allows the initiator to communicate with the enclosure using SCSI commands.
SFCB: Small Footprint CIM Broker.
SFF: Small form factor. A type of disk drive.
SHA: Secure Hash Algorithm. A cryptographic hash function.
SLP: Service Location Protocol. Enables computers and other devices to find services in a local area network without prior configuration.
storage system: A controller enclosure with at least one connected drive enclosure. Product documentation and interfaces use the terms storage system and system interchangeably.
syslog: A protocol for sending event messages across an IP network to a logging server.
UCS Transformation Format - 8-bit: See UTF-8.
ULP: Unified LUN Presentation. A RAID controller feature that enables a host to access mapped volumes through any controller host port.