HP MSA 1040 SMU Reference Guide

Abstract

This guide is for use by storage administrators to manage an HP MSA 1040 storage system by using its web interface, Storage Management Utility (SMU).
© Copyright 2014 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Getting started
   Configuring and provisioning a new storage system
   Browser setup
   Signing in
   Configuring system settings
      Changing the system date and time
      Changing host interface settings
      Changing network interface settings
   Configuring CHAP
   Modifying a schedule
   Deleting schedules
4 Using system tools
   Viewing information about a snapshot
      Snapshot properties
      Mapping properties
      Schedule properties
   Viewing replication properties, addresses, and images for a volume
      Replication properties
      Replication addresses
      Replication images
   Testing SMI-S
   Troubleshooting
D Administering a log-collection system
   How log files are transferred and identified
Figures
1 Relationship between a master volume and its snapshots and snap pool
2 Rolling back a master volume
3 Creating a volume copy from a master volume or a snapshot
4 Intersite and intrasite replication sets

Tables
1 SMU communication status icons
2 Settings for default users
3 Example applications and RAID levels
4 RAID level comparison
1 Getting started The Storage Management Utility (SMU) is a web-based application for configuring, monitoring, and managing the storage system. Each controller module in the storage system contains a web server, which is accessed when you sign in to the SMU. In a dual-controller system, you can access all functions from either controller. If one controller becomes unavailable, you can continue to manage the storage system from the partner controller.
3. Click Sign In. If the system is available, the System Overview page is displayed; otherwise, a message indicates that the system is unavailable. Tips for signing in and signing out • Do not include a leading zero in an IP address. For example, enter 10.1.4.33 not 10.1.4.033. • Multiple users can be signed in to each controller simultaneously. • For each active SMU session an identifier is stored in the browser.
Tips for using the help window • To display help for a component in the Configuration View panel, right-click the component and select Help. To display help for the content in the main panel, click either Help in the menu bar or the help icon in the upper right corner of the panel. • In the help window, click the table of contents icon to show or hide the Contents pane. • As the context in the main panel is changed, the corresponding help topic is displayed in the help window.
SNMPv3 user accounts have these options:
• User Name.
• Password.
• SNMP User Type. Either: User Access, which allows the user to view the SNMP MIB; or Trap Target, which allows the user to receive SNMP trap notifications. Trap Target uses the IP address set with the Trap Host Address option.
• Authentication Type. Either: MD5 authentication; SHA (Secure Hash Algorithm) authentication; or no authentication. Authentication uses the password set with the Password option.
• Privacy Type.
When you create a vdisk you can use the default chunk size or one that better suits your application. The chunk size is the amount of contiguous data that is written to a disk before moving to the next disk. After a vdisk is created its chunk size cannot be changed. For example, if the host is writing data in 16-KB transfers, that size would be a good choice for random transfers because one host read would generate the read of exactly one disk in the volume.
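The chunk-to-disk relationship described above can be sketched as follows. This is an illustrative model of simple striping only (no parity, invented disk counts), not the controller's actual layout logic, which varies by RAID level:

```python
# Sketch (assumption: plain striping with illustrative values; the actual
# on-disk layout depends on the vdisk's RAID level).
def disk_for_offset(byte_offset, chunk_size, num_disks):
    """Return the index of the disk holding the chunk at byte_offset."""
    chunk_index = byte_offset // chunk_size
    return chunk_index % num_disks

# With a 16-KB chunk size, each aligned 16-KB host transfer maps to
# exactly one disk, as the example above describes.
CHUNK = 16 * 1024
assert disk_for_offset(0, CHUNK, 4) == 0
assert disk_for_offset(16 * 1024, CHUNK, 4) == 1
assert disk_for_offset(64 * 1024, CHUNK, 4) == 0  # wraps back to disk 0
```

Matching the chunk size to the host's typical transfer size keeps each random I/O on a single disk, which is the rationale behind the 16-KB example.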
TIP: A best practice is to designate spares for use if disks fail. Dedicating spares to vdisks is the most secure method, but it is also expensive to reserve spares for each vdisk. Alternatively, you can enable dynamic spares or assign global spares. Sparing rules for heterogeneous vdisks If you upgraded from an earlier release that did not distinguish between enterprise and midline SAS disks, you might have vdisks that contain both types of disks. These are called heterogeneous or mixed vdisks.
You can use a volume’s default name or change it to identify the volume’s purpose. For example, a volume used to store payroll information can be named Payroll. You can create vdisks with volumes by using the Provisioning Wizard, or you can create volumes manually.
About volume mapping Each volume has default host-access settings that are set when the volume is created; these settings are called the default mapping. The default mapping applies to any host that has not been explicitly mapped using different settings. Explicit mappings for a volume override its default mapping. Default mapping enables all attached hosts to see a volume using a specified LUN and access permissions set by the administrator.
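The precedence rule above — an explicit mapping overrides the volume's default mapping — can be sketched like this. The dictionary layout and host names are illustrative, not an actual SMU data structure:

```python
# Sketch of mapping precedence (hypothetical data layout): a host's
# explicit mapping, if one exists, overrides the volume's default mapping.
def effective_mapping(volume, host_id):
    """Return the mapping (LUN, access) that applies to host_id."""
    explicit = volume.get("explicit", {})
    if host_id in explicit:
        return explicit[host_id]
    return volume["default"]

payroll = {
    "default": {"lun": 5, "access": "read-only"},
    "explicit": {"host-A": {"lun": 5, "access": "read-write"}},
}
assert effective_mapping(payroll, "host-A")["access"] == "read-write"
assert effective_mapping(payroll, "host-B")["access"] == "read-only"
```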
About volume cache options You can set options that optimize reads and writes performed for each volume. Using write-back or write-through caching CAUTION: Only disable write-back caching if you fully understand how the host operating system, application, and adapter move data. If used incorrectly, you might hinder system performance. You can change a volume’s write-back cache setting.
• No-mirror. In this mode each controller stops mirroring its cache metadata to the partner controller. This improves write I/O response time but at the risk of losing data during a failover. ULP behavior is not affected, with the exception that during failover any write data in cache will be lost. • Atomic write. Not supported.
The following figure shows how the data state of a master volume is preserved in the snap pool by two snapshots taken at different points in time. The dotted line used for the snapshot borders indicates that snapshots are logical volumes, not physical volumes as are master volumes and snap pools.
The following figure shows the difference between rolling back the master volume to the data that existed when a specified snapshot was created (preserved), and rolling back preserved and modified data.

[Figure: MasterVolume-1 and Snapshot-1, showing Preserved Data (Monday) and Modified Data (Tuesday)]

When you use the rollback feature, you can choose to exclude the modified data, which will revert the data on the master volume to the preserved data when the snapshot was taken.
About the Volume Copy feature Volume Copy enables you to copy a volume or a snapshot to a new standard volume. While a snapshot is a point-in-time logical copy of a volume, the volume copy service creates a complete “physical” copy of a volume within a storage system. It is an exact copy of a source volume as it existed at the time the volume copy operation was initiated, consumes the same amount of space as the source volume, and is independent from an I/O perspective.
Guidelines to keep in mind when performing a volume copy include: • The destination vdisk must be owned by the same controller as the source volume. • The destination vdisk must have free space that is at least as large as the amount of space allocated to the original volume. A new volume will be created using this free space for the volume copy. • The destination vdisk does not need to have the same attributes (such as disk type, RAID level) as the volume being copied.
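The first two guidelines above are preflight checks that can be sketched as follows. The field names (`owner`, `allocated`, `free_space`) are hypothetical, chosen only to illustrate the rules; they are not SMU or CLI identifiers:

```python
# Illustrative preflight checks for a volume copy (hypothetical fields).
def can_copy(source, dest_vdisk):
    """Check the controller-ownership and free-space guidelines."""
    if dest_vdisk["owner"] != source["owner"]:
        return False, "destination vdisk owned by the other controller"
    if dest_vdisk["free_space"] < source["allocated"]:
        return False, "not enough free space in destination vdisk"
    return True, "ok"

src = {"owner": "A", "allocated": 500}
ok, reason = can_copy(src, {"owner": "A", "free_space": 600})
assert ok
ok, reason = can_copy(src, {"owner": "B", "free_space": 600})
assert not ok
```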
NOTE: To create an NRAID, RAID-0, or RAID-3 vdisk, you must use the CLI create vdisk command. For more information on this command, see the CLI Reference Guide.
Table 4 RAID level comparison (continued)
The locale setting determines the character used for the decimal (radix) point, as shown below.

Table 7 Decimal (radix) point character by locale

Language                                    Character    Examples
English, Chinese, Japanese, Korean          Period (.)   146.81 GB; 3.0 Gbit/s
Dutch, French, German, Italian, Spanish     Comma (,)    146,81 GB; 3,0 Gbit/s

Related topics
• "About user accounts" (page 15)

About the system date and time
You can change the storage system’s date and time, which are displayed in the System Status panel.
About Configuration View icons The Configuration View panel uses the following icons to let you view physical and logical components of the storage system.
• During vdisk operation, if two disks fail and two compatible spares are available, the system uses both spares to reconstruct the vdisk. If one of the spares fails during reconstruction, reconstruction proceeds in “fail 2, fix 1” mode. If the second spare fails during reconstruction, reconstruction stops. When a disk fails, its Fault/UID LED is illuminated. When a spare is used as a reconstruction target, its Online/Activity LED is illuminated. For details about LED states, see your product’s User Guide.
• In pull mode, when log data has accumulated to a significant size, the system sends notifications via email, SNMP, or SMI-S to the log-collection system, which can then use FTP to transfer the appropriate logs from the storage system. The notification will specify the storage-system name, location, contact, and IP address and the log-file type (region) that needs to be transferred.
Disk-performance graphs include:
• Data Transferred
• Data Throughput
• I/O
• IOPS
• Average Response Time
• Average I/O Size
• Disk Error Counters
• Average Queue Depth

Vdisk-performance graphs include:
• Data Transferred
• Data Throughput
• Average Response Time

You can save historical statistics in CSV format to a file for import into a spreadsheet or other third-party application. You can also reset historical statistics, which clears the retained data and continues to gather new samples.
• If the firmware in neither controller has the proper midplane serial number then the newer firmware version in either controller is transferred to the other controller. For information about the procedures to update firmware in controller modules, expansion modules, and disk drives, see "Updating firmware" (page 79). That topic also describes how to use the activity progress interface to view detailed information about the progress of a firmware-update operation.
2 Configuring the system Using the Configuration Wizard The Configuration Wizard helps you initially configure the system or change system configuration settings. The wizard guides you through the following steps. For each step you can view help by clicking the help icon in the wizard panel. As you complete steps they are highlighted at the bottom of the panel. If you cancel the wizard at any point, no changes are made.
CAUTION: Changing IP settings can cause management hosts to lose access to the storage system. To use DHCP to obtain IP values for network ports 1. Set the IP address source to DHCP. 2. Click Next to continue. To set static IP values for network ports 1. Determine the IP address, subnet mask, and gateway values to use for each controller. 2. Set the IP address source to manual. 3. Set the values for each controller. You must set a unique IP address for each network port. 4. Click Next to continue.
In-band management interfaces operate through the data path and can slightly reduce I/O performance. The in-band option is: • In-band SES Capability. Used for in-band monitoring of system status based on SCSI Enclosure Services (SES) data. If a service is disabled, it cannot be accessed. To allow specific users to access WBI, CLI, FTP or SMI-S, see "About user accounts" (page 15). To change management interface settings 1.
3. In the Managed Logs Notifications section, set the options: • Log Destination. The email address of the log-collection system. The email addresses must use the format user-name@domain-name and can have a maximum of 320 bytes. For example: LogCollector@MyDomain.com. • Include Logs. When the managed logs feature is enabled, this option activates “push” mode, which automatically attaches system log files to managed-logs email notifications that are sent to the log-collection system.
2. In the upper section of the panel, set the port-specific options: • IP Address. For IPv4 or IPv6, the port IP address. For corresponding ports in each controller, assign one port to one subnet and the other port to a second subnet. Ensure that each iSCSI host port in the storage system is assigned a different IP address. For example, in a system using IPv4: • Controller A port 3: 10.10.10.100 • Controller A port 4: 10.11.10.120 • Controller B port 3: 10.10.10.110 • Controller B port 4: 10.11.10.
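The subnet-assignment rule above can be checked programmatically. A minimal sketch, assuming illustrative addresses and a /24 mask (the actual mask comes from your network plan): corresponding ports on the two controllers share a subnet, and no two ports share an IP address.

```python
# Sketch of the iSCSI addressing rules (example addresses, assumed /24).
import ipaddress

ports = {
    ("A", 3): "10.10.10.100",
    ("A", 4): "10.11.10.120",
    ("B", 3): "10.10.10.110",
}
nets = {k: ipaddress.ip_network(v + "/24", strict=False)
        for k, v in ports.items()}

# Corresponding ports (port 3 on each controller) share one subnet...
assert nets[("A", 3)] == nets[("B", 3)]
# ...the other port pair uses a second subnet, and every IP is unique.
assert nets[("A", 4)] != nets[("A", 3)]
assert len(set(ports.values())) == len(ports)
```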
• In Use. Either:
   • The number of user-created components that exist.
   • N/A. Not applicable.
• Max Licensable. Either:
   • The number of user-created components that the maximum license supports.
   • N/A. Not applicable.
• Expiration. One of the following:
   • Never. License doesn’t expire.
   • Number of days remaining for a temporary license.
   • Expired. Temporary license has expired and cannot be renewed.
   • N/A. No license installed.
• File Transfer Protocol (FTP). A secondary interface for installing firmware updates, downloading logs, and installing a license.
• Simple Network Management Protocol (SNMP). Used for remote monitoring of the system through your network.
• Service Debug. Used for technical support only. Enables or disables debug capabilities, including Telnet debug ports and privileged diagnostic user IDs. This is disabled by default.
To configure email notification for managed logs 1. In the Configuration View panel, right-click the system and select Configuration > Services > Email Notification. 2. In the main panel, set the options: • Log Destination. The email address of the log-collection system. The email addresses must use the format user-name@domain-name and can have a maximum of 320 bytes. For example: LogCollector@MyDomain.com. • Include Logs.
Configuring user accounts Adding users You can create either a general user that can access the WBI (SMU), CLI, FTP or SMI-S interfaces, or an SNMPv3 user that can access the MIB or receive trap notifications. SNMPv3 user accounts support SNMPv3 security features such as authentication and encryption. To add a general user 1. In the Configuration View panel, right-click the system and select Configuration > Users > Add New User. 2. In the main panel, set the options: • User Name.
• Password. A password is case sensitive and must contain 8–32 characters. A password cannot contain the following characters: angle brackets, backslash, comma, double quote, single quote, or space. If the password contains only printable ASCII characters then it must contain at least one uppercase character, one lowercase character, and one non-alphabetic character.
• Temperature Preference. Specifies the scale to use for temperature values: Celsius or Fahrenheit. • Auto Sign Out (minutes). Select the amount of time that the user’s session can be idle before the user is automatically signed out (2–720 minutes). The default is 30 minutes. • Locale. The user’s preferred display language, which overrides the system’s default display language.
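The password rules stated for the Password option above can be sketched as a validator. This is an illustrative reading of the stated rules only, not the system's actual validation code:

```python
# Sketch of the stated password rules: 8-32 characters; no angle brackets,
# backslash, comma, double quote, single quote, or space; and if the
# password is all printable ASCII, it must mix upper, lower, and
# non-alphabetic characters.
import string

FORBIDDEN = set('<>\\,"\' ')

def password_ok(pw):
    if not 8 <= len(pw) <= 32:
        return False
    if any(c in FORBIDDEN for c in pw):
        return False
    if all(c in string.printable for c in pw):
        if not (any(c.isupper() for c in pw)
                and any(c.islower() for c in pw)
                and any(not c.isalpha() for c in pw)):
            return False
    return True

assert password_ok("Summer2014!")
assert not password_ok("short")              # under 8 characters
assert not password_ok('has"quote99A')       # forbidden character
assert not password_ok("alllowercase1")      # no uppercase character
```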
Configuring system settings Changing the system date and time You can enter values manually for the system date and time, or you can set the system to use NTP as explained in "About the system date and time" (page 29). To use manual date and time settings 1. In the Configuration View panel, right-click the system and select Configuration > System Settings > Date, Time. The date and time options appear. 2. Set the options: • Time.
To change iSCSI host interface settings 1. In the Configuration View panel, right-click the system and select Configuration > System Settings > Host Interfaces. 2. In the Common Settings for iSCSI section of the panel, set the options that apply to all iSCSI ports: • Authentication (CHAP). Enables or disables use of Challenge Handshake Authentication Protocol. Disabled by default. NOTE: CHAP records for iSCSI login authentication must be defined if CHAP is enabled.
Changing network interface settings You can configure addressing parameters for each controller’s network port. You can set static IP values or use DHCP. In DHCP mode, network port IP address, subnet mask, and gateway values are obtained from a DHCP server if one is available. If a DHCP server is unavailable, current addressing is unchanged. You must have some means of determining what addresses have been assigned, such as the list of bindings on the DHCP server.
Configuring advanced settings Changing disk settings Configuring SMART Self-Monitoring Analysis and Reporting Technology (SMART) provides data that enables you to monitor disks and analyze why a disk failed. When SMART is enabled, the system checks for SMART events one minute after a restart and every five minutes thereafter. SMART events are recorded in the event log. To change the SMART setting 1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings > Disk.
Scheduling drive spin down for all disks For all disks that are configured to use drive spin down (DSD), you can configure a time period to suspend and resume DSD so that disks remain spun-up during hours of frequent activity. To configure DSD for a vdisk, see "Configuring drive spin down for a vdisk" (page 55). To configure DSD for available disks and global spares, see "Configuring drive spin down for available disks and global spares" (page 49).
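The suspend window described above can be modeled as a daily time range, including one that wraps past midnight. The window times below are illustrative, not defaults:

```python
# Sketch of a daily DSD suspend window (illustrative times). While the
# window is active, disks remain spun up.
from datetime import time

def dsd_suspended(now, start, end):
    """True if DSD is suspended (disks stay spun up) at time `now`."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window wraps past midnight

# Suspend DSD during business hours, 08:00-18:00:
assert dsd_suspended(time(9, 30), time(8, 0), time(18, 0))
assert not dsd_suspended(time(22, 0), time(8, 0), time(18, 0))
# A window wrapping midnight, 22:00-06:00:
assert dsd_suspended(time(23, 0), time(22, 0), time(6, 0))
```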
Changing the missing LUN response Some operating systems do not look beyond LUN 0 if they do not find a LUN 0 or cannot handle noncontiguous LUNs. The Missing LUN Response option handles these situations by enabling the host drivers to continue probing for LUNs until they reach the LUN to which they have access.
3. In the Auto-Write Through Cache Behaviors section, either select (enable) or clear (disable) the options: • Revert when Trigger Condition Clears. Changes back to write-back caching after the trigger condition is cleared. Enabled by default. • Notify Other Controller. Notifies the partner controller that a trigger condition occurred. Enable this option to have the partner also change to write-through mode for better data protection.
To configure background scrub for disks not in vdisks 1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings > System Utilities. 2. Either select (enable) or clear (disable) the Disk Scrub option. This option is disabled by default. 3. Click Apply. Configuring utility priority You can change the priority at which the Verify, Reconstruct, Expand, and Initialize utilities run when there are active I/O operations competing for the system’s controllers.
Deleting remote systems You can delete the management objects for remote systems. After establishing replication to a remote system, if you choose to delete the remote system you can safely do so without affecting replications. However, because the remote system’s name and IP address will no longer appear in user interfaces, record this information before deleting the remote system so that you can access it at a later time, such as to delete old replication images or for disaster recovery.
Changing a vdisk’s name To change a vdisk’s name 1. In the Configuration View panel, right-click a vdisk and select Configuration > Modify Vdisk Name. The main panel shows the vdisk’s name. 2. Enter a new name. A vdisk name is case sensitive; cannot already exist in the system; and cannot include a comma, double quote, angle bracket, or backslash. The name you enter can have a maximum of 32 bytes. 3. Click Modify Name. The new name appears in the Configuration View panel.
To configure DSD for a vdisk 1. In the Configuration View panel, right-click a vdisk and select Configuration > Configure Vdisk Drive Spin Down. 2. Set the options: • Either select (enable) or clear (disable) the Enable Drive Spin Down option. • Set the Drive Spin Down Delay (minutes) option, which is the period of inactivity after which the vdisk’s disks and dedicated spares automatically spin down, from 1–360 minutes. 3. Click Apply. When processing is complete a success dialog appears. 4. Click OK.
Configuring a snap pool Changing a snap pool’s name To change a snap pool’s name 1. In the Configuration View panel, right-click a snap pool and select Configuration > Modify Snap Pool Name. 2. Enter a new name. A snap pool name is case sensitive; cannot already exist in a vdisk; cannot include a comma, double quote, angle bracket, or backslash; and can have a maximum of 32 bytes. 3. Click Modify Name. The new name appears in the Configuration View panel.
3 Provisioning the system Using the Provisioning Wizard The Provisioning Wizard helps you create a vdisk with volumes and to map the volumes to hosts. Before using this wizard, read documentation and Resource Library guidelines for your product to learn about vdisks, volumes, and mapping. Then plan the vdisks and volumes you want to create and the default mapping settings you want to use. The wizard guides you through the following steps.
• Assign to. If the system is operating in Active-Active ULP mode, optionally select a controller to be the preferred owner for the vdisk. Auto (the default) automatically assigns the owner to load-balance vdisks between controllers. If the system is operating in Single Controller mode, the Assign to setting is ignored and the system automatically load-balances vdisks in anticipation of the insertion of a second controller in the future. • RAID level. Select a RAID level for the vdisk.
Step 5: Setting the default mapping Specify default mapping settings to control whether and how hosts will be able to access the vdisk’s volumes. These settings include: • A logical unit number (LUN), used to identify a mapped volume to hosts. Both controllers share one set of LUNs. Each LUN can be assigned as the default LUN for only one volume in the storage system; for example, if LUN 5 is the default for Volume1, LUN5 cannot be the default LUN for any other volume.
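The LUN-uniqueness rule above — each LUN can be the default for only one volume — can be sketched as follows. The dictionary and volume names are illustrative:

```python
# Sketch enforcing the rule that a LUN can be the default LUN for only
# one volume in the storage system (hypothetical data structure).
def assign_default_lun(defaults, volume, lun):
    if lun in defaults.values():
        raise ValueError("LUN %d is already the default for another volume"
                         % lun)
    defaults[volume] = lun

defaults = {}
assign_default_lun(defaults, "Volume1", 5)
try:
    assign_default_lun(defaults, "Volume2", 5)  # LUN 5 already taken
    rejected = False
except ValueError:
    rejected = True
assert rejected and defaults == {"Volume1": 5}
```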
• Number of Sub-vdisks. For a RAID-10 or RAID-50 vdisk, optionally change the number of sub-vdisks that the vdisk should contain. • Chunk size. For RAID 5, 6, 10, or 50, optionally set the amount of contiguous data that is written to a vdisk member before moving to the next member of the vdisk. For RAID 50, this option sets the chunk size of each RAID-5 sub-vdisk. The chunk size of the RAID-50 vdisk is calculated as: configured-chunk-size x (subvdisk-members - 1).
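The RAID-50 chunk-size formula above can be worked through numerically. The 32-KB chunk size and 5-disk sub-vdisk below are example values only:

```python
# Worked example of the RAID-50 formula given above:
# vdisk chunk size = configured-chunk-size x (sub-vdisk members - 1).
def raid50_chunk_size(configured_chunk_kb, subvdisk_members):
    return configured_chunk_kb * (subvdisk_members - 1)

# With a 32-KB configured chunk size and 5-member RAID-5 sub-vdisks
# (one member's worth of capacity holds parity), the RAID-50 vdisk
# chunk size works out to 128 KB.
assert raid50_chunk_size(32, 5) == 128
```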
To change the system’s global spares 1. In the Configuration View panel, right-click the system and select Provisioning > Manage Global Spares. The main panel shows information about available disks in the system. Existing spares are labeled GLOBAL SP. • In the Disk Sets table, the number of white slots in the Disks field shows how many spares you can add. • In the Graphical or Tabular view, only existing global spares and suitable available disks are selectable. 2.
To create a volume in a vdisk 1. In the Configuration View panel, right-click a vdisk and select Provisioning > Create Volume. 2. In the main panel, set the options: • Volume name. This field is populated with a default name, which you can change. A volume name is case sensitive; cannot already exist in a vdisk; cannot include a comma, double quote, angle bracket, or backslash; and can have a maximum of 32 bytes. • Size. Optionally change the default size, which is all free space in the vdisk.
NOTE: The system might be unable to delete a large number of volumes in a single operation. If you specified to delete a large number of volumes, verify that all were deleted. If some of the specified volumes remain, repeat the deletion on those volumes. Changing default mapping for multiple volumes For all volumes in all vdisks or a selected vdisk, you can change the default access to those volumes by all hosts.
NOTE: You cannot map the secondary volume of a replication set. NOTE: When mapping a volume to a host using the Linux ext3 file system, specify read-write access; otherwise, the file system will be unable to mount/present/map the volume and will report an error such as “unknown partition table.” To explicitly map multiple volumes 1. In the Configuration View panel, right-click Vdisks or a vdisk and then select Provisioning > Map Volumes.
To delete the default mapping 1. Clear Map. 2. Click Apply. A message specifies whether the change succeeded or failed. 3. Click OK. Each mapping that uses the default settings is updated. Changing a volume’s explicit mappings CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes when the volumes are not in use. Be sure to unmount/unpresent/unmap a volume before changing the volume’s LUN. NOTE: You cannot map the secondary volume of a replication set.
Unmapping volumes You can delete all of the default and explicit mappings for multiple volumes. CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes when the volumes are not in use. Before changing a volume’s LUN, be sure to unmount/unpresent/unmap the volume. To unmap volumes 1. In the Configuration View panel, right-click Vdisks or a vdisk and then select Provisioning > Unmap Volumes. In the main panel, a table shows all the volumes for the selected vdisk. 2.
Creating a snapshot You can create a snapshot now or schedule the snapshot task. The first time a snapshot is created of a standard volume, the volume is converted to a master volume and a snap pool is created in the volume’s vdisk. The snap pool’s size is either 20% of the volume size or 5.37 GB, whichever is larger. The recommended minimum size for a snap pool is 50 GB. Before creating or scheduling snapshots, verify that the vdisk has enough free space to contain the snap pool.
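The automatic snap-pool sizing rule above — 20% of the volume size or 5.37 GB, whichever is larger — can be expressed directly:

```python
# The default snap-pool sizing rule described above.
def default_snap_pool_gb(volume_gb):
    return max(volume_gb * 20 / 100, 5.37)

assert default_snap_pool_gb(100) == 20.0   # 20% wins for larger volumes
assert default_snap_pool_gb(10) == 5.37    # 5.37-GB floor wins for small ones
```

Note that this is only the automatically created default; the text above recommends a minimum snap pool size of 50 GB in practice.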
Deleting snapshots You can use the Delete Snapshots panel to delete standard and replication snapshots. When you delete a snapshot, all data uniquely associated with that snapshot is deleted and associated space in the snap pool is freed for use. Snapshots can be deleted in any order, irrespective of the order in which they were created. CAUTION: Deleting a snapshot removes its mappings and schedules and deletes its data.
3. Set the options: • Start Schedule. Specify a date and a time in the future to be the first instance when the scheduled task will run, and to be the starting point for any specified recurrence. • Date must use the format yyyy-mm-dd. • Time must use the format hh:mm followed by either AM, PM, or 24H (24-hour clock). For example, 13:00 24H is the same as 1:00 PM. • Recurrence. Specify the interval at which the task should run. Set the interval to at least 2 minutes. The default is 1 minute.
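The Start Schedule formats above (date yyyy-mm-dd; time hh:mm followed by AM, PM, or 24H) can be parsed as in this sketch, which illustrates the stated formats rather than the system's own parser:

```python
# Sketch parsing the documented Start Schedule formats.
from datetime import datetime

def parse_start(date_str, time_str):
    """Combine 'yyyy-mm-dd' and 'hh:mm AM|PM|24H' into a datetime."""
    value, suffix = time_str.rsplit(" ", 1)
    if suffix == "24H":
        t = datetime.strptime(value, "%H:%M").time()
    else:
        t = datetime.strptime(value + " " + suffix, "%I:%M %p").time()
    d = datetime.strptime(date_str, "%Y-%m-%d").date()
    return datetime.combine(d, t)

# As the text notes, 13:00 24H and 1:00 PM are the same time:
a = parse_start("2014-06-02", "13:00 24H")
b = parse_start("2014-06-02", "1:00 PM")
assert a == b and a.hour == 13
```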
• With Modified Data. If the source volume is a snapshot, select this option to include the snapshot’s modified data in the copy. Otherwise, the copy will contain only the data that existed when the snapshot was created. 4. Click Copy the Volume. A confirmation dialog appears. 5. Click Yes to continue; otherwise, click No. If you clicked Yes and With Modified Data is selected and the snapshot has modified data, a second confirmation dialog appears. 6. Click Yes to continue; otherwise, click No.
Rolling back a volume You can roll back (revert) the data in a volume to the data that existed when a specified snapshot was created. You also have the option of including its modified data (data written to the snapshot since it was created). For example, you might want to take a snapshot, mount/present/map it for read/write, and then install new software on the snapshot for testing. If the software installation is successful, you can roll back the volume to the contents of the modified snapshot.
Creating a snap pool Before you can convert a standard volume to a master volume or create a master volume for snapshots, a snap pool must exist. A snap pool and its associated master volumes can be in different vdisks, but must be owned by the same controller. To create a snap pool 1. In the Configuration View panel, right-click a vdisk and select Provisioning > Create Snap Pool. 2. In the main panel set the options: • Snap Pool name. Optionally change the default name for the snap pool.
Removing hosts To remove hosts 1. Verify that the hosts you want to remove are not accessing volumes. 2. In the Configuration View panel, either: • Right-click the system or Hosts and then select Provisioning > Remove Hosts. • Right-click a host and select Provisioning > Remove Host. 3. In the main panel, select the hosts to remove. To select or clear all items, toggle the checkbox in the heading row. 4. Click Remove Host(s). A confirmation dialog appears. 5.
To create an explicit mapping 1. In the Maps for Host table, select the Default mapping to override. 2. Select Map. 3. Set the LUN and select the ports and access type. 4. Click Apply. A message specifies whether the change succeeded or failed. 5. Click OK. The mapping becomes Explicit with the new settings. To modify an explicit mapping 1. In the Maps for Host table, select the Explicit mapping to change. 2. Set the LUN and select the ports and access type. 3. Click Apply.
Modifying a schedule To modify a schedule 1. In the Configuration View panel, right-click the system or a volume or a snapshot and select Provisioning > Modify Schedule. In the main panel, a table shows each schedule. 2. In the table, select the schedule to modify. For information about schedule status values, see "Schedule properties" (page 105). 3. Set the options: • Snapshot Prefix. Optionally change the default prefix to identify snapshots created by this task.
Deleting schedules If a component has a scheduled task that you no longer want to occur, you can delete the schedule. When a component is deleted, its schedules are also deleted. To delete task schedules 1. In the Configuration View panel, right-click the system or a volume or a snapshot and select Provisioning > Delete Schedule. 2. In the main panel, select the schedule to remove. 3. Click Delete Schedule. A confirmation dialog appears. 4. Click Yes to continue; otherwise, click No.
4 Using system tools Updating firmware You can view the current versions of firmware in controller modules, expansion modules, and disks, and install new versions. To monitor the progress of a firmware-update operation by using the activity progress interface, see "Using the activity progress interface" (page 81) below. TIP: To ensure success of an online update, select a period of low I/O activity.
6. Click Install Controller-Module Firmware File. A dialog box shows firmware-update progress. The process starts by validating the firmware file: • If the file is invalid, verify that you specified the correct firmware file. If you did, try downloading it again from the source location. • If the file is valid, the process continues. CAUTION: Do not perform a power cycle or controller restart during a firmware update.
CAUTION: Do not perform a power cycle or controller restart during the firmware update. If the update is interrupted or there is a power failure, the module might become inoperative. If this occurs, contact technical support. The module’s FRU might need to be returned to the factory for reprogramming. It typically takes 4.5 minutes to update each EMP in a D2700 enclosure, or 2.5 minutes to update each EMP in an MSA 1040 or P2000 drive enclosure. Wait for a message that the code load has completed. 7.
To access the activity progress interface 1. Enable the Activity Progress Monitor service; see "Changing management interface settings" (page 40). 2. In a new tab in your web browser, enter a URL of the form: http://controller-address:8081/cgi-bin/content.cgi?mc=MC-identifier&refresh=true where: • controller-address is required and specifies the IP address of a controller network port.
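The URL format above can be assembled programmatically. The following sketch is for illustration only; the controller address and MC identifier used are hypothetical placeholders, not values defined by this guide.

```python
# Build the activity-progress URL in the form described above.
# The controller address and MC identifier below are hypothetical examples.

def activity_progress_url(controller_address, mc_identifier=None, refresh=True):
    """Return the activity progress interface URL for a controller."""
    url = "http://{}:8081/cgi-bin/content.cgi".format(controller_address)
    params = []
    if mc_identifier:
        params.append("mc={}".format(mc_identifier))
    if refresh:
        params.append("refresh=true")
    if params:
        url += "?" + "&".join(params)
    return url

# Example (hypothetical controller IP and MC identifier):
print(activity_progress_url("10.0.0.2", "A"))
# http://10.0.0.2:8081/cgi-bin/content.cgi?mc=A&refresh=true
```

The resulting URL can then be opened in a browser tab as described in step 2.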
Saving logs To help service personnel diagnose a system problem, you might be asked to provide system log data. Using the SMU, you can save log data to a compressed zip file.
For SAS, you can reset a port pair (the first and second ports). Resetting a SAS host port issues a COMINIT/COMRESET sequence and might reset other ports. To reset a host port 1. In the Configuration View panel, right-click the system and select Tools > Reset Host Port. 2. Select the port or port pair to reset. 3. Click Reset Host Port. Rescanning disk channels A rescan forces a rediscovery of disks and enclosures in the storage system.
If spares are available, and the health of the vdisk is Degraded, the vdisk will use them to start reconstruction. When reconstruction is complete, you can clear the leftover disk’s metadata. Clearing the metadata will change the disk’s health to OK and its How Used state to AVAIL, making the disk available for use in a new vdisk or as a spare.
Shutting down Shutting down the Storage Controller in a controller module ensures that a proper failover sequence is used, which includes stopping all I/O operations and writing any data in write cache to disk. If the Storage Controller in both controller modules is shut down, hosts cannot access the system’s data. Perform a shutdown before removing a controller module or powering down the system.
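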
Expanding a vdisk You can expand the capacity of a vdisk by adding disks to it, up to the maximum number of disks that the storage system supports. Host I/O to the vdisk can continue while the expansion proceeds. You can then create or expand a volume to use the new free space, which becomes available when the expansion is complete. You can expand only one vdisk at a time.
3. Click OK. The panel shows the verification’s progress. To abort vdisk verification 1. In the Configuration View panel, right-click a fault-tolerant vdisk and select Tools > Verify Vdisk. 2. Click Abort Verify Utility. A message confirms that verification has been aborted. 3. Click OK. Scrubbing a vdisk The system-level Vdisk Scrub option (see "Configuring background scrub for vdisks" (page 52)) automatically checks all vdisks for disk defects.
Examples of when quarantine can occur are: • At system power-up, a vdisk has fewer disks online than at the previous power-up. This may happen because a disk is slow to spin up or because an enclosure is not powered up. The vdisk will be automatically dequarantined if the inaccessible disks come online and the vdisk status becomes FTOL (fault tolerant and online), or if after 60 seconds the vdisk status is QTCR or QTDN.
To remove a vdisk from quarantine (if specified by the recommended action for event 172 or 485) 1. In the Configuration View panel, right-click a quarantined vdisk and select Tools > Dequarantine Vdisk. 2. Click Dequarantine Vdisk. Depending on the number of disks that remain active in the vdisk, its health might change to Degraded (RAID 6 only) and its status changes to FTOL, CRIT, or FTDN. For status descriptions, see "Vdisk properties" (page 100).
Resetting or saving historical disk-performance statistics Resetting historical disk-performance statistics You can reset (clear) all historical performance statistics for all disks. When you reset historical statistics, an event will be logged and new data samples will continue to be stored every quarter hour. To reset historical disk performance statistics 1. In the Configuration View panel, right-click the local system and select Tools > Reset or Save Disk Performance Statistics. 2.
5 Viewing system status Viewing information about the system In the Configuration View panel, right-click the system and select View > Overview. The System Overview table shows: • Health. OK Degraded Fault Unknown • Component. System, Enclosures, Disks, Vdisks, Volumes, Schedules, Configuration Limits, Versions, Snap Pools, Snapshots, Licensed Features. • Count. • Capacity. • Storage Space. For descriptions of storage-space color codes, see "About storage-space color codes" (page 29).
The System Redundancy table shows: • Controller Redundancy Mode. • Controller Redundancy Status. • Controller A Status. • Controller B Status. Enclosure properties When you select Enclosures in the System Overview table, a table displays the following information for each enclosure: • Health. OK Degraded Fault Unknown If an enclosure’s health is not OK, select it in the Configuration View panel to view details about it. • Enclosure ID. • Enclosure WWN. • Vendor. • Model. • Number of Disks.
• VDISK: Used in a vdisk. • VDISK SP: Spare assigned to a vdisk. • Current Job • DRSC: Disks in the vdisk are being scrubbed. • EXPD: The vdisk is being expanded. • INIT: The vdisk is being initialized. • RCON: The vdisk is being reconstructed. • VRFY: The vdisk is being verified. • VRSC: The vdisk is being scrubbed. • Status. • Up: The disk is present and is properly communicating with the expander. • Spun Down: The disk is present and has been spun down by the DSD feature.
• Disk Type. • SAS: Enterprise SAS. • SAS MDL: Midline SAS. NOTE: In the Configuration View panel, if a vdisk contains more than one type of disk, its RAID-level label includes the suffix -MIXED. If no vdisks exist, the table displays no data. Volume properties When you select Volumes in the System Overview table, a table displays the following information for each volume: • Name. • Serial Number. • Size. Total size of the volume. • Vdisk Name. The name of the vdisk the volume resides on.
• Status. Expired or active. • Next Time. The next time the task is scheduled to run. The Task Details table displays specifics about the task: • Task Name. • Task Type. Type of task assigned to run. • Status. Outcome of the task. • Task State. Specific information about task type. When you select a task of type TakeSnapshot, a third table displays. The Retained Set table shows the name and serial number of each snapshot that the task has created and that is being retained.
Viewing the system event log In the Configuration View panel, right-click the system and select View > Event Log. The System Events panel shows the 100 most recent events that have been logged by either controller. All events are logged, regardless of event-notification settings. Click the buttons above the table to view all events, or only critical, warning, or informational events. The event log table shows the following information: • Severity. Critical.
Viewing information about all vdisks In the Configuration View panel, right-click Vdisks and select View > Overview. The Vdisks Overview table shows: • Health. OK Degraded Fault Unknown • Component. • Count. Number of components. • Capacity. Total capacity of the component. • Storage Space. Amount of space on the component. For descriptions of storage-space color codes, see "About storage-space color codes" (page 29). The Vdisks table shows more information about each vdisk. • Health. • Name. Vdisk name.
Viewing information about a vdisk In the Configuration View panel, right-click a vdisk and select View > Overview. The Vdisks Overview table shows: • Health. OK Degraded Fault Unknown • Component. Vdisk, disks, volumes. • Count. • Capacity. • Storage Space. For descriptions of storage-space color codes, see "About storage-space color codes" (page 29). Select a component to see more information about it. When the Vdisk component is selected, you can view properties or historical performance statistics.
• Status. • CRIT: Critical. The vdisk is online but isn’t fault tolerant because some of its disks are down. • FTDN: Fault tolerant with a down disk. The vdisk is online and fault tolerant, but some of its disks are down. • FTOL: Fault tolerant and online. • OFFL: Offline. Either the vdisk is using offline initialization, or its disks are down and data may be lost. • QTCR: Quarantined critical. The vdisk is critical with at least one inaccessible disk.
If aggregation is required, the system aggregates samples for each disk in the vdisk (as described in "Disk performance" (page 113)) and then aggregates the resulting values as follows: • For a count statistic such as data transferred, the aggregated values are added to produce the value of the aggregated sample. • For a rate statistic such as data throughput, the aggregated values are added and then are divided by their combined interval (seconds per sample multiplied by the number of samples).
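The two aggregation rules can be sketched as follows. The sample values are invented for illustration, and the rate rule reflects one reading of the text (per-sample amounts summed, then divided by the combined interval); the storage system performs this aggregation internally.

```python
# Illustration of the two aggregation rules described above.
# Sample values are invented; the storage system performs this internally.

def aggregate_count(samples):
    """Count statistics (e.g. data transferred): values are simply added."""
    return sum(samples)

def aggregate_rate(rates, seconds_per_sample):
    """Rate statistics (e.g. data throughput): per-sample amounts are added,
    then divided by the combined interval (seconds per sample multiplied by
    the number of samples)."""
    total = sum(rate * seconds_per_sample for rate in rates)  # bytes per sample
    combined_interval = seconds_per_sample * len(rates)
    return total / combined_interval

bytes_transferred = [1_000_000, 3_000_000]       # two 900-second samples
print(aggregate_count(bytes_transferred))        # 4000000
print(aggregate_rate([2000.0, 4000.0], 900))     # 3000.0 (bytes/s)
```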
The enclosure view table has two tabs. The Tabular tab shows: • Health. OK Degraded Fault N/A (if the disk is spun down) Unknown If the disk’s health is not OK, view health details in the Enclosure Overview panel (page 111). • Name. System-defined disk name using the format Disk-enclosure-number.disk-slot-number. • Type. • SAS: Enterprise SAS. • SAS MDL: Midline SAS. • State.
Viewing information about a volume In the Configuration View panel, right-click a volume and select View > Overview. The Volume Overview table shows: • Component. Volume or Maps. • Count. The quantity of mappings for the volume. • Capacity. The capacity of the volume. • Storage Space. The space usage of the volume. For descriptions of storage-space color codes, see "About storage-space color codes" (page 29). • Replication Addresses. The quantity of replication addresses for the volume.
• Status-Reason. More information about the status value, or N/A for Online status. • Monitor. Replication volume monitoring status: • OK: Communication to the remote volume is successfully occurring on the FC or iSCSI network. • Failed: Communication to the remote volume has failed because of an FC or iSCSI network issue or because the remote volume has gone offline. • Location. Local or Remote. • Primary Volume Name. Primary volume name.
• Expired: Schedule has expired. • Invalid: Schedule is invalid. • Next Time. The next time the associated task will run. The Task Details table shows different properties depending on the task type. Properties shown for all task types are: • Task Name. Task name. • Task Type. ReplicateVolume, ResetSnapshot, TakeSnapshot, or VolumeCopy. • Status. • Uninitialized: Task is not yet ready to run. • Ready: Task is ready to run. • Active: Task is running. • Error: Task has an error. • Invalid: Task is invalid.
Replication images If any replication images exist for this volume, when you select the Replication Images component, the Replication Images table shows information about each image. For the selected image, the Replication Images table shows: • Image Serial Number. Replication image serial number. • Image Name. User-defined name assigned to the primary replication image. • Snapshot Serial Number. Replication snapshot serial number associated with the image.
• Replication snapshot (Replicating): For a primary volume, a snapshot that is being replicated to a secondary system. • Replication snapshot (Current sync point): For a primary or secondary volume, the latest snapshot that is copy-complete on any secondary system in the replication set. • Replication snapshot (Common sync point): For a primary or secondary volume, the latest snapshot that is copy-complete on all secondary systems in the replication set.
Viewing information about a snap pool In the Configuration View panel, right-click a snap pool and select View > Overview. The Snap Pool Overview table shows: • The capacity and space usage of the snap pool • The quantity of volumes using the snap pool • The quantity of snapshots in the snap pool For descriptions of storage-space color codes, see "About storage-space color codes" (page 29).
Volume properties When you select the Client Volumes component, a table shows the name, serial number, size, vdisk name, and vdisk serial number for each volume using the snap pool. Snapshot properties When you select the Resident Snapshots component, a table shows each snapshot’s name; serial number; amounts of snap data, unique data, and shared data; and status (Available or Unavailable).
• Profile. • Standard: Default profile. • HP-UX: The host uses Flat Space Addressing. • Host Type. • If the host was discovered and its entry was automatically created, its host-interface type: FC; iSCSI; SAS. • If the host entry was manually created: Undefined. Mapping properties When you select Maps in the Host Overview table, the Maps for Host table shows: • Type. Explicit or Default. Settings for an explicit mapping override the default mapping. • Name. Volume name. • Serial Number.
• Manufacturing Location. • Revision. • EMP A Revision. Firmware revision of the Enclosure Management Processor in controller module A’s Expander Controller. • EMP B Revision. Firmware revision of the Enclosure Management Processor in controller module B’s Expander Controller. • EMP A Bus ID. • EMP B Bus ID. • EMP A Target ID. • EMP B Target ID. • Midplane Type. • Enclosure Power (watts). • PCIe 2-Capable. Shows whether the enclosure is capable of using PCI Express version 2.
• Model. • Size. • Speed (kr/min). • Transfer Rate. The data transfer rate in Gbit/s. Some 6-Gbit/s disks might not consistently support a 6-Gbit/s transfer rate. If this happens, the controller automatically adjusts transfers to those disks to 3 Gbit/s, increasing reliability and reducing error messages with little impact on system performance. This rate adjustment persists until the controller is restarted or power-cycled. • Revision. Disk firmware revision number. • Serial Number. • Current Job.
The system will change the time settings to match the times of the oldest and newest samples displayed. The graphs are updated each time you click either the Performance tab or the Update button. • Data Transferred. Shows the amounts of data read and written and the combined total over the sampling time period. The base unit is bytes. • Data Throughput. Shows the rates at which data are read and written and the combined total over the sampling time period. The base unit is bytes/s. • I/O.
Fan properties In a D2700 enclosure when you select a fan, a table shows: • Health. OK Degraded Fault Unknown • Health Reason. If Health is not OK, this field shows the reason for the health state. • Health Recommendation. If Health is not OK, this field shows recommended actions to take to resolve the health issue. • Status. • Location. • Speed. • Serial Number. • Firmware Version. • Hardware Version. Controller module properties When you select a controller module, a table shows: • Health.
Controller module: network port properties When you select a network port, a table shows: • Health. OK Degraded • Health Reason. If Health is not OK, this field shows the reason for the health state. • MAC Address. • Addressing Mode. • IP Address. • Gateway. • Subnet Mask. Controller module: FC host port properties When you select an FC host port, a table shows: • Health. OK Degraded Fault N/A • Health Reason. If Health is not OK, this field shows the reason for the health state. • Status.
Controller module: iSCSI host port properties When you select an iSCSI host port, a table shows: • Health. OK Degraded Fault N/A Unknown • Health Reason. If Health is not OK, this field shows the reason for the health state. • Status. • Up: The port is cabled and has an I/O link. • Warning: Not all of the port’s PHYs are up. • Not Present: The controller module is not installed or is down. • Disconnected: Either no I/O link is detected or the port is not cabled. • Ports.
Controller module: SAS host port properties When you select a SAS host port, a table shows: • Health. OK Degraded Fault N/A Unknown • Health Reason. If Health is not OK, this field shows the reason for the health state. • Health Recommendation. If Health is not OK, this field shows recommended actions to take to resolve the health issue. • Status. • Up: The port is cabled and has an I/O link. • Warning: Not all of the port’s PHYs are up. • Not Present: The controller module is not installed or is down.
Controller module: CompactFlash properties When you select a CompactFlash card in the Rear Tabular view, a table shows: • Health. OK Fault • Health Reason. If Health is not OK, this field shows the reason for the health state. • Health Recommendation. If Health is not OK, this field shows recommended actions to take to resolve the health issue. • Status. • Cache Flush. • Enabled: If the controller loses power, it will automatically write cache data to the CompactFlash card.
I/O module: Out port properties When you select an Out port, a table shows: • Health. OK Degraded Fault N/A Unknown • Health Reason. If Health is not OK, this field shows the reason for the health state. • Health Recommendation. If Health is not OK, this field shows recommended actions to take to resolve the health issue. • Status. • Name. Viewing information about a remote system In the Configuration View panel, right-click a remote system and select View > Overview.
6 Using Remote Snap to replicate volumes About the Remote Snap replication feature Remote Snap is a licensed feature for disaster recovery. This feature performs asynchronous (batch) replication of block-level data from a volume on a local storage system to a volume that can be on the same system or on a second, independent system. This second system can be located at the same site as the first system or at a different site.
Figure 4 Intersite and intrasite replication sets Remote replication uses snapshot functionality to track the data to be replicated and to determine the differences in data updated on the master volume, minimizing the amount of data to be transferred. In order to perform a replication, a snapshot of the primary volume is taken, creating a point-in-time image of the data.
NOTE: Snapshot operations are I/O-intensive. Every write to a unique location in a master volume after a snapshot is taken will cause an internal read and write operation to occur in order to preserve the snapshot data. If you intend to create snapshots of, create volume copies of, or replicate volumes in a vdisk, ensure that the vdisk contains no more than four master volumes, snap pools, or both.
• Delta replications: Delta data is the “list” of 64-KB blocks that differs between the last snapshot replicated and the next snapshot to be replicated. This delta data is then replicated from the replication snapshot on the primary volume to the secondary volume. Once the initial replication has completed, all future replications for that replication set will be delta replications so long as sync points are maintained. Action 5 is a delta replication.
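The delta concept above can be illustrated with a short sketch that compares two point-in-time images in 64-KB blocks and records which blocks differ. This is a conceptual illustration only, not the controller's actual implementation.

```python
# Conceptual sketch of delta detection: compare two point-in-time images
# in 64-KB blocks and record the indices of blocks that differ.

BLOCK_SIZE = 64 * 1024  # 64 KB

def delta_blocks(last_replicated, next_snapshot):
    """Return indices of 64-KB blocks that differ between two images."""
    deltas = []
    for i in range(0, len(next_snapshot), BLOCK_SIZE):
        old = last_replicated[i:i + BLOCK_SIZE]
        new = next_snapshot[i:i + BLOCK_SIZE]
        if old != new:
            deltas.append(i // BLOCK_SIZE)
    return deltas

old_image = bytes(3 * BLOCK_SIZE)                   # three zeroed blocks
new_image = bytearray(old_image)
new_image[BLOCK_SIZE] = 0xFF                        # modify block 1 only
print(delta_blocks(old_image, bytes(new_image)))    # [1]
```

Only the blocks in the delta list need to be transferred, which is why delta replications are much smaller than the initial replication.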
• The reserve size is calculated as follows: • If the primary volume and the snap pool are each less than 500 GB, the reserve will be the same size as the primary volume. • If the primary volume is larger than 500 GB, the reserve size will be the maximum, 500 GB. • If the snap pool is larger than 500 GB, the reserve will be the same size as the snap pool.
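The three rules above can be transcribed literally as a small function. The order of the checks, and hence the precedence when both the primary volume and the snap pool exceed 500 GB, is an assumption based on the order in which the rules are stated.

```python
# Reserve-size rules transcribed literally from the list above.
# The precedence between the last two cases is an assumption.

GB = 10**9
LIMIT = 500 * GB

def reserve_size(primary_volume_bytes, snap_pool_bytes):
    if primary_volume_bytes < LIMIT and snap_pool_bytes < LIMIT:
        return primary_volume_bytes      # same size as the primary volume
    if primary_volume_bytes > LIMIT:
        return LIMIT                     # capped at the 500-GB maximum
    if snap_pool_bytes > LIMIT:
        return snap_pool_bytes           # same size as the snap pool
    return primary_volume_bytes

print(reserve_size(100 * GB, 200 * GB) // GB)   # 100
print(reserve_size(800 * GB, 100 * GB) // GB)   # 500
print(reserve_size(100 * GB, 800 * GB) // GB)   # 800
```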
3. Map the new primary volume to hosts, as was the original primary volume. Figure 6 Example of primary-volume failure If the original primary volume becomes accessible, you can set it to be the primary volume again as described in the following process overview: 1. Take a snapshot of the original primary volume. This preserves the volume’s current data state for later comparison with the new primary volume. 2. Remove the volume’s mappings. 3. Set the original primary volume to be a secondary volume. 4.
• Suspending (page 132), resuming (page 132), or aborting (page 132) a replication • "Exporting a replication image to a snapshot" (page 135) • "Changing the primary volume for a replication set" (page 135) • "Viewing replication properties, addresses, and images for a volume" (page 137) • "Viewing information about a replication image" (page 139) • "Viewing information about a remote primary or secondary volume" (page 138) Using the Replication Setup Wizard If the system is licensed to use remote replication, you can use the Replication Setup Wizard to prepare to replicate an existing volume.
Step 3: Selecting the replication mode Select the replication mode, which specifies whether the replication destination is in the local system or a remote system. If you want to replicate to a remote system that hasn’t already been added to the local system, you can add it. Local replication is allowed only if the primary and secondary volumes are in vdisks owned by different controllers. To replicate within the local system 1. Select Local Replication. 2.
Replicating a volume If the system is licensed to use remote replication, you can create a replication set that uses the selected volume as the primary volume, and immediately start or schedule replication. The primary volume can be a standard volume or a master volume. To create a replication set you must select a secondary system and a secondary vdisk or volume. The secondary system can be the local system, or a remote system added by using the Add Remote System panel.
3. Select the link type used between the two systems. 4. If you want to start replication now: a. Select the Initiate Replication and Now options. b. Optionally change the default replication image name. A name is case sensitive; cannot already exist in a vdisk; cannot include a comma, double quote, angle bracket, or backslash; and can have a maximum of 32 bytes. c. Continue with step 7. 5. If you want to schedule replication: a. Select the Initiate Replication and Scheduled options. b.
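The naming rules in step 4b can be checked on the client side before submitting the name. The sketch below validates the stated rules (case-sensitive, at most 32 bytes, no comma, double quote, angle bracket, or backslash); uniqueness within the vdisk can only be verified by the system itself.

```python
# Validate a replication image name against the rules stated above:
# at most 32 bytes, and no comma, double quote, angle bracket, or backslash.
# Uniqueness within the vdisk must still be checked by the system.

FORBIDDEN = set(',"<>\\')

def is_valid_image_name(name):
    if len(name.encode("utf-8")) > 32:   # limit is 32 bytes, not 32 characters
        return False
    return not any(ch in FORBIDDEN for ch in name)

print(is_valid_image_name("replication-image-01"))   # True
print(is_valid_image_name('bad"name'))               # False
print(is_valid_image_name("x" * 33))                 # False
```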
Replicating a snapshot If the system is licensed to use remote replication, you can replicate an existing, primary snapshot that is mapped to a host. You can only replicate a snapshot of a volume that is already part of a replication set. If the selected snapshot hasn’t already been replicated to a secondary volume, each replication volume in the replication set is requested to replicate the snapshot data. Only snapshot preserved data is replicated; snapshot modified data is not replicated.
Suspending replication If the system is licensed to use remote replication, you can suspend the current replication operation for a selected replication volume. You must perform this task on the system that owns the secondary volume. Once suspended, the replication must be resumed or aborted to allow the replication volume to resume normal operation. To suspend replication 1. In the Configuration View panel, right-click a local replication volume and select Provisioning > Suspend Replication. 2.
NOTE: • It is recommended that the vdisk that you are moving contain only secondary volumes and their snap pools. You are allowed to move other volumes along with secondary volumes and their snap pools, but be sure that you are doing so intentionally. • If you intend to move a vdisk’s enclosure and you want to allow I/O to continue to the other enclosures, it is best if the enclosure is at the end of the chain of connected enclosures.
3. Click Yes to continue; otherwise, click No. If you clicked Yes, the stop operation begins. A message indicates whether the task succeeded or failed. If the stop operation succeeds, the vdisk’s health is shown as Unknown, its status is shown as STOP, and its subcomponents are no longer displayed in the Configuration View panel. 4. If the stop operation succeeded for the secondary volume’s vdisk and for its snap pool’s vdisk (if applicable), you can move the disks into the remote system.
To reattach a secondary volume 1. In the Configuration View panel, right-click the secondary volume and select Provisioning > Reattach Replication Volume. 2. In the main panel, click Reattach Replication Volume. A message indicates whether the task succeeded or failed. • If the task succeeds, the secondary volume’s status changes to “Establishing proxy” while it is establishing the connection to the remote (primary) system in preparation for replication; then the status changes to Online.
To change the secondary volume of a replication set to be its primary volume 1. On the secondary system, in the Configuration View panel, right-click the secondary volume and select Provisioning > Set Replication Primary Volume. 2. In the main panel, select the secondary volume in the list. 3. Click Set Replication Primary Volume. In the Configuration View panel, the volume’s designation changes from Secondary Volume to Primary Volume. NOTE: The offline primary volume remains designated a Primary Volume.
Viewing replication properties, addresses, and images for a volume In the Configuration View panel, right-click a volume and select View > Overview.
• Not Attempted. Communication has not been attempted to the remote volume. • Online. The volumes in the replication set have a valid connection but communication is not currently active. • Active. Communication is currently active to the remote volume. • Offline. No connection is available to the remote system. • Connection Time. Date and time of the last communication with the remote volume, or N/A.
Replication image properties When you select the Replication Images component a table shows replication image details including the image serial number and name, snapshot serial number and name, and image creation date/time. Viewing information about a replication image In the Configuration View panel, right-click a replication image and select View > Overview.
7 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
Typographic conventions
Table 12 Document conventions

Convention                                Element
Blue text: Table 2 (page 6)               Cross-reference links
Blue, bold, underlined text               Email addresses
Blue, underlined text: http://www.hp.
8 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A SNMP reference This appendix describes the Simple Network Management Protocol (SNMP) capabilities that MSA 1040 storage systems support. This includes standard MIB-II, the FibreAlliance SNMP Management Information Base (MIB) version 2.2 objects, and enterprise traps. MSA 1040 storage systems can report their status through SNMP. SNMP provides basic discovery using MIB-II, more detailed status with the FA MIB 2.2, and asynchronous notification using enterprise traps.
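Because basic discovery uses standard MIB-II, any SNMP tool can query the system group. The OIDs below are standard MIB-II (RFC 1213) objects; the host address and community string in the example are hypothetical placeholders.

```python
# Standard MIB-II system-group OIDs (RFC 1213), usable with any SNMP tool.
# The host address and community string below are hypothetical examples.

MIB_II_SYSTEM = {
    "sysDescr":    "1.3.6.1.2.1.1.1.0",
    "sysObjectID": "1.3.6.1.2.1.1.2.0",
    "sysUpTime":   "1.3.6.1.2.1.1.3.0",
    "sysContact":  "1.3.6.1.2.1.1.4.0",
    "sysName":     "1.3.6.1.2.1.1.5.0",
}

def snmpget_command(host, name, community="public"):
    """Build a Net-SNMP snmpget command line for a MIB-II system object."""
    return "snmpget -v2c -c {} {} {}".format(community, host, MIB_II_SYSTEM[name])

print(snmpget_command("10.0.0.2", "sysDescr"))
# snmpget -v2c -c public 10.0.0.2 1.3.6.1.2.1.1.1.0
```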
FA MIB 2.2 SNMP behavior The FA MIB 2.2 objects are in compliance with the FibreAlliance MIB v2.2 Specification (FA MIB2.2 Spec). For a full description of this MIB, go to: www.emc.com/microsites/fibrealliance. FA MIB 2.2 is a subset of FA MIB 4.0, which is included with HP System Insight Manager (SIM) and other products. The differences are described in "FA MIB 2.2 and 4.0 differences" (page 156). FA MIB 2.
Table 13 FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitTable Includes the following objects as specified by the FA MIB2.
Table 13 FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitContact Settable: Contact information for this connectivity unit Default: Uninitialized Contact connUnitLocation Settable: Location information for this connectivity unit Default: Uninitialized Location connUnitEventFilter Defines the event severity that will be logged by this connectivity unit. Settable only through SMU.
Table 13 FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitPortTable Includes the following objects as specified by the FA MIB2.
Table 13 FA MIB 2.2 objects, descriptions, and values (continued) Object Description Value connUnitEventTable Includes the following objects as specified by the FA MIB2.
Table 13 FA MIB 2.
Table 14 connUnitRevsTable index and description values (continued) connUnitRevsIndex connUnitRevsDescription 13 Firmware Revision for Expander (Controller A) 14 Firmware Revision for Expander (Controller B) 15 Hardware Revision for Controller A 16 Hardware Revision for Controller B External details for connUnitSensorTable Table 15 connUnitSensorTable index, name, type, and characteristic values connUnitSensorIndex connUnitSensorName connUnitSensorType connUnitSensor Characteristic 1 CPU Temp
Table 15 connUnitSensorTable index, name, type, and characteristic values (continued) connUnitSensorIndex connUnitSensorName connUnitSensorType connUnitSensor Characteristic 29 Power Supply 1 Voltage, 3.3V power-supply(5) power(9) 30 Power Supply 2 Voltage, 12V power-supply(5) power(9) 31 Power Supply 2 Voltage, 5V power-supply(5) power(9) 32 Power Supply 2 Voltage, 3.
Enterprise trap MIB
The following pages show the source for the HP enterprise traps MIB, msa2000traps.mib. This MIB defines the content of the SNMP traps that MSA 1040 storage systems generate.

-- ---------------------------------------------------------------------------
-- MSA2000 Array MIB for SNMP Traps
-- $Revision: 11692 $
-- Copyright (c) 2008 Hewlett-Packard Development Company, L.P.
-- Copyright (c) 2005-2008 Dot Hill Systems Corp.
-- Confidential computer software.
--#SUMMARY "Informational storage event # %d, type %d, description: %s"
--#ARGUMENTS {0,1,2}
--#SEVERITY INFORMATIONAL
--#TIMEINDEX 6
    ::= 3001

msaEventWarningTrap TRAP-TYPE
    ENTERPRISE hpMSA
    VARIABLES { connUnitEventId, connUnitEventType, connUnitEventDescr }
    DESCRIPTION
        "An event has been generated by the storage array.
FA MIB 2.2 and 4.0 differences
FA MIB 2.2 is a subset of FA MIB 4.0. Therefore, SNMP elements implemented in MSA 1040 systems can be accessed by a management application that uses FA MIB 4.0. The following tables are not implemented in 2.2:
• connUnitServiceScalars
• connUnitServiceTables
• connUnitZoneTable
• connUnitZoningAliasTable
• connUnitSnsTable
• connUnitPlatformTable
The following variables are not implemented in 2.2:
B Using FTP Although SMU is the preferred interface for downloading log data and historical disk-performance statistics, updating firmware, installing a license, and installing a security certificate, you can also use FTP to do these tasks. IMPORTANT: Do not attempt to do more than one of the operations in this appendix at the same time. They can interfere with each other and the operations may fail.
NOTE: You must uncompress a zip file before you can view the files it contains. To examine diagnostic data, first view store_yyyy_mm_dd__hh_mm_ss.logs. Transferring log data to a log-collection system If the log-management feature is configured in pull mode, a log-collection system can access the storage system’s FTP interface and use the get managed-logs command to retrieve untransferred data from a system log file.
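Because each retrieved archive embeds its capture time in the store_yyyy_mm_dd__hh_mm_ss.logs name, a small helper can recover that timestamp so collected files can be sorted chronologically. This is a hypothetical sketch (the helper name is ours); it assumes only the naming pattern described above.

```python
from datetime import datetime

def parse_store_log_timestamp(filename):
    """Extract the capture time embedded in a store_yyyy_mm_dd__hh_mm_ss.logs
    filename. Hypothetical helper based on the naming pattern above."""
    stem = filename[len("store_"):-len(".logs")]
    return datetime.strptime(stem, "%Y_%m_%d__%H_%M_%S")

# For example, a file retrieved on 15 July 2014 at 10:30:45:
print(parse_store_log_timestamp("store_2014_07_15__10_30_45.logs"))
```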
Downloading historical disk-performance statistics You can access the storage system’s FTP interface and use the get perf command to download historical disk-performance statistics for all disks in the storage system. This command downloads the data in CSV format to a file, for import into a spreadsheet or other third-party application. The number of data samples downloaded is fixed at 100 to limit the size of the data file to be generated and transferred.
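Once downloaded, the CSV file can be loaded into any CSV-aware tool or parsed directly. The sketch below uses an in-memory sample with hypothetical column names; the actual column names in the downloaded file are defined by the array firmware and may differ.

```python
import csv
import io

# Hypothetical sample of the downloaded CSV; real column names may differ.
sample = """durable-id,timestamp,iops
disk_01.01,2014-07-15 10:00:00,152
disk_01.02,2014-07-15 10:00:00,147
"""
rows = list(csv.DictReader(io.StringIO(sample)))
print(len(rows))              # number of data samples parsed
print(rows[0]["durable-id"])  # identifier of the first disk
```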
Updating firmware
You can update the versions of firmware in controller modules, expansion modules (in drive enclosures), and disks.
TIP: To ensure success of an online update, select a period of low I/O activity. This helps the update complete as quickly as possible and avoids disruptions to hosts and applications due to timeouts. Attempting to update a storage system that is processing a large, I/O-intensive batch job will likely cause hosts to lose connectivity with the storage system.
6. Enter: ftp controller-network-address
   For example: ftp 10.1.0.9
7. Log in as an FTP user.
8. Enter: put firmware-file flash
   For example: put T230R01-01.bin flash
CAUTION: Do not perform a power cycle or controller restart during a firmware update. If the update is interrupted or there is a power failure, the module might become inoperative. If this occurs, contact technical support. The module might need to be returned to the factory for reprogramming.
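The manual FTP session above can also be scripted with Python's standard ftplib. This is an untested illustration, not an HP-supplied procedure; the function name is ours, and the same caution applies: do not power cycle or restart the controller while the transfer runs.

```python
from ftplib import FTP

def put_controller_firmware(address, user, password, firmware_path):
    """Sketch of the firmware load described above: log in to the
    controller's FTP interface and 'put' the firmware file to the
    special 'flash' target. Equivalent to:  ftp <address>; put <file> flash
    Do not interrupt power or restart the controller during the transfer."""
    with FTP(address) as ftp:
        ftp.login(user, password)
        with open(firmware_path, "rb") as f:
            # 'flash' is the FTP destination for controller-module firmware
            ftp.storbinary("STOR flash", f)
```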
Updating expansion-module firmware A drive enclosure can contain one or two expansion modules. Each expansion module contains an enclosure management processor (EMP). All modules of the same product model should run the same firmware version. You can update the firmware in each expansion-module EMP by loading a firmware file obtained from the HP web download site, http://www.hp.com/support.
It typically takes 4.5 minutes to update each EMP in a D2700 enclosure, or 2.5 minutes to update each EMP in an MSA 1040 or P2000 drive enclosure. Wait for a message that the code load has completed. NOTE: If the update fails, verify that you specified the correct firmware file and try the update a second time. If it fails again, contact technical support. 9. If you are updating specific expansion modules, repeat step 8 for each remaining expansion module that needs to be updated. 10. Quit the FTP session.
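To plan a maintenance window, the per-EMP times quoted above can simply be multiplied out. A rough sketch, assuming the EMPs are updated sequentially:

```python
def emp_update_minutes(num_emps, minutes_per_emp=2.5):
    """Rough sequential update time: about 2.5 min/EMP for MSA 1040 or
    P2000 drive enclosures, about 4.5 min/EMP for a D2700 enclosure."""
    return num_emps * minutes_per_emp

# Two dual-EMP MSA 1040 drive enclosures (4 EMPs):
print(emp_update_minutes(4))        # 10.0 minutes
# Two dual-EMP D2700 enclosures (4 EMPs):
print(emp_update_minutes(4, 4.5))   # 18.0 minutes
```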
4. Either:
• To update all disks of the type that the firmware applies to, enter: put firmware-file disk
• To update specific disks, enter: put firmware-file disk:enclosure-ID:slot-number
  For example: put firmware-file disk:1:11
CAUTION: Do not power cycle enclosures or restart a controller during the firmware update. If the update is interrupted or there is a power failure, the disk might become inoperative. If this occurs, contact technical support.
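The two target forms above differ only in whether an enclosure ID and slot number are appended. A small helper (ours, purely illustrative) makes the rule explicit:

```python
def disk_firmware_target(enclosure_id=None, slot_number=None):
    """Return the FTP 'put' destination for a disk firmware update.

    With no arguments, all disks of the applicable type are targeted;
    otherwise a specific disk is addressed as disk:enclosure-ID:slot-number,
    as described in step 4 above."""
    if enclosure_id is None and slot_number is None:
        return "disk"
    return f"disk:{enclosure_id}:{slot_number}"

print(disk_firmware_target())       # -> disk
print(disk_firmware_target(1, 11))  # -> disk:1:11
```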
Installing a security certificate The storage system supports use of unique certificates for secure data communications, to authenticate that the expected storage systems are being managed. Use of authentication certificates applies to the HTTPS protocol, which is used by the web server in each controller module. As an alternative to using the CLI to create a security certificate on the storage system, you can use FTP to install a custom certificate on the system.
C Using SMI-S This appendix provides information for network administrators who are managing the storage system from a storage management application through the Storage Management Initiative Specification (SMI-S). SMI-S is a Storage Networking Industry Association (SNIA) standard that enables interoperable management for storage networks and storage devices.
• Software Inventory subprofile
• Block Server Performance subprofile
• Copy Services subprofile
• Job Control subprofile
• Storage Enclosure subprofile (if expansion enclosures are attached)
• Disk Sparing subprofile
• Object Manager Adapter subprofile
The embedded SMI-S provider supports:
• HTTPS using SSL encryption on the default port 5989, or standard HTTP on the default port 5988. The two ports cannot be enabled at the same time.
NOTE: Port 5989 and port 5988 cannot be enabled at the same time. The namespace details are given below.
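The port choice above can be captured in a small helper (a hypothetical name of ours): HTTPS uses the default port 5989 and plain HTTP uses the default port 5988, and only one of the two can be enabled at a time.

```python
def smis_url(host, use_ssl=True):
    """Return the SMI-S endpoint URL for the embedded provider:
    HTTPS on default port 5989, or standard HTTP on default port 5988."""
    return f"https://{host}:5989" if use_ssl else f"http://{host}:5988"

print(smis_url("10.1.0.9"))                 # -> https://10.1.0.9:5989
print(smis_url("10.1.0.9", use_ssl=False))  # -> http://10.1.0.9:5988
```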
Table 17 Supported SMI-S profiles (continued) Profile/subprofile/package Description Power Supply profile Specializes the DMTF Power Supply profile by adding indications. Profile Registration profile Models the profiles registered in the object manager and associations between registration classes and domain classes implementing the profile. Software subprofile Models software or firmware installed on the system.
In a dual-controller configuration, both controller A and B alert events are sent via controller A’s SMI-S provider. The event categories in Table 18 pertain to FRU assemblies and certain FRU components.
Table 19 Life cycle indications (continued)
Profile or subprofile | WQL or CQL query
Masking and Mapping | SELECT * FROM CIM_InstCreation WHERE SourceInstance ISA CIM_AuthorizedSubject
Masking and Mapping | SELECT * FROM CIM_InstCreation WHERE SourceInstance ISA CIM_ProtocolController
Masking and Mapping | SELECT * FROM CIM_InstCreation WHERE SourceInstance ISA CIM_ProtocolControllerForUnit
Multiple Computer System | SELECT * FROM CIM_InstCreation WHERE SourceInstance ISA CIM_C
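The instance-creation filters listed in Table 19 all share one shape, varying only in the source class. A helper (ours, not part of the provider) can generate them:

```python
def inst_creation_filter(cim_class):
    """Build the WQL life cycle indication filter for instance-creation
    events, following the pattern shown in Table 19."""
    return ("SELECT * FROM CIM_InstCreation "
            f"WHERE SourceInstance ISA {cim_class}")

print(inst_creation_filter("CIM_AuthorizedSubject"))
```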
create user level manage username
3. Type this command: set user username interfaces wbi,cli,smis,ftp
Listening for managed-logs notifications
For use with the storage system’s managed logs feature, the SMI-S provider can be set up to listen for notifications that log files have filled to the point that they are ready to be transferred to a log-collection system. For more information about the managed logs feature, see "About managed logs" (page 31). To set up SMI-S to listen for managed logs notifications: 1.
D Administering a log-collection system A log-collection system receives log data that is incrementally transferred from a storage system whose managed logs feature is enabled, and integrates that data for display and analysis. For information about the managed logs feature, see "About managed logs" (page 31). Over time, a log-collection system can receive many log files from one or more storage systems. The administrator organizes and stores these log files on the log-collection system.
Storing log files It is recommended to store log files hierarchically by storage-system name, log-file type, and date/time. Then, if historical analysis is required, the appropriate log-file segments can easily be located and can be concatenated into a complete record.
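The recommended hierarchy can be sketched with a small path-building helper. The layout below (system name, then log-file type, then date/time) follows the recommendation above; the function name and directory roots are ours, so adapt them to your site's conventions.

```python
from pathlib import Path

def log_storage_path(root, system_name, log_type, timestamp):
    """Build a hierarchical storage path for a retrieved log file:
    <root>/<storage-system name>/<log-file type>/<date-time>.
    Hypothetical layout following the recommendation above."""
    return Path(root) / system_name / log_type / timestamp

p = log_storage_path("/var/logs", "msa1040-01", "store", "2014_07_15__10_30_45")
print(p)
```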
Glossary
2U12 An enclosure that is two rack units in height and can contain 12 disks.
2U24 An enclosure that is two rack units in height and can contain 24 disks.
Additional Sense Code/Additional Sense Code Qualifier See ASC/ASCQ.
Advanced Encryption Standard See AES.
AES Advanced Encryption Standard. A specification for the encryption of data using a symmetric-key algorithm.
Air Management Sled See AMS.
ALUA Asymmetric Logical Unit Access.
AMS For a 2U12 or 2U24 enclosure, Air Management Sled.
compatible disk A disk that can be used to replace a failed member disk of a vdisk because it both has enough capacity and is of the same type (enterprise SAS or midline SAS) as the disk that failed. See also available disk, dedicated spare, dynamic spare, and global spare.
complex programmable logic device See CPLD.
Configuration Application Programming Interface See CAPI.
controller A (or B) A short way of referring to controller module A (or B).
Dynamic Host Configuration Protocol See DHCP.
dynamic spare An available compatible disk that is automatically assigned, if the dynamic spares option is enabled, to replace a failed disk in a vdisk with a fault-tolerant RAID level. See also available disk, compatible disk, dedicated spare, and global spare.
EC Expander Controller. A processor (located in the SAS expander in each controller module and expansion module) that controls the SAS expander and provides SES functionality.
image ID A globally unique serial number that identifies the point-in-time image source for a volume. All volumes that have identical image IDs have identical data content, whether they are snapshots or stand-alone volumes.
initiator See host.
I/O Manager A MIB-specific term for a controller module.
I/O module See IOM.
intrinsic methods Methods inherited from CIM and present in all classes, such as getclass, createinstance, enumerateinstances, and associatorNames in SMI-S.
IOM Input/output module.
metadata Data in the first sectors of a disk drive that stores all disk-, vdisk-, and volume-specific information, including vdisk membership or spare identification, vdisk ownership, volumes and snapshots in the vdisk, host mapping of volumes, and results of the last media scrub.
MIB Management Information Base. A database used for managing the entities in SNMP.
mount To enable access to a volume from a host OS. Synonyms for this action include present and map. See also host, map/mapping, and volume.
replication snapshot A special type of snapshot, created by the remote replication feature, that preserves the state of data of a replication set's primary volume as it existed when the snapshot was created. For a primary volume, the replication process creates a replication snapshot on both the primary system and, when the replication of primary-volume data to the secondary volume is complete, on the secondary system.
SMART Self-Monitoring Analysis and Reporting Technology. A monitoring system for disk drives that tracks reliability indicators for the purpose of anticipating disk failures and reporting those potential failures.
SMI-S Storage Management Initiative - Specification. The SNIA standard that enables interoperable management of storage networks and storage devices. The interpretation of CIM for storage. It provides a consistent definition and structure of data, using object-oriented techniques.
UTC Coordinated Universal Time. The primary time standard by which the world regulates clocks and time. It replaces Greenwich Mean Time.
UTF-8 UCS transformation format - 8-bit. A variable-width encoding that can represent every character in the Unicode character set; used for the CLI and WBI interfaces.
vdisk A virtual disk comprising the capacity of one or more disks. The number of disks that a vdisk can contain is determined by its RAID level.
vdisk spare See dedicated spare.
Index Symbols * (asterisk) in option name 14 A activity progress interface 81 ALUA 20 array See system asterisk (*) in option name 14 B base for size representations 28 bytes versus characters 28 C cache configuring auto-write-through triggers and behaviors 51 configuring host access to 51 configuring system settings 50 configuring volume settings 56 certificate using FTP to install a security 165 CHAP adding or modifying records 76 configuring 38, 47 configuring for iSCSI hosts 76 deleting records 76 ov
configuring with Configuration Wizard 37 sending a test message 86 event severity icons 98 expansion module properties 119 expansion port properties 118 explicit mapping 20 F fan properties 115 firmware about updating 33 using FTP to update controller module 160 using FTP to update disk drive 163 using FTP to update expansion module 162 using WBI to update controller module 79 using WBI to update disk 81 using WBI to update expansion module 80 firmware update, monitoring progress of 81 firmware update, par
management interface services configuring 40 configuring with Configuration Wizard 36 mapping volumes See volume mapping masked volume 20 master volumes about 22 maximum physical and logical entities supported 97 metadata clearing disk 84 MIB See SNMP missing LUN response configuring 51 modified snapshot data, deleting about 23 N network port 35 network port properties 116 network ports configuring 48 configuring with Configuration Wizard 35 NTP about 29 configuring 46 O Out port properties 118, 120 P pa
sign out, auto setting user 43, 45 viewing remaining time 14 signing in to the WBI 13 signing out of the WBI 14 single-controller system data-protection tips 31 size representations about 28 replication snapshot 122 SMART configuring 49 SMI-S architecture 168 Array profile supported profiles and subprofiles 167 Block Server Performance subprofile 170 CIM alerts 170 components 167 configuring 172 embedded array provider 167 implementation 168 life cycle indications 171 managed-logs notifications 173 profile
maximum that can sign in 14 modifying 44 removing 45 utility priority configuring 53 V vdisk aborting scrub 88 aborting verification 88 changing name 55 changing owner 55 configuring 54 configuring drive spin down 55 creating 61 creating with the Provisioning Wizard 59 expanding 87 removing from quarantine 88 scrubbing 88 starting a stopped 134 stopping 133 verifying redundant 87 viewing information about 100 vdisk health values 99, 100 vdisk performance graphs 101 vdisk properties 95, 100 vdisk reconstruc