HP StorageWorks P9000 Performance Advisor Software User Guide This document describes how to use the HP StorageWorks P9000 Performance Advisor Software product (P9000 Performance Advisor), and includes information about user tasks and troubleshooting. This document is intended for users and HP service providers who have knowledge of the HP StorageWorks XP and P9000 disk arrays hardware, software, and storage systems.
© Copyright 2012 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents 1 Introduction to P9000 Performance Advisor ......................................... 15 Overview ................................................................................................................................. 15 2 Working with the P9000 Performance Advisor GUI .............................. 17 Introduction .............................................................................................................................. Title bar ......................................
Scheduling configuration data collection ............................................................................... Configuration collection schedules ................................................................................. Deleting configuration data collection schedules ..................................................................... Collecting performance data ......................................................................................................
Dashboard statistics .......................................................................................................... Component levels in the Statistics section ...................................................................... Dashboard statistics and threshold metrics ..................................................................... Dashboard busiest consumers ............................................................................................ Dashboard charts ...................
Determining disk space requirements ......................................................................................... 188 9 Viewing XP and P9000 disk array components .................................. 191 Introduction ............................................................................................................................ Viewing performance summary ................................................................................................
Ext-RG(s) navigation path ............................................................................................ Drive types navigation path ......................................................................................... Custom groups navigation path ................................................................................... Searching for components ................................................................................................. Viewing charts .......................
13 Troubleshooting issues for components associated with applications .... 347 Troubleshooting using real-time performance data from XP and P9000 disk arrays .......................... Introduction ..................................................................................................................... RealTime screen ............................................................................................................... Starting real-time performance data collection .....................
Array performance report .................................................................................................. Total I/O Rate report .................................................................................................. Total I/O Rate by hour of day report ............................................................................ Total I/O Rate Detail report ......................................................................................... Read/Write Ratio report ..........
Glossary .......................................................................................... 465 Index ...............................................................................................
Figures 1 P9000 Performance Advisor Dashboard .................................................................... 17 2 License screen ........................................................................................................ 23 3 Array View screen ................................................................................................... 54 4 Configuration Data Collection ..................................................................................
33 Host Based screen .................................................................................................. 362 34 HP StorageWorks P9000 Application Performance Extender Software screen ................ 376 35 HP P9000 Remote Web Console screen .................................................................. 385 36 Example of an SLPR .............................................................................................. 399 37 Example of a CLPR ...........................................
Tables 1 License management during installation or upgrade .................................................... 22 2 Meter based Term licenses for P9500 array 53036 with 105 TB-Days capacity ............... 49 3 Meter based Term licenses for P9500 array 53036 with negative TB-Days capacity ......... 49 4 Meter based Term licenses for P9500 array 53036 ..................................................... 50 5 Group Details screen ..............................................................................
33 XP1024 ............................................................................................................... 436 34 XP12000 ............................................................................................................. 436 35 XP10000 and SVS200 .......................................................................................... 437 36 XP24000 ............................................................................................................. 437 37 XP20000 ........
1 Introduction to P9000 Performance Advisor Overview HP StorageWorks P9000 Performance Advisor Software collects, monitors, and displays the performance of XP and P9000 disk arrays. P9000 Performance Advisor collects performance data for individual components such as LDEV, CHIP/CHA, ACP/DKA, DKC, and MP blades (applicable only to P9000 disk arrays).
P9000 Performance Advisor also provides P9000Watch, a troubleshooting tool that helps you to troubleshoot performance issues of the XP and the P9000 disk arrays. You can also launch the following: • P9000 Performance Advisor from the HP StorageWorks P9000 Tiered Storage Manager Software. For more information, see “Launching P9000 Performance Advisor from P9000 Tiered Storage Manager” on page 377. • P9000 Application Performance Extender from P9000 Performance Advisor.
2 Working with the P9000 Performance Advisor GUI Introduction The P9000 Performance Advisor screen has the following sections: • Title bar • Left pane • Right pane The left pane and the title bar are common to all the P9000 Performance Advisor screens. The Dashboard screen appears as soon as you log on to P9000 Performance Advisor. The main functionalities of P9000 Performance Advisor can be accessed using the respective links in the left pane.
• Help: Click Help to launch the P9000 Performance Advisor help. • Sign Out: Click Sign Out if you want to log off from P9000 Performance Advisor. • HP P9000 APEX: Click HP P9000 APEX to launch P9000 Application Performance Extender from the P9000 Performance Advisor GUI.
Right pane The right pane displays the screen based on the menu that you select in the left pane. You can select related options on these screens to achieve the desired output. A tool tip is provided for every screen element, which provides a brief description of the screen element. The right pane also displays the Chart Work Area for those screens that require viewing the performance graphs for selected components.
screen sorts in alphabetical order. The Value column sorts numeric values first, followed by alphabetic values. Resizing columns To resize a column width in a table: 1. Place the cursor of your pointing device on the column separator. The pointer or cursor changes as shown in the following image. 2. Press and hold the mouse button, and drag the column separator to either side to increase or decrease the column width.
3 Managing licenses for XP and P9000 disk arrays This chapter discusses the following topics: • “Introduction” on page 21 • “Instant-on license on P9000 Performance Advisor installation” on page 24 • “Instant-on license expiration” on page 25 • “Grace period expiration” on page 27 • “P9000 Performance Advisor licenses” on page 28 • “Generating licenses” on page 36 • “Installing licenses” on page 37 • “Viewing aggregate License status” on page 40 • “Viewing status for individual licenses” on page 40 • “Re
So, usable capacity = Internal LDEVs - (External Volumes + Virtual Volumes) Table 1 License management during installation or upgrade Installation or upgrade License management When you install P9000 Performance Advisor v5.4 for the first time You are provided an Instant-on license, which is automatically enabled after installation. The Instant-on license (trial license) is provided with every instance of P9000 Performance Advisor.
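The usable-capacity formula above can be expressed as a small helper function. This is an illustrative sketch only; the function name and the assumption that all values are in TB are ours, not part of the product:

```python
def usable_capacity_tb(internal_ldevs_tb, external_volumes_tb, virtual_volumes_tb):
    """Usable capacity = Internal LDEVs - (External Volumes + Virtual Volumes).

    All arguments are capacities in TB; the names are illustrative.
    """
    return internal_ldevs_tb - (external_volumes_tb + virtual_volumes_tb)

# For example, 100 TB of internal LDEVs with 20 TB of external volumes
# and 5 TB of virtual volumes leaves 75 TB of usable capacity.
print(usable_capacity_tb(100, 20, 5))  # -> 75
```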
License screen You can add new licenses and view the status of existing licenses on the License screen. Figure 2 on page 23 shows the License screen that appears when you click PA and DB Settings > License in the left pane. NOTE: The License screen in P9000 Performance Advisor displays only the internal raw disk capacity of the XP disk arrays and the usable capacity of the P9500 disk arrays.
Screen elements Description In this section, view the status of all the Permanent licenses installed on P9000 Performance Advisor. It displays the aggregate of all valid license capacities for each XP or P9000 disk array monitored by P9000 Performance Advisor. If a Meter based Term license is installed for a P9000 disk array in addition to the Permanent license, the status of the installed TB-Days is also displayed (for example, 50TB, 100TB-Days).
NOTE: The term, monitored XP or P9000 disk arrays, refers to those XP or P9000 disk arrays for which at least one round of configuration data collection is complete. The term, unmonitored XP or P9000 disk arrays refers to those XP or P9000 disk arrays for which configuration data collection is not yet performed. • It is applicable across all the XP and the P9000 disk arrays that are monitored by the current instance of P9000 Performance Advisor. It is not bound to a specific XP or P9000 disk array.
• It initiates a grace period of 60 days for all the monitored XP and P9000 disk arrays. During the grace period, you can monitor unlimited internal raw disk capacities of multiple XP disk arrays and usable capacities of multiple P9000 disk arrays. You can also perform all the P9000 Performance Advisor related operations on the XP and the P9000 disk arrays.
Screen elements Description License capacity (TB) Displays the aggregate capacity of all valid license keys installed. License status Displays the current status of the license, as Installed. End Date Displays Never, as Permanent license is for an unlimited duration. P9000 Performance Advisor begins to monitor the XP or the P9000 disk array for the new installed license (Permanent).
For P9000 Performance Advisor to continue configuration collection for additional internal raw disk capacities on the monitored XP disk arrays or usable capacities on the monitored P9000 disk arrays, install Permanent licenses on P9000 Performance Advisor for each of the XP or the P9000 disk arrays. Contact your HP representative to procure the additional licenses.
IMPORTANT: • It is mandatory that you have the P9000 Performance Advisor registration number to generate a Permanent frame license. This registration number is included in the product entitlement certificate that is provided with every P9000 Performance Advisor License To Use (LTU) purchased. For more information on generating licenses, see “Generating licenses at the HPAC license key website” on page 36.
For more information on generating and installing Meter based Term license, see “Generating licenses at the HPAC license key website” on page 36 and “Installing licenses” on page 37. Meter based Term license requirement Meter based Term licenses are useful when you want to monitor an additional usable capacity for a defined duration or when there is an unplanned surge in the usable capacity that might subsequently reduce. For steady state license requirements, use Permanent licenses.
So, 50TB usable capacity is monitored every day beginning December'10 for the next 39 days. After the spike in usable capacity reduces to 75TB, P9000 Performance Advisor uses the existing Permanent license that is already installed. So, the company has managed the short duration spike in usable capacity with Meter based Term license and also retained the Permanent license to monitor the existing 75TB usable capacity.
At the time of installing the Meter based Term license, if the usable capacity is within the Permanent licensed capacity, the installed TB-Days remain dormant until the usable capacity exceeds the Permanent licensed capacity. They are activated only after the Permanent license is completely used. The TB-Days are used for the duration when the usable capacity exceeds the installed Permanent licensed capacity and the exceeded capacity can be managed by the installed TB-Days.
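The dormancy rule above can be sketched as follows: only the capacity above the Permanent licensed capacity draws on the installed TB-Days. The names and the one-day granularity are illustrative assumptions, not product behavior taken verbatim:

```python
def capacity_drawing_on_tb_days(usable_tb, permanent_tb):
    """Capacity (TB) that draws on the Meter based Term balance for one day.

    While the usable capacity fits within the Permanent licensed capacity,
    the installed TB-Days remain dormant (the function returns 0.0).
    """
    excess = usable_tb - permanent_tb
    return max(excess, 0.0)

print(capacity_drawing_on_tb_days(45.0, 50.0))  # -> 0.0 (TB-Days stay dormant)
print(capacity_drawing_on_tb_days(60.0, 50.0))  # -> 10.0 (10TB-Days used that day)
```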
Column Headings - License Status section Description End Date Displays Never. This is because the Permanent license, which is for an unlimited duration, is currently active. Consider that the usable capacity exceeds the 50TB Permanent licensed capacity by 10TB in the first half of 12/03/2010. As a result, the 90TB-Days are activated and P9000 Performance Advisor uses 10TB-Days, and updates the following fields after 1:00 PM on the same day.
After 1:00 PM on 12/11/2010 (Day 1 of grace period), the following fields on the License screen - License Status section display: Column Headings - License Status section Description License Capacity Displays 50TB, 0TB-Days. This is because the 90TB-Days are completely used by 12/11/2010. Term (Days) Displays 0. Zero days, as there are no TB-Days to use. License Status Displays Capacity Insufficient. End Date Displays Expired.
After 1:00 PM, the above fields are updated to display: • License Capacity: 50TB, +1000TB-Days • License Status: Installed • Term (Days): 10 • End Date: 12/11/2010 6. If 90.5TB is monitored on the same day, P9000 Performance Advisor considers 91TB-Days and again updates the following fields to reflect the latest data, which is as follows: • License Capacity: 50TB, +909TB-Days • License Status: Installed • Term (Days): 9 • End Date: 12/10/2010 Nine days count from 12/02/2010 If 85.
valent to 3TB-Days is monitored on 11/30/2010, followed by 2TB-Days on 12/01/2010. As a result, the 5TB-Days end by 12/01/2010. Example scenario 5 Consider the following scenario: 1. A P9000 disk array has a usable capacity of 25TB. 2. A Permanent license is installed on 11/28/2010 to monitor the 25TB usable capacity. 3. 12TB-Days are installed on the same day, because you plan to use another 1TB usable capacity every day later for 12 days.
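Scenario 5's arithmetic (12TB-Days covering an extra 1TB a day for 12 days) generalizes to a simple division, with fractions of a TB rounded up to a whole TB-Day as the rounding examples in this chapter show. A sketch with illustrative names:

```python
import math

def remaining_term_days(tb_days_balance, daily_excess_tb):
    """Days a Meter based Term balance lasts when the usable capacity
    exceeds the Permanent licensed capacity by daily_excess_tb each day.

    Fractions of a TB are rounded up to a whole TB-Day. Returns None
    while the balance is dormant (no excess capacity).
    """
    per_day = math.ceil(daily_excess_tb)
    if per_day <= 0:
        return None
    return tb_days_balance // per_day

print(remaining_term_days(12, 1))    # -> 12, as in scenario 5
print(remaining_term_days(12, 1.5))  # -> 6 (1.5TB counts as 2TB-Days a day)
```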
2. Click Generate a License Key in the Main Menu section. The Generate License Key screen appears. 3. Enter the registration number in the Registration Number or Product Authorization Key box. Ensure that the registration number is the same as that mentioned in the product entitlement certificate. 4. Click Next >>. The Array information input screen appears. The following details are displayed: 5.
1. Click PA and DB Settings > License in the left pane. The License screen appears. 2. Click Browse in the Add New License File section. 3. Navigate to the folder where the license (.dat) file is stored. 4. Select the license that you want to add and click Open. The license file appears in the File Name box.
5. Click Add License. CAUTION: After the licenses are installed, do not modify the date and time on the management station where P9000 Performance Advisor is installed. Modifying them may result in inaccurate configuration and performance collections. The following details are updated in the View License File Status section.
Click Refresh to view the latest data on the License screen.
3. Click View Details. The View License Detail section appears. The following image shows the license details for 53036, which belongs to the P9500 Disk Array Type. In addition to the details displayed in the License Status section, the following details specific to the installed license appear in the View License Detail section: Screen elements Description Displays the license type.
Screen elements Description Displays the available license capacity. • If you select an XP disk array record, this column always displays the Installed License Capacity value. • If you select a P9000 disk array record whose usable capacity is monitored using only a Permanent license, this column displays the Installed License Capacity value. • In case of Meter based Term licenses: 1.
Screen elements Description If you select an XP or a P9000 disk array record whose usable capacity is monitored using only a Permanent license, this column is blank as the Permanent license is for an unlimited duration. In case of Meter based Term licenses: • If you select a P9000 disk array record for which both the Permanent license and TB-Days of Meter based Term license are installed, and the installed TB-Days are dormant, this column is blank.
Viewing license history The View License History section displays the list of events generated on the View License screen for each license key. The time stamp when an event occurred is also displayed for each event record. You can search for events generated during a specific duration. Provide the start and end date and time, and click Find to view the events generated during the selected duration.
Because this is a short term unplanned surge in storage requests, you can install TB-Days of Meter based Term license to monitor the additional usable capacity for the specified duration. To monitor 25TB for five days (at the rate of 25TB a day), generate and install 125TB-Days on 11/30/2010.
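The sizing rule in this example (25TB a day for five days requires 125TB-Days) is a straight multiplication. A sketch, with illustrative names:

```python
def tb_days_to_install(extra_tb_per_day, days):
    """TB-Days needed to cover extra_tb_per_day of additional usable
    capacity for the given number of days."""
    return extra_tb_per_day * days

print(tb_days_to_install(25, 5))  # -> 125, as in the example above
```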
The daily reduction in TB-Days equals the additional usable capacity that caused the grace period to start. NOTE: Reduction or negative counting is only applicable for the installed TB-Days. It is not applicable for Permanent licenses. After the 60-day grace period, P9000 Performance Advisor stops configuration data collection for any additional usable capacities. It continues performance data collection for the existing usable capacity. Example scenario 8 Consider the following points: 1. 2.
When fractions of a TB of additional usable capacity are monitored and the installed TB-Days are not sufficient, P9000 Performance Advisor considers it a capacity violation and enters a grace period of 60 days. In such a case, if you install the appropriate TB-Days, P9000 Performance Advisor ends the grace period for that particular P9000 disk array. Example scenario 8 Consider the following points: 1. 2. 3. 4. 5. A P9000 disk array has a usable capacity of 75TB.
Violating licensed capacity After 60 days of grace period, P9000 Performance Advisor considers it as a capacity violation and stops configuration data collection for any additional internal raw disk or usable capacity. It continues the existing performance data collection. During a capacity violation phase, if you do one of the following: • Install a Permanent license for P9000 Performance Advisor to monitor the XP disk array.
NOTE: Once a Meter based Term license is removed, it cannot be added again. However, another Meter based Term license can be installed.
2. In the View License Status section, select the P9000 disk array record for which you want to remove the Meter based Term license, and click Remove License. 3. In the Remove License dialog box, select METER from the License Type list. 4. Click Remove License(s). The Confirm Delete dialog box appears. 5. Click Yes. The message indicating the removal of the license appears on top of the Remove License dialog box.
If the Permanent license is removed while the Meter based Term license has a positive count, the array enters the grace period and the Meter based Term license does not work.
4 Collecting configuration and performance data This chapter discusses the following topics: • “Introduction” on page 53 • “Configuring host information” on page 55 • “Configuration data” on page 58 • “Performance data” on page 71 Introduction P9000 Performance Advisor interacts with the XP and the P9000 disk arrays through hosts that have the operating system specific P9000 Performance Advisor host agents installed.
NOTE: • P9000 Performance Advisor also collects the real-time performance data from the XP and the P9000 disk arrays. For more information, see “Troubleshooting using real-time performance data from XP and P9000 disk arrays” on page 347. • To distinguish the external parity group from the normal parity group in case of an outband collection, the external parity group number is displayed in the range of 101 to 16484.
Screen elements Description Available XP/P9000 disk arrays Displays the XP and the P9000 disk arrays that communicate with P9000 Performance Advisor through the connected hosts. Each XP or P9000 disk array is represented as an icon. Their DKC number and the model type are displayed on the icon. Click an XP or a P9000 disk array icon to view the associated command device records highlighted in the Configuration Collection table.
Tasks you can perform under the Host Information tab • “Requesting host agent updates” on page 56 • “Removing host agent information from P9000 Performance Advisor” on page 58 Related Topics • “Collecting configuration data” on page 62 • “Scheduling configuration data collection” on page 65 • “Performance data” on page 71 Requesting host agent updates Prerequisites Ensure that the following prerequisites are met: • Ensure that the version of the host agent installed on the host matches the version of
2. Click the Host Information tab. The details for the P9000 Performance Advisor host agents appear in the Host Information table: Screen elements Description Host Displays the system name of the host. OS Displays the operating system installed on the host and its current version. HA Version Displays the version of the host agent installed on the host. RMLib Version Displays the RMLIB version installed on the host.
• “Removing host agent information from P9000 Performance Advisor” on page 58 • “Collecting configuration data” on page 62 • “Scheduling configuration data collection” on page 65 • “Performance data” on page 71 • “Starting real-time performance data collection” on page 350 Removing host agent information from P9000 Performance Advisor IMPORTANT: • You can remove a host agent record when its status shows as Requested or Received under Status.
• One-time configuration data collection: Use this collection type if you want to collect the configuration data only once. Any new configuration changes to the XP and the P9000 disk arrays, such as new components that are added after the collection completes, are not captured in the existing configuration collection. • Scheduled configuration data collection: Use this collection type if you want to schedule the configuration data collection periodically on an hourly, daily, weekly, or monthly basis.
Screen elements Description DeviceFile Displays the device file for the command device. Last Collection TimeStamp Displays —, if the configuration collection is not yet initiated for an XP or a P9000 disk array. After the configuration data collection is initiated, the Last Collection displays the date and time when P9000 Performance Advisor receives the complete configuration data from the XP or the P9000 disk array.
• Components 101-16483 represent the external RAID group information collected using the outband mode. • The outband mode of configuration data collection is only supported for the P9500 Disk Array and the XP24000, XP20000, XP12000, and XP10000 Disk Arrays. • With every outband mode of configuration data collection for an XP or a P9000 disk array, P9000 Performance Advisor gets the latest internal raw disk capacity of that XP disk array or the latest usable capacity of that P9500 disk array.
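The numbering convention above makes it easy to tell external parity groups apart programmatically. This is only a sketch; note that this chapter cites both 16483 and 16484 as the upper bound of the range, so the exact bound used below is an assumption:

```python
def is_external_parity_group(group_number):
    """True if a parity group number from an outband collection falls in
    the range reported for external RAID groups (101 upward; the upper
    bound assumed here is 16483)."""
    return 101 <= group_number <= 16483

print(is_external_parity_group(100))  # -> False (normal parity group)
print(is_external_parity_group(250))  # -> True (external RAID group)
```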
One-time configuration data collection Prerequisites Ensure that the following prerequisites are met before you start the configuration data collection for the XP and the P9000 disk arrays. These prerequisites are common for both the one-time and scheduled configuration data collection: • Start the configuration data collection only when a command device is created on an XP or a P9000 disk array.
5. Retain the Collection Type as Outband (default selection), if you want P9000 Performance Advisor to directly collect data from the XP or the P9000 disk array through the array SVP (not applicable for XP1024/128 Disk Arrays). Then, proceed to step 6. Select the Collection Type as Inband, if you want the RAID Manager Library to collect the configuration data from an XP or a P9000 disk array and transfer it to P9000 Performance Advisor, and proceed to the next step.
6. Based on the disk array and mode of collection that you selected, proceed as follows: If you selected an XP disk array and either of the following modes of configuration data collection: • The outband mode - In this case, manually enter the SVP IP address in the SVP IP Address text box and proceed to the next step to initiate the configuration data collection.
• “Filtering event records” on page 158 • “Configuring email and SNMP settings” on page 88 • “Starting real-time performance data collection” on page 350 • “Viewing performance summary” on page 196 • “Plotting charts” on page 262 Scheduling configuration data collection IMPORTANT: The schedule start time is set to the management station time where P9000 Performance Advisor is installed. Prerequisites For the set of prerequisites, see “Collecting configuration data” on page 62.
4. Select Collection Period as Recurring. Figure 4 on page 66 shows scheduling configuration data collection for 53036, which belongs to the P9500 Disk Array type. Figure 4 Configuration Data Collection. 5. Select one of the following as the Collection Schedule. By default, the collection is scheduled for every Sunday at 00:00 hours: • Hourly • Daily • Weekly • Monthly For more information on the above-mentioned collection schedules, see “Configuration collection schedules” on page 69.
6. Retain the Collection Type as Outband (default selection), if you want P9000 Performance Advisor to directly collect data from the XP or the P9000 disk array through the array SVP (not applicable for XP1024/128 Disk Arrays). Proceed to the next step. Select the Collection Type as Inband, if you want the RAID Manager Library to collect the configuration data from an XP or a P9000 disk array and transfer it to P9000 Performance Advisor, and proceed to the next step.
7. Based on the disk array and mode of collection that you selected, proceed as follows: If you selected an XP disk array and either of the following modes of configuration data collection: • The outband mode - In this case, manually enter the SVP IP address in the SVP IP Address text box and proceed to the next step to initiate the configuration data collection.
• “Deleting configuration data collection schedules” on page 70 • “Performance data” on page 71 • “Providing user-friendly names for XP and P9000 disk arrays” on page 92 • “Registering the XP or P9000 disk array SVP IP address in P9000 Performance Advisor” on page 92 • “Filtering event records” on page 158 • “Configuring email and SNMP settings” on page 88 • “Starting real-time performance data collection” on page 350 • “Viewing performance summary” on page 196 • “Plotting charts” on page 262 Configuratio
Collection Schedule Description Examples If the collection schedule is selected as Monthly, the Monthly Schedule appears with options for scheduling the collection on a particular date (Based on Date) or day (Based on Day) of a month. Every time the schedule is executed, P9000 Performance Advisor collects the configuration data for the last one month only. • If you want to schedule the collection on a particular date: • Select the Monthly Schedule as Based on Date, if it is not selected by default.
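To illustrate the Based on Day option, the calendar date that a day-of-month schedule (for example, the second Sunday of every month) resolves to can be computed as below. This is only a sketch of the idea; P9000 Performance Advisor performs this calculation internally:

```python
import calendar

def nth_weekday_of_month(year, month, weekday, n):
    """Day of month of the n-th given weekday (0=Monday .. 6=Sunday),
    e.g. a schedule set to the second Sunday of every month."""
    days = [d for d in range(1, calendar.monthrange(year, month)[1] + 1)
            if calendar.weekday(year, month, d) == weekday]
    return days[n - 1]

# Second Sunday of December 2010 falls on the 12th.
print(nth_weekday_of_month(2010, 12, 6, 2))  # -> 12
```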
Collecting performance data After completing the configuration data collection for the XP and the P9000 disk arrays, schedule the performance data collection for the associated components, which belong to the following component types: • DKC • Ports • RAID Groups • Ext RAID Groups • THP pools • Snapshot pools • Cont. Access Journals You can create two performance data collection schedules for an XP or a P9000 disk array, as it enables you to frequently monitor the respective components.
Initially, when performance data collection is not yet configured for the XP and the P9000 disk arrays, the following details are displayed in the Performance Collection table, under the Performance Collection tab: Screen elements Description Array Displays the DKC number of the XP or the P9000 disk array. Host ID Displays the system name of the host. Port Displays the port that is configured to communicate data between the command device on an XP or a P9000 disk array and the associated host.
Creating performance data collection schedules IMPORTANT: • Only one schedule can be created on a selected command device. For better performance, select a maximum of two command devices that belong to different ports. • A schedule cannot be created for the same XP or P9000 disk array through two different host agents. • HP recommends that you allow two minutes per 1,000 LDEVs for the management station to keep up with the collection.
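The two-minutes-per-1,000-LDEVs guidance above translates into a quick sizing estimate. A sketch; rounding each partial thousand of LDEVs up to a full block is our assumption, not a documented rule:

```python
import math

def estimated_collection_minutes(ldev_count, minutes_per_1000=2):
    """Rough time for the management station to keep up with a collection,
    following the guidance of two minutes per 1,000 LDEVs."""
    return math.ceil(ldev_count / 1000) * minutes_per_1000

print(estimated_collection_minutes(5000))  # -> 10 minutes
print(estimated_collection_minutes(1500))  # -> 4 minutes (1,500 rounds up)
```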
3. Click Create. The Create button is enabled only when you select an XP or a P9000 disk array record under the Performance Collection tab.
6. In the respective component type lists, select the check boxes for the components to collect their performance data. The following component type lists are displayed: For an XP disk array, the DKC provides data on the CHIPs, ACPs, Cache, SLPR, CLPR, and the SM. DKC For a P9000 disk array, the DKC provides data on the MP blades, in addition to the data on the Cache, CLPR, and the SM. NOTE: SLPR does not exist in the P9000 disk arrays.
Figure 5 Performance Data Collection . 1 Resource type list. 7. Set the frequency in minutes for the DKC, RAID groups, and the port performance data collection by selecting the frequency from the respective Frequency list. 8. Select the check box for Stagger Schedule if you want to stagger the data collection time at different intervals.
10. Click Save for the changes to take effect. Click Cancel, if you do not want to configure a schedule for the current selection. Click Refresh to view the updated list of performance data schedules. The new schedule starts automatically. The following table provides the subsequent changes that occur in the Performance Data section for the selected XP or the P9000 disk array record. Screen elements Description Schedule Name Displays the new schedule name. Components Displays the selected components.
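The Stagger Schedule option in step 8 spreads the collections so they do not all start at once. The actual offsets P9000 Performance Advisor uses are not documented here, so the fixed five-minute spacing in this hypothetical helper is purely an illustrative assumption:

```python
def staggered_start_minutes(component_types, offset_minutes=5):
    """Hypothetical helper: assigns each component type a start offset
    (in minutes) so collections are staggered rather than simultaneous."""
    return {ctype: i * offset_minutes for i, ctype in enumerate(component_types)}

print(staggered_start_minutes(["DKC", "Ports", "RAID Groups"]))
# -> {'DKC': 0, 'Ports': 5, 'RAID Groups': 10}
```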
• “Starting real-time performance data collection” on page 350 Enabling performance collection schedules for automatic updates You can enable the performance data collection schedules to automatically collect the performance data for newly discovered RAID groups and ports. The new RAID groups and ports in an XP or a P9000 disk array are discovered during the scheduled configuration data collection.
While creating a performance data collection schedule:
1. The RAID groups and ports are not selected from the respective component type lists.
2. Instead, the ThP, snapshot, continuous access journal volumes, or the external RAID groups are selected from the respective component type lists.
3. The Add new RAID Groups, Ports to this schedule check box is selected.
• The newly discovered RAID groups and ports are not added to this performance schedule, because the Add new RAID Groups, Ports to this schedule check box is not selected. However, if a second schedule is created, you can still select the Add new RAID Groups, Ports to this schedule check box. The new RAID groups and ports are then automatically added to the second schedule. • If a second schedule is not created, the list of new RAID groups and ports is still available for selection in the first schedule.
• “Stopping performance data collection” on page 81 • “Deleting performance data collection schedule” on page 82 • “Starting real-time performance data collection” on page 350 Editing performance data collection schedules You can add or remove components from an existing performance data collection schedule, and edit the frequency of data collection.
P9000 Performance Advisor stops the collection from the next collection cycle. The current performance data collection schedule stops only after the current data collection is complete, as per the selected collection schedule. For example, if you had configured an hourly collection at 11:00 a.m. and stopped the schedule at 11:30 a.m., the current performance data collection still continues as per the selected collection schedule and ends only at 12:00 p.m.
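The stop behavior described above can be expressed as a small calculation: the collection actually ends at the close of the cycle that is running when the stop request arrives. This is a sketch for illustration; the function name is our own.

```python
from datetime import datetime, timedelta

def collection_stop_time(cycle_start, frequency_minutes, stop_request):
    """Return when collection actually stops: the end of the
    collection cycle that is in progress when the stop request
    arrives."""
    elapsed = stop_request - cycle_start
    cycles_done = elapsed // timedelta(minutes=frequency_minutes)
    return cycle_start + (cycles_done + 1) * timedelta(minutes=frequency_minutes)

# Hourly collection started at 11:00, stop requested at 11:30:
# the running cycle still completes at 12:00.
start = datetime(2012, 1, 1, 11, 0)
print(collection_stop_time(start, 60, datetime(2012, 1, 1, 11, 30)))
```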
2. Click the Performance Data tab and select the XP or the P9000 disk array record for which you want to delete the associated performance data collection schedule. 3. Click Delete. The Delete button is enabled only when you select an XP or a P9000 disk array record under the Performance Collection tab. A dialog box appears prompting you to confirm whether you want to delete the schedule. 4. Click OK. The performance data collection schedule is permanently deleted.
3. If you type y at the prompt, you are further prompted to provide the minimum and maximum Java heap size values. The minimum heap size value must be greater than or equal to 512 MB, and the maximum heap size value must be less than or equal to 2048 MB. If heap size values are already set, the current minimum and maximum heap size values are also displayed for your reference. If you type n at the prompt, the command prompt window closes. 4.
The user must run the Resize Heap tool again and reset the value to a lower size.
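The heap bounds described above (minimum at least 512 MB, maximum at most 2048 MB) can be checked before running the Resize Heap tool. This validation sketch is illustrative only; the function name and messages are assumptions, not part of the tool.

```python
def validate_heap_sizes(min_mb, max_mb):
    """Check heap values against the documented bounds:
    minimum >= 512 MB, maximum <= 2048 MB, and min <= max."""
    if min_mb < 512:
        return "minimum heap size must be at least 512 MB"
    if max_mb > 2048:
        return "maximum heap size must not exceed 2048 MB"
    if min_mb > max_mb:
        return "minimum heap size cannot exceed maximum heap size"
    return "ok"

print(validate_heap_sizes(512, 2048))   # ok
print(validate_heap_sizes(256, 1024))   # minimum heap size must be at least 512 MB
```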
Collecting configuration and performance data
5 Configuring common settings for P9000 Performance Advisor This chapter discusses the following topics: • • • • • • “Introduction” on page 87 “Configuring email and SNMP settings” on page 88 “Setting time zone for management station” on page 95 “Setting severity level” on page 94 “Registering the XP or P9000 disk array SVP IP address in P9000 Performance Advisor” on page 92 “Providing user-friendly names for XP and P9000 disk arrays” on page 92 In addition, this chapter also discusses the following topic
• Manage custom groups, where you create, view, modify, or delete the custom groups (Settings > Custom Groups). See "Managing custom groups" on page 99.
• Manage the fabricated LDEV records, where you modify or delete the incomplete LDEV records, and also replicate settings across the LDEV records (Settings > Data Grid Update). See "Managing fabricated LDEV records" on page 106.
• Manage P9000 Performance Advisor user profiles, where you create, modify, or delete the user profiles, and view their group properties (Settings
IMPORTANT: • The new email notification settings that you provide are automatically updated in the serverparameters.properties file. Hence, a manual reboot of the P9000 Performance Advisor management station is not required. • The Email Address is a mandatory field. Provide a valid destination email address that receives the email notifications when the alarms and reports are generated, or the performance data collection fails. For example, test1@xyz.
3. Configure the following settings on the Email Settings screen: SMTP Server Settings • The IP address or host name of the SMTP server that will be used for processing emails. The default SMTP server IP address is localhost. • The related port number (accepts only numbers). The default port number is 25. P9000 Performance Advisor uses the above settings to dispatch email notifications to the intended recipients when the alarms or reports are generated, or the performance data collection fails.
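The SMTP defaults described above (server localhost, port 25) can be exercised with a short sketch of how a notification might be composed and dispatched. This is illustrative only and is not the product's implementation; the sender address, recipient, and message text are assumptions.

```python
import smtplib
from email.message import EmailMessage

SMTP_SERVER = "localhost"   # default SMTP server IP address per the settings above
SMTP_PORT = 25              # default port number per the settings above

def build_alarm_email(to_addr, subject, body):
    """Compose a notification like the ones dispatched when alarms
    or reports are generated, or a collection fails."""
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["From"] = "pa-monitor@localhost"  # assumed sender address
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_alarm_email(msg):
    # Connect to the configured SMTP server and send. Not invoked
    # here because it requires a reachable SMTP server.
    with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as smtp:
        smtp.send_message(msg)

msg = build_alarm_email("admin@example.com", "PA alarm",
                        "Write Pending (%) exceeded the threshold")
print(msg["Subject"])
```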
• The name of the customer for whom the report is generated. • The name of the consultant who is associated with the customer. • The location of the XP or the P9000 disk array for which the report is generated. This information is useful if the XP or the P9000 disk array is located in a different site, away from the management station. Data Collection Email Settings • A valid destination email address, as specified under Alarm Email Settings.
• “Configuring alarms and viewing alarms history” on page 137 • “Configuration data” on page 58 • “Generating, saving, or scheduling reports” on page 327 Providing user-friendly names for XP and P9000 disk arrays P9000 Performance Advisor enables you to provide unique, user-friendly names for the monitored XP and P9000 disk arrays.
IMPORTANT: • For a P9000 disk array (such as the P9500) or for an XP24000 Disk Array, the IP address of the management station is also registered with the array SVP. • For a P9000 disk array (such as the P9500), it is recommended that you maintain separate SVP login credentials, which you can use for outband mode of configuration data collection.
4. Click Save Credentials. The SVP IP address, user name, and password are saved in the P9000 Performance Advisor database. P9000 Performance Advisor also uses these credentials to validate the connection with the P9000 disk array. NOTE: On a few occasions, the SVP IP address, user name, and password are not saved. This might be because the SVP is offline. Wait for a few minutes and try again. 5. Click Register. The SVP IP address that was saved is also registered with the management station.
NOTE: This change affects only those messages that are created after you changed the severity level. All messages that were logged before you set the severity level still remain in the P9000 Performance Advisor database and appear on the Event Log screen. To set the severity level: 1. Click PA and DB Settings > User Settings in the left pane. The User Settings screen appears. 2.
Related Topics • “Setting severity level” on page 94 • “Setting the duration to predict the LDEV response time” on page 96 Setting the duration to predict the LDEV response time You can set the duration that P9000 Performance Advisor must use to predict the average read and write response time of LDEVs. Complete the following steps to select the duration: 1. Click PA and DB Settings > User Settings in the left pane. The User Settings screen appears. 2.
1. Attempts to restart the HP StorageWorks P9000 Performance Advisor Tomcat service 'n' number of times, where 'n' indicates the retry count that is specified. By default, the retry count is set to five, which means that five attempts are made to restart the HP StorageWorks P9000 Performance Advisor Tomcat service before a notification is dispatched. For more information on specifying the retry count, see Configuring retry count on page 99. 2.
NOTE: • If you configure the SMTP parameters but do not specify a retry count, the HP StorageWorks P9000 Performance Advisor Monitor service does not attempt to restart the HP StorageWorks P9000 Performance Advisor Tomcat service. Also, it does not dispatch any notification to the intended recipients.
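The restart-and-retry behavior described above (up to a configured number of attempts, defaulting to five, before a notification is dispatched) can be sketched as a simple loop. This is an illustration of the documented behavior, not the Monitor service's actual code; the function names are our own.

```python
def restart_with_retries(restart_service, notify, retry_count=5):
    """Attempt to restart the Tomcat service up to retry_count times
    (default 5, as documented); notify recipients only if every
    attempt fails. Returns the attempt number on success, else None."""
    for attempt in range(1, retry_count + 1):
        if restart_service():
            return attempt          # succeeded on this attempt
    notify("service could not be restarted after %d attempts" % retry_count)
    return None

# Simulate a service that comes back up on the third attempt.
attempts = iter([False, False, True])
result = restart_with_retries(lambda: next(attempts), print)
print(result)  # 3
```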
1. Click PA and DB Settings > Email Settings in the left pane. The Email Settings screen appears. 2. In the Performance Advisor Monitor Settings section, provide the destination email address and the email subject. It is mandatory to provide the destination email address. You can provide multiple email addresses. 3. Click Save.
• Configure alarms on the associated LDEVs, so that P9000 Performance Advisor monitors and sends appropriate notifications to intended recipients. For more information, see "Setting threshold level" on page 142.
The following are important notes on custom groups:
• The LDEVs associated with multiple RAID groups or multiple ACPs are treated as separate groups of items. For example, if you have an LDEV associated with the RAID group 1-1 1-2, you must select 1-1 1-2 in the RAID Groups list.
4. Click Create Custom Group. The Create Custom Group button is enabled only when you select LDEV records in the Custom Groups table. The selected set of LDEV records is included in the custom group and the new custom group is listed under List of Custom Groups. You can view the custom group details by clicking View.
• P9000 XP Continuous Access Synchronous is installed on an XP24000 Disk Array (primary storage server) to create a secondary copy of the production data. The production data is located on the primary volume (P-VOL) in the same XP24000 Disk Array. The secondary copy is residing on the secondary volume (S-VOL) in an XP12000 Disk Array. • The Oracle database server is located on a P-VOL in an XP24000 Disk Array and the data is replicated onto two S-VOLs.
3. Click View. The View button is enabled only when you select LDEV records in the Custom Groups table. The View Custom Group Details screen appears providing the list of LDEVs added to the selected custom group. The following table describes the column headings in the Group Details screen. Table 5 Group Details screen Screen elements Description DKC Displays the IDs of the selected XP and P9000 disk arrays.
• LUSE Master: Displays the LDEV ID of the LUSE master, if the selected LDEV is a LUSE component. If the LDEV is not a LUSE component, this field is blank.
• Ext-Lun: Displays the following options to indicate whether or not the selected LDEV is an Ext-LUN (Ext-LDEV):
  • - (hyphen) = Normal LUN
  • E = Ext-Lun
  • P = Ext-Lun provider (the selected LDEV is used as an Ext-LUN for another XP or P9000 disk array)
• Host Group: Displays the host group name for the host.
2. To add the LDEV records to a custom group: a. Select a custom group from the list under List of Custom Groups. b. In the LDEV records table, select the check boxes for the LDEV records that you want to add to the custom group. Alternatively, use the Custom Groups filters to view a specific set of LDEV records. For more information on using filters, see "Creating custom groups" on page 100. c. Click Add. The Add button is enabled only when you select a custom group under List of Custom Groups.
Managing fabricated LDEV records P9000 Performance Advisor enables you to modify the fabricated or incomplete LDEV records that it gets from the RMLIB. These LDEV records contain no host to array connectivity data because of unknown host connections, and are displayed in a tabular format on the Data Grid Update screen. The modifications made to the fabricated LDEV records are automatically updated on all the P9000 Performance Advisor screens that display these LDEVs.
• Total No. of Records: Displays the total number of LDEV records that you can view on the Data Grid Update screen. This number is inclusive of all the LDEV records that are displayed on all the pages in the Data Grid Update screen.
• No. of Pages: Displays the total number of pages that you can view on the Data Grid Update screen.
• No. of records per page: Displays the total number of records displayed on the current page of the Data Grid Update screen.
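The relationship between these three counters is simple: the number of pages is the total record count divided by the page size, rounded up. The sketch below is illustrative; the function name is our own.

```python
import math

def page_count(total_records, records_per_page):
    """Number of pages needed to show total_records at the given
    page size, as on the Data Grid Update screen."""
    return math.ceil(total_records / records_per_page)

print(page_count(1050, 100))  # 11
```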
Related Topics • “Modifying records” on page 108 • “Applying Template” on page 110 Modifying fabricated LDEV records You can modify or delete the fabricated LDEV records, and also replicate values from an LDEV record to other LDEV records. You can perform these tasks on the LDEV records listed on the current page of the Data Grid Update screen. IMPORTANT: The LUSE components that are fabricated internally are not displayed. Use the P9000 Performance Advisor CLUI to modify such LUSE components.
3. Enter the values for the following in the text boxes located above the Data Grid Update table: • • • • • Host Target:LUN Volume Group Device File SSID Alternatively, click in the text boxes under the respective column headings in the Data Grid Update table and make the necessary changes.
3. Click Delete Records. A dialog box appears prompting you to confirm the removal of the selected LDEV records. 4. Click OK. The record is removed from the existing list of records. Related topics • "Querying for fabricated LDEV records" on page 107 • "Applying Template" on page 110 Using template IMPORTANT: • Ensure that the LDEV record used as a template immediately precedes the other set of LDEV records. • The Target:LUN and device file data cannot be replicated across the LDEV records.
You can also log in as a storageadmin, who is an administrator user of CV XP and has the same privileges as the administrator user of P9000 Performance Advisor. The Security screen appears when you click PA and DB Settings > Security in the left pane. The Security screen displays users who are authorized to use P9000 Performance Advisor and their groups.
A new user record appears under Users. By default, records are sorted in alphabetical order. Related Topics • "Changing password" on page 112 • "Deleting a user record" on page 112 • "Viewing group properties" on page 113 Changing password IMPORTANT: The password that you provide must not exceed 32 characters. To change the password for a user: 1. Click PA and DB Settings > Security in the left pane. The Security screen appears. 2. Select a user record from the list displayed under Users.
Viewing group properties
To view the properties of a group:
1. Click PA and DB Settings > Security in the left pane. The Security screen appears.
2. Select a user record from the list of records displayed under Groups.
3. Click Properties. The following details are displayed in the Group Details window:
• The group name and a brief description of the group.
• The names of the users who are members of the selected group.
4. Click Close to return to the Security screen.
Configuring common settings for P9000 Performance Advisor
6 Monitoring performance of XP and P9000 disk arrays This chapter discusses the following topics: • “Introduction” on page 115 • “Configuring dashboard threshold settings” on page 118 • “Viewing dashboard” on page 124 Introduction P9000 Performance Advisor provides a dashboard, where you can view the overall usage status of the XP and the P9000 disk arrays. The overall usage status is based on the usage of individual components.
In addition, the average usage summary for components is also derived from the set threshold duration and verified against the threshold limits set for metrics in the particular category. Thereafter, the statistics are displayed on the Dashboard screen. IMPORTANT: • The threshold duration is the period during which P9000 Performance Advisor monitors the point in time and average usage of components, and determines the overall health of the XP or the P9000 disk array.
2. Click a status icon in the Frontend, Cache, Backend, or the MP Blade (applicable only for the P9000 disk arrays) category in the XP/P9000 Array Health section to view the corresponding average usage summary of individual components in the Statistics section. For more information, see "XP/P9000 array health" on page 126.
3. Select components and associated metrics in the Statistics section to plot the corresponding usage graphs in the Chart Work Area.
4.
The Component Information section, where the busiest and least busy components are displayed. These components are associated with the corresponding port, RAID group, or MP blade selected in the Statistics section. You can plot their usage graphs in the Chart Work Area.
1. Do one of the following: Click PA and DB Settings > Threshold Setting in the left pane. OR Click Edit Threshold on the Dashboard screen. The Dashboard screen appears by default when you launch P9000 Performance Advisor or when you click Monitoring in the left pane.
3. Enter the threshold value. When you set the threshold limits, P9000 Performance Advisor verifies the usage of components against the set threshold limits. Accordingly, the appropriate status icons and the average usage summary values are displayed on the Dashboard screen. • If you have not set the threshold limit or if you do not want to view the XP or P9000 disk array overall usage data for a particular category, enter –1 or 0 in the metric text box.
the P9000 disk arrays, and the average usage summary of components, specify the threshold limit for at least one metric in the respective category. The changes you make on the Threshold Setting screen are immediately reflected on the Dashboard screen. By default, P9000 Performance Advisor retrieves data for the past six hours from the time you saved the threshold settings. It considers the management station time to calculate the threshold duration.
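As noted above, entering –1 or 0 for a metric excludes that metric from the dashboard's usage evaluation, and only positive thresholds contribute. A minimal sketch of that rule (illustrative only; the function name is our own):

```python
def metric_enabled(threshold):
    """A metric contributes to the dashboard only when its threshold
    is a positive value; -1 or 0 excludes the category's data for
    that metric, as described above."""
    return threshold > 0

print(metric_enabled(50))   # True
print(metric_enabled(-1))   # False
print(metric_enabled(0))    # False
```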
1. Go to the Component Settings section on the Threshold Setting screen. 2. From the Maximum Components list, select the maximum number of consumers you want to view in the Component Information section of the Dashboard screen. 3. Select Ascending or Descending as the Sort by Average Response Time. 4. Click Save to update the consumer settings. The maximum X busiest consumers appear in the Component Information section on the Dashboard screen.
• Cache: If the cache exceeds the defined threshold limit during the specified threshold duration for a cache metric, the status icon appears in the Cache category in the XP/P9000 Array Health section of the Dashboard screen. For example, if the cache write pending for the Write Pending (%) metric exceeds the defined threshold even once, the status icon appears in the Cache category of the XP/P9000 Array Health section.
Screen elements Description RG Util (%) The RG utilization threshold value indicates the average overall RAID group utilization that you define for an individual RAID group over the threshold duration. P9000 Performance Advisor uses this value to verify whether the average overall RAID group utilization of each RAID group is within or beyond the set threshold limit. The default threshold value is 50%. If the utilization of one RAID group exceeds the defined threshold, the status icon appears.
• Statistics (Frontend, Cache, Backend, or MP Blade): Displays the statistics of the average usage summary of individual components for the category for which you click the status icon. For example, the Statistics section displays the average usage summary of ports and CHA MPs if you click the status icon in the Frontend category for an XP disk array. If the usage of a component exceeds the defined threshold limits during the threshold duration, the corresponding status icon appears in the Statistics section.
XP/P9000 array health The following table describes the different status icons that depict the overall health of the XP and the P9000 disk arrays in the Frontend, Cache, Backend, and the MP Blade (applicable for only the P9000 disk arrays) categories. Status icon Description Critical. Indicates that the usage of at least one component has crossed the set threshold limit during the specified threshold duration. Warning.
The overall usage status of an XP or a P9000 disk array in a category is based on the usage of components in that category. The usage data is collected only on those metrics whose threshold limits are set on the Threshold Setting screen. For example, assume that you have set the threshold limit for only the RG Seq Reads (IOPS) (Avg Seq Reads) metric in the Backend category.
• The average sequential backend write tracks on individual RAID groups
• The average utilization of a RAID group
• The average utilization of an ACP/DKA pair
• The average utilization of an MP blade
IMPORTANT:
• The average CHA MPs and the DKA MPs utilization metrics are applicable only for the XP disk arrays.
• The average MP blade utilization metric is applicable only for the P9000 disk arrays.
2.
• Components shown as black text: Can include the following:
  • The components whose usage corresponding to a particular metric is at 95% of the threshold limit or higher during the specified threshold duration. The status icon in such cases appears in the appropriate category, if there are no other components that are over utilized in that category.
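The component-level logic described above can be summarized per component: usage beyond the threshold is critical, usage at 95% of the threshold or higher is a warning, and anything lower is normal. This sketch simplifies the documented behavior (it ignores the interaction with other components in the category) and the function name is our own.

```python
def component_status(usage, threshold):
    """Simplified per-component status: over the threshold is
    critical; at 95% of the threshold or higher is a warning;
    anything lower is normal."""
    if usage > threshold:
        return "critical"
    if usage >= 0.95 * threshold:
        return "warning"
    return "normal"

print(component_status(60, 50))    # critical
print(component_status(48, 50))    # warning (48 >= 47.5)
print(component_status(30, 50))    # normal
```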
Category Metrics Description RG Seq Reads (IOPS) (Avg Seq Reads), RG NonSeq Reads (IOPS) (Avg NonSeq Reads), RG Writes (IOPS) (Avg Writes): Average of the frontend sequential and random I/Os on an individual RAID group in the XP or the P9000 disk array backend.
IMPORTANT: Combined backend transfers: In Thin Provisioned environments, the overall backend transfers at the RAID group level are reported using combined backend transfer metric. For a Thin Provisioned V-Vol where the ThP pool is associated with multiple RAID groups, the backend transfers are not tracked to the corresponding RAID group level. The backend transfers contributed by all V-Vols in a ThP pool are combined and reported as combined backend transfers for each participating RAID group.
IMPORTANT: You can view the maximum X busiest consumers for only one frontend, backend, or MP blade record that you select in the Statistics section at a time. To view the maximum X busiest consumers: 1. Based on your requirement, select a record corresponding to a port, RAID group, or an MP blade in the respective Frontend, Backend, or the MP Blade Statistics section. 2. Click Show Consumers. The maximum X busiest consumers are displayed in the Components Information section.
Metrics Description Block IO MBPS The frontend throughput in MB/s read and written to the LDEV during the specified threshold duration. RG Util (%) The average of the overall RAID group utilization of an individual RAID group associated with the LDEV. Backend Transfer The I/Os between the cache and the RAID groups during the specified threshold duration. For a P9000 disk array, the average utilization of an individual MP blade by the associated consumer is displayed under the Util % column.
1. Select a record corresponding to a port, CLPR, RAID group, or an MP blade in the Frontend, Cache, Backend, or the MP Blade Statistics section, or a corresponding component record from the Components Information section. While selecting the records, press the Shift key for sequential selection or the Ctrl key for random selection of multiple component records. 2. Click Plot Chart. The Plot Chart is enabled only when you select a component record.
3. Select the check box for the metric, for which you want to view the performance or usage graph of the selected component, and click OK. P9000 Performance Advisor plots the appropriate graphs in the Chart Work Area. The duration for which the data points are plotted in the chart depends on the threshold duration specified on the Threshold Setting screen. By default, the graphs are plotted for data points collected in the last 6 hours of the management station's time.
1 High watermark level.
7 Configuring alarms and managing events This chapter discusses the following topics: • • • • “Introduction” on page 137 “Configuring alarms and viewing alarms history” on page 137 “Managing alarm history” on page 150 “Viewing events” on page 157 Introduction P9000 Performance Advisor enables you to activate alarms on components, so that timely notifications can be dispatched to intended recipients when the performance of components rise beyond a particular limit.
IMPORTANT: You can configure and activate alarms on components only if you have logged into P9000 Performance Advisor as an Administrator, or a user who is granted administrator privileges.
2. In the component selection tree under Data Source, select components on which you want to configure alarms. For more information on selecting components and related metrics, see “Selecting components and metrics” on page 265. You can also search for components on which you want to configure alarms. For more information, see “Searching for components” on page 149. 3. Click Add alarm(s).
For a new component record, the following default values are displayed in the Alarms table: • • • • • • Selected XP or P9000 disk array name under Array Selected component under Resource Selected metric category under Metric Category Selected metric under Metric 999999 under Threshold The destination email and SNMP addresses configured on the Email Settings screen (PA and DB Settings > Email Settings. If not configured, the Email Destination and SNMP Destination fields are shown blank.
• Deleting component records. For more information, see “Deleting records in the Alarms table” on page 148 If you want to configure notification and monitoring settings across component records, use the Shift key for sequential selection of records and Ctrl key for random selection of records. Filtering records based on metrics and alarm status The alarm filters are available in the Show section above the Alarms table.
the Alarms Status list. The set of RAID group records are further filtered to display only RAID group, 1–3 for the RAID Group Total IO – Frontend metric and Passive alarm status. Click Clear Filter any time while selecting values from the filter options. It removes the current selection and displays all the records in the Alarms table. Also, the current selection on the XP or P9000 disk array or component in the component selection tree is removed.
• "Applying a template" on page 147
• "Deleting records in the Alarms table" on page 148
• "Filtering records in Alarms History table" on page 153
• "Viewing graph of metric value's performance" on page 155
• "Filtering event records" on page 158
Configuring alarm notifications
Alarms are triggered and notifications are sent to selected users when the current performance value of a component crosses the set threshold level, which is also configured as the dispatch at threshold level.
2. In the Alarms table, select the component records for which you want to specify the threshold level. You can also filter component records in the Alarms table. 3. To receive an email notification, enter the email address in the text box under Email Destination. By default, email notifications are sent to administrator@localhost, which is the common destination email address for all alarm notifications.
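The threshold-crossing rule above (an alarm fires when the current value crosses the set threshold level) can be sketched as a comparison between consecutive samples. This is an assumed simplification for illustration, not the product's dispatch logic; the function name is our own.

```python
def check_alarm(previous, current, threshold):
    """An alarm fires when the value crosses the threshold: the
    previous sample was at or below it and the current sample is
    above it. Repeat-dispatch policy is intentionally ignored here."""
    return previous <= threshold < current

print(check_alarm(40, 55, 50))  # True: crossed upward
print(check_alarm(55, 60, 50))  # False: already above the threshold
print(check_alarm(40, 45, 50))  # False: still below the threshold
```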
Related Topics
• "Adding or removing metric values" on page 138
• "Setting threshold level" on page 142
• "Establishing scripts for alarms" on page 145
• "Enabling or disabling alarms" on page 146
• "Applying a template" on page 147
• "Deleting records in the Alarms table" on page 148
• "Filtering records in Alarms History table" on page 153
• "Viewing graph of metric value's performance" on page 155
• "Filtering event records" on page 158
Establishing scripts for alarms
In addition to configuring email a
Run Script C:\Temp\a.
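A script registered with the Run Script action, such as the one referenced above, might simply record the alarm details it receives. The following is a hypothetical example, not a script shipped with the product; the argument format that P9000 Performance Advisor passes to the script is not documented here, so the arguments and the log file name are assumptions.

```python
# Hypothetical alarm script (for example, saved locally and registered
# as the Run Script action for an alarm).
import sys
from datetime import datetime

LOG_FILE = "alarm_history.log"   # assumed local log location

def log_alarm(args):
    """Append the alarm details passed on the command line, with a
    timestamp, to a local log file for later inspection."""
    line = "%s %s\n" % (datetime.now().isoformat(), " ".join(args))
    with open(LOG_FILE, "a") as log:
        log.write(line)
    return line

# Invoked by the Run Script action as: python <script> <alarm details>
log_alarm(sys.argv[1:] or ["example", "alarm"])
```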
• “Filtering records in Alarms History table” on page 153 • “Viewing graph of metric value's performance” on page 155 • “Filtering event records” on page 158 Applying a template You can manually configure the threshold and dispatch settings, and alarm notification settings, or use the details from an already configured component record as a template and apply the same settings across multiple other selected records.
4. Click Apply Settings. If required, modify the alarm settings copied to the Settings section and then apply the updated settings to other records in the Alarms table. The configuration settings of the previously selected record are applied to all the other newly selected records. 5. Click Save to commit the changes. You can also select a component record and directly apply specific settings on that record, without using the Copy Template feature. 1. 2. Select the check box for that component record.
2. Select the record that you want to delete. 3. Click Delete. The records are permanently removed from the Alarms table. Once an alarm is deleted, an entry is displayed on the Event Log screen, but it is not displayed on the Alarm History screen.
3. In the Physical LDEV text box, enter the cu:ldev format of the LDEV that you want to search and click the Search icon. The component selection tree for the XP or the P9000 disk array that has the matching LDEV component automatically expands to display the LDEV highlighted for your reference. (If the component list for the selected XP or the P9000 disk array is large, you may have to use the scroll bar to navigate through the list of components to view the matching component).
While configuring an alarm, if you set threshold and dispatch settings but do not enable the alarm for a component, P9000 Performance Advisor does not monitor that component or generate an alarm when required. In every data collection cycle, P9000 Performance Advisor retrieves and compares the current performance value of a component with the set threshold value. The time when this value was retrieved and compared is shown under Time Updated.
Screen elements Description DKC/Grp (Array Name) Displays the array model to which the selected component belongs. Array Type Displays the array type to which the selected array model belongs. Metric Displays the metric for which a component is monitored. When you select the All option in the Metrics list, the alarm records configured on the selected component are displayed in the Alarms table.
Displays the status of email and SNMP notifications, and script execution. The five possible statuses are as follows:
• Status 0: Timed Out: the alarm could not be triggered within the given time (this time is specified in the Email_TimeOut field of the serverparameters.
3. Click Filter. P9000 Performance Advisor filters the existing set of records and displays only those that match the selection criteria on the Alarm History screen. The records are displayed in ascending order. For more information on the Alarms History screen, see "Alarm History screen" on page 151. Click Clear Filter any time while selecting values from the filter options. It removes the current selection and displays all the records in the Alarms History table.
Screen elements Description This list displays the following options: • Time posted (default selection): If this option is selected, the time stamps of when the records are posted on the Alarm History screen are displayed. A record for a component is first posted on the Alarm History screen when the following conditions are met: • Alarm is enabled on the component. • Performance data collection is in progress.
• The performance of a component drops below the set threshold limit. The current value of the component is displayed under Value in the Alarms History table. Click the corresponding link to view the performance value when the component crossed the threshold level or dropped below the threshold level. For a component that has crossed the threshold limit, the performance graph includes the following: • Performance value when the component crossed or dropped below the threshold level.
• “Adding or removing metric values” on page 138 • “Configuring notification and monitoring settings” on page 140 Viewing events P9000 Performance Advisor generates events in response to various activities that you perform using this application. Appropriate records are automatically displayed for all the events in the Event Log table. For instance, records are logged for events generated when a performance data collection fails or the collection schedule is restarted.
• “Deleting event records” on page 159 Filtering event records You can filter event records based on the duration in which the events were logged, and the type and severity of the events. You can also do a quick text-based search, which works only on the records displayed in the current page. For a search based on text entries: 1. Click Monitoring > Event Log in the left pane. The Event Log screen appears. By default, records for events logged in the last 24 hours are displayed. 2.
Severity level Description Critical Error Critical errors, where P9000 Performance Advisor may not function. Although you may have already set the severity level for event logging, this filter also displays the severity levels applicable to all events logged before you set the severity level. This is useful when you want to view events generated prior to setting the severity level. 5. Click Find.
8 Managing the P9000 Performance Advisor database This chapter discusses the following topics: • • • • • • “Introduction” on page 161 “Configuring database size” on page 163 “Purging data” on page 165 “Creating and viewing Export DB CSV files” on page 168 “Archiving data” on page 178 “Importing data” on page 181 • “Deleting logs for archival and import activities” on page 184 Introduction P9000 Performance Advisor uses Oracle as its database.
IMPORTANT: • You have to log on to P9000 Performance Advisor as an Administrator or a user with administrator privileges to configure, purge, archive, or import the P9000 Performance Advisor database. You also need this privilege to view or delete Export DB schedules. • Database related tools or functionalities should be executed with the same privilege that is used to install P9000 Performance Advisor. If you are trying to execute the tools, ensure that you are a member of the ORA_DBA Windows group.
• • • • • “Automatically purging data” on page 167 “Creating and viewing Export DB CSV files” on page 168 “Archiving data” on page 178 “Importing data” on page 181 “Deleting logs for archival and import activities” on page 184 Configuring database size You can increase the P9000 Performance Advisor database size based on the disk space available on the management station, where P9000 Performance Advisor is installed.
initiate Auto Grow + 3 GB), the allocated database size is automatically increased by an additional 2 GB. Simultaneously, the following prediction of the time taken for the database to grow to the specified size is also displayed under DB Configuration/Purge: Given current data storage rates, the DB will grow to the specified size in less than X hours.
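The AutoGrow behavior described above can be sketched as follows. This is an illustrative sketch only: the trigger point and the fixed 2 GB increment are taken from this section, but the function name and the exact internal logic are assumptions.

```python
# Sketch of the AutoGrow behavior (assumed logic): when the current
# database size reaches the configured trigger point, the allocated
# size is increased by a fixed 2 GB increment.
AUTOGROW_INCREMENT_GB = 2

def next_allocated_size_gb(current_size_gb, allocated_gb, trigger_gb):
    """Return the new allocated size after an AutoGrow check."""
    if current_size_gb >= trigger_gb:
        return allocated_gb + AUTOGROW_INCREMENT_GB
    return allocated_gb

# The database has grown past its 5 GB trigger point, so 2 GB is added:
print(next_allocated_size_gb(5.1, 5, 5))  # 7
```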
1. Click PA and DB Settings > Database Manager in the left pane. The Database Manager screen appears. By default, the DB Configuration/Purge tab is enabled. 2. From the Configured Maximum Database Size list, select the disk space that you want to allocate. By default, the Current Database Size displays 3 GB, which is the default disk space allocated for the P9000 Performance Advisor database after the first installation.
CAUTION: The data that is purged cannot be recovered. It is permanently deleted from the P9000 Performance Advisor database. Hence, purge data only when you are absolutely sure that the data is no longer required. Also, P9000 Performance Advisor activities, such as plotting charts and collecting data might be impacted when either the manual or auto purge is in progress. Alternatively, if you want to archive data before purging it, use the archival export functionality.
4. Click Purge. A dialog box appears prompting you to confirm the deletion of records. 5. Click Yes. P9000 Performance Advisor deletes the performance data available prior to the current specified date in the database.
• • • • • • • • • “Automatically increasing the database size (AutoGrow)” on page 163 “Manually increasing the database size” on page 164 “Manually purging the data” on page 166 “Purging older data” on page 166 “Creating and viewing Export DB CSV files” on page 168 “Archiving data” on page 178 “Importing data” on page 181 “Deleting logs for archival and import activities” on page 184 “Migrating data to another management station” on page 184 Creating and viewing Export DB CSV files P9000 Performance Advis
• The performance data collection interval time stamps. • The data for the following metrics: • RIO Read Cache Hits, RIO Reads, RIO Write Cache Hits, and RIO Writes. • SIO Read Cache Hits, SIO Reads, SIO Write Cache Hits, and SIO Writes. • CFW Reads, CFW Read Cache Hits, CFW Writes, and CFW Write Cache Hits. • DFW Count, DFW Normal Count, and DFW Sequential Count. • Total IO, Inhibit Mode IO Count, and Bypass Mode IO Count.
IMPORTANT: Since the CHIP/CHA and the ACP/DKA MPs are moved to the MP blades in the P9000 disk arrays, their MP utilization metrics are not applicable for the P9000 disk arrays. port_exportDB-array_serial_number_.csv This file includes the following details: • The XP or the P9000 disk array serial number for which the report is generated. • The port IDs on the XP or the P9000 disk array.
The MP busy time indicates the time taken by an MP blade to process the request it receives from the associated processing type. rgutil_exportDB-array_serial_number_.csv This file includes the following details: • • • • The XP or the P9000 disk array serial number for which the report is generated. The RAID group IDs on the XP or the P9000 disk array. The performance data collection interval time stamps.
Creating Export DB CSV files IMPORTANT: • If you have logged in with user privileges, you cannot schedule the export DB activity. • Only a single .csv file is created for the XP1024/128 Disk Arrays. • 020000 is the supported version for the P9000 disk arrays, such as the P9500, and for the following XP disk arrays: XP24000, XP20000, XP12000, XP10000, and the SVN Disk Arrays. • 020000 is also the supported version if you want to view the external LUN information.
3. Based on your requirement, select the Collection Period as One Time or Recurring. • If you select the Collection Period as One Time, proceed to step 4. • If you select the Collection Period as Recurring, the following schedule options are enabled: • Collection Schedule: Displays Daily, Weekly, and Monthly. Collection Schedule Description By default, Weekly is selected as the collection schedule. The corresponding Day of the Week list displays the week days.
6. Select the check box for Human Readable Format, if you want to view the data for LDEVs in the cu:ldev format. 7. Select the check box for Version Number to enable the corresponding list that displays the following supported versions based on the XP or the P9000 disk array type that you select: • 020000 • 016000 • 010600 • 010500 The following image shows scheduling the export DB activity for 53036, which belongs to the P9500 Disk Array type. 8.
9. Select the check box for the RG Utilization, if you want to view the percentage of utilization for the RAID groups. This option can be used only when the Response Time check box is selected and the supported versions are 016000 or 020000. 10. Select the check box for Display LDEV's of the Journal, if you want to view all the LDEVs that belong to a journal pool. 11. Select the Start Time and End Time, if it is a one-time export activity.
• • • • • • • • • • “Automatically increasing the database size (AutoGrow)” on page 163 “Manually increasing the database size” on page 164 “Manually purging the data” on page 166 “Purging older data” on page 166 “Automatically purging data” on page 167 “Archiving data” on page 178 “Importing data” on page 181 “Deleting logs for archival and import activities” on page 184 “Migrating data to another management station” on page 184 “Generating, saving, or scheduling reports” on page 327 Importing data to MS
the corresponding schedule details for the Export DB schedules are also displayed in the Scheduled Export DB tasks section, under the View Exported/Scheduled Exported DB Files tab. The following image shows the .csv files created for 53036 and 53046, which belong to the P9500 Disk Array type. IMPORTANT: • The name of the user who created the report is displayed under User Name.
• • • • • • “Automatically purging data” on page 167 “Archiving data” on page 178 “Importing data” on page 181 “Deleting logs for archival and import activities” on page 184 “Migrating data to another management station” on page 184 “Generating, saving, or scheduling reports” on page 327 Deleting Export DB reports and schedules IMPORTANT: You can delete a schedule record in the Scheduled Export DB tasks section, only if you have logged in to P9000 Performance Advisor as an Administrator or a user with adm
IMPORTANT: • After the data is archived, it is permanently deleted from the P9000 Performance Advisor database and the free disk space is released back to the database. If you want to use the archived data for an XP or a P9000 disk array, import the corresponding .dmp files. Also, perform a fresh configuration data collection for that XP or the P9000 disk array on the management station, where you performed the import operation.
5. Click Export. P9000 Performance Advisor archives data for the specified duration. As part of the archival process, P9000 Performance Advisor does the following: a. Displays an informational message that the export for the selected array is successfully initiated and starts exporting the data. b. Logs two records under Export data for the date and time when the archival is complete. c. Creates two .dmp files and displays their names under File Name.
• “Deleting logs for archival and import activities” on page 184 • “Migrating data to another management station” on page 184 Importing data You can import the archived data to another management station or back to the same management station from where the data was initially exported. CAUTION: • You must import the data to the same version of the management station as that of the installed P9000 Performance Advisor. For example: If you have installed P9000 Performance Advisor v5.
IMPORTANT: The following are a few important points: • After importing performance data for an XP or a P9000 disk array, ensure that you perform a fresh configuration data collection for that XP or the P9000 disk array on the target management station, as the archival process only exports the performance data.
3. Based on the XP or the P9000 disk array for which you want to import its performance data, select the relevant file from the list displayed in the Archive Import section. For example, PA53036_12OCT2008_20.07.32_1217826540130_1217826600138.DMP NOTE: translates to %PADB_HOME% in this context of importing data. 4. Click Import. Based on whether the import is for an XP or a P9000 disk array, P9000 Performance Advisor does the following: a.
• • • • • • • • • “Automatically increasing the database size (AutoGrow)” on page 163 “Manually increasing the database size” on page 164 “Manually purging the data” on page 166 “Purging older data” on page 166 “Automatically purging data” on page 167 “Creating and viewing Export DB CSV files” on page 168 “Archiving data” on page 178 “Deleting logs for archival and import activities” on page 184 “Migrating data to another management station” on page 184 Deleting logs for archival and import activities IMP
CAUTION: HP strongly recommends that you do not manually copy, or use the drag-and-drop feature to move, the PADB folder to the target management station or to another location on the source management station. This action will result in irrevocable loss of data. Use only the Backup utility provided by P9000 Performance Advisor to migrate data. IMPORTANT: • To use the Backup utility, ensure that the same version of P9000 Performance Advisor is installed on both the source and target management stations.
Space requirements • Before taking a backup of the database, make a note of the Current Database Size under the DB Configuration/Purge tab. • While restoring the database, ensure that the total available space on the disk where the database is already installed is greater than the size of the backed-up database. If the database is installed on C:\HPSS\padb, the total available free disk space on C: must be greater than the size of the database that is to be restored.
6. Click Yes to proceed. The Backup status window is displayed. Also, the details of the data being backed up are displayed in the command prompt window. To restore backed up data using the Backup utility: 1. Click Start > Programs > HP StorageWorks > Backup Utility. The Backup Utility window is displayed. 2.
• %HPSS_HOME%\bin\backuputility -backup target-path -time The format for the start and end date and time is as follows: DD-MM-YYYY,hh:mm:ss NOTE: • If you saved the P9000 Performance Advisor database to a different location during installation, navigate to that location. • The target-path that you specify must not include spaces in the file location path.
• • • • • • • • PORTs = Total number of ports for an XP or a P9000 disk array Ldev_Collection_frequency = Collection frequency for LDEVs (in seconds) Port_Collection_frequency = Collection frequency for ports (in seconds) Dkc_Collection_frequency = Collection frequency for DKC (in seconds) Ldev_Space (per Ldev) = 0.0002 MB Port_Space (per port) = 0.00008 MB Dkc_Space (per collection) = 0.
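The constants above can be combined into a rough growth estimate. A minimal sketch follows; the exact formula used by P9000 Performance Advisor is not fully reproduced here (the Dkc_Space value is truncated above), so the DKC_SPACE_MB constant and the per-day aggregation are assumptions.

```python
# Rough sketch of daily database growth from the per-collection space
# constants listed above. DKC_SPACE_MB is a placeholder because the
# guide's value is truncated; the aggregation itself is an assumption.
SECONDS_PER_DAY = 86400
LDEV_SPACE_MB = 0.0002    # per LDEV, per collection (from the guide)
PORT_SPACE_MB = 0.00008   # per port, per collection (from the guide)
DKC_SPACE_MB = 0.001      # placeholder: the guide's value is truncated

def daily_growth_mb(ldevs, ports, ldev_freq, port_freq, dkc_freq):
    """Estimated MB added to the database per day of collection."""
    ldev_mb = ldevs * LDEV_SPACE_MB * (SECONDS_PER_DAY / ldev_freq)
    port_mb = ports * PORT_SPACE_MB * (SECONDS_PER_DAY / port_freq)
    dkc_mb = DKC_SPACE_MB * (SECONDS_PER_DAY / dkc_freq)
    return ldev_mb + port_mb + dkc_mb

# 1,000 LDEVs and 16 ports, all collected every 5 minutes (300 s):
print(round(daily_growth_mb(1000, 16, 300, 300, 300), 1))
```

An estimate of this kind helps decide how large to set the Configured Maximum Database Size before AutoGrow is triggered.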
9 Viewing XP and P9000 disk array components This chapter discusses the following topics: • • • • • • “Introduction” on page 191 “Viewing performance summary” on page 196 “Viewing XP and P9000 disk array summary” on page 201 “Volume Information” on page 202 “Advisory on CLPR utilization” on page 205 “Viewing CHIP/CHA data” on page 205 • • • • • • • • • • “Viewing ACP/DKA data” on page 210 “Viewing MP blade utilization for P9000 disk arrays” on page 214 “Viewing Smart and ThP pools data for P9000 disk arr
NOTE: The CHIPs and ACPs are applicable only for the XP48, XP128, XP10000, XP12000, and the XP20000 Disk Arrays. They are replaced by the CHAs and the DKAs for the XP24000 Disk Array and the P9000 disk arrays, such as the P9500. To view the component data on the Array View screen: 1. Click Monitoring > Array View in the left pane. All the XP and the P9000 disk arrays monitored by P9000 Performance Advisor are grouped under Arrays. Custom groups, if configured, are grouped under Custom Groups. 2.
Figure 11 on page 193 shows the Array View screen for 53036, which belongs to the P9500 Disk Array type: Figure 11 Array View screen . Further, to view the performance and utilization metrics at the component level in the disk array, click the plus (+) sign for the disk array and select the component node from the list displayed. Click each component under a particular component node to view the individual performance or utilization data.
Component selection tree Description Documentation Links For XP disk arrays For P9000 disk arrays “Viewing ACP/DKA data” on page 210 Yes No “Viewing LDEV data” on page 236 Yes Yes FrontendIO Provides the list of busiest frontend LDEVs and the ports associated with the LDEVs “10 busiest LDEVs/Ports” on page 228 Yes Yes BackendIO Provides the list of busiest backend LDEVs and the ports associated with the LDEVs “10 busiest LDEVs/RAID groups” on page 229 Yes Yes Provides the following deta
Component selection tree Description Documentation Links For XP disk arrays For P9000 disk arrays “RAID Group summary” on page 231 Yes Yes “Port summary” on page 233 Yes Yes “Viewing MP blade utilization for P9000 disk arrays” on page 214 No Yes “Viewing Smart and ThP pools data for P9000 disk arrays” on page 217 No Yes Provides the following details for a RAID group: • Performance summary for all the related metrics RG Summary • Current configuration, which includes the component type an
Component selection tree Description Documentation Links For XP disk arrays For P9000 disk arrays No Yes No Yes Provides the following details for the installed CHA and the DKA: • Average performance derived from the overall average performance of all the ports in the CHIP or the RAID groups in the DKA • Average performance of individual ports for a CHA CHA/DKA • Average performance of individual RAID groups for a DKA • “Viewing CHIP/CHA data” on page 205 • “Viewing ACP/DKA data” on page 210 •
The following table describes the Performance View screen elements. The images shown are for 53036, which belongs to the P9500 Disk Array type.
Performance View screen elements Description For an XP disk array, the Bus/Path Util % group box displays the CHIP/CHA utilization and the ACP pair utilization for the cache memory bus and the shared memory bus. Bus/Path Util % group box For a P9000 disk array, the Bus/Path Util % group box for the CHIP/CHA utilization and the ACP pair utilization are displayed under the respective Frontend Total Avg group box and the Backend Total Avg group box.
Performance View screen elements Description Displays the overall average sequential and non-sequential reads, and writes for an ACP pair. For more information, see “Viewing ACP/DKA data” on page 210. In addition, the combined backend transfer value is displayed (only for XP24000 and P9000 disk arrays), which is the sum of backend transfers happening on all the ThP pools served by a particular DKA.
Performance View screen elements MP Blades Util % group box Description Displays the average utilization of an MP blade, which is calculated as the utilization of all the individual processors in the MP blade. The MP blades are grouped based on the clusters to which they belong. For more information, see “Viewing MP blade utilization for P9000 disk arrays” on page 214. NOTE: The MP blade component is not applicable for the XP disk arrays.
IMPORTANT: • The MIX CHIP displays only eight ports and four MPs, even though there are eight MPs on that board. The remaining four behave as ACP MPs. • If performance data is collected separately for the DKC, ports, and the RAID groups, through two different schedules, all the metrics display the latest data as received by the management station from either of the schedules. For more information about schedules, see “Data Collection Configuration” on page 53.
Screen elements Description Volume Information The Volume Information displays the summary of all the components for the selected XP or the P9000 disk array. A list of components and their numbers are displayed. Initially, N/A is displayed beside each component as the configuration collection has not yet been initiated. For more information on configuration summary, see “Volume Information” on page 202.
Related Topics • • • • • • • • • • • • “Viewing performance summary” on page 196 “Advisory on CLPR utilization” on page 205 “Viewing CHIP/CHA data” on page 205 “Viewing ACP/DKA data” on page 210 “Viewing MP blade utilization for P9000 disk arrays” on page 214 “Viewing Smart and ThP pools data for P9000 disk arrays” on page 217 “Utilization Summary” on page 203 “10 busiest LDEVs/Ports” on page 228 “10 busiest LDEVs/RAID groups” on page 229 “RAID Group summary” on page 231 “Port summary” on page 233 “Viewing
• CHA MPs and the associated ports. The port type, such as Fibre, Ficon, Escon, or FCoE (applicable only for P9000 disk arrays) is also displayed beside the port ID. In addition, the utilization summary includes the following for a P9000 disk array: • Cache usage. • Bus utilization. • MP blade utilization, which includes the following: • MP blade IDs. • DKCs, cluster #, and the blade locations for the MP blades. • The MPs on the MP blade and each MP's utilization percentage.
Advisory on CLPR utilization P9000 Performance Advisor provides an advisory on the usage of individual CLPRs in an XP or a P9000 disk array. The advisory is based on the data collected for the past one week. The following are the scenarios for which an advisory is created: • If the cache for a CLPR is less utilized, the advisory suggests that you consider re-allocating a portion of the cache to the other CLPRs.
CHIPs/CHAs. You can also click CHIP for an XP disk array or CHA/DKA for a P9000 disk array in the component selection tree. The summary is displayed in the CHIP/CHA summary table for the XP disk arrays and the CHA summary table for the P9000 disk arrays (see following images).
The following table describes the CHIP/CHA summary table for an XP disk array and the CHA summary table for a P9000 disk array. CHIP/CHA summary table for XP disk arrays includes... CHA summary table for P9000 disk arrays includes... The CHA name The CHIP or the CHA name Example: CHA-1F, 1 indicates the cluster # where the CHA board is located.
IMPORTANT: • The port type, such as Fibre, Ficon, Escon, or FCoE (applicable only for P9000 disk arrays), is also displayed beside the port ID. • Since the CHIP/CHA and the ACP/DKA MPs are moved to the MP blades in the P9000 disk arrays, their MP utilization metrics are not applicable for the P9000 disk arrays. For more information, see “Viewing MP blade utilization for P9000 disk arrays” on page 214.
Individual CHIP/CHA data For XP disk arrays For P9000 disk arrays The average I/Os and throughput of data in MB/s on all the ports in the selected CHIP/CHA Yes Yes The individual MPs on the selected CHIP/CHA Yes No The IDs of the associated ports on the selected CHIP/CHA Yes Yes (the port IDs are directly displayed under the selected CHA.
• • • • • • • “Viewing MP blade utilization for P9000 disk arrays” on page 214 “Viewing Smart and ThP pools data for P9000 disk arrays” on page 217 “Utilization Summary” on page 203 “10 busiest LDEVs/Ports” on page 228 “10 busiest LDEVs/RAID groups” on page 229 “Port summary” on page 233 “Viewing LDEV data” on page 236 Viewing ACP/DKA data Based on whether you selected an XP or a P9000 disk array, click an ACP/DKA pair in the ACP Pair Backend group box under the Performance View tab to view the summary of
The following images display the ACP Pair Backend group box and the DKA summary table for 53036, which belongs to the P9500 Disk Array type:
The following table describes the ACP/DKA summary table for an XP disk array and the DKA summary table for a P9000 disk array. ACP/DKA summary table for XP disk arrays includes... DKA summary table for P9000 disk arrays includes... The ACP/DKA pair name The DKA pair name Example: BUNU Example: AUMU The individual MPs on an ACP/DKA and the utilization percentage of each MP Not applicable In the above image, BU MP Utilization % indicates the utilization of the MPs on BU, which is the left ACP.
Individual ACP/DKA data For XP disk arrays For P9000 disk arrays Yes No Yes Yes Yes Yes Summary The MPs on the individual ACP/DKA and their utilization percentage For example, if you selected the AUMU DKA pair, you can view the MPs and also their utilization percentage on AU. Similarly, you can also view the above-mentioned details for MU. The backend transfers for the selected ACP/DKA pair, which includes the sequential and non-sequential reads and writes.
• • • • • “Utilization Summary” on page 203 “10 busiest LDEVs/Ports” on page 228 “10 busiest LDEVs/RAID groups” on page 229 “Port summary” on page 233 “Viewing LDEV data” on page 236 Viewing MP blade utilization for P9000 disk arrays Click an MP blade ID in the MP Blades Util% group box under the Performance View tab to view the corresponding utilization summary on the MP Blades screen. You can also click MP Blades in the component selection tree. The following image shows the MP Blades Util% group box.
Figure 13 MP Blades screen . The MP Blade Configuration group box includes the following: • The installed MP blades, DKCs, and the clusters to which they belong. Each MP blade ID includes the corresponding cluster # and the blade location. For example: MPB-1MA is the MP blade ID, 1 indicates the cluster #, and MA indicates the blade location. • The individual MPs on each MP blade and each MP's utilization percentage. Click an individual MP to view the utilization graph in the Chart Work Area.
MP blade screen elements Description Displays the following details: • Processing Type: The list of processing types. Processing Distribution table • Avg. Util%: The average MP blade utilization by each processing type. The average utilization is calculated as the utilization of all the individual processors in the MP blade. For more information on the processing types, see Table 20 on page 301.
MPB-1MB is also listed in the MP Blade Configuration group box. • MPB-1MB belongs to Cluster 1 and DKC 0. • 1 in MPB-1MB represents the cluster # and MB represents the blade location for MPB-1MB. • MP 0, MP 1, MP 2, and MP 3 are the MPs on MPB-1MB. • The average utilization of MPB-1MB is 2%. • The average utilization is calculated as the utilization of all the individual processors in MPB-1MB, which is as follows: (MP 0 + MP 1 + MP 2 + MP 3)/4.
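The averaging described above can be expressed as a one-line calculation. This is a minimal illustration; the MP utilization values are hypothetical.

```python
# The MP blade utilization is the mean of the utilization percentages
# of the individual MPs on that blade.
def mp_blade_avg_util(mp_utils):
    """Average utilization of an MP blade from its individual MPs."""
    return sum(mp_utils) / len(mp_utils)

# MP 0 through MP 3 on a blade such as MPB-1MB (illustrative values):
print(mp_blade_avg_util([2, 2, 2, 2]))  # 2.0
```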
To view data about the different storage tiers and the RAID group utilization for a Smart or ThP pool in a P9000 disk array, click Pools for the disk array in the component selection tree under Array View in the left pane. Figure 14 Smart Pool Screen . Figure 15 THP Pool Screen .
IMPORTANT: The data on the Smart pools and the ThP pools are not displayed if the pools are not configured in the selected P9000 disk array. The following error message is displayed: Smart and ThP pools are not configured for this P9500 Disk Array. Table 10 on page 219 describes the Pool Information screen elements: Table 10 Pool Information screen details Table name Description Displays the configuration and performance data of the Smart and the ThP pools.
Column names Description IOPS Displays the sum of the random and sequential read and write I/Os on the individual Smart pool or the ThP pool. MBPS Displays the sum of the random and sequential reads and writes in MB/s on the individual Smart pool or a ThP pool. Backend Tracks Displays the total backend tracks associated with the Smart pool or the ThP pool. It is an aggregate of all the backend transfers due to I/Os occurring on every VVol in the Smart pool or the ThP pool.
NOTE: HP recommends viewing a maximum of 150 records at a time, so that there is no performance impact. • The metrics based on which you want to sort the records. You can sort records based on the IOPS, MBPS, Backend Tracks, and the Avg Read/Write Resp Time metrics. By default, the VVol records are sorted based on the Avg Read/Write Resp Time values. To configure the above-mentioned settings, click V-vols Settings in the Pool Details table.
• • • • “10 busiest LDEVs/Ports” on page 228 “10 busiest LDEVs/RAID groups” on page 229 “Port summary” on page 233 “Viewing LDEV data” on page 236 Viewing continuous access data for P9000 disk arrays The Array – View Continuous Access screen provides data on the continuous access configurations (synchronous, asynchronous, and journal based) created in the selected XP or P9000 disk array. The configuration data includes the P-VOL, S-VOL, and associated port, RAID group details.
Figure 18 Continuous Access Async . Table 13 describes the data displayed: Table 13 Continuous access configuration data Screen element Description Primary Array Serial number of the primary XP or P9000 disk array (primary data center). PVOL LDEV configured as P-VOL on the primary data center. Displays the LDEV number in cu:ldev format. Secondary Array Serial number of the secondary XP or P9000 disk array. SVOL LDEV configured as S-VOL on the secondary data center.
Screen element Description Failed or Active. NOTE: CA Link Status When Continuous Access is configured as Sync or Async and the selected volume type is SVOL, you might encounter the CA Link status as NA (Not Applicable). Number of active continuous access paths from a PVOL to SVOL. NOTE: No. of Paths When Continuous Access is configured as Sync or Async and the selected volume type is SVOL, you might encounter the status of the number of paths as NA (Not Applicable).
CA link status metrics Select the row to see the performance metrics of PVOL or SVOL (based on the volume type). If journals are displayed in the selected row, the journal group and volume details are also displayed. To view continuous access journals and volumes data, see Table 16 and Table 17. You can also click a particular record to highlight the record and then click Plot Chart to choose the metrics and view the respective performance graphs.
Screen elements Description The average utilization of the MP blade that is configured for the selected volume based on the volume type (S-VOL or P-VOL). NOTE: The MP blade average utilization data is collected during the DKC performance data collection. The collection frequency set for the DKC data collection might be different from that set for the LDEV data collection.
Screen element Description State of the journal group, can be one of the following: • JSTAT_SMPL: The journal volume that does not have a pair, or deleting. • JSTAT_NONE: The specified JID does not exist. • JSTAT_P(S)JNN: P(S)vol Journal Normal Normal • JSTAT_P(S)JSN: P(S)vol Journal Suspend Normal • JSTAT_PJNF: P(S)vol Journal Normal Full • JSTAT_P(S)JSF: P(S)vol Journal Suspend Full • JSTAT_P(S)JSE: P(S)vol Journal Suspend Error including link failure.
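For scripted processing of exported data, the journal group states listed above can be kept in a lookup table. The codes and descriptions below are taken from this section; expanding the P(S) shorthand into separate P-VOL and S-VOL entries, and the table itself, are assumptions for illustration.

```python
# Journal group state codes from the table above, expanded so that the
# P(S) shorthand becomes explicit P-VOL (PJ...) and S-VOL (SJ...) keys.
JOURNAL_STATES = {
    "JSTAT_SMPL": "Journal volume has no pair, or is being deleted",
    "JSTAT_NONE": "The specified JID does not exist",
    "JSTAT_PJNN": "P-VOL journal: Normal Normal",
    "JSTAT_SJNN": "S-VOL journal: Normal Normal",
    "JSTAT_PJSN": "P-VOL journal: Suspend Normal",
    "JSTAT_SJSN": "S-VOL journal: Suspend Normal",
    "JSTAT_PJNF": "P-VOL journal: Normal Full",
    "JSTAT_PJSF": "P-VOL journal: Suspend Full",
    "JSTAT_SJSF": "S-VOL journal: Suspend Full",
    "JSTAT_PJSE": "P-VOL journal: Suspend Error (including link failure)",
    "JSTAT_SJSE": "S-VOL journal: Suspend Error (including link failure)",
}

print(JOURNAL_STATES["JSTAT_PJNF"])
```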
Column Head Column Head LDEV MB/s - Frontend Total random and sequential frontend read and write MBs on the journal LDEV during the entire collection interval. LDEV I/Os - Frontend Total random and sequential frontend read and write I/Os on the journal LDEV during the entire collection interval. Avg Read Resp (msec) Average read response time of all the journal LDEVs created in a specified RAID group over the entire data collection interval.
Figure 19 10 busiest front end LDEVs . Figure 20 10 busiest front end Ports .
IMPORTANT: • Under the LDEV tab, you can also view the associated RAID Group for an LDEV. This data helps determine if more than one LDEV is contributing to a RAID Group activity. • If the LDEV is a LUSE Master, the details of individual LDEVs are considered for the busiest components and not the sum of all the individual LDEVs. The LDEV response time components, AVERAGE READ RESPONSE, MAXIMUM READ RESPONSE, AVERAGE WRITE RESPONSE, and MAXIMUM WRITE RESPONSE, are measured in milliseconds.
• • • • • “Utilization Summary” on page 203 “10 busiest LDEVs/Ports” on page 228 “RAID Group summary” on page 231 “Port summary” on page 233 “Viewing LDEV data” on page 236 Viewing RAID group summary To view the summary of overall utilization of RAID groups for an XP or a P9000 disk array, click RG Summary in the component selection tree for that XP or P9000 disk array.
Screen elements Description RG The RAID group to which the LDEV belongs. The SLPR with which the RAID group is associated. NOTE: SLPR does not exist in the P9000 disk arrays. So, the SLPR-related data is displayed only for the XP disk arrays. SLPR The SLPR group ID. NOTE: SLPR Name SLPR does not exist in the P9000 disk arrays. So, the SLPR group ID is displayed only for the XP disk arrays. CLPR The CLPR with which the RAID group is associated. CLPR Name The CLPR group ID.
Screen elements Description % RGUtil Seq Write Parity The sequential write parity utilization percentage for a RAID group. Overall % RGUTIL The overall percentage utilization of a RAID group, which is the sum of the random reads, random writes, random write parity, sequential reads, sequential writes, and the sequential write parity.
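The Overall % RGUTIL calculation described above is a simple sum of the six component percentages, as this minimal sketch shows (the sample values are hypothetical):

```python
# Overall % RGUTIL is the sum of the six per-workload utilization
# percentages reported for a RAID group.
def overall_rg_util(rand_read, rand_write, rand_write_parity,
                    seq_read, seq_write, seq_write_parity):
    """Overall RAID group utilization percentage."""
    return (rand_read + rand_write + rand_write_parity
            + seq_read + seq_write + seq_write_parity)

print(overall_rg_util(5.0, 3.0, 1.5, 4.0, 2.0, 0.5))  # 16.0
```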
Figure 24 Port summary . Screen elements Description CHP Port ID Displays the port ID for the CHP port. Provides the option to view information associated with a particular port or with all ports. Displays the SLPR with which the RAID group is associated. NOTE: SLPR SLPR does not exist in the P9000 disk arrays. So, the SLPR-related data is displayed only for the XP disk arrays. Displays the SLPR group ID. NOTE: SLPR Name SLPR does not exist in the P9000 disk arrays.
Screen elements and descriptions:
• Min IO/s: Displays the minimum frontend I/Os on the port.
• Max MB/s: Displays the maximum frontend throughput in MB/s.
• Min MB/s: Displays the minimum frontend throughput in MB/s.
• Avg MB/s: Displays the average frontend throughput in MB/s.
Viewing 90th and 95th percentile values for Continuous Access ports
The write data throughput frequently displays transient peaks beyond the average performance. Sizing bandwidth to these peaks can lead to excessive provisioning and cost.
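The motivation for percentile-based sizing can be illustrated with a small computation. The sketch below derives 90th and 95th percentile values from throughput samples; the linear-interpolation method shown is a common convention and an assumption here, not necessarily the product's exact algorithm:

```python
import statistics

def percentile(samples, p):
    """Return the p-th percentile using linear interpolation between ranks."""
    data = sorted(samples)
    if len(data) == 1:
        return data[0]
    rank = (p / 100) * (len(data) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(data):
        return data[lo] + frac * (data[lo + 1] - data[lo])
    return data[lo]

# Throughput samples (MB/s) with one transient peak well above the rest.
mbps = [40, 42, 38, 41, 39, 43, 40, 120, 44, 41]
print(statistics.mean(mbps))   # average, inflated by the transient peak
print(percentile(mbps, 90))    # 90th percentile, much closer to typical load
print(percentile(mbps, 95))
```

Sizing to the 90th or 95th percentile rather than the single maximum discounts the rare spike while still covering almost all observed load.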
Viewing LDEV data P9000 Performance Advisor displays the following data on the Array View LDEV screen for all the LDEVs that belong to an XP or a P9000 disk array: • The performance data of all the LDEVs • The data on associated components, such as the following: • The summary for individual RAID groups, which includes: • RAID level • Associated ACP pair • Disk mech details • Associated drive type and RPM rate • The I/Os and MB/s data for individual CHIP ports For an XP disk array, in addition to the above-
You can query the existing performance data in P9000 Performance Advisor for a particular date and time stamp to view the corresponding point in time data for all the LDEVs. By default, the data displayed is for the last performance data collection time stamp and sorted in a descending order. The sorting of data is based on the average read response time of individual LDEVs. You can query the LDEV data for a different date and time stamp and also sort the data based on a different sort type.
page links to navigate to other sections of the LDEV table and view additional LDEV records. You can also click the prev, next, or last links to navigate to the respective pages. Querying and sorting data You can query the performance data in the P9000 Performance Advisor database for the last data collection date and time stamp, for which you want to view the LDEV data. By default, your query is executed on the latest performance data received from the selected XP or P9000 disk array.
Screen elements and descriptions:
• Host Group: Select Host Group to sort LDEV data based on the host groups (does not apply to the XP48 Disk Array).
• Jnl Group: Select Jnl Group to sort LDEV data based on the journal volume pool IDs. NOTE: The Jnl Group sort option is displayed only if journal groups are configured in the selected XP or P9000 disk array.
• LDEV MB/s - Frontend: Select LDEV MB/s to sort LDEV data based on the frontend throughput (MB/s) of the LDEVs.
2. Click Query. If you do not select a last collection date and time stamp, the current last collection date and time stamp is considered for querying the data. IMPORTANT: • For an XP24000 Disk Array, the performance data can be collected on 64000 LDEVs (64K binary (65,536)). • For the XP or the P9000 disk arrays with external LDEVs, – is displayed under ACP PAIR in the LDEV table, as the external LDEVs do not have a valid ACP pair associated with them.
Screen elements, descriptions, and applicability (For XP disk arrays / For P9000 disk arrays):
• RG: The RAID group to which the LDEV belongs. (Yes / Yes)
• ACP Pair ID: The card letters for the ACP pair. (Yes / Yes)
• CHIP Port ID: The port ID for the CHIP (CHP) port. (Yes / Yes)
• Host Group: The host group to which the host belongs. (Yes / Yes)
• MP Blade ID: The identification number of the MP blade that is currently associated with the LDEV.
Components and metrics in the LDEV Column Settings list
Table 19 on page 242 lists components available for selection in the LDEV Column Settings list:
Table 19 Components and metrics in LDEV Column Settings list
• ACP Pair ID: The card letters for the ACP pair.
• ACP Pair Util: The percentage of ACP pair processor usage during the reporting period. NOTE: This metric is available only for the XP disk arrays.
Screen elements and descriptions:
• CHP Port ID: The port ID for the CHIP (CHP) port. Provides the option to view information associated with a particular port or with all ports. NOTE: If a Mainframe LDEV in an XP or a P9000 disk array is presented through a FICON CHA, the corresponding CHA ID is not displayed; instead, Not Mapped is displayed in this column.
• CHP Util: The percentage that the CHP processors were used during the reporting period. NOTE: This metric is available only for the XP disk arrays.
Screen elements and descriptions:
• E-LDEV: The external LUN LDEV ID on the external array.
• Ext-Lun: Indicates that the LDEV is an Ext-Lun. The following options are available:
  • - (hyphen) = Normal LUN
  • E = Ext-Lun
  • P = Ext-Lun provider (this LDEV is used as an Ext-Lun for another array)
• E-Port(s): A list of Ext-Lun initiator ports (ports used to connect to an external array).
• E-Seq: The Ext-Lun provider's serial number for the array.
• Host ID (Host identifier): The name of the host machine.
Screen elements Description LUN (Logical Unit Number) ID The identification number of the LUN. MP Blade Id The identification number of the MP blade that is currently processing requests for an LDEV. The MP blade ID includes the cluster # and the blade location. For example, MPB-1MA, where 1 indicates the cluster # and MA indicates the blade location. NOTE: This component is displayed only for the P9000 disk arrays.
Screen elements Description Target LUN The LUN associated with the given LDEV. Vol. Group The volume group identification name if the device is associated with a volume group. P9000 Performance Advisor reports volume groups from LVM (an HP brand) and VXVM (a Veritas brand). NOTE: • The E-LDEV, Ext-LUN, E-Port(s), E-Seq, Jnl Group, and Vol. Group are available for selection only if they are configured in the selected XP or P9000 disk array.
IMPORTANT:
• If the state for an LDEV displays as SMPL (Simplex), the LDEV is configured as neither a PVOL nor an SVOL.
• The replication pair status is displayed only when you perform a fresh configuration collection for an XP or a P9000 disk array. However, if the configuration data collection is scheduled, the replication pair status is automatically updated to show the current status.
To view the replication volumes and the replication status:
1. Click the Column Settings check box.
Filtering LDEV records
Records in the LDEV table can be filtered in the following ways:
• Filter records based on user-specified criteria
• Filter records based on an existing selection
• Filter records for values greater or less than a specified value
Filter records based on user-specified criteria
This type of filter applies when you want to view the LDEV data based on filter criteria that you specify.
The LDEV table displays only those records that match the specified RAID group IDs.
Filter records for values greater or less than the specified value
This type of filter applies when you want to view performance values of LDEVs using a combination where you select an existing filter criterion and also specify a value. The following is an example of filtering records in the LDEV IO/s list:
1. Click the LDEV IO list.
2.
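Conceptually, the greater-than/less-than filter is a threshold selection on a single column of the LDEV table. A hedged sketch of the idea (the record fields and values are made up for illustration):

```python
# Hypothetical LDEV records with an I/O-per-second column.
records = [
    {"ldev": "0:01", "io_s": 1250.0},
    {"ldev": "0:02", "io_s": 310.5},
    {"ldev": "0:03", "io_s": 980.0},
]

def filter_records(records, field, op, value):
    """Keep records whose `field` is greater (">") or less ("<") than `value`."""
    compare = {">": lambda x: x > value, "<": lambda x: x < value}[op]
    return [r for r in records if compare(r[field])]

busy = filter_records(records, "io_s", ">", 900)
print([r["ldev"] for r in busy])  # ['0:01', '0:03']
```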
4. Click OK to continue. A record for the export activity is logged in the Event Log screen. The record includes the name of the XP or P9000 disk array, and the date and time when the export activity was initiated. After the data is exported, another record is logged in the Event Log screen. In addition to the disk array and the date and time stamp, the record also includes a link to download the CSV file. The following image displays the records logged for the XP disk array 82502.
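After downloading the exported CSV from the Event Log link, you can process it with standard tooling. A minimal sketch (the column names here are hypothetical; check the header row of your actual export):

```python
import csv
import io

# Stand-in for a downloaded export file; real exports have product-defined columns.
exported = io.StringIO(
    "LDEV,Avg Read Response,Avg Write Response\n"
    "0:01,4.2,2.1\n"
    "0:02,15.7,9.3\n"
)
rows = list(csv.DictReader(exported))
# Find the LDEV with the worst average read response time.
slowest = max(rows, key=lambda r: float(r["Avg Read Response"]))
print(slowest["LDEV"])  # 0:02
```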
Continuous Access Journal Detail View Double-click a Journal group volume ID in the Jnl Group column to open the Continuous Access Journal Detail View screen, as shown in Journal Group detail view. A list of LDEVs configured in the continuous access journal volume displays; a maximum of 16 LDEVs display. The status on backend transfers and average read response of each LDEV associated with the journal group is also shown.
Screen elements and descriptions:
• THP Pool Status: The status of the ThP Pool:
  • 0: Undefined/Creating/Deleting. The specified pool does not exist completely.
  • 1: Normal
  • 2: Pool capacity beyond threshold
  • 3: Pool capacity reached 100% of the pool
  • 4: Failure; further information cannot be shown for the pool
• POOL Threshold 1: A user-configurable pool threshold (varying between 5% and 95% in increments of 5%). The default value is 70%. This is the high threshold for the pool.
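The constraint on POOL Threshold 1 (between 5% and 95%, in steps of 5%) is easy to express in code. The sketch below is purely illustrative of that constraint and is not part of the product:

```python
def validate_pool_threshold(pct):
    """Accept a ThP pool threshold: 5-95% in increments of 5% (default is 70%)."""
    if not (5 <= pct <= 95 and pct % 5 == 0):
        raise ValueError("threshold must be between 5 and 95 in steps of 5")
    return pct

print(validate_pool_threshold(70))  # 70, the documented default
```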
IMPORTANT:
• The LDEV table does not display hyperlinks in the ACP Pair ID and ACP Pair Util fields for RAID groups spanning multiple ACP pairs. Hence, no chart can be created for them.
• For a P9000 disk array, the LDEV table does not display the ACP Pair Util field for RAID groups, so a chart for the ACP pair utilization metrics cannot be plotted.
• An XP24000 type array has 32 CHIPs, 8 ACP pairs, and 4 MPs per port; an XP20000 type array has 8 CHIPs, 4 ACPs, and 4 MPs per port.
Viewing CLPR information Click a CLPR value in the LDEV table to view the detail view for that CLPR in a separate browser window. In the CLPR window, the line above the table indicates the hierarchical information for the selected CLPR.
• “Viewing ACP/DKA data” on page 210
• “Utilization Summary” on page 203
• “10 busiest LDEVs/Ports” on page 228
• “10 busiest LDEVs/RAID groups” on page 229
• “RAID Group summary” on page 231
• “Port summary” on page 233
• “Viewing LDEV data” on page 236
HP StorageWorks P9000 Performance Advisor Software User Guide 255
Viewing XP and P9000 disk array components
10 Using charts
This chapter discusses the following topics:
• “Introduction” on page 257
• “Plotting charts” on page 262
Introduction
You can plot performance graphs to view historical data of components that belong to the same or different XP disk arrays and P9000 disk arrays. Graphical representation of component performance metrics is especially useful when you want to compare similar components of different XP and P9000 disk arrays to determine their performance and observe trends.
of a component by viewing its data points collected at different collection rates in the same chart. You can compare components across the XP and the P9000 disk arrays based on the following metric categories. (Ensure that you select every element that you want to appear in your chart, because the system charts only those elements that are specified.)
NOTE: Snapshot PIDs are available for the XP12000 and the XP10000 Disk Arrays with firmware versions later than 50.09.33.
IMPORTANT: • In the Chart Work Area, plot performance graphs for any combination of the XP and the P9000 disk arrays, metrics, and components. Ensure that the components you select do not exceed 512 in number. • By default, the performance graphs in the Chart Work Area are plotted only for the last 1 hour of the management station's time.
Sections Description Includes the Available Metrics Choose Metric Category list that displays all the applicable metrics from the following metric categories for a selected component: • Frontend IO Metrics • Frontend MB Metrics • Utilization Metrics • Backend Metrics • Response Time Metrics Select components from the component selection tree and metrics from the Available Metrics Choose Metric Category list to view their performance graphs in the Chart Work Area.
Sections and descriptions:
By default, each chart window is identified by the metric category for which the performance metrics of components are plotted. The Chart Work Area comprises five chart windows, each representing a specific metric category. Performance metrics of components for the same metric category are plotted in a single chart window; for different metric categories, they are plotted in separate chart windows.
• “Zooming in on charts” on page 316 Plotting charts NOTE: The figures in the following procedure are an example for the XP disk array metric selection. Prerequisite Ensure that the following prerequisites are met before you navigate to the Charts screen: • You have collected the performance data, so that the data on associated components is displayed under the various categories for the individual XP and P9000 disk arrays.
1. Click Monitoring > Charts in the left pane. The Charts screen appears. By default, the Data Source section displays the following main nodes in the component selection tree: • The DKC or the model numbers of individual XP and P9000 disk arrays monitored by P9000 Performance Advisor. If user-friendly names are provided for the XP and the P9000 disk arrays, they appear in brackets beside the DKC numbers. • Custom Groups, lists the individual custom groups that you created.
2. Based on your requirement, select components from an XP or a P9000 disk array or choose LDEVs from a custom group. You can also search for a particular physical LDEV in the component selection tree, if you are aware of the LDEV name. For more information, see “Searching for components” on page 295: • Click the plus (+) sign for an XP or a P9000 disk array and select components from the list, for which you want to view the performance graphs.
By default, the metrics for the first metric category in the list are automatically displayed in the Metric column. 3. Choose the metrics for which you want to view the performance of the selected components. For more information, see “Choosing metrics” on page 268. By default, the most used metric category and related category metrics are listed. A performance graph for the selected component and metric is automatically displayed in the Chart Work Area. 4.
XP or P9500 Disk Array main categories in the component selection tree, and their descriptions:
• Front-end: For an XP disk array, Front-end comprises the frontend components, such as the ports and associated MPs and CHAs. For a P9000 disk array, Front-end comprises the frontend components, such as the ports and the associated CHAs. For more information, see “Front-end navigation path” on page 269.
• Cache: Comprises individual CLPRs.
• Replication Volumes: Comprises volumes that are used in the business copy or the continuous access transactions. The business copy volumes comprise individual physical LDEVs. The continuous access volumes comprise the journal pools that have LDEVs configured to be part of the journal pools. The continuous access volumes can be configured on the P9000 disk arrays, such as the P9500.
This logical grouping of components enables easy navigation through different levels of component types to select and view performance graphs of specific components. In the above example, the ports are categorized under the host groups. If you notice that the response time of a particular LDEV is high, drill down to the associated ports to view their performance metrics for the duration when the LDEV response time is found to be high.
IMPORTANT: • For a component type, the metrics are displayed for selection only if the corresponding components are supported or configured in the XP and the P9000 disk arrays. For example, if the configuration collection is not yet performed for an XP disk array, the CLPR partition data is not available. Hence, clicking Cache in the component selection tree does not result in any metrics and the Available Metrics Choose Metric Category list is disabled.
In the above image, under Front-end for the XP disk array 10090 (XPArray_1): 1. 2. 3. 4. 5. Front-end is the main category. Port is the component type. The number (40) indicates the number of ports for the XP disk array 10090. CL1A is one of the individual ports. CHP00–1EU is an individual MP associated with the port CL1A. CHA–1EU is an individual CHA associated with the selected CHP00–1EU.
The applicable metrics are displayed in the Available Metrics Choose Metric Category list. Select the metrics at the component type or the individual component levels, or both to view the related performance graphs in the Chart Work Area. For a description of these metrics, see “Metric Category, metrics, and descriptions” on page 439. The following table provides the default set of metric categories that are displayed in the Available Metrics Choose Metric Category list for the XP and the P9000 disk arrays.
IMPORTANT: CLPRs do not exist in an XP1024 Disk Array. So, the Cache related metrics are not displayed in the Available Metrics Choose Metric Category list when you select a disk array of type XP1024. In the above image, under Cache for the XP disk array 10055: • Cache is the main category. The number (6) indicates the number of CLPRs partitions configured on the selected XP disk array. • The CLPR0 and CLPR1 are the individual CLPRs.
NOTE: For an XP10000 Disk Array, the MIX boards do not have MP4, 5, and 6 defined. So, when these metrics are chosen, the utilization is shown as zero. The Utilization metric is available for the DKAs that are associated with the concatenated RAID groups. All the associated DKA pairs are displayed individually in the Resource list box on the Charts screen. TIP: For CHIPs that are not installed, the MP utilization shows zero.
In the above image, under MP Blades for the P9000 disk array 53036: • MP Blades is the main category. The number 2 indicates the total number of MP blades configured on the selected P9000 disk array 53036. • MPB-1MA is one of the individual MP blade IDs that belongs to the Cluster 1 and the blade location MA. • Processors is the component type. The number 4 indicates the total number of processors that belong to MPB-1MA. • MP 0 is one of the individual processors that belongs to MPB-1MA.
• “Back-end navigation path” on page 275
• “Replication Volumes navigation path” on page 284
• “Pools navigation path” on page 278
• “Snapshot Pool navigation path” on page 282
• “LUSE navigation path” on page 285
• “Host Groups navigation path” on page 287
• “Ext-RG(s) navigation path” on page 291
• “Drive types navigation path” on page 292
• “Custom groups navigation path” on page 293
Back-end navigation path
For the XP disk arrays
The Back-end main category comprises the DKA pairs, associated MPs, R
In the above image, under Back-end for the XP disk array 10090: • DKA is a component type. The number (2) indicates the number of DKA pairs available on the selected XP disk array. • AUMU is an individual DKA pair • BUNU is an individual DKA pair • RG(s) is a component type. The number (6) indicates the number of RAID groups configured on the selected XP disk array. 1–1 to 1–6 are individual RAID groups. The list under 1–1 displays the following component types: • Physical LDEVs.
In the above image, under Back-end for the P9000 disk array 53036: • DKA is a component type. The number (1) indicates the number of DKA pairs available on the P9000 disk array 53036. • AUMU is an individual DKA pair. • RG(s) is a component type. The number (12) indicates the number of RAID groups configured on the P9000 disk array 53036. 1–1 to 1–12 are the individual RAID groups. The list under 1–1 displays the following component types: • Physical LDEVs.
The “For the XP disk arrays” and “For the P9000 disk arrays” columns indicate whether the particular default metric is applicable for that XP or P9000 disk array.
• VVols (6) • Volumes (6) In the above image, under THP Pool for the XP disk array 10090: • THP Pool is the main category. The number (18) indicates the total number of ThP pools configured on the XP disk array 10090. • Pool ID:2 is one of the individual ThP pools. • DKA is a component type and lists the DKA pairs associated with the ThP Pool ID:2. • RG(s) is a component type and lists the RAID groups that form the ThP pools under ThP Pool ID:2.
For the P9000 disk arrays
The Pools main category comprises the following:
• ThP pools
• Smart pools
THP
The THP comprises the associated RAID groups and their physical LDEVs, and the host groups and their VVols. The following is the component selection path: Pools > THP (component type) > Individual ThP pool IDs.
• • • • THP is the component type that displays the individual ThP pools. SMART is another component type. Pool ID:5 is a Smart pool. RG(s) is a component type under Smart Pool ID:5 and lists the RAID groups that are associated with the Smart pool. RG(s) further expands to display the individual RAID groups and pool LDEVs associated with the Smart Pool ID:5. • VVols is a component type under Smart Pool ID:5 and lists the host groups that are associated with the Smart pool.
• “Custom groups navigation path” on page 293
Snapshot Pool navigation path
The Snapshot Pool main category comprises the snapshot pools that contain the associated RAID groups, LDEVs, and the associated host groups' VVols. The following is the component selection path: Snapshot Pool > Individual snapshot pool IDs.
• RG(s) is a component type. The number (2) indicates the number of RAID groups associated with the Snapshot Pool ID:7. • 5–6 is one of the individual RAID groups. • LDEVs is a component type. The number (2) indicates the number of LDEVs associated with the Snapshot Pool ID:7. Click LDEVs to view the list of LDEVs. • VVols is a component type. The volumes listed under VVols are grouped based on host groups. The number (1) besides VVols indicates the total number of host groups using the Snapshot Pool ID:7.
• “Drive types navigation path” on page 292
• “Custom groups navigation path” on page 293
Replication Volumes navigation path
The Replication Volumes main category comprises the business copy and the continuous access volumes. They are the main component types that further expand to display the associated components.
The applicable metrics are displayed in the Available Metrics Choose Metric Category list. Select the metrics at the component type or the individual component levels, or both and view the related performance graphs in the Chart Work Area. For a description of these metrics, see “Metric Category, metrics, and descriptions” on page 439.
LUSE > Individual LUSE masters > Components (component type) > Individual LDEVs that belong to the selected LUSE master. The LUSE masters and their LDEVs have the associated RAID groups given in brackets beside the LDEV IDs (example, 1:44 (1–1)). The number of components associated with a component type is displayed beside it. For example: • LUSE (21) • Components (2) In the above image, under LUSE for the XP disk array 10090: • LUSE is a main category.
Related Topics
• “Front-end navigation path” on page 269
• “Cache navigation path” on page 271
• “MP Blades navigation path” on page 273
• “Back-end navigation path” on page 275
• “Pools navigation path” on page 278
• “Snapshot Pool navigation path” on page 282
• “Replication Volumes navigation path” on page 284
• “Host Groups navigation path” on page 287
• “Ext-RG(s) navigation path” on page 291
• “Drive types navigation path” on page 292
• “Custom groups navigation path” on page 293
Host Groups navigati
NOTE: The ports and LDEVs can be associated with multiple hosts in a host group. The port type, such as Fibre, Ficon, Escon, or FCoE (applicable only for P9000 disk arrays) is also displayed beside the port ID. In the above image, under Host Groups for the XP disk array 10090: • Host Groups is the main category. The number (65) indicates the number of host groups available in that category. • Host Group is one of the individual host groups.
RAID groups under the Back-end category. For more information, see “Front-end navigation path” on page 269 and “Back-end navigation path” on page 275. For the P9000 disk arrays The Host Groups category comprises of ports, RAID groups, LDEVs, and MP blades configured to communicate with the individual host groups. Each individual host group has four main component types, Ports, RAID Groups, LDEVs, and MP Blades.
In the above image, under Host Groups for the P9000 disk array 53025:
• MP Blades is a component type under Host Group. The number (3) indicates the number of MP blades configured to process requests for the associated host group.
• MPB-1MA [2.75] is an MP blade associated with Host Group. The number 2.75 beside the MP blade indicates the average utilization of MPB-1MA by all the LDEVs from different host groups.
• Processors is a component type under MPB-1MA [2.75].
• “Replication Volumes navigation path” on page 284
• “Ext-RG(s) navigation path” on page 291
• “Drive types navigation path” on page 292
• “Custom groups navigation path” on page 293
Ext-RG(s) navigation path
The Ext-RG(s) category provides consolidated data on all the external volumes connected to the selected XP or P9000 disk array. The following is the component selection path: Ext-RG(s) > Ext-RdGp
The following image is an example for an XP disk array and shows the respective external RAID groups.
• “Custom groups navigation path” on page 293 Drive types navigation path The Drive Types main category comprises the individual drive types that are available on the selected XP or P9000 disk array. Each drive type in the component selection tree expands to display the list of associated RAID groups, which in turn display the list of physical LDEVs.
• 1–4 is an individual RAID group. • Physical LDEVs is a component type under the RAID group 1–4. The number (1) indicates the number of LDEVs associated with the selected RAID group 1–4. Click Physical LDEVs to see the list of physical LDEVs that belong to the selected RAID group. • 0:03 is a physical LDEV under the RAID group 1–4. The RAID groups and the LDEVs that are associated with the selected drive type are also displayed under the Back-end category.
The serial numbers of the XP and the P9000 disk arrays to which the LDEVs belong are also mentioned in brackets beside the LDEV IDs. You can create a custom group that has multiple LDEVs from different XP and P9000 disk arrays. For more information on creating custom groups, see “Creating custom groups” on page 100. In the above image, CG_1_CG is one of the custom groups that is selected. The number (5) besides the LDEVs component type indicates the total number of LDEVs grouped in CG_1_CG.
Searching for components You can search for a particular physical LDEV in the component selection tree under Charts, if you are aware of the CU:LDEV name. The search automatically expands the RAID group list to which the physical LDEV belongs and the LDEV component is also highlighted for your reference.
2. In the Physical LDEV text box, enter the name of the LDEV that you want to search in the CU:LDEV format and click the Search icon. The component selection tree for the XP or the P9000 disk array that has the matching LDEV component automatically expands to display the LDEV highlighted for your reference. (If there are many components listed for the selected XP or P9000 disk array, you may have to use the scroll bar to navigate through the list of components to view the matching component).
The Chart Work Area displays the following default settings. They apply across the chart windows until you select other available options:
• Time Line in the Chart Style list. This means the data points for the different components are plotted as a line graph. Breaks can be observed in the performance graphs if performance data collections are missing.
• Duration as 1 hour.
NOTE: • These selections work only on the active chart windows. • If the total number of data points from all the performance graphs exceeds 500 in a chart window, the data points are not rendered to optimize the charting functionality in P9000 Performance Advisor. You can hover the pointing device over the line graphs to view the data points.
An individual chart window can accommodate the performance graphs for up to 250 components. The 250 components that you select can belong to multiple component types and to different metrics from the same metric category. P9000 Performance Advisor plots the performance graphs incrementally and continues until the performance graphs for all 250 components are plotted in the chart window.
For more information on the tasks that you can perform in the Chart Work Area, see “Using chart controls and settings” on page 304. Viewing top 20 consumers of an MP blade IMPORTANT: This section is applicable only for the P9000 disk arrays. The top 20 consumers can be LDEVs, continuous access journal groups, or the E-LUNs (external volumes) that are assigned to an MP blade.
MP blade utilization by top 20 consumers Example (see Figure 27) LDEV:0:18 (Backend) A consumer's association with a processing type provides an understanding on the number of processing cycles used by the consumer with different processing types. For example, an LDEV 0:09 might be involved in processing frontend and backend requests. Its processing type reveals whether the frontend or the backend requests have been high.
Processing types Description Open-initiator Indicates all the processing involved in the continuous access replication activities. Open-external initiator Indicates all the processing involved in accessing external storage. Open-mainframe target Indicates all the frontend activities involved in processing mainframe I/O requests. Open-mainframe ext initiator Indicates all the frontend activities involved in processing mainframe I/O requests.
MP blade utilization by processing types: Example (see Figure 28)
• Average MP blade utilization by a processing type (average from the previous to the current time stamp): 3.12%
• Average MP blade utilization by all the processing types associated with the MP blade for the overall duration: Total: 19.02%
• Average MP blade utilization by a processing type for the overall duration: (16.4%), calculated as (3.12 / 19.02) * 100
The value 16.4% in 19.
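The bracketed percentage is derived from the per-type average and the blade total, exactly as the "Calculated as" formula states. A small sketch of the arithmetic:

```python
def contribution_pct(type_avg, blade_total_avg):
    """Share of overall MP blade utilization attributable to one processing type."""
    return (type_avg / blade_total_avg) * 100

# Values from the example above: per-type average 3.12%, blade total 19.02%.
print(round(contribution_pct(3.12, 19.02), 1))  # 16.4
```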
Figure 29 Average Metric Utilization
Place the pointer over an area to view the following details (Aggregate Data, with the example from Figure 29):
• XP or P9000 disk array, component, metric name: 53040, CL1B(Fibre(Target)), Maximum
• Date and time stamp: 07/07/11, 14:06:00
• Average utilization metrics value for the specific date and time stamp (average utilization metrics percentage for the specific date and time stamp): 12110 (68.
• • • • • • • • • “Using date and time filters” on page 314 “Using chart Styles” on page 314 “Printing charts” on page 310 “Changing the Chart Work Area layout” on page 310 “Viewing current LDEV assignments for an MP blade” on page 311 “Previewing charts” on page 316 “Zooming in on data points across performance graphs” on page 317 “Rearranging or moving chart windows” on page 319 “Removing chart windows” on page 320 Adding new chart windows By default, the performance graphs of components for metrics tha
in the Chart Work Area, the save operation opens the equivalent number of new browser windows. You are prompted to open, save the PDF, or cancel the save operation. Saving favorite charts You can save the combination of components and metrics for which you want to frequently view charts, as favorite charts. Whenever you want to view the performance graphs for the same set of components and metrics, load the corresponding favorite chart.
3. Click Save to save the selected charts as favorite charts. You can provide a name for the favorite chart by clicking in the respective text box and entering the name. If you do not provide a name, by default, the metric category title of the chart window is considered as the favorite chart name. The following are a few points that you must note while specifying a favorite chart name: • The name should have only alphanumeric characters.
1. Click Load Fav Chart(s). A pop-up dialog appears displaying the favorites charts that you can view. 2. Select one or more favorite charts from the list and click View Chart. The favorite chart appears in the Chart Work Area and is selected by default. • You can add components for metrics in the same metric category to this favorite chart and save it with the same name, or provide a different name.
Generating or saving reports for favorite charts NOTE: • To create a report, it is mandatory that you provide the report name, array model, and report type. • The Report Name, Customer Name, Consultant Name, and Array Location are pre-populated in the respective fields if you have already configured them as common settings on the Email Settings screen. For more information, see “Configuring email and SNMP settings” on page 88. If you do not want these default descriptions, modify the respective fields.
7. Click Generate to view the report immediately. Click Save to save and view the report later. P9000 Performance Advisor saves the report in its database and also displays a record for the report in the Reports section (Reports > View Reports). By default, the new record is displayed at the end of the list. The following details along with those you provided while creating a report are displayed for the report record in the Reports section: • User Name: The name of the user who created the report.
NOTE:
• When you change the layout, it applies to all the chart windows in the Chart Work Area.
• Each column in the Chart Work Area can hold only four chart windows if you select the vertical alignment for the Chart Work Area.
• The Chart Work Area layout can be modified only under Charts.
Viewing current LDEV assignments for an MP blade
IMPORTANT: This section is applicable only for the P9000 disk arrays.
Figure 30 Current MP blade assignment
The forecast utilization can be monitored for a day, week, month, six months, or year based on the current data points. For example, if you have data points for a RAID group collected over two days and you want to forecast its utilization for the next one week, P9000 Performance Advisor forecasts the utilization rate based on the data collected over two days.
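The forecasting model P9000 Performance Advisor uses internally is not documented here. As an illustration only, forecasting a utilization rate from the data points collected so far can be sketched as a least-squares linear extrapolation (the function and variable names below are hypothetical, not part of the product):

```python
# Illustrative sketch only: fit a least-squares line to collected
# utilization samples, then extrapolate it over a future horizon.
# The product's actual forecasting algorithm may differ.

def forecast_utilization(samples, horizon_hours):
    """samples: list of (hour, utilization %) points.
    Returns the projected utilization at horizon_hours past the last sample."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    slope = (sum((t - mean_t) * (u - mean_u) for t, u in samples)
             / sum((t - mean_t) ** 2 for t, _ in samples))
    intercept = mean_u - slope * mean_t
    last_t = samples[-1][0]
    return intercept + slope * (last_t + horizon_hours)

# Two days of hourly samples rising ~0.2% per hour, forecast one week ahead:
samples = [(t, 10 + 0.2 * t) for t in range(48)]
print(round(forecast_utilization(samples, 168), 1))  # 53.0
```

As in the RAID group example above, the quality of the forecast depends on how representative the collected data points are of future load.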
To forecast utilization for any of the above-mentioned components, select the component and its corresponding metric, and select the duration from the Forecast list in the Chart Setting section. You can forecast the utilization for only one component at a time. Using date and time filters The following are the date and time filters that you can use on charts: • Start Updating: Click Start Updating for P9000 Performance Advisor to update the selected chart window every 5 minutes with the newest data points.
Time Line Chart Style Time Line chart style (default) enables you to view the plotting of data points for a fixed time interval. In addition, when data points for multiple metrics are plotted with different collection frequencies, their relationships with the time intervals are displayed correctly. Only data points that are collected during the specified interval are retrieved from the database and plotted on the graph. Data points are not plotted for intervals of time where data collection has failed.
the data collection resumes, the data points and the average values are again plotted simultaneously for the selected components. Time Line No Breaks Chart Style The Time Line No Breaks chart style enables you to view the actual performance of the selected components irrespective of whether the data collection is active or discontinued.
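The difference between the two chart styles can be illustrated with a small sketch. This is an assumption about the rendering behavior described above, not the product's actual implementation: "Time Line" leaves a gap for any interval in which collection failed, while "Time Line No Breaks" plots the collected points back to back.

```python
# Hypothetical sketch of the two chart styles described above.
# "Time Line": one slot per fixed interval; missing intervals become gaps.
# "Time Line No Breaks": only the collected points, plotted consecutively.

def time_line_series(points, interval_starts):
    lookup = dict(points)                      # {timestamp: value}
    return [lookup.get(t) for t in interval_starts]  # None = gap in the chart

def no_breaks_series(points):
    return [v for _, v in points]

pts = [(0, 5.0), (1, 6.0), (3, 4.0)]           # collection failed at interval 2
print(time_line_series(pts, [0, 1, 2, 3]))     # [5.0, 6.0, None, 4.0]
print(no_breaks_series(pts))                   # [5.0, 6.0, 4.0]
```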
1 Focused area in the Zoom preview panel 2 Sliders on the time scale in the Zoom preview panel Zooming in on data points across performance graphs In addition to zooming in on data points for a particular duration, you can also zoom in on a combination of data points in the chart window. If zoom preview is enabled, it also highlights the focused area in the chart window.
To zoom out, click an empty area in the chart window. In the Zoom Preview panel, if you plot the chart for one day, the chart displays data with a time stamp.
In the Zoom Preview panel, if you plot the chart for more than one day, the chart displays data with a date stamp.
Rearranging or moving chart windows
To move or rearrange chart windows in the Chart Work Area, click the title bar of the chart window that you want to move, hold down the left mouse button, and drag the chart onto the existing chart where you want the moved chart to be placed in the Chart Work Area.
Removing chart windows You can remove all the charts currently displayed in the Chart Work Area by clicking Close Charts. All the active and passive chart windows are removed from the Chart Work Area.
11 Using reports
This chapter discusses the following topics:
• “Generating, saving, or scheduling reports” on page 327
• “Viewing a report” on page 335
• “Viewing a schedule” on page 337
• “Virtualization for reports” on page 336
• “Logging report details and exceptions” on page 339
Introduction
Reports provide a history of the performance data collected for a specified XP or P9000 disk array, presenting a visual representation of the performance trend of components over a duration that you specify.
Report types Description For XP disk arrays The Array Performance report provides the overall array performance by measuring the total I/Os, read and write I/Os on that array.
Report types Description For XP disk arrays For P9000 disk arrays CHIP Utilization The CHIP Utilization report provides data on the utilization of various installed CHIPs/CHAs for the duration that you specify. You can also view the CHIP/CHA utilization by the Hour of the Day report that provides the utilization data for all the CHIPs/CHAs averaged over a 24-hour period. Yes No Yes Yes Yes Yes LDEV IO The LDEV IO report provides data on the busiest LDEVs and the RAID groups.
Report types Description For XP disk arrays For P9000 disk arrays RAID Group Utilization The RAID Group Utilization report provides the top 32 RAID groups, derived from the extent of utilization of each RAID group. It is available as a standalone report and also as part of the All report. For more information, see Creating report to view the most utilized RAID Groups.
Report types Description For XP disk arrays For P9000 disk arrays No Yes Yes Yes NOTE: The MP blade utilization data is not applicable for the XP disk arrays. So, the MP Blade Utilization report is not included in the All report generated for the XP disk arrays. The ACP/DKA and the CHIP/CHA utilization data are not applicable for the P9000 disk arrays. So, their reports are not included in the All report generated for the P9000 disk arrays.
IMPORTANT:
• Reports on the following are available only if they are configured in the selected XP or P9000 disk array. If they are not configured, they are not displayed as options when creating reports. In addition, they are not displayed in other related reports, such as the Array Performance and All reports.
Tasks you can perform on the Reports screen
• “Generating, saving, or scheduling reports” on page 327
• “Viewing a report” on page 335
• “Scheduling reports” on page 330
• “Viewing a schedule” on page 337
Generating, saving, or scheduling reports
You can generate a report, where you view only a temporary copy of the report. You can also save a report, where a copy of the report is retained in P9000 Performance Advisor for your later reference.
2. Select or enter the following details:
• Name of the report in the Report Name box. The name must be between 2 and 80 characters in length.
• Name of the customer or company in the Customer Name box.
• Name of the consultant in the Consultant box.
• Location of the array in the Array Location box.
• The array type from the Array Type list.
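The Report Name length rule above (between 2 and 80 characters) can be checked programmatically. The following is a trivial sketch of that rule only; any further validation the product performs is not documented here:

```python
# Sketch of the Report Name rule stated above: 2-80 characters.
# Additional product-side validation (if any) is an unknown.
def is_valid_report_name(name: str) -> bool:
    return 2 <= len(name) <= 80

print(is_valid_report_name("PA_ACP_Rep"))  # True
print(is_valid_report_name("X"))           # False (too short)
```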
The following are the supported file formats:
• HTML
• PDF
• RTF
• CSV
• DOCX
The HTML format is the default file type for any report that you generate or save. The report is always provided in a compressed (.zip) file format as an email attachment. You can extract the contents of the ZIP file onto your local system to view the report details. However, if you select the PDF, DOCX, or RTF file type, you can choose to receive either a normal report file or a compressed file as the email attachment.
3. Generate or save the report.
• To generate a report, click Generate. P9000 Performance Advisor does not save the report in its database or display a record for the report in the Reports section (Reports > View Reports). Instead, you view only a temporary copy of the report. The report cannot be retrieved once it is closed. If required, manually save a copy of the report on your system based on the report file format:
• If the file format is HTML, the generated report is displayed in a new IE browser window.
2. Choose the schedule and specify the duration of your choice:
1. Collection Schedule: Displays Daily, Weekly, and Monthly. By default, Weekly is selected as the collection schedule.
• Day of the Week: Displays the list of week days. Select the week day on which you want the schedule to be executed.
• If you select Monthly as the collection schedule, the Monthly Schedule is displayed.
4. Click Save. P9000 Performance Advisor does the following:
• Saves the schedule and also displays a record for the schedule in the Scheduled Reports section (Reports > View Reports). The following details, along with those you provided while scheduling a report, are displayed for that schedule in the Scheduled Reports section:
• Occurrence: Displays the number of times a particular schedule is repeated. The occurrence is aligned to the selected schedule frequency.
displays the graphs for only those LDEVs that have associated I/Os and those RAID groups on which I/O transactions have occurred. Consider the following example: A report is created to view the 32 busiest frontend LDEVs and the 16 busiest frontend RAID groups, and only eight of the selected 32 LDEVs and four of the selected 16 RAID groups are busy.
2. Select the Metric Type as:
• FrontEndIO: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the total frontend I/Os.
• BackEndIO: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the total backend transfers.
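The threshold-based selection of most and least active LDEVs described above can be sketched as follows. This is hypothetical illustration code, not the product's report engine; the LDEV IDs and I/O totals are made up:

```python
# Sketch (assumption, not product code): pick LDEVs whose total I/Os are
# at or above a threshold (most active) or below it (least active),
# ordered from busiest to idlest or vice versa.

def select_ldevs(io_totals, threshold, most_active=True):
    if most_active:
        hits = {l: v for l, v in io_totals.items() if v >= threshold}
    else:
        hits = {l: v for l, v in io_totals.items() if v < threshold}
    return sorted(hits, key=hits.get, reverse=most_active)

ios = {"00:10": 1200, "00:11": 80, "00:12": 950}
print(select_ldevs(ios, threshold=900))                     # ['00:10', '00:12']
print(select_ldevs(ios, threshold=900, most_active=False))  # ['00:11']
```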
Viewing reports
IMPORTANT:
• Reports with an asterisk (*) before the User Name are generated by a schedule. Following is the naming convention for reports that have an associated schedule: _exportDB-_____
• “Deleting a report” on page 336
• “Scheduling reports” on page 330
• “Viewing a schedule” on page 337
• “Logging report details and exceptions” on page 339
Deleting reports
To delete a report:
1. Click Reports > View Reports in the left pane.
2. In the Reports section, select the check box for the report record that you want to delete.
3. Click Delete. Click OK when prompted to confirm. The report copy is also deleted from the :\HPSS\Tomcat\Webapps\PA\Reports folder.
Editing report schedules
The report schedules that you create appear in the Scheduled Reports section (Reports > View Reports).
IMPORTANT:
• The Scheduled Reports section appears only if you have logged in as an Administrator or a user with administrator privileges.
• If the Email Dest for a schedule record is blank, it implies that the report is scheduled, but an email address is not provided or is invalid. In such cases, you do not receive any notification even though the report is generated.
Click Cancel to retain the records.
Understanding report records
This section describes what to infer from the data displayed in the Reports section (Reports > View Reports). In the preceding image, you can view the report PA_ACP_Rep, which was executed on 2009-10-11 20:11:32 IST (Generation Time). The report provides data on ACP Utilization in the XP disk array 82502 for a period of 1 month (2009-09-11 to 2009-10-11). The report is provided in HTML format.
generate a report daily at 19:00 hours. Hence, the schedule is active and a report is generated only the day after 9th September 2008, on 10th September 2008 at 19:00 hours. The End Time for this schedule displays 09.10.2008 19:00:00, which means that the last report that P9000 Performance Advisor generates is on 10th September 2008 at 19:00 hours. This is because, while creating the schedule, the number of times it must repeat is given as 1 in the Occurrence box.
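The relationship in this example between the schedule start time, the daily run hour, and the Occurrence count can be sketched as follows. This is an illustration of the behavior described above, not the product's scheduler code:

```python
# Sketch of the daily-schedule behavior described above: the first run is
# the first 19:00 strictly after the schedule is created; Occurrence caps
# the total number of runs. Hypothetical logic, not product code.
from datetime import datetime, timedelta

def run_times(created, run_hour, occurrence):
    first = created.replace(hour=run_hour, minute=0, second=0, microsecond=0)
    if first <= created:
        first += timedelta(days=1)          # today's run hour already passed
    return [first + timedelta(days=i) for i in range(occurrence)]

# Schedule created after 19:00 on 9 September 2008, Occurrence = 1:
runs = run_times(datetime(2008, 9, 9, 20, 0), 19, 1)
print(runs)  # [datetime.datetime(2008, 9, 10, 19, 0)]
```

With Occurrence set to 1, the only (and therefore last) report is generated on 10 September 2008 at 19:00 hours, matching the End Time described above.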
12 Using Performance Estimator for XP and P9000 disk arrays
This chapter discusses performance estimation for the XP and P9000 disk arrays.
Introduction
P9000 Performance Advisor enables you to determine the optimal performance of your XP (XP24000, XP12000) and P9000 (P9500) disk arrays after configuration collection is complete for these arrays. It provides a framework called the Performance Estimator for estimating their performance.
Supported disk sizes for performance estimation
The following table lists the disk arrays and the disk sizes they support.
Table 22 Disk types supported for performance estimation
• XP12000: 72 GB disk, 146 GB disk, 300 GB disk, 400 GB disk
• XP24000: Disks of any size.
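The raw and usable capacities that the Performance Estimator reports for the supported RAID layouts (RAID 1 (2D + 2D) and RAID 5 (7D + 1P), described later in this chapter) follow from the data-to-total disk ratio of each layout. The following sketch only illustrates that arithmetic; the exact rounding and overheads the tool applies are not documented here:

```python
# Rough capacity arithmetic for the two supported RAID layouts.
# (data disks, total disks) per RAID group; overheads/rounding the
# Performance Estimator applies are assumptions not covered here.

LAYOUTS = {
    "RAID1 (2D+2D)": (2, 4),   # mirrored: 2 data disks of 4
    "RAID5 (7D+1P)": (7, 8),   # parity:   7 data disks of 8
}

def capacities_gb(disk_size_gb, raid_type, num_groups):
    data, total = LAYOUTS[raid_type]
    raw = disk_size_gb * total * num_groups
    return raw, raw * data / total          # (raw GB, usable GB)

print(capacities_gb(300, "RAID5 (7D+1P)", 2))  # (4800, 4200.0)
print(capacities_gb(146, "RAID1 (2D+2D)", 1))  # (584, 292.0)
```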
The Performance Estimator screen corresponding to the selected XP disk array appears. The Array List displays the XP disk arrays that belong to the selected XP disk array model. By default, the Performance Estimator screen displays the current configuration for the first XP disk array that appears in the Array List. IMPORTANT: Performance Estimator supports: • 72 GB, 146 GB, 300 GB, and 400 GB disks. • RAID 1 (2D + 2D) and RAID 5 (7D + 1P) configurations. • SAS and SSD drive types. 3. 4.
• R.T. (ms) - Indicates the time taken in milliseconds for the XP disk array to respond for the selected configuration. • Number of disks - Indicates the total number of disks that are available per the estimate for the selected configuration. 5. To estimate the raw capacity, R1/R5/R6 usable (GB), and the total usable (GB) capacities: a. Select the disk size from the Disk Size in GB list. b. Select the RAID type from the RAID Type list. c.
2. Select the disk array model from the Array Type list. This list displays only the XP24000 or P9500 disk array models that are currently monitored by P9000 Performance Advisor. The Performance Estimator screen corresponding to the selected disk array model appears. The Array List displays the disk arrays that belong to the selected disk array model. In addition, the current configuration of the first disk array in the Array List is populated in the respective fields.
• MB/sec - Indicates the MB/s of data that the disk array can receive per second for the selected configuration. • R.T. (ms) - Indicates the time taken in milliseconds for the disk array to respond for the selected configuration. • Number of disks - Indicates the total number of disks that are available per the estimate for the selected configuration. 5. To estimate the raw capacity, R1/R5/R6 usable (GB), and total usable (GB) capacities: a. Select the RAID type from the RAID Type list. b.
13 Troubleshooting issues for components associated with applications
This chapter discusses troubleshooting issues for disk array components associated with applications that reside on hosts, which communicate with the disk arrays. Troubleshooting is possible using real-time charting, or using the host groups or WWNs of the hosts.
When you perform the host agent installation, the real-time server is also automatically installed on the host agent. For more information on the host agent installation, refer to the HP StorageWorks P9000 Performance Advisor Software Installation Guide. You can collect the real-time performance data for a set of five LDEVs, RAID groups, ports, cache, CHAs, and DKAs in an XP disk array.
RealTime screen
The real-time performance data collection can be initiated on the RealTime screen, which appears when you click Troubleshooting > RealTime in the left pane. The following image shows the real-time charting components selection for 53012, which belongs to the P9000 Disk Array type.
Figure 32 RealTime screen
• “Stopping real-time performance data collection” on page 353 Starting real-time performance data collection Prerequisites • HP recommends that you dedicate a command device for the real-time performance data collection, so that it is not used by P9000 Performance Advisor for the regular configuration or performance data collection.
IMPORTANT: The following are important notes on the real-time performance data collection: • You can configure only one instance of the real-time performance data collection for an XP or a P9000 disk array through the connected host agent. You cannot use the same host agent for another real-time performance data collection until the current collection stops. However, if an XP or a P9000 disk array is connected to two host agents, configure separate real-time data collection through each of the host agents.
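The one-collection-per-host-agent rule described above can be sketched as simple bookkeeping. This is hypothetical illustration code, not the product's implementation:

```python
# Sketch of the rule described above: each host agent can run only one
# real-time collection at a time; a second request must wait until the
# current collection is stopped. Hypothetical bookkeeping, not product code.
active = set()

def start_collection(host_agent):
    if host_agent in active:
        return False          # agent busy; stop the current collection first
    active.add(host_agent)
    return True

def stop_collection(host_agent):
    active.discard(host_agent)

print(start_collection("hostA"))  # True
print(start_collection("hostA"))  # False
```

An array connected to two host agents can run a separate collection through each, because the bookkeeping is per host agent, not per array.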
2. Click the plus (+) sign for an XP or a P9000 disk array serial number to view the component categories. The following image shows the MP selection for MPB-1MA under the MP Blade(s) component category for 53012, which belongs to the P9500 Disk Array type. Additionally, the following are displayed:
• HostAgent list: Displays the host agent that is connected to the selected XP or P9000 disk array.
• Command device list: Displays the command devices for the selected XP or P9000 disk array.
3.
5. Select the host agent name from the HostAgent box, if the XP or the P9000 disk array is connected to more than one host agent. Every host agent can accept only one instance of a real-time performance data collection request. If you want to use the same host agent for another real-time data collection request, stop the current data collection and initiate the new request. For more information, see “Stopping real-time performance data collection” on page 353. 6.
2. Click the Stop Collections tab. The Stop Collections table displays the following details for the XP or the P9000 disk arrays, or a combination of these arrays, for which real-time performance data collection is in progress:
• Array Id: The serial number of the XP or the P9000 disk array for which the real-time performance data collection is in progress.
• Component Type: The selected component category.
If your application is associated with components that belong to the XP disk arrays, the response time of an LDEV is determined by the performance of the following components that are associated with that LDEV:
• Ports (Frontend components)
• CLPRs (Cache)
• RAID groups (Backend components)
If your application is associated with components that belong to the P9000 disk arrays, in addition to the above-mentioned components, an LDEV’s response time is also determined by the average utilization of the associated MP blades.
3. View the performance data of the disk array components. If your application is using XP disk array components, view the performance data of LDEVs, ports, CLPRs, RAID groups, and the usage data of CHAs and DKAs. If your application is using P9000 disk array components, you can view the usage data of the MP blades in addition to the performance data of LDEVs, ports, CLPRs, and RAID groups. The data can be viewed at the application level and at the host group or WWN level.
1. Click Troubleshooting > Host Based in the left pane. The list of the XP and the P9000 disk arrays monitored by P9000 Performance Advisor are displayed in the component selection tree under Host Based. 2. Select the XP or the P9000 disk array for which you want to associate an application. 3. Click Configure Application. The Configure Application dialog box appears. 4. Click Add. 5. Do one of the following: • Click the Host Groups option and choose the host group from the list.
6. Click New under Application and provide the application name. If you want to associate an existing application with the selected host group or WWN, click Existing under Application Name and choose the application from the list. There may be instances where the LDEVs associated with an application are made available through two hosts that belong to different WWNs or host groups. In such cases, you may want to associate the application with both the hosts.
Removing association between application and hosts To remove the association between an application and the corresponding host and WWN: 1. Click Troubleshooting > Host Based in the left pane. The list of the XP and the P9000 disk arrays monitored by P9000 Performance Advisor are displayed in the component selection tree under Host Based. 2. Click the plus (+) sign for the XP or the P9000 disk array, for which you want to remove the association between the application and the corresponding host. 3.
2. Select the appropriate component from the Component Type list. The search function is supported only for the following component types:
• WWN
• Host Group
• Port
• CLPR
• LDEV
• RG
3. Specify the name of the component in the adjacent text box and click the Search icon. The Search Results dialog box appears and displays the application and the host group or WWN of the host that are associated with the component.
4.
• Application level: The data is retrieved through all the hosts and the WWNs that are connected to your application and the XP disk arrays.
• Host group level: The data is retrieved through the specific host groups. It is valid if your host uses multiple host groups to connect to an XP or a P9000 disk array.
• WWN level: The data is retrieved through the specific WWN.
2. Based on your requirement, select an application or choose the host group or the WWN associated with an application: If your selection is at the application level, the data displayed for the LDEVs and the associated components is through all the host groups or WWNs associated with the application. Hence, the data is a superset of the data that you view at the host group or the WWN level.
3 CLPR table 4 RAID Group table Click an LDEV ID to view the associated port, CLPR, and the RAID group records highlighted in the respective tables. By default, the port, CLPR, and the RAID group records are displayed for the first LDEV listed in the LDEV table.
• MP Blade Util (%): The average utilization of the MP blades that are associated with the LDEVs.
NOTE: The MP blade average utilization data is collected during the DKC performance data collection. The collection frequency set for the DKC data collection might be different from that set for the LDEV data collection.
In addition, the following are displayed:
• The MP blade ID with its corresponding average utilization percentage in brackets. For example, MPB-1MA [9.
• MBPS: The total MB/s of data transferred through the ports.
• MP Util %: The average of individual MP utilization on each port.
IMPORTANT: Since the CHIP/CHA MPs are moved to the MP blades in the P9000 disk arrays, the MP Util % metric is not applicable for the P9000 disk arrays. It is applicable only for the XP disk arrays.
CLPRs:
• Write Pending %: The percentage of data pending to be written to an LDEV.
• Read Hit: The percentage of data read from an LDEV.
The details of the partner port that is associated with the same MP are also displayed. The partner port record appears in grey. When you plot the usage graphs for these ports (primary and partner ports), you can analyze whether the partner port is overloading the MP that is also associated with the primary port.
IMPORTANT: The CHA MP data is not applicable for the P9000 disk arrays.
The additional set of metrics that P9000 Performance Advisor supports for the LDEVs is as follows:
Table 26 Additional metrics for LDEVs
LDEVs:
• Random I/O: The total random I/Os on the LDEV during the entire collection interval.
• Sequential I/O: The total sequential I/Os on the LDEV during the entire collection interval.
• Reads: The sum of random reads and sequential reads on the LDEV during the entire collection interval.
The additional set of metrics that P9000 Performance Advisor supports for the RAID groups, ports, and CLPRs is as follows:
Table 27 Additional metrics for RAID groups, ports, and CLPRs
RAID groups:
• Non Seq Reads: The total backend tracks loaded in random mode for a specified RAID group.
• Seq Reads: The total backend tracks loaded in sequential mode for a specified RAID group.
Viewing variations in the LDEV response time You can identify the LDEVs that are experiencing response time variations by analyzing their read and write response time values. Consider a scenario where your application is associated with multiple LDEVs and experiencing a slow response time. As some of the components, such as RAID groups are shared, their utilization might not indicate an impact on the application.
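Identifying LDEVs whose response times stray from the reference value can be sketched as a simple comparison. The tolerance used below and how the reference value itself is derived are assumptions, not documented product behavior; the LDEV IDs are made up:

```python
# Hypothetical sketch: flag LDEVs whose average response time exceeds the
# reference value (the blue line described in this section) by more than a
# tolerance. Tolerance and reference derivation are assumptions.

def flag_outliers(response_ms, reference_ms, tolerance_pct=50):
    limit = reference_ms * (1 + tolerance_pct / 100)
    return [ldev for ldev, rt in response_ms.items() if rt > limit]

rts = {"00:01": 4.2, "00:02": 19.5, "00:03": 5.1}
print(flag_outliers(rts, reference_ms=5.0))  # ['00:02']
```

An LDEV flagged this way is a candidate for closer inspection of its associated ports, CLPRs, and RAID groups, even when shared-component utilization alone does not indicate an impact.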
The reference value used by P9000 Performance Advisor is displayed as a blue straight line in the LDEV average read and write response time graph. Plotting charts You can select and plot charts for components in the LDEVs, Port, CLPR, and the RAID group tables. To plot charts for the selected components and metrics: 1. On the Troubleshooting screen, select components for which you want to plot charts. The components can belong to the LDEV, Port, CLPR, and the RAID Group tables.
3. Select the check box for the metric, for which you want to view the performance graph of the selected components and click OK. P9000 Performance Advisor plots appropriate performance graphs in the Chart Work Area. By default, the data points are plotted for the last one hour of the management station's time. For more information on using charts and chart options, see Plotting charts.
3. Select the XP array. 4. Select the application. 5. Identify ports associated with the LDEVs mapped to the application. In this case, this should bring up ports 1A and 5A. 6. Note the IOPS and MBPS for 1A. Plot a chart of the trend of IOPS and MBPS. 7. Identify the MP associated with port 1A and note the utilization of the MP. Plot a chart of the trend of MP utilization. 8. Note the IOPS and MBPS for 5A. Plot a chart of the trend of IOPS and MBPS. 9.
9. Based on the trend of the utilization values of the RG and its LDEVs, the poor response time on LDEV2 could be attributed to the overloading of RG 1-2. It could also be inferred that RG 1-2 is “hot” due to the heavy load generated by all the LDEVs. If the LDEV loads are not balanced, a possible solution is to relocate some of the busy LDEVs onto another RG.
10. Generate a report of the findings above.
14 Launching P9000 Application Performance Extender from P9000 Performance Advisor
You can launch P9000 Application Performance Extender from the P9000 Performance Advisor GUI. P9000 Application Performance Extender is software that enables you to monitor, analyze, and prioritize the performance of critical applications running on P9500, XP20000, and XP24000 Disk Arrays.
2. On the HP StorageWorks P9000 Application Performance Extender Software screen, click Launch HP StorageWorks P9000 Application Performance Extender Software. The text displayed on the HP StorageWorks P9000 Application Performance Extender Software screen is taken from the AppIntegrations.properties file. So, ensure that the text is not modified in the AppIntegrations.properties file.
Figure 34 HP StorageWorks P9000 Application Performance Extender Software screen
15 Launching P9000 Performance Advisor from other Storage products
Introduction
You can launch P9000 Performance Advisor from P9000 Tiered Storage Manager and P9000 Remote Web Console.
Launching P9000 Performance Advisor from P9000 Tiered Storage Manager
IMPORTANT: This section is applicable only for the XP disk arrays.
P9000 Tiered Storage Manager is used to perform migration, where the data stored on a predefined set of volumes is moved to another set of volumes with the same characteristics.
You can launch P9000 Performance Advisor for the Migration Group volumes and the Storage Tier volumes, and also in the Create Migration Task operation to facilitate selection of source and target volumes. IMPORTANT: • The location of the P9000 Performance Advisor management station and other parameters are defined in the P9000 Tiered Storage Manager hppa.properties file. For more information, see the HP StorageWorks P9000 Tiered Storage Manager Software Administrator Guide.
7. Enter your user name and password, and click Login. By default, the Frontend IO Metrics chart window appears in the Chart Work Area displaying the performance graphs for the selected LDEVs. You can also select additional metrics from the Available Metrics Choose Metric Category list. For more information, see “Plotting charts” on page 262.
NOTE: Once you log in, the current session is valid for 24 hours.
4. Under the Parity Groups tab, select the RAID group records for which you want to view their usage and I/O details. 5. Click Analyze Performance. The P9000 Performance Advisor Login page is displayed. 6. Enter your user name, password, and click Login. By default, the Frontend IO Metrics chart window appears in the Chart Work Area displaying the performance graphs for the selected RAID group. You can also select additional metrics from the Available Metrics Choose Metric Category list.
Launching P9000 Performance Advisor from P9000 Remote Web Console The HP Remote Web Console enables you to manage and optimize the P9000 storage systems. As part of this process, you can launch P9000 Performance Advisor in context from P9000 Remote Web Console to view the usage pattern of components for a longer duration and make provisioning decisions.
2. Download the PA_Link_Launch_Configuration_Files.zip file to your management station or the system from where you accessed the P9000 Performance Advisor Support screen. The file is available under PA link and launch from RWC on the P9000 Performance Advisor Support screen. The PA_Link_Launch_Configuration_Files.zip file consists of the following XML files that are required to launch P9000 Performance Advisor. • appDefinition.xml • appProfile.xml The readme.
2. Do one of the following: • To update the IP address, open the appDefinition.xml file in Notepad and update the P9000 Performance Advisor management station IP address for the tag as shown: Syntax:
After each of the above-mentioned commands is executed, a confirmation on the number of files copied is displayed in the command prompt window. IMPORTANT: Whenever you update the appDefinition.xml file for the management station IP address or the appProfile.xml file for the session name, execute the above-mentioned commands, so that P9000 Remote Web Console uses the latest XML files to launch the P9000 Performance Advisor session.
5. Click Settings > Launch Application > Performance Advisor. If you have updated a different session name in the appDefinition.xml file, that session name appears when you click Settings > Launch Application.
NOTE: If none of the components are selected, the session is in the disabled mode. The session is enabled only when you select a particular component.
Figure 35 HP P9000 Remote Web Console screen
The session opens in a separate browser window.
5. Click Settings > Launch Application > Session Name (default: Performance Advisor). The data for the selected processor blade is displayed in the P9000 Performance Advisor, Array View - MP Blades screen. If multiple processor blades are selected, the data related to the first selected processor blade is displayed. For more information on MP Blades screen, see “Viewing MP blade utilization for P9000 disk arrays” on page 214.
The following image shows the processing distribution for MPB-1MA. To view the utilization data for MPB2, click MPB-2MC in the MP Blade Configuration group box in the Array View - MP Blades screen.
Viewing Parity Group data
Consider the scenario of five RAID groups (preferably belonging to the same drive type). You want to know which is the least busy RAID group, so that you can provision storage space from that RAID group to create new LDEVs in it.
2. Select Parity Groups in the list displayed for the P9000 disk array serial number.
NOTE: The navigation path Parity Groups > Internal is not supported in this version.
3. In the right work area, select the parity group record for which you want to view the usage and performance data in Performance Advisor.
4. Click Settings > Launch Application > Session Name (default: Performance Advisor). By default, the utilization data for the Overall RAID Group utilization metric is displayed in the Utilization Metrics chart window. The overall RAID group utilization is the total busy rate of the RAID group over an entire collection interval. When a RAID group is associated with a ThP pool, this metric provides the extent to which a RAID group is busy because of the I/Os occurring on a ThP pool.
Viewing Logical Device data Consider two RAID groups (preferably belonging to the same drive type) that have an imbalance, where one RAID group is less busy compared to the other RAID group. The less busy RAID group has enough capacity. You can relocate LDEVs from the other RAID group to ensure load balancing between the RAID groups. P9000 Performance Advisor provides the overall average utilization for each RAID group, which also displays the percentage of RAID group utilization by an LDEV.
Viewing host group data For a host group, P9000 Performance Advisor provides the I/O, MB, and response time metrics on the associated port and individual LDEVs. To view the host group data: 1. Complete steps 1 and 2 mentioned for “Launching P9000 Performance Advisor” on page 384. 2. Select Host Groups in the list displayed for the P9000 disk array serial number. 3.
4. Click Settings > Launch Application > Session Name (default: Performance Advisor). The host group and the usage data of ports and LDEVs associated with the selected host group are displayed in the Array View - LDEV screen.
The above image displays the LDEVs and ports associated with the host group san-ita1. The Chart Work Area in the above image displays the maximum, minimum, and average I/O on the port CL2D that is selected in the Array View - LDEV screen. Sample appDefinition.xml and appProfile.xml files • appDefinition.xml file
For example, if V5-1 is deleted in the appProfile.xml file, the RAID Groups application menu item does not appear for selection in the P9000 Remote Web Console. • V6-1 enables you to view data related to LDEV in P9000 Performance Advisor. It is also known as Logical Devices in P9000 Performance Advisor. For example, if V6-1 is deleted in the appProfile.xml file, the LDEV application menu item does not appear for selection in the P9000 Remote Web Console.
16 Support and other resources Contacting HP HP technical support For world wide technical support information, see the HP support website: http://www.hp.
• P9000Info Release Notes To find related documents, browse to the Manuals page of the HP Business Support Center web site: http://www.hp.com/support/manuals For related documentation, navigate to the Storage section, select a storage category (Storage Software > Storage Device Management Software), and then select your product. Websites • HP.com http://www.hp.com • HP Storage http://www.hp.com/go/storage • HP Manuals http://www.hp.com/support/manuals • HP download drivers and software http://www.hp.
Typographic conventions Table 29 Document conventions Convention Element Blue text: Table 29 Cross-reference links and email addresses Blue, underlined text: http://www.hp.
A Appendix A Storage management logical partitions (SLPRs) A disk array can be shared by multiple organizations and by multiple departments within an enterprise. Therefore, multiple administrators might manage a single disk array. This circumstance creates the potential for an administrator to destroy volumes that belong to other organizations, and it can complicate the management of the disk array.
enterprise B's disk array administrator can manage enterprise B's virtual disk array, but cannot manage enterprise A's disk array. Cache logical partitions (CLPRs) When one disk array is shared with multiple hosts, and one host reads or writes a large amount of data, the host's read and write data occupies a large area in the disk array's cache memory. In this situation, the I/O performance of other hosts decreases because the hosts must wait to write to cache memory.
B Sample reports Report types P9000 Performance Advisor supports report generation for the following categories: • • • • • • “Array performance reports” on page 401. “LDEV IO reports” on page 410. “RAID Group Utilization Report” on page 415. “Cache utilization reports” on page 416. “ACP utilization reports” on page 419. “CHIP utilization reports” on page 421. • • • • • • • “XP Thin Provisioning (THP) pool occupancy” on page 424. “Snapshot pool occupancy” on page 425.
In addition, it includes a section called Findings at the beginning of the report. IMPORTANT: • The Findings section for an XP disk array provides a brief summary of the status of the CHIPs, cache, ACP, and the LDEVs. • The Findings section for a P9000 disk array provides a brief summary of the status of the cache, LDEVs, and the MP blades. • The utilization summary of the CHIP/CHA and the ACP/DKA MPs is not displayed in the Array Performance report - Findings section for the P9000 disk arrays.
Figure 38 Total I/O Rate . The total backend transfers can be compared to the total frontend I/Os; the difference is due to the effects of the array cache. The total backend transfer load is taken by the RAID groups and ACP/DKA pairs, whereas the total frontend I/O load is taken by the CHIP/CHA ports. NOTE: If there are no data points available for the dates selected, a blank chart is displayed.
Figure 39 Total I/O Rate by hour of day . The total backend transfers can be compared to the total frontend I/Os; the difference is due to the effects of the array cache. The total backend transfer load is taken by the RAID groups and ACP/DKA pairs, whereas the total frontend I/O load is taken by the CHIP/CHA ports. NOTE: If there are no data points available for the dates selected, a blank chart is displayed.
Figure 40 Total I/O Rate Detail . Sequential frontend I/Os occur when data is read from or written to consecutive addresses. Random frontend I/Os occur when applications address non-consecutive blocks of data. CFWs are a special class of I/Os generated by HP's P9000 Continuous Access Remote Mirroring software. NOTE: If there are no data points available for the dates selected, a blank chart is displayed.
Figure 41 Read/Write Ratio . For example, a data point of X on the graph indicates X% read activity and (100-X)% write activity. NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X-axis is displayed in the center of the chart.
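The X / (100-X) relationship above can be expressed as a small calculation. This is an illustrative sketch only, not product code; the function name and counter inputs are hypothetical.

```python
def read_write_ratio(read_ios, write_ios):
    """Return the read percentage X; write activity is the remainder (100 - X)."""
    total = read_ios + write_ios
    if total == 0:
        return None  # no data points for the interval: the chart would be blank
    return 100.0 * read_ios / total

# 300 reads and 100 writes in an interval -> X = 75, so 75% reads and 25% writes
print(read_write_ratio(300, 100))  # 75.0
```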
Figure 42 Read/Write Ratio by hour of day . NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X-axis is displayed in the center of the chart. Read/Write Detail report The Read/Write Detail report displays in a chart format, the total I/Os separated into different I/O types.
Figure 43 Read/Write Detail . NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X-axis is displayed in the center of the chart. Max/Min Frontend Port IOPS report The Max/Min Frontend Port IOPS report displays in a chart format, the total maximum and minimum frontend port I/O operations per second over the entire data collection period.
Figure 44 Max/Min Frontend Port IOPS . NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X-axis is displayed in the center of the chart. Max/Min Frontend Port MB/s report The Max/Min Frontend Port MB/s report displays in a chart format, the total maximum and minimum frontend port MB/s over the entire data collection period.
Figure 45 Max/Min Frontend Port MB/s . NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X-axis is displayed in the center of the chart. LDEV IO report The LDEV IO report provides data on the busiest frontend and backend LDEVs and RAID groups on an XP or a P9000 disk array. It is based on the frontend I/Os and the backend transfers.
In the LDEV I/O Mapping table: • Hyphen (-) is displayed in the RAID Format column if that RAID format is not applicable for THP Pool V-Vols. • Hyphen (-) is displayed in the LUSE Master column if the LDEV record is not a LUSE Master. So, the LDEV will either be a LUSE component or an individual volume (not part of any LUSE). • Hyphen (-) is displayed in the LUSE Status column if the LDEV record is neither a LUSE master nor a LUSE component. The LUSE Status is not applicable for such LDEV records.
Figure 46 Total Backend I/O Rate First Top 8 LDEVs . Total Backend I/O Rate First Top 8 RAID Groups report The Total Backend I/O Rate First Top 8 RAID Groups report displays in a chart format, the real backend I/O rate for the busiest eight RAID groups. This can be compared to the potential maximum throughput of the hardware. The maximum throughput varies depending on RAID level, disk mechanism type, and other factors such as the size of the individual I/Os.
Figure 47 Total Backend I/O Rate First Top 8 RAID Groups . Total Frontend I/O Rate First Top 8 LDEVs report The Total Frontend I/O Rate First Top 8 LDEVs report displays in a chart format, the number of I/Os operations performed by the first set of busiest eight LDEVs. “Total Frontend I/O Rate First Top 8 Ldevs” on page 414 displays a sample Total Frontend I/O Rate First Top 8 LDEVs report for the XP1024 Disk Array.
Figure 48 Total Frontend I/O Rate First Top 8 Ldevs . Total Frontend I/O Rate First Top 8 RAID Groups/Pools report The Total Frontend I/O Rate First Top 8 RAID Groups/Pools report displays in a chart format, the number of I/O operations performed by the eight busiest RAID groups or pools. Pools can either be the ThP pool or the snapshot pool.
Figure 49 Total Frontend I/O Rate First Top 8 Array Groups/Pools . RAID Group Utilization Report The RAID Group Utilization report consists of four charts that display the utilization of the top 32 RAID groups, eight per chart. The RAID group utilization indicates the total utilization of a RAID group over an entire collection interval. Figure 50 on page 416 displays a sample RAID Group Utilization report that provides the first top eight RAID groups for a P9500 Disk Array.
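How the top-32 list might be split into four charts of eight can be outlined as follows. This is an illustrative sketch, not the product's implementation; the RAID group names and utilization values are hypothetical.

```python
def chunk_top_groups(utilizations, top_n=32, per_chart=8):
    """Rank RAID groups by utilization (descending), keep the top N,
    and split them into fixed-size chunks, one chunk per chart."""
    ranked = sorted(utilizations.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    return [ranked[i:i + per_chart] for i in range(0, len(ranked), per_chart)]

# Hypothetical utilization (%) for 40 RAID groups named "1-1" through "1-40"
groups = {f"1-{i}": i * 2.5 for i in range(1, 41)}
charts = chunk_top_groups(groups)
print(len(charts), len(charts[0]))  # 4 8
```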
Figure 50 RAID Group Utilization — First top 8 RAID groups . The report displays the utilization graphs for only those RAID groups that have managed the backend transfers. When a RAID group is associated with a ThP pool, the extent of RAID group utilization due to I/Os occurring on a ThP pool is considered.
Figure 51 Cache Utilization . Cache Write Pending report The Cache Write Pending report displays in a chart format, the amount of data in the cache waiting to be written to a disk. It helps determine the amount of cache available. “Cache Write Pending” on page 417 displays a sample Cache Write Pending report for a P9500 Disk Array. Figure 52 Cache Write Pending .
Figure 53 Percentage read hits . Total Backend Transfer report The Total Backend Transfer report displays in a chart format, the total number of transfers, sequential, random drive-to-cache, and cache-to-drive, per second. “Total Backend Transfer report” on page 418 displays a sample Total Backend Transfer report for a P9500 Disk Array. Figure 54 Total Backend Transfer report .
Figure 55 Total Backend Transfer by Hour of the Day . Cache Side File Utilization report The Cache Side File Utilization report displays in a chart format, the cache side file utilization. The cache side file utilization is used for the P9000 Continuous Access Async Software. It holds the data buffers that have not been acknowledged by the remote host. “Cache Side File Utilization” on page 419 displays a sample Cache Side File Utilization report for a P9500 Disk Array.
The ACP utilization reports allow you to view in a chart format, the average utilization of the various installed ACP/DKA pairs either over the entire period or over every hour of a day. A sample of each report is given below: ACP Utilization report The ACP Utilization report displays in a chart format, the average utilization of the installed ACP/DKA pairs over the entire period.
Figure 58 ACP utilization over a 24-hour period . CHIP utilization report IMPORTANT: The utilization metrics on the CHIP/CHA MPs are not displayed for the P9000 disk arrays. They are included as part of the utilization metrics displayed for the MP blades in the P9000 disk arrays. The CHIP utilization reports allow you to view in a chart format, the utilization data for all the installed CHIPs/CHAs in the array and the average utilization data for all the installed CHIPs/CHAs in an XP disk array.
Figure 59 CHIP Utilization . CHIP Utilization by Hour of the Day report The CHIP Utilization by Hour of the Day report displays in a chart format, the utilization data for all the installed CHIPs/CHAs in the array averaged over a 24-hour period. “CHIP Utilization by Hour of the Day” on page 423 displays a sample CHIP Utilization by Hour of the Day report for an XP24000 Disk Array.
Figure 60 CHIP Utilization by Hour of the Day . CHIP processor utilization report The CHIP processor utilization report displays in a chart format, the individual MP utilization on an installed CHIP/CHA. “CHIP Processor Utilization” on page 424 displays a sample CHIP processor utilization report for an XP24000 Disk Array.
Figure 61 CHIP Processor Utilization . In this sample report, the individual MP utilization for the CHA 1E is displayed. Similarly a report is generated for all the installed CHIPs/CHAs. ThP pool occupancy report The THP Pool Occupancy report provides the usage percentage of the eight busiest ThP pools. NOTE: P9000 Performance Advisor reports only those ThP volumes in an XP or a P9000 disk array that are assigned to a pool.
Figure 62 XP Thin Provisioning pool occupancy . Snapshot pool occupancy report The Snapshot Pool Occupancy report provides the usage percentage of the eight busiest snapshot pools. NOTE: P9000 Performance Advisor reports only those snapshot volumes in an XP or a P9000 disk array that are assigned to a pool. Figure 63 on page 426 displays a sample Snapshot Pool Occupancy report for 53040, which is a P9500 disk array.
Figure 63 Snapshot pool occupancy . P9000 Continuous Access Journal group utilization report The Journal Pool Utilization report displays the utilization percentage of the eight busiest Journal groups. Figure 64 on page 427 displays a sample Continuous Access Journal Group Utilization report for a P9500 Disk Array.
Figure 64 P9000 Continuous Access Journal group utilization . LDEV Activity report You can view the busiest and least busy LDEVs in an XP or a P9000 disk array through the LDEV Activity report. The LDEV data can be for one of the following metric types:
• FrontEndIO
• BackEndIO
• MB
• Utilization
• Read Response Time
• Write Response Time
The busiest and least busy LDEVs are collated based on the maximum and minimum threshold levels you specify, and also the metric type that you select.
Figure 65 LDEV Activity report . IMPORTANT: • The threshold limits that you specify are independent of each other and applicable to only the category that you select. You can set both the maximum and minimum threshold levels, or one of them based on your requirement. • The report also provides the associated drive types for the LDEVs. This information helps you to identify if the associated drive is supporting the required LDEV performance. If not, move the LDEV to a different drive type.
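A minimal sketch of how independent maximum and minimum thresholds could collate LDEVs for one metric type, reflecting that either threshold may be set on its own. The function name, LDEV IDs, and values are hypothetical and not part of the product.

```python
def collate_ldevs(samples, max_threshold=None, min_threshold=None):
    """Split LDEVs into busiest (at or above the maximum threshold) and
    least busy (at or below the minimum threshold) for one metric type.
    The thresholds are independent; either may be omitted."""
    busiest = {ldev: v for ldev, v in samples.items()
               if max_threshold is not None and v >= max_threshold}
    least_busy = {ldev: v for ldev, v in samples.items()
                  if min_threshold is not None and v <= min_threshold}
    return busiest, least_busy

ldev_io = {"00:01": 950.0, "00:02": 12.0, "00:03": 480.0}  # hypothetical FrontEndIO values
busy, idle = collate_ldevs(ldev_io, max_threshold=900, min_threshold=50)
```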
Figure 66 Export Database report (Human readable format) . For more information on the different .csv files that are generated for an XP or a P9000 disk array, see “Export DB CSV files” on page 168.
MP blade utilization report The MP Blade Utilization report can be generated only for the P9000 disk arrays. It includes the average utilization data for each individual MP blade, its top 20 consumers, and the associated processing types. Average utilization of an MP blade The average utilization is calculated as the average of the utilization values of all the individual processors in the MP blade.
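The averaging described above can be expressed as a one-line calculation. This is an illustrative sketch; the function name and sample utilization values are hypothetical.

```python
def mp_blade_average_utilization(processor_utils):
    """Average the per-processor utilization values (%) of one MP blade."""
    if not processor_utils:
        raise ValueError("MP blade reported no processor samples")
    return sum(processor_utils) / len(processor_utils)

# Hypothetical per-MP utilization (%) for one blade over a collection interval
print(mp_blade_average_utilization([40.0, 55.0, 35.0, 50.0]))  # 45.0
```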
MP blade utilization by the processing types The average MP blade utilization split up for the different processing types is displayed in a chart for the selected duration. The duration for which the MP blade was busy processing consumer requests is also displayed as the Total Busy Time.
For more information on processing types, see “Viewing MP blade utilization by processing types” on page 301.
C Appendix C Supportability matrix The following matrix displays the supportability of ThP, snapshot, and continuous access journal volumes on the XP arrays.
D Appendix D Array mapping To correctly map the ACP and CHIP pairs, see the following tables for the respective array: Table 31 on page 435 lists ACP and CHIP pairs for disk array XP48/128. NOTE: The cards are lettered A-M, omitting I.
Table 33 on page 436 lists the ACP and CHIP pairs for XP1024. Table 33 XP1024 Slot name Pair ID Slot ID B, H ACP Pair 1 ACP B = 0; H = 4 C, J ACP Pair 2 ACP C = 1; J = 5 D, K ACP Pair 3 ACP D = 2; K = 6 E, L ACP Pair 4 ACP E = 3; L = 7 P, V CHIP Pair 1 CHIP P = 0; V= 4 Q, W CHIP Pair 2 CHIP Q = 1; W= 5 R, X CHIP Pair 3 CHIP R = 2; X= 6 S, Y CHIP Pair 4 CHIP S = 3; Y= 7 Table 34 on page 436 lists the ACP and CHIP pairs for XP12000 type array.
Table 35 on page 437 lists the ACP and CHIP pairs for the XP10000 and SVS200 type arrays. Table 35 XP10000 and SVS200 Slot name Pair ID Slot ID MIX-A, MIX-F ACP Pair 1 ACP MIX-A = 0; MIX-F = 4 MIX-A, MIX-F CHIP Pair 1 CHIP MIX-A = 8; MIX-F = 12 B,E CHIP Pair 2 CHIP B = 9; E = 13 Table 36 on page 437 lists the ACP and CHIP pairs for an XP24000 type array.
Slot name Pair ID Slot ID LU, XU CHIP Pair 13 CHIP LU=20; XU=28, KU, WU CHIP Pair 14 CHIP KU=22; WU=30 LL, XL CHIP Pair 15 CHIP LL=21; XL=29 KL, WL CHIP Pair 16 CHIP KL=23; WL=31 NOTE: The numbers in the third column correspond to the card letter. These numbers are used when reading CLUI output that has an older formatting style. Table 37 on page 438 lists the ACP and CHIP pairs for an XP20000 type array.
E Metric Category, metrics, and descriptions Metrics and descriptions “Metric Category, metrics, and descriptions” on page 439 provides the metric categories and metrics that are available in each of the metric categories, and the metric descriptions. Table 38 Metrics and descriptions Metric category Frontend IO Metrics Metric Description ACP Total IO – Frontend The total frontend I/Os (random plus sequential) on all the RAID groups managed by the ACP pair.
Metric category 440 Metric Description ACP Sequential Read Cache Hits – Frontend The frontend I/Os for the sequential read requests that result in cache hits for all the RAID groups managed by the ACP pair. ACP Sequential Writes – Frontend The frontend I/Os for the sequential write requests for all the RAID groups managed by the ACP pair. ACP Search/Reads Basic Mode – Frontend The frontend I/Os for the search or reads in basic mode for all the RAID groups managed by the ACP pair.
Metric category Metric Description CFW Reads The frontend I/Os for read requests in the Cache Fast Write mode, for the ACP pair. CFW Read Cache Hits The frontend I/Os for read requests in the Cache Fast Write mode that result in cache hits, for the ACP pair. CFW Writes The frontend I/Os for write requests in the Cache Fast Write mode, for the ACP pair. CFW Write Cache Hits The frontend I/Os for write requests in the Cache Fast Write mode that result in cache hits, for the ACP pair.
Metric category 442 Metric Description Total Sequential Reads MB - Frontend The total frontend I/Os made on an external volume. Total Random IO-Frontend The total random frontend I/Os rate on this external volume during the entire collection interval. Total Random Read-Frontends The total random frontend reads rate on this external volume during the entire collection interval.
Metric category Metric Description LDEV Total Random IO – Frontend The total random frontend I/Os rate on this LDEV during the entire collection interval. LDEV Random Reads – Frontend The total random frontend read I/Os on this LDEV during the entire collection interval. LDEV Random Reads Cache Hits – Frontend Out of the total random frontend read I/Os on this LDEV, the number of random reads available in the cache.
Metric category 444 Metric Description Total IO Writes – Frontend The total random and sequential frontend write I/Os on this LDEV during the entire collection interval. Total MB Reads – Frontend The total random and sequential frontend read MBs on this LDEV during the entire collection interval. Total MB Writes – Frontend The total random and sequential frontend Write MBs on this LDEV during the entire collection interval.
Metric category Metric Description Total IO/s – Frontend The total IO/s (reads and writes) on that port over a given duration. This port in turn connects to the host group through which the I/Os reach the port. Total IO – Frontend The total IO of all the LDEVs created in a specified RAID group for the given Host group over the entire data collection interval. The total IO is obtained by summing up all the IO of all the LDEVS for the given host group.
Metric category 446 Metric Description RAID Group Random Writes – Frontend The frontend random write I/Os on all the LDEVs created in a RAID group. RAID Group Total Sequential IO – Frontend The sum total of frontend sequential I/Os on all the LDEVs created in a RAID group. RAID Group Sequential Reads – Frontend The frontend sequential read I/Os on all the LDEVs created in a RAID group.
Metric category Metric Description Total Sequential Reads Frontend The sum total of sequential frontend read I/Os on individual virtual volumes defined in this pool. Total Sequential Read Cache Hits The sum total of sequential frontend read I/Os on individual virtual volumes defined in this pool, which are serviced from the cache. Total Sequential Writes Frontend The sum total of sequential frontend write I/Os on individual virtual volumes in this pool.
Metric category Metric Description Total Sequential Reads Cache Hits - Frontend The frontend I/Os for sequential read requests that result in cache hits, for all the snapshots in a snapshot pool. Total Sequential Writes Frontend The frontend I/Os for sequential read I/Os for all the snapshots in a snapshot pool. Search/Read in Basic Mode The frontend I/Os for search or reads in basic mode for all the snapshots in a snapshot pool.
Metric category Metric Description IOs per Page Total number of I/Os on a pool compared against the used pages in a pool for the last collected monitoring cycle. NOTE: This metric is disabled in the Alarm screen. IOPS per Tier Total number of I/Os on a pool tier compared against the used pages in a pool tier for the last collected monitoring cycle. NOTE: This metric is disabled in the Alarm screen.
Metric category Frontend MB Metrics 450 Metric Description Tier IOPS per Time Total number of I/Os on a pool tier over the collected monitoring cycles in a given duration. Total MB – Frontend The total frontend throughput in MB/s for a given LDEV. Total Random MB – Frontend The total random frontend I/Os throughput in MB/s for the given LDEV. Total Random MB – Frontend The random frontend I/Os throughput in MB/s for the given LDEV.
Metric category Metric Description Total MB/s – Frontend The total throughput of data handled by a port over a given duration. This port in turn connects to the host group through which the data reaches the port. Total MB– Frontend The total MB of all the LDEVs created in a specified RAID group for the given Host group over the entire data collection interval.
Metric category 452 Metric Description RAID Group Sequential Write MB – Frontend The frontend throughput in MB/s written sequentially to the RAID group. ACP Total MB – Frontend The total frontend throughput in MB/s read from or written to an ACP. ACP Total Random MB – Frontend The total frontend throughput in MB/s read from or written to an ACP randomly. ACP Random Read MB – Frontend The frontend throughput in MB/s read randomly from an ACP.
Metric category Metric Description Total Sequential MB Frontend The sum total of sequential frontend I/Os throughput in MB/s transfer rate of all the individual virtual volumes in this pool. Total Sequential Reads MB - Frontend The sum total of sequential frontend read I/Os throughput in MB/s transfer rate of all the individual Virtual volumes in this pool.
Metric category Cache MB Metric 454 Metric Description Total Sequential Write MB — Frontend The sum total of sequential frontend write I/Os throughput in MB/s transfer rate of all the virtual volumes in this pool. CM ACP BUS/PATH UTILIZATION Details the usage of the shared memory and cache memory bus by the CHA or DKA. Total MB — Frontend The total frontend throughput in MB/s data read or written to the external volume.
Metric category Metric Description Cache IO Metric CLPR Read Hits Single CLPR, data accessed or the reads on a single CLPR. • The total utilization of the ACP pair. Utilization Metrics ACP Pair Util Total • In a thin provisioning environment where an ACP pair is associated with a ThP pool, the ACP Pair Util Total metric provides the ACP utilization due to the I/O cache miss (where frontend I/Os occurring on a ThP pool are received at the array backend).
Metric category Metric Description • ACP Right Util MP0 • ACP Right Util MP1 • ACP Right Util MP2 • ACP Right Util MP3 • ACP Right Util MP4 The utilization of the MP# on the right ACP. • ACP Right Util MP5 • ACP Right Util MP6 • ACP Right Util MP7 CHIP Util Total The total utilization of the CHIP. CHIP MP Util The total utilization of each individual MP# on a CHIP board. SM CHIP BUS/Path Util The utilization of Shared memory CHIP transfer bus HI.
Metric category Metric Description RAID Group Utilization Seq Writes The utilization of the RAID group for sequential writes. RAID Group Utilization Seq Write Parity The utilization of the RAID group for writing sequential parity. Cache Usage Util The utilization of the cache shown as a percentage value. Cache Writes Pending Util The utilization of the cache write pending shown as a percentage value. Cache Sidefile Usage Util The utilization of the sidefile shown as a percentage value.
Metric category Metric Description The average MP blade utilization by each of the following processing types over an entire collection interval: • Open Target MP Blade Util/Processing type • Open Initiator • Open External Initiator • Open Mainframe Target • Open Mainframe Ext Initiator • Backend • System Backend Metrics 458 MP Blade Util - Top 20 Consumers The average MP blade utilization by each of the 20 consumers over an entire collection interval.
Metric category Response Time Metrics Metric Description RAID Group Write Tracks – Backend The total backend tracks destaged for a specified RAID group. RAID Group Total Tracks – Backend The Overall backend transfers for the selected RAID group. ACP Pair Sequential Read Tracks – Backend The total backend tracks loaded in sequential mode from the specified ACP Pair. ACP Pair Non-sequential Read Tracks – Backend The total backend tracks loaded in non-sequential mode from the specified ACP Pair.
Metric category Metric Average Write Response 460 Description The average write response time of all the LDEVs created in a specified RAID group over the entire data collection interval. The average write response time value for an LDEV is obtained from dividing the accumulated response time of all the I/Os by the total number of I/Os on that LDEV.
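The division described above (accumulated response time of all I/Os divided by the total number of I/Os) can be sketched as follows; the function name and millisecond units are assumptions for illustration.

```python
def average_response_time_ms(accumulated_response_ms, total_ios):
    """Average response time for an LDEV over a collection interval:
    accumulated response time of all I/Os divided by the I/O count."""
    if total_ios == 0:
        return 0.0  # no I/O activity in the interval
    return accumulated_response_ms / total_ios

print(average_response_time_ms(1500.0, 300))  # 5.0 (ms per I/O)
```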
Real-time metrics and descriptions “Real-time metrics and descriptions” on page 461 provides the real-time metrics and their descriptions.
Real-time metrics Definitions Total seq IOPS IO size for reads Total seq read IOPS IO size for reads Total seq write IOPS IO size for reads Total write IOPS IO size for reads Total write KB per IO IO size for writes Total write throughput KB per IO IO size for writes LDEV's IOPS - Total IOPS IO size for writes Wr% - avg write ratio Total write percentage of total front end IO seq% - avg seq IO ratio Total sequential IO percentage of total front end IO r_H% - Avg Read hits Average read h
F Appendix F Forecasting ThP pool performance Guidelines for selecting a data range to receive an optimal forecast To validate the forecasted data, we need to understand the trend of the existing data, as the forecasted data is an extension of the existing trend. The forecasted data represents a trend of the ThP pool occupancy values and not the actual values. The following graph indicates the trend of the actual values. The forecasted values will be an extension of the trend of the selected data points.
• No variance: Select a data range that has at least some variance. If the selected data range has constant values for most of the range, the forecast may follow the constant data pattern. • Empty collection ranges: Missing data points may induce error in the forecasted data.
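A least-squares line extended forward is one simple way to picture "an extension of the trend of the selected data points." This sketch is illustrative only and does not reproduce the product's actual forecasting algorithm; the occupancy values are hypothetical.

```python
def linear_trend_forecast(values, steps_ahead):
    """Fit a least-squares line through the observed occupancy values and
    extend that trend forward; the forecast tracks the trend, not the
    actual values."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) / denom
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n - 1 + s) for s in range(1, steps_ahead + 1)]

occupancy = [10.0, 12.0, 14.0, 16.0]  # hypothetical ThP pool occupancy (%) per cycle
print(linear_trend_forecast(occupancy, 2))  # [18.0, 20.0]
```

Note that, as the guidelines above warn, a constant or sparse data range makes such a fit unreliable: with no variance the forecast simply repeats the constant value.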
Glossary Array Control Processor (ACP) ACP is used in the XP disk arrays prior to the XP24000 Disk Array. With the introduction of the XP24000 Disk Array, the DKA has replaced ACP. The DKA is also applicable for the P9000 disk arrays. ACP handles the transfer of data between the cache and the physical drives held in the DKUs. The ACPs work in pairs, providing a total of eight SCSI buses. Each SCSI bus associated with one ACP is paired with a SCSI bus on the other ACP pair element.
CHA Channel adapter. A device that provides the interface between the array and the external host system. Occasionally, this term is used synonymously with the term channel host interface processor (CHIP). CHP Channel processor. The processors located on the CHA. Synonymous with CHIP. CHIP Channel host interface processor. Synonymous with the term CHA.
In an XP disk array, the DKA is one of the two PCB types that contains the MPs. Disk Controller (DKC) The array enclosure that contains the channel adapters and service processor (SVP). Disk Processor (DKP) In the XP disk arrays, the MP that resides on a DKA is addressed as the DKP. DKPs do not exist in the P9000 disk arrays. All the MPs/DKPs form part of the MP blades in the P9000 disk arrays. DKU Disk cabinet unit. The array cabinet that houses the physical disks.
OPEN-9). The number of resulting LDEVs depends on the selected emulation mode. The term LDEV is also known as term volume. LUN Logical unit number. A LUN results from mapping a SCSI logical unit number, port ID, and LDEV ID to a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the number of LDEVs associated with the LUN. For example, a LUN associated with two OPEN-3 LDEVs has a size of 4,693 MB. LUSE Logical Unit Size Expansion.
S-VOL Secondary or remote volume. The copy volume that receives the data from the primary volume. SVP Service processor. A notebook computer built into the disk array. The SVP provides a direct interface to the disk array and used only by the HP service representative. sidefile An area of cache used to store the data sequence number, record location, record length, and queued control information.
set of volumes with the same characteristics. For more information, see the manuals set provided for Tiered Storage Manager on the HP Manuals page. trap A type of SNMP message used to signal that an event has occurred. (SNIA) WWN World Wide Name. A unique identifier assigned to a Fibre Channel device. World Wide Name (WWN) Group The world wide name group provides access for every host in the specified WWN group to a specified logical unit or group of units. This is part of the LUN Security feature.
Index

Symbols
Array View screen, 53
10 busiest back-end RAID groups, 229
10 busiest front-end LDEVs, 228
, 87
disk space requirements, 188
displaying charts with different rates, 257

A
ACP Pair summary screen, 198
Alarms
    Alarm notifications, 137
    Alarms history, 137
    Apply template, 137
    Choose metrics, 137
    Delete, 137
    Disabling alarms, 137
    Email notifications, 137
    Enable alarms, 137
    Enabling alarms, 137
    Forecast ThP Utilization, 137
    Resource performance
        Plotting charts, 137
    Set thresholds, 137
    Setting
C
collection, data
    displaying charts with different rates, 257
    configuring, 53
    disk space requirements, 188
command devices, configuring, 53
components
    displaying performance data, 236
Configuration
    Host Information, 53
Configuration data collection
    One-time collection, 62
    Scheduling collections
        Hourly, Daily, Weekly, Monthly, 62
    Stopping collection, 70
configuring
    chart metrics, 257
    database size, 164
    performance data, 53
Connectivity data unavailable, host-to-array, 106
Consumers
    LDEVs, J
G
Groups
    Displaying authorized, 110
groups
    displaying properties, 113
GUI
    Common tasks, 19
    Screen resolution, 17
    Sorting records, Selecting records, 19

H
help
    obtaining, 395
Host information
    Request, Receive, 55
Host-to-array connectivity data unavailable, 106
HP technical support, 395

I
Incomplete records, displaying, 106
Instant-on license
    Grace period, 21

L
LDEVs
    displaying performance data, 236
    Displaying unknown host connections, 106
License
    Add, view, remove, 21
Licenses
    HPAC license key website, 21
Report types
    ACP utilization, 321
    Array Performance, 321
    Cache utilization, 321
    CHIP utilization, 321
    Journal pool utilization, 321
    LDEV Activity, 321
    LDEV IO, 321
    Snapshot pool occupancy, 321
    ThP pool occupancy, 321
Reports
    Favorite charts, 308
Reports screen
    Generating reports, 321
    Scheduling reports, 321

S
Security screen, 110
Settings
    Configuration Settings tab, 87
    Configure Settings, 87
    Data Analysis Settings
        LDEV read-write response, Troubleshooting screen, 87
    Personalize Arrays tab, 87
    Personalize A
XP disk arrays
    Real-time monitoring
        Host agents, CMDs, 347
    SVP registration
        Outband collection, 87