HP StorageWorks P9000 Performance Advisor Software User Guide This document describes how to use the HP StorageWorks P9000 Performance Advisor Software product (P9000 Performance Advisor), and includes information about user tasks and troubleshooting. This document is intended for users and HP service providers who have knowledge of the HP StorageWorks XP and P9000 disk arrays hardware, software, and storage systems.
Legal and notice information © Copyright 1999, 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents 1 Introduction to P9000 Performance Advisor ......................................... 15 Overview ................................................................................................................................. 15 2 Working with the P9000 Performance Advisor GUI .............................. 17 Introduction .............................................................................................................................. Title bar ......................................
Scheduling configuration data collection ............................................................................... Configuration collection schedules ................................................................................. Deleting configuration data collection schedules ..................................................................... Collecting performance data ......................................................................................................
7 Monitoring performance of XP and P9000 disk arrays ........................ 119 Introduction ............................................................................................................................ Configuring dashboard threshold settings .................................................................................. Specifying the top 20 consumers ........................................................................................ Dashboard threshold metrics ...................
Export DB CSV files .......................................................................................................... Creating Export DB CSV files ............................................................................................. Importing data to MS Excel ............................................................................................... Viewing Export DB CSV files ..............................................................................................
Choosing metrics ....................................................................................................... Front-end navigation path ........................................................................................... Cache navigation path ............................................................................................... MP Blades navigation path ......................................................................................... Back-end navigation path ..............
13 Using Performance Estimator for XP disk arrays ................................ 345 Introduction ............................................................................................................................ Supported disk types for performance estimation .................................................................. Supported disk sizes for performance estimation .................................................................. Understanding Performance Estimator data ................
Host does not appear in the management station after upgrading the management station version .... 391 Logging into P9000 Performance Advisor .................................................................................. 391 Login screen does not display in browser ................................................................................... 391 Maintaining versions for host agent logs ....................................................................................
Cache Utilization report .............................................................................................. Cache Write Pending report ........................................................................................ Percentage Read Hits report ........................................................................................ Total Backend Transfer report ......................................................................................
Figures 1 P9000 Performance Advisor Dashboard .................................................................... 17 2 License screen ........................................................................................................ 23 3 Array View screen ................................................................................................... 52 4 Configuration Data Collection ..................................................................................
33 Example of an SLPR .............................................................................................. 409 34 Example of a CLPR ................................................................................................ 410 35 Total I/O Rate ...................................................................................................... 413 36 Total I/O Rate by hour of day ................................................................................ 413 37 Total I/O Rate Detail .
Tables 1 License management during installation or upgrade .................................................... 22 2 Meter based Term licenses for P9500 array 53036 with 105 TB-Days capacity ............... 48 3 Meter based Term licenses for P9500 array 53036 with negative TB-Days capacity ......... 48 4 Meter based Term licenses for P9500 array 53036 ..................................................... 49 5 Group Details screen ..............................................................................
33 XP20000 ............................................................................................................. 440 34 Metrics and descriptions ........................................................................................ 441 35 Real-time metrics definitions ....................................................................................
1 Introduction to P9000 Performance Advisor Overview HP StorageWorks P9000 Performance Advisor Software collects, monitors, and displays the performance of XP and P9000 disk arrays. P9000 Performance Advisor collects performance data for individual components such as LDEV, CHIP/CHA, ACP/DKA, DKC, and MP blades (applicable for only P9000 disk arrays).
P9000 Performance Advisor also provides P9000Watch, a troubleshooting tool that helps you to troubleshoot performance issues of the XP and the P9000 disk arrays. You can also launch the following: • P9000 Performance Advisor from the HP StorageWorks P9000 Tiered Storage Manager Software. For more information, see “Launching P9000 Performance Advisor from P9000 Tiered Storage Manager” on page 371. • P9000 Application Performance Extender from P9000 Performance Advisor.
2 Working with the P9000 Performance Advisor GUI Introduction The P9000 Performance Advisor screen has the following sections: • Left pane • Right pane • Title bar The left pane and the title bar are common to all the P9000 Performance Advisor screens. The Dashboard screen appears soon after you log on to P9000 Performance Advisor. The main functionalities of P9000 Performance Advisor can be accessed using the respective links in the left pane.
2 Title bar 3 Right pane Title bar The title bar displays the product name and the product logo. In addition, the title bar also displays the following: • User: Displays the name of the user who is logged in to the current session. For example, if you log in as an Administrator, P9000 Performance Advisor displays User: Administrator. • Help: Click Help to launch the P9000 Performance Advisor help.
• RealTimechart Right pane The right pane displays the screen based on the menu that you select in the left pane. You can select related options on these screens to achieve the desired output. A tool tip is provided for every screen element, which provides a brief description of the screen element. The right pane also displays the Chart Work Area for those screens that require viewing the performance graphs for selected components.
Resizing sections of P9000 Performance Advisor screens You can resize individual sections of P9000 Performance Advisor screens by using your pointing device. To resize, place the pointer on the section border and drag the border to increase or decrease the width.
3 Managing licenses for XP and P9000 disk arrays This chapter discusses the following topics: • • • • • • “Introduction” on page 21 “Instant-on license on P9000 Performance Advisor installation” on page 24 “Instant-on license expiration” on page 25 “Grace period expiration” on page 27 “P9000 Performance Advisor licenses” on page 27 “Generating licenses” on page 36 • • • • “Installing licenses” on page 37 “Viewing aggregate License status” on page 39 “Viewing status for individual licenses” on page 39 “Re
So, usable capacity = Internal LDEVs - (External Volumes + Virtual Volumes) Table 1 License management during installation or upgrade Installation or upgrade License management When you install P9000 Performance Advisor v5.3 for the first time You are provided an Instant-on license, which is automatically enabled after installation. The Instant-on license (trial license) is provided with every instance of P9000 Performance Advisor.
NOTE: The License screen in P9000 Performance Advisor displays only the internal raw disk capacity of the XP disk arrays and the usable capacity of the P9000 disk arrays. Though P9000 Performance Advisor monitors the external storage attached to the XP or the P9000 disk array, it is not included in the license capacity calculation. This is expected behavior because P9000 Performance Advisor is licensed only on internal raw disk and usable capacities. Figure 2 License screen .
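The usable-capacity formula above can be illustrated with a short sketch (the array capacities below are hypothetical, chosen only to show the arithmetic):

```python
def usable_capacity_tb(internal_ldevs_tb, external_volumes_tb, virtual_volumes_tb):
    """usable capacity = Internal LDEVs - (External Volumes + Virtual Volumes)"""
    return internal_ldevs_tb - (external_volumes_tb + virtual_volumes_tb)

# Hypothetical P9000 disk array: 120 TB of internal LDEVs, of which
# 20 TB are external volumes and 25 TB are virtual (ThP) volumes.
print(usable_capacity_tb(120, 20, 25))  # 75
```

Only this usable capacity counts toward the license capacity; external storage attached to the array is excluded from the calculation.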
• “Viewing aggregate License status” on page 39 • “Viewing status for individual licenses” on page 39 • “Removing licenses” on page 47 Related Topics • • • • • P9000 Performance Advisor licenses “Generating licenses at the HPAC license key website” on page 36 “Instant-on license on P9000 Performance Advisor installation” on page 24 “Instant-on license expiration” on page 25 “Grace period expiration” on page 27 Instant-on license on P9000 Performance Advisor installation The Instant-on license or the trial
Instant-on license activation P9000 Performance Advisor indicates that the instant-on license is activated by displaying the following status message in the top pane of the Dashboard screen and the License screen. (The Dashboard screen appears first when you log on to P9000 Performance Advisor): The Performance Advisor trial license expires on month, day, year. Please contact your HP Representative to purchase the requisite Performance Advisor licenses to avoid disruption of Performance Advisor services.
Screen elements Description Displays the total internal raw disk capacity of an XP disk array or the usable capacity of a P9000 disk array. NOTE: Array Capacity (TB) • P9000 Performance Advisor receives the accurate internal raw disk capacity of an XP disk array (same as displayed by Remote Web Console) when you perform the outband mode of configuration data collection.
Related Topics • “Generating licenses” on page 36 • “Installing licenses” on page 37 Grace period expiration The grace period that follows the instant-on license is valid for 60 days. If valid licenses are not installed by the time the grace period expires, the following changes occur: • P9000 Performance Advisor cannot monitor the XP or the P9000 disk arrays for any new configuration changes made after the grace period is over.
To install these licenses on Performance Advisor, see the Hewlett-Packard Authorization Center (HPAC) website: http://webkey.external.hp.com Permanent licenses Permanent licenses are primary licenses that you generate and install on P9000 Performance Advisor to monitor an XP or a P9000 disk array. Permanent licenses are for an unlimited duration, perpetual, and unique to an XP or a P9000 disk array.
Meter Based Term licenses NOTE: Meter based Term licenses are applicable for P9000 disk arrays only. Meter based Term licenses are secondary licenses that you generate at the HPAC website and install as add-on licenses in P9000 Performance Advisor to monitor additional usable capacities. Meter based Term licenses cannot work independently and always need to be installed on a Permanent license. They are not a replacement for the Permanent license.
Example scenario 1 Consider that a small-sized company books air tickets online for its customers. The company has one P9000 disk array of 75TB usable capacity. A Permanent license is installed on 01/01/2010 to monitor the 75TB usable capacity. Based on the heavy online booking trend during the December '09 to January '10 time frame due to Christmas and New Year celebrations, the company is expecting a surge in online booking traffic beginning December '10 and continuing until the end of the first week of January '11.
So, 50TB usable capacity is monitored every day beginning December'10 for the next 39 days. After the spike in usable capacity reduces to 75TB, P9000 Performance Advisor uses the existing Permanent license that is already installed. So, the company has managed the short duration spike in usable capacity with Meter based Term license and also retained the Permanent license to monitor the existing 75TB usable capacity.
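The TB-Days accounting in this scenario can be sketched as follows (a simplified model; the assumption that only usable capacity above the Permanent licensed capacity draws down the TB-Days balance follows the description above):

```python
def daily_tb_days_used(permanent_tb, usable_tb):
    """TB-Days consumed in one day: only the usable capacity above the
    Permanent licensed capacity draws down the Meter based Term balance."""
    return max(0, usable_tb - permanent_tb)

# Scenario 1: 75 TB Permanent license, usable capacity spikes to
# 125 TB for 39 days, with 2000 TB-Days installed in advance.
balance = 2000
for _ in range(39):
    balance -= daily_tb_days_used(75, 125)
print(balance)  # 50 TB-Days remain after the 39-day surge
```

After the spike subsides to 75TB, the daily draw-down returns to zero and the remaining TB-Days stay dormant, as described above.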
NOTE: After the installed TB-Days are activated, P9000 Performance Advisor verifies the remaining TB-Days every day after 1:00 PM and accordingly updates the TB-Days status on the License screen - License Status section. For more information on the License screen, see “License screen” on page 22. • If the installed TB-Days are used in the first half of a day, the TB-Days status is updated after 1:00 PM on the same day.
Column Headings - License Status section Description License Status Displays the status as Installed. End Date Displays 12/10/2010. Calculated as eight days starting from 12/03/2010. Consider that the usable capacity exceeds the 50TB Permanent licensed capacity by 10TB in the second half of 12/03/2010. As a result, P9000 Performance Advisor updates the above listed fields after 1:00 PM only on the next day (12/04/2010), though the 10TB-Days are already used from the 90TB-Days on 12/03/2010.
• • • • . . . On 02/08/2011, the License Capacity shows 50TB, –590TB-Days NOTE: If 90TB-Days are completely used in the second half of 12/11/2010, P9000 Performance Advisor enters 60 days grace period on the same day but updates the License screen - License Status section only after 1:00 PM on 12/12/2010. In this case, the License Capacity shows 50TB, –10TB-Days on 12/12/2010.
one day. In addition, the following fields are updated after 1:00 PM to reflect the latest data, which is as follows: • License Capacity: 50TB, +823TB-Days • License Status: Installed • Term (Days): 8 • End Date: 12/09/2010 Eight days count from 12/02/2010 If 100.5TB is used during the second half of 12/04/2010, P9000 Performance Advisor uses 101TB-Days and considers the 0.5 days as one day.
So, the TB-Days are used only when the additional usable capacity must be monitored. Generating licenses at the HPAC license key website Ensure that you have the registration number, which is required to generate a license. Generating licenses IMPORTANT: • The product license entitlement certificate includes a registration number, which is a unique identifier that helps you to generate a license key for P9000 Performance Advisor.
5. Provide the following details on the Array information input screen: • Enter the Array DKC serial number, which is a five digit number, such as 10900, 53036. • Select the Hardware platform from the list. The supported P9000 disk array models, such as the P9500 and the XP disk array models, such as the XP24000, XP20000, XP12000, XP10000, XP1024, and the XP128 are displayed for selection. 6. Click Next >>. The Requestor Information screen appears. 7.
5. Click Add License. CAUTION: After the licenses are installed, do not modify the date and time on the management station where P9000 Performance Advisor is installed. Modifying them may result in inaccurate configuration and performance collections. The following details are updated in the View License File Status section.
Click Refresh to view the latest data on the License screen.
3. Click View Details. The View License Detail section appears. The following image shows the license details for 53036, which belongs to the P9500 Disk Array Type. In addition to the details displayed in the License Status section, the following details specific to the installed license appear in the View License Detail section: Screen elements Description Displays the license type.
Screen elements Description Displays the available license capacity. • If you select an XP disk array record, this column always displays the Installed License Capacity value. • If you select a P9000 disk array record whose usable capacity is monitored using only a Permanent license, this column displays the Installed License Capacity value. • In case of Meter based Term licenses: 1.
Screen elements Description If you select an XP or a P9000 disk array record whose usable capacity is monitored using only a Permanent license, this column is blank as the Permanent license is for an unlimited duration. In case of Meter based Term licenses: • If you select a P9000 disk array record for which both the Permanent license and TB-Days of Meter based Term license are installed, and the installed TB-Days are dormant, this column is blank.
Viewing license history The View License History section displays the list of events generated on the View License screen for each license key. The time stamp when an event occurred is also displayed for each event record. You can search for events generated during a specific duration. Provide the start and end date and time, and click Find to view the events generated during the selected duration.
Because this is a short-term unplanned surge in storage requests, you can install TB-Days of Meter based Term license to monitor the additional usable capacity for the specified duration. To monitor 25TB for five days (at the rate of 25TB a day), generate and install 125TB-Days on 11/30/2010.
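The sizing rule used above (TB-Days to install equals the additional TB per day multiplied by the number of days) can be written out as a sketch:

```python
def tb_days_to_install(additional_tb_per_day, days):
    """TB-Days needed to cover an expected surge above the Permanent license."""
    return additional_tb_per_day * days

# From the example above: 25 TB of additional usable capacity for five days.
print(tb_days_to_install(25, 5))  # 125
```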
The daily reduction in TB-Days equals the additional usable capacity that caused the grace period to start. NOTE: Reduction or negative counting is only applicable for the installed TB-Days. It is not applicable for Permanent licenses. After the 60-day grace period, P9000 Performance Advisor stops configuration data collection for any additional usable capacities. It continues performance data collection for the existing usable capacity. Example scenario 8 Consider the following points: 1. 2.
With 110TB-Days, P9000 Performance Advisor ends the grace period and continues to monitor the 10TB usable capacity for another five days. When a fraction of a TB of additional usable capacity is monitored and the installed TB-Days are not sufficient, P9000 Performance Advisor considers it a capacity violation and enters a grace period of 60 days. In such a case, if you install the appropriate TB-Days, P9000 Performance Advisor ends the grace period for that particular P9000 disk array.
• 100TB-Days (5 days * 20TB) is required for P9000 Performance Advisor to continue monitoring the 20TB usable capacity for another five days. With 160TB-Days, P9000 Performance Advisor ends the grace period and continues to monitor the 20TB usable capacity for another five days. Violating licensed capacity After the 60-day grace period, P9000 Performance Advisor considers this a capacity violation and stops configuration data collection for any additional internal raw disk or usable capacity.
Removing Meter based Term licenses for P9000 disk arrays P9000 Performance Advisor removes the aggregate TB-Days of Meter based Term license. There is no option to remove the individual TB-Days of Meter based Term license. NOTE: Once a Meter based Term license is removed, it cannot be added again. However, another Meter based Term license can be installed.
2. In the View License Status section, select the P9000 disk array record for which you want to remove the Meter based Term license, and click Remove License. 3. In the Remove License dialog box, select METER from the License Type list. 4. Click Remove License(s). The Confirm Delete dialog box appears. 5. Click Yes. The message indicating the removal of the license appears on top of the Remove License dialog box.
If the Permanent license is removed while the Meter based Term license has a positive TB-Days count, P9000 Performance Advisor enters the grace period and the Meter based Term license stops working.
4 Collecting configuration and performance data This chapter discusses the following topics: • • • • “Introduction” on page 51 “Configuring host information” on page 53 “Configuration data” on page 56 “Performance data” on page 68 Introduction P9000 Performance Advisor interacts with the XP and the P9000 disk arrays through hosts that have the operating system specific P9000 Performance Advisor host agents installed.
NOTE: P9000 Performance Advisor also collects the real-time performance data from the XP and the P9000 disk arrays. For more information, see Chapter 5 on page 83. Array View screen NOTE: You can request updates from the host agents and perform configuration and performance data collection, only if you have logged into P9000 Performance Advisor as an Administrator or a user who is granted administrator privileges.
Screen elements Description Configuration Collection Displays the list of command device records in the Configuration Collection table. You can select a command device and perform a one-time configuration data collection, or schedule a configuration data collection for the corresponding XP or P9000 disk array. Performance Collection Displays the list of XP and P9000 disk array records in the Performance Collection table.
Requesting host agent updates Prerequisites Ensure that the following prerequisites are met: • Ensure that the version of the host agent installed on the host matches with the version of P9000 Performance Advisor installed on the management station. • Ensure that the command devices are already created on the XP and the P9000 disk arrays connected to your host, and configured to communicate with the host.
4. Click Request Info. The Request Info button is enabled only when you select the host agents. Use the Shift or the Ctrl key to select multiple host agent records. The request is executed in the subsequent data collection cycle. The following sequence of events occurs for the selected host agent: a. P9000 Performance Advisor retrieves the updated information from the host agent. This may take a few minutes depending on the number of LDEVs that are exposed to the host agent.
4. Click Remove Host. The Remove Host button is enabled only when you select a host agent record. P9000 Performance Advisor deletes the host agent record and logs a confirmation on the Event Log screen. When you remove a host agent, information about the command devices and the following data for the XP and the P9000 disk arrays connected to the host agent are also removed.
Screen elements Description Array (Array Name) Displays the DKC number and the user-friendly name of the XP or the P9000 disk array. Host ID Displays the system name of the host. Port Displays the port that is configured to communicate data between the command device (on an XP or a P9000 disk array) and the associated host agent. Cmddev Displays the ID of the LDEV that is configured as a command device. DeviceFile Displays the device file for the command device.
CAUTION: Ensure that the date and time on the management station and the hosts are synchronized with the local time zone to receive accurate configuration data. This condition is also applicable for the client systems that use the IE browser to access P9000 Performance Advisor on a management station, and the systems that have the CLUI software installed.
• If the license has expired or licensed capacities have exceeded the grace period for an XP or a P9000 disk array In such a case, P9000 Performance Advisor displays the following error message under the Configuration Collection tab: Configuration collection is stopped due to license violation for array Simultaneously, an event is also logged on the Event Log screen.
2. Click the Configuration Collection tab. The Configuration Collection table displays the list of command device records for all the XP and the P9000 disk arrays that are monitored by P9000 Performance Advisor. 3. Select the command device record corresponding to the XP or the P9000 disk array for which you want to collect the configuration data.
6. The next steps depend on the disk array and the mode of collection that you selected. If you selected an XP disk array and either of the following modes of configuration data collection: • The outband mode - In this case, manually enter the SVP IP address in the SVP IP Address text box and proceed to the next step to initiate the configuration data collection.
• • • • • “Filtering event records” on page 167 “Configuring email and SNMP settings” on page 92 “Starting real-time performance data collection” on page 85 “Viewing performance summary” on page 206 “Plotting charts” on page 264 Scheduling configuration data collection IMPORTANT: The schedule start time is set to the management station time where P9000 Performance Advisor is installed. Prerequisites For the set of prerequisites, see “Collecting configuration data” on page 59.
4. Select Collection Period as Recurring. Figure 4 on page 63 shows scheduling configuration data collection for 53036, which belongs to the P9500 Disk Array type. Figure 4 Configuration Data Collection . 5. Select one of the following as the Collection Schedule. By default, the collection is scheduled for every Sunday at 00:00 hours: • Hourly • Daily • Weekly • Monthly For more information on the above-mentioned collection schedules, see “Configuration collection schedules” on page 66.
6. Retain the Collection Type as Outband (default selection) if you want P9000 Performance Advisor to directly collect data from the XP or the P9000 disk array through the array SVP (not applicable for XP1024/128 Disk Arrays), and proceed to the next step. Select the Collection Type as Inband if you want the RAID Manager Library to collect the configuration data from an XP or a P9000 disk array and transfer it to P9000 Performance Advisor, and proceed to the next step.
7. The next steps depend on the disk array and the mode of collection that you selected. If you selected an XP disk array and either of the following modes of configuration data collection: • The outband mode - In this case, manually enter the SVP IP address in the SVP IP Address text box and proceed to the next step to initiate the configuration data collection.
• • • • • • • • • “Deleting configuration data collection schedules” on page 67 “Performance data” on page 68 “Providing user-friendly names for XP and P9000 disk arrays” on page 96 “Registering the XP or P9000 disk array SVP IP address in P9000 Performance Advisor” on page 96 “Filtering event records” on page 167 “Configuring email and SNMP settings” on page 92 “Starting real-time performance data collection” on page 85 “Viewing performance summary” on page 206 “Plotting charts” on page 264 Configuration
Collection Schedule Description Examples If the collection schedule is selected as Monthly, the Monthly Schedule appears with options for scheduling the collection on a particular date (Based on Date) or day (Based on Day) of a month. Every time the schedule is executed, P9000 Performance Advisor collects the configuration data for the last one month only. • If you want to schedule the collection on a particular date: • Select the Monthly Schedule as Based on Date, if it is not selected by default.
Collecting performance data After completing the configuration data collection for the XP and the P9000 disk arrays, schedule the performance data collection for the associated components, which belong to the following component types: • • • • • • • DKC Ports RAID Groups Ext RAID Groups THP pools Snapshot pools Cont. Access Journals You can create two performance data collection schedules for an XP or a P9000 disk array, as it enables you to frequently monitor the respective components.
Initially, when performance data collection is not yet configured for the XP and the P9000 disk arrays, the following details are displayed in the Performance Collection table, under the Performance Collection tab: Screen elements Description Array Displays the DKC number of the XP or the P9000 disk array. Host ID Displays the system name of the host. Port Displays the port that is configured to communicate data between the command device on an XP or a P9000 disk array and the associated host.
Creating performance data collection schedules IMPORTANT: • Only one schedule can be created on a selected command device. For better performance, select a maximum of two command devices that belong to different ports. • A schedule cannot be created for the same XP or P9000 disk array through two different host agents. • HP recommends that you allow one minute per 1,000 LDEVs for the management station to keep up with the collection.
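The one-minute-per-1,000-LDEVs guideline above implies a minimum collection frequency for a given LDEV count. A sketch (rounding partial thousands up to the next minute is an assumption, not stated in the guideline):

```python
import math

def min_frequency_minutes(ldev_count):
    """Minimum collection frequency, in minutes, under the guideline of
    one minute per 1,000 LDEVs (partial thousands rounded up)."""
    return math.ceil(ldev_count / 1000)

# A schedule covering 15,000 LDEVs should not run more often
# than every 15 minutes.
print(min_frequency_minutes(15000))  # 15
```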
3. Click Create. The Create button is enabled only when you select an XP or a P9000 disk array record under the Performance Collection tab.
6. In the respective component type lists, select the check boxes for the components to collect their performance data. The following component type lists are displayed: For an XP disk array, the DKC provides data on the CHIPs, ACPs, Cache, SLPR, CLPR, and the SM. DKC For a P9000 disk array, the DKC provides data on the MP blades, in addition to the data on the Cache, CLPR, and the SM. NOTE: SLPR does not exist in the P9000 disk arrays.
Figure 5 Performance Data Collection . 1 Resource type list. 7. Set the frequency in minutes for the DKC, RAID groups, and the port performance data collection by selecting the frequency from the respective Frequency list. 8. Select the check box for Stagger Schedule if you want to stagger the data collection time at different intervals.
10. Click Save for the changes to take effect. Click Cancel, if you do not want to configure a schedule for the current selection. Click Refresh to view the updated list of performance data schedules. The new schedule starts automatically. The following table provides the subsequent changes that occur in the Performance Data section for the selected XP or the P9000 disk array record. Screen elements Description Schedule Name Displays the new schedule name. Components Displays the selected components.
• “Starting real-time performance data collection” on page 85 Enabling performance collection schedules for automatic updates You can enable the performance data collection schedules to automatically collect the performance data for newly discovered RAID groups and ports. The new RAID groups and ports in an XP or a P9000 disk array are discovered during the scheduled configuration data collection.
While creating a performance data collection schedule:
1. The RAID groups and ports are not selected from the respective component type lists.
2. Instead, the ThP, snapshot, continuous access journal volumes, or the external RAID groups are selected from the respective component type lists.
3. The Add new RAID Groups, Ports to this schedule check box is selected.
• The newly discovered RAID groups and ports are not added to this performance schedule, as the Add new RAID Groups, Ports to this schedule check box is not selected. However, if a second schedule is created, you can still select the Add new RAID Groups, Ports to this schedule check box. The new RAID groups and ports are automatically added to the second schedule. • If a second schedule is not created, the list of new RAID groups and ports is still available for selection in the first schedule.
• “Stopping performance data collection” on page 78 • “Deleting performance data collection schedule” on page 79 • “Starting real-time performance data collection” on page 85 Editing performance data collection schedules You can add or remove components from an existing performance data collection schedule, and edit the frequency of data collection.
P9000 Performance Advisor stops the collection from the next collection cycle. The current performance data collection schedule stops only after the current data collection is complete, as per the selected collection schedule. For example, if you had configured an hourly collection at 11:00 a.m. and stopped the schedule at 11:30 a.m., the current performance data collection still continues as per the selected collection schedule and ends only at 12:00 p.m.
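The end-of-cycle behavior in the example above can be sketched as follows. The helper and its signature are hypothetical, used only to illustrate that a stop request never cuts short an in-progress cycle.

```python
from datetime import datetime, timedelta

def collection_end(cycle_start, frequency, stop_requested):
    """Return when collection actually ends: a stop requested during
    a cycle takes effect only once that cycle completes."""
    cycle_end = cycle_start + frequency
    return cycle_end if stop_requested <= cycle_end else stop_requested

# Hourly collection starts at 11:00; a stop at 11:30 still runs to 12:00.
start = datetime(2011, 1, 1, 11, 0)
print(collection_end(start, timedelta(hours=1), datetime(2011, 1, 1, 11, 30)))
```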
2. Click the Performance Data tab and select the XP or the P9000 disk array record for which you want to delete the associated performance data collection schedule. 3. Click Delete. The Delete button is enabled only when you select an XP or a P9000 disk array record under the Performance Collection tab. A dialog box appears prompting you to confirm whether you want to delete the schedule. 4. Click OK. The performance data collection schedule is permanently deleted.
3. If you type y at the prompt, you are further prompted to provide the minimum and maximum Java heap size values. The minimum heap size value must be greater than or equal to 512 MB, and the maximum heap size value must be less than or equal to 2048 MB. If heap size values are already set, the current minimum and maximum heap size values are also displayed for your reference. If you type n at the prompt, the command prompt window closes. 4.
The user must run the Resize Heap tool again and reset the value to a lower size.
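The heap-size bounds described in step 3 can be expressed as a small validation routine. This is a sketch of the documented rules only; the Resize Heap tool performs its own checking, and the function below is hypothetical.

```python
def validate_heap(min_mb, max_mb):
    """Check proposed Java heap sizes against the documented bounds:
    minimum >= 512 MB, maximum <= 2048 MB, and minimum <= maximum.
    Returns a list of problems; an empty list means the values are valid."""
    problems = []
    if min_mb < 512:
        problems.append("minimum heap size must be at least 512 MB")
    if max_mb > 2048:
        problems.append("maximum heap size must be at most 2048 MB")
    if min_mb > max_mb:
        problems.append("minimum heap size cannot exceed the maximum")
    return problems

print(validate_heap(512, 2048))  # []
```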
5 Collecting real-time performance data from XP and P9000 disk arrays This chapter discusses the following topics: • “Introduction” on page 83 • “Starting real-time performance data collection” on page 85 • “Stopping real-time performance data collection” on page 89 Introduction P9000 Performance Advisor performs real-time monitoring of the XP or the P9000 disk arrays, where performance data is collected at intervals as low as a few seconds (approximately five seconds per component).
IMPORTANT: • Real-time monitoring is supported for the P9000 disk arrays, such as the P9500 and the following XP disk array models: XP24000, XP20000, XP12000, and XP10000. • Real-time monitoring can be initiated on multiple XP and P9000 disk arrays. • The configuration data for an XP or a P9000 disk array is maintained by P9000 Performance Advisor on the management station. The same data is also maintained by the real-time server on the P9000 Performance Advisor host.
Figure 6 RealTimeChart screen
Screen elements Description
Start Collection tab:
• After the configuration data is collected for the XP or the P9000 disk arrays, they are displayed for selection in the Select Components list, under the Start Collection tab.
• The LDEVs and the RAID groups are displayed under the respective categories for each XP or P9000 disk array. Clicking a particular category displays the associated real-time metrics in the Choose Metrics list.
• If there have been configuration changes on the XP or the P9000 disk array for which you want to collect the real-time performance data, the following informational message appears when you select components and start plotting the real-time graphs: The configuration data available in the Real Time Server is not in sync with the configuration data available on Performance Advisor.
IMPORTANT: The following are important notes on the real-time performance data collection: • You can configure only one instance of the real-time performance data collection for an XP or a P9000 disk array through the connected host agent. You cannot use the same host agent for another real-time performance data collection until the current collection stops. However, if an XP or a P9000 disk array is connected to two host agents, configure separate real-time data collection through each of the host agents.
2. Click the + sign next to an XP or a P9000 disk array serial number to view the LDEVs and RAID Groups categories. The following image shows the RAID groups selection for 10055, which belongs to the XP12000 Disk Array type. Additionally, the following are displayed: 3. • HostAgent list: Displays the host agent that is connected to the selected XP or P9000 disk array. • Command Device list: Displays the command devices for the selected XP or P9000 disk array.
4. Select up to five LDEVs or RAID groups. Use the Shift or the Ctrl key for sequential or random selection of LDEVs or RAID groups. 5. Select the host agent name from the HostAgent box, if the XP or the P9000 disk array is connected to more than one host agent. Every host agent can accept only one instance of a real-time performance data collection request.
2. Click the Stop Collections tab. The Stop Collections table displays the following details for the XP or the P9000 disk arrays, or a combination of these arrays, for which real-time performance data collection is in progress: Screen elements Description Array ID The serial number of the XP or the P9000 disk array for which the real-time performance data collection is in progress. Component The type of component selected, which can be a RAID group or an LDEV.
6 Configuring common settings for P9000 Performance Advisor This chapter discusses the following topics: • • • • • • “Introduction” on page 91 “Configuring email and SNMP settings” on page 92 “Setting time zone for management station” on page 99 “Setting severity level” on page 98 “Registering the XP or P9000 disk array SVP IP address in P9000 Performance Advisor” on page 96 “Providing user-friendly names for XP and P9000 disk arrays” on page 96 In addition, this chapter also discusses the following topic
• Manage custom groups, where you create, view, modify, or delete the custom groups (Settings > Custom Groups). For more information, see “Managing custom groups” on page 103.
• Manage the fabricated LDEV records, where you modify or delete the incomplete LDEV records, and also replicate settings across the LDEV records (Settings > Data Grid Update). For more information, see “Managing fabricated LDEV records” on page 110.
• Manage P9000 Performance Advisor user profiles, where you create, modify, or delete the user profiles, and view their group properties. Settings
IMPORTANT: • The new email notification settings that you provide are automatically updated in the serverparameters.properties file. Hence, a manual reboot of the P9000 Performance Advisor management station is not required. • The Email Address is a mandatory field. Provide a valid destination email address that receives the email notifications when the alarms and reports are generated, or the performance data collection fails. For example, test1@xyz.
3. Configure the following settings on the Email Settings screen: SMTP Server Settings • The IP address or host name of the SMTP server that will be used for processing emails. The default SMTP server IP address is localhost. • The related port number (accepts only numbers). The default port number is 25. P9000 Performance Advisor uses the above settings to dispatch email notifications to the intended recipients when the alarms or reports are generated, or the performance data collection fails.
• The name of the customer for whom the report is generated. • The name of the consultant who is associated with the customer. • The location of the XP or the P9000 disk array for which the report is generated. This information is useful if the XP or the P9000 disk array is located in a different site, away from the management station. Data Collection Email Settings • A valid destination email address, as specified under Alarm Email Settings.
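The dispatch path described above, where the management station hands a notification to the configured SMTP server, can be sketched with Python's standard library. The defaults mirror the documented ones (localhost, port 25); the function names and the addresses are placeholders, not part of the product.

```python
import smtplib
from email.message import EmailMessage

def build_notification(subject, body, from_addr, to_addr):
    """Assemble a plain-text notification message."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg.set_content(body)
    return msg

def send_notification(msg, smtp_host="localhost", smtp_port=25):
    """Hand the message to the SMTP server; localhost and port 25
    are the documented defaults."""
    with smtplib.SMTP(smtp_host, smtp_port) as server:
        server.send_message(msg)
```

A real deployment would substitute the SMTP host, port, and addresses saved on the Email Settings screen.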
• “Configuring alarms and viewing alarms history” on page 141 • “Configuration data” on page 56 • “Generating, saving, or scheduling reports” on page 331 Providing user-friendly names for XP and P9000 disk arrays P9000 Performance Advisor enables you to provide unique, user-friendly names for the monitored XP and P9000 disk arrays.
automatically available for that XP or P9000 disk array when you initiate an outband mode of configuration collection. IMPORTANT: • For a P9000 disk array (such as the P9500) or for an XP24000 Disk Array, the IP address of the management station is also registered with the array SVP. • For a P9000 disk array (such as the P9500), it is recommended that you maintain separate SVP login credentials, which you can use for outband mode of configuration data collection.
4. Click Save Credentials. The SVP IP address, user name, and password are saved in P9000 Performance Advisor database. P9000 Performance Advisor also uses these credentials to validate the connection with the P9000 disk array. NOTE: On a few occasions, the SVP IP address, user name, and password are not saved. It might be because the SVP is offline. Wait for a few minutes and try again. 5. Click Register. The SVP IP address that was saved is also registered with the management station.
NOTE: This change affects only those messages that are created after you initiated the severity change. All messages that were logged before you set the severity level still remain in the P9000 Performance Advisor database and appear on the Event Log screen. To set the severity level:
1. Click Settings in the left pane.
2. Select User Settings. The User Settings screen appears.
3.
• “Setting severity level” on page 98 • “Setting the duration to predict the LDEV response time” on page 100 Setting the duration to predict the LDEV response time You can set the duration that P9000 Performance Advisor must use to predict the average read and write response time of LDEVs. Complete the following steps to select the duration: 1. Click Settings in the left pane. 2. Select User Settings. The User Settings screen appears. 3.
1. Attempts to restart the HP StorageWorks P9000 Performance Advisor Tomcat service 'n' number of times, where 'n' indicates the retry count that is specified. By default, the retry count is set to five, which means that five attempts are made to restart the HP StorageWorks P9000 Performance Advisor Tomcat service before a notification is dispatched. For more information on specifying the retry count, see Configuring retry count on page 103. 2.
NOTE: • If you configure the SMTP parameters but do not specify a retry count, the HP StorageWorks P9000 Performance Advisor Monitor service does not attempt to restart the HP StorageWorks P9000 Performance Advisor Tomcat service. Also, it does not dispatch any notification to the intended recipients.
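The retry behavior described above (five restart attempts by default, with a notification only after the final failure) can be sketched as follows. The callables are hypothetical stand-ins for the Monitor service's restart and notification steps.

```python
def restart_with_retries(restart_service, notify, retry_count=5):
    """Attempt a restart up to retry_count times (default 5, matching
    the documented retry count). Dispatch a single notification only
    if every attempt fails."""
    for _ in range(retry_count):
        if restart_service():
            return True
    notify("Tomcat service did not restart after %d attempts" % retry_count)
    return False
```

For example, a restart callable that always fails would be invoked exactly five times and trigger exactly one notification.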
• SMTP server IP address
• Destination email address (To Address)
• CC Address
• Source email address (From Address)
In the above list, it is mandatory to provide the SMTP server IP address, and the source and destination email addresses. The following is a sample PAMonitor_mail.properties file that shows the configured SMTP notification settings:
#mail server host address(mandatory)
mailserver=SMTPserver.server.com
#to address(mandatory)
toAddress=Destination.address@xyz.com,Destination1.address@xyz.
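A file in the PAMonitor_mail.properties layout can be read with a few lines of Python. This sketch simply illustrates the key=value format and the mandatory-settings check described above; the helper names are assumptions.

```python
def parse_properties(text):
    """Parse simple key=value lines, skipping blanks and '#' comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# mailserver, toAddress, and fromAddress are the documented mandatory keys.
MANDATORY = ("mailserver", "toAddress", "fromAddress")

def missing_mandatory(props):
    """Return the mandatory settings that are absent or empty."""
    return [key for key in MANDATORY if not props.get(key)]

sample = """\
#mail server host address(mandatory)
mailserver=SMTPserver.server.com
#to address(mandatory)
toAddress=Destination.address@xyz.com
fromAddress=Source.address@xyz.com
"""
print(missing_mandatory(parse_properties(sample)))  # []
```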
• View a graphical representation of the associated LDEVs performance for specific LDEV metric and duration of your choice. For more information, see “Plotting charts” on page 264 • Configure alarms on the associated LDEVs, so that P9000 Performance Advisor monitors and sends appropriate notifications to intended recipients.
5. Click Create Custom Group. The Create Custom Group button is enabled only when you select LDEV records in the Custom Groups table. The selected set of LDEV records are included in the custom group and the new custom group is listed under List of Custom Groups. You can view the custom group details by clicking Group Details.
• P9000 XP Continuous Access Synchronous is installed on an XP24000 Disk Array (primary storage server) to create a secondary copy of the production data. The production data is located on the primary volume (P-VOL) in the same XP24000 Disk Array. The secondary copy resides on the secondary volume (S-VOL) in an XP12000 Disk Array. • The Oracle database server is located on a P-VOL in an XP24000 Disk Array and the data is replicated onto two S-VOLs.
4. Click View. The View button is enabled only when you select LDEV records in the Custom Groups table. The Group Details screen appears providing the list of LDEVs added to the selected custom group. The Group Details for the Group displays the selected custom group's name. The following table describes the column headings in the Group Details screen. Table 5 Group Details screen Screen elements Description DKC Displays the IDs of the selected XP and P9000 disk arrays.
Screen elements Description Displays the following options to indicate whether or not the selected LDEV is an Ext-LUN (Ext-LDEV): • - (hyphen) = Normal LUN Ext-Lun • E = Ext-Lun • P = Ext-Lun provider (the selected LDEV is used as an Ext-LUN for another XP or P9000 disk array) Host Group Displays the host group name for the host. The host group name is a user-defined group on an XP or a P9000 disk array. ACP Pairs Displays the selected ACP pairs. RAID Group Displays the selected RAID groups.
3. To add the LDEV records to a custom group: a. Select a custom group from the list under List of Custom Groups. b. In the LDEV records table, select the check boxes for the LDEV records that you want to add to the custom group. Alternatively, use the Custom Groups filters to view specific set of the LDEV records. For more information on using filters, see “Creating custom groups” on page 104. c. Click Add. The Add button is enabled only when you select a custom group under List of Custom Groups.
Managing fabricated LDEV records P9000 Performance Advisor enables you to modify the fabricated or incomplete LDEV records that it gets from the RMLIB. These LDEV records contain no host to array connectivity data because of unknown host connections, and are displayed in a tabular format on the Data Grid Update screen. The modifications made to the fabricated LDEV records are automatically updated on all the P9000 Performance Advisor screens that display these LDEVs.
Screen elements Description Total No. of Records: Displays the total number of LDEV records that you can view on the Data Grid Update screen. This number is inclusive of all the LDEV records that are displayed on all the pages in the Data Grid Update screen. No. of Pages: Displays the total number of pages that you can view on the Data Grid Update screen. No. of records per page Displays the total number of records displayed on the current page of the Data Grid Update screen.
The existing list is filtered to display the set of fabricated LDEV records based on your selection. Related Topics • “Modifying records” on page 112 • “Applying Template” on page 114 Modifying fabricated LDEV records You can modify or delete the fabricated LDEV records, and also replicate values from an LDEV record to other LDEV records. You can perform these tasks on the LDEV records listed on the current page of the Data Grid Update screen.
4. Enter the values for the following in the text boxes located above the Data Grid Update table: • • • • • Host Target:LUN Volume Group Device File SSID Alternatively, click in the text boxes under the respective column headings in the Data Grid Update table and make the necessary changes.
4. Click Delete Records. A dialog box appears prompting you to confirm the removal of the selected LDEV records. 5. Click OK. The record is removed from the existing list of records. Related topics • “Querying for fabricated LDEV records” on page 111 • “Applying Template” on page 114 Using template IMPORTANT: • Ensure that the LDEV record used as a template immediately precedes the other set of LDEV records. • The Target:LUN and device file data cannot be replicated across the LDEV records.
Managing P9000 Performance Advisor user profiles After P9000 Performance Advisor is installed with the authentication type selected as Native, log in as an Administrator, create user accounts, and grant them privileges (administrator or user privileges). You can also log in as a storageadmin, who is an administrator user of CV XP and has the same privileges as the administrator user of P9000 Performance Advisor. The Security screen appears when you click Security under Settings in the left pane.
4. Enter the following details for the user in the popup window that appears:
• The name of the new user and a brief description about the user profile.
• A password.
• Enter the password again in the Confirm Password box.
• Assign the user to a group. The Group list displays Administrators and StorageAdmins (read and write access), and Users (read access) privileges.
5. Click OK to create the user. A new user record appears on the Users screen. By default, records are sorted in alphabetical order.
3. Select a user record from the list displayed under the Users tab. 4. Click Delete. Click Yes in the popup window that appears, to permanently delete the user record. Click No to retain the user record. Related Topics • “Creating a user record” on page 115 • “Changing password” on page 116 • “Viewing group properties” on page 117 Viewing group properties To view the properties of a group: 1. Click Settings in the left pane. 2. From the list that appears, select Security.
7 Monitoring performance of XP and P9000 disk arrays This chapter discusses the following topics: • “Introduction” on page 119 • “Configuring dashboard threshold settings” on page 122 • “Viewing dashboard” on page 128 Introduction P9000 Performance Advisor provides a dashboard, where you can view the overall usage status of the XP and the P9000 disk arrays. The overall usage status is based on the usage of individual components.
In addition, the average usage summary for components is also derived from the set threshold duration and verified against the threshold limits set for metrics in the particular category. Thereafter, the statistics are displayed on the Dashboard screen. IMPORTANT: • The threshold duration is the period during which P9000 Performance Advisor monitors the point in time and average usage of components, and determines the overall health of the XP or the P9000 disk array.
2. Click a status icon in the Frontend, Cache, Backend, or the MP Blade (applicable only for the P9000 disk arrays) category in the XP/P9000 Array Health section to view the corresponding average usage summary of individual components in the Statistics section. For more information, see “XP/P9000 array health” on page 130.
3. Select components and associated metrics in the Statistics section to plot the corresponding usage graphs in the Chart Work Area.
4. View the Component Information section, where the busiest and the least busy components are displayed. These components are associated with the corresponding port, RAID group, or MP blade selected in the Statistics section. You can plot their usage graphs in the Chart Work Area.
1. Do one of the following: Click Settings in the left pane. OR Click Edit Threshold on the Dashboard screen. The Dashboard screen appears by default when you launch P9000 Performance Advisor or when you click Dashboard in the left pane.
3. Enter the threshold value. When you set the threshold limits, P9000 Performance Advisor verifies the usage of components against the set threshold limits. Accordingly, the appropriate status icons and the average usage summary values are displayed on the Dashboard screen. • If you have not set the threshold limit or if you do not want to view the XP or P9000 disk array overall usage data for a particular category, enter –1 or 0 in the metric text box.
the P9000 disk arrays, and the average usage summary of components, specify the threshold limit for at least one metric in the respective category. The changes you make on the Threshold Settings screen are immediately reflected on the Dashboard screen. By default, P9000 Performance Advisor retrieves data for the past six hours from the time you saved the threshold settings. It considers the management station time to calculate the threshold duration.
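The disable convention noted above, where a threshold of -1 or 0 switches monitoring off for that metric, can be captured in a couple of small predicates. These helpers are illustrative only.

```python
def is_threshold_active(threshold):
    """A metric is monitored only when its threshold is positive;
    -1 or 0 disables monitoring for that metric."""
    return threshold > 0

def breaches(usage, threshold):
    """Report a breach only for metrics with an active threshold."""
    return is_threshold_active(threshold) and usage > threshold

print(breaches(80, 50), breaches(80, 0), breaches(80, -1))  # True False False
```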
1. Go to the Component Settings section on the Threshold Settings screen. 2. From the Maximum Components list, select the maximum number of consumers you want to view in the Component Information section of the Dashboard screen. 3. Select Ascending or Descending as the Sort by Average Response Time. 4. Click Save to update the consumer settings. The maximum X busiest consumers appear in the Component Information section on the Dashboard screen.
Screen elements Description
Cache: If the cache exceeds the defined threshold limit during the specified threshold duration for a cache metric, the status icon appears in the Cache category in the XP/P9000 Array Health section of the Dashboard screen. For example, if the cache write pending for the Write Pending (%) metric exceeds the defined threshold even once, the status icon appears in the Cache category of the XP/P9000 Array Health section.
Screen elements Description RG Util (%) The RG utilization threshold value indicates the average overall RAID group utilization that you define for an individual RAID group over the threshold duration. P9000 Performance Advisor uses this value to verify whether the average overall RAID group utilization of each RAID group is within or beyond the set threshold limit. The default threshold value is 50%. If the utilization of one RAID group exceeds the defined threshold, the status icon appears.
Sections Description
Statistics (Frontend, Cache, Backend, or MP Blade): Displays the statistics of the average usage summary of individual components for the category for which you click the status icon. For example, the Statistics section displays the average usage summary of ports and CHA MPs if you click the status icon in the Frontend category for an XP disk array. If the usage of a component exceeds the defined threshold limits during the threshold duration, the status icon appears in the Statistics section.
XP/P9000 array health The following table describes the different status icons that depict the overall health of the XP and the P9000 disk arrays in the Frontend, Cache, Backend, and the MP Blade (applicable for only the P9000 disk arrays) categories. Status icon Description Critical. Indicates that the usage of at least one component has crossed the set threshold limit during the specified threshold duration. Warning.
The overall usage status of an XP or a P9000 disk array in a category is based on the usage of components in that category. The usage data is collected only on those metrics whose threshold limits are set on the Threshold Settings screen. For example, assume that you have set the threshold limit for only the RG Seq Reads (IOPS) (Avg Seq Reads) metric in the Backend category.
• The average sequential backend write tracks on individual RAID groups
• The average utilization of a RAID group
• The average utilization of an ACP/DKA pair
• The average utilization of an MP blade
IMPORTANT:
• The average CHA MPs and the DKA MPs utilization metrics are applicable only for the XP disk arrays.
• The average MP blade utilization metric is applicable only for the P9000 disk arrays.
2.
Component levels Description
Components shown as black text: Can include the following:
• The components whose usage corresponding to a particular metric is at 95% of the threshold limit or higher during the specified threshold duration. The status icon in such cases appears in the appropriate category, if there are no other components that are over utilized in that category.
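The icon logic described above, where crossing a threshold is critical and reaching 95% of it is a warning, can be sketched like this. The status names and the worst-of aggregation are illustrative assumptions based on the descriptions in this chapter.

```python
def component_status(usage, threshold):
    """Classify one component: 'critical' when usage exceeds the set
    threshold, 'warning' when it reaches 95% of the threshold, and
    'normal' otherwise. A non-positive threshold means not monitored."""
    if threshold <= 0:
        return "normal"
    if usage > threshold:
        return "critical"
    if usage >= 0.95 * threshold:
        return "warning"
    return "normal"

def category_status(usage_threshold_pairs):
    """The category icon reflects the worst component status."""
    order = {"normal": 0, "warning": 1, "critical": 2}
    return max((component_status(u, t) for u, t in usage_threshold_pairs),
               key=order.get, default="normal")

print(category_status([(50, 100), (96, 100)]))   # warning
print(category_status([(101, 100), (96, 100)]))  # critical
```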
Category Metrics Description RG Seq Reads (IOPS) (Avg Seq Reads), RG NonSeq Reads (IOPS) (Avg NonSeq Reads), RG Writes (IOPS) (Avg Writes): Average of the frontend sequential and random I/Os on an individual RAID group in the XP or the P9000 disk array backend.
IMPORTANT: Combined backend transfers: In Thin Provisioned environments, the overall backend transfers at the RAID group level are reported using combined backend transfer metric. For a Thin Provisioned V-Vol where the ThP pool is associated with multiple RAID groups, the backend transfers are not tracked to the corresponding RAID group level. The backend transfers contributed by all V-Vols in a ThP pool are combined and reported as combined backend transfers for each participating RAID group.
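The combined-backend-transfer reporting described above can be illustrated as follows: the transfers of all V-Vols in a ThP pool are summed, and the same combined figure is attributed to every participating RAID group. The function and its inputs are hypothetical.

```python
def combined_backend_transfers(vvol_transfers, participating_raid_groups):
    """Sum the backend transfers contributed by all V-Vols in a ThP
    pool and report that combined value for each participating RAID
    group, since individual transfers are not tracked per RAID group."""
    combined = sum(vvol_transfers)
    return {rg: combined for rg in participating_raid_groups}

print(combined_backend_transfers([120, 80, 50], ["1-1", "1-2"]))
# {'1-1': 250, '1-2': 250}
```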
IMPORTANT: At a time, you can view the maximum X busiest consumers for only one frontend, backend, or MP blade record that you select in the Statistics section. To view the maximum X busiest consumers: 1. Based on your requirement, select a record corresponding to a port, RAID group, or an MP blade in the respective Frontend, Backend, or the MP Blade Statistics section. 2. Click Show Consumers. The maximum X busiest consumers are displayed in the Components Information section.
Metrics Description Block IO MBPS The frontend throughput in MB/s read and written to the LDEV during the specified threshold duration. RG Util (%) The average of the overall RAID group utilization of an individual RAID group associated with the LDEV. Backend Transfer The I/Os between the cache and the RAID groups during the specified threshold duration. For a P9000 disk array, the average utilization of an individual MP blade by the associated consumer is displayed under the Util % column.
1. Select a record corresponding to a port, CLPR, RAID group, or an MP blade in the Frontend, Cache, Backend, or the MP Blade Statistics section, or a corresponding component record from the Components Information section. While selecting the records, press the Shift key for sequential selection or the Ctrl key for random selection of multiple component records. 2. Click Plot Chart. The Plot Chart is enabled only when you select a component record.
3. Select the check box for the metric, for which you want to view the performance or usage graph of the selected component, and click OK. P9000 Performance Advisor plots the appropriate graphs in the Chart Work Area. The duration for which the data points are plotted in the chart depends on the threshold duration specified on the Threshold Settings screen. By default, the graphs are plotted for data points collected in the last 6 hours of the management station's time.
1 High watermark level.
8 Configuring alarms and managing events This chapter discusses the following topics: • • • • “Introduction” on page 141 “Configuring alarms and viewing alarms history” on page 141 “Managing alarm history” on page 158 “Viewing events” on page 166 Introduction P9000 Performance Advisor enables you to activate alarms on components, so that timely notifications can be dispatched to intended recipients when the performance of components rises beyond a particular limit.
IMPORTANT: You can configure and activate alarms on components only if you have logged into P9000 Performance Advisor as an Administrator, or a user who is granted administrator privileges.
1 Component selection tree, where you select components that belong to an XP or a P9000 disk array. Components such as LDEVs are also grouped in custom groups.
2 Choose Metrics box, where you select metrics for which the components' performance should be monitored.
3. In the Alarms table, select components that you want to monitor. Further, configure threshold and dispatch levels, alarm notification settings, and enable alarms on the components.
3 Alarms table.
4. After you enable alarms on components, P9000 Performance Advisor does the following:
a. Collects the latest performance values of components in every collection frequency cycle and compares them with the set threshold levels.
b. Based on whether the performance values have exceeded or dropped below the threshold level, P9000 Performance Advisor dispatches the appropriate alarm notifications.
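Steps a and b above can be sketched as a compare-and-dispatch pass. The three ascending threshold levels and the dispatch-level cutoff follow the behavior described in this chapter; the function names and the tuple layout are assumptions.

```python
def crossed_level(value, thresholds):
    """Return the highest threshold level (1, 2, or 3) that the latest
    performance value has crossed, or 0 if none. thresholds is an
    ascending (level1, level2, level3) tuple."""
    level = 0
    for i, limit in enumerate(thresholds, start=1):
        if value > limit:
            level = i
    return level

def should_dispatch(level, dispatch_level):
    """Dispatch a notification only when the crossed level is at or
    above the configured dispatch level (1, 2, or 3)."""
    return level >= dispatch_level > 0

# A value of 75 against levels (50, 70, 90) crosses Level-2.
print(crossed_level(75, (50, 70, 90)), should_dispatch(2, 2))  # 2 True
```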
3. Click Add Alarms. The Add Alarms button is enabled only when you select a component in the component selection tree. The records are automatically displayed in the Alarms table under the Alarms Configuration tab. Initially, when alarms are not yet configured on the selected components, the following informational message No alarms are configured appears above the Alarms table under the Alarms Configuration tab.
For a new component record, the following default values are displayed under the Alarm Configuration tab:
• NO under Active
• Selected XP array name under DKC/Grp (Array Name)
• Selected component under Resource
• Selected metric category under Metric Category
• Selected metric under Metric
• Not Defined under Level - 1, Level - 2, and Level - 3
• 1 under Dispatch Level
• administrator@localhost under Email Destination
Related Topics
• “Setting threshold and dispatch levels” on page 149
• “Config
• Deleting component records. For more information, see “Deleting records in the Alarms table” on page 156 If you want to configure notification and monitoring settings across component records, click Select All to select all the records in the Alarms table, and then make the changes. Click Clear All to clear the check boxes for the selected records. Filtering records in Alarms table There are two levels of filters to view records in the Alarms table.
Components based filtering is also possible when multiple components are selected across the XP and the P9000 disk arrays. However, the selected components must belong to the same component type. Use the Shift or the Ctrl key for selecting multiple components.
• RAID Group 1–5 and RAID Group Total IO – Frontend metric • RAID Group 1–5 and RAID Group Total MB – Frontend metric • RAID Group 1–5 and RAID Group Sequential Read Tracks – Backend metric If you want to configure alarm settings only on RAID group, 1–3 for the metric, RAID Group Total IO – Frontend, select the metric as RAID Group Total IO – Frontend from the Metrics list and Passive from the Alarms Status list.
4. In the text box under Dispatch Level, specify the threshold level beyond which P9000 Performance Advisor should trigger an alarm and dispatch notifications. • The value 1 in the text box under Dispatch Level corresponds to Level - 1. • The value 2 in the text box under Dispatch Level corresponds to Level - 2. • The value 3 in the text box under Dispatch Level corresponds to Level - 3.
recipient email addresses for all alarm notifications, see “Configuring email and SNMP settings” on page 92. Prerequisites Ensure that the following prerequisites are met before you configure alarm settings: • A valid source email address, and IP and port addresses of the SMTP servers are specified. For more information, see “Configuring email and SNMP settings” on page 92. P9000 Performance Advisor uses the specified SMTP server details to dispatch email notifications to the intended recipients.
3. To receive an email notification, enter the email address in the text box under Email Destination. By default, email notifications are sent to administrator@localhost, which is the common destination email address for all alarm notifications. This email address is valid until: • You specify a different destination email address on the Email Settings screen. The alarm notifications generated after this change are redirected to the new destination email address.
Related Topics
• “Adding or removing metric values” on page 144
• “Setting threshold and dispatch levels” on page 149
• “Establishing scripts for alarms” on page 153
• “Enabling or disabling alarms” on page 154
• “Applying a template” on page 155
• “Deleting records in the Alarms table” on page 156
• “Filtering records in Alarms History table” on page 162
• “Viewing graph of metric value's performance” on page 165
• “Filtering event records” on page 167
Establishing scripts for alarms
In addition to confi
Alternatively, copy the script location from an existing record and apply it across multiple other records. For more information, see “Applying a template” on page 155. Sample script file The following is an example of a script file: C:/Temp/a.xml. The format of the XML file should be as follows:
• “Setting threshold and dispatch levels” on page 149
• “Configuring alarm notifications” on page 150
• “Establishing scripts for alarms” on page 153
• “Applying a template” on page 155
• “Deleting records in the Alarms table” on page 156
• “Filtering records in Alarms History table” on page 162
• “Viewing graph of metric value's performance” on page 165
• “Filtering event records” on page 167
Applying a template
You can manually configure the threshold and dispatch settings, and alarm notification settin
5. Click Apply Template. If required, modify the alarm settings copied to the Apply Template section and then apply the updated settings to other records in the Alarms table. The configuration settings of the previously selected record are applied to all the other newly selected records. If you do not want to retain the copied settings, click Clear Template to clear the selection in the text boxes in the Apply Template section.
3. Click Delete. The records are permanently removed from the Alarms table. In the Alarm History table, a new record is displayed for this component and the Level is shown as Closed, which implies that there is no further activity related to this component record.
2. In the Physical LDEV text box, enter the cu:ldev format of the LDEV that you want to search and click the Search icon. The component selection tree for the XP or the P9000 disk array that has the matching LDEV component automatically expands to display the LDEV highlighted for your reference. (If the component list for the selected XP or the P9000 disk array is large, you may have to use the scroll bar to navigate through the list of components to view the matching component).
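The cu:ldev notation above identifies an LDEV by its control unit (CU) number and device number. As an illustration only, the following sketch parses such an identifier; the hexadecimal interpretation follows the usual XP/P9000 convention, and the function name is hypothetical (the product performs this lookup internally):

```python
def parse_cu_ldev(identifier):
    """Parse a cu:ldev identifier such as "00:3A" into numeric
    control-unit and LDEV numbers. Hexadecimal interpretation is
    assumed here; this helper is illustrative and not part of the
    P9000 Performance Advisor product.
    """
    cu_part, ldev_part = identifier.split(":")
    return int(cu_part, 16), int(ldev_part, 16)

print(parse_cu_ldev("00:3A"))  # (0, 58)
```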
2. Time Updated: The last time when the component was monitored for a change in performance value (if the performance of a component exceeded or dropped below the set threshold level).
3. Time Dispatched: The time when the alarm notifications were dispatched.
The current performance value of a component is also displayed. Initially, the message No records found matching the given filter criteria is displayed if there are no component records posted in the Alarm History table.
IMPORTANT: The time shown under Time Updated is in sync with the data collection cycle frequency.
If the performance value of a component drops below the set threshold value, P9000 Performance Advisor does the following:
1. Posts a new record and displays the time of posting under Time Posted
2. Dispatches an alarm notification of type P9000 Alarm – Good Information alarm to the intended recipient
3. Displays the time of dispatch under Time Dispatched
4.
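The recovery behavior just described can be sketched as event-handling pseudologic. The record fields and alarm-type string follow the description above; the function name and the list standing in for the Alarm History table are hypothetical, not the product's internals:

```python
import time

def on_performance_sample(value, threshold, history):
    """Illustrative sketch of the recovery sequence described above:
    when a component's value drops below its threshold, a new record
    is posted (Time Posted) and a "P9000 Alarm - Good Information"
    notification is dispatched (Time Dispatched). `history` stands in
    for the Alarm History table; all names here are assumptions.
    """
    if value < threshold:
        record = {
            "Time Posted": time.time(),
            "Alarm Type": "P9000 Alarm - Good Information",
        }
        # Dispatch the notification, then record when it went out.
        record["Time Dispatched"] = time.time()
        history.append(record)
        return record
    return None  # still above threshold: no recovery record
```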
Screen elements and descriptions:
• DKC/Grp (Array Name): Displays the array model to which the selected component belongs.
• Array Type: Displays the array type to which the selected array model belongs.
• Metric: Displays the metric for which a component is monitored. When you select the All option in the Metrics list, the alarm records configured on the selected component are displayed in the Alarms table.
Screen elements and descriptions:
• Email Status, SNMP Status, Script Status: Display whether the alarm notifications through email or SNMP were successful, or if errors occurred.
• If you select the P9000 disk array DKC, the records pertaining to the respective components on which alarms are configured appear in the Alarms History table.
Filtering based on Alarm History filters
After filtering the existing set of records in the Alarm History table for specific components, you can further filter them using the second level of filters, the Alarm History filters, which further refine the search and provide specific results.
Screen elements and descriptions:
• Error Status list: Displays the following error types:
  • Email errors
  • SNMP errors
  • Script errors
  • All errors
  • No errors
Select one of the above-mentioned error types to filter records and view the status of the respective alarm and SNMP notifications, and script executions. If you select Email errors, SNMP errors, Script errors, or All errors, P9000 Performance Advisor returns anything that is non-zero for these selections.
Screen elements and descriptions:
• Alarm Type list: Displays the following options:
  • All: This option is for viewing both the serious and the recovery alarms.
  • Recovery: This option is for viewing records that are logged for alarm notifications dispatched after the performance of a component dropped below the set threshold limit.
• Start Time, End Time boxes
value is updated until the component's performance rises above or drops below the set threshold level. Once the performance value rises above or drops below the threshold, a new record is posted in the Alarms History table and a new graph is generated as and when P9000 Performance Advisor retrieves the latest performance value. The performance graph also displays the Dispatch Threshold value, which is the threshold value at which an alarm was configured to be dispatched.
IMPORTANT: By default, the Event Log screen displays records for events that have been generated in the last 24 hours.
1 Event Log table, where all the events generated are displayed. The records logged contain the following details for an event:
• Time when the event was logged.
• Type of event logged.
• Severity of the event.
• Description.
In addition, view the following details on the Event Log screen:
• Historic data (data older than 24 hours) by specifying a date range for viewing the data.
1. Click Event Log in the left pane. The Event Log screen appears. By default, records for events logged in the last 24 hours are displayed. 2. Enter text in the Search Text box based on which you want to filter the event records. The text can be a combination of alphanumeric characters. 3. Click Search. The event records are filtered and only those records that have the matching text are displayed on the Event Log screen.
The existing list is filtered to display the set of event records that match the specified filter criteria. Click Clear if you want to remove the current search and view all the event records.
Deleting event records
To delete event records:
1. Click Event Log in the left pane. The Event Log screen appears. By default, records for events logged in the last 24 hours are displayed.
2. Select the event records that you want to remove.
Configuring alarms and managing events
9 Managing the P9000 Performance Advisor database
This chapter discusses the following topics:
• “Introduction” on page 171
• “Configuring database size” on page 173
• “Purging data” on page 175
• “Creating and viewing Export DB CSV files” on page 178
• “Archiving data” on page 188
• “Importing data” on page 191
• “Deleting logs for archival and import activities” on page 194
Introduction
P9000 Performance Advisor uses Oracle as its database.
IMPORTANT: You must log on to P9000 Performance Advisor as an Administrator or a user with administrator privileges to configure, purge, archive, or import the P9000 Performance Advisor database. You also need this privilege to view or delete Export DB schedules. Database-related tools or functionalities should be executed with the same privilege that was used to install P9000 Performance Advisor. If you are trying to execute the tools, ensure that you are a member of the ORA_DBA Windows group.
• “Manually increasing the database size” on page 174
• “Manually purging the data” on page 176
• “Purging older data” on page 176
• “Automatically purging data” on page 177
• “Creating and viewing Export DB CSV files” on page 178
• “Archiving data” on page 188
• “Importing data” on page 191
• “Deleting logs for archival and import activities” on page 194
Configuring database size
You can increase the P9000 Performance Advisor database size based on the disk space available on the management station, wher
Where X refers to the current disk space allocated for the database. For example, if the available disk space is greater than 21 GB + 3 GB (the disk space needed for P9000 Performance Advisor to initiate Auto Grow + 3 GB), the allocated database size is automatically increased by an additional 2 GB. Simultaneously, the following prediction of the time taken for the database to grow to the specified size is also displayed under DB Configuration/Purge: Given current data storage rates, DB grow in less than X hours.
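The Auto Grow decision described above can be sketched as follows. This is an illustrative reading of the documented example only: the 3 GB headroom and 2 GB increment come from that example, and the function name is hypothetical, not the product's implementation:

```python
def autogrow_allocation_gb(available_disk_gb, allocated_db_gb,
                           headroom_gb=3, increment_gb=2):
    """Illustrative sketch of the Auto Grow rule described above.

    The allocation grows by `increment_gb` only while free disk space
    exceeds the current allocation plus the headroom the product
    reserves before initiating Auto Grow (3 GB in the documented
    example). All names here are assumptions for illustration.
    """
    if available_disk_gb > allocated_db_gb + headroom_gb:
        return allocated_db_gb + increment_gb
    return allocated_db_gb

# Documented example: 21 GB allocated, growth needs free space > 24 GB.
print(autogrow_allocation_gb(25, 21))  # 23
print(autogrow_allocation_gb(23, 21))  # 21
```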
To manually specify the amount of available disk space on your management station that you want to allocate for the database: 1. Click Database Manager in the left pane. The Database Manager screen appears. By default, the DB Configuration/Purge is enabled. 2. From the Configured Maximum Database Size list, select the disk space that you want to allocate.
CAUTION: The data that is purged cannot be recovered. It is permanently deleted from the P9000 Performance Advisor database. Hence, purge data only when you are absolutely sure that the data is no longer required. Also, P9000 Performance Advisor activities, such as plotting charts and collecting data might be impacted when either the manual or auto purge is in progress. Alternatively, if you want to archive data before purging it, use the archival export functionality.
4. Click Purge. A dialog box appears prompting you to confirm the deletion of records. 5. Click OK. P9000 Performance Advisor deletes the performance data available prior to the current specified date in the database.
• “Automatically increasing the database size (AutoGrow)” on page 173
• “Manually increasing the database size” on page 174
• “Manually purging the data” on page 176
• “Purging older data” on page 176
• “Creating and viewing Export DB CSV files” on page 178
• “Archiving data” on page 188
• “Importing data” on page 191
• “Deleting logs for archival and import activities” on page 194
• “Migrating data to another management station” on page 194
Creating and viewing Export DB CSV files
P9000 Performance Advis
• The performance data collection interval time stamps.
• The data for the following metrics:
  • RIO Read Cache Hits, RIO Reads, RIO Write Cache Hits, and RIO Writes.
  • SIO Read Cache Hits, SIO Reads, SIO Write Cache Hits, and SIO Writes.
  • CFW Reads, CFW Read Cache Hits, CFW Writes, and CFW Write Cache Hits.
  • DFW Count, DFW Normal Count, and DFW Sequential Count.
  • Total IO, Inhibit Mode IO Count, and Bypass Mode IO Count.
IMPORTANT: Since the CHIP/CHA and the ACP/DKA MPs are moved to the MP blades in the P9000 disk arrays, their MP utilization metrics are not applicable for the P9000 disk arrays.
port_exportDB-array_serial_number_.csv
This file includes the following details:
• The XP or the P9000 disk array serial number for which the report is generated.
• The port IDs on the XP or the P9000 disk array.
The MP busy time indicates the time taken by an MP blade to process the request it receives from the associated processing type. rgutil_exportDB-array_serial_number_.csv This file includes the following details: • • • • The XP or the P9000 disk array serial number for which the report is generated. The RAID group IDs on the XP or the P9000 disk array. The performance data collection interval time stamps.
Creating Export DB CSV files
IMPORTANT:
• If you have logged in with user privileges, you cannot schedule the export DB activity.
• Only a single .csv file is created for the XP1024/128 Disk Arrays.
• 020000 is the supported version for the P9000 disk arrays, such as the P9500, and the following XP disk arrays: XP24000, XP20000, XP12000, XP10000, and the SVN Disk Arrays.
• 020000 is also the supported version if you want to view the external LUN information.
3. Based on your requirement, select the Collection Period as One Time or Recurring.
• If you select the Collection Period as One Time, proceed to step 4.
• If you select the Collection Period as Recurring, the following schedule options are enabled:
  • Collection Schedule: Displays Daily, Weekly, and Monthly. By default, Weekly is selected as the collection schedule. The corresponding Day of the Week list displays the week days.
6. Select the check box for Human Readable Format, if you want to view the data for LDEVs in the cu:ldev format.
7. Select the check box for Version Number to enable the corresponding list that displays the following supported versions based on the XP or the P9000 disk array type that you select:
• 020000
• 016000
• 010600
• 010500
The following image shows scheduling the export DB activity for 53036, which belongs to the P9500 Disk Array type.
8.
9. Select the check box for the RG Utilization, if you want to view the percentage of utilization for the RAID groups. This option can be used only when the Response Time check box is selected and the supported versions are 016000 or 020000. 10. Select the check box for Display LDEV's of the Journal, if you want to view all the LDEVs that belong to a journal pool. 11. Select the Start Time and End Time, if it is a one-time export activity.
• “Automatically increasing the database size (AutoGrow)” on page 173
• “Manually increasing the database size” on page 174
• “Manually purging the data” on page 176
• “Purging older data” on page 176
• “Automatically purging data” on page 177
• “Archiving data” on page 188
• “Importing data” on page 191
• “Deleting logs for archival and import activities” on page 194
• “Migrating data to another management station” on page 194
• “Generating, saving, or scheduling reports” on page 331
Importing data to MS
the corresponding schedule details for the Export DB schedules are also displayed in the Scheduled Export DB tasks section, under the View Exported/Scheduled Exported DB Files tab. The following image shows the .csv files created for 53036 and 53046, which belong to the P9500 Disk Array type. IMPORTANT: • The name of the user who created the report is displayed under User Name.
• “Automatically purging data” on page 177
• “Archiving data” on page 188
• “Importing data” on page 191
• “Deleting logs for archival and import activities” on page 194
• “Migrating data to another management station” on page 194
• “Generating, saving, or scheduling reports” on page 331
Deleting Export DB reports and schedules
IMPORTANT: You can delete a schedule record in the Scheduled Export DB tasks section only if you have logged in to P9000 Performance Advisor as an Administrator or a user with adm
IMPORTANT: • After the data is archived, it is permanently deleted from the P9000 Performance Advisor database and the free disk space is released back to the database. If you want to use the archived data for an XP or a P9000 disk array, import the corresponding .dmp files. Also, perform a fresh configuration data collection for that XP or the P9000 disk array on the management station, where you performed the import operation.
5. Click Export. P9000 Performance Advisor archives data for the specified duration. As part of the archival process, P9000 Performance Advisor does the following: a. Displays an informational message that the export for the selected array is successfully initiated and starts exporting the data. b. Logs two records under Export data for the date and time when the archival is complete. c. Creates two .dmp files and displays their names under File Name.
• “Deleting logs for archival and import activities” on page 194
• “Migrating data to another management station” on page 194
Importing data
You can import the archived data to another management station or back to the same management station from where the data was initially exported.
CAUTION:
• The import operation fails if there is not enough free space in the database to accommodate the imported data.
IMPORTANT: The following are a few important points: • After importing performance data for an XP or a P9000 disk array, ensure that you perform a fresh configuration data collection for that XP or the P9000 disk array on the target management station, as the archival process only exports the performance data.
3. Based on the XP or the P9000 disk array for which you want to import its performance data, select the relevant file from the list displayed in the Archive Import section. For example, PA53036_12OCT2008_20.07.32_1217826540130_1217826600138.DMP NOTE: translates to %PADB_HOME% in this context of importing data. 4. Click Import. Based on whether the import is for an XP or a P9000 disk array, P9000 Performance Advisor does the following: a.
• “Automatically increasing the database size (AutoGrow)” on page 173
• “Manually increasing the database size” on page 174
• “Manually purging the data” on page 176
• “Purging older data” on page 176
• “Automatically purging data” on page 177
• “Creating and viewing Export DB CSV files” on page 178
• “Archiving data” on page 188
• “Deleting logs for archival and import activities” on page 194
• “Migrating data to another management station” on page 194
Deleting logs for archival and import activities
IMP
IMPORTANT:
• To use the Backup utility, ensure that the same version of P9000 Performance Advisor is installed on both the source and target management stations.
• Do not modify the default settings that are configured for the P9000 Performance Advisor database at the time of installation or upgrade.
• If you have already configured the serverparameters.properties file on the target management station, it will be replaced with the serverparameters.
• Before restoring the database, increase the Configured Maximum Database Size of the target database by a value equal to the sum of the current target database size + size of the database that is to be restored. If the current database size is 5 GB and the size of the database to be restored is 12.452 GB, change the Configured Maximum Database Size under the DB Configuration/Purge tab to a size greater than 5 GB + 12.452 GB, which is 17.452 GB. So, increase the database size to 18 GB.
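The sizing rule above can be expressed as a small calculation. This is an illustrative helper only; the function name is hypothetical and the rounding to the next whole GB follows the worked example (17.452 GB becomes 18 GB):

```python
import math

def restored_max_size_gb(current_db_gb, restore_db_gb):
    """Illustrative helper for the sizing rule described above: the
    Configured Maximum Database Size must be strictly greater than
    the current database size plus the size of the database being
    restored, rounded up to a whole GB. Hypothetical name, not a
    product utility.
    """
    required = current_db_gb + restore_db_gb
    # Strictly greater than the sum, so round down and add one GB.
    return math.floor(required) + 1

print(restored_max_size_gb(5, 12.452))  # 18
```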
2. Based on the kind of backup done, select the appropriate backup option from the list displayed: • DKC • Time • All IMPORTANT: You must select the same backup option that you had previously selected for taking backup of data. For example, if you have backed up data for a specific DKC ID, you must select DKC from the list of backup options while restoring the data. Selecting a different option, such as Time or All results in an error and data restore will not proceed. 3. Click Restore.
NOTE:
• If you have saved the P9000 Performance Advisor database in a different location during installation, navigate to that location.
• The target-path that you specify must not include spaces in the file location path.
• To restore your files, enter: %HPSS_HOME%\bin\backuputility -restore target-path
Where target-path is the location where you want to restore the files. You can also restore data for an XP or a P9000 disk array DKC, or a particular duration.
• Ldev_Space (per LDEV) = 0.0002 MB
• Port_Space (per port) = 0.00008 MB
• Dkc_Space (per collection) = 0.0002 MB
• SECONDS_PER_DAY = 86400
The consolidated free space that is available for P9000 Performance Advisor is the total sum of the free space available on all the XP and the P9000 disk arrays monitored by P9000 Performance Advisor.
Example 1.
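The guide's worked example is not reproduced in this excerpt. As an illustration of how the per-collection constants above might combine into a daily growth estimate, the following sketch is one plausible calculation; the product's actual formula is not shown here, so treat both the function and the combination as assumptions:

```python
SECONDS_PER_DAY = 86400

# Per-collection space constants from the list above (MB).
LDEV_SPACE_MB = 0.0002    # per LDEV, per collection
PORT_SPACE_MB = 0.00008   # per port, per collection
DKC_SPACE_MB = 0.0002     # per collection

def space_per_day_mb(num_ldevs, num_ports, collection_interval_s):
    """Hypothetical estimate of performance-data growth per day for
    one array, combining the documented constants. How P9000
    Performance Advisor actually combines them is not stated in this
    excerpt; this is an assumption for illustration only.
    """
    per_collection = (num_ldevs * LDEV_SPACE_MB
                      + num_ports * PORT_SPACE_MB
                      + DKC_SPACE_MB)
    collections_per_day = SECONDS_PER_DAY / collection_interval_s
    return per_collection * collections_per_day

# e.g. 1000 LDEVs, 32 ports, 15-minute (900 s) collection interval
print(round(space_per_day_mb(1000, 32, 900), 2))
```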
Managing the P9000 Performance Advisor database
10 Viewing XP and P9000 disk array components
This chapter discusses the following topics:
• “Introduction” on page 201
• “Viewing performance summary” on page 206
• “Viewing XP and P9000 disk array summary” on page 211
• “Volume Information” on page 212
• “Advisory on CLPR utilization” on page 215
• “Viewing CHIP/CHA data” on page 215
• “Viewing ACP/DKA data” on page 220
• “Viewing MP blade utilization for P9000 disk arrays” on page 224
• “Viewing data on Smart and ThP pools for P9000 disk a
NOTE: The CHIPs and ACPs are applicable only for the XP48, XP128, XP10000, XP12000, and the XP20000 Disk Arrays. They are replaced by the CHAs and the DKAs for the XP24000 Disk Array and the P9000 disk arrays, such as the P9500. To view the component data on the Array View screen: 1. Click Array View in the left pane. 2. Click the + sign for Arrays. The list expands to display all the XP and the P9000 disk arrays that are monitored by P9000 Performance Advisor.
Figure 12 Array View screen . Further, click the + sign for an XP or a P9000 disk array in the component selection tree to view the respective component nodes. Click each component node to view the performance or utilization summary of all its components. Click each component under a particular component node to view the individual performance or utilization data. For example, clicking CHA/DKA for a P9000 disk array displays the performance summary of all its CHA ports and the DKAs.
Component selection tree Description Documentation Links For XP disk arrays For P9000 disk arrays “Viewing ACP/DKA data” on page 220 Yes No “Viewing LDEV data” on page 238 Yes Yes FrontendIO Provides the list of busiest frontend LDEVs and the ports associated with the LDEVs “10 busiest LDEVs/Ports” on page 231 Yes Yes BackendIO Provides the list of busiest backend LDEVs and the ports associated with the LDEVs “10 busiest LDEVs/RAID groups” on page 232 Yes Yes Provides the following deta
Component selection tree Description Documentation Links For XP disk arrays For P9000 disk arrays “RAID Group summary” on page 233 Yes Yes “Port summary” on page 235 Yes Yes “Viewing MP blade utilization for P9000 disk arrays” on page 224 No Yes “Viewing data on Smart and ThP pools for P9000 disk arrays” on page 227 No Yes Provides the following details for a RAID group: • Performance summary for all the related metrics RG Summary • Current configuration, which includes the component type
Component selection tree Description Documentation Links For XP disk arrays For P9000 disk arrays No Yes Provides the following details for the installed CHA and the DKA: • Average performance derived from the overall average performance of all the ports in the CHIP or the RAID groups in the DKA • Average performance of individual ports for a CHA CHA/DKA • Average performance of individual RAID groups for a DKA • “Viewing CHIP/CHA data” on page 215 • “Viewing ACP/DKA data” on page 220 • Combined
Performance View screen elements Description • DKC Time: Displays the time stamp of the latest DKC performance data collection • RAID Group Time: Displays the time stamp of the latest RAID group performance data collection General group box • Port Time: Displays the time stamp of the latest port performance data collection Click General to view the XP or the P9000 disk array utilization summary. For more information, see “Utilization Summary” on page 213.
Performance View screen elements Description For an XP disk array, the Bus/Path Util % group box displays the CHIP/CHA utilization and the ACP pair utilization for the cache memory bus and the shared memory bus. Bus/Path Util % group box For a P9000 disk array, the Bus/Path Util % group box for the CHIP/CHA utilization and the ACP pair utilization are displayed under the respective Frontend Total Avg group box and the Backend Total Avg group box.
Performance View screen elements Description Displays the overall average sequential and non-sequential reads, and writes for an ACP pair. For more information, see “Viewing ACP/DKA data” on page 220. In addition, the combined backend transfer value is displayed (only for XP24000 and P9000 disk arrays), which is the sum of backend transfers happening on all the ThP pools served by a particular DKA.
Performance View screen elements MP Blades Util % group box Description Displays the average utilization of an MP blade, which is calculated as the utilization of all the individual processors in the MP blade. The MP blades are grouped based on the clusters to which they belong. For more information, see “Viewing MP blade utilization for P9000 disk arrays” on page 224. NOTE: The MP blade component is not applicable for the XP disk arrays.
IMPORTANT:
• The MIX CHIP displays only eight ports and four MPs though there are eight MPs on that board. The remaining four behave as ACP MPs.
• If performance data is collected separately for the DKC, ports, and the RAID groups through two different schedules, all the metrics display the latest data as received by the management station from either of the schedules. For more information about schedules, see “Data Collection Configuration” on page 51.
Screen elements Description Volume Information The Volume Information displays the summary of all the components for the selected XP or the P9000 disk array. A list of components and their numbers are displayed. Initially, N/A is displayed beside each component as the configuration collection has not yet been initiated. For more information on configuration summary, see “Volume Information” on page 212.
Related Topics
• “Viewing performance summary” on page 206
• “Advisory on CLPR utilization” on page 215
• “Viewing CHIP/CHA data” on page 215
• “Viewing ACP/DKA data” on page 220
• “Viewing MP blade utilization for P9000 disk arrays” on page 224
• “Viewing data on Smart and ThP pools for P9000 disk arrays” on page 227
• “Utilization Summary” on page 213
• “10 busiest LDEVs/Ports” on page 231
• “10 busiest LDEVs/RAID groups” on page 232
• “RAID Group summary” on page 233
• “Port summary” on page 235
• “View
• CHA MPs and the associated ports. The port type, such as Fibre, Ficon, Escon, or FCoE (applicable only for P9000 disk arrays), is also displayed beside the port ID.
In addition, the utilization summary includes the following for a P9000 disk array:
• Cache usage.
• Bus utilization.
• MP blade utilization, which includes the following:
  • MP blade IDs.
  • DKCs, cluster #, and the blade locations for the MP blades.
  • The MPs on the MP blade and each MP's utilization percentage.
Advisory on CLPR utilization P9000 Performance Advisor provides an advisory on the usage of individual CLPRs in an XP or a P9000 disk array. The advisory is based on the data collected for the past one week. The following are the scenarios for which an advisory is created: • If the cache for a CLPR is less utilized, the advisory suggests that you consider re-allocating portion of the cache to the other CLPRs.
CHIPs/CHAs. You can also click CHIP for an XP disk array or CHA/DKA for a P9000 disk array in the component selection tree. The summary is displayed in the CHIP/CHA summary table for the XP disk arrays and the CHA summary table for the P9000 disk arrays (see following images).
The following table describes the CHIP/CHA summary table for an XP disk array and the CHA summary table for a P9000 disk array. CHIP/CHA summary table for XP disk arrays includes... CHA summary table for P9000 disk arrays includes... The CHA name The CHIP or the CHA name Example: CHA-1F, 1 indicates the cluster # where the CHA board is located.
IMPORTANT:
• The port type, such as Fibre, Ficon, Escon, or FCoE (applicable only for P9000 disk arrays), is also displayed beside the port ID.
• Since the CHIP/CHA and the ACP/DKA MPs are moved to the MP blades in the P9000 disk arrays, their MP utilization metrics are not applicable for the P9000 disk arrays. For more information, see “Viewing MP blade utilization for P9000 disk arrays” on page 224.
Individual CHIP/CHA data For XP disk arrays For P9000 disk arrays The average I/Os and throughput of data in MB/s on all the ports in the selected CHIP/CHA Yes Yes The individual MPs on the selected CHIP/CHA Yes No The IDs of the associated ports on the selected CHIP/CHA Yes Yes (the port IDs are directly displayed under the selected CHA.
• • • • • • • “Viewing MP blade utilization for P9000 disk arrays” on page 224 “Viewing data on Smart and ThP pools for P9000 disk arrays” on page 227 “Utilization Summary” on page 213 “10 busiest LDEVs/Ports” on page 231 “10 busiest LDEVs/RAID groups” on page 232 “Port summary” on page 235 “Viewing LDEV data” on page 238 Viewing ACP/DKA data Based on whether you selected an XP or a P9000 disk array, click an ACP/DKA pair in the ACP Pair Backend group box under the Performance View tab to view the summary
The following images display the ACP Pair Backend group box and the DKA summary table for 53036, which belongs to the P9000 Disk Array type: HP StorageWorks P9000 Performance Advisor Software User Guide 221
The following table describes the ACP/DKA summary table for an XP disk array and the DKA summary table for a P9000 disk array. ACP/DKA summary table for XP disk arrays includes... DKA summary table for P9000 disk arrays includes... The ACP/DKA pair name The DKA pair name Example: BUNU Example: AUMU The individual MPs on an ACP/DKA and the utilization percentage of each MP Not applicable In the above image, BU MP Utilization % indicates the utilization of the MPs on BU, which is the left ACP.
Individual ACP/DKA data For XP disk arrays For P9000 disk arrays Yes No Yes Yes Yes Yes Summary The MPs on the individual ACP/DKA and their utilization percentage For example, if you selected the AUMU DKA pair, you can view the MPs and also their utilization percentage on AU. Similarly, you can also view the above-mentioned details for MU. The backend transfers for the selected ACP/DKA pair, which includes the sequential and non-sequential reads and writes.
• • • • • “Utilization Summary” on page 213 “10 busiest LDEVs/Ports” on page 231 “10 busiest LDEVs/RAID groups” on page 232 “Port summary” on page 235 “Viewing LDEV data” on page 238 Viewing MP blade utilization for P9000 disk arrays Click an MP blade ID in the MP Blades Util% group box under the Performance View tab to view the corresponding utilization summary on the MP Blades screen. You can also click MP Blades in the component selection tree. The following image shows the MP Blades Util% group box.
The MP Blade Configuration group box includes the following: • The installed MP blades, DKCs, and the clusters to which they belong. Each MP blade ID includes the corresponding cluster # and the blade location. For example: MPB-1MA is the MP blade ID, 1 indicates the cluster #, and MA indicates the blade location. • The individual MPs on each MP blade and each MP's utilization percentage. Click an individual MP to view the utilization graph in the Chart Work Area.
• A column graph that displays the average utilization of the selected MP blade by each associated consumer. For more information on how to read the column graph, see “Viewing top 20 consumers of an MP blade” on page 305. Example: The following image displays the MPB-1MB utilization data for 53036, which belongs to the P9500 Disk Array type. The MPB-1MB utilization data includes the following: • MPB-1MB is the MP blade ID that was selected in the MP Blades Util% group box.
• The consumers displayed in the Top Components: MPB-1MB table are for MPB-1MB. These consumers can be associated with multiple processing types. • The Avg. Util % for each consumer record in the Top Components: MPB-1MB table displays the average MPB-1MB utilization by that consumer. For example, out of the 2.50% average utilization displayed for the Backend processing type, 0.45% is the average MPB-1MB utilization by the consumer 0:58 (LDEV) associated with Backend.
Table 10 on page 228 describes the Pool Information screen elements: Table 10 Pool Information screen details Table name Description Displays the configuration and performance data of the Smart and the ThP pools. The data includes the following: • Pool ID, type, and the pool status Pool Information • I/O per second data, MB/second data throughput, and backend transfers between the cache and the drives For more information, see Table 11 on page 228.
Column names Description Backend Tracks Displays the total backend tracks associated with the Smart pool or the ThP pool. It is an aggregate of all the backend transfers due to I/Os occurring on every VVol in the Smart pool or the ThP pool. Viewing pool volumes and VVols data for Smart pools and ThP pools The following data on the associated pool volumes and VVols is displayed in the Pool Details table for the selected Smart pool or the ThP pool.
NOTE: HP recommends viewing a maximum of 150 records at a time to avoid a performance impact. • The metrics by which you want to sort the records. You can sort records based on the IOPS, MBPS, Backend Tracks, and Avg Read/Write Resp Time metrics. By default, the VVol records are sorted based on the Avg Read/Write Resp Time values. To configure the above-mentioned settings, click V-vols Settings in the Pool Details table.
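The sort-and-cap behavior described above can be sketched as follows, assuming a hypothetical record layout (the GUI applies this internally; field and function names are not from the product):

```python
# Hypothetical record layout; the GUI performs this sort and cap itself.
MAX_RECORDS = 150  # HP's recommended viewing limit

vvols = [
    {"id": "0:10", "avg_rw_resp_ms": 4.2},
    {"id": "0:11", "avg_rw_resp_ms": 9.8},
    {"id": "0:12", "avg_rw_resp_ms": 1.3},
]

def top_records(records, key="avg_rw_resp_ms", limit=MAX_RECORDS):
    """Sort descending by the chosen metric and cap the result set."""
    return sorted(records, key=lambda r: r[key], reverse=True)[:limit]

print([r["id"] for r in top_records(vvols)])  # ['0:11', '0:10', '0:12']
```

Swapping `key` for `"iops"`, `"mbps"`, or a backend-tracks field models the other sort options the V-vols Settings dialog offers.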
• “10 busiest LDEVs/RAID groups” on page 232 • “Port summary” on page 235 • “Viewing LDEV data” on page 238 Viewing 10 busiest LDEVs and Ports To view the 10 busiest LDEVs or ports associated with an XP or a P9000 disk array's frontend activities, click FrontEndIO in the component selection tree for the XP or the P9000 disk array. The top 10 busiest LDEVs are displayed under the LDEV tab and the top 10 busiest ports are displayed under the Ports tab.
Figure 17 10 busiest front end Ports .
Figure 18 10 busiest backend LDEVs . Figure 19 10 busiest backend RAID groups .
utilization for random read, random write, random write parity, sequential read, sequential write, and sequential write parity, and the overall RAID group utilization percentage (the sum of these percentages) on a given RAID group. IMPORTANT: • The RAID group utilization percentage is not displayed for external storage volumes. • The Auto LUN XP software must be installed, and monitoring must be enabled, on the XP1024/XP128 Disk Array for the RAID group utilization metric to be available on these arrays.
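The overall RAID group utilization described above is simply the sum of the six per-workload percentages. A small illustrative sketch (field names and sample values are hypothetical; only the summation rule comes from the text):

```python
# Field names are hypothetical; only the summation rule is documented.
def raid_group_utilization(pct):
    """Overall RAID group utilization: the sum of the six workload percentages."""
    keys = ("rnd_read", "rnd_write", "rnd_write_parity",
            "seq_read", "seq_write", "seq_write_parity")
    return sum(pct[k] for k in keys)

sample = {"rnd_read": 12.0, "rnd_write": 8.5, "rnd_write_parity": 4.0,
          "seq_read": 10.0, "seq_write": 6.5, "seq_write_parity": 3.0}
print(raid_group_utilization(sample))  # 44.0
```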
Screen elements Description LDEV MB/s The total frontend throughput in MB/s for the LDEV. Backend Transfer The total number of backend tracks transferred to or from the XP array backend. Combined Backend Transfer An asterisk (*) displayed beside the combined backend transfer value indicates one of the following: • If any of the physical LDEVs from a RAID group is configured in multiple ThP pools, the sum of the backend transfers on all the ThP pools is shown as the combined backend transfer for that RAID group.
IMPORTANT: When you request a port summary report, the total I/Os displayed may not be equal to the sum of the I/Os across each of the ports. This can occur if multiple paths to an LDEV exist. The port IO summary indicates the IO ceiling values across the ports. It does not indicate the absolute or accurate I/O rates across the ports. Click a CHP Port ID to view the associated performance graphs in the Chart Work Area. The port type for the selected port ID is also displayed in the chart legends.
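One plausible way to picture why the per-port sum can differ from the displayed total is multipath attribution. The array's exact accounting is internal and not documented here; this sketch simply attributes a multipath LDEV's I/Os to every path that serves it:

```python
# Illustrative only: the array's internal accounting is not documented here.
# Suppose one LDEV serves 100 frontend I/Os and is reachable over two ports.
ldev_total_ios = {"0:58": 100}
paths = {"0:58": ["CL1-A", "CL2-A"]}

# If each port's counter attributes the LDEV's I/Os to itself, summing the
# per-port figures counts the multipath LDEV once per path:
per_port = {}
for ldev, ios in ldev_total_ios.items():
    for port in paths[ldev]:
        per_port[port] = per_port.get(port, 0) + ios

print(sum(ldev_total_ios.values()))  # 100 -- the displayed total
print(sum(per_port.values()))        # 200 -- the per-port sum can differ
```

This is why the port I/O summary should be read as ceiling values per port, not as absolute rates.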
Screen elements Description Port Type Displays the port type, such as Fibre, Ficon, Escon, or FCoE (applicable only for P9000 disk arrays) for the port ID. E-seq(s) Displays the Ext-Lun provider’s serial number for the array. Max IO/s Displays the maximum frontend I/Os on the port. Avg IO/s Displays the average of the total frontend I/Os. Min IO/s Displays the minimum frontend I/Os on the port. Max MB/s Displays the maximum frontend throughput in MB/s.
Viewing LDEV data P9000 Performance Advisor displays the following data on the Array View LDEV screen for all the LDEVs that belong to an XP or a P9000 disk array: • The performance data of all the LDEVs • The data on associated components, such as the following: • The summary for individual RAID groups, which includes: • RAID level • Associated ACP pair • Disk mech details • Associated drive type and RPM rate • The I/Os and MB/s data for individual CHIP ports For an XP disk array, in addition to the above-
You can query the existing performance data in P9000 Performance Advisor for a particular date and time stamp to view the corresponding point-in-time data for all the LDEVs. By default, the data displayed is for the last performance data collection time stamp and is sorted in descending order. The sorting is based on the average read response time of individual LDEVs. You can query the LDEV data for a different date and time stamp and also sort the data based on a different sort type.
page links to navigate to other sections of the LDEV table and view additional LDEV records. You can also click the prev, next, or last links to navigate to the respective pages. Querying and sorting data You can query the performance data in the P9000 Performance Advisor database for the last data collection date and time stamp, for which you want to view the LDEV data. By default, your query is executed on the latest performance data received from the selected XP or P9000 disk array.
Screen elements Description Ext-LUN Select Ext-LUN to sort LDEV data based on the external arrays exposed through a particular LUN on the XP or P9000 disk array. Host Group Select Host Group to sort LDEV data based on the host groups (does not apply to the XP48 Disk Array). Jnl Group Select Jnl Group to sort LDEV data based on the journal volume pool IDs.
2. Click Query. If you do not select a last collection date and time stamp, the current last collection date and time stamp is used for querying the data. IMPORTANT: • For an XP24000 Disk Array, performance data can be collected on up to 65,536 (64K) LDEVs. • For XP or P9000 disk arrays with external LDEVs, – is displayed under ACP PAIR in the LDEV table, because the external LDEVs do not have a valid ACP pair associated with them.
Screen elements Description For XP disk arrays For P9000 disk arrays RG The RAID group to which the LDEV belongs Yes Yes ACP Pair ID The card letters for the ACP pair Yes Yes CHIP Port ID The port ID for the CHIP (CHP) port Yes Yes Host Group The host group to which the host belongs Yes Yes MP Blade ID The identification number of the MP blade that is currently associated with the LDEV.
Components and metrics in the LDEV Column Settings list Table 14 on page 244 lists components available for selection in the LDEV Column Settings list: Table 14 Components and metrics in LDEV Column Settings list Screen elements Description ACP Pair ID The card letters for the ACP pair. ACP Pair Util The percentage of ACP pair processor usage during the reporting period. NOTE: This metric is available only for the XP disk arrays.
Screen elements Description CHP Port ID The port ID for the CHIP (CHP) port. Provides the option to view information associated with a particular port or with all ports. NOTE: If a Mainframe LDEV in an XP or a P9000 disk array is presented through a FICON CHA, the corresponding CHA ID is not displayed. Instead, Not Mapped is displayed in this column. CHP Util The percentage of CHP processor usage during the reporting period. NOTE: This metric is available only for the XP disk arrays.
Screen elements Description E-LDEV The external LUN LDEV ID on the external array. Ext-Lun Indicates whether the LDEV is an Ext-Lun. The following options are available: – (hyphen) = Normal LUN; E = Ext-Lun; P = Ext-Lun provider (this LDEV is used as an Ext-Lun for another array). E-Port(s) A list of Ext-Lun initiator ports (ports used to connect to an external array). E-Seq The Ext-Lun provider's serial number for the array. Host ID (Host identifier) The name of the host machine.
Screen elements Description LUN (Logical Unit Number) ID The identification number of the LUN. MP Blade Id The identification number of the MP blade that is currently processing requests for an LDEV. The MP blade ID includes the cluster # and the blade location. For example, MPB-1MA, where 1 indicates the cluster # and MA indicates the blade location. NOTE: This component is displayed only for the P9000 disk arrays.
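The MP blade ID format described above (MPB- followed by the cluster # and the blade location, as in MPB-1MA) lends itself to a simple parse. This helper is hypothetical and not part of the product; only the naming convention comes from the text:

```python
import re

# Hypothetical helper; the format itself (MPB-<cluster #><blade location>)
# is taken from the documentation text.
def parse_mp_blade_id(blade_id):
    """Split an MP blade ID such as 'MPB-1MA' into (cluster #, location)."""
    m = re.fullmatch(r"MPB-(\d)([A-Z]{2})", blade_id)
    if not m:
        raise ValueError(f"unexpected MP blade ID: {blade_id}")
    return int(m.group(1)), m.group(2)

print(parse_mp_blade_id("MPB-1MA"))  # (1, 'MA')
```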
Screen elements Description Target LUN The LUN associated with the given LDEV. Vol. Group The volume group identification name if the device is associated with a volume group. P9000 Performance Advisor reports volume groups from LVM (an HP brand) and VXVM (a Veritas brand). NOTE: • The E-LDEV, Ext-LUN, E-Port(s), E-Seq, Jnl Group, and Vol. Group are available for selection only if they are configured in the selected XP or P9000 disk array.
IMPORTANT: • If the state for an LDEV displays as SMPL (Simplex), the LDEV is configured as neither a PVOL nor an SVOL. • The replication pair status is displayed only when you perform a fresh configuration collection for an XP or a P9000 disk array. However, if the configuration data collection is scheduled, the replication pair status is automatically updated to show the current status. To view the replication volumes and the status of the replication: 1. Click the Column Settings check box.
Filtering LDEV records Records in the LDEV table can be filtered in the following ways: • Filter records based on user-specified criteria • Filter records based on an existing selection • Filter records for values greater or less than a specified value Filter records based on user-specified criteria This type of filter is applicable when you want to view the LDEV data based on the filter criteria that you specify.
The LDEV table displays only those records that match the specified RAID group IDs. Filter records for values greater or less than a specified value This type of filter is applicable when you want to view performance values of LDEVs based on a combination where you select an existing filter criterion and also specify a value. Following is an example of filtering records in the LDEV IO/s list: 1. Click the LDEV IO list. 2.
4. Click OK to continue. A record for the export activity is logged in the Event Log screen. The record includes the name of the XP or P9000 disk array, and the date and time when the export activity was initiated. After the data is exported, another record is logged in the Event Log screen. In addition to the disk array and the date and time stamp, this record also includes a link to download the CSV file. The following image displays the records logged for the XP disk array, 82502.
Continuous Access Journal Group detail view Double-click a Journal group volume ID in the Jnl Group column to open the Journal Group Detail View screen, as shown in Journal Group detail view and Journal Status. A list of LDEVs configured in the continuous access journal volume displays; a maximum of 16 LDEVs display. The status on backend transfers and average read response of each LDEV associated with the journal group is displayed under the Journal Volume Status tab. Figure 22 Journal Group Detail .
Screen elements Description THP Pool ID The ID of the pool (pool number) THP Pool Status The status of the ThP Pool: • 0: Undefined/Creating/Deleting – the specified pool does not exist completely. • 1: Normal • 2: Pool capacity beyond threshold • 3: Pool capacity reached 100% of the pool • 4: Failure; no further information can be shown for the pool POOL Threshold 1 A user-configurable pool threshold (varying between 5% and 95% in increments of 5%). The default value is 70%. This is the high threshold for the pool.
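The threshold constraint above (5% to 95% in 5% increments, default 70%) can be expressed as a small validation sketch; the function and constant names are hypothetical, not a product API:

```python
# Hypothetical validation of the documented constraint; not a product API.
def valid_pool_threshold(pct):
    """Pool thresholds vary between 5% and 95% in increments of 5%."""
    return 5 <= pct <= 95 and pct % 5 == 0

DEFAULT_POOL_THRESHOLD_1 = 70  # documented default for POOL Threshold 1

print(valid_pool_threshold(DEFAULT_POOL_THRESHOLD_1))  # True
print(valid_pool_threshold(72))                        # False -- not a 5% step
```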
IMPORTANT: • The LDEV table does not display hyperlinks in the ACP Pair ID and ACP Pair Util fields for RAID groups spanning multiple ACP pairs. Hence, no chart can be created for them. • For a P9000 disk array, the LDEV table does not display the ACP Pair Util field for RAID groups, so a chart for the ACP pair utilization metrics cannot be plotted. • An XP24000 type array has 32 CHIPs, 8 ACP pairs, and 4 MPs per port; an XP20000 type array has 8 CHIPs, 4 ACPs, and 4 MPs per port.
Viewing CLPR information Click a CLPR value in the LDEV table to view the detail view for that CLPR in a separate browser window. In the CLPR window, the line above the table indicates the hierarchical information for the selected CLPR.
• “Viewing ACP/DKA data” on page 220
• “Utilization Summary” on page 213
• “10 busiest LDEVs/Ports” on page 231
• “10 busiest LDEVs/RAID groups” on page 232
• “RAID Group summary” on page 233
• “Port summary” on page 235
• “Viewing LDEV data” on page 238
HP StorageWorks P9000 Performance Advisor Software User Guide 257
11 Using charts This chapter discusses the following topics: • “Introduction” on page 259 • “Plotting charts” on page 264 Introduction You can plot performance graphs to view historical data of components that belong to the same or different XP disk arrays and P9000 disk arrays. Graphical representation of component performance metrics is especially useful when you want to compare similar components of different XP and P9000 disk arrays to determine their performance and observe trends.
analyze the performance of a component by viewing its data points collected at different collection rates in the same chart. You can compare components across the XP and the P9000 disk arrays based on the following metric categories. (Ensure that you select every element that you want to appear in your chart, because the system charts only those elements that are specified): NOTE: Snapshot PIDs are available for the XP12000 and the XP10000 Disk Arrays only with firmware versions later than 50.09.33.
3 Chart window (blue border indicates that the chart window is selected or active) 4 Choose Metrics box from where you select metrics for components 5 Component selection tree for charts 6 Zoom preview panel IMPORTANT: • The Configuration data collection for the XP and the P9000 disk arrays must be complete for the component selection tree to appear under Charts in the left pane.
Sections Description The component selection tree displays the following main nodes: • The XP and the P9000 disk arrays monitored by P9000 Performance Advisor.
Sections Description The Chart Work Area consists of the following: • The individual chart windows that display the performance graphs of components for the selected metrics. Chart Work Area • The chart controls that can be used to perform various tasks on the individual chart windows. • The zoom preview panel, where you can preview performance of components for a specified duration. For more information, see “Viewing charts” on page 301.
Sections Description The Chart controls section displays common controls or buttons used to perform specific tasks on charts, such as the following: • Add a new chart window. • Save charts as favorites and load the favorite charts. • Save charts as PDF files. • Print charts. • Change the Chart Work Area layout. • Update charts with the latest data points. • Select all the chart windows in the Chart Work Area. • Remove charts from the Chart Work Area.
• You have collected the performance data, so that the data on associated components is displayed under the various categories for the individual XP and P9000 disk arrays. • The custom groups are created, so that they appear for selection under Custom Groups in the component selection tree. IMPORTANT: • The components are available for selection only if they are configured on an XP or a P9000 disk array.
2. Based on your requirement, select components from an XP or a P9000 disk array or choose LDEVs from a custom group. You can also search for a particular physical LDEV in the component selection tree, if you are aware of the LDEV name. For more information, see “Searching for components” on page 300: • Click the + sign for an XP or a P9000 disk array and select components from the list, for which you want to view the performance graphs. The following image displays the hierarchy for component selection.
3. Select the related metrics from the Choose Metrics box. For more information, see “Choosing metrics” on page 270. By default, the Choose Metrics box displays a set of default metrics for the selected components. If you selected a combination of components from different component types, only the metrics related to the selected components are displayed in the Choose Metrics box. The following image shows the metrics for the MP blade processors on 53036, which belongs to the P9500 Disk Array type.
Selecting components and metrics The component selection tree provides a hierarchal representation of components under the following main nodes: • The XP and the P9000 disk arrays monitored by P9000 Performance Advisor. Click the + sign for an individual XP or a P9000 disk array node to view the corresponding main categories in the component selection tree (see the following table).
XP or P9500 Disk Array main categories – component selection tree Description THP Pool Comprises the individual ThP pool IDs and the associated RAID groups and VVols. The RAID groups further display the list of associated physical LDEVs. The ThP pools are displayed for the P9000 disk arrays, such as the P9500. In the XP disk array family, ThP pools are displayed only for the XP24000 Disk Array. For more information, see “THP Pool navigation path” on page 283.
3. For that host, plot performance graphs of all the LDEVs and identify the LDEV that is contributing to the observed high response time. Click the + sign for a main category to navigate through the associated components and view their performance graphs for the selected metrics. The components of the same type are grouped together. Each component further expands to a subset of components depending on its level of hierarchy in the component selection tree.
• The first image displays the metrics shown when you select the XP disk array 10116 • The second image displays the metrics shown when you select the component type Ports (72) • The third image displays the metrics shown when you select the component CL1A
IMPORTANT: • For a component type, the metrics are displayed for selection only if the corresponding components are supported or configured in the XP and the P9000 disk arrays. For example, if the configuration collection is not yet performed for an XP disk array, the CLPR partition data is not available. Hence, clicking Cache in the component selection tree does not result in any metrics and the Choose Metrics box is disabled. The same logic applies for other components in the XP and the P9000 disk arrays.
Front-end > Port (component type) > Individual ports > Individual MPs > Individual CHAs. The port type, such as Fibre, Ficon, or Escon, is also displayed beside the port ID. In the above image, under Front-end for the XP disk array 10090 (XPArray_1): 1. Front-end is the main category. 2. Port is the component type. 3. The number (40) indicates the number of ports for the XP disk array 10090. 4. CL1A is one of the individual ports. 5. CHP00–1EU is an individual MP associated with the port CL1A.
In the above image, under Front-end for the P9000 disk array 53046: 1. Front-end is the main category. 2. Port is the component type. The number (32) indicates the number of ports for the P9500 Disk Array 53046. 3. CL1A is one of the individual ports. 4. CHA–1EU is an individual CHA associated with the port CL1A. The related metrics associated with the component types and the individual components in the Front-end category are displayed in the Choose Metrics box.
IMPORTANT: • At the Front-end main category level, the Maximum Port IO/s – Frontend and the Maximum Port MB/s – Frontend metrics provide the respective maximum I/Os and MB/s of data throughput of all the ports in the selected XP or the P9000 disk array. These values are the maximum of the last collection interval. For example, if the port I/O collection interval is set to two minutes, the Maximum Port IO will be calculated as the maximum value over the two minute collection interval.
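The "maximum of the last collection interval" rule described above can be illustrated as follows (the function name and sample rates are invented; only the max-over-interval rule comes from the text):

```python
# Illustrative sketch; sample I/O rates are invented.
def max_port_io(interval_samples):
    """'Maximum Port IO/s' is the maximum value observed over the last
    collection interval (for example, a two-minute interval)."""
    return max(interval_samples)

interval_samples = [850, 1210, 990, 1175]  # rates sampled within one interval
print(max_port_io(interval_samples))  # 1210
```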
In the above image, under Cache for the XP disk array 10055: • Cache is the main category. The number (6) indicates the number of CLPRs partitions configured on the selected XP disk array. • The CLPR0 and CLPR1 are the individual CLPRs. The related metrics associated with the Cache and CLPRs are displayed in the Choose Metrics box. Select the metrics at the component type or the individual component levels, or both to view the related performance graphs in the Chart Work Area.
• “Front-end navigation path” on page 272
• “Back-end navigation path” on page 278
• “MP Blades navigation path” on page 277
• “THP Pool navigation path” on page 283
• “Snapshot Pool navigation path” on page 285
• “Replication Volumes navigation path” on page 288
• “LUSE navigation path” on page 290
• “Host Groups navigation path” on page 292
• “Ext-RG(s) navigation path” on page 294
• “Drive types navigation path” on page 296
• “Custom groups navigation path” on page 298
MP Blades navigation path IMPORTANT:
In the above image, under MP Blades for the P9000 disk array 53036: • MP Blades is the main category. The number 2 indicates the total number of MP blades configured on the selected P9000 disk array 53036. • MPB-1MA is one of the individual MP blade IDs that belongs to the Cluster 1 and the blade location MA. • Processors is the component type. The number 4 indicates the total number of processors that belong to MPB-1MA. • MP 0 is one of the individual processors that belongs to MPB-1MA.
The Back-end main category comprises the DKA pairs, associated MPs, RAID groups, and LDEVs, where DKA and RAID groups are the main component types that further expand to display the associated components. Following is the component selection path: • Back-end > DKA (component type) > Individual DKA pairs > MP (component type) > Individual MPs. • Back-end > RG(s) (component type) > Individual RAID Groups.
• BUNU is an individual DKA pair • RG(s) is a component type. The number (6) indicates the number of RAID groups configured on the selected XP disk array. 1–1 to 1–6 are individual RAID groups. The list under 1–1 displays the following component types: • Physical LDEVs. The number (543) indicates the number of LDEVs associated with the selected RAID group 1–1. Click Physical LDEVs to see the list of physical LDEVs that belong to the selected RAID group. • Pool LDEVs.
In the above image, under Back-end for the P9000 disk array 53036: • DKA is a component type. The number (1) indicates the number of DKA pairs available on the P9000 disk array 53036. • AUMU is an individual DKA pair. • RG(s) is a component type. The number (12) indicates the number of RAID groups configured on the P9000 disk array 53036. 1–1 to 1–12 are the individual RAID groups. The list under 1–1 displays the following component types: • Physical LDEVs.
IMPORTANT: When you plot the performance data points for a RAID group, the associated drive type is also displayed in the legend for the selected RAID group (Drive Type:). For example: Drive Type: DKR2E-J146FC. The related metrics associated with the component types and individual components in the Back-end category are displayed in the Choose Metrics box.
Back-end category Default set of metrics For XP disk arrays For P9000 disk arrays LDEV Total MB/s – Frontend Yes Yes Average Read Response Yes Yes LDEV Sequential Read Tracks – Backend Yes Yes Related Topics
• “Front-end navigation path” on page 272
• “Cache navigation path” on page 275
• “MP Blades navigation path” on page 277
• “THP Pool navigation path” on page 283
• “Replication Volumes navigation path” on page 288
• “Snapshot Pool navigation path” on page 285
• “LUSE navigation path” on page 290
In the above image, under THP Pool for the XP disk array 10090: • THP Pool is the main category. The number (18) indicates the total number of ThP pools configured on the XP disk array 10090. • Pool ID:2 is one of the individual ThP pools. • DKA is a component type and lists the DKA pairs associated with the ThP Pool ID:2. • RG(s) is a component type and lists the RAID groups that form the ThP pools under ThP Pool ID:2.
The related metrics associated with the resource types and components in the THP Pool category are displayed in the Choose Metrics box. Select the metrics at the component type or the individual component levels, or both, and view the related performance graphs in the Chart Work Area. For a description of these metrics, see “Metric Category, metrics, and descriptions” on page 441. The following table provides the default set of metrics that are displayed in the Choose Metrics box.
• RG(s) (component type) > Individual RAID Groups > LDEVs (component type) > Individual LDEVs assigned to the selected snapshot pool • VVols (component type) > Individual host group > S-Vol (component type) > Individual LDEV assigned as S-Vols > P-Vol (component type) > Individual LDEV assigned as P-Vols The number of components associated with a component type is displayed beside it.
• Snapshot Pool is the main category. The number (2) indicates the number of snapshot pools configured on the XP disk array 10038. • Pool ID:7 is one of the individual snapshot pools. • RG(s) is a component type. The number (2) indicates the number of RAID groups associated with the Snapshot Pool ID:7. • 5–6 is one of the individual RAID groups. • LDEVs is a component type. The number (2) indicates the number of LDEVs associated with the Snapshot Pool ID:7. Click LDEVs to view the list of LDEVs.
• “THP Pool navigation path” on page 283
• “LUSE navigation path” on page 290
• “Host Groups navigation path” on page 292
• “Ext-RG(s) navigation path” on page 294
• “Drive types navigation path” on page 296
• “Custom groups navigation path” on page 298
Replication Volumes navigation path The Replication Volumes main category comprises the business copy and the continuous access volumes. They are the main component types that further expand to display the associated components.
In the above image, under Replication Volumes for the P9000 disk array 53036: • Business Copy Volumes is a component type. The number (15) indicates the number of LDEVs that are assigned as business copy volumes. Click Business Copy Volumes to view the list of LDEVs. • Continuous Access Volumes is a component type. The number (15) indicates the total number of journal pools that are created. • JID-0:0 is a journal pool under which LDEVs is a component type.
• “Front-end navigation path” on page 272
• “Cache navigation path” on page 275
• “MP Blades navigation path” on page 277
• “Back-end navigation path” on page 278
• “THP Pool navigation path” on page 283
• “Snapshot Pool navigation path” on page 285
• “LUSE navigation path” on page 290
• “Host Groups navigation path” on page 292
• “Ext-RG(s) navigation path” on page 294
• “Drive types navigation path” on page 296
• “Custom groups navigation path” on page 298
LUSE navigation path The LUSE main category comprises the individual LUSE masters, each of which expands to display its component LDEVs.
In the above image, under LUSE for the XP disk array 10090: • LUSE is a main category. The number (21) indicates the number of LUSE masters available in the LUSE category. • 0:7B (1–1) is an LUSE master. 1–1 besides 0:7B indicates the RAID group to which the LUSE master belongs. • Components is a component type. The number (2) indicates the number of LDEVs that belong to the selected LUSE master 0:7B (1–1). 0:7B (1–1) and 0:7C (1–1) are the corresponding LDEVs.
• “Front-end navigation path” on page 272
• “Cache navigation path” on page 275
• “MP Blades navigation path” on page 277
• “Back-end navigation path” on page 278
• “THP Pool navigation path” on page 283
• “Snapshot Pool navigation path” on page 285
• “Replication Volumes navigation path” on page 288
• “Host Groups navigation path” on page 292
• “Ext-RG(s) navigation path” on page 294
• “Drive types navigation path” on page 296
• “Custom groups navigation path” on page 298
Host Groups navigation path The Host Groups main category comprises the individual host groups available on the selected XP or P9000 disk array.
In the above image, under Host Groups for the P9000 disk array 53012: • Host Groups is the main category. The number (21) indicates the number of host groups available in that category. • PA_HPUX is one of the individual host groups. The number 86 indicates the aggregate of the average I/Os from each LDEV belonging to PA_HPUX. • Ports is a main component type under the host group, PA_HPUX. The number (1) indicates the number of ports assigned to communicate with the selected host group PA_HPUX.
RAID groups under the Back-end category. For more information, see “Front-end navigation path” on page 272 and “Back-end navigation path” on page 278. The related metrics associated with the resource types and components in the Host Groups category are displayed in the Choose Metrics box. Select the metrics at the component type or the individual component levels, or both, and view the related performance graphs in the Chart Work Area.
The metrics associated with the resource types and components in the Ext-RG(s) category are displayed in the Choose Metrics box. Select the metrics at the component type or the individual component levels, or both, and view the related performance graphs in the Chart Work Area. For a description of these metrics, see “Metric Category, metrics, and descriptions” on page 441. The following table provides the default set of metrics that are displayed in the Choose Metrics box.
Drive types navigation path The Drive Types main category comprises the individual drive types that are available on the selected XP or P9000 disk array. Each drive type in the component selection tree expands to display the list of associated RAID groups, which in turn display the list of physical LDEVs. Following is the component selection path: Drive Types > Individual drive types > RAID Groups (component type) > Individual RAID groups > Physical LDEVs (component type) > Individual physical LDEVs.
In the above image, under Drive Types for the P9000 disk array 53036: • Drive Types is the main category. The number (1) indicates the number of drive types available in that category. • DKS5B-K146SS is an individual drive type. • RAID Groups is a component type. The number (12) indicates the number of RAID groups available on the selected drive type DKS5B-K146SS. • 1–4 is an individual RAID group. • Physical LDEVs is a component type under the RAID group 1–4.
The following table provides the default set of metrics that are displayed in the Choose Metrics box.
In the above image, CG_1_CG is one of the custom groups that is selected. The number (5) beside the LDEVs component type indicates the total number of LDEVs grouped in CG_1_CG. The LDEVs 1:7A, 1:7C, and 1:7B display 30064, and the LDEV 9:67 displays 82502, beside their LDEV IDs. This implies that these LDEVs are grouped from the XP disk arrays 30064 or 82502 under the CG_1_CG custom group.
• “Ext-RG(s) navigation path” on page 294 • “Drive types navigation path” on page 296 Searching for components You can search for a particular physical LDEV in the component selection tree under Charts, if you are aware of the CU:LDEV name. The search automatically expands the RAID group list to which the physical LDEV belongs and the LDEV component is also highlighted for your reference.
2. In the Physical LDEV text box, enter the name of the LDEV that you want to search in the CU:LDEV format and click the Search icon. The component selection tree for the XP or the P9000 disk array that has the matching LDEV component automatically expands to display the LDEV highlighted for your reference. (If there are many components listed for the selected XP or P9000 disk array, you may have to use the scroll bar to navigate through the list of components to view the matching component).
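The CU:LDEV naming used by the search (for example, 0:7B or 9:67) can be sketched as a validation helper. The exact matching rules the GUI applies are assumed here, not documented; the helper is purely illustrative:

```python
import re

# Assumed validation: two hex fields separated by a colon, matching the
# examples 0:7B and 9:67. The GUI's exact rules may differ.
def is_cu_ldev(name):
    """Return True if the name looks like a CU:LDEV identifier."""
    return re.fullmatch(r"[0-9A-Fa-f]{1,2}:[0-9A-Fa-f]{2}", name) is not None

print(is_cu_ldev("0:7B"), is_cu_ldev("7B"))  # True False
```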
The Chart Work Area displays the following default settings. They apply across the chart windows until you select other available options:
• Time Line in the Chart Style list. This implies that the data points for the different components are plotted as a line graph. Breaks can be observed in the performance graphs if performance data collection is missing for some intervals.
• Duration as 1 hour.
NOTE:
• These selections work only on the active chart windows.
• If the total number of data points from all the performance graphs in a chart window exceeds 500, the individual data points are not rendered, to optimize the charting functionality in P9000 Performance Advisor. You can hover the pointing device over the line graphs to view the data points.
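The 500-point rendering cap described in the note above can be illustrated with a small sketch. The names and logic here are hypothetical; the product's internal charting implementation is not published:

```python
# Hypothetical sketch of the 500-data-point rendering cap described above.
# Function and variable names are illustrative, not the product's own.

MAX_RENDERED_POINTS = 500

def should_render_points(graphs):
    """Return True if individual data points may be drawn as markers.

    `graphs` is a list of per-component data-point lists. When the total
    number of points across all performance graphs in a chart window
    exceeds the cap, only the connecting line graphs are drawn and the
    point values are shown on hover instead.
    """
    total_points = sum(len(points) for points in graphs)
    return total_points <= MAX_RENDERED_POINTS

# Example: three components with 200 points each exceed the cap.
graphs = [list(range(200)) for _ in range(3)]
print(should_render_points(graphs))  # 600 points in total, so False
```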
An individual chart window can accommodate the performance graphs for up to 250 components. The 250 components that you select can belong to multiple component types and be plotted for different metrics from the same metric category. Performance Advisor plots the performance graphs incrementally and continues until the performance graphs for all 250 components are plotted in the chart window.
For more information on the tasks that you can perform in the Chart Work Area, see “Using chart controls” on page 309. Viewing top 20 consumers of an MP blade IMPORTANT: This section is applicable only for the P9000 disk arrays. The top 20 consumers can be LDEVs, continuous access journal groups, or the E-LUNs (external volumes) that are assigned to an MP blade.
MP blade utilization by top 20 consumers. Example (see Figure 25): LDEV:0:18 (Backend)

A consumer's association with a processing type provides an understanding of the number of processing cycles used by the consumer for different processing types. For example, an LDEV 0:09 might be involved in processing frontend and backend requests. Its processing type reveals whether the frontend or the backend requests have been high.
Processing types and their descriptions:
• Open-initiator: Indicates all the processing involved in the continuous access replication activities.
• Open-external initiator: Indicates all the processing involved in accessing external storage.
• Open-mainframe target: Indicates all the frontend activities involved in processing mainframe I/O requests.
• Open-mainframe ext initiator: Indicates all the frontend activities involved in processing mainframe I/O requests.
MP blade utilization by processing types. Example (see Figure 26):
• Average MP blade utilization by a processing type (average from the previous to the current time stamp): 3.12%
• Average MP blade utilization by all the processing types associated with the MP blade for the overall duration: Total: 19.02%
• Average MP blade utilization by a processing type for the overall duration: 16.4%, calculated as (3.12 / 19.02) * 100. The value 16.4% displayed beside the total of 19.02% indicates the share of the overall MP blade utilization contributed by that processing type.
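The contribution percentage in the example above is a simple ratio. The following sketch reproduces the arithmetic, using the values from the example:

```python
# Reproduces the contribution calculation from the example above:
# a processing type that averages 3.12% of a 19.02% total MP blade
# utilization accounts for (3.12 / 19.02) * 100, roughly 16.4%,
# of that total.

avg_by_processing_type = 3.12     # % utilization for one processing type
avg_all_processing_types = 19.02  # % total MP blade utilization

contribution = (avg_by_processing_type / avg_all_processing_types) * 100
print(f"{contribution:.1f}%")  # 16.4%
```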
Figure 27 Average Metric Utilization. Place the pointer over an area to view the following details (Aggregate Data; example values are from Figure 27):
• XP or P9000 disk array, component, metric name: 53040, CL1B (Fibre (Target)), Maximum
• Date and time stamp: 07/07/11, 14:06:00
• Average utilization metrics value (average utilization metrics percentage) for the specific date and time stamp: 12110 (68.
• “Using date and time filters” on page 319
• “Using chart Styles” on page 319
• “Printing charts” on page 315
• “Changing the Chart Work Area layout” on page 315
• “Viewing current LDEV assignments for an MP blade” on page 316
• “Previewing charts” on page 321
• “Zooming in on data points across performance graphs” on page 322
• “Rearranging or moving chart windows” on page 323
• “Removing chart windows” on page 324

Adding new chart windows
By default, the performance graphs of components for metrics tha
in the Chart Work Area, the save operation opens the equivalent number of new browser windows. You are prompted to open, save the PDF, or cancel the save operation. Saving favorite charts You can save the combination of components and metrics for which you want to frequently view charts, as favorite charts. Whenever you want to view the performance graphs for the same set of components and metrics, load the corresponding favorite chart.
3. Click Save to save the selected charts as favorite charts. You can provide a name for the favorite chart by clicking in the respective text box and entering the name. If you do not provide a name, by default, the metric category title of the chart window is considered as the favorite chart name. The following are a few points that you must note while specifying a favorite chart name: • The name should have only alphanumeric characters.
1. Click Load Fav Chart(s). A pop-up dialog appears displaying the favorites charts that you can view. 2. Select one or more favorite charts from the list and click View Chart. The favorite chart appears in the Chart Work Area and is selected by default. • You can add components for metrics in the same metric category to this favorite chart and save it with the same name, or provide a different name.
Generating or saving reports for favorite charts
NOTE:
• To create a report, it is mandatory that you provide the report name, array model, and report type.
• The Report Name, Customer Name, Consultant Name, and Array Location fields are pre-populated if you have already configured them as common settings on the Email Settings screen. For more information, see “Configuring email and SNMP settings” on page 92. If you do not want these default descriptions, modify the respective fields.
7. Click Generate to view the report immediately. Click Save to save and view the report later. P9000 Performance Advisor saves the report in its database and also displays a record for the report in the Reports section, under the View Created/Scheduled Reports tab. By default, the new record is displayed at the end of the list.
NOTE: • When you change the layout, it applies to all the chart windows in the Chart Work Area. • Each column in the Chart Work Area can occupy only four chart windows if you select the vertical alignment for the Chart Work Area. • The Chart Work Area layout can be modified only under Charts. Viewing current LDEV assignments for an MP blade IMPORTANT: This section is applicable only for the P9000 disk arrays.
Figure 28 Current MP blade assignment .
The forecast utilization can be monitored for a day, a week, a month, six months, or a year based on the current data points. For example, if you have data points for a RAID group collected over two days and you want to forecast its utilization for the next week, P9000 Performance Advisor forecasts the utilization rate based on the data collected over those two days.
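The guide does not document the forecasting algorithm that P9000 Performance Advisor uses. A common approach for this kind of projection is a least-squares trend line over the existing data points, extrapolated forward; the sketch below is illustrative only and is not the product's algorithm:

```python
# Illustrative sketch of trend-based forecasting: fit a least-squares
# line to historical utilization samples and extrapolate it forward.
# This is NOT the documented P9000 Performance Advisor algorithm; it
# only demonstrates the general idea of projecting from collected data.

def forecast(samples, steps_ahead):
    """samples: utilization values at equally spaced intervals.
    Returns the extrapolated value `steps_ahead` intervals past the end."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

# Two days of daily RAID group utilization, projected one week ahead.
history = [40.0, 44.0]        # percent utilization on day 1 and day 2
print(forecast(history, 7))   # trend rises 4 points/day: 44 + 4*7 = 72.0
```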
To forecast utilization for any of the above-mentioned components, select the component and its corresponding metric, and select the duration from the Forecast list in the Chart Work Area. You can forecast the utilization for only one component at a time. Using date and time filters The following are the date and time filters that you can use on charts: • Start Updating: Click Start Updating for P9000 Performance Advisor to update the selected chart window every 5 minutes with the newest data points.
Time Line chart style (default) enables you to view the plotting of data points for a fixed time interval. In addition, when data points for multiple metrics are plotted with different collection frequencies, their relationships with the time intervals are displayed correctly. Only data points that are collected during the specified interval are retrieved from the database and plotted on the graph. Data points are not plotted for intervals of time where data collection has failed.
the data collection resumes, the data points and the average values are again plotted simultaneously for the selected components. Time Line No Breaks Chart Style The Time Line No Breaks chart style enables you to view the actual performance of the selected components irrespective of whether the data collection is active or discontinued.
1 Focused area in the Zoom preview panel 2 Sliders on the time scale in the Zoom preview panel Zooming in on data points across performance graphs In addition to zooming in on data points for a particular duration, you can also zoom in on a combination of data points in the chart window. If zoom preview is enabled, it also highlights the focused area in the chart window.
To zoom out, click an empty area in the chart window.

Rearranging or moving chart windows
To move or rearrange chart windows in the Chart Work Area, click the title bar of the chart window that you want to move and, holding down the left mouse button, drag and drop it over the existing chart window where you want it placed. The existing chart window automatically shifts to accommodate the relocated chart window.
Removing chart windows You can remove all the charts currently displayed in the Chart Work Area by clicking Remove Charts. All the active and passive chart windows are removed from the Chart Work Area.
12 Using reports
This chapter discusses the following topics:
• “Generating, saving, or scheduling reports” on page 331
• “Viewing a report” on page 340
• “Viewing a schedule” on page 342
• “Virtualization for reports” on page 341
• “Logging report details and exceptions” on page 344

Introduction
Reports provide a history of the performance data collected for a specified XP or P9000 disk array, with a visual representation of the performance trend of components for a duration that you specify.
Report types, their descriptions, and the disk arrays they apply to:
• Array Performance (for XP disk arrays: Yes): The Array Performance report provides the overall array performance by measuring the total I/Os, and the read and write I/Os, on that array. The Array Performance report comprises the following reports:
  • Total I/O Rate (for XP disk arrays: Yes)
  • Total I/O Rate by hour of day
The Findings section provides a brief summary on the status of the CHIPs, cache, ACP, and the LDEVs.
• CHIP Utilization (for XP disk arrays: Yes; for P9000 disk arrays: No): The CHIP Utilization report provides data on the utilization of various installed CHIPs/CHAs for the duration that you specify. You can also view the CHIP/CHA Utilization by the Hour of the Day report, which provides the utilization data for all the CHIPs/CHAs averaged over a 24-hour period.
• LDEV IO (for XP disk arrays: Yes; for P9000 disk arrays: Yes): The LDEV IO report provides data on the busiest LDEVs and the RAID groups.
• RAID Group Utilization: The RAID Group Utilization report provides the top 32 RAID groups, derived based on the extent of utilization of each RAID group. It is available as a standalone report and also as part of the All report. For more information, see Creating report to view the most utilized RAID Groups.
NOTE:
The MP blade utilization data is not applicable for the XP disk arrays, so the MP Blade Utilization report is not included in the All report generated for the XP disk arrays. The ACP/DKA and the CHIP/CHA utilization data are not applicable for the P9000 disk arrays, so their reports are not included in the All report generated for the P9000 disk arrays.
IMPORTANT:
• Reports on the following are available only if they are configured in the selected XP or P9000 disk array. If they are not configured, they are not displayed as options for creating their reports. In addition, they are not displayed in other related reports, such as the Array Performance and All reports.
Screen elements and their descriptions:
• View Created/Scheduled Reports tab: This tab displays two sections, the Reports section and the Scheduled Reports section.
  • The records for the reports that you schedule periodically or save in P9000 Performance Advisor are displayed in the Reports section. Select a record to view the associated report.
  • The schedules that are to be executed more than once are displayed in the Scheduled Reports section. You can edit or delete a schedule.
Providing common report details NOTE: • To create a report, it is mandatory that you provide the report name, array model, and the report type. • The Report Name, Customer Name, Consultant Name, and the Array Location are pre-populated in the respective fields, if you have already configured them as common settings on the Email Settings screen. For more information, see “Configuring email and SNMP settings” on page 92. These details are applicable for all the reports that you create.
2. Select or enter the following details: • Name of the report in the Report Name box. The name should not be less than 2 characters or exceed 80 characters in length. • Name of the customer or company in the Customer Name box. • Name of the consultant in the Consultant box. • Location for the array in the Array Location box. • The array type from the Array Type list.
The following are the supported file formats:
• HTML
• PDF
• RTF
• CSV
• DOCX
The HTML format is the default file type for any report that you generate or save. The report is always provided as a compressed (.zip) file when sent as an email attachment. You can extract the contents of the ZIP file onto your local system to view the report details. However, if you select a PDF, DOCX, or RTF file type, you can choose to receive a normal report file or a compressed file as the email attachment.
3. Generate or save the report.
• To generate a report, click Generate. P9000 Performance Advisor does not save the report in its database or display a record for the report in the Reports section, under the View Created/Scheduled Reports tab. Instead, you view only a temporary copy of the report. The report cannot be retrieved once it is closed.
2. Choose the schedule and specify the duration of your choice:
1. Collection Schedule: Displays Daily, Weekly, and Monthly. By default, Weekly is selected as the collection schedule.
• Day of the Week: Displays the list of week days. Select the week day when you want the schedule to be executed.
• If you select Monthly as the collection schedule, the Monthly Schedule is displayed.
4. Click Save. P9000 Performance Advisor does the following: • Saves the schedule and also displays a record for the schedule in the Scheduled Reports section, under the View Created/Scheduled Reports tab. The following details along with those you provided while scheduling a report are displayed for that schedule in the Scheduled Reports section: • Occurrence: Displays number of times a particular schedule is repeated. The occurrence is aligned to the selected schedule frequency.
and eight backend LDEVs, and eight frontend and eight backend RAID groups. Further, the report displays graphs only for those LDEVs that have associated I/Os and those RAID groups on which I/O transactions have occurred. Consider the following example: A report is created to view the 32 busiest frontend LDEVs and the 16 busiest frontend RAID groups, and only eight of the selected 32 LDEVs and four of the selected 16 RAID groups are busy.
1. Select LDEV Activity from the Report Type list.
2. Select the Metric Type as:
• FrontEndIO: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the total frontend I/Os.
• BackEndIO: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the total backend transfers.
Viewing reports
IMPORTANT:
• Reports with an asterisk (*) before the User Name are generated by a schedule. Following is the naming convention for reports that have an associated schedule: _exportDB-_____
• “Deleting a report” on page 341
• “Scheduling reports” on page 335
• “Viewing a schedule” on page 342
• “Logging report details and exceptions” on page 344

Deleting reports
To delete a report:
1. Click Reports in the left pane and then click the View Created/Scheduled Reports tab.
2. In the Reports section, select the check box for the report record that you want to delete.
3. Click Delete. Click OK when prompted to confirm.
Editing report schedules The report schedules that you create appear in the Scheduled Reports section under the View Created/Scheduled Reports tab. IMPORTANT: • The Scheduled Reports section under the View Created/Scheduled Reports tab appears only if you have logged in as an Administrator or a user with administrator privileges. • If the Email Dest for a schedule record is blank, it implies that the report is scheduled, but an email address is not provided or is invalid.
Click Cancel to retain the records. Understanding report records This section describes what to infer from the data displayed in the Reports section under the View Created/Scheduled Reports tab. In the preceding image, you can view the report, PA_ACP_Rep that is executed on 2009-10-11 20:11:32 IST (Generation Time). The report provides data on ACP Utilization in an XP disk array, 82502 for a period of 1 month (2009-09-11 to 2009-10-11). The report is provided in HTML format.
generate a report daily at 19:00 hours. Hence, the schedule is active and a report is generated only the day after 9th September 2008, on 10th September 2008 at 19:00 hours. The End Time for this schedule displays 09.10.2008 19:00:00, which means that the last report that P9000 Performance Advisor generates is on 10th September 2008 at 19:00 hours. This is because, while creating the schedule, the number of times it must repeat is given as 1 in the Occurrence box.
13 Using Performance Estimator for XP disk arrays
This chapter discusses the following topics:
• “Introduction” on page 345
• “Disk types supported for performance estimation” on page 346
• “Understanding PET data” on page 346
• “Estimating performance for XP24000 or XP12000 type of array” on page 346

Introduction
P9000 Performance Advisor enables you to determine the optimal performance of your XP24000 and XP12000 Disk Arrays after configuration collection is complete for these XP disk arrays.
Array type / Supported disk type: XP12000

Supported disk sizes for performance estimation
The following table lists the XP disk arrays and the disk sizes they support.
1. Click Performance Estimator in the left pane. The Performance Estimator screen appears. 2. Select the XP disk array model from the Array Type list. This list displays the XP disk array models only for the monitored XP disk arrays. The Performance Estimator screen corresponding to the selected XP disk array appears. The Array List displays the XP disk arrays that belong to the selected XP disk array model.
Based on the above selection, the Performance Estimator displays the estimated values in the following non-editable text boxes: • IO/sec - Indicates the I/Os that the XP disk array can receive for the selected configuration. • MB/sec - Indicates the MB/s of data that the XP disk array can receive per second for the selected configuration. • R.T. (ms) - Indicates the time taken in milliseconds for the XP disk array to respond for the selected configuration.
14 Troubleshooting issues for components associated with applications
This chapter discusses the following topics:
• “Introduction” on page 349
• “Associating applications with hosts” on page 351
• “Viewing performance or usage data for components” on page 356
• “Viewing variations in the LDEV response time” on page 364
• “Searching for applications associated with components” on page 355
• “Plotting charts” on page 365
• “Example troubleshooting scenarios” on page 366

Introduction
P9000 Performance Advi
• CLPRs (Cache) • RAID groups (Backend components) If your application is associated with components that belong to the P9000 disk arrays, in addition to the above-mentioned components, an LDEV’s response time is also determined by the average utilization of the associated MP blades.
3. If your application is associated with an XP disk array, view the performance data of the LDEVs, ports, CLPRs, and the RAID groups. If your application is associated with a P9000 disk array, view the performance data of LDEVs, ports, CLPRs, and RAID groups, and the usage data of the MP blades. The data can be viewed at the application, host, and the WWN levels. For more information, see “Viewing performance or usage data for components” on page 356. 4.
1. Click Troubleshooting in the left pane. The XP and the P9000 disk arrays monitored by P9000 Performance Advisor are displayed in the component selection tree under Troubleshooting.
1 Resource selection tree
2. Select the XP or the P9000 disk array for which you want to associate an application.
3. Click Configure Application. The Configure Application dialog box appears.
4. Click Add.
5. Select a WWN from the WWNs list.
7. Click New under Application and provide the application name. If you want to associate an existing application with the selected WWN and host, choose the application from the Existing list under Application Name. The following image shows the Configure Application dialog box for 53036, which belongs to the P9500 Disk Array type. The following are the possible combinations: • Associate a new application with a new host. • Associate a new application with an existing host.
10. Click the + sign for the XP or the P9000 disk array in the component selection tree to view the following structure, in the order mentioned:
1. Application name
2. Host name
3. WWN
The following image shows the application name, host name, and the WWN for 53036, which belongs to the P9500 Disk Array type.
• “Plotting charts” on page 365 Searching for applications associated with components You can search for a particular application that is using a component, if you know the name or ID of the component that is associated with the application. IMPORTANT: • You can search for only one component at a time. • The search results are specific to an XP or a P9000 disk array. You cannot search for components that are spread across multiple XP and P9000 disk arrays. To search for an application: 1.
Viewing performance or usage data for components You can view the performance and usage data of components at the application, host, and WWN level, where data is filtered and displayed accordingly. The data displayed is based on whether your application is associated with components that belong to the XP disk arrays or the P9000 disk arrays: • If your application is associated with components that belong to the XP disk arrays, view the performance data of LDEVs, ports, CLPRs, and the RAID groups.
2. Based on your requirement, select an application or choose the host or the WWN associated with an application: If your selection is at the application level, the data displayed for the LDEVs and the associated components is through all the hosts and the WWNs associated with the application. Hence, the data is a superset of the data that you view at the host or the WWN level. • To select a host, click the + sign for an application and select the host from the list of hosts.
4 RAID Group table Click an LDEV ID to view the associated port, CLPR, and the RAID group records highlighted in the respective tables. By default, the port, CLPR, and the RAID group records are displayed for the first LDEV listed in the LDEV table.
Resources, default metrics, and their descriptions:
• MP Blade Util (%): The average utilization of the MP blades that are associated with the LDEVs.
NOTE: The average utilization displayed is for the last one hour. The MP blade average utilization data is collected during the DKC performance data collection. The collection frequency set for the DKC data collection might be different from that set for the LDEV data collection.
• RG Util %: The total utilization of each RAID group.
• Ports:
  • IOPS: The total I/Os happening through the ports.
  • MBPS: The total MB/s of data transferred through the ports.
  • MP Util %: The average of individual MP utilization on each port.
    IMPORTANT: Since the CHIP/CHA MPs are moved to the MP blades in the P9000 disk arrays, the MP Util % metric is not applicable for the P9000 disk arrays. It is applicable only for the XP disk arrays.
• CLPRs
The port type, such as Fibre, Ficon, Escon, or FCoE (applicable only for P9000 disk arrays) is also displayed beside the port ID. • CHA MP: Displays the MP associated with the port. The details of the partner port that is associated with the same MP are also displayed. The partner port record appears in grey. When you plot the usage graphs for these ports (primary and partner ports), you will be able to analyze whether the partner port is overloading the MP that is also associated with the primary port.
The additional set of metrics that P9000 Performance Advisor supports for the LDEVs is as follows:
Table 21 Additional metrics for LDEVs
• Random I/O: The total random I/Os on the LDEV during the entire collection interval.
• Sequential I/O: The total sequential I/Os on the LDEV during the entire collection interval.
• Reads: The sum of random reads and sequential reads on the LDEV during the entire collection interval.
The additional set of metrics that P9000 Performance Advisor supports for the RAID groups, ports, and the CLPRs is as follows:
Table 22 Additional metrics for RAID groups, ports, and CLPRs
RAID groups:
• Non Seq Reads: The total backend tracks loaded in random mode for a specified RAID group.
• Seq Reads: The total backend tracks loaded in sequential mode for a specified RAID group.
Viewing variations in the LDEV response time You can identify the LDEVs that are experiencing response time variations by analyzing their read and write response time values. Consider a scenario where your application is associated with multiple LDEVs and experiencing a slow response time. As some of the components, such as RAID groups are shared, their utilization might not indicate an impact on the application.
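One simple way to flag the LDEVs described above, whose response times deviate from the rest, is to compare each value against a reference. The guide shows a reference value as a blue line but does not specify how it is derived; the sketch below assumes the reference is simply the mean across the LDEVs, and the LDEV IDs and tolerance are hypothetical:

```python
# Hypothetical sketch: flag LDEVs whose average read response time
# deviates from a reference value by more than a tolerance. The actual
# reference-value computation in P9000 Performance Advisor is not
# documented; here the reference is the mean across all LDEVs.

def flag_outliers(response_times_ms, tolerance_ms=3.0):
    """response_times_ms: mapping of LDEV ID -> average read response time.
    Returns the LDEVs whose value deviates from the mean by > tolerance."""
    reference = sum(response_times_ms.values()) / len(response_times_ms)
    return {ldev: rt for ldev, rt in response_times_ms.items()
            if abs(rt - reference) > tolerance_ms}

# Illustrative values only: one LDEV is noticeably slower than the rest.
ldevs = {"1:7A": 5.1, "1:7B": 5.3, "1:7C": 5.0, "9:67": 14.8}
print(flag_outliers(ldevs))  # {'9:67': 14.8}
```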
The reference value used by P9000 Performance Advisor is displayed as a blue straight line in the LDEV average read and write response time graph. Plotting charts You can select and plot charts for components in the LDEVs, Port, CLPR, and the RAID group tables. To plot charts for the selected components and metrics: 1. On the Troubleshooting screen, select components for which you want to plot charts. The components can belong to the LDEV, Port, CLPR, and the RAID Group tables.
3. Select the check box for the metric, for which you want to view the performance graph of the selected components and click OK. P9000 Performance Advisor plots appropriate performance graphs in the Chart Work Area. By default, the data points are plotted for the last one hour of the management station's time. For more information on using charts and chart options, see Plotting charts.
3. Select the XP array. 4. Select the application. 5. Identify ports associated with the LDEVs mapped to the application. In this case, this should bring up ports 1A and 5A. 6. Note the IOPS and MBPS for 1A. Plot a chart of the trend of IOPS and MBPS. 7. Identify the MP associated with port 1A and note the utilization of the MP. Plot a chart of the trend of MP utilization. 8. Note the IOPS and MBPS for 5A. Plot a chart of the trend of IOPS and MBPS. 9.
9. Based on the trend of the utilization values of the RG and its LDEVs, the reason for poor response time on LDEV2 could be attributed to the overloading of the RG 1-2. Also, it could be inferred that RG 1-2 is “hot” due to the heavy load generated by all the LDEVs. In case the LDEV loads are not balanced, the possible solution could be to relocate some of the busy LDEVs on to another RG. 10. Generate a report of the findings above.
15 Launching P9000 Application Performance Extender from P9000 Performance Advisor
You can launch P9000 Application Performance Extender from the P9000 Performance Advisor GUI. P9000 Application Performance Extender is software that enables you to monitor, analyze, and prioritize the performance of critical applications running on P9500, XP20000, and XP24000 Disk Arrays.
2. On the HP StorageWorks P9000 Application Performance Extender Software screen, click Launch HP StorageWorks P9000 Application Performance Extender Software. The text displayed on the HP StorageWorks P9000 Application Performance Extender Software screen is taken from the AppIntegrations.properties file. So, ensure that the text is not modified in the AppIntegrations.properties file. Figure 31 HP StorageWorks P9000 Application Performance Extender Software screen .
16 Launching P9000 Performance Advisor from other Storage products
Introduction
You can launch P9000 Performance Advisor from P9000 Tiered Storage Manager and P9000 Remote Web Console.
Launching P9000 Performance Advisor from P9000 Tiered Storage Manager
IMPORTANT: This section is applicable only for the XP disk arrays.
P9000 Tiered Storage Manager is used to perform migration, where the data stored on a predefined set of volumes is moved to another set of volumes with the same characteristics.
You can launch P9000 Performance Advisor for the Migration Group volumes and the Storage Tier volumes, and also in the Create Migration Task operation to facilitate selection of source and target volumes. Viewing performance graphs for LDEVs To access P9000 Performance Advisor from P9000 Tiered Storage Manager and view the charts for the LDEVs that belong to a storage domain: 1. On the P9000 Tiered Storage Manager window, expand Storage Domains to view the list of storage domains. 2.
6. Click Analyze Performance. The P9000 Performance Advisor Login page is displayed. IMPORTANT: • The location of the P9000 Performance Advisor management station and other parameters are defined in the P9000 Tiered Storage Manager hppa.properties file. For more information, see the HP StorageWorks P9000 Tiered Storage Manager Software Administrator Guide.
Launching P9000 Performance Advisor from P9000 Remote Web Console The HP Remote Web Console enables you to manage and optimize the P9000 storage systems. As part of this process, you can launch P9000 Performance Advisor in context from P9000 Remote Web Console to view the usage pattern of components for a longer duration and make provisioning decisions.
2. Download the PA_Link_Launch_Configuration_Files.zip file to your management station or the system from where you accessed the P9000 Performance Advisor Support screen. The file is available under PA link and launch from RWC on the P9000 Performance Advisor Support screen. The PA_Link_Launch_Configuration_Files.zip file consists of the following XML files that are required to launch P9000 Performance Advisor:
• appDefinition.xml
• appProfile.xml
The readme.
2. Do one of the following: • To update the IP address, open the appDefinition.xml file in Notepad and update the P9000 Performance Advisor management station IP address for the tag as shown: Syntax:
After each of the above-mentioned commands is executed, a confirmation on the number of files copied is displayed in the command prompt window. IMPORTANT: Whenever you update the appDefinition.xml file for the management station IP address or the appProfile.xml file for the session name, execute the above-mentioned commands, so that P9000 Remote Web Console uses the latest XML files to launch the P9000 Performance Advisor session.
5. Click Settings > Launch Application > Performance Advisor. If you have updated a different session name in the appDefinition.xml file, that session name appears when you click Settings > Launch Application. NOTE: If none of the components are selected, the session will be in the disabled mode. The session is enabled only when you select a particular component. Figure 32 HP P9000 Remote Web Console screen . The session opens in a separate browser window.
5. Click Settings > Launch Application > Session Name (default: Performance Advisor). The data for the selected processor blade is displayed in the P9000 Performance Advisor, Array View - MP Blades screen. If multiple processor blades are selected, the data related to the first selected processor blade is displayed. For more information on MP Blades screen, see “Viewing MP blade utilization for P9000 disk arrays” on page 224.
The following image shows the processing distribution for MPB-1MA. To view the utilization data for MPB2, click MPB-2MC in the MP Blade Configuration group box in the Array View - MP Blades screen. Viewing Parity Group data Consider a scenario with five RAID groups (preferably belonging to the same drive type). You want to know which is the least busy RAID group, so that you can provision storage space from that RAID group to create new LDEVs in it.
4. Click Settings > Launch Application > Session Name (default: Performance Advisor). By default, the utilization data for the Overall RAID Group utilization metric is displayed in the Utilization Metrics chart window. The overall RAID group utilization is the total busy rate of the RAID group over an entire collection interval. When a RAID group is associated with a ThP pool, this metric provides the extent to which a RAID group is busy because of the I/Os occurring on a ThP pool.
1. Complete steps 1 and 2 mentioned for “Launching P9000 Performance Advisor” on page 377. 2. Select Logical Devices in the list displayed for the P9000 disk array serial number. 3. In the right work area, select the Logical Device record for which you want to view the usage and performance data in Performance Advisor. 4. Click Settings > Launch Application > Session Name (default: Performance Advisor).
4. Click Settings > Launch Application > Session Name (default: Performance Advisor). The host group and the usage data of ports and LDEVs associated with the selected host group are displayed in the Array View - LDEV screen. The above image displays the LDEVs and ports associated with the host group ctx_dcb. The Chart Work Area in the above image displays the maximum, minimum, and average I/O on the port CL8B that is selected in the Array View - LDEV screen. Sample on appDefinition.xml and appProfile.
For example, if V7-4 is deleted in the appProfile.xml file, the Ports/Host Groups application menu item does not appear for selection in the P9000 Remote Web Console. The application_menu tag specifies the menu name that is displayed in the P9000 Remote Web Console. NOTE: The application_type tag in the appProfile.xml file, and the external_application and application_type_link tags in the appDefinition.xml file must always point to 'pa'.
Launching P9000 Performance Advisor from other Storage products
17 Troubleshooting P9000 Performance Advisor Overview This chapter describes how to troubleshoot problems that you may encounter while using the HP StorageWorks P9000 Performance Advisor Software. HP tools for troubleshooting and visualizing data HP provides P9000Watch and P9000Sketch to help you troubleshoot and visualize data in a graphical format.
intervention, this utility can be used to collect data required for analysis when you reproduce the issue. The log can be generated at the following levels:
• Error
• All
• Debug
• Fatal
• Info
• Warn
• Off
The log details can be saved (in compressed zip format) and provided to the HP P9000 Performance Advisor Support team for further offline analysis of the issue. Once the log is provided, you can set the log level to 'Off'.
6. Restart the P9000 Performance Advisor Tomcat service. Go to Start > Programs > HP StorageWorks > P9000 Performance Advisor to restart the services. Average values are higher than maximum values If the average values for Port metrics are showing higher than the maximum values for XP 10000 and XP 12000 set of arrays, you need to upgrade the array firmware to 50-09-34 or later.
2. Click Event Log in the left pane.
3. In the Start Time and End Time section, set the hour and time corresponding to that of the management station.
Host Agent: Verifying that it is operational To verify that the Host agent is operational: 1. Verify that the HP StorageWorks P9000 Performance Advisor Hostagent service is running on Windows, or a host agent daemon is running on UNIX. On UNIX, enter ps -ef | grep -i java.
Host does not appear in the management station If the host agent does not appear in the management station, check the following: • Verify the network configuration by pinging the management station from the host agent. • Verify that the host agent version and the management station version are the same. • Verify if the host agent services are running (on non-Windows platforms, run the following command: ps -ef | grep -i xppa).
To clear the Java Plug-in cache:
1. Close all web browser application windows.
2. Open the Windows Control Panel, and click Java Plug-in.
3. In the Java Plug-in Control Panel dialog, click the Cache tab.
4. Click Clear.
5. In the Confirmation Needed - Cache dialog, click Yes.
6. Close the Java Plug-in Control Panel dialog box.
Maintaining versions for host agent logs You can maintain a minimum of three versions of the P9000 Performance Advisor host agent log file (PerformanceAdvisorXP.
• HP StorageWorks P9000 Performance Advisor Database • HP StorageWorks P9000 Performance Advisor Database Listener 2. Verify that supported JRE is installed by entering java -version at the command line prompt (from the management station home directory). P9000 Performance Advisor displays disconnected external arrays P9000 Performance Advisor continues to show disconnected external devices from external volumes if the external device is disconnected abruptly (by unplugging the FC cable).
2. Click Request Info and wait until the status is updated to show as Received. This verifies that the host and management station are communicating with each other.
3. Delete the existing performance data collection schedule and create a new schedule for the selected host under the Performance Collection tab (Array View > Performance Collection). For details on creating performance data collection schedules, see “Creating or viewing a performance data collection schedule” on page 70.
Unable to connect to the real-time server on the host agent If you are not able to connect to the real-time server on a host agent, one of the following error messages is displayed: • Performance Advisor could not recognize the Real Time Server on the host agent. This could be due to the Real Time Server not running on the selected host agent or the host agent name is not found. • Error occurred while connecting to Real Time Server.
Unfinished configuration data collection error On a Windows machine, if configuration data collection does not finish or the PerformanceAdvisorXP.log displays the following error: PerformanceAdvisor: Failed to start config data collector transfer for : com.hp.xpsl.hostagent.RmlibException: RMLIB-based DKC configuration is empty for To resolve the issue: 1. Create a GUID for the devices.
NOTE: xxxx is the name of the management station on which HP StorageWorks P9000 Performance Advisor Software is installed. If the above error messages are present, perform one of the following actions: • Manually restart the HP StorageWorks P9000 Performance Advisor Database Listener service and HP StorageWorks P9000 Performance Advisor Database service. • Run the LSNRCTL STOP and LSNRCTL START commands. • Run the SHUTDOWN command followed by the STARTUP database command.
To enable multiprocessor optimization, you must modify the property for multiprocessor optimization in the serverparameters.properties file. By default, multiprocessor optimization is not enabled (switched off) for P9000 Performance Advisor. The relevant section of the file reads:

# Multiprocessor optimizations
#
# The following entry enables / disables the Multiprocessor Optimizations.
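The property key itself is not reproduced in this guide, so the entry below is a placeholder sketch only (check your serverparameters.properties file for the real key name):

```properties
# Placeholder key - the real name is defined in serverparameters.properties.
# Set to true to enable multiprocessor optimizations; the default is false.
multiprocessor.optimizations.enabled=true
```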
Security under Settings is not displayed when localhost is used in the http address to log on to P9000 Performance Advisor The Security link under Settings in the left pane is not displayed when you use localhost in the URL (http:///pa) to access P9000 Performance Advisor when it is enabled for Native authentication. This occurs regardless of whether you log in as the default administrator or as a user who is granted administrator privileges. Workaround: Use the fully qualified domain name ([servername].
1. Click Start > Add/Remove Programs > P9000 Performance Advisor.
2. Select the Repair option.
The error is rectified. Error messages Table 24 describes the Performance Advisor error messages.
Table 24 Error messages
Scenario: Performance Advisor is unable to connect to the database.
Error message: Check if the Database is running. Please contact administrator or HP Support if the exception continues.
Scenario: Unable to collect configuration data due to insufficient disk space. This error is logged when collecting configuration information through out-band collection; the disk space needed to form the objects pertaining to the configuration collection is unavailable, leading to this error.
Error message: IOException: There is not enough space on the disk.
Solution: Insufficient disk space on the system. This could potentially delay data collections.
Scenario: Real-time monitoring fails if it is attempted on a host agent with a pending server update request.
Error message: Real Time Server Configuration data is getting updated on the selected HA. Please select any other HA.
Solution: When a real-time server update is requested, you cannot initiate real-time monitoring using the same host agent. Wait for the update to be completed, or select another host agent.
Scenario: Real-time monitoring is attempted but charts are not displayed.
Scenario: The Smart and ThP pools are not configured in the selected P9000 disk array.
Error message: Smart and ThP pools are not configured for this P9500 Disk Array
Solution: Perform the following steps:
1. Configure the Smart and ThP pools on your P9000 disk array.
2. Perform the configuration data collection for your P9000 disk array.
3. In the Array View node, select your P9000 disk array and click Pools to view the Smart and ThP pools data.
Perform the following steps: 1.
Scenario: A component is not selected from the Components list for a particular P9000 disk array in P9000 Remote Web Console.
Error message: P9000 Performance Advisor has detected that a component is not selected to display data.
Solution: Select a component (example: Parity Group, MP Blade Processor) in P9000 Remote Web Console to view the corresponding performance data in P9000 Performance Advisor.
18 Support and other resources Contacting HP For worldwide technical support information, see the HP support website at: http://www.hp.
http://www.hp.com/support/storagedocsurvey Thank you for your time and your investment in HP storage products.
Convention: Element
Bold text: Keys that are pressed; text typed into a GUI element, such as a box; GUI elements that are clicked or selected, such as menu and list items, buttons, tabs, and check boxes.
Italic text: Text emphasis.
Monospace text: File and directory names; system output; code; commands, their arguments, and argument values.
Monospace, italic text: Code variables; command variables.
Monospace, bold text: Emphasized monospace text.
WARNING! Indicates that failure to follow directio
A Appendix A Storage management logical partitions (SLPRs) A disk array can be shared by multiple organizations, and by multiple departments within an enterprise. Therefore, multiple administrators might manage a single disk array. This circumstance creates the potential for an administrator to destroy volumes that belong to other organizations, and it can complicate and increase the difficulty of managing the disk array.
enterprise B's disk array administrator can manage enterprise B's virtual disk array, but cannot manage enterprise A's disk array. Cache logical partitions (CLPRs) When one disk array is shared with multiple hosts, and one host reads or writes a large amount of data, the host's read and write data occupies a large area in the disk array's cache memory. In this situation, the I/O performance of other hosts decreases because the hosts must wait to write to cache memory.
B Sample reports Report types P9000 Performance Advisor supports report generation for the following categories:
• “Array performance reports” on page 411.
• “LDEV IO reports” on page 417.
• “RAID Group Utilization Report” on page 420.
• “Cache utilization reports” on page 421.
• “ACP utilization reports” on page 424.
• “CHIP utilization reports” on page 425.
• “XP Thin Provisioning (THP) pool occupancy” on page 427.
• “Snapshot pool occupancy” on page 428.
IMPORTANT: • The Findings section for an XP disk array provides a brief summary on the status of the CHIPs, cache, ACP, and the LDEVs. • The Findings section for a P9000 disk array provides a brief summary on the status of the cache, LDEVs, and the MP blades. • The utilization summary of the CHIP/CHA and the ACP/DKA MPs is not displayed in the Array Performance report - Findings section for the P9000 disk arrays.
Figure 35 Total I/O Rate . The total backend transfers may be compared to the total frontend I/Os; the difference is due to the effects of the array cache. The total backend transfers load is taken by the RAID groups and ACP/DKA pairs, whereas the total frontend I/O load is taken by the CHIP/CHA ports. NOTE: If there are no data points available for the dates selected, a blank chart is displayed.
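The comparison described above can be sketched as a quick calculation (an illustration of the idea only, not a Performance Advisor feature; the function name is ours):

```python
def cache_absorbed_ratio(frontend_ios, backend_transfers):
    """Rough fraction of frontend I/O that never reached the drives,
    i.e. the difference attributed to the array cache."""
    if frontend_ios <= 0:
        raise ValueError("frontend I/O count must be positive")
    return max(0.0, (frontend_ios - backend_transfers) / frontend_ios)

# 10,000 frontend IOPS vs 3,500 backend transfers:
# 65% of the frontend load was absorbed by cache.
print(cache_absorbed_ratio(10_000, 3_500))  # 0.65
```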
The total backend transfers may be compared to the total frontend I/Os; the difference is due to the effects of the array cache. The total backend transfers load is taken by the RAID groups and ACP/DKA pairs, whereas the total frontend I/O load is taken by the CHIP/CHA ports. NOTE: If there are no data points available for the dates selected, a blank chart is displayed.
Read/Write Ratio report The Read/Write Ratio report displays in a chart format, the ratio of read activity to write activity over the entire period. It covers both sequential and random read and write activity. “Read/Write Ratio” on page 415 displays a sample Read/Write Ratio report for the XP1024 Disk Array. Figure 38 Read/Write Ratio . For example, a data point of X on the graph indicates X% read activity and (100-X)% write activity.
Figure 39 Read/Write Ratio by hour of day . NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart. Read/Write Detail report The Read/Write Detail report displays in a chart format, the total I/Os separated into different I/O types.
NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart. LDEV IO report The LDEV IO report provides data on the busiest frontend and backend LDEVs and RAID groups on an XP or a P9000 disk array. It is based on the frontend I/Os and the backend transfers.
Total Backend I/O Rate First Top 8 LDEVs report The Total Backend I/O Rate First Top 8 LDEVs report displays in a chart format, the real backend I/O rate of the busiest eight LDEVs. This can be compared to the potential maximum throughput of the hardware. The maximum throughput varies depending on RAID level and disk mechanism type and other factors such as the size of the individual I/Os.
Figure 42 Total Backend I/O Rate First Top 8 RAID Groups . Total Frontend I/O Rate First Top 8 LDEVs report The Total Frontend I/O Rate First Top 8 LDEVs report displays in a chart format, the number of I/Os operations performed by the first set of busiest eight LDEVs. “Total Frontend I/O Rate First Top 8 Ldevs” on page 419 displays a sample Total Frontend I/O Rate First Top 8 LDEVs report for the XP1024 Disk Array. Figure 43 Total Frontend I/O Rate First Top 8 Ldevs .
Figure 44 Total Frontend I/O Rate First Top 8 Array Groups/Pools . RAID Group Utilization Report The RAID Group Utilization report consists of four charts that display the utilization of the top 32 RAID groups, split into eight each. The RAID group utilization indicates the total utilization of a RAID group over an entire collection interval. Figure 45 on page 420 displays a sample RAID Group Utilization report that provides the first top eight RAID groups for the P9500 Disk Array.
The report displays the utilization graphs for only those RAID groups that have managed the backend transfers. When a RAID group is associated with a ThP pool, the extent of RAID group utilization due to I/Os occurring on a ThP pool is considered.
Figure 47 Cache Write Pending . Percentage Read Hits report The Percentage Read Hits report displays in a chart format, cache read hits as a percentage of the total cache read operations. “Percentage read hits” on page 422 displays a sample Percentage Read Hits report for a P9000 disk array. Figure 48 Percentage read hits .
Figure 49 Total Backend Transfer report . Total Backend Transfer by Hour of the Day report The Total Backend Transfer by Hour of the Day report displays in a chart format, the total number of transfers, both sequential and random drive-to-cache transfers, and all cache-to-drive transfers, averaged over a 24-hour period. “Total Backend Transfer by Hour of the Day” on page 423 displays a sample Total Backend Transfer by Hour of the Day report for a P9000 disk array.
Figure 51 Cache Side File Utilization . ACP utilization report IMPORTANT: The utilization metrics on the ACP/DKA MPs are not displayed for the P9000 disk arrays. They are included as part of the utilization metrics displayed for the MP blades in the P9000 disk arrays. The ACP utilization reports allow you to view in a chart format, the average utilization of the various installed ACP/DKA pairs either over the entire period or over every hour of a day.
Figure 52 ACP utilization over the entire period . ACP Utilization by Hour of the Day report The ACP Utilization by Hour of the Day report displays in a chart format, the average utilization of the installed ACP/DKA pairs over a 24-hour period. “ACP utilization over a 24-hour period” on page 425 displays a sample ACP Utilization by Hour of the Day report for an XP disk array. Figure 53 ACP utilization over a 24-hour period .
The CHIP utilization reports allow you to view in a chart format, the utilization data for all the installed CHIPs/CHAs in the array and the average utilization data for all the installed CHIPs/CHAs in an XP disk array. A sample of each report is given below: CHIP Utilization report The CHIP Utilization report displays in a chart format, the utilization data for all the installed CHIPs/CHAs in an XP disk array. “CHIP Utilization” on page 426 displays a sample CHIP Utilization report for an XP disk array.
CHIP processor utilization report The CHIP processor utilization report displays in a chart format, the individual MP utilization on an installed CHIP/CHA. “CHIP Processor Utilization” on page 427 displays a sample CHIP processor utilization report for an XP disk array. Figure 56 CHIP Processor Utilization . In this sample report, the individual MP utilization for the CHA 1E is displayed. Similarly, a report is generated for all the installed CHIPs/CHAs.
Figure 57 XP Thin Provisioning pool occupancy . Snapshot pool occupancy report The Snapshot Pool Occupancy report provides the usage percentage of the eight busiest snapshot pools. NOTE: P9000 Performance Advisor reports only those snapshot volumes in an XP or a P9000 disk array that are assigned to a pool. Figure 58 on page 428 displays a sample Snapshot Pool Occupancy report for an XP disk array. Figure 58 Snapshot pool occupancy .
P9000 Continuous Access Journal group utilization report The Journal Pool Utilization report displays the utilization percentage of the eight busiest Journal groups. Figure 59 on page 429 displays a sample Continuous Access Journal Group Utilization report for a P9000 disk array. Figure 59 P9000 Continuous Access Journal group utilization . LDEV Activity report You can view the busiest and least busy LDEVs in an XP or a P9000 disk array through the LDEV Activity report.
Figure 60 LDEV Activity report . IMPORTANT: • The threshold limits that you specify are independent of each other and applicable to only the category that you select. You can set both the maximum and minimum threshold levels, or one of them based on your requirement. • The report also provides the associated drive types for the LDEVs. This information helps you to identify if the associated drive is supporting the required LDEV performance. If not, move the LDEV to a different drive type.
Figure 61 Export Database report (Human readable format) . For more information on the different .csv files that are generated for an XP or a P9000 disk array, see “Export DB CSV files” on page 178.
MP blade utilization report The MP Blade Utilization report can be generated only for the P9000 disk arrays. It includes the average utilization data for each individual MP blade, their top 20 consumers, and the associated processing types. Average utilization of an MP blade The average utilization is calculated as the average of the utilization of all the individual processors in the MP blade.
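The averaging described above can be sketched as follows (a minimal illustration of the calculation, not Performance Advisor's actual implementation; processor utilization values are assumed to be percentages):

```python
def mp_blade_average_utilization(processor_utils):
    """Average utilization of an MP blade, taken as the mean of the
    utilization (%) of each individual processor on the blade."""
    if not processor_utils:
        raise ValueError("no processor utilization samples")
    return sum(processor_utils) / len(processor_utils)

# Example: a blade with four processors
print(mp_blade_average_utilization([40.0, 60.0, 50.0, 30.0]))  # 45.0
```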
MP blade utilization by the processing types The average MP blade utilization split up for the different processing types is displayed in a chart for the selected duration. The duration for which the MP blade was busy processing consumer requests is also displayed as the Total Busy Time. For more information on processing types, see “Viewing MP blade utilization by processing types” on page 306.
C Appendix C Supportability matrix The following matrix displays the supportability of ThP, snapshot, and continuous access journal volumes on the XP arrays.
D Appendix D Array mapping To correctly map the ACP and CHIP pairs, see the following tables for the respective array: Table 27 on page 437 lists ACP and CHIP pairs for disk array XP48/128. NOTE: The cards are lettered A-M, omitting I.
Table 29 on page 438 lists the ACP and CHIP pairs for XP1024.

Table 29 XP1024
Slot name   Pair ID       Slot ID
B, H        ACP Pair 1    ACP B = 0; H = 4
C, J        ACP Pair 2    ACP C = 1; J = 5
D, K        ACP Pair 3    ACP D = 2; K = 6
E, L        ACP Pair 4    ACP E = 3; L = 7
P, V        CHIP Pair 1   CHIP P = 0; V = 4
Q, W        CHIP Pair 2   CHIP Q = 1; W = 5
R, X        CHIP Pair 3   CHIP R = 2; X = 6
S, Y        CHIP Pair 4   CHIP S = 3; Y = 7

Table 30 on page 438 lists the ACP and CHIP pairs for XP12000 type array.
Table 31 on page 439 lists the ACP and CHIP pairs for the XP10000 and SVS200 type arrays.

Table 31 XP10000 and SVS200
Slot name      Pair ID       Slot ID
MIX-A, MIX-F   ACP Pair 1    ACP MIX-A = 0; MIX-F = 4
MIX-A, MIX-F   CHIP Pair 1   CHIP MIX-A = 8; MIX-F = 12
B, E           CHIP Pair 2   CHIP B = 9; E = 13

Table 32 on page 439 lists the ACP and CHIP pairs for an XP24000 type array.
Slot name   Pair ID        Slot ID
LU, XU      CHIP Pair 13   CHIP LU=20; XU=28
KU, WU      CHIP Pair 14   CHIP KU=22; WU=30
LL, XL      CHIP Pair 15   CHIP LL=21; XL=29
KL, WL      CHIP Pair 16   CHIP KL=23; WL=31

NOTE: The numbers in the third column correspond to the card letter. These numbers are used when reading CLUI output that has an older formatting style.

Table 33 on page 440 lists the ACP and CHIP pairs for an XP20000 type array.
E Metric Category, metrics, and descriptions Metrics and descriptions “Metric Category, metrics, and descriptions” on page 441 provides the metric categories and metrics that are available in each of the metric categories, and the metric descriptions. Table 34 Metrics and descriptions Metric category Frontend IO Metrics Metric Description ACP Total IO – Frontend The total frontend I/Os (random plus sequential) on all the RAID groups managed by the ACP pair.
Metric category 442 Metric Description ACP Sequential Read Cache Hits – Frontend The frontend I/Os for the sequential read requests that result in cache hits for all the RAID groups managed by the ACP pair. ACP Sequential Writes – Frontend The frontend I/Os for the sequential write requests for all the RAID groups managed by the ACP pair. ACP Search/Reads Basic Mode – Frontend The frontend I/Os for the search or reads in basic mode for all the RAID groups managed by the ACP pair.
Metric category Metric Description Total CFW The total frontend I/Os (read plus write) in the Cache Fast Write mode, for the ACP pair. CFW Reads The frontend I/Os for read requests in the Cache Fast Write mode, for the ACP pair. CFW Read Cache Hits The frontend I/Os for read requests in the Cache Fast Write mode that result in cache hits, for the ACP pair. CFW Writes The frontend I/Os for write requests in the Cache Fast Write mode, for the ACP pair.
Metric category 444 Metric Description Ext-RAID Group Total IO The total frontend I/Os made on an external volume. Ext-RAID Group Total Random IO The total random frontend I/Os rate on this external volume during the entire collection interval. Ext-RAID Group Random Reads The total random frontend reads rate on this external volume during the entire collection interval.
Metric category Metric Description LDEV Total Random IO – Frontend The total random frontend I/Os rate on this LDEV during the entire collection interval. LDEV Random Reads – Frontend The total random frontend read I/Os on this LDEV during the entire collection interval. LDEV Random Reads Cache Hits – Frontend Out of the total random frontend read I/Os on this LDEV, the number of random reads available in the cache.
Metric category 446 Metric Description Total IO Writes – Frontend The total random and sequential frontend write I/Os on this LDEV during the entire collection interval. Total MB Reads – Frontend The total random and sequential frontend read MBs on this LDEV during the entire collection interval. Total MB Writes – Frontend The total random and sequential frontend Write MBs on this LDEV during the entire collection interval.
Metric category Metric Description RAID Group Total Random IO – Frontend The sum total of the frontend random I/Os on all the LDEVs created in a RAID group. RAID Group Random Reads – Frontend The frontend random read I/Os on all the LDEVs created in a RAID group. RAID Group Random Reads Cache Hits – Frontend Out of the sum total of frontend random read I/Os on all the LDEVs created in a given RAID group, the number of random reads available in the cache.
Metric category 448 Metric Description THP Pool Random Read Cache Hits The sum total of random frontend read I/Os on individual virtual volumes defined in this pool, which are serviced from the cache. THP Pool Random Writes The sum total of random frontend write I/Os on individual virtual volumes defined in this pool. THP Pool Total Sequential IO The sum total of sequential frontend I/Os on individual virtual volumes defined in this pool.
Metric category Metric Description SNAPSHOT Pool Random Writes The frontend I/Os for random write requests that result in cache hits, for all the snapshots in a snapshot pool. SNAPSHOT Pool Total Sequential IO The frontend I/Os for sequential I/Os for all the snapshots in a snapshot pool. SNAPSHOT Pool Sequential Reads The frontend I/Os for sequential read I/Os for all the snapshots in a snapshot pool.
Metric category Frontend MB Metrics 450 Metric Description SNAPSHOT Pool Write Seq Access Mode The frontend I/Os for write requests in the sequential access mode for all the snapshots in a snapshot pool. Total MB – Frontend The total frontend throughput in MB/s for a given LDEV. Total Random MB – Frontend The total random frontend I/Os throughput in MB/s for the given LDEV. Random MB – Frontend The random frontend I/Os throughput in MB/s for the given LDEV.
Metric category Metric Description Total MB/s – Frontend The total throughput of data handled by a port over a given duration. This port in turn connects to the host group through which the data reaches the port. Total MB/s – Frontend The total throughput of data handled by a host group over a given duration. RAID Group Total MB – Frontend The total frontend throughput in MB/s read or written to the RAID group.
Metric category 452 Metric Description ACP Random Write MB – Frontend The frontend throughput in MB/s written randomly to an ACP. ACP Total Sequential MB – Frontend The total frontend throughput in MB/s read from or written sequentially to an ACP. ACP Sequential Read MB – Frontend The frontend throughput in MB/s read sequentially from an ACP. ACP Sequential Write MB – Frontend The frontend throughput in MB/s written sequentially to an ACP.
Metric category Metric Description THP Pool Sequential Write MB The sum total of sequential frontend write I/Os throughput in MB/s transfer rate of all the individual Virtual volumes in this pool. SNAPSHOT Pool Total MB The sum total of frontend throughput in MB/s transfer rate of all the virtual volumes in this pool. SNAPSHOT Pool Total Random MB The sum total of random frontend I/Os throughput in MB/s transfer rate of all the virtual volumes in this pool.
Metric category 454 Metric Description Cache Sidefile Usage MB The size of the P9000 Continuous Access Asynchronous side file usage in MB. CM ACP BUS/PATH UTILIZATION Details the usage of the shared memory and cache memory bus by the CHA or DKA. Ext-RAID Group Total MB The total frontend throughput in MB/s data read or written to the external volume. Ext-RAID Group Total Random MB The total frontend throughput in MB/s data read or written randomly to the external volume.
Metric category Metric Description • The total utilization of the ACP pair. Utilization Metrics ACP Pair Util Total • In a thin provisioning environment where an ACP pair is associated with a ThP pool, the ACP Pair Util Total metric provides the ACP utilization due to the I/O cache miss (where frontend I/Os occurring on a ThP pool are received at the array backend). In this case, ACP utilization also correlates to the utilization of RAID groups associated with a ThP pool.
Metric category Metric Description • ACP Right Util MP0 • ACP Right Util MP1 • ACP Right Util MP2 • ACP Right Util MP3 • ACP Right Util MP4 The utilization of the MP# on the right ACP. • ACP Right Util MP5 • ACP Right Util MP6 • ACP Right Util MP7 CHIP Util Total The total utilization of the CHIP. CHIP MP Util The total utilization of each individual MP# on a CHIP board. SM CHIP Bus/FBus Hi Util The utilization of Shared memory CHIP transfer bus HI.
Metric category Metric Description RAID Group Utilization Random Write Parity The utilization of the RAID group for writing random parity. RAID Group Utilization Seq Reads The utilization of the RAID group for sequential reads. RAID Group Utilization Seq Writes The utilization of the RAID group for sequential writes. RAID Group Utilization Seq Write Parity The utilization of the RAID group for writing sequential parity. Cache Usage Util The utilization of the cache shown as a percentage value.
Metric category Metric Description The average MP blade utilization by each of the following processing types over an entire collection interval: • Open Target MP Blade Util/Processing type • Open Initiator • Open External Initiator • Open Mainframe Target • Open Mainframe Ext Initiator • Backend • System Backend Metrics 458 MP Blade Util - Top 20 Consumers The average MP blade utilization by each of the 20 consumers over an entire collection interval.
Metric category Response Time Metrics Metric Description RAID Group Total Tracks – Backend The Overall backend transfers for the selected RAID group. ACP Pair Sequential Read Tracks – Backend The total backend tracks loaded in sequential mode from the specified ACP Pair. ACP Pair Non-sequential Read Tracks – Backend The total backend tracks loaded in non-sequential mode from the specified ACP Pair.
Metric category: Response Time Metrics

Average Write Response, Maximum Write Response
The average write response time of all the LDEVs created in a specified RAID group over the entire data collection interval. The average write response time value for an LDEV is obtained by dividing the accumulated response time of all the I/Os by the total number of I/Os on that LDEV.
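The average write response time described above is accumulated response time divided by I/O count. A minimal sketch of one plausible way to aggregate this across the LDEVs in a RAID group (the per-LDEV sample data and function name are illustrative, not part of the product's API):

```python
def average_write_response(ldev_samples):
    """Average write response time (ms) across LDEVs in a RAID group.

    Each sample is (accumulated_response_ms, io_count) for one LDEV
    over the collection interval; the result pools all I/Os together.
    """
    total_ms = sum(ms for ms, _ in ldev_samples)
    total_ios = sum(n for _, n in ldev_samples)
    return total_ms / total_ios if total_ios else 0.0

# Two LDEVs: 1200 ms accumulated over 400 I/Os, and 300 ms over 100 I/Os
print(average_write_response([(1200, 400), (300, 100)]))  # 3.0
```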
Real-time metric: Definition

Total cache lines random staged percentage: Total backend random read percentage
Total cache lines seq staged percentage: Total backend sequential read percentage
Total KB per IO: IO size
Total random IOPS: IO size
Total random read hits: IO size
Total random read hit IOPS: IO size
Total random write IOPS: IO size
Total read IOPS: IO size
Total read KB per IO: IO size for reads
Total read through put KB per second: IO size for reads
Total seq IOPS: IO size for reads
Wr% - avg write ratio: Total write percentage of total frontend IO
seq% - avg seq IO ratio: Total sequential IO percentage of total frontend IO
r_H% - Avg Read hits: Average read hits
kB/IO: IO size in kB
kB/s - total throughput: KB per second
ms - Avg IO response Time: Average IO response time in ms
Be/Fe - total backend to frontend IO: Total backend to frontend IO ratio
Be - Total cache lines staged percentage: Total backend reads percentage
RG%: RAID group utilization
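Several of the real-time metrics above are ratios derived from raw interval counters (for example, kB/IO from kilobytes transferred and frontend I/O count, or Be/Fe from backend and frontend I/O counts). A hedged sketch of how such derived values could be computed; the counter names are illustrative assumptions, not the product's internals:

```python
def derived_metrics(frontend_ios, backend_ios, kb_transferred, seq_ios,
                    read_hits, reads):
    """Compute illustrative derived ratios from raw interval counters."""
    fe = frontend_ios or 1  # guard against divide-by-zero on idle intervals
    return {
        "kB/IO": kb_transferred / fe,      # average frontend I/O size
        "seq%": 100.0 * seq_ios / fe,      # sequential share of frontend IO
        "Be/Fe": backend_ios / fe,         # backend-to-frontend IO ratio
        "r_H%": 100.0 * read_hits / reads if reads else 0.0,  # read hit ratio
    }

m = derived_metrics(frontend_ios=1000, backend_ios=250,
                    kb_transferred=8000, seq_ios=400,
                    read_hits=540, reads=600)
print(m["kB/IO"], m["seq%"], m["Be/Fe"], m["r_H%"])  # 8.0 40.0 0.25 90.0
```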
Appendix F: Forecasting ThP pool performance

Guidelines for selecting a data range to receive an optimal forecast

To validate the forecasted data, you need to understand the trend of the existing data, because the forecasted data is an extension of the existing trend. The forecasted data represents a trend of the ThP pool occupancy values, not the actual values. The following graph indicates the trend of the actual values. The forecasted values will be an extension of the trend of the selected data points.
• No variance: Select a data range that has at least some variance. If the selected data range has constant values for most of the range, the forecast may simply follow the constant data pattern.
• Empty collection ranges: Missing data points may introduce error into the forecasted data.
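The forecast described above extends the trend of the selected occupancy data points. A minimal sketch of such trend extrapolation using a least-squares linear fit; this is an illustrative model and may not match the product's internal forecasting algorithm:

```python
def linear_forecast(values, steps_ahead):
    """Fit y = a*x + b to evenly spaced samples and extrapolate the trend."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    # Least-squares slope and intercept
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    # Extend the fitted line past the last sample
    return [a * (n - 1 + s) + b for s in range(1, steps_ahead + 1)]

# ThP pool occupancy (%) rising roughly linearly over four intervals
print(linear_forecast([40, 42, 44, 46], 2))  # [48.0, 50.0]
```

Note how a range with no variance (all constant values) yields a flat forecast, and missing samples distort the fitted slope, which is why the guidelines above recommend avoiding both.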
Glossary

Array Control Processor (ACP) The ACP is used in the XP disk arrays prior to the XP24000 Disk Array. With the introduction of the XP24000 Disk Array, the DKA replaced the ACP. The DKA is also applicable to the P9000 disk arrays. The ACP handles the transfer of data between the cache and the physical drives held in the DKUs. The ACPs work in pairs, providing a total of eight SCSI buses. Each SCSI bus associated with one ACP is paired with a SCSI bus on the other member of the ACP pair.
CHA Channel adapter. A device that provides the interface between the array and the external host system. Occasionally, this term is used synonymously with the term channel host interface processor (CHIP). CHP Channel processor. The processors located on the CHA. Synonymous with CHIP. CHIP Channel host interface processor. Synonymous with the term CHA.
Disk Controller (DKC) The array enclosure that contains the channel adapters and service processor (SVP). Disk Processor (DKP) In the XP disk arrays, the MP that resides on a DKA is addressed as the DKP. DKPs do not exist in the P9000 disk arrays. All the MPs/DKPs form part of the MP blades in the P9000 disk arrays. DKU Disk cabinet unit. The array cabinet that houses the physical disks.
LUN Logical unit number. A LUN results from mapping a SCSI logical unit number, port ID, and LDEV ID to a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the number of LDEVs associated with the LUN. For example, a LUN associated with two OPEN-3 LDEVs has a size of 4,693 MB. LUSE Logical Unit Size Expansion.
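The LUN-size rule in the glossary entry above (per-LDEV capacity, determined by the emulation mode, multiplied by the number of associated LDEVs) can be sketched as follows. The per-LDEV size for OPEN-3 is back-computed from the document's own example (two OPEN-3 LDEVs yield 4,693 MB); it is an illustrative value, not an authoritative capacity table:

```python
# Per-LDEV capacity in MB by emulation mode. OPEN-3 is derived from the
# guide's example (two OPEN-3 LDEVs -> 4,693 MB); other modes omitted.
LDEV_MB = {"OPEN-3": 4693 / 2}

def lun_size_mb(emulation, ldev_count):
    """LUN size = per-LDEV capacity (by emulation mode) x number of LDEVs."""
    return LDEV_MB[emulation] * ldev_count

print(lun_size_mb("OPEN-3", 2))  # 4693.0 (matches the glossary example)
```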
SVP Service processor. A notebook computer built into the disk array. The SVP provides a direct interface to the disk array and is used only by the HP service representative. sidefile An area of cache used to store the data sequence number, record location, record length, and queued control information. Shared Memory (SM) The shared memory in an XP disk array stores shared information about the XP subsystem and the cache control information.
WWN World Wide Name. A unique identifier assigned to a Fibre Channel device. World Wide Name (WWN) Group The world wide name group provides access for every host in the specified WWN group to a specified logical unit or group of units. This is part of the LUN Security feature.
Index Symbols Array View screen, 51 10 busiest back-end RAID groups, 232 10 busiest front-end LDEVs, 231 , 91 disk space requirements, 198 displaying charts with different rates, 259 A ACP Pair summary screen, 208 Alarms Apply template, 141 Alarm notifications, 141 Alarms history, 141 Choose metrics, 141 Delete, 141 Disabling alarms, 141 Email notifications, 141 Enable alarms, 141 Enabling alarms, 141 Forecast ThP Utilization, 141 Resource performance Plotting charts, 141 Set thresholds, dispatch le
collection, data configuring, 51 disk space requirements, 198 displaying charts with different rates, 259 command devices, configuring, 51 components displaying performance data, 238 Configuration Host Information, 51 Configuration data collection One-time collection, 59 Scheduling collections Hourly, Daily, Weekly, Monthly, 59 Stopping collection, 67 configuring chart metrics, 259 database size, 174 performance data, 51 Connectivity data unavailable, host-to-array, 110 Consumers LDEVs, Journal groups, E-
GUI Common tasks, 19 Screen resolution, 17 Sorting records, Selecting records, 19 H help obtaining, 405 Host information Request, Receive, 53 Host-to-array connectivity data unavailable, 110 HP technical support, 405 I Incomplete records, displaying, 110 Instant-on license Grace period, 21 L LDEVs displaying performance data, 238 Displaying unknown host connections, 110 License Add, view, remove, 21 Licenses HPAC license key website, 21 Meter based Term license, 21 Permanent, 21 M mapping arrays, 437 me
S Security screen, 115 Settings Configuration Settings tab, 91 Configure Settings, 91 Data Analysis Settings LDEV read-write response, Troubleshooting screen, 91 Personalize Array, 91 Personalize Arrays tab, 91 Security, 91 Security tab, 91 Threshold Setting tab, 91 Threshold Settings, 91 Severity levels Event log, 91 Storage management logical partitions (SLPRs), 409, 411 Subscriber's Choice, HP, 405 symbols in text, 407 T technical support HP, 405 service locator website, 406 text symbols, 407 Threshold