VNX guidelines
<10 msec      Great
10-20 msec    Decent
20-100 msec   Not so great
>100 msec     Poor performance
High latency could cause forced flushes due to a lack of available cache to buffer writes.
Symmetrix guidelines
<1 msec       Great
1-10 msec     Decent
10-50 msec    Not so great
>50 msec      Poor performance
For Symmetrix arrays, performance can be affected by Write Pending (WP) limits: when the array lacks free cache slots to
accept incoming writes, it proactively flushes pages to disk. Running SRDF sessions might also affect performance.
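The rating bands above translate directly into a threshold check. The following is a minimal Python sketch, not a metro node
tool, that maps an average back-end latency sample (in milliseconds) to the guideline ratings; boundary handling is
approximate because the guidelines do not define exact edges.

    # Minimal sketch: map a back-end latency sample (ms) to the rating
    # bands in the VNX and Symmetrix guidelines above. Boundary values
    # get the better rating; the guidelines leave the edges ambiguous.
    VNX_BANDS = [(10, "Great"), (20, "Decent"), (100, "Not so great")]
    SYMMETRIX_BANDS = [(1, "Great"), (10, "Decent"), (50, "Not so great")]

    def rate_latency(latency_ms, bands):
        for upper_bound_ms, rating in bands:
            if latency_ms <= upper_bound_ms:
                return rating
        return "Poor performance"

    print(rate_latency(15, VNX_BANDS))        # Decent
    print(rate_latency(15, SYMMETRIX_BANDS))  # Not so great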
Corrective actions
- Check back-end errors: these indicate that metro node had to abort and retry operations, which can point to a back-end
  fabric or storage array health issue (see the sketch after this list).
- Examine the back-end fabric for its overall health state, recent changes, reported errors, and properly negotiated speeds.
- Examine the back-end storage array for its general health state, and verify that performance best practices for disk/RAID
  layout are followed where needed.
- Use the storage array vendor's performance monitoring tools to confirm the array's performance: metrics available on the
  array but not visible to metro node may shed light on the problem.
- Run the recommended storage array firmware version, and check for newer software releases and known bug fixes.
- Engage your storage array vendor's performance specialists if the problem persists.
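As a concrete starting point for the first action above, here is a minimal sketch, not a metro node CLI feature: it polls a
back-end error counter and reports any increase, which corresponds to aborted and retried operations. The
get_backend_error_count() function is a hypothetical hook you would replace with your own collector (for example SNMP, a
REST call, or log scraping).

    import time

    def get_backend_error_count():
        # Hypothetical hook: replace with your own monitoring source.
        raise NotImplementedError("wire this to your monitoring source")

    def watch_backend_errors(interval_s=60):
        previous = get_backend_error_count()
        while True:
            time.sleep(interval_s)
            current = get_backend_error_count()
            if current > previous:
                delta = current - previous
                print(f"{delta} new back-end errors; check fabric health, "
                      "negotiated speeds, and array health")
            previous = current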
Changing the view
Use the following selection criteria to filter the data (a filtering sketch follows the list):
Read      Displays latency statistics pertaining to reads only.
Write     Displays latency statistics pertaining to writes only.
Director  Displays data for all directors or a specific director in the cluster.
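The same filtering can be reproduced offline. The snippet below is a minimal sketch that assumes latency samples have been
exported as records with op, director, and latency_ms fields; these field names are illustrative, not a metro node export
format.

    # Minimal sketch: apply the Read/Write and Director filters to
    # exported latency samples. Field names are illustrative only.
    samples = [
        {"op": "read",  "director": "director-1-1-A", "latency_ms": 0.8},
        {"op": "write", "director": "director-1-1-A", "latency_ms": 4.2},
        {"op": "read",  "director": "director-1-1-B", "latency_ms": 1.1},
    ]

    def filter_samples(samples, op=None, director=None):
        # None means "all", matching the chart's default view.
        return [s for s in samples
                if (op is None or s["op"] == op)
                and (director is None or s["director"] == director)]

    reads_on_a = filter_samples(samples, op="read",
                                director="director-1-1-A")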
Viewing the Back-end Latency chart
1. From the GUI main menu, click Performance.
2. In the Performance Dashboard, select the tab in which you want to display the Back-end Latency chart (or create a custom
tab).
3. Click +Add Content.
4. Click the Back-end Latency chart icon.
CPU utilization chart
The CPU Utilization chart provides a time-based view of the utilization load on the director CPUs of your metro node system.
By default, the chart shows an averaged view of the utilization loads of all of the CPUs on all the directors in your metro node
system. This category is typically the first one to check for problems. When a CPU reaches 100% busy, its I/O processing
capability has also reached its peak.
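To illustrate what the default averaged view does, and what it can hide, here is a minimal sketch; the per-director
percentages are made-up illustrative data, not real metro node output.

    # Minimal sketch: average per-director CPU utilization the way the
    # default chart view does, then flag any single saturated director.
    per_director_util = {
        "director-1-1-A": 97.0,   # illustrative values only
        "director-1-1-B": 62.0,
    }

    average = sum(per_director_util.values()) / len(per_director_util)
    print(f"average CPU utilization: {average:.1f}%")

    # A near-saturated director can hide behind a healthy-looking average.
    for director, util in per_director_util.items():
        if util >= 95.0:
            print(f"{director} is near saturation ({util:.1f}%)")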
If your CPU utilization is running near 100% at all times, you have no spare capacity to handle a peak load. In applications
that are latency-sensitive and require fast response times, high director CPU usage can increase response times even though
throughput and I/O processing capability might stay constant. Spikes in CPU utilization typically correlate to increased I/O load,