Changing the view
Use the following selection criteria to filter the data:
Director Allows you to select all directors or a specific director in the cluster.
Read and Write check boxes Allow you to filter the throughput data for Reads, Writes, or both.
Viewing the Back-end Throughput chart
1. From the GUI main menu, click Performance.
2. In the Performance Dashboard, select the tab in which you want to display the Back-end Throughput chart (or create a
custom tab).
3. Click +Add Content.
4. Click the Back-end Throughput icon.
Back-end Errors chart
The Back-end Errors chart displays the back-end I/O errors to and from the storage array. There are three categories of
back-end errors:
Aborts Indicate that the metro node back-end gave up and aborted the I/O operation to the storage array, that the array itself aborted the I/O operation, or that another SCSI initiator (a metro node director or a host) connected to the array caused the I/O to abort.
Timeouts Indicate that the metro node back-end detected an I/O operation to a storage volume that did not complete within 10 seconds.
Resets Logical Unit resets issued by the metro node back-end to a storage volume as a corrective action after 20 seconds of no response to any I/O from the storage volume (the Logical Unit on the storage array). The metro node back-end then retries all outstanding I/O to the storage volume (this escalation is sketched below).
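
The timeout-and-reset escalation described above can be summarized as a simple threshold check. The following Python sketch is illustrative only: the 10-second and 20-second thresholds come from the definitions above, but the function and field names are hypothetical and are not part of the metro node firmware or any product API.

from dataclasses import dataclass

TIMEOUT_SECONDS = 10   # I/O not completed within 10 seconds counts as a timeout
RESET_SECONDS = 20     # no response for 20 seconds triggers a Logical Unit reset

@dataclass
class OutstandingIO:
    storage_volume: str
    age_seconds: float   # time since the I/O was issued to the storage array

def classify(io: OutstandingIO) -> str:
    """Return which back-end error counter (if any) this outstanding I/O adds to."""
    if io.age_seconds >= RESET_SECONDS:
        # The back-end issues a Logical Unit reset to the storage volume and
        # then retries all outstanding I/O to that volume.
        return "reset"
    if io.age_seconds >= TIMEOUT_SECONDS:
        return "timeout"
    return "ok"

print(classify(OutstandingIO("device_A", 12)))   # timeout
print(classify(OutstandingIO("device_A", 25)))   # reset
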
NOTE:
The chart displays data only for the cluster to which you are currently connected. To simultaneously view back-end errors
for another cluster, open a second browser session and connect to the second cluster.
Guidelines
Back-end errors typically indicate back-end fabric and/or storage array issues.
For a normal healthy system, there should be no aborts, timeouts, or resets.
Timeouts might happen during bursts of I/O to a storage array. Seeing a few of these is generally not a performance
concern; however, frequent or periodic timeouts are not normal.
Aborts and Resets likely indicate major performance issues on the storage fabric or storage array.
Investigate the cause of back-end errors immediately (a simple triage of these counters is sketched below).
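
As a rule of thumb, the guidelines above can be expressed as a triage of the three error counters. This Python sketch is illustrative only; the counter values would have to be read from the chart (or an export of it), and the messages are paraphrases of the guidelines, not product output.

def triage(aborts: int, timeouts: int, resets: int) -> str:
    """Map a back-end error sample to a severity, following the guidelines above."""
    if aborts > 0 or resets > 0:
        # Aborts and Resets likely indicate major fabric or array issues.
        return "critical: investigate the storage fabric and storage array immediately"
    if timeouts > 0:
        # A few timeouts can accompany bursts of I/O; recurring timeouts are not normal.
        return "warning: watch for frequent or periodic timeouts"
    return "healthy: no aborts, timeouts, or resets"

print(triage(aborts=0, timeouts=2, resets=0))   # warning
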
Corrective actions
Look closely at the latency-related categories (front-end read/write latency and back-end read/write latency) for any high
averages or large spikes. Try to correlate the spikes to the errors (see the sketch after this list).
Examine the back-end fabric for changes, reported errors, proper negotiated speeds, and health state.
Examine the back-end storage array for general health state and a disk/volume layout that follows best practices.
Check the metro node firmware log for events indicating command timeouts, retries, or other general back-end health
issues.
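
To correlate latency spikes with back-end errors, as suggested in the first corrective action, samples from the latency and error charts can be compared on a common time axis. The sketch below assumes the samples are available as (timestamp_in_seconds, value) pairs, for example from a manual export; that format, the helper names, and the threshold values are assumptions made for illustration, not a documented metro node feature.

def spike_times(samples, threshold):
    """Return the timestamps whose value is at or above the threshold."""
    return {t for t, value in samples if value >= threshold}

def correlate(latency_samples, error_samples, latency_threshold_ms, window_seconds=5):
    """Return error timestamps that fall within window_seconds of a latency spike."""
    spikes = spike_times(latency_samples, latency_threshold_ms)
    return [t for t, count in error_samples
            if count > 0 and any(abs(t - s) <= window_seconds for s in spikes)]

# Hypothetical samples: back-end write latency in milliseconds, timeout counts per interval.
latency = [(0, 4.0), (5, 60.0), (10, 3.5)]
timeouts = [(0, 0), (5, 2), (10, 0)]
print(correlate(latency, timeouts, latency_threshold_ms=50))   # [5]
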
Changing the view
Use the following selection criteria to filter the data:
Director Allows you to select all directors or a specific director in the cluster.