In metro node Metro systems, poor virtual volume performance can be caused by poor inter-cluster WAN link performance, poor storage volume performance, or both.
Monitor other metro node performance statistics, such as director CPU usage, front-end aborts, back-end aborts, storage volume latency, and WAN latency, for correlations and possible causes of the poor performance.
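One quick way to test for such a correlation is to compare exported samples offline. The Python sketch below (Python 3.10+) is illustrative only: it assumes you have exported per-interval chart samples to a CSV file, and the file name and column names are hypothetical placeholders for whatever your export actually contains.

import csv
from statistics import correlation  # requires Python 3.10+

# Hypothetical export: one sample per row, latencies in microseconds.
vv_latency, wan_latency = [], []
with open("perf_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        vv_latency.append(float(row["vv_write_latency_us"]))
        wan_latency.append(float(row["wan_round_trip_us"]))

# Pearson r near 1.0 suggests the WAN link is driving the volume latency;
# r near 0 suggests the cause lies elsewhere (CPU, back-end array, fabric).
print(f"correlation = {correlation(vv_latency, wan_latency):.2f}")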
Display the default charts for a virtual volume
1. Select the virtual volume in the list.
2. Click View Charts in the properties panel. By default, all charts display in a single view.
Finding a virtual volume in the list
1. Click the search icon in the upper-right corner of the screen to display the Search text box.
2. In the Search text box, type the full or partial name of the volume and then press Enter. You can use the Previous and
Next buttons to move through the list of matches.
Viewing the Virtual Volumes dashboard
1. From the GUI main menu, click Performance.
2. Click + and select Add Virtual Volumes Dashboard.
Virtual Volume Throughput chart
The Virtual Volume Throughput chart provides a time-based view of the total throughput or IOPS for a virtual volume.
Throughput, more commonly referred to as IOPS, is generally associated with small-block I/O requests (512 B to 16 KB).
Guidelines
The desired level of IOPS performance depends heavily on the host applications and their requested load. Therefore, it is not
possible to provide a threshold of good or bad IOPS performance.
Front-end performance in metro node depends heavily on the available back-end storage array performance and, in metro
node Metro configurations, on the WAN performance for distributed devices.
Any running distributed rebuilds or data migrations might negatively affect available host throughput.
Because metro node Local and Metro implement write-through caching, a small amount of write latency overhead (typically
<1 ms) is expected. This latency may affect applications that serialize their I/O and do not take advantage of multiple
outstanding operations. These types of applications may see a throughput and IOPS drop with metro node in the data path.
In a metro node Metro environment, writes incur extra WAN round-trip time because they must be successfully written to
both clusters' storage before the host is acknowledged. This extra latency may impact the throughput and IOPS of
serialized-type applications.
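As a rough worked example, the write IOPS ceiling of a fully serialized application (one outstanding I/O) is the reciprocal of its total write latency. The Python sketch below uses assumed, illustrative latency values, not measurements from any particular system.

# Assumed values: ~0.5 ms local write-through overhead plus array service
# time, and ~5 ms inter-cluster round trip for a Metro distributed device.
local_write_latency_s = 0.0005
wan_rtt_s = 0.005

serialized_iops = 1 / (local_write_latency_s + wan_rtt_s)
print(f"single-stream write ceiling: ~{serialized_iops:.0f} IOPS")  # ~182

# With N outstanding operations the ceiling scales roughly linearly, which
# is why applications that queue multiple I/Os see far less impact.
for depth in (1, 8, 32):
    print(f"queue depth {depth:>2}: ~{depth * serialized_iops:.0f} IOPS")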
Corrective actions
Check CPU Utilization. If the directors are extremely busy, metro node will be limited in the amount of throughput it can provide.
Check back-end latency. If the back-end latency is large on average, or there are large spikes, the cause could be a poorly
performing back-end fabric or an unhealthy, unoptimized, or overloaded storage array.
Check front-end aborts. Their presence indicates that metro node is taking too long to respond to the host. Aborts might
point to problems with the front-end fabric or slow SCSI reservations.
Check back-end errors. If the metro node back-end must retry an operation because of an error, the retry adds to the delay
in completing the operation to the host.
Check front-end operations count (queue depth). If this counter is large, it may explain larger-than-normal front-end
latency; see the sketch after this list.
Perform a back-end fabric analysis and a performance analysis of the storage arrays that host the underlying storage
volumes for the virtual volume.
Check for high metro node write delta time. Refer to the Corrective actions section in the Write Latency Delta chart topic.
If you are trying to boost IOPS performance, verify the front-end average I/O size and confirm that you are sending
small-block I/O; the sketch after this list shows how to derive the average I/O size from counters.
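The last two checks can be quantified from sampled counters. The Python sketch below uses hypothetical example values: it applies Little's Law (queue depth = IOPS x latency) to estimate the front-end latency implied by the operations count, and derives the average I/O size from the bandwidth and IOPS counters.

# Hypothetical counter samples taken over the same interval.
iops = 20_000                          # front-end operations per second
outstanding_ops = 64                   # front-end operations count (queue depth)
throughput_bytes_per_s = 160_000_000   # front-end bandwidth

avg_latency_s = outstanding_ops / iops       # Little's Law: latency = depth / IOPS
avg_io_size = throughput_bytes_per_s / iops  # bytes per operation

print(f"implied front-end latency: {avg_latency_s * 1000:.2f} ms")  # 3.20 ms
print(f"average I/O size: {avg_io_size / 1024:.1f} KiB")            # 7.8 KiB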