Check for metro node front-end ports that are over-provisioned for bandwidth or IOPS. Be sure to balance hosts and LUNs
across the available directors and the front-end ports presented from metro node. Check the front-end fabric for saturation or
over-capacity.
Verify that front-end FC ports, HBAs, and switch ports are configured for the correct port speeds.
Configure your host multipathing software based on metro node best practices, and ensure the installed software versions
are compatible with metro node. For more information on compatibility, see the Dell EMC Simple Support Matrix for metro
node document, available on Dell EMC Online Support and on the SolVe Online.
For metro node Metro configurations
Check the health of the inter-cluster link and its maximum performance capabilities. From the GUI, check the inter-cluster
WAN bandwidth. If your application throughput appears low and tops out near the reported WAN bandwidth, you are
probably limited by the WAN. Therefore:
Make sure you have provisioned enough inter-cluster bandwidth for the desired application workload. Verify that
your WAN configuration is supported by metro node (minimum supported bandwidth, supported inter-cluster latency,
compatible WAN hardware and software).
For Metro-FC, if the inter-cluster WAN runs over an FC fabric, confirm that you have allocated enough buffer credits and
configured the FC WAN ports correctly on your switches. Check for buffer-credit starvation, C3 discards, and
CRC errors. Some vendors may require extended-fabric licenses to enable WAN features.
Validate your WAN performance before going live in production. Create multiple test distributed devices and force them
to rebuild. Observe the performance of the rebuilds.
When troubleshooting distributed-device performance, check local device performance if feasible. Export a test LUN from
your storage array to metro node, then to the host, and then run a test I/O workload.
Check for any unexpected local or distributed rebuilds or data migrations. These impose some performance impact on host
application traffic that relies on the same virtual volumes and storage volumes. Tune the rebuild transfer-size setting
to limit the performance impact of rebuilds and migrations. Consider scheduling migrations during off-peak hours.
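The WAN checks above reduce to simple arithmetic. The following sketch, using invented numbers and hypothetical function names (no metro node API or CLI is involved), shows one way to judge whether a workload is WAN-limited and to roughly estimate the buffer credits a long-distance FC ISL needs; actual credit requirements depend on your switch vendor's guidance.

```python
import math

def wan_limited(app_throughput_mbs: float, wan_bandwidth_mbs: float,
                threshold: float = 0.9) -> bool:
    """Heuristic: if application throughput is close to the measured
    inter-cluster WAN bandwidth, the WAN is the likely bottleneck."""
    return app_throughput_mbs >= threshold * wan_bandwidth_mbs

def fc_buffer_credits(distance_km: float, speed_gbps: float,
                      frame_bytes: int = 2148) -> int:
    """Rough estimate of BB credits needed to keep a long-distance FC
    link full: round-trip time divided by frame serialization time.
    Assumes ~5 us/km propagation in fiber, full-size frames, and
    ~10 bits per byte on the wire (8b/10b encoding, <= 8GFC)."""
    rtt_us = 2 * distance_km * 5.0
    frame_time_us = frame_bytes * 10 / (speed_gbps * 1000)
    return math.ceil(rtt_us / frame_time_us)

print(wan_limited(460, 500))      # throughput at ~92% of WAN bandwidth
print(fc_buffer_credits(100, 8))  # credits for a 100 km link at 8 Gbps
```

The estimator deliberately ignores switch-internal buffering and partial frames; treat its output as a lower bound and confirm against vendor sizing tables.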
Changing the view
To view the throughput of a single director in your metro node system, select the director name from the Director drop-down.
Viewing the Virtual Volumes Throughput chart
1. From the GUI main menu, click Performance.
2. Click + and select Add Virtual Volumes Dashboard.
Virtual Volume Latency chart
The Virtual Volume Latency chart provides a time-based view of the I/O latency for a virtual volume, broken down by read
and write latency. Virtual volume latency is defined as the amount of time an I/O spends within metro node for a given virtual
volume. The reported metro node front-end latency should closely match the host- or application-reported volume latency
unless there is significant added delay in the front-end fabric, HBA, multipathing software, or host operating system software.
For metro node cache read-miss operations, front-end latency includes the time spent retrieving the disk blocks from the
storage array. Therefore, for non-cached operations, front-end latency can be only as fast as the back-end array. Cache hits,
by contrast, are fast. Read-miss and read-hit latency are not reported separately in the read-latency metric, so it is difficult
to know the performance of each.
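Because hit and miss latency are blended into a single read-latency metric, a low average can hide slow misses. A weighted-average calculation, with hit ratio and latency numbers invented purely for illustration, makes this concrete:

```python
def blended_read_latency_ms(hit_ratio: float, hit_ms: float,
                            miss_ms: float) -> float:
    """Reported read latency is the hit/miss-weighted average; the
    metric does not break hit and miss latency out separately."""
    return hit_ratio * hit_ms + (1 - hit_ratio) * miss_ms

# 90% cache hits at 0.2 ms vs. misses at 8 ms (back-end array read):
print(round(blended_read_latency_ms(0.9, 0.2, 8.0), 2))  # 0.98
```

A reported average of roughly 1 ms here masks 8 ms misses, which is why a healthy-looking read-latency chart can coexist with complaints from cache-unfriendly workloads.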
It is important to distinguish how front-end latency write operations behave in the following metro node configurations:
For metro node Local write operations, front-end latency includes the time spent protecting the disk blocks to one or more
local storage arrays.
For metro node Metro write operations to distributed devices, front-end latency includes the time spent protecting the disk
blocks to the storage array at both clusters. When writing to the remote cluster, the round-trip time on the WAN links adds
to the front-end latency, depending on the network delay observed between clusters.
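A rough model of the distributed write path above, with invented latency numbers, shows why WAN round-trip time tends to dominate distributed-device write latency. This is a simplified sketch, not metro node's actual internal accounting:

```python
def metro_write_latency_ms(local_backend_ms: float, remote_backend_ms: float,
                           wan_rtt_ms: float) -> float:
    """Front-end write latency for a distributed device must cover the
    slower of the two cluster writes; the remote write also pays the
    inter-cluster round trip. Simplified model -- real behavior depends
    on caching and protocol details."""
    remote_total = wan_rtt_ms + remote_backend_ms
    return max(local_backend_ms, remote_total)

# 1 ms arrays at both clusters, 5 ms WAN round trip:
print(metro_write_latency_ms(1.0, 1.0, 5.0))  # 6.0 -- WAN RTT dominates
```

With identical 1 ms arrays at each cluster, the 5 ms round trip turns a local 1 ms write into a ~6 ms front-end write, which matches the guidance above to verify supported inter-cluster latency before going live.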
Guidelines
Keep the following guidelines in mind when using this chart:
Monitoring the system