
SAN and storage considerations
It is important to consider the current SAN design and the additional impact that introducing the
SVSP will have on it.
Monitor the interswitch links (ISLs) on the switches to ensure that their utilization limits are not exceeded.
High bandwidth devices (such as tape backup servers and storage arrays) must be on the
same SAN switches as the SVSP components.
The VSM servers perform data movement tasks. The throughput and performance requirements
of these tasks must be taken into account. Treat the VSM servers as high-bandwidth devices
and place them on switches close to the back-end arrays.
It is important to understand the best practices for the vendor-specific storage arrays that are
configured behind the SVSP. Since much of the I/O directed at the SVSP will pass directly
through the virtualization engine to the arrays at the back end, optimizing the array parameters
based upon the host I/O characteristics has the most value.
DPM performance considerations
Data Path Modules (DPMs) have 16 ports arranged in 4 quads of 4 ports each. Each quad contains
two target ports and two initiator ports, and each quad is licensed separately. Each quad is
capable of up to 800 MB/s of bandwidth and 1 million requests/second, depending on the
configuration and workload.
NOTE: Do not design to these numbers except under very specific circumstances; they are not
applicable to synchronous mirrors. For information on synchronous mirroring, see
“Synchronous mirroring” (page 40).
Mapping DPM table entries
The DPM translates I/O addresses from virtual to physical at wire speed. To perform this
translation with minimal latency, the DPM stores this mapping information in memory. Each DPM
can store up to 90,000 mapping entries. For a regular virtual disk or snapshot, each 1 GB of the
virtual disk requires one map entry (for a total of 90 TB of virtual capacity). Synchronous mirror
virtual disks use three entries for each 1 GB of virtual capacity.
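As a rough illustration of this arithmetic, the following sketch (in Python, with hypothetical virtual disk sizes) estimates how many map entries a mix of regular and synchronous mirror virtual disks consumes against the 90,000-entry limit of a single DPM.

    DPM_MAP_ENTRY_LIMIT = 90_000  # map entries held in memory by a single DPM

    def map_entries_needed(regular_gb, sync_mirror_gb):
        # Regular virtual disks and snapshots use 1 entry per 1 GB of virtual capacity;
        # synchronous mirror virtual disks use 3 entries per 1 GB.
        return regular_gb * 1 + sync_mirror_gb * 3

    # Hypothetical configuration: 40 TB of regular virtual disks plus 10 TB of
    # synchronous mirror virtual disks.
    entries = map_entries_needed(regular_gb=40 * 1024, sync_mirror_gb=10 * 1024)
    print(f"{entries} of {DPM_MAP_ENTRY_LIMIT} map entries used")
    if entries > DPM_MAP_ENTRY_LIMIT:
        print("Exceeds the in-memory map table; expect map-entry swapping")

In this hypothetical case, 40,960 entries for the regular virtual disks plus 30,720 entries for the synchronous mirrors (71,680 total) fit comfortably within a single DPM's map table.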
The DPM can present more virtual capacity than the amount of mapping it can hold in memory.
Therefore, the DPM can swap these map entries. It uses the Least Recently Used (LRU) algorithm to
swap map entries in and out of memory, removing the least recently used entries first. When not
excessive, swapping map entries does not cause noticeable degradation in performance. However,
if swapping is excessive, the overall performance of the system can be noticeably slower. Therefore,
excessive swapping should be avoided. Excessive swapping of DPM map entries happens when
the number of map entries required to map the virtual capacity significantly exceeds the maximum
number of map entries held in memory, causing the map table entries to churn. Consider an example application
that needs to map a large physical address space at once. As a worst case scenario, the application
needs to sequentially move across 91 TB of space. There will be a performance impact because
the LRU algorithm must replace old map entries to accommodate the mapping of large number of
new entries. Adding another DPM group to the SVSP environment linearly increases the number
of mapped table entries in memory (see Adding DPM groups (page 9)).
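The following sketch is a simplified model of LRU map-entry replacement, not the DPM's actual implementation; it shows why a sequential pass over slightly more virtual capacity than the map table holds causes the table to churn, and why repeated passes make nearly every access miss.

    from collections import OrderedDict

    class LruMapTable:
        # Toy model of an in-memory map table with LRU replacement.
        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()
            self.swaps = 0  # entries evicted to make room for new ones

        def access(self, extent):
            if extent in self.entries:
                self.entries.move_to_end(extent)  # hit: mark as most recently used
                return
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)  # evict the least recently used entry
                self.swaps += 1
            self.entries[extent] = True           # load the missing map entry

    # Worst case from the text: sequentially touching 91 TB of virtual capacity
    # (93,184 one-GB extents) through a 90,000-entry table.
    table = LruMapTable(capacity=90_000)
    for pass_number in (1, 2):
        start = table.swaps
        for extent in range(91 * 1024):
            table.access(extent)
        print(f"Pass {pass_number}: {table.swaps - start} map-entry swaps")

The first pass forces a few thousand swaps once the table fills; on the second pass every extent has already been evicted by the time it is needed again, so the entire pass misses.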
Adding DPM quads
Add more DPM quads to increase connectivity, processing power, and bandwidth, and to partition
the environment. The front-end hosts and back-end arrays can be partitioned to balance load
between the quads. Balancing the back-end performance requires that you make the new quads
the primary path to some of the back-end LUs.
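One simple way to reason about this balancing is to spread the back-end LUs across the available quads in round-robin order. The following sketch only illustrates that idea (the quad and LU names are hypothetical); it is not an SVSP management command.

    def assign_primary_paths(lus, quads):
        # Spread back-end LUs across the available quads in round-robin order so
        # that each quad is the primary path for roughly the same number of LUs.
        assignment = {quad: [] for quad in quads}
        for index, lu in enumerate(lus):
            assignment[quads[index % len(quads)]].append(lu)
        return assignment

    # Hypothetical example: eight back-end LUs balanced across four licensed quads.
    lus = [f"LU{n:02d}" for n in range(8)]
    quads = ["quad1", "quad2", "quad3", "quad4"]
    for quad, assigned in assign_primary_paths(lus, quads).items():
        print(quad, assigned)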
To add more ports to both DPMs in a DPM group: