HP StorageWorks SAN Virtualization Services Platform administrator guide (5697-0204, January 2010)

A simple concatenated pool should have all of its volumes presented from a single back-end array.
The volumes should have the same RAID type, and similar performance and capacity characteristics.
The pool is constructed of at least as many volumes as there are paths from the DPMs to the array
(16 for an 8-port array). The benefits and trade-offs of this approach are as follows:
• A concatenated pool can be expanded by adding one or more additional volumes of arbitrary size without changing the basic performance characteristics of the virtual disks carved out of that pool. Best practice is to add volumes of the same size, or roughly the same size, as the original volumes.
• By having all the back-end volumes on a single array, the availability of the virtual disks carved from the pool depends only on the availability of that single array.
• By having all the back-end volumes on a single array, it is relatively straightforward to map performance information from the array to the pool.
• By having all the back-end volumes on a single array, it is simpler to debug issues.
• By having the same RAID type and disk drives for all the volumes of the pool, the performance characteristics of the pool are derived from the performance characteristics of those disk drives and that RAID type. If these are mixed, it is not possible to predict which RAID type and disk drive will be used by any front-end virtual disk, and the behavior can be very unpredictable, even between different LBAs within a single front-end virtual disk.
• Occasionally an I/O will span two back-end volumes; this is called a split I/O. Split I/Os are handled on the DPM soft path, and I/Os handled by the soft path do not enjoy the lowest latency and highest throughput achieved by the DPM fast path. An occasional split I/O has an imperceptible impact, and with concatenated pools split I/Os are rare because there are only as many opportunities for a split as there are boundaries between adjacent back-end volumes (the first sketch following this list estimates just how rare).
• Some array controllers are able to detect sequential I/O workstreams and take action to deal with those workstreams more efficiently, for example by using read-ahead caching. A concatenated pool presents the back-end volumes with an undisturbed sequential workstream, enabling the arrays to detect its sequential nature just as they would if SVSP were not between the host and the array (the second sketch following this list models such detection).
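
To give a feel for how rare split I/Os are in a concatenated pool, the first sketch below estimates the fraction of randomly placed I/Os that would cross a boundary between adjacent back-end volumes. The volume sizes, block size, and I/O size are hypothetical examples; the guide itself prescribes no such formula.

    # Estimate how often a randomly placed I/O splits across two adjacent
    # back-end volumes in a concatenated pool. All sizes are hypothetical.

    def split_io_fraction(volume_blocks, io_blocks):
        """Fraction of random, block-aligned starting LBAs for which an
        I/O of io_blocks crosses one of the internal volume boundaries.

        volume_blocks -- sizes of the back-end volumes, in blocks, in
                         concatenation order
        io_blocks     -- I/O size in blocks
        """
        total_blocks = sum(volume_blocks)
        boundaries = len(volume_blocks) - 1
        # An I/O splits only when it starts within (io_blocks - 1) blocks
        # of an internal boundary.
        split_starts = boundaries * (io_blocks - 1)
        return split_starts / total_blocks

    # 16 back-end volumes of 100 GiB each (512-byte blocks), 64 KiB I/Os:
    volumes = [100 * 1024**3 // 512] * 16
    print(split_io_fraction(volumes, 64 * 1024 // 512))  # about 5.7e-07

With these example numbers, roughly one random I/O in 1.8 million would be split and take the soft path.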
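
The read-ahead behavior mentioned in the last bullet can be pictured with a second sketch: a toy detector that flags a workstream as sequential once a few reads in a row each start where the previous one ended. Real array firmware is far more sophisticated; the threshold and logic here are purely illustrative.

    # Toy model of the sequential-workstream detection an array controller
    # might use to trigger read-ahead caching. Purely illustrative logic.

    class SequentialDetector:
        def __init__(self, threshold=3):
            self.expected_lba = None  # where a sequential stream would continue
            self.run_length = 0       # consecutive sequential reads observed
            self.threshold = threshold

        def observe_read(self, start_lba, block_count):
            """Record a read; return True once the stream looks sequential
            enough to justify read-ahead."""
            if start_lba == self.expected_lba:
                self.run_length += 1
            else:
                self.run_length = 1   # possibly the start of a new stream
            self.expected_lba = start_lba + block_count
            return self.run_length >= self.threshold

    # An undisturbed sequential stream, as a concatenated pool presents it:
    detector = SequentialDetector()
    for lba in range(0, 1024, 128):
        if detector.observe_read(lba, 128):
            print("read-ahead from LBA", lba + 128)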
Through SVSP v3.0, a single path is used by any one DPM to access each back-end volume. If that path fails, the DPM selects one of the available alternate paths. If a pool were constructed of a single back-end volume, a single path would be used from each DPM to the pool; if ten virtual disks drew their capacity from that pool, all I/O to those ten virtual disks would be concentrated on that single path. If there were no other pools on that array, the resources associated with the additional ports and controllers would go unused, and performance on the single path could suffer, with long latencies and even queue-full responses.
By having at least as many back-end volumes in the pool as there are paths from the DPMs to the array, all of those paths can be used in parallel, and the odds of using multiple paths grow as the number of back-end volumes in the pool increases. For an 8-port EVA such as the EVA8400 that is zoned to a single quad on each of two DPMs, 16 different paths are created from the EVA to the DPMs. In that case, 16 back-end volumes is the recommended minimum for the pool, and 32 back-end volumes would be even better.
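
The path arithmetic above reduces to a small calculation. The sketch below counts paths as zoned array ports times DPMs, a model inferred from the EVA8400 example rather than a general formula, so check the actual zoning when sizing a real pool.

    # Path counting for the EVA8400 example: 8 array ports zoned to a
    # single quad on each of two DPMs yields 16 paths. The ports-x-DPMs
    # model is inferred from that example, not a general formula.

    def paths_to_array(zoned_array_ports, dpm_count=2):
        """Number of DPM-to-array paths."""
        return zoned_array_ports * dpm_count

    def recommended_pool_volumes(zoned_array_ports, dpm_count=2):
        """Minimum and preferred back-end volume counts for a pool:
        one volume per path as the minimum, twice that as better."""
        minimum = paths_to_array(zoned_array_ports, dpm_count)
        return minimum, 2 * minimum

    print(paths_to_array(8))              # 16
    print(recommended_pool_volumes(8))    # (16, 32)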
Larger numbers of volumes in a concatenated pool have the benefit described above of providing more opportunities to distribute the workload across the multiple array ports; however, there are trade-offs involved. SVSP v3.0 supports a maximum of 1,024 back-end volumes per domain and a maximum of 4,096 paths. Pools with large numbers of back-end volumes consume more back-end volumes and more paths, and this may result in running out of volumes or paths before achieving the necessary back-end capacity.
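
A quick check against those per-domain limits can be scripted as below. How many path entries each back-end volume consumes depends on the zoning, so the per-volume path count here is a parameter you would supply, not a value taken from the guide.

    # Check a domain design against the SVSP v3.0 limits described above.
    # paths_per_volume is zoning-dependent and supplied by the designer.

    DOMAIN_MAX_BACKEND_VOLUMES = 1024
    DOMAIN_MAX_PATHS = 4096

    def domain_fits(total_backend_volumes, paths_per_volume):
        """True when the design stays within both per-domain limits."""
        total_paths = total_backend_volumes * paths_per_volume
        return (total_backend_volumes <= DOMAIN_MAX_BACKEND_VOLUMES
                and total_paths <= DOMAIN_MAX_PATHS)

    # Example: 20 pools of 32 volumes, each volume reachable over 4 paths.
    print(domain_fits(20 * 32, 4))   # True  (640 volumes, 2560 paths)
    print(domain_fits(40 * 32, 4))   # False (1280 volumes exceeds 1024)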
Note that front-end virtual disks are allocated from capacity on back-end volumes using an algorithm that roughly distributes the front-end virtual disks across the multiple back-end volumes. This also roughly distributes the workload of the front-end virtual disks across the multiple paths.
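
The guide does not give the allocation algorithm itself, but its described effect, front-end virtual disks spread roughly evenly across the back-end volumes and therefore across the paths, can be approximated with a round-robin sketch. The code below is an illustration of that behavior, not the actual SVSP implementation.

    # Hypothetical round-robin placement approximating the described
    # behavior: front-end virtual disks distributed roughly evenly across
    # a pool's back-end volumes. Not the actual SVSP algorithm.

    def distribute(front_end_disks, backend_volumes):
        """Assign each front-end virtual disk a starting back-end volume,
        cycling through the pool."""
        return {disk: backend_volumes[i % len(backend_volumes)]
                for i, disk in enumerate(front_end_disks)}

    disks = ["vdisk%02d" % n for n in range(10)]
    volumes = ["bev%02d" % n for n in range(16)]
    for disk, volume in sorted(distribute(disks, volumes).items()):
        print(disk, "->", volume)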
(The algorithm was based on the assumption that pools will be created with at least 10 to 15 back-end LUNs, which seems like a reasonable assumption over the lifetime of a system.) Another consideration was to