HP StorageWorks SAN Virtualization Services Platform Administrator Guide (5697-0934, May 2011)

Mixing SAN-level virtualization with non-virtualized environments
Environments in which some logical units (LUs) are accessed directly from the array and other LUs
are accessed through the SVSP DPMs are supported. However, the same back-end LU must not be
presented both to the SVSP and directly to servers, or data corruption will occur. A naming
convention that distinguishes between these two presentations makes this kind of problem easier
to avoid and to troubleshoot.
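As an illustration of such a naming convention, the following sketch flags any back-end LU that appears in both presentation lists. The "svsp-" and "dir-" prefixes, the function names, and the example LU names are assumptions for illustration, not part of SVSP:

```python
# Hypothetical sketch: a naming convention that separates back-end LUs
# presented to the SVSP from LUs presented directly to servers. The
# "svsp-" and "dir-" prefixes are illustrative assumptions only.

SVSP_PREFIX = "svsp-"
DIRECT_PREFIX = "dir-"

def base_name(lu_name: str) -> str:
    """Strip the presentation prefix to recover the underlying LU identity."""
    for prefix in (SVSP_PREFIX, DIRECT_PREFIX):
        if lu_name.startswith(prefix):
            return lu_name[len(prefix):]
    return lu_name

def dual_presented(svsp_lus, direct_lus):
    """Return back-end LUs that appear in both presentation lists.

    Any overlap means the same back-end LU is visible both to the SVSP
    and directly to servers, which risks data corruption.
    """
    return sorted({base_name(n) for n in svsp_lus} &
                  {base_name(n) for n in direct_lus})

# Example: "array1-lu07" is presented both ways and is flagged.
print(dual_presented(["svsp-array1-lu07", "svsp-array1-lu08"],
                     ["dir-array1-lu07", "dir-array2-lu01"]))
```

A script like this could be run against exported presentation lists as a periodic sanity check.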
Setup volume configuration
SVSP field experience has shown that many issues arise when access to SVSP setup volumes is
slow. SVSP version 3 code adds informational and warning messages for the setup volumes, which
can be observed in the event viewer. Monitor the VSM event log for messages indicating slow
setup volume updates.
If the event log indicates a recurring setup volume problem over several hours, take the following
actions:
• Verify that the setup volumes are built from storage with similar performance characteristics.
• Verify that the volumes are built from high-performance RAID 1 storage.
• Verify that the arrays containing the setup volumes are not heavily loaded.
• Move the setup volumes to less busy or faster arrays.
• Move the setup volumes to their own disk group or volume group.
• Move some of the setup volumes to dedicated arrays.
• Reduce the number of setup volumes.
Configurations with large numbers of service-enabled volumes (for example, thin provisioned or
with PiTs) generate the most demand for setup volume access.
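A monitoring script can help decide whether warnings constitute a recurring problem rather than an isolated slowdown. The sketch below is an assumption-laden illustration: the warning text, timestamp format, window, and threshold are all hypothetical and would need to match your actual VSM log output:

```python
# Hypothetical sketch: scan exported VSM event-log lines for recurring
# slow-setup-volume warnings. The marker text, timestamp format, window,
# and threshold are illustrative assumptions, not documented SVSP values.
from datetime import datetime, timedelta

SLOW_MARKER = "slow setup volume update"   # assumed warning text

def recurring_slow_updates(lines, window=timedelta(hours=3), threshold=5):
    """Return True if `threshold` or more slow-update warnings fall within
    any span of length `window`, suggesting a persistent problem."""
    times = sorted(
        datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        for line in lines if SLOW_MARKER in line.lower()
    )
    for i, start in enumerate(times):
        # Count warnings from this one forward that fall inside the window.
        if sum(1 for t in times[i:] if t - start <= window) >= threshold:
            return True
    return False
```

If this check returns True over several hours of log data, apply the corrective actions listed above.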
Setup volumes can be spread across different arrays for additional redundancy, but remember
that all writes are mirrored, so the slowest volume determines when a write is acknowledged.
HP does not recommend placing all three volumes on a single array; if a single array is all that is
available, create only two setup volumes and place them in different pools if the array supports
that option.
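The effect of mirroring on write latency can be stated simply: the acknowledgment waits for the slowest mirror leg. This small illustrative sketch (not SVSP code; the function name and latencies are invented for the example) makes that concrete:

```python
# Illustrative sketch: with synchronously mirrored setup volumes, a write
# is acknowledged only after the slowest mirror leg completes, so the
# effective write latency is the maximum per-volume latency.

def mirrored_write_latency(leg_latencies_ms):
    """Latency seen by the writer across all mirror legs (milliseconds)."""
    return max(leg_latencies_ms)

# Two fast volumes cannot compensate for one slow one:
print(mirrored_write_latency([2.0, 2.5, 40.0]))  # → 40.0
```

This is why every setup volume should come from storage of similar, high performance: one slow leg degrades all setup-volume writes.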
Building basic storage pools
Storage pools can be optimized for performance or for capacity; a pool can be configured to
enable the maximum 2 PB per domain, or configured to deliver maximum performance, but not
both at the same time. This section defines the best practices for building capacity-optimized
storage pools.
Experience with SVSP in the field has indicated that pools should be built from at least 8–16
back-end LUs. Adding more back-end LUs to a pool is allowed, and in some cases may be desirable,
as long as other scalability limits are not exceeded. Fewer than 16 LUs per pool should also
work, but is discouraged because it limits the system's ability to distribute I/O across the many
paths to the storage.
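The sizing rule in the next paragraph (at least as many volumes as DPM-to-array paths) can be sketched as a small helper. The path arithmetic here, 2 DPMs times the array's host ports, is an assumption inferred from the 8-host-port example; the function name and defaults are hypothetical:

```python
# Hypothetical sizing helper: build a pool with at least as many back-end
# LUs as there are DPM-to-array paths, so I/O can be spread across every
# path. The 2-DPM path arithmetic and the 8-LU floor are assumptions.

def min_pool_lus(array_host_ports, dpms=2, floor=8):
    """Recommended minimum number of back-end LUs for a basic pool."""
    paths = dpms * array_host_ports
    return max(floor, paths)

print(min_pool_lus(8))   # → 16, matching the 8-host-port array example
```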
A simple concatenated pool should have all of its volumes presented from a single back-end array.
The volumes should have the same RAID type, and similar performance and capacity characteristics.
The pool is constructed of at least as many volumes as there are paths from the DPMs to the array
(16 for an 8-host-port array). The benefits and trade-offs of this approach are as follows:
• With a concatenated pool, the pool can be expanded by adding one or more additional
volumes of arbitrary size without changing the basic performance characteristics of the virtual