
TIP: Use HP Command View EVAPerf or similar tools to monitor the load on back-end LUs. Unless
you are specifically conducting stress testing, do not run storage "in the red zone"; keep
utilization at 80% or less during normal operation. That headroom lets you gain performance
with less impact on your production systems. When you add drives to an EVA, the EVA automatically
redistributes back-end LUs (BELUs) across all drives in the EVA, including the new ones, gaining
the added performance of the drives in the system. Those BELUs are then imported into an SVSP
storage pool, and SVSP slowly balances I/O across the BELUs over time, with less impact on
production performance.
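As a rough illustration of the 80% guideline, the following Python sketch flags pools running in the red zone. The pool names and capacities are hypothetical, not EVAPerf output:

    # Illustrative only: pool names and capacities are made up, not EVAPerf data.
    RED_ZONE = 0.80  # keep utilization at or below 80% in normal operation

    pools = {
        "EVA_PoolA": (7.2, 10.0),  # (used TB, total TB) -- hypothetical
        "EVA_PoolB": (9.1, 10.0),
    }

    for name, (used, total) in pools.items():
        utilization = used / total
        status = "RED ZONE" if utilization > RED_ZONE else "ok"
        print(f"{name}: {utilization:.0%} used ({status})")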
Each back-end LU managed by SVSP must be presented to the VSM servers and DPMs in a
redundant (highly available) manner. When the storage system has two controllers, as the EVA
does, the VSM servers and DPMs must see the back-end LU through both controllers. This is
called a fully cross-connected back side. Adding connections for extra bandwidth should be
weighed against the number of connections SVSP supports (a maximum of 128 array ports).
Some fully active-active arrays, such as the HP XP, have an individual processor per port or per
pair of ports. In this case, back-end LUNs should be presented through at least two different
ports served by two different port processors.
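A simple way to reason about the cross-connect requirement is to check that every back-end LU is visible through at least two controllers and that the total array port count stays within the SVSP limit. The sketch below assumes a hypothetical inventory structure; it is not an SVSP or array API:

    # Hypothetical inventory: the controller/port pairs through which each
    # back-end LU is presented, as seen by the VSM servers and DPMs.
    MAX_ARRAY_PORTS = 128  # SVSP supports at most 128 array ports

    presentations = {
        "BELU_001": {("ctrl_A", "FP1"), ("ctrl_B", "FP1")},  # cross-connected
        "BELU_002": {("ctrl_A", "FP2")},                     # one controller only
    }

    all_ports = set()
    for lu, paths in presentations.items():
        controllers = {ctrl for ctrl, _ in paths}
        if len(controllers) < 2:
            print(f"{lu}: NOT redundant (controllers: {sorted(controllers)})")
        all_ports.update(paths)

    if len(all_ports) > MAX_ARRAY_PORTS:
        print(f"{len(all_ports)} array ports exceeds the SVSP maximum of {MAX_ARRAY_PORTS}")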
If a multipath solution was installed before SVSP, the hosts likely have vendor-specific code for
handling multipath I/O access. For some operating systems, such as Windows, the SVSP multipath
driver must replace the existing multipath drivers when SVSP is added to the system. Any
multipath drivers left unused on the host should be uninstalled.
A storage array does not have to be dedicated exclusively to SVSP; it can be configured to
present some logical units to the SVSP system and others directly to hosts.
It is important to follow the array manufacturer's guidelines for configuring back-end LUNs to meet
performance and availability needs. For example, if not enough spindles are configured behind a
virtual disk, performance is limited to what those spindles can deliver. On the EVA, virtual disks
are created so that they span all the spindles in a disk group, which can contain a large number
of physical disks. Other, non-virtualized arrays, such as the HP XP, limit the number of drive
spindles that can be grouped together to create a virtual disk serving as a back-end LUN to SVSP.
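The spindle-count point is easiest to see with back-of-envelope arithmetic: aggregate random I/O capability scales roughly with the number of spindles behind the LUN. The per-drive figure below is an assumption for illustration, not an HP specification:

    # Back-of-envelope only; 180 IOPS is an assumed figure for a 15k drive.
    IOPS_PER_SPINDLE = 180  # assumed small-block random IOPS per spindle

    for spindles in (8, 24, 96):
        print(f"{spindles:3d} spindles -> ~{spindles * IOPS_PER_SPINDLE:,} aggregate random IOPS")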
Tools are available for monitoring array performance and should be used to watch for hot spindles.
When creating an SVSP pool, consider using SVSP striping (a performance pool) to spread I/O
across a large number of back-end LUNs.
If an SVSP virtual disk has performance issues, consider using SVSP migration to move the virtual
disk to a pool built from a larger number of back-end LUNs.
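Conceptually, a striped (performance) pool maps consecutive chunks of a virtual disk round-robin across its back-end LUNs, so both sequential and random I/O fan out over many LUNs. The following sketch uses an assumed chunk size and LUN count; it does not reflect SVSP internals:

    # Minimal striping sketch; the 1 MB chunk size and LUN names are assumptions.
    CHUNK_SIZE = 1 * 1024 * 1024  # bytes per stripe chunk (assumed)
    BACKEND_LUNS = ["BELU_0", "BELU_1", "BELU_2", "BELU_3"]

    def locate(byte_offset):
        """Map a virtual-disk byte offset to (back-end LUN, offset in that LUN)."""
        chunk = byte_offset // CHUNK_SIZE
        lun = BACKEND_LUNS[chunk % len(BACKEND_LUNS)]
        # Full stripe rows already consumed on this LUN, plus the remainder.
        lun_offset = (chunk // len(BACKEND_LUNS)) * CHUNK_SIZE + byte_offset % CHUNK_SIZE
        return lun, lun_offset

    for offset in (0, CHUNK_SIZE, 5 * CHUNK_SIZE + 42):
        print(offset, "->", locate(offset))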
Configuring many LUNs
An SVSP domain supports a maximum of 1024 back-end LUs, and each back-end path
(initiator/target pair) supports a maximum of 255 LUs. Therefore, with some arrays it may be
possible to present 255 LUs from one host port, another 255 LUs from another host port, and so
on. If you need to present more than 1024 back-end LUs to SVSP, or if the array has only a
small number of host ports, you can (see the planning sketch after this list):
•   Import-in-place the back-end LUN and then migrate the imported virtual disk to a storage pool
    made of larger LUNs. Delete the small imported LUN. Repeat this process until all the data is
    imported and the small LUNs are recycled into fewer, bigger LUNs. This requires that the array
    be able to make large LUNs.
•   Import-in-place the back-end LUN and then migrate it to a storage pool made up of LUNs from
    another array. Delete the small LUNs.
•   Create multiple SVSP domains, each supporting a maximum of 1024 back-end LUNs.
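The limits above translate directly into planning arithmetic. The sketch below uses the stated SVSP maximums with a hypothetical LU count and port count, and simplifies by treating each array host port as a single target path:

    import math

    MAX_LUS_PER_PATH = 255     # back-end LUs per initiator/target pair
    MAX_LUS_PER_DOMAIN = 1024  # back-end LUs per SVSP domain

    total_lus = 1600           # hypothetical number of small back-end LUs
    array_host_ports = 4       # hypothetical array host port count

    domains_needed = math.ceil(total_lus / MAX_LUS_PER_DOMAIN)
    paths_needed = math.ceil(total_lus / MAX_LUS_PER_PATH)

    print(f"{total_lus} LUs need at least {domains_needed} SVSP domain(s)")
    print(f"{total_lus} LUs need at least {paths_needed} target paths; "
          f"array has {array_host_ports} host ports "
          f"({'enough' if array_host_ports >= paths_needed else 'not enough'})")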