HP Logical Server Management Best Practices

(which is not affiliated with a logical server), and add new volumes meeting the new needs. This preserves the initiator
WWNs in the storage pool entry (and any SAN zoning that has been done for those initiators to specific array
controller target ports). If the storage pool entry were deleted, the initiator WWNs could be re-used, and the
storage administrator would then want to adjust zoning so that the next user of those initiator WWNs is not
granted access inappropriately.
When using multi-initiator NPIV, one consideration is the increase in the number of WWNs, since multiple WWNs can be
applied to a single physical HBA port. Multi-initiator NPIV enables the administrator to maintain a static SAN
while still offering flexibility in how a server is attached to its target storage. The number of WWNs required is
driven by the number of storage pool entries required, since each entry requires unique initiator WWNs. The
storage pool should be populated with the entries that will be commonly used within the environment. Separation of
boot disk visibility from data disk visibility can be handled by allowing SPM to mask the data volumes or, for
environments with view-only entries that do not permit LUN masking operations by SPM/Matrix OE, by specifying at
least two types of storage pool entries: one for boot and one for data (either private or shared). As noted earlier,
shared volumes must be in their own storage pool entry; they cannot be in the same storage pool entry as boot
volumes or private data volumes.
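To make the separation rule concrete, the following sketch (the entry layout and function name are hypothetical, not an SPM or Matrix OE interface) rejects a storage pool entry that mixes shared volumes with boot or private data volumes:

    def validate_entry(entry):
        # entry is assumed to look like:
        #   {"volumes": [{"name": "vol1", "role": "boot"}, ...]}
        # where role is one of "boot", "private", or "shared"
        roles = {v["role"] for v in entry["volumes"]}
        if "shared" in roles and roles != {"shared"}:
            raise ValueError("shared volumes must be in their own storage "
                             "pool entry, separate from boot and private "
                             "data volumes")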
The greater the variability in the logical server storage definitions that will actually be deployed, the greater
the number of “class unique” storage pool entries required. The primary variables (modeled in the sketch after this list) are:
• Size of the disk
• OS host mode of the disk
• RAID level of the disk
• Single-path or multi-path
• The number of data disks a logical server needs access to (along with size differentiation)
• Additional storage pool entry tags describing class of service, I/O performance, etc.
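As a rough illustration, these variables can be modeled as the fields of a storage pool entry "class"; the following sketch uses hypothetical field names (this is not an SPM schema). Two entries that differ in any field are distinct "class unique" entries:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class StoragePoolEntryClass:
        disk_size_gb: int        # size of the disk
        os_host_mode: str        # OS host mode of the disk
        raid_level: str          # RAID level of the disk
        multipath: bool          # single-path (False) or multi-path (True)
        data_disk_count: int     # number of data disks needed
        tags: frozenset = frozenset()  # class-of-service, I/O performance, etc.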
The recommendation is to think carefully about the real logical server storage requirements and to limit the
storage pool entries to only those that will match these requirements. If the storage volumes in SPM permit the host mode
to be changed dynamically by Matrix OE, separate entries based on OS host mode are not needed: one entry that
meets the other criteria can be adjusted to the host mode required by a given logical server.
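A minimal sketch of that matching logic, assuming the hypothetical entry class above: when the array permits dynamic host-mode changes, OS host mode drops out of the match criteria, so one entry can serve logical servers of several OS types:

    def matches(entry, request, dynamic_host_mode=False):
        # Host mode is a hard criterion only when Matrix OE cannot
        # change it dynamically on the SPM volume.
        if not dynamic_host_mode and entry.os_host_mode != request.os_host_mode:
            return False
        return (entry.disk_size_gb == request.disk_size_gb
                and entry.raid_level == request.raid_level
                and entry.multipath == request.multipath
                and entry.data_disk_count == request.data_disk_count)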
1. Create a common boot disk definition for each OS type you wish to deploy.
2. Create a few common private data disk definitions for each OS type you wish to deploy (remember that
multiple “disks” may be specified within a single storage pool entry).
3. Create a few common shared data disk definitions (if this applies to your situation) for each OS type you
wish to deploy; a sketch illustrating steps 1-3 follows this list.
4. If you do not require Multi-Path I/O (MPIO), each storage pool entry (with the exception of shared data disk
entries) requires only one server HBA WWN.
5. If you do require MPIO, each storage pool entry (again with the exception of shared data disk entries)
requires at least two server HBA WWNs.
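The following sketch illustrates steps 1-3 with invented OS types and disk sizes (the dictionary layout is not an SPM format); it shows how few entries a small environment might need to pre-populate:

    os_types = ["Windows", "Linux"]   # example OS types
    catalog = []
    for os_type in os_types:
        # Step 1: a common boot disk definition per OS type
        catalog.append({"type": "boot", "os": os_type, "disk_sizes_gb": [50]})
        # Step 2: a common private data definition; note that several
        # "disks" may be specified within a single storage pool entry
        catalog.append({"type": "private", "os": os_type, "disk_sizes_gb": [100, 100]})
        # Step 3: shared data in its own entry, never mixed with boot
        # or private data volumes
        catalog.append({"type": "shared", "os": os_type, "disk_sizes_gb": [500]})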
For a single-path server that requires only a boot disk and a pre-defined number of private data disks, only two
server HBA WWNs are required. If the server will additionally be attached to both private data disks and shared
data disks, the requirement rises to three server HBA WWNs. If the same server is attached via multi-path, the
number doubles for each category of storage pool entry (boot storage, private data storage, and shared
data storage).
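This arithmetic can be captured in a short sketch (the entry-type names are illustrative): one initiator WWN per storage pool entry type for single-path, two per entry type for multi-path:

    def required_wwns(entry_types, multipath=False):
        # One server HBA WWN per storage pool entry type for single-path,
        # two per entry type for multi-path.
        return (2 if multipath else 1) * len(entry_types)

    assert required_wwns(["boot", "private"]) == 2                    # boot + private data
    assert required_wwns(["boot", "private", "shared"]) == 3          # adds shared data
    assert required_wwns(["boot", "private", "shared"], multipath=True) == 6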
LUN masking on the array presentation can be used to ensure that a given volume is accessed only by the intended server
(by specifying the initiator HBA WWNs). In addition, zoning within the SAN fabric can control access (and traffic
through the SAN). If SAN zoning is being used, the volumes must be appropriately zoned before they are made
available to Matrix OE (by specifying storage details in the storage pool entry). When using the HP Storage
Provisioning Manager, the volumes must be appropriately zoned before they are placed in the SPM catalog (via
import from a managed array or manual entry for an unmanaged array). Matrix OE will use the available storage
pool entries; if a logical server uses a storage pool entry whose volume has not been zoned, any I/O the logical
server attempts to that volume will fail.
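As a final illustration, a pre-check along these lines could confirm that every initiator/target pair in a storage pool entry is zoned before the entry is imported into the SPM catalog. The zone data structure and function names are hypothetical; real zoning information would come from the fabric or the administrator's records:

    def is_zoned(initiator_wwn, target_wwn, zones):
        # True if some fabric zone contains both the initiator and the
        # target port WWN; zones is an iterable of WWN collections.
        return any(initiator_wwn in z and target_wwn in z for z in zones)

    def precheck_entry(entry, zones):
        # Flag any initiator/target pair that has not been zoned yet.
        missing = [(i, t)
                   for i in entry["initiator_wwns"]
                   for t in entry["target_port_wwns"]
                   if not is_zoned(i, t, zones)]
        if missing:
            raise ValueError("zone these pairs before import: %s" % missing)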