HP Logical Server Management Best Practices

It can be helpful to do the following:
1. Name the zone the same as the storage pool entry name (or vice-versa). This makes it easier to keep track of
the alignment between the storage pool entries and the zone names within the fabric.
2. If possible, name the storage volumes created on the disk array after the storage pool entry, suffixed with
the LUN number or a simple index value. This also helps to align the storage pool entry names with the
names of the storage volumes on the disk array.
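Under the assumption of a simple naming scheme, the two recommendations above can be sketched as follows (the helper names and the pool entry name are illustrative only, not part of Matrix OE):

```python
# Illustrative sketch of the naming conventions above: derive zone and
# volume names from the storage pool entry name so all three stay aligned.

def zone_name(pool_entry):
    # Recommendation 1: the zone name matches the storage pool entry name.
    return pool_entry

def volume_names(pool_entry, lun_numbers):
    # Recommendation 2: volume name = storage pool entry name + LUN number.
    return ["{}_LUN{}".format(pool_entry, lun) for lun in lun_numbers]

print(zone_name("lsrv1_boot"))
print(volume_names("lsrv1_boot", [0, 1]))
```

With this scheme, an administrator looking at the fabric, the storage pool, or the disk array sees the same base name everywhere.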
The zone names used for the storage pool entries may be overlaid on top of the existing SAN zone configuration.
This allows the SAN administrator to continue to use the existing SAN zone configuration strategy for handling
off-hours configuration changes and for setting up server access to additional SAN services such as backup.
The storage pool entries representing a boot disk should be considered the server's "base WWN" and used when
creating additional zone definitions that involve an HP Matrix OE logical server but are managed independently
of the storage pool entries.
It is recommended that unique WWN-based zones be created for each storage pool entry. This allows maximum
flexibility, as storage pool entries may be combined differently to support future logical servers (and, by
following this recommendation, the SAN zoning need not change). Additionally, creating unique WWN-based
zones keeps RSCN events within the fabric to an absolute minimum. This is a SAN management best practice and
avoids many of the timing issues that can occur when using servers in a boot-from-SAN configuration.
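The one-zone-per-entry recommendation can be illustrated with a minimal sketch (the data structures and WWNs below are hypothetical, not a Matrix OE API; real zoning is done with the fabric vendor's tools):

```python
# Sketch: build one WWN-based zone per storage pool entry, named after it.
# Because each zone contains only that entry's initiator WWNs plus the array
# target ports, moving an entry to a different logical server needs no
# zoning change.

def build_zones(pool_entries, target_wwpns):
    """pool_entries: {entry_name: [initiator WWNs]}; returns {zone: members}."""
    zones = {}
    for name, initiators in pool_entries.items():
        zones[name] = sorted(initiators) + sorted(target_wwpns)
    return zones

entries = {
    "lsrv1_boot": ["50:06:0b:00:00:c2:62:00", "50:06:0b:00:00:c2:62:02"],
    "lsrv1_shared": ["50:06:0b:00:00:c2:62:04", "50:06:0b:00:00:c2:62:06"],
}
targets = ["50:00:1f:e1:50:0a:26:6d"]
print(build_zones(entries, targets))
```

Keeping each zone this small also limits the scope of RSCN events, as noted above.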
Multi-Initiator NPIV
Storage pool entries contain Virtual Connect initiator WWNs, which can be flexibly moved to preserve logical server
access to shared storage. Those initiator WWNs can be applied to physical HBA ports, and the existing solution has
a 1:1 relationship between the Virtual Connect initiator WWNs and the physical HBA ports. Matrix OE also provides
support for multi-initiator NPIV, which enables multiple initiator WWNs to be applied to a single physical HBA
port. NPIV (N_Port ID Virtualization) is an extension to the Fibre Channel specification that allows a single
physical N_Port to be shared across multiple N_Port IDs. Virtual Connect Fibre Channel already uses NPIV to
represent the various physical HBA ports on the network; it can now also be used on the server side, allowing a
logical server to have storage pool entries with several WWNs that map to the same physical HBA ports.
Consider the scenario described earlier in Figure 17. The logical server required multi-path access to a boot volume
and a private data volume in one storage pool entry, and to a shared data volume in another storage pool entry. That
requires four initiator WWNs, which multi-initiator NPIV can map onto two physical HBA ports (rather than
requiring an additional HBA mezzanine card to provide four physical ports).
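The mapping in that scenario can be sketched as below (the helper and the port/WWN names are hypothetical, not a Matrix OE interface; the per-port limit varies by hardware, as noted in the referenced white paper):

```python
# Sketch: with multi-initiator NPIV, several initiator WWNs share one
# physical HBA port. Alternating WWNs across the ports places each storage
# pool entry's two WWNs on different ports, preserving multi-path access.

def assign_wwns_to_ports(initiator_wwns, physical_ports, max_per_port=4):
    assignment = {port: [] for port in physical_ports}
    for i, wwn in enumerate(initiator_wwns):
        port = physical_ports[i % len(physical_ports)]
        if len(assignment[port]) >= max_per_port:
            # The real limit is hardware-specific; see the white paper.
            raise ValueError("exceeded per-port NPIV limit")
        assignment[port].append(wwn)
    return assignment

# Four initiator WWNs (two per storage pool entry) on two physical ports.
wwns = ["wwn_boot_a", "wwn_boot_b", "wwn_shared_a", "wwn_shared_b"]
ports = ["hba1_port1", "hba1_port2"]
print(assign_wwns_to_ports(wwns, ports))
```

Each physical port ends up carrying two initiator WWNs, so no additional mezzanine card is needed.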
More detailed information on multi-initiator NPIV use is available in the Matrix operating environment - Automated
Storage Provisioning: "Static" SAN volume automation via multi-initiator NPIV white paper available at
http://www.hp.com/go/matrixoe/docs. That white paper provides details on scenarios that use the multi-initiator
NPIV capabilities, as well as configuration details (for example, supported hardware; the maximum number of WWNs
per physical port varies with the specific hardware configuration). Multi-initiator NPIV also requires operating
system support, and the white paper details which Windows, RHEL, and SLES versions are supported.
EVA Disk Arrays
The HP EVA Disk Arrays can be managed through Command View EVA. In Command View EVA, the WWNs of the
server blade HBAs must be added as hosts before the storage volumes (Vdisks) are presented. The HBA WWNs are
provided by the system administrator after the logical server or storage pool entry is created (alternatively, the
storage administrator may be responsible for creating and maintaining the storage pool).
Figure 40 shows the Add Host tab within the Command View EVA host properties display.