Figure 2: Deploy SAN storage from a storage pool
Figure 2 modifies the process by introducing the concept of a storage pool. Each entry in the storage
pool can contain one or more volumes. In addition, each storage pool entry (in its simplest case) also
contains a pair of server HBA WWNs. The storage pool entry thus forms the nexus of one or more SAN
volumes and the server identity that will be used to access those volumes. With storage pools, the
Server administrator is able to transition from a “one server at a time” discussion with the SAN
administrator to a discussion about the number of servers that will be provisioned in the next six
months, their storage requirements, and how they will be used (e.g., compute intensive, sequential read
intensive, block read intensive, …).
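To make the storage pool entry concrete, the sketch below models it as a simple record tying a server identity (the pair of HBA WWNs) to one or more pre-provisioned SAN volumes. The class names, fields, and example WWNs are illustrative only and are not taken from the HP tooling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SanVolume:
    """A pre-provisioned SAN volume as presented by the disk array."""
    array: str
    lun_id: int
    size_gb: int

@dataclass
class StoragePoolEntry:
    """Nexus of a server identity (pair of HBA WWNs) and its SAN volumes."""
    port_wwns: List[str]                        # the pair of server HBA WWNs
    volumes: List[SanVolume] = field(default_factory=list)

# The SAN administrator might pre-provision several entries at once, sized to
# the servers expected over the next planning period.
pool = [
    StoragePoolEntry(
        port_wwns=["50:01:43:80:01:23:45:66", "50:01:43:80:01:23:45:67"],
        volumes=[SanVolume(array="array-01", lun_id=1, size_gb=72)],
    ),
    StoragePoolEntry(
        port_wwns=["50:01:43:80:01:23:45:68", "50:01:43:80:01:23:45:69"],
        volumes=[SanVolume(array="array-01", lun_id=2, size_gb=500)],
    ),
]
```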
With this information, the SAN administrator can pre-provision, on a less frequent basis, multiple
storage pool entries for the Server administrator to consume during the six-month period. The storage
pool forms the boundary within which the Server administrator may flexibly create, deploy, delete, and
re-create physical servers.
This leads to a process and management experience analogous to the one followed when managing
virtual machines, and it is accomplished without violating existing data center management or process
protocols. Each administrator continues to use the tools and processes that are unique to his or her
management domain.
Once the SAN storage provisioning is complete, no further changes are needed when the Server
administrator assigns one or more of the storage pool entries to a particular server. The SAN fabric
definition remains stable, and no changes need to be made to the presenting disk array.
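A minimal sketch of that assignment step, assuming a hypothetical server-side data model (the names and WWNs below are invented for illustration): the only action is handing the entry’s WWN “key” to the server, while the fabric zoning and array presentation created during pre-provisioning are left untouched.

```python
# Pre-provisioned entry: a WWN pair already zoned to its volumes on the fabric.
boot_entry = {"wwns": ["50:01:43:80:aa:00:00:01", "50:01:43:80:aa:00:00:02"],
              "volumes": ["array-01 LUN 1 (boot, 72 GB)"]}

# Logical server definition held by the Server administrator's tooling.
server = {"name": "blade-07", "entries": []}

def assign_entry(server, entry):
    """Hand the server the entry's WWN 'key'.

    Zoning and LUN presentation were completed when the entry was
    pre-provisioned, so nothing on the SAN fabric or the presenting
    disk array changes; the assignment is purely server-side.
    """
    server["entries"].append(entry)

assign_entry(server, boot_entry)
print(server["entries"][0]["wwns"])   # the WWNs the server will now present
```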
Using the idea of a pre-provisioned storage pool entry as a building block, we can now extend the
concept slightly to include the ability to separate storage pool entries into three broad categories:
1. Boot storage
2. Private data storage
3. Shared data storage
As described above, each storage pool entry minimally contains a pair of server HBA WWNs. The
server HBA WWNs effectively become a “key” which, when given to a particular server, allows that
server to access the pre-provisioned storage resource defined by the storage pool entry. Because we
have separated the storage pool entries into three categories, we are able to sequence the server’s
access to each of the storage categories in time. This is an important point when attempting to
automate the OS deployment process for a server because it is not always possible to
programmatically guide the OS deployment tool to select the desired boot disk while excluding the
visible private or shared data disks.
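The sketch below illustrates, under the same hypothetical data model, how the category attached to each entry can be used to sequence visibility: during OS deployment only the boot-category WWN key is given to the server, so the private and shared data disks remain invisible to the deployment tool.

```python
from dataclasses import dataclass
from typing import List

BOOT, PRIVATE, SHARED = "boot", "private-data", "shared-data"

@dataclass
class StoragePoolEntry:
    port_wwns: List[str]    # the WWN "key" pre-zoned to this entry's volumes
    category: str           # BOOT, PRIVATE, or SHARED

def wwns_for_phase(entries: List[StoragePoolEntry], phase: str) -> List[str]:
    """Return only the WWN keys the server should hold during a given phase.

    During OS deployment, handing the server only the boot-category key keeps
    the private and shared data disks invisible, so the deployment tool cannot
    select the wrong target disk.
    """
    allowed = {BOOT} if phase == "os-deployment" else {BOOT, PRIVATE, SHARED}
    return [w for e in entries if e.category in allowed for w in e.port_wwns]

entries = [
    StoragePoolEntry(["50:01:43:80:aa:00:00:01", "50:01:43:80:aa:00:00:02"], BOOT),
    StoragePoolEntry(["50:01:43:80:aa:00:00:03", "50:01:43:80:aa:00:00:04"], PRIVATE),
    StoragePoolEntry(["50:01:43:80:aa:00:00:05", "50:01:43:80:aa:00:00:06"], SHARED),
]
print(wwns_for_phase(entries, "os-deployment"))   # boot WWNs only
```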
NPIV (N_Port ID Virtualization)
The key technology used to grant a server visibility to the different types of statically provisioned
storage is known as N_Port ID Virtualization, or NPIV for short. NPIV is an extension to the
Fibre Channel (FC) specification¹ which facilitates the sharing of a single physical N_Port across
multiple N_Port IDs. The result is that multiple initiators (each assigned its own N_Port ID) are enabled
to share the same physical port. As the server boots, a fabric login sequence (FLOGI) is performed
using the server HBA’s base port WWN. At the completion of the sequence, the fabric has assigned
one N_Port ID to the base port WWN and the server is granted access to its storage resources based
upon this WWN->N_Port ID mapping.
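A toy model of that login sequence may help fix the idea; it is not a real FC stack, and the N_Port ID values and WWNs are arbitrary. The FDISC call shown for additional virtual ports is the standard NPIV mechanism, although the text above describes only the initial FLOGI.

```python
from itertools import count

class Fabric:
    """Toy model of fabric login; not a real Fibre Channel implementation."""

    def __init__(self):
        self._next_id = count(0x010001)   # arbitrary example N_Port ID space
        self.assignments = {}             # WWN -> N_Port ID

    def flogi(self, base_port_wwn):
        """Fabric login: the HBA's base port WWN receives an N_Port ID."""
        self.assignments[base_port_wwn] = next(self._next_id)
        return self.assignments[base_port_wwn]

    def fdisc(self, virtual_wwn):
        """NPIV login: an additional WWN on the same physical port receives
        its own N_Port ID, so multiple initiators share one N_Port."""
        self.assignments[virtual_wwn] = next(self._next_id)
        return self.assignments[virtual_wwn]

fabric = Fabric()
fabric.flogi("50:01:43:80:aa:00:00:10")   # server HBA base port WWN at boot
fabric.fdisc("50:01:43:80:aa:00:00:11")   # NPIV virtual initiator, same port
print(fabric.assignments)                 # the WWN -> N_Port ID mapping
```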