2.  OS Host Mode of the disk 
3.  RAID level of the disk 
4.  Single-Path or Multi-Path 
5.  The number of data disks a logical server needs access to (along with size differentiation) 
6.  Additional storage pool entry tags describing class-of-service, I/O performance, etc. 
The recommendation is to think carefully about the real logical server storage requirements and to 
limit the storage pool entries to only those that will match these requirements.  
1.  Create a common boot disk definition for each OS type you wish to deploy. 
2.  Create a few common private data disk definitions for each OS type you wish to deploy 
(remember that multiple “disks” may be specified within a single storage pool entry). 
3.  Create a few common shared data disk definitions (if this applies to your situation) for each 
OS type you wish to deploy. 
4.  If you do not require MPIO, each storage pool entry (with the exception of shared data disk 
entries) only requires one server HBA WWN. 
5.  If you do require MPIO, each storage pool entry (again with the exception of shared data 
disk entries) minimally requires two server HBA WWNs. 
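As a concrete illustration of the recommendations above, the following minimal Python sketch models a storage pool entry definition. The class and field names (StoragePoolEntry, DiskDefinition, and so on) are hypothetical and are not part of any HPIO interface; they simply capture the differentiating attributes and the common boot/private-data entry pattern described earlier.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DiskDefinition:
    """One "disk" within a storage pool entry; several may be specified per entry."""
    size_gb: int
    raid_level: str     # RAID level of the disk, e.g. "RAID1", "RAID5"
    os_host_mode: str   # OS Host Mode of the disk, e.g. "Windows", "Linux"

@dataclass
class StoragePoolEntry:
    """Hypothetical model of a storage pool entry definition."""
    name: str
    multipath: bool                                  # single-path or multi-path
    shared: bool = False                             # shared data disk entry?
    disks: List[DiskDefinition] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)    # class-of-service, I/O performance, etc.

    def min_server_wwns(self) -> int:
        # Per recommendations 4 and 5; shared data disk entries are the noted
        # exception and are not modeled here.
        return 2 if self.multipath else 1

# A common single-path Windows boot entry and a private data entry
win_boot = StoragePoolEntry(
    name="WIN_BOOT",
    multipath=False,
    disks=[DiskDefinition(size_gb=50, raid_level="RAID1", os_host_mode="Windows")],
    tags=["boot"],
)
win_data = StoragePoolEntry(
    name="WIN_DATA",
    multipath=False,
    disks=[DiskDefinition(size_gb=100, raid_level="RAID5", os_host_mode="Windows"),
           DiskDefinition(size_gb=100, raid_level="RAID5", os_host_mode="Windows")],
    tags=["private-data"],
)
print(win_boot.min_server_wwns(), win_data.min_server_wwns())   # 1 1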
For a single-path server that only requires a boot disk and a pre-defined number of private data disks, 
only two server HBA WWNs are required. If the server will additionally be attached to both private 
data disks and shared data disks, the requirement rises to three server HBA WWNs. If the same 
server will be attached via multi-path, the number doubles for each category or type of storage pool 
entry (i.e. Boot Storage, Private Data Storage, and Shared Data Storage). 
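These counts reduce to a simple calculation: one server HBA WWN for each storage pool entry category attached to the server, doubled when multi-path is used. The helper below is a hypothetical sketch of that arithmetic, not part of any HPIO API.

def required_server_wwns(categories: int, multipath: bool) -> int:
    """Server HBA WWNs needed for the given number of storage pool entry
    categories (Boot Storage, Private Data Storage, Shared Data Storage)."""
    per_category = 2 if multipath else 1
    return categories * per_category

print(required_server_wwns(2, multipath=False))   # boot + private data, single-path -> 2
print(required_server_wwns(3, multipath=False))   # boot + private + shared, single-path -> 3
print(required_server_wwns(3, multipath=True))    # the same three categories, multi-path -> 6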
Disk Array and FC Fabric Zone Management 
It is recommended that unique WWN-based zones be created for each storage pool entry. It can be 
helpful to do the following: 
1.  Name the zone the same as the storage pool entry name (or vice-versa).  This makes it easier 
to keep track of the alignment between the storage pool entries and the zone names within 
the fabric. 
2.  If possible, prefix the names of the storage volumes created on the disk array with the name of 
the storage pool entry, suffixed by the LUN number or a simple index value. This also helps to 
align the storage pool entry names with the names of the storage volumes on the disk array. 
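The naming alignment recommended above can be sketched as follows. The entry, zone, and volume names are hypothetical examples only; they are not produced by HPIO or by any particular fabric or array management tool.

def zone_name(entry_name: str) -> str:
    # Recommendation 1: the WWN-based zone carries the storage pool entry name.
    return entry_name

def volume_names(entry_name: str, lun_count: int) -> list:
    # Recommendation 2: prefix each array volume with the entry name, suffixed
    # by the LUN number (or a simple index value).
    return [f"{entry_name}_LUN{lun}" for lun in range(lun_count)]

print(zone_name("WIN_DATA"))         # WIN_DATA
print(volume_names("WIN_DATA", 2))   # ['WIN_DATA_LUN0', 'WIN_DATA_LUN1']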
Creating unique WWN-based zones ensures that RSCN events within the fabric are kept to an 
absolute minimum. This is a SAN management best practice and avoids many of the timing issues 
that can occur when attempting to use servers in a boot-from-SAN configuration. 
The zone names used for the storage pool entries may be overlaid on top of the existing SAN zone 
configuration. This allows the SAN administrator to continue to use the existing SAN zone 
configuration strategy for handling off-hours configuration changes and/or setting up server access 
to additional SAN services such as backup. 
The WWNs in the storage pool entry representing a boot disk should be considered the server’s 
“base WWNs” and used when creating additional zone definitions that involve an HPIO-managed 
server but are managed independently of the storage pool entries. 
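For example, an additional zone for a SAN service such as backup could reuse the base WWNs drawn from the boot storage pool entry alongside the service's target port. The sketch below is hypothetical; the WWN values and the zone layout are illustrative only.

# Base WWNs taken from the server's boot storage pool entry (example values)
base_wwns = ["50:01:43:80:01:23:45:66", "50:01:43:80:01:23:45:67"]

# Target port WWN of an additional SAN service, e.g. a backup appliance (example value)
backup_target_wwn = "50:0a:09:82:aa:bb:cc:dd"

# An independently managed zone definition that reuses the server's base WWNs
backup_zone = {
    "name": "WIN_BOOT_backup",
    "members": base_wwns + [backup_target_wwn],
}
print(backup_zone)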
Technology Preview Release Requirements 
For the NPIV technology preview release, the following configuration requirements apply: 