9.5 HP P4000 VSA Installation and Configuration Guide

• For the VSA for Hyper-V Server, dedicate a unique network adapter to iSCSI traffic.
• For the VSA for ESX Server, place the VSA on the same virtual switch as the VMkernel network
used for iSCSI traffic. This allows a portion of the iSCSI I/O to be served directly from the VSA
to the iSCSI initiator without crossing a physical network.
• For the VSA for ESX Server, place the VSA on a virtual switch that is separate from the VMkernel
network used for VMotion. This prevents VMotion traffic and VSA I/O traffic from interfering with
each other and degrading performance.
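The two ESX placement rules above can be expressed as a simple pre-deployment check. The sketch below is illustrative only: it models each component's virtual-switch assignment as plain strings (a hypothetical data model, not a call into the vSphere API) and verifies that the VSA shares a virtual switch with the iSCSI VMkernel network but not with the VMotion network.

```python
# Illustrative sketch, assuming vSwitch assignments are known as plain strings.
# This is not a VMware API; names and parameters here are hypothetical.

def check_vsa_network_placement(vsa_vswitch, iscsi_vmkernel_vswitch, vmotion_vswitch):
    """Return a list of best-practice violations for a VSA for ESX Server."""
    problems = []
    # Rule 1: VSA and the iSCSI VMkernel network share a virtual switch.
    if vsa_vswitch != iscsi_vmkernel_vswitch:
        problems.append("VSA is not on the same vSwitch as the iSCSI VMkernel network")
    # Rule 2: VSA is kept off the VMotion virtual switch.
    if vsa_vswitch == vmotion_vswitch:
        problems.append("VSA shares a vSwitch with the VMotion VMkernel network")
    return problems

# Example: VSA on vSwitch1 with the iSCSI VMkernel port; VMotion isolated on vSwitch2.
print(check_vsa_network_placement("vSwitch1", "vSwitch1", "vSwitch2"))  # → []
```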
Unsupported configurations
The following configurations, though possible with VMware ESX or Microsoft Hyper-V and the
VSA, are specifically not supported for production use at this time.
• More than 2 NICs configured on the VSA for ESX Server.
• NIC bonding using the CMC within the VSA itself. (NIC bonding remains a best practice in the
server.)
• Use of VMware snapshots, VMotion, HA, DRS, or Microsoft Live Migration on the VSA itself.
• Use of any ESX Server configuration that VMware does not support.
• Use of any Hyper-V Server configuration that Microsoft does not support.
• Booting physical servers from a VSA cluster.
• Extending the data virtual disk of the VSA (SCSI 1:0 on ESX Server; the first SCSI controller
in Hyper-V) while the VSA is in a cluster.
• Co-location of a VSA and other virtual machines on the same physical platform without
reservations for the VSA's CPU and memory in ESX.
• Co-location of a VSA and other virtual machines on the same VMFS datastore or NTFS partition.
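Several of the restrictions above can be encoded as an automated check on a deployment plan. The sketch below is a hedged illustration: the dictionary keys (nic_count, cpu_mem_reserved, and so on) are hypothetical field names invented for this example, not part of any HP or VMware interface.

```python
# Illustrative sketch; `plan` is a plain dict with hypothetical keys, not a real API.

def unsupported_vsa_configs(plan):
    """Return the unsupported conditions present in a VSA deployment plan."""
    problems = []
    if plan.get("nic_count", 0) > 2:
        problems.append("more than 2 NICs configured on the VSA for ESX Server")
    if plan.get("nic_bonding_in_vsa", False):
        problems.append("NIC bonding configured within the VSA itself")
    if plan.get("snapshots_or_migration_on_vsa", False):
        problems.append("snapshots, VMotion, HA, DRS, or Live Migration on the VSA itself")
    if plan.get("boots_physical_servers", False):
        problems.append("physical servers booting from a VSA cluster")
    if not plan.get("cpu_mem_reserved", True):
        problems.append("VSA co-located with other VMs without CPU/memory reservations")
    if plan.get("shares_datastore_with_other_vms", False):
        problems.append("VSA shares a VMFS datastore or NTFS partition with other VMs")
    return problems

# A compliant plan produces no findings:
print(unsupported_vsa_configs({"nic_count": 2, "cpu_mem_reserved": True}))  # → []
```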
Hardware design for VSA
The hardware platform used for a virtual SAN affects the capacity, performance, and reliability
of that virtual SAN. The following hardware features affect the VSA configuration:
• CPU
• Memory
• Virtual Switch or Network
• Controllers and Hard Disk Drives
• Network Adapters
CPU
Because the CPU of the VSA must be reserved, platforms that will host a VSA alongside other VMs
should be built with additional processor cores to accommodate those VMs. Use multi-core
processors running at 2 GHz or faster per core so that a single core of at least 2 GHz can be
reserved for the VSA. All remaining cores are then available to other VMs, avoiding resource
contention with the virtual SAN. For example, a platform with two dual-core processors could
reserve one core for the VSA and share the remaining three cores among the other VMs.
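The core arithmetic above can be sketched as a small sizing helper. The thresholds follow the guidance in this section (one core of at least 2 GHz reserved for the VSA); the function name and structure are illustrative, not part of any HP tool.

```python
# Illustrative sizing sketch based on the CPU guidance in this section.

def cores_available_for_other_vms(total_cores, core_ghz, vsa_cores_reserved=1):
    """Cores left for other VMs after reserving capacity for the VSA.

    Raises ValueError if the platform cannot satisfy the VSA reservation
    (a reserved core running at 2 GHz or faster).
    """
    if core_ghz < 2.0:
        raise ValueError("VSA requires a reserved core of at least 2 GHz")
    if total_cores <= vsa_cores_reserved:
        raise ValueError("no cores left for other VMs beyond the VSA reservation")
    return total_cores - vsa_cores_reserved

# Example from the text: two dual-core processors (4 cores) at 2+ GHz
# leave 3 cores to share among other VMs.
print(cores_available_for_other_vms(4, 2.4))  # → 3
```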
Memory
Similarly, the memory of the VSA must be reserved. For platforms that will host a VSA and other
VMs, build in additional memory to accommodate the additional VMs. Assuming the hypervisor
and its management applications use less than 1 GB, memory beyond 2 GB is available to
other VMs, again avoiding resource contention with the virtual SAN. For example, assuming