10.5 HP StoreVirtual Storage VSA Installation and Configuration Guide (TA688-96138, March 2013)

is not accessible from more than one physical server. (Note that the LeftHand OS software
consumes a small amount of the available space.)
• For ESX Server, the first virtual disk must be connected to SCSI address 1:0.
• For Hyper-V Server, the first virtual disk must be connected to the first SCSI controller.
• All virtual disks for the VSA for vSphere must be configured as independent and persistent to prevent VM snapshots from affecting them.
• Virtual disks for the VSA for Hyper-V must be fixed, not dynamic.
• The VMFS datastore or NTFS partition for the VSA must not be shared with any other VMs.
• The minimum configuration for high availability with automatic failover is two or more VSAs on separate physical servers, using Network RAID-10, plus a Failover Manager.
• Two or more VSAs on separate physical servers can be clustered with a Virtual Manager for manual failover.
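For the VSA for vSphere, the disk-attachment requirements above correspond to entries like the following in the virtual machine's .vmx configuration file. This is a sketch only; the disk file name is a placeholder, and in practice these settings are made through the vSphere Client rather than by editing the .vmx file directly.

```
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "VSA-disk1.vmdk"
scsi1:0.mode = "independent-persistent"
```

The scsi1:0 entries place the first data disk at SCSI address 1:0, and the independent-persistent mode keeps the disk out of VM snapshots. On Hyper-V, a fixed virtual disk for the VSA can be created with the Hyper-V PowerShell cmdlet `New-VHD -Path <path> -SizeBytes <size> -Fixed` (path and size are placeholders) before attaching it to the first SCSI controller.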
Best practices
The following configuration recommendations improve the reliability and performance of your
virtual SAN. Implement as many of these best practices as possible in your virtual SAN
environment.
Each VSA should meet the following conditions, if possible.
• Have a virtual switch or virtual network comprising dual Gigabit Ethernet or faster links. Network redundancy and greater bandwidth improve both performance and reliability.
• Be configured to start automatically and first, before any other virtual machines, when the ESX Server on which it resides is started. This ensures that the VSA is brought back online as soon as possible and automatically rejoins its cluster. The default installation configuration for the VSA for Hyper-V is to start automatically if it was running when the server shut down.
• Use redundant RAID for the underlying storage of the VSA in each server so that a single disk failure does not cause a VSA system failure. Do not use RAID 0.
NOTE: See the HP StoreVirtual Storage User Guide for detailed information about using
RAID for individual server-level data protection.
• For the VSA for Hyper-V, dedicate a unique network adapter to iSCSI traffic.
• For the VSA for vSphere, be located on the same virtual switch as the VMkernel network used for iSCSI traffic. This allows a portion of iSCSI I/O to be served directly from the VSA to the iSCSI initiator without crossing a physical network.
• For the VSA for vSphere, be on a virtual switch separate from the VMkernel network used for VMotion. This prevents VMotion traffic and VSA I/O traffic from interfering with each other and degrading performance.
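As a sketch of the auto-start recommendation (the VM name "VSA1" is a placeholder): on Hyper-V, the start action can be set with the standard Hyper-V PowerShell cmdlets, and on vSphere, VMware's PowerCLI exposes a comparable start policy. Verify the exact cmdlet parameters against your PowerCLI and Windows Server versions.

```
# Hyper-V: always start the VSA when the host boots
# (the VSA default is StartIfRunning; Start brings it up unconditionally)
Set-VM -Name "VSA1" -AutomaticStartAction Start -AutomaticStartDelay 0

# vSphere (PowerCLI): power on the VSA first, before other VMs
Get-VM "VSA1" | Get-VMStartPolicy | Set-VMStartPolicy -StartAction PowerOn -StartOrder 1
```

Setting the VSA's start order ahead of every other VM on the host matters because VMs stored on the virtual SAN cannot start until the VSA has rejoined its cluster and its volumes are back online.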
Unsupported configurations
• NIC bonding between the two virtual NICs in the VSA is not supported. (NIC bonding is a best practice on the physical server.)
• The virtual NICs on the VSA for vSphere and the VSA for Hyper-V do not support flow control setting modifications or TCP offload. The physical NICs on the server can be configured with these features.
• Use of VMware snapshots, VMotion, HA, DRS, or Microsoft Live Migration on the VSA itself.
• Use of any ESX Server configuration that VMware does not support.