
singleactive
This policy routes I/O down the single active path. This
policy can be configured for A/P arrays with one active path
per controller, where the other paths are used in case of
failover. If configured for A/A arrays, there is no load
balancing across the paths, and the alternate paths are only
used to provide high availability (HA). If the current active
path fails, I/O is switched to an alternate active path. No
further configuration is possible as the single active path
is selected by DMP.
The following example sets the I/O policy to singleactive
for JBOD disks:
# vxdmpadm setattr arrayname Disk \
iopolicy=singleactive
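You can verify the change by querying the attribute with the corresponding
vxdmpadm getattr command, shown here for the same Disk array name:
# vxdmpadm getattr arrayname Disk iopolicy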
Scheduling I/O on the paths of an Asymmetric Active/Active array
You can use the use_all_paths attribute in conjunction with the adaptive,
balanced, minimumq, priority, and round-robin I/O policies to specify whether
I/O requests are to be scheduled on the secondary paths in addition to the primary
paths of an Asymmetric Active/Active (A/A-A) array. Depending on the
characteristics of the array, the consequent improved load balancing can increase
the total I/O throughput. However, this feature should only be enabled if
recommended by the array vendor. It has no effect for array types other than
A/A-A.
For example, the following command sets the balanced I/O policy with a partition
size of 2048 blocks (1MB) on the enclosure enc0, and allows scheduling of I/O
requests on the secondary paths:
# vxdmpadm setattr enclosure enc0 iopolicy=balanced \
partitionsize=2048 use_all_paths=yes
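The use_all_paths attribute can be combined with the other supported policies
in the same way. For example, the following command enables scheduling of I/O
requests on the secondary paths of enc0 together with the round-robin policy:
# vxdmpadm setattr enclosure enc0 iopolicy=round-robin \
use_all_paths=yes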
The default setting for this attribute is use_all_paths=no.
You can display the current setting for use_all_paths for an enclosure, arrayname
or arraytype. To do this, specify the use_all_paths option to the vxdmpadm
getattr command.
# vxdmpadm getattr enclosure HDS9500-ALUA0 use_all_paths
ENCLR_NAME       DEFAULT   CURRENT
===========================================
HDS9500-ALUA0    no        yes
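The setting can also be displayed for an array name or array type. For example,
assuming the A/A-A array type keyword used with vxdmpadm setattr, the following
command displays the setting for arrays of that type:
# vxdmpadm getattr arraytype A/A-A use_all_paths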