
Global detach policy
Warning: The global detach policy must be selected when Dynamic MultiPathing
(DMP) is used to manage multipathing on Active/Passive arrays. This ensures
that all nodes correctly coordinate their use of the active path.
The global detach policy is the traditional and default policy for all nodes in the
configuration. If there is a read or write I/O failure on a slave node, the master
node performs the usual I/O recovery operations to repair the failure, and, if
required, the plex is detached cluster-wide. All nodes remain in the cluster and
continue to perform I/O, but the redundancy of the mirrors is reduced. When the
problem that caused the I/O failure has been corrected, the disks should be
re-attached and the mirrors that were detached must be recovered before the
redundancy of the data can be restored.
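For example, after the failed hardware has been repaired or replaced, the disks
can usually be re-attached and the detached mirrors resynchronized with the
standard VxVM recovery utilities. The following is only a sketch; the disk group
name mydg and volume name myvol are placeholders, and you should check the
vxreattach(1M) and vxrecover(1M) manual pages for the exact options on your
system:
# vxreattach
# vxrecover -b -g mydg myvol
The first command attempts to re-attach disks that have become accessible again;
the second recovers the detached plexes of the volume in the background.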
Local detach policy
Warning: Do not use the local detach policy if you use the VCS agents that monitor
the cluster functionality of Veritas Volume Manager, and which are provided with
Veritas Storage Foundation for Cluster File System HA and Veritas Storage
Foundation for databases HA. These agents do not notify VCS about local failures.
The local detach policy is designed to support failover applications in large clusters
where the redundancy of the volume is more important than the number of nodes
that can access the volume. If there is a write failure on a slave node, the master
node performs the usual I/O recovery operations to repair the failure, and
additionally contacts all the nodes to see if the disk is still accessible to them. If
the write failure is not seen by all the nodes, I/O is stopped for the node that first
saw the failure, and the application using the volume is also notified about the
failure. The volume is not disabled.
If required, configure the cluster management software to move the application
to a different node, and/or remove the node that saw the failure from the cluster.
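For example, if Veritas Cluster Server (VCS) is the cluster management software,
the application's service group can be switched to another node. The group and
node names below are placeholders, and this is only a sketch; the appropriate
action depends on how the application and its service group are configured:
# hagrp -switch appgrp -to node2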
The volume continues to return write errors as long as one mirror of the volume
has an error. The volume continues to satisfy read requests as long as one good
plex is available.
If the reason for the I/O error is corrected and the node is still a member of the
cluster, it can resume performing I/O to and from the volume without affecting
the redundancy of the data.
The vxdg command can be used to set the disk detach policy on a shared disk
group.
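For example, the following commands set the local detach policy on a disk group
and then display the current setting. The disk group name mydg is a placeholder;
confirm the attribute and output field names for your release in the vxdg(1M)
manual page:
# vxdg -g mydg set diskdetpolicy=local
# vxdg list mydg | grep detach-policy
To revert to the default behavior, set diskdetpolicy=global.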