Veritas Storage Foundation 5.0.1 Cluster File System Administrator's Guide Extracts for the HP Serviceguard Storage Management Suite on HP-UX 11i v3

If the defaults file is edited while the vxconfigd daemon is already running, the vxconfigd
process must be restarted for the changes in the defaults file to take effect.
If the default activation mode is anything other than off, activation following a cluster join, or
following a disk group creation or import, fails if another node in the cluster has already
activated the disk group in a conflicting mode.
To display the activation mode for a shared disk group, use the vxdg list diskgroup
command.
You can also use the vxdg command to change the activation mode on a shared disk group.
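For example, the activation mode can be displayed and changed as follows. The disk group name mysharedg is hypothetical; the activation attribute takes one of the supported modes (such as sharedwrite or off):

```
# Display detailed information for the shared disk group,
# including its current activation mode
vxdg list mysharedg

# Set the activation mode for the shared disk group on this node
vxdg -g mysharedg set activation=sharedwrite
```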
Connectivity Policy of Shared Disk Groups
The nodes in a cluster must agree on the status of a disk; if they cannot, the connectivity policy
setting determines the disk status. If one node cannot write to a particular disk, all nodes must stop
accessing that disk before the results of the write operation are returned to the caller. If a node
cannot contact a disk, it must contact another node to check on the disk’s status. If a disk fails,
the nodes will agree to detach the disk, because no node can access it. If a disk does not fail, but
the access paths to that disk from some of the nodes in the cluster fail, the nodes in the cluster
will not be able to agree on the status of the disk. In this case, one of the following connectivity
policies will be applied:
Under the global connectivity policy, the detach occurs cluster-wide (globally) if any node in
the cluster reports a disk connectivity failure. This is the default connectivity policy.
Under the local connectivity policy, if disk connectivity fails, the failure is confined to the
particular nodes that see the failure. An attempt is made to communicate with all nodes in
the cluster to determine the usability of the disks. If all nodes report a problem with the
disks, a cluster-wide detach occurs.
The vxdg command is used to set the disk detach policy and the disk group failure policy. The
dgfailpolicy attribute sets the disk group failure policy for the case in which the master node
loses connectivity to the configuration and log copies within a shared disk group. This attribute
requires disk group version 120 or greater. The following policies are supported:
dgdisable—The master node disables the disk group for all user- or kernel-initiated
transactions. First write and final close fail. This is the default policy.
leave—The master node panics instead of disabling the disk group if a log update fails for
a user- or kernel-initiated transaction (including first write or final close). If the failure to
access the log copies is global, all nodes panic in turn as they become the master node.
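As a sketch, both policies can be set with vxdg on a shared disk group (mysharedg is a hypothetical name; confirm the attribute names against the vxdg(1M) manual page for your release). If the disk group was created at an older version, upgrade it first so that dgfailpolicy is accepted:

```
# Upgrade the disk group to the current version if it is below 120
vxdg upgrade mysharedg

# Set the disk detach policy (global or local)
vxdg -g mysharedg set diskdetpolicy=local

# Set the disk group failure policy (dgdisable or leave)
vxdg -g mysharedg set dgfailpolicy=leave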
Disk Group Failure Policy
The local detach policy by itself is insufficient to determine the desired behavior if the master
node loses access to all disks that contain copies of the configuration database and logs. In this
case, the disk group is disabled. As a result, the other nodes in the cluster also lose access to the
volume. In release 4.1, the disk group failure policy was introduced to determine the behavior
of the master node in such cases.
This policy has two possible settings as shown in the following table:
Table 4-3 Behavior of Master Node for Different Failure Policies
Type of I/O failure: The master node loses access to all copies of the logs.

Disable (dgfailpolicy=dgdisable): The master node disables the disk group.

Leave (dgfailpolicy=leave): The master node panics with the message “klog update failed”
for a failed kernel-initiated transaction, or “cvm config update failed” for a failed
user-initiated transaction.
The behavior of the master node under the disk group failure policy is independent of the setting
of the disk detach policy. If the disk group failure policy is set to leave, all nodes panic in the
unlikely case that none of them can access the log copies.
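The current policy settings can be checked in the detailed vxdg listing for the disk group; in this release the output includes fields for the detach and failure policies (field names may vary by release, and mysharedg is a hypothetical name):

```
# Display detailed disk group information; look for the
# detach-policy and dg-fail-policy fields in the output
vxdg list mysharedg
```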