Administrator Guide

Auto-resume-at-loser
Determines whether the losing cluster automatically resumes I/O when the inter-cluster link is repaired after a failure.
When the link is restored, the losing cluster discovers that the data on the winning cluster is different. The loser must
decide whether to adopt the winner's data immediately or to keep I/O suspended.
By default, auto-resume is enabled.
Usually, this property is set to false to give the administrator time to halt and restart the application. Otherwise, dirty data in
the host's cache may be inconsistent with the on-disk image to which the winning cluster has been writing. If the host flushes
dirty pages out of sequence, the data image may be corrupted.
Set this property to true for consistency groups used in a cluster cross-connect. In this case, there is no risk of data loss because
the winner is always connected to the host, which avoids out-of-sequence delivery.
true (default) - I/O automatically resumes on the losing cluster after the inter-cluster link has been restored.
Set auto-resume-at-loser to true only when the losing cluster is servicing a read-only application, such as serving web
pages.
false - I/O remains suspended on the losing cluster after the inter-cluster link has been restored, and must be resumed
manually.
Set auto-resume-at-loser to false for all applications that cannot tolerate a sudden change in data.
CAUTION: Setting the auto-resume property to true may cause a spontaneous change in the data view presented
to applications at the losing cluster when the inter-cluster link is restored. If the application has not failed, it
may not be able to tolerate this sudden change, which can cause data corruption. Set the property to false
except for applications that can tolerate this behavior and for cross-connected hosts.
Use the set command in the advanced context to configure the auto-resume property for a consistency group:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> set auto-resume-at-loser true
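To confirm the change, you can list the attributes of the same context. This sketch assumes the ll command (used later in this chapter to display consistency-group contexts) also lists the properties of the advanced context; verify against your CLI help:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> ll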
Virtual-volumes
Administrators can add virtual volumes to, and remove virtual volumes from, a consistency group. To be added to a
consistency group, a virtual volume:
Must not be a logging volume
Must have storage at every cluster in the storage-at-clusters property of the target consistency group
Must not be a member of any other consistency group
Any properties (such as detach rules or auto-resume) that conflict with those of the consistency group are automatically
changed to match those of the consistency group
NOTE:
Virtual volumes with different properties are allowed to join a consistency group, but inherit the properties of the
consistency group.
Use the consistency-group list-eligible-virtual-volumes command to display virtual volumes that are eligible
to be added to a consistency group.
Use the consistency-group add-virtual-volumes command to add one or more virtual volumes to a consistency
group.
Use the ll /clusters/cluster-*/consistency-groups/consistency-group command to display the virtual volumes
in the specified consistency group.
Use the consistency-group remove-virtual-volumes command to remove one or more virtual volumes from a
consistency group.
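Taken together, the commands above form a typical workflow: list eligible volumes, add one, verify membership, and later remove it. The following hypothetical session illustrates that sequence; the volume name vol_0001_vol is illustrative, and the option flags shown are assumptions to be confirmed with the CLI help for each command:
VPlexcli:/> consistency-group list-eligible-virtual-volumes /clusters/cluster-1/consistency-groups/TestCG
VPlexcli:/> consistency-group add-virtual-volumes --virtual-volumes vol_0001_vol --consistency-group /clusters/cluster-1/consistency-groups/TestCG
VPlexcli:/> ll /clusters/cluster-1/consistency-groups/TestCG
VPlexcli:/> consistency-group remove-virtual-volumes --virtual-volumes vol_0001_vol --consistency-group /clusters/cluster-1/consistency-groups/TestCG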