About rebuilds
Rebuilds synchronize data from a source drive to a target drive. When differences arise between legs of a RAID, a rebuild
updates the out-of-date leg.
There are two types of rebuild behavior:
A full rebuild copies the entire contents of the source to the target.
A logging rebuild copies only changed blocks from the source to the target.
Local mirrors are updated using a full rebuild (local devices do not use logging volumes).
In metro node Metro configurations, all distributed devices have an associated logging volume. Logging volumes keep track
of blocks that are written during an inter-cluster link outage. After a link or leg is restored, the metro node system uses the
information in logging volumes to synchronize mirrors by sending only changed blocks across the link.
Logging volume rebuilds also occur when a leg of a distributed RAID 1 becomes unreachable but recovers quickly.
If a logging volume is unavailable at the time that a leg is scheduled to be marked out-of-date, the leg is marked as fully
out-of-date, causing a full rebuild.
The unavailability of a logging volume matters both at the time of recovery (when the system reads the logging volume) and at the time that a write fails on one leg and succeeds on another (when the system begins writing to the logging volume).
CAUTION: If no logging volume is available, an inter-cluster link restoration causes a full rebuild of every
distributed device to which there were writes while the link was down.
See Logging volumes.
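The difference between the two rebuild types can be sketched in a few lines of Python. This is a conceptual illustration only; WriteLog and rebuild() are hypothetical names standing in for metro node internals, with the logging volume playing the role of the dirty-block map.

# Conceptual sketch of full vs. logging rebuilds (not metro node code).

class WriteLog:
    """Plays the role of a logging volume: records which blocks were
    written on the surviving leg while the other leg was unreachable."""

    def __init__(self, num_blocks):
        self.dirty = [False] * num_blocks

    def mark(self, block):
        # Called for each write that succeeds on one leg but not the other.
        self.dirty[block] = True

def rebuild(source, target, log=None):
    """Synchronize target from source.

    With a usable write log, only the blocks recorded as changed are
    copied (a logging rebuild). Without one, there is no record of what
    changed, so every block must be copied (a full rebuild).
    """
    if log is None:
        blocks = range(len(source))                          # full rebuild
    else:
        blocks = [i for i, d in enumerate(log.dirty) if d]   # logging rebuild
    for i in blocks:
        target[i] = source[i]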
Rebuilds for thin provisioned storage
Thin provisioning allows data to be migrated onto a thinly provisioned storage volume while allocating only the minimal amount of thin storage pool capacity.
Thinly provisioned storage volumes can be incorporated into RAID 1 mirrors with similarly minimal consumption of thin storage pool capacity.
Metro node preserves the unallocated thin pool space of the target storage volume in different ways, depending on whether the target volume is thin-capable. For thin-capable volumes, if the source leg indicates zeroed data, metro node issues UNMAP for those blocks on the target volume. For non-thin-capable target legs, metro node checks for zeroed data content before writing, and suppresses the write where it would cause an unnecessary allocation. For this thin rebuild algorithm to be selected, metro node automatically sets the thin-rebuild flag on thin-capable volumes as part of the claiming process. For storage volumes that are not thin-capable, the metro node administrator sets the thin-rebuild attribute to true during or after storage claiming.
NOTE:
During the storage volume claiming operation, metro node automatically sets the thin-rebuild flag to true on thin-capable arrays. Metro node does not change the flag on thin storage volumes that were already claimed with the flag set to false.
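As a rough sketch of the per-block decision described above: the Volume class, its read/write/unmap methods, and the 512-byte block size below are assumptions for illustration only, not metro node interfaces.

ZERO = b"\x00" * 512   # assumed block size, for illustration only

class Volume:
    """Toy block device. On a thin-capable volume, unmapped blocks
    consume no pool space and read back as zeroes."""

    def __init__(self, blocks, thin_capable=False):
        self.blocks = blocks
        self.thin_capable = thin_capable

    def read(self, i):
        return self.blocks[i]

    def write(self, i, data):
        self.blocks[i] = data

    def unmap(self, i):
        self.blocks[i] = ZERO   # deallocates the block on the thin pool

def rebuild_block(source, target, i):
    data = source.read(i)
    if data != ZERO:
        target.write(i, data)   # real data is always copied
    elif target.thin_capable:
        target.unmap(i)         # zeroed source: keep the target block unallocated
    elif target.read(i) != ZERO:
        target.write(i, data)   # target holds stale data: overwrite with zeroes
    # Otherwise both sides are zero; skipping the write avoids an
    # unnecessary allocation on the target.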
Metro node allows you to change the thin-rebuild value for storage volumes regardless of whether they are thin-capable. For thin-capable storage volumes, if you try to set the thin-rebuild property to false, the metro node CLI displays a warning. Compared to a normal rebuild, in which all the contents of the source are written to the target, thin-rebuild performance might be better if (see the example after this list):
The storage volumes are not thin-capable
The contents of the source and the target of the rebuild are almost the same
Only the differing data is written during the thin-rebuild process
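As a back-of-the-envelope illustration of that comparison, using made-up figures: if only 2% of the blocks on a 100 GiB leg differ, a thin rebuild writes roughly 2 GiB where a normal rebuild writes all 100 GiB. The thin rebuild still has to examine block contents to find the differences, which helps explain why it wins only when the source and target are mostly the same.

device_bytes = 100 * 2**30          # 100 GiB leg (made-up figure)
differing_fraction = 0.02           # 2% of blocks differ (made-up figure)

full_rebuild_bytes = device_bytes                        # copies everything
thin_rebuild_bytes = device_bytes * differing_fraction   # writes only differences

print(f"full: {full_rebuild_bytes / 2**30:.0f} GiB written")   # full: 100 GiB written
print(f"thin: {thin_rebuild_bytes / 2**30:.0f} GiB written")   # thin: 2 GiB written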
The discovered thin provisioning property of storage volumes enables the creation of thin-provisioning-capable metro node virtual volumes to which hosts can send UNMAP commands to free unused blocks. However, the configured thin-rebuild property controls the mirror synchronization that is performed at the metro node back-end.
See Thin support in metro node for more information on the thin-aware capabilities of metro node.
CAUTION:
If a thinly provisioned storage volume contains non-zero data before being connected to metro node, the performance of the migration or initial RAID 1 rebuild is adversely affected. If the thin storage allocation pool runs out of space, and the affected leg is the last redundant leg of the RAID 1, further writes to the thinly provisioned device cause the volume to lose access to that device, which can result in data unavailability.