CLI Guide

[-t|--to] {target-extent|target-device}
* The name of the target extent or device for the migration. Specify the target device or extent by name if that name is unique in the global namespace. Otherwise, specify a full pathname.
Optional arguments
[-s|--transfer-
size] value
Maximum number of bytes to transfer per operation per device. A bigger transfer size means smaller
space available for host I/O. Must be a multiple of 4 K.
Range: 40 KB - 128 M. Default: 128 K.
If the host I/O activity is very high, setting a large transfer size may impact host I/O. See About
transfer-size in the batch-migrate start command.
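The transfer-size constraints above (a multiple of 4 KB, within the 40 KB to 128 MB range) can be checked before starting a migration. A minimal sketch in Python; the helper name is illustrative and is not part of the CLI:

```python
# Sketch: validate a proposed transfer size against the documented
# constraints: a multiple of 4 KB, between 40 KB and 128 MB inclusive.
KB = 1024
MB = 1024 * KB

def valid_transfer_size(size_bytes: int) -> bool:
    """Return True if size_bytes satisfies the documented limits."""
    if size_bytes % (4 * KB) != 0:   # must be a multiple of 4 KB
        return False
    return 40 * KB <= size_bytes <= 128 * MB

print(valid_transfer_size(128 * KB))  # default value -> True
print(valid_transfer_size(42 * KB))   # not a multiple of 4 KB -> False
```

A size such as 42 KB fails because it is not a whole multiple of 4 KB, even though it falls inside the allowed range.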
--paused
Starts the migration in a paused state.
--force
Do not ask for confirmation. Allows this command to be run using a non-interactive script.
* - argument is positional.
Description
Starts the specified migration. If the target is larger than the source, the extra space on the target is unusable after the migration, and a prompt to confirm the migration is displayed.
Up to 25 local and 25 distributed migrations (rebuilds) can be in progress at the same time. Any migrations beyond those limits
are queued until an existing migration completes.
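The concurrency limits above can be modeled as two independent pools of 25 slots, with overflow requests queued. A minimal sketch, assuming a first-in, first-out queue; the class and method names are illustrative, not part of the product:

```python
from collections import deque

# Sketch: up to 25 local and 25 distributed migrations run concurrently;
# requests beyond those limits wait in a queue for their pool.
LIMIT = 25

class MigrationScheduler:
    def __init__(self):
        self.active = {"local": set(), "distributed": set()}
        self.queued = {"local": deque(), "distributed": deque()}

    def start(self, name: str, kind: str) -> str:
        """Start a migration if a slot is free in its pool, else queue it."""
        if len(self.active[kind]) < LIMIT:
            self.active[kind].add(name)
            return "in-progress"
        self.queued[kind].append(name)
        return "queued"

    def complete(self, name: str, kind: str) -> None:
        """Finish a migration; promote the oldest queued one of the same kind."""
        self.active[kind].discard(name)
        if self.queued[kind]:
            self.active[kind].add(self.queued[kind].popleft())

sched = MigrationScheduler()
states = [sched.start(f"mig_{i}", "local") for i in range(26)]
print(states[24], states[25])  # in-progress queued
```

The two pools are independent: a 26th local migration is queued even if no distributed migrations are running.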
Extent migrations - Extents are ranges of 4 KB blocks on a single LUN presented from a single back-end array. Extent migrations move data between extents in the same cluster. Use extent migration to:
• Move extents from a hot storage volume shared by other busy extents
• Defragment a storage volume to create more contiguous free space
• Support technology refreshes
Start and manage extent migrations from the extent migration context:
VPlexcli:/> cd /data-migrations/extent-migrations/
VPlexcli:/data-migrations/extent-migrations>
NOTE:
Extent migrations are blocked if the associated virtual volume is undergoing expansion. See the virtual-volume
expand command.
Device migrations - Devices are RAID 0, RAID 1, or RAID C constructs built on extents or other devices. Devices can be nested; a distributed RAID 1 can be configured on top of two local RAID 0 devices. Device migrations move data between devices on the same cluster or between devices on different clusters. Use device migration to:
• Migrate data between dissimilar arrays
• Relocate a hot volume to a faster array
This command can fail on a cross-cluster migration if there are not enough meta volume slots. See the troubleshooting section of the metro node procedures in the SolVe Desktop for a resolution to this problem.
Start and manage device migrations from the device migration context:
VPlexcli:/> cd /data-migrations/device-migrations/
VPlexcli:/data-migrations/device-migrations>
When running the dm migration start command across clusters, you might receive the following error message:
VPlexcli:/> dm migration start -f SveTest_tgt_r0_case2_1_0002 -t
SveTest_src_r0_case2_2_0002 -n cc2
The source device 'SveTest_tgt_r0_case2_1_0002' has a volume
'SveTest_tgt_r0_case2_1_0002_vol' in a view. Migrating to device
'SveTest_src_r0_case2_2_0002' will create a synchronous distributed device. In this GEO
system, this can increase the per I/O latency on 'SveTest_tgt_r0_case2_1_0002_vol'. If
applications using 'SveTest_tgt_r0_case2_1_0002_vol' are sensitive to this latency, they
may experience data unavailability. Do you wish to proceed ? (Yes/No) y