
NOTE: If the Migration OK? status is not OK, the tooltip explains why the migration cannot
be performed until the necessary action is taken.
9. Set the Destination Virtual Volume Attributes.
a. Set the Configuration to Set individually.
b. Select the desired Destination Provisioning for the destination virtual volume.
c. Select the desired Destination Common Provisioning Group (CPG) for the destination
virtual volume.
d. To use these settings for all the virtual volumes, set the Configuration to Set all the same.
To use different settings, repeat the preceding steps for each volume.
10. Click Next.
11. Review the summary information, then click Add Migration.
The Host HP EVA to 3PAR StoreServ Online Import Status Summary page is displayed with
the newly added migration in the list of Migrations in Progress. The Preparation status displays
a clock, indicating that migration preparation is being performed. When preparation is complete,
the Preparation status changes to Done and the Completed button is displayed in Data Transfer,
indicating that the host must be unzoned from the EVA before the migration begins.
12. Perform a LUN rescan on each host being migrated.
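For example, on HP-UX hosts a LUN rescan can typically be performed by scanning for new
disk devices and then creating the corresponding device special files (the exact procedure
depends on the host operating system and multipath configuration):
ioscan -fnC disk
insf -e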
13. If you are migrating HP-UX 11i v2 standalone hosts or a Serviceguard cluster and non-shared
volume groups are present:
a. Note down the new pv_paths to the physical volumes in the volume group from the output
of the LUN rescan done in the previous step.
b. Add the new paths to the volume group using the vgextend command:
vgextend vg_name pv_path
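For example, assuming the rescan reported a new path /dev/dsk/c10t0d1 (a hypothetical
device file) to a physical volume in the hypothetical volume group vg01:
vgextend vg01 /dev/dsk/c10t0d1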
14. If you are migrating a Serviceguard cluster and shared volume groups are present, perform
the following steps:
a. Identify the shared volume group on which a configuration change is required.
For example, vg_shared
b. Identify one node of the cluster which is running an application using the shared volume
group.
For example, node1
The applications using the volume group vg_shared on this node remain unaffected
during the procedure. Stop the applications using the shared volume group on all the
other cluster nodes, scaling the cluster application down to the single cluster node
node1.
c. Deactivate the shared volume group on all other nodes of the cluster, except node1, using
the "-a n" option to the vgchange command.
vgchange -a n vg_shared
Ensure that the volume group vg_shared is now active only on the single cluster node node1
by running the vgdisplay command on all cluster nodes. The status must display
available on a single node only.
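For example, run the following command on every cluster node:
vgdisplay vg_shared
Only node1 should report the volume group status as available; on the other nodes, the
volume group should not be reported as active.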
d. Change the activation mode to exclusive on node1:
vgchange -a e -x vg_shared
e. On node1, note down the new pv_paths to the physical volumes already in the volume
group from the output of the LUN rescan. Now add all the new paths to the volume group
using the vgextend command:
vgextend vg_shared pv_path
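For example, if the rescan reported the hypothetical new paths /dev/dsk/c12t0d1 and
/dev/dsk/c12t0d2, all of them can be added in a single invocation:
vgextend vg_shared /dev/dsk/c12t0d1 /dev/dsk/c12t0d2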
f. Export the changes to other cluster nodes:
1. From node1, export the mapfile for vg_shared using the following command:
vgexport -s -p -m /tmp/vg_shared.map vg_shared
2. Copy the mapfile /tmp/vg_shared.map to all the other nodes of the cluster.
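For example, assuming node2 is one of the other cluster nodes (a hypothetical host name),
the mapfile can be copied with rcp or scp:
rcp /tmp/vg_shared.map node2:/tmp/vg_shared.map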