HP MPX200 Multifunction Router Data Migration User Guide

Abstract

This guide is intended for administrators of data migration services using the MPX200 Multifunction Router who have a basic knowledge of managing SANs and SAN storage.
© Copyright 2012–2013 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Introduction .......... 8
2 Getting started .......... 10
  Supported configurations .......... 10
  Supported topologies .......... 10
  Fabric configuration...
  Installing a data migration license key .......... 37
  Applying an array-based license to a specific array .......... 37
  Viewing data migration and scrubbing license usage .......... 39
5 Performing data migration .......... 41
  Typical data migration process...
  dml .......... 82
  get_target_diagnostics .......... 83
  initiator .......... 86
  iscsi...
  start_serial_jobs .......... 136
  target rescan .......... 136
  targetmap .......... 137
7 Performance and best practices...
  Notifications .......... 165
  qsrDMNotification object definition .......... 165
  Data migration Solution notification object types .......... 165
  qsrJobId OBJECT-TYPE...
1 Introduction

The MPX200-based DMS is a block-based data migration service that is independent of SAN, server, storage protocol (FC and iSCSI), and storage vendor. Because application unavailability during data migration can critically impact services, DMS is designed to reduce downtime. DMS supports both online (local and remote) and offline data migration across FC and iSCSI storage arrays. Anyone with knowledge of SAN or SAN storage administration can use DMS.
The “save capture” command (page 103) helps capture the configuration details, system logs, and MPX200 state at any time, and can be used for troubleshooting.
• Licensing: DMS offers capacity-based (per-terabyte) and array-based licenses. For more information, see “Data migration licenses” (page 36).
2 Getting started This chapter provides information about supported configurations, and hardware and software setup for using DMS with MPX200 and the HP mpx Manager. Supported configurations This section describes and illustrates the supported topologies (direct attach, fabric, and multipath), and lists the supported fabric and array types. Supported topologies Supported topologies include fabric and multipath configurations.
Figure 3 (page 11) shows the configuration used when you are:
• Migrating from one vendor SAN to another vendor SAN.
• Installing a new fabric when the old fabric does not have enough ports available.
Figure 3 Migration between dissimilar vendor SANs
Data migration configuration
Figures in this section show the typical configurations used for offline and online data migration using MPX200 models.
Figure 4 Offline, two Fibre Channel arrays Figure 5 (page 12) illustrates both online and offline data migration between two Fibre Channel storage arrays. Figure 5 Online and offline, two Fibre Channel arrays Figure 6 (page 13) illustrates both online and offline data migration between two Fibre Channel storage arrays using MPX200 models with four Fibre Channel ports per blade (eight total Fibre Channel ports).
Figure 6 Online and offline, source array and destination array Figure 7 (page 14) illustrates both online and offline data migration between two Fibre Channel arrays using MPX200 models when the Fibre Channel fabric is also upgraded.
Figure 7 Online and offline, two Fibre Channel arrays (MPX200; fabric upgrade) Figure 8 (page 14) shows the offline data migration between a Fibre Channel storage array and an iSCSI storage array. Figure 8 Online and Offline Fibre Channel and iSCSI arrays Figure 9 (page 15) illustrates remote migration using WAN links between two data centers.
Figure 9 Remote migration using FCIP over WAN links Figure 10 (page 16) illustrates remote migration using iSCSI.
Figure 10 Remote migration for iSCSI
Supported FC fabrics
DMS is currently supported with B-Series, C-Series, and H-Series 2 Gb, 4 Gb, 8 Gb, and 16 Gb FC fabrics.
Supported storage arrays
Table 1 (page 16) lists the storage array types for which DMS provides support. To view the most current compatibility matrix, see www.hp.com.
Table 1 Supported storage arrays (continued)

Vendor   Storage Array
         AMS family
         WMS family
         USP family
         TagmaStore Network Storage Controller model NSC55
HP       Storage MSA family
HP       Storage EVA family
HP       Storage XP P9000
HP       Storage XP10000 and 12000
HP       Storage XP20000 and 24000
HP       Storage P4000 G2 SAN Solutions (iSCSI)
HP       3PAR StoreServ 10000
HP       3PAR StoreServ 7000
HP       3PAR F-Class
HP       3PAR T-Class
HP       3PAR S-Class
HP       SAN Virtualization Services Platform (SVS
Software setup Software setup for DMS includes the following: • Zoning: Perform zoning on the FC switches so that array controller ports are visible to the MPX200, and the array is able to see virtual ports created by MPX200 FC ports and can present LUNs to the MPX200. • LUN presentation: Ensure the appropriate data LUNs are presented from the storage arrays to the MPX200.
3 Data migration objects

This chapter covers the objects that the MPX200 DMS uses in data migration.
Arrays
DMS either discovers the FC target ports zoned in with the MPX200 FC ports, or it discovers and logs into iSCSI qualified name (IQN) targets using iSCSI login. It forms an array when at least one data LUN is presented to the MPX200 from that array. If no data LUN is presented to the MPX200, all array ports are shown in the HP mpx Manager GUI and CLI as target ports.
indicates congestion at the array controller. Thus, the MPX200 may require automated throttling while trying to maximize migration performance by increasing concurrent I/Os. To control automatic throttling and pacing of migration I/O, use the Enable I/O Pacing option. • Enable I/O Pacing: This feature is applied only to a source array. The MPX200 intelligently manages concurrent migration I/Os to maximize overall migration throughput.
• I/O size: You can configure each data migration job to migrate data using a specified I/O size. Different types of arrays and LUNs may provide optimum performance based on the I/O size. The default size is 64 K. • Thin-provisioned LUN: MPX200 supports conversion of a regularly provisioned LUN to a thin-provisioned LUN. If a destination LUN supports thin provisioning, you can opt to configure this migration job as thin provisioned.
data migration during off-peak hours. For example, the online data migration initial copy operation is performed during off-peak hours.
Serial Schedule
The Serial Schedule option is designed to provide maximum flexibility for data migration. Even though DMS supports 512 (256 per blade) simultaneous migration jobs, typical array performance is maximized by having only four to eight LUNs under active migration.
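The batching idea behind serial scheduling can be sketched in a few lines. This is purely illustrative (the MPX200 firmware is not written in Python); the batch size follows the guide's four-to-eight LUN guideline:

```python
# Illustrative sketch of serial scheduling: although up to 512 jobs
# (256 per blade) can be defined, only a small batch is kept under
# active migration at a time, per the four-to-eight LUN guideline.
def serial_batches(jobs, active_limit=4):
    """Yield successive batches of at most active_limit jobs."""
    for i in range(0, len(jobs), active_limit):
        yield jobs[i:i + active_limit]
```

Each batch would run to completion before the next batch of jobs is started.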
Table 2 Possible data migration job states (continued) Job State Description Paused A running job has been paused by the user. You can resume or stop a paused job. A paused job that is resumed continues running from the point where it was paused. Stopped A running, scheduled, failed, or pending job has been halted. You can restart or remove a job in the stopped state. A stopped job that is restarted begins at the job start.
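The state rules in Table 2 can be summarized as an action table. The following fragment is only an illustration of those rules, not MPX200 code; the state and action names are taken from the table:

```python
# Job-control actions allowed in each state, per Table 2.
ALLOWED_ACTIONS = {
    "Running": {"pause", "stop"},
    "Paused": {"resume", "stop"},      # a resumed job continues from the pause point
    "Stopped": {"restart", "remove"},  # a restarted job begins again at the job start
}

def can_apply(state, action):
    """Return True if the job-control action is valid for a job in this state."""
    return action in ALLOWED_ACTIONS.get(state, set())
```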
Job failover/failback rules:
• Both MPX blades must have connectivity to both the source and destination arrays.
• Both MPX blades must have the same Group name available.
• Failover occurs when the owner blade remains in the down state until the Autofailover timer expires.
• Failover also occurs if a resource (source or destination) LUN remains unavailable on the owner blade until the Autofailover timer expires.
Table 4 Example: Four WWPNs per VPG

VPG        Virtual Port Number   WWPN
VPGroup1   Blade1-FC1-VP1        21:00:00:c0:dd:13:2c:60
           Blade1-FC2-VP1        21:00:00:c0:dd:13:2c:61
           Blade2-FC1-VP1        21:00:00:c0:dd:13:2c:68
           Blade2-FC2-VP1        21:00:00:c0:dd:13:2c:69
VPGroup2   Blade1-FC1-VP2        21:01:00:c0:dd:13:2c:60
           Blade1-FC2-VP2        21:01:00:c0:dd:13:2c:61
           Blade2-FC1-VP2        21:01:00:c0:dd:13:2c:68
           Blade2-FC2-VP2        21:01:00:c0:dd:13:2c:69
VPGroup3   Blade1-FC1-VP3        21:02:00:c0:dd:13:2c:60
           Blade1-FC2-VP3        21:02:00:c0:dd:13:2c
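The WWPNs in Table 4 follow a visible pattern: the WWPN for virtual port N is the base (VP1) WWPN with its second byte set to N-1. The helper below illustrates that pattern as shown in the table; it is not a guarantee about how the firmware assigns addresses:

```python
def vp_wwpn(base_wwpn, vp_index):
    """Derive the virtual-port WWPN by setting the second byte of the
    base (VP1) WWPN to vp_index - 1, matching the pattern in Table 4."""
    octets = base_wwpn.split(":")
    octets[1] = f"{vp_index - 1:02x}"
    return ":".join(octets)
```

For example, `vp_wwpn("21:00:00:c0:dd:13:2c:60", 3)` yields the VPGroup3 WWPN `21:02:00:c0:dd:13:2c:60` from the table.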
Figure 11 Presented targets: virtual presentation Figure 11 (page 26) shows: • LUNs from a single source storage array allocated to two servers. Use the Target Map Wizard to configure two separate VPGs to map LUNs from the storage array to Server1 and Server2. • Four target ports (WWPNs) on the source array are zoned in with two VPGs (VPG1 and VPG2) on the MPX200. • LUNs associated with VPG1 are for Server1, and LUNs associated with VPG2 are for Server2.
Fabric   Zone                  VPG1 WWPN                  Source Array Port WWPN
B        Blade1-FC2-VP1_Zone   21:00:00:c0:dd:13:2c:61    50:05:08:b4:00:b4:78:cd
B        Blade2-FC2-VP1_Zone   21:00:00:c0:dd:13:2c:69    50:05:08:b4:00:b4:78:c9

Using the MPX200 Target Map feature, new Presented Target WWPNs are created for each source array port.
Figure 12 (page 27) shows:
• Four target ports (WWPNs) on the source array are zoned in with two VPGs (VPG1 and VPG2) on the MPX200.
• LUNs associated with VPG1 are for Server1, and LUNs associated with VPG2 are for Server2.
• Four global presented target ports (GPT1, GPT2, GPT3, and GPT4) depict the four source array target ports discovered on either VPG1 or VPG2.
• A single Global Presented Target WWPN may now present LUNs from any VPG using the lunremap command.
Migration to a thin-provisioned LUN
The MPX200 provides the option to create a data migration job to a thin-provisioned destination LUN. The MPX200 detects thin-provisioned storage based on SCSI Read Capacity commands. Some storage arrays, even though they support thin provisioning, may not indicate the support for thin-provisioned storage in the SCSI Read Capacity response.
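The space-saving effect of migrating to a thin-provisioned destination comes from not writing blocks that contain no data. The sketch below illustrates only that concept; the MPX200's actual copy engine and its zero-detection details are not documented here, and the read/write callbacks are hypothetical stand-ins for LUN I/O:

```python
BLOCK_SIZE = 64 * 1024  # the guide's default migration I/O size (64 K)

def copy_thin(read_block, write_block, num_blocks):
    """Copy num_blocks from source to destination, skipping all-zero
    blocks so a thin-provisioned destination does not allocate them.
    Returns the number of blocks actually written."""
    zero = bytes(BLOCK_SIZE)
    written = 0
    for lba in range(num_blocks):
        data = read_block(lba)
        if data != zero:
            write_block(lba, data)
            written += 1
    return written
```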
Table 5 Data migration size Number of Remote Migration Jobs per MPX200 Minimum Required DML Capacity 64 100 GB 128 164 GB 256 292 GB 512 548 GB For more information on working with DMLs, refer to “Creating and removing a DML” (page 69) and “Command line interface” (page 76). Remote peers A remote peer identifies the remote router used at a remote site. The remote router establishes native IP connectivity to perform remote data migration operations.
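The DML sizing in Table 5 above can be applied programmatically by rounding a job count up to the next listed tier. This helper simply encodes the table; it is an illustration, not an official sizing tool:

```python
import bisect

# Minimum required DML capacity per number of remote migration jobs (Table 5).
DML_TIERS = [(64, 100), (128, 164), (256, 292), (512, 548)]  # (jobs, GB)

def min_dml_capacity_gb(num_jobs):
    """Return the minimum DML capacity in GB for up to num_jobs remote
    migration jobs, rounding up to the next tier listed in Table 5."""
    jobs = [j for j, _ in DML_TIERS]
    i = bisect.bisect_left(jobs, num_jobs)
    if i == len(jobs):
        raise ValueError("Table 5 lists no tier above 512 remote jobs")
    return DML_TIERS[i][1]
```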
5. Configure IP addresses for the router’s iSCSI ports by entering an IP address:
• In mpx Manager, modify the iSCSI Port Information page.
• In the CLI, issue the set iscsi command (see “set iscsi” (page 110)).
To configure the local router and chassis:
1. Take these preliminary steps:
a. Ensure that the local router and chassis have access to the source array.
b. Ensure that the LUNs are visible.
c. Create a data management LUN; see “Creating and removing a DML” (page 69).
d.
Table 6 Native IP remote firewall ports

Description                Direction       Port   Protocol
FTP                        Bi-directional  20     TCP/UDP
SSH                        Unidirectional  22     TCP
MPX Manager (PortMapper)   Bi-directional  111    TCP/UDP
SNMP                       Unidirectional  162    UDP
RPCserver                  Bi-directional  617    TCP/UDP
RPCserver                  Bi-directional  715    TCP/UDP
RPCserver                  Bi-directional  717    TCP/UDP
RPCserver                  Bi-directional  729    TCP/UDP
RPCserver                  Bi-directional  731    TCP/UDP
RPCserver                  Bi-directional  1014   TCP/UDP
iSCSI                      Bi-directional  3260   TCP
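When auditing a firewall ahead of a native IP remote migration, Table 6 can be treated as data. The dict and helper below are a scripted convenience using the table's port numbers, not HP-supplied tooling:

```python
# Firewall openings required for native IP remote migration (Table 6).
NATIVE_IP_FIREWALL_PORTS = {
    20: "FTP", 22: "SSH", 111: "MPX Manager (PortMapper)", 162: "SNMP",
    617: "RPCserver", 715: "RPCserver", 717: "RPCserver", 729: "RPCserver",
    731: "RPCserver", 1014: "RPCserver", 3260: "iSCSI",
}

def missing_ports(open_ports):
    """Return the Table 6 port numbers absent from an open-port list."""
    return set(NATIVE_IP_FIREWALL_PORTS) - set(open_ports)
```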
• For iSCSI migrations, set the migration I/O size to 64 K.
• When using only new destination LUNs, select the recommended (and better performing) TP and No Validation option under TP Settings when creating the migration job.
• If migrating to thin-provisioned storage, always allocate the destination LUN first. HP does not recommend the Yes and TP validation option under TP Settings when creating the migration job.
Data scrubbing logs
Data scrubbing jobs generate logs for every user configuration event, as well as for job STARTING, FAILING, or COMPLETION. You can view data scrubbing logs using the same interface as for migration logs; see “Viewing system and data migration job logs” (page 63).
Data scrubbing licenses
Data scrubbing license keys are based on an MPX200 blade serial number. The licenses are shared between two blades in the same MPX200 chassis.
Users The MPX200 supports two types of users: • Administrative user (admin): For managing the MPX200, you must be in an administrative session. The default password for the administrator is config. • Data migration user (miguser): This user session is required to configure migration-related activities. The default password is migration. Host A host is a logical construct consisting of one or more initiator ports for one or more protocols.
4 Data migration licenses This chapter provides information on data migration licenses including license types, license installation, and license use. Types of data migration licenses Data migration license keys are based on an MPX200 blade serial number. The licenses are shared between two blades in the same MPX200 chassis. The two types of data migration licenses are capacity-based and array-based.
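As a rough planning aid, capacity-based (per-terabyte) licensing can be approximated by rounding each migrated LUN up to whole terabytes. The rounding rule here is an assumption for illustration only; the MPX200's exact accounting is shown by the license-usage views described later in this chapter:

```python
import math

def license_tb_consumed(lun_sizes_gb):
    """Estimate capacity-based license usage in TB, assuming (for
    illustration only) that each LUN is rounded up to a whole terabyte."""
    return sum(math.ceil(size_gb / 1024) for size_gb in lun_sizes_gb)
```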
Installing a data migration license key
Follow this procedure to install a data migration license key using HP mpx Manager.
To install a data migration license key:
1. In the HP mpx Manager main window, click the Router tab in the left pane.
2. In the left pane, click Router MPX200, and then select the blade on which to install the license key.
NOTE: The license key is generated from the blade serial number. Install the license on the blade whose serial number was used to generate the key.
1. In the left pane of the HP mpx Manager main window, click the Router tab.
2. On the Wizards menu, click License an Array.
3. In the left pane under Arrays, click the name of the FC or iSCSI array to which to apply the license.
4. In the License Array dialog box, select the array for which you want to apply the license, see Figure 14 (page 38), and then click OK.
Figure 15 Information page showing array is licensed Viewing data migration and scrubbing license usage You can view the usage for the data migration and scrubbing licenses from either HP mpx Manager or the CLI. In addition, you can create a report containing the license usage information. Follow these procedures to view the usage of data migration and scrubbing licenses in the GUI; to view the licenses in the CLI, see “show migration_usage” (page 129).
Figure 16 License info for the chassis To view data migration license usage for the blade: 1. In the left pane of the HP mpx Manager main window, click the Services tab. 2. In the left pane, under Router MPXxxx, select a blade node. License usage appears on the License Info page, as shown in Figure 17 (page 40).
5 Performing data migration

This chapter provides a number of procedures for configuring and managing data migration using DMS.
Typical data migration process
Table 7 (page 41) and Table 8 (page 42) show the MPX200 data migration process flow by category and activity, and reference the appropriate section for each.
Table 7 Online data migration process flow
Category Pre-migration Activity For more information, see… 1. Plan for data migration. Data Migration Service for MPX200 Planning Guide 2.
Table 7 Online data migration process flow (continued) Category Activity For more information, see… 17. Remove arrays from persistence. “Removing an offline array” (page 69) 18. Check license usage. “Viewing data migration and scrubbing license usage” (page 39) Table 8 Offline data migration process flow Category Activity For more information, see… Pre-migration 1. Plan for data migration. Data Migration Service for MPX200 Planning Guide 2. At the start of the project, clear the migration logs.
If NPIV is not supported or not enabled, and the FC switch port cannot be configured to support loop mode, configure the MPX200 ports in point-to-point only mode. In a point-to-point only configuration of MPX200 FC ports, you can perform only offline migration; NPIV and enabling NPIV support are not options in this mode.
Do not modify the WWULN of a LUN that is to be presented to the MPX200. To create a WWULN specific to that array, use regular LUN creation procedures.
LUN presentation from FC arrays
This section provides the procedures for presenting LUNs and discovering FC storage arrays for data migration.
To present source and destination LUNs from FC arrays:
1. Zone in source array controller ports with the appropriate MPX200 VPGs (for more information, see “VPG” (page 24)).
NOTE: The MPX200 supports a maximum of four VPGs. To expose more than 256 LUNs (numbered from 0 to 255) from any FC storage array that allows no more than 255 LUNs to be presented per host, you can enable additional VPGs in the MPX200 blades. To present up to 1,024 LUNs (4×256) from the same array to the MPX200, repeat the preceding steps for each VPG. In addition, the current firmware supports 1,024 LUNs per VPG for a total of 4,096 (4×1024) LUNs mapped to the MPX200 if all VPGs are enabled.
2. From the shortcut menu, click Rescan. In the left pane under the FC Arrays node, the newly generated array entity is shown. Alternatively, you can click Refresh two or three times to rescan the targets and generate the array entity for targets that are exposing LUNs to the router.
Creating a data migration job group
Follow these steps to create a data migration job group in HP Storage mpx Manager:
1. In the left pane, click the Services tab to open the Services page.
1. Use either the HP mpx Manager or the CLI to create a presented target: • In HP mpx Manager, use the Target Presentation/LUN Mapping Wizard to map LUNs to initiators. The LUNs are presented to the initiators at the ID that is available on the MPX200. If the LUN needs to be presented to the initiator with a different LUN ID, select the wizard’s LUN Remap option as the Presentation Type.
1. Configure the hosts as follows:
Host 1:
1. Present LUNs A, B, and C to the host as LUN IDs 1, 2, and 3.
2. Present LUNs A, B, and C to the MPX200 VPG1 as LUN IDs 1, 2, and 3.
Host 2:
1. Present LUNs D, E, and F to the host as LUN IDs 5, 6, and 7.
2. Present LUNs D, E, and F to the MPX200 VPG1 as LUN IDs 5, 6, and 7.
Host 3:
1. Present LUNs G, H, and I to the host as LUN IDs 1, 2, and 3.
2. Present LUNs G, H, and I to the MPX200 VPG2 as LUN IDs 1, 2, and 3.
2. Activate Zone 9, and then validate the new path.
3. Remove SB1 from Zone 1, and then validate I/O failover to another path.
4. Activate Zone 11, and then validate the new path.
5. Remove SA2 from Zone 2, and then validate I/O failover to another path.
6. Activate Zone 10, and then validate the new path.
7. Remove SB2 from Zone 2, and then validate I/O failover to another path.
8. Activate Zone 12, and then validate the new path.
4. On the LUN Selection window, select one or more LUNs for the selected virtual port group node, and then click Next, or use the LUN remapping feature to remap a LUN to a different ID. On the Suggestion window, the router automatically detects the portals through which the array target ports are accessible. HP recommends that you present a given source array target port and VPG combination only once across both blades.
5. On the LUN Selection window, expand the array and VP groups, select one or more LUNs to present, and then click Next.
6. On the Assign LUN ID window, the Present LUN ID column shows a default LUN ID corresponding to the Discovered LUN ID presented to the router from the storage array. You can present a new LUN ID to the host by editing the Present LUN ID column. Click Next to continue.
7.
7. In the Add Remote Router Status window, review the remote router configuration, and then click Finish. Figure 18 Add Remote Router Status window 8. To view information about the newly added remote peer router, select the remote peer node in the router tree. Figure 19 (page 52) shows an example. Figure 19 View remote peer router information To remove a remote peer: 1. On the Wizards menu, click Remove Remote Peer Wizard.
Figure 20 Add imported array 4. In the Import Remote Array Security Check dialog box, type the miguser password, and then click OK. Imported arrays are identified under the Array node in the Router Tree by the text [Imported]. Figure 21 View imported array Setting array properties HP mpx Manager enables you to configure the target type and bandwidth, and to enable load balancing, for each storage array used in data migration. To set array properties: 1.
Figure 22 Information page: setting array properties
3. (Optional) In the Symbolic Name box, enter a user-friendly array name.
4. From the Target Type list, select Source.
NOTE: Array bandwidth is only displayed and editable if the array target type is Source.
5. From the Array Bandwidth list, click one of the following values:
• Slow (50 MB/s)
• Medium (200 MB/s)
• Fast (1600 MB/s)
• User Defined
• Max Available
6.
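The Array Bandwidth settings translate directly into a lower bound on migration time. The estimator below uses the dialog's three fixed values; it is an idealized calculation that ignores host I/O contention and array limits:

```python
# Fixed Array Bandwidth settings from the dialog, in MB/s.
BANDWIDTH_MBPS = {"Slow": 50, "Medium": 200, "Fast": 1600}

def est_migration_seconds(lun_size_gb, setting):
    """Idealized migration time: LUN size divided by the bandwidth cap."""
    return lun_size_gb * 1024 / BANDWIDTH_MBPS[setting]
```

For example, a 100 GB LUN at Medium (200 MB/s) takes at least 512 seconds, roughly eight and a half minutes.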
Creating a data migration job group
Follow these steps to create a data migration job group in HP mpx Manager.
To create a data migration job group:
1. In the left pane, click the Services tab to open the Services page. By default, the MPX200 shows Group 0 created under the Data Migration Jobs item in the left pane.
2. In the left pane, right-click Data Migration Jobs, and then on the shortcut menu, click Add Group. (Or on the Wizards menu, click Add Group.)
7. Complete the Migration Wizard Options dialog box (Figure 23, page 56) as follows:
a. Under Schedule Mode, click either Schedule in batch mode (to schedule multiple jobs) or Schedule individual job (to schedule a single job).
b. Under Job Creation Method, click either Create job by dragging LUNs into the Data Migration Jobs pane or Create job by dragging LUNs from the Source LUNs pane to the Destination LUNs pane.
c. Click OK.
Figure 23 Migration wizard options
8.
Figure 24 Create data migration job dialog box
3. Create the data migration job by dragging and dropping the LUNs. The method depends on the job creation method selected in Step 7.
• If the job creation method is Create job by dragging LUNs into the Data Migration Jobs pane, drag and drop the source LUN and the destination LUN from the left and middle panes onto the Data Migration Job (New) node in the right pane.
4. In the Data Migration Jobs Options dialog, specify the job attributes as follows: a. Under Migration Type, select one of the following: • Click Offline (Local/Remote) to schedule a data migration job in which the servers affected by the migration job are down. • Click Online (Local) to schedule a data migration job in which disconnecting server access to the LUN is not required.
This option is particularly useful for migration jobs specified as Schedule for later and Serial Schedule Jobs on the Data Migration Jobs Options dialog box (Figure 27, page 61), where the jobs need to be classified under a specific group for better management. To optimize MPX200 performance, HP recommends that you run no more than four simultaneous jobs on any specified source or destination array.
To schedule data migration jobs in batch mode:
1.
6. In the Data Migration Jobs Options dialog box (Figure 25, page 57), specify the job attributes as follows:
a. Under Migration Type, select one of the following:
• Click Offline (Local/Remote) to schedule a data migration job in which the servers affected by the migration job are down.
• Click Online (Local) to schedule a data migration job in which disconnecting server access to the LUN is not required.
1. Open the Serial Data Migration Jobs Options dialog box (see Figure 27, page 61) using one of these options:
• On the Wizards menu, click Start Serial Schedule Job(s).
• Right-click a serial scheduled job, and then click Start Serial Scheduled Jobs. This option immediately starts the selected job, unless there are other jobs configured with a lower priority that must complete migration first.
Figure 27 Serial data migration jobs options dialog box
2. 3. 4. 5.
4. To see a summarized view of all completed jobs, click the Completed Data Migration Jobs tab in the right pane.
5. To view a list of all jobs, click Data Migration Jobs in the left pane.
6. To view a list of all jobs belonging to a specific migration group, click the migration group name in the left pane.
7. To view a list of all jobs that are currently being synchronized, click the Synchronizing tab in the right pane.
NOTE: For online data migration jobs, log details include the Number of DRL (dirty region log) Blocks; for offline data migration, DRL count is not applicable. 4. (Optional) On the Data Migration Job page, perform any of the following job control actions as needed: • Click Pause to interrupt a running migration job. • Click Stop to halt a running migration job. • Click Remove to delete a migration job. • Click Resume to continue a previously paused migration job.
Figure 29 Router Log (System Log) dialog box
3. Use the buttons at the bottom of the Router Log (System Log) dialog box to perform the following actions:
• Click OK to close the log window after you have finished viewing it.
• Click Clear to delete the contents of the log.
• Click Export to download the logs in CSV file format, which can be viewed in any spreadsheet application, such as Microsoft Excel.
• Click Print to send the contents of the log to a printer.
2. In the Log Type dialog box, click Data Migration Logs. The Router Log (Migration Log) dialog box opens and lists the following columns of information, as shown in Figure 31 (page 65):
• SeqID is the sequential ID of log entries.
• Time Stamp is the log entry time, based on router system time.
• Group Name is the user-defined job group or Group 0.
• Job Name is the user-defined name for the job.
• Job ID is a numeric ID.
• Job Type is the migration job type.
Using the Verifying Migration Jobs wizard
The data migration verification wizard helps you configure jobs that verify that data transfer occurred without loss or corruption. To verify data integrity, the process executes a bit-by-bit comparison of data between the source LUN and its corresponding destination LUN. You can configure a verification job on a pair of source and destination LUNs after a migration job has been completed and acknowledged.
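Conceptually, a verification job is a block-by-block comparison of the two LUNs. The sketch below shows the idea with hypothetical read callbacks standing in for LUN I/O; the MPX200 performs this comparison internally:

```python
def verify_luns(read_src, read_dst, num_blocks):
    """Compare source and destination block by block. Returns (True, None)
    if every block matches, otherwise (False, first_mismatched_lba)."""
    for lba in range(num_blocks):
        if read_src(lba) != read_dst(lba):
            return (False, lba)
    return (True, None)
```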
Figure 32 Verifying jobs options dialog box
5. The contents of the Verifying Jobs Options dialog box are identical to those of the Data Migration Jobs Options dialog box. For an explanation of the selections on this dialog box, see “Using the data migration wizard” (page 55).
6. To save the verifying jobs options, click Apply. Or, to discard changes to this job verification, click Cancel.
Acknowledging a data migration job
The last action to complete a migration is acknowledging the job.
Acknowledging online, local migration jobs
When initial copy jobs for online, local migration are completed, HP mpx Manager transitions the migration jobs to the Completed Data Migration Job page. While online local migration jobs are in the Copy Complete state, the MPX200 updates both the source and destination LUNs with any write I/Os from the host. Completed online local migration jobs can be acknowledged only after the server is offline and the source LUN is unpresented to the server.
5. On the Confirm Acknowledgement dialog box, click Yes. Removing an offline array You should remove arrays used in data migration because they are kept in persistent storage. If you used an array-based license for the data migration job and you plan to use this array again for migration, you may keep the license when removing the array. The MPX200 allows you to remove only offline arrays.
2. Complete the Create Data Management LUN Wizard as follows:
a. Select a storage array for this DML.
b. Expand a VPGROUP_n node, and then select one or more LUNs by selecting the check box to the left of each. Figure 34 (page 70) shows an example.
Figure 34 Create data management LUN wizard
c. To save your changes and close the wizard, click OK.
The wizard verifies that all LUNs selected for the DML meet the following criteria:
• The LUN is not already used as a DML.
Figure 35 Viewing data management LUN information
After using the DML for data migration, you should release (remove) it. You cannot remove the master DML (the first DML created) until all other DMLs are removed. That is, to remove all DMLs, you must remove the master DML last.
To remove a DML in HP mpx Manager:
1. On the Wizards menu, click Remove Data Management LUN. Or, in the router tree pane, right-click a blade, and then click Remove Data Management LUN to remove a DML from the selected blade.
2.
Figure 36 Create LUN scrubbing job dialog box As a security measure, HP mpx Manager does not allow you to select mapped LUNs or LUNs that are part of other jobs. In addition, destination arrays are filtered out and do not appear in the right pane of the LUN selection window. All scheduling options and job state changes (start, stop, pause, and so on) apply in the same way to both scrubbing and migration jobs. For scrubbing jobs, you can also specify one of several scrubbing algorithms.
To view the scrubbing job details, select the appropriate job in the appropriate group, as shown in Figure 38 (page 73). Figure 38 Scrubbing job page Generating a data migration report HP mpx Manager provides reporting of data migration jobs that have either been acknowledged or removed from the system. Each migration job entry in the report lists the job details, including source and destination LUN information. You can generate migration reports in three formats: TXT, JSON, and XML.
Src Lun Info
------------
Src Symbolic Name = DGC RAID-1
Src Lun Id = 6
Src Vp Index = 1
Src Lun Start Lba = 0
Src Lun End Lba = 2097151
Src Lun Size = 2097151
Src Lun Vendor Id = DGC
Src Lun Product Id = RAID 10
Src Lun Revision = 0223
Src Lun Serial No = SL7E1083500091
NAA WWULN = 60:06:01:60:f9:31:22:00:62:98:eb:c9:6e:1a:e0:11
Vendor WWULN = 00:02:00:00:00:00:00:00:00:02:00:00:00:00:00:00

Dst Lun Info
------------
Dst Symbolic Name = NETAPP LUN-0
Dst Lun Id = 6
Dst Vp Index = 1
Dst Lun Start Lba = 0
Dst Lun E
5. To upload the report (currently in JSON format only) to a server, follow these steps: a. In the URL box, enter the address where you want the report to be uploaded. Ensure that this URL runs an HTTP service that can accept uploaded files and also acknowledge their receipt. b. Click Set URL to save the URL. c. Click Upload Report to transfer the report to the specified location. 6. Or, to save the report to a local router, follow these steps: a.
6 Command line interface
This chapter provides information on using the CLI for data migration solutions. It defines the guest MPX200 account and the user session types, admin and miguser. For each command, it provides a description, the required session type, and an example. To view information about all CLI commands, see the MPX200 Command Line Interface (CLI) User Guide.
User accounts
User accounts include the guest account.
Command syntax
The MPX200 CLI command syntax uses the following format:
command keyword keyword [value] keyword [value1] [value2]
The command is followed by one or more options. Consider the following rules and conventions:
• Commands and options are case insensitive.
• Required option values appear in standard font within brackets; for example, [value].
• Non-required option values appear in italics within brackets; for example, [value].
Authority miguser
Syntax array import rm
Keywords
import Imports a remote array to the local router as a destination.
rm Removes from persistence the details associated with an offline array, and may remove the license information associated with the array.
this license in future for any array (including this array). Do you want to remove the array license (Yes/No)? [No] WARNING: Removing physical targets associated with this array will remove all LUN presentations (if any) to hosts from these targets. Do you want to remove the physical targets for this array (Yes/No)? [Yes] All attribute values for that have been changed will now be saved. nl nl array_licensed_port Use with keyword rm to remove licensed offline array ports.
Keywords acknowledge Acknowledges a successfully completed LUN compare job. After you run this command, the LUN compare job is permanently deleted from the database. add Schedules a standalone LUN compare job. You can name the job and associate it with a job group. Scheduling options include: immediately, at a pre-defined later time, or by serial scheduling.
Please select a VPGroup for Destination Lun ('q' to quit): 1 nl LUN Vendor LUN Size( GB) Attributes --- ------ -------------- ---------1 HP 10.00 SRC LUN 2 HP 10.00 3 HP 20.00 4 HP 20.00 5 HP 10.00 6 HP 5.00 7 HP 5.
nl All attribute values for that have been changed will now be saved. The following example shows the compare_luns stop command: nl MPX200 <1> (miguser) #> compare_luns stop nl nl Job Type ID Status Job Description -----------------------Verify Running ------------------------------------HP HSV200-0:0001 to DGC RAID-1:0000 nl --- ---0 Offline nl Please select a Job Id from the list above ('q' to quit): 0 nl All attribute values for that have been changed will now be saved.
MPX200 <1> (miguser) #> dml delete Index SymbolicName ----- -----------0 Data Mgmt Lun 0::1 1 Data Mgmt Lun 0::2 Please select a Data Mgmt Lun from the List above ('q' to quit): 1 Successfully initiated Data Management Lun deletion nl get_target_diagnostics Obtains data (such as READ CAPACITY or INQUIRY) either from a single target port (if an array is not formed) or from all target ports of an array and makes the data available for debugging purposes.
Examples The following example shows the get_target_diagnostics command by executing one specific command from the default set.
0 Online DGC RAID-0, 50:06:01:62:41:e0:49:2e 1 Online HP HSV210-1, 50:00:1f:e1:50:0a:37:19 Please select a Array/Target from the list above ('q' to quit): 1 Index (VpGroup Name) ----- -------------1 VPGROUP_1 Please select a VpGroup from the list above ('q' to quit): 1 Do you want to execute command on 1)All LUNs 2)Single LUN: ('q' to quit): 2 Index (LUN/VpGroup) ----- ------------0 0/VPGROUP_1 1 1/VPGROUP_1 2 2/VPGROUP_1 3 3/VPGROUP_1 4 4/VPGROUP_1 5 5/VPGROUP_1 Please select a LUN from above ('q' to quit)
LUN ID :- 0001000000000000 STATUS :- 0 SCSI STATUS :- 0 CDB :- 12000000ff0000000000000000000000 DATA TRANSFER LENGTH :- 32768 RESIDUE TRANSFER LENGTH :- 32516 ACTUAL DATA LENGTH :- 252 DATA :00 00 05 12 f7 30 00 32 48 50 20 20 20 20 48 53 56 32 31 30 20 20 20 20 20 20 20 20 35 30 30 30 41 32 39 39 41 53 5a 30 32 34 41 37 41 54 4c 38 34 42 00 00 00 62 0d 80 08 c0 03 00 03 24 01 60 01 c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 90 3e 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
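The DATA field above is raw SCSI standard INQUIRY data, which is where strings such as the HP HSV210 identifiers come from: per the SCSI specification, bytes 8–15 hold the ASCII vendor ID, bytes 16–31 the product ID, and bytes 32–35 the product revision. A minimal decoder sketch follows; the sample buffer is constructed for illustration rather than copied from the captured output:

```python
def decode_inquiry(data: bytes):
    """Decode identity strings from standard SCSI INQUIRY data.

    Offsets follow the SCSI spec: vendor ID at bytes 8-15,
    product ID at bytes 16-31, product revision at bytes 32-35.
    """
    return {
        "vendor": data[8:16].decode("ascii").strip(),
        "product": data[16:32].decode("ascii").strip(),
        "revision": data[32:36].decode("ascii").strip(),
    }

# Constructed sample: 8 header bytes, then space-padded identity fields.
sample = bytes(8) + b"HP".ljust(8) + b"HSV210".ljust(16) + b"0100"
```

Calling `decode_inquiry(sample)` returns `{"vendor": "HP", "product": "HSV210", "revision": "0100"}`, matching the kind of target information the diagnostics output reports.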
6=Windows2008, 7=HP-UX, 8=AIX, 9=Windows2012, 10=Other) [Windows ] All attribute values that have been changed will now be saved. The following example shows the initiator mod command.
Keywords discover Discovers the iSCSI target through the router’s iSCSI port. login Logs in the user to a specific discovered iSCSI target and lists all other targets discovered from the iscsi discover command. Examples The following example shows the iscsi discover command: MPX200 <1> (admin) #> iscsi discover A list of attributes with formatting and current values will follow. Enter a new value or simply press the ENTER key to accept the current value.
MPX200 <1> (admin) #> lunigmap add WARNING ------This command should be used to present iSCSI targets that present one LUN per target.
60:05:08:b4:00:07:59:a4:00:02:e0:00:07:15:00:00 5/VPGROUP_2 PB5A8C3AATK8BW 60:05:08:b4:00:07:59:a4:00:02:e0:00:07:18:00:00 Please select a LUN presented to the initiator ('q' to quit): 0 Index MappedId Type Initiator ----- -------- ---- --------0 0 FC 50:06:0b:00:00:c1:73:75 1 0 FC 50:01:10:a0:00:17:60:69 2 0 FC 20:00:00:05:1e:b4:45:fb Please select an Initiator to remove ('a' to remove all, 'q' to quit): 2 All attribute values that have been changed will now be saved.
1 20:00:00:14:c3:3d:cf:88,21:00:00:14:c3:3d:cf:88 2 20:00:00:14:c3:3d:d3:25,21:00:00:14:c3:3d:d3:25 3 50:06:01:60:cb:a0:35:f6,50:06:01:68:4b:a0:35:f6 4 50:06:01:60:cb:a0:35:f6,50:06:01:60:4b:a0:35:f6 Please select a Target from the list above ('q' to quit): 0 Index (LUN/VpGroup) ----- ------------0 0/VPGROUP_1 1 1/VPGROUP_1 2 2/VPGROUP_1 Please select a LUN presented to the initiator ('q' to quit): 1 Index Type Initiator ----- ---- ----------------0 FC 20:00:00:1b:32:0a:61:80 Please select an Initiator to r
Multiple VpGroups are currently 'ENABLED'.
Syntax migration Keywords acknowledge Acknowledges a completed data migration job. After running the command with this option, the migration job is permanently deleted from the database. add Schedules a data migration job. You can enter a name for the data migration job and associate it with a job group. Scheduling options include: immediately, at a pre-defined later time, or by serial scheduling.
HP 10.00 5 HP 10.00 6 HP 10.00 7 HP 10.00 8 HP 10.00 9 HP 100.00 10 HP 3.00 11 HP 4.
60:06:01:60:a0:40:21:00:69:48:f4:64:99:ed:e0:11 APM00070900914 60:06:01:60:a0:40:21:00:b4:59:85:ba:b4:ed:e0:11 7 DGC 2.000 APM00070900914 60:06:01:60:a0:40:21:00:b5:59:85:ba:b4:ed:e0:11 8 DGC 2.000 APM00070900914 60:06:01:60:a0:40:21:00:be:b4:e5:ff:4e:ee:e0:11 9 DGC 2.
0 1 50:06:01:62:41:e0:49:2e, 61-00-00 50:00:1f:e1:50:0a:37:19, 82-0c-00 DGC RAID-0 HP HSV210-1 Src+Dest Src+Dest nl Please select a Destination Target from the list above ('q' to quit): 1 nl Index (VpGroup Name) ----- -------------1 VPGROUP_1 nl Please select a VPGroup for Destination LUN ('q' to quit): 1 nl LUN --1 Vendor -----HP LUN Size( GB) -------------10.000 2 HP 10.000 3 HP 10.000 4 HP 10.000 5 HP 100.
The MPX200 warns you if it detects any valid metadata on the destination LUN. However, you can continue and use the LUN for migration if you are aware of the consequences and want to continue with the job scheduling. 10. Specify whether the destination LUN is a thin-provisioned LUN, and if “yes”, then specify whether to validate the data on that LUN. 11. At the prompts, specify the I/O size, job name, migration group, and scheduling type. a.
MPX200 <1> (miguser) #> migration rm_peer Job Type Status Job Description ID --- --------------------------- ------------------------------------0 Offline.. Completed (100%) HP HSV200-0:VPG1:004 to HP HSV210-... Please select a Job Id from the list above ('q' to quit): 0 Do you wish to continue with the operation(yes/no)? [No] yes All attribute values for that have been changed will now be saved.
1. Log in to the MPX200 as guest and enter the password.
2. Open a miguser session using the following command: miguser start -p migration (The default password for miguser is migration.)
3. Create a migration group using the following command: migration_group add
4. At the prompt, enter a name for the new group. The name must be a minimum of 4 and a maximum of 64 alphanumeric characters. You can create a maximum of eight job groups in addition to the default job group.
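The naming rule in step 4 (4 to 64 alphanumeric characters) is easy to check up front when scripting job setup, before the CLI rejects a bad name. A sketch, where the function name is illustrative rather than an MPX200 API:

```python
import re

def is_valid_group_name(name: str) -> bool:
    """Check a migration group name against the documented rule:
    a minimum of 4 and a maximum of 64 alphanumeric characters."""
    return re.fullmatch(r"[A-Za-z0-9]{4,64}", name) is not None
```

For example, `is_valid_group_name("Group01")` is true, while a three-character name, a 65-character name, or a name containing spaces is rejected.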
migration_report Saves and uploads data migration reports in several file formats. To see example output from a generated migration report, see “Generating a data migration report” (page 73). Authority miguser Syntax migration_report Keywords save Saves data migration report files. upload Uploads the data migration report files to a server. Notes To generate a data migration report: 1. On the MPX200, issue the migration_report save command.
Priorities have been successfully re-adjusted.
remotepeer
Identifies the remote router used at a remote site. The remote router establishes Native IP connectivity to perform remote data migration operations. Use this command to add and remove remote peers.
Authority miguser
Syntax remotepeer
Keywords
add Adds a remote router at a remote site.
rm Replaces the remote router's management and iSCSI port information with its own information.
rescan devices
Rescans the devices for new LUNs.
Authority admin
Syntax rescan devices
Examples
The following shows an example of the rescan devices command.
MPX200 <1> (admin) #> reset mappings Are you sure you want to reset the mappings in the system (y/n): y Please reboot the System for the settings to take affect. save capture Captures the system log that you can use to detect and troubleshoot problems when the MPX200 is exhibiting erroneous behavior. This command generates a System_Capture.tar.gz file that provides a detailed analysis.
The following example shows the scrub_lun add command:
MPX200 <1> (miguser) #> scrub_lun add
A list of attributes with formatting and current values will follow. Enter a new value or simply press the ENTER key to accept the current value. If you wish to terminate this process before reaching the end of the list press 'q' or 'Q' and the ENTER key to do so.
MPX200 <1> (miguser) #> scrub_lun rm
Job ID Type      Status                Job Description
------ --------- --------------------- ---------------------
0      Scrubbi.. Running (Pass: 1 23%) NETAPP LUN-3:VPG1:001
Please select a Job Id from the list above ('q' to quit): 0
Do you wish to continue with the operation(yes/no)? [No] yes
Job marked for removal. It will be removed after pending operations are complete.
system Sets system properties. For more information, see “set system” (page 111).
vpgroups Enables or disables the VP groups, and assigns a name to each VP group. For more information, see “set vpgroups” (page 112).
set array
Sets the target type of an array to make it behave as either a source, a destination, or both.
Disable compression if: ◦ You know that the underlying data is not compressible. ◦ The available WAN bandwidth is greater than 600 Mbps, but you observe performance degradation.
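The two conditions above amount to a simple decision rule, which can be expressed as a small helper when planning remote migrations. This is only an illustrative restatement of the guideline, not an MPX200 setting or API:

```python
def should_disable_compression(data_compressible: bool,
                               wan_bandwidth_mbps: float,
                               degradation_observed: bool) -> bool:
    """Restate the guideline: disable compression when the underlying data
    is known to be incompressible, or when WAN bandwidth exceeds 600 Mbps
    and performance degradation is observed."""
    return (not data_compressible) or (
        wan_bandwidth_mbps > 600 and degradation_observed
    )
```

For example, with compressible data on a 1000 Mbps WAN, compression stays enabled unless degradation is observed.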
NOTE: The MPX200 uses the Maximum Concurrent I/Os parameter to generate migration I/Os for the jobs configured on the array. Because the array may also be used by the hosts, migration I/Os from the MPX200 combined with host I/Os may exceed the maximum concurrent I/Os supported by the array. Arrays are equipped to handle this scenario and start returning the SCSI status 0x28 (TASK SET FULL) or 0x08 (BUSY) for the incoming I/Os that exceed the array's maximum concurrent I/O limit.
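An initiator typically reacts to TASK SET FULL or BUSY by retrying the I/O after a delay, giving the array time to drain its task set. The following sketch illustrates that generic pattern only; the `issue_io` callable and the retry policy are assumptions for illustration, not documented MPX200 behavior:

```python
import time

TASK_SET_FULL = 0x28  # SCSI status: array's task set is full
BUSY = 0x08           # SCSI status: array is temporarily busy
GOOD = 0x00

def send_with_backoff(issue_io, max_retries=5, base_delay=0.05):
    """Retry an I/O while the array returns TASK SET FULL or BUSY.

    `issue_io` is a hypothetical callable returning a SCSI status byte.
    The delay doubles after each retriable status.
    """
    delay = base_delay
    status = issue_io()
    for _ in range(max_retries):
        if status not in (TASK_SET_FULL, BUSY):
            return status
        time.sleep(delay)
        delay *= 2
        status = issue_io()
    return status
```

With this pattern, an I/O that first hits BUSY and then TASK SET FULL still completes once the array reports GOOD status.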
MPX200 <1> (miguser) #> set array A list of attributes with formatting and current values will follow. Enter a new value or simply press the ENTER key to accept the current value. If you wish to terminate this process before reaching the end of the list press 'q' or 'Q' and the ENTER key to do so.
If you wish to terminate this process before reaching the end of the list press 'q' or 'Q' and the ENTER key to do so. WARNING: The following command might cause a loss of connections to both ports.
Authority admin Syntax set iscsi [ ] Keywords The number of the iSCSI port to be configured. Examples The following example shows the set iscsi command: MPX200 <1> (admin) #> set iscsi 1 nl A list of attributes with formatting and current values will follow. Enter a new value or simply press the ENTER key to accept the current value. If you wish to terminate this process before reaching the end of the list press 'q' or 'Q' and the ENTER key to do so.
System Log Level (Default,Min=0, Max=2) [0 ]
Time To Target Device Offline (Secs,Min=0, Max=120) [0 ]
All attribute values that have been changed will now be saved.
set vpgroups
Enables or disables the VP groups, and assigns a name to each VP group. Although VpGroup 1 cannot be disabled, you can change its name.
Authority admin
Syntax set vpgroups
Examples
The following shows an example of the set vpgroups command.
State Vendor ID Product ID Target Type Path Domain WWPN Port ID State Path Domain WWPN Port ID State Array Bandwidth Max I/Os I/O Pacing Load Balancing Array License LunInfo Display Symbolic Name State Vendor ID Product ID Target Type Path Domain WWPN Port ID State Path Domain WWPN Port ID State Array Bandwidth Max I/Os I/O Pacing Load Balancing Array License LunInfo Display nl nl Online HP MSA2012fc Destination FC 20:78:00:c0:ff:d5:9a:05 01-04-ef Online FC 21:78:00:c0:ff:d5:9a:05 01-06-ef Online NA 128 E
Vendor ID Product ID Target Type Path Domain WWPN Port ID State Path Domain WWPN Port ID State Array Bandwidth Max I/Os I/O Pacing Load Balancing Array License LunInfo Display Symbolic Name State Vendor ID Product ID Target Type Path Domain WWPN Port ID State Path Domain WWPN Port ID State Array Bandwidth Max I/Os I/O Pacing Load Balancing Array License LunInfo Display Symbolic Name State Vendor ID Product ID Target Type Path Domain WWPN Import Path State Path Domain WWPN Import Path State Array Bandwidth M
Syntax show compare_luns Examples The following shows an example of the show compare_luns command. MPX200 <1> #> show compare_luns Compare State Type ( 1=Running 2=Failed 3=Completed 4=Serial 5=All ) : 5 Index Id Creator Owner Type Status Job Description ----- -- ------- ------ ---- ------------------------ -------------------------------0 0 1 1 Compare Verify Running ( 2%) IBM 2145-0:VPG1:001 to 3PARda..
Lun Type DML State Owner Serial Number Creator Blade Id Array Symbolic Name Lun VPG:ID LUN WWULN LUN State Free/Total Metadata Free/Total Data Extents DRL [Master DML] Active 0834E00021 1 IBM 1814-0 1:6 60:0a:0b:80:00:2a:3f:78:00:00:67:e7:4c:fe:b4:22 Online Extents 8192/8192 49/49 show fc Displays the port status, link status, port name, and node name for each FC port.
Authority guest Syntax show features Examples The following example shows the show features command: MPX200 <1> #> show features License Information ------------------FCIP 1GbE Licensed FCIP 10GbE Not Licensed SmartWrite 1GbE Licensed SmartWrite 10GbE Not Licensed DM Capacity Licensed DM Array Licensed DS Capacity Licensed DS Array Licensed show feature_keys Displays the feature key information.
show initiators Displays detailed information for all initiators. Authority guest Syntax show initiators Examples The following example shows the show initiators command.
0 FC 20:00:00:e0:8b:86:fb:9b,21:00:00:e0:8b:86:fb:9b 1 FC 20:00:00:e0:8b:89:17:03,21:00:00:e0:8b:89:17:03 2 ISCSI iqn.1986-03.com.hp:fcgw.mpx200.dm.
Syntax show logs Examples The following example illustrates the show logs command used to display ten log records: MPX200 <1> #> show logs 10 10/09/2011 11:11:04 BridgeApp 3 QLFC_Login: Port Name 500601604ba035de 10/09/2011 11:15:29 QLFC 3 #0: QLIsrEventHandler: RSCN update (8015) rscnInfo:0x2080000 VpIndex:0x0 10/09/2011 11:15:29 QLFC 3 #0: QLIsrEventHandler: RSCN update (8015) rscnInfo:0x2080000 VpIndex:0x1 10/09/2011 11:15:29 QLFC 3 #0: QLIsrEventHandler: RSCN update (8015) rscnInfo:0x2080000 VpIndex:0x
60:0a:0b:80:00:2a:3f:d8:00:00:ad:88:4d:26:90:3a 5/VPGROUP_1 1T70246204 60:0a:0b:80:00:2a:3f:78:00:00:6c:40:4d:26:b5:0c Please select a LUN from the list above ('q' to quit): 0 LUN Information ----------------WWULN 60:0a:0b:80:00:2a:3f:78:00:00:6c:49:4d:26:b5:38 Serial Number 1T70246204 LUN Number 0 VendorId IBM ProductId 1814 FAStT ProdRevLevel 0916 Portal 0 Lun Size 1024 MB Lun State Online LUN Path Information -------------------Controller Id WWPN,PortId / IQN,IP Path Status ------------- ----------------
show luns Displays all the LUNs and their detailed information.
Process Blocks 8192/8192 Request Blocks 8192/8192 Event Blocks 4096/4096 Control Blocks 1024/1024 Client Req Blocks 8192/8192 FCIP Buffer Pool 0/0 FCIP Request Blocks 0/0 FCIP NIC Buffer Pool 0/0 1K Buffer Pool 69623/69632 4K Buffer Pool 4096/4096 Sessions 4096/4096 Connections: GE1 256/256 GE2 256/256 In the following example, 10GbE ports are present and show all the ports connected: MPX200 <1> #> show memory Memory Units Free/Total ----------------------Physical 157MB/1002MB Buffer Pool 7808/8832 Nic Buf
show migration Displays a summarized status of either all migration jobs or those having a specific state. It also lists the configuration details of the selected job.
Priority Not Applicable Migration Status Running I/O Size 64 KB Migration State 10% Complete Migration Performance 204 MBps Migration Curr Performance 204 MBps Job ETC 0 hrs 0 min 44 sec Start Time Fri Nov 2 14:10:33 2012 End Time --Delta Time --Source Array IBM 2145-0 Source Lun VPG:ID 1:1 Source Lun WWULN 60:05:07:68:02:80:80:a7:cc:00:00:00:00:00:13:5e Source Serial Number 0200a02029f3XX00 Source Lun Size 10.
Source Lun End Lba 20971519 Destination Array 3PARdata VV-1 Destination Lun VPG:ID 1:0 Destination Lun WWULN 50:00:2a:c0:00:02:1a:f8 Destination Serial Number 01406904 Destination Lun Size 10.000 GB Destination Lun Start Lba 0 Destination Lun End Lba 20971519 Compared Data Size 20971520 Blocks (1 Block is of 512 bytes) show migration_logs Displays the data migration logs and the operation performed on them.
Seq id: 2 : Job Type: Migration (Online) : miguser :ADDED : MigrOwner 1 : JobId 0( Online) of group Group 0 with priority 0 from Target NETAPP LUN-0 Lun NETAPP LUN hpTQaF01ICU6(0) StartLba 0 to Target NETAPP LUN-0 Lun NETAPP LUN hpTQaF01ICtb(1) StartLba 0 with migration size 2.
0 20:15:00:a0:b8:2a:3f:78, 01-02-00 IBM 1814-1 Source 1 20:78:00:c0:ff:d5:9a:05, 01-04-ef HP MSA2012fc-0 Destination Please select a Target from the list above ('q' to quit): 0 Index (LUN/VpGroup) Serial Number/WWULN ----- ------------- ------------------0 0/VPGROUP_1 1T70246204 60:0a:0b:80:00:2a:3f:78:00:00:6c:49:4d:26:b5:38 1 1/VPGROUP_1 1T70246204 60:0a:0b:80:00:2a:3f:d8:00:00:d1:c0:4d:91:36:23 2 2/VPGROUP_1 1T70246204 60:0a:0b:80:00:2a:3f:d8:00:00:d1:c2:4d:91:36:44 3 3/VPGROUP_1 1T70246204 60:0a:0b:80:0
Syntax show migration_perf
Examples
The following example shows the show migration_perf command:
MPX200 <1> #> show migration_perf 0
Migration State Type ( 1=Running 2=Completed ) : 1
Index Id Creator Owner Type      Status        Job Description
----- -- ------- ----- --------- ------------- --------------------------------
0     2  1       1     Online .. Running ( 26%) HP MSA2324fc-0:VPG1:005 to HP..
Data Scrubbing Array based licenses issued 20 Data Scrubbing Array based licenses used 20 Available Data Scrubbing Array based licenses 0 show perf Displays the performance (in bytes) of the active job. Authority guest Syntax show perf Examples The following examples show the show perf command: MPX200 <1> #> show perf WARNING: Valid data is only displayed for port(s) that are not associated with any configured FCIP routes.
GE1 GE2 FC1  FC2
--------------------------------
0   0   189M 189M
0   0   188M 188M
0   0   182M 182M
0   0   187M 187M
0   0   188M 188M
0   0   186M 186M
0   0   187M 187M
0   0   186M 186M
0   0   170M 170M
0   0   189M 189M
In the following example, 10GbE ports are present and all the ports are connected:
MPX200 <1> #> show perf byte
WARNING: Valid data is only displayed for port(s) that are not associated with any configured FCIP routes.
WWPN 50:00:1f:e1:50:0a:e1:48
WWNN 50:00:1f:e1:50:0a:e1:40
Port ID 82-0c-00
VPGroup 1
Name iqn.1986-03.com.hp:fcgw.mpx200.0851e00035.b1.01.50001fe1500a3718

WWPN 50:00:1f:e1:50:0a:37:18
WWNN 50:00:1f:e1:50:0a:37:10
Port ID 82-04-00
VPGroup 1
Name iqn.1986-03.com.hp:fcgw.mpx200.0851e00035.b1.01.5006016241e0492e

WWPN 50:06:01:62:41:e0:49:2e
WWNN 50:06:01:60:c1:e0:49:2e
Port ID 82-01-00
VPGroup 1
.
.
.
Name iqn.1986-03.com.hp:fcgw.mpx200.0851e00035.b1.
Syntax show remotepeers
Examples
The following example shows the show remotepeers command:
MPX200 <1> (admin) #> show remotepeers
Remote Peer System Information
------------------------------
Product Name MPX200
Symbolic Name Blade-1
Serial Number 2800111111
No. of iSCSI Ports 2
iSCSI Base Name iqn.1992-08.com.qlogic:isr.2800111109.b1
Mgmt IPv4 Address 172.35.14.71
Mgmt IPv6 Link-Local ::
Mgmt IPv6 Address 1 ::
Mgmt IPv6 Address 2 ::
No. of iSCSI Remote Connections 1
Remote iSCSI Connection Address 1 70.
--------------------Job Owner:Id:UUID b1:1:1105F00605b1717 Job Description IBM 2145-0:VPG1:000 Group Name Group 0 Scrubbing Type Scrubbing Priority Not Applicable Scrubbing Status Running I/O Size 64 KB Scrubbing Algorithm ZeroClean [ 2 Pass ] Scrubbing CurrentPass 1 Scrubbing State 17% Complete Scrubbing Performance 273 MBps Scrubbing Curr Performance 273 MBps Job ETC 0 hrs 1 min 56 sec Start Time Fri Nov 2 14:15:36 2012 End Time --Delta Time --Array IBM 2145-0 Lun VPG:ID 1:0 Lun WWULN 60:05:07:68:02:80:80
Authority guest
Syntax show targets
Examples
The following example shows the show targets command:
MPX200 <1> #> show targets
Target Information
------------------
WWNN 50:08:05:f3:00:1a:15:10
WWPN 50:08:05:f3:00:1a:15:11
Port ID 02-03-00
State Online
WWNN 50:08:05:f3:00:1a:15:10
WWPN 50:08:05:f3:00:1a:15:19
Port ID 02-07-00
State Online
The following example shows the show targets command with imported targets:
MPX200 <1> #> show targets
Target Information
------------------
WWNN 50:05:07:68:0
Syntax show vpgroups Examples The following example shows the show vpgroups command.
Syntax target
Keywords rescan
Examples
The following example shows the target rescan command:
mpx200 (admin) #> target rescan
Scanning Target WWPN 00:00:02:00:00:00:00:00
Target Rescan done
Scanning Target WWPN 00:00:03:00:00:00:00:00
Target Rescan done
Scanning Target WWPN 00:00:01:00:00:00:00:00
Target Rescan done
Scanning Target WWPN 50:08:05:f3:00:1a:15:11
Target Rescan done
Scanning Target WWPN 50:08:05:f3:00:1a:15:19
Target Rescan done
Target Re-Scan completed
Keywords add Adds the target presentation. rm Removes the target presentation.
7 Performance and best practices
This chapter discusses the factors affecting data migration solution performance and offers suggestions for obtaining maximum performance.
Performance factors
DMS provides a maximum throughput of 4 TB per hour.
as hh:mm:ss and is an estimate based on the current I/O performance, sampled in 30-second intervals. Job ETC is displayed with the job details through either the CLI or the GUI. In the CLI, the Job ETC field appears in the job detail output alongside fields such as Migration I/O Size.
Online ETC job
• If an online (local/remote) job is running while the host is writing to the source LUN, ETC is the total outstanding blocks yet to be copied from source to destination, plus any outstanding Dirty Region Log (DRL) blocks, divided by the job's current performance (MBps):
(Outstanding blocks + DRL blocks) / Current performance (MBps) = ETC
Behavior characteristics
• ETC is calculated every time the job is queried.
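The online-job formula can be expressed as a short calculation. This sketch assumes 512-byte blocks (as the guide uses elsewhere, e.g., "1 Block is of 512 bytes") and 1 MB = 1024 × 1024 bytes; both are reasonable readings rather than values the formula states explicitly:

```python
def etc_hhmmss(outstanding_blocks, drl_blocks, current_mbps, block_size=512):
    """Estimated time to completion per the online-job formula:
    (outstanding blocks + DRL blocks) / current performance (MBps),
    formatted as hh:mm:ss."""
    remaining_bytes = (outstanding_blocks + drl_blocks) * block_size
    if current_mbps <= 0:
        seconds = 0  # no measurable progress yet; ETC is undefined
    else:
        seconds = remaining_bytes / (current_mbps * 1024 * 1024)
    s = int(round(seconds))
    return f"{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}"
```

For example, 2048 outstanding 512-byte blocks (1 MiB) at 1 MBps gives `"00:00:01"`, and adding DRL blocks lengthens the estimate accordingly.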
Choosing the right DMS options Follow these guidelines when choosing DMS options: • Use the Configure Only option to configure migration jobs while applications are still online. Start the migration jobs as soon as the server offline notification is received from the system administrator. • To get optimum MPX200 performance, schedule a maximum of eight jobs to run simultaneously.
Array reconfiguration precautions include the following: • If the LUN presentation from the array to the MPX200 is changed, click the Refresh button two or three times to see the changes. • Wait for a few seconds between retries because the MPX200 will be running the discovery process. Remove unused arrays for the following reasons: • DMS allows a maximum of seven arrays to be configured at any time. • Arrays stored in persistence consume resources even if the array is offline and no longer needed.
8 Using the HP MSA2012fc storage array
MSA2012fc Array Behavior
Controllers A and B of the MSA2012fc each expose a completely independent set of LUNs that cannot be accessed through the other controller. ControllerA-port0 and ControllerA-port1 form one array, and ControllerB-port0 and ControllerB-port1 form another array. The MSA2012fc array therefore appears as two independent arrays on the MPX200.
14. Reboot the MPX200. The MPX200 can now see the new set of LUNs under the array entity that was licensed in Step 5. 15. Configure data migration jobs as described in “Scheduling an individual data migration job” (page 56) or “Scheduling data migration jobs in batch mode” (page 58).
9 Restrictions
This chapter details the restrictions that apply to DMS when reconfiguring LUNs on a storage array and when removing an array after a data migration job completes.
Reconfiguring LUNs on a storage array
Handle reconfiguration of a LUN ID carefully, following these guidelines:
• Do not change the LUN ID for any LUN that is currently configured to a data migration job or masked to an initiator.
• Before reassigning a LUN ID to a LUN, ensure that the LUN is not configured.
2. On the switch, remove the configured zones containing MPX200 FC ports and controller ports of the array.
3. Wait up to 30 seconds for the array to appear in the offline state in the show array command output; see “show array” (page 112).
4. If working on a dual-blade setup, repeat the preceding steps for the peer blade. The array must be offline on both blades before you can remove it.
5. Remove the array using the array rm command; see “array” (page 77).
NOTE: Firmware versions 3.2.
10 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
• Red Hat website: http://www.redhat.com • SPOCK website: http://www.hp.com/storage/spock • White papers and Analyst reports: http://www.hp.com/storage/whitepapers Prerequisites Prerequisites for installing or using this product include: • Microsoft Cluster Server • Windows NT SP1 • Third-party backup software Typographic conventions Table 11 Document conventions Convention Element Blue text: Table 11 (page 149) Cross-reference links and e-mail addresses Blue, underlined text: http://www.hp.
diagnosis, and automatic, secure submission of hardware event notifications to HP, which will initiate a fast and accurate resolution, based on your product’s service level. Notifications may be sent to your authorized HP Channel Partner for on-site service, if configured and available in your country. The software is available in two variants: • HP Insight Remote Support Standard: This software supports server and storage devices and is optimized for environments with 1-50 servers.
11 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A Configuring the data path through MPX200 for online data migration This appendix provides the information you need to configure the data paths through the MPX200 for online data migration using multipathing software under the following operating systems: • Windows 2003 • Windows 2008 • Windows 2012 • RHEL 4 and 5 • Novell SLES 10 and 11 • IBM AIX 5.3 and 6.1 • HP-UX 11.11, 11.23, and 11.31 • Solaris 10 SPARC x86 • VMware ESX 3.5, VMware ESXi 4.1, VMware ESXi 5.0, and VMWare ESXi 5.
These validations use the method described in “Zoning in presented targets: Method 1”. Alternatively, you can use the method described in “Zoning in presented targets: Method 2”. OS Windows 2008 and Windows 2003 Multipathing software Array-specific MMC EMC PowerPath HDLM HP MPIO IBM RDAC NetApp Data Motion nl nl nl nl nl Pre-migration setup Install the DSM-MPIO (device-specific module) according to the installation steps in the DSM installation manual.
Table 13 Configuring native device Mapper-Multipath on Linux (continued) Removing second direct path The path status for the path belonging to the zoned-out controller port (for example, from controller port (for Port B) is shown as failed or faulty in the multipath -ll output on the host. The example, Port B) entire host I/O now must flow through the router. Verify that the show perf byte command shows the I/O flowing through the router.
Table 15 Configuring Hitachi Dynamic Link Manager on Linux (continued) Pre-migration setup Install HDLM software as recommended by the vendor. Multipath installation verification To check the multipath status of the disks, issue the dlnkmgr view -path command.
Table 16 Configuring EMC PowerPath on IBM AIX (continued) d. Vary off the volume group as follows: # varyoffvg vgu01 e. Change the reserve_lock setting as follows: # chdev -l hdiskpower10 -a reserve_lock=no hdiskpower10 changed f. Confirm that the change was made as follows: # lsattr -El hdiskpower10 |grep reserve noreserve_lock no Reserve device on open True g. Vary on the volume group as follows: # varyonvg vgu01 h. Mount the file system as follows: # mount /u01 i. Start the Oracle database application.
Table 17 Configuring HP PVLinks on HP-UX (continued) Multipath installation verification Verify that the volume group created has multiple PVs. Each PV is a path to the same disk. The first path for each LUN is treated as the primary path, while all other paths are treated as alternate PVLinks, which are used to failover I/O in case of primary path failure.
Table 18 Configuring EMC PowerPath on HP-UX (continued) Adding router path for the removed controller port (for example, Port A) On the HP-UX host after zoning in the router presented target controller (for example, Port A), issue the following commands: ioscan insf -e The powermt display dev=all command lists the additional path to the same LUN along with the direct path. Removing second direct path The powermt display dev=all command displays the path state as dead.
Solaris multipath configuration Table 20 Configuring native multipathing on Solaris SPARC OS Solaris 10 SPARC x86 Multipathing software Native Multipathing Pre-migration setup 1. 2. 3. 4. To To To To enable multipath on a Solaris host, refer to the following Solaris documentation. verify the multipaths for the LUN, issue the mpathadm list lu command. check the multipath device, issue the luxadm probe command.
VMware multipath configuration
Table 21 Configuring native multipathing on VMware ESX/ESXi
OS VMware ESX 3.5, ESXi 4.1, ESXi 5.0, and ESXi 5.1
Multipathing software Native Multipathing
Pre-migration setup None
Multipath installation verification
1. In the vSphere Client GUI, select the Configuration tab.
2. Click the Storage menu item in the left pane, and then select the Devices tab.
3.
Table 22 Configuring multipathing on Citrix XenServer (continued) Removing second direct path from 1. From the FC switch, zone out the second direct path. controller port (for example, Port 2. In the left pane, click Hardware HBA virtual disk storage. B) 3. In the right pane, select the Storage tab. 4. On the Storage page, select the LUN, and then click Rescan. 5. Select the General tab, and then on the General page, check the multiple paths.
B Configuring the data path through MPX200 for iSCSI online data migration This appendix provides information on how to configure the data path through the MPX200 for performing iSCSI to iSCSI and iSCSI to FC online data migration. It covers pre-insertion requirements and the insertion process with Microsoft MPIO and Dell EqualLogic DSM. NOTE: MPX200 online migration with HP-UX hosts does not require you to change the initiator type. Leave the initiator set to the default, Windows.
4. Perform discovery again from the host to one iSCSI port on each of the router blades.
5. Ensure that the iSCSI presented target is listed on the Targets property page of the Microsoft iSCSI initiator.
6. From Blade 1, log in or connect to the presented target.
7. Select the Enable multi-path option, and then verify that two paths are visible for the LUN.
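Steps such as these can also be driven with the Windows iscsicli and mpclaim utilities; this is a sketch, and the portal address below is a placeholder for an iSCSI port on a router blade.

```
rem Add the router blade's iSCSI portal (placeholder address)
iscsicli AddTargetPortal 192.168.10.21 3260

rem Confirm that the presented target is listed
iscsicli ListTargets

rem After logging in from both blades, verify that MPIO sees the paths
mpclaim -s -d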
C SNMP SNMP provides monitoring and trap functions for managing the router through third-party applications that support SNMP. The router firmware supports SNMP versions 1 and 2 and a QLogic management information base (MIB). You may format traps using SNMP version 1 or 2. SNMP Parameters You can set the SNMP properties using HP mpx Manager or the CLI. Table 23 (page 164) describes the SNMP parameters.
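From a management station, SNMP access can be checked with the net-snmp tools; the community string and address below are placeholders for the values configured on the router.

```
# Walk the router's system group using SNMPv2c
snmpwalk -v2c -c public 192.168.10.20 system

# Listen for traps on the configured trap port (default 162),
# logging to stdout in the foreground
snmptrapd -f -Lo udp:162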
1 Trap address (other than 0.0.0.0) and trap port combinations must be unique. For example, if trap 1 and trap 2 have the same address, they must have different port values. Similarly, if trap 1 and trap 2 have the same port value, they must have different addresses.
qsrJobType OBJECT-TYPE
Syntax: Integer
Status: Current
Description: Data migration job type, either online or offline.

qsrJobOpCode OBJECT-TYPE
Syntax: Integer
Status: Current
Description: Data migration job operation type, either migration or comparison.

qsrJobOperation OBJECT-TYPE
Syntax: Integer
Status: Current
Description: Data migration job operation performed, and whether it was user-driven or automatic. Operations include STARTING_COPY, STOPPED, REMOVED, and ACKNOWLEDGED.
Description: Indicates from which blade the trap is generated.
D HP-UX Boot volume migration Data migration HP-UX boot volume migration rules: • MPX200 Data Migration supports boot volume migration for both HP-UX 11i v2 and v3. • Boot volume migration in an HP-UX environment is supported only with the MPX200 Data Migration OFFLINE method. • Boot volume migration supports both stand-alone (non-vPar) systems and vPar configurations. Stand-alone systems (non-vPar configurations) Pre Migration • Because the data migration must be done OFFLINE, shut down the system.
3. Select the correct device, select the file hpux.efi from the HPUX folder, and save it with a new name. 4. Select the newly created boot option to boot from the new LUN. vPar configurations Pre Migration • Because the data migration must be done OFFLINE, shut down the system using the recommended procedures to shut down the vPars and the system. • Itanium servers require nPar mode for complete system shutdown.
• HP recommends that the boot LUN be presented as ID 0. Presenting it with another ID can prevent the boot LUN from being discovered when a device scan is performed. • Once the vPar is booted, check all the boot paths and boot options in the vPar database, and modify them to reflect the new boot paths.
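Checking and updating boot paths can be sketched as follows. vparstatus is the standard vPar status command, and setboot sets the primary boot path on a running HP-UX system; the hardware path shown is a placeholder, so substitute the path of the new boot LUN.

```
# Display vPar attributes, including boot processor and paths
vparstatus -v

# Point the primary boot path at the new LUN (placeholder path)
setboot -p 0/0/2/1.2.0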
E Troubleshooting
Table 25 (page 171) lists some problems that may occur with the data migration service and provides a possible reason or solution for each.
Table 25 Troubleshooting
Problem: The show array command either:
• Does not show any array entities.
Reason and Solution: Ensure that the zoning is correctly set up on the switches. Ensure that the show targets command is not showing any entry for the array.
Table 25 Troubleshooting (continued)
Problem: WWULN 60:05:08:b4:00:05:4d:94:00:00:c0:00:00:2c:00:00 and WWULN 60:05:08:b4:00:05:4d:94:00:00:c0:00:00:2d:00:00 mapped on same LUN ID: 8. Marking LUN offline: LUN ID: 8, WWULN 60:05:08:b4:00:05:4d:94:00:00:c0:00:00:2d:00:00.
Reason and Solution: Explicitly acknowledge or remove all migration jobs associated with a set of LUNs that need to be removed. Only after that should you assign the new set of LUNs to the MPX200 host group.
Table 25 Troubleshooting (continued)
(To rescan an array in HP mpx Manager, right-click the appropriate array in the router tree, and then click Rescan.)
Problem: After zoning the presented target ports with the host, LUNs are not visible through the router paths.
Reason and Solution: This problem can occur if you map LUNs for presentation and also create a global presentation for a presented target. If you map LUNs, use VPG-based target maps instead.
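For the first entry in Table 25, the checks can be run directly from the MPX200 CLI using commands documented in this guide; output is omitted here, and the order shown is one typical sequence.

```
show targets
show array
rescan devices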
Glossary
A
AMS Attachable Modular Storage
array A storage system that contains multiple disk or tape drives. A disk array, for example, is differentiated from a disk enclosure, in that the array has cache memory and advanced functionality, like RAID and virtualization. Components of a typical disk array include disk array controllers, cache memories, disk enclosures, and power supplies.
B
bandwidth A measure of the volume of data that can be transmitted at a specified transmission rate.
H
HA High availability. A system or device that operates continuously for a long length of time.
HDLM Hitachi Dynamic Link Manager
HDS Hitachi Data Systems
HIT Host Integration Tools
I
initiator A system component, such as a network interface card, that originates an I/O operation.
IQN iSCSI qualified name
iSCSI Internet small computer system interface. Transmits native SCSI over the TCP/IP stack.
path A path to a device is a combination of an adapter port instance and a target port, as distinct from internal paths in the fabric network. A fabric network appears to the operating system as an opaque network between the initiator and the target.
ping A computer network administration utility used to test whether a specified host is reachable across an IP network, and to measure the round-trip time for packets sent from the local host to a destination computer.
VP Virtual port.
VPD Vital product data
VPG Virtual port group. An RCLI software component used to create logical FC adapter initiator ports on the fabric.
VPN Virtual private network.
W
WMS Workgroup Modular Storage
WWN World wide name.
WWNN World wide node name. Unique 64-bit address assigned to a device.
WWPN World wide port name. Unique 64-bit address assigned to each port on a device. One WWNN may contain multiple WWPN addresses.
WWULN World wide unique LUN name.
Index A admin session, 76 array, 77, 174 array_licensed_port, 79 arrays, 19 removing after data migration jobs, 146 authority requirements, 77 B bandwidth, 174 C CHAP, 174 command syntax, 77 commands array, 77 array_licensed_port, 79 compare_luns, 79 dml, 82 get_target_diagnostics, 83 initiator, 86 iscsi, 87 lunigmap, 88 lunmask, 90 lunremap, 91 migration, 92 migration_group, 98 migration_parameters, 99 migration_report, 100 readjust_priority, 100 remotepeer, 101 rescan devices, 102 reset, 102 save captur
data migration configuration, 11 data migration job acknowledging a DM job, 67 acknowledging offline DM job, 67 acknowledging online, local DM job, 68 acknowledging online, remote DM job, 68 scheduling, 56 scheduling in batch mode, 58 scheduling verification of job options, 66 starting serial scheduled jobs, 60 Verifying Migration Jobs wizard, 66 viewing job details and controlling job actions, 62 viewing status, 61 viewing system and DM job logs, 63 data migration report, 73 data migration wizard, 55 data
providing feedback, 150 R readjust_priority, 100 related documentation, 148 remote support, 149 remotepeer, 101 rescan devices, 102 reset, 102 S save capture, 103 scrub_lun, 103 scrubbing LUN wizard, 71 secure shell, 176 set, 105 set array, 106 set event_notification, 109 set fc, 109 set features, 110 set iscsi, 110 set system, 111 set vpgroups, 112 setting array properties, 53 show array, 112 show compare_luns, 114 show dml, 115 show fc, 116 show feature_keys, 117 show features, 116 show initiators, 118