Front cover PowerVM Migration from Physical to Virtual Storage Moving to a Virtual I/O Server managed environment Ready-to-use scenarios included AIX operating system based examples Abid Khwaja Dominic Lancaster ibm.
International Technical Support Organization PowerVM Migration from Physical to Virtual Storage January 2010 SG24-7825-00
Note: Before using this information and the product it supports, read the information in “Notices” on page vii. First Edition (January 2010) This edition applies to Virtual I/O Server Version 2.1.2.10, AIX 6.1.3, and HMC 7.3.4-SP3. © Copyright International Business Machines Corporation 2010. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Figures . . . . . v
Notices . . . . . vii
Trademarks . . . . . viii
Preface . . . . .
4.3 Transition raw data disk . . . . . 85
Chapter 5. Logical partition migrations . . . . . 89
5.1 Direct-attached SCSI partition to virtual SCSI . . . . . 90
5.2 Direct-attached SAN rootvg and data partition to SAN virtual SCSI . . . . . 100
5.3 Direct-attached SAN rootvg and data partition to virtual Fibre Channel . . . . . 113
5.
Figures
1-1 Test environment . . . . . 3
2-1 Relationship between physical and virtual SCSI on Virtual I/O Server . . . . . 19
2-2 Relationship between physical and Virtual SCSI on client partition . . . . . 20
2-3 HMC Virtual I/O Server Physical Adapters panel . . . . . 21
2-4 Create Virtual SCSI Server Adapter panel . . . . .
5-13 Add a Fibre Channel adapter to the Virtual I/O Server . . . . . 120
5-14 Create Virtual Fibre Channel Adapter panel . . . . . 121
5-15 Virtual Adapters panel . . . . . 122
5-16 Edit a managed profile . . . . . 123
5-17 Virtual Adapters tab . . . . .
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions.
Preface IT environments in organizations today face more challenges than ever before. Server rooms are crowded, infrastructure costs are climbing, and right-sizing systems is often problematic. To contain costs, there is a push to use resources more wisely by minimizing waste and maximizing the return on investment. Virtualization technology was developed to address these needs. More and more organizations are deploying, or are in the process of deploying, some form of virtualization.
step-by-step and allow the reader to determine which migration techniques will work best for them based on their skills and available resources. The procedures detailed cover migrations on AIX® operating-system-based hosts. Linux® operating-system-based migrations are not covered in this publication. The team who wrote this book This book was produced by a team of specialists from around the world working at the International Technical Support Organization, Poughkeepsie Center.
leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.
1 Chapter 1. Introduction This publication provides instructions on how to transition from direct-attached storage on a standalone IBM server or IBM logical partition to an IBM logical partition with its storage virtualized through a Virtual I/O Server. This transition is referred to as a physical-to-virtual migration. Since the focus of this publication is on migrations, it only briefly covers the creation and configuration of logical partitions.
1.1 Definitions The following definitions will assist you in understanding the material in this publication: Standalone servers Standalone servers are typically systems that do not contain multiple logical partitions or any Virtual I/O Servers. Client or logical partition This is a partition on POWER®-based hardware that has some level of virtualization, for example, virtualized CPU and memory. It may also have some direct-attached I/O hardware, I/O hardware virtualized by the Virtual I/O Server, or both.
1.3 Test environment The environment in which testing of the migration scenarios was performed is depicted in Figure 1-1.
1.4 Storage device compatibility in a Virtual I/O Server environment Physical-to-virtual (p2v) device compatibility refers only to the data on the device, not necessarily to the capabilities of the device. A device is p2v compatible when the data retrieved from that device is identical regardless of whether it is accessed directly through a physical attachment or virtually (for example, through the Virtual I/O Server).
A device is considered to be p2v compatible if it meets the following criteria: It is an entire physical volume (for example, a LUN). Device capacity is identical in both physical and virtual environments. The Virtual I/O Server is able to manage this physical volume using a UDID or IEEE ID. For more information about determining whether a physical volume has a UDID or IEEE identification field see 2.2, “Checking unique disk identification” on page 13.
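As a quick illustration (this command is not part of the book's formal procedure, and hdisk6 is only an example disk name), the UDID of a disk can be checked from the AIX command line:

# lsattr -El hdisk6 -a unique_id

If a unique_id value is returned, the disk carries an identifier that the Virtual I/O Server can use when mapping it to a virtual device.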
Table 1-1 presents a number of migrations from a source host configuration through to the destination host configuration. The table does not exclude other forms of migration. However, the procedures, and thus the lab-tested methods detailed in subsequent chapters of this book, are derived from this table. You may find methods that work better in your particular environment, especially since this book exclusively discusses IBM-specific technologies.
Migration objective: Chapter 6, “Standalone SAN rootvg to virtual Fibre Channel” on page 145
Volume group to migrate: rootvg
Migration procedure: Access same SAN disk with adapter on destination
Destination host configuration: Partition with virtual Fibre Channel (virtual SCSI also possible)

Migration objective: Chapter 7, “Direct attached Fibre Channel devices partition to virtual Fibre Channel” on page 153
Volume group to migrate: N/A
Migration procedure: Access same tape with adapter on destination
Destination host configuration: Partition with virtual Fibre Channel

The following is a suggeste
2 Chapter 2. Core procedures There are a number of core procedures that are used in multiple scenarios in the accompanying chapters. These procedures are documented fully in this chapter, together with additional notes that are not found in the fully worked-through examples in subsequent chapters. Some of the additional notes cover best practices, and some describe additional diagnostic methods that may be used.
2.1 File-backed optical for restoration File-backed optical devices provide a clean, easy-to-use mechanism to take a backup of either a root or data volume group and restore it to a target device. The target device could be a LUN presented as virtual SCSI or virtual Fibre Channel.
Using the AIX mkcd command-line method To use the AIX mkcd command-line method, use the following procedures: 1. Run the mkcd command with the flags shown below. If you would like to use a non-default location to store the images, such as an NFS file share, you can append the -I flag followed by a directory path to the options; that path is where the final images are written.
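The exact flags used in the book's example are not reproduced in this extract. A minimal sketch of a typical invocation, under the assumption that only bootable ISO images are wanted (no physical burn) and that the images should land in the /mnt/cdiso NFS file system used in this chapter, is:

# mkcd -L -S -I /mnt/cdiso

Here -S stops processing before any media is burned, -L creates DVD-sized images, and -I names the directory for the final images; adjust the flags to your own requirements.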
On the Virtual I/O Server A media repository and a virtual optical device must now be created. The media repository does not have to be on the rootvg. Any volume group accessible to the Virtual I/O Server will be acceptable, but there can only be one repository per Virtual I/O Server. 5. Make a media repository on the Virtual I/O Server rootvg as in the following command: $ mkrep -sp rootvg -size 10G A repository should be large enough to hold any and all images that you may have created for this migration.
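Steps 6 through 9 of this procedure are not reproduced in this extract. As a hedged sketch of what they typically involve, a file-backed virtual optical device is created against an existing virtual SCSI server adapter, the ISO image is imported into the repository, and the image is loaded into the device (the vhost0 adapter name and the file path are illustrative; vcd1 and the image name match those used in the next step):

$ mkvdev -fbo -vadapter vhost0 -dev vcd1
$ mkvopt -name cd_image_15364 -file /home/padmin/cd_image_15364 -ro
$ loadopt -disk cd_image_15364 -vtd vcd1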
10. If you have multiple media created and the procedure that you are running asks for the next CD in the sequence, use the Virtual I/O Server unloadopt command to unload the current virtual media and repeat step 8 on page 12 to load the next image.
$ unloadopt -vtd vcd1
$ loadopt -disk cd_image_15364.
2.2.1 The physical volume identifier (PVID) The PVID is written to a disk when the disk has been made a member of an AIX volume group and may be retained on the disk when the disk is removed from a volume group.
ieee_volname    600A0B8000110D0E0000000E47436859    IEEE Unique volume name                   False
lun_id          0x0003000000000000                  Logical Unit Number                       False
max_transfer    0x100000                            Maximum TRANSFER Size                     True
prefetch_mult   1                                   Multiple of blocks to prefetch on read    False
pvid            none                                Physical volume identifier                False
q_type          simple                              Queuing Type                              False
queue_depth     10                                  Queue Depth                               True
raid_level      5                                   RAID Level                                False
reassign_to     120                                 Reassign Timeout value                    True
reserve_policy  no_r
clr_q            no          Device CLEARS its Queue on error     True
cntl_delay_time  0           Controller Delay Time                True
cntl_hcheck_int  0           Controller Health Check Interval     True
dist_err_pcnt    0           Distributed Error Percentage         True
dist_tw_width    50          Distributed Error Sample Time        True
hcheck_cmd       inquiry     Health Check Command                 True
hcheck_interval  60          Health Check Interval                True
hcheck_mode      nonactive   Health Check Mode                    True
location                     Location Label                       True
lun_id           0x0         Logical Unit Number ID               False
lun_reset_spt    yes         LUN Reset Supported                  True
max_retry_delay  60          Maximum Quiesce Time                 True
max_transfer     0x
2.2.4 The chkdev command As of Virtual I/O Server Fix Pack 22, a new command has been introduced to assist with the identification of disks and their capabilities.
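The output of the command appears in later chapters; as a quick illustration of its use against a single device (the disk name is only an example), run it from the padmin shell:

$ chkdev -dev hdisk6 -verbose

The -verbose output lists the PVID, UDID, and IEEE identifiers of the disk together with the PHYS2VIRT_CAPABLE, VIRT2NPIV_CAPABLE, and VIRT2PHYS_CAPABLE fields that are discussed later in this book.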
2.3 Creating a virtual SCSI device In a virtual SCSI storage environment, the Virtual I/O Server owns the physical SCSI cards and disks. The disks are then configured as backing devices on the Virtual I/O Server so that client partitions can access these backing storage devices. Physical disks owned by the Virtual I/O Server can be assigned to client partitions in several different ways: The entire disk may be presented to the client partition.
Figure 2-1 shows the relationship between physical SCSI disk and the target SCSI device on the Virtual I/O Server. Figure 2-1 Relationship between physical and virtual SCSI on Virtual I/O Server As mentioned earlier, client partitions access their assigned storage through a virtual SCSI client adapter.
Figure 2-2 shows the relationship between the physical SCSI disk and the virtual SCSI devices on a client partition. Figure 2-2 Relationship between physical and Virtual SCSI on client partition The procedure for creating virtual SCSI devices follows.
On the HMC The objective is to create the server and client adapters that will allow the disks being presented from a physical Fibre Channel adapter to be visible to a client partition. 1. On the HMC, you will see a panel similar to Figure 2-3 if you display the physical adapters attached to the Virtual I/O Server. In our example, the highlighted Fibre Channel adapter in slot C1 will be used. Figure 2-3 HMC Virtual I/O Server Physical Adapters panel 2.
will be available to only a single partition since a specific partition was specified. This is the best practice, as we do not recommend making the adapter available to all clients. Figure 2-4 shows the panel to create the server adapter.
For this example, the adapter was dynamically added. If you want your configuration to be permanent, add the adapter to the Virtual I/O Server profile in addition to dynamically adding it. Your display of the Virtual I/O Server virtual adapters panel will look similar to Figure 2-5 when this step is complete. The server adapter that was created is highlighted. Figure 2-5 Virtual SCSI server adapter created on Virtual I/O Server Chapter 2.
4. Create the virtual client adapter. You must use the same slot numbers that you selected in the previous step. In addition, select the This adapter is required for partition activation check box. Your display of the Client Virtual Adapters Properties panel should yield something similar to Figure 2-6 when this step is complete. Note from the figure that the adapter was added to the client partition profile and not dynamically added.
The first lsmap command in the following command output shows us that vhost6 is mapped to server slot C17 (as previously defined on the HMC) and currently has no virtual target device mapped to it. Noting the slot number is a good way to verify that you have selected the correct server adapter before proceeding. For the purpose of this example, the physical hdisk6 is the disk that the client partition should eventually use.
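The command output referred to above is not reproduced in this extract. A hedged sketch of the two commands it describes, using the vhost6 and hdisk6 names from this example, is:

$ lsmap -vadapter vhost6
$ mkvdev -vdev hdisk6 -vadapter vhost6

The lsmap command confirms the slot number and that no virtual target device is mapped yet, and mkvdev then maps the physical disk to the virtual SCSI server adapter so that the client partition can use it.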
2.4 Virtual Fibre Channel and N_Port ID virtualization N_Port ID virtualization (NPIV) is a technology that allows multiple logical partitions to access independent physical storage through the same physical Fibre Channel adapter. Each partition is identified by a pair of unique worldwide port names, enabling you to connect each partition to independent physical storage on a SAN. Unlike virtual SCSI, only the client partitions see the disk.
Detailing the procedure to use NPIV follows. In the scenario described, it is assumed that you have: A running standalone source host with rootvg on a SAN LUN A Virtual I/O Server with a physical NPIV-capable Fibre Channel adapter allocated to it A destination client partition that is currently running with rootvg on virtual SCSI disk The client partition will be reconfigured such that it boots using the migrated SAN LUN using virtual Fibre Channel.
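Before creating any virtual Fibre Channel adapters, it is worth confirming that the physical Fibre Channel port on the Virtual I/O Server is NPIV capable. A minimal check, not part of the original procedure, is the lsnports command on the Virtual I/O Server:

$ lsnports

Only NPIV-capable ports are listed, and the output includes a count of the worldwide port names still available on each port.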
On the HMC Create the virtual Fibre Channel mappings that will allow the destination client partition to see what was previously the source standalone server’s rootvg SAN LUN. 1. Create the virtual Fibre Channel server adapter on the Virtual I/O Server. Something similar to the highlighted portion in Figure 2-7 is what you should see when this step is complete.
2. Create the virtual Fibre Channel client adapter in the client partition profile. If you want the adapter and storage to be visible after a partition shutdown, save the configuration to a new profile and use the new profile when starting up the partition. You should see something similar to the highlighted portion in Figure 2-8 when this step is complete. Figure 2-8 Virtual Fibre Channel client adapter created in client partition profile Note: A POWER Hypervisor has a limit of 32,000 pairs of WWPNs.
Note: Each time that you configure a virtual Fibre Channel adapter, whether dynamically or by adding to a partition profile, the HMC obtains a new, non-reusable, pair of WWPNs from the POWER Hypervisor.
To get to the above panel, on the HMC select the client partition, click Properties from the Tasks menu. Select the Virtual Adapters tab on the panel that appears. Select the Client Fibre Channel adapter line. From Actions, select Properties. On the Virtual I/O Server You will now activate the virtual adapters defined in the previous step and map the virtual adapter to the physical Fibre Channel adapter. 4.
Ports logged in:7 Flags:a VFC client name:fcs0 VFC client DRC:U8204.E8A.10FE411-V4-C9-T1 In your lsmap output, you may not see the Status as LOGGED_IN if you had not already mapped the SAN LUN to the Virtual I/O Server. On the SAN and storage devices You can do the SAN mapping now by proceeding with the following steps: 9.
At this point, the following SMS panel is displayed: ------------------------------------------------------------------------------Select Media Adapter 1. U8204.E8A.10FE411-V2-C9-T1 /vdevice/vfc-client@30000008 2.
In Figure 2-10, the relationship between the virtual Fibre Channel components and what the SAN switch sees is shown.
Method 1: the SwitchExplorer Web Interface a. Use your Web browser to point to the URL of your SAN switches’ IP address, then log in to the SAN switch with a user login with at least read access. You should see a panel similar to the one shown in Figure 2-11. Figure 2-11 SAN Switch panel Chapter 2.
b. In Figure 2-11 on page 35, port 6 has been highlighted since this is our physical port from our cabling diagram. Click the port to bring up the port details. You will see a panel similar to that shown in Figure 2-12. Figure 2-12 SAN port details c. Note that the port selected has the entry NPIV Enabled set to a value of True. This is highlighted in Figure 2-12. If the value is set to false then this should be rectified before continuing this procedure.
d. The panel shown in Figure 2-13 is displayed. Figure 2-13 SAN port device details The highlighted device port WWN is one that would be expected to be seen. This means our virtual Fibre Channel connection has correctly presented the virtual Fibre Channel to the SAN Switch. Some disk storage devices may take a few seconds before the WWPN is presented to them. Method 2: using telnet a.
The switch port configuration listing that follows at this point in the original (its column layout is lost in this extract) shows per-port settings such as VC Link Init, Locked L_Port, Locked G_Port, Disabled E_Port, ISL R_RDY Mode, RSCN Suppressed, Persistent Disable, and NPIV capability; an ON entry in the NPIV capability row indicates that NPIV is enabled on the corresponding port.
Lli:           109     Loss_of_sig:   7
Proc_rqrd:     2422    Protocol_err:  0
Timed_out:     0       Invalid_word:  0
Rx_flushed:    0       Invalid_crc:   0
Tx_unavail:    0       Delim_err:     0
Free_buffer:   0       Address_err:   0
Overrun:       0       Lr_in:         5
Suspended:     0       Lr_out:        2
Parity_err:    0       Ols_in:        1
2_parity_err:  0       Ols_out:       5
CMI_bus_err:   0

Port part of other ADs: No
itso-aus-san-01:admin>
d. From the portshow command output, note that the WWPN has been presented to the SAN switch.
Select Device
Device  Current  Device
Number  Position Name
1.               SCSI 14 GB FC Harddisk, part=2 (AIX 6.1.0)
                 ( loc=U8204.E8A.10FE411-V4-C9-T1-W201300a0b811a662-L0 )
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen
X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:1
13.
14.Enter option 1 to exit the SMS menu, as shown here: ------------------------------------------------------------------------------Are you sure you want to exit System Management Services? 1. Yes 2. No ------------------------------------------------------------------------------Navigation Keys: X = eXit System Management Services ------------------------------------------------------------------------------Type menu item number and press Enter or select Navigation key:1 15.
The remaining lsdev commands list out all Fibre Channel adapters and show how hdisk8 maps back to the virtual Fibre Channel adapter fcs2:
# lsdev|grep fcs
fcs0   Defined    07-00  4Gb FC PCI Express Adapter (df1000fe)
fcs1   Defined    07-01  4Gb FC PCI Express Adapter (df1000fe)
fcs2   Available  C9-T1  Virtual Fibre Channel Client Adapter
#
# lsdev -l hdisk8 -F parent
fscsi2
# lsdev -l fscsi2 -F parent
fcs2
The migration is now complete.
3 Chapter 3. Standalone SCSI rootvg to virtual SCSI This chapter details the migration of a standalone client with a rootvg on local disk to a logical partition with a disk presented via a Virtual I/O Server using virtual SCSI. © Copyright IBM Corp. 2010. All rights reserved.
Figure 3-1 shows an overview of the process.
Figure 3-1 Migration from standalone rootvg on local disk to a logical partition
Local disks on standalone machines are not accessible to a Virtual I/O Server.
As with any migration, planning is essential. Our instructions generally refer to a single disk rootvg environment. If you have multiple disks in your rootvg then: If the rootvg is mirrored across the disks, you may want to break the mirror first. This gives you a recovery point if any problem occurs. If the rootvg is striped across a number of disks then our recommendation is that you use the method in 3.1, “Back up to CD and restore” on page 46.
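For the mirrored-rootvg case mentioned above, a minimal sketch of reducing rootvg to a single copy before the migration, assuming the second copy is on hdisk1 (adapt the disk names to your configuration), is:

# unmirrorvg rootvg hdisk1
# reducevg rootvg hdisk1
# bosboot -ad /dev/hdisk0
# bootlist -m normal hdisk0

The bosboot and bootlist commands ensure that the system still boots cleanly from the remaining rootvg disk.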
3.1 Back up to CD and restore This migration uses the file-backed optical feature of the Virtual I/O Server to present a number of previously made ISO images to the target logical partition as though these images were physical CD media. The advantage of this method is that it can be used to provision logical partitions very quickly from a master image copy, for example, in a development environment or when performing any form of diagnostics. The steps for the migration follow.
d. For the File system to store final CD images question, you can leave it blank or choose to use options such as an NFS file system. An NFS file system was used in this example (the /mnt/cdiso NFS file system that was previously created). e. Select Yes for the Do you want the CD to be bootable option. f. Select No for the Remove final images after creating CD option. g. Select No for the Create the CD now option. h. Press Enter to begin the system backup creation.
vasi0      Available  Virtual Asynchronous Services Interface (VASI)
vbsd0      Available  Virtual Block Storage Device (VBSD)
vhost0     Available  Virtual SCSI Server Adapter
vhost1     Available  Virtual SCSI Server Adapter
vhost2     Available  Virtual SCSI Server Adapter
vhost3     Available  Virtual SCSI Server Adapter
vhost4     Available  Virtual SCSI Server Adapter
vhost5     Available  Virtual SCSI Server Adapter
vhost6     Available  Virtual SCSI Server Adapter
vsa0       Available  LPAR Virtual Serial Adapter
vcd1       Available  Virtual Target Device - File-backed Optical
vp1rootvg  Available  Virtual Target Device - Disk
vp2rootvg  Available  Virtual Target Device - Disk
vp3rootvg  Available  Virtual Target Device - Disk
vp4rootvg  Available  Virtual Target Device - Disk
vtopt0     Available  Virtual Target Device - File-backed Optical
vtscsi0    Available  Virtual Target Device - Disk
vtscsi1    Available  Virtual Target Device - Disk
vtscsi2    Available  Virtual Target Device - Disk

name       status     description
ent8       Available  Shared Ethernet Adapter
9. Use the Virtual I/O Server lsmap command to verify that the device has been created:
$ lsmap -all | more
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U8204.E8A.10FE411-V2-C11                     0x00000003

VTD                   vtopt1
Status                Available
LUN                   0x8200000000000000
Backing device
Physloc
d. The lsrep command can be used to show which images you have loaded into the repository:
$ lsrep
Size(mb) Free(mb) Parent Pool         Parent Size  Parent Free
   10198     9595 rootvg                   139776        81920

Name                         File Size  Optical  Access
cd_image_82472.1                   603  None     ro
11. Load the virtual optical media file that was created earlier with the mkvopt command into the virtual optical device that you created in step 6 on page 47 (vtopt1 in this example) by using the loadopt command:
$ loadopt -disk cd_image_82472.
19. Type 5, as shown in Example 3-1.
Example 3-1 Main SMS Entry Panel
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5.
24.In response to the Select Media Adapter panel, type the number that represents the virtual SCSI device that is mapped to the CD/ROM. In Example 3-2 there is only a single device. Example 3-2 Select Media Adapter ------------------------------------------------------------------------------Select Media Adapter 1. U8204.E8A.10FE411-V3-C7-T1 /vdevice/v-scsi@30000007 2. None 3.
repository you can also create all the media up front and not revisit this step again: $ mkvopt -name cd_image_82472.vol2 -file /updates/cd_image_82572.vol2 -ro 27.Load the next virtual optical media file that was created earlier using the Virtual I/O Server loadopt command: $ loadopt -disk cd_image_82472.vol2 -vtd vtopt1 On the target partition 28.From the logical partition terminal session or console, you can now press Enter to continue the restore process.
the mirroring procedure is complete to allow the target client logical partition to access the SAN.
Figure 3-2 Cloning using mirrorvg to a SAN disk
The steps for the migration follow.
On the standalone server Start by determining the size of the root volume group, then use the migratepv command to move to a new disk. 1. Obtain the size of the rootvg using the AIX lsvg rootvg command if the rootvg spans several volumes.
Number of running methods: 0 ---------------attempting to configure device 'fscsi0' Time: 0 LEDS: 0x569 invoking /usr/lib/methods/cfgefscsi -l fscsi0 Number of running methods: 1 ---------------Completed method for: fscsi0, Elapsed time = 1 return code = 0 ****************** stdout *********** hdisk8 ****************** no stderr *********** ---------------Time: 1 LEDS: 0x539 Number of running methods: 0 ---------------attempting to configure device 'hdisk8' Time: 1 LEDS: 0x626 invoking /usr/lib/methods/cfgs
4. List the disks using the AIX lsdev command to ensure that the SAN disk is presented correctly to AIX:
# lsdev -Cc disk
hdisk0 Available 00-08-00 SAS Disk Drive
hdisk1 Available 00-08-00 SAS Disk Drive
hdisk2 Available 00-08-00 SAS Disk Drive
hdisk3 Available 00-08-00 SAS Disk Drive
hdisk4 Available 00-08-00 SAS Disk Drive
hdisk5 Available 00-08-00 SAS Disk Drive
hdisk6 Available 00-08-00 SAS Disk Drive
hdisk7 Available 00-08-00 SAS Disk Drive
hdisk8 Available 06-00-02 MPIO Other DS4K Array Disk
#
5.
max_transfer 0x40000 Maximum TRANSFER Size True node_name 0x200200a0b811a662 Node Name False pvid none Physical volume identifier False q_err yes QERR bit True q_type simple Queuing TYPE True queue_depth 10 DEPTH True reassign_to 120 REASSIGN time out value True reserve_policy single_path Reserve Policy True rw_timeout 30 READ/WRITE time out value True scsi_id 0x11000 ID False start_timeout 60 unit time out value True unique_id 3E213600A0B8000291B0800009AE303FEFAE10F1815 device identifier False ww_name 0x20
7. Use the AIX migratepv command to move the contents of the local SAS/SCSI disk to the SAN disk. If you are migrating disks on a one-for-one basis, the command shown below works well. If you have multiple local hard disks in use then it is best to use the migratepv command with the -l option and migrate each logical volume in turn: # migratepv hdisk0 hdisk8 0516-1011 migratepv: Logical volume hd5 is labeled as a boot logical volume.
9. Update the boot partition and reset the bootlist on the source standalone system using the AIX bosboot and bootlist commands: # bosboot -a -d hdisk8 bosboot: Boot image is 40810 512 byte blocks. # bootlist -m normal hdisk8 # 10.Shut down the standalone client using the AIX shutdown command. On the SAN disk storage controller 11.
Ensure that the unique device ID in this step matches that from step 5. This confirms that the same disk is mapped. 14. Map the SAN disk device to the client logical partition. In this instance the Virtual Resource Virtual Storage Management task was used from the HMC rather than typing commands on the Virtual I/O Server. Figure 3-3 shows the HMC panel from which this task is accessed. Figure 3-3 Virtual Storage Management functions
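The equivalent mapping can also be performed from the Virtual I/O Server command line rather than through the HMC task. As a hedged sketch (the hdisk6 and vhost1 names are illustrative and must match your own configuration):

$ mkvdev -vdev hdisk6 -vadapter vhost1

This creates a virtual target device that presents the SAN disk to the client partition over the chosen virtual SCSI server adapter.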
15. Because there is a physical disk in use here, you must navigate to the Physical Volumes tab, as shown in Figure 3-4.
16.Select the required hard disk, such as hdisk6, to map to the client partition and click Modify Assignment, as shown in Figure 3-5. Figure 3-5 Hard Disk Selection Chapter 3.
17.Select the new partition assignment and click OK to accept that you are assigning this volume, as shown in Figure 3-6. Figure 3-6 Selection of the client virtual slot The last screen after a number of updating panels shows that the assignment was correct. Click Close to exit the Virtual Storage Assignment function, as shown in Figure 3-7.
On the client partition 18.You can now boot the client logical partition using the SMS option and discover the newly presented virtual SCSI disk that maps to your SAN disk. The migration is almost complete. Remember to set up the Ethernet addresses on the virtual Ethernet interfaces since they were last used on physical Ethernet cards and may not be correct in this virtual environment. 3.
Figure 3-8 provides an overview of this process.
Figure 3-8 alt_disk_copy using SAN disk
The steps for the migration follow.
VG PERMISSION:      read/write               TOTAL PPs:       546 (139776 megabytes)
MAX LVs:            256                      FREE PPs:        509 (130304 megabytes)
LVs:                12                       USED PPs:        37 (9472 megabytes)
OPEN LVs:           11                       QUORUM:          2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS:  2
STALE PVs:          0                        STALE PPs:       0
ACTIVE PVs:         1                        AUTO ON:         yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:         32
LTG size (Dynamic): 1024 kilobyte(s)         AUTO SYNC:       no
HOT SPARE:          no                       BB POLICY:       relocatable
2. Ensure that the disk that you are going to clone the rootvg to has: a.
hcheck_mode nonactive Check Mode True location Location Label True lun_id 0x0 Logical Unit Number ID False lun_reset_spt yes Reset Supported True max_retry_delay 60 Maximum Quiesce Time True max_transfer 0x40000 Maximum TRANSFER Size True node_name 0x200200a0b811a662 Node Name False pvid none Physical volume identifier False q_err yes QERR bit True q_type simple Queuing TYPE True queue_depth 10 DEPTH True reassign_to 120 REASSIGN time out value True reserve_policy single_path Reserve Policy True rw_timeout
Creating logical volume alt_hd9var Creating logical volume alt_hd3 Creating logical volume alt_hd1 Creating logical volume alt_hd10opt Creating logical volume alt_hd11admin Creating logical volume alt_lg_dumplv Creating logical volume alt_livedump Creating logical volume alt_loglv00 Creating /alt_inst/ file system. /alt_inst filesystem not converted. Small inode extents are already enabled. Creating /alt_inst/admin file system. /alt_inst/admin filesystem not converted.
system console.
7. One of the final actions of the alt_disk_copy command is to set the bootlist to the newly created altinst_rootvg. Since the aim is to preserve the rootvg, ensure that the bootlist is set back to the correct volume. Reset the bootlist on the source standalone system using the AIX bosboot and bootlist commands: # bosboot -a -d hdisk0 bosboot: Boot image is 40810 512 byte blocks.
The other important output from the chkdev command is the PHYS2VIRT_CAPABLE field. In this example it has YES as a value. A value of YES means that at this point in time, this disk volume can be mapped to a virtual device and presented to a logical partition. A value of NO would mean that the disk cannot be mapped. A value of NA means that the disk has already been mapped as a virtual target device (VTD). More information about the chkdev command can be found by reading its man page. 10.
VIRT2NPIV_CAPABLE: YES VIRT2PHYS_CAPABLE: YES PVID: 000fe4017e0037d70000000000000000 UDID: 3E213600A0B8000291B0800009D760401BBB80F1815 FAStT03IBMfcp IEEE: VTD: vtscsi0 $ Note that the PHYS2VIRT_CAPABLE field in the above command output is now set to a value of NA, which indicates that this disk is now mapped to a VTD, vtscsi0 in this example. On the client partition Using the SMS menu, now boot the client partition and perform cleanup tasks: 12.
On the standalone system You have now migrated this system to a logical partition. If you wish to revert the current disk configuration back to a pre alt_disk_copy scenario: 15.On the local system an AIX lsvg command shows that the ODM is unaware that you have removed the SAN disk that was the target of the alt_disk_copy: # lsvg rootvg altinst_rootvg # 16.To clean up the system, use the AIX alt_rootvg_op command with the -X flag: # alt_rootvg_op -X Bootlist is set to the boot disk: hdisk0 blv=hd5 17.
NIM also allows you to perform functions such as: Installation of system patch bundles Installation of user-defined software packages Upgrades of the operating system on the fly While you generally must install NIM on a separate server or logical partition (and it could reside on the Tivoli® Storage Manager Server if required), the benefits of NIM outweigh the expense: Multiple restorations can be performed simultaneously in a NIM environment.
The following notes apply to the use of a SAS-connected tape drive: At the time of writing, only an IBM SAS-attached tape drive is supported. It is preferable to create a separate virtual SCSI host adapter rather than to use one already in service for disks or optical storage, because of the different block sizes used to transfer data for tape operations, and because a separate virtual SCSI adapter is more portable. The tape drive is not a shared device; it can only be in use by one partition at a time.
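A minimal sketch of virtualizing such a tape drive, assuming the physical drive appears on the Virtual I/O Server as rmt0 and that a dedicated virtual SCSI server adapter vhost5 has been created for it (both names are illustrative), is:

$ mkvdev -vdev rmt0 -vadapter vhost5

After running cfgmgr on the client partition, the tape drive appears there as a virtual SCSI tape device that only one partition can use at a time.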
4 Chapter 4. Standalone SCSI data to virtual SCSI This chapter provides instructions for migrating a client’s data on direct-attached disk to a logical partition with the data disks being virtualized by a Virtual I/O Server using virtual SCSI. The instructions outlined assume that both the source and destination hosts already exist.
remaining disks are used as data disks. The SAN storage is provided by a DS4800. The destination server is a client logical partition that has no physical disk of its own. The physical disks are attached to the Virtual I/O Server. See Figure 4-1 for a graphical representation.
4.1 Migration using a virtual media repository The goal of this section is to make a backup of a user volume group to a file, create a media repository on the Virtual I/O Server, and give the client logical partition virtualized access to the media repository. Keep in mind that applications should be shut down prior to performing the backup since files that are open cannot be backed up. On the standalone source host Begin by backing up the user volume data: 1.
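The backup command for step 1 is not reproduced in this extract. A hedged sketch, assuming the user volume group is named datasrcvg (the name used later in this chapter) and that mkcd should create the image without burning it, is:

# mkcd -v datasrcvg -S

The flags shown are an assumption rather than the book's exact invocation; the output that follows is typical of such a run.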
mkrr_fs was successful. Removing temporary file system: /mkcd/cd_fs... Removing temporary file system: /mkcd/mksysb_image... The mkcd command creates the backup file in /mkcd/cd_images by default. In this case, the file created is cd_image_401446. Transfer the file to the Virtual I/O Server using the file transfer program of your choice. On the Virtual I/O Server Create the media repository and make it ready for access by the client partition: 3.
$ mkvdev -fbo -vadapter vhost4 -dev vcd1 vcd1 Available The virtual optical device will appear as Virtual Target Device File-backed Optical in a virtual device listing. 7. Load the virtual optical media file that you created earlier with the loadopt command. Once loaded, the image file will be copied into the repository (/var/vio/VMLibrary) and you will see a backing device for vhost4.
On the HMC 8. Map the vhost adapter from the previous step to a SCSI adapter on the client logical partition if the mapping does not already exist. Something similar to the highlighted line in Figure 4-2 is what you should see. Figure 4-2 Client logical partition virtual adapter mapping in WebSM This will map vcd1, which has vhost4 as its backing device, on the Virtual I/O Server to a virtual SCSI optical device in slot 9 on the client logical partition.
Preserve Physical Partitions for each Logical Volume: no Enter y to continue: y 0516-1254 /usr/sbin/mkvg: Changing the PVID in the ODM. datasrcvg datasrclv /dev/datadestlv: A file or directory in the path name does not exist. New volume on /tmp/vgdata.249948/cdmount/usr/sys/inst.images/savevg_image: Cluster size is 51200 bytes (100 blocks). The volume number is 1. The backup date is: Mon Oct 12 11:10:23 EDT 2009 Files are backed up by name. The user is root. x 14 ./tmp/vgdata/datasrcvg/image.info x 142 .
datasrclv           jfs2       1    1    1    open/syncd    /mnt
# cd /mnt
# ls -l
total 8
-rw-r--r--    1 root     staff            21 Oct 12 10:58 datafile
drwxr-xr-x    2 root     system          256 Oct 12 17:06 lost+found
# cat datafile
This is a test file.
4.2 Migrating data using savevg
If it is not required to have a media repository, savevg may be used instead.
1.
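The savevg invocation for step 1 does not appear in this extract. A hedged sketch, assuming the volume group is datasrcvg and that the backup is written to a file for transfer to the destination (the file name is illustrative), is:

# savevg -f /tmp/datasrcvg.savevg datasrcvg

The listing shown next is typical of such a run.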
Files are backed up by name. The user is root. x 14 ./tmp/vgdata/datasrcvg/image.info x 142 ./tmp/vgdata/vgdata.files405658 x 142 ./tmp/vgdata/vgdata.files x 2746 ./tmp/vgdata/datasrcvg/filesystems x 1803 ./tmp/vgdata/datasrcvg/datasrcvg.data x 272 ./tmp/vgdata/datasrcvg/backup.data x 0 ./mnt x 21 ./mnt/datafile x 0 ./mnt/lost+found The total size is 5140 bytes. The number of restored files is 9.
Serial Number...............
Device Specific.(Z0)........0000053245005032
Device Specific.(Z1)........
# ls -l pattern.txt
-rw-r--r--    1 root     system           30 Oct 16 12:24 pattern.txt
# cat pattern.txt
This is a raw disk test file.
# dd if=./pattern.txt of=/dev/hdisk8 seek=20 count=1
0+1 records in.
0+1 records out.
2. Get the unique_id of the SAN LUN. While the odmget command has been used below, the lsattr command is also useful for this task.
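The odmget output itself is not included in this extract. A hedged sketch of how the unique_id can be retrieved, using the hdisk8 name from the example above, with either odmget or lsattr, is:

# odmget -q "name = hdisk8 and attribute = unique_id" CuAt
# lsattr -El hdisk8 -a unique_id

Record the returned value so that it can be compared on the Virtual I/O Server after the LUN has been remapped.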
On the HMC 4. Map a virtual SCSI adapter from the Virtual I/O Server to the client logical partition if the mapping does not already exist. You should see something similar to the highlighted line in Figure 4-3. Figure 4-3 Client logical partition mapping for access to SAN disk On the Virtual I/O Server 5. Map the vhost created from the step above to the SAN disk and present it to the client partition using the mkvdev command: $ mkvdev -vdev hdisk6 -vadapter vhost4 vtscsi0 Available Chapter 4.
On the client partition The SAN LUN will be visible to the client as a SCSI disk. 6. Verify that the data is available to the client. Running the lspv command after the cfgmgr command makes the new disk visible on the client. Our test data was extracted from the raw disk using the dd command as a confirmation that the migration was successful: # lspv hdisk0 000fe41120532faf active # cfgmgr # lspv hdisk0 000fe41120532faf active hdisk1 none # # dd if=/dev/hdisk1 count=21 This is a raw disk test file.
5 Chapter 5. Logical partition migrations In this chapter we describe the methods for moving data from a logical partition with direct-attached disk to a logical partition using disk presented through a Virtual I/O Server. © Copyright IBM Corp. 2010. All rights reserved.
5.1 Direct-attached SCSI partition to virtual SCSI This migration method describes a scenario where a logical partition with local direct-attached disk is migrated to a Virtual I/O Server in the same systems enclosure or CEC, as shown in Figure 5-1.
c.
2. Identify the parent device to which hdisk0 is connected. This is done using the lsdev command in two steps: # lsdev -l hdisk0 -F parent scsi1 # lsdev -l scsi1 -F parent sisscsia0 # 3. The output from step 2 shows us that in this example, hdisk0 is attached to the SCSI device scsi1, which has a parent device of sisscsia0. Determine what the sisscsia0 device is using the lsdev command: # lsdev -C | grep sisscsia0 sisscsia0 Available 00-08 # PCI-X Dual Channel Ultra320 SCSI Adapter 4.
Serial Number...............YL10C4061142 Manufacture ID..............000C EC Level....................0 ROM Level.(alterable).......05080092 Product Specific.(Z0).......5702 Hardware Location Code......U78A0.001.0000000-P1-C4 # In the previous output the hardware location code is highlighted and provides the physical location code of slot C4 for the sisscsia0 SCSI adapter. Write down the location code for use in future steps. 6.
On the HMC The HMC is now used to create client and server virtual SCSI adapters and migrate the SCSI Storage controller to the correct profile. 7. Modify the client logical partition profile by removing the SCSI adapter with the local attached disks. The physical location code that was noted from the previous step 6 on page 93 was slot C4. Figure 5-2 shows the logical partition profile properties.
slot number is 15 and the free client slot number is 5. Figure 5-3 shows the required properties to create the server adapter. If you perform this task using the Dynamic Logical Partition Virtual Adapters function to add the virtual SCSI server adapter, be sure that you save the current profile using the Configuration Save Current Configuration function. You can rename this newly created profile later if required. Figure 5-3 Create Virtual SCSI Server Adapter panel Chapter 5.
10.Modify the Virtual I/O Server to add the SCSI adapter to the profile. Figure 5-4 shows the storage controller in slot C4, which has been highlighted, for addition to the profile. Figure 5-4 Logical Partition Profile Properties panel Click the Add as required tab, click OK, then click Close and return to the HMC management server panel. 11.Now you must make the Virtual I/O Server use the newly added SCSI Storage controller.
12.Create a client virtual SCSI adapter. Select your client partition and navigate through, selecting your profile for the Create Virtual Adapters task. You can fill the panel in with the required information, similar to the panel shown in Figure 5-5.
b. Use the lsmap command to ensure that vhost4 is the correct virtual adapter: $ lsmap -all | grep vhost4 vhost4 U8204.E8A.10FE401-V1-C15 0x00000000 $ The previous output confirms that vhost4 is our required virtual SCSI server adapter. The location code of C15 matches the slot that was used when it was created. 14.Now look for new disks that have been defined. a.
15.Create a mapping from the physical disk and verify that the mapping is correct: a. Use the mkvdev command to map hdisk8 to the new virtual Server SCSI adapter, which is vhost4: $ mkvdev -vdev hdisk8 -vadapter vhost4 vtscsi0 Available $ b. Use the lsmap command to verify that the correct disk is now mapped to vhost4: $ lsmap -vadapter vhost4 SVSA Physloc Client Partition ID --------------- -------------------------------------------- -----------------vhost4 U8204.E8A.
In the previous output the C5 is the client slot number and 8100000000000000 matches the value of the LUN field in the output from the lsmap command that was performed on the Virtual I/O Server. These values are all correct. 2. If the disks that you migrated contain a boot volume, check and update the boot information if required. a. Use the bosboot command to set up the disk correctly for the next boot: # bosboot -ad /dev/hdisk1 bosboot: Boot image is 40810 512 byte blocks.
already have a physical Fibre Channel adapter on the Virtual I/O Server, you may do the migration by mapping the SAN storage to the Virtual I/O Server instead of remapping the adapter. Figure 1-2 provides a graphical representation of the procedure that you are about to follow.
On the source partition The following series of commands show us the pre-migration state of the source partition and allow us to collect the information that will be needed later on in the migration. The first lspv command displays only the disks that are relevant for this exercise and shows us that the partition was booted from rootvg on hdisk4 and the data volume group is datasrcvg on hdisk5. The remaining lsattr commands retrieve the unique_id for each disk.
hdisk1 Available 00-08-02 MPIO Other FC SCSI Disk Drive hdisk2 Available 00-08-02 MPIO Other FC SCSI Disk Drive hdisk4 Available 00-08-02 MPIO Other DS4K Array Disk hdisk5 Available 00-08-02 MPIO Other DS4K Array Disk hdisk6 Available 00-08-02 MPIO Other FC SCSI Disk Drive hdisk7 Available 00-08-02 MPIO Other FC SCSI Disk Drive # lscfg -vl fcs0 fcs0 U78A0.001.DNWGCV7-P1-C4-T1 FC Adapter Part Number.................10N8620 Serial Number...............1B80904DC3 Manufacturer................001B EC Level......
Filesystem 1024-blocks Free %Used /dev/hd4 196608 31000 85% /dev/hd2 1966080 128204 94% /dev/hd9var 376832 128428 66% /dev/hd3 147456 130732 12% /dev/hd1 16384 16032 3% /dev/hd11admin 131072 130708 1% /proc /dev/hd10opt 409600 122912 70% /dev/livedump 262144 261776 1% /var/adm/ras/livedump /dev/fslv00 2097152 2096504 1% # cd /data # ls -l total 0 drwxr-xr-x 2 root system 256 -rw-r--r-1 root system 0 migrate_FC_to_vSCSI.
On the HMC The Fibre Channel adapter must now be remapped from the source partition to the Virtual I/O Server so that the LUNs may be made available to the destination partition as virtual SCSI disk. 1. Using the hardware location code for fcs0 from the previous step, open the source partition’s profile panel and locate the physical Fibre Channel adapter. In Figure 5-7, the correct Fibre Channel adapter in slot C4 has been highlighted. Remove this Fibre Channel adapter from the partition profile.
2. Dynamically add the physical Fibre Channel adapter removed from the source partition profile in the previous step to the Virtual I/O Server. The partition Properties Panel will show something similar to the highlighted portion in Figure 5-8 when this step is complete.
3. Dynamically add two virtual SCSI server adapters to the Virtual I/O Server, one for rootvg and the other for the data disk. An example of the panel in which you create a virtual adapter is displayed in Figure 5-9. Figure 5-9 Virtual SCSI Server Adapter Add Panel Chapter 5.
Figure 5-10 shows the Virtual Adapters panel with our two server SCSI adapters added.
4. Since our destination partition is currently shut down, add two virtual SCSI client adapters to the destination partition’s profile. The client partition’s Profile Properties panel is displayed in Figure 5-11 with the added client adapters highlighted.
below, the PHYS2VIRT_CAPABLE field for both disks shows a state of YES. This tells us that it is safe to use these disks for our physical-to-virtual migration.
vhost7 U8204.E8A.10FE411-V2-C18 VTD 0x00000000 NO VIRTUAL TARGET DEVICE FOUND $ mkvdev -vdev hdisk6 -vadapter vhost6 vtscsi2 Available $ mkvdev -vdev hdisk7 -vadapter vhost7 vtscsi3 Available As shown in the following command output, running chkdev again after running the mkvdev command will show you the mapped VTDs. In addition, the PHYS2VIRT_CAPABLE field now has a state of NA and the VIRT2NPIV_CAPABLE and VIRT2PHYS_CAPABLE fields have a state of YES.
8. Activate the destination client partition in SMS mode and select the disk to boot from that was originally on the source partition. The output below shows the available SCSI devices from SMS from our example. The disk in slot C9 is our original rootvg disk. ------------------------------------------------------------------------------Select Media Adapter 1. U8204.E8A.10FE411-V4-C7-T1 /vdevice/v-scsi@30000007 2. U8204.E8A.10FE411-V4-C9-T1 /vdevice/v-scsi@30000009 3. U8204.E8A.
Compare the output below to the output gathered from the pre-migration source partition. The tail command lists out the last two lines of the /etc/hosts file, which looks the same as on the original host, and the df command shows us that the partition booted with /data already mounted just as on the original host. Finally, the ls command shows us that the data on the data disk is intact and that it is the same data disk that was on the original host. # tail -2 /etc/hosts 192.168.100.92 p2_411 192.168.100.
In Figure 5-12, a direct-attached Fibre Channel adapter is shown with SAN disks for the client logical partition, which is then migrated to the Virtual I/O Server with virtual Fibre Channel installed.
On the client partition On the client logical partition first capture details of the resources that are going to migrate. These may include the details of the root volume group (rootvg), any data volume groups, and the details of the Fibre Channel card if you are going to migrate the Fibre Channel card from the client partition to the Virtual I/O Server: 1.
hdisk     LUN #   Ownership       User Label
hdisk4    1       B (preferred)   PW9405-17-1
hdisk5    2       B (preferred)   PW9405-17-2
The previous output describes where the hard disks that are SAN connected are sourced from (in this case the Storage Subsystem ITSO_DS4800) and how the disks are named in the storage array (PW9405-17-1 and PW9405-17-2, respectively). If you are using EMC storage then the powermt display command may be used or the lspath command for other MPIO-capable storage to display details.
4.
identification refer to 2.2, “Checking unique disk identification” on page 13.
FC World Wide Name # False 6. Now capture details about the Fibre Channel card if you are going to migrate it. If you are not migrating the Fibre Channel cards then you can omit this step. At the time of writing, only the 8 GB Fibre Channel adapter Feature Code 5735 supports the virtual Fibre Channel (or NPIV) function on a POWER6-technology-based system. a.
Device Specific.(ZM)........3 Network Address.............10000000C9738E84 ROS Level and ID............02C82774 Device Specific.(Z0)........1036406D Device Specific.(Z1)........00000000 Device Specific.(Z2)........00000000 Device Specific.(Z3)........03000909 Device Specific.(Z4)........FFC01231 Device Specific.(Z5)........02C82774 Device Specific.(Z6)........06C12715 Device Specific.(Z7)........07C12774 Device Specific.(Z8)........20000000C9738E84 Device Specific.(Z9)........BS2.71X4 Device Specific.(ZA)..
profile of the running Virtual I/O Server immediately so that on a restart, the Fibre Channel resource is available for use. 10. Use the HMC to determine a free slot number on the client logical partition. Performing this action now reduces the number of times that you have to switch between the client logical partition and Virtual I/O Server configurations. 11. Now create the virtual Fibre Channel resource on the Virtual I/O Server and then on the client logical partition.
a. Enter the required slot numbers into the Create Virtual Fibre Channel Adapter panel, as in Figure 5-14. Figure 5-14 Create Virtual Fibre Channel Adapter panel On the page shown in Figure 5-14 it is also possible to select the client partition, p2_411. Click OK once you have entered the slot number that was recorded in step 10 on page 120.
b. The HMC panel that is displayed is similar to the panel in Figure 5-15 and shows that the virtual Fibre Channel adapter is defined for creation. You must exit this panel by clicking the OK button for the definition to be saved.
12.Modify the profile of the client logical partition and create a virtual Fibre Channel client adapter. Select the required client partition, and then edit the profile by using the Action Edit function, as in Figure 5-16. Figure 5-16 Edit a managed profile Chapter 5.
a. Select the Virtual Adapters tab, as in Figure 5-17.
b. Using the Actions drop-down box, as in Figure 5-18, select Create Fibre Channel Adapter. Figure 5-18 Resource Creation panel Chapter 5.
c. In the Fibre Channel resource panel (Figure 5-19) enter the slot numbers that match the numbers that you used when you defined the Fibre Channel Adapter on the Virtual I/O Server endpoint in step 12a on page 124. Click OK when complete. Figure 5-19 Fibre Channel Adapter resources Note: On the panel shown in Figure 5-19, the “This adapter is required for partition activation” check box was not selected during the test migration. In production this option should be selected.
As shown in Figure 5-20, you can now see that a Client Fibre Channel Adapter has been created. Figure 5-20 Client Fibre Channel Adapter Note: You must exit the previous panel (Figure 5-20 on page 127) by clicking OK for the resource to be saved correctly in the profile. Exiting without clicking OK means that the POWER Hypervisor will not assign world wide port names (WWPNs) to the client Fibre Channel adapter and you will not be able to continue this migration. For further details refer to 2.
d. Once you have clicked OK on the above panel, reselect the Virtual Adapters tab and select the newly created client Fibre Channel adapter. Use the Actions Properties selection in the drop-down box (Figure 5-21) to display the WWPNs of the client Fibre Channel adapter.
The resulting panel displays the assigned WWPNs, as shown in Figure 5-22. Figure 5-22 Virtual Fibre Channel Adapter Properties e. Make a note of the WWPNs that are displayed (Figure 5-22), as they will be needed shortly. If you want the adapter and storage to be visible after the partition shutdown, save the configuration to a new profile and use the new profile when starting up the partition.
vhost3           Available   Virtual SCSI Server Adapter
vhost4           Available   Virtual SCSI Server Adapter
vhost5           Available   Virtual SCSI Server Adapter
vhost6           Available   Virtual SCSI Server Adapter
vhost7           Available   Virtual SCSI Server Adapter
vsa0             Available   LPAR Virtual Serial Adapter

name             status      description
ent8             Available   Shared Ethernet Adapter
$

Or use the shorter form of the lsdev command if you prefer:
$ lsdev -dev vfchost*
name             status      description
vfchost0         Available   Virtual FC Server Adapter
14.
Create the mapping between the resources: a. Use the lsmap command to view the newly added virtual Fibre Channel server adapter. Note that the physical location code of the virtual Fibre Channel server adapter will display the slot number: $ lsmap -all -npiv Name Physloc ClntID ClntName ClntOS ------------- ---------------------------------- ------ -------------- ------vfchost0 U8204.E8A.
cannot see them at this stage). For further information refer to 2.4, “Virtual Fibre Channel and N_Port ID virtualization” on page 26. 17.Boot to the SMS menu: a. Type 5 and press Enter to access the Select Boot Options panel. b. Type 1 and press Enter to access the Select Device Type panel. c. Type 5 and press Enter to access the Hard Drive Panel. d. Type 3 and press Enter to use SAN media. 18.
Mapped a physical Fibre Channel port to the virtual Fibre Channel host with the vfcmap command
Started the client logical partition that should present the WWPNs to the SAN fabric
20. Correct the SAN zoning in the SAN switch and the storage device mapping/masking to the new WWPNs.
21. Break the reserve if required. If you did not shut down the client partition cleanly, you may have a SCSI 2 reservation on the disks. This can be removed using the SAN GUI or CLI appropriate to the storage platform.
22.
d. Type 3 and press Enter to use SAN media. At this point, the following screen is displayed: ------------------------------------------------------------------------------Select Media Adapter 1. U8204.E8A.10FE411-V4-C8-T1 /vdevice/vfc-client@30000008 2. List all devices 24.At the Select Media Adapter panel, type 1 and press Enter, which should correspond to a vfc-client device. The slot number will be the client slot number that was used when the client Virtual Fibre Channel adapter was created: U8204.
These PVIDs match the values that were recorded prior to the migration.
b. The AIX lsvg command also shows that the two volume groups are present as expected:
# lsvg
rootvg
datasrcvg
c. Now list the Fibre Channel devices.
The previous steps prove conclusively that the migration from a logical partition with a direct-attached Fibre Channel card to a logical partition with a virtual Fibre Channel card has been successful. 26. The last steps are to: a. Ensure that the bootlist still points to the correct hard disk. b. Clean up unneeded references to Fibre Channel cards that have been removed from the operating system. This migration is now complete.
5.4 Virtual SCSI rootvg and data to virtual Fibre Channel In this section a logical partition's virtual SCSI rootvg and data volumes will be migrated to another partition that will have the same volumes presented as virtual Fibre Channel disks. Figure 5-24 provides a graphical representation of the procedure that we detail.
On the source partition
The following series of commands shows the pre-migration state of the source virtual SCSI partition.
1. The first lspv command displays only the disks that are relevant for this exercise and shows us that the partition was booted from rootvg on hdisk8 and the data volume group is datasrcvg on hdisk9.
# lspv | grep active
hdisk8          000fe4117e88efc0          rootvg          active
hdisk9          000fe41181e1734c          datasrcvg       active
2.
# tail -2 /etc/hosts
192.168.100.92      p2_411
192.168.100.91      p1_411
Having gathered the configuration and validation data from the source partition, shut down the source partition.
On the Virtual I/O Server
On the Virtual I/O Server:
4. Find the virtual SCSI server mappings for the source partition and remove them. The lsmap commands in the following example show us the mappings of the virtual SCSI server adapters, and the following rmvdev commands remove these mappings.
vhost7          U8204.E8A.10FE411-V2-C18          0x00000004
VTD                   NO VIRTUAL TARGET DEVICE FOUND
Finally, the vhost server adapters are deleted using rmdev commands.
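The rmdev commands are not reproduced above. On the Virtual I/O Server they would look something like the following sketch; vhost7 is taken from the listing, and any remaining vhost adapters are removed the same way:
$ rmdev -dev vhost7
vhost7 deleted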
Figure 5-25 Virtual Fibre Channel adapter added to client profile
6. Dynamically remove the virtual SCSI server adapters from the Virtual I/O Server and add a virtual Fibre Channel adapter.
On the Virtual I/O Server
In the following steps, the adapters defined in the previous steps will be configured and the mappings to the disk from the Virtual I/O Server to the client partition created:
7.
are ported to the second port on the adapter, so fcs1 is the correct adapter to use.
$ lsdev | grep fcs
fcs0    Available    8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1    Available    8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs2    Available    FC Adapter
fcs3    Available    FC Adapter
The vfcmap command is used to create the virtual Fibre Channel mappings. The lsmap command shows a NOT_LOGGED_IN state because our client is currently shut down.
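The commands themselves are not reproduced here. A sketch of how the mapping is typically created and verified, assuming the new virtual Fibre Channel server adapter is vfchost0:
$ vfcmap -vadapter vfchost0 -fcp fcs1
$ lsmap -vadapter vfchost0 -npiv      (Status shows NOT_LOGGED_IN while the client is shut down)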
hdisk1 Available C8-T1-01 MPIO Other DS4K Array Disk
# lsdev -l hdisk2
hdisk2 Available C8-T1-01 MPIO Other DS4K Array Disk
The remaining commands provide additional evidence that hdisk1 and hdisk2 are in fact the same disks that were visible on the original client partition. Compare the output below to the output gathered from the pre-migration source partition.
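That output is not reproduced here. A sketch of the comparison that can be made on the client; the pvid values should match the two PVIDs recorded on the source partition (000fe4117e88efc0 and 000fe41181e1734c):
# lspv
# lsattr -El hdisk1 -a pvid
# lsattr -El hdisk2 -a pvid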
Chapter 6. Standalone SAN rootvg to virtual Fibre Channel
In this chapter we show you how to migrate a standalone machine’s rootvg on storage area network (SAN) LUNs to a Virtual I/O Server client partition that will have its rootvg on SAN LUNs mapped using virtual Fibre Channel (NPIV). Figure 6-1 on page 146 provides a graphical representation of the procedure to perform.
(The figure shows the standalone AIX server's rootvg LUN on a DS4800 storage device being re-zoned from the server's dedicated Fibre Channel adapters to the Virtual I/O Server on an IBM Power 550, which presents it to the client LPAR over virtual Fibre Channel.)
Figure 6-1 Migrate standalone SAN rootvg to client partition SAN rootvg over Virtual Fibre Channel
In the scenario described below, it is assumed that you already have:
- A running standalone host with rootvg on a SAN LUN
- A Virtual I/O Server with a physical NPIV-capable Fibre Channel adapter
On the standalone source host
The lspv command below shows us that rootvg is on hdisk8. Thus, our machine was booted from hdisk8.
# lspv
hdisk0    000fe4012a8f0920    None
hdisk1    none                None
hdisk2    000fe4012913f4bd    None
hdisk3    none                None
hdisk4    000fe401106cfc0c    None
hdisk5    000fe4012b5361f2    None
hdisk6    none                None
hdisk7    none                None
hdisk8    000fe401727b47c5    rootvg    active
The following lsdev commands confirm that hdisk8 is a LUN on a storage array that is mapped to the client through a Fibre Channel adapter.
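The lsdev output is not reproduced here. Commands along the following lines trace hdisk8 back to its Fibre Channel adapter (device names other than hdisk8 are assumptions):
# lsdev -l hdisk8
# lsdev -l hdisk8 -F parent           (returns the fscsi device, for example fscsi0)
# lsdev -l fscsi0 -F parent           (returns the physical adapter, for example fcs0)
# lscfg -vl hdisk8                    (LUN and location details on the storage array)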
Note: Be sure to have the virtual Fibre Channel client fileset installed on the standalone SAN rootvg before shutting down your standalone machine for migration. It will be required for virtual Fibre Channel support when rootvg is started on the client partition.
On the HMC
Create the virtual Fibre Channel mappings that will allow the client partition to see what was previously the standalone server’s rootvg SAN LUN.
2. Create the virtual Fibre Channel server adapter on the Virtual I/O Server.
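One way to confirm that the fileset mentioned in the Note is installed before shutting down the standalone machine (a sketch; the fileset level reported will vary):
# lslpp -l "devices.vdevice.IBM.vfc-client*"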
partition. Something similar to the highlighted portion in Figure 6-3 is what you should see when this step is complete.
Figure 6-3 Virtual Fibre Channel client adapter defined in client logical partition profile
On the Virtual I/O Server
You will now activate the virtual adapters defined in the previous step and map the virtual adapter to the physical Fibre Channel adapter.
4. Run the cfgdev command to configure the virtual Fibre Channel adapter.
5.
6. Get the list of all available physical Fibre Channel server adapters. As you can see from the lsdev command output, our NPIV-supported dual-port Fibre Channel card is at fcs0 and fcs1. Since only the second port is cabled on the card in this test environment, fcs1 must be selected.
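The lsdev output and the subsequent mapping commands are not reproduced here. They would look something like the following sketch, where vfchost0 is an assumed adapter name:
$ lsdev -type adapter | grep fcs      (list the physical Fibre Channel adapters)
$ lsnports                            (confirm NPIV readiness: the fabric attribute should be 1)
$ vfcmap -vadapter vfchost0 -fcp fcs1 (map the virtual adapter to the cabled port)
$ lsmap -all -npiv                    (verify the mapping)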
11. Verify that the client has booted, via the virtual Fibre Channel adapter, with the same LUN that was on the standalone machine. The getconf command is another way to discover the boot device. The lspv command gives us added confirmation that rootvg is on hdisk8, and the lsdev and lscfg commands show us that hdisk8 is a SAN disk.
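The output is not reproduced here. A sketch of the verification commands on the client partition, using the hdisk8 name from this scenario:
# getconf BOOT_DEVICE                 (should return hdisk8)
# lspv | grep rootvg                  (rootvg should be on hdisk8)
# lsdev -l hdisk8                     (disk state and location code)
# lscfg -vl hdisk8                    (LUN details behind the virtual Fibre Channel adapter)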
Chapter 7. Direct attached Fibre Channel devices partition to virtual Fibre Channel
This chapter provides instructions for the migration of a logical partition that uses direct-attached Fibre Channel resources (such as a tape drive) to a logical partition with the Fibre Channel devices virtualized through the Virtual I/O Server and a virtual Fibre Channel capable adapter.
In Figure 7-1, LPAR1 and the Virtual I/O Server can both access the LTO4 tape drive since both have a dedicated adapter with SAN zoning in place. The migration process removes the dedicated tape access from LPAR1 and re-presents the tape drive using the virtual Fibre Channel capability of the VIOS.
The steps required to accomplish this are covered in the following section.
On the client partition: part 1
On the client logical partition, perform the following steps:
1. Identify which Fibre Channel card and port is being used by the tape device.
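A sketch of commands that trace the tape device back to its Fibre Channel adapter; rmt0 and the device names returned are assumptions for this example:
# lsdev -Cc tape                      (list tape devices, for example rmt0 and smc0)
# lsdev -l rmt0 -F parent             (returns the fscsi device, for example fscsi2)
# lsdev -l fscsi2 -F parent           (returns the physical adapter, for example fcs2)
# lscfg -vl fcs2                      (location code and WWPN of that adapter port)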
4. Remove the tape devices and library device, if present, from the system. The AIX rmdev command can be used for this purpose:
rmdev -dl rmt0
rmdev -dl smc0
5. Remove the Fibre Channel device from AIX. The -R flag used with the rmdev command removes the fcnet and fscsi devices at the same time. Be careful if you are using a dual-ported Fibre Channel card.
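The command for step 5 is not shown above. Assuming the adapter identified earlier is fcs2, it would look something like this sketch:
# rmdev -dl fcs2 -R                   (removes fcs2 together with its child fcnet and fscsi devices)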
b. Select the Virtual I/O Server partition on which the virtual Fibre Channel adapter is to be configured. Then select Tasks → Dynamic Logical Partitioning → Virtual Adapters, as in Figure 7-2.
Figure 7-2 Dynamically add virtual adapter
c. Create a virtual Fibre Channel server adapter. Select Actions → Create Fibre Channel Adapter, as in Figure 7-3.
d. Enter the virtual slot number for the virtual Fibre Channel server adapter, then select the client partition to which the adapter may be assigned and enter the client adapter ID, as in Figure 7-4.
Figure 7-4 Set virtual adapter ID
Click OK.
e. Remember to update the profile of the Virtual I/O Server partition so that the change is reflected across restarts of the partition. As an alternative, you may use the Configuration → Save Current Configuration option to save the changes to a new profile. See Figure 7-5, which shows a panel similar to what your HMC will present.
Figure 7-5 Save the Virtual I/O Server partition configuration
f. Change the name of the profile if required and click OK.
8. To create the virtual Fibre Channel client adapter in the client partition:
a. Select the client partition on which the virtual Fibre Channel adapter is to be configured. Then select Tasks → Configuration → Manage Profiles, as in Figure 7-6.
Figure 7-6 Change profile to add virtual Fibre Channel client adapter
b. To create the virtual Fibre Channel client adapter, select the profile, then select Actions → Edit. Expand the Virtual Adapters tab and select Actions → Create Fibre Channel Adapter, as in Figure 7-7.
c. Enter the virtual slot number for the virtual Fibre Channel client adapter. Then select the Virtual I/O Server partition to which the adapter may be assigned and enter the server adapter ID, as in Figure 7-8. Click OK.
Figure 7-8 Define virtual adapter ID values
d. Click OK → OK → Close.
On the Virtual I/O Server
On the Virtual I/O Server, ensure the correct setup for virtual Fibre Channel:
9. Log in to the Virtual I/O Server partition as user padmin.
10.
12. Run the lsnports command to check the NPIV readiness of the Fibre Channel adapter and the SAN switch. In the example below, the fabric attribute is set to a value of 1, which confirms that the adapter and the SAN switch are NPIV enabled. If the fabric attribute is equal to 0, then the adapter or the SAN switch (or both) are not NPIV ready and you must check the configuration:
$ lsnports
name    physloc                        fabric tports aports swwpns awwpns
fcs1    U78A0.001.DNWGCV7-P1-C1-T2     1      64     64     2048   2048
13.
On the HMC: part 2
Now you have created the virtual Fibre Channel adapters on both the Virtual I/O Server and the client partition. You must correct the SAN zoning in the SAN switch. Use the HMC to get the correct port details:
15. To determine the worldwide port names (WWPNs) to be used in the new SAN zoning, perform the following steps:
a. On the HMC select the appropriate virtual I/O client partition, then click Task → Properties.
b. Figure 7-10 shows the properties of the virtual Fibre Channel client adapter. Here you can get the WWPN that is required for the SAN zoning.
Figure 7-10 Virtual Fibre Channel client adapter properties
c. You can now log on to the SAN switch and use these values to fix the zone members.
Note: The steps to perform SAN zoning are not shown. Refer to other IBM Redbooks publications and SAN implementation manuals for guidelines and advice.
sub-command can be shortened from the prior example and typed on one line:
# tapeutil -f /dev/rmt0 inquiry 83
Issuing inquiry for page 0x83...
Inquiry Page 0x83, Length 74
(The hexadecimal dump of the inquiry page is not reproduced here; its ASCII portion shows the IBM ULT3580-TD4 device identification and the string 1310025518.)
Drive Address 256
Drive State ....................
ASC/ASCQ .......................
Media Present ..................
Robot Access Allowed ...........
Source Element Address .........
Media Inverted .................
Same Bus as Medium Changer .....
SCSI Bus Address Valid .........
Logical Unit Number Valid ......
Volume Tag .....................
Drive Address 257
Drive State ....................
ASC/ASCQ .......................
Media Present ..................
Robot Access Allowed ...........
Abbreviations and acronyms
AIX      Advanced Interactive Executive
APAR     Authorized Program Analysis Report
API      Application Programming Interface
BLV      Boot Logical Volume
CD       Compact Disk
CD-R     CD Recordable
CD-ROM   Compact Disk-Read Only Memory
ISO      International Organization for Standards
ITSO     International Technical Support Organization
LAN      Local Area Network
LPAR     Logical Partition
LPP      Licensed Program Product
LUN      Logical Unit Number
LV       Logical Volume
LVCB     Logical Volume Control Block
SDD      Subsystem Device Driver
SMIT     System Management Interface Tool
SMS      System Management Services
SP       Service Processor
SPOT     Shared Product Object Tree
SRC      System Resource Controller
SRN      Service Request Number
SSA      Serial Storage Architecture
SSH      Secure Shell
SSL      Secure Socket Layer
SUID     Set User ID
SVC      SAN Virtualization Controller
TCP/IP   Transmission Control Protocol/Internet Protocol
TSM      Tivoli Storage Manager
UDF      Universal Disk Format
UDID     Universal Disk Identification
VG       Volume Group
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “How to get Redbooks” on page 171. Note that some of the documents referenced here may be available in softcopy only.
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.
Index A AIX alt_disk_copy 68 alt_rootvg_op 74 bootlist 60, 71, 100, 115 bosboot 60, 71, 100 cfgdev 129 cfgmgr 55, 71, 166 dd 85 extendvg 58 getconf 41, 115, 151 ls 11 lsattr 57, 67, 86, 91, 117 lscfg 85, 102, 118 lsdev 14, 57, 90, 102, 118, 138, 147, 155 lspv 14, 59, 67, 90, 115, 138, 147 lsvg 46, 55, 66, 83, 116 migratepv 59 mkcd 11, 80 mpio_get_config 115, 135 NIM 74 restvg 82 rmdev 70, 156 savevg 84 shutdown 93, 119 smitty mkcd 10 tail 103, 113, 138, 147, 151 tapeutil 155 telnet 37 touch 116 alt_disk_cop
Brocade portcfgshow 37 portLoginShow 39 portshow 38 telnet 37 smitty mkcd 46 VIOS cfgdev 24, 60, 96, 109, 149, 163 chkdev 4, 17, 60, 71, 98, 109 PHYS2VIRT_CAPABLE 17, 110 VIRT2NPIV_CAPABLE 17, 111 VIRT2PHYS_CAPABLE 17, 111 chrep 12 loadopt 12, 50, 81 lsdev 47, 130 lsmap 12, 48, 98, 110, 131, 139, 164 lsnports 31, 150, 164 fabric attribute 31 lsrep 49, 80 mkrep 12, 49, 80 mkvdev 12, 47, 76, 80, 87, 99, 111 mkvopt 12 mkvopy 49 oem_setup_env 71 rmdev 140 rmvdev 139 rmvopt 12 unloadopt 13, 52 vfcmap 31, 131, 15
NOT_LOGGED_IN 142 NPIV 26, 147 enabling 26 requirements 26 O oem_setup_env 71 P padmin 97 PHYS2VIRT_CAPABLE 17, 72, 98, 110–111 physical location code 92 physical partition size 46 physical to virtual compliance 4 physical volume identifier 14 portcfgshow 37 portLoginShow 39 portshow 38 PowerPath 5 pvid 14, 98 smitty mkcd 10 SMS boot mode 65, 112 source system 10 standalone server 2 storage 2 System Backup and Recovery 75 systems enclosure 113 T tail 103, 113, 138, 147, 151 tapeuti inventory command 167
rmvopt 12 slot number 110 unloadopt 13, 52 vfcmap 31, 131, 150, 164 viosbr 9 viosbr 7, 9 VIRT2NPIV_CAPABLE 17, 111 VIRT2PHYS_CAPABLE 17, 111 virtual fibre channel 26, 114 client fileset required devices.vdevice.IBM.vfc-client.
Back cover ® PowerVM Migration from Physical to Virtual Storage Moving to a Virtual I/O Server managed environment Ready-to-use scenarios included AIX operating system based examples ® IT environments in organizations today face more challenges than ever before. Server rooms are crowded, infrastructure costs are climbing, and right-sizing systems is often problematic. In order to contain costs there is a push to use resources more wisely to minimize waste and maximize the return on investment.