HP StorageWorks SAN Virtualization Services Platform administrator guide Part number: 5697–8056 Second edition: March 2009
Legal and notice information © Copyright 2008-2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents About this guide ................................................................................... 9 Intended audience ...................................................................................................................... 9 Prerequisites ............................................................................................................................... 9 Related documentation ...............................................................................................
Windows multipathing ................................................................................................. Presenting SVSP virtual disks to servers .................................................................................. Creating the user defined host (UDH) ............................................................................. Creating VSM virtual disks ............................................................................................ Defining hosts in SVSP ...........
The asynchronous mirror decision table ........................................................................................ Establishing a disaster recovery site ............................................................................................. Testing or validating your ability to recover from a DR site without detaching or splitting the async mirror group ............................................................................................................................
VMware storage administration best practices ............................................................................ Rescan SAN operations .................................................................................................... Resizing VSM virtual disks ................................................................................................. Storage VMotion ..............................................................................................................
Figures 1 SAN Virtualization Services Platform overview ............................................................ 14 2 The basics .............................................................................................................. 15 3 Management interface ............................................................................................ 16 4 Install/Restore License Key screen of the Launch AutoPass window ................................. 20 5 License dialog box .....................
Tables 1 Document conventions ............................................................................................. 10 2 VSM license types ................................................................................................... 19 3 License capacities ................................................................................................... 21 4 Zoning example for back-end using first DPM quad of first DPM group ...........................
About this guide This guide provides information about HP StorageWorks SAN Virtualization Services Platform and explains how to perform common administrative tasks. Intended audience This guide is intended for operators and administrators of storage area networks (SANs) that include supported HP storage arrays.
Document conventions and symbols Table 1 Document conventions Convention Element Blue text: Table 1 Cross-reference links and e-mail addresses Blue, underlined text: http://www.hp.
HP technical support For worldwide technical support information, see the HP support website: http://www.hp.com/support Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions
Customer self repair This product has no customer-replaceable components.
• For HP StorageWorks SAN Virtualization Service Platform: SVSP@hp.
1 SAN Virtualization Services Platform overview Figure 1 shows the relationships of the major components that comprise the HP StorageWorks SAN Virtualization Services Platform (SVSP) solution. In describing the solution, this document uses the following terms: • SVSP domain—The two Data Path Modules (DPMs), two Virtualization Services Manager (VSM) servers, and the arrays that are providing block-based storage services to the DPM and SVSP devices.
1 Customer servers
2 Customer SAN, dual-fabric
3 Virtualized I/O as seen by server
4 Virtualized I/O as seen by array
5 Data Path Modules
6 Non-virtualized I/O
7 VSM servers
8 SVSP domain
9 HP Command View management platform
Figure 1 SAN Virtualization Services Platform overview Data Path Modules One way to think of the pair of DPMs is as being similar to a pair of array controllers, only with 8 host ports and 8 back-end ports per controller.
• Asynchronous remote replication In addition to creating and managing the many objects that exist within the SVSP domain, another purpose of the VSM user interface is to define the virtualization maps used by the DPM to perform the virtualization of the I/O. These virtualization maps exist for each SVSP-defined virtual disk, plus all point-in-time copies, snapshots, snapclones, and remote asynchronous mirrors.
the New option to create an instance of that object. You can also right-click on an object in the right pane to manage that object. Figure 3 Management interface The following sections describe steps that need to be performed in creating the initial pool of SVSP blocks, creating a virtual disk or LUN out of those blocks, and then presenting that virtual disk to a server. Common SVSP tasks to get started These tasks allow you to make new virtual disks available for use.
Creating SVSP virtual disks See the “Working with virtual disks” chapter in the HP StorageWorks SAN Virtualization Services Platform Manager user guide for detailed information on viewing and managing virtual disks. Presenting SVSP virtual disks to application servers See “Presenting SVSP virtual disks to servers” on page 31. Defining hosts in SVSP See “Presenting SVSP virtual disks to servers” on page 31.
Best practices The key to a successful SVSP implementation is planning ahead in terms of how the existing storage array is used to build the needed SVSP storage pools. For example, while it is possible to build one large pool and carve it up into many small LUNs, such an implementation might negatively impact system performance. To reduce the potential for performance issues, you need to build the pool and back-end LUs to an appropriate size.
2 Adding devices to the domain This chapter describes how to expand your SVSP domain by the addition of licensed capacity, arrays, DPMs, and servers. Adding servers requires that you consider the appropriate multipathing for the operating system and switch zoning. Only presentation for an HP Enterprise Virtual Array (EVA) and Modular Storage Array (MSA) is discussed. Please consult the vendor documentation for other types of arrays.
License type Description Enables you to perform the following operations between domains in addition to the basic license: Continuous Access (CA) • Supports remote snapclones between two domains. • Supports remote async mirrors between a single source domain and up to three remote domains. SVSP Data Path Modules 4-port upgrade LTU Enables you to turn on additional DPM ports on both DPMs within the same DPM group. NOTE: BC is intra-domain, while CA is domain-to-domain. Installing a license key file 1.
Viewing licensed capacity From within the VSM client, in the menu bar, click Tools > License. The License dialog box appears. Figure 5 License dialog box The following table describes the capacities listed in the License dialog box. Each capacity features its total amount, the amount used, and the amount available. Table 3 License capacities Property Description Basic capacity The amount of licensed capacity allotted for basic operations (for example, the maximum size of all pools).
• Array to VSM servers • If adding the array also involves using a new DPM quad, add the new DPM quad to the VSM server zones and verify that the DPM ports are licensed. • Using the management interface of the new array, create or define DPMs and VSMs as hosts, and then present the back-end LUs to the DPMs and VSMs. Each pair of quads should have a unique host definition. • Refresh the VSM software using the VSM refresh button.
1. Configure the MSA Storage Management Utility for an MSA2012fc and run the utility. A status message screen is displayed. 2. Click the Manage option, click Create a vdisk from the drop-down menu, and then select Automatic Virtual Disk Creation (Policy-based). The following screen appears.
3. Enter a virtual disk name, tolerance level, size of virtual disk, and number of volumes. Click Create virtual disk. The following screen is displayed. 4. Click Create New Virtual Disk and a processing message appears, as shown on the following screen.
5. After the virtual disk and volumes are created successfully, the volumes can be discovered as back-end LUs from VSM GUI as shown below. 6. Create a storage pool using the MSA back-end LUs. Use this pool to create SVSP virtual disks based on your requirements. NOTE: • Do not expand LUNs that are already configured in VSM. If a particular storage pool runs out of free space and you want to expand it, create a new virtual disk on the MSA and add it to the same pool.
NOTE: • Do not expand LUNs that are already configured in VSM. If a particular storage pool runs out of free space and you want to expand it, create a new virtual disk on the array and add it to the same pool. • Do not change LUN numbers for LUNs that are already managed by VSM. • Remove LUNs from VSM management only after releasing them from storage pools and stripe sets. • Do not use any array-based features such as local replication on any virtual disk presented to the SVSP domain.
Installing multipath applications HP-UX multipathing HP-UX 11iv2 NOTE: Secure Path requires a right-to-use license per server. 1. Go to http://h18006.www1.hp.com/products/sanworks/secure-path/hp-ux.html. 2. Under Support, click Software updates. 3. Under Select your product, click HP StorageWorks Secure Path for HP-UX Software. 4. Click Download drivers and software. 5. Select a language and operating system. 6. Either select the correct product description, or click Download for that product.
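Once the Secure Path depot has been downloaded, the installation itself uses the standard HP-UX Software Distributor (SD-UX) tools. The following is a hedged sketch only: the depot path is a placeholder, not an actual HP file name, and the exact product selection may differ on your system.

```shell
# Minimal sketch of installing a downloaded software depot on HP-UX 11iv2
# with the standard SD-UX tools. The depot path below is a placeholder.
DEPOT=/tmp/secure_path.depot

swlist -s "$DEPOT"            # list the products the depot contains
swinstall -s "$DEPOT" \*      # install everything in the depot
swlist | grep -i "Secure"     # confirm the product is now installed
```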
Linux multipathing NOTE: Only the QLogic multipathing driver is supported at this time. 1. Go to http://h18006.www1.hp.com/products/sanworks/softwaredrivers/multipathoptions/linux.html. 2. Select the QLogic driver. 3. Select the RedHat or SUSE Linux operating system. 4. Click Download for that product. Optionally, you can select the correct product description to verify you made the correct selection, and download from that page. 5.
4. Make sure you are using the correct HBA driver. VMware multipathing For VMware, the multipathing policy must be set to the Most Recently Used (MRU) path. Windows multipathing 1. Go to http://h18006.www1.hp.com/products/sanworks/softwaredrivers/multipathoptions/windows.html. 2. Under Select your product, click Windows MPIO DSM for SVSP. This product contains an active-passive multipath driver. 3. Select your operating system. 4. Select your software/driver language. 5.
5. Bind the WWPN to a Target ID. 6. Click Save, and enter the password on the security check popup. If this is the first time you have used the SanSurfer application, and you have not changed the default password, the password is config. 7. Restart the server to make the changes effective. The HBA saves this information. It makes no difference in what order the arrays are scanned. The HBA assigns the saved target ID to the WWPN.
5. Restart the server to activate the changes. Presenting SVSP virtual disks to servers The HBAs of a server need to be defined so that the DPM can customize its interface to the operating system of the server that will be using the virtual disk. This is done by creating a user defined host (UDH), which is an alias definition for the server's HBAs with a common property that sets the operating system. The DPMs use the UDH for selective LUN presentation to one or more UDHs.
5. Run ioscan on all HP-UX hosts. 6. Right-click on HP-UX UDHs and set the hosts to online. Linux servers It may be necessary to reboot the Linux server to allow the discovery of HBAs by the VSM application. VMware servers To discover a new SVSP virtual disk: 1. Launch the Virtual Infrastructure client. 2. Select the ESX server to which you have presented a new virtual disk. 3. Select the Configuration tab. 4. On the left side, under Hardware, select Storage Adapters. 5.
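The discovery steps above have command-line equivalents on each platform. The following is a hedged sketch; the ESX storage adapter name (vmhba1) is an example and will differ on your hosts.

```shell
# Rescan sketches for discovering a newly presented SVSP virtual disk.

# HP-UX: rescan for new disk devices, then create their device files
ioscan -fnC disk
insf -e

# VMware ESX: rescan a specific storage adapter from the service console
# (the adapter name vmhba1 is an example)
esxcfg-rescan vmhba1
```

For Linux hosts, as noted above, a reboot is the supported way to rediscover HBAs.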
• The shadow copy mechanism. VSS provides fast volume capture of the state of a disk at one instant in time—a shadow copy of the volume. This volume copy exists side by side with the live volume, and contains copies of all files on disk effectively saved and available as a separate device. • Consistent file state through application coordination.
Installing the SVSP VSS hardware provider on the host server NOTE: For VSS to work correctly, you must first install the SVSP full-featured DSM. You can install the SVSP VSS hardware provider using a single installation package. The final step in the installation process automatically starts the SVSP VSS Hardware Provider Service. 1. Run the SVSP VSS installation file. You can find the SVSP VSS installation file on the VSM installation CD, or you can download the file from the web.
2. Click Next. The Select Installation Folder window appears. If you want to change to a different installation folder, select Browse, and enter the location that you want. 3. Click Next. The Confirm Installation window appears.
4. If you want to make changes to your installation, click Back until you arrive at the window where you can make the change. If you are satisfied with your installation choices, click Next to start the installation. After the SVSP VSS hardware provider is installed, the Installation Complete window appears. 5. Click Close to exit the installation wizard. 6.
7. Configure the SVSP VSS hardware provider user names and passwords for accessing the SVSP domains by performing these steps. a. Open a DOS command prompt window (click Start > Run, and type cmd). b. Use the change directory command (CD) to navigate to the installation folder for the SVSP VSS hardware provider. The default folder is C:\Program Files\Hewlett-Packard\SVSP VSS Hardware Provider\. c. Type SaHWConfig and press Enter. The information returned by the SaHWConfig command is shown in Figure 7.
4. From Computer Management on the host server, run a scan for hardware changes. 5. After the scan finishes, open Disk Management. 6. Identify the new disk and create the new disk as a single primary partition with a new drive letter. In the following steps, the examples of commands assume that the VSM virtual disk was created and assigned to use drive letter m:.
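Steps 4 through 6 can also be scripted at a Windows command prompt. The sketch below is an assumption-laden example: the disk number (2) is a placeholder that you must confirm against the output of list disk before running anything against a real system.

```shell
rem Hedged sketch: scripted equivalent of steps 4-6 using diskpart.
rem Disk number 2 is a placeholder; confirm the new disk's number with
rem "list disk" before using this on a real server.
(
  echo rescan
  echo list disk
  echo select disk 2
  echo create partition primary
  echo assign letter=m
) > diskpart_script.txt
diskpart /s diskpart_script.txt
format m: /fs:ntfs /q /y
```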
7. In the DOS command prompt window on the host server, type vshadow.exe -p m: and press Enter. This command creates a persistent shadow copy on drive m:. The drive letter is the one that you assigned to the new drive in step 6. The shadow copy is a read-only point-in-time replica of the original volume contents. A persistent shadow copy remains in the system until you, or the backup application, initiates an explicit command to delete the shadow copy.
Figure 8 Results of the vshadow.exe -p m: command in the DOS command prompt window Figure 9 shows an example of the hierarchical snapshot structure that is created on the VSM. Both the PiT name and the snapshot name include the initial part of the shadow copy set number.
Figure 9 Results of the vshadow.exe -p m: command in the VSM GUI The full shadow copy set number appears in the comment field of the PiT and snapshot in the VSM. Figure 10 shows information that appears in the comment field. Figure 10 Shadow copy set number information in the PiT Comment field To see views that are created by VSS, select Tools > Options > General > Data presentation and select the Show VSS Views checkbox. 8. In the DOS command prompt window on the server, mount the view by typing vshadow.
10. Remove the VSS shadow copy by typing vshadow.exe -ds={SnapShotID} and pressing Enter. The vshadow.exe -ds={SnapShotID} command unmounts the snapshot on the host and deletes the snapshot and PiT on the VSM. 11. To create a persistent VSS shadow copy with a snapshot that can be presented to another host, type vshadow.exe -p -t=export.xml m: and press Enter. The vshadow.exe -p -t=export.xml m: command creates a shadow set that you can transport to another host.
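The full shadow-copy cycle from steps 7 through 11 can be summarized as the following command sequence, run at a Windows command prompt on the host. This is a hedged sketch: {SnapShotID} is taken from the output of the create step, and x: is an example mount point, not a value mandated by VSS.

```shell
rem 1. Create a persistent shadow copy of drive m:
vshadow.exe -p m:

rem 2. Expose (mount) the shadow copy at x: so its files can be read
vshadow.exe -el={SnapShotID},x:

rem 3. When finished, unmount and delete the shadow copy
rem    (this also deletes the snapshot and PiT on the VSM)
vshadow.exe -ds={SnapShotID}

rem Transportable variant: also write the shadow set metadata to export.xml
rem so the snapshot can be imported on a second host (for example, a
rem backup/media server)
vshadow.exe -p -t=export.xml m:
```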
The following images show a configuration example of Veritas NetBackup software that uses VSS snapshots. The configuration consists of two servers:
• One server runs the application and has the appropriate backup client installed.
• The second server runs the backup software and also acts as the media server.
Figure 11 shows a configured MS-Windows-NT backup policy for three drives (x, y, w) on a computer named SRV-00-016. The backup is written to a storage unit labeled srv-00-015-disk.
Figure 12 Example of a disk drive acting as a media server Figure 13 shows the attributes of the backup policy. Note that this policy is configured to perform snapshot backups.
Figure 13 Backup policy attributes Figure 14 shows that VSS was selected as the snapshot method for use. VSS was selected through the Advanced Snapshot Options... button shown in Figure 13. Figure 14 VSS selected as the snapshot method VSS deployment with VSM virtual disk groups To reference multiple VSM virtual disks as a single entity, you must place the VSM virtual disks in a virtual disk group (VDG).
on all VDG members. VDGs are often used to encapsulate data files and log files of the same database into one entity. From a server perspective, the data files and the log files reside on two separate drives. From a backup and recovery perspective, the data files and the log files are two components of a single entity. A backup snapshot must be synchronously captured on both the data drive and the log drive.
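Because VSS passes every volume of one shadow-copy set to the provider in a single request, a VDG-backed database can be captured consistently by listing both drives in one vshadow invocation. The sketch below assumes the data drive is m: and the log drive is n:; both drive letters are examples.

```shell
rem Hedged sketch: one consistent shadow-copy set spanning the data drive (m:)
rem and the log drive (n:) of the same database. This assumes both VSM
rem virtual disks are members of the same VDG.
vshadow.exe -p m: n:
```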
3 Zoning Zoning is used to partition a fabric into logical groups of devices that can access members of the zone and are restricted from accessing devices not in the zone.
Back-end zones for capacity-based zoning Each VSM fabric has a back-end zone to connect the DPM initiator ports, VSM server, Command View EVA server, and the storage array. The first column of the table describes the zones and is given a name (called an alias), which mainly describes the components in that zone. Some abbreviations used in the aliases are back-end (BE), VSM server 1 on blue fabric (VSM_1B), and VSM server 2 on blue fabric (VSM_2B).
Capacity-based zoning example NOTE: Only one DPM group is shown in Figure 15 through Figure 17 and Table 4 through Table 7. When there are multiple DPM groups within an SVSP domain, then: • In the back-end, there are additional DPM-to-array and DPM-to-VSM zones that duplicate those shown in the following figures and tables. • In the front end, there are additional server-to-DPM target zones that are unique to the added servers.
Figure 15 shows the zoning for a VSM to EVA storage array with two zones and two fabrics.
Figure 16 shows the zoning for a VSM to DPM with four zones and two fabrics. Figure 16 VSM-to-DPM zoning example Fabric A is zoned as follows:
1. DPM1_Port1 + VSM1_HBA1_Port1 + VSM1_HBA2_Port1 + VSM2_HBA1_Port1 + VSM2_HBA2_Port1
2. DPM2_Port1 + VSM1_HBA1_Port1 + VSM1_HBA2_Port1 + VSM2_HBA1_Port1 + VSM2_HBA2_Port1
Fabric B is zoned as follows: 1. 2.
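On a Brocade fabric, the first Fabric A zone above could be expressed with Fabric OS commands along the following lines. This is a hedged sketch only: the WWPNs are placeholders you must replace with the real port WWNs, the configuration name is an example, and other switch vendors use different syntax.

```shell
# Create one alias per port (WWPNs below are placeholders)
alicreate "DPM1_Port1",      "50:00:00:00:00:00:00:01"
alicreate "VSM1_HBA1_Port1", "50:00:00:00:00:00:00:11"
alicreate "VSM1_HBA2_Port1", "50:00:00:00:00:00:00:12"
alicreate "VSM2_HBA1_Port1", "50:00:00:00:00:00:00:21"
alicreate "VSM2_HBA2_Port1", "50:00:00:00:00:00:00:22"

# Create the zone from the aliases, add it to the fabric config, and enable it
zonecreate "BE_DPM1_VSM_A", "DPM1_Port1; VSM1_HBA1_Port1; VSM1_HBA2_Port1; VSM2_HBA1_Port1; VSM2_HBA2_Port1"
cfgadd "FabricA_cfg", "BE_DPM1_VSM_A"
cfgenable "FabricA_cfg"
```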
Figure 17 shows the zoning for a DPM to EVA array with four zones and two fabrics. Figure 17 DPM-to-EVA array zoning example Fabric A is zoned as follows:
1. DPM1_Port1 + Storage-EVA Ctrl-A_Port1 + Ctrl-B_Port1
2. DPM2_Port1 + Storage-EVA Ctrl-A_Port1 + Ctrl-B_Port1
Fabric B is zoned as follows: 1. 2.
Table 4, Table 5, Table 6, and Table 7 are examples of a DPM being zoned by each quad.
Zone name BE_ARRAY1_VSM_RED Zone members—aliases Port connections ARRAY1_RED ARRAY1 ports 11/13 VSM1_RED VSM1 ports 15/17 VSM2_RED VSM2 ports 19/21 DPM1Q2_BLUE DPM1 port 3 DPM2Q2_BLUE DPM2 port 3 VSM1_BLUE VSM1 ports 16/18 VSM2_BLUE VSM2 ports 20/22 DPM1Q2_BLUE DPM1 port 3 DPM2Q2_BLUE DPM2 port 3 ARRAY1_BLUE ARRAY1 ports 2/4 ARRAY1_BLUE ARRAY1 ports 2/4 VSM1_BLUE VSM1 ports 16/18 VSM2_BLUE VSM2 ports 20/22 Blue fabric BE_DPMQ2_VSM_BLUE BE_DPMQ2_ARRAY1_BLUE BE_ARRAY1_VSM_BLUE
Zone name BE_ARRAY1_VSM_BLUE Zone members—aliases Port connections ARRAY1_BLUE ARRAY1 ports 2/4 VSM1_BLUE VSM1 ports 16/18 VSM2_BLUE VSM2 ports 20/22 Table 7 Zoning example for back-end using fourth quad of first DPM group Zone name Zone members—aliases Port connections DPM1Q4_RED DPM1 port 1 DPM2Q4_RED DPM2 port 1 VSM1_RED VSM1 ports 15/17 VSM2_RED VSM2 ports 19/21 DPM1Q4_RED DPM1 port 1 DPM2Q4_RED DPM2 port 1 ARRAY1_RED ARRAY1 ports 11/13 ARRAY1_RED ARRAY1 ports 11/13 VSM1_RED
Performance zoning Back-end zones for performance-based zoning Figure 18 Red fabric array-to-DPM zoning Figure 19 Red fabric DPM-to-VSM server zoning Figure 20 Red fabric array-to-VSM server zoning
Figure 21 Blue fabric array-to-DPM zoning Figure 22 Blue fabric DPM-to-VSM server zoning Figure 23 Blue fabric array-to-VSM server zoning Front-end zones for performance-based zoning Figure 24 DPM-to-application server zoning
4 Monitoring the SVSP domain This chapter describes how to set up monitoring for an SVSP domain using administrative tools. Monitoring system performance There are two ways to monitor SVSP performance: • Monitor the application server-to-array throughput using the Fibre Channel switch vendor performance tools. • Monitor the internal VSM data moving performance using a tool for collecting performance data like Microsoft's Perfmon, which is described below.
5. In the log settings window, click Add Counters. 6. In the drop-down box under select counters from computer, choose or enter the IP address of the VSM server that is to be monitored. Add any counters you want to monitor. 7. Click Close. 8. In the Interval field, select the time interval for data to be sampled. You can start with 15 seconds, but you may need to occasionally use 3 seconds for more precise data. 9. In the Run As: field, enter the user name and password needed to access the VSM.
Using Perfmon counters to log Perfmon has many counters available, but your data becomes harder to monitor if you have to sort through too much. To learn about a counter, select it, and then click the Explain button. Choose the category from the Performance object drop-down menu. Some counters with similar purposes (for example, Processor: %, Processor Time, and System: Processor Queue Length) are in different categories.
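The GUI steps in this section can also be scripted with the Windows logman utility. The sketch below is hedged: the server name, counter paths, output path, and account are all examples, and the VSM-specific counters available on your server will differ.

```shell
rem Hedged sketch: creating and starting a counter log with logman instead
rem of the Perfmon GUI. "*" makes logman prompt for the account's password.
logman create counter VSM_Perf ^
  -s VSM-SERVER-NAME ^
  -c "\Processor(_Total)\% Processor Time" "\System\Processor Queue Length" ^
  -si 00:00:15 ^
  -f csv ^
  -o C:\PerfLogs\vsm_perf ^
  -u DOMAIN\vsmadmin *
logman start VSM_Perf
```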
Troubleshooting Perfmon Table 8 describes potential Perfmon problems and possible corrective actions. Table 8 Troubleshooting Perfmon
Problem: Perfmon log does not start or is not working.
Corrective action:
• Check that the correct username and password are used for the VSM server.
• Check that the time period is correct. For example, you may have chosen 6 days instead of 6 hours.
Problem: Cannot change Perfmon settings.
Corrective action: Ensure that Apply is selected.
5 Removing devices from the domain This chapter provides a set of steps or checklists for what is to be done when deleting objects or devices from the domain. See the referenced material to get the exact steps needed to perform the indicated action. Deleting or reusing capacity In general, the process of deleting virtual disks is the reverse or opposite of the process used to create and present those same virtual disks. 1. Stop all applications that are using the virtual disks to be deleted. 2.
Deleting back-end LUs 1. Follow the Deleting or reusing capacity procedure above to first identify all affected virtual disks. 2. Delete the PiTs and snapshots associated with those virtual disks. 3. Delete the pool and any associated stripe sets. 4. At this point, it is possible to unpresent and delete the back-end LU. Deleting front-end virtual disks and hosts 1. Stop all applications using the virtual disks. 2.
2. Identify all virtual disks presented to the host. There is a tab in the VSM GUI that shows all presented virtual disks. See the “Working with hosts” chapter of the HP StorageWorks SAN Virtualization Services Platform Manager user guide. 3. Stop all remote and local replication (mirroring) tasks that involve any of the selected virtual disks. See the “Using mirroring” chapter of the HP StorageWorks SAN Virtualization Services Platform Manager user guide. 4.
6 Boot from SVSP devices This chapter outlines the process for booting from the SAN with the various operating systems supported by the SAN Virtualization Services Platform (SVSP). See the http://h18006.www1.hp.com/storage/networking/bootsan.html website for a link to detailed boot from SAN documentation, where application notes are available for each operating system. Boot from SAN with HP-UX This process is written for HP-UX 11.23 on an IA-64 server.
9. Ping a known IP address to confirm network connectivity. It may be necessary to wait several minutes for the DNS registration on the network to complete before a ping works or the newly booted server is reachable from a remote platform on the network. 10. Restore all paths from the new boot LUN to the server (re-enable DPMs, re-enable switch ports, or change zoning back to the original configuration as appropriate to restore all paths from the server to the new boot LUN).
Boot from SAN with Windows Server 1. Connect one port of the server HBA to a front-end switch. 2. Power on the server and enter the HBA BIOS settings menu. 3. Configure all HBA ports: a. Enable the Target Reset option. b. Disable the “Enable LIP Reset.” c. Disable the “Enable LIP Full login.” 4. For one HBA port, enable the Boot BIOS. 5. Configure a boot zone for the server. This zone should include only one path between the host and the DPM. 6.
7 Site failover recovery with asynchronous mirrors The asynchronous mirror decision table When using an asynchronous mirror group pair, some actions and properties require that you specify either the source or destination. See the following tables: creating and deleting, adding and deleting virtual disks, editing (setting) properties, and controlling. Creating and deleting Task Create an asynchronous mirror group pair. Delete an asynchronous mirror group or pair.
Editing (setting) properties Task Async mirror group to specify Result on source async mirror group Result on destination async mirror group Edit (general) an asynchronous mirror group. Either Properties are changed. Properties are changed. Auto suspend on links down mode for an asynchronous mirror group pair. Source Auto suspend on links down is disabled or enabled. Auto suspend on links down is disabled or enabled. Comment for an asynchronous mirror group.
Task Resume remote replication in an asynchronous mirror group pair. Revert an asynchronous mirror group pair to its home configuration. Suspend remote replication in an asynchronous mirror group pair. Async mirror group to specify Result on source async mirror group Result on destination async mirror group Source Remote replication from the source is allowed. If applicable, begins log merging or full copy from the source. Remote replication to the destination is allowed.
11. Each SVSP domain now sees the VSM servers of the other SVSP domain with status degraded because the FC HBAs that were previously used to connect the SVSP domains are no longer in use. Delete those FC HBAs from the HBA lists on both SVSP domains. You can access the HBA list from the HBA node in the tree.
6. Create a user PiT on the group. 7. Wait until the PiT you created is copied to the destination. 8. Suspend the group. 9. Split the group. 10. Log in to the DR site's SVSP domain. 11. Assign the host permission to use the mirrored virtual disk. 12. Merge the mirrored virtual disk without enabling rollback. Specify the name of the original virtual disk on the main site as the destination. VSM creates an async mirror group, mirroring from the DR site to the main site.
1. Connect to the main site's SVSP domain and prepare the virtual disk for a merge, as follows: a. Verify that the virtual disk exists. b. Detach the task. c. Remove host presentations from the virtual disk. Since the virtual disk has PiTs, this involves either disconnecting the hosts or powering them down, and deleting them from the Host list once their statuses change to Absent. d. Delete any snapshots on the virtual disk.
3. Detach the tasks coming into the SVSP domain. 4. Assign the host permission to use the recovery virtual disks. You can do either of the following: a. Right-click the specific DR element that you want to recover, and select Manage > Add Host Permission to assign permission to a host to use the DR element. The host will then use the most recent PiT available on that DR element. There is a chance, however, that the application will not be able to use the PiT as it is.
3. 78 Perform a controlled failback of each virtual disk to the new main site, as follows: a. Plan a downtime window for the application, based on the organization’s needs and any data that was not yet mirrored. b. At the scheduled time, shut down the application, which is currently using a virtual disk on the DR site. c. Unmount the virtual disk on the host. d. Connect to the DR SVSP domain. e. Remove the host permission from the virtual disk.
8 Site failure recovery with synchronous mirroring The Virtualization Services Manager (VSM) synchronous mirroring feature provides continuous access to a virtual disk even if one of the underlying physical storage components fails. Synchronous mirroring operates within the local domain only, but that domain can span two sites in a stretched-domain configuration.
Figure 25 Synchronous mirroring across sites with stretched DPMs NOTE: When using synchronous mirrors to protect multiple virtual disks that belong to the same application, it is best practice to have all of the virtual disks active on the same DPM. This is especially true when in the stretched domain, so as to have all virtual disks of that application fail over together or not at all. Site failures In this topology, each site failure affects the surviving site.
Synchronous mirroring in a single stretched SVSP domain Solution topology Figure 26 shows the topology of a synchronous mirror configuration in a single SVSP domain with two SVSP sites. The two sites represent two independent data centers.
Site failures General description of site failures In a configuration where one SVSP domain stretches between the two sites, the connection between the two sites can break for any of these reasons: • Complete power failure on one of the sites • Disconnection of all of the cables that connect the sites • Complete destruction of one of the sites The end result is the same: the surviving site loses connectivity with the VSM server, the DPM, application servers, and the storage arrays at the other site.
the mirror and fails because it cannot access one of the tasks. You must direct the mirror on how to recover. The synchronous mirror group status is Partial.
• If the passive DPM and the active VSM are not on the same site, the failover request fails, and the host I/O operations fail. Manual intervention is required for recovery.
Synchronous mirror failure analysis
1. Find out the actual status of the sites by checking the status of the individual components locally on each site.
Turning off the power to all of the DPMs prevents the DPMs or intersite links from coming online unexpectedly when you restore power.
NOTE: You can also disconnect the intersite links at the end of the surviving site.
2. On the surviving site, take the necessary actions to allow the VSM to become active. If needed, use the options in the Recovery tab on the VSM monitor. Whenever you select an entry and click the Yes button or the OK button, make sure that the status of the VSM is Passive.
Figure 28 Recovery tab—VSM in Partial state
After clicking OK, the VSM application restarts itself and comes up as the active VSM. After the VSM comes up, it uses only the surviving setup virtual disk to synchronize the mirrored tasks.
3. On the surviving site, for every failed synchronous mirror group, right-click the mirror group, and select Manage > Recovery > Force Resume.
Recovering the surviving site by using Force Delete
CAUTION: Using Force Delete on a synchronous mirror deletes the synchronous mirror structure.
1. If possible, at the site that is down, turn off the power to all of the DPMs that are involved in synchronous mirroring and the VSM server. Turning off the power to the DPMs prevents the DPMs from coming online unexpectedly when the power is restored.
NOTE: You can also disconnect the intersite links at the end of the surviving site.
2.
Figure 30 Recovery tab—VSM in Partial state
After clicking OK, the VSM server reboots itself and comes up as the active VSM. After the VSM comes up, the VSM uses only the surviving setup virtual disk to synchronize the mirrored tasks.
3. At the surviving site, for every synchronous mirror group that has a status of Partial, make sure that the surviving DPM, which will have a status of Present, is the active DPM for that synchronous mirror group.
9 Basic troubleshooting
This chapter describes how to solve problems you might encounter after installing and configuring the HP StorageWorks SAN Virtualization Services Platform.
Diagnostic tools
HP Command View EVA and the Array Configuration Utility (ACU) for the MSA will report hardware and configuration problems after storage has been presented to the HP StorageWorks SAN Virtualization Services Platform domain.
Problem: The VSM is active but does not see pools, virtual disks, and so on.
Corrective action:
• Check the VSM monitor status tab to see if the VSM is running with local setup. Local setup means that the virtual disks containing the setup database were not located on startup and that the system started with a local (blank) database.
• Verify zoning and LUN masking.
• Verify that the VSM is connected to the Fibre Channel switch, and that there are link lights on the HBA or switch port.
Problem: Server I/O to virtual disk fails.
Corrective action:
• Check the status of the EVA or MSA.
• Check the VSM interface to see whether the virtual disk is listed with a Partial status. Partial status means that one or more of the EVA or MSA virtual disks that make up a VSM virtual disk are not accessible to VSM. If the status is Partial, check zoning and LUN masking to correct the problem.
• Check the DPM logs to see whether the virtual disk is listed in a PART status.
Problem: VSM/DPM does not see any virtual disks on the EVA or MSA.
Corrective action:
• Verify that all VSM HBAs are properly zoned to the EVA or MSA. If the VSM or DPM can see any LUN from the EVA or MSA with the correct number of paths, then the problem is not zoning.
• Verify that all DPM back-side ports are properly zoned to the EVA or MSA.
• Verify that all VSM HBAs are properly defined in HP Command View EVA or the ACU, with one host name for each VSM associated with all HBAs installed in that VSM.
VSM server zoning
To verify proper back-end zoning for the VSM server, open the VSM management interface. Go to the Data Path Module and verify the number of back-end HBAs listed for each DPM. To check the settings of the second VSM, fail over the passive VSM. Check the back-end HBAs for that DPM on the newly active VSM.
DPM zoning
To verify proper zoning from the DPM side, extract the VSM Snap package (the file is named save_state...tgz), and open the wwpn file located in the \proc\kahuna\fps\ folder.
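Once the wwpn file is extracted, you can inspect it by hand or script a quick check. The sketch below is a minimal parser, assuming only that the file is plain text containing WWPNs in the usual colon-separated or bare 16-hex-digit form; the real file layout inside the Snap package may differ:

```python
import re

# Matches a Fibre Channel WWPN written either as eight colon-separated hex
# pairs (for example 50:01:43:80:02:5d:8b:1c) or as a bare 16-hex-digit string.
WWPN_RE = re.compile(r"(?:[0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}|[0-9a-fA-F]{16}")

def list_wwpns(text):
    """Return the unique WWPNs found in wwpn-file text, normalized to
    lowercase with colons removed, in order of first appearance."""
    seen = []
    for match in WWPN_RE.findall(text):
        wwpn = match.replace(":", "").lower()
        if wwpn not in seen:
            seen.append(wwpn)
    return seen
```

Comparing this list against the ports you expect from your zoning configuration shows at a glance whether any back-end port is missing.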
LUN) is visible to the DPM (in other words, logged in and responding to I/Os). Missing entries typically indicate that a physical disk is not properly connected (check cabling and zoning) or enabled (check LUN masking). The table includes at least one entry for every path (for example, I_T_L nexus) from the DPM to the physical disk.
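The same comparison can be automated. The sketch below is illustrative only (the data structures are hypothetical, not a DPM API): it counts the observed I_T_L entries per back-end LU and flags any LU that has fewer paths than your zoning should provide.

```python
def find_degraded_lus(expected_paths, observed):
    """Given the expected path count per LU and a list of observed
    (initiator, target, lu_id) entries -- one per I_T_L nexus -- return
    the LUs that are missing one or more paths.

    expected_paths: dict mapping LU id -> expected number of paths
    observed: iterable of (initiator, target, lu_id) tuples
    """
    counts = {}
    for _initiator, _target, lu in observed:
        counts[lu] = counts.get(lu, 0) + 1
    # An LU with fewer entries than expected points at a cabling, zoning,
    # or LUN-masking problem on the missing path(s).
    return {lu: (expected_paths[lu], counts.get(lu, 0))
            for lu in expected_paths
            if counts.get(lu, 0) < expected_paths[lu]}
```

For example, an LU zoned for two paths but observed on only one would be reported with its expected and actual path counts, pointing you at the connection to re-check.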
A Using VSM with firewalls
To protect your system against unauthorized access from outside your network, enable Windows Firewall. To enable Windows Firewall:
1. Click Start > Control Panel > Windows Firewall.
2. On the General tab, verify that the firewall is On (enabled).
3. Click the Exceptions tab.
4. Select File and Printer Sharing and click the Edit button.
5. Check the box to enable UDP 137. While UDP 137 is highlighted, click the Change scope button.
6.
10. Enter a name and port number for each of the entries below.
NOTE: The VSM Status Monitor is already displayed by default.

Program/service name    TCP port
ftp_tcp_20              20
ftp_tcp_21              21
ssh_tcp_22              22
telnet_tcp_23           23
iscsi_tcp_3260          3260
corba_tcp_4102          4102
http_tcp_8080           8080
VSM Status Monitor      NA

11. Click OK.
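If you prefer to script the exceptions rather than add them one at a time in the Control Panel, the TCP entries in the table above can be turned into `netsh firewall` commands. This is a sketch only; the `netsh firewall add portopening` syntax shown is the Windows Server 2003-era form, so verify it against your Windows version before running the generated commands:

```python
# Port table from this appendix; the VSM Status Monitor entry has no TCP
# port and is therefore not included here.
TCP_EXCEPTIONS = {
    "ftp_tcp_20": 20,
    "ftp_tcp_21": 21,
    "ssh_tcp_22": 22,
    "telnet_tcp_23": 23,
    "iscsi_tcp_3260": 3260,
    "corba_tcp_4102": 4102,
    "http_tcp_8080": 8080,
}

def netsh_commands(exceptions):
    """Generate one 'netsh firewall add portopening' command per entry,
    ordered by port number."""
    return ["netsh firewall add portopening protocol=TCP port={} name={}"
            .format(port, name)
            for name, port in sorted(exceptions.items(), key=lambda kv: kv[1])]
```

Review the generated list, then paste the commands into an administrative command prompt on the VSM server.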
B Deploying VMware ESX Server with SVSP
This appendix provides guidelines and specific recommendations for deploying the VMware ESX server virtual infrastructure with SVSP. For current information regarding VMware and SVSP, see the HP StorageWorks SAN Virtualization Services Platform release notes. To ensure proper deployment, the following sections must be followed in order. HP recommends that you test this deployment in a test environment before using it in a production environment.
Deployment steps
Before configuring the environment, it is very important to plan the environment and the deployment steps carefully, taking all of the requirements into consideration. The deployment steps include configuring all of the storage components that provide storage services for the VMware environment:
• Fibre Channel zoning—Configure the appropriate SAN zoning.
• Storage systems—Configure the LUNs and LUN masking.
3. Load balance the LUNs (if more than one) between storage systems or controllers.
VSM GUI
The following steps are an overview of the VSM GUI configuration process:
1. Configure at least one storage pool from the LUN presented by the storage system.
2. Configure at least one virtual disk from that storage pool.
3. Configure a user defined host (UDH) for the VMware servers using the appropriate personality.
4. Assign the virtual disk to the VMware servers.
To configure the VSM GUI:
1.
5. Configure the SCSI personality. A SCSI personality defines the way in which the DPM (acting as a storage system controller) reacts to certain SCSI commands coming from the ESX server, especially with regard to virtual disk failover. The correct SCSI personality to use with VMware ESX Server is the HP EVA personality.
6. Assign the virtual disk with the same LUN number to all ESX servers that are part of the VMware cluster (or to a specific standalone ESX server that is not in the VMware cluster).
2. In the Advanced Settings GUI, choose LVM in the left menu.
• LVM.EnableResignature—Make sure this is set to 0.
• LVM.DisallowSnapshotLun—Make sure this is set to 0.
These settings allow other ESX servers to see snapshots as normal DATASTOREs instead of as new raw LUNs. If you need to expose a snapshot back to the same ESX server, make sure that LVM.DisallowSnapshotLun is set to 1. Repeat these steps for each ESX server in the environment.
3.
4. In the Storage Adapters window, choose the QLA/LP HBA and then select Rescan. Make sure Scan for New Storage Devices and Scan for New VMFS Volumes are checked in the Rescan window. In the Details window, you should see targets, and paths within a target, for every VSM virtual disk.
5. Using the VMware VI client GUI, choose the ESX server, select the Configuration tab, and then click Storage.
6. Select Add Storage and follow the VMware wizard to create a DATASTORE.
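The two LVM settings from step 2 can also be applied from the ESX service console rather than the Advanced Settings GUI. The following sketch only assembles the `esxcfg-advcfg` command lines; the /LVM option paths assume ESX 3.x, so confirm them against your ESX release before use:

```python
def lvm_setting_commands(disallow_snapshot_lun=0):
    """Build esxcfg-advcfg commands for the two LVM options discussed above.

    Pass disallow_snapshot_lun=1 only when exposing a snapshot back to the
    same ESX server, as described in the text.
    """
    return [
        "esxcfg-advcfg -s 0 /LVM/EnableResignature",
        "esxcfg-advcfg -s {} /LVM/DisallowSnapshotLun".format(disallow_snapshot_lun),
    ]
```

Run the resulting commands on each ESX server in the environment, mirroring the "repeat for each ESX server" instruction above.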
NOTE: At this time, the only supported multipath policy is Most Recently Used (the default).
VMware storage administration best practices
Rescan SAN operations
HP recommends that whenever a change is made to the front-side zone, a "Rescan SAN" operation is performed on all ESX servers. This is particularly important after recovery from a path failure or when a DPM is replaced. If "Rescan SAN" is not performed, the ESX server may not know about newly available paths and will operate in single-path mode.
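On servers with service console access, the rescan can be scripted as well. This sketch simply builds one `esxcfg-rescan` invocation per Fibre Channel HBA; the vmhba names are examples, so substitute the adapter names shown in your Storage Adapters window:

```python
def rescan_commands(hba_names):
    """One esxcfg-rescan invocation per Fibre Channel HBA on the ESX server."""
    return ["esxcfg-rescan {}".format(hba) for hba in hba_names]
```

Running the generated commands on every ESX server after a front-side zone change serves the same purpose as clicking Rescan in the VI client on each host.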
• Make sure that both VSM virtual disks that include the source and destination DATASTOREs have permissions and are assigned to the ESX servers.
Using VSS with Windows 2003 SP2 running on a virtual machine
Using the Microsoft Volume Shadow Copy Service (VSS) with Windows 2003 SP2 running on VMware virtual machines is supported with ESX 3.5 update 2 or higher and ESXi 3.5 update 2 or higher.
1. From the VSM GUI:
a. Verify that you can see the LUNs presented by the storage array as back-end LUs.
b. Follow the HP StorageWorks SAN Virtualization Services Platform Manager user guide procedures to create a storage pool.
c. Follow the HP StorageWorks SAN Virtualization Services Platform Manager user guide procedures to create a virtual disk from the storage pool.
d.
C Configuration worksheets
Use these worksheets to document the names, IP addresses, and other important information for your SAN Virtualization Services Platform configuration.
D Specifications
This appendix contains the specifications for the HP StorageWorks SAN Virtualization Services Platform Data Path Module (DPM) and the HP StorageWorks SAN Virtualization Services Platform Virtualization Services Manager (VSM) Server.
Device management

Feature                 Description
Access                  Serial port, SSH, telnet, web browser, SOAP/XML, SNMP interfaces
Interfaces              • 10/100/1000 Ethernet RJ-45 for management (optional)
                        • 1 serial DB-9 RS232 for configuration and basic management
Supported protocols     ssh, telnet, ftp, http, SNMP, NTP, and net syslog

Mechanical

Characteristic          Value
Dimensions              17 in. (W) x 1.75 in. (H) x 26 in. (D)
Enclosure               1U rack-mountable
Weight                  10.
Regulatory
The Data Path Module has the following certifications:
• UL
• CE
• cUL
• FCC
• TUV

VSM server
Environmental

Specification                           Value
Temperature range1
  Operating                             10°C to 35°C (50°F to 95°F)
  Shipping                              –40°C to 70°C (–40°F to 158°F)
Maximum wet bulb temperature            28°C (82.4°F)
Relative humidity (noncondensing)2
  Operating                             10% to 90%
  Non-operating                         5% to 95%

1 All temperature ratings shown are for sea level. An altitude derating of 1°C per 300 m (1.
Specification                   Value
Input requirement
  Rated input voltage           100 VAC to 240 VAC
  Rated input frequency         50 Hz to 60 Hz
  Rated input current           7.1 A (at 120 VAC); 3.5 A (at 240 VAC)
  Rated input power             852 W
  BTUs per hour                 2910 (at 120 VAC); 2870 (at 240 VAC)
Power supply output
  Rated steady-state power      700 W

Characteristics

Component       Characteristic
Processor       Dual-Core Intel Xeon 5130 2.
Glossary
This glossary defines acronyms and terms used with the SVSP solution.
access path: A specific series of physical connections through which a device is recognized by another device.
active boot set: The boot set used to supply system software in a running system. Applies to the DPM. See also boot set.
active path: A path that is currently available for use. See also passive path and in use path.
Business Copy SVSP: An HP StorageWorks product that works with SAN storage systems to provide local replication capabilities within the SVSP domain, providing local point-in-time (PiT) copies of data, using snapshots of data, based on changes to virtual disks.
CLI: Command line interface. The Data Path Module provides a CLI through the local administrative console (serial port console), telnet, or SSH.
HBA: See host bus adapter.
host: In VSM, every server that uses VSM virtual disks. Servers that run as VSM servers are also considered hosts.
host bus adapter: A device that provides input/output (I/O) processing and physical connectivity between a server and a storage system. To minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
I/O: Input/Output.
passive path: A path that must have some operation (for example, a SCSI start unit command issued by the server) performed on it to make it active. See also active/active RAID, active/passive RAID, and secondary path.
patch file: Incremental update to an existing system image.
personality: The way in which a DPM exposes LUNs to the hosts that use them.
purposes such as data recovery, backup, and testing, while the original virtual disk stays online and continues to be updated.
• A read-write entity that makes PiT data available to any host as a logical drive.
SNMP: Simple Network Management Protocol. The protocol used by the Data Path Module to report exception conditions to third-party network management applications.
SSH: Secure Shell. A protocol and application for communicating with a remote computer system.
virtual disk: In VSM, a unit of storage allocated to one or more hosts from a storage pool. A virtual disk can range in size from 1 GB to 2 TB. DPMs present allocated virtual disks to hosts as logical drives.
Volume Shadow Copy Service: A backup infrastructure for the Microsoft Windows Server 2003/2008 operating systems, as well as a mechanism for creating consistent point-in-time copies of data known as shadow copies.
VSM: Virtualization Services Manager.