HP ProLiant Cluster HA/F500 for Enterprise Virtual Array Enhanced DT Supplement Guide
November 2003 (Second Edition)
Part Number 339223-002
© 2003 Hewlett-Packard Development Company, L.P. Microsoft® and Windows® are US registered trademarks of Microsoft Corporation. Hewlett-Packard Company shall not be liable for technical or editorial errors or omissions contained herein. The information in this document is provided “as is” without warranty of any kind and is subject to change without notice. The warranties for HP products are set forth in the express limited warranty statements accompanying such products.
Contents

About This Guide
    Audience Assumptions ............................................. v
    Symbols in Text .................................................. vi
    Getting Help ..................................................... vi
    Technical Support

Cluster Installation
    Setting Up the Fibre Channel Switches at Both Locations, if Applicable ... 2-6
    Controller-to-Switch Connections ................................. 2-6
    Host-to-Switch Connections ....................................... 2-9
    Zoning Recommendations ........................................... 2-9
    Bidirectional Solution ........................................... 2-10
About This Guide

This guide provides step-by-step installation instructions, as well as reference information for operation, troubleshooting, and future upgrades, for the HP ProLiant Cluster HA/F500 for Enterprise Virtual Array (EVA) Enhanced Disaster Tolerance configuration.
Symbols in Text

These symbols may be found in the text of this guide. They have the following meanings.

WARNING: Text set off in this manner indicates that failure to follow directions in the warning could result in bodily harm or loss of life.

CAUTION: Text set off in this manner indicates that failure to follow directions could result in damage to equipment or loss of information.
• Product model name and number
• Applicable error messages
• Add-on boards or hardware
• Third-party hardware or software
• Operating system type and revision level

HP Website

The HP website has information on this product as well as the latest drivers and flash ROM images. You can access the HP website at http://www.hp.com.

Authorized Reseller

For the name of your nearest authorized reseller:

• In the United States, call 1-800-345-1518.
• In Canada, call 1-800-263-5868.
1 Introduction This guide provides supplemental information for setting up an HP ProLiant Cluster HA/F500 for EVA Enhanced Disaster Tolerant (DT) configuration using HP StorageWorks Continuous Access EVA software. This guide serves as a link between the various clustering guides needed to complete an Enhanced DT cluster installation.
Introduction Disaster Tolerance Disaster-tolerant solutions provide high levels of availability with rapid data access recovery, no single point of failure, and continued data processing after the loss of one or more system components in a cluster configuration. Data is simultaneously written to both local and remote sites during normal operation. The local site is known as the source site because it is in control of the operation.
Introduction The ProLiant server nodes in the cluster are connected or stretched over a distance. Up to two storage subsystems for FCIP connections and four storage subsystems for non-FCIP connections can be used at one site. These storage subsystems act as the source sites for the Continuous Access software and process disk subsystem requests for all nodes in the cluster.
Introduction The HA/F500 for EVA Enhanced DT cluster requires two types of links: network and storage. The first requirement is at least two network links between the servers. MSCS uses the first network link as a dedicated private connection to pass heartbeat and cluster configuration information between the servers. The second network link is a public network connection that clients use to communicate with the cluster nodes.
Required Configuration Components

You will need the components in Table 1-1 to install the HA/F500 for EVA Enhanced DT cluster.

NOTE: Only the HP StorageWorks Secure Path software is included in this kit.

Table 1-1: Required Components

Component: Up to eight cluster nodes
Supported products: For a current list of supported HP ProLiant servers, refer to http://www.hp.com.

Component: FCIP interconnect
Supported products: Refer to the hp StorageWorks continuous access and data replication manager SAN extensions reference guide at http://h18006.www1.hp.com/storage/index.html.

Component: Ethernet connection
Supported products: HP VNSwitch 900XA

Component: Storage system
Supported products: HP EVA for HA/F500 Cluster, Version 3.
Introduction Figure 1-1 shows a basic DT cluster configuration consisting of two separated nodes. The two nodes, plus the source storage subsystem, form an MSCS cluster. Figure 1-1: Basic DT cluster configuration Bidirectional DT Configuration The bidirectional DT configuration allows a source subsystem to also be configured as a destination subsystem.
• Two Fibre Channel Adapters (FCA) in each server
• Two SMAs, at least one for each site

NOTE: Refer to the HP StorageWorks Continuous Access EVA Design Reference Guide for complete lists of supported equipment.

Figure 1-2 shows a bidirectional DT configuration consisting of two separated nodes. As in the basic DT configuration, data at the first site is mirrored on a second storage subsystem at the second site.
2 Cluster Installation This chapter details procedures outlined in the corresponding guides listed at the beginning of each of the following sections. It is important to follow the steps covered in this chapter because many steps are specific to the HA/F500 Enhanced DT cluster installation.
Continuous Access

Refer to the HP StorageWorks Continuous Access EVA Operations Guide for detailed information on Continuous Access, including any restrictions. Table 2-1 outlines the restrictions that are specific to the HA/F500 Enhanced DT cluster configuration.

Table 2-1: Continuous Access Restrictions

• Maximum of 16 storage systems on the SAN. A storage system is visible to only one Command View EVA instance at a time.
• Each storage system can have a relationship with one remote storage system.
• HSG controllers might be present on the SAN but cannot interoperate with HSV controllers. The HSG and HSV controllers must be in different management zones.
• One snapshot or Snapclone allowed per Vdisk. Can be on either the source or destination.
Cluster Installation Installing the Hardware Depending on the size of your SAN and the considerations used in designing it, many different hardware configurations are possible. Refer to the HP StorageWorks Continuous Access Enterprise Virtual Array Design Reference Guide for a detailed description of various hardware configurations.
Table 2-2: Hardware Preparation Checklist for Continuous Access EVA Installation (continued)

Task: Plan and populate layout of one or more physical disk groups.
Reference document: HP StorageWorks Continuous Access EVA Operations Guide

Task: Power up storage systems and SMAs.
Cluster Installation Setting Up the Fibre Channel Switches at Both Locations, if Applicable NOTE: Both Fibre Channel switches can be configured from the same site. Your Fibre Channel switches must be installed and configured with two working redundant fabrics before you connect the remaining Continuous Access EVA components to your fabrics. For information on the specific switches used and GBICs needed, refer to http://h18006.www1.hp.com/storage/saninfrastructure.html.
Cluster Installation Figure 2-1: Supported cabling Either controller can be controller A or controller B. In a storage system that has not been configured, the first controller that powers up and passes a self-test becomes controller A. Also, under certain conditions, controller A and controller B can have their designations reversed. Any other controller-to-fabric cabling scheme is not supported.
4. Power up that controller.
5. After controller A passes the self-test, power up the other controller.

Figure 2-2: Example 1 of cabling not supported

Figure 2-3: Example 2 of cabling not supported
Host-to-Switch Connections

Tag each end of your fiber optic cable to identify switch names, port numbers, host names, and so on. Two fiber optic connections are required for each host. Connect the fiber optic cable such that the connections to the two FCAs go to two separate switches (fabrics).

Zoning Recommendations

Both fabrics must be in place and operational before you begin any other equipment installation.
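The requirement above, that each host's two FCA connections go to two separate fabrics, can be sanity-checked while planning the cabling. The following Python sketch is illustrative only; the host names, WWNs, and fabric labels are hypothetical and are not taken from this guide.

    # Illustrative sketch only: verify that each host's two FCA connections
    # go to two separate fabrics, as required for redundant cabling.
    # Host names, WWNs, and fabric labels below are hypothetical examples.

    planned_connections = [
        # (host, FCA WWN, fabric)
        ("NODE1", "5000-1FE1-0000-0001", "FABRIC_A"),
        ("NODE1", "5000-1FE1-0000-0002", "FABRIC_B"),
        ("NODE2", "5000-1FE1-0000-0003", "FABRIC_A"),
        ("NODE2", "5000-1FE1-0000-0004", "FABRIC_A"),  # error: both FCAs on one fabric
    ]

    def check_fabric_separation(connections):
        """Return a list of hosts whose FCAs do not span two separate fabrics."""
        fabrics_by_host = {}
        for host, _wwn, fabric in connections:
            fabrics_by_host.setdefault(host, set()).add(fabric)
        return [host for host, fabrics in fabrics_by_host.items() if len(fabrics) < 2]

    for host in check_fabric_separation(planned_connections):
        print(f"WARNING: both FCA connections for {host} are on the same fabric")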
Cluster Installation Bidirectional Solution You can configure data replication groups to replicate data from storage system A to storage system B and other unrelated data replication groups to replicate data from storage system B back to storage system A. This feature, called bidirectional replication, allows for a storage system to have both source and destination virtual disks, where these Vdisks belong to separate Data Replication (DR) groups.
Cluster Installation Configuring the Software The storage system must be initialized before it can be used. This process binds the controllers together as an operational pair and establishes preliminary data structures on the disk array. Initialization is performed through the use of the Command View EVA. This procedure is documented in the HP StorageWorks Command View EVA Getting Started Guide.
Preparation Checklist for Continuous Access EVA Software Installation

Table 2-3: Software Preparation Checklist for Continuous Access EVA Installation

Task: Install system software:
• HP OpenView Storage Management Appliance Software v2.0 Update (Reference: HP OpenView Storage Management Appliance Software User Guide)
• EVA system software VCS v3.010, or upgrade EVA system software from v3.0 to v3.010
Logging On to the SAN Management Appliance

1. Open a browser and access the management appliance remotely by entering its IP address (or its network name, if a Domain Name System (DNS) is configured) as the URL. The logon screen opens.
2. Click anonymous.
3. Log in as administrator.
4. Enter the password for the account.
5. Click OK. The hp openview storage management appliance window displays.
To enter a license key:

1. Click Agent Options in the Session pane. The Management Agent Options window displays.
2. Click Licensing Options. The Licensing Options window displays.
3. Click Enter new license key. The Add a License window displays.
4. Enter the license key. You must enter the license key exactly as it appears in the e-mail you received from the license key fulfillment website. If possible, copy the license key from the e-mail and paste it into the text field.
5.
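Because the key must match the text from the e-mail exactly, line breaks and stray spaces added by an e-mail client are a common reason a key is rejected. The following Python sketch only illustrates that cleanup step before pasting; the sample key is a made-up placeholder, not a real HP license key.

    # Illustrative only: normalize a license key copied from e-mail by removing
    # line breaks and surrounding whitespace that mail clients often insert.
    # The key below is a made-up placeholder, not a real HP license key.

    raw_key = """
      ABCD-EFGH-IJKL-
      MNOP-QRST
    """

    def normalize_license_key(text):
        """Collapse a pasted key onto one line with no embedded whitespace."""
        return "".join(text.split())

    print(normalize_license_key(raw_key))   # ABCD-EFGH-IJKL-MNOP-QRST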
Cluster Installation Naming the Site In the hp openview storage management appliance window: 1. Click Devices. The Devices window displays: Figure 2-4: HP OpenView SAN window 2. Click command view eva. The HSV Storage Network Properties window displays. You can now browse the EVAs in the Uninitialized Storage System in the navigation panel. 3. Determine which site is to be designated Site A and which is to be designated Site B by selecting Hardware>Controller Enclosure.
Cluster Installation Figure 2-5: Initialize an HSV Storage System window 5. In the Step 1: Enter a Name field, enter the site name. 6. In the Step 2: Enter the number of disks field, enter the maximum number of disks (minimum of eight in a disk group) or the number of disks you will use in the default disk group. NOTE: You must determine if you will configure your storage in a single disk group or multiple disk groups.
Cluster Installation 8. Click Finish, and then click OK (if the operation was successful). NOTE: If the operation is not successful, it typically is caused by a communication problem. Verify the SAN connection, fix the problem, and begin again at step 1. Creating the VD folders 1. In the Command View EVA navigation pane, click Virtual Disks. The Create a Folder window displays. Figure 2-6: Creating a VD Folder window 2. In the Step 1: Enter a Name field, enter the folder name (use the cluster name). 3.
Cluster Installation Creating the VDs You are given the opportunity to select a preferred path during the creation of a Vdisk. This means that host I/O to a Vdisk will go to the controller you designate as preferred, as long as the paths to that controller are available. There are five possible preferred path settings. However, the Windows environment allows only those shown in the bulleted list, as Secure Path is responsible for supporting failback capability.
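Before running the procedure that follows, it can help to lay out the planned Vdisks, their DR groups, and their preferred path settings, because every member of a DR group must use the same setting (see the NOTE in step 4 below). The following Python sketch is a planning aid only; the five setting names are assumed, typical EVA choices rather than a list taken from this guide, and the Vdisk and DR group names are invented.

    # Illustrative planning check: every Vdisk in a DR group must use the same
    # preferred path/mode setting. The setting names below are assumed (typical
    # EVA choices); the Vdisk and DR group names are hypothetical.

    PATH_SETTINGS = {
        "No preference",
        "Path A - Failover only",
        "Path A - Failover/failback",
        "Path B - Failover only",
        "Path B - Failover/failback",
    }

    planned_vdisks = [
        # (vdisk name, DR group, preferred path/mode)
        ("Quorum", "DRG_Cluster1", "Path A - Failover only"),
        ("Data1",  "DRG_Cluster1", "Path A - Failover only"),
        ("Data2",  "DRG_Cluster1", "Path B - Failover only"),  # mismatch
    ]

    def inconsistent_dr_groups(vdisks):
        """Return DR groups whose members do not all share one path setting."""
        settings_by_group = {}
        for name, group, setting in vdisks:
            if setting not in PATH_SETTINGS:
                raise ValueError(f"unknown path setting for {name}: {setting}")
            settings_by_group.setdefault(group, set()).add(setting)
        return [g for g, s in settings_by_group.items() if len(s) > 1]

    print(inconsistent_dr_groups(planned_vdisks))   # ['DRG_Cluster1']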
1. In the Command View EVA navigation pane, click the new VD folder. The Create a Vdisk Family window displays.

Figure 2-7: Create a Vdisk Family window

2. In the Vdisk name: field, enter the VD name.
3. In the Size: field, enter the size in gigabytes.
4. In the Preferred path/mode: dropdown menu, make a selection (for load balancing).

NOTE: All members of a DR group must have the same setting.

5. Click Create More and repeat steps 2 through 4 for each VD you create.
6.
Cluster Installation Creating the Hosts Creating the Host Folder Create a host folder for each cluster to enable ease of administration. 1. Click Create Folder. The Create a Folder window displays. Figure 2-8: Create a Folder window 2. In the Step 1: Enter a Name field, enter SiteA or any name up to 32 characters long. 3. In the Step 2: Enter comments field, enter any additional information, up to 64 characters long. 4. Click Finish, and then click OK. 5.
Cluster Installation Adding a Host NOTE: If the SAN appliance cannot see the host WWNs, perform steps 1 and 2. Otherwise, begin at step 3. 1. Reboot the SAN appliance. 2. Access the Command View EVA application. 3. Click the desired host in the navigation pane. The Add a Host window displays. Figure 2-9: Add a Host window 4. In the Host name: field, enter the host name. 5. In the Host IP address: dropdown menu, select the appropriate scheme or enter the IP address if it is a static IP address. 6.
Cluster Installation The Add a Host Port window displays. Figure 2-10: Add a Host Port 8. For each FCA: a. In the Click to select from list dropdown menu, select the appropriate FCA. b. Click Add port. 9. Select the Ports tab (which displays only after selecting to add the port) and verify the ports are correctly assigned. 10. Repeat the procedure for Site B.
Presenting the VDs to the Host

CAUTION: Shut down all the nodes. Only one node should see the drives at one time.

1. In the Command View EVA navigation pane, click the first new VD. The Vdisk Active Member Properties window displays.

Figure 2-11: Vdisk Active Member Properties window— General Tab view
2. Click Presentation. The Vdisk Active Member Properties window displays.

Figure 2-12: Vdisk Active Member Properties window— Presentation Tab view
3. Click Present. The Present Vdisk window displays.

Figure 2-13: Present Vdisk window

4. Select both hosts, and then click Present Vdisk.
5. Click OK. You are returned to the Vdisk Active Member Properties window.
6. Select the Presentation tab to verify that both hosts are on the same LUN. The Vdisk Active Member Properties window displays.

Figure 2-14: Vdisk Active Member Properties window— Presentation Tab view

7. Repeat steps 1 through 6 for each VD.
8. Power on Node 1.
9. Log on to the domain.
10. Wait until all the VDs are discovered.
11. Open the operating system Device Manager and verify that the disk drives are visible.
12.
15. Repeat steps 8 through 14 for Node 2.
16. Join Node 2 to the cluster.

Creating Replicated Disks

Discovering the Devices

You will be creating the copy sets and DR groups in the same sequence.

1. In the hp openview storage management appliance window, click Tools.

Figure 2-15: Tools window
Cluster Installation 2. Click continuous access. The Continuous Access Status window displays. NOTE: You are now working in the Continuous Access user interface, not the Command View EVA. Figure 2-16: Continuous Access Status window The window is empty. 3. Click Refresh>Discover. A pop-up window informs you that the discovery process could be lengthy. After the system has discovered the devices, you will create the DR groups and copy sets. NOTE: You must plan how to separate managed sets and copy sets.
Cluster Installation Creating the DR Groups You can create the DR groups first or create the initial copy set, which forces the DR group creation process. The following procedure is for creating the DR groups before the copy sets. 1. On the Continuous Access window, select the site from the navigation pane. 2. Click Create>DR Group. The Create a new DR Group window opens. Figure 2-17: Create a new DR Group window 3. In the DR Group: field, enter the name. 4.
Cluster Installation Creating the Copy Sets NOTE: Entering the first copy set will force the DR group creation sequence if no DR Group has yet been created. 1. On the Continuous Access window, select the site from the navigation pane. 2. Click Create>Copy Set. The Create a new Copy Set window opens. Figure 2-18: Create a new Copy Set window 3. In the DR Group: dropdown list, select the DR group to which the copy set will belong. 4. In the Copy Set: field, enter the copy set name. 5.
Cluster Installation 7. Select the destination from the Destination Storage System: dropdown list (Site B, if you have followed suggested naming conventions). 8. Click Finish. Creating the Managed Sets A managed set is a folder created to hold DR groups. One or more DR groups can be combined to create a managed set. 1. Choose Create>Managed Sets. The Edit or create a Managed Set window displays. Figure 2-19: Edit or create a Managed Set window 2. In the Managed Set Name: field, enter the name. 3.
Cluster Installation 5. In the navigation pane, select the first DR group to be part of a managed set. 6. In the Configuration dropdown menu, select Edit. The Edit an existing DR Group window displays. Figure 2-20: Edit an existing DR Group window 7. Select a managed set from the Managed Set list, and then click Finish. 8. Repeat steps 5 through 7 for each DR group to add.
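The objects created in this chapter form a simple containment hierarchy: copy sets belong to a DR group, and one or more DR groups can be grouped into a managed set. The following Python sketch is only a conceptual summary of that hierarchy; all names are invented, and it does not use any HP software interface.

    # Conceptual model only: a managed set holds DR groups, and each DR group
    # holds the copy sets (source/destination Vdisk pairs) created for it.
    # All names below are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CopySet:
        name: str
        source_vdisk: str        # Vdisk on the source storage system (Site A)
        destination_vdisk: str   # replicated Vdisk on the destination (Site B)

    @dataclass
    class DRGroup:
        name: str
        copy_sets: List[CopySet] = field(default_factory=list)

    @dataclass
    class ManagedSet:
        name: str
        dr_groups: List[DRGroup] = field(default_factory=list)

    cluster_drg = DRGroup("DRG_Cluster1", [
        CopySet("CS_Quorum", "SiteA\\Quorum", "SiteB\\Quorum"),
        CopySet("CS_Data1",  "SiteA\\Data1",  "SiteB\\Data1"),
    ])
    managed = ManagedSet("MS_SiteA_to_SiteB", [cluster_drg])

    for drg in managed.dr_groups:
        print(drg.name, [cs.name for cs in drg.copy_sets])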
Cluster Installation Pre-presenting the Destination VDs to Cluster Nodes 1. In the hp openview storage management appliance window, select Devices. The Devices window displays. Figure 2-21: Devices window 2. Click command view eva. The command view eva Properties window opens. 3. In the navigation pane, select the destination subsystem, and then click Virtual Disks. 4. Select the virtual disk to present on the destination subsystem.
5. Select Active. The Vdisk Active Member Properties window displays.

Figure 2-22: Vdisk Active Member Properties window
6. Select the Presentation tab, and then click Present. The Present Vdisk window opens.

Figure 2-23: Present Vdisk window

7. Select the VDs, and then click Present Vdisk.
8. Click OK.
9. Repeat for each VD to present.
10. Verify the disks are properly presented.
   a. In the navigation pane, select the host to verify.
   b. Select the Presentation tab. The Host Properties window displays.

Figure 2-24: Host Properties window

   c. Verify that each VD is presented to a unique Logical Unit (LUN).

The configuration is complete.
Zoning Worksheets

Locate and record the WWNs of each host on the zoning worksheet. Keep a copy of all worksheets at all your sites.

Table 2-4: Zoning Worksheet

Site A
    Host WWN #    Domain ID #    Port #    Alias Name    Site/Location

Site B
    Host WWN #    Domain ID #    Port #    Alias Name    Site/Location
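When you fill in the worksheet, keeping each row in a consistent, machine-readable form makes it easier to reuse the data for zoning aliases later. The following Python sketch mirrors the Table 2-4 columns; the sample row and the 16-hexadecimal-digit WWN check are assumptions for illustration, not requirements stated in this guide.

    # Illustrative worksheet record matching Table 2-4's columns. The sample
    # row and the 16-hex-digit WWN convention are assumptions for illustration.
    import re
    from dataclasses import dataclass

    WWN_PATTERN = re.compile(r"^[0-9A-Fa-f]{16}$")

    @dataclass
    class ZoningEntry:
        host_wwn: str
        domain_id: int
        port: int
        alias_name: str
        site_location: str

        def wwn_is_valid(self) -> bool:
            """A WWN should reduce to 16 hex digits once separators are removed."""
            stripped = self.host_wwn.replace("-", "").replace(":", "")
            return bool(WWN_PATTERN.match(stripped))

    site_a = [
        ZoningEntry("5000-1FE1-0011-2233", 1, 4, "node1_fca1", "Site A / Building 1"),
    ]
    for entry in site_a:
        print(entry.alias_name, "OK" if entry.wwn_is_valid() else "check WWN format")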
3 Disaster Recovery This chapter covers failover and failback scenarios that can occur with the ProLiant Cluster HA/F500 for EVA Enhanced DT configuration. Managing Continuous Access Refer to the HP Continuous Access EVA Operations Guide (referred to in this chapter as the Continuous Access guide) for complete instructions on managing the Continuous Access software. Failure Scenarios Additional information can be found in the “Monitoring Events” and “Failover” sections in the Continuous Access guide.
Disaster Recovery Resource Failover A cluster failover occurs if a cluster resource fails at one of the sites (refer to Figure 3-1). The exact failover behavior can be configured for each cluster group, but usually this means that the entire cluster group that contains the failed resource may attempt to switch to the other cluster node. No storage subsystem failover is required in this scenario. You are given the opportunity to select a preferred path during the creation of the Vdisks.
Disaster Recovery Local Server Failure A normal failover occurs if one of the cluster nodes fails. All the resources defined in the cluster groups that were running on the failed node will attempt to switch over to the surviving nodes. As with a cluster resource failure, no storage subsystem failover is required. This is also the case when a cluster node is brought down for a scheduled event such as system maintenance.
Source Site Failover

A small amount of downtime might occur if the quorum disk is at the source site. The cluster is not available until a site failover is performed. Refer to the Continuous Access guide to determine the scenarios that warrant a failover. There are two types of failover procedures to recover the remote copy sets: planned and unplanned. Use the planned failover procedure when failover is a scheduled event. Otherwise, use the unplanned failover procedure.
Disaster Recovery Source Site Failback The two failover-only options for Vdisk creation allow the host to control when a Vdisk moves to a preferred path. For example, if path A is preferred and that path becomes unavailable, path B is used. The host will then control the movement back to path A when it becomes available later. EVA Storage Failback Procedure Problem EVA storage fails at one site requiring a site failover: • Manual failover to the alternate site is successful.
Disaster Recovery 3. Power on the Storage Management Appliance at the failed site. 4. From the management appliance, at the failed site, remove access to the cluster for any disks that were failed over during the original site failover. 5. Be sure the cluster node at the failed site has been rebooted. At this point the node at the failed site should show NO drives in Device Manager because the failed over drives have been unpresented. Determine which management appliance you will continue to use.
• The cluster loses the reservation of the disk and fails.
• The cluster service must be restarted manually and the drives brought back online manually.

Solution

To overcome the disruption of cluster service during the repair of the ISLs:

1. Access the management appliance at the destination site (the site that was originally the primary site) and disable access to the cluster nodes for any of the disks that were failed over.
2. Repair the ISLs.
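If the workaround above is not applied and the cluster service does stop, the manual recovery described in the problem statement, restarting the cluster service and bringing the drives back online, can be scripted on the affected node. The following Python sketch only wraps the standard Windows commands for those two actions (net stop/start for the ClusSvc service and cluster.exe for cluster groups); the group names are placeholders, and the recovery steps appropriate to your configuration should come from the Continuous Access guide.

    # Illustrative sketch of the manual recovery the text describes: restart the
    # cluster service on a node and bring a cluster group back online. Group
    # names are placeholders; run only after the ISLs have been repaired.
    import subprocess

    NODE_GROUPS = ["Cluster Group", "SQL Group"]   # hypothetical cluster groups

    def restart_cluster_service():
        """Restart the MSCS cluster service (ClusSvc) on the local node."""
        subprocess.run(["net", "stop", "clussvc"], check=False)
        subprocess.run(["net", "start", "clussvc"], check=True)

    def bring_groups_online(groups):
        """Use cluster.exe to bring each cluster group back online."""
        for group in groups:
            subprocess.run(["cluster", "group", group, "/online"], check=True)

    if __name__ == "__main__":
        restart_cluster_service()
        bring_groups_online(NODE_GROUPS)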
Glossary

array controller software (ACS)
    Software contained on a removable ROM program card that provides the operating system for the array controller.

bidirectional
    Pertaining to the process by which two servers mirror each other from remote locations.

destination site
    The location of the secondary network.

disaster tolerant (DT)
    A solution that provides rapid data access recovery and continued data processing after the loss of one or more components.

failback
    1. The process that takes place when a previously failed controller is repaired or replaced and reassumes the workload from a companion controller. 2. The process that takes place when the operation of a previously failed cluster group moves from one cluster node back to its primary node.

host
    The primary or controlling computer to which a storage system is attached.

host bus adapter (HBA)
    A device that connects a host system to a SCSI bus. The HBA usually performs the lowest layers of the SCSI protocol. This function may be logically and physically integrated into the host system.

initiator site
    The location of the primary network.

latency
    The amount of time required for a transmission to reach its destination.

local site
    The location of the primary network.

OCP
    Operator Control Panel. The element on the front of an HSV controller that displays the controller's status using LEDs and an LCD. Information selection and data entry are controlled by the OCP push buttons.

remote
    Files, devices, and other resources that are not directly connected to the system being used at the time.
Index

A
    authorized reseller, vii

B
    basic configuration, 1-6
    bidirectional configuration, 1-7
    bidirectional solution, 2-10

C
    cabling: not supported, 2-8; supported, 2-7
    clusters: installation, 2-1; resource failure, 3-3
    configuration: bidirectional, 1-7; software, 2-11
    Continuous Access: managing, 3-1; restrictions, 2-2
    copy sets, creating, 2-30

D
    Data Replication (DR) groups, creating, 2-29
    devices, discovering, 2-27
    disaster tolerant (DT) configuration: cluster installation, 2-1; disaster recovery, 3-1

E
    Ethernet NICs, 1-4

H
    host bus adapter (HBA): basic configuration, 1-6; bidirectional configuration, 1-8
    hosts: adding a host, 2-21; creating host folders, 2-20
    HP Insight Manager 7, 1-3
    HP Integrated Remote Control, 1-3
    HP Remote Insight Board, 1-3
    HP Remote Management, 1-3
    HP website, vii

I
    initialization, source site, 2-14
    installation: Fibre Channel Adapters (FCAs), 2-5; hardware preparation checklist, 2-4; required components, 1-5; required materials, 2-1; restrictions, 2-2

O
    Open Systems Gateway (OSG), 1-6
    operating systems, 1-5

U
    unplanned failover, 3-4

V
    Virtual Disks (VD): creating VD folders, 2-17; creating VDs, 2-18; presenting VDs to the host, 2-23

W
    websites, HP, vii
    Windows operating systems, 1-5
    World Wide Name (WWN), location, 2-5

Z
    zoning: recommendations, 2-9; worksheets, 2-37