HP ProLiant Cluster HA/F500 for Enterprise Virtual Array Enhanced DT Supplement Guide

June 2003 (First Edition)
Part Number 339223-001
© 2003 Hewlett-Packard Development Company, L.P.

Microsoft, Windows, and Windows NT are US registered trademarks of Microsoft Corporation.

Hewlett-Packard Company shall not be liable for technical or editorial errors or omissions contained herein. The information in this document is provided “as is” without warranty of any kind and is subject to change without notice. The warranties for HP products are set forth in the express limited warranty statements accompanying such products.
Contents

About This Guide
  Audience Assumptions
  Important Safety Information
  Symbols in Text
  Getting Help

Cluster Installation
  Setting Up the Fibre Channel Adapters
  Setting Up the Fibre Channel Switches at Both Locations, if Applicable
  Controller-to-Switch Connections
  Host-to-Switch Connections
  Zoning Recommendations
About This Guide

This guide provides step-by-step installation instructions and reference information for operation, troubleshooting, and future upgrades of the HP ProLiant Cluster HA/F500 for Enterprise Virtual Array (EVA) Enhanced Disaster Tolerance configuration.
Symbols in Text

These symbols may be found in the text of this guide. They have the following meanings.

WARNING: Text set off in this manner indicates that failure to follow directions in the warning could result in bodily harm or loss of life.

CAUTION: Text set off in this manner indicates that failure to follow directions could result in damage to equipment or loss of information.

IMPORTANT: Text set off in this manner presents essential information to explain a concept or complete a task.
• Product model name and number
• Applicable error messages
• Add-on boards or hardware
• Third-party hardware or software
• Operating system type and revision level

HP Website

The HP website has information on this product as well as the latest drivers and flash ROM images. You can access the HP website at http://www.hp.com.

Authorized Reseller

For the name of your nearest authorized reseller:
• In the United States, call 1-800-345-1518.
• In Canada, call 1-800-263-5868.
1 Introduction

This guide provides supplemental information for setting up an HP ProLiant Cluster HA/F500 for EVA Enhanced Disaster Tolerant (DT) configuration using HP StorageWorks Continuous Access (CA) EVA software. This guide serves as a link between the various clustering guides needed to complete an Enhanced DT cluster installation.
Disaster Tolerance

Disaster-tolerant solutions provide high levels of availability with rapid data access recovery, no single point of failure, and continued data processing after the loss of one or more system components in a cluster configuration. Data is simultaneously written to both local and remote sites during normal operation. The local site is known as the source site because it is in control of the operation.
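The following Python sketch is a simplified, hypothetical illustration of the synchronous write behavior described above; it is not part of the CA EVA software, and the site names and data are placeholders. The point it shows is that a write is acknowledged to the application only after both the source and destination copies have committed it.

# Toy model of synchronous remote mirroring: a write completes only
# after both the local (source) and remote (destination) copies commit.
# Hypothetical illustration only -- not the CA EVA implementation.

class Site:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def commit(self, block, data):
        self.blocks[block] = data
        return True

def synchronous_write(source, destination, block, data):
    """Acknowledge the write only when both sites have committed it."""
    ok_local = source.commit(block, data)
    ok_remote = destination.commit(block, data)
    if not (ok_local and ok_remote):
        raise IOError("write not acknowledged by both sites")
    return "acknowledged"

site_a = Site("SiteA")   # source: in control of the operation
site_b = Site("SiteB")   # destination: remote mirror
print(synchronous_write(site_a, site_b, block=42, data=b"payload"))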
The ProLiant server nodes in the cluster are connected, or stretched, over a distance. Up to two storage subsystems for FCIP connections and four storage subsystems for non-FCIP connections can be used at one site. These storage subsystems act as the source sites for the CA software and process disk subsystem requests for all nodes in the cluster. The storage subsystems are connected to the server nodes by means of redundant Fibre Channel connections that are managed by Secure Path.
The HA/F500 for EVA Enhanced DT cluster requires two types of links: network and storage. The first requirement is at least two network links between the servers. MSCS uses the first network link as a dedicated private connection to pass heartbeat and cluster configuration information between the servers. The second network link is a public network connection that clients use to communicate with the cluster nodes.
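As a rough pre-check of both network links, a script such as the hypothetical sketch below can confirm that each node answers on both its private (heartbeat) and public addresses. It is not an HP tool; the node names, IP addresses, and the TCP port used for the probe are all placeholders you would adjust to your environment.

# Hypothetical pre-check that each cluster node answers on both the
# private (heartbeat) and public networks. Names, addresses, and the
# probe port are placeholders.
import socket

NODES = {
    "node1": {"private": "10.0.0.1", "public": "192.168.1.1"},
    "node2": {"private": "10.0.0.2", "public": "192.168.1.2"},
}

def reachable(ip, port=445, timeout=2.0):
    """Attempt a TCP connection; True if the host accepts it."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for node, links in NODES.items():
    for link, ip in links.items():
        status = "up" if reachable(ip) else "DOWN"
        print(f"{node} {link} link ({ip}): {status}")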
Required Configuration Components

You will need the components in Table 1-1 to install the HA/F500 for EVA Enhanced DT cluster.

NOTE: Only the HP StorageWorks Secure Path software is included in this kit.

Table 1-1: Required Components

• Up to eight cluster nodes: For a current list of supported HP ProLiant servers, refer to the website: http://www.hp.com/servers/proliant/highavailability
Table 1-1: Required Components continued

• FCIP interconnect:
  Computer Network Technology (CNT) Corporation
    • CNT UltraNet Edge Storage Router Model 1001
  SAN Valley Systems, Inc.
    • SL 1000-AC IP-SAN Gateway
    • SL 1000-DC IP-SAN Gateway
• Ethernet connection: HP VNSwitch 900XA
• Storage system: HP EVA for HA/F500 Cluster, Version 3.0
Basic DT Configuration

The basic DT configuration includes a second destination storage subsystem that mirrors the data on the source storage subsystem. The basic DT configuration consists of:

• Two ProLiant servers as cluster nodes
• Two storage subsystems
• Four HP StorageWorks Fibre Channel SAN switches (for a current list of supported switches, refer to the website: http://www.hp.com/servers/proliant/highavailability)
Bidirectional DT Configuration

The bidirectional DT configuration allows a source subsystem to also be configured as a destination subsystem.
Figure 1-2: Bidirectional DT cluster configuration

Maximum DT Configuration

Refer to the website, http://www.hp.com/servers/proliant/highavailability, for maximum configuration information.
2 Cluster Installation

This chapter details procedures outlined in the corresponding guides listed at the beginning of each of the following sections. It is important to follow the steps covered in this chapter because many steps are specific to the HA/F500 Enhanced DT cluster installation.

Required Materials

To configure an HA/F500 Enhanced DT cluster for CA, you will need any applicable documents listed in the “Related Documents” section of the HP Continuous Access EVA Getting Started Guide.
Continuous Access

Refer to the HP StorageWorks Continuous Access EVA Operations Guide for detailed information on CA, including any restrictions. Table 2-1 outlines the restrictions that are specific to the HA/F500 Enhanced DT cluster configuration.

Table 2-1: CA Restrictions

• Maximum of eight storage systems on the SAN.
• A storage system is visible to only one Command View EVA instance at a time.
• A disk group must contain at least eight physical drives.
Table 2-1: CA Restrictions continued

• Each storage system can have a relationship with one remote storage system.
• HSG controllers may be present on the SAN but cannot interoperate with HSV controllers. The HSG and HSV controllers must be in different management zones.
• One snapshot or Snapclone is allowed per Vdisk; it can be on either the source or the destination.
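A planning aid like the hypothetical Python sketch below can flag configurations that would violate the restrictions in Table 2-1 before installation begins. The limits mirror the table; the inventory data structure is illustrative only and is not produced by any HP tool.

# Hypothetical planning check against the CA restrictions in Table 2-1.
# The inventory data structure is illustrative; adjust it to your SAN.

MAX_STORAGE_SYSTEMS = 8      # maximum storage systems on the SAN
MIN_DRIVES_PER_GROUP = 8     # a disk group needs at least 8 physical drives

san = {
    "storage_systems": [
        {"name": "SiteA", "disk_groups": [{"name": "default", "drives": 12}],
         "remote_partner": "SiteB"},
        {"name": "SiteB", "disk_groups": [{"name": "default", "drives": 12}],
         "remote_partner": "SiteA"},
    ],
}

def check_restrictions(san):
    problems = []
    systems = san["storage_systems"]
    if len(systems) > MAX_STORAGE_SYSTEMS:
        problems.append(f"{len(systems)} storage systems exceeds {MAX_STORAGE_SYSTEMS}")
    for s in systems:
        for g in s["disk_groups"]:
            if g["drives"] < MIN_DRIVES_PER_GROUP:
                problems.append(f"{s['name']}/{g['name']}: only {g['drives']} drives")
        # each storage system may pair with exactly one remote system
        if not isinstance(s.get("remote_partner"), str):
            problems.append(f"{s['name']}: must pair with exactly one remote system")
    return problems or ["no restriction violations found"]

for line in check_restrictions(san):
    print(line)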
Installing the Hardware

Depending on the size of your SAN and the considerations used in designing it, many different hardware configurations are possible. Refer to the HP StorageWorks Continuous Access Enterprise Virtual Array Design Reference Guide for a detailed description of various hardware configurations.
Table 2-2: Hardware Preparation Checklist for CA EVA Installation continued

• Task: Plan and populate the layout of one or more physical disk groups.
  Reference Document: HP StorageWorks Continuous Access EVA Operations Guide, Chapter 3
• Task: Power up the storage systems and SMAs.
Setting Up the Fibre Channel Switches at Both Locations, if Applicable

NOTE: Both Fibre Channel switches can be configured from the same site.

Your Fibre Channel switches must be installed and configured with two working redundant fabrics before you connect the remaining CA EVA components to your fabrics. For information on the specific switches used and the GBICs needed, refer to the following website: http://h18006.www1.hp.com/storage/saninfrastructure.
Figure 2-1: Supported cabling

Either controller can be controller A or controller B. In a storage system that has not been configured, the first controller that powers up and passes its self-test becomes controller A. Also, under certain conditions, controller A and controller B can have their designations reversed. Any other controller-to-fabric cabling scheme is not supported.
4. Power up that controller.
5. After controller A passes the self-test, power up the other controller.
Host-to-Switch Connections

Tag each end of your fiber optic cable to identify switch names, port numbers, host names, and so on. Two fiber optic connections are required for each host. Connect the fiber optic cables so that the connections to a host's two FCAs go to two separate switches (fabrics).

Zoning Recommendations

Both fabrics must be in place and operational before you begin any other equipment installation.
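The hypothetical Python sketch below checks one rule from this section: each host's two FCA connections must land on two separate fabrics. The host names, WWNs, and fabric names are placeholders; the records would come from your own zoning worksheet.

# Hypothetical check that each host's two FCAs are cabled to two
# separate fabrics, as required above. All names and WWNs are placeholders.

connections = [
    # (host, fca_wwn, fabric)
    ("node1", "50:06:0b:00:00:aa:00:01", "fabric1"),
    ("node1", "50:06:0b:00:00:aa:00:02", "fabric2"),
    ("node2", "50:06:0b:00:00:bb:00:01", "fabric1"),
    ("node2", "50:06:0b:00:00:bb:00:02", "fabric1"),  # mistake: same fabric twice
]

def check_fabric_separation(connections):
    fabrics_by_host = {}
    for host, _wwn, fabric in connections:
        fabrics_by_host.setdefault(host, set()).add(fabric)
    for host, fabrics in sorted(fabrics_by_host.items()):
        if len(fabrics) < 2:
            print(f"{host}: BOTH FCAs on {fabrics.pop()} -- no fabric redundancy")
        else:
            print(f"{host}: redundant paths across {sorted(fabrics)}")

check_fabric_separation(connections)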
Bidirectional Solution

You may configure data replication groups to replicate data from storage system A to storage system B, and other, unrelated data replication groups to replicate data from storage system B back to storage system A. This feature, called bidirectional replication, allows a storage system to have both source and destination virtual disks, where these Vdisks belong to separate Data Replication (DR) groups.
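The hypothetical sketch below models the key point of this section: the replication direction is fixed per DR group, yet a single storage system can act as source in one DR group and destination in another. The DR group and site names are illustrative only.

# Hypothetical model of bidirectional replication: each DR group has one
# fixed direction, but a storage system may appear as source in one group
# and destination in another. Names are illustrative.

dr_groups = [
    {"name": "DRG_SQL",  "source": "SiteA", "destination": "SiteB"},
    {"name": "DRG_File", "source": "SiteB", "destination": "SiteA"},
]

def roles(system, dr_groups):
    """Return the replication role a storage system plays in each DR group."""
    return {g["name"]: ("source" if g["source"] == system else "destination")
            for g in dr_groups if system in (g["source"], g["destination"])}

print(roles("SiteA", dr_groups))  # {'DRG_SQL': 'source', 'DRG_File': 'destination'}
print(roles("SiteB", dr_groups))  # {'DRG_SQL': 'destination', 'DRG_File': 'source'}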
Configuring the Software

The storage system must be initialized before it can be used. This process binds the controllers together as an operational pair and establishes preliminary data structures on the disk array. Initialization is performed using Command View EVA. This procedure is documented in the HP StorageWorks Command View EVA Getting Started Guide.
Preparation Checklist for Continuous Access EVA Software Installation

Table 2-3: Software Preparation Checklist for CA EVA Installation

Task: Install system software:
• HP OpenView Storage Management Appliance Software v2.0 Update
  Reference Document: HP OpenView Storage Management Appliance Software User Guide
• EVA system software VCS V3.0, or upgrade EVA system software v2.0 to v3.0
Logging On to the SAN Management Appliance

1. Log on to the SMA by opening a browser and accessing the SMA remotely, entering its IP address (or its network name, if a Domain Name System (DNS) is configured) as the URL. The logon screen opens.
2. Click anonymous.
3. Log in as administrator.
4. Enter the password for the account.
5. Click OK. The hp openview storage management appliance window displays.
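Before opening a browser session, a quick reachability test can save troubleshooting time. The Python sketch below is a hypothetical pre-check only, not an HP tool; the URL is a placeholder for your SMA's IP address or DNS name.

# Hypothetical pre-check that the SMA answers over HTTP before you open
# a browser session. The address is a placeholder for your SMA's IP
# address or DNS name.
import urllib.request
import urllib.error

SMA_URL = "http://sma.example.local/"   # placeholder address

def sma_reachable(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as err:
        # An HTTP error (e.g., 401 authentication required) still proves
        # the appliance is answering.
        return err.code < 500
    except (urllib.error.URLError, OSError):
        return False

print("SMA reachable" if sma_reachable(SMA_URL) else "SMA not answering")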
5. Click Add license. The license key is added.
6. To enter additional license keys, repeat steps 4 and 5.
Naming the Site

In the hp openview storage management appliance window:

1. Click Devices. The Devices window displays.

Figure 2-4: HP OpenView SAN window

2. Click command view eva. The HSV Storage Network Properties window displays. You can now browse the EVAs under Uninitialized Storage System in the navigation pane.
3. Determine which site is to be designated Site A and which is to be designated Site B by selecting Hardware > Controller Enclosure.
Figure 2-5: Initialize an HSV Storage System window

5. In the Step 1: Enter a Name field, enter the site name.
6. In the Step 2: Enter the number of disks field, enter the maximum number of disks (minimum of eight in a disk group) or the number of disks you will use in the default disk group.

NOTE: You must determine whether you will configure your storage in a single disk group or in multiple disk groups.
Creating the VD Folders

1. In the Command View EVA navigation pane, click Virtual Disks. The Create a Folder window displays.

Figure 2-6: Creating a VD Folder window

2. In the Step 1: Enter a Name field, enter the folder name (use the cluster name).
3. In the Step 2: Enter comments field, enter any additional information.
4. Click Finish, and then click OK.
Creating the VDs

You are given the opportunity to select a preferred path during the creation of a Vdisk. This means that host I/O to a Vdisk will go to the controller you designate as preferred, as long as the paths to that controller are available. There are five possible preferred path settings. However, the Windows environment allows only the following, because Secure Path is responsible for supporting failback capability:

• No preference
• Path A - Failover only
• Path B - Failover only
1. In the Command View EVA navigation pane, click the new VD folder. The Create a Vdisk Family window displays.

Figure 2-7: Create a Vdisk Family window

2. In the Vdisk name: field, enter the VD name.
3. In the Size: field, enter the size in gigabytes.
4. In the Preferred path/mode: dropdown menu, make a selection (for load balancing; see the sketch that follows these steps).
5. Click Create More and repeat steps 2 through 4 for each VD you create.
6. Click Finish, and then click OK.
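One common way to use the preferred path setting in step 4 for static load balancing is to alternate new Vdisks between controller A and controller B. The Python sketch below is a hypothetical planning helper only; the Vdisk names are placeholders, and the failover-only modes match the settings Secure Path supports.

# Hypothetical helper that alternates the preferred path of each new
# Vdisk between controller A and controller B to spread host I/O.
# Vdisk names are placeholders.

VDISKS = ["quorum", "sql_data", "sql_log", "file_share"]

def assign_preferred_paths(vdisks):
    """Alternate Vdisks across the two controllers (failover only)."""
    plan = {}
    for i, name in enumerate(vdisks):
        controller = "A" if i % 2 == 0 else "B"
        plan[name] = f"Path {controller} - Failover only"
    return plan

for vdisk, path in assign_preferred_paths(VDISKS).items():
    print(f"{vdisk}: {path}")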
Creating the Hosts

Creating the Host Folder

Create a host folder for each cluster to enable ease of administration.

1. Click Create Folder. The Create a Folder window displays.

Figure 2-8: Create a Folder window

2. In the Step 1: Enter a Name field, enter SiteA or any name up to 32 characters long.
3. In the Step 2: Enter comments field, enter any additional information, up to 64 characters long.
4. Click Finish, and then click OK.
5.
Adding a Host

NOTE: If the SAN appliance cannot see the host WWNs, perform steps 1 and 2. Otherwise, begin at step 3.

1. Reboot the SAN appliance.
2. Access the Command View EVA application.
3. Click the desired host in the navigation pane. The Add a Host window displays.

Figure 2-9: Add a Host window

4. In the Host name: field, enter the host name.
5. In the Host IP address: dropdown menu, select the appropriate scheme, or enter the IP address if it is a static IP address.
6.
The Add a Host Port window displays.

Figure 2-10: Add a Host Port window

8. For each FCA:
   a. In the Click to select from list dropdown menu, select the appropriate FCA.
   b. Click Add port.
9. Select the Ports tab (which displays only after you add a port) and verify that the ports are correctly assigned.
10. Repeat the procedure for Site B.
Presenting the VDs to the Host

CAUTION: Shut down all the nodes. Only one node should see the drives at one time.

1. In the Command View EVA navigation pane, click the first new VD. The Vdisk Active Member Properties window displays.
2. Click Presentation. The Vdisk Active Member Properties window displays.
3. Click Present. The Present Vdisk window displays.

Figure 2-13: Present Vdisk window

4. Select both hosts, and then click Present Vdisk.
5. Click OK. You are returned to the Vdisk Active Member Properties window.
6. Select the Presentation tab to verify that both hosts are on the same LUN. The Vdisk Active Member Properties window displays.

Figure 2-14: Vdisk Active Member Properties window, Presentation tab view

7. Repeat steps 1 through 6 for each VD.
8. Power on Node 1.
9. Log on to the domain.
10. Wait until all the VDs are discovered.
11. Open the operating system Device Manager and verify that the disk drives are visible.
12.
15. Repeat steps 8 through 14 for Node 2.
16. Join Node 2 to the cluster.

Creating Replicated Disks

Discovering the Devices

You will create the copy sets and DR groups in the sequence described in the following sections.

1. In the hp openview storage management appliance window, click Tools.
2. Click continuous access. The Continuous Access Status window displays.

NOTE: You are now working in the CA software, not the HSV Element Manager.

Figure 2-16: Continuous Access Status window

The window is empty.

3. Click Refresh > Discover. A pop-up window informs you that the discovery process could be lengthy. After the system has discovered the devices, you will create the DR groups and copy sets.

NOTE: You must plan how to separate managed sets and copy sets.
Creating the DR Groups

You can create the DR groups first, or create the initial copy set, which forces the DR group creation process. The following procedure is for creating the DR groups before the copy sets.

1. On the Continuous Access window, select the site from the navigation pane.
2. Click Create > DR Group. The Create a new DR Group window opens.

Figure 2-17: Create a new DR Group window

3. In the DR Group: field, enter the name.
4.
Creating the Copy Sets

NOTE: Entering the first copy set forces the DR group creation sequence if no DR group has yet been created.

1. On the Continuous Access window, select the site from the navigation pane.
2. Click Create > Copy Set. The Create a new Copy Set window opens.

Figure 2-18: Create a new Copy Set window

3. In the DR Group: dropdown list, select the DR group to which the copy set will belong.
4. In the Copy Set: field, enter the copy set name.
5.
7. Select the destination from the Destination Storage System: dropdown list (Site B, if you have followed the suggested naming conventions).
8. Click Finish.

Creating the Managed Sets

A managed set is a folder created to hold DR groups. One or more DR groups can be combined to create a managed set. (A sketch of this containment hierarchy follows the steps below.)

1. Choose Create > Managed Sets. The Edit or create a Managed Set window displays.

Figure 2-19: Edit or create a Managed Set window

2. In the Managed Set Name: field, enter the name.
3.
5. In the navigation pane, select the first DR group to be part of a managed set.
6. In the Configuration dropdown menu, select Edit. The Edit an existing DR Group window displays.

Figure 2-20: Edit an existing DR Group window

7. Select a managed set from the Managed Set list, and then click Finish.
8. Repeat steps 5 through 7 for each DR group you want to add.
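The hypothetical Python sketch below models the containment hierarchy used in the preceding procedures: a managed set is a folder of DR groups, and each DR group holds one or more copy sets. All names are illustrative placeholders.

# Hypothetical model of the containment hierarchy: managed set ->
# DR groups -> copy sets. Names are illustrative placeholders.

managed_sets = {
    "ClusterA_ManagedSet": {
        "DRG_SQL":  ["copyset_sql_data", "copyset_sql_log"],
        "DRG_File": ["copyset_file_share"],
    },
}

def describe(managed_sets):
    for ms_name, dr_groups in managed_sets.items():
        print(f"Managed set: {ms_name}")
        for drg, copy_sets in dr_groups.items():
            print(f"  DR group {drg}: {len(copy_sets)} copy set(s)")
            for cs in copy_sets:
                print(f"    {cs}")

describe(managed_sets)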
Presenting the VDs to Cluster Nodes

1. In the hp openview storage management appliance window, select Devices. The Devices window displays.

Figure 2-21: Devices window

2. Click command view eva. The HSV Element Manager Storage Network Properties window opens.
3. In the navigation pane, select the destination subsystem, and then click Virtual Disks.
4. Select the virtual disk to present on the destination subsystem.
5. Select Active. The Vdisk Active Member Properties window displays.
6. Select the Presentation tab, and then click Present. The Present Vdisk window opens.

Figure 2-23: Present Vdisk window

7. Select the VDs, and then click Present Vdisk.
8. Click OK.
9. Repeat for each VD to present.
10. Verify that the disks are properly presented.
   a. In the navigation pane, select the host to verify.
   b. Select the Presentation tab. The Host Properties window displays.

Figure 2-24: Host Properties window

   c. Verify that each VD is presented to a unique logical unit number (LUN).

The configuration is complete.
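The hypothetical sketch below automates the consistency rule behind steps 6 and 10c: each Vdisk should be presented to both hosts at the same LUN, and no LUN should be reused on the same host. The presentation records are placeholders for values you would read off the Presentation tab.

# Hypothetical consistency check: every Vdisk presented to both hosts at
# the same LUN, no LUN reused on a host. Records are placeholders.

presentations = [
    # (vdisk, host, lun)
    ("quorum",   "node1", 1), ("quorum",   "node2", 1),
    ("sql_data", "node1", 2), ("sql_data", "node2", 2),
]

def check_presentations(presentations):
    luns_by_vdisk, luns_by_host = {}, {}
    for vdisk, host, lun in presentations:
        luns_by_vdisk.setdefault(vdisk, set()).add(lun)
        if lun in luns_by_host.setdefault(host, set()):
            print(f"ERROR: LUN {lun} used twice on {host}")
        luns_by_host[host].add(lun)
    for vdisk, luns in luns_by_vdisk.items():
        if len(luns) == 1:
            print(f"{vdisk}: consistent at LUN {luns.pop()}")
        else:
            print(f"ERROR: {vdisk} presented at different LUNs {sorted(luns)}")

check_presentations(presentations)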
Zoning Worksheets

Locate and record the WWNs of each host on the zoning worksheet. Keep a copy of all worksheets at all your sites.
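A worksheet kept as a plain file is easy to copy to every site. The hypothetical Python sketch below validates each recorded WWN (eight colon-separated hex byte pairs) and writes the worksheet to a CSV file; the entries, file name, and column layout are illustrative only.

# Hypothetical helper for the zoning worksheet: validate each recorded
# WWN and write the worksheet to a CSV file so a copy can be kept at
# every site. Entries and file name are examples.
import csv
import re

WWN_PATTERN = re.compile(r"^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$")

worksheet = [
    {"host": "node1", "fca": "FCA1", "wwn": "50:06:0b:00:00:aa:00:01"},
    {"host": "node1", "fca": "FCA2", "wwn": "50:06:0b:00:00:aa:00:02"},
]

for row in worksheet:
    if not WWN_PATTERN.match(row["wwn"]):
        raise ValueError(f"badly formed WWN for {row['host']}/{row['fca']}")

with open("zoning_worksheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["host", "fca", "wwn"])
    writer.writeheader()
    writer.writerows(worksheet)
print("zoning_worksheet.csv written")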
3 Disaster Recovery

This chapter covers failover and failback scenarios that can occur with the ProLiant Cluster HA/F500 for EVA Enhanced DT configuration.

Managing CA

Refer to the HP Continuous Access EVA Operations Guide (referred to in this chapter as the CA guide) for complete instructions on managing the CA software.

Failure Scenarios

The HA/F500 for EVA Enhanced DT cluster uses CA software to provide automated failover of applications, servers, and server resources.
Resource Failover

A cluster resource failover occurs if a cluster resource fails at the site (refer to Figure 3-1). The exact failover behavior can be configured for each cluster group, but usually the entire cluster group that contains the failed resource attempts to switch to the other cluster node. No storage subsystem failover is required in this situation. You are given the opportunity to select a preferred path during the creation of the Vdisks.
Local Server Failure

A normal failover occurs if one of the cluster nodes fails. All the resources defined in the cluster groups that were running on the failed node attempt to switch over to the surviving nodes. As with a cluster resource failure, no storage subsystem failover is required. This is also the case when a cluster node is brought down for a scheduled event, such as system maintenance.
Source Site Failover

A small amount of downtime occurs during a site failover. If the source storage system fails or is no longer available, the cluster does not have access to any data or applications, and the cluster is not available during this time. Refer to the CA guide to determine the scenarios that warrant a failover.

There are two types of failover procedures to recover the remote copy sets: planned and unplanned. Use the planned failover procedure when failover is a scheduled event.
Source Site Failback

The failover/failback options are not supported with Secure Path. The two failover-only options for Vdisk creation allow the host to control when a Vdisk moves to a preferred path. For example, if path A is preferred and that path becomes unavailable, path B is used. The host then controls the movement back to path A when it becomes available later.
Glossary

array controller software (ACS)
Software contained on a removable ROM program card that provides the operating system for the array controller.

bidirectional
Pertaining to the process by which two servers mirror each other from remote locations.

CA
Continuous Access.
destination site
The location of the secondary network.

disaster tolerant (DT)
A solution that provides rapid data access recovery and continued data processing after the loss of one or more components.

failback
1. The process that takes place when a previously failed controller is repaired or replaced and reassumes the workload from a companion controller.
2. The process that takes place when the operation of a previously failed cluster group moves from one cluster node back to its primary node.
host
The primary or controlling computer to which a storage system is attached.

host bus adapter (HBA)
A device that connects a host system to a SCSI bus. The host bus adapter usually performs the lowest layers of the SCSI protocol. This function may be logically and physically integrated into the host system.

initiator site
The location of the primary network.

latency
The amount of time required for a transmission to reach its destination.

local site
The location of the primary network.
OCP
Operator Control Panel. The element on the front of an HSV controller that displays the controller's status using LEDs and an LCD. Information selection and data entry are controlled by the OCP push buttons.

remote
Files, devices, and other resources that are not directly connected to the system being used at the time.