Configuration Guide hp StorageWorks Data Replication Manager HSG80 ACS Version 8.7P Product Version: ACS v8.7P Sixth Edition (March 2004) Part Number: AA–RPHZF–TE HP StorageWorks Data Replication Manager provides a disaster-tolerant solution for secure data storage through the use of hardware redundancy across several sites. Multiple heterogeneous servers can be connected to one or more shared storage subsystems.
© Copyright 2000–2004 Hewlett-Packard Development Company, L.P. Hewlett-Packard Company makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
Contents

About this Guide .... 13
  Overview .... 14
  Intended Audience .... 14
  Related Documentation
Nonremote Copy Sets .... 34
Operation Modes .... 34
  Synchronous Operation Mode .... 34
  Asynchronous Operation Mode
4 Configuring a Standard Data Replication Manager Solution .... 57
  Introduction .... 58
  Restrictions .... 58
  Configuration Overview
Installation Steps .... 83
  Upgrade Installation .... 85
    Upgrade Installation Assumptions .... 85
  HBA Limitations
Longwave or Very Long Distance GBICs .... 111
Other Transport Modes .... 111
Create Switch Zones .... 112
Create Remote Copy Sets
Installation Steps .... 129
  Connect the Host to the SAN .... 132
  Rename the Host Connections .... 132
  Update Switch Zones
Single-Switch Configuration .... 152
  Setting Up the Single-Switch Configuration .... 153
Single-Fabric Configuration .... 154
  Setting Up the Single-Fabric Configuration
Example: Zoning Yellow Zone_Top and Yellow Zone_Bottom .... 200
Example: Zoning Brown Zone_Top and Brown Zone_Bottom .... 202
Create the Zone Names .... 204
Add the New Zones to the Configuration

Figures
  Cabling from the initiator to the target site
  DRM dual-switch single-site configuration
  Single-switch DRM configuration
  Dual switch with single ISL
About this Guide

This configuration guide provides information to help you:
■ Understand HP StorageWorks Data Replication Manager (DRM) hardware requirements and configurations
■ Understand remote copy set concepts
■ Set up and cable your DRM solutions
■ Consider entry-level and advanced configurations
■ Troubleshoot your DRM configuration
■ Decide how zoning will help your DRM configuration
■ Contact technical support for additional assistance
About this Guide Overview This section covers the following topics: ■ Intended Audience ■ Related Documentation Intended Audience This book is intended for use by system administrators who are experienced with the following: ■ ACS Version 8.
About this Guide Conventions Conventions consist of the following: ■ Document Conventions ■ Text Symbols ■ Equipment Symbols Document Conventions This document follows the conventions in Table 1.
About this Guide T Identifies a procedural step to be performed at the target site. Equipment Symbols The following equipment symbols may be found on the hardware to which this guide pertains. They have the following meanings: Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. The enclosed area contains no operator-serviceable parts. WARNING: To reduce the risk of personal injury from electrical shock hazards, do not open this enclosure.
About this Guide Rack Stability Rack stability protects personnel and equipment. WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that: ■ The leveling jacks are extended to the floor. ■ The full weight of the rack rests on the leveling jacks. ■ In single rack installations, the stabilizing feet are attached to the rack. ■ In multiple rack installations, the racks are coupled. ■ Only one rack component is extended at any time.
About this Guide HP Authorized Reseller For the name of your nearest HP authorized reseller: ■ In the United States, call 1-800-345-1518 ■ In Canada, call 1-800-263-5868 ■ Elsewhere, see the HP web site for locations and telephone numbers: http://www.hp.com.
Introduction to Data Replication Manager 1 This chapter introduces HP StorageWorks Data Replication Manager (DRM) and describes the required hardware and software components.
Introduction to Data Replication Manager Data Replication Manager Overview DRM provides a disaster-tolerant (DT) storage solution through the use of hardware redundancy and data replication between two sites separated by some distance. Multiple heterogeneous servers can be connected to one or more shared storage subsystems. A basic DRM configuration consists of two sites—an initiator and a target. The initiator site carries out primary data processing. The target site is used for data replication.
Introduction to Data Replication Manager Required Hardware Components DRM uses a minimum of two HSG80 Array Controller pairs: one at the initiator site and one at the target site. Each site must have one or more ESA12000 racks or EMA12000/EMA16000 modular storage racks: ■ RA8000/ESA12000 racks are equipped with one or more BA370 enclosures and disk storage building blocks (SBBs). Each BA370 enclosure holds up to 24 disks.
Introduction to Data Replication Manager ESA12000 Storage Rack The ESA12000 SBB rack houses the BA370 enclosures, which contain the components listed in Table 2.
Introduction to Data Replication Manager Figure 2 shows additional components that must be added to the ESA12000 building block to support a DRM solution, including Fibre Channel switches. The optional redundant power distribution unit is also shown.
Introduction to Data Replication Manager EMA12000 Modular Storage Rack The EMA12000 modular SBB racks include power distribution units, are pre-cabled, and contain the components listed in Table 3.
Introduction to Data Replication Manager Figure 3 shows an EMA12000 modular building block that supports a DRM solution. The modular SBB consists of the controller enclosure and the disk enclosure. The redundant power distribution unit is also shown.
Introduction to Data Replication Manager
Figure 4: Fibre Channel SAN Switch 16
Figure 5: Fibre Channel SAN Switch 8-EL
Figure 6: StorageWorks edge switch 2/16
Figure 7: StorageWorks director 2/64
Introduction to Data Replication Manager Gigabit Interface Converters Gigabit interface converters (GBICs) are inserted into the ports of the Fibre Channel switch and serve as the interface between the fiber optic cables and the switch. Short-wave GBICs are used with a 50-micron multimode fiber optic cable (SC-terminated) to connect the components at the initiator and target sites (host-to-switch; controller-to-switch).
Introduction to Data Replication Manager Hardware Configurations The figures shown previously in this chapter reflect the build of a DT solution for each of the two types of rack configurations. Figure 8 shows a completed DT setup for the ESA12000 rack. Figure 9 shows a completed DT setup for the EMA12000 modular storage rack. Note: If you prefer to join racks for more storage capacity, follow the instructions in the rack documents. Be sure to establish the same setup at both the initiator and target sites.
Introduction to Data Replication Manager
1  Controller enclosure component of modular SBB
2  Disk enclosure component of modular SBB
3  Redundant power distribution unit (standard)
Figure 9: Fibre Channel-based EMA12000 DT modular storage subsystem (with fully-redundant power)
Introduction to Data Replication Manager Software Components This section describes the software components necessary to configure and manage a DT storage subsystem. For installation instructions, see Chapter 4, “Configuring a Standard Data Replication Manager Solution.” Array Controller Software HSG80 Array Controller Software (ACS) is the software component of the HSG80 Array Controller subsystem.
Introduction to Data Replication Manager StorageWorks Command Console (Optional) SWCC provides local and remote management of controllers and their attached storage devices. SWCC consists of two major components: the SWCC client and the SWCC agent. SWCC can be used to configure and manage the DT storage subsystem.
Remote Copy Set Features 2 This chapter discusses Data Replication Manager (DRM) concepts you need to know to configure a DRM solution. These concepts primarily describe how to use remote copy sets and association sets.
Remote Copy Set Features Remote Copy DRM uses the peer-to-peer remote copy function of the HSG80 controller to achieve data replication. The HSG80 dual-controllers at the initiator site are connected to their partner HSG80 controllers at the target site. Remote copy sets are mirrors of each other and are created from units at the initiator and target sites. As data is written to a unit at the initiator site, it is mirrored to its remote copy set partner unit at the target site.
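As a sketch of how this initiator/target pairing is expressed in the ACS CLI (the unit numbers and the target node name BuildngB are hypothetical, and the exact command form should be checked against Chapter 4 and the ACS CLI reference):

```
! Hypothetical sketch: create remote copy set RCS1, mirroring
! initiator unit D1 to partner unit D1 on target subsystem BuildngB.
BuildngATop> ADD REMOTE_COPY_SETS RCS1 D1 BuildngB\D1
! Verify the new remote copy set and its initiator/target state.
BuildngATop> SHOW REMOTE_COPY_SETS FULL
```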
Remote Copy Set Features
Figure 10: Remote copy set operation modes. In synchronous mode, a transaction is not complete until the update has been transmitted over the network and applied at the recovery site. In asynchronous mode, the transaction is complete once the primary data center is updated and the user is notified; the recovery site update may be done at a later time.
Remote Copy Set Features Synchronous For the synchronous operation mode, the OUTSTANDING_IO setting refers to the number of initiator-to-target writes that can be outstanding at any one time. If OUTSTANDING_IO is set to 1 and the host issues four writes to a remote copy set, then only one write is in progress between the initiator and target at a time. The other three writes are queued in the initiator controller.
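The queueing behavior just described might be set as follows; the remote copy set name RCS1 is hypothetical, and OUTSTANDING_IO is the switch named in the text:

```
! One initiator-to-target write in flight at a time; remaining host
! writes queue in the initiator controller.
BuildngATop> SET RCS1 OUTSTANDING_IO=1
```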
Remote Copy Set Features Note: This switch is valid only in normal error mode with write history logging enabled (not failsafe). The RESUME switch initiates the mini-merge restore of the specified remote target unit. This switch enables the initiator to read the log unit and send the write commands, in order, to the target, which brings the target into congruency with the initiator. For more information on mini-merge, see “Write History Logging” on page 39.
Remote Copy Set Features ■ If FAIL_ALL is set, and if one member assumes the failsafe locked condition, then all members of the association set assume the failsafe locked condition. ■ Association sets reside on the initiator dual controller, as illustrated in Figure 11.
Remote Copy Set Features ADD ASSOCIATIONS Command When you issue the ADD ASSOCIATIONS AssociationSetName RemoteCopySetName command, it adds an association set with one member to the controller pair’s configuration. Use this command on the node on which the initiator resides. Issue the SET AssociationSetName ADD = RemoteCopySetName command to add additional members.
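Using the command forms quoted above, creating and extending an association set might look like this (the set and member names are hypothetical):

```
! Add an association set with one member, on the node where the
! initiator resides.
BuildngATop> ADD ASSOCIATIONS AS_D1 RCS1
! Add an additional member to the existing association set.
BuildngATop> SET AS_D1 ADD = RCS2
```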
Remote Copy Set Features Write History Log Unit Restrictions Things to remember about write history log units include: ■ Up to 12 write history log units can be assigned (12 possible remote copy sets). ■ There can be only one write history log unit assigned to an association set. — No new remote copy sets can be added to an association set while write history logging is active. — No new target can be added to a remote copy set that is part of an association set while write history logging is active.
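Assigning the single write history log unit described above might be sketched as follows; the LOG_UNIT switch spelling and the unit number are assumptions, not quoted from this guide:

```
! Assign D20 as the single write history log unit for association
! set AS_D1 (switch spelling assumed; names hypothetical).
BuildngATop> SET AS_D1 LOG_UNIT = D20
```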
Remote Copy Set Features Write History Log Unit Performance Considerations When the write history log unit is merging the captured write operations back to the target, the host makes all I/O resources available to the write history log unit. This means that you can expect at least a 90 percent reduction of host I/O capability for other operations.
Remote Copy Set Features ORDER_ALL Switch When the ORDER_ALL switch of the ADD ASSOCIATIONS command is enabled, the order of all asynchronous write operations across all members of the association set is preserved. No write history log unit is required. With the ORDER_ALL switch enabled and write history logging enabled, if one member of the association set starts write history logging, all members of the association set start write history logging.
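Because ORDER_ALL is described as a switch of the ADD ASSOCIATIONS command, enabling it at creation time might look like this hypothetical sketch:

```
! Preserve the order of all asynchronous writes across all members
! of the association set; no write history log unit is required for
! ordering itself.
BuildngATop> ADD ASSOCIATIONS AS_D1 RCS1 ORDER_ALL
```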
Remote Copy Set Features Unplanned Failover An unplanned failover does not allow for an orderly shutdown of controllers. An unplanned failover is initiated when any of the following occurs: ■ The initiator site is lost. ■ There is no host access. ■ Access to both initiator controllers is lost. Note: If both links are severed and the initiator configuration is functional, the system administrator must determine which site to use as the primary site.
Getting Started 3 This chapter explains how to get your Data Replication Manager (DRM) solution ready for setup. Note: It is a good idea to keep a copy of this manual at both the initiator and target sites to ensure a successful and identical setup at both sites. Two copies also eliminate confusion if more than one person is configuring DRM.
Getting Started Site, Host, and Solution Preparation Before you start operating your disaster tolerant (DT) subsystem, you must: ■ Ensure that you have sufficient space to install and store the subsystems and have adequate power and cooling resources ■ If you choose to use more than one rack, understand the proper methods for positioning and joining subsystems ■ Have the proper devices installed ■ Verify that all of the storage components are in place Host Bus Adapter Requirements To run your DRM s
Getting Started 2. Ping using the name of the switch. This verifies that name resolution is working. 3. Telnet into the switch (username = admin; password = password [default setting]). Refer to your switch documentation for Telnet session procedures. Make the following adjustments to the switch: — Enter switchName to configure the switch name. Be sure to designate a name that enables you to easily identify the switch you are trying to access.
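Steps 1 through 3 might look like the following console session; the address, switch name, and prompts are illustrative only:

```
> ping 10.1.1.21                      (step 1: IP address responds)
> ping top_switch_b                   (step 2: name resolution works)
> telnet top_switch_b                 (step 3: log in as admin)
top_switch_b:admin> switchName "top_switch_b"
```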
Getting Started Host-to-Switch Connections ■ Host name and rank number or PCI slot number of the HBA ■ Switch name and port number on switch Switch-to-Controller Connections ■ Fibre Channel switch name (top or bottom) ■ Fibre Channel switch port number (0-15 for a sixteen port switch) ■ Site name (initiator or target) ■ Controller name ■ Controller port number (1 or 2) ■ Host port number ■ HBA WWN The DT solution requires two different types of fiber optic cables, depending on where the c
Getting Started Table 5: Example of Wiring for First Server and First Storage Array at Each Site Initiator Site Target Site Host Port 1 ➔ Top Switch, Port 0 Host Port 1 ➔ Top Switch, Port 0 Host Port 2 ➔ Bottom Switch, Port 0 Host Port 2 ➔ Bottom Switch, Port 0 Controller A, Port 1 ➔ Top Switch, Port 2 Controller A, Port 1 ➔ Top Switch, Port 2 Controller A, Port 2 ➔ Top Switch, Port 4 Controller A, Port 2 ➔ Top Switch, Port 4 Controller B, Port 1 ➔ Bottom Switch, Port 2 Control
Getting Started Example Display 1

BuildngBTop> SHOW UNITS
    LUN      Uses          Used by
    --------------------------------------------------------
    D1       DISK10000
    D2       DISK20000
    D3       R3
    D4       R4
    D5       S5
    D6       M6
    D11      DISK30100
    D20      MIR_DLOG
    D21      MIR_LOGD
    D199     DISK30300
BuildngBTop>

In this example there is no D0 and there is an available LUN (D7) to use (note that D7 is missing from the list). Had there been a D0, you would need to delete the D0, then add unit D7. All LUN 0 devices must be changed to an unused LUN.
Getting Started 5. Change to SCSI-3 mode by issuing the following CLI command: SET THIS_CONTROLLER SCSI_VERSION=SCSI-3 You should see a display similar to that in Example Display 3.
Getting Started Rolling Upgrade Procedure This procedure allows you to change from SCSI-2 to SCSI-3 without stopping all I/O to the HSG80 controllers. Use this procedure if you cannot be without constant access to your storage. The procedure is the same as for the static upgrade above, except for the following: 1. Do not perform step 3 above to stop I/O and unmount devices. 2. Replace step 6 above with the following step 6: 6.
Getting Started ■ Only one extended long wavelength ISL is allowed per fabric. ■ Cascaded switches are not supported in asynchronous transfer mode (ATM) configurations. Cascaded Switch Configurations Figure 13 shows a DRM configuration that increases the distance between sites by using cascaded switches and hopping. There are no hops from the initiator host to the initiator controller and three hops from the initiator host to the target controller.
Getting Started
Figure 14: Cascaded switches in DRM environment with three hops between host and controller. (The figure shows an initiator site with Hosts 1 and 2 and Controller Pair 1, and a target site with Host 3 and Controller Pair 2, connected over 500-meter, 10 KM, and 70 KM segments across Hops 1 through 3.)
Figure 14 shows a DRM configuration that increases the number of host-to-controller port connections using cascaded switches and hopping. The figure features switches cascaded from Host 1 to the initiator controller.
Getting Started Multiple Intersite Links Multiple intersite links (ISLs) provide additional bandwidth between local and remote sites. Each ISL is a fiber link between two switches. The restrictions that apply when using multiple ISLs in a DRM environment are listed below: ■ DRM supports a maximum of two ISL connections per fabric. ■ The Multiple E-port Connectivity software option is required to access more than one E-port when using multiple ISLs or interswitch links with the SAN Switch 8-EL.
Configuring a Standard Data Replication Manager Solution 4 This chapter provides procedures for configuring your Data Replication Manager (DRM) solution. Because a DRM system spans two sites, you must configure the DRM system at each site. These procedures take you through the configuration process. You will first set up the target site, then the initiator site. Setup for each site is similar. At each site, you will configure the controllers by defining controller characteristics specific to DRM.
Configuring a Standard Data Replication Manager Solution Introduction The disaster tolerant (DT) configuration that supports DRM requires two HSG80 Array Controller subsystems—one at an initiator site and one at a target site. Tip: Because of the complexity of the configuration process, it is a good idea to have all DRM documentation available at both sites to eliminate confusion and minimize the risk of error. Follow the steps precisely in the order provided in this document.
Configuring a Standard Data Replication Manager Solution Table 6: Restrictions and Requirements (Continued)
Restriction or Requirement
■ Maximum configuration for all platforms except Novell NetWare: 12 equivalent hosts per storage array; 6 host bus adapters (HBAs) per host; 24 units per host; 8 subsystems per site.
■ Maximum configuration for NetWare: 4 HBAs per host.
■ For IBM AIX, there is a maximum of 32 LUNs per storage array.
■ Maximum of 12 remote copy sets allowed per HSG80 controller pair.
Configuring a Standard Data Replication Manager Solution Table 6: Restrictions and Requirements (Continued)
Restriction or Requirement
■ HP storage arrays running ACS 8.5F, 8.5S, 8.5P, 8.6F, and 8.6S may co-exist on the same SAN with a DRM configuration using ACS 8.7P.
■ For OpenVMS, the LP7000 and LP8000 HBAs may coexist on the same DRM storage area network. However, they may not share the same server.
■ IBM AIX supports only Cambex HBAs.
■ Zoning is required when there is more than one Tru64 TruCluster.
Configuring a Standard Data Replication Manager Solution ■ Create Switch Zones at the Target Site ■ Configure the Host at the Target Site — HP OpenVMS — HP Tru64 UNIX — HP-UX — IBM AIX — Microsoft Windows NT and Windows 2000 — Novell NetWare — Sun Solaris Each of these steps is discussed in detail in the sections beginning on page 62.
Configuring a Standard Data Replication Manager Solution Configure the Controllers at the Target Site Note: Target site procedure steps are marked with a target symbol, T. Initiator site procedures are marked with an initiator symbol, I. Before configuring the controllers at the target site, follow these preparatory steps: T T 1. Identify the World Wide Name (WWN) on the HBAs in each host. 2. Establish the names to assign to the target and initiator sites.
Configuring a Standard Data Replication Manager Solution T T 9. Establish a CLI connection to the top controller. 10. Verify that the top controller is on and functional by looking for the CLI prompt on the maintenance port. Note: Unless otherwise noted, all operations may be conducted from the top controller (controller B1). T 11. To verify that the controllers are properly set up, issue the CLI command: SHOW THIS_CONTROLLER You should see a display similar to that in Example Display 1.
Configuring a Standard Data Replication Manager Solution T 12. Verify that the subsystem WWN (also called the NODE_ID) has been assigned to the controller. If zeros are displayed, the name is not set. ■ If the name is set, go to step 15. ■ If the WWN has not been assigned to the controller, you must obtain the name and set it before proceeding. Note: The subsystem’s WWN and checksum are located on a sticker on top of the frame that houses the controllers, EMU, PVA, and cache modules.
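Setting the subsystem worldwide name might look like the following sketch, using this guide's nnnn placeholder style; take the WWN and its checksum from the sticker, and confirm the exact command form in the ACS CLI reference:

```
! nnnn-nnnn-nnnn-nnnn is the subsystem WWN from the sticker; nn is
! the two-character checksum printed with it (placeholders here).
BuildngBTop> SET THIS_CONTROLLER NODE_ID=nnnn-nnnn-nnnn-nnnn nn
```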
Configuring a Standard Data Replication Manager Solution 15. Configure the controllers for multiple-bus failover mode: SET MULTIBUS_FAILOVER COPY = THIS_CONTROLLER This command automatically restarts the OTHER controller. You should see %LFL and %EVL prompts. Refer to the HP StorageWorks HSG80 Array Controller ACS V8.7 Maintenance and Service Guide for more information on these reports. T 16.
Configuring a Standard Data Replication Manager Solution b. To set SCSI-2 mode: SET THIS SCSI = SCSI-2 T 18. Change your controller prompts to identify which controller you are working on: SET THIS_CONTROLLER PROMPT=”TargetControllerNameTop> ” SET OTHER_CONTROLLER PROMPT=”TargetControllerNameBottom> ” Example: set this_controller prompt=”BuildngBTop> ” Example: set other_controller prompt=”BuildngBBottom> ” Note: This step takes effect immediately. T 19.
Configuring a Standard Data Replication Manager Solution Example Display 6
    . . .
    Mirrored Cache:
        256 megabyte write cache, version 0012
        Cache is GOOD
        No unflushed data in cache
    . . .
If the command is rejected, do not restart the controllers. Wait a few minutes and then try again. Note: It is not necessary to repeat this step on controller B. T 21.
Configuring a Standard Data Replication Manager Solution You should see a display similar to that in Example Display 7. Example Display 7
    . . .
    Host PORT_1:
        Reported PORT_ID = nnnn-nnnn-nnnn-nnnn
        PORT_1_TOPOLOGY = FABRIC (connection down)
        Address = nnnnnn
    Host PORT_2:
        Reported PORT_ID = nnnn-nnnn-nnnn-nnnn
        PORT_2_TOPOLOGY = FABRIC (connection down)
        Address = nnnnnn
    . . .
Configuring a Standard Data Replication Manager Solution Configure Storage at the Target Site Before you can configure the storage for DRM, you must add disks, create the storagesets, and create units. Devices and Storagesets Before you can configure the storage for remote replication, you must add disks, create storagesets, and create units. Follow the instructions in the HP StorageWorks HSG80 ACS Solution Software Version 8.
Configuring a Standard Data Replication Manager Solution T 3. Verify that the access on each unit is set to NONE: SHOW UNITS FULL You should see a display similar to that in Example Display 9.
Configuring a Standard Data Replication Manager Solution Example Display 10
    LUN      Uses          Used by
    --------------------------------------------------------
    D1       DISK10000
        LUN ID: nnnn-nnnn-nnnn-nnnn-nnnn-nnnn-nnnn-nnnn
        NOIDENTIFIER
        Switches:
            RUN              NOWRITE_PROTECT
            READAHEAD_CACHE  WRITEBACK_CACHE  READ_CACHE
            MAXIMUM_READ_CACHED_TRANSFER_SIZE = 128
            MAXIMUM_WRITE_CACHED_TRANSFER_SIZE = 128
        Access: NONE
        State: ONLINE to this controller
        PREFERRED_PATH = OTHER
        Host based logging NOT specified
        Siz
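If a unit reports access other than NONE, its host access paths can be removed before cabling continues; a sketch using the ACS access-path switch (the unit number is illustrative):

```
! Remove all host access paths from unit D1, then re-verify that
! each unit now shows Access: NONE.
BuildngBTop> SET D1 DISABLE_ACCESS_PATH = ALL
BuildngBTop> SHOW UNITS FULL
```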
Configuring a Standard Data Replication Manager Solution Example: a. Insert shortwave GBICs in ports 2 and 4 of both the top and bottom Fibre Channel switches. b. Connect a multimode, 50-micron fiber optic cable from port 1 of the top controller to port 2 of the top Fibre Channel switch (as shown by callout 1 in Figure 17). c. Connect a second multimode, 50-micron fiber optic cable from port 2 of the top controller to port 4 of the top Fibre Channel switch (as shown by callout 2 in Figure 17). d.
Configuring a Standard Data Replication Manager Solution Connect the Target Site to the External Fiber Link Locate the offsite connection points at the target site that link the target site to the initiator site. Execute the procedure in the next section if you have longwave or very long distance GBICs. Otherwise, go to the section below titled “Other Transport Modes.” Longwave or Very Long Distance GBICs T T T 1. Install longwave or very long distance GBICs now if not previously installed. 2.
Configuring a Standard Data Replication Manager Solution Create Switch Zones at the Target Site You must now create zones on the switches that the controllers are connected to. See Chapter 6, “Configuring the Optional Advanced DRM Solutions,” for more information on creating zones. T T T T 1. Create a zone on the top fabric that contains port 1 of the top controller. This zone will later contain target host connections as well. 2.
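On the Brocade-based switches shown in Chapter 1, steps 1 through 4 might be sketched with Fabric OS zoning commands such as these; the zone and configuration names are hypothetical, and each domain,port member must match your own cabling:

```
top_switch_b:admin> zoneCreate "target_top_zone", "1,2"
top_switch_b:admin> cfgCreate "drm_top_cfg", "target_top_zone"
top_switch_b:admin> cfgEnable "drm_top_cfg"
```

Here "1,2" means domain 1, port 2 (the switch port cabled to port 1 of the top controller in this sketch); target host connections would later be added to the same zone with zoneAdd. The bottom fabric would be zoned the same way for the bottom controller port.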
Configuring a Standard Data Replication Manager Solution Install SWCC (Optional) T You may now install SWCC. For detailed information about SWCC, including installation, refer to the Compaq StorageWorks Command Console Version 2.4 User Guide. Additional Setup T You will need the latest TIMA kit, which is identified at: http://h71000.www7.hp.com/openvms/supportchart.html Connect the Host to the SAN Use your established cabling policy to connect the host to the Fibre Channel switch. T T 1.
Configuring a Standard Data Replication Manager Solution T 1. We suggest that you use the worksheet in Figure 19 when renaming your hosts.
Worksheet columns: !NEWCONxx | World Wide Name | Host Name | Host OS Type | HBA Number
Note: If you use scripting to automate failover and failback operations, do not use dashes (hyphens) as separators in your naming convention—use underscores instead. Dashes are not allowed by the Perl scripting language.
Figure 19: Host renaming worksheet
T 2.
Configuring a Standard Data Replication Manager Solution Example Display 12
    Connection                 Operating                                Unit
    Name       system    Controller    Port    Address    Status    Offset
    HostB1     VMS       THIS          1       210013     online    0
        HOST_ID=nnnn-nnnn-nnnn-nnnn    ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
    HostB2     VMS       OTHER         1       200013     online    0
        HOST_ID=nnnn-nnnn-nnnn-nnnn    ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
Configuring a Standard Data Replication Manager Solution Example Display 13 Apr 19 14:19:37 tru002 vmunix: KGPSA-CA : Driver Rev 1.30: F/W Rev 3.81A4(2.01A0): wwn 1000-0000-c924-fe8c Multipath Software Tru64 UNIX has native multipath support with path auto-detection. No further configuration is required. Install SWCC (Optional) T You may now install SWCC. For detailed information about SWCC, including installation, refer to the Compaq StorageWorks Command Console Version 2.4 User Guide.
Configuring a Standard Data Replication Manager Solution Note: If you use scripting to automate failover and failback operations, do not use dashes (hyphens) as separators in your naming convention—use underscores instead. Dashes are not allowed by the Perl scripting language. T 1. We suggest that you use the worksheet in Figure 19 on page 76 when renaming your hosts. T 2.
Configuring a Standard Data Replication Manager Solution HP-UX Before starting this procedure, make sure that your host is up to date with service packs and patches. For supported revision levels, refer to the DRM Release Notes.
Configuring a Standard Data Replication Manager Solution T 2. When you have completed the worksheet, rename the connections: RENAME !NEWCONxx TargetHostConnectionNamex RENAME !NEWCONxx TargetHostConnectionNamey Example: rename !NEWCONxx HostB1 Example: rename !NEWCONxx HostB2 T 3. Change the operating system for each connection to HP-UX: SET !NEWCONxx operating_system=hp T 4.
Example Display 16
Class  I  H/W Path     Driver  S/W State  H/W Type  Description
disk   0  0/0/1/1.2.0  sdisk   CLAIMED    DEVICE    SEAGATE ST39204LC
          /dev/dsk/c1t2d0  /dev/rdsk/c1t2d0
disk   1  0/0/255.0.0.
Configuring a Standard Data Replication Manager Solution Install the Secure Path Fibre Channel HBA Device Driver and the AIX Platform Kit T The following describes the preferred method for installing the HP StorageWorks platform kit software for IBM AIX and Secure Path Fibre Channel HBA device driver software on your AIX servers. Use these instructions, in the given order, instead of the installation instructions in the platform kit (HP StorageWorks HSG80 ACS Version 8.
b. Enter the following commands: #mkdir /cdrom #mount -v cdrfs -r /dev/cd0 /cdrom #cd /cdrom #./INSTALL (follow the prompts) The system will not find any DEC HSG80 RAID array devices at this point. You are then offered the option of installing the SWCC Agent. Choose Yes, and installation of the SWCC Agent begins.
Configuring a Standard Data Replication Manager Solution Multiple instances of the Command Console LUN hdisks may be displayed. Remove all of the higher numbered hdisks, keeping only the lowest numbered hdisk of the Command Console LUN. Remove the hdisks with the following command: #rmdev -dl hdiskx where x is the number of the hdisk to be removed. T 9. Run the StorageWorks Install Agent, if required: #cd /usr/stgwks2 # ./config.sh Choose Option 3. T 10.
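The hdisk cleanup step above can be scripted. The sketch below (helper name and usage are assumptions, not from the platform kit) takes the hdisk numbers that all map to the Command Console LUN and prints an `rmdev -dl` command for every one except the lowest-numbered hdisk, which is kept.

```shell
# Given the hdisk numbers that all map to the Command Console LUN,
# print "rmdev -dl hdiskx" for each duplicate, keeping the lowest number.
remove_duplicate_hdisks() {
  lowest=$(printf '%s\n' "$@" | sort -n | head -n 1)
  for n in "$@"; do
    if [ "$n" -ne "$lowest" ]; then
      echo "rmdev -dl hdisk$n"
    fi
  done
}
```

Running `remove_duplicate_hdisks 4 2 7` prints removal commands for hdisk4 and hdisk7 and keeps hdisk2.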
Configuring a Standard Data Replication Manager Solution T T 4. Back up all Volume Groups (highly recommended). 5. Unmount and perform file system check on all logical volumes, varyoff, and export volume groups, with the following commands: #umount /dev/(logical_volume_name) #fsck /(file_system_name) #varyoffvg (volume_group_name) T 6.
Configuring a Standard Data Replication Manager Solution The option of installing the SWCC Agent will be presented at this time. Choose Yes. Installation of the SWCC Agent will begin. When installation is complete, you will be asked if you wish to start the Agent: ■ Answer Yes if the host will be used as an SWCC Agent. ■ Answer No if the host will not be used as an SWCC Agent. Refer to the HP StorageWorks HSG80 ACS Solution Software Version 8.
T 19. Run the HP StorageWorks Install Agent, if required: #cd /usr/stgwks2 # ./stgwks_aix.sh Choose Option 1. T 20. Reestablish volume groups, logical volumes, and file systems with the following commands: #varyonvg (volume_group_name) #mount /dev/(logical_volume_name) T 21. Reestablish clustering services (if required). T 22. Check the status of the HBAs periodically.
T 3. Change the operating system for each connection to AIX (use WINNT for this function): SET !NEWCONxx OPERATING_SYSTEM=WINNT T 4. After you have renamed the host connections, issue the following command to see the new settings: SHOW CONNECTIONS Update Switch Zones The switch zones created earlier must be updated with the host connection information: T T 1.
Configuring a Standard Data Replication Manager Solution Note: In the example, hdisk2 represents a nonremote copy set because you have disabled access to all RCS LUNs. Configure the SWCC Agent (Optional) T The SWCC Agent may now be installed and configured. Refer to the HP StorageWorks HSG80 ACS Solution Software Version 8.7 for IBM AIX Installation and Configuration Guide for installation instructions.
Configuring a Standard Data Replication Manager Solution Install Multipath Software Install Secure Path for Windows. For installation instructions, refer to the current version of the HP StorageWorks Secure Path for Microsoft Windows Installation and Reference Guide available at http://h18006.www1.hp.com/products/sanworks/secure-path/index.html T T 1. Verify that the Secure Path Agent is installed by going to Administrative Tools and selecting Services.
Configuring a Standard Data Replication Manager Solution Rename the Host Connections To better identify which hosts you are working with, HP recommends that you rename the host connections, using a meaningful connection name for each. Each HBA appears as a connection. An HBA can be identified by its WWN, which you recorded when you installed the HBAs, and which appears in the connection description. Initially, each connection is named !NEWCONxx.
Configuring a Standard Data Replication Manager Solution Update Switch Zones The switch zones created earlier must be updated with the host connection information (refer to Chapter 6, “Configuring the Optional Advanced DRM Solutions,” for detailed information on zone creation): T T 1. On the top fabric, add the host connection to the zone that contains port 1 of the top target controller. 2. On the bottom fabric, add the host connection to the zone that contains port 1 of the bottom target controller.
Configuring a Standard Data Replication Manager Solution T 2. Use the Secure Path Agent Configuration screen at the server to grant access to the client at both the initiator and target sites. To do this: a. From the NetWare server, toggle to the Secure Path NLM (NetWare Loadable Module) screen. b. At the Main menu, select 2) Client Administration, then select 2) Add a Client. c. Type the fully qualified DNS name for the client, then press Enter. d. Press Esc to return to the Main menu. T 3.
Example Display 20
Connection Name  Operating system  Controller  Port  Address  Status  Unit Offset
!NEWCON00        WINNT             THIS        1     210013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
!NEWCON01        WINNT             OTHER       1     200013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
Example Display 21
Connection Name  Operating system  Controller  Port  Address  Status  Unit Offset
HostB1           NETWARE           THIS        1     210013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
HostB2           NETWARE           OTHER       1     200013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
Configuring a Standard Data Replication Manager Solution Connect the Host to the SAN Use your established cabling policy to connect the host to the Fibre Channel switch. T T 1. Use 50-micron, multimode fiber optic cable to connect one adapter of each pair to the top Fibre Channel switch. 2. Use 50-micron, multimode fiber optic cable to connect the other adapter of each pair to the bottom Fibre Channel switch.
Configuring a Standard Data Replication Manager Solution Rename the Host Connections To better identify which hosts you are working with, HP recommends that you rename the host connections, using a meaningful connection name for each. Each HBA appears as a connection. An HBA can be identified by its WWN, which you recorded when you installed the HBAs, and which appears in the connection description. Initially, each connection is named !NEWCONxx.
Configuring a Standard Data Replication Manager Solution Enable Access to the Hosts at the Target Site T The target units must have access to the hosts before configuring Secure Path. Enable access by issuing the following command: SET UnitName ENABLE_ACCESS_PATH = TargetHostConnectionNamex, TargetHostConnectionNamey Example: set UnitName enable_access_path = HostB1,HostB2 Verify the Disks To run DRM, you must have an even number of HBAs installed in each host system.
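Since DRM requires an even number of HBAs per host, a quick parity check can catch a miscabled or half-populated host before you go further. A minimal sketch (the count itself would come from your host's own adapter listing):

```shell
# Succeeds only when the given HBA count is even, as DRM requires.
hba_count_is_even() {
  [ $(( $1 % 2 )) -eq 0 ]
}
```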
Disable Access to the Hosts at the Target Site T To prevent the target host from writing to any remote copy set targets, disable access by issuing the command: SET UnitName DISABLE=ALL Issue this command for each unit. Note: This step is for remote copy set (RCS) LUNs only. Additional Setup T Reboot the host using the reboot -- -r command. After the reboot, the format command no longer shows any disks from the HSG80 subsystem.
Configuring a Standard Data Replication Manager Solution I 8. Verify that all controllers are on and functional by observing the CLI prompt on the maintenance port of each controller. Note: Unless otherwise specified, all operations may be conducted from the top controller (controller A1). I 9. Verify that the controllers are properly set up: SHOW THIS_CONTROLLER You should see a display similar to that in Example Display 24.
Configuring a Standard Data Replication Manager Solution I 10. Verify that the subsystem WWN, also called the NODE_ID, is set (if zeros are displayed, the name is not set): ■ If the name is set, go to step 13. ■ If the WWN has not been assigned to the controller, you must obtain the name before proceeding. Note: The subsystem’s WWN and checksum are located on a sticker on top of the frame that houses the controllers, EMU, PVA, and cache modules.
Configuring a Standard Data Replication Manager Solution I 13. Configure the controllers for multiple-bus failover mode: SET MULTIBUS_FAILOVER COPY=THIS_CONTROLLER This command automatically restarts the Other controller. A %LFL and %EVL prompt is displayed. Refer to the HP StorageWorks HSG80 Array Controller ACS V8.7 Maintenance and Service Guide for more details on these reports. I 14.
I 16. Change your controller prompts to identify which controller you are working on: SET THIS_CONTROLLER PROMPT="InitiatorControllerNameTop> " SET OTHER_CONTROLLER PROMPT="InitiatorControllerNameBottom> " Example: set this_controller prompt="BuildingATop> " Example: set other_controller prompt="BuildingABottom> " Note: This step takes effect immediately. I 17.
Example Display 29 . . . Mirrored Cache: 256 megabyte write cache, version 0012 Cache is GOOD No unflushed data in cache . . . Note: These settings are applied automatically to controller B2. It is not necessary to repeat these steps on controller B2. I 19.
Configuring a Standard Data Replication Manager Solution Example Display 30 . . . Host PORT_1: Reported PORT_ID = nnnn-nnnn-nnnn-nnnn . . . . . . . . . . . . . . . . . . PORT_1_TOPOLOGY = FABRIC (up) Host PORT_2: Reported PORT_ID = nnnn-nnnn-nnnn-nnnn . . . . . . . . . . . . . . . . . . .PORT_2_TOPOLOGY = FABRIC (up) NOREMOTE_COPY I 23. You are now ready to enable DRM.
Configuring a Standard Data Replication Manager Solution Configure Storage at the Initiator Site This section explains how to configure storage for remote replication. Devices and Storagesets I Before you can configure the storage for remote replication, you must add disks, create storagesets, and create units. Follow the instructions in the HP StorageWorks HSG80 ACS Solution Software Version 8.
Configuring a Standard Data Replication Manager Solution I 3. After all units have been created, verify that the access on each unit is set to NONE: SHOW UNITS FULL You should see a display similar to that in Example Display 32. Example Display 32 LUN Uses Used by -----------------------------------------------------------------------D10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . DISK1000 LUN ID: nnnn-nnnn-nnnn-nnnn-nnnn-nnnn-nnnn-nnnn NOIDENTIFIER Switches: RUN . . . . . . . . . . .
Example Display 33 LUN Uses Used by ------------------------------------------------------------------------ D10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . DISK1000 LUN ID: nnnn-nnnn-nnnn-nnnn-nnnn-nnnn-nnnn-nnnn NOIDENTIFIER Switches: RUN . . . . . . . . . . . . . . . . . . . .NOWRITE_PROTECT READ_CACHE READAHEAD_CACHE . . . . . . . . . . . . . . . . . . . .
Configuring a Standard Data Replication Manager Solution Example: a. Insert short-wave GBICs in ports 2 and 4 of the top and bottom Fibre Channel switches. b. Connect a 50-micron, multimode fiber optic cable from port 1 of the top controller to port 2 of the top Fibre Channel switch (as shown by callout 1 of Figure 20). c. Connect a second 50-micron, multimode fiber optic cable from port 2 of the top controller to port 4 of the top Fibre Channel switch (as shown by callout 2 of Figure 20). d.
Connect the Initiator Site to the External Fiber Link I Locate the connection points at the initiator site that link the initiator site to the target site. If you have longwave or very long distance GBICs, execute the procedure in the next section. Otherwise, go to the section titled “Other Transport Modes,” on page 111. Longwave or Very Long Distance GBICs I I I 1. Install longwave or very long distance GBICs now, if not previously installed. 2.
Configuring a Standard Data Replication Manager Solution Create Switch Zones Switch zones must now be created and updated. See Chapter 6, “Configuring the Optional Advanced DRM Solutions,” for more information on creating zones. I I I I 1. Create a zone on the top fabric that contains port 1 of the top controller. This zone will later contain initiator host connections as well. 2. Add port 2 of the top controller to the top ISL zone created at the target site.
Configuring a Standard Data Replication Manager Solution T 2. Verify that the target has access to the initiator controller: SHOW CONNECTIONS This command shows all the connections; verify that the following are included: InitiatorControllerA, InitiatorControllerB, InitiatorControllerC, InitiatorControllerD. T 3. The target units must allow access to the controllers at the initiator site.
Configuring a Standard Data Replication Manager Solution Note: If you use scripting to automate failover and failback operations, do not use dashes (hyphens) as separators in your naming convention—use underscores instead. Dashes are not allowed by the Perl scripting language. Repeat for each remote copy set. You will see a confirmation message on your terminal, as shown in Example Display 34.
Configuring a Standard Data Replication Manager Solution I 2. To remove the failsafe lock from a remote copy set and resume normal operation, issue the following CLI command: SET RemoteCopyName ERROR_MODE=NORMAL Example: set rcs1 error_mode=normal You can also use this procedure for remote copy sets where a disaster-tolerant (DT) condition is not required. Note: If the error mode is set to normal and there is no target member, the remote copy set is no longer considered DT.
Configuring a Standard Data Replication Manager Solution Example Display 35 Name Storageset Uses Used by ----------------------------------------------------------------------MIR_D1LOG I mirrorset DISK50100 DISK60100 4. Present the log unit to the controller: ADD UNIT UnitName MirrorsetName Example: add unit d10 mir_d1log I 5. Verify that the controller recognizes the log unit: SHOW UNITS You should see a display similar to that in Example Display 36.
Configuring a Standard Data Replication Manager Solution Example Display 37 LUN Uses Used by -------------------------------------------------------------------D10 MIR_D1LOG LUN ID: 6000-1FE1-0001-3B10-0009-9130-8044-0066 IDENTIFIER = 10 Switches: RUN NOWRITE_PROTECT READAHEAD_CACHE NOWRITEBACK_CACHE READ_CACHE MAXIMUM_CACHED_TRANSFER_SIZE = 32 Access: None State: ONLINE to this controller Not reserved PREFERRED_PATH = THIS_CONTROLLER Host based logging NOT specified Size: 35556389 blocks Geo
I 7. You may set the FAIL_ALL or ORDER_ALL properties of the association set now, if desired, by issuing the following CLI commands: SET AssociationSetName FAIL_ALL SET AssociationSetName ORDER_ALL Note: If you choose to set the FAIL_ALL property of the association set, make sure that all of the remote copy sets in the association set are set to failsafe error mode. If you choose to use failsafe error mode, you cannot use a log unit.
Configuring a Standard Data Replication Manager Solution Additional Setup I You will need the latest TIMA kit, which is identified at: http://h71000.www7.hp.com/openvms/supportchart.html Connect the Host to the SAN Use your established cabling policy to connect the host to the Fibre Channel switch. I I 1. Use 50-micron, multimode fiber optic cable to connect one adapter of each pair to the top Fibre Channel switch. 2.
Configuring a Standard Data Replication Manager Solution I 2. When you have completed the worksheet, rename the connections: RENAME !NEWCONxx InitiatorHostConnectionNamex RENAME !NEWCONxx InitiatorHostConnectionNamey Example: rename !NEWCONxx HostA1 Example: rename !NEWCONxx HostA2 I 3. Set the operating system for each connection to OpenVMS: SET InitiatorHostConnectionNamex OPERATING_SYSTEM=VMS Example: set HostA1 operating_system = vms Example: set HostA2 operating_system = vms I 4.
Configuring a Standard Data Replication Manager Solution HP Tru64 UNIX Before beginning this procedure, make sure that your host is up to date with service packs and patches. For supported revision levels, refer to the DRM Release Notes. Install the HBAs I You must install at least two HBAs in each host system. Record the HBA WWID for use later in this section. Refer to the Compaq StorageWorks 64-Bit PCI-to-Fibre Channel Host Bus Adapter User Guide for installation instructions.
You should see a display similar to that in Example Display 42.
Example Display 42
Connection Name  Operating system  Controller  Port  Address  Status  Unit Offset
!NEWCON00        WINNT             THIS        1     210013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
!NEWCON01        WINNT             OTHER       1     200013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
Example Display 43
Connection Name  Operating system  Controller  Port  Address  Status  Unit Offset
HostA1           Tru64_UNIX        THIS        1     210013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
HostA2           Tru64_UNIX        OTHER       1     200013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
Configuring a Standard Data Replication Manager Solution Existing Fibre Channel HP-UX Configurations I Refer to the current version of the HP StorageWorks Secure Path for HP-UX Installation and Reference Guide for information on: ■ Changing from SCSI-2 to SCSI-3, Command Console LUN (CCL) behavior ■ Changing HBAs and switch modes from QuickLoop to Fabric Install the HBAs I You must install at least two HBAs in each host system. HBAs must be installed in pairs.
Configuring a Standard Data Replication Manager Solution I 3. Change the operating system for each connection to HP-UX: SET !NEWCONxx operating_system=hp I 4. After you have renamed the host connections, issue the following command to see the new settings: SHOW CONNECTIONS Update Switch Zones The switch zones created earlier must be updated with the host connection information: I I 1. On the top fabric, add the host connection to the zone that contains port 1 of the top target controller. 2.
Configuring a Standard Data Replication Manager Solution Configure the SWCC Agent (Optional) I You may now install and configure the SWCC Agent. Refer to the HP StorageWorks HSG80 ACS Solution Software Version 8.7 for HP-UX Installation and Configuration Guide for installation instructions. Additional Setup I You may now configure volume groups, logical volumes, and file systems on any nonremote copy set LUNs on the storage arrays using normal HP-UX procedures.
■ HSG80 ACS code is v8.7P. ■ Storage subsystem is pre-configured with or without a CCL LUN. ■ Mode is SCSI-2 or SCSI-3, with the LUN connection type set as WINNT. ■ HBAs are installed in pairs. ■ No volume groups, logical volumes, or file systems are created. ■ Clustering services are not installed. HBA Limitations HBAs have the following limitations: ■ Addressing of LUNs is limited to 16 devices.
Configuring a Standard Data Replication Manager Solution I 5. Load Secure Path for IBM Fibre Channel driver: a. Load the Secure Path CD into the CD drive. b. Enter the following commands: #mkdir /mnt #mount -v cdrfs -r /dev/cd0 /mnt #mkdir /tmp/driver #cp /mnt/driver/PC1000SP.image /tmp/driver #cd /tmp/driver #installp -acd PC1000SP.image all #lslpp -l PC1000.driver.obj #umount /mnt Note: Follow the vendor documentation if using the PC2000 HBA. I 6.
Configuring a Standard Data Replication Manager Solution Upgrade Installation If you are currently using an AIX server in transparent failover mode, and you wish to upgrade to ACS Version 8.7P in a DRM environment, follow these instructions. Upgrade Installation Assumptions ■ All components are connected ■ AIX OS is upgraded to v4.3.3 or v5.
Configuring a Standard Data Replication Manager Solution I 8. Uninstall the Fibre Channel driver with the following command: #installp -u PC1000.driver.obj I I 9. Disconnect all Fibre Channel adapter cables. 10. If adding an additional Cambex Fibre Channel adapter, shut down the server with the following command: #shutdown I 11. Install additional Fibre Channel HBAs (if required). Do not connect fiber cables at this time. Note: The maximum number of HBAs per host is 6.
b. Enter the following commands: #mkdir /mnt #mount -v cdrfs -r /dev/cd0 /mnt #mkdir /tmp/driver #cp /mnt/driver/PC1000SP.image /tmp/driver #cd /tmp/driver #installp -acd PC1000SP.image all #lslpp -l PC1000.driver.obj #umount /mnt I 16. Run Configuration Manager to add the Fibre Channel HBA to the configuration database with the following commands: #cfgmgr -v #lsdev -Cc adapter I I 17. Connect fiber cables to HBAs. 18.
Configuring a Standard Data Replication Manager Solution Connect the Host to the SAN Use your established cabling policy to connect the host to the Fibre Channel switches: I I 1. Use 50-micron, multimode fiber optic cable to connect one adapter of each pair to the top fabric. 2. Use 50-micron, multimode fiber optic cable to connect the other adapter of each pair to the bottom fabric.
Configuring a Standard Data Replication Manager Solution Enable Access to the Hosts at the Initiator Site I The initiator hosts must have access to the units. Enable access with the following command: SET UnitName ENABLE_ACCESS_PATH=InitiatorHostConnectionNamex, InitiatorHostConnectionNamey Example: set UnitName enable_access_path=HostA1,HostA2 Repeat this step for each unit.
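Because the ENABLE_ACCESS_PATH command must be repeated for every unit, it can help to generate all the command lines up front and paste them into the CLI session. A sketch, assuming unit names such as D10 and D20 (the helper name is an illustration, not a DRM tool):

```shell
# Emit one "SET ... ENABLE_ACCESS_PATH" CLI line per unit for the given
# host connection names (e.g. "HostA1,HostA2").
emit_enable_access() {
  hosts="$1"
  shift
  for unit in "$@"; do
    echo "SET $unit ENABLE_ACCESS_PATH=$hosts"
  done
}
```

For example, `emit_enable_access "HostA1,HostA2" D10 D20` prints the two SET commands ready for pasting.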
Configuring a Standard Data Replication Manager Solution Microsoft Windows NT and Windows 2000 Before beginning this procedure, make sure that your host is up to date with service packs and patches. For supported revision levels, refer to the DRM Release Notes. Ensure that the hosts are not connected to the Fibre Channel switches at any point during this procedure. Install the HBAs I You must install at least two HBAs in each host system. HBAs must be installed in pairs.
Configuring a Standard Data Replication Manager Solution Install SWCC (Optional) I You may now install SWCC. For detailed information about SWCC, including installation, refer to the Compaq StorageWorks Command Console Version 2.4 User Guide. Connect the Host to the SAN Use your established cabling policy to connect the host to the Fibre Channel switch. I I 1. Use 50-micron, multimode fiber optic cable to connect one adapter of each pair to the top Fibre Channel switch. 2.
Configuring a Standard Data Replication Manager Solution I 2. When you have completed the worksheet, rename the connections: RENAME !NEWCONxx TargetHostConnectionNamex RENAME !NEWCONxx TargetHostConnectionNamey Example: rename !NEWCONxx HostA1 Example: rename !NEWCONxx HostA2 I 3. Set the operating system for each connection to Windows: SET TargetHostConnectionNamex OPERATING_SYSTEM = WINNT Example: set HostA1 operating_system = winnt Example: set HostA2 operating_system = winnt I 4.
Configuring a Standard Data Replication Manager Solution Novell NetWare I Before beginning this procedure, make sure that your host is up to date with support packs and patches. For supported revision levels, refer to the DRM Release Notes. Install the HBAs I You must install at least two HBAs in each host system. Record the HBA WWID for use later in this section. Refer to the Compaq StorageWorks 64-Bit PCI-to-Fibre Channel Host Bus Adapter User Guide for installation instructions.
Configuring a Standard Data Replication Manager Solution Note: HP recommends that you set both the fully qualified and unqualified DNS names as valid, authorized clients. Install Secure Path Manager I For installation instructions, refer to the current version of the HP StorageWorks Secure Path for Novell NetWare Installation and Reference Guide. Install SWCC (Optional) I You may now install SWCC.
Configuring a Standard Data Replication Manager Solution Rename the Host Connections To better identify which hosts you are working with, HP recommends that you rename the host connections, using a meaningful connection name for each. Each HBA appears as a connection. An HBA can be identified by its WWN, which you recorded when you installed the HBAs, and which appears in the connection description. Initially, each connection is named !NEWCONxx.
Configuring a Standard Data Replication Manager Solution Update Switch Zones The switch zones created at the target site must be updated with the host connection information (refer to Chapter 6, “Configuring the Optional Advanced DRM Solutions,” for detailed information on creating and updating switch zones): I I 1. On the top fabric, add the host connection to the zone that contains port 1 of the top initiator controller. 2.
Configuring a Standard Data Replication Manager Solution Sun Solaris Before beginning this procedure, make sure that your host is up to date with Solaris OS patches. For supported revision levels, refer to the DRM Release Notes. Install the HBAs I You must install at least two HBAs in each host system. HBAs must be installed in pairs. Record the HBA WWID for use later in this section. Refer to the Compaq StorageWorks 64-Bit PCI-to-Fibre Channel Host Bus Adapter User Guide for installation instructions.
I 3. Verify that the host has logged into the fabric: SHOW CONNECTIONS You should see a display similar to that in Example Display 50.
Example Display 50
Connection Name  Operating system  Controller  Port  Address  Status  Unit Offset
!NEWCON00        WINNT             THIS        1     210013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
!NEWCON01        WINNT             OTHER       1     200013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
Example Display 51
Connection Name  Operating system  Controller  Port  Address  Status  Unit Offset
HOSTA1           SUN               THIS        1     210013   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
HOSTA2           SUN               OTHER       1     200113   online  0
  HOST_ID=nnnn-nnnn-nnnn-nnnn  ADAPTER_ID=nnnn-nnnn-nnnn-nnnn
  . . .
Configuring a Standard Data Replication Manager Solution Reverify the Disks I Issue the format command again to verify that disks are present. There must be only one entry for each disk. Note: All target numbers are stored in IdLite.conf. Configure the SWCC Agent (Optional) I You may now install SWCC. Refer to the HP StorageWorks HSG80 ACS Solution Software Version 8.7 for Sun Solaris Installation and Configuration Guide for details regarding the Configuration utility.
Configuring a Standard Data Replication Manager Solution Documenting Your Configuration IT Keep a copy of both configurations at both sites. Update your records whenever you modify the configuration. Follow the steps outlined below in the sections titled “Terminal Emulator Session” and “SHOW Commands” to obtain a status of the controllers, association sets, remote copy sets, units, and connections. After you have obtained this information for the initiator site, repeat the steps for the target site.
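The capture steps above can also be automated so that each site keeps a dated record of its configuration. This is a sketch only: `run_cli` is a placeholder for however you reach the controller's maintenance port (for example, a terminal-emulator capture), and the exact SHOW command spellings should be checked against your CLI session.

```shell
# Collect the SHOW output described in this section into one dated file
# per site. run_cli is a hypothetical wrapper around your CLI session.
save_config_snapshot() {
  site="$1"
  outfile="${site}_config_$(date +%Y%m%d).txt"
  for cmd in "SHOW THIS_CONTROLLER FULL" "SHOW ASSOCIATIONS FULL" \
             "SHOW REMOTE_COPY_SETS FULL" "SHOW UNITS FULL" \
             "SHOW CONNECTIONS"; do
    printf '== %s ==\n' "$cmd"
    run_cli "$cmd"
  done > "$outfile"
  echo "$outfile"
}
```

Run it once at the initiator site and once at the target site, and store a copy of both files at both sites, as the section recommends.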
Configuring a Standard Data Replication Manager Solution PORT_2_TOPOLOGY = FABRIC (fabric up) Address = 220313 REMOTE_COPY = BuildngA Cache: 256 megabyte write cache, version 0012 Cache is GOOD No unflushed data in cache CACHE_FLUSH_TIMER = DEFAULT (10 seconds) Mirrored Cache: 256 megabyte write cache, version 0012 Cache is GOOD No unflushed data in cache Battery: NOUPS FULLY CHARGED Expires: Extended information: Terminal speed 9600 baud, eight bit, no parity, 1 stop bit Operation control: 00000000 S
Configuring a Standard Data Replication Manager Solution Example Display 54 Name Uses Used by ----------------------------------------------------------------------RC1 remote copy D1 AS1 Reported LUN ID: 6000-1FE1-0001-3AE0-0009-9141-6136-0038 Switches: OPERATION_MODE = SYNCHRONOUS ERROR_MODE = NORMAL FAILOVER_MODE = MANUAL OUTSTANDING_IOS = 60 Initiator (BuildngA\D1) State: ONLINE to this controller Target state: BuildngB\D1 is NORMAL IT 4.
You should see a display similar to that in Example Display 56 for each connection.
Example Display 56
Connection Name  Operating system  Controller  Port  Address  Status  Unit Offset
HostA1           WINNT             THIS        1     634000   OLthis  0
  HOST_ID=1000-0000-C921-4B5B  ADAPTER_ID=1000-0000-C921-4B5B
IT 6. Click Stop to end the Capture Text function. Your work has been saved in the file created in step 3 in the Terminal Emulator Session, page 145. IT 7.
5 Configuring the Optional Entry-Level DRM Solutions This chapter describes the entry-level DRM solutions and explains how to set up and configure them. This chapter covers the following topics: ■ Dual-Switch Single-Site Configuration‚ page 150 ■ Single-Switch Configuration‚ page 152 ■ Single-Fabric Configuration‚ page 154 Note: It is a good idea to keep a copy of this manual at both the initiator and target sites to ensure a successful and identical setup at both sites.
Configuring the Optional Entry-Level DRM Solutions Dual-Switch Single-Site Configuration This configuration, shown in Figure 22, is designed for environments that need only local data protection in the event of local disaster or that are used as local test beds for operational DRM solutions. This solution uses only two switches, where each switch creates “a fabric in a box,” instead of the multiswitch fabrics supported in full DRM solutions.
Configuring the Optional Entry-Level DRM Solutions The HSG80 Array Controller and the server host bus adapter (HBA) use only shortwave gigabit interface converters (GBICs). This means that the DRM Dual-Switch Single-Site configuration is limited to 500 meters of 50-micron or 200 meters of 62.5-micron multimode fiber optic cable between the HBA and either switch and between the controller and either switch.
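The cable-length limits quoted above are easy to mislay during planning, so they can be captured in a small lookup keyed by fiber core size in microns. A sketch covering only the two fiber types this configuration supports:

```shell
# Short-wave GBIC distance limits from the text above, keyed by fiber
# core size in microns; anything else is reported as unsupported here.
max_cable_length_m() {
  case "$1" in
    50)   echo 500 ;;
    62.5) echo 200 ;;
    *)    echo unsupported ;;
  esac
}
```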
Configuring the Optional Entry-Level DRM Solutions 3. Make the following connections between the hosts and the switches: a. Connect a fiber optic cable from port 0 of Fibre Channel switch A to one adapter in Host A. b. Connect a fiber optic cable from port 0 of Fibre Channel switch Y to the other adapter in Host A. c. Connect a fiber optic cable from port 2 of Fibre Channel switch A to one adapter in Host Y. d.
[Figure 23: Single-switch DRM configuration — Host A, Host Y, and Controller Pairs A and Y all connect to one switch, whose ports are split into a Red Zone and a Blue Zone.] Setting Up the Single-Switch Configuration Before making any connections with the fiber optic cable, create and enable the zones that simulate two fabrics. As Figure 23 shows, the Red Zone uses ports 0 through 7; the Blue Zone uses ports 8 through 15. To create zones, refer to your switch documentation.
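The port split described above can be captured in a tiny lookup, which is handy when scripting checks of the single-switch cabling. This sketch applies only to this 16-port layout (ports 0 through 7 in the Red Zone, 8 through 15 in the Blue Zone):

```shell
# Map a switch port number to its zone in the single-switch layout:
# ports 0-7 belong to the Red Zone, ports 8-15 to the Blue Zone.
zone_for_port() {
  if [ "$1" -le 7 ]; then
    echo Red
  else
    echo Blue
  fi
}
```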
Configuring the Optional Entry-Level DRM Solutions e. Connect a fiber optic cable from port 1 of the top controller of Controller Pair Y to port 5 of the Fibre Channel switch. f. Connect a fiber optic cable from port 2 of the top controller of Controller Pair Y to port 7 of the Fibre Channel switch. g. Connect a fiber optic cable from port 1 of the bottom controller of Controller Pair Y to port 15 of the Fibre Channel switch. h.
[Figure 24: Dual switch with single ISL — Host A and Host Y, with Switch A and Switch Y joined by an ISL; each switch carries a Red Zone and a Blue Zone, with Controller Pair A on Switch A and Controller Pair Y on Switch Y]
Note: For 8-EL switches, HP recommends that the ISL connection use port 7. Port 7 is the only removable port on the 8-EL switch; the other seven are fixed, short-wave GBICs and are not suitable for ISLs.
Configuring the Optional Entry-Level DRM Solutions Setting Up the Single-Fabric Configuration Before making any connections with the fiber optic cable, create and enable the zones that simulate two fabrics. As Figure 24 shows, the Red Zone uses ports 0 through 3; the Blue Zone uses ports 4 through 7 of each switch. To create zones, refer to your switch documentation.
Configuring the Optional Entry-Level DRM Solutions 5. Make the following remote host connections: a. Connect a fiber optic cable from HBA A of Host Y to port 0 of Fibre Channel switch Y. b. Connect a fiber optic cable from HBA B of Host Y to port 6 of Fibre Channel switch Y. 6. Make the following ISL connections: a. Install the appropriate GBIC type (long-wave or short-wave) on each switch. They may be placed anywhere on the switches, regardless of the zoning configuration. b.
6 Configuring the Optional Advanced DRM Solutions
This chapter provides information on different Data Replication Manager (DRM) configurations for special circumstances. The topics discussed in this chapter are:
■ Bidirectional DRM Solution, page 159
■ Stretched Cluster DRM Solution, page 160
Bidirectional DRM Solution
DRM supports active/active bidirectional solutions by using two sets of storage arrays—one set for each direction.
[Figure 25: Bidirectional DRM configuration — Hosts A and B with Switches A and B at one site, Hosts W and X with Switches Y and Z at the other; Storage Controller A (initiator) pairs with Storage Controller Y (target), and Storage Controller Z (initiator) pairs with Storage Controller B (target)]
7 Troubleshooting
This chapter shows you how to interpret information from the HSG80 controllers, the SAN switches, and the operating system to aid in troubleshooting. The user of these troubleshooting procedures must be familiar with Data Replication Manager (DRM) procedures and CLI commands and must be proficient with the HSG80. Refer to the HP StorageWorks Data Replication Manager HSG80 Version 8.7P Failover/Failback Procedures Guide for additional and more detailed troubleshooting procedures.
Troubleshooting — Secure Path‚ page 180 ■ Controller Replacement in a DRM Configuration‚ page 181 Preliminary Checks Before you begin the troubleshooting procedures, verify that the hardware components have power and are functioning properly. For help getting a terminal connection to the controller, refer to the HP StorageWorks HSG80 ACS Solution Software Version 8.7 Installation and Configuration Guide for your operating system. See the HP StorageWorks HSG80 Array Controller ACS Version 8.
Table 8: SHOW THIS Command Analysis (columns: SHOW THIS command output; What to look for; Related Commands)
■ Controller: HSG80 ZG94115534 Software V87P, Hardware
What to look for: Make sure the serial number is unique. Check the ACS version (here it is 8.7P).
■ NODE_ID = 5000-1FE1-0007-9DD0
What to look for: Make sure the NODE_ID (WWID) is … Your WWIDs will be different from those shown in this example.
Troubleshooting The WWID (NODE_ID) in Table 8 is 5000-1FE1-0007-9DD0; the four ports on the controller pair are always arranged as shown in Figure 26. The -9DD0 WWID for the controller pair dictates the -9DD1 through -9DD4 WWIDs for the ports. Note: The WWID numbering scheme shown in Figure 26 is true only when the controllers are in multibus failover mode; it is not true for transparent mode.
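The port-numbering rule just described can be expressed as a short sketch. This is a hypothetical helper, valid only under the assumptions stated above: multibus failover mode, with the Figure 26 arrangement in which the bottom controller's ports 1 and 2 take the ...1 and ...2 WWIDs and the top controller's ports 1 and 2 take ...3 and ...4.

```python
def controller_port_wwids(node_id: str) -> dict:
    """Derive the four host-port WWIDs dictated by a controller pair's
    NODE_ID (which always ends in 0), e.g. -9DD0 -> -9DD1 .. -9DD4."""
    base = int(node_id.replace("-", ""), 16)

    def fmt(value: int) -> str:
        # Re-group the 16 hex digits into the guide's 4-4-4-4 notation.
        digits = f"{value:016X}"
        return "-".join(digits[i:i + 4] for i in range(0, 16, 4))

    return {
        ("bottom", 1): fmt(base + 1),
        ("bottom", 2): fmt(base + 2),
        ("top", 1): fmt(base + 3),
        ("top", 2): fmt(base + 4),
    }
```

For the NODE_ID in Table 8, this reproduces the -9DD1 through -9DD4 port WWIDs used throughout the cabling diagrams in this chapter.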
Troubleshooting Table 9: SHOW OTHER Command Analysis (Continued) SHOW OTHER Command Output What to look for Reported PORT_ID = 5000-1FE1-0007-9DD2 PORT_2_TOPOLOGY = FABRIC (fabric up) Address = 200613 Switch domain 0, port 6.
Troubleshooting Table 10: SHOW CONNECTIONS Command Analysis SHOW CONNECTIONS Command Output Connection Name Operating system !NEWCON66 Controller WINNT Port THIS WINNT OTHER WINNT THIS PPRC_TARGET OTHER PPRC_INITIATOR THIS 2 PPRC_INITIATOR OTHER 200013 200513 0 Online to this controller. OL other 0 OL this 2 210E13 OL other 0 ADAPTER_ID=5000-1FE1-0007-9DE2 2 200513 offline WWID of HBA.
Troubleshooting We now know that we are online to four HBAs whose adapter IDs end in -C9E1, -C9F0, -F21A, and -F251. We also know that unit address 200013 is switch domain 0, port 0. There appear to be two different adapters cabled to the same switch: -F251 and -C9E1, both to port 0. That cannot be the case, because there cannot be two switch domain 0s on the same fabric. We actually have two switch 0s and two switch 1s.
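The address arithmetic used in this walkthrough can be sketched in a few lines. This is a hypothetical helper (not an HSG80 or switch command), assuming the pattern visible in this guide's examples: the high byte of the 24-bit Fibre Channel address carries the switch domain offset by hex 20, the middle byte carries the switch port, and the low byte is the arbitrated loop physical address (0x13 in these listings).

```python
def decode_port_address(addr: str) -> tuple:
    """Split a 24-bit FC port address (as shown by SHOW CONNECTIONS,
    e.g. '200013' or '210E13') into (switch domain, switch port).

    Per the examples in this guide: 200013 -> domain 0, port 0;
    200613 -> domain 0, port 6; 210E13 -> domain 1, port 14.
    """
    value = int(addr, 16)
    domain = ((value >> 16) & 0xFF) - 0x20  # high byte, offset by 0x20
    port = (value >> 8) & 0xFF              # middle byte is the port
    return domain, port
```

Decoding the two conflicting 200013 addresses shows both resolving to domain 0, port 0 — the clue that two separate fabrics each contain a domain 0 switch.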
Troubleshooting Step 4: Issue switchShow Command from the First Switch Issue a SWITCHSHOW command at the switch prompt to see the port connections: sw11:admin> switchShow Table 12 shows typical output from the SWITCHSHOW command and highlights the information relevant to troubleshooting. Table 12: First switchShow Command Output Command Output switchName: sw11 Comments Name of the switch switchType: 2.
Troubleshooting From the SWITCHSHOW command output, we know that there is an ISL from the switch named sw11 to a switch named sw12 (port 15). Figure 27 is a picture of the cabling information we have gathered so far. It is drawn from Table 12, which shows that: ■ Port 0 of sw11 (where we issued the SWITCHSHOW command) is cabled to the HBA whose WWID ends in -C9F0. ■ Port 13 of sw11 is cabled to port 1 of the bottom controller. ■ Port 14 of sw11 is cabled to port 2 of the bottom controller.
Troubleshooting Table 13: Second switchShow Command Output switchName: sw13 switchType: 2.
Troubleshooting Table 14: Third switchShow Command Output switchName: sw12 switchType: 2.
[Figure (cabling diagram): HBAs …F251 and …F21A; top controller ports 1 and 2 (WWIDs -9DD3 and -9DD4); bottom controller ports 1 and 2 (WWIDs -9DD1 and -9DD2)]
Troubleshooting Table 15: Fourth switchShow Command Output switchName: sw14 switchType: 2.
Information from the Operating Systems
Figure 30 shows both fabrics (two sets of switches that communicate), but we do not yet know which HBAs go to which servers.
[Figure 30: both fabrics — HBAs …F251 and …F21A; top controller ports (WWIDs -9DD3 and -9DD4); bottom controller ports (WWIDs -9DD1 and -9DD2)]
HP Tru64 UNIX
Issue the following command at the system prompt:
# uerf -R -r 300 | more
This shows what the system found during boot; it includes the WWID of the HBAs and the revision level of the emx driver. Repeat for each server.
Microsoft Windows NT and Windows 2000
Execute the following procedure:
1. Shut down the server.
2. Boot the server from a bootable DOS diskette.
3. Insert the KGPSA diskette shipped with the adapter for Intel part AK-RF2LC-CA.
4. Issue the following DOS commands:
A:\cd I386
A:\lp6dutil
5. Select option 6 (Show Host adapters info).
6. Select Host Adapter (1 or 2).
7. Write down its WWID.
8. Exit the menu by selecting 0.
9. Exit the program by selecting menu item 7.
Repeat for each server.
[Figure: the completed picture — initiator site and target site, with the initiator host and target host and their HBAs (…F251, …F21A, …C9F0, …)]
Troubleshooting Other Troubleshooting Considerations SHOW commands, zoning, and Secure Path may also assist in troubleshooting. SHOW Commands Information useful for troubleshooting can be acquired by issuing various SHOW commands. See Appendix A for a list of SHOW commands used in troubleshooting. Two particularly useful commands are SHOW UNITS FULL and SHOW REMOTE FULL. SHOW UNITS FULL At the initiator controller, issue the SHOW UNITS FULL command. Table 16 shows a typical output.
Troubleshooting Table 17: SHOW REMOTE FULL Command Output Name Uses Used by ------------------------------------------------RCS1 remote copy D1 A1 Reported LUN ID: 6000-1FE1-0009-1D70-0009-9421-3547-0176 Switches: OPERATION_MODE = SYNCHRONOUS If in failsafe mode, units can become failsafe locked.
Troubleshooting Controller Replacement in a DRM Configuration When a failed controller running ACS V8.7P needs to be replaced, follow the supported procedures in the HP StorageWorks HSG60 and HSG80 Controller and HSx80 Cache Module Replacement Procedures for Array Controller Software V8.7x-x Release Notes. This document can be obtained at: http://h18006.www1.hp.com/products/sanworks/drm/relatedinfo.
8 Zoning in the Storage Area Network
This chapter describes Data Replication Manager (DRM) concepts and variations for alternative DRM configurations. These descriptions include cascaded switches, multiple intersite links (ISLs), dual-switch single-site DRM solutions, and switch zoning.
Zoning in the Storage Area Network Switch Zoning The Fibre Channel switch zoning feature provides a means to control storage area network (SAN) access at the node port level. Zoning can be used to separate one physical fabric into many virtual fabrics consisting of selected server and storage ports.
Zoning in the Storage Area Network Zoning Hosts and HSG80 Subsystems Between Sites In a DRM configuration, the initiator hosts are zoned so that they do not have access to the target controllers; the target hosts are zoned so that they do not have access to the initiator controllers. There are circumstances, however, when the hosts at one site do require access to HSG80 controller pairs at both sites. This could occur when you are running scripts, OpenVMS host-based shadowing, or stretch clusters.
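The access rule above — initiator hosts must not share a zone with target controllers — can be modeled as simple set membership. This is a minimal sketch, not a switch API; the hypothetical zone data uses (domain, port) members matching the Green, Blue, and Red zone examples later in this chapter.

```python
# Each zone is a set of (domain, port) switch-port members,
# following the example input forms in this chapter.
ZONES = {
    "Green Zone_Top": {(0, 2), (0, 4)},  # local host + local controller
    "Blue Zone_Top": {(1, 2), (1, 4)},   # remote host + remote controller
    "Red Zone_Top": {(0, 6), (1, 6)},    # remote-copy ports, both sites
}

def can_access(port_a: tuple, port_b: tuple, zones: dict) -> bool:
    """Two ports can see each other only if some zone contains both."""
    return any(port_a in members and port_b in members
               for members in zones.values())
```

With this data, the local host (0,2) reaches its local controller (0,4) but not the remote controller (1,4), while the controllers' remote-copy ports (0,6) and (1,6) still reach each other through the Red Zone.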
[Figure 32: Zoning in a DRM homogeneous environment — top fabric (Switch A to Switch Y) and bottom fabric (Switch B to Switch Z) joined across the fabric boundary by very long distance GBICs over up to 100 km of 9-micron single-mode fiber; each fabric carries Green, Blue, and Red zones for the hosts and Controllers A and Y]
HP suggests th
Zoning in the Storage Area Network Table 18: Blank zoning input form template Zoning Configuration Name = Zone Name= WWID # Switch Name= Domain ID # Port # Alias Name Path= Function Site Zoning Configuration Name = Zone Name= WWID # Switch Name= Domain ID # Port # Data Replication Manager HSG80 ACS Version 8.
[Figure 33: zoning example — local site (Host 1, local paths 1_A/1_B, Switch A, Domain ID 0, Controller A) and remote site (Host 2, remote paths 2_Y/2_Z, Switch Y, Domain ID 1); Green, Blue, and Red zones across the top and bottom fabrics, joined by very long distance GBICs over up to 100 km of 9-micron single-mode fiber]
Zoning in the Storage Area Network Example: Zoning Green Zone_Top and Green Zone_Bottom Table 19, the Green Zone_Top and Green Zone_Bottom input form, is created from a blank template (Table 18) and is added to throughout this example.
Zoning in the Storage Area Network Figure 33 shows zoning using the Domain ID number and the Port number, rather than the WWID number. The WWID could also have been used. A general rule is that if you are changing connections within DRM, use the WWID for zoning. 1. Identify and write down the Domain ID of each switch. To get this information, use the switchShow command from each switch in a Telnet session or from the front console of each switch.
Zoning in the Storage Area Network 8. The next alias to create is from switch B. Open a Telnet session to switch B. 9. Create the Host 1_B alias: aliCreate "Host 1_B", "0,2" 10. Create the controller A-1 bottom alias: aliCreate "Controller A1_bottom", "0,4" 11. Save the configuration: cfgSave This completes alias naming for the Green Zones. The next section configures the Blue Zones.
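The Telnet sessions above follow a fixed pattern, so the same commands can be generated from the input-form rows. This is a sketch only — aliCreate and cfgSave are real Fabric OS commands shown in this chapter, but the generator itself is hypothetical.

```python
def alias_commands(entries: list) -> list:
    """Build the aliCreate lines for a switch session from
    (alias name, domain, port) input-form rows, followed by the
    cfgSave that each session above ends with."""
    cmds = [f'aliCreate "{name}", "{domain},{port}"'
            for name, domain, port in entries]
    cmds.append("cfgSave")
    return cmds
```

Called with the rows from steps 9 and 10 — ("Host 1_B", 0, 2) and ("Controller A1_bottom", 0, 4) — it reproduces the switch B session verbatim.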
Zoning in the Storage Area Network Table 20: Blue Zone_Top and Blue Zone_Bottom input form Zoning Configuration Name=Top_Fabric Zone Name=Blue Zone_Top WWID # Switch Name=Switch Y Path=A Domain ID # Port # Alias Name Function Site 1 1 — E-Port Remote 1 2 Host 2_Y Host Remote 1 4 Controller Y1_top Controller Remote Zoning Configuration Name=Bottom_Fabric Zone Name=Blue Zone_Bottom WWID # 192 Switch Name=Switch Z Domain ID # Port # 1 1 1 Path=B Alias Name Function Site 1 —
Zoning in the Storage Area Network As shown in Figure 33, the Host in the Blue Zone is named “Host 2." 1. Log the domain IDs of switches Y and Z. In this example, they are both Domain ID 1. Log this information in the Blue Zone_Top and Blue Zone_Bottom input form. On this input form, list the Blue Zone in two blocks, one for switch Y in Blue Zone_Top and one for switch Z in Blue Zone_Bottom. See Table 20 for entries. 2. Log the ports that connect to the hosts and E ports.
Zoning in the Storage Area Network Table 21: Red Zone_Top and Red Zone_Bottom input form Zoning Configuration Name=Top_Fabric Zone Name=Red Zone_Top WWID # Switch Name=Switch A&Y Path=A&B Domain ID # Port # Alias Name Function Site 0 6 Controller A2_top Controller Local 1 6 Controller Y2_top Controller Remote Zoning Configuration Name=Bottom_Fabric Zone Name=Red Zone_Bottom WWID # Switch Name=Switch B&Z Path=A&B Domain ID # Port # Alias Name Function Site 0 6 Controller A2_bott
Zoning in the Storage Area Network These are the DRM remote copy paths: As shown earlier, switches A and B are Domain ID 0; switches Y and Z are Domain ID 1. 1. List switch domains A and Y in the Red Zone_Top input form. List switch Domains B and Z in the Red Zone_Bottom form. 2. List the controller connections. Figure 33 shows the controller pair listed as: Controller A2_top (top A controller, port 2) and Controller Y2_top (top Y controller, port 2). a.
Create the Zone Names
1. Select the Telnet session from switch A.
2. Create the Green Zone_Top name and add the zone members:
zoneCreate "Green Zone_Top", "Host 1_A; Controller A1_top"
3. Create the Blue Zone_Top name and add the zone members:
zoneCreate "Blue Zone_Top", "Host 2_Y; Controller Y1_top"
4.
Zoning in the Storage Area Network This ensures the effective configuration of the switches after a restart or power-down. 6. Select the Telnet session from switch B. 7. Create the configuration using “Bottom_Fabric” as the filename and add all of the zone members.
[Figure 34: Zoning in a DRM heterogeneous environment — top fabric (Switch A to Switch Y) and bottom fabric (Switch B to Switch Z), each over up to 100 km of 9-micron single-mode fiber; each fabric carries Yellow, Brown, and Red zones for the hosts and Controllers A and Y]
Use Table 18, the blank zoning input form te
[Figure: heterogeneous zoning example — local site (Host 3, local paths 3_A/3_B, Switch A, Domain ID 0, Controller A) and remote site (Host 4, remote paths 4_Y/4_Z, Switch Y, Domain ID 1, Controller Y); Yellow, Brown, and Red zones across the top and bottom fabrics, up to 100 km of 9-micron single-mode fiber]
Zoning in the Storage Area Network Example: Zoning Yellow Zone_Top and Yellow Zone_Bottom Table 22, the Yellow Zone_Top and Yellow Zone_Bottom input form, is created from the blank template (Table 18) and is added to during this example.
Zoning in the Storage Area Network Table 22 shows zoning using the Domain ID number and Port number, rather than the WWID number. The WWID could also have been used. A general rule is that if you are changing connections within the DRM, use the WWID for zoning. If you are changing out hardware, use the Domain ID and Port number. The zoning procedure follows. 1. Identify and record the Domain ID of each switch.
Zoning in the Storage Area Network Note: The controller alias “Controller A1_Bottom” has already been created and does not need to be repeated. 9. Save the configuration: cfgSave This completes alias naming for the Yellow Zones. The next section configures the Brown Zones. Example: Zoning Brown Zone_Top and Brown Zone_Bottom Zoning the Brown portion of this example is similar to zoning the Yellow portion, with a few name and number changes.
Zoning in the Storage Area Network Table 23: Brown Zone_Top and Brown Zone_Bottom input form Zoning Configuration Name=Top_Fabric Zone Name=Brown Zone_Top WWID # Switch Name=Switch Y Path=A Domain ID # Port # Alias Name Function Site 1 1 — E-Port Remote 1 8 Host 4_Y Host Remote 1 4 Controller Y1_top Controller Remote Zoning Configuration Name=Bottom_Fabric Zone Name=Brown Zone_Bottom WWID # Switch Name=Switch Z Domain ID # Port # 1 1 1 Path=B Alias Name Function Site 1 —
9. Save the configuration:
cfgSave
This completes alias naming for the Brown Zones.
Note: The Red Zones have already been created, so the port aliases for the Red Zone ports do not need to be repeated.
Create the Zone Names
1. Select the Telnet session from switch A.
2. Create the Yellow Zone_Top name and add the zone members:
zoneCreate "Yellow Zone_Top", "Host 3_A; Controller A1_top"
3.
Zoning in the Storage Area Network Note: Top_Fabric has already been created. cfgAdd “Top_Fabric”, “Yellow Zone_Top; Brown Zone_Top” This adds Yellow Zone_Top and Brown Zone_Top to a configuration file titled “Top_Fabric,” which already contains Green Zone_Top, Blue Zone_Top, Red Zone_Top, and their alias members. These are stored in flash memory for switches A and Y. 3. Save the configuration: cfgSave 4.
1,4
alias: Controller Y2_top
1,6
alias: Host 1_A
0,2
alias: Host 2_Y
1,2
alias: Host 3_A
0,8
alias: Host 4_Y
1,8
Effective configuration:
cfg: Top_Fabric
zone: Blue Zone_Top
1,2 1,4
zone: Brown Zone_Top
1,8 1,4
zone: Green Zone_Top
0,2 0,4
zone: Red Zone_Top
0,6 1,6
zone: Yellow Zone_Top
0,8 0,4
7. Select the Telnet session from switch B.
8.
Zoning in the Storage Area Network Note: This configuration now becomes the effective (in use) configuration for both switches B and Z. 11. To make this the active configuration after a restart or power-down, issue another cfgSave command: cfgSave This ensures the effective configuration of the switches after a restart or power-down. 12.
Effective configuration:
cfg: Bottom_Fabric
zone: Blue Zone_Bottom
1,2 1,4
zone: Brown Zone_Bottom
1,8 1,4
zone: Green Zone_Bottom
0,2 0,4
zone: Red Zone_Bottom
0,6 1,6
zone: Yellow Zone_Bottom
0,8 0,4
Zoning for a DRM heterogeneous configuration is now complete. If you want to add additional zones, repeat the steps starting at the section titled "DRM Heterogeneous Configuration."
Zoning in the Storage Area Network 10. Save the configuration: cfgSave Controller members from one site have now been added to the host zones at the other site. Repeat the procedure if other zones at one site need access to controllers at the other site. Data Replication Manager HSG80 ACS Version 8.
A Status Comparison
This appendix describes the procedure for comparing the status of:
■ Controllers
■ Association sets
■ Remote copy sets
■ Units
■ Connections
Performing a status comparison consists of the following procedures:
■ Target Site Terminal Emulator Session
■ Issuing SHOW Commands
Status Comparison Target Site Terminal Emulator Session 1. Using a serial cable, connect the COM port of a laptop computer or another computer to the corresponding serial port on the HSG80 controllers. 2. Start a terminal emulator session that is capable of capturing text to a file (which is later saved as step 6 of the SHOW Commands procedure). Use the following settings: 9600 baud, 8 bits, no parity, 1 stop bit, XON/XOFF. Issuing SHOW Commands 1.
Status Comparison Host PORT_1: Reported PORT_ID = 5000-1FE1-0001-3AE1 PORT_1_TOPOLOGY = FABRIC (fabric up) Address = 220113 Host PORT_2: Reported PORT_ID = 5000-1FE1-0001-3AE2 PORT_2_TOPOLOGY = FABRIC (fabric up) Address = 220313 REMOTE_COPY = BuildingB Cache: 256 megabyte write cache, version 0012 Cache is GOOD No unflushed data in cache CACHE_FLUSH_TIMER = DEFAULT (10 seconds) Mirrored Cache: 256 megabyte write cache, version 0012 Cache is GOOD No unflushed data in cache Battery: NOUPS FULLY CHAR
Status Comparison Example Display 3 Name Uses Used by ----------------------------------------------------------------------RC1 remote copy D1 AS1 Reported LUN ID: nnnnnnnnnnnnnnn Switches: OPERATION_MODE = SYNCHRONOUS ERROR_MODE = NORMAL FAILOVER_MODE = MANUAL OUTSTANDING_IOS = 60 . . .
B Replicating Storage Units
This chapter describes Data Replication Manager (DRM) concepts and procedures for making point-in-time copies of a storage unit. The topics discussed in this chapter are:
■ Cloning Data for Backup, page 217
■ Snapshot, page 219
Cloning and snapshot are methods of making a point-in-time copy of a storage unit. Table 24 provides an overview comparison of the two methods.
Table 24: Cloning and Snapshot Comparison (Continued)
Cloning:
■ Can clone an unpartitioned single-disk unit, stripeset, or mirrorset.
■ Source unit and clone both reside on and fail over on the same controller.
Snapshot — the snapshot unit must have the following characteristics:
■ Write-back cache enabled
■ Capacity equal to or greater than the source unit
■ Made of any storage container except write history log containers
Replicating Storage Units Cloning Data for Backup Use the CLONE utility to duplicate data on any unpartitioned single-disk unit, stripeset, mirrorset, or striped mirrorset in preparation for backup. When the cloning operation is done, you can back up the clones rather than the storageset or the single-disk unit, which can continue to service its I/O load. When you are cloning a mirrorset, CLONE does not need to create a temporary mirrorset.
Replicating Storage Units Example: This example shows the commands you would use to clone storage unit D98. The CLONE utility terminates after it creates storage unit D99, a clone or copy of D98. Bold type indicates user entry. RUN CLONE CLONE LOCAL PROGRAM INVOKED UNITS AVAILABLE FOR CLONING: 98 ENTER UNIT TO CLONE ? 98 CLONE WILL CREATE A NEW UNIT WHICH IS A COPY OF UNIT 98. ENTER THE UNIT NUMBER WHICH YOU WANT ASSIGNED TO THE NEW UNIT ? 99 THE NEW UNIT MAY BE ADDED USING ONE OF THE FOLLOWING METHODS: 1.
Replicating Storage Units ADD UNIT D99 C_ST1 D99 HAS BEEN CREATED. IT IS A CLONE OF D98. CLONE - NORMAL TERMINATION Snapshot With snapshot, the contents of a source unit are frozen in time and presented to the host as a second unit, the snapshot. The snapshot unit (Figure 37) preserves the original data (from the time of the snapshot) while allowing writes to the source unit to continue.
Replicating Storage Units Snapshot Command Note: This command is operational only in controller software versions 8.7S and 8.7P and is operational only if both controllers have 512 MB of mirrored cache. This command creates and names a snapshot unit. A snapshot unit is one that reflects the contents of another unit at a specific time (the instant the ADD SNAPSHOT_UNITS command is entered). The snapshot unit can then be presented to the host. The snapshot unit remains until it is deleted (DELETE command).
Source unit
The unit whose contents are frozen in time and preserved on the snapshot unit. The source unit must:
■ Be less than 512 GB
■ Have write-back cache enabled
■ Be nontransportable
Switches
There are no switches associated with this command.
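The source-unit requirements lend themselves to a quick validation pass. This is a hypothetical checker, not an HSG80 utility; the three rules (less than 512 GB, write-back cache enabled, nontransportable) are taken from this section.

```python
def snapshot_source_problems(capacity_gb: float,
                             writeback_cache: bool,
                             transportable: bool) -> list:
    """Return the list of rule violations for a prospective snapshot
    source unit; an empty list means the unit qualifies."""
    problems = []
    if capacity_gb >= 512:
        problems.append("unit must be less than 512 GB")
    if not writeback_cache:
        problems.append("write-back cache must be enabled")
    if transportable:
        problems.append("unit must be nontransportable")
    return problems
```

A 100 GB nontransportable unit with write-back cache enabled passes cleanly; a 512 GB transportable unit without write-back cache fails all three rules.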
C Upgrading to ACS Version 8.7P Software
Array Controller Software (ACS) Version 8.7P implements the Data Replication Manager (DRM) feature, which can be upgraded using either a rolling or a shutdown upgrade method. These upgrade methods apply only to dual-redundant controller configurations.
Note: The rolling upgrade procedure is not currently supported for Microsoft Windows NT, Microsoft Windows 2000, and IBM AIX platforms. The shutdown upgrade procedure must be used for these platforms.
Upgrading to ACS Version 8.7P Software Rolling Upgrade Procedure for Version 8.6-xP to 8.7P The ACS Version 8.7P rolling upgrade procedure from ACS Version 8.6-xP allows the disk to be accessible during the upgrade process with minimal disruption. Specific controllers are referred to as Controller A or Controller B during the procedure. For clarity, the CLI prompts illustrated in the procedure use HSGA> and HSGB> to indicate the controller used.
Upgrading to ACS Version 8.7P Software I 4. Identify and record the current CACHE_FLUSH_TIMER value: HSGA> SHOW THIS_CONTROLLER The following text is only a portion of the resulting display: Cache: 256 megabyte write cache, version 0022 Cache is GOOD No unflushed data in cache CACHE_FLUSH_TIMER=DEFAULT (10 seconds) Note: The CACHE_FLUSH_TIMER value is displayed in the caching parameters section. This parameter is modified during the procedure and must be restored later. I 5.
Upgrading to ACS Version 8.7P Software I 9. Shut down Controller B: HSGA> SHUTDOWN OTHER_CONTROLLER Note: Disregard any messages about misconfigured controllers or failover status. After Controller B shuts down, the Reset button and the first three LEDs turn on (see Figure 38). Proceed only after the Reset button stops flashing and remains on. 1 2 1 2 3 4 5 1 2 6 Reset button First three LEDs CXO6991A Figure 38: Controller reset button and first three LEDs I 10.
Upgrading to ACS Version 8.7P Software Note: A controller restart can take as long as 60 seconds and is indicated by the temporary cycling of the port LEDs and a flashing Reset button. Disregard messages about misconfigured controllers or failover status. When controller B has restarted, it automatically shuts down Controller A. f. I Install the program card ESD cover on Controller B. 12. Verify that Controller B completed initialization: a.
Upgrading to ACS Version 8.7P Software Target Site Upgrade Procedure Note: During the target site upgrade, one of the initiator site controllers could restart with an instance code of 0xE096980. This potential restart is expected; disregard the associated instance code. T 1. Connect a PC or terminal to the maintenance port of Controller A at the target site. T 2. Delete any snapshot units by performing the following steps: a. Identify all snapshot units: HSGA> SHOW UNITS FULL b.
Upgrading to ACS Version 8.7P Software T 6. Set the CACHE_FLUSH_TIMER to 1 second with the following commands: HSGA> SET THIS_CONTROLLER CACHE_FLUSH_TIMER=1 HSGA> SET OTHER_CONTROLLER CACHE_FLUSH_TIMER=1 T 7. Disable writeback caching on all units to help minimize failover time. Issue the following command as required for each unit: HSGA> SET unit-name NOWRITEBACK_CACHE T 8.
Upgrading to ACS Version 8.7P Software Note: After this step has been performed, the previous ACS version cannot be restored to this subsystem without performing the downgrade process, which should be performed only by HP authorized service personnel. a. Remove the program card ESD cover from Controller B. b. While pressing and holding the controller Reset button, eject the old program card. c. After ejecting the program card, release the Reset button. d.
Upgrading to ACS Version 8.7P Software T 14. After Controller A restarts, restore the CACHE_FLUSH_TIMER to the value recorded in step 4 using the following commands: HSGA> SET THIS_CONTROLLER CACHE_FLUSH_TIMER=n HSGA> SET OTHER_CONTROLLER CACHE_FLUSH_TIMER=n T 15. For each unit, restore the WRITEBACK_CACHE settings as recorded in step 5: HSGA> SET unit-name WRITEBACK_CACHE T 16. Restore all snapshot units removed in step 2. T 17.
Upgrading to ACS Version 8.7P Software b. Record the configuration for each snapshot unit for later restoration. c. Delete all snapshot units individually with the following command: HSGA> DELETE snapshot-unit-name I 4. Verify that all snapshot units are deleted: HSGA> SHOW UNITS FULL Note: If any snapshot unit exists, repeat step 3. I 5.
Upgrading to ACS Version 8.7P Software Note: After the controllers shut down, the Reset buttons and the first three LEDs on both controllers turn on (see Figure 38 on page 226). This could take several minutes, depending on the amount of data that needs to be flushed from the cache modules. Proceed only after both Reset buttons stop flashing and remain on. I 9.
Upgrading to ACS Version 8.7P Software Target Site Shutdown Upgrade Procedure Note: During the target site upgrade, one of the initiator site controllers could restart with an instance code of 0xE096980. This potential restart is expected; disregard the associated instance code. T 1. From a host console, stop all host activity to the controllers and dismount the logical units in the subsystem. T 2. Connect a PC or terminal to the maintenance port of Controller A at the target site. T 3.
Upgrading to ACS Version 8.7P Software Cache 256 megabyte write cache, version 0022 Cache is GOOD No unflushed data in cache CACHE_FLUSH_TIMER=1 SECOND Note: Repeat this step on both controllers (THIS_CONTROLLER and OTHER_CONTROLLER) until no unflushed data remains in either cache module memory. T 8.
Upgrading to ACS Version 8.7P Software g. Install a program card ESD cover on each controller. T 10. After the controllers restart, restore the CACHE_FLUSH_TIMER to the value recorded in step 5 on page 232: HSGB> SET THIS_CONTROLLER CACHE_FLUSH_TIMER=n HSGB> SET OTHER_CONTROLLER CACHE_FLUSH_TIMER=n T 11. Restore all snapshot units removed in step 3. T 12. Mount the logical units on the host. T 13. Disconnect the PC or terminal from the maintenance port of Controller A.
Glossary Glossary This glossary defines terms used in this guide or related to the Data Replication Manager. It is not a comprehensive glossary of computer terms. Glossary ACS An acronym for array controller software. See array controller software. adapter A hardware device that converts the protocol and hardware interface of one bus type to another without changing the function of either bus. AL_PA or ALPA An acronym for Arbitrated Loop Physical Address.
Glossary asynchronous mode A mode of operation of the remote copy set whereby the write operation provides command completion to the host after the data is safe on the initiating controller, and prior to the completion of the target command. Asynchronous mode can provide faster response time, but the data on all members at any one point in time cannot be assumed to be identical. See also synchronous mode. ATM Asynchronous Transfer Mode.
Glossary chunk size The number of data blocks, assigned by a system administrator, that are written to the primary RAIDset or stripeset member before the remaining data blocks are written to the next RAIDset or stripeset member. Nondefault chunk size values must be exactly divisible by 8. CLI An acronym for command line interface. The CLI is the configuration interface to operate the controller software.
Glossary disaster tolerance As applied to DRM, disaster tolerance provides the ability for rapid recovery of user data from a remote location when a significant event or a disaster occurs at the primary computing site. See also remote copy sets, DT. DT An acronym for disaster tolerance. See disaster tolerance. dual-redundant configuration A storage subsystem configuration consisting of two active controllers operating as a single controller.
failback
The process of restoring data access to the newly restored controller in a dual-redundant controller configuration. The failback method (full copy or fast failback) is determined by the enabling of the Logging or Failsafe switches, the selected mode of operation (synchronous or asynchronous), and whether the failover is planned or unplanned. See also failover, dual-redundant configuration.
FD SCSI
The fast, narrow, differential SCSI bus with an 8-bit data transfer rate of 10 MB/s. See FWD SCSI and SCSI. More information is available from http://www.t10.org.

fiber
An optical strand used in fiber optic cable. Spelled fibre when used in the "Fibre Channel" protocol. See also fiber optic cable, Fibre Channel.

fiber optic cable
A transmission medium designed to transmit digital signals in the form of pulses of light.
heterogeneous host support
Also called noncooperating host support. The ability to share storage between two similar (or dissimilar) hosts by way of storage partitioning.

HIPPI–FC
An acronym for the high-performance parallel interface (HIPPI) over Fibre Channel. HIPPI is a media-level, point-to-point, 12-channel, full-duplex, electrical/optical interface. Not supported by DRM. See http://www.t11.org for more information.

hop
An interswitch connection; each link between two switches counts as one hop.
logical unit number
A value that identifies a specific logical unit belonging to a SCSI target ID number; a number associated with a physical device unit during a task's I/O operations. Each task in the system must establish its own correspondence between logical unit numbers and physical devices.

LOG_UNIT
A CLI command switch that (when enabled) assigns a single, dedicated log unit for a particular association set. The association set members must all be in the NORMAL error mode (not failsafe).
N_port
A port attached to a node for use with point-to-point topology or fabric topology. See point-to-point connection.

network
In data communication, a configuration in which two or more terminals or devices are connected to enable information transfer.

NL_port
A port attached to a node for use in all three Fibre Channel topologies: point-to-point, arbitrated loop, and switched fabric.

node
1. In data communications, the point at which one or more functional units connect to transmission lines. 2.
participating mode
A mode within an L_port that allows the port to participate in loop activities. A port must have a valid AL_PA or ALPA to be in participating mode.

PCM
An acronym for Polycenter Console Manager.

PCMCIA
An acronym for Personal Computer Memory Card International Association. An international association formed to promote a common standard for PC card-based peripherals to be plugged into notebook computers. A PCMCIA card, sometimes called a PC Card, is about the size of a credit card.
port_name
A 64-bit unique identifier assigned to each Fibre Channel port. The port_name is communicated during the login and port discovery process.

preferred address
The AL_PA that an NL_port attempts to acquire first during initialization.

private NL_port
An NL_port that does not attempt login with the fabric and communicates only with NL_ports on the same loop. Not used by DRM.
QoS
An acronym for Quality of Service in an ATM network. Each virtual connection in an ATM network is set to a service category. The performance of the connection is measured by the established QoS parameters (outlined by the ATM Forum). Performance issues include data rate, cell loss rate, cell delay, and delay variation (jitter).
SCSI device
1. A host computer adapter, a peripheral controller, or an intelligent peripheral that can be attached to the SCSI bus. 2. Any physical unit that can communicate on a SCSI bus.

SCSI device ID number
A bit-significant representation of the SCSI address referring to one of the signal lines, numbered 0 through 7 for an 8-bit bus, or 0 through 15 for a 16-bit bus.

SCSI ID number
The representation of the SCSI address that refers to one of the signal lines numbered 0 through 15.
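"Bit-significant" in the SCSI ID definitions above means that ID n corresponds to data line n being asserted, so the address is simply a word with bit n set. The helper below is an invented illustration of that mapping, not part of any controller software.

```python
def scsi_id_to_bit(scsi_id, bus_width=16):
    """Bit-significant representation of a SCSI ID: ID n corresponds
    to signal line n, i.e. a word with only bit n set. IDs run 0-7 on
    an 8-bit bus and 0-15 on a 16-bit bus. Illustrative sketch only."""
    if not 0 <= scsi_id < bus_width:
        raise ValueError("SCSI ID out of range for this bus width")
    return 1 << scsi_id
```

For example, ID 7 on an 8-bit bus is represented as 0x80, and ID 15 on a 16-bit bus as 0x8000.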
this controller
The controller that is serving the current CLI session through a local or remote terminal. See also other controller.

T1
The standard North American carrier for transmission at 1.544 Mbit/sec.

T2
The standard North American carrier for transmission at 6.312 Mbit/sec.

T3
The standard North American carrier for transmission at 44.736 Mbit/sec.

UBR
An acronym for unspecified bit rate.
Index

A
add associations command 39, 116
add mirrorset command 115
add remote copy set command 112, 113
add snapshot units command 220
add unit command 69, 107, 116
AIX
  configuring SWCC agent at initiator site 133
  configuring SWCC agent at target site 90
  connecting host to SAN at initiator site 132
  connecting host to SAN at target site 88
  disabling access to hosts at target site 89
  enabling access to hosts at initiator site 133
  installing HBAs at initiator site 126

B
BA370 enclosure 21

C
delete 220, 224, 228, 232
initialize 115
rename 76, 120, 122, 136, 139, 142
restart controller 67, 70, 105, 108
scan for new devices 140
set alloclass 65
set cache flush timer 225
set controller identifier 65
set controller mirrored cache 66, 104
set controller node 102
set controller port topology 67, 105
set controller prompt 104
set controller remote copy 68, 106
set disable 89
set disable access path 69, 107, 116
set enable access path 113, 120, 133, 140
set error mode 114
set fail all 118
set id
multi-mode 27
setting up 47
single-mode 27
switch-to-controller connections 48
Fibre Channel
  installing software for Windows at initiator site 134
  installing software for Windows at target site 90
  setting up switch 46
  switch-to-controller connection 48
fully-redundant power 27

G
GBIC
  fiber optic cable for 48
  inserting short wave 72
  long wave or very long distance 73, 111
  short-wave 27, 110
getting help 17

H
hardware, required components 21
HBAs
  installing driver for NetWare at initiator site 137
renaming host connections for Windows 135
reverifying disks for Solaris 144
rolling upgrade procedure 224
setting failsafe at 114
setting up AIX at 126
setting up NetWare at 137
setting up OpenVMS at 118
setting up Solaris at 141
setting up Tru64 UNIX at 121
setting up Windows at 134
shutdown upgrade procedure 231
Tru64 UNIX multipath software support 121
updating switch zones for AIX 132
updating switch zones for NetWare 140
updating switch zones for OpenVMS 120
updating switch zones for Solaris 143
EMA12000 27
EMA16000 21
ESA12000 21, 27
RA8000 27
related documentation 14
remote copy function, peer-to-peer 34
remote copy sets
  creating at initiator site 112
  overview 34
  resume switch 37
  suspend switch 36
rename command 76, 120, 122, 136, 139, 142
replicating storage units, cloning data for backup 217
resource partitioning 184
restart controller command 67, 70, 105, 108
restrictions
  Management Appliance 31
  StorageWorks Command Console 31
rolling upgrade procedure 224 to 231

S
scan for new devices
source unit 220
storage building block (SBB) 24
storage units
  creating at initiator site 107
  creating at target site 69
StorageWorks Command Console overview 31
SWCC
  configuring agent for AIX at initiator site 133
  configuring agent for AIX at target site 90
  configuring agent for Solaris at initiator site 144
  configuring agent for Solaris at target site 99
  installing for NetWare at initiator site 138
  installing for NetWare at target site 94
  installing for OpenVMS at initiator site 118
  installing for O
shutdown upgrade procedure 234
terminal emulator session 212
updating firmware for Windows 90
updating switch zones for AIX 89
updating switch zones for NetWare 96
updating switch zones for OpenVMS 77
updating switch zones for Solaris 98
updating switch zones for Tru64 UNIX 79
updating switch zones for Windows 93
verifying disks for AIX 89
verifying disks for Solaris 99
technical support, HP 17
text symbols 15
troubleshooting
  associating HBAs with servers 175
  information from controllers 162 to 167