Installation and Configuration Guide
HP StorageWorks HSG80 ACS Solution Software V8.8 for HP-UX
Product Version: 8.8-1
First Edition (March 2005)
Part Number: AA–RV1FA–TE
This guide provides installation and configuration instructions and reference material for operation of the HSG80 ACS Solution Software V8.8-1 for HP-UX.
© Copyright 2000-2005 Hewlett-Packard Development Company, L.P. Hewlett-Packard Company makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
Contents
About this Guide, 13
  Overview, 14
    Intended Audience, 14
    Related Documentation
  Determining the Address of the CCL
  Enabling/Disabling the CCL in SCSI-2 Mode
    Enabling the CCL
    Disabling the CCL
  Enabling/Disabling CCL in SCSI-3 Mode
  RAIDset Planning Considerations, 86
    Keep these points in mind when planning RAIDsets, 87
  Striped Mirrorset Planning Considerations, 88
  Storageset Expansion Considerations, 90
  Partition Planning Considerations
  Preparing HP-UX System
  Rolling Upgrades
  Preparing Storage RAID Array
  Configuring Controller Settings
  Restarting the Controller
  Setting Time and Verifying All Commands
  Plugging in the FC Cable and Verifying Connections
  Repeating Procedure for Each Host Adapter
  Displaying the Current Switches
  Changing RAIDset and Mirrorset Switches
  Changing Device Switches
  Changing Initialize Switches
  Changing Unit Switches
  Adding Storage Subsystem and its Host to Navigation Tree
  Removing Command Console Client
  Where to Find Additional Information
    About the User Guide
    About the Online Help
Figures (partial list)
  Five-member RAIDset using parity, 87
  Striped mirrorset (example 1), 89
  Striped mirrorset (example 2), 89
  One example of a partitioned single-disk unit
About this Guide
This installation guide for HSG80 ACS Solution Software V8.8-1 for HP-UX provides information to help you:
■ Plan the storage array subsystem.
■ Install and configure the storage array subsystem on individual operating system platforms.
Overview
This section covers the following topics:
■ "Intended Audience", page 14
■ "Related Documentation", page 14
Intended Audience
This book is intended for use by systems administrators and systems technicians who are experienced with the following:
■ Storage
■ Networking
Related Documentation
In addition to this guide, HP provides corresponding information:
■ ACS V8.8-1
Solution software host support includes the following platforms:
— IBM AIX
— HP-UX
— Linux (Red Hat x86, SuSE x86)
— Novell NetWare
— OpenVMS
— Sun Solaris
— Tru64 UNIX
— Windows NT/2000/Windows Server 2003 (32-bit)
Additional support required by HSG80 ACS Solution Software V8.8-1
Chapter Content Summary
Table 1 summarizes the content of the chapters.
Table 1: Summary of chapter contents
1. Planning a Subsystem: This chapter focuses on technical terms and knowledge needed to plan and implement storage array subsystems.
2. Planning Storage Configurations: Plan the storage configuration of your subsystem, using individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives.
Table 1: Summary of chapter contents (Continued)
7. Backing Up, Cloning, and Moving Data: Description of common procedures that are not mentioned elsewhere in this guide:
■ Backing Up Subsystem Configuration
■ Cloning Data for Backup
■ Moving Storagesets
Appendix A. Subsystem Profile Templates: This appendix contains storageset profiles to copy and use to create your system profiles.
Conventions
Conventions consist of the following:
■ "Document Conventions"
■ "Symbols in Text"
■ "Symbols on Equipment"
Document Conventions
This document follows the conventions in Table 2.
Tip: Text in a tip provides additional help to readers by describing nonessential or optional techniques, procedures, or shortcuts.
Note: Text set off in this manner presents commentary, sidelights, or interesting points of information.
Symbols on Equipment
The following equipment symbols may be found on hardware to which this guide pertains.
Power supplies or systems marked with these symbols indicate the presence of multiple sources of power.
WARNING: To reduce the risk of personal injury from electrical shock, remove all power cords to completely disconnect power from the power supplies and systems.
Any product or assembly marked with these symbols indicates that the component exceeds the recommended weight for one individual to handle safely.
Rack Stability
Rack stability protects personnel and equipment.
WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
■ The leveling jacks are extended to the floor.
■ The full weight of the rack rests on the leveling jacks.
■ In single rack installations, the stabilizing feet are attached to the rack.
■ In multiple rack installations, the racks are coupled.
■ Only one rack component is extended at any time.
Getting Help
If you still have a question after reading this guide, contact an HP authorized service provider or access our web site http://www.hp.com.
HP Technical Support
Telephone numbers for worldwide technical support are listed on the following HP web site http://www.hp.com/support/. From this web site, select the country of origin.
Note: For continuous quality improvement, calls may be recorded or monitored.
HP Authorized Reseller
For the name of your nearest HP authorized reseller:
■ In the United States, call 1-800-345-1518
■ In Canada, call 1-800-263-5868
■ Elsewhere, see the HP web site for locations and telephone numbers http://www.hp.com.
Configuration Flowchart
A three-part flowchart (Figures 1-3) is shown on the following pages. Refer to these charts while installing and configuring a new storage subsystem. All references in the flowcharts pertain to pages in this guide, unless otherwise indicated.
Figure 1: General configuration flowchart (panel 1). The chart directs you to unpack the subsystem (see the unpacking instructions on the shipping box), plan a subsystem (Chapter 1), plan storage configurations (Chapter 2), prepare the host system (Chapter 3), and make a local connection (page 152). For a single controller, cable the controller (page 153) and configure it (page 154); for a controller pair, cable the controllers (page 160) and configure them (page 161). If you are installing SWCC, continue with Figure 3 on page 27; otherwise continue with Figure 2 on page 26.
Figure 2: General configuration flowchart (panel 2). Continuing from panel 1: configure devices (page 168); create storagesets and partitions (stripeset, page 170; mirrorset, page 170; RAIDset, page 171; striped mirrorset, page 172; single (JBOD) disk, page 173; partition, page 173); assign unit numbers (page 175); set configuration options (page 177); and verify the storage setup (page 181). Continue creating units until you have completed your planned configuration.
Figure 3: Configuring storage with SWCC flowchart (panel 3). Continuing from panel 1: install the Agent (Chapter 4), install the Client (Appendix B), create storage (see the SWCC online help), and verify the storage setup (page 181).
1 Planning a Subsystem
This chapter provides information that helps you plan how to configure the storage array subsystem. It focuses on the technical terms and knowledge needed to plan and implement storage subsystems.
Note: This chapter frequently references the command line interface (CLI). For the complete syntax and descriptions of the CLI commands, see the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface Reference Guide.
Defining Subsystems
This section describes the terms this controller and other controller. It also presents graphics of the Model 2200 and BA370 enclosures.
Note: The HSG80 controller uses the BA370 or Model 2200 enclosure.
Controller Designations A and B
The terms A and B, and “this controller” and “other controller,” are used to distinguish one controller from another in a two-controller (also called dual-redundant) subsystem.
BA370 Enclosure
Figure 5: Location of controllers and cache modules in a BA370 enclosure. The figure calls out the EMU, the PVA, Controller A, Controller B, Cache module A, and Cache module B.
Controller Designations “This Controller” and “Other Controller”
Some CLI commands use the terms “this” and “other” to identify one controller or the other in a dual-redundant pair. These designations are a shortened form of “this controller” and “other controller.”
Model 2200 Enclosure
Figure 6: “This controller” and “other controller” for the Model 2200 enclosure (1: this controller; 2: other controller).
BA370 Enclosure
Figure 7: “This controller” and “other controller” for the BA370 enclosure (1: other controller; 2: this controller).
What is Failover Mode?
Failover is a way to keep the storage array available to the host if one of the controllers becomes unresponsive. A controller can become unresponsive because of a controller hardware failure or, in multiple-bus mode only, because of a failure of the link between host and controller or of the host-bus adapter. Failover keeps the storage array available to the hosts by allowing the surviving controller to take over total control of the subsystem.
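The failover mode itself is selected with the CLI. A representative sequence follows; see the CLI reference guide for the complete syntax and for cautions about when each command may be entered:

  SET FAILOVER COPY=THIS_CONTROLLER
  SET MULTIBUS_FAILOVER COPY=THIS_CONTROLLER
  SET NOFAILOVER

The first command places a controller pair in transparent failover mode, the second in multiple-bus failover mode, and the third removes the pair from a failover configuration. The COPY=THIS_CONTROLLER qualifier copies the configuration of “this controller” to the other controller.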
At any time, host port 1 is active on only one controller, and host port 2 is active on only one controller. The other ports are in standby mode. In normal operation, both host port 1 on controller A and host port 2 on controller B are active. A representative configuration is shown in Figure 8. The active and standby ports share port identity, enabling the standby port to take over for the active one.
Figure 8: Transparent failover—normal operation. Hosts 1 through 3 connect through two switches or hubs; host port 1 is active on controller A (units D0, D1) and standby on controller B, while host port 2 is active on controller B (units D100, D101, D120) and standby on controller A.
Figure 9: Transparent failover—after failover from Controller B to Controller A. Both host port 1 and host port 2 are now active on controller A; controller B and its ports are not available.
Multiple-Bus Failover Mode
Multiple-bus failover mode has the following characteristics:
■ Host controls the failover process by moving the units from one controller to another
In multiple-bus failover mode, you can specify which units are normally serviced by a specific controller of a controller pair. Units can be preferred to one controller or the other by the PREFERRED_PATH switch of the ADD UNIT (or SET unit) command. For example, use the following command to prefer unit D101 to “this controller”:
SET D101 PREFERRED_PATH=THIS_CONTROLLER
Note: This is an initial preference, which can be overridden by the hosts.
Planning a Subsystem Host 1 "RED" Host 2 "GREY" Host 3 "BLUE" FCA1 FCA2 FCA1 FCA2 FCA1 FCA2 Switch or hub Switch or hub Host port 1 active D0 Host port 2 active Controller A D1 D2 D100 D101 D120 All units visible to all ports Host port 1 active Controller B Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7094B Figure 10: Typical multiple-bus configuration 38 HSG80 ACS Solution Software V8.
Selecting a Cache Mode
The cache module supports read, read-ahead, write-through, and write-back caching techniques. The cache technique is selected separately for each unit. For example, you can enable only read and write-through caching for some units while enabling only write-back caching for other units.
Read Caching
When the controller receives a read request from the host, it reads the data from the disk drives, delivers it to the host, and stores the data in its cache module.
Write-Through Caching
Write-through caching is enabled when write-back caching is disabled. When the controller receives a write request from the host, it places the data in its cache module, writes the data to the disk drives, then notifies the host when the write operation is complete. This process is called write-through caching because the data actually passes through—and is stored in—the cache memory on its way to the disk drives.
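Cache behavior is selected per unit with unit switches. As an illustration (representative syntax only; verify against the CLI reference guide), the following commands enable read and write-back caching on unit D101:

  SET D101 READ_CACHE
  SET D101 WRITEBACK_CACHE

Substituting NOWRITEBACK_CACHE reverts the unit to write-through caching; the READAHEAD_CACHE and NOREADAHEAD_CACHE switches control read-ahead caching in the same way.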
Enabling Mirrored Caching
In mirrored caching, half of each controller’s cache mirrors the companion controller’s cache, as shown in Figure 11. The total memory available for cached data is reduced by half, but the level of protection is greater.
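Mirrored caching is enabled with a controller-level switch. A representative command (verify the syntax and restart behavior in the CLI reference guide) is:

  SET THIS_CONTROLLER MIRRORED_CACHE

Because the cache memory is re-partitioned when mirroring is turned on or off, expect both controllers to restart as part of the change.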
The CCL does the following:
■ Allows the RAID Array to be recognized by the host as soon as it is attached to the SCSI bus and configured into the operating system.
■ Serves as a communications device for the HS-Series Agent. The CCL identifies itself to the host by a unique identification string.
In dual-redundant controller configurations, the commands described in the following sections alter the setting of the CCL on both controllers. The CCL is enabled only on host port 1.
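In SCSI-2 mode, the CCL is switched on and off with controller-level commands. A representative pair (verify the exact syntax in the CLI reference guide) is:

  SET THIS_CONTROLLER COMMAND_CONSOLE_LUN
  SET THIS_CONTROLLER NOCOMMAND_CONSOLE_LUN

In a dual-redundant configuration, the setting is applied to both controllers.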
Enabling/Disabling CCL in SCSI-3 Mode
The CCL is enabled all the time. There is no option to enable/disable.
Determining Connections
The term “connection” applies to every path between a Fibre Channel adapter in a host computer and an active host port on a controller.
Note: In ACS V8.8-1, the maximum number of supported connections is 96.
Naming Connections
It is highly recommended that you assign names to connections that have meaning in the context of your particular configuration.
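As an illustration only (the placeholder connection name !NEWCON01 and the exact syntax should be verified against the CLI reference guide), a newly discovered connection can be listed and then given a meaningful name:

  SHOW CONNECTIONS
  RENAME !NEWCON01 AQUA1A1

The name AQUA1A1 follows the host/adapter/controller/port pattern used in the examples in this chapter, which makes it easy to see at a glance which path a connection represents.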
Numbers of Connections
The number of connections resulting from cabling one adapter into a switch or hub depends on failover mode and how many links the configuration has:
■ If a controller pair is in transparent failover mode and the port 1 link is separate from the port 2 link (that is, ports 1 of both controllers are on one loop or fabric, and port 2 of both controllers are on another), each adapter has one connection, as shown in Figure 12.
Planning a Subsystem Host 1 "AQUA" Host 2 "BLACK" Host 3 "BROWN" FCA1 FCA1 FCA1 Switch or hub Switch or hub Connection AQUA1A1 Host port 1 active Host port 2 standby Controller A Connection BLACK1B2 Connection BROWN1B2 D0 D1 Host port 1 standby D100 Controller B D101 D120 Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7081B Figure 12: Connections in separate-link, transparent failover mode configurations HSG80 ACS Solution Software V8.
Planning a Subsystem Host 1 "GREEN" Host 2 "ORANGE" Host 3 "PURPLE" FCA1 FCA1 FCA1 Switch or hub Connections GREEN1A1 ORANGE1A1 PURPLE1A1 Host port 1 active D0 Host port 2 standby Controller A D1 Host port 1 standby Connections GREEN1B2 ORANGE1B2 PURPLE1B2 D100 Controller B D101 D120 Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7079B Figure 13: Connections in single-link, transparent failover mode configurations 46 HSG80 ACS Solution Software V8.
Planning a Subsystem Host 1 "VIOLET" FCA1 FCA2 Switch or hub Connection VIOLET1B1 Switch or hub Connection VIOLET1A1 Connection VIOLET2A2 Host port 1 active D0 Host port 2 active Controller A D1 D2 D100 Connection VIOLET2B2 D101 D120 All units visible to all ports Host port 1 active Controller B Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7080B Figure 14: Connections in multiple-bus failover mode HSG80 ACS Solution Software V8.
Assigning Unit Numbers
The controller keeps track of the unit with the unit number. The unit number can be from 0–199 prefixed by a D, which stands for disk drive. A unit can be presented as different LUNs to different connections.
For example, if all host connections use the default offset values, unit D2 is presented to a port 1 host connection as LUN 2 (unit number of 2 minus offset of 0). Unit D102 is presented to a port 2 host connection as LUN 2 (unit number of D102 minus offset of 100). Figure 15 shows how units are presented as different LUNs, depending on the offset of the host.
An additional factor to consider when assigning unit numbers and offsets is SCSI version. If the SCSI_VERSION switch of the SET THIS_CONTROLLER/OTHER_CONTROLLER command is set to SCSI-3, the CCL is presented as LUN 0 to every connection, superseding any unit assignments. The interaction between SCSI version and unit numbers is explained further in the next section. In addition, the access path to the host connection must be enabled for the connection to access the unit.
Assigning Unit Numbers Depending on SCSI_VERSION
The SCSI_VERSION switch of the SET THIS_CONTROLLER/OTHER_CONTROLLER command determines how the CCL is presented. There are two choices: SCSI-2 and SCSI-3. The choice for SCSI_VERSION affects how certain unit numbers and certain host connection offsets interact.
Assigning Host Connection Offsets and Unit Numbers in SCSI-3 Mode
If SCSI_VERSION is set to SCSI-3, the CCL is presented as LUN 0 to all connections.
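The SCSI mode itself is set as a controller switch. For example (representative syntax; verify against the CLI reference guide):

  SET THIS_CONTROLLER SCSI_VERSION=SCSI-3

Expect to restart both controllers for a SCSI_VERSION change to take effect.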
Table 3 summarizes the recommendations for unit assignments based on the SCSI_VERSION switch.
Table 3: Unit assignments and SCSI_VERSION
SCSI-2: offsets divisible by 10; units assigned at offsets; the connection sees LUN 0 as the unit whose number matches the offset.
SCSI-3: offsets divisible by 10; units assigned not at offsets; the connection sees LUN 0 as the CCL.
What is Selective Storage Presentation?
Selective storage presentation is a feature of the HSG80 controller that enables you to control the allocation of storage space and shared access to storage across multiple hosts. This is also known as restricting host access. In a subsystem that is attached to more than one host, or whose hosts have more than one adapter, it is possible to reserve certain units for the exclusive use of certain host connections.
Note: These techniques also work for a single controller.
Restricting Host Access by Separate Links
In transparent failover mode, host port 1 of controller A and host port 1 of controller B share a common Fibre Channel link. Host port 2 of controller A and host port 2 of controller B also share a common Fibre Channel link.
Planning a Subsystem Host 1 "AQUA" Host 2 "BLACK" Host 3 "BROWN" FCA1 FCA1 FCA1 Switch or hub Switch or hub Connection AQUA1A1 Host port 1 active Host port 2 standby Controller A Connection BLACK1B2 Connection BROWN1B2 D0 D1 Host port 1 standby D100 Controller B D101 D120 Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7081B Figure 16: Limiting host access in transparent failover mode Restricting Host Access by Disabling Access Paths If more than one host is on a link (that is
For example: In Figure 17, restricting access to unit D101 so that only host 3 (the host named BROWN) can use it is done by enabling only the connection to host 3. Enter the following commands:
SET D101 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=BROWN1B2
If the storage subsystem has more than one host connection, carefully specify the access path to avoid providing undesired host connections access to the unit.
Restricting Host Access by Offsets
Offsets establish the start of the range of units that a host connection can access. For example: In Figure 16, assume both host connections on port 2 (connections BLACK1B2 and BROWN1B2) initially have the default port 2 offset of 100. Setting the offset of connection BROWN1B2 to 120 presents unit D120 to host BROWN as LUN 0.
SET BROWN1B2 UNIT_OFFSET=120
Host BROWN cannot see units lower than its offset, so it cannot access units D100 and D101.
Planning a Subsystem Host 1 "RED" Host 2 "GREY" Host 3 "BLUE" FCA1 FCA2 FCA1 FCA2 FCA1 FCA2 Switch or hub Connections RED1B1 GREY1B1 BLUE1B1 Switch or hub Connections RED2A2 GREY2A2 BLUE2A2 Connections RED1A1 GREY1A1 BLUE1A1 Host port 1 active Host port 2 active Controller A D0 D1 D2 D100 Connections RED2B2 GREY2B2 BLUE2B2 D101 D120 All units visible to all ports Host port 1 active Controller B Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7078 Figure 17: Limiting host acces
multiple-bus failover to work. For most operating systems, it is desirable to have all connections to the host enabled.
For example: In Figure 17, assume all host connections initially have the default offset of 0. Giving all of host BLUE’s connections an offset of 120 presents unit D120 to host BLUE as LUN 0. Enter the following commands:
SET BLUE1A1 UNIT_OFFSET=120
SET BLUE1B1 UNIT_OFFSET=120
SET BLUE2A2 UNIT_OFFSET=120
SET BLUE2B2 UNIT_OFFSET=120
Host BLUE cannot see units lower than its offset, so it cannot access any other units.
In multiple-bus failover mode, each of the host ports has its own port ID:
■ Controller B, port 1—worldwide name + 1, for example 5000-1FE1-FF0C-EE01
■ Controller B, port 2—worldwide name + 2, for example 5000-1FE1-FF0C-EE02
■ Controller A, port 1—worldwide name + 3, for example 5000-1FE1-FF0C-EE03
■ Controller A, port 2—worldwide name + 4, for example 5000-1FE1-FF0C-EE04
Use the CLI command, SHOW THIS_CONTROLLER/OTHER_CONTROLLER to display the subsystem’s worldwide name.
Figure 19: Placement of the worldwide name label on the BA370 enclosure. The label lists the part number, the node ID (worldwide name) in the form NNNN–NNNN–NNNN–NNNN, the serial number, and a two-digit checksum.
Caution: Each subsystem has its own unique worldwide name (node ID). If you attempt to set the subsystem worldwide name to a name other than the one that came with the subsystem, the data on the subsystem is not accessible.
2 Planning Storage Configurations
This chapter provides information to help you plan the storage configuration of your subsystem. Storage containers are individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives. Use the guidelines found in this section to plan the various types of storage containers needed.
Where to Start
The following procedure outlines the steps to follow when planning your storage configuration. See Appendix A to locate the blank templates for keeping track of the containers being configured.
1. Determine your storage requirements. Use the questions in "Determining Storage Requirements", page 66, to help you.
2. Review configuration rules. See "Configuration Rules for the Controller", page 67.
3.
— Use the Command Line Interface (CLI) commands. This method allows you flexibility in defining and naming your storage containers. See the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Command Line Interface Reference Guide.
Determining Storage Requirements
It is important to determine your storage requirements.
Configuration Rules for the Controller
The following list defines maximum configuration rules for the controller:
■ 128 visible LUNs/200 assignable unit numbers
— In SCSI-2 mode, if the CCL is enabled, the result is 127 visible LUNs and one CCL.
— In SCSI-3 mode, if the CCL is enabled, the result is 126 visible LUNs and two CCLs.
Tip: If you are redeploying disks that have been operating under a prior version of ACS into a newly established container, always initialize the devices and the new container before proceeding with subsystem activities, to avoid operational and performance issues.
Addressing Conventions for Device PTL
The HSG80 controller has six SCSI device ports, each of which connects to a SCSI bus. In dual-controller subsystems, these device buses are shared between the two controllers. (The StorageWorks Command Console calls the device ports “channels.”) The standard BA370 enclosure provides a maximum of four SCSI target identifications (ID) for each device port. If more target IDs are needed, expansion enclosures can be added to the subsystem.
■ L—Designates the logical unit (LUN) of the device. For disk devices the LUN is always 0.
Figure 21: PTL naming convention. The device name Disk10200 decodes as Port 1, Target 02, LUN 00.
The controller can either operate with a BA370 enclosure or with a Model 2200 controller enclosure combined with Model 4214R, Model 4254, Model 4310R, Model 4350R, Model 4314R, or Model 4354R disk enclosures. The controller operates with BA370 enclosures that are assigned ID numbers 0, 2, and 3.
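The PTL appears both in the device name and in the ADD command used to configure a disk. For example, the drive at port 1, target 02, LUN 00 is added as follows (representative syntax; verify against the CLI reference guide):

  ADD DISK DISK10200 1 2 0

The container name DISK10200 encodes the same port, target, and LUN values as the three numeric arguments.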
Examples - Model 2200 Storage Maps, PTL Addressing
The Model 2200 controller enclosure can be combined with the following:
■ Model 4214R disk enclosure—Ultra2 SCSI with 14 drive bays, single-bus I/O module.
■ Model 4254 disk enclosure—Ultra2 SCSI with 14 drive bays, dual-bus I/O module.
Note: The Model 4214R uses the same storage maps as the Model 4314R, and the Model 4254 uses the same storage maps as the Model 4354R disk enclosures.
■ Model 4354R disk enclosure—Ultra3 SCSI with 14 drive bays, dual-bus I/O module.
Table 7 shows the addresses for each device in a three-shelf, dual-bus configuration. A maximum of three Model 4354R disk enclosures can be used with each Model 2200 controller enclosure.
Note: Appendix A contains storageset profiles you can copy and use to create your own system profiles.
Model 4310R disk enclosure storage map (single-bus). Each enclosure has bays 1–10, which map to SCSI IDs 00, 01, 02, 03, 04, 05, 08, 10, 11, and 12; device names follow the Disk‹port›‹target›00 convention (for example, shelf 1 holds Disk10000 through Disk11200, and shelf 4 holds Disk40000 through Disk41200).
Table 5: PTL addressing, dual-bus configuration, three Model 4350R enclosures. Each shelf has bays 1–10 split between SCSI bus A and SCSI bus B, with SCSI IDs 00–04 on each bus; for example, shelf 1 holds Disk10000 through Disk10400 on bus A and Disk20000 through Disk20400 on bus B, and the remaining shelves follow the same pattern with higher port numbers.
Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures. Each enclosure has bays 1–14, which map to SCSI IDs 00–05 and 08–15; device names follow the Disk‹port›‹target›00 convention (for example, shelf 2 holds Disk20000 through Disk21500, shelf 3 holds Disk30000 through Disk31500, shelf 5 holds Disk50000 through Disk51500, and shelf 6 holds Disk60000 through Disk61500).
Table 7: PTL addressing, dual-bus configuration, three Model 4354R enclosures.
Choosing a Container Type
Different applications may have different storage requirements. You probably want to configure more than one kind of container within your subsystem. In choosing a container, you choose between independent disks (JBODs) or one of several storageset types, as shown in Figure 23. The independent disks and the selected storageset may also be partitioned. The storagesets implement RAID (Redundant Array of Independent Disks) technology.
Table 8 compares the different kinds of containers to help you determine which ones satisfy your requirements.
Creating a Storageset Profile
Creating a profile for your storagesets, partitions, and devices can simplify the configuration process. Filling out a storageset profile helps you choose the storagesets that best suit your needs and to make informed decisions about the switches you can enable for each storageset or storage device that you configure in your subsystem. For an example of a storageset profile, see Table 9.
Initialize Switches:
  Chunk size:          _X_ Automatic (default)  ___ 64 blocks  ___ 128 blocks  ___ 256 blocks
  Save Configuration:  ___ No (default)  _X_ Yes
  Metadata:            _X_ Destroy (default)  ___ Retain
Unit Switches:
  Caching:  Read caching _X_   Read-ahead caching ___   Write-back caching _X_   Write-through caching ___
  Access by following hosts enabled: _ALL_
Planning Considerations for Storageset This section contains guidelines for choosing the storageset type needed for your subsystem: ■ "Stripeset Planning Considerations", page 82 ■ "Mirrorset Planning Considerations", page 84 ■ "RAIDset Planning Considerations", page 86 ■ "Striped Mirrorset Planning Considerations", page 88 ■ "Storageset Expansion Considerations", page 90 ■ "Partition Planning Considerations", page 90 Stripeset Planning Considerations
The relationship between the chunk size and the average request size determines whether striping maximizes the request rate or the data-transfer rate. You can set the chunk size or use the default setting (see "Chunk Size", page 95, for information about setting the chunk size). Figure 25 shows another example of a three-member RAID 0 stripeset. A major benefit of striping is that it balances the I/O load across all of the disk drives in the storageset.
■ Striping does not protect against data loss. In fact, because the failure of one member is equivalent to the failure of the entire stripeset, the likelihood of losing data is higher for a stripeset than for a single disk drive. For example, if the mean time between failures (MTBF) for a single disk is L hours, then the MTBF for a stripeset that comprises N such disks is L/N hours.
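The MTBF relationship above is simple division: a stripeset fails when any one member fails, so N members each rated at L hours yield roughly L/N hours for the whole set. A minimal sketch with hypothetical hour values:

```shell
# MTBF of a stripeset: the failure of any one member fails the
# whole set, so MTBF(set) = MTBF(single disk) / (number of members).
stripeset_mtbf() {
    single_mtbf_hours=$1 members=$2
    echo $(( single_mtbf_hours / members ))
}

stripeset_mtbf 500000 5   # prints 100000
```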
[Figure 26 content: three mirrored pairs, Disk10000/Disk20000 (A and A'), Disk10100/Disk20100 (B and B'), and Disk10200/Disk20200 (C and C'); each mirror drive contains a copy of its partner's data.]
Figure 26: Mirrorsets maintain two copies of the same data
[A companion diagram contrasts the operating system's view of the virtual disk (Block 0, Block 1, Block 2, etc.) with the actual device mappings: Disk 1 and Disk 2 each hold the full sequence Block 0, Block 1, Block 2, etc.]
■ You can configure up to a maximum of 20 RAID 3/5 mirrorsets per controller or pair of dual-redundant controllers. Each mirrorset may contain up to 6 members. Refer to "Configuration Rules for the Controller", page 67, for detailed information on maximum numbers. A total of 30 RAID 3/5 and RAID 1 mirrorsets is permitted; however, there is a limit of no more than 20 RAID 3/5 mirrorsets in such a configuration. ■ Both write-back cache modules must be the same size.
[Figure content: the operating system's view of the virtual disk (Block 0, Block 1, Block 2 through Block 5, etc.) contrasted with the actual mapping onto RAIDset members; Disk 1 holds Blocks 0, 5, 10, and 15, with the remaining blocks distributed across the other members.]
Planning Storage Configurations ■ A RAIDset must include at least 3 disk drives, but no more than 14. ■ A storageset should only contain disk drives of the same capacity. The controller limits the capacity of each member to the capacity of the smallest member in the storageset. Thus, if you combine 9 GB disk drives with 4 GB disk drives in the same storageset, you waste 5 GB of capacity on each 9 GB member.
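The capacity rule above is easy to quantify: every member is truncated to the capacity of the smallest member, so each larger drive wastes the difference. A sketch using the 9 GB / 4 GB example from the text:

```shell
# Capacity wasted per member when drive sizes are mixed: the
# controller limits each member to the smallest member's capacity.
wasted_per_member() {
    member_gb=$1 smallest_gb=$2
    echo $(( member_gb - smallest_gb ))
}

wasted_per_member 9 4   # prints 5 (GB lost on each 9 GB member)
```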
[Figure 29 content: a striped mirrorset built from three two-member mirrorsets (Mirrorset1, Mirrorset2, and Mirrorset3) using Disk10000, Disk20000, Disk10100, Disk20100, Disk10200, and Disk20200; data blocks A, B, and C and their copies A', B', and C' are distributed across the pairs.]
Figure 29: Striped mirrorset (example 1)
The failure of a single disk drive has no effect on the ability of the storageset to deliver data to the host. Under normal circumstances, a single disk drive failure has very little effect on performance.
Plan the mirrorset members, and plan the stripeset that contains them. Review the recommendations in "Planning Considerations for Storageset", page 82, and "Mirrorset Planning Considerations", page 84. Storageset Expansion Considerations Storageset expansion lets you join two storage containers of the same kind by concatenating RAIDsets, stripesets, or individual disks, forming a larger virtual disk that is presented as a single unit.
Planning Storage Configurations unpartitioned storageset or device. Partitions are separately addressable storage units; therefore, you can partition a single storageset to service more than one user group or application. Defining a Partition Partitions are expressed as a percentage of the storageset or single disk unit that contains them: ■ Mirrorsets and single disk units—the controller allocates the largest whole number of blocks that are equal to or less than the percentage you specify.
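For mirrorsets and single-disk units, the allocation described above is a floor operation: the controller takes the largest whole number of blocks that does not exceed the requested percentage of the container. A sketch of the arithmetic (the block count here is a hypothetical example, not a value from this guide):

```shell
# Largest whole number of blocks that is equal to or less than the
# requested percentage of the container (integer division floors).
partition_blocks() {
    total_blocks=$1 percent=$2
    echo $(( total_blocks * percent / 100 ))
}

partition_blocks 17769177 20   # 20% of a 17,769,177-block unit
```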
Changing Characteristics Through Switches CLI command switches provide another level of control over command behavior. There are three types of switches that modify the storageset and unit characteristics: ■ Storageset switches ■ Initialization switches ■ Unit switches The following sections describe how to enable/modify switches. They also contain a description of the major CLI command switches.
Planning Storage Configurations Specifying Storageset and Partition Switches The characteristics of a particular storageset can be set by specifying switches when the storageset is added to the controllers’ configuration. Once a storageset has been added, the switches can be changed by using a SET command. Switches can be set for partitions and the following types of storagesets: ■ RAIDset ■ Mirrorset Stripesets have no specific switches associated with their ADD and SET commands.
Partition Switches The following switches are available when creating a partition: ■ Size ■ Geometry For details on the use of these switches, refer to CREATE_PARTITION command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Command Line Interface Reference Guide.
Specifying Initialization Switches Initialization switches set characteristics for established storagesets before they are made into units. The following kinds of switches affect the format of a disk drive or storageset: ■ Chunk Size (for stripesets and RAIDsets only) ■ Save Configuration ■ Destroy/Nodestroy ■ Geometry Each of these switches is described in the following sections.
Planning Storage Configurations Increasing the Request Rate A large chunk size (relative to the average request size) increases the request rate by enabling multiple disk drives to respond to multiple requests. If one disk drive contains all of the data for one request, then the other disk drives in the storageset are available to handle other requests. Thus, separate I/O requests can be handled in parallel, which increases the request rate. This concept is shown in Figure 32.
Planning Storage Configurations ■ If you have mostly sequential reads or writes (like those needed to work with large graphic files), make the chunk size for RAID 0 and RAID 0+1 a small number (for example: 67 sectors). For RAID 5, make the chunk size a relatively large number (for example: 253 sectors). Table 10 shows a few examples of chunk size selection.
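The chunk sizes above are quoted in sectors. Assuming the usual 512-byte sector, converting to bytes makes them easier to compare against your average request size; this conversion is a sketch, not part of the controller CLI:

```shell
# Convert a chunk size in sectors to bytes, assuming 512-byte sectors.
chunk_bytes() {
    echo $(( $1 * 512 ))
}

chunk_bytes 67    # prints 34304  (about 33.5 KB)
chunk_bytes 253   # prints 129536 (about 126.5 KB)
```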
Planning Storage Configurations Note: HP recommends that you DO NOT use SAVE_CONFIGURATION on every unit and device on the controller. Destroy/Nodestroy Specify whether to destroy or retain your data and metadata when a disk is initialized after it has been used in a mirrorset or as a single-disk unit. Note: The DESTROY and NODESTROY switches are only valid for mirrorsets and striped mirrorsets. ■ DESTROY (default) overwrites your data and forced-error metadata when a disk drive is initialized.
Planning Storage Configurations Specifying Unit Switches Several switches control the characteristics of units. The unit switches are described under the SET unit-number command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Command Line Interface Reference Guide. One unit switch, ENABLE/DISABLE_ACCESS_PATH, determines which host connections can access the unit, and it is described in the larger topic of matching units to specific hosts.
Planning Storage Configurations Creating Storage Maps Configuring a subsystem is easier if you know how the storagesets, partitions, and JBODs correspond to the disk drives in your subsystem. You can more easily see this relationship by creating a hardcopy representation, also known as a storage map. To make a storage map, fill out the templates provided in Appendix A as you add storagesets, partitions, and JBOD disks to the configuration and assign them unit numbers.
Planning Storage Configurations Example Storage Map–Model 4310R Disk Enclosure Table 11 shows an example of four Model 4310R disk enclosures (single-bus I/O). ■ Unit D100 is a 4-member RAID 3/5 storageset named R1. R1 consists of Disk10000, Disk20000, Disk30000, and Disk40000. ■ Unit D101 is a 2-member striped mirrorset named S1. S1 consists of M1 and M2: — M1 is a 2-member mirrorset consisting of Disk10100 and Disk20100. — M2 is a 2-member mirrorset consisting of Disk30100 and Disk40100.
Table 11: Model 4310R disk enclosure, example of storage map
[Table 11 content: four Model 4310R single-bus shelves. In each shelf, bays 1 through 10 map to SCSI IDs 00 through 05, 08, 10, 11, and 12. Each bay is labeled with its unit (D100, D101, and so on), its storageset member (R1, R2, R3, S1 through S5, M1 through M7, or spare), and its DISK ID (Disk10000 through Disk41200), following the unit-to-storageset assignments listed above.]
Preparing the Host System 3 This chapter describes how to prepare your HP-UX host computer to accommodate the HSG80 controller storage subsystem. The following information is included in this chapter: ■ "Installing RAID Array Storage System", page 106 ■ "Making a Physical Connection", page 110 ■ "Configuring HP-UX File System", page 115 Refer to Chapter 4 for instructions on how to install and configure the HSG Agent. The Agent for HSG is operating system-specific and polls the storage.
Preparing the Host System Installing RAID Array Storage System WARNING: A shock hazard exists at the backplane when the controller enclosure bays or cache module bays are empty. Be sure the enclosures are empty, then mount the enclosures into the rack. DO NOT use the disk enclosure handles to lift the enclosure. The handles cannot support the weight of the enclosure. Only use these handles to position the enclosure in the mounting brackets.
Preparing the Host System 5. Install the elements. Install the disk drives. Make sure you install blank panels in any unused bays. Fibre Channel cabling information is shown to illustrate supported configurations. In a dual-bus disk enclosure configuration, disk enclosures 1, 2, and 3 are stacked below the controller enclosure—two SCSI Buses per enclosure (see Figure 33).
1 SCSI Bus 1 Cable
2 SCSI Bus 2 Cable
3 SCSI Bus 3 Cable
4 SCSI Bus 4 Cable
5 SCSI Bus 5 Cable
6 SCSI Bus 6 Cable
7 AC Power Inputs
8 Fibre Channel Ports
Figure 33: Dual-bus enterprise storage RAID array storage system
1 SCSI Bus 1 Cable
2 SCSI Bus 2 Cable
3 SCSI Bus 3 Cable
4 SCSI Bus 4 Cable
5 SCSI Bus 5 Cable
6 SCSI Bus 6 Cable
7 AC Power Inputs
8 Fibre Channel Ports
Figure 34: Single-bus enterprise storage RAID array storage system
Preparing the Host System Making a Physical Connection Your new storage system components must be initially configured using a serial cable connection to the HSG80 array controllers. You can change the following settings: SCSI-2 or SCSI-3 mode, CCL enabled or disabled, and Transparent or Multibus Failover mode. The controllers are preset to SCSI-2 mode, CCL enabled, and Transparent Failover mode. For HP-UX, SCSI-2 and SCSI-3 modes are supported with ACS 8.
For HP FC HBAs: #ioscan -fn |grep fc For HP FC HBAs, the output from the HP-based command is similar to:
Class  I  H/W Path  Driver    S/W State  H/W Type   Description
fc     0  10/8      fcT1      CLAIMED    INTERFACE  HP FC Mass Storage Adapter
lan    1  10/8.6    fcT1_cnt  CLAIMED    INTERFACE  HP FC Mass Storage Cntl
fcp    0  10/8.8    fcp       CLAIMED    INTERFACE  FCP Protocol Adapter
Installing Host Bus Adapter To make a physical connection, first install a host bus adapter.
Preparing the Host System For native HP HBAs, follow the instructions that came with your adapter. Connecting to Your Host System To connect to your host system, perform the following steps: 1. Shut down the HP System. Ensure that all power switches on the array controller and the host computer system are in the OFF position. 2. Connect an FC cable between the FC adapter connector on the back of the HP system and a port on the HP StorageWorks FC switch. 3.
Preparing the Host System 6. Turn on the power to the HP system, but do not boot the system. 7. Use the CLI to configure your array controller if it is not yet configured. 8. Boot the system. During the boot process, device special files are created for each logical unit configured on the array controller, and a Logical Unit Number (LUN) is assigned to each storageset configured on the array controller.
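The device special files created at boot follow the standard HP-UX cXtYdZ pattern (controller instance, SCSI target, device/LUN). The helper below is a sketch assuming that naming; the instance numbers in the examples are illustrative only:

```shell
# Split an HP-UX device special file name of the form c<X>t<Y>d<Z>
# into its controller-instance, target, and device (LUN) fields.
parse_ctd() {
    name=$1
    c=${name#c};  c=${c%%t*}    # digits between 'c' and 't'
    t=${name#*t}; t=${t%%d*}    # digits between 't' and 'd'
    d=${name##*d}               # digits after the final 'd'
    echo "controller=$c target=$t lun=$d"
}

parse_ctd c5t0d2   # prints: controller=5 target=0 lun=2
```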
[Figure content: HP Host 1 and HP Host 2, each with an adapter, connect through a single switch to Port 1 and Port 2 of Controller 1 and Controller 2.]
Figure 35: Single switch HP MC/ServiceGuard cluster, high availability configuration
Preparing the Host System Configuring HP-UX File System The HP-UX Operating System has now been modified to communicate with the RAID Array Controller. Enable Volume Set Addressing for HP Host Mode HP systems require volume set addressing in order to present more than eight LUNs to the host. The new host port mode is HP_VSA. In order to use the RAID Array from the HP system, perform the following steps: 1. Create new virtual disks on your RAID Array and assign LUN IDs to each virtual disk created.
2. Create a directory for the volume group. This is located in the /dev directory. Enter the following command to verify which volume groups exist. # ls /dev You see a directory similar to the one shown below. From the above list, the only volume group that exists is vg00. For the purpose of this example, vg01 has been created. 3. Enter the following command to make the directory vg01: # mkdir /dev/vg01 Note: The standard HP-UX kernel supports vg00 to vg09.
Class  I   H/W Path             Driver  S/W State
/dev/dsk/c5t0d2
disk   28  8/12.8.0.255.4.1.0   sdisk
disk   29  8/12.8.0.255.4.1.1   sdisk   CLAIMED
disk   30  8/12.8.0.255.4.1.2   sdisk   CLAIMED
disk   2   8/16/5.2.0           sdisk   CLAIMED
disk   3   8/16/5.5.
Preparing the Host System 7. The volume group may be displayed after creation with the vgdisplay command: # vgdisplay -v /dev/vg0n 8.
Free PE    7677
PV Name    /dev/dsk/c5t0d0
PV Status  available
Total PE   8677
Free PE    8677
9. Creation and management of the logical volumes may now be accomplished via the HP-UX commands. An example of doing this is:
# lvcreate /dev/vg01
Logical volume /dev/vg01/lvol1 has been successfully created with character device /dev/vg01/lvol1
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.
Current LE            25
Allocated PE          25
Stripes               0
Stripe Size (Kbytes)  0
Bad block             on
Allocation            strict
Solution Software Upgrade Procedures Use the following procedures for upgrades to your Solution Software. It is considered best practice to follow this order of procedures: 1. Perform backups of data prior to upgrade. 2. Verify operating system versions; upgrade operating systems to supported versions and patch levels. 3.
Preparing the Host System Note: Rolling upgrades are not supported on HP-UX when configured with Secure Path. Preparing Storage RAID Array Prepare the Enterprise/Modular Storage RAID Array by following these recommended steps: 1. Verify the storage system is operating properly. a. Check that the date and time are set properly by typing: HSG80> show this b. Check that no units are in a failed state by typing: HSG80> show storage c. Verify the cache battery is fully charged by typing: HSG80> show this 2.
6. Set option 4 (the Agent auto-start-at-boot option) on the SWCC Agent menu to the off state. For MC/ServiceGuard clusters, set option 4 to the off state on all nodes. Rolling Upgrades Most major problems that occur when performing a rolling upgrade are due to the changes to the array controller NVRAM configuration. Therefore, it is important to follow the rolling upgrade procedure exactly.
Preparing the Host System Preparing Storage RAID Array Prepare the Enterprise/Modular Storage RAID Array by following these recommended steps: 1. Verify the storage system is operating properly. a. Check that the date and time are set properly by typing: HSG80> show this b. Check that no units are in a failed state by typing: HSG80> show storage c. Verify the cache battery is fully charged by typing: HSG80> show this 2.
Preparing the Host System For further information, refer to the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Maintenance and Service Guide. An upgrade performed while the subsystem is up and running is known as a “Rolling Upgrade”. This type of upgrade does not require a shutdown and restart of the subsystem for the upgrade changes to take effect. Recommendations for doing a rolling upgrade are: ■ Upgrade when disk activity is at a minimum.
Installing and Configuring HSG Agent 4 StorageWorks Command Console (SWCC) enables real-time configuration of the storage environment and permits you to monitor and configure the storage connected to the HSG80 controller.
Controller Configuration with HP-UX StorageWorks Command Console (SWCC) provides a graphical user interface that can be used to configure and monitor your storage system. Solution Software V8.8-1 for HP-UX does not support SWCC for Versions 11i V2.0 and 11i V2.0 Update 2. ACS V8.8-1 also does not support root, boot, dump, or swap. Note: For installations using SWCC, see “Why Use StorageWorks Command Console (SWCC)?” on page 127.
Installing and Configuring HSG Agent Why Use StorageWorks Command Console (SWCC)? StorageWorks Command Console (SWCC) enables you to monitor and configure the storage connected to the HSG80 controller. SWCC consists of Client and Agent. ■ The Client provides pager notification and lets you manage your virtual disks. The Client runs on Windows 2000 with Service Pack 4, Windows NT 4.0 with Service Pack 6A or above, and Windows Server 2003 (32-bit).
Note: For serial and SCSI connections, the Agent is not required for creating virtual disks.
Installing and Configuring HSG Agent Installation and Configuration Overview Table 13 provides an overview of the installation. Table 13: Installation and configuration overview Step Procedure 1 Verify that your hardware has been set up correctly. See the previous chapters in this guide. 2 Verify that you have a network connection for the Client and Agent systems. See "About the Network Connection for the Agent", page 130. 3 Verify that there is a LUN for communications.
Installing and Configuring HSG Agent About the Network Connection for the Agent The network connection, shown in Figure , displays the subsystem connected to a hub or a switch. SWCC can consist of any number of Clients and Agents in a network. However, it is suggested that you install only one Agent on a computer. By using a network connection, you can configure and monitor the subsystem from anywhere on the LAN. If you have a WAN or a connection to the Internet, monitor the subsystem with TCP/IP.
1 Agent system (has the Agent software)
2 TCP/IP network
3 Client system (has the Client software)
4 Fibre Channel cable
5 Hub or switch
6 HSG80 controller and its device subsystem
7 Servers
An example of a network connection
Before Installing the Agent The Agent requires that the system meet the minimum requirements defined in the release notes for your operating system. The program is designed to operate with the Client V2.5 on Windows 2000, Windows NT, and Windows Server 2003. Installing and Configuring the Agent The SWCC Agent has been designed to run on HP-UX V11.00 and V11.11 in 32- or 64-bit mode operation. Note: SWCC is not supported on HP-UX Versions 11i V2.0 or 11i V2.0 Update 2.
/var/adm/syslog The steamd.log resides in this directory and can grow large depending on the number of events your storage system has reported. It is recommended that you have at least 250 MB of available space in your /var directory initially to support the SWCC Agent event logging system. You can clean the log by using an editor such as vi, or move the log to a backup area and create a new empty file with touch or vi. Preparing for the Installation 1.
The SWCC Agent installation now supports a graphic environment installation. You must have an X-windows environment set up in order to run this type of installation. 1. To start the SWCC Agent installation, click on the Agent button (see Figure 36). 2. To view the documentation, click on the Documentation button. 3. To exit, click on the Exit button. Figure 36: GUI Installation
Figure 37: Agent “data entry window”
Installing and Configuring HSG Agent Steps for Entering Data 1. Enter the directory in which you plan to install. (/opt/steam is the default.) 2. Enter the Agent password. (It must be at least 6 characters.) 3. Confirm the password. 4. Enter the subsystem monitoring interval. (180 is the default.) 5. Enter an email address to record error notifications. (Your current account is the default.) 6. Enter the port number for the spagent. (4999 is the default.) 7. Enter the port number for the spgui.
Installing and Configuring HSG Agent ■ Notification via TCP/IP socket ■ Notification via SNMP protocol ■ Notification via TCP/IP socket SNMP 10. Select the error notification level. You are notified based on this level. (You must select one.) ■ Fatal errors ■ Warnings and fatal errors ■ Info warnings and fatal errors 11. Select the subsystem access that the Client has. (This is used to control the Client access into the subsystem. You must select one.
Figure 38: Console displays installation warning
Character-based (CLI) Installation Procedure Once you have executed the /cdrom/install.sh script, you are prompted with the following questions in order to set up the SWCC Agent. ■ Enter the directory in which you plan to install. (/opt/steam is the default.) ■ Enter the Agent password. (It must be at least 6 characters.) ■ Confirm the password. ■ Enter the subsystem monitoring interval. (180 is the default.
Installing and Configuring HSG Agent — Notification via TCP/IP socket — Notification via SNMP protocol — Notification via TCP/IP socket SNMP ■ Select the error notification level. You are notified based on this level. (You must select one.) — Fatal errors — Warnings and fatal errors — Info warnings and fatal errors ■ Select the subsystem access that the Client has. (This is used to control the Client access into the subsystem. You must select one.
The computer scans the I/O bus for Command Console LUNs. The following is an example of the information that may be shown:
========================
Scanning I/O Bus on nodes: hpk400a
Processing Host hpk400a
Node      HW Path              Driver  Device
--------  -------------------  ------  -------
hpk400a   10/8.8.0.255.3.15.0  sdisk   HSGCCL
hpk400a   10/8.8.0.255.4.1.0   sdisk   HSGCCL
Found 2 Command Console LUNs on host hpk400a.
9. Enter the user email address and press Enter. The following is displayed:
Enter the error notification level for this user. The user will be notified of errors at this level and above. The possible options are:
1 = Fatal Errors
2 = Warning and Fatal Errors
3 = Info, Warning and Fatal Errors
Enter Notification Level (1, 2, 3):
10. Select 3 and press Enter.
The computer displays the following:
RAID Array Configuration Menu
Agent Admin Options                 Storage Subsystem Options
 1) Change Agent Password           12) View Subsystems
 2) Change SNMP Enterprise OID      13) Add a Subsystem
 3) Start/Stop the Agent            14) Remove a Subsystem
 4) Toggle Agent Startup on Boot    15) Modify a Subsystem
 5) Uninstall Agent
Agent Notification Options          Client Options
 6) Toggle Error Log Notification   16) View Clients
 7) Toggle Mail Notification        17) Add a Client
 8) View
Note: You can change the configuration of the Agent at any time. For more information, see "Configuring the Agent for an Alternate Path", page 148. Installing and Configuring the Agent with MC/ServiceGuard All hosts in the MC/ServiceGuard cluster must be entered into each host's /.rhosts file, with an entry of the form host_name root for every host in the cluster. This enables access by the root account.
Installing and Configuring HSG Agent 6. Follow the on-screen prompts to complete installation. 7. After rebooting all necessary hosts, login as root and enter the following command: /installation_directory/steam/bin/config.sh The RAID Array Configuration menu is displayed. 8. Select option 13 and press Enter to add a subsystem. Notice SPT is now the driver on the HP-PB bus adapter. 9. Select option C to continue.
The following is displayed:
RAID Array Configuration Menu
Agent Admin Options                 Storage Subsystem Options
 1) Change Agent Password           12) View Subsystems
 2) Change SNMP Enterprise OID      13) Add a Subsystem
 3) Start/Stop the Agent            14) Remove a Subsystem
 4) Toggle Agent Startup on Boot    15) Modify a Subsystem
 5) Uninstall Agent
Agent Notification Options          Client Options
 6) Toggle Error Log Notification   16) View Clients
 7) Toggle Mail Notification        17) Add a Client
 8) View Mail Noti
Installing and Configuring HSG Agent Restarting the Agent When changes are made to the Agent configuration, it must be restarted for changes to take effect. 1. Select Start/Stop the Agent from the Agent Maintenance menu. 2. If the Agent is shown to be running in the RAID Array Configuration menu, select option Y to stop the Agent. Repeat the menu selection and select option Y to start the Agent. 146 HSG80 ACS Solution Software V8.
Installing and Configuring HSG Agent Running the Agent The Agent runs in the background as a daemon. The Agent was started when you installed it. Its default is to restart automatically. The installation script places two entries in the /etc/inittab file to implement automatic execution of the Agent. The tag fields in the file are stmd and ntfy. Note: You can stop and start the Agent by using the RAID Array Configuration menu; however, the Agent must be running in order to monitor subsystems.
Installing and Configuring HSG Agent Configuring the Agent for an Alternate Path 1. Go to the RAID Array Configuration menu by entering the following command: # /installation_directory/steam/bin/config.sh Note: For HP-UX, only one alternate path is supported in transparent failover mode. 2. Select option 15 to modify a subsystem. 3. Enter the position number of the subsystem to modify. 4. Select option 4, alternate device. 5. Enter whether you want to add or remove a device. 6.
Removing the Agent To remove the Agent from the host server:
1. Shut down the Agent.
a. Invoke the configuration menu by typing: ./opt/steam/bin/config.sh
b. Select option 3 (Start/Stop the Agent).
c. Confirm Agent shutdown by pressing Enter.
2. Select option 14 (Remove a subsystem).
3. Remove the steam entries in two files:
/etc/inittab
hsvg::once:/opt/steam/bin/hsg_vgmon >> 2>&1
stmd::respawn:/opt/steam/bin/steamd -S >> /var/adm/syslog/steamd.
At this point the only way to clear this without rebooting is to kill the process and repeat step 3. For example:
kill -9 1498
kill -9 1497
cd /opt
rm -r /opt/steam
FC Configuration Procedures 5 This chapter describes procedures to configure a subsystem that uses Fibre Channel (FC) fabric topology. In fabric topology, the controller connects to its hosts through switches.
Establishing a Local Connection A local connection is required to configure the controller until a command console LUN (CCL) is established using the CLI. Communication with the controller can be through the CLI or SWCC. The maintenance port, shown in Figure 40, provides a way to connect a maintenance terminal to the controller. The maintenance terminal can be an EIA-423 compatible terminal or a computer running a terminal emulator program. The maintenance port accepts a standard RS-232 jack.
FC Configuration Procedures Setting Up a Single Controller Powering On and Establishing Communication 1. Connect the computer or terminal to the controller, as shown in Figure 40. The connection to the computer is through the COM1 or COM2 port. 2. Turn on the computer or terminal. 3. Apply power to the storage subsystem. 4. Verify that the computer or terminal is configured as follows: — 9600 baud — 8 data bits — 1 stop bit — no parity — no flow control 5. Press Enter.
1 Controller
2 Host port 1
3 Host port 2
4 Cable from the switch to the host Fibre Channel adapter
5 FC switch
Figure 41: Single controller cabling
Configuring a Single Controller Using CLI Configuring a single controller using the CLI involves the following processes: ■ "Verifying the Node ID and Check for Any Previous Connections", page 154 ■ "Configuring Controller Settings", page 155 ■ "Restarting the Controller", page 156 ■ "Setting T
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> SHOW THIS Controller: HSG80 ZG80900583 Software V8.8, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter node ID and checksum, which are located on a sticker on the controller enclosure.
Note: If SCSI-2 is selected (SET THIS SCSI_VERSION=SCSI-2), you must disable the CCL using the command: SET THIS NOCOMMAND_CONSOLE_LUN 6. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax: SET THIS IDENTIFIER=N Identifier must be unique among all the controllers attached to the fabric within the specified allocation class.
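The identifier constraint above (a decimal number from 1 to 32767) can be checked before issuing SET THIS IDENTIFIER=N. A small sketch of such a validation, separate from the controller CLI itself:

```shell
# Check that a proposed CCL identifier is a decimal number in 1..32767.
valid_identifier() {
    case $1 in (*[!0-9]*|'') return 1 ;; esac
    [ "$1" -ge 1 ] && [ "$1" -le 32767 ]
}

valid_identifier 88 && echo "88 is usable"
valid_identifier 40000 || echo "40000 is out of range"
```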
FC Configuration Procedures 10. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax: SET THIS IDENTIFIER=N Identifier must be unique among all the controllers attached to the fabric within the specified allocation class. Setting Time and Verifying All Commands 1.
FC Configuration Procedures The following sample is a result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures ....... 5. Turn on the switches, if not done previously. If you want to communicate with the Fibre Channel switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details. Plugging in the FC Cable and Verifying Connections 6. Plug the Fibre Channel cable from the first host bus adapter into the switch. Enter the SHOW CONNECTIONS command to view the connection table: SHOW CONNECTIONS 7.
Setting Up a Controller Pair

The following procedures describe how to set up a controller pair.

Powering Up and Establishing Communication

1. Connect the computer or terminal to the controller as shown in Figure 40. The connection to the computer is through the COM1 or COM2 port.
2. Turn on the computer or terminal.
3. Apply power to the storage subsystem.
4. Configure the computer or terminal as follows:
   — 9600 baud
   — 8 data bits
   — 1 stop bit
   — no parity
   — no flow control
5. Press Enter.
Figure 42 shows a controller pair with failover cabling: one HBA per server, with the HSG80 controllers in transparent failover mode.
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> show this Controller: HSG80 ZG80900583 Software V8.8, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter the node ID and checksum, which are located on a sticker on the controller enclosure.
FC Configuration Procedures 6. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax: SET THIS IDENTIFIER=N Identifier must be unique among all the controllers attached to the fabric within the specified allocation class. 7. Set the topology for the controller.
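The identifier and topology steps above can be sketched as follows, assuming a switched fabric and an illustrative identifier of 88 (both values are hypothetical and must match your site configuration):

SET THIS IDENTIFIER=88
SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC
SET OTHER PORT_1_TOPOLOGY=FABRIC
SET OTHER PORT_2_TOPOLOGY=FABRIC

In a dual-redundant pair, set the topology on both controllers so that all four host ports log in to the fabric.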
FC Configuration Procedures When FRUTIL asks if you intend to replace the battery, answer Y: Do you intend to replace this controller's cache battery? Y/N [N] Y FRUTIL prints out a procedure, but does not give you a prompt. Ignore the procedure and press Enter. 12. Set up any additional optional controller settings, such as changing the CLI prompt.
FC Configuration Procedures 14. Verify node ID, allocation class, SCSI version, failover mode, identifier, and port topology. The following display is a sample result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures 15. Turn on the switches if not done previously. If you want to communicate with the FC switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details. Plugging in the FC Cable and Verifying Connections 16. Plug the FC cable from the first host adapter into the switch. Enter a SHOW CONNECTIONS command to view the connection table: SHOW CONNECTIONS The first connection has one or more entries in the connection table.
FC Configuration Procedures Verifying Installation To verify installation for your HP-UX host, enter the following command: SHOW DEVICES HSG80 ACS Solution Software V8.
Configuring Devices

The disks on the device bus of the HSG80 can be configured manually or with the CONFIG utility; the CONFIG utility is the easier method. Invoke CONFIG with the following command:

RUN CONFIG

WARNING: HP recommends that you use the CONFIG utility only at reduced I/O loads. The CONFIG utility takes about two minutes to discover and to map the configuration of a completely populated storage system.
FC Configuration Procedures Configuring Storage Containers For a technology refresher on this subject, refer to "Choosing a Container Type", page 78. In choosing a container, you choose between independent disks (JBODs) or one of several storageset types, as shown in Figure 43. The independent disks and the selected storageset may also be partitioned.
FC Configuration Procedures Configuring a Stripeset 1. Create the stripeset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains. Use the following syntax: ADD STRIPESET STRIPESET-NAME DISKNNNNN DISKNNNNN....... 2. Initialize the stripeset, specifying any desired switches: INITIALIZE STRIPESET-NAME SWITCHES See "Specifying Initialization Switches", page 95, for a description of the initialization switches. 3.
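Following the pattern of the mirrorset and RAIDset examples elsewhere in this chapter, a stripeset might be created as follows (the stripeset name, disk names, and unit number are illustrative only):

ADD STRIPESET STRIPE1 DISK10000 DISK20000 DISK30000
INITIALIZE STRIPE1
SHOW STRIPE1
ADD UNIT D100 STRIPE1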
FC Configuration Procedures 3. Verify the mirrorset configuration: SHOW MIRRORSET-NAME 4. Assign the mirrorset a unit number to make it accessible by the hosts. See "Assigning Unit Numbers and Unit Qualifiers", page 175. For example: The commands to create Mirr1, a mirrorset with two members (DISK10000 and DISK20000), and to initialize it using default switch settings: ADD MIRRORSET MIRR1 DISK10000 DISK20000 INITIALIZE MIRR1 SHOW MIRR1 Configuring a RAIDset 1.
4. Assign the RAIDset a unit number to make it accessible by the hosts. See "Assigning Unit Numbers and Unit Qualifiers", page 175. For example: The commands to create RAID1, a RAIDset with three members (DISK10000, DISK20000, and DISK30000), and to initialize it with default values:
ADD RAIDSET RAID1 DISK10000 DISK20000 DISK30000
INITIALIZE RAID1
SHOW RAID1
Configuring a Striped Mirrorset
1. Create, but do not initialize, at least two mirrorsets.
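For example, a striped mirrorset built from two two-member mirrorsets might be created as follows (the names, disks, and unit number are illustrative; note that only the stripeset, not the member mirrorsets, is initialized):

ADD MIRRORSET MIRR1 DISK10000 DISK20000
ADD MIRRORSET MIRR2 DISK30000 DISK40000
ADD STRIPESET STRIPE1 MIRR1 MIRR2
INITIALIZE STRIPE1
ADD UNIT D101 STRIPE1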
FC Configuration Procedures Configuring a Single-Disk Unit (JBOD) 1. Initialize the disk drive, specifying any desired switches: INITIALIZE DISK-NAME SWITCHES See "Specifying Initialization Switches", page 95, for a description of the initialization switches. 2. Verify the configuration by entering the following command: SHOW DISK-NAME 3. Assign the disk a unit number to make it accessible by the hosts. See "Assigning Unit Numbers and Unit Qualifiers", page 175. Configuring a Partition 1.
FC Configuration Procedures or SHOW DISK-NAME The partition number is displayed in the first column, followed by the size and starting block of each partition. 4. Assign the partition a unit number to make it accessible by the hosts. See "Assigning Unit Numbers and Unit Qualifiers", page 175. For example: The commands to create RAID1, a three-member RAIDset, then partition it into two storage units are shown below.
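The partitioning example referred to above can be sketched as follows, assuming two partitions of roughly half the container each (the CREATE_PARTITION SIZE value is a percentage of the container, and the unit numbers shown are illustrative):

ADD RAIDSET RAID1 DISK10000 DISK20000 DISK30000
INITIALIZE RAID1
CREATE_PARTITION RAID1 SIZE=50
CREATE_PARTITION RAID1 SIZE=LARGEST
SHOW RAID1
ADD UNIT D1 RAID1 PARTITION=1
ADD UNIT D2 RAID1 PARTITION=2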
FC Configuration Procedures Assigning Unit Numbers and Unit Qualifiers Each storageset, partition, or single (JBOD) disk must be assigned a unit number for the host to access. As the units are added, their properties can be specified through the use of command qualifiers, which are discussed in detail under the ADD UNIT command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface Reference Guide.
FC Configuration Procedures Preferring Units In multiple-bus failover mode, individual units can be preferred to a specific controller. For example, to prefer unit D102 to “this controller,” use the following command: SET D102 PREFERRED_PATH=THIS RESTART commands must be issued to both controllers for this command to take effect: RESTART OTHER_CONTROLLER RESTART THIS_CONTROLLER Note: The controllers need to restart together for the preferred settings to take effect.
Configuration Options

There are multiple options that allow you to configure your system.

Changing the CLI Prompt

To change the CLI prompt, enter a 1- to 16-character string as the new prompt, according to the following syntax:

SET THIS_CONTROLLER PROMPT = "NEW PROMPT"

If you are configuring dual-redundant controllers, also change the CLI prompt on the "other controller."
FC Configuration Procedures Note: This procedure assumes that the disks that you are adding to the spareset have already been added to the controller's list of known devices. To add the disk drive to the controller's spareset list, use the following syntax: ADD SPARESET DISKNNNNN Repeat this step for each disk drive you want to add to the spareset: For example: The following example shows the syntax for adding DISK11300 and DISK21300 to the spareset.
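Reconstructed from the syntax just given, the example for DISK11300 and DISK21300 might look like this (with SHOW SPARESET to confirm the result):

ADD SPARESET DISK11300
ADD SPARESET DISK21300
SHOW SPARESET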
FC Configuration Procedures To disable autospare, use the following command: SET FAILEDSET NOAUTOSPARE During initialization, AUTOSPARE checks to see if the new disk drive contains metadata. Metadata is information the controller writes on the disk drive when the disk drive is configured into a storageset. Therefore, the presence of metadata indicates that the disk drive belongs to, or has been used by, a storageset. If the disk drive contains metadata, initialization stops.
FC Configuration Procedures Displaying the Current Switches To display the current switches for a storageset or single-disk unit, enter a SHOW command, specifying the FULL switch: SHOW STORAGESET-NAME or SHOW DEVICE-NAME Note: FULL is not required when showing a particular device. It is used when showing all devices, for example, SHOW DEVICES FULL.
Verifying Storage Configuration from Host

This section describes how to verify that LUNs (virtual disk units) are being correctly presented to HP-UX. After configuring units (virtual disks) through either the CLI or SWCC, restart the host. If you are adding units dynamically, use the ioscan and insf -e commands to verify that the device special files and logical unit numbers are created and correctly assigned to each storageset.
Using CLI for Configuration 6 This chapter presents an example of how to configure a storage subsystem using the Command Line Interpreter (CLI). The CLI configuration example shown assumes: ■ A normal, new controller pair, which includes: — NODE ID set — No previous failover mode — No previous topology set ■ Full array with no expansion cabinet ■ PCMCIA cards installed in both controllers A storage subsystem example is shown in Figure 44.
Figure 44: Example storage subsystem map — shows, across device ports 1 through 6, the storagesets (stripeset S1, mirrorsets M1 through M3, RAIDsets R1 and R2), their units (D0, D1, D2, D101, D102, D120), the member disks (DISK10000 through DISK60200), and a spareset member (DISK60300).
Figure 45: Example — three hosts, "RED", "GREY", and "BLUE", each with two Fibre Channel adapters (FCA1 and FCA2), connected through two switches or hubs to host ports 1 and 2 of controllers A and B. Connections RED1A1, GREY1A1, BLUE1A1 and RED1B1, GREY1B1, BLUE1B1 use host port 1; connections RED2A2, GREY2A2, BLUE2A2 and RED2B2, GREY2B2, BLUE2B2 use host port 2. All units are visible to all ports; host port 1 and host port 2 are active on both controllers. (FCA = Fibre Channel Adapter)
Figure 46: Example — logical or virtual disks comprised of storagesets (units D0, D1, D2, D101, D102, and D120 presented to hosts "RED", "GREY", and "BLUE")

CLI Configuration Example

Text conventions used in this example are listed below:
■ Text in italics indicates an action you take.
■ Text in THIS FORMAT indicates a command you type. Be certain to press Enter after each command.
■ Text enclosed within a box indicates information that is displayed by the CLI interpreter.
Using CLI for Configuration Plug serial cable from maintenance terminal into top controller.
Using CLI for Configuration Note: This command causes the controllers to restart. SET THIS PROMPT=“BTVS BOTTOM” SET OTHER PROMPT=“BTVS TOP” SHOW THIS SHOW OTHER Plug in the Fibre Channel cable from the first adapter in host “RED.” SHOW CONNECTIONS RENAME !NEWCON00 RED1B1 SET RED1B1 OPERATING_SYSTEM=HP RENAME !NEWCON01 RED1A1 SET RED1A1 OPERATING_SYSTEM=HP SHOW CONNECTIONS Note: Connection table sorts alphabetically.
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
!NEWCON02        HP                THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
!NEWCON03        HP                OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1A1           HP                OTHER       1     XXXXXX   OL other  0
...
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
RED1A1           HP                OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1           HP                THIS        1     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2A2           HP                OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2B2           HP                THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
GREY1A1          HP                OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY1B1          HP                THIS        1     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY2A2          HP                OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY2B2          HP                THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE1A1          HP                OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE1B1          HP                THIS        1     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE2A2          HP                OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE2B2          HP                THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1A1           HP                OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1           HP                THIS        1     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2A2           HP                OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2B2           HP                THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Using CLI for Configuration SET CONNECTION BLUE1A1 UNIT_OFFSET=100 SET CONNECTION BLUE1B1 UNIT_OFFSET=100 SET CONNECTION BLUE2A2 UNIT_OFFSET=100 SET CONNECTION BLUE2B2 UNIT_OFFSET=100 RUN CONFIG ADD RAIDSET R1 DISK10000 DISK20000 DISK30000 DISK40000 DISK50000 DISK60000 INITIALIZE R1 ADD UNIT D102 R1 DISABLE_ACCESS_PATH=ALL SET D102 ENABLE_ACCESS_PATH=(RED1A1, RED1B1, RED2A2, RED2B2) ADD RAIDSET R2 DISK10100 DISK20100 DISK30100 DISK40100 DISK50100 DISK60100 INITIALIZE R2 ADD UNIT D120 R2 DISABLE_ACCESS_PATH
SHOW UNITS FULL
Backing Up, Cloning, and Moving Data 7 This chapter includes the following topics: ■ "Backing Up Subsystem Configurations", page 196 ■ "Creating Clones for Backup", page 197 ■ "Moving Storagesets", page 201 HSG80 ACS Solution Software V8.
Backing Up, Cloning, and Moving Data Backing Up Subsystem Configurations The controller stores information about the subsystem configuration in its nonvolatile memory. This information could be lost if the controller fails or when you replace a module in the subsystem. Use the following command to produce a display that shows if the save configuration feature is active and which devices are being used to store the configuration.
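The command itself is cut off in this guide; on HSG80 controllers the save-configuration state appears in the full controller display, so a sketch (verify against the CLI reference for your ACS release) is:

SHOW THIS_CONTROLLER FULL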
Backing Up, Cloning, and Moving Data Creating Clones for Backup Use the Clone utility to duplicate the data on any unpartitioned single-disk unit, stripeset, mirrorset, or striped mirrorset in preparation for backup. When the cloning operation is complete, you can back up the Clones rather than the storageset or single-disk unit, which can continue to service its I/O load. When you are cloning a mirrorset, Clone does not need to create a temporary mirrorset.
Backing Up, Cloning, and Moving Data To Clone a single-disk unit, stripeset, or mirrorset: 1. Establish a connection to the controller that accesses the unit you want to Clone. 2. Start Clone using the following command: RUN CLONE 3. When prompted, enter the unit number of the unit you want to Clone. 4. When prompted, enter a unit number for the Clone unit that Clone creates. 5.
The following example shows the commands you would use to Clone storage unit D98. The Clone command terminates after it creates storage unit D99, a Clone or copy of D98.

RUN CLONE
CLONE LOCAL PROGRAM INVOKED
UNITS AVAILABLE FOR CLONING: 98
ENTER UNIT TO CLONE? 98
CLONE WILL CREATE A NEW UNIT WHICH IS A COPY OF UNIT 98.
ENTER THE UNIT NUMBER WHICH YOU WANT ASSIGNED TO THE NEW UNIT? 99
THE NEW UNIT MAY BE ADDED USING ONE OF THE FOLLOWING METHODS:
1.
Backing Up, Cloning, and Moving Data . COPY FROM DISK10300 TO DISK20200 IS 100% COMPLETE COPY FROM DISK10000 TO DISK20300 IS 100% COMPLETE PRESS RETURN WHEN YOU WANT THE NEW UNIT TO BE CREATED REDUCE DISK20200 DISK20300 UNMIRROR DISK10300 UNMIRROR DISK10000 ADD MIRRORSET C_MA DISK20200 ADD MIRRORSET C_MB DISK20300 ADD STRIPESET C_ST1 C_MA C_MB INIT C_ST1 NODESTROY ADD UNIT D99 C_ST1 D99 HAS BEEN CREATED. IT IS A CLONE OF D98. CLONE - NORMAL TERMINATION 200 HSG80 ACS Solution Software V8.
Backing Up, Cloning, and Moving Data Moving Storagesets You can move a storageset from one subsystem to another without destroying its data. You also can follow the steps in this section to move a storageset to a new location within the same subsystem. Caution: Move only normal storagesets. Do not move storagesets that are reconstructing or reduced, or data corruption results. See the release notes for the version of your controller software for information on which drives can be supported.
Backing Up, Cloning, and Moving Data 5. Delete each disk drive, one at a time, that the storageset contained. Use the following syntax: DELETE DISK-NAME DELETE DISK-NAME DELETE DISK-NAME 6. Remove the disk drives and move them to their new PTL locations. 7. Again add each disk drive to the controller's list of valid devices. Use the following syntax: ADD DISK DISK-NAME PTL-LOCATION ADD DISK DISK-NAME PTL-LOCATION ADD DISK DISK-NAME PTL-LOCATION 8.
Backing Up, Cloning, and Moving Data New cabinet ADD DISK DISK10000 ADD DISK DISK10100 ADD DISK DISK20000 ADD DISK DISK20100 ADD RAIDSET RAID99 DISK10000 DISK10100 DISK20000 DISK20100 ADD UNIT D100 RAID99 HSG80 ACS Solution Software V8.
Subsystem Profile Templates A This appendix contains storageset profiles to copy and use to create your profiles. It also contains an enclosure template to use to help keep track of the location of devices and storagesets in your shelves. Four (4) templates are needed for the subsystem. Note: The storage map templates for the Model 4310R and Model 4214R or 4314R reflect the physical location of the disk enclosures in the rack.
Subsystem Profile Templates Storageset Profile Type of Storageset: _____ Mirrorset __X_ RAIDset _____ Stripeset _____ Striped Mirrorset ____ JBOD Storageset Name Disk Drives Unit Number Partitions: Unit # Unit # Unit # Unit # Unit # Unit # Unit # Unit # RAIDset Switches: Reconstruction Policy Reduced Membership Replacement Policy ___Normal (default) __ _No (default) ___Best performance (default) ___Fast ___Yes, missing: ___Best fit ___None Mirrorset Switches: Replacement Policy C
Subsystem Profile Templates Unit Switches: Caching Read caching__________ Read-ahead caching_____ Write-back caching______ Write-through caching____ Access by following hosts enabled __________________________________________________ __________ __________________________________________________ __________ __________________________________________________ __________ __________________________________________________ __________ HSG80 ACS Solution Software V8.
Storage Map Template 1 for the BA370 Enclosure

Use this template for:
■ BA370 single-enclosure subsystems
■ first enclosure of multiple BA370 enclosure subsystems

          Port 1   Port 2   Port 3   Port 4   Port 5   Port 6
Target 3  D10300   D20300   D30300   D40300   D50300   D60300
Target 2  D10200   D20200   D30200   D40200   D50200   D60200
Target 1  D10100   D20100   D30100   D40100   D50100   D60100
Target 0  D10000   D20000   D30000   D40000   D50000   D60000
Subsystem Profile Templates Storage Map Template 2 for the Second BA370 Enclosure Use this template for the second enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates Storage Map Template 3 for the Third BA370 Enclosure Use this template for the third enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates Storage Map Template 4 for the Model 4214R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4214R disk enclosure (single-bus). You can have up to six Model 4214R disk enclosures per controller shelf.
Model 4214R Disk Enclosure Shelf 3 (Single-bus)

Bay:     1  2  3  4  5  6  7  8  9  10 11 12 13 14
SCSI ID: 00 01 02 03 04 05 08 09 10 11 12 13 14 15
DISK ID: Disk30000 Disk30100 Disk30200 Disk30300 Disk30400 Disk30500 Disk30800 Disk30900 Disk31000 Disk31100 Disk31200 Disk31300 Disk31400 Disk31500
Subsystem Profile Templates Storage Map Template 5 for the Model 4254 Disk Enclosure Use this template for a subsystem with a three-shelf Model 4254 disk enclosure (dual-bus). You can have up to three Model 4254 disk enclosures per controller shelf.
continued from previous page

Model 4254 Disk Enclosure Shelf 3 (Dual-bus)

Bay:     1  2  3  4  5  6  7  (Bus A)  8  9  10 11 12 13 14  (Bus B)
SCSI ID: 00 01 02 03 04 05 08          00 01 02 03 04 05 08
DISK ID: Bus A — Disk50000 Disk50100 Disk50200 Disk50300 Disk50400 Disk50500 Disk50800; Bus B — Disk60000 Disk60100 Disk60200 Disk60300 Disk60400 Disk60500 Disk60800
Subsystem Profile Templates Storage Map Template 6 for the Model 4310R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4310R disk enclosure (single-bus). You can have up to six Model 4310R disk enclosures per controller shelf.
Model 4310R Disk Enclosure Shelf 4 (Single-bus)

Bay:     1  2  3  4  5  6  7  8  9  10
SCSI ID: 00 01 02 03 04 05 08 10 11 12
DISK ID: Disk40000 Disk40100 Disk40200 Disk40300 Disk40400 Disk40500 Disk40800 Disk41000 Disk41100 Disk41200

Model 4310R Disk Enclosure Shelf 1 (Single-bus)

Bay:     1  2  3  4  5  6  7  8  9  10
SCSI ID: 00 01 02 03 04 05 08 10 11 12
DISK ID: Disk10000 Disk10100 Disk10200 Disk10300 Disk10400 Disk10500 Disk10800 Disk11000 Disk11100 Disk11200
Model 4310R Disk Enclosure Shelf 3 (Single-bus)

Bay:     1  2  3  4  5  6  7  8  9  10
SCSI ID: 00 01 02 03 04 05 08 10 11 12
DISK ID: Disk30000 Disk30100 Disk30200 Disk30300 Disk30400 Disk30500 Disk30800 Disk31000 Disk31100 Disk31200
Subsystem Profile Templates Storage Map Template 7 for the Model 4350R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4350R disk enclosure (single-bus). You can have up to three Model 4350R disk enclosures per controller shelf.
Model 4350R Disk Enclosure Shelf 4 (Single-bus)

Bay:     1  2  3  4  5  6  7  8  9  10
SCSI ID: 00 01 02 03 04 05 08 10 11 12
DISK ID: Disk40000 Disk40100 Disk40200 Disk40300 Disk40400 Disk40500 Disk40800 Disk41000 Disk41100 Disk41200
Subsystem Profile Templates Storage Map Template 8 for the Model 4314R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4314R disk enclosure. You can have a maximum of six Model 4314R disk enclosures with each Model 2200 controller enclosure.
Model 4314R Disk Enclosure Shelf 4 (Single-bus)

Bay:     1  2  3  4  5  6  7  8  9  10 11 12 13 14
SCSI ID: 00 01 02 03 04 05 08 09 10 11 12 13 14 15
DISK ID: Disk40000 Disk40100 Disk40200 Disk40300 Disk40400 Disk40500 Disk40800 Disk40900 Disk41000 Disk41100 Disk41200 Disk41300 Disk41400 Disk41500

Model 4314R Disk Enclosure Shelf 1 (Single-bus)

Bay:     1  2  3  4  5  6  7  8  9  10 11 12 13 14
SCSI ID: 00 01 02 03 04 05 08 09 10 11 12 13 14 15
DISK ID: Disk10000 Disk10100 Disk10200 Disk10300 Disk10400 Disk10500 Disk10800 Disk10900 Disk11000 Disk11100 Disk11200 Disk11300 Disk11400 Disk11500
Model 4314R Disk Enclosure Shelf 3 (Single-bus)

Bay:     1  2  3  4  5  6  7  8  9  10 11 12 13 14
SCSI ID: 00 01 02 03 04 05 08 09 10 11 12 13 14 15
DISK ID: Disk30000 Disk30100 Disk30200 Disk30300 Disk30400 Disk30500 Disk30800 Disk30900 Disk31000 Disk31100 Disk31200 Disk31300 Disk31400 Disk31500
Subsystem Profile Templates Storage Map Template 9 for the Model 4354R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4354R disk enclosure (dual-bus). You can have up to three Model 4354R disk enclosures per controller shelf.
Model 4354R Disk Enclosure Shelf 3 (Dual-bus)

Bay:     1  2  3  4  5  6  7  (SCSI Bus A)  8  9  10 11 12 13 14  (SCSI Bus B)
SCSI ID: 00 01 02 03 04 05 08               00 01 02 03 04 05 08
DISK ID: Bus A — Disk50000 Disk50100 Disk50200 Disk50300 Disk50400 Disk50500 Disk50800; Bus B — Disk60000 Disk60100 Disk60200 Disk60300 Disk60400 Disk60500 Disk60800
Installing, Configuring, and Removing the Client B The following information is included in this appendix: ■ "Why Install the Client?", page 226 ■ "Before You Install the Client", page 227 ■ "Installing the Client", page 228 ■ "Installing the Integration Patch", page 229 ■ "Troubleshooting Client Installation", page 232 ■ "Adding Storage Subsystem and its Host to Navigation Tree", page 234 ■ "Removing Command Console Client", page 236 ■ "Where to Find Additional Information", page 238 HSG8
Installing, Configuring, and Removing the Client Why Install the Client? The Client monitors and manages a storage subsystem by performing the following tasks: 226 ■ Create mirrored device group (RAID 1) ■ Create striped device group (RAID 0) ■ Create striped mirrored device group (RAID 0+1) ■ Create striped parity device group (3/5) ■ Create an individual device (JBOD) ■ Monitor many subsystems at once ■ Set up pager notification HSG80 ACS Solution Software V8.
Installing, Configuring, and Removing the Client Before You Install the Client 1. Verify that you are logged into an account that is a member of the administrator group. 2. Check the software product description that came with the software for a list of supported hardware. 3. Verify that you have the SNMP service installed on the computer. SNMP must be installed on the computer for this software to work properly. The Client software uses SNMP to receive traps from the Agent.
Installing, Configuring, and Removing the Client Installing the Client The following restriction should be observed when installing SWCC on Windows NT 4.0 Workstations. If you select all of the applets during installation, the installation fails on the HSG60 applet and again on one of the HSG80 applets. The workaround is to install all of the applets you want except for the HSG60 applet and the HSG80 ACS 8.5 applet. You can then return to the setup program and install the one that you need. 1.
Installing, Configuring, and Removing the Client Installing the Integration Patch The integration patch determines which version of firmware the controller is using and launches the appropriate StorageWorks Command Console (SWCC) Storage Window within Insight Manager (CIM) V4.23. Should I Install the Integration Patch? Install this patch if your HSG80 controller uses ACS 8.6 or later. This patch enables you to use the controller’s SWCC Storage Window within CIM to monitor and manage the controller.
Installing, Configuring, and Removing the Client Caution: If you remove the integration patch, HSG80 Storage Window V2.1 no longer works and you need to reinstall HSG80 Storage Window V2.1. The integration patch uses some of the same files as the HSG80 Storage Window V2.1. Integrating Controller’s SWCC Storage Window with CIM You can open the controller’s Storage Window from within the Windows-based CIM V4.23 by doing the following: 1.
Installing, Configuring, and Removing the Client “Insight Manager Unable to Find Controller’s Storage Window” If you installed Insight Manager before SWCC, Insight Manager is unable to find the controller’s Storage Window. To find the controller’s Storage Window, perform the following procedure: 1. Double-click the Insight Agents icon (Start > Settings > Control Panel). A window is displayed showing you the active and inactive Agents under the Services tab. 2.
Installing, Configuring, and Removing the Client Troubleshooting Client Installation This section provides information on how to resolve some of the problems that may occur when installing the Client software: ■ ■ Invalid Network Port Assignments During Installation “There is no disk in the drive” Message Invalid Network Port Assignments During Installation SWCC Clients and Agents communicate by using sockets.
Installing, Configuring, and Removing the Client ccfabric 4989/tcp #Fibre Channel Interconnect Agent spagent 4999/tcp #HS-Series Client and Agent spagent3 4994/tcp #HSZ22 Client and Agent ccagent 4997/tcp #RA200 Client and Agent spagent2 4995/tcp #RA200 Client and Agent “There is no disk in the drive” Message When you install the Command Console Client, the software checks the shortcuts on the desktop and in the Start menu.
Installing, Configuring, and Removing the Client Adding Storage Subsystem and its Host to Navigation Tree The Navigation Tree enables you to manage storage over the network by using the Storage Window. If you plan to use pager notification, you must add the storage subsystem to the Navigation Tree. 1. Verify that you have properly installed and configured the HS-Series Agent on the storage subsystem host. 2. Click Start > Programs > Command Console > StorageWorks Command Console.
Installing, Configuring, and Removing the Client Figure 49: Navigation window showing storage host system “Atlanta” 6. Click the plus sign to expand the host icon. When expanded, the Navigation Window displays an icon for the storage subsystem. To access the Storage Window for the subsystem, double-click the Storage Window icon. Figure 50: Navigation window showing expanded “Atlanta” host icon HSG80 ACS Solution Software V8.
Installing, Configuring, and Removing the Client Note: You can create virtual disks by using the Storage Window. For more information on the Storage Window, refer to HP StorageWorks Command Console V2.5 User Guide. Removing Command Console Client Before you remove the Command Console Client from the computer, remove AES. This prevents the system from reporting that a service failed to start every time the system is restarted. Steps 2 through 5 describe how to remove the Command Console Client.
Installing, Configuring, and Removing the Client Note: This procedure removes only the Command Console Client (SWCC Navigation Window). You can remove the HSG80 Client by using the Add/Remove program. HSG80 ACS Solution Software V8.
Installing, Configuring, and Removing the Client Where to Find Additional Information You can find additional information about SWCC by referring to the online Help and to HP StorageWorks Command Console V2.5 User Guide. About the User Guide HP StorageWorks Command Console V2.5 User Guide contains additional information on how to use SWCC.
Glossary This glossary defines terms pertaining to the ACS solution software. It is not a comprehensive glossary of computer terms. 8B/10B A type of byte definition encoding and decoding to reduce errors in data transmission patented by the IBM Corporation. This process of encoding and decoding data for transmission has been adopted by ANSI. adapter A device that converts the protocol and hardware interface of one bus type into another without changing the function of the bus.
Glossary association set A group of remote copy sets that share selectable attributes for logging and failover. Members of an association set transition to the same state simultaneously. For example, if one association set member assumes the failsafe locked condition, then other members of the association set also assume the failsafe locked condition. An association set can also be used to share a log between a group of remote copy set members that require efficient use of the log space.
Glossary built-in self-test A diagnostic test performed by the array controller software on the controller policy processor. byte A binary character string made up of 8 bits operated on as a unit. cache memory A portion of memory used to accelerate read and write operations. cache module A fast storage buffer CCL CCL-Command Console LUN, a “SCSI Logical Unit Number” virtual-device used for communicating with Command Console Graphical User Interface (GUI) software.
controller
A hardware device that, with proprietary software, facilitates communications between a host and one or more devices organized in an array. The HSG80 family controllers are examples of array controllers.

copying
A state in which data to be copied to the mirrorset is inconsistent with other members of the mirrorset. See also normalizing.

copying member
Any member that joins the mirrorset after the mirrorset is created is regarded as a copying member.
DOC
DWZZA-On-a-Chip. A SCSI bus extender chip used to connect a SCSI bus in an expansion cabinet to the corresponding SCSI bus in another cabinet. See also DWZZA.

driver
A hardware device or a program that controls or regulates another device. For example, a device driver is a driver developed for a specific device that allows a computer to operate with the device, such as a printer or a disk drive.
ESD
Electrostatic discharge. The discharge of potentially harmful static electrical voltage as a result of improper grounding.

extended subsystem
A subsystem in which two cabinets are connected to the primary cabinet.

external cache battery
See ECB.

F_Port
A port in a fabric where an N_Port or NL_Port may attach.

fabric
A group of interconnections between ports that includes a fabric element.
FCC
Federal Communications Commission. The federal agency responsible for establishing standards and approving electronic devices within the United States.

FCC Class A
This certification label appears on electronic devices that can be used only in a commercial environment within the United States.

FCC Class B
This certification label appears on electronic devices that can be used in either a home or a commercial environment within the United States.
FRU
Field replaceable unit. A hardware component that can be replaced at the customer location by service personnel or qualified customer service personnel.

FRUTIL
Field Replacement utility.

full duplex (n)
A communications system capable of 2-way transmission and acceptance between two sites at the same time.

full duplex (adj)
Pertaining to a communications method in which data can be transmitted and received at the same time.
host compatibility mode
A setting used by the controller to provide optimal controller performance with specific operating systems. This setting improves the controller performance and compatibility with the specified operating system.

hot disks
Disks containing multiple hot spots. Hot disks occur when the workload is poorly distributed across storage devices, which prevents optimum subsystem performance. See also hot spots.

hot spots
A portion of a disk drive frequently accessed by the host.
interface
A set of protocols used between components, such as cables, connectors, and signal levels.

I/O
Refers to input and output functions.

I/O driver
The set of code in the kernel that handles the physical I/O to a device. This is implemented as a fork process. Same as driver.

I/O interface
See interface.

I/O module
A 16-bit SBB shelf device that integrates the SBB shelf with either an 8-bit single-ended, 16-bit single-ended, or 16-bit differential SCSI bus. See SBB.
logical unit
A physical or virtual device addressable through a target ID number. LUNs use their target bus connection to communicate on the SCSI bus.

logical unit number
LUN. A value that identifies a specific logical unit belonging to a SCSI target ID number. A number associated with a physical device unit during a task's I/O operations. Each task in the system must establish its own correspondence between logical unit numbers and physical devices.

logon
Also called login.
mirrored write-back caching
A method of caching data that maintains two copies of the cached data. The copy is available if either cache module fails.

mirrorset
See RAID level 1.

MIST
Module Integrity Self-Test.

multibus failover
Allows the host to control the failover process by moving the units from one controller to another.

N_Port
A port attached to a node for use with point-to-point topology or fabric topology.

NL_Port
A port attached to a node for use in all topologies.
normalizing
A state in which, block-for-block, data written by the host to a mirrorset member is consistent with the data on other normal and normalizing members. The normalizing state exists only after a mirrorset is initialized; therefore, no customer data is on the mirrorset.

normalizing member
A mirrorset member whose contents are the same as all other normal and normalizing members for data that has been written since the mirrorset was created or lost cache data was cleared.
partition
A logical division of a container, represented to the host as a logical unit.

PCMCIA
Personal Computer Memory Card Industry Association. An international association formed to promote a common standard for PC card-based peripherals to be plugged into notebook computers. The card commonly known as a PCMCIA card is about the size of a credit card.

PDU
Power distribution unit. The power entry device for HP StorageWorks cabinets.
private NL_Port
An NL_Port that does not attempt login with the fabric and communicates only with NL_Ports on the same loop.

program card
The PCMCIA card containing the controller operating software.

protocol
The conventions or rules for the format and timing of messages sent and received.

PTL
Port-Target-LUN. The controller method of locating a device on the controller device bus.

PVA module
Power Verification and Addressing module.
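As a purely illustrative aside (this is not controller code), the Port-Target-LUN addressing defined above can be modeled as a three-field value. The helper names and the sample address below are hypothetical:

```python
from collections import namedtuple

# Hypothetical model of a Port-Target-LUN (PTL) device address.
# The controller locates a device by port, target, and LUN; this
# sketch only illustrates the three-level shape of that address.
PTL = namedtuple("PTL", ["port", "target", "lun"])

def parse_ptl(text):
    """Parse a 'port target lun' string such as '1 2 0' into a PTL."""
    port, target, lun = (int(field) for field in text.split())
    return PTL(port, target, lun)

addr = parse_ptl("1 2 0")
print(addr)  # PTL(port=1, target=2, lun=0)
```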
RAID level 3/5
A RAID storageset that stripes data and parity across three or more members in a disk array. A RAIDset combines the best characteristics of RAID level 3 and RAID level 5. A RAIDset is the best choice for most applications with small to medium I/O requests, unless the application is write intensive. A RAIDset is sometimes called parity RAID.

RAIDset
See RAID level 3/5.

RAM
Random access memory.
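The parity protection that a RAIDset provides rests on exclusive-OR arithmetic: the parity block is the XOR of the corresponding data blocks, so any single missing member can be rebuilt from the survivors. The sketch below is illustrative only (the HSG80 performs this in controller firmware), and the block values are made up:

```python
def xor_blocks(blocks):
    """Return the bytewise XOR of equal-length data blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, value in enumerate(block):
            result[i] ^= value
    return bytes(result)

# Three data members plus one parity member (hypothetical 4-byte blocks).
data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
parity = xor_blocks(data)

# Lose member 1, then rebuild it from the remaining members and parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```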
remote copy set
A bound set of two units, one located locally and one located remotely, for long-distance mirroring. The units can be a single disk, or a storageset, mirrorset, or RAIDset. A unit on the local controller is designated as the “initiator” and a corresponding unit on the remote controller is designated as the “target”.

request rate
The rate at which requests arrive at a servicing entity.

RFI
Radio frequency interference.
SCSI ID number
The representation of the SCSI address that refers to one of the signal lines numbered 0 through 15.

SCSI-P cable
A 68-conductor (34 twisted-pair) cable generally used for differential bus connections.

SCSI port
(1) Software: The channel controlling communications to and from a specific SCSI bus in the system. (2) Hardware: The name of the logical socket at the back of the system unit to which a SCSI device is connected.
StorageWorks
A family of modular data storage products that allow customers to design and configure their own storage subsystems. Components include power, packaging, cabling, devices, controllers, and software. Customers can integrate devices, SBBs, and array controllers in HP StorageWorks enclosures to form storage subsystems.
tape inline exerciser (TILX)
The controller diagnostic software to test the data transfer capabilities of tape drives in a way that simulates a high level of user activity.

topology
An interconnection scheme that allows multiple Fibre Channel ports to communicate with each other. For example, point-to-point, Arbitrated Loop, and switched fabric are all Fibre Channel topologies.
warm swap
A device replacement method that allows the complete system to remain online during device removal or insertion. The system bus may be halted, or quiesced, for a brief period of time during the warm-swap procedure.

Wide Ultra SCSI
Fast/20 on a Wide SCSI bus.

worldwide name
A unique 64-bit number assigned to a subsystem by the Institute of Electrical and Electronics Engineers (IEEE) and set by manufacturing prior to shipping. This name is referred to as the node ID within the CLI.
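Because a worldwide name is a 64-bit value, tools commonly display it as four 4-digit hexadecimal groups. The formatting helper below is a hypothetical illustration, not an HP utility, and the sample value is likewise invented:

```python
def format_wwn(value):
    """Render a 64-bit worldwide name as four 4-digit hex groups."""
    if not 0 <= value < 1 << 64:
        raise ValueError("worldwide names are 64-bit values")
    digits = f"{value:016X}"  # zero-padded to 16 hex digits
    return "-".join(digits[i:i + 4] for i in range(0, 16, 4))

print(format_wwn(0x50001FE100000000))  # 5000-1FE1-0000-0000
```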
Index

A
ADD CONNECTIONS
  multiple-bus failover, 50
  transparent failover, 48
ADD UNIT
  multiple-bus failover, 50
  transparent failover, 48
adding
  virtual disks, 238
adding a disk drive to the spareset
  configuration options, 177
adding disk drives
  configuration options, 177
Agent
  functions, 127
array of disk drives, 79
assigning unit numbers, 48
assignment
  unit numbers, fabric topology, 175
  unit qualifiers, fabric topology

B
Back up, Clone, Move Data, 195
backup
  cloning data, 197
  subsystem configuration, 196

C
CHUNKSIZE, 95
CLI commands
  installation verification, 159, 167
CLI configuration example, 186
CLI configurations, 183
CLI prompt
  changing, fabric topology, 177
Client
  removing, 236
  uninstalling, 236
CLONE utility
  backup, 197
cloning
  backup, 197
command console LUN, 41
  SCSI-2 mode, 51
  SCSI-3 mode, 51
comparison of container types, 79
configuration
  backup, 196
  fabric topology, devices, 168, 169
  multiple-bus failover, cabling, 160
  multiple-bus failover using CLI, 186
  single controller, cabling, 153
  HP-UX file system, 115

D
device switches
  changing, fabric topology, 180
devices
  changing switches, fabric topology, 179
  configuration, fabric topology, 168, 169
  creating a profile, 80
disk drives
  adding, fabric topology, 177
  adding to the spareset, fabric topology, 177
  array, 79
  corresponding storagesets, 100
  dividing, 90
  removing from the spareset, fabric topology, 178
displaying the current switches
  fabric topology, 180
dividing storagesets, 90
document
  conventions, 27
  prerequisites, 14
  related documentation, 14

E
enabling switches, 92
equipment symbols, 19

H
  install and configure, 125
  network connection, 130
  overview, 129
  remove agent, 149
  running agent, 147

I
initialize switches
  changing, fabric topology, 180
  CHUNKSIZE, 95
  geometry, 98
  NOSAVE_CONFIGURATION, 97
  SAVE_CONFIGURATION, 97
Insight Manager, 238
Install warnings
  GUI console display, 137
installation
  controller verification, 159, 167
  invalid network port assignments, 232
  there is no disk in the drive message, 233
installation verification
  CLI commands, 159, 167
integrating SWCC, 238
invalid network port assignments

N
NOSAVE_CONFIGURATION

O
offset
  LUN presentation, 49
  restricting host access, multiple-bus failover, 59
  restricting host access, transparent failover, 57
  SCSI version factor, 50
online help
  SWCC, 238
options
  for mirrorsets, 93
  for RAIDsets, 93
  initialize, 95
other controller, 31
  enabling switches, 92
  initialization switch, 92
  storageset switch, 92
  unit switch, 92
  switches, initialization, 95
  switches, storageset, 93

P
preferring units
  multiple-bus failover, fabric topology, 176
prerequisites, 14
profiles
  creating, 80
  description, 80
  storageset, 205
  storageset example, 206

R
restricting host access
  disabling access paths, multiple-bus failover, 57
  disabling access paths, transparent failover, 55
  multiple-bus failover, 57
  separate links, transparent failover, 54
  transparent failover, 53

S
SAVE_CONFIGURATION, 97
saving configuration, 97
SCSI version
  offset, 50
SCSI-2
  assigning unit numbers, 51
  command console LUN, 51
SCSI-3
  assigning unit numbers, 51
  command console LUN, 51
Second enclosure of multiple-enclosure subsystem
  storage map template 2, 209
selective storage presentation, 53
SET CONNECTIONS
  multiple-bus failover
storageset
  profile, 80
storageset switches
  SET command, 93
storagesets
  creating a profile, 80
  moving, 201
striped mirrorsets
  planning, 89
  planning considerations, 88
stripesets
  distributing members across buses, 84
  planning, 83
  planning considerations, 82
  important points, 83
subsystem
  saving configuration, 97
subsystem configuration
  backup, 196
SWCC, 127
  additional information, 238
  integrating, 238
  online help, 238
switches
  changing, 92
  changing characteristics, 92
  CHUNKSIZE, 95
  enabling, 92
  mirrorsets, 93
  NOSAVE_CONFIGURATION

V
verification
  controller installation, 159, 167
verification of installation
  controller, 159, 167
virtual disks
  adding, 238

W
warning
  rack stability, 21
  symbols on equipment, 19
web sites
  HP storage, 22
where to start, 29
worldwide names, 60
  NODE_ID, 60
  REPORTED PORT_ID, 60
  restoring, 61
write performance, 97
write requests
  improving the subsystem response time with write-back caching, 39
  placing data with write-through caching, 40
write-back caching
  general description, 39
write-through caching
  general description