TruCluster Server
Cluster Hardware Configuration

Part Number: AA-RHGWD-TE
June 2001

Product Version: TruCluster Server Version 5.1A
Operating System and Version: Tru64 UNIX Version 5.1A

This manual describes how to configure the hardware for a TruCluster Server environment. TruCluster Server Version 5.1A runs on the Tru64 UNIX operating system.
© 2001 Compaq Computer Corporation

Compaq, the Compaq logo, AlphaServer, StorageWorks, and TruCluster are registered in the U.S. Patent and Trademark Office. Alpha, OpenVMS, and Tru64 are trademarks of Compaq Information Technologies Group, L.P. in the United States and other countries. Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States and other countries. UNIX and The Open Group are trademarks of The Open Group in the United States and other countries.
Contents

About This Manual

1   Introduction
2   Hardware Requirements and Restrictions
3   Shared SCSI Bus Requirements and Configurations Using UltraSCSI Hardware
4   TruCluster Server System Configuration Using UltraSCSI Hardware
5   (Memory Channel interconnect: upgrading Memory Channel adapters; upgrading a virtual hub configuration to a standard hub configuration)
6   Using Fibre Channel Storage
7   Using GS80, GS160, or GS320 Hard Partitions in a TruCluster Server Configuration
8   (Shared SCSI bus configurations for the TL890 DLT MiniLibrary expansion unit, TL881/TL891 MiniLibraries, and the ESL9326D Enterprise Library)
9   (Preparing KZPBA-CB and KZPSA-BB host bus adapters for shared SCSI bus use)
10  Configurations Using External Termination or Radial Connections to Non-UltraSCSI Devices

[Lists of figures and tables are not reproduced here; figure and table captions appear in the body of the manual.]
About This Manual This manual describes how to set up and maintain the hardware configuration for a TruCluster™ Server cluster. Audience This manual is for system administrators who will set up and configure the hardware before installing the TruCluster Server software. The manual assumes that you are familiar with the tools and methods that are needed to maintain your hardware, operating system, and network. New and Changed Features The following changes have been made to this manual since the Version 5.
Chapter 3   Contains information about setting up a shared SCSI bus, SCSI bus requirements, and how to connect storage to a shared SCSI bus using the latest UltraSCSI products (DS-DWZZH UltraSCSI hubs, and HSZ70 and HSZ80 RAID array controllers).

Chapter 4   Describes how to prepare systems for a TruCluster Server configuration, and how to connect host bus adapters to shared storage using the DS-DWZZH UltraSCSI hubs and the HSZ70 and HSZ80 RAID array controllers.
• Cluster Administration — Describes cluster-specific administration tasks. • Cluster Highly Available Applications — Describes how to deploy applications on a TruCluster Server cluster and how to write cluster-aware applications. • Cluster LAN Interconnect — Describes how to install and configure LAN hardware for the cluster interconnect. You can find the latest version of the TruCluster Server documentation at the following URL: http://www.tru64unix.compaq.com/docs/pub_page/cluster_list.
• MA6000 HSG60 Array Controller ACS Version 8.5 Solution Software for Compaq Tru64 UNIX Installation and Configuration Guide

• Compaq StorageWorks HSG60/HSG80 Array Controller ACS Version 8.5 Maintenance and Service Guide

• Compaq StorageWorks Release Notes RA8000/ESA12000 and MA8000/EMA12000 Solution Software V8.
• Compaq StorageWorks Heterogeneous Open SAN Design Reference Guide • Fibre Channel Storage Hub 7 Installation Guide • Fibre Channel Storage Hub 7 Rack Mounting Installation Card For information about the tape devices, see the following manuals: • TZ88 DLT Series Tape Drive Owner’s Manual • TZ89 DLT Series Tape Drive User’s Guide • TZ885 Model 100/200 GB DLT 5-Cartridge MiniLibrary Owner’s Manual • TZ887 Model 140/280 GB DLT 7-Cartridge MiniLibrary Owner’s Manual • TL881 MiniLibrary System Us
– ESL9326 Tape Drive Upgrade Guide The Golden Eggs Visual Configuration Guide provides configuration diagrams of workstations, servers, storage components, and clustered systems. It is available on line in PostScript and Portable Document Format (PDF) formats at: http://www.compaq.com/info/golden-eggs At this URL you will find links to individual system, storage, or cluster configurations. You can order the document through the Compaq Literature Order System (LOS) as order number EC-R026B-36.
P   Manuals for programmers
R   Manuals for reference page users

Some manuals in the documentation help meet the needs of several audiences. For example, the information in some system manuals is also used by programmers. Keep this in mind when searching for information on specific topics. The Documentation Overview provides information on all of the manuals in the Tru64 UNIX documentation set.

Reader’s Comments

Compaq welcomes any comments and suggestions you have on this and other Tru64 UNIX manuals.
Conventions

The following typographical conventions are used in this manual:

#         A number sign represents the superuser prompt.

% cat     Boldface type in interactive examples indicates typed user input.

file      Italic (slanted) type indicates variable values, placeholders, and function argument names.

.
.
.         A vertical ellipsis indicates that a portion of an example that would normally be present is not shown.
1 Introduction This chapter introduces the TruCluster Server product and some basic cluster hardware configuration concepts. The chapter discusses the following topics: • An overview of the TruCluster Server product (Section 1.1) • TruCluster Server memory requirements (Section 1.2) • TruCluster Server minimum disk requirements (Section 1.3) • A description of a generic two-node cluster with the minimum disk layout (Section 1.
executing in a cluster. They can access their disk data from any member in the cluster. • Like the TruCluster Production Server Software product, TruCluster Server lets you run components of distributed applications in parallel, providing high availability while taking advantage of cluster-specific synchronization mechanisms and performance optimizations.
• Optionally, one disk on a shared SCSI bus to act as the quorum disk (see Section 1.3.1.4). For a more detailed discussion of the quorum disk, see the Cluster Administration manual. The following sections provide more information about these disks. Figure 1–1 shows a generic two-member cluster with the required file systems. 1.3.1.
file system cannot also be used as the member boot disk or as the quorum disk. 1.3.1.3 Member Boot Disk Each member has a boot disk. A boot disk contains that member’s boot, swap, and cluster-status partitions.
• A quorum disk can have either 1 vote or no votes. In general, a quorum disk should always be assigned a vote. You might assign an existing quorum disk no votes in certain testing or transitory configurations, such as a one-member cluster (in which a voting quorum disk introduces a single point of failure). • You cannot use the Logical Storage Manager (LSM) on the quorum disk. 1.4 Generic Two-Node Cluster This section describes a generic two-node cluster with the minimum disk layout of four disks.
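The vote counting behind these quorum disk recommendations can be sketched in a few lines. This is an illustrative sketch only: the function names are hypothetical, and the formula (quorum votes equal the integer part of (expected votes + 2)/2) follows the description in the Cluster Administration manual.

```python
def quorum_votes(expected_votes: int) -> int:
    # TruCluster computes the votes needed for quorum as
    # trunc((expected_votes + 2) / 2).
    return (expected_votes + 2) // 2

def cluster_has_quorum(current_votes: int, expected_votes: int) -> bool:
    return current_votes >= quorum_votes(expected_votes)

# Two members (1 vote each) plus a quorum disk (1 vote): expected = 3.
print(quorum_votes(3))             # 2
# One member fails; the survivor plus the quorum disk still hold 2 votes.
print(cluster_has_quorum(2, 3))    # True
# Without a quorum disk, expected = 2 and a lone survivor (1 vote) loses quorum.
print(cluster_has_quorum(1, 2))    # False
```

The last case shows why a two-node cluster benefits from a voting quorum disk: it lets a single surviving member retain quorum.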
Figure 1–1: Two-Node Cluster with Minimum Disk Configuration and No Quorum Disk

[Figure: Two member systems, connected by a Memory Channel interconnect and by PCI SCSI adapters on a shared SCSI bus. The Tru64 UNIX disk is private to member system 1. The shared bus holds the cluster file system disk (root (/), /usr, /var) and each member's boot disk with root (/) and swap partitions.]

Figure 1–2 shows the same generic two-node cluster as shown in Figure 1–1, but with the addition of a quorum disk.
Figure 1–2: Generic Two-Node Cluster with Minimum Disk Configuration and Quorum Disk

[Figure: The same configuration as Figure 1–1, with a quorum disk added on the shared SCSI bus alongside the cluster file system disk and the two member boot disks.]
• Using a redundant array of independent disks (RAID) array controller in transparent failover mode allows the use of hardware RAID to mirror the disks. However, without a second SCSI bus, second Memory Channel, and redundant networks, this configuration is still not an NSPOF cluster (Section 1.5.4). • By using an HSZ70, HSZ80, or HSG80 with multiple-bus failover enabled, you can use two shared SCSI buses to access the storage.
Figure 1–3: Minimum Two-Node Cluster with UltraSCSI BA356 Storage Unit

[Figure: Two member systems, each with a Memory Channel interface and a host bus adapter (SCSI IDs 6 and 7), cabled through a DS-BA35X-DA personality module to a shared UltraSCSI BA356. The shelf holds the clusterwide /, /usr, and /var disk (ID 0), the member 1 and member 2 boot disks (IDs 1 and 2), the quorum disk (ID 3), and clusterwide data disks (IDs 4 and 5). The ID 6 slot is not used for a data disk. The Tru64 UNIX disk is private to member system 1.]
This slot can be used for a second power supply to provide fully redundant power to the storage shelf. With the use of the cluster file system (see the Cluster Administration manual for a discussion of the cluster file system), the clusterwide root (/), /usr, and /var file systems can be physically placed on a private bus of either of the member systems. However, if that member system is not available, the other member systems do not have access to the clusterwide file systems.
1.5.2 Two-Node Clusters Using UltraSCSI BA356 Storage Units with Increased Disk Configurations The configuration shown in Figure 1–3 is a minimal configuration, with a lack of disk space for highly available applications. Starting with Tru64 UNIX Version 5.0, 16 devices are supported on a SCSI bus. Therefore, multiple BA356 storage units can be used on the same SCSI bus to allow more devices on the same bus.
Figure 1–4: Two-Node Cluster with Two UltraSCSI DS-BA356 Storage Units

[Figure: The configuration of Figure 1–3 with a second UltraSCSI BA356 on the shared SCSI bus to hold additional data disks. In each shelf, the slot that is not used for a data disk may be used for a redundant power supply.]
______________________ Note _______________________ You cannot use LSM to mirror the member system boot or quorum disks, but you can use hardware RAID. Figure 1–5 shows a small cluster configuration with dual SCSI buses using LSM to mirror the clusterwide root (/), /usr, and /var file systems and the data disks.
use LSM to mirror the quorum or the member system boot disks, we do not have a no-single-point-of-failure (NSPOF) cluster. 1.5.4 Using Hardware RAID to Mirror the Quorum and Member System Boot Disks You can use hardware RAID with any of the supported RAID array controllers to mirror the quorum and member system boot disks. Figure 1–6 shows a cluster configuration using an HSZ70 RAID array controller.
devices. Either controller can continue to service all of the units if the other controller fails. ______________________ Note _______________________ The assignment of HSZ target IDs can be balanced between the controllers to provide better system performance. See the RAID array controller documentation for information on setting up storagesets. In the configuration shown in Figure 1–6, there is only one shared SCSI bus.
____________________ Notes ____________________ Only the HSZ70, HSZ80, HSG60, and HSG80 are capable of supporting multiple-bus failover (SET MULTIBUS_FAILOVER COPY = THIS_CONTROLLER). Partitioned storagesets and partitioned single-disk units cannot function in multiple-bus failover dual-redundant configurations with the HSZ70 or HSZ80. You must delete any partitions before configuring the controllers for multiple-bus failover.
Figure 1–8: NSPOF Fibre Channel Cluster Using HSG80s in Multiple-Bus Failover Mode

[Figure: Two member systems, each with two Memory Channel adapters and two KGPSA adapters. Each KGPSA connects to one of two Fibre Channel switches, and each switch connects to Port 1 or Port 2 of both HSG80 controllers (A and B) in an RA8000/ESA12000 storage array.]

If you are using LSM and multiple shared SCSI buses with storage shelves, you need to: • Mirror the clusterwide r
Figure 1–9 shows a two-member cluster configuration with three shared SCSI buses. The clusterwide root (/), /usr, and /var file systems are mirrored across the first two shared SCSI buses. The boot disk for member system one is on the first shared SCSI bus. The boot disk for member system two is on the second shared SCSI bus. The quorum disk is on the third shared SCSI bus. You can lose one system, or any one shared SCSI bus, and still maintain a cluster.
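The mirror-across-buses rule that makes this layout NSPOF can be checked mechanically. The following sketch is illustrative only; the volume-to-disk-to-bus inventory and all names are hypothetical, not output of any LSM command.

```python
# Hypothetical inventory: each LSM mirror (plex) of a volume resides on a
# disk, and each disk lives on one shared SCSI bus.
volumes = {
    "cluster_root": [("dsk1", "bus0"), ("dsk5", "bus1")],
    "cluster_usr":  [("dsk2", "bus0"), ("dsk6", "bus1")],
    "cluster_var":  [("dsk3", "bus0"), ("dsk7", "bus1")],
}

def mirrored_across_buses(plexes):
    # NSPOF requires every plex of a volume to be on a different shared bus,
    # so the loss of any one bus leaves at least one intact copy.
    buses = {bus for _disk, bus in plexes}
    return len(buses) == len(plexes)

print(all(mirrored_across_buses(p) for p in volumes.values()))  # True
```

A volume with both plexes on the same bus would fail this check, which is exactly the single point of failure the text warns against.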
Figure 1–9: NSPOF Cluster Using LSM and UltraSCSI BA356s

[Figure: Two member systems, each with two Memory Channel adapters and three host bus adapters (SCSI IDs 6 and 7), connected to three shared SCSI buses. The first two buses each hold UltraSCSI BA356 shelves containing the mirrored clusterwide /, /usr, and /var disks, a member boot disk, and mirrored data disks; the third bus holds the quorum disk.]
1.6 Eight-Member Clusters TruCluster Server Version 5.1A supports eight-member cluster configurations as follows: • Fibre Channel: Eight-member systems may be connected to common storage over Fibre Channel in a fabric (switch) configuration. • Parallel SCSI: Only four of the member systems may be connected to any one SCSI bus, but you can have multiple SCSI buses connected to different sets of nodes, and the sets of nodes may overlap.
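The membership limits just described (at most eight members in total, at most four members on any one parallel SCSI bus, with overlapping sets of nodes allowed) can be sketched as a simple validity check. The bus-to-member map below is hypothetical, for illustration only.

```python
# Hypothetical membership map: which members attach to each shared SCSI bus.
bus_members = {
    "bus0": ["m1", "m2", "m3", "m4"],
    "bus1": ["m3", "m4", "m5", "m6"],   # sets of nodes may overlap
    "bus2": ["m5", "m6", "m7", "m8"],
}

MAX_MEMBERS_PER_SCSI_BUS = 4   # parallel SCSI limit per bus
MAX_CLUSTER_MEMBERS = 8        # TruCluster Server Version 5.1A limit

members = {m for ms in bus_members.values() for m in ms}
ok = (len(members) <= MAX_CLUSTER_MEMBERS and
      all(len(ms) <= MAX_MEMBERS_PER_SCSI_BUS for ms in bus_members.values()))
print(ok)  # True
```

This layout reaches eight members even though no single SCSI bus connects more than four of them.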
For a Fibre Channel configuration, connect the HSG60 or HSG80 controllers to the switches. You want the HSG60 or HSG80 to recognize the connections to the systems when the systems are powered on.

8. Prepare the member systems by installing:

   • Additional Ethernet or Asynchronous Transfer Mode (ATM) network adapters for client networks.

   • SCSI bus adapters. Ensure that adapter terminators are set correctly.

9. Connect the systems to the shared SCSI bus. (See Chapter 4 or Chapter 9.
2 Hardware Requirements and Restrictions This chapter describes the hardware requirements and restrictions for a TruCluster Server cluster. It includes lists of supported cables, trilink connectors, Y cables, and terminators. The chapter discusses the following topics: • Requirements for member systems in a TruCluster Server cluster (Section 2.1) • Memory Channel requirements (Section 2.2) • Host bus adapter restrictions (including KGPSA, KZPSA-BB, and KZPBA-CB) (Section 2.
behind an HSZ80, HSG60, or HSG80 controller. If the cluster member is using earlier firmware, the member may fail to boot, indicating "Reservation Conflict" errors. • TruCluster Server Version 5.1A supports eight-member cluster configurations as follows: – Fibre Channel: Eight-member systems may be connected to common storage over Fibre Channel in a fabric (switch) configuration.
Figure 2–1: PCI Backplane Slot Layout

[Figure: Slot numbering for PCI buses 0 and 1 on I/O risers 0 and 1: riser 1 slots 1-7 through 1-1 (with riser slot 1-R), and riser 0 slots 0-7 through 0-0/1 (with riser slot 0-R).]

• TruCluster Server does not support the XMI CIXCD on an AlphaServer 8x00, GS60, GS60E, or GS140 system.

2.2 Memory Channel Restrictions

The Memory Channel interconnect is one method used for cluster communications between the member systems.
When the Memory Channel is set up in standard hub mode, the Memory Channel hub must be visible to each member’s Memory Channel adapter. If the hub is powered off, no system is able to boot. A two-node cluster configured in virtual hub mode does not have these problems. In virtual hub mode, each system is always connected to the virtual hub. A loss of communication over the Memory Channel causes both members (if both members are still up) to attempt to obtain ownership of the quorum disk.
• The maximum length of an MC2 BN39B link cable is 10 meters (32.8 feet). • In an MC2 configuration, you can use a CCMFB optical converter in conjunction with the MC2 CCMAB host bus adapter or a CCMLB hub line card to increase the distance between systems. – The BN34R fiber-optic cable, which is used to connect two CCMFB optical converters, is available in 10-meter (32.8-foot) (BN34R-10) and 31-meter (101.7-foot) (BN34R-31) lengths.
Order an H3095-AA module to upgrade an AlphaServer 2000 or an H3096-AA module to upgrade an AlphaServer 2100 to support Memory Channel. • For AlphaServer 2100A systems, the Memory Channel adapter must be installed in PCI 4 through PCI 7 (slots 6, 7, 8, and 9), which are the bottom four PCI slots. 2.3 Host Bus Adapter Restrictions To connect a member system to a shared SCSI bus, you must install a host bus adapter in an I/O bus slot. The Tru64 UNIX operating system supports a maximum of 64 I/O buses.
Table 2–1: AlphaServer Systems Supported for Fibre Channel (cont.)

AlphaServer System                       Number of Adapters          Number of Adapters
                                         Supported in Fabric         Supported in Loop
                                         Topology                    Topology
AlphaServer GS60, GS60E, and GS140(b)    63(c), 32(d)                —
AlphaServer GS80, GS160, and GS320(e)    62                          —

(a) The arbitrated loop topology requires the KGPSA-CA adapter with V3.03 (or later) firmware and Version 5.8 or later of the SRM console.
Gigabit Interface Converter (GBIC) transceiver-based port connections for maximum application flexibility. The hub is hot pluggable and is unmanaged. • Only single-hub arbitrated loop configurations are supported; that is, there are no cascaded hubs on any SCSI bus. • The only Fibre Channel switches supported are the DS-DSGGA-AA/AB 8/16 port, DS-DSGGB-AA/AB 8/16 port, or DS-DSGGC-AA/AB 8/16 port Fibre Channel switches.
• A storage array with dual-redundant HSG60 or HSG80 controllers in multiple-bus failover is four targets and consumes four ports on a switch. • The HSG60 and HSG80 documentation refers to the controllers as Controllers A (top) and B (bottom). Each controller provides two ports (left and right). (The HSG60 and HSG80 documentation refers to these ports as Port 1 and 2, respectively.) In transparent failover mode, only one left port and one right port are active at any given time.
>>> set bus_probe_algorithm new Use the show bus_probe_algorithm console command to determine if your system supports the variable. If the response is null or an error, there is no support for the variable. If the response is anything other than new, you must set it to new. • On AlphaServer 1000A and 2100A systems, updating the firmware on the KZPSA-BB SCSI adapter is not supported when the adapter is behind the PCI-to-PCI bridge. 2.3.
Table 2–2: RAID Controller Minimum Required Array Controller Software

RAID Controller            Minimum Required Array Controller Software
HSZ20                      3.4
HSZ22 (RAID Array 3000)    D11x
HSZ40                      3.7
HSZ50                      5.7
HSZ70                      7.7
HSZ80                      8.3-1
HSG60                      8.5
HSG80                      8.5

RAID controllers can be configured with the number of SCSI IDs as listed in Table 2–3.
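Table 2–2 amounts to a simple lookup of controller model to minimum Array Controller Software (ACS) version. The following sketch mirrors the table; the mapping name and helper function are hypothetical, for illustration only.

```python
# Minimum required Array Controller Software, per Table 2-2.
MIN_ACS = {
    "HSZ20": "3.4",
    "HSZ22": "D11x",    # RAID Array 3000
    "HSZ40": "3.7",
    "HSZ50": "5.7",
    "HSZ70": "7.7",
    "HSZ80": "8.3-1",
    "HSG60": "8.5",
    "HSG80": "8.5",
}

def min_acs(controller: str) -> str:
    """Return the minimum ACS version required for a RAID controller."""
    return MIN_ACS[controller.upper()]

print(min_acs("hsg80"))  # 8.5
```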
• Only RA3000 storage units visible to the host as LUN0 (storage units with a zero (0) as the last digit of the unit number such as D0, D100, D200, and so forth) can be used as a boot device. • StorageWorks Command Console (SWCC) V2.2 is the only configuration utility that will work with the RA3000. SWCC V2.2 runs only on a Microsoft Windows NT or Windows 2000 PC. • The controller will not operate without at least one 16-MB SIMM installed in its cache.
• If you observe any “bus hung” messages, your DWZZA signal converters may have the incorrect hardware. In addition, some DWZZA signal converters that appear to have the correct hardware revision may cause problems if they also have serial numbers in the range from CX444xxxxx through CX449xxxxx. To upgrade a DWZZA-AA or DWZZA-VA signal converter to the correct revision, use the appropriate field change order (FCO), as follows: – DWZZA-AA-F002 – DWZZA-VA-F001 2.
______________________ Note _______________________ The DS-DWZZH-03/05 UltraSCSI hubs cannot be connected to a StorageWorks BA35X storage shelf because the storage shelf does not provide termination power to the hub. 2.8 SCSI Cables If you are using shared SCSI buses, you must determine if you need cables with connectors that are low-density 50-pins, high-density 50-pins, high-density 68-pins (HD68), or VHDCI (UltraSCSI).
Table 2–4: Supported SCSI Cables (cont.)
2.9 SCSI Terminators and Trilink Connectors

Table 2–5 describes the supported trilink connectors and SCSI terminators and the context in which you use them.

Table 2–5: Supported SCSI Terminators and Trilink Connectors

Trilink Connector or Terminator    Density/Pins    Configuration Use
H885-AA                            Three 68-pin    Trilink connector that attaches to high-density, 68-pin cables or devices, such as a KZPSA-BB, KZPBA-CB, HSZ40, HSZ50, or the differential side of a SCSI signal converter.
3 Shared SCSI Bus Requirements and Configurations Using UltraSCSI Hardware A TruCluster Server cluster uses shared SCSI buses, external storage shelves or redundant array of independent disks (RAID) controllers, and supports disk mirroring and fast file system recovery to provide high data availability and reliability. This chapter discusses the following topics: • Shared SCSI bus configuration requirements (Section 3.1) • SCSI bus performance (Section 3.
______________________ Note _______________________ Although the UltraSCSI BA356 might have been included in this chapter with the other UltraSCSI devices, it is not. The UltraSCSI BA356 is discussed in Chapter 10 with the configurations using external termination. It cannot be cabled directly to an UltraSCSI hub because it does not provide SCSI bus termination power (termpwr).
• Be careful when performing maintenance on any device that is on a shared bus because of the constant activity on the bus. Usually, to perform maintenance on a device without shutting down the cluster, you must be able to isolate the device from the shared bus without affecting bus termination.
3.2 SCSI Bus Performance Before you set up a SCSI bus, it is important that you understand a number of issues that affect the viability of a bus and how the devices that are connected to it operate. Specifically, bus performance is influenced by the following factors: • Transmission method (Section 3.2.2) • Data path (Section 3.2.3) • Bus speed (Section 3.2.4) 3.2.1 SCSI Bus Versus SCSI Bus Segments An UltraSCSI bus may comprise multiple UltraSCSI bus segments.
This transmission method is less susceptible to noise than single-ended SCSI and enables you to use longer cables. Devices with differential SCSI interfaces include the following: – KZPBA-CB – KZPSA-BB – HSZ40, HSZ50, HSZ70, and HSZ80 controllers – Differential side of a SCSI signal converter or personality module You cannot use the two transmission methods in the same SCSI bus segment.
Table 3–1: SCSI Bus Speeds

SCSI Bus        Transfer Rate (MHz)    Bus Width (Bytes)    Bus Bandwidth (Speed) MB/sec
SCSI            5                      1                    5
Fast SCSI       10                     1                    10
Fast-Wide       10                     2                    20
UltraSCSI       20                     2                    40
UltraSCSI-II    40                     2                    80

3.3 SCSI Bus Device Identification Numbers

On a shared SCSI bus, each SCSI device uses a device address and must have a unique SCSI ID (from 0 through 15). For example, each SCSI bus adapter and each disk in a single-ended storage shelf uses a device address.
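The Bus Bandwidth column in Table 3–1 is simply the transfer rate multiplied by the data-path width. A quick check of the table values (function name is illustrative only):

```python
def bus_bandwidth_mb_s(transfer_rate_mhz: int, bus_width_bytes: int) -> int:
    # Bandwidth in MB/sec = transfer rate (MHz) x data-path width (bytes).
    return transfer_rate_mhz * bus_width_bytes

for name, rate, width in [("SCSI", 5, 1), ("Fast SCSI", 10, 1),
                          ("Fast-Wide", 10, 2), ("UltraSCSI", 20, 2),
                          ("UltraSCSI-II", 40, 2)]:
    print(name, bus_bandwidth_mb_s(rate, width))
```

For example, UltraSCSI doubles Fast-Wide bandwidth by doubling the transfer rate at the same 2-byte width: 20 MHz x 2 bytes = 40 MB/sec.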
3.4 SCSI Bus Length There is a limit to the length of the cables in a shared SCSI bus. The total cable length for a SCSI bus segment is calculated from one terminated end to the other. If you are using devices that have the same transmission method and data path (for example, wide differential), a shared bus will consist of only one bus segment.
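Because total cable length is measured from one terminated end of the segment to the other, every cable in the chain counts toward the limit. The following sketch shows the check; the 25-meter limit used here is an assumed example value for illustration, not a statement of the limit for any particular bus type.

```python
# Assumed example limit for one differential SCSI bus segment (illustrative).
SEGMENT_LIMIT_M = 25.0

def segment_length_ok(cable_lengths_m, limit_m=SEGMENT_LIMIT_M):
    # Total is measured terminator to terminator, so sum every cable
    # in the segment, including host-to-shelf and shelf-to-shelf runs.
    return sum(cable_lengths_m) <= limit_m

print(segment_length_ok([5.0, 10.0, 5.0]))    # True: 20 m total
print(segment_length_ok([10.0, 10.0, 10.0]))  # False: 30 m exceeds the limit
```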
______________________ Notes ______________________ With the exception of the TZ885, TZ887, TL890, TL891, and TL892, tape devices can only be installed at the end of a shared SCSI bus. These tape devices are the only supported tape devices that can be terminated externally. We recommend that tape loaders be on a separate, shared SCSI bus to allow normal shared SCSI bus termination for those shared SCSI buses without tape loaders.
of the bus segment. Termination for the other end of the bus segment is provided by the following components:

• Installed KZPBA-CB (or KZPSA-BB) termination resistor SIPs

• External termination on a trilink connector that is attached to an HSZ40, HSZ50, HSZ70, or HSZ80

______________________ Note _______________________

The DS-DWZZH-03/05 UltraSCSI hubs cannot be connected to a StorageWorks BA35X storage shelf because the storage shelf does not provide termination power to the hub. 3.6.
– A StorageWorks UltraSCSI BA356 storage shelf (which has the required 180-watt power supply). – The lower righthand device slot of the BA370 shelf within the RA7000 or ESA 10000 RAID array subsystems. This position minimizes cable lengths and interference with disks. – A non-UltraSCSI BA356 that has been upgraded to the 180-watt power supply with the DS-BA35X-HH option. • Uses the storage shelf only to provide its power and mechanical support. (It is not connected to the shelf internal SCSI bus.
The following section describes how to prepare the DS-DWZZH-05 UltraSCSI hub for use on a shared SCSI bus in more detail. 3.6.1.2.1 DS-DWZZH-05 Configuration Guidelines The DS-DWZZH-05 UltraSCSI hub can be installed in: • A StorageWorks UltraSCSI BA356 shelf (which has the required 180-watt power supply). • A non-UltraSCSI BA356 that has been upgraded to the 180-watt power supply with the DS-BA35X-HH option.
Table 3–3: DS-DWZZH UltraSCSI Hub Maximum Configurations

    DS-DWZZH-03   DS-DWZZH-05   Disk Drives(a)   Personality Module(b)(c)
    5             0             0                Not Installed
    4             0             0                Installed
    3             0             3                Installed
    2             0             4                Installed
    1             0             5                Installed
    0             2             0                Not Installed
    3             1             0                Not Installed
    2             1             1                Installed
    1             1             2                Installed
    0             1             3                Installed

a DS-DWZZH UltraSCSI hubs and disk drives may coexist in a storage shelf.
After the host adapter has been serviced, if there are still SCSI IDs retained from the previous arbitration cycle, the next highest SCSI ID is serviced. When all devices in the group have been serviced, the DS-DWZZH-05 repeats the sequence at the next arbitration cycle. Fair arbitration is disabled by placing the switch on the front of the DS-DWZZH-05 UltraSCSI hub in the Disable position. (See Figure 3–4.)
If jumper W1 is removed, the host adapter ports assume SCSI IDs 12, 13, 14, and 15. The controllers are assigned SCSI IDs 0 through 6. The DS-DWZZH-05 retains the SCSI ID of 7.
Figure 3–4: DS-DWZZH-05 Front View

[Figure: front panel showing the Fair/Disable switch, the Power and Busy indicators, the controller port (SCSI IDs 6–4, or 6–0 in wide mode), and the four host ports (SCSI IDs 0–3, or 12–15 in wide mode). ZK-1447U-AI]

3.6.1.2.4 SCSI Bus Termination Power

Each host adapter that is connected to a DS-DWZZH-05 UltraSCSI hub port must supply termination power (termpwr) to enable the termination resistors on each end of the SCSI bus segment. If the host adapter is disconnected from the hub, the port is disabled.
3.6.1.3 Installing the DS-DWZZH-05 UltraSCSI Hub

To install the DS-DWZZH-05 UltraSCSI hub, follow these steps:

1. Remove the W1 jumper to enable wide addressing mode. (See Figure 3–3.)

2. If fair arbitration is to be used, ensure that the switch on the front of the DS-DWZZH-05 UltraSCSI hub is in the Fair position.

3. Install the DS-DWZZH-05 UltraSCSI hub in an UltraSCSI BA356, non-UltraSCSI BA356 (if it has the required 180-watt power supply), or BA370 storage shelf.

3.
the member system and storage device. Be aware, though, that the KZPSA-BB is not an UltraSCSI device and therefore works only at fast-wide speed (20 MB/sec). The following sections describe how to prepare and install cables for storage configurations on a shared SCSI bus using UltraSCSI hubs and the HSZ70 and HSZ80 RAID array controllers, or the RAID Array 3000. 3.7.
Transparent failover compensates only for a controller failure, not for failures of the SCSI bus or host adapters, and is therefore not a no-single-point-of-failure (NSPOF) configuration. ______________________ Note _______________________ Set each controller to transparent failover mode before configuring devices (SET FAILOVER COPY = THIS_CONTROLLER). To achieve a NSPOF configuration, you need multiple-bus failover and two shared SCSI buses.
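At the controller CLI, the two failover modes are selected with commands of the following form. This is an illustrative sketch based on the HSZ-series StorageWorks CLI; confirm the exact syntax in your controller's documentation before use:

```
HSZ> SET FAILOVER COPY = THIS_CONTROLLER
HSZ> SET MULTIBUS_FAILOVER COPY = THIS_CONTROLLER
```

The first command selects transparent failover, the second multiple-bus failover. In both cases, COPY = THIS_CONTROLLER directs the configuration of the controller on which the command is entered to be copied to its dual-redundant partner.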
see their non-active host ports as passive. If one of the controllers fails, the surviving controller sees both host ports as active. In the active/passive mode, the primary controller sees both host ports as active. The other controller sees both host ports as passive. If the primary controller fails, the remaining controller takes over and sees both host ports as active. The following sections describe how to cable the HSZ70, HSZ80, or RA3000 for TruCluster Server configurations using an UltraSCSI hub.
• HSZ70 controller A and controller B • HSZ80 controller A Port 1 (2) and controller B Port 1 (2) The BN37A-0C is a 30-centimeter (11.8-inch) cable and the BN37A-0E is a 50-centimeter (19.7-inch) cable. 4. Install the DS-DWZZH-03 or DS-DWZZH-05 UltraSCSI hub in an UltraSCSI BA356, non-UltraSCSI BA356 (with the required 180-watt power supply), or BA370 storage shelf. (See Section 3.6.1.1 or Section 3.6.1.2.) 5. If you are using a: 6.
Figure 3–5: Shared SCSI Bus with HSZ70 Configured for Transparent Failover

[Figure: member systems 1 and 2 on a network, with KZPBA-CB adapters at SCSI IDs 6 and 7 and a Memory Channel interconnect, radially connected through a DS-DWZZH-03 hub to HSZ70 controllers A and B in a StorageWorks RAID Array 7000. Callouts 1–4 mark the cables, trilink connectors, and terminators. ZK-1599U-AI]

Table 3–4 lists the components that are used to create the clusters that are shown in Figure 3–5, Figure 3–6, Figure 3–7, and Figure 3–8.
Table 3–4: Hardware Components Shown in Figure 3–5 Through Figure 3–8

    Callout Number   Description
    1                BN38C cable(a)
    2                BN37A cable(b)
    3                H8861-AA VHDCI trilink connector
    4                H8863-AA VHDCI terminator(b)

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
b The maximum combined length of the BN37A cables must not exceed 25 meters (82 feet).
3.7.1.2 Preparing a Dual-Redundant HSZ70 or HSZ80 for a Shared SCSI Bus Using Multiple-Bus Failover Multiple-bus failover is a dual-redundant controller configuration in which each host has two paths (two shared SCSI buses) to the array controller subsystem. The hosts have the capability to move LUNs from one controller (shared SCSI bus) to the other. If one host adapter or SCSI bus fails, the hosts can move all storage to the other path.
2. Install H8861-AA VHDCI trilink connectors (with terminators) on: • HSZ70 controller A and controller B • HSZ80 controller A Port 1 (2) and controller B Port 1 (2) ___________________ Note ___________________ You must use the same port on each HSZ80 controller. 3. Install the DS-DWZZH-03 or DS-DWZZH-05 UltraSCSI hub in a DS-BA356, BA356 (with the required 180-watt power supply), or BA370 storage shelf. (See Section 3.6.1.1 or Section 3.6.1.2.) 4.
Figure 3–7 shows a two-member TruCluster Server configuration with a radially connected dual-redundant HSZ70 configured for multiple-bus failover.
Figure 3–8 shows a two-member TruCluster Server configuration with a radially connected dual-redundant HSZ80 configured for multiple-bus failover.
The RA3000 storage subsystem has fully redundant components to eliminate single points of failure. It comes with a standard uninterruptible power supply (UPS) for cache data protection during power outages. The RA3000 uses the dual-ported HSZ22 controller. Optional dual redundant controllers with mirrored write-back cache provide maximum data integrity. The StorageWorks Command Console (SWCC) V2.2 (or higher) client graphical user interface (GUI) runs on a Microsoft Windows NT V4.
Table 3–5: Installing Cables for RA3000 Radial Configuration with a DWZZH UltraSCSI Hub

Action: Install a BN38C HD68-to-VHDCI cable between each KZPBA-CB UltraSCSI host adapter and a DWZZH port. The DWZZH accepts the VHDCI connector. You may use a BN38E-0B technology adapter cable with a BN37A cable instead of the BN38C cable.
Figure 3–9 shows a four-member TruCluster Server configuration and an RA3000 controller shelf with active/passive failover radially connected to a DS-DWZZH-05 UltraSCSI hub. Table 3–6 describes the callouts.
Figure 3–10 shows a four-member TruCluster Server configuration and an RA3000 pedestal with active/passive failover radially connected to a DS-DWZZH-05 UltraSCSI hub. Table 3–6 describes the callouts.
Figure 3–11 shows a two-member TruCluster Server configuration and an RA3000 pedestal with active/active or active/passive failover radially connected to a DS-DWZZH-05 UltraSCSI hub. This configuration uses independent connections to the two pedestal host ports to increase the available bandwidth to the RA3000 controllers. Table 3–6 describes the callouts.
Figure 3–12 shows a four-member TruCluster Server configuration and an RA3000 controller shelf with active/active or active/passive failover radially connected to a DS-DWZZH-05 UltraSCSI hub. Table 3–6 describes the callouts.
Table 3–6: Hardware Components Used in the Configurations Shown in Figure 3–9 through Figure 3–12

    Callout Number   Description
    1                BN38C HD68-to-VHDCI cable.(a) A BN38E-0B technology adapter
                     cable may be connected to a BN37A cable and used in place
                     of a BN38C cable.(b)
    2                BN37A VHDCI cable(c)
    3                BN37A-0E 50-centimeter (19.7-inch) VHDCI cable(d)

a The maximum length of the SCSI bus segment, including the combined length of BN38C cables and internal device length, must not exceed 25 meters (82 feet).
4 TruCluster Server System Configuration Using UltraSCSI Hardware This chapter describes how to prepare systems for a TruCluster Server cluster, using UltraSCSI hardware and the preferred method of radial configuration, including how to connect devices to a shared SCSI bus for the TruCluster Server product. This chapter does not provide detailed information about installing devices; it describes only how to set up the hardware in the context of the TruCluster Server product.
______________________ Note _______________________ If you are using Fibre Channel storage, see Chapter 6. Before you connect devices to a shared SCSI bus, you must: • Plan your hardware configuration, determining which devices will be connected to each shared SCSI bus, which devices will be connected together, and which devices will be at the ends of each bus. Planning is especially critical if you will install tape devices on the shared SCSI bus.
• Cluster interconnects You need only one cluster interconnect in a cluster. For TruCluster Server Version 5.1A, the cluster interconnect can be the Memory Channel or a private LAN. (See Cluster LAN Interconnect for more information on using a private LAN as the cluster interconnect.) However, you can use redundant cluster interconnects to protect against an interconnect failure and for easier hardware maintenance.
Table 4–1: Planning Your Configuration

    To increase:                        You can:
    Application performance             Increase the number of member systems.
    I/O performance                     Increase the number of shared buses.
    Member system availability          Increase the number of member systems.
    Cluster interconnect availability   Use redundant cluster interconnects.
    Disk availability                   Mirror disks across shared buses.
                                        Use a RAID array controller.
    Shared storage capacity             Increase the number of shared buses.
                                        Use a RAID array controller.
5. Mount the CD-ROM as follows (/dev/disk/cdrom0c is used as an example CD-ROM drive):

   # mount -rt cdfs -o noversion /dev/disk/cdrom0c /mnt

6. Copy the appropriate release notes to your system disk. In this example, obtain the firmware release notes for the AlphaServer DS20 from the Version 5.6 Alpha Firmware Update CD-ROM:

   # cp /mnt/doc/ds20_v56_fw_relnote.txt ds20-rel-notes

7. Unmount the CD-ROM drive:

   # umount /mnt

8. Print the release notes.

4.
TruCluster Server clusters using the preferred method of radial connection with internal termination. ______________________ Note _______________________ The KZPSA-BB can be used in any configuration in place of the KZPBA-CB. The use of the KZPSA-BB is not mentioned in this chapter because it is not UltraSCSI hardware, and it cannot operate at UltraSCSI speeds. The use of the KZPSA-BB (and the KZPBA-CB) with external termination is discussed in Chapter 9.
Table 4–2: Configuring TruCluster Server Hardware (cont.)

    Step   Action                                                Refer to:
    4      Update the system SRM console firmware from the       The firmware update
           latest Alpha Systems Firmware Update CD-ROM.          release notes (Section 4.2)

______________________ Note _____________________

The SRM console firmware includes the ISP1020/1040-based PCI option firmware, which includes the KZPBA-CB. When you update the SRM console firmware, you are enabling the KZPBA-CB firmware to be updated.
The DWZZH contains a differential to single-ended signal converter for each hub port (which is sometimes referred to as a DWZZA on a chip, or DOC chip). The single-ended sides are connected together to form an internal single-ended SCSI bus segment. Each differential SCSI bus port is terminated internal to the DWZZH with terminators that cannot be disabled or removed. Power for the DWZZH termination (termpwr) is supplied by the host SCSI bus adapter or RAID array controller connected to the DWZZH port.
Make sure that your storage shelves or RAID array subsystems are set up before completing this portion of an installation. Use the steps in Table 4–3 to set up a KZPBA-CB for a TruCluster Server cluster that uses radial connection to a DWZZH UltraSCSI hub.

Table 4–3: Installing the KZPBA-CB for Radial Connection to a DWZZH UltraSCSI Hub

    Step   Action                                                Refer to:
    1      Ensure that the eight KZPBA-CB internal termination   Section 4.3.1, Figure 4–1
           resistor SIPs, RM1–RM8, are installed.
Table 4–3: Installing the KZPBA-CB for Radial Connection to a DWZZH UltraSCSI Hub (cont.) Step Action Refer to: _____________________ Notes _____________________ Ensure that the SCSI ID that you use is distinct from all other SCSI IDs on the same shared SCSI bus. If you do not remember the other SCSI IDs, or do not have them recorded, you must determine these SCSI IDs. If you are using a DS-DWZZH-05, you cannot use SCSI ID 7 for a KZPBA-CB UltraSCSI adapter; SCSI ID 7 is reserved for DS-DWZZH-05 use.
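A quick way to sanity-check a planned ID assignment is to list every ID on the shared bus and confirm that each is unique and within the 0–15 range. The check_ids helper and the sample IDs below are illustrative only; they are not part of Tru64 UNIX:

```shell
# Verify that a planned set of SCSI IDs for one shared bus is unique
# and that each ID is in the legal 0-15 range.
check_ids() {
    printf '%s\n' "$@" | sort -n | uniq -d | grep -q . && {
        echo "duplicate SCSI IDs"; return 1; }
    for id in "$@"; do
        [ "$id" -ge 0 ] && [ "$id" -le 15 ] || {
            echo "invalid SCSI ID: $id"; return 1; }
    done
    echo "IDs OK"
}

# Hypothetical plan: member adapters at 6 and 7, storage targets at 0-2
check_ids 6 7 0 1 2
```

Remember that on a DS-DWZZH-05 bus, ID 7 is taken by the hub itself and must be excluded from the adapter plan.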
Example 4–1: Displaying Configuration on an AlphaServer DS20 (cont.)

    CPU 1       Alpha 21264-4 500 MHz    Bcache size: 4 MB
                SROM Revision: V1.82

    Core Logic
    Cchip       DECchip 21272-CA    Rev 2.1
    Dchip       DECchip 21272-DA    Rev 2.0
    Pchip 0     DECchip 21272-EA    Rev 2.2
    Pchip 1     DECchip 21272-EA    Rev 2.
    TIG         Rev 4.14
    Arbiter     Rev 2.10 (0x1)

    MEMORY
    Array #     Size
    -------     ----------
    0           512 MB
Example 4–1: Displaying Configuration on an AlphaServer DS20 (cont.)

    Bus 02  Slot 01: NCR 53C875                    pkb0.7.0.2001.0    SCSI Bus ID 7
    Bus 02  Slot 02: DE500-AA Network Controller   ewa0.0.0.2002.0    00-06-2B-00-0A-48

    PCI Hose 01
    Bus 00  Slot 07: DEC PCI FDDI                  fwa0.0.0.7.
Example 4–3 shows the output from the show config console command entered on an AlphaServer 8200 system.
Example 4–4: Displaying Devices on an AlphaServer 8200 (cont.)

    dkf4.0.0.1.1      DKF4      HSZ70    V70Z
    dkf5.0.0.1.1      DKF5      HSZ70    V70Z
    dkf6.0.0.1.1      DKF6      HSZ70    V70Z
    dkf100.1.0.1.1    DKF100    RZ28M    0568
    dkf200.2.0.1.1    DKF200    RZ28M    0568
    dkf300.3.0.1.1    DKF300    RZ28     442D

    polling for units on kzpsa0, slot 2, bus 0, hose 1...
    kzpsa0.4.0.2.1
    dkg0.0.0.2.1
    dkg1.0.0.2.1
    dkg2.0.0.2.1
    dkg100.1.0.2.1
    dkg200.2.0.2.1
    dkg300.3.0.2.1
4.3.3.1 Displaying KZPBA-CB pk* or isp* Console Environment Variables To determine the console environment variables to use, execute the show pk* and show isp* console commands. Example 4–5 shows the pk console environment variables for an AlphaServer DS20.
• on — Turns on both low 8 bits and high 8 bits • diff — Places the bus in differential mode The KZPBA-CB is a Qlogic ISP1040 module, and its termination is determined by the presence or absence of internal termination resistor SIPs RM1-RM8. Therefore, the pk*0_soft_term environment variable has no meaning and it may be ignored. Example 4–6 shows the use of the show isp console command to display the console environment variables for KZPBA-CBs on an AlphaServer 8x00.
4.3.3.2 Setting the KZPBA-CB SCSI ID After you determine the console environment variables for the KZPBA-CBs on the shared SCSI bus, use the set console command to set the SCSI ID. For a TruCluster Server cluster, you will most likely have to set the SCSI ID for all KZPBA-CB UltraSCSI adapters except one. And, if you are using a DS-DWZZH-05, you will have to set the SCSI IDs for all KZPBA-CB UltraSCSI adapters.
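For example, to move one adapter off the default ID 7, the console session might look like the following. The adapter instance pka0, the variables displayed, and the chosen ID 6 are illustrative assumptions; use the names reported by your own show pk* output:

```
P00>>> show pk*
pka0_disconnect     on
pka0_fast           on
pka0_host_id        7
P00>>> set pka0_host_id 6
P00>>> show pka0_host_id
pka0_host_id        6
```

Repeat the set command on each member system whose KZPBA-CB needs a nondefault SCSI ID.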
Figure 4–1: KZPBA-CB Termination Resistors

[Figure: KZPBA-CB board layout showing the internal narrow device connector P2, the internal wide device connector J2, connector JA1, and the SCSI bus termination resistors RM1–RM8. ZK-1451U-AI]

4–18 TruCluster Server System Configuration Using UltraSCSI Hardware
5 Setting Up the Memory Channel Cluster Interconnect

This chapter describes Memory Channel configuration restrictions, and describes how to set up the Memory Channel cluster interconnect, including setting up a Memory Channel hub and Memory Channel optical converter (MC2 only), and connecting link cables. Two versions of the Memory Channel peripheral component interconnect (PCI) adapter are available: the CCMAA (MC1 and MC1.5) and the CCMAB (MC2).
2. Install the Memory Channel adapter into a PCI slot on each system (Section 5.2).

3. If you are using fiber optics with MC2, install the CCMFB fiber-optic module (Section 5.3).

4. If you have more than two systems in the cluster, install a Memory Channel hub (Section 5.4).

5. Connect the Memory Channel cables (Section 5.5).

6. After you complete steps 1 through 5 for all systems in the cluster, apply power to the systems and run Memory Channel diagnostics (Section 5.6).
Table 5–1: MC1 and MC1.5 J4 Jumper Configuration

    If hub mode is:   Jumper:
    Standard          J4 pins 1 to 2
    Virtual: VH0      J4 pins 2 to 3
    Virtual: VH1      None needed; store the jumper on J4 pin 1 or 3

If you are upgrading from virtual hub mode to standard hub mode (or from standard hub mode to virtual hub mode), be sure to change the J4 jumper on all Memory Channel adapters on the rail.

5.1.2 MC2 Jumpers

The MC2 module (CCMAB) has multiple jumpers.
space. The configuration change is propagated to the other cluster member systems by entering the following command: # /sbin/sysconfig -r rm rm_use_512=1 See the Cluster Administration manual for more information on failover pairs. The MC2 jumpers are described in Table 5–2.
Table 5–2: MC2 Jumper Configuration (cont.)

    Jumper:                                Description:
    J5: AlphaServer 8x00 Mode              8x00 mode selected: pins 1 to 2(a)
                                           8x00 mode not selected: pins 2 to 3
    J10 and J11: Fiber-Optic Mode Enable   Fiber off: pins 1 to 2
                                           Fiber on: pins 2 to 3

a Increases the maximum sustainable bandwidth for 8x00 systems. If the jumpers are in this position for other systems, the bandwidth is decreased.
5.2 Installing the Memory Channel Adapter Install the Memory Channel adapter in an appropriate peripheral component interconnect (PCI) slot. (See Section 2.2.) Secure the module at the backplane. Ensure that the screw is tight to maintain proper grounding. The Memory Channel adapter comes with a straight extension plate.
5.4 Installing the Memory Channel Hub You may use a hub in a two-node TruCluster Server cluster, but the hub is not required. When there are more than two systems in a cluster, you must use a Memory Channel hub as follows: • For use with the MC1 or MC1.5 CCMAA adapter, you must install the hub within 3 meters (9.8 feet) of each of the systems. For use with the MC2 CCMAB adapter, the hub must be placed within 4 meters (13.1 feet) or 10 meters (32.8 feet) (the length of the BN39B link cables) of each system.
______________________ Note _______________________ Do not connect an MC1 or MC1.5 link cable to an MC2 module. 5.5.1.1 Connecting MC1 or MC1.5 Link Cables in Virtual Hub Mode For an MC1 virtual hub configuration (two nodes in the cluster), connect the BC12N-10 link cables between the Memory Channel adapters that are installed in each of the systems. _____________________ Caution _____________________ Be very careful when installing the link cables. Insert the cables straight in.
Figure 5–1 shows Memory Channel adapters connected to linecards that are in the same slot position in the Memory Channel hubs.

Figure 5–1: Connecting Memory Channel Adapters to Hubs

[Figure: System A's Memory Channel adapters cabled to linecards in the same slot position in Memory Channel hub 1 and Memory Channel hub 2. ZK-1197U-AI]

5.5.2 Installing the MC2 Cables

To set up an MC2 interconnect, use the BN39B-04 (4-meter; 13.1-foot) or BN39B-10 (10-meter; 32.8-foot) link cables for virtual hub or standard hub configurations without optical converters.
Do not connect an MC2 cable to an MC1 or MC1.5 CCMAA module. Gently push the cable’s connector into the receptacle, and then use the screws to pull the connector in tight. The connector must be tight to ensure a good ground contact. If you are setting up redundant interconnects, all Memory Channel adapters in a system must have the same jumper setting, either VH0 or VH1. 5.5.2.
5.5.2.4 Connecting MC2 Cables in Standard Hub Mode Using Optical Converters If you are using optical converters in an MC2 configuration, install an optical converter module (CCMFB), with attached BN34R fiber-optic cable, when you install the CCMAB Memory Channel PCI adapter in each system in the standard hub configuration. Also connect the CCMAB Memory Channel adapter to the optical converter with a BN39B-01 cable. ______________________ Note _______________________ See Section 2.
6. Install the CCMFB fiber-optic converter in slot opto only, 0/opto, 1/opto, 2/opto, or 3/opto, as appropriate.

7. Install a BN39B-01 1-meter (3.3-foot) link cable between the CCMFB optical converter and the CCMLB linecard.

8. Repeat steps 1 through 7 for each CCMFB module to be installed.

5.
When the console indicates a successful response from all other systems being tested, the data flow through the Memory Channel hardware has been completed and the test may be terminated by pressing Ctrl/C on each system being tested. Example 5–1 shows a sample output from node 1 of a standard hub configuration. In this example, the test is started on node 1, then on node 0. The test must be terminated on each system.
5.7 Maintaining Memory Channel Interconnects The following sections contain information about maintaining Memory Channel interconnects. See other sections in this chapter or the Memory Channel User’s Guide for detailed information about maintaining the Memory Channel hardware. Topics in this section include: • Adding a Memory Channel interconnect (Section 5.7.1) • Upgrading Memory Channel adapters (Section 5.7.2) • Upgrading a virtual hub configuration to a standard hub configuration (Section 5.7.
______________________ Note _______________________ When you upgrade from dual, redundant MC1 hardware to dual, redundant MC2 hardware, you must replace all the MC1 hardware on one interconnect before you start on the second interconnect (except as described in step 4 of Table 5–4). Memory Channel adapters jumpered for 512 MB may require a minimum of 512 MB physical RAM memory. Ensure that your system has enough physical memory to support the upgrade.
Table 5–4: Adding a Memory Channel Interconnect or Upgrading from a Dual, Redundant MC1 Interconnect to MC2 Interconnects (cont.)
Table 5–4: Adding a Memory Channel Interconnect or Upgrading from a Dual, Redundant MC1 Interconnect to MC2 Interconnects (cont.)

    Step   Action                                                      Refer to:
    Virtual Hub: If this is the first system in a virtual hub          Figure 5–2 (B)
    configuration, replace the MC1 adapter with an MC2 adapter.
    If this is the second system in a virtual hub configuration,       Figure 5–2 (C)
    replace both MC1 adapters with MC2 adapters.
Table 5–4: Adding a Memory Channel Interconnect or Upgrading from a Dual, Redundant MC1 Interconnect to MC2 Interconnects (cont.) Step Action Refer to: • The last member system has had its second MC1 adapter replaced with an MC2 adapter. • The cluster is operational. • All MC2 adapters are jumpered for 512 MB (and you need to utilize 512 MB address space). On one member system, use the sysconfig command to reconfigure the Memory Channel kernel subsystem to initiate the use of 512 MB address space.
    [2] 65536
    [3] 0
    [4] 65536
    [5] 0
    [6] 0
    [7] 0
    }
    (dbx) p rm_adapters[1]->rmp_prail_va->rmc_size
    {
    [0] 16384
    [1] 0
    [2] 16384
    [3] 0
    [4] 16384
    [5] 0
    [6] 0
    [7] 0
    }

1 Find the size of a logical rail.
2 The logical rail is operating at 128 MB (16384 eight-KB pages).
3 Verify the jumper settings for the member systems on the first physical rail.
4 The J3 jumper is set at 512 MB for nodes 0, 2, and 4 on the first physical rail (65536 eight-KB pages).
Figure 5–2 shows a dual, redundant virtual hub configuration using MC1 hardware being upgraded to MC2.
Figure 5–3 through Figure 5–8 show a three-node standard hub configuration being upgraded from MC1 to MC2.
Figure 5–4: MC1-to-MC2 Standard Hub Upgrade: First MC1 Module Replaced

[Figure: AlphaServer member systems 1–3; member system 1 has an MC2 adapter cabled to MC2 hub #1 (0/OPTO), and the remaining MC1 adapters connect to MC1 hubs #1 and #2. ZK-1523U-AI]
Figure 5–5: MC1-to-MC2 Standard Hub Upgrade: Replace First MC1 Adapter in Second System

[Figure: member systems 1 and 2 each have an MC2 adapter cabled to MC2 hub #1 (0/OPTO and 2/OPTO); the remaining MC1 adapters connect to MC1 hub #2. ZK-1524U-AI]
Figure 5–6: MC1-to-MC2 Standard Hub Upgrade: Replace Third System Memory Channel Adapters

[Figure: member systems 1–3 have MC2 adapters cabled to MC2 hub #1 (0/OPTO, 1/OPTO, 2/OPTO); the second rail is split between MC2 hub #2 (1/OPTO) and MC1 hub #2. ZK-1525U-AI]
Figure 5–7: MC1-to-MC2 Standard Hub Upgrade: Replace Second MC1 in Second System

[Figure: all first-rail adapters are MC2, cabled to MC2 hub #1 (0/OPTO, 1/OPTO, 2/OPTO); on the second rail, member systems 2 and 3 have MC2 adapters on MC2 hub #2 (1/OPTO, 2/OPTO) while member system 1 retains an MC1 adapter. ZK-1526U-AI]
Figure 5–8: MC1-to-MC2 Standard Hub Upgrade: Final Configuration

[Figure: all three member systems have MC2 adapters on both rails, cabled to MC2 hub #1 and MC2 hub #2 (linecards 0/OPTO, 1/OPTO, and 2/OPTO on each hub). ZK-1527U-AI]

5.7.
There will be some cluster down time. During the procedure, you can maintain cluster operations except for the time it takes to shut down the second system and boot the first system as a single-node cluster. ______________________ Note _______________________ If you are not using a quorum disk, the first member you shut down must have zero votes for the cluster to survive its shutdown. Use the clu_quorum command to adjust quorum votes.
Table 5–5: Upgrading from a Virtual Hub Configuration to a Standard Hub Configuration (cont.) Step Action ______________________ Refer to: Note ______________________ When system1 is at the console prompt, note the setting of the auto_action console environment variable, then use the console set command to set the auto_action variable to halt. This halts the system at the console prompt when the system is turned on, ensuring that you are able to run the Memory Channel diagnostics.
Table 5–5: Upgrading from a Virtual Hub Configuration to a Standard Hub Configuration (cont.) Step Action Refer to: ______________________ Note ______________________ If you are using fiber optics with Memory Channel, you have already installed the fiber-optic cable. Turn on hub power. 11 Turn on system1 system power and run the mc_diag Memory Channel diagnostic.
6 Using Fibre Channel Storage This chapter provides an overview of Fibre Channel, Fibre Channel configuration examples, and information on Fibre Channel hardware installation and configuration in a Tru64 UNIX or TruCluster Server Version 5.1A configuration. This chapter discusses the following topics: • An overview of Fibre Channel (Section 6.1). • A comparison of Fibre Channel topologies (Section 6.2). • Example cluster configurations using Fibre Channel storage (Section 6.3).
______________________ Note _______________________ TruCluster Server Version 5.1A configurations require one or more disks to hold the Tru64 UNIX operating system. The disks are either private disks on the system that will become the first cluster member, or disks on a shared bus that the system can access. Whether or not you install the base operating system on a shared disk, always shut down the cluster before booting the Tru64 UNIX disk.
AL_PA The Arbitrated Loop Physical Address (AL_PA) is used to address nodes on the Fibre Channel loop. When a node is ready to transmit data, it transmits Fibre Channel primitive signals that include its own identifying AL_PA. Arbitrated Loop A Fibre Channel topology in which frames are routed around a loop set up by the links between the nodes in the loop. All nodes in a loop share the bandwidth, and bandwidth degrades slightly as nodes and cables are added.
F_Port A port within the fabric (a fabric port). Each F_Port is assigned a 64-bit unique node name and a 64-bit unique port name when it is manufactured. Together, the node name and port name make up the worldwide name. FL_Port An F_Port containing the loop functionality is called an FL_Port. Link The physical connection between an N_Port and another N_Port or an N_Port and an F_Port.
6.1.2.1 Point-to-Point

The point-to-point topology is the simplest Fibre Channel topology. In a point-to-point topology, one N_Port is connected to another N_Port by a single link. Because all frames transmitted by one N_Port are received by the other N_Port, and in the same order in which they were sent, frames require no routing. Figure 6–1 shows an example point-to-point topology.

Figure 6–1: Point-to-Point Topology

[Figure: Node 1 and Node 2; each N_Port's transmit line connects to the other N_Port's receive line. ZK-1534U-AI]

6.1.2.
Figure 6–2 shows an example fabric topology.

Figure 6–2: Fabric Topology

[Figure: Nodes 1–4; each node's N_Port transmit and receive lines connect to an F_Port on the fabric. ZK-1536U-AI]

6.1.2.3 Arbitrated Loop Topology

In an arbitrated loop topology, frames are routed around a loop set up by the links between the nodes.
Figure 6–3: Arbitrated Loop Topology

[Figure: Nodes 1–4 with NL_Ports connected through a hub to form the loop. ZK-1535U-AI]

6.2 Fibre Channel Topology Comparison

This section compares and contrasts the fabric and arbitrated loop topologies and describes why you might choose to use them. When compared with the fabric (switched) topology, arbitrated loop is a lower cost, and lower performance, alternative.
Although the fabric topology is more expensive, it provides both increased connectivity and higher performance; switches provide a full-duplex 100 (200) MB/sec point-to-point connection to the fabric. Switches also provide improved performance and scaling because nodes on the fabric see only data destined for themselves, and individual nodes are isolated from reconfiguration and error recovery of other nodes within the fabric.
Figure 6–4 shows a typical Fibre Channel cluster configuration using transparent failover mode.

Figure 6–4: Fibre Channel Single Switch Transparent Failover Configuration

[Figure: member systems 1 and 2, each with a KGPSA adapter and a Memory Channel interface, connected to a single Fibre Channel switch; the switch connects to port 1 and port 2 of HSG80 controllers A and B in an RA8000/ESA12000. ZK-1531U-AI]

In transparent failover, units D00 through D99 are accessed through port 1 of both controllers.
Figure 6–5 shows a two-node Fibre Channel cluster with a single RA8000 or ESA12000 storage array with dual-redundant HSG80 controllers and a DS-SWXHB-07 Fibre Channel hub.

Figure 6–5: Arbitrated Loop Configuration with One Storage Array

[Figure: member systems 1 and 2, each with a KGPSA adapter and a Memory Channel interface, connected to a SWXHB-07 hub; the hub connects to ports 1 and 2 of HSG80 controllers A and B in the RA8000/ESA12000. ZK-1697U-AI]

6.3.
• Normally, all available units (D0 through D199) are available at all host ports. Only one HSG80 controller will be actively doing I/O for any particular storage unit. However, both controllers can be forced active by preferring units to one controller or the other (SET unit PREFERRED_PATH=THIS). By balancing the preferred units, you can obtain the best I/O performance using two controllers.
Figure 6–6: Multiple-Bus NSPOF Configuration Number 1

[Figure: member systems 1 and 2, each with two KGPSA adapters and redundant Memory Channel interfaces, connected through two Fibre Channel switches to ports 1 and 2 of HSG80 controllers A and B in an RA8000/ESA12000. ZK-1707U-AI]
Figure 6–7: Multiple-Bus NSPOF Configuration Number 2

[Figure: member systems 1 and 2, each with two KGPSA adapters and redundant Memory Channel interfaces, connected through two Fibre Channel switches to ports 1 and 2 of HSG80 controllers A and B in an RA8000/ESA12000. ZK-1765U-AI]
The configuration that is shown in Figure 6–8 is an NSPOF configuration, but it is not a recommended cluster configuration because of the performance loss during failure conditions. If a switch or cable failure causes a failover to the other switch, access to the storage units must be moved to the other controller, and that takes time. In the configurations shown in Figure 6–6 and Figure 6–7, the same failure would cause access to the storage unit to shift to the other port of the same controller.
Figure 6–8: A Configuration That Is Not Recommended
[Figure: member systems 1 and 2, each with two Memory Channel interfaces and two KGPSA adapters, connect through two Fibre Channel switches to ports 1 and 2 of HSG80 controllers A and B in an RA8000/ESA12000 array.]
Figure 6–9: Another Configuration That Is Not Recommended
[Figure: a single AlphaServer with two KGPSA adapters connects through two Fibre Channel switches to ports 1 and 2 of HSG80 controllers A and B in an RA8000/ESA12000 array.]
Figure 6–10 shows the maximum supported arbitrated loop configuration: a two-node Fibre Channel cluster with two RA8000 or ESA12000 storage arrays, each with dual-redundant HSG80 controllers, and two DS-SWXHB-07 Fibre Channel hubs. This provides an NSPOF configuration.
private arbitrated loops (looplets) that are interconnected by a fabric. A private loop is formed by logically connecting ports on up to two switches. ______________________ Note _______________________ QuickLoop is not supported in a Tru64 UNIX Version 5.1A configuration or TruCluster Server Version 5.1A configuration. 6.5 Zoning This section provides a brief overview of zoning. A zone is a logical subset of the Fibre Channel devices that are connected to the fabric.
Switch zoning controls access at the storage system level, whereas SSP controls access at the storage unit level. The following configurations require zoning or selective storage presentation:
• When you have a TruCluster Server cluster in a storage area network (SAN) with other stand-alone systems (UNIX or non-UNIX) or other clusters.
• Any time you have Windows NT or Windows 2000 in the same SAN with Tru64 UNIX. (Windows NT or Windows 2000 must be in a separate switch zone.)
If a host attempts to access a port that is outside its zone, the switch hardware blocks the access. You must modify the zone configuration when you move any cables from one port to another within the zone. If you want to guarantee that there is no access outside any zone, either use hard zoning, or use operating systems that state that they support soft zoning. Table 6–2 lists the types of zoning that are supported on each of the supported Fibre Channel switches.
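As a sketch only — command names vary with switch firmware, and the zone, configuration, and member names below are invented for illustration — a zone on a DSGGB-class switch might be defined from a telnet session along these lines (see your switch documentation for the exact syntax):

```
switch:admin> zoneCreate "trucluster_zone", "1,0; 1,2; 1,4; 1,6"
switch:admin> cfgCreate "san_config", "trucluster_zone"
switch:admin> cfgEnable "san_config"
```

The members here are (switch, port) pairs, which corresponds to port-based zoning; members can also be specified by worldwide name, depending on the zoning type the switch supports (see Table 6–2).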
Figure 6–11: A Simple Zoned Configuration
[Figure: two two-member clusters, each member with a Memory Channel interface and a KGPSA adapter, connect to a single 16-port Fibre Channel switch; two RA8000/ESA12000 arrays, each with dual HSG80 controllers, attach to separate groups of switch ports.]
Figure 6–12: Meshed Fabric with Three Cascaded Switches
[Figure: member systems 1 and 2, each with a Memory Channel interface and a KGPSA adapter, connect through three interconnected Fibre Channel switches to two RA8000/ESA12000 arrays, each with HSG80 controllers A and B.]

Figure 6–13 shows an example of a meshed resilient fabric with four cascaded, interconnected switches.
Figure 6–13: Meshed Resilient Fabric with Four Cascaded Switches
[Figure: member systems 1 and 2, each with two Memory Channel interfaces and two KGPSA adapters, connect through four interconnected Fibre Channel switches to ports 1 and 2 of HSG80 controllers A and B in an RA8000/ESA12000 array.]
______________________ Note _______________________
If you lose an ISL, the communication can be routed through another switch to the same port on the other controller. This can constitute the maximum allowable two hops.

You can find the following information about storage area networks (SANs) in the Compaq StorageWorks Heterogeneous Open SAN Design Reference Guide, located at: http://www5.compaq.com/products/storageworks/techdoc/san/AA-RMPNA-TE.
6. Use the show wwid* and show n* console commands to show the disk devices that are currently reachable, and the paths to the devices (Section 6.9.1.4). 7. Use the WWID manager to set the bootdef_dev console environment variable for the system where you will install the Tru64 UNIX operating system (Section 6.9.1.5). 8. See the Tru64 UNIX Installation Guide and install the base operating system from the CD-ROM.
6.8 Installing and Configuring Fibre Channel Hardware This section provides information about installing the Fibre Channel hardware that is needed to support Tru64 UNIX or a TruCluster Server configuration using Fibre Channel storage. Ensure that the member systems, the Fibre Channel switches or hubs, and the HSG80 array controllers are placed within the lengths of the optical cables that you will be using.
DS-DSGGC-AA/AB) before you can manage the switch via a telnet session, SNMP, or the Web. The DS-DSGGC-AA/AB Fibre Channel switches have a default IP address of 10.77.77.77. You may need to change this IP address before you connect the switch to the network. The DSGGA switch has slots to accommodate up to four (DS-DSGGA-AA) or eight (DS-DSGGA-AB) plug-in interface modules. Each interface module in turn supports two Gigabit Interface Converter (GBIC) modules.
For an installation, at a minimum, you must complete the following steps. Some of the steps are explained in more detail in the following sections.
1. Place the switch or install it in the rack.
2. If you are using a DS-DSGGB-AA or DS-DSGGC, connect the switch to a terminal or PC (Section 6.8.1.2.3).
3. Connect the Ethernet cable between the Fibre Channel switch and the Ethernet switch or hub.
4. Connect the fiber-optic cables between the switch and the host bus adapters and RAID array controllers.
6.8.1.2 Managing the Fibre Channel Switches
You can manage the DS-DSGGA-AA, DS-DSGGA-AB, and DS-DSGGB-AB switches, and obtain switch status, from the front panel, through a telnet connection, or through the Web. The DS-DSGGB-AA and DS-DSGGC-AA/AB Fibre Channel switches do not have a front panel, so you must use a telnet connection or Web access.
6.8.1.2.2 Setting the Ethernet IP Address and Subnet Mask from the Front Panel

Before you telnet to the switch, you must connect the Ethernet cable and then set the Ethernet IP address and subnet mask. To use the front panel to set the Ethernet address and subnet mask, follow these steps:
1. Press any of the switch front panel buttons to activate the display for the top-level menu.
Press Enter to display the first submenu in the Operation Menu, Switch Offline:

    Operation Menu: Switch Offline

8. Press the down button until the Reboot submenu item is displayed:

    Operation Menu: Reboot

9. Press Enter. You can change your mind and not reboot:

    Reboot Accept? Yes No

10. Use the Tab/Esc button to select Yes. Press Enter to reboot the switch and execute the POST tests.
4. Turn on power to the switch and log in.
5. If the connection is correct, the self-test results will be displayed. It takes 2 to 3 minutes for the self-tests to complete.
   • DS-DSGGB-AA: The switch automatically connects to the host and logs the user in to the switch as admin when the self-tests terminate. For subsequent logins, the default password is password.
   • DS-DSGGC-AA/AB: Plugging in the DS-DSGGC-AA/AB switch turns the power on. (There is no on/off power switch.) Log in as the admin user.
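On the switches without a front panel, the IP address and subnet mask are typically set from this serial or telnet session after logging in as admin. The session below is a sketch: the ipAddrSet command and its prompts should be verified against your switch documentation, and the address values shown are placeholders, not recommendations:

```
fcsw1:admin> ipAddrSet
Ethernet IP Address [10.77.77.77]: 16.140.32.10
Ethernet Subnetmask [255.0.0.0]: 255.255.255.0
```

After the address is set and the switch is on the network, you can manage it through telnet or the Web as described in Section 6.8.1.2.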
Table 6–3: Telnet Session Default User Names for Fibre Channel Switches

DSGGA   DSGGB or DSGGC   Description
other   n/a              Allows you to execute commands ending in Show, such as dateShow and portShow.
user    user             Allows you to execute all commands ending in Show, plus any commands from the help menu that do not change the state of the switch, for example, version and errDump.

You can change the passwords for all users up to and including the current user’s security level.
______________________ Note _______________________
When you telnet to the switch the next time, the prompt will include the switch name, for example:

    fcsw1:Admin>

6.8.2 Installing and Setting Up the DS-SWXHB-07 Hub

The DS-SWXHB-07 hub supports up to seven 1.0625 Gb/sec ports. The ports can be connected to the KGPSA-CA PCI-to-Fibre Channel host bus adapter or to an HSG80 array controller. Unlike the DSGGA switch, the DS-SWXHB-07 hub does not have any controls, or even a power-on switch.
_____________________ Caution _____________________
Static electricity can damage modules and electronic components. We recommend using a grounded antistatic wrist strap and a grounded work surface when handling modules.

For an installation, at a minimum, you must:
1. Place the hub on an acceptable surface or install it in the rackmount.
2. Install one or more GBIC modules. Gently push the GBIC module into an available port on the hub until you feel the GBIC module click into place.
• Solid amber: Indicates that a loss of signal or poor signal integrity has put the port in bypass mode. Make sure that a GBIC is installed, that a cable is attached to the GBIC, and that the other end of the cable is attached to a KGPSA-CA or HSG80.
• Amber off (and green on): Indicates that the port and device are fully operational.

For more information on determining the hub status, see the Fibre Channel Storage Hub 7 Installation Guide.
Remember to remove the transparent plastic covering from the ends of the optical cable.
5. Connect the fiber-optic cables to the shortwave Gigabit Interface Converter (GBIC) modules in the DSGGA, DSGGB, or DSGGC Fibre Channel switch.

6.8.3.2 Setting the KGPSA-BC or KGPSA-CA to Run on a Fabric

The KGPSA host bus adapter defaults to fabric mode and can be used in a fabric without any further action.
P00>>> wwidmgr -show adapter
wwidmgr available only prior to booting. Reinit system and try again.
P00>>> init
.
.
.
P00>>> wwidmgr -show adapter
.
.
.

For more information on the wwidmgr utility, see the Wwidmgr User’s Manual, which is on the Alpha Systems Firmware Update CD-ROM in the DOC directory.

Use the worldwide ID manager to show all KGPSA adapters:

P00>>> wwidmgr -show adapter
Link is down.
 item     adapter            WWN
pga0.0.0.3.1 - Nvram read failed
[ 0]  pga0.0.0.3.1   1000-0000-c920-eda0
pgb0.0.0.4.
If the current topology for an adapter is LOOP, set an individual adapter to FABRIC by using the item number for that adapter (for example, 0 or 1). Use 9999 to set all adapters: P00>>> wwidmgr -set adapter -item 9999 -topo fabric Reformatting nvram Reformatting nvram ______________________ Note _______________________ The qualifier in the previous command is -topo and not -topology. You will get an error if you use -topology.
fabric mode that is connected to a loop. Therefore, determine the topology setting before using the adapter. The wwidmgr utility is documented in the Wwidmgr User’s Manual, which is located in the DOC subdirectory of the Alpha Systems Firmware CD-ROM. The steps required to set the link type are summarized here; see the Wwidmgr User’s Manual for complete information and additional examples. Assuming that you have the required console firmware, use the wwidmgr utility to set the link type, as follows: 1.
6. Repeat this process for the other cluster member if this is a two-node TruCluster configuration. 6.8.3.4 Obtaining the Worldwide Names of KGPSA Adapters A worldwide name is a unique number assigned to a subsystem by the Institute of Electrical and Electronics Engineers (IEEE) and set by the manufacturer prior to shipping. The worldwide name assigned to a subsystem never changes.
6.8.4 Setting Up the HSG80 Array Controller for Tru64 UNIX Installation This section covers setting up the HSG80 controller for operation with Tru64 UNIX Version 5.1A and TruCluster Server Version 5.1A. The steps described here apply to both fabric and arbitrated loop configurations. However, arbitrated loop requires specific settings for the port topology and AL_PA values.
6. Install disks into storage shelves.
7. Connect a terminal to the maintenance port on one of the HSG80 controllers. You need a local connection to configure the controller for the first time. The maintenance port supports serial communication with the following default values:
   • 9600 bits/sec
   • 8 data bits
   • 1 stop bit
   • No parity
8. Connect the RA8000 or ESA12000 to the power source and apply power.
HSG80> set other port_1_topology = offline 6
HSG80> set other port_2_topology = offline 6
HSG80> set this port_1_topology = fabric 7
HSG80> set this port_2_topology = fabric 7
HSG80> set other port_1_topology = fabric 7
HSG80> set other port_2_topology = fabric 7
HSG80> set this time=dd-mmm-yyyy:hh:mm:ss 8
HSG80> set this scsi_version = scsi-3 9
HSG80> set other scsi_version = scsi-3 9
HSG80> restart other 10
HSG80> restart this 10

1 Removes any failover mode that may have been previously configured.
Setting the SCSI_VERSION to SCSI-2 allows a disk unit to be at LUN 0, and specifies that the command console LUN (CCL) is not fixed at a particular location, but floats to the first available LUN. If SCSI_VERSION is set to SCSI-3, the CCL is presented at LUN 0 for all connection offsets. Do not assign unit 0 at any connection offset because the unit would be masked by the CCL at LUN 0 and would not be available.
Example 6–1: Determine HSG80 Connection Names (cont.)
HSG80 (!NEWCON50). For example, assume that member system pepicelli has two KGPSA Fibre Channel host bus adapters, and that the worldwide name for KGPSA pga is 1000-0000-C920-DA01. Example 6–1 shows that the connections for pga are !NEWCON49, !NEWCON50, !NEWCON54, and !NEWCON56. You can change the name of !NEWCON49 to indicate that it is the first connection (of four) to pga on member system pepicelli as follows: HSG80> rename !NEWCON49 pep_pga_1 13.
4 Verifies that all connections have the offsets set to 0 and the operating system is set to TRU64_UNIX. ____________________ Note _____________________ If the fiber-optic cables are not properly installed, there will be inconsistencies in the connections shown. 14. Set up the storage sets as required for the applications to be used. An example is provided in Section 6.9.1.1. 6.8.4.1 Setting Up the HSG80 Array Controller for Arbitrated Loop Section 6.8.
This is the preferred address, but the HSG80 controller is free to use whatever AL_PA it obtains during loop initialization. However, the address you specify must be valid and must not be used by another port. If the controller is unable to obtain the address you specify (for example, because two ports are configured for the same address), the controller cannot come up on the loop.
HSG80> set other PORT_2_AL_PA = 02

After you have done this, continue with steps 12 through 14 in Section 6.8.4.

6.8.4.2 Obtaining the Worldwide Names of the HSG80 Controllers

The RA8000 or ESA12000 is assigned a worldwide name when the unit is manufactured. The worldwide name (and checksum) of the unit appears on a sticker placed above the controllers. The worldwide name ends in zero (0), for example, 5000-1FE1-0000-0D60. You can also use the SHOW THIS_CONTROLLER Array Controller Software (ACS) command.
controller, there are different procedures for replacing HSG80 controllers in an RA8000 or ESA12000: • If you replace one controller of a dual-redundant pair, the NVRAM from the remaining controller retains the configuration information (including worldwide name). When you install the replacement controller, the existing controller transfers configuration information to the replacement controller.
This section describes how to perform the following tasks:
• Before the installation:
  a. Configure HSG80 storagesets — In this manual, example storagesets are configured for both Tru64 UNIX and TruCluster Server on Fibre Channel storage. Modify the storage configuration to meet your needs (Section 6.9.1.1).
  b.
f. Add additional systems to the cluster (Section 6.9.7). If you are installing either the Tru64 UNIX operating system or TruCluster Server software, follow the procedure in Section 6.7. 6.9.1 Before You Install The following sections cover the preliminary steps that must be completed before you install Tru64 UNIX and TruCluster Server on Fibre Channel disks. 6.9.1.
configuration. A blank table (Table A–1) is provided in Appendix A for use in an actual installation. One mirrorset, the BOOT-MIR mirrorset, is used for the Tru64 UNIX and cluster member system boot disks. The other mirrorset, CROOT-MIR, is used for the cluster root (/), cluster /usr, cluster /var, and quorum disks. To set up the example disks for operating system and cluster installation, follow the steps in Example 6–2.
Example 6–2: Setting Up the Mirrorset (cont.)
3 Initializes the BOOT-MIR and CROOT-MIR mirrorsets. If you want to set any initialization switches, you must do so in this step. The BOOT-MIR mirrorset will be used for the Tru64 UNIX and cluster member system boot disks. The CROOT-MIR mirrorset will be used for the cluster root (/), cluster /usr and cluster /var file systems, and the quorum disk. 4 Verifies the mirrorset configuration and switches. Ensure that the mirrorsets use the correct disks.
______________________ Note _______________________
A storageset must reside on one controller or the other. All the partitions of a storageset must be on the same controller because all the partitions of a storageset fail over as a unit.

The steps performed in Example 6–3 include:
1. Assigns a unit number to each storage unit and disables all access to the storage unit.
2. Sets an identifier for each storage unit.
3. Enables selective access to the storage unit.
Example 6–3: Adding Units and Identifiers to the HSG80 Storagesets (cont.) HSG80> set d144 ENABLE_ACCESS_PATH = !NEWCON61,!NEWCON62,!NEWCON64,!NEWCON65 Warning 1000: Other host(s) in addition to the one(s) specified can still access this unit. If you wish to enable ONLY the host(s) specified, disable all access paths (DISABLE_ACCESS=ALL), then again enable the ones specified.
selective access in case there are other systems or clusters that are connected to the same switch as the cluster. Record the unit name of each partition with the intended use for that partition (Table 6–4). ____________________ Note _____________________ In a multiple-bus failover, dual-redundant configuration, you can balance the I/O load by specifying the controller through which the unit will be accessed.
HSG80. The identifiers should be easily recognized. Ensure that you record the identifiers (Table 6–4). 3 Enables access to each unit for those hosts that you want to be able to access this unit. Because access was initially disabled to all hosts, you can ensure selective access to the units. If you do not remember the connection names, use the HSG80 show connection command as shown in Example 6–1 to determine the HSG80 connection names for the connection to the KGPSA host bus adapters.
Table 6–4: Converting Storageset Unit Numbers to Disk Names (cont.)

File System or Disk   HSG80 Unit   Worldwide Name                            UDID   Device Name   dskn
/usr                  D143         6000-1FE1-0000-0D60-0009-8080-0434-002B   143    N/A(a)        dsk19
/var                  D144         6000-1FE1-0000-0D60-0009-8080-0434-0028   144    N/A(a)        dsk18

a These units are not assigned an alias for the device unit number by the WWID manager command; therefore, they do not get a device name and will not show up in a console show dev display.

6.9.1.
wwid3
P00>>> show n*
N1
N2
N3
N4

____________________ Note _____________________
The console only creates devices for which the wwidn console environment variable has been set, and which are accessible through an HSG80 N_Port as specified by the Nn console environment variable also being set. These console environment variables are set with the wwidmgr -quickset or wwidmgr -set wwid commands. The use of the wwidmgr -quickset command is shown later in Example 6–5.

3.
1 The number within the brackets ([ ]) is the item number of the device shown on any particular line. 2 The UDID is assigned at the HSG80 with the set Dn IDENTIFIER = xxx command, and is not used by the Tru64 UNIX operating system, but may be set (as we have done with the SET D131 IDENTIFIER=131 group of commands). When the identifier is not set at the HSG80, a value of -1 is displayed. 3 The worldwide name for the device. It is prefixed with the value WWID:01000010:.
Example 6–5 shows:
• The use of the wwidmgr -quickset command to set the device unit number for the Tru64 UNIX Version 5.1A installation disk to 133, and the first cluster member system boot disk to 131.
• The wwidmgr -quickset command provides a reachability display equivalent to executing the wwidmgr -show reachability command. The reachability part of the display provides the following:
  – The worldwide name for the storage unit that is to be accessed.
  – The new device name for the storage unit.
Example 6–5: Setting the Device Unit Number with the wwidmgr -quickset Command (cont.)

                     via adapter:
dga131.1001.0.1.0    pga0.0.0.1.0
dga131.1002.0.1.0    pga0.0.0.1.0
dga131.1003.0.1.0    pga0.0.0.1.0
dga131.1004.0.1.0    pga0.0.0.1.0
dgb131.1001.0.2.0    pgb0.0.0.2.0
dgb131.1002.0.2.0    pgb0.0.0.2.0
dgb131.1003.0.2.0    pgb0.0.0.2.0
dgb131.1004.0.2.0    pgb0.0.0.2.0
P00>>> init
6.9.1.4 Displaying the Available Boot Devices The only Fibre Channel devices that are displayed by the console show dev command are those devices that have been assigned to a wwidn environment variable with the wwidmgr -quickset command. The devices that are shown in the reachability display of Example 6–5 are available for booting and the setting of the bootdef_dev console environment variable during normal console mode.
Clear the wwid1 console environment variable as follows:

P00>>> wwidmgr -clear wwid1

Then, reboot the system. Example 6–6 provides sample device names as displayed by the show dev command after using the wwidmgr -quickset command to set the device unit numbers.

Example 6–6: Sample Fibre Channel Device Names

P00>>> show dev
dga131.1001.0.1.0
dga131.1002.0.1.0
dga131.1003.0.1.0
dga131.1004.0.1.0
dga133.1001.0.1.0
dga133.1002.0.1.0
dga133.1003.0.1.0
dga133.1004.0.1.0
dgb131.1001.0.2.0
dgb131.1002.0.2.
______________________ Note _______________________ The bootdef_dev environment variable values must point to the same HSG80. To set the bootdef_dev console environment variable for the Tru64 UNIX installation when booting from a Fibre Channel device, follow these steps: 1. Obtain the device name for the Fibre Channel storage unit where you will install the Tru64 UNIX operating system. The device name shows up in the reachability display as shown in Example 6–5 with a Yes under the connected column.
P00>>> show bootdef_dev
bootdef_dev             dga133.1002.0.1.0

You are now ready to install the Tru64 UNIX operating system.

6.9.2 Install the Base Operating System

After you read the TruCluster Server Cluster Installation manual, boot from the CD-ROM and perform a full installation of the Tru64 UNIX Version 5.1A operating system, using the Tru64 UNIX Installation Guide as a reference.
# hwmgr -view dev | grep IDENTIFIER
HWID: Device Name       Mfg  Model  Location
-----------------------------------------------------------------------
 62:  /dev/disk/dsk15c  DEC  HSG80  IDENTIFIER=133
 63:  /dev/disk/dsk16c  DEC  HSG80  IDENTIFIER=132
 64:  /dev/disk/dsk17c  DEC  HSG80  IDENTIFIER=131
 65:  /dev/disk/dsk18c  DEC  HSG80  IDENTIFIER=141
 66:  /dev/disk/dsk19c  DEC  HSG80  IDENTIFIER=142
 67:  /dev/disk/dsk20c  DEC  HSG80  IDENTIFIER=143
 68:  /dev/disk/dsk21c  DEC  HSG80  IDENTIFIER=144

If you know that you have set the UDID for a l
. . . For more information on the hardware manager, see hwmgr(8). 2. Search the display for the UDIDs (or worldwide names) for each of the cluster installation disks and record the /dev/disk/dskn values. If you used the grep utility to search for a specific UDID, for example hwmgr -view dev | grep "IDENTIFIER=131" repeat the command to determine the /dev/disk/dskn for each of the remaining cluster disks. Record the information for use when you install the cluster software.
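The search in step 2 can also be scripted. The saved output file and sample lines below are assumptions for demonstration only — on a live system you would pipe hwmgr -view dev directly into awk:

```shell
# Map an HSG80 IDENTIFIER (UDID) to its /dev/disk/dskN device name from
# saved `hwmgr -view dev` output (abbreviated sample; a real system has more lines).
cat > /tmp/hwmgr.out <<'EOF'
62: /dev/disk/dsk15c DEC HSG80 IDENTIFIER=133
63: /dev/disk/dsk16c DEC HSG80 IDENTIFIER=132
64: /dev/disk/dsk17c DEC HSG80 IDENTIFIER=131
EOF

# Print the device name (second field) for the line whose UDID is exactly 131.
awk '/IDENTIFIER=131$/ { print $2 }' /tmp/hwmgr.out
```

The `$` anchor prevents UDID 131 from also matching a longer identifier such as 1310.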
6.9.6 Reset the bootdef_dev Console Environment Variable If you set the bootdef_dev console environment variable to multiple paths in Section 6.9.1.5, the base operating system installation or clu_create procedures modify the variable and you should reset it to provide multiple boot paths. To reset the bootdef_dev console environment variable, follow these steps: 1. Obtain the device name and worldwide name for the Fibre Channel unit from where you will boot cluster member system 1 (Table 6–4). 2.
You can set units preferred to a specific controller, in which case both controllers will be active. If the bootdef_dev console environment variable ends up with all boot paths in an unconnected state, you can use the ffauto or ffnext console environment variables to force a boot device from a not connected to a connected state. The ffauto console environment variable is effective only during autoboots (boots other than manual boots). Use the set ffauto on console command to enable ffauto.
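As a sketch of the commands just described (the show output wording may differ by SRM console version):

```
P00>>> set ffauto on
P00>>> show ffauto
ffauto                  on
```

With ffauto enabled, an autoboot can promote a boot device from the not connected to the connected state without operator intervention.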
______________________ Note _______________________ The console System Reference Manual (SRM) software guarantees that you can set the bootdef_dev console environment variable to a minimum of four device names. You may be able to set it to five, but only four are guaranteed. 6.9.7 Add Additional Systems to the Cluster To add additional systems to the cluster, follow this procedure: 1.
c. 2. Boot genvmunix on the newly added cluster member system. Each installed subset will be configured and a new kernel will be built and installed. After the new kernel is built, do not reboot the new cluster member system. Shut down the system and reset the bootdef_dev console environment variable to provide multiple boot paths to the member system boot disk as follows: a. Obtain the device name and worldwide name for the Fibre Channel unit from where you will boot (Table 6–4).
Path from host bus adapter A to controller B port 2 4

c. Set the bootdef_dev console environment variable for the member system 2 boot disk to a comma-separated list of several of the boot paths that show up as connected in the reachability display (wwidmgr -quickset or wwidmgr -show reachability). You must initialize the system to use any of the device names in the bootdef_dev variable as follows:

P00>>> set bootdef_dev \
dga132.1001.0.1.0,dga132.1004.0.1.0,\
dgb132.1002.0.2.0,dgb132.1003.0.2.0
connection is discovered. In transparent failover mode, host connections to port 1 default to an offset of 0; host connections on port 2 default to an offset of 100. Host connections on port 1 can see units 0 through 99; host connections on port 2 can see units 100 through 199. In multiple-bus failover mode, host connections on either port 1 or 2 can see units 0 through 199. In multiple-bus failover mode, the default offset for both ports is 0.
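For example, a connection created while the controllers were in transparent failover mode retains its port-2 offset of 100; in multiple-bus failover mode you can reset it to 0 so the host sees units 0 through 199. The connection name below is illustrative, and the SET connection UNIT_OFFSET syntax should be confirmed against your ACS documentation:

```
HSG80> SHOW CONNECTIONS
HSG80> SET !NEWCON55 UNIT_OFFSET = 0
```

Use SHOW CONNECTIONS afterward to verify that every connection has the offset you intend.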
2. At the HSG80, set multiple-bus failover as follows. Before putting the controllers in multiple-bus failover mode, you must remove any previous failover mode:

HSG80> SET NOFAILOVER
HSG80> SET MULTIBUS_FAILOVER COPY=THIS

____________________ Note _____________________
Use the controller that you know has the good configuration information.

3.
!NEWCON54  TRU64_UNIX  OTHER  1  230813  OL other  0
    HOST_ID=1000-0000-C920-DA01  ADAPTER_ID=1000-0000-C920-DA01
!NEWCON55  TRU64_UNIX  OTHER  2  230913  OL other  100
    HOST_ID=1000-0000-C920-EDEB  ADAPTER_ID=1000-0000-C920-EDEB
!NEWCON56  TRU64_UNIX  OTHER  2  230813  OL other  100
    HOST_ID=1000-0000-C920-DA01  ADAPTER_ID=1000-0000-C920-DA01
!NEWCON57  TRU64_UNIX  THIS   2  offline  100
    HOST_ID=1000-0000-C921-09F7  ADAPTER_ID=1000-0000-C921-09F7
!NEWCON58  TRU64_UNIX  OTHER  1  offline  0
    HOST_ID=1000-0000-C921-09F7  ADAPTER_ID
____________________ Note _____________________ The remaining steps apply only to fabric configurations. In this release, you cannot boot from storage that is connected via a Fibre Channel arbitrated loop. a. Use the wwid manager to show the Fibre Channel environment variables and determine which units are reachable by the system.
the example, cluster member 1 will need access to the storage units with UDIDs 131 (member 1 boot disk) and 133 (Tru64 UNIX disk). Cluster member 2 will need access to the storage units with UDIDs 132 (member 2 boot disk) and 133 (Tru64 UNIX disk). Set up the device and port path for cluster member 1 as follows: P00>>> wwidmgr -quickset -udid 131 . . . P00>>> wwidmgr -quickset -udid 133 . . . f. Initialize the console: P00>>> init g.
• Display the current Fibre Channel topology for a Fibre Channel adapter See emxmgr(8) for more information on the emxmgr utility. 6.11.1 Using the emxmgr Utility to Display Fibre Channel Adapter Information The primary use of the emxmgr utility for TruCluster Server is to display Fibre Channel information. Use the emxmgr -d command to display the presence of KGPSA Fibre Channel adapters on the system.
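For example, from the root prompt (the adapter names shown in the output are invented for illustration; see emxmgr(8) for the actual output format):

```
# emxmgr -d
emx0 emx1
```

Each emxN name corresponds to one KGPSA adapter and can then be examined individually with the emxmgr utility.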
N_Port at FC DID 0x210113 - SCSI tgt id 1 : 2
    portname 5000-1FE1-0001-8931  nodename 5000-1FE1-0001-8930
    Present, Logged in, FCP Target, FCP Logged in,
N_Port at FC DID 0x210213 - SCSI tgt id 2 : 2
    portname 5000-1FE1-0001-8941  nodename 5000-1FE1-0001-8940
    Present, Logged in, FCP Target, FCP Logged in,
N_Port at FC DID 0x210313 - SCSI tgt id 4 : 2
    portname 5000-1FE1-0001-8942  nodename 5000-1FE1-0001-8940
    Present, Logged in, FCP Target, FCP Logged in,
N_Port at FC DID 0x210513 - SCSI tgt id 6 : 2
    portname 1000
3
• FCP Initiator — The remote N_Port acts as a SCSI initiator device (it sends SCSI commands).
• FCP Suspended — The driver has invoked a temporary suspension on SCSI traffic to the N_Port while it resolves a change in connectivity.
• F_PORT — The fabric connection (F_Port) allows the adapter to send Fibre Channel traffic into the fabric.
• Directory Server — The N_Port is the FC entity queried to determine who is present on the Fibre Channel fabric.
    Present, Logged in, FCP Target, FCP Logged in,
N_Port at FC DID 0x00006e - SCSI tgt id 4 :
    portname 2200-0020-3700-55CB  nodename 2000-0020-3700-55CB
    Present, Logged in, FCP Target, FCP Logged in,

1 Status of the emx0 link. The connection is a Fibre Channel arbitrated loop (FC-AL) connection, and the link is up. The adapter is on SCSI bus 2 at SCSI ID 7. The port name and node name of the adapter are provided. The Fibre Channel DID number is the physical Fibre Channel address being used by the N_Port.

6.
1. View adapter’s current Topology
2. View adapter’s Target Id Mappings
3. Change Target ID Mappings
d.
a.
x.
7 Using GS80, GS160, or GS320 Hard Partitions in a TruCluster Server Configuration This chapter contains information about using AlphaServer GS80/160/320 hard partitions in a TruCluster Server Version 5.1A configuration with Tru64 UNIX Version 5.1A. The chapter discusses the following topics: • An overview of the use of hard partitions in an AlphaServer GS80, GS160, or GS320 TruCluster Server configuration (Section 7.1).
The AlphaServer GS80/160/320 systems use the same switch technology, the same CPU, memory, and power modules, and the same I/O riser modules. The GS160 and GS320 systems house the modules in up to two system boxes, each with two QBBs, in a cabinet. The GS320 requires two cabinets for the system boxes. The GS80 is a rack system with the system modules for each QBB in a drawer. An 8-processor GS80 uses two drawers for the CPU, memory, and I/O riser modules.
Figure 7–1: Portion of QBB Showing I/O Riser Modules
[Figure: a QBB I/O riser module with its BN39B I/O riser cable.]

____________________ Notes ____________________
You can have up to two I/O riser modules in a QBB, but you cannot split them across partitions. Each I/O riser has two cable connections (Port 0 and Port 1). Ensure that both cables from one I/O riser are connected to the same PCI drawer (0-R and 1-R in Figure 2–1).

A QBB I/O riser (local) is connected to a PCI I/O riser (remote) by BN39B cables.
We recommend that you connect I/O riser 0 (local I/O riser ports 0 and 1) to the primary PCI drawer that will be the master system control manager (SCM). The BA54A-AA PCI drawer (the bottom PCI drawer in Figure 7–2 and Figure 7–3) is a primary PCI drawer. See Figure 2–1 for PCI drawer slot layout. A primary PCI drawer contains: – A standard I/O module in slot 0-0/1 that has EEPROMs for the system control manager (SCM) and system reference manual (SRM) firmware.
1) that is higher than the master SCM. Both the master SCM and standby SCM must have the scm_csb_master_eligible SCM environment variable set. __________________ Note __________________ We recommend that you put the primary PCI drawers that contain the master and standby SCM in the power cabinet. They both must be connected to the OCP.
types of PCI drawers. It is harder to distinguish the type of PCI drawer from the rear, but slot 1 provides the key. The primary PCI drawer has a standard I/O module in slot 1, and the console and modem ports and USB connections are visible on the module.
Figure 7–3: Rear View of Expansion and Primary PCI Drawers (the figure labels I/O risers 0 and 1, the console serial bus node ID modules, the PCI drawer node IDs, the CSB connectors, the standard I/O module, and the local terminal/COM1 port)

7.3 Configuring Partitioned GS80, GS160, or GS320 Systems in a TruCluster Configuration

An AlphaServer GS80/160/320 system can be a member of a TruCluster Server configuration.
equally well with any number of partitions (as supported by the system type) by modifying the amount and placement of hardware and the SCM environment variable values. ______________________ Notes ______________________ View each partition as a separate system. Ensure that the system comes up as a single partition the first time that you turn power on. Do not turn the key switch on. Only turn on the AC circuit breakers.
• Shared storage that is connected to KZPBA-CB (parallel SCSI) or KGPSA-CA (Fibre Channel) host bus adapters.
• Network controllers.

3. Install BN39B cables between the local I/O risers on the QBBs in the partition (see Figure 7–1) and the remote I/O risers in the primary and expansion PCI drawers (see Figure 2–1 and Figure 7–3). Use BN39B-01 cables (1-meter; 3.3-foot) for a PCI drawer in the GS80 RETMA cabinet. Use BN39B-04 cables (4-meter; 13.
____________________ Notes ____________________

If the OCP key switch is in the On or Secure position, the system will go through the power-up sequence. In this case, when the power-up sequence terminates, power down the system with the power off SCM command, then partition the system. If the auto_quit_scm SCM environment variable is set (equal to 1), control will be passed to the SRM console firmware at the end of the power-up sequence.
Example 7–1: Defining Hard Partitions with SCM Environment Variables (cont.)

hp_qbb_mask5 hp_qbb_mask6 hp_qbb_mask7 srom_mask xsrom_mask primary_cpu primary_qbb0 auto_quit_scm fault_to_sys dimm_read_dis scm_csb_master_eligible perf_mon scm_force_fsl ocp_text auto_fault_restart scm_sizing_time 0 0 0 ff ff ff ff 1 0 0 1 20 0 as 1 c f ff ff ff ff ff ff ff ff 1 0 0 6 7 gs160 1

1 Sets the number of hard partitions to 2.
2 Sets bits 0 and 1 of the mask (0011) to select QBB 0 and QBB 1 for hard partition 0.
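The mask arithmetic in callout 2 generalizes: bit i of an hp_qbb_maskn value selects QBB i for hard partition n. The following Python sketch is illustrative only (the qbb_mask helper is not part of any SCM tooling); it reproduces the mask values for the two-partition GS160 example:

```python
def qbb_mask(qbbs):
    """Compute an hp_qbb_mask value: set bit i for each QBB i in the partition."""
    mask = 0
    for q in qbbs:
        mask |= 1 << q
    return mask

# Two hard partitions, as in Example 7-1:
settings = {
    "hp_count": 2,                       # number of hard partitions
    "hp_qbb_mask0": qbb_mask([0, 1]),    # QBBs 0 and 1 -> 0b0011 = 0x3
    "hp_qbb_mask1": qbb_mask([2, 3]),    # QBBs 2 and 3 -> 0b1100 = 0xc
}
for name, value in settings.items():
    print(f"{name} = {value:x}")
```

The same helper covers any supported partition count; unused hp_qbb_maskn variables stay 0.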
scm_csb_master_eligible environment variable. The master and standby SCM must be connected to the OCP. The master SCM must have the lowest node ID. Use the node ID address obtained from the show csb SCM command (see Example 7–4). If multiple primary PCI drawers are eligible, the SCM on the PCI drawer with the lowest node ID is chosen as master. The other SCM will be a standby in case of a problem with the master SCM. If the node ID switch is set to zero, the CSB node ID will be 10 (Example 7–4).
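The master-selection rule just described (among the eligible SCMs, the one on the PCI drawer with the lowest CSB node ID wins) can be expressed compactly. This Python sketch is a hypothetical illustration of the documented behavior, not actual firmware logic; the dictionary keys and function name are invented for the example:

```python
def choose_master_scm(scms):
    """Pick the master SCM: among drawers whose scm_csb_master_eligible
    variable is set, the one with the lowest CSB node ID wins."""
    eligible = [s for s in scms if s["eligible"]]
    if not eligible:
        return None
    return min(eligible, key=lambda s: s["node_id"])

# Node ID switch 0 maps to CSB node ID 10, switch 1 to node ID 11 (Example 7-4).
drawers = [
    {"name": "primary PCI drawer 0", "node_id": 0x10, "eligible": True},
    {"name": "primary PCI drawer 1", "node_id": 0x11, "eligible": True},
]
master = choose_master_scm(drawers)
print(master["name"])  # prints "primary PCI drawer 0"
```

The SCM on the other eligible drawer remains a standby in case of a problem with the master.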
1 Turns on power to partition 0.
2 Turns on power to partition 1.
3 Transfers control from the SCM firmware to the SRM console firmware.

____________________ Note _____________________

If the auto_quit_scm SCM environment variable is set, control is passed to the SRM console firmware automatically at the end of the power-up sequence.

12. Obtain a copy of the latest firmware release notes for the AlphaServer system (see Section 7.5).
7.4 Determining AlphaServer GS80/160/320 System Configuration

You may be required to reconfigure an AlphaServer GS80/160/320 system that is not familiar to you.
1 Hard partition number. There are two hard partitions in this example (0 and 1).
2 QBB number and console serial bus (CSB) node ID. QBBs 0 and 1 (CSB node IDs 30 and 31) are in partition 0. QBBs 2 and 3 (CSB node IDs 32 and 33) are in partition 1.
3 Status of the CPU module, which is present, powered up, and has passed self test (P). A dash (-) indicates an empty slot. An F indicates a self test failure. In this example, each QBB contains four CPU modules, each of which has passed self test.
11 Hierarchical switch (H-switch) type, status, temperature, and a report of which QBBs are connected to the H-switch. In this example, QBBs 0, 1, 2, and 3 are connected to the H-switch.
12 Console serial bus node ID for PCI drawers. In this example, the first PCI drawer has node ID 10; the second PCI drawer has node ID 11. Note that in this case, the node ID switches are set to 0 and 1.
13 Status of each of the four PCI buses in a PCI drawer. An S indicates that a standard I/O module is present.
Example 7–4: Displaying Console Serial Bus Information (cont.) 30 30 C0 C1 C2 C3 C0 C1 31 31 C4 C5 C6 C7 32 32 C8 C9 CA CB C8 C9 33 33 CC CD CE CF 40 E0 E1 1 PSM XSROM CPU0/SROM CPU1/SROM CPU2/SROM CPU3/SROM IOR0 IOR1 PSM XSROM CPU0/SROM CPU1/SROM CPU2/SROM CPU3/SROM PSM XSROM CPU0/SROM CPU1/SROM CPU2/SROM CPU3/SROM IOR0 IOR1 PSM XSROM CPU0/SROM CPU1/SROM CPU2/SROM CPU3/SROM HPM SCM MASTER SCM SLAVE T05.4 T05.4 V5.0-7 V5.0-7 V5.0-7 V5.0-7 (03.24/01:09) (03.24/02:10) T05.4 T05.4 V5.0-7 V5.0-7 V5.0-7 V5.
• PBM (PCI backplane manager)
• PSM (Power system manager)
• HPM (Hierarchical switch power manager)
• SCM master: This primary PCI drawer has the master SCM.
• SCM slave: The SCM on this primary PCI drawer is a slave and has not been designated as a backup to the master.
• CPUn/SROM: Each CPU module has SROM firmware that is executed as part of the power-up sequence.
• XSROM: Each CPU executes this extended SROM firmware on the PSM module after executing the SROM firmware.
• System Reference Manual (SRM) flash ROM on the standard I/O module
• The flash ROMs for the following console serial bus (CSB) microprocessors:
  – SCM: One on the standard I/O module of each primary PCI drawer
  – Power system manager (PSM): One on the PSM module in each QBB
  – PCI backplane manager (PBM): One on each PCI backplane
  – Hierarchical switch power manager (HPM): One on the H-switch
• PCI host bus adapter EEPROMs

To update the AlphaServer GS80/160/320 firmware with the LFU utility, follow these steps:
____________________ Note _____________________

You do not need to zero the hp_qbb_maskn environment variables, only the hp_count.

5. Turn power on to the system to allow SRM console firmware execution. The SRM code is copied to memory on the partition primary QBB during the power-up initialization sequence. SRM code is executed out of memory, not the SRM EEPROM on the standard I/O module.

SCM_E0> power on

6.
Use the update command to update all firmware, or designate a specific device to update; for example, to update the SRM console firmware:

UPD> update srm

___________________ Caution ___________________

Do not abort the update; doing so can cause a corrupt flash image in a firmware module. A complete firmware update can take from 5 minutes for a PCI drawer with no updatable devices to over 30 minutes for a PCI drawer with many updatable devices.
8 Configuring a Shared SCSI Bus for Tape Drive Use

The topics in this section provide information on preparing the various tape devices for use on a shared SCSI bus with the TruCluster Server product. The topics discussed include preparing the following tape drives for shared SCSI bus usage:

• TZ88 (Section 8.1)
• TZ89 (Section 8.2)
• Compaq 20/40 GB DLT Tape Drive (Section 8.3)
• Compaq 40/80-GB DLT Drive (Section 8.4)
• TZ885 (Section 8.5)
• TZ887 (Section 8.6)
They both work with an expansion unit (previously called the DS-TL890-NE) and a new module called the data unit. Section 8.12 covers the TL881 and TL891 with the common components as sold with the Compaq part numbers. As long as the TL89x MiniLibrary family is sold with both sets of part numbers, this manual retains documentation for both ways to configure the MiniLibrary.

8.
Figure 8–1: TZ88N-VA SCSI ID Switches (the figure shows the backplane interface connector, the SCSI ID switch pack, and the snap-in locking handles)

Table 8–1: TZ88N-VA Switch Settings

SCSI ID       SCSI ID Selection Switches
              1    2    3    4    5    6
Automatic(a)  Off  Off  Off  On   On   On
0             Off  Off  Off  Off  Off  Off
1             On   Off  Off  Off  Off  Off
2             Off  On   Off  Off  Off  Off
3             On   On   Off  Off  Off  Off
4             Off  Off  On   Off  Off  Off
5             On   Off  On   Off  Off  Off
Table 8–1: TZ88N-VA Switch Settings (cont.)

SCSI ID       1    2    3    4    5    6
6             Off  On   On   Off  Off  Off
7             On   On   On   Off  Off  Off

a SBB tape drive SCSI ID is determined by the SBB physical slot.

8.1.2 Cabling the TZ88N-VA

There are no special cabling restrictions specific to the TZ88N-VA; it is installed in a BA350 StorageWorks enclosure. A DWZZA-VA installed in slot 0 of the BA350 provides the connection to the shared SCSI bus. The tape drive takes up three slots.
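The encoding behind Table 8–1 is plain binary: switch n carries bit n-1 of the SCSI ID (On = 1), and switches 4 through 6 stay Off unless automatic slot-based addressing is selected. The following Python helper is hypothetical, written only to reproduce the table rows:

```python
def tz88_switches(scsi_id):
    """Return the six TZ88N-VA switch positions for a manually set SCSI ID (0-7).

    Switches 1-3 hold the ID in binary (switch n = bit n-1, On = 1);
    switches 4-6 remain Off for a manually selected ID.
    """
    if not 0 <= scsi_id <= 7:
        raise ValueError("SCSI IDs on a narrow bus run from 0 to 7")
    bits = [(scsi_id >> n) & 1 for n in range(3)]
    return ["On" if b else "Off" for b in bits] + ["Off"] * 3

# SCSI ID 3 = binary 011 -> switches 1 and 2 On, matching Table 8-1:
print(tz88_switches(3))  # ['On', 'On', 'Off', 'Off', 'Off', 'Off']
```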
Figure 8–2 shows a TruCluster Server cluster with three shared SCSI buses. One shared bus has a BA350 with a TZ88N-VA at SCSI ID 3.
Table 8–2: Hardware Components Used to Create the Configuration Shown in Figure 8–2 (cont.)

Callout Number  Description
8               DWZZA-VA with H885-AA trilink connector
9               DWZZB-VW with H885-AA trilink connector

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
b The maximum length of the BN37A cable must not exceed 25 meters (82 feet).
c The maximum combined length of these cables must not exceed 25 meters (82 feet).

8.1.
The single-ended SCSI bus may be daisy chained from one single-ended tape drive to another with BC19J cables as long as the SCSI bus maximum length is not exceeded. Ensure that the tape drive on the end of the bus is terminated with an H8574-A or H8890-AA terminator. You can add additional TZ88N-TA tape drives to the differential shared SCSI bus by adding additional DWZZA or DWZZB/TZ88N-TA combinations.
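The bus-length rules in this chapter reduce to a simple budget check: the combined cable length on a segment must stay within 25 meters (82 feet) for a differential segment, or 3 meters (9.8 feet) for a fast single-ended segment. The sketch below is illustrative; the cable choices and lengths are examples, not configurations from this manual:

```python
# Limits quoted in this chapter for shared SCSI bus segments.
DIFFERENTIAL_MAX_M = 25.0        # 82 ft per differential segment
SINGLE_ENDED_FAST_MAX_M = 3.0    # 9.8 ft for a fast, single-ended segment

def segment_ok(cable_lengths_m, limit_m):
    """A segment is legal when its combined cable length stays within the limit."""
    return sum(cable_lengths_m) <= limit_m

# Example: a differential segment built from a 15 m and a 5 m cable is legal.
print(segment_ok([15.0, 5.0], DIFFERENTIAL_MAX_M))            # True
# Example: three 1.8 m cables daisy chaining single-ended drives is not.
print(segment_ok([1.8, 1.8, 1.8], SINGLE_ENDED_FAST_MAX_M))   # False
```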
Figure 8–3: DS-TZ89N-VW SCSI ID Switches (the figure shows the backplane interface connector, the SCSI ID switch pack, and the snap-in locking handles)

The SCSI ID is selected by switch positions, which must be selected before the tape drive is installed in the BA356. Table 8–3 lists the switch settings for the DS-TZ89N-VW.
Table 8–3: DS-TZ89N-VW Switch Settings (cont.
8.2.3 Setting the DS-TZ89N-TA SCSI ID

The DS-TZ89N-TA has a push-button counter switch on the rear panel to select the SCSI ID. It is preset at the factory to 15. Push the button above the counter to increment the SCSI ID (the maximum is 15); push the button below the counter to decrement the SCSI ID.

8.2.4 Cabling the DS-TZ89N-TA Tape Drives

You must connect the DS-TZ89N-TA tabletop model to a single-ended segment of the shared SCSI bus.
8.3 Compaq 20/40 GB DLT Tape Drive

The Compaq 20/40 GB DLT Tape Drive is a Digital Linear Tape (DLT) tabletop cartridge tape drive that can hold up to 40 GB of data per CompacTape IV cartridge using 2:1 compression. It is capable of storing and retrieving data at a rate of up to 10.8 GB per hour (using 2:1 compression). The Compaq 20/40 GB DLT Tape Drive uses CompacTape III, CompacTape IIIXT, or CompacTape IV media. It is a narrow, single-ended SCSI device, and uses 50-pin, high-density connectors.
Figure 8–4: Compaq 20/40 GB DLT Tape Drive Rear Panel (the figure shows the SCSI ID selector switch)

8.3.2 Cabling the Compaq 20/40 GB DLT Tape Drive

The Compaq 20/40 GB DLT Tape Drive is connected to a single-ended segment of the shared SCSI bus. A DWZZB-AA signal converter is required to convert the differential shared SCSI bus to single-ended. Figure 8–5 shows a configuration with a Compaq 20/40 GB DLT Tape Drive on a shared SCSI bus.
(65.6-foot) cable). Ensure that the trilink or Y cable at both ends of the differential segment of the shared SCSI bus is terminated with an HD68 differential terminator such as an H879-AA. The single-ended SCSI bus may be daisy chained from one single-ended tape drive to another with cable part number 146745-003 or 146776-003 (0.9-meter (2.95-foot) cables) as long as the SCSI bus maximum length of 3 meters (9.8 feet) (fast SCSI) is not exceeded.
Figure 8–5: Cabling a Shared SCSI Bus with a Compaq 20/40 GB DLT Tape Drive (two member systems, each with Memory Channel interfaces and KZPBA-CB adapters at SCSI IDs 6 and 7, connected through a DS-DWZZH-03 hub to a StorageWorks RAID Array 7000 with dual HSZ70 controllers, and through a DWZZB-AA to the 20/40 GB DLT Tape Drive)

Table 8–4 lists the components that are used to create the cluster
Table 8–4: Hardware Components Used to Create the Configuration Shown in Figure 8–5 (cont.)

Callout Number  Description
9               199629-002 or 189636-002 (68-pin high density to 50-pin high density 1.8-meter (5.9-foot) cables)
10              341102-001 50-pin high density terminator

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
b The maximum length of the BN37A cable must not exceed 25 meters (82 feet).
8.4.2 Cabling the Compaq 40/80-GB DLT Drive

The Compaq 40/80-GB DLT Drive is connected to a single-ended segment of the shared SCSI bus. Figure 8–6 shows a configuration with a Compaq 40/80-GB DLT Drive for use on a shared SCSI bus. To configure the shared SCSI bus for use with a Compaq 40/80-GB DLT Drive, follow these steps:

1. You need one DWZZB-AA for each shared SCSI bus with a Compaq 40/80-GB DLT Drive. Ensure that the DWZZB-AA jumpers W1 and W2 are installed to enable the single-ended termination.
Ensure that SCSI IDs for the tape drive and host bus adapter do not conflict. To achieve system performance capabilities, we recommend that you place no more than two Compaq 40/80-GB DLT Drives on a SCSI bus, and that you place no shared storage on the same SCSI bus with the tape drive.
Table 8–5: Hardware Components in the Configuration in Figure 8–6 (cont.)

Callout Number  Description
7               328215-00X, BN21K, or BN21L HD68 to HD68 cable(c)
8               H885-AA trilink connector
9               189646-001 (0.9-meter; 2.95-foot cable) or 189646-002 (1.8-meter; 5.9-foot cable)(d)
                BN21K-01 or BN21L-01 (1-meter; 3.3-foot cable)(d)
                BN21K-02 or BN21L-02 (2-meter; 6.6-foot cable)(d)
10              152732-001 LVD terminator

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
4. Issue a bus reset or turn the minilibrary power off and on again to cause the drive to recognize the new SCSI ID.

8.5.2 Cabling the TZ885 Tape Drive

The TZ885 is connected to a single-ended segment of the shared SCSI bus. It is connected to a differential portion of the shared SCSI bus with a DWZZA-AA or DWZZB-AA. Figure 8–7 shows a configuration of a TZ885 for use on a shared SCSI bus. To configure the shared SCSI bus for use with a TZ885, follow these steps:

1.
______________________ Note _______________________ Ensure that there is no conflict with tape drive and host bus adapter SCSI IDs.
Table 8–6: Hardware Components Used to Create the Configuration Shown in Figure 8–7 (cont.)

Callout Number  Description
8               H885-AA trilink connector
9               BN21M cable
10              H8574-A terminator

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
b The maximum length of the BN37A cable must not exceed 25 meters (82 feet).
c The maximum combined length of these cables must not exceed 25 meters (82 feet).

8.
Figure 8–8: TZ887 DLT MiniLibrary Rear Panel (the figure shows the SCSI ID selector switch)

8.6.2 Cabling the TZ887 Tape Drive

The TZ887 is connected to a single-ended segment of the shared SCSI bus. It is connected to a differential portion of the shared SCSI bus with a DWZZB-AA. Figure 8–9 shows a configuration with a TZ887 for use on a shared SCSI bus. To configure the shared SCSI bus for use with a TZ887, follow these steps:

1.
length is not exceeded and there are sufficient SCSI IDs available. Ensure that the tape drive on the end of the bus is terminated with an H8574-A or H8890-AA terminator. You can add additional shared SCSI buses with TZ887 tape drives by adding additional DWZZB-AA/TZ887 combinations. ______________________ Note _______________________ Ensure that there is no conflict with tape drive and host bus adapter SCSI IDs.
8.7 Preparing the TL891 and TL892 DLT MiniLibraries for Shared SCSI Usage

______________________ Note _______________________

To achieve system performance capabilities, we recommend placing no more than two TZ89 drives on a SCSI bus, and also recommend that no shared storage be placed on the same SCSI bus with a tape library.

The TL891 and TL892 MiniLibraries use one (TL891) or two (TL892) TZ89N-AV differential tape drives and a robotics controller, which access cartridges in a 10-cartridge magazine.
The first and second lines of the default screen show the status of the two drives (if present). The third line shows the status of the library robotics, and the fourth line is a map of the magazine, with the numbers from 0 through 9 representing the cartridge slots. Rectangles on this line indicate cartridges that are present in the corresponding slot of the magazine.
4. Select the tape drive (DLT0 Bus ID: or DLT1 Bus ID:) or library robotics (LIB Bus ID:) whose SCSI bus ID you want to change. The default SCSI IDs are as follows:

• Lib Bus ID: 0
• DLT0 Bus ID: 4
• DLT1 Bus ID: 5

Use the up or down arrow button to select the item whose SCSI ID you want to change. Press the Enter button.

5. Use the up or down arrow button to scroll through the possible SCSI ID settings. Press the Enter button when the desired SCSI ID is displayed.

6.
SCSI bus without stopping all ASE services that generate activity on the bus. For this reason, we recommend that tape devices be placed on separate shared SCSI buses, and that there be no storage devices on the SCSI bus. The cabling depends on whether there are one or two drives and, in the two-drive configuration, on whether each drive is on a separate SCSI bus.

______________________ Note _______________________

It is assumed that the library robotics controller is on the same SCSI bus as tape drive 1.
To connect the drive robotics and one drive to one shared SCSI bus and the second drive to a second shared SCSI bus, follow these steps:

1. Connect a BN21K or BN21L between the last trilink connector on one shared SCSI bus and the leftmost connector (as viewed from the rear) of the TL892.
2. Connect a BN21K or BN21L between the last trilink connector on the second shared SCSI bus and the left DLT2 connector (the fifth connector from the left).
3. Install a 30-centimeter (11.
Figure 8–10: TruCluster Server Cluster with a TL892 on Two Shared SCSI Buses (two member systems, each with Memory Channel interfaces and KZPBA-CB adapters at SCSI IDs 6 and 7, connected through a DS-DWZZH-03 hub to a StorageWorks RAID Array 7000 with dual HSZ70 controllers; in the TL892, the library robotics and DLT1 share one bus through a 1-foot SCSI bus jumper, and DLT2 is on a second shared SCSI bus)
8.8 Preparing the TL890 DLT MiniLibrary Expansion Unit

The topics in this section provide information on preparing the TL890 DLT MiniLibrary expansion unit with the TL891 and TL892 DLT MiniLibraries for use on a shared SCSI bus.

______________________ Note _______________________

To achieve system performance capabilities, we recommend placing no more than two TZ89 drives on a SCSI bus, and also recommend that no shared storage be placed on the same SCSI bus with a tape library.

8.8.
8.8.2.1 Cabling the DLT MiniLibraries

You must make the following connections to render the DLT MiniLibrary system operational:

• Expansion unit to the motor mechanism: The motor mechanism cable is about 1 meter (3.3 feet) long and has a DB-15 connector on each end. Connect it between the connector labeled Motor on the expansion unit and the motor on the pass-through mechanism.
____________________ Notes ____________________ Do not connect a SCSI bus to the SCSI connectors for the library connectors on the base modules. We recommend that no more than two TZ89 tape drives be on a SCSI bus. Figure 8–11 shows a MiniLibrary configuration with two TL892 DLT MiniLibraries and a TL890 DLT MiniLibrary expansion unit. The TL890 library robotics is on one shared SCSI bus, and the two TZ89 tape drives in each TL892 are on separate, shared SCSI buses.
Figure 8–11: TL890 and TL892 DLT MiniLibraries on Shared SCSI Buses (two member systems, each with Memory Channel interfaces and KZPBA-CB adapters at SCSI IDs 6 and 7, connected through a DS-DWZZH-03 hub to a StorageWorks RAID Array 7000 with dual HSZ70 controllers; the TL890 library robotics is on one shared SCSI bus, the TZ89 tape drives in each TL892 are on separate shared SCSI buses, and the TL890 motor, diagnostic, and robotics control cables connect to the TL892 base modules)
Table 8–8: Hardware Components Used to Create the Configuration Shown in Figure 8–11

Callout Number  Description
1               BN38C or BN38D cable(a)
2               BN37A cable(b)
3               H8861-AA VHDCI trilink connector
4               H8863-AA VHDCI terminator
5               BN21W-0B Y cable
6               H879-AA terminator
7               328215-00X, BN21K, or BN21L cable(c)

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
b The maximum length of the BN37A cable must not exceed 25 meters (82 feet).
After a series of power-on self-tests have executed, the default screen will be displayed on the base module control panel:

DLT0 Idle
DLT1 Idle
Loader Idle
0> _ _ _ _ _ _ _ _ _ _ <9

The default screen shows the state of the tape drives, the loader, and the number of cartridges present for this base module. A rectangle in place of an underscore indicates that a cartridge is present in that location.

2. Press the Enter button to enter the Menu Mode, displaying the Main Menu.

3.
inventory of modules may be incorrect and the contents of some or all of the modules will be inaccessible to the system and to the host. When the expansion unit comes up, it will communicate with each base module through the expansion unit interface and inventory the number of base modules, tape drives, and cartridges present in each base module.
4. Press the down arrow button until the Configure Menu item is selected, and then press the Enter button to display the Configure submenu.

5. Press the down arrow button until the Set SCSI item is selected and press the Enter button.

6. Press the up or down arrow button to select the appropriate tape drive (DLT0 Bus ID:, DLT1 Bus ID:, DLT2 Bus ID:, and so on) or library robotics (Library Bus ID:) for which you want to change the SCSI bus ID.
8.9 Preparing the TL894 DLT Automated Tape Library for Shared SCSI Bus Usage

The topics in this section provide information on preparing the TL894 DLT automated tape library for use on a shared SCSI bus in a TruCluster Server cluster.

______________________ Note _______________________

To achieve system performance capabilities, we recommend placing no more than two TZ89 drives on a SCSI bus segment. We also recommend that storage be placed on shared SCSI buses that do not have tape drives.
2. Press and release SELECT to enter the menu mode. 3. Verify that the following information is displayed in the SDA: Menu: Configuration: 4. Press and release SELECT to choose the Configuration menu. 5. Verify that the following information is displayed in the SDA: Menu: Configuration Inquiry 6. Press and release the up or down arrow buttons to locate the SCSI Address submenu, and verify that the following information is displayed in the SDA: Menu: Configuration SCSI Address .. 7.
Menu: Configuration: 4. Press and release SELECT to choose the Configuration menu. 5. Verify that the following information is displayed in the SDA: Menu: Configuration SCSI Address 6. Press and release the SELECT button again to choose SCSI Address and verify that the following information is shown in the SDA: Menu: SCSI Address Robotics 7. Use the down arrow button to bypass the Robotics submenu and verify that the following information is shown in the SDA: Menu: SCSI Address Drive 0 8.
This configuration, which is called the four-bus configuration, is shown in Figure 8–12. In this configuration, each of the tape drives, except drive 0 and the robotics controller, requires a SCSI address on a separate SCSI bus. The robotics controller and drive 0 use two SCSI IDs on their SCSI bus.

Figure 8–12: TL894 Tape Library Four-Bus Configuration (the figure shows the robotics controller at SCSI address 0, the tape drive interface PWA, and the internal SCSI cabling)
Appendix B of the TL81X/TL894 Automated Tape Library for DLT Cartridges Facilities Planning and Installation Guide provides figures showing various bus configurations. In these figures, the configuration changes have been made by removing the terminators from both drives, installing the SCSI bus jumper cable on the drive connectors vacated by the terminators, then installing an HD68 SCSI bus terminator on the SCSI bus port connector on the cabinet exterior.
In Figure 8–13, one bus is connected to port 1 (robotics controller and tape drives 0 and 1) and the other bus is connected to port 3 (tape drives 2 and 3). Ensure that the terminators are present on tape drives 1 and 3.
Table 8–10: Hardware Components Used to Create the Configuration Shown in Figure 8–13 (cont.)

Callout Number  Description
6               H879-AA terminator
7               328215-00X, BN21K, or BN21L cable(c)

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
b The maximum length of the BN37A cable must not exceed 25 meters (82 feet).
c The maximum combined length of these cables must not exceed 25 meters (82 feet).

8.
is applied, until the electronics is activated and able to set the SCSI IDs electronically. The physical SCSI IDs should match the SCSI IDs set by the library electronics. Ensure that the SCSI IDs that are set by the rotary switch and from the control panel do not conflict with any SCSI bus controller SCSI ID. The following sections describe how to prepare the TL895 for use on a shared SCSI bus in more detail.

8.10.1 TL895 Robotic Controller Required Firmware

Robotic firmware version N2.
2. On the Enter Password screen, enter the operator password. The default operator password is 1234. The lock icon is unlocked and shows an O to indicate that you have operator-level security clearance. 3. On the Operator screen, press the Configure Library button. The Configure Library screen displays the current library configuration.
In this configuration, each of the tape drives, except tape drive 0 and the robotics controller, requires a SCSI ID on a separate SCSI bus. The robotics controller and tape drive 0 use two SCSI IDs on their SCSI bus. You can reconfigure the tape drives and robotics controller to place multiple tape drives on the same SCSI bus with the SCSI bus jumper (part number 6210567) included with the tape library.
Figure 8–14: TL895 Tape Library Internal Cabling (the figure shows the robotics controller at SCSI ID 0, tape drives 0 through 6 with their SCSI IDs, SCSI ports 1 through 8, the SCSI jumper cables, part number 6210567, and the terminators, part number 0415619)

8.10.
electronic SCSI ID using the Configure menu from the control panel (see Section 8.10.2). The actual upgrade is beyond the scope of this manual. See the TL895 Drive Upgrade Instructions manual for upgrade instructions.

8.10.5 Connecting the TL895 Tape Library to the Shared SCSI Bus

The TL895 tape library has up to 3 meters (9.8 feet) of internal SCSI cabling per SCSI bus.
Each tape library comes configured with a robotic controller and bar code reader (to obtain quick and accurate tape inventories). The libraries have either three or six TZ89N-AV drives. The TL896, because it has a greater number of drives, has a lower capacity for tape cartridge storage. Each tape library utilizes bulk loading of bin packs, with each bin pack containing a maximum of 11 cartridges. Bin packs are arranged on an eight-sided carousel that provides either two or three bin packs per face.
These tape libraries each have a multi-unit controller (MUC) that serves two functions: • It is a SCSI adapter that allows the SCSI interface to control communications between the host and the tape library. • It permits the host to control up to five attached library units in a multi-unit configuration. Multi-unit configurations are not discussed in this manual.
Table 8–12: MUC Switch Functions (cont.)

Switch  Function
7       Host selection: Down for SCSI, up for serial(a)
8       Must be down, reserved for testing

a For a TruCluster Server cluster, switch 7 is down, allowing switches 1, 2, and 3 to select the MUC SCSI ID.

8.11.3 Setting the MUC SCSI ID

The multi-unit controller (MUC) SCSI ID is set with switches 1, 2, and 3, as shown in Table 8–13. Note that switch 7 must be down to select the SCSI bus and enable switches 1, 2, and 3 to select the MUC SCSI ID.
Table 8–15: TL896 Default SCSI IDs

Device            Default SCSI ID  SCSI Port
MUC               2
Drive 5 (top)     5                E
Drive 4           4                F
Drive 3           3                A
Drive 2           5                B
Drive 1           4                C
Drive 0 (bottom)  3                D

8.11.5 TL893 and TL896 Automated Tape Library Internal Cabling

The default internal cabling configurations for the TL893 and TL896 Automated Tape Libraries (ATLs) are as follows:

• The SCSI input for the TL893 is high-density, 68-pin differential.
Figure 8–15: TL893 Three-Bus Configuration (the MUC at SCSI address 2 and the TZ89 tape drives at SCSI addresses 5, 4, and 3 on the top, middle, and bottom shelves are cabled to SCSI Ports A, B, and C on the rear connector panel; each bus is terminated at the drive housing with a 68-pin Micro-D terminator, part number 0415619)

• The SCSI
– The lower bay bottom shelf tape drive (tape drive 0, SCSI ID 3) is on SCSI Port C and is terminated on the tape drive.
– The tape drive terminators are 68-pin differential terminators (part number 0415619).
other devices on the shared SCSI bus. Each SCSI bus must be terminated internal to the tape library at the tape drive itself with the installed SCSI terminators. Therefore, TL893 and TL896 tape libraries must be on the end of the shared SCSI bus. In a TruCluster Server cluster with TL893 or TL896 tape libraries, the member systems and StorageWorks enclosures or RAID subsystems may be isolated from the shared SCSI bus because they use trilink connectors or Y cables.
Figure 8–17: Shared SCSI Buses with TL896 in Three-Bus Mode (two member systems, each with Memory Channel interfaces and KZPBA-CB adapters at SCSI IDs 6 and 7, connected through a DS-DWZZH-03 hub to a StorageWorks RAID Array 7000 with dual HSZ70 controllers and to the TL896 SCSI ports A through F in three-bus mode)
Table 8–16: Hardware Components Used to Create the Configuration Shown in Figure 8–17 (cont.)

Callout Number  Description
6               H879-AA terminator
7               328215-00X, BN21K, or BN21L cable(c)

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
b The maximum length of the BN37A cable must not exceed 25 meters (82 feet).
c The maximum combined length of these cables must not exceed 25 meters (82 feet).

8.
8.12.1.2 TL881 and TL891 MiniLibrary Rackmount Components

A TL881 or TL891 base unit (which contains the tape drive) can operate as an independent, standalone unit, or in concert with an expansion unit and multiple data units. A rackmount multiple-module configuration can be expanded to as many as six modules, and must contain at least one expansion unit and one base unit.
• Data unit — This rackmount module contains a 16-cartridge magazine to provide additional capacity in a multi-module configuration. The data unit robotics works in conjunction with the robotics of the expansion unit and base units. It is under control of the expansion unit. The data unit works with either the TL881 or TL891 base unit.
8.12.1.4 DLT MiniLibrary Part Numbers Table 8–18 lists the part numbers for the TL881 and TL891 DLT MiniLibrary systems. Part numbers are only shown for the TL881 fast, wide differential components.
8.12.2.1 Preparing a Tabletop Model or Base Unit for Standalone Shared SCSI Bus Usage

A TL881 or TL891 DLT MiniLibrary tabletop model or a rackmount base unit may be used standalone. You may want to purchase a rackmount base unit to allow for future expansion.

______________________ Note _______________________
For the best system performance, we recommend placing no more than two tape drives on a SCSI bus. We also recommend that no shared storage be placed on the same SCSI bus with a tape library.
______________________ Note _______________________ There are no switches for setting a mechanical SCSI ID for the tape drives. The SCSI IDs default to five. The MiniLibrary sets the electronic SCSI ID very quickly, before any device can probe the MiniLibrary, so the lack of a mechanical SCSI ID does not cause any problems on the SCSI bus. To set the SCSI ID, follow these steps: 1. From the Default Screen, press the Enter button to enter the Menu Mode, displaying the Main Menu.
5. Use the up or down arrow button to scroll through the possible SCSI ID settings. Press the Enter button when the desired SCSI ID is displayed. 6. Repeat steps 4 and 5 to set other SCSI bus IDs as necessary. 7. Press the Escape button repeatedly until the default menu is displayed. 8.12.2.1.2 Cabling the TL881 or TL891 DLT MiniLibrary There are six 68-pin, high-density SCSI connectors on the back of the TL881 or TL891 DLT MiniLibrary standalone model or rackmount base unit.
______________________ Note _______________________
It is assumed that the library robotics controller is on the same SCSI bus as tape drive 1.

To connect the library robotics and one drive to a single shared SCSI bus, follow these steps:

1. Connect a 328215-00X, BN21K, or BN21L cable between the last Y cable or trilink connector on the bus and the leftmost connector (as viewed from the rear) of the MiniLibrary. The 328215-004 is a 20-meter (65.6-foot) cable.
2. Install a 30-centimeter (11.
4. Install an HD68 differential (H879-AA) terminator on the right DLT1 connector (the fourth connector from the left) and install another HD68 differential terminator on the right DLT2 connector (the rightmost connector). Figure 8–18 shows an example of a TruCluster configuration with a TL891 standalone MiniLibrary connected to two shared SCSI buses.
Table 8–19: Hardware Components Used to Create the Configuration Shown in Figure 8–18

Callout Number   Description
1                BN38C or BN38D cable (a)
2                BN37A cable (b)
3                H8861-AA VHDCI trilink connector
4                H8863-AA VHDCI terminator
5                BN21W-0B Y cable
6                H879-AA terminator
7                328215-00X, BN21K, or BN21L cable (c)

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
b The maximum length of the BN37A cable must not exceed 25 meters (82 feet).
_____________________ Note _____________________ This cable is not shown in Figure 8–19 because the pass-through mechanism is not shown in the figure. • Robotics control cables from the expansion unit to each base unit or data unit: These cables have a DB-9 male connector on one end and a DB-9 female connector on the other end.
pass-through mechanism and cable to the library robotics motor are not shown in this figure. For more information on cabling the units, see Section 8.12.2.1.2. With the exception of the robotics control on the expansion module, a rackmount TL881 or TL891 DLT MiniLibrary is cabled in the same manner as a tabletop unit.
Table 8–20: Hardware Components Used to Create the Configuration Shown in Figure 8–19

Callout Number   Description
1                BN38C or BN38D cable (a)
2                BN37A cable (b)
3                H8861-AA VHDCI trilink connector
4                H8863-AA VHDCI terminator
5                BN21W-0B Y cable
6                H879-AA terminator
7                328215-00X, BN21K, or BN21L cable (c)

a The maximum length of the BN38C (or BN38D) cable on one SCSI bus segment must not exceed 25 meters (82 feet).
b The maximum length of the BN37A cable must not exceed 25 meters (82 feet).
    DLT0                 Idle
    DLT1                 Idle
    Loader               Idle
    0> _ _ _ _ _ _ _ _ _ _ <9

The default screen shows the state of the tape drives, the loader, and the number of cartridges present for this base unit. A rectangle in place of an underscore indicates that a cartridge is present in that location.

2. Press the Enter button to enter the Menu Mode, displaying the Main Menu.
3. Press the down arrow button until the Configure Menu item is selected, then press the Enter button.
When the expansion unit comes up, it will communicate with each base and data unit through the expansion unit interface and inventory the number of base units, tape drives, data units, and cartridges present in each base and data unit. After the MiniLibrary configuration has been determined, the expansion unit will communicate with each base and data unit and indicate to the modules which cartridge group that base or data unit contains.
• DLT3 Bus ID: 4
• DLT4 Bus ID: 5
• DLT5 Bus ID: 6
7. Press the Enter button when the item whose SCSI ID you want to change is selected.
8. Use the up and down arrow buttons to select the desired SCSI ID. Press the Enter button to save the new selection.
9. Press the Escape button once to return to the Set SCSI Submenu to select another tape drive or the library robotics, and then repeat steps 6, 7, and 8 to set the SCSI ID.
10.
These tape devices have been qualified for use on shared SCSI buses with both the KZPSA-BB and KZPBA-CB host bus adapters. Ensure that the host bus adapter you use is supported on your system by searching the options list for your system at the following URL: http://www.compaq.com/alphaserver/products/options.html 8.13.
8.13.3 Preparing the ESL9326D Enterprise Library for Shared SCSI Bus Usage

The ESL9326D Enterprise Library contains library electronics (a robotic controller) and from 6 to 16 fast-wide, differential 35/70 DLT (DS-TZ89N-AV) tape drives. Tape devices are supported only on those shared SCSI buses that use the KZPSA-BB or KZPBA-CB host bus adapters.

______________________ Notes ______________________
The ESL9326D Enterprise Library is cabled internally for two 35/70 DLT tape drives on each SCSI bus.
8.13.3.3 ESL9326D Enterprise Library Internal Cabling The default internal cabling for the ESL9326D Enterprise Library is to place two 35/70 DLT tape drives on one SCSI bus. Figure 8–20 shows the default cabling for an ESL9326D Enterprise Library with 16 tape drives. Each pair of tape drives is cabled together internally to place two drives on a single SCSI bus. If your model has fewer drives, all internal cabling is supplied.
______________________ Note _______________________ Each internal cable is up to 2.5 meters (8.2 feet) long. The length of the internal cables, two per SCSI bus, must be taken into consideration when ordering SCSI bus cables. The maximum length of a differential SCSI bus segment is 25 meters (82 feet), and the internal tape drive SCSI bus length is 5 meters (16.4 feet). Therefore, you must limit the external SCSI bus cables to 20 meters (65.6 feet) maximum. 8.13.3.
Table 8–21: Shared SCSI Bus Cable and Terminator Connections for the ESL9326D Enterprise Library

Tape Drives on Shared SCSI Bus      Connect SCSI Cable to Connector   Install HD68 Terminator on Connector
0, 1, and library electronics (a)   Q                                 B
2, 3                                C                                 D
4, 5                                E                                 F
6, 7                                G                                 H
8, 9                                I                                 J
10, 11                              K                                 L
12, 13                              M                                 N
14, 15                              O                                 P

a Install a 30-centimeter (11.8-inch) jumper cable, part number 330582-001, between SCSI connectors R and A to place the library electronics on the SCSI bus with tape drives 0 and 1.
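The external cable budget stated in the cable-length note earlier in this section follows from simple arithmetic. The sketch below just restates those figures (all lengths in meters, taken from the note; the variable names are illustrative):

```python
# Differential SCSI bus segment limit and ESL9326D internal cabling,
# per the note above (all lengths in meters).
SEGMENT_LIMIT = 25.0    # maximum differential SCSI bus segment length
INTERNAL_CABLE = 2.5    # each internal cable; two per SCSI bus
CABLES_PER_BUS = 2

internal_total = CABLES_PER_BUS * INTERNAL_CABLE  # 5.0 m inside the library
max_external = SEGMENT_LIMIT - internal_total     # budget for external cables
print(max_external)  # 20.0 -> limit external SCSI cables to 20 meters
```

The same budgeting applies to any differential segment: subtract all internal cabling from the 25-meter limit before ordering external cables.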
9 Configuring Systems for External Termination or Radial Connections to Non-UltraSCSI Devices This chapter describes how to prepare the systems for a TruCluster Server cluster when there is a need to access: • Shared SCSI storage using external termination. • Non-UltraSCSI RAID array controllers (HSZ40 and HSZ50) using a radial connection.
Channel adapters, hubs (if necessary), cables, and network adapters) have been installed, you can connect your host bus adapter to the UltraSCSI hub or storage subsystem. Follow the steps in Table 9–1 to start the TruCluster Server hardware installation procedure. You can save time by installing the Memory Channel adapters, redundant network adapters (if applicable), and KZPSA-BB or KZPBA-CB SCSI adapters all at the same time.
9.1.1 Radial Installation of a KZPSA-BB or KZPBA-CB Using Internal Termination Use this method of cabling member systems and shared storage in a TruCluster Server cluster if you are using a DWZZH UltraSCSI hub. You must reserve at least one hub port for shared storage. The DWZZH-series UltraSCSI hubs are designed to allow more separation between member systems and shared storage. Using the UltraSCSI hub also improves the reliability of the detection of cable faults.
The other end of the SCSI bus segment is terminated by the KZPSA-BB or KZPBA-CB onboard termination resistor SIPs, or a trilink connector/terminator combination installed on the HSZ40 or HSZ50. The KZPSA-BB PCI-to-SCSI bus adapter: • Is installed in a PCI slot of the supported member system (see Section 2.3.2). • Is a fast, wide differential adapter with only a single port, so only one differential shared SCSI bus can be connected to a KZPSA-BB adapter.
Table 9–2: Installing the KZPSA-BB or KZPBA-CB for Radial Connection to a DWZZH UltraSCSI Hub

Step 1
Action: Ensure that the KZPSA-BB internal termination resistors, Z1, Z2, Z3, Z4, and Z5, are installed.
Refer to: Section 9.1.4.4, Figure 9–1, and KZPSA PCI-to-SCSI Storage Adapter Installation and User’s Guide
Action: Ensure that the eight KZPBA-CB internal termination resistor SIPs, RM1-RM8, are installed.
Refer to: Section 4.3.3.3,
Table 9–2: Installing the KZPSA-BB or KZPBA-CB for Radial Connection to a DWZZH UltraSCSI Hub (cont.)

Step 5
Action: Use the show config and show device console commands to display the installed devices and information about the KZPSA-BBs or KZPBA-CBs on the AlphaServer systems. Look for KZPSA or pk* in the display to determine which devices are KZPSA-BBs. Look for QLogic ISP1020 in the show config display and isp in the show device display to determine which devices are KZPBA-CBs.
Refer to: Section 9.1.
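The identification rule in this step can be expressed as a small helper that classifies saved console output lines. This is a hypothetical illustration of the rule, not part of the console firmware:

```python
def classify_adapter(console_line: str) -> str:
    """Classify a console display line by host bus adapter type.

    Per Table 9-2: 'KZPSA' or a pk* device name indicates a KZPSA-BB;
    'QLogic ISP1020' (show config) or an isp* device name (show device)
    indicates a KZPBA-CB.
    """
    line = console_line.strip().lower()
    if "kzpsa" in line or line.startswith("pk"):
        return "KZPSA-BB"
    if "qlogic isp1020" in line or line.startswith("isp"):
        return "KZPBA-CB"
    return "unknown"
```

For example, a line beginning with `pkc0` classifies as a KZPSA-BB, and a `show device` line beginning with `isp0` classifies as a KZPBA-CB.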
Table 9–3: Installing a KZPSA-BB or KZPBA-CB for Use with External Termination

Step 1
Action: Remove the KZPSA-BB internal termination resistors, Z1, Z2, Z3, Z4, and Z5.
Refer to: Section 9.1.4.4, Figure 9–1, and KZPSA PCI-to-SCSI Storage Adapter Installation and User’s Guide
Action: Remove the eight KZPBA-CB internal termination resistor SIPs, RM1-RM8.
Refer to: Section 4.3.3.3, Figure 4–1, and KZPBA-CB PCI-to-Ultra SCSI Differential Host Adapter User’s Guide

Step 2
Action: Power down the member system.
Table 9–3: Installing a KZPSA-BB or KZPBA-CB for Use with External Termination (cont.)

Step 7
Action: Use the show pk* or show isp* console commands to determine the status of the KZPSA-BB or KZPBA-CB console environment variables, and then use the set console command to set the KZPSA-BB bus speed to fast, termination power to on, and the KZPSA-BB or KZPBA-CB SCSI bus ID.
Refer to: Section 9.1.4.1 through Section 9.1.4.3 and Example 9–6 through Example 9–9
Table 9–3: Installing a KZPSA-BB or KZPBA-CB for Use with External Termination (cont.)

TL891/TL892 MiniLibrary               Section 8.7
TL890 with TL891/TL892                Section 8.8
TL894                                 Section 8.9
TL895                                 Section 8.10
TL893/TL896                           Section 8.11
TL881/TL891 DLT MiniLibraries         Section 8.12
Compaq ESL9326D Enterprise Library    Section 8.
Example 9–1: Displaying Configuration on an AlphaServer 4100 (cont.)
Example 9–2: Displaying Devices on an AlphaServer 4100 (cont.)

dkd2.0.0.4.1    DKd2     HSZ50-AX  X29Z
dkd100.1.0.4.1  DKd100   RZ26N     0568
dkd200.1.0.4.1  DKd200   RZ26      392A
dkd300.1.0.4.1  DKd300   RZ26N     0568
polling kzpsa0 (DEC KZPSA) slot 5, bus 0 PCI, hose 1
  TPwr 1 Fast 1 Bus ID 7
kzpsa0.7.0.5.1  dke      TPwr 1 Fast 1 Bus ID 7  L01 A11
dke100.1.0.5.1  DKe100   RZ28      442D
dke200.2.0.5.1  DKe200   RZ26      392A
dke300.3.0.5.1  DKe300   RZ26L     442D
polling floppy0 (FLOPPY) pceb IBUS hose 0
dva0.0.0.1000.
Example 9–4: Displaying Devices on an AlphaServer 8200

>>> show device
polling for units on isp0, slot0, bus0, hose0...
polling for units on isp1, slot1, bus0, hose0...
polling for units on isp2, slot4, bus0, hose0...
polling for units on isp3, slot5, bus0, hose0...
polling for units kzpaa0, slot0, bus0, hose1...
pke0.7.0.0.1    kzpaa4   SCSI Bus ID 7
dke0.0.0.0.1    DKE0     RZ28      442D
dke200.2.0.0.1  DKE200   RZ28      442D
dke400.4.0.0.1  DKE400   RRD43     0064
polling for units
dkf0.0.0.1.1
dkf1.0.0.1.1
dkf2.0.0.1.1
dkf3.0.0.1.
9.1.4 Displaying Console Environment Variables and Setting the KZPSA-BB and KZPBA-CB SCSI ID

The following sections show how to use the show console command to display the pk* and isp* console environment variables, and how to set the KZPSA-BB and KZPBA-CB SCSI IDs on various AlphaServer systems. Use these examples as guides for your system. Note that the console environment variables used for the SCSI options vary from system to system.
Example 9–5: Displaying the pk* Console Environment Variables on an AlphaServer 4100 System (cont.)

pkf0_fast      1
pkf0_host_id   7
pkf0_termpwr   1

Compare the show pk* command display in Example 9–5 with the show config command in Example 9–1 and the show dev command in Example 9–2. Note that there are no pk* devices in either display.
Example 9–6: Displaying Console Variables for a KZPBA-CB on an AlphaServer 8x00 System

P00>>> show isp*
isp0_host_id     7
isp0_soft_term   on
isp1_host_id     7
isp1_soft_term   on
isp2_host_id     7
isp2_soft_term   on
isp3_host_id     7
isp3_soft_term   on
isp5_host_id     7
isp5_soft_term   diff

Both Example 9–3 and Example 9–4 show five isp devices: isp0, isp1, isp2, isp3, and isp4. In Example 9–6, the show isp* console command shows isp0, isp1, isp2, isp3, and isp5.
Example 9–7: Displaying Console Variables for a KZPSA-BB on an AlphaServer 8x00 System (cont.)

pkc0_fast      1
pkc0_host_id   7
pkc0_termpwr   on

9.1.4.2 Setting the KZPBA-CB SCSI ID

After you determine the console environment variables for the KZPBA-CBs on the shared SCSI bus, use the set console command to set the SCSI ID. For a TruCluster Server cluster, you will most likely have to set the SCSI ID for all KZPBA-CB UltraSCSI adapters except one.
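For example, to set a KZPBA-CB on an 8x00-class system to SCSI ID 6, a session might look like the following. The variable name isp0_host_id follows Example 9–6; substitute the isp* instance that corresponds to your adapter, and note that the exact output formatting is schematic:

```
P00>>> set isp0_host_id 6
P00>>> show isp0_host_id
isp0_host_id            6
```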
9.1.4.3 Setting KZPSA-BB SCSI Bus ID, Bus Speed, and Termination Power

If the KZPSA-BB SCSI ID is not correct, if it was reset to 7 by the firmware update utility, or if you need to change the KZPSA-BB bus speed or enable termination power, use the set console command.

______________________ Note _______________________
All KZPSA-BB host bus adapters should be enabled to generate termination power.
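For example, using the pkc0 variables shown in Example 9–7 (substitute the pk* instance for your adapter; this is a schematic transcript, not verbatim console output):

```
P00>>> set pkc0_host_id 6
P00>>> set pkc0_fast 1
P00>>> set pkc0_termpwr on
```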
9.1.4.4 KZPSA-BB and KZPBA-CB Termination Resistors

The KZPSA-BB internal termination is disabled by removing termination resistors Z1 through Z5, as shown in Figure 9–1.

Figure 9–1: KZPSA-BB Termination Resistors

[Figure: the location of the Z1-Z5 termination resistor SIPs on the KZPSA-BB adapter.]

The KZPBA-CB internal termination is disabled by removing the termination resistors RM1-RM8, as shown in Figure 4–1.

9.1.4.5 Updating the KZPSA-BB Adapter Firmware

You must check, and update as necessary, the system and host bus adapter firmware.
The boot sequence provides firmware update overview information. Use Return to scroll the text, or press Ctrl/C to skip the text. After the overview information has been displayed, the name of the default boot file is provided. If it is the correct boot file, press Return at the Bootfile: prompt. Otherwise, enter the name of the file you want to boot from.
10 Configurations Using External Termination or Radial Connections to Non-UltraSCSI Devices This chapter describes the requirements for the shared SCSI bus using: • Externally terminated TruCluster Server configurations • Radial configurations with non-UltraSCSI RAID array controllers In addition to using only the supported hardware, adhering to the requirements described in this chapter will ensure that your cluster operates correctly.
Introductory information covering SCSI bus configuration concepts (SCSI bus speed, data path, and so on) and SCSI bus configuration requirements can be found in Chapter 3.

10.1 Using SCSI Bus Signal Converters

A SCSI bus signal converter couples a differential bus segment to a single-ended bus segment. This allows you to mix differential and single-ended devices on the same SCSI bus, or to isolate bus segments for maintenance purposes.
host bus adapter to a non-UltraSCSI BA356 (single-ended and wide) storage shelf. The DS-BA35X-DA personality module is used in an UltraSCSI BA356 to connect an UltraSCSI host bus adapter to the single-ended disks in the UltraSCSI BA356. You can install a DWZZB-VW in an UltraSCSI BA356, but you will waste a disk slot and it will not work with a KZPBA-CB if there are any UltraSCSI disks in the storage shelves.
Figure 10–1: Standalone SCSI Signal Converter

[Figure: a standalone SCSI signal converter, showing termination (T) on the single-ended side and on the differential side, which has a trilink connector attached. ZK-1050U-AI]

Figure 10–2 shows the status of internal termination for an SBB SCSI signal converter that has a trilink connector attached to the differential side.

Figure 10–2: SBB SCSI Signal Converter

[Figure: an SBB SCSI signal converter, showing termination (T) on the single-ended side and on the differential side, which has a trilink connector attached. ZK-1576U-AI]

10.1.2.
______________________ Notes ______________________
S4-3 and S4-4 have no function on the DS-BA35X-DA personality module.

See Section 10.3.2.2 for information on how to select the device SCSI IDs in an UltraSCSI BA356.

Figure 10–3 shows the relative positions of the two DS-BA35X-DA switch packs.

Figure 10–3: DS-BA35X-DA Personality Module Switches

[Figure: SCSI bus termination switch pack S4 (switches 1-4, OFF/ON) and SCSI bus address switch pack S3 (switches 1-7, ON/OFF). ZK-1411U-AI]

10.
Whenever possible, connect devices to a shared bus so that they can be isolated from the bus. This allows you to disconnect devices from the bus for maintenance purposes without affecting bus termination and cluster operation. You also can set up a shared SCSI bus so that you can connect additional devices at a later time without affecting bus termination.
connector at a later time without affecting bus termination. This allows you to expand your configuration without shutting down the cluster. Figure 10–4 shows a BN21W-0B Y cable, which you may attach to a KZPSA-BB or KZPBA-CB SCSI adapter that has had its onboard termination removed. You can also use the BN21W-0B Y cable with an HSZ40 or HSZ50 controller or with the unterminated differential side of a SCSI signal converter.
Figure 10–5: HD68 Trilink Connector (H885-AA)

[Figure: front and rear views of the H885-AA trilink connector. ZK-1140U-AI]

______________________ Note _______________________
If you connect a trilink connector to a SCSI bus adapter, you may block access to an adjacent PCI slot. If this occurs, use a Y cable instead of the trilink connector. This is the case with the KZPBA-CB and KZPSA-BB SCSI adapters on some AlphaServer systems.

Use the H879-AA terminator to terminate one leg of a BN21W-0B Y cable or H885-AA trilink.
10.3.1 BA350 Storage Shelf Up to seven narrow (8-bit) single-ended StorageWorks building blocks (SBBs) can be installed in the BA350. Their SCSI IDs are based upon the slot they are installed in. For instance, a disk installed in BA350 slot 0 has SCSI ID 0, a disk installed in BA350 slot 1 has SCSI ID 1, and so forth. ______________________ Note _______________________ Do not install disks in the slots corresponding to the host SCSI IDs (usually SCSI ID 6 and 7 for a two-node cluster).
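The slot-to-ID rule and the host-ID restriction above can be expressed as a trivial check. This is an illustrative sketch, not a Compaq tool; the host SCSI IDs are the usual two-node defaults noted above:

```python
# BA350: a disk's SCSI ID equals the slot it occupies (slots 0-6).
# Slots whose IDs collide with the host adapters' SCSI IDs (usually
# 6 and 7 in a two-node cluster) must be left empty.
HOST_IDS = {6, 7}  # assumed host bus adapter SCSI IDs

def ba350_slot_usable(slot: int) -> bool:
    """Return True if a disk may be installed in this BA350 slot."""
    if not 0 <= slot <= 6:
        raise ValueError("BA350 disk slots are numbered 0 through 6")
    return slot not in HOST_IDS
```

Under these assumptions, slots 0 through 5 are usable for disks and slot 6 must stay empty.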
Figure 10–6: BA350 Internal SCSI Bus

[Figure: the BA350 internal SCSI bus, showing connectors JA1 and JB1, slots 0 through 6, the internal terminator (T), the SCSI bus jumper (J), and the power supply in slot 7. ZK-1338U-AI]

10.3.2 BA356 Storage Shelf

There are two variations of the BA356 used in TruCluster Server clusters: the BA356 (non-UltraSCSI BA356) and the UltraSCSI BA356. An example of the non-UltraSCSI BA356 is the BA356-KC, which has a wide, single-ended internal SCSI bus. It has a BA35X-MH 16-bit personality module (only used for SCSI ID selection) and a 150-watt power supply.
select SCSI IDs 0 through 6, set the personality module address switches 1 through 7 to off. To select SCSI IDs 8 through 14, set personality module address switches 1 through 3 to on and switches 4 through 7 to off. Figure 10–7 shows the relative location of the BA356 SCSI bus jumper, BA35X-MF. The jumper is accessed from the rear of the box. For operation within a TruCluster Server cluster, you must install the J jumper in the normal position, behind slot 6.
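The address-switch settings described above can be summarized in a small helper. The switch numbering (1 through 7) follows the text; the function itself is illustrative, not a Compaq utility:

```python
# Sketch of the BA356 personality-module address-switch settings
# described in the text.
def ba356_address_switches(id_range: str) -> dict:
    """Return on/off settings for address switches 1 through 7.

    'low'  selects device SCSI IDs 0 through 6 (all switches off).
    'high' selects device SCSI IDs 8 through 14 (switches 1-3 on,
           switches 4-7 off).
    """
    if id_range == "low":
        return {sw: "off" for sw in range(1, 8)}
    if id_range == "high":
        return {sw: ("on" if sw <= 3 else "off") for sw in range(1, 8)}
    raise ValueError("id_range must be 'low' or 'high'")
```

For example, a shelf configured for SCSI IDs 8 through 14 has switches 1, 2, and 3 on and switches 4 through 7 off.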
Figure 10–7: BA356 Internal SCSI Bus

[Figure: the BA356 internal SCSI bus, showing connectors JA1 and JB1, slots 0 through 6, the SCSI bus jumper (J) behind slot 6, and the power supply in slot 7. ZK-1339U-AI]

JA1 and JB1 are located on the personality module (in the top of the box when it is standing vertically). JB1, on the front of the module, is visible. JA1 is on the left side of the personality module as you face the front of the BA356, and is hidden from normal view.
Figure 10–8: BA356 Jumper and Terminator Module Identification Pins

[Figure: the locations of the slot 6 and slot 1 jumper pins and the slot 6 and slot 1 terminator pins. ZK-1529U-AI]

10.3.2.2 UltraSCSI BA356 Storage Shelf

The UltraSCSI BA356 (DS-BA356-JF or DS-BA356-KH) has a single-ended, wide UltraSCSI bus. The DS-BA35X-DA personality module provides the interface between the internal, single-ended UltraSCSI bus segment and the shared, wide, differential UltraSCSI bus. The UltraSCSI BA356 uses a 180-watt power supply.
non-UltraSCSI BA356, as shown in Figure 10–8. With proper lighting you will be able to see a J or T near the hole where the pin sticks through. Termination for both ends of the UltraSCSI BA356 internal, single-ended bus is on the personality module, and is always active. Termination for the differential UltraSCSI bus is also on the personality module, and is controlled by the SCSI bus termination switches, switch pack S4. DS-BA35X-DA termination is discussed in Section 10.1.2.2. 10.
10.4.1 Preparing BA350, BA356, and UltraSCSI BA356 Storage Shelves for an Externally Terminated TruCluster Server Configuration You may be using the BA350, BA356, or UltraSCSI BA356 storage shelves in your TruCluster Server configuration as follows: • A BA350 storage shelf provides access to SCSI devices through an 8-bit, single-ended, and narrow SCSI-2 interface. It can be used with a DWZZA-VA and connected to a differential shared SCSI bus.
Remove the termination from the differential end by removing the five 14-pin differential terminator resistor SIPs. 3. Attach an H885-AA trilink connector to the DWZZA-VA 68-pin high-density connector. 4. Install the DWZZA-VA in slot 0 of the BA350. 10.4.1.2 Preparing a BA356 Storage Shelf for Shared SCSI Usage To prepare a BA356 storage shelf for shared SCSI bus usage, follow these steps: 1. You need either a DWZZB-AA or DWZZB-VW signal converter. The DWZZB-VW is more commonly used.
If you are using a DWZZB-VW, install it in slot 0 of the BA356. 10.4.1.3 Preparing an UltraSCSI BA356 Storage Shelf for a TruCluster Configuration An UltraSCSI BA356 storage shelf is connected to a shared UltraSCSI bus, and provides access to UltraSCSI devices on the internal, single-ended and wide UltraSCSI bus. The interface between the buses is the DS-BA35X-DA personality module installed in the UltraSCSI BA356.
10.4.2.1 Cabling a Single BA350 Storage Shelf

To cable a single BA350 storage shelf into a cluster, install a BN21K, BN21L, or 328215-00X HD68 cable between the BN21W-0B Y cable on the host bus adapter of each system and the H885-AA trilink connector on the DWZZA-VA in slot 0 of the BA350. See the left half of Figure 10–9.

10.4.2.
10.4.3.1 Connecting a BA350 and a BA356 for Shared SCSI Bus Usage When you use a BA350 and a BA356 for storage on a shared SCSI bus in a TruCluster Server configuration, the BA356 must be configured for SCSI IDs 8 through 14. To prepare a BA350 and BA356 for shared SCSI bus usage (see Figure 10–9), follow these steps: 1. Complete the steps in Section 10.4.1.1 and Section 10.4.1.2 to prepare the BA350 and BA356. Ensure that the BA356 is configured for SCSI IDs 8 through 14. 2.
Figure 10–9 shows a two-member TruCluster Server configuration using a BA350 and a BA356 for storage.

Figure 10–9: BA350 and BA356 Cabled for Shared SCSI Bus Usage

[Figure: member systems 1 and 2 (KZPSA-BB adapters at SCSI IDs 6 and 7), with network and Memory Channel interconnect, on an externally terminated shared SCSI bus cabled to a BA350 (DWZZA-VA in slot 0) and a BA356 (DWZZB-VW in slot 0). The labeled disks include the member 1 boot disk (ID 1), the member 2 boot disk (ID 2), the quorum disk (ID 3), a data disk (ID 4), and disks at IDs 9 and 10. The slot at the power-supply position is not used for a data disk and may hold a redundant power supply. Numbered callouts are listed in Table 10–1.]
Table 10–1: Hardware Components Used for Configuration Shown in Figure 10–9 and Figure 10–10

Callout Number   Description
1                BN21W-0B Y cable
2                H879-AA terminator
3                BN21K, BN21L, or 328215-00X cable (a)
4                H885-AA trilink connector

a The maximum combined length of the BN21K, BN21L, or 328215-00X cables must not exceed 25 meters (82 feet).

10.4.3.
Figure 10–10 shows a two-member TruCluster Server configuration using two BA356s for storage.

Figure 10–10: Two BA356s Cabled for Shared SCSI Bus Usage

[Figure: member systems 1 and 2 (KZPSA-BB adapters at SCSI IDs 6 and 7), with network and Memory Channel interconnect, on an externally terminated shared SCSI bus cabled to two BA356 storage shelves. In each shelf, the slot at the power-supply position is not used for a data disk and may hold a redundant power supply. Numbered callouts are listed in Table 10–1.]
10.4.3.3 Connecting Two UltraSCSI BA356s for Shared SCSI Bus Usage When you use two UltraSCSI BA356 storage shelves on a shared SCSI bus in a TruCluster configuration, one storage shelf must be configured for SCSI IDs 0 through 6 and the other configured for SCSI IDs 8 through 14. To prepare two UltraSCSI BA356 storage shelves for shared SCSI bus usage (see Figure 10–11), follow these steps: 1. Complete the steps of Section 10.4.1.3 for each UltraSCSI BA356.
Figure 10–11 shows a two-member TruCluster Server configuration using two UltraSCSI BA356s for storage.

Figure 10–11: Two UltraSCSI BA356s Cabled for Shared SCSI Bus Usage

[Figure: member systems 1 and 2 (KZPBA-CB adapters at SCSI IDs 6 and 7), with network and Memory Channel interconnect, cabled to two UltraSCSI BA356 storage shelves; one shelf holds the Tru64 UNIX disk and the other holds data disks. In each shelf, the slot at the power-supply position is not used for a data disk and may hold a redundant power supply. Numbered callouts are listed in Table 10–2.]
Table 10–2: Hardware Components Used for Configuration Shown in Figure 10–11

Callout Number   Description
1                BN21W-0B Y cable
2                H879-AA HD68 terminator
3                BN38C (or BN38D) cable (a)
4                H8861-AA VHDCI trilink connector
5                BN37A cable (b)

a A BN38E-0B technology adapter cable may be connected to a BN37A cable and used in place of a BN38C or BN38D cable.
b The maximum combined length of the BN38C (or BN38D) and BN37A cables on one SCSI bus segment must not exceed 25 meters (82 feet).

10.4.
10.4.4.1 Cabling an HSZ40 or HSZ50 in a Cluster Using External Termination To connect an HSZ40 or HSZ50 controller to an externally terminated shared SCSI bus, follow these steps: 1. If the HSZ40 or HSZ50 will be on the end of the shared SCSI bus, attach an H879-AA terminator to an H885-AA trilink connector. 2. Attach an H885-AA trilink connector to each RAID controller port. Attach the H885-AA trilink connector with the terminator to the controller that will be on the end of the shared SCSI bus. 3.
Figure 10–12 shows two AlphaServer systems in a TruCluster Server configuration with dual-redundant HSZ50 RAID controllers in the middle of the shared SCSI bus. Note that the SCSI bus adapters are KZPSA-BB PCI-to-SCSI adapters. They could be KZPBA-CB host bus adapters without changing any cables.
Figure 10–13: Externally Terminated Shared SCSI Bus with HSZ50 RAID Array Controllers at Bus End

[Figure: member systems 1 and 2 (KZPSA-BB adapters at SCSI IDs 6 and 7), with network and Memory Channel interconnect, on an externally terminated shared SCSI bus that ends at dual-redundant HSZ50 controllers A and B. Numbered callouts are listed in Table 10–3. ZK-1597U-AI]

Table 10–3 lists the components that are used to create the cluster that is shown in Figure 10–12 and Figure 10–13.
10.4.4.2 Cabling an HSZ20 in a Cluster Using External Termination To connect a SWXRA-Z1 (HSZ20 controller) to a shared SCSI bus, follow these steps: 1. Referring to the RAID Array 310 Deskside Subsystem (SWXRA-ZX) Hardware User’s Guide, open the SWXRA-Z1 cabinet, locate the SCSI bus converter board, and: • Remove the five differential terminator resistor SIPs. • Ensure that the W1 and W2 jumpers are installed to enable the single-ended termination on one end of the bus.
______________________ Note _______________________ The RA3000 is supported on a shared SCSI bus only with the KZPBA-CB UltraSCSI host bus adapter. Table 10–4 provides the steps necessary to connect TruCluster Server member systems to an RA3000 storage subsystem using external termination and Y cables.
Table 10–4: Installing Cables for RA3000 Configuration Using External Termination and Y Cables (cont.)

Action: Install a BN37A-0E 50-centimeter (19.7-inch) VHDCI cable between the RA3000 controller shelf Host 0 I/O module Host Out port and the Host 1 I/O module Host In port. The connection to the Host 0 I/O module Host Out port disables the termination on that Host I/O module.
Refer to: —

Action: Install a BN21K, BN21L, or BN31G cable between the BN21W-0B Y cables of any other member systems.
Figure 10–15: Externally Terminated TruCluster Server Configuration with an RA3000 Controller Shelf with Active/Passive Failover

[Figure: AlphaServer member systems 1 and 2 with KZPBA-CB adapters on an externally terminated shared SCSI bus (callouts 1 through 4; terminators marked T), connected through the Host In and Host Out ports of the Host 0 and Host 1 I/O modules of a RAID Array 3000 controller shelf, with a cluster interconnect between the members. ZK-1481U-AI]

Figure 10–16 shows an externally terminated TruCluster Server configuration using an RA3000 controller shelf.
Table 10–5: Hardware Components Used in the TruCluster Server Configuration Shown in Figure 10–14, Figure 10–15, and Figure 10–16

Callout Number   Description
1                H879-AA terminator
2                BN21W-0B Y cable
3                BN21K (BN21L or BN31G) HD68 cable (a)
4                BN38C HD68 to VHDCI cable (a)
5                BN37A-0E 50-centimeter (19.
Table 10–6: Hardware Components Used in the Configuration Shown in Figure 10–17

Callout Number   Description
1                H879-AA terminator
2                BN21W-0B Y cable
3                BN38C HD68 to VHDCI cable (a)
4                BN37A-0E 50-centimeter (19.
6. If you are using a:
• DS-DWZZH-03: Install a BN38C (or BN38D) HD68-to-VHDCI cable between any DS-DWZZH-03 port and the open connector on the H885-AA trilink connector (on the RAID array controller).
• DS-DWZZH-05: Install a BN38C (or BN38D) cable between the DS-DWZZH-05 controller port and the open trilink connector on the HSZ40 or HSZ50 controller.

___________________ Note ___________________
Ensure that the HSZ40 or HSZ50 SCSI IDs match the DS-DWZZH-05 controller port IDs (SCSI IDs 0-6).

7.
Figure 10–18: TruCluster Server Cluster Using DS-DWZZH-03, SCSI Adapter with Terminators Installed, and HSZ50

[Figure: member systems 1 and 2 (internally terminated KZPSA-BB adapters at SCSI IDs 6 and 7), with network and Memory Channel interconnect, connected radially to a DS-DWZZH-03 UltraSCSI hub, which connects to dual-redundant HSZ50 controllers A and B. Numbered callouts are listed in Table 10–7. ZK-1766U-AI]

Table 10–7 lists the components that are used to create the cluster that is shown in Figure 10–18 and Figure 10–19.
Figure 10–19: TruCluster Server Cluster Using KZPSA-BB SCSI Adapters, a DS-DWZZH-05 UltraSCSI Hub, and an HSZ50 RAID Array Controller

[Figure: four member systems (internally terminated KZPSA-BB adapters at SCSI IDs 4, 5, 6, and 7), each with a Memory Channel connection to a Memory Channel hub, connected radially to a DS-DWZZH-05 UltraSCSI hub, which connects to dual-redundant HSZ50 controllers A and B. Numbered callouts are listed in Table 10–7. ZK-1767U-AI]

______________________ Note _______________
11 Configuring an Eight-Member Cluster Using Externally Terminated Shared SCSI Buses

This chapter discusses the following topics:
• Overview of an eight-node cluster (Section 11.1)
• How to configure an eight-node cluster using an UltraSCSI BA356 and external termination (Section 11.2)

TruCluster Server Version 5.1A supports eight-member cluster configurations as follows:
• Fibre Channel: Eight member systems may be connected to common storage over Fibre Channel in a fabric (switch) configuration.
The primary focus of this chapter is an eight-node cluster that uses externally terminated shared SCSI buses with minimal storage. This type of cluster is of primary interest to high-performance technical computing (HPTC) customers. It is also of interest to customers who use Tru64 UNIX Version 4.0F or 4.0G with the TruCluster Memory Channel Software Version 1.6 product and who want to upgrade to Tru64 UNIX Version 5.1A and TruCluster Server Version 5.1A.
11.
Figure 11–1: Block Diagram of an Eight-Node Cluster

Figure 11–1 shows the following:
• All member systems are
The Tru64 UNIX Version 5.1A operating system is installed on member system 1. It can be installed on an internal disk, as is the case in Figure 11–1, or on a shared disk. Member system 1 is used to create the cluster with the clu_create command. Member system 2 is added to the cluster with the clu_add_member command. The shared storage for member systems 1 and 2 contains the root (/), /usr, and /var file systems for the cluster, and the boot disks for member systems 1 and 2.
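The creation sequence described above can be sketched as the following console session. This is an illustrative sketch only: both commands are interactive, and the disks, names, and IDs they ask for depend on your configuration.

```
(On member system 1, after installing Tru64 UNIX Version 5.1A
 and the TruCluster Server subsets, create the single-member
 cluster. The command prompts for the cluster name, the
 clusterwide root (/), /usr, and /var disks, the quorum disk,
 and the member 1 boot disk.)
# clu_create

(Boot member system 1 into the new single-member cluster, then
 add member system 2. The command prompts for the new member's
 name, member ID, and boot disk.)
# clu_add_member
```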
11.2 Configuring an Eight-Node Cluster Using an UltraSCSI BA356 and External Termination

Configuring an eight-node cluster is carried out in three distinct stages, one stage for each shared SCSI bus:
1. Install member systems 1 and 2 and all associated cluster hardware needed to place these two systems on a shared SCSI bus.
2. Install member systems 3, 4, and 5 and all associated cluster hardware needed to place these three systems on a shared SCSI bus with member system 2.
3.
1, which enables them to operate as a failover pair. See the Cluster Highly Available Applications manual for more information. Figure 11–2 provides a detailed illustration of the first two systems in an eight-node shared-SCSI cluster. Table 11–1 lists the components that are used to create the portion of the cluster that is shown in Figure 11–2. To install the cluster hardware for the first two member systems of an eight-node cluster, follow these steps:
1. Install Memory Channel adapters on member systems 1 and 2.
7. Prepare the UltraSCSI BA356 for TruCluster Server use (see Section 10.4.1.3). Ensure that you have installed an H8861-AA VHDCI trilink connector on the UltraSCSI BA356 personality module. ____________________ Note _____________________ If you need more storage than one UltraSCSI BA356 provides, you can daisy-chain two of them together. See Section 10.4.3.3 for more information. 8. Select one KZPBA-CB host bus adapter on each system.
Figure 11–2: First Two Nodes of an Eight-Node Cluster
Table 11–1: Hardware Components Used for Configuration Shown in Figure 11–2

Callout Number   Description
1                BN21W-0B HD68 Y cable
2                H879-AA HD68 terminator
3                BN38C or BN38D HD68 to VHDCI cable (a)
4                H8861-AA VHDCI trilink connector
5                BN39B-04 or BN39B-10 Memory Channel cable
6                BN21K, BN21L, or 328215-00X HD68 to HD68 cable (b)

(a) A BN38E-0B technology adapter cable may be connected to a BN37A cable and used in place of a BN38C or BN38D cable.
____________________ Note _____________________
If member systems 1 and 2 are running cluster software, do not run the mc_cable Memory Channel diagnostic. Shut all systems down to the console level to run the mc_cable diagnostic.

2. Use a BN39B-04 (4-meter; 13.1-foot) or BN39B-10 (10-meter; 32.8-foot) cable to connect the Memory Channel adapters of member systems 3, 4, and 5 to the Memory Channel hub.
3.
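The note above means that the mc_cable diagnostic must be run from the SRM console. A minimal sketch of the procedure, assuming every member system has been shut down to the console prompt (the diagnostic's output is not shown because it varies with the configuration):

```
(At the SRM console prompt on each member system, run the
 Memory Channel cable diagnostic.)
>>> mc_cable

(The diagnostic reports the state of the Memory Channel
 connections as link cables are attached or reseated; run it
 on every member system before booting the cluster.)
```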
9. Connect a BN21K, BN21L, or 328215-00X cable between the BN21W-0B Y cables on member system 4 and member system 5.
10. Connect a BN38C, BN38D, or a combination of a BN38E-0B technology adapter cable and a BN37A cable from the open leg of the BN21W-0B Y cable on member systems 3 and 4 to the H8861-AA VHDCI trilink connector on the UltraSCSI BA356 personality module.
Table 11–2: Hardware Components Used for Configuration Shown in Figure 11–3

Callout Number   Description
1                BN21W-0B HD68 Y cable
2                H879-AA HD68 terminator
3                BN21K, BN21L, or 328215-00X HD68 to HD68 cable (a)
4                H8861-AA VHDCI trilink connector
5                BN38C or BN38D HD68 to VHDCI cable (a)
6                BN39B-04 or BN39B-10 Memory Channel cable (b)

(a) The maximum combined length of the BN21K, BN21L, 328215-00X, BN38C, BN38D, BN38E-0B, and BN37A cables on one SCSI bus segment must not exceed 25 meters (82 feet).
2. Use a BN39B-04 (4-meter; 13.1-foot) or BN39B-10 (10-meter; 32.8-foot) cable to connect the Memory Channel adapters of member systems 6, 7, and 8 to the Memory Channel hub.
3. Refer to the hardware manuals and install the network adapters for the public network on member systems 6, 7, and 8. The public network is not shown in the illustrations in this chapter.
4. Referring to Table 9–3, install a KZPBA-CB host bus adapter on member systems 6, 7, and 8.
Figure 11–4: Third Shared SCSI Bus of an Eight-Node Cluster
Table 11–3: Hardware Components Used for Configuration Shown in Figure 11–4

Callout Number   Description
1                BN21W-0B HD68 Y cable
2                H879-AA HD68 terminator
3                BN21K, BN21L, or 328215-00X HD68 to HD68 cable
4                H8861-AA VHDCI trilink connector
5                BN38C or BN38D HD68 to VHDCI cable (a)
6                BN39B-04 or BN39B-10 Memory Channel cable (b)

(a) A BN38E-0B technology adapter cable may be connected to a BN37A cable and used in place of a BN38C or BN38D cable.
A Worldwide ID-to-Disk Name Conversion Table

Table A–1: Converting Storageset Unit Numbers to Disk Names

File System or Disk    HSG80 Unit    WWID    UDID    Device Name (dskn)
Tru64 UNIX disk
Cluster root (/)
/usr
/var
Member 1 boot disk
Member 2 boot disk
Member 3 boot disk
Member 4 boot disk
Quorum disk
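A hedged sketch of how the columns of Table A–1 can be filled in. The wwidmgr console command and the hwmgr command are the usual sources for this information; the sketch assumes an HSG80 configuration like the one in this manual, and the exact output format is not shown because it varies by system.

```
(At the SRM console, before booting, list the worldwide ID and
 UDID of each HSG80 storageset unit visible to this host.)
>>> wwidmgr -show wwid

(After booting Tru64 UNIX, list the dskn device name that the
 operating system assigned to each unit.)
# hwmgr -view devices
```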
Index Numbers and Special Characters 20/40-GB DLT Tape Drive, 8–11 cabling, 8–12 capacity, 8–11 cartridges, 8–11 connectors, 8–11 setting SCSI ID, 8–11 40/80-GB DLT Drive, 8–15 cabling, 8–16 capacity, 8–15 cartridges, 8–15 connectors, 8–15 setting SCSI ID, 8–15 A ACS V8.5, 2–7 arbitrated loop AL_PA, 6–3 characteristics, 6–6 compared with fabric topology, 6–7 defined, 6–3 setting port_n_topology, 6–48 use of wwidmgr -set, 6–40 Array Control Software ( See ACS V8.
extending differential, 10–2 narrow data path, 3–5 speed, 3–5 terminating, 3–7, 10–5, 10–8 wide data path, 3–5 TL896 tape library, 8–56 TZ885 minilibrary, 8–19 TZ887 minilibrary, 8–22 TZ88N-TA tabletop tape drive, 8–6 TZ88N-VA SBB tape drive, 8–4 changing HSG80 failover modes, 6–77 C caa_relocate command, 5–15t, 5–27t cable length restrictions shared SCSI buses, 3–7t cables BC12N-10 Memory Channel link cable, 2–4, 5–7 BN39B-01 Memory Channel link cable, 5–7, 5–9 BN39B-04 Memory Channel link cable, 5–7,
CSB, 7–4 nodes, 7–5 purpose, 7–4 D data path for buses, 3–5 default SCSI IDs ESL9326D enterprise library, 8–75 TL881/TL891 DLT MiniLibrary, 8–63 TL890 tape library, 8–37 TL891 tape library, 8–37 TL892 tape library, 8–37 TL893 tape library, 8–52 TL894 tape library, 8–38 TL895 tape library, 8–45 TL896 tape library, 8–52 device name, 6–61 device unit number console uses, 6–61 setting, 6–61 diagnostics Memory Channel, 5–12 differential SCSI buses, 3–4 differential transmission, 3–4 Digital Linear Tape ( See
SBB, 3–10 SCSI ID, 3–10 termpwr, 3–9 transfer rate, 2–13 DS-TZ89N-TA tabletop tape drive cabling, 8–10 setting SCSI ID, 8–10 DS-TZ89N-VW SBB tape drive cabling, 8–9 setting SCSI ID, 8–7 dual-redundant controllers, 1–14 DWZZA signal converter incorrect hardware revision, 2–12 termination, 10–3, 10–16 upgrade, 2–13 DWZZB signal converter termination, 10–3, 10–16 DWZZH-03 ( See DS-DWZZH-03 UltraSCSI hub ) DWZZH-05 ( See DS-DWZZH-05 UltraSCSI hub ) E eight-node cluster, 1–20 cabling first two nodes, 11–5 cab
FCP, 6–2
ffauto console environment variable, 6–73
ffnext console environment variable, 6–73
fiber-optic cable
    Fibre Channel, 2–8, 6–27
    Memory Channel, 2–5, 5–6, 5–7, 5–9, 5–11
Fibre Channel
    AL_PA arbitrated loop physical address, 6–3
    arbitrated loop, 6–3, 6–6
    configurations supported, 6–8, 6–10, 6–17
    data r
firmware
    ESL9326D enterprise library, 8–75
    fail-safe loader, 7–18
    KZPBA-CB, 2–10, 4–7, 7–13, 9–5
    KZPSA-BB, 2–9, 9–5t, 9–18
    release notes, 4–4
    reset system for update, 7–21, 9–19
    SRM console, 4–7t, 7–4, 9–5t
trilink connectors, 2–16 hardware configuration bus termination, 3–7, 10–5 cables supported, 2–1 disk devices, 3–16, 10–14 hardware requirements, 2–1 hardware restrictions, 2–1 requirements, 3–1, 10–2 SCSI bus adapters, 2–6 SCSI bus speed, 3–5 SCSI cables, 2–14 SCSI signal converters, 10–2 storage shelves, 3–16, 10–14 terminators, 2–16 terminators supported, 2–1 trilink connectors, 2–16 trilinks supported, 2–1 Y cables supported, 2–1 hierarchical switch power manager ( See HPM ) host bus adapters ( See KG
I K I/O buses number of, 2–6 I/O risers cables, 7–3, 7–9 local, 7–3, 7–9 remote, 7–3, 7–9 init command, 6–37, 6–65, 6–68, 6–73, 6–76 initialize after setting bootdef_dev console environment variable, 6–68, 6–73, 6–76 KGPSA Fibre Channel host bus adapter GLM, 6–36 installing, 6–36 mounting bracket, 6–36 obtaining the worldwide name of, 6–41 setting to run on a loop, 6–39 setting to run on fabric, 6–37 KZPBA-CB UltraSCSI host bus adapter displaying device information, 4–9t, 4–16, 9–6t, 9–7t after using
( See LFU ) Logical Storage Manager ( See LSM mirroring ) loop topology AL_PA, 6–3 characteristics, 6–6 defined, 6–3 setting controller values, 6–48 setting port_n_topology, 6–48 use of wwidmgr -set, 6–40 LSM mirroring across SCSI buses, 1–12 clusterwide /usr, 1–13 clusterwide /var, 1–13 clusterwide data disks, 1–13 M MA6000 modular array configuring, 2–9 port configuration, 2–9 transparent failover mode, 2–9 unit configuration, 2–9 MA8000 modular array configuring, 2–9 port configuration, 2–9 transparent
setting, 6–44, 6–78 N N_Port node port, 6–3 NL_Port node loop port, 6–3 no single point of failure BA350, 10–15 BA350 and BA356, 10–19 BA356, 10–16, 10–21 UltraSCSI BA356, 10–17, 10–23 Prestoserve using in a cluster, 4–3 PSM, 7–5 ( See NSPOF ) node name, 6–50 non-Ultra BA356 storage shelf preparing, 10–15 NSPOF, 1–14, 3–18 O optical cable, 6–27, 6–34 optical converter cable connection, 5–6 installation, 5–6 options list, 2–6, 3–16 P partitioned storagesets, 3–18 PBM, 7–5 PCI backplane manager ( See PB
RAID, 1–14 RAID Array 3000 ( See RA3000 ) RAID array controllers advantages, 3–17 preparing, 10–25 shared SCSI bus and, 10–25 using in ASE, 10–25 Redundant Array of Independent Disks ( See RAID ) repartitioning procedure, 7–8 replacing HSG80 controller, 6–51 reset ( See system reset ) resetting offsets, 6–77 restrictions, 2–8 disk devices, 2–10 KZPBA-CB adapters, 2–10 KZPSA adapters, 2–9 Memory Channel interconnects, 2–3 RA3000, 2–11 SCSI bus adapters, 2–6 rm_rail_style, 5–1 rolling upgrade MC1 to MC2,
HSZ50 controller, 10–25 in BA356, 10–11 in UltraSCSI BA356, 10–13 KZPBA-CB, 4–17 RAID subsystem controllers, 10–25 requirement, 3–6 setting, 4–9t, 4–17, 8–25, 9–17, 10–17 UltraSCSI BA356, 10–13, 10–17 SCSI targets number of, 2–8 SCSI terminators supported, 2–16 SCSI-2 bus, 3–6 SCSI_VERSION SCSI-2, 6–45 SCSI-3, 6–45 selecting BA356 disk SCSI IDs, 10–11 selecting UltraSCSI BA356 disk SCSI IDs, 10–13 set bootdef_dev command, 6–68, 6–73, 6–76 SET FAILOVER COPY = THIS_CONTROLLER command, 1–15 set ffauto comm
UltraSCSI BA356 storage shelf, 10–15, 10–17 show config command, 4–10, overview, 10–8, 10–13 setting up, 3–16, 10–14 StorageWorks building block ( See SBB ) 4–13, 9–6t, 9–7t, 9–9, 9–11 show csb command, 7–16 show device command, 4–9t, 4–12, subscriber connector 4–13, 9–6t, 9–7t, 9–11 show nvr command, 7–10, 7–19 show system command, 7–14 supported options SHOW THIS_CONTROLLER command, 6–50 signal converters, 10–2 creating differential bus, 10–2 differential I/O module, 10–2 differential termination,
UltraSCSI BA356, 10–14 termination resistors KZPBA-CB, 4–9t, 4–17, 9–5t, 9–7t KZPSA-BB, 9–5t, 9–7t terminators supported, 2–16 TL881 tape library, 8–58 TL881/891 DLT MiniLibrary cabling, 8–64, 8–67 capacity, 8–58, 8–60 components, 8–59 configuring base unit as slave, 8–70 models, 8–58 performance, 8–60 powering up, 8–71 setting the SCSI ID, 8–62, 8–72 TL890 tape library cabling, 8–31 default SCSI IDs, 8–37 powering up, 8–36 setting SCSI ID, 8–36 TL891 tape library, 8–24, 8–58 cabling, 8–26, 8–31 configurin
personality module address switches, 10–13 power supply, 3–3 preparing, 10–15, 10–17 preparing for shared SCSI usage, 10–17 SCSI ID selection, 10–13, 10–17 termination, 10–13 UltraSCSI host adapter host input connector, 3–3 with non-UltraSCSI BA356, 3–3 with UltraSCSI BA356, 3–3 UltraSCSI hubs, 3–9 ( See also DS-DWZZH-03 UltraSCSI hub; DS-DWZZH-05 UltraSCSI hub ) upgrade DWZZA, 2–13 upgrading ESL9326D, 8–74 ( See console environment variable ) Very High Density Cable Interconnect ( See VHDCI ) VHDCI, 3–