ServerNet Cluster Manual

Abstract

This manual describes the installation, configuration, and management of HP NonStop™ ServerNet Cluster hardware and software for ServerNet clusters that include the ServerNet Cluster Switch model 6770.

Product Version: N.A.

Supported Release Version Updates (RVUs)

This guide supports G06.21 and H06.03 and all subsequent G-series and H-series release version updates until otherwise indicated by its replacement publication.
Document History

Part Number  Product Version  Published
520371-001   N.A.             May 2001
520440-001   N.A.             July 2001
520575-001   N.A.             November 2001
520575-002   N.A.             May 2002
520575-003   N.A.
ServerNet Cluster Manual — Contents

What’s New in This Manual xvii
  Manual Information xvii
  New and Changed Information xviii
About This Manual xix
  Where to Find More Information xxi
  Notation Conventions xxii

Part I. Introduction
ServerNet Cluster Software Overview 1-33
  SNETMON and the ServerNet Cluster Subsystem 1-33
  MSGMON 1-37
  NonStop Kernel Message System 1-38
  SANMAN 1-38
  Expand 1-39
  OSM and TSM Software 1-44
  SCF 1-45

Part II. Planning and Installation

2. Planning for Installation
3. Installing and Configuring a ServerNet Cluster
4. Upgrading a ServerNet Cluster
  Benefits of Upgrading 4-2
  Benefits of Upgrading to G06.12 (Release 2) Functionality 4-2
  Benefits of Upgrading to G06.14 (Release 3) Functionality 4-3
  Benefits of Upgrading to G06.
5.
  Stopping ServerNet Cluster Services 5-33
  Switching the SNETMON or SANMAN Primary and Backup Processes 5-34

6. Adding or Removing a Node
  Replacing a ServerNet II Switch 7-38
  Replacing an AC Transfer Switch 7-38
  Replacing a UPS 7-38
  Diagnosing Performance Problems 7-39

Part IV. SCF

8. SCF Commands for SNETMON and the ServerNet Cluster Subsystem
9. SCF Commands for the External ServerNet SAN Manager Subsystem
10. SCF Error Messages
  Types of SCF Error Messages 10-1
  Command Parsing Error Messages 10-1
  SCF-Generated Numbered Error Messages 10-1
  Common Error Messages 10-1
  SCL Subsystem-Specific Error Messages 10-1
  SCF Error Messages Help 10-2
  ServerNet Cluster (SCL) Error Messages 10-2
  SANMAN (SMN) Error Messages 10-7
  If You Have to Call Your Service Provider 10-12

A. Part Numbers
B. Blank Planning Forms
C. ESD Information
D. Service Categories for Hardware Components
E.
H. Using OSM to Manage the Star Topologies
  ServerNet Cluster Resource Appears at Top Level of Tree Pane H-1
  Some Cluster Resources Are Represented Differently in OSM H-1
  Guided Procedures Have Changed H-2
  Options for Changing Topologies H-2
  For More Information About OSM H-3

I. SCF Changes at G06.
Examples
  Example 5-15. INFO PATH, DETAIL Command 5-25
  Example 5-16. INFO PROCESS $NCP, LINESET Command 5-25
  Example 5-17. INFO PROCESS $NCP, NETMAP Command 5-26
What’s New in This Manual

Manual Information
New and Changed Information

This document has been updated throughout to incorporate changes to product and company names. This document now incorporates the ServerNet Cluster 6770 Supplement, for easier access and linking to the information.
About This Manual

The following table describes the sections of this manual.

Part  Section  Title                          This section...
I     1        ServerNet Cluster Description  Introduces the ServerNet Cluster product. It describes hardware components for the 6770 ServerNet Cluster Switch, software components, and the concepts that are essential to understanding the operation of a ServerNet cluster.
II    2        Planning for Installation      Describes how to plan for installing ServerNet cluster hardware and software.
Part      Section  Title                 This section...
IV        10       SCF Error Messages    Describes the error messages generated by SCF and provides the cause, effect, and recovery information for the SCF error messages specific to the ServerNet cluster (SCL) subsystem and the SANMAN (SMN) subsystem.
Appendix  A        Part Numbers          Directs you to NTL for the list of part numbers.
Appendix  B        Blank Planning Forms  Contains blank copies of Planning Forms.
Where to Find More Information

Other ServerNet Cluster Manuals

This manual describes ServerNet clusters that contain ServerNet Cluster Switches (model 6770).
Notation Conventions

Hypertext Links

Blue underline is used to indicate a hypertext link within text. By clicking a passage of text with a blue underline, you are taken to the location described. For example: This requirement is described under Backup DAM Volumes and Physical Disk Drives on page 3-2.

General Syntax Notation

The following list summarizes the notation conventions for syntax presentation in this manual.

UPPERCASE LETTERS.
| Vertical Line. A vertical line separates alternatives in a horizontal list that is enclosed in brackets or braces. For example:

INSPECT { OFF | ON | SAVEABEND }

… Ellipsis. An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times.
!i,o. In procedure calls, the !i,o notation follows an input/output parameter (one that both passes data to the called procedure and returns data to the calling program). For example:

error := COMPRESSEDIT ( filenum ) ;  !i,o

!i:i. In procedure calls, the !i:i notation follows an input string parameter that has a corresponding parameter specifying the length of the string in bytes.
horizontally, enclosed in a pair of brackets and separated by vertical lines. For example:

proc-name trapped [ in SQL | in SQL file system ]

Braces. A group of items enclosed in braces is a list of all possible items that can be displayed, of which one is actually displayed.
!o. The !o notation following a token or field name indicates that the token or field is optional. For example:

ZSPI-TKN-MANAGER token-type ZSPI-TYP-FNAME32.  !o

Change Bar Notation

Change bars are used to indicate substantive differences between this edition of the manual and the preceding edition. Change bars are vertical rules placed in the right margin of changed portions of text, figures, tables, examples, and so on.
Part I. Introduction This part contains only one section: Section 1, ServerNet Cluster Description.
1 ServerNet Cluster Description

This section introduces the ServerNet Cluster product. It describes the hardware and software components and the concepts that are essential to understanding the operation of a ServerNet cluster.

Note. This manual, along with the ServerNet Cluster 6770 Installation and Support Guide, describes ServerNet clusters that contain ServerNet Cluster Switches (model 6770).
The ServerNet Cluster Product

The ServerNet Cluster product is a new interconnection technology for NonStop S-series servers. This technology enables up to 24 servers to be connected in a group, or ServerNet cluster, that can pass information from one server to any other server in the cluster using the ServerNet protocol. Servers using either of the currently supported system topologies (Tetra 8 and Tetra 16) can participate in a cluster.
Three Network Topologies Supported

Figure 1-1. ServerNet Cluster Topologies (Both Fabrics Shown). The figure shows the star topology (star groups connected to cluster switch X1/Y1), the split-star topology (cluster switches X1/Y1 and X2/Y2), and the tri-star topology (cluster switches X1/Y1, X2/Y2, and X3/Y3).
Star Topology

The star topology, introduced with the G06.09 RVU, supports up to eight nodes and requires two cluster switches—one for the X fabric and one for the Y fabric. Because there is only one cluster switch per fabric, the position ID of each cluster switch is always 1. Consequently, the cluster switches are named X1 and Y1.
Split-Star Topology

The split-star topology, introduced with the G06.12 RVU, supports from 2 to 16 nodes and uses up to four cluster switches—two for the X fabric and two for the Y fabric. The two cluster switches on each fabric can have a position ID of either 1 or 2. Consequently, the cluster switches on the X fabric are named X1 and X2, and the cluster switches on the Y fabric are named Y1 and Y2.
Figure 1-3.
Tri-Star Topology

The tri-star topology, introduced with the G06.14 RVU, supports from 2 to 24 nodes and uses up to six cluster switches—three for the X fabric and three for the Y fabric. The three cluster switches on each fabric can have a position ID of 1, 2, or 3. Consequently, the cluster switches on the X fabric are named X1, X2, and X3 and the cluster switches on the Y fabric are named Y1, Y2, and Y3.
Figure 1-4.
Hardware and Software Components for Clustering

For each topology, Table 1-2 lists key hardware components required to construct a ServerNet cluster.

Table 1-2. Hardware Components for Clustering

Component                 Required for Star Topology  Required for Split-Star Topology  Required for Tri-Star Topology
NonStop S-series servers  1 to 8                      1 to 16                           1 to 24

Each server can have 2 to 16 processors.
The ServerNet Cluster product relies on many software components, described in the ServerNet Cluster Software Overview on page 1-33.

Coexistence With Expand-Based Networking Products

Nodes in a ServerNet cluster coexist as systems belonging to an Expand network. The ServerNet cluster product introduces a new line type for Expand: the Expand-over-ServerNet line.
Figure 1-5. ServerNet Cluster Coexistence With a FOX Ring. The figure shows S-series systems connected to the external ServerNet fabrics while K-series and S-series systems participate in a FOX ring.
Benefits of Clustering

Clustering has multiple benefits. ServerNet clusters can improve:

• Performance. For interprocessor communication, ServerNet clusters take advantage of the NonStop Kernel message system for low message latencies, low message processor costs, and high message throughput. The same message system is used for interprocessor communication within the node and between nodes.
Node

When a system joins a network, the system becomes a network node. A node is a uniquely identified computer system connected to one or more other computer systems. Each system in an Expand network is an Expand node. Each system in a ServerNet cluster is a ServerNet node. In general, a ServerNet node can be any model of server that supports ServerNet fabrics. To determine if your server can be part of a ServerNet cluster, refer to the documentation for your server.
Figure 1-6 shows the ServerNet node-number assignments in a split-star topology.

Figure 1-6. ServerNet Node Numbers in a Split-Star Topology (One Fabric Shown). Ports 0 through 7 of the X1 or Y1 cluster switch connect ServerNet nodes 1 through 8; ports 0 through 7 of the X2 or Y2 cluster switch connect ServerNet nodes 9 through 16; ports 8 through 11 on each switch connect the two switches.
Figure 1-7 shows the ServerNet node-number assignments in a tri-star topology.

Figure 1-7.
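The node-number assignments shown in these figures follow a simple pattern, which can be sketched as follows. This is an illustrative sketch only, not part of the product; the function name is invented, and the mapping assumes the numbering shown for the star, split-star, and tri-star topologies (nodes 1-8 on switch position 1, 9-16 on position 2, 17-24 on position 3, each switch serving its nodes on ports 0 through 7).

```python
# Illustrative sketch: derive the X-fabric cluster-switch name and the
# ServerNet II Switch port for a given ServerNet node number, following
# the numbering in Figures 1-6 and 1-7.

def switch_for_node(node_number: int) -> tuple[str, int]:
    """Return (X-fabric switch name, switch port) for a ServerNet node."""
    if not 1 <= node_number <= 24:
        raise ValueError("ServerNet node numbers run from 1 to 24")
    position = (node_number - 1) // 8 + 1  # switch position ID: 1, 2, or 3
    port = (node_number - 1) % 8           # ports 0-7 connect the nodes
    return (f"X{position}", port)
```

For example, node 9 connects to port 0 of cluster switch X2 on the X fabric (and, symmetrically, to Y2 on the Y fabric).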
Expand Node Number

An Expand node number, sometimes called a “system number,” is a number that identifies a system in an Expand network. A ServerNet node has both a ServerNet node number and an Expand node number.

X and Y Fabrics

A collection of connected routers and ServerNet links is called a fabric. Two identically configured fabrics, referred to as the X fabric and the Y fabric, together provide a fault-tolerant interconnection for the server.
Figure 1-8. Simplified Logical Diagram Showing Internal X and Y Fabrics. The figure shows each processor connected to both the X fabric and the Y fabric, with ServerNet adapters and disks reachable over either fabric.
ServerNet Cluster Hardware Overview

ServerNet clusters use the following hardware components:

• Routers
• Service processors (SPs)
• Modular ServerNet expansion boards (MSEBs)*
• Plug-in cards (PICs)*
• Node-numbering agent FPGAs
• Cluster switches*
• ServerNet cables*

*For service categories, refer to Appendix D, Service Categories for Hardware Components.
• Program the SEBs and/or MSEBs, as well as any other routers within a system, to route packets to the MSEBs that connect this system to the external ServerNet fabrics. Such routing occurs whenever these packets are addressed to a ServerNet ID that does not lie within the range of ServerNet IDs for the current system.
Modular ServerNet Expansion Boards (MSEBs)

Figure 1-9 shows an SEB and an MSEB.

Figure 1-9.
These MSEBs route packets out of each server and onto the external ServerNet fabrics. All packets destined for other nodes travel through port 6 of these MSEBs. Figure 1-10 shows MSEBs installed in slots 51 and 52 of a NonStop S7xx000 server.

Figure 1-10.
Plug-In Cards

A plug-in card (PIC) allows an MSEB or a ServerNet II Switch to support a variety of ServerNet cable media. The MSEB chassis can hold six single-wide PICs. A single-wide PIC has one connector. The ServerNet II Switch can accommodate eight single-wide PICs and two double-wide PICs. Double-wide PICs have two connectors and are installed in each ServerNet II Switch in ports 8 through 11.
Figure 1-11 shows an NNA PIC and an ECL PIC.

Figure 1-11. NNA and ECL PICs

External Routing and the NNA

To understand the role of the NNA FPGA in a ServerNet cluster, you must understand ServerNet packets and ServerNet IDs. ServerNet packets are the unit of transmission in a ServerNet network.
The NNA FPGA controls ServerNet addressing on the external ServerNet fabrics. The FPGA contains circuitry that modifies the SID or the DID of each packet so the packets are routed correctly between each node.

Note. The NNA FPGA modifies only one of the fields (DID or SID) and the CRC checksum of each packet. The ServerNet address and data payload fields are not modified.

Figure 1-12.
Modification of the ServerNet IDs

Figure 1-13 illustrates how the node number for a ServerNet packet is modified as a packet moves from one node to another in a cluster:

1. ServerNet packets from \A destined for \B leave the local node through the single-mode, fiber-optic PIC installed in port 6 of the MSEB in group 01.
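The NNA’s rewrite rule can be pictured with a small conceptual sketch. This is illustrative only: the Packet type and function name are invented, real ServerNet packet formats are not described here, and the CRC recomputation the hardware performs is omitted.

```python
# Conceptual sketch: the NNA FPGA rewrites exactly one ServerNet ID field
# per packet (the DID, for outbound traffic in this sketch) and leaves the
# ServerNet address and data payload untouched.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    did: int        # destination ServerNet ID
    sid: int        # source ServerNet ID
    payload: bytes  # ServerNet address and data fields (never modified)

def nna_rewrite_did(pkt: Packet, new_did: int) -> Packet:
    # Only the DID changes; the SID and payload pass through unchanged.
    return replace(pkt, did=new_did)
```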
Cluster Switch

The cluster switch is an assembly consisting of the following components:

• ServerNet II Switch
• Uninterruptible power supply (UPS)
• AC transfer switch

Depending on the type of topology, a ServerNet cluster uses from two to six cluster switches. Clusters with a star topology use one cluster switch per external fabric for a total of two cluster switches.
Figure 1-14. Cluster Switch Enclosure

The cluster switch can also be packaged in a 19-inch rack that is 24 to 26 inches (61 to 66 cm) deep.

ServerNet II Switch

The ServerNet II Switch is the main component of the cluster switch. The ServerNet II Switch is a 12-port network switch used in ServerNet networks. In a ServerNet cluster, ports 0 through 7 provide the physical junction points that enable the nodes to connect to the cluster switches.
Figure 1-15. Cluster Switch Components (ServerNet II Switch, AC transfer switch, and uninterruptible power supply)

For detailed information about the ServerNet II Switch, refer to the ServerNet Cluster 6770 Hardware Installation and Support Guide.

Single-Wide and Double-Wide PICs

Like the MSEB, the ServerNet II Switch uses plug-in cards (PICs) to allow for a variety of ServerNet cable media.
Single-wide fiber-optic PICs connect each node to a cluster switch. Double-wide fiber-optic PICs connect cluster switches on the same fabric in the split-star and tri-star topologies.

Uninterruptible Power Supply (UPS)

Within a cluster switch, the uninterruptible power supply (UPS), AC transfer switch, and ServerNet II Switch power supply form the cluster switch power subsystem.
ServerNet II Switch. Figure 1-15 shows the AC transfer switch in a cluster switch enclosure.

Note. Relay scrubbing is not supported for the AC transfer switch. For most installations, the UPS provides backup power for enough time for you either to replace or bypass the AC transfer switch in the event of a failure.

ServerNet Cables for Each Node

ServerNet cables provide ServerNet links between routing devices.
Connections Between Cluster Switches

Figure 1-17. Routing Across the Four-Lane Links

From Any Node Connected to Cluster Switch  To Distant ServerNet Nodes  Uses Port
X1/Y1                                      9 and 10                    8
X1/Y1                                      11 and 12                   9
X1/Y1                                      13 and 14                   10
X1/Y1                                      15 and 16                   11
X2/Y2                                      1 and 2                     8
X2/Y2                                      3 and 4                     9
X2/Y2                                      5 and 6                     10
X2/Y2                                      7 and 8                     11
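The port-selection rule in Figure 1-17 can be expressed as a small lookup. This is an illustrative sketch only: the table and function names are invented, and only the split-star four-lane-link data from the figure is encoded.

```python
# Illustrative sketch of the Figure 1-17 routing rule: which ServerNet II
# Switch port carries traffic across the four-lane link, given the local
# cluster switch and the distant destination node.

FOUR_LANE_PORTS = {
    "X1/Y1": {(9, 10): 8, (11, 12): 9, (13, 14): 10, (15, 16): 11},
    "X2/Y2": {(1, 2): 8, (3, 4): 9, (5, 6): 10, (7, 8): 11},
}

def four_lane_port(local_switch: str, distant_node: int) -> int:
    """Return the outbound port for a node on the far side of the link."""
    for nodes, port in FOUR_LANE_PORTS[local_switch].items():
        if distant_node in nodes:
            return port
    raise ValueError("node is local to this switch, not across the link")
```

For example, a node attached to X1/Y1 reaches distant node 9 or 10 through port 8.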
Figure 1-18. Routing Across the Two-Lane Links. The figure shows, for any node connected to cluster switch X1/Y1, X2/Y2, or X3/Y3, which two-lane link carries traffic to distant ServerNet nodes.
ServerNet Cluster Software Overview

ServerNet clusters use the following software components:

• SNETMON and the ServerNet cluster subsystem
• MSGMON
• NonStop Kernel message system
• SANMAN
• Expand
• OSM or TSM software
• SCF

SNETMON and the ServerNet Cluster Subsystem

SNETMON is the Subsystem Programmatic Interface (SPI) server for ServerNet cluster subsystem-management commands.
• Receives path-event information from the individual processors in the system and translates this information into system-connection status information and EMS events
• Responds to queries from OSM or TSM client applications using the Subsystem Programmatic Interface (SPI) protocol
• Provides ServerNet status and statistics information to SNETMON clients
• Keeps its backup process up to date

SNETMON also maintains the ServerNet c
Figure 1-19. ServerNet Cluster Logical Diagram. The figure shows SCF and the TSM service processor communicating over SPI with SANMAN and SNETMON; MSGMON and the message system; EMS event collectors ($0 and $ZLOG); and one Expand-over-ServerNet line-handler process for each remote system, exchanging status and commands with the application file system through NAM, the NRT, and NCP.
You add the ServerNet cluster monitor process as a generic process using the Kernel subsystem SCF ADD PROCESS command. See the SCF Reference Manual for the Kernel Subsystem for complete command syntax. The ServerNet cluster monitor process must be configured:

• To be persistent (set the AUTORESTART attribute to a nonzero value).
• To have the process name $ZZSCL (set the NAME attribute to $ZZSCL).
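A hedged sketch of the ADD PROCESS command these requirements imply is shown below. The generic-process name #ZZSCL, the AUTORESTART value, and the program-file path are illustrative assumptions, not values stated in this manual; see the SCF Reference Manual for the Kernel Subsystem for the authoritative syntax.

```
-> ADD PROCESS $ZZKRN.#ZZSCL, NAME $ZZSCL, AUTORESTART 10, &
   PROGRAM $SYSTEM.SYSTEM.SNETMON
-> START PROCESS $ZZKRN.#ZZSCL
```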
These states and their transitions are described in detail in Section 8, SCF Commands for SNETMON and the ServerNet Cluster Subsystem.

MSGMON

MSGMON is a monitor process that resides in each processor of a server and executes functions required by the message system. MSGMON is a helper for SNETMON. MSGMON handles communications between SNETMON and individual processors. MSGMON also logs events from and generates events on behalf of the message system.
MSGMON must be configured:

• To be persistent.
• To run in every processor of a system.
• To have the process name $ZIMnn, where nn is the processor number. The CPU ALL attribute ensures that the process names are created with the proper CPU number suffix. The recommended symbolic name is MSGMON.
• To run under the super ID (255,255).
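A hedged sketch of an ADD PROCESS command meeting these requirements follows. The AUTORESTART value, the USERID attribute spelling, and the program-file path are illustrative assumptions; the CPU ALL attribute and the $ZIM name prefix come from the requirements above. Consult the SCF Reference Manual for the Kernel Subsystem for the authoritative syntax.

```
-> ADD PROCESS $ZZKRN.#MSGMON, NAME $ZIM, CPU ALL, AUTORESTART 10, &
   USERID 255,255, PROGRAM $SYSTEM.SYSTEM.MSGMON
-> START PROCESS $ZZKRN.#MSGMON
```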
The following list summarizes information about SANMAN:

Process Description: External system area network manager process
Abbreviation: SANMAN
Generic Process Name (Recommended): $ZZKRN.#ZZSMN
Process Pair Name: $ZZSMN
Product Number: T0502
Program File Name: $SYSTEM.SYSnn.SANMAN

SANMAN can be run in any processor. The process pair must be configured to be started by the persistence manager.
Expand

The Expand subsystem supports a variety of protocols and communications methods to enable you to connect systems in local area network (LAN) and wide area network (WAN) topologies. Expand-over-FOX and Expand-over-ATM are two examples of communications methods.

Expand-Over-ServerNet Line-Handler Processes

Expand-over-ServerNet is a communications medium for the Network Access Method (NAM).
Figure 1-20. Line-Handler Processes in a Four-Node Cluster. The figure shows \NODE1 through \NODE4 (Expand node numbers 001 through 004) connected across the X and Y fabrics, with each node configuring one Expand-over-ServerNet line-handler process ($SC001 through $SC004) as a configured single-line path to each remote node.
The following list summarizes information about the line-handler process:

Description: Expand-over-ServerNet line-handler process
Type: 63
Subtype: 4
Profile: PEXPSSN
ASSOCIATEDEV Default: $ZZSCL (SNETMON)

The Expand-over-ServerNet line-handler process manages security-related messages and forwards packets outside the ServerNet cluster.
Expand and Message-System Traffic

In a ServerNet cluster, message-system traffic flows directly between processors by way of the message system but under the control of Expand. Figure 1-21 diagrams the traffic. Secure message-system traffic between processes on different ServerNet nodes travels through the Expand-over-ServerNet line handlers, and through the local message system between the communicating processes and the line handlers.
OSM and TSM Software

Either HP Open System Management (OSM) or its predecessor, Compaq TSM, software can be used to monitor and service a ServerNet cluster (in addition to your NonStop servers). OSM supports all ServerNet cluster topologies on both G-series and H-series. TSM supports only the star topologies (star, split-star, and tri-star), not the newer layered topology, and TSM does not support H-series.
SCF

Table 1-5 lists the SCF commands that are supported by SNETMON and SANMAN.

Note. For SCF changes made at G06.21 to the SNETMON and SANMAN product modules that might affect management of a cluster with one of the star topologies, see Appendix I, SCF Changes at G06.21.

Table 1-5.
Part II. Planning and Installation
2 Planning for Installation

This section describes how to plan for installing a ServerNet cluster or adding a node to an already-installed cluster.
Planning Checklist

Table 2-1 shows the planning checklist.

Table 2-1. Planning Checklist (page 1 of 3)

Plan for the Topology
• Choose from one of the three supported topologies: star, split-star, or tri-star. (Planning for the Topology on page 2-8)
• Make sure all nodes to be added to the cluster have the software required by the chosen topology.
Table 2-1. Planning Checklist (page 2 of 3)

Plan for Floor Space
• Make sure the number of servers that will be connected to form a ServerNet cluster is no more than 24. (Planning for Floor Space on page 2-18)
• Make sure no server will participate in more than one cluster at a time. (Planning for Floor Space on page 2-18)
• If the servers are already installed . . .
Table 2-1. Planning Checklist (page 3 of 3)

• Make sure emergency power-off (EPO) cables, if required, can be routed to the cluster switches. (NonStop S-Series Planning and Configuration Guide)

Plan to Upgrade Software
• Make sure all servers to be added to the cluster are running the required operating system RVUs or SPRs.
Cluster Planning Work Sheet (Example)

Cluster Name: Production/Sales    Date: 17 Oct.
Planning for the Topology

The topology you use determines the maximum size of the cluster. Table 2-2 compares the topologies.

Table 2-2.
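The size limits that drive this choice can be summarized in a small planning aid. This is an illustrative sketch only, not part of the product; the names are invented, and the limits come from the topology descriptions in Section 1 (star: up to 8 nodes; split-star: up to 16; tri-star: up to 24).

```python
# Illustrative planning aid: pick the smallest supported topology for a
# planned node count, per the limits described in Section 1.

TOPOLOGY_LIMITS = [("star", 8), ("split-star", 16), ("tri-star", 24)]

def smallest_topology(node_count: int) -> str:
    """Return the smallest topology that accommodates node_count nodes."""
    if not 1 <= node_count <= 24:
        raise ValueError("a ServerNet cluster supports at most 24 nodes")
    for name, limit in TOPOLOGY_LIMITS:
        if node_count <= limit:
            return name
```

Note that a larger topology also accommodates smaller clusters; this sketch only identifies the minimum.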
Software Requirements for the Star, Split-Star, and Tri-Star Topologies

Each topology has different software requirements. You must make sure that any server added to a cluster meets the software requirements for the topology. See Table 2-8 on page 2-23.
Alternate Cluster Switch Packaging

The cluster switches are typically packaged in a switch enclosure that is half the height of a NonStop S-series system enclosure. (See Figure 1-14 on page 1-27.) However, the components of a cluster switch can also be ordered separately and installed in a 19-inch rack that you provide. The rack must be an EIA standard rack that is 19 inches wide and 24 to 26 inches deep.
Fiber-Optic Cable Information

Table 2-4. Cable Length Requirements for Multilane Links

Cable Length  Minimum Requirements
Up to 80 m    All nodes in the cluster must meet the requirements for the split-star topology. See Table 2-8 on page 2-23.
Up to 1 km    All nodes in the cluster must be running G06.11 or a later version of the operating system.
Up to 5 km    All of the following:
              • All nodes in the cluster must be running G06.16 or a later version of the operating system.
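The tiers in Table 2-4 can be sketched as a simple check. This is an illustrative sketch only: the function name is invented, only the requirements visible in the table above are encoded, and the 5 km tier carries additional requirements not repeated here.

```python
# Illustrative sketch of the Table 2-4 cable-length tiers for multilane
# links; lengths are in meters.

def multilane_requirement(length_m: float) -> str:
    """Return the minimum software requirement for a multilane-link cable."""
    if length_m <= 80:
        return "split-star topology requirements (see Table 2-8)"
    if length_m <= 1000:
        return "G06.11 or later on all nodes"
    if length_m <= 5000:
        return "G06.16 or later on all nodes, plus additional requirements"
    raise ValueError("multilane links longer than 5 km are not described")
```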
Table 2-5. Fiber-Optic Cable Requirements

Description                Requirements
Supported lengths          • 10 meters
                           • 40 meters
                           • 80 meters
                           • 80 meters, plenum-rated
                           Cables longer than 80 meters are supported for use in a multilane link if certain requirements are met. See Table 2-4 on page 2-11.
Connector/receptacle type  Duplex SC (See Figure 2-1 on page 2-11.)
Two MSEBs Needed for Each Server

Each server that will join a ServerNet cluster must have at least two modular ServerNet expansion boards (MSEBs). These MSEBs must be installed in slots 51 and 52 of the group 01 enclosure. (Other enclosures do not need MSEBs.) Check the CRUs installed in group 01, slots 51 and 52 to determine if they are SEBs or MSEBs. Figure 2-2 shows the location of these slots.
Figure 2-2. Slots 51 and 52 of Group 01 in a NonStop Sxx000 Server

Slot    Component
50, 55  Processor Multifunction (PMF) CRU
51, 52  ServerNet Expansion Board (SEB) or Modular ServerNet Expansion Board (MSEB)
53, 54  ServerNet Adapter
56      Emergency Power-Off (EPO) Connector
Figure 2-3 shows SEBs and MSEBs. You can check these components visually or use the OSM Service Connection or TSM Service Application to discover and display all of the system components on the system console.

Figure 2-3.
PICs for MSEBs That Will Replace SEBs

If slots 51 and 52 of group 01 contain SEBs, you must replace them with MSEBs. The MSEBs that you install in these slots must contain a single-mode fiber-optic PIC with the node-numbering agent (NNA) field-programmable gate array (FPGA) in port 6. Port 6 is used to connect fiber-optic cables from the server to a ServerNet II switch. See Figure 2-4.

Figure 2-4.
Before an MSEB can replace an SEB, the MSEB must be populated with plug-in cards (PICs) to accept the ServerNet cables previously attached to the SEB. For example, if four ECL ServerNet cables are attached to an SEB, the MSEB that replaces it must contain four ECL PICs in the ports to which the cables connect. Figure 2-5 compares the supported SEB and MSEB connectors.
Replacing SEBs With MSEBs Using the Guided Procedure

You can use an OSM or TSM guided replacement procedure to replace an SEB with an MSEB.

Note. The guided procedure for replacing an SEB or MSEB cannot be used to replace ServerNet/FX or ServerNet/DA adapters installed in group 01, slots 51 and 52. To move the adapters to an unused slot in the same enclosure or another enclosure, refer to the manual for each adapter.
Locating the Servers and Cluster Switches

The servers must be located so that they can be connected to the cluster switches with a 10-meter, 40-meter, or 80-meter cable. See Figure 2-6. The cable length is not the straight-line distance: bends in the cable after it is installed can significantly reduce the actual distance the cable can span between the cluster switch and the server.
Note the following considerations for locating the cluster switches:

• The power cord for a cluster switch and its power subsystem measures 8.2 feet (2.5 meters).
• A switch enclosure can be installed on the floor or on top of a base system enclosure.
• To reduce cabling errors, HP recommends that the X-fabric and Y-fabric cluster switches be installed with some distance between them.
Floor Space for Servicing of Cluster Switches

You must plan floor space for the cluster switches that includes service space in front of and behind the switch enclosure (or 19-inch rack). Table 2-6 shows the dimensions of the switch enclosure. The total weight of the switch enclosure and its components is 180 lbs.
Figure 2-9. ServerNet II Switch Extended for Servicing

Planning for Power

Power planning for the servers and their peripheral equipment (for example, tape drives, disk drives, and system consoles) is the same whether or not the server is a member of a ServerNet cluster. Because external ServerNet fabrics use fiber-optic cable, each server is electrically isolated from transient signals originating at another server.
Planning for Software

Planning for software includes preparing to upgrade to a new RVU and verifying the Expand system name and number.

Minimum Software Requirements

Any node that will participate in a ServerNet cluster must have Expand (T9057) software, which is delivered on the site update tape (SUT). In addition, the Expand/ServerNet Profile (T0509) is required for clustering. This is an optional component that, if ordered, is delivered on the SUT.
Checking SPR Levels

Table 2-9 shows how to check the current SPR levels for ServerNet cluster software.

Table 2-9. Checking SPR Levels

Product   Software Component   To check the current SPR level . . .
T0502     SANMAN               At a TACL prompt:
                               > VPROC $SYSTEM.SYSnn.SANMAN
                               or in SCF:
                               -> VERSION PROCESS $ZZSMN, DETAIL
                               If no version is indicated, see Footnote 1.
T0294     SNETMON/MSGMON       At a TACL prompt:
                               > VPROC $SYSTEM.SYSnn.
Version Procedure Information for ServerNet Cluster Software

Some ServerNet cluster software components earlier than G06.12 omit the SPR level from their version procedure information. Table 2-10 shows the version procedure dates that identify the currently installed SPR.

Table 2-10. Version Procedure Information for ServerNet Cluster Software

Product   SPR     VPROC String
SANMAN    T0502 (G06.
SP Firmware Requirement for Systems With Tetra 8 Topology

When preparing a system with the Tetra 8 topology for ServerNet cluster connectivity, you must upgrade the SP firmware to a version that supports clustering (shown in Table 2-8 on page 2-23) before you perform a system load with G06.09 or higher.
For example, if you currently are using Expand-over-IP lines and you want to add Expand-over-ServerNet lines and be able to use both lines together, you must buy the Expand/FastPipe Profile (T0533G06) for every system that will use an Expand-over-IP line, and the Expand/ServerNet Profile (T0509G06) for every system that will use an Expand-over-ServerNet line.
Figure 2-10 shows how LAN connections are made for NonStop S-series servers.

Figure 2-10. LAN Connections for NonStop S-Series Servers

(The figure shows a public LAN, a dedicated LAN for TSM, and a dedicated LAN for SWAN concentrators, all connecting to group 01.)

Dedicated LAN

The dedicated LAN is an Ethernet LAN used for secure management of a NonStop S-series server. Connection to a dedicated LAN is required for all S-series installations.
Figure 2-11. Recommended Configuration for Dedicated LAN

(The figure shows a NonStop S-series server, primary and backup system consoles, two hubs, and two modems for remote service providers.) Note: Do not use this figure as a wiring diagram. Actual connections vary depending on the Ethernet hub you use.

The dedicated LAN connects to the Ethernet ports on PMF CRUs located in group 01. See Figure 2-12.
Figure 2-12. Dedicated LAN
Public LAN

A public LAN is an Ethernet LAN that can include many clients and servers and might or might not include routers or bridges. NonStop S-series servers connect to public LANs using Ethernet 4 ServerNet Adapter (E4SA) or Fast Ethernet ServerNet Adapter (FESA) ports. Figure 2-13 shows a public LAN using an E4SA.
Note the following considerations regarding a public LAN:

• Connection to a public LAN is not required for NonStop S-series installations.
• Additional system consoles can be connected to a public LAN, but a primary or backup (dial-out point) system console cannot be connected to a public LAN.
• A system console connected to a public LAN can run only the Service Application and the Event Viewer Application.
servers in a ServerNet cluster, provided the servers are located close enough for the Ethernet cables to reach.

• A public LAN connected to the E4SAs or FESAs in each NonStop S-series server can be constructed to include all the nodes in a ServerNet cluster. There are no restrictions on the types of servers or workstations that can participate in this LAN.
Figure 2-14. Ethernet LANs Serving Individual Nodes

(The figure shows four nodes, \A through \D, each served by its own Ethernet LAN and hub and connected to the X-fabric and Y-fabric external ServerNet fabrics of the cluster.)
Figure 2-15 shows the same ServerNet cluster but with an Ethernet LAN that links all the nodes in the cluster. This LAN configuration, which could be a dedicated LAN or a public LAN, allows any system console to manage resources on any node.

Figure 2-15. Ethernet LAN Serving Multiple Nodes
3 Installing and Configuring a ServerNet Cluster

This section describes how to install a new ServerNet cluster. If you are modifying an existing cluster, see Section 4, Upgrading a ServerNet Cluster.

To install a new cluster that supports . . .
Task 1: Complete the Planning Checklist

Table 3-1.
Task 2: Inventory Your Hardware

Check that you have all the required hardware components to build your cluster. Table 3-2 can help you inventory hardware.

Table 3-2. Hardware Inventory Checklist

√   Description                                                                      Notes
    Check that you have at least two MSEBs for each node that will participate in    The two required MSEBs must be installed in group 01, slots 51 and 52.
    the cluster.                                                                     MSEBs for other enclosures are optional.
Task 3: Install the Servers

If the individual servers are not already installed, you must install them and ensure that they function properly as stand-alone systems before adding them to a cluster. You can install all the servers now or install them individually before adding each one to the cluster.
For more information about software requirements, refer to Section 2, Planning for Installation. For software installation information, refer to the following:

• G06.xx Software Installation and Upgrade Guide
• NonStop System Console Installer Guide
• Interactive Upgrade Guide

After upgrading operating system software, be sure to save a stable copy of the current configuration using the SCF SAVE command.
Considerations for Connecting ECL ServerNet Cables to ECL PICs

If you are connecting ECL ServerNet cables to ECL PICs, note the following considerations:

• The ServerNet cable connector and ECL PIC connector have standoffs that must be mated correctly. See Figure 3-1.
• When possible, connect the cable to the PIC on the MSEB before installing the MSEB into the slot.
Task 6: Add MSGMON, SANMAN, and SNETMON to the System-Configuration Database

Unless the system is new, you must add MSGMON, SANMAN, and SNETMON as generic processes to the system-configuration database. (These processes are preconfigured on new systems.)
4. Check that MSGMON ($ZIMnn), SANMAN ($ZZSMN), and SNETMON ($ZZSCL) are started:

-> STATUS PROCESS $ZZKRN.*

If MSGMON, SANMAN, and SNETMON are . . .   Then . . .
Started                                    Go to Task 7: Verify That $ZEXP and $NCP Are Started on page 3-11.
Not started                                Start each process as described in Starting MSGMON, SANMAN, and SNETMON on page 3-11.
• From a workstation that has Internet access, you can view a copy of the macro at http://nonstop.compaq.com/. Click Technical Documentation>Compaq S-Series Service (CSSI) Web>Extranet version of the Compaq S-Series Service (CSSI) Web>NonStop ServerNet Cluster>ZPMCONF macro.
2. Configure SANMAN:

Note. For two-processor systems, HP recommends that you specify (00, 01) for the CPU list. For four-processor systems, specify (02, 01, 03) for the CPU list. For systems of six or more processors, specify (02, 05, 06, 03, 07, 04) for the CPU list.

-> ADD PROCESS $ZZKRN.#ZZSMN, &
   AUTORESTART 10, &
   PRIORITY 199, &
   PROGRAM $SYSTEM.SYSTEM.
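The CPU-list recommendations in the note above can be summarized as a lookup keyed on processor count. This Python sketch is illustrative only; the helper name and structure are ours and are not part of SCF or the product:

```python
def recommended_sanman_cpu_list(num_processors):
    """CPU list HP recommends when configuring SANMAN, per the
    note above (illustrative sketch, not a product tool)."""
    if num_processors <= 2:
        return (0, 1)              # (00, 01) on two-processor systems
    if num_processors == 4:
        return (2, 1, 3)           # (02, 01, 03) on four-processor systems
    return (2, 5, 6, 3, 7, 4)      # (02, 05, 06, 03, 07, 04) on six or more
```

The recommended lists favor processors outside CPUs 0 and 1 on larger systems, which keeps SANMAN off the most heavily loaded processors.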
Starting MSGMON, SANMAN, and SNETMON

1. Use the SCF START PROCESS command to start MSGMON, SANMAN, and SNETMON:

-> START PROCESS $ZZKRN.#MSGMON

Note. After typing the START PROCESS $ZZKRN.#MSGMON command, it is normal to receive error messages indicating that the process did not start in one or more processors because those processors are down.
Task 8: Install the Cluster Switches

You must install the X-fabric and Y-fabric cluster switches before you can add a node to the cluster. This task includes installing the cluster switches, routing fiber-optic cables and, if necessary, updating the ServerNet II Switch firmware and configuration.

1. Install the cluster switches as instructed in the ServerNet Cluster 6770 Hardware Installation and Support Guide.
Table 3-3. Decision Table for Updating the ServerNet II Switch Firmware and Configuration

If: The cluster switch will be connected in a star topology with up to eight nodes, and all nodes are running G06.09, G06.10, or G06.11 and do not have the required SPRs listed in Table 3-5 on page 3-15.
Then: The cluster switch can be installed using the preloaded T0569AAA firmware and configuration.
Table 3-4. Firmware and Configuration Compatibility With the NonStop Kernel

If: The ServerNet II Switch will have its firmware and configuration updated with T0569AAB.
Then all nodes must be running:
Table 3-5. Minimum SPR Levels for G06.12 and G06.14 ServerNet Cluster Functionality

Software Component                                Required Minimum SPR Level
                                                  Release 2 (G06.12 equivalent)   Release 3 (G06.14 equivalent)
External ServerNet SAN manager process (SANMAN)   T0502AAE                        T0502AAG
ServerNet cluster monitor process/message system monitor process (SNETMON/MSGMON):
  If G06.09, use T0294G08. If G06.10, use T0294G08. If G06.11, use T0294AAB. If G06.
by stopping the ServerNet cluster subsystem and then disconnecting the fiber-optic cables connected to port 6 of the MSEBs in group 01, slots 51 and 52.

Note. HP does not recommend this method for permanently removing a node from a ServerNet cluster because of the disruption in traffic across the Expand lines. For complete instructions, refer to Section 6, Adding or Removing a Node.
9. In the Available Actions list, select Firmware Update and click Perform Action. The Update Switch guided procedures interface appears.

10. Click Start and follow the guided procedure to download the appropriate firmware. For online help, click the Help menu or click the Help button in any of the procedure dialog boxes.
launched from within the OSM Service Connection by performing the Replace action from the SEB you want to replace. Online help is available to assist you in performing the procedures.

What the Guided Procedure Does

The guided procedure for configuring a ServerNet node:

• Verifies that the group 01 MSEBs are installed and ready
• Tells you when to connect the fiber-optic cables
• Make sure the ferrule housing and the ceramic ferrule tip are visible.
• The ferrule housing should be at least flush with the connector housing.
• It is normal for the ferrule housing to slide freely (approximately 2 mm) within the connector body between the stops designed into the connector-body assembly.
Figure 3-3. Key Positions on ServerNet II Switch Ports

(The figure shows the key positions on ports 0 through 7 and ports 8 through 11.)

5. Insert the connector into the receptacle, squeezing the connector body gently between your thumb and forefinger as you insert it. Push the connector straight into the receptacle until the connector clicks into place. See Figure 3-4.
fail to make a solid connection even though the connector is inserted properly. Figure 3-5 shows a fully inserted connector in which one of the fibers does not make a solid connection.

Figure 3-5. Inserted Connector With Bad Fiber Connection

7. Check the link-alive LED at both ends of the cable.
Using SCF to Configure Expand-Over-ServerNet Line-Handler Processes

HP recommends that you configure line-handler processes using the guided procedure for configuring a ServerNet node. The guided procedure can automatically generate line-handler processes for you, if desired. However, you can configure these processes manually by using SCF.
Rules for Configuring Line-Handler Processes Using SCF

If you use SCF to configure Expand-over-ServerNet line-handler processes manually (in other words, you do not use the guided procedure), observe the following configuration rules:

Rule 1: Whenever possible, configure the primary and backup Expand-over-ServerNet line-handler processes in different processor enclosures.
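Rule 1 can be checked mechanically before you issue the SCF ADD commands. The sketch below assumes the common S-series arrangement of two processors per enclosure (CPUs 0-1 in one enclosure, 2-3 in the next, and so on); that pairing, along with the helper name, is an assumption of this illustration, not a statement from the manual:

```python
def satisfies_rule_1(primary_cpu, backup_cpu):
    """Rule 1 check: the primary and backup Expand-over-ServerNet
    line-handler processes should run in different processor
    enclosures.  Assumes two processors per enclosure, so CPU n
    and CPU n+1 (for even n) share an enclosure."""
    return primary_cpu // 2 != backup_cpu // 2
```

For example, placing the primary in CPU 0 and the backup in CPU 1 would violate the rule under this layout, while CPUs 1 and 2 would satisfy it.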
3. Click the plus (+) sign next to the ServerNet Cluster resource so you can see each node or subcomponent of the ServerNet cluster. The tree pane displays ServerNet cluster resources.

4. Look for yellow or red icons over a resource:

• A yellow icon indicates that a resource is not in a normal operational state or contains subcomponents that are yellow or red.
Installing a ServerNet Cluster Using the Split-Star Topology

A ServerNet cluster using a split-star topology has up to two cluster switches per fabric and can support up to 16 nodes. You construct a split-star topology by installing the two star groups of the split-star as independent clusters and then connecting the clusters using the Add Switch guided procedure.
Task 2: Route the Fiber-Optic Cables for the Four-Lane Links

Table 3-7.
You can save time by configuring one cluster to use the X1/Y1 cluster switches and the other cluster to use the X2/Y2 cluster switches, but this practice is not required. The Add Switch guided procedure detects the cluster switch configuration and gives you an opportunity to change it before connecting the four-lane links.
2. Remove the dust caps from the fiber-optic cable connectors.

Note. ServerNet II Switch ports 8 through 11 are keyed differently from ports 0 through 7. See Figure 3-3 on page 3-20. To connect the four-lane link cables, you must align the fiber-optic cable connector with the key on top.

3. One cable at a time, connect the cable ends. Table 3-8 shows the cable connections.
Task 6: Configure Expand-Over-ServerNet Lines for the Remote Nodes

Unless automatic line-handler configuration is enabled, the ServerNet nodes in each half of the split-star topology will not have Expand-over-ServerNet line-handler processes configured for the remote nodes in the other half of the split-star.
Installing a ServerNet Cluster Using the Tri-Star Topology

A ServerNet cluster using a tri-star topology has three cluster switches per fabric and can support up to 24 nodes. You construct a tri-star topology by installing the three star groups of the tri-star as independent clusters and then connecting the clusters using the Add Switch guided procedure.
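The topologies described in this section scale uniformly: each cluster switch position serves a star group of up to eight nodes, so the maximum cluster size is eight times the number of cluster switches per fabric. A small illustrative sketch (constant and function names are ours):

```python
# Derived from the topology descriptions in this section.
NODES_PER_STAR_GROUP = 8
SWITCHES_PER_FABRIC = {"star": 1, "split-star": 2, "tri-star": 3}

def max_nodes(topology):
    """Maximum cluster size for a topology: eight nodes per star
    group times the number of cluster switches per fabric."""
    return NODES_PER_STAR_GROUP * SWITCHES_PER_FABRIC[topology]
```

This reproduces the figures quoted in the text: 8 nodes for a star, 16 for a split-star, and 24 for a tri-star.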
Task 1: Decide Which Nodes Will Occupy the Three Star Groups of the Tri-Star Topology

Table 3-10.
Task 2: Route the Fiber-Optic Cables for the Two-Lane Links

The following tasks assume that you have routed (but not connected) the single-mode fiber-optic cables to be used for the two-lane links. (Do not connect the two-lane links until you are instructed to do so by the guided procedure.) If you have not already routed the cables:
d. Follow the dialog boxes to update the firmware and configuration for the X-fabric cluster switch if the guided procedure determines that the switch needs updating.

e. When prompted, update the firmware and configuration for the Y-fabric cluster switch. The guided procedure remembers the configuration tag you selected for the X-fabric cluster switch and uses it for the Y-fabric cluster switch.
Connecting the Two-Lane Links

To connect the two-lane links:

1. Remove the black plugs from ports 8 through 11 on the double-wide PICs inside each switch enclosure.

2. Remove the dust caps from the fiber-optic cable connectors.

Note. Cluster switch ports 8, 9, 10, and 11 are keyed differently from ports 0 through 7. To connect the two-lane link cables, you must align the fiber-optic cable connector with the key on top. See Figure 3-3 on page 3-20.
4. Check the link-alive LED near each PIC port. The link-alive LEDs should light a few seconds after each cable is connected at both ends. If the link-alive LEDs do not light:

• Try reconnecting the cable, using care to align the key on the cable plug with the PIC connector.
• Make sure the dust caps are removed from the cable ends.
• If possible, try connecting a different cable.
Task 7: Verify Cluster Connectivity

Use the SCF STATUS SUBNET $ZZSCL, PROBLEMS command to make sure direct ServerNet communication is possible between all nodes connected to the cluster switches:

> SCF STATUS SUBNET $ZZSCL, PROBLEMS

Note. To obtain information about individual fabrics, you can use the SCF STATUS SUBNET $ZZSCL command on all nodes. SCF STATUS SUBNET $ZZSCL requires T0294AAA or a superseding SPR.
4 Upgrading a ServerNet Cluster

This section describes how to upgrade a ServerNet cluster by installing new software, adding ServerNet switches, or both. You need to upgrade a ServerNet cluster when the cluster cannot accept any more ServerNet nodes. Adding cluster switches usually changes the topology of a cluster. In some cases, before changing the topology of the cluster, you must upgrade the software running on each node and the firmware and configuration loaded in each cluster switch.
Benefits of Upgrading

The three major releases of the ServerNet Cluster product are as follows:

Table 4-1. ServerNet Cluster Releases

ServerNet Cluster Release   Introduced With RVU   Supports
Release 1                   G06.09                Up to 8 nodes
Release 2                   G06.12                Up to 16 nodes
Release 3                   G06.14                Up to 24 nodes

Depending on your current software, you can upgrade either to ServerNet cluster release 2 or release 3 functionality.
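Table 4-1 can also be read backwards: given a target cluster size, it tells you the smallest release you must upgrade to. The following sketch encodes the table; the dictionary layout and function name are ours:

```python
# Table 4-1 as a lookup: release -> (introduced with RVU, max nodes)
RELEASES = {
    1: ("G06.09", 8),
    2: ("G06.12", 16),
    3: ("G06.14", 24),
}

def release_for_cluster_size(num_nodes):
    """Smallest ServerNet Cluster release that supports a cluster
    of the given size, per Table 4-1."""
    for release in sorted(RELEASES):
        if num_nodes <= RELEASES[release][1]:
            return release
    raise ValueError("clusters larger than 24 nodes are not supported")
```

A nine-node cluster, for example, already exceeds what release 1 supports and therefore requires at least release 2 functionality.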
Benefits of Upgrading to G06.14 (Release 3) Functionality

Upgrading to G06.14, or upgrading to G06.13 and applying the SPRs listed in Table 4-6 on page 4-8, adds functions, including:

Note. Upgrading the operating system to G06.13 or a later G-series RVU is required only if you need to configure a tri-star topology. Otherwise, you might be able to apply the Release 3 (G06.14) SPRs to G06.09 through G06.
Planning Tasks for Upgrading a ServerNet Cluster

Four tasks in planning to upgrade a ServerNet cluster are as follows:

• Task 1: Identify the Current Topology on page 4-4
• Task 2: Choose the Topology That You Want to Upgrade To on page 4-6
• Task 3: Fill Out the Planning Worksheet on page 4-9
• Task 4: Select an Upgrade Path on page 4-12

Task 1: Identify the Current Topology

To identify the supported upgrade paths for a cluster
5. In the Attributes pane, check the Configuration Tag attribute.

6. To identify the topology, compare the attribute value with the supported configuration tags shown in Table 4-3.

Table 4-3.
Task 2: Choose the Topology That You Want to Upgrade To

To choose the upgrade topology, decide the maximum number of nodes that the upgraded cluster will need to support. This number should include anticipated future growth in the cluster. For maximum scalability, HP recommends upgrading to the tri-star topology or a subset of the tri-star topology.
Table 4-5 compares the topologies.

Table 4-5. Comparison of ServerNet Cluster Topologies

Topology     Introduced With ServerNet Cluster Release   Introduced With RVU   Supported on RVUs
Star         Release 1                                   G06.09                G06.09 and later G-series RVUs
Split-Star   Release 2                                   G06.12                G06.09, G06.10, G06.11, G06.12, G06.13, G06.14, G06.15, G06.
Tri-Star     Release 3                                   G06.14                G06.13, G06.14, G06.15, G06.16
Table 4-6 shows the SPRs required to obtain G06.12 (release 2) and G06.14 (release 3) functionality.

Note. G06.14 functionality includes support for the tri-star topology but also supports the star and split-star topologies and provides significant defect repair.

Table 4-6. SPRs for G06.12 and G06.14 ServerNet Cluster Functionality

Software Component   Release 2 (G06.12 equivalent)   Release 3 (G06.
Task 3: Fill Out the Planning Worksheet

To help you plan for upgrading software, use Table 4-7 on page 4-10 to record the SPR levels of ServerNet cluster software on all nodes in the cluster. Table 4-7 accommodates an eight-node cluster. If your cluster contains more than eight nodes, you can make copies of the worksheet.
Table 4-7. Upgrade Planning Worksheet

                               Node ___     Node ___     Node ___     Node ___
System Name                    \_________   \_________   \_________   \_________
Release Version Update (RVU)   G06._____    G06._____    G06._____    G06._____
SANMAN                         T0502____    T0502____    T0502____    T0502____
SNETMON/MSGMON                 T0294____    T0294____    T0294____    T0294____
ServerNet II Switch firmware and configuration (file ver.
Checking SPR Levels or Version Procedure Information for ServerNet Cluster Software

Table 2-9 on page 2-24 shows how to check the current SPR levels for ServerNet cluster software. However, some ServerNet cluster software components earlier than G06.12 omit the SPR level from their version procedure information. In these cases, see Table 2-10 on page 2-25 for the version procedure dates that identify the currently installed SPR.
Table 4-8. T0569 Firmware Revisions

T0569 SPR   Firmware Revision   Supported Topologies
T0569AAA    2_0_21              Star
T0569AAB    3_0_52              Star and split-star
T0569AAE    3_0_81              Star, split-star, and tri-star
T0569AAF    3_0_82              Star, split-star, and tri-star

Table 4-9. SCF and TSM Display of T0569 Configuration Revisions

T0569 SPR   Topology   SCF displays the configuration revision as . . .
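Table 4-8 implies a simple compatibility check: a switch firmware SPR can be used with a topology only if that topology appears in its row. The mapping below is transcribed from the table; the dictionary and function names are ours:

```python
# T0569 firmware SPR -> supported topologies (from Table 4-8)
T0569_TOPOLOGIES = {
    "T0569AAA": {"star"},
    "T0569AAB": {"star", "split-star"},
    "T0569AAE": {"star", "split-star", "tri-star"},
    "T0569AAF": {"star", "split-star", "tri-star"},
}

def firmware_supports(spr, topology):
    """True if the given ServerNet II Switch firmware SPR supports
    the topology; unknown SPRs support nothing."""
    return topology in T0569_TOPOLOGIES.get(spr, set())
```

For example, a switch still loaded with T0569AAB must have its firmware updated before the cluster can be reconfigured as a tri-star.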
Table 4-10 lists the supported upgrade paths for clusters using the star topology.

Table 4-10. Supported Upgrade Paths for Clusters Using the Star Topology

A star topology using one cluster switch per fabric can be upgraded to:

Topology: Star
Cluster Switches Per Fabric: 1
To upgrade, refer to one of the following:
• Upgrading Software to Obtain G06.12 Functionality on page 4-17
• Upgrading Software to Obtain G06.
Table 4-11 lists the supported upgrade paths for clusters using the split-star topology.

Table 4-11. Supported Upgrade Paths for Clusters Using the Split-Star Topology (page 1 of 2)

A split-star topology using one cluster switch per fabric can be upgraded to:

Topology: Split-star
Cluster Switches Per Fabric: 1
To upgrade, refer to Upgrading Software to Obtain G06.14 Functionality on page 4-34.
Table 4-11. Supported Upgrade Paths for Clusters Using the Split-Star Topology (page 2 of 2)

A split-star topology using two cluster switches per fabric can be upgraded to:

Topology: Split-star
Cluster Switches Per Fabric: 2
To upgrade, refer to Upgrading Software to Obtain G06.14 Functionality on page 4-34. This option upgrades the software from release 2 (G06.12) to release 3 (G06.
Table 4-12 lists the supported upgrade paths for clusters using the tri-star topology.

Table 4-12. Supported Upgrade Paths for Clusters Using the Tri-Star Topology

A tri-star topology using one cluster switch per fabric can be upgraded to:

Topology: Tri-star
Cluster Switches Per Fabric: 2
To upgrade: Merge the cluster with another cluster using one switch per fabric. Refer to Merging Clusters to Create a Tri-Star Topology on page 4-68.
Upgrading Software to Obtain G06.12 Functionality

This upgrade begins with a ServerNet cluster consisting of up to eight nodes running the G06.09, G06.10, or G06.11 RVU. The upgrade:

• Installs new versions of the software listed in Table 4-13 on all nodes

Note. HP recommends upgrading to the latest software whenever possible. See Upgrading Software to Obtain G06.14 Functionality on page 4-34.
Table 4-13. Upgrade Summary: Upgrading Software to Obtain G06.12 Functionality

                                          Before the Upgrade                         After the Upgrade
Max. Nodes Supported                      8                                          8
Cluster Switches Per Fabric               1                                          1
NonStop Kernel Operating System Release   G06.09, G06.10, or G06.11                  G06.09, G06.10, G06.11, or G06.12
ServerNet II Switch Firmware              Empty firmware files (T0569) or T0569AAA   T0569AAB (G06.12 equivalent)
SANMAN Version                            T0502 or T0502AAA (G06.
Figure 4-1 shows an example of a four-node ServerNet cluster before and after a software upgrade without a system load.

Figure 4-1. Example of Upgrading Software for a Four-Node Cluster Without a System Load

(The figure shows the RVU and the T0502, T7945, T0294, T1089, T9082, and T0569 SPR levels on each of the four nodes before and after the upgrade.)
Figure 4-2 shows an example of a four-node ServerNet cluster before and after a software upgrade with system loads.

Figure 4-2. Example of Upgrading Software for a Four-Node Cluster With System Loads
You can upgrade two ways:

• Upgrading Software Without System Loads to Obtain G06.12 Functionality on page 4-21
• Upgrading Software With System Loads to Obtain G06.12 Functionality on page 4-25

Caution. HP recommends that you have access to a spare cluster switch before starting any upgrade procedure that includes a firmware or configuration change.
Note. The following considerations apply to T0569AAA and T0569AAB:

• T0569AAA is included in addition to T0569AAB so that it is available in the archive for fallback purposes.
• Because of time constraints unique to each production environment, installing these SPRs sometimes cannot be accomplished all at once for every node in a cluster.
Upgrading a ServerNet Cluster Upgrading Software Without System Loads to Obtain G06.12 Functionality 3. On all nodes, use SCF to make sure direct ServerNet communication is possible on both fabrics between all nodes connected to the cluster switches: >SCF STATUS SUBNET $ZZSCL Note. Using the SCF STATUS SUBNET $ZZSCL command requires T0294AAA or a superseding SPR.
Upgrading a ServerNet Cluster Upgrading Software Without System Loads to Obtain G06.12 Functionality Note. Direct ServerNet connectivity is automatically restored after an interval of approximately 50 seconds times the number of nodes in the cluster (25 seconds for nodes running G06.14 or a later G-series RVU). For faster (but manual) recovery of ServerNet connectivity, use the SCF START SERVERNET \REMOTE.$ZSNET.FABRIC.
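The recovery interval described in the note above is simple arithmetic. The following sketch estimates the worst-case wait before connectivity is restored automatically; the function name and signature are illustrative and not part of any HP tool:

```python
def estimated_recovery_seconds(num_nodes, g06_14_or_later=False):
    """Estimate the automatic ServerNet reconnection interval.

    Per the note above: approximately 50 seconds times the number of
    nodes in the cluster, or 25 seconds per node when the nodes run
    G06.14 or a later G-series RVU.
    """
    per_node = 25 if g06_14_or_later else 50
    return per_node * num_nodes

# For an 8-node cluster on pre-G06.14 RVUs: 8 * 50 = 400 seconds.
```

If waiting that long is unacceptable, use the manual SCF START SERVERNET recovery described in the note above.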
Upgrading a ServerNet Cluster Upgrading Software With System Loads to Obtain G06.12 Functionality Upgrading Software With System Loads to Obtain G06.12 Functionality This procedure instructs you to shut down the cluster and upgrade each node to the G06.12 RVU or add SPRs to obtain G06.12 functionality. Then you rebuild the cluster using the installation procedures in Section 3, Installing and Configuring a ServerNet Cluster. 1.
Fallback for Upgrading Software to Obtain G06.12 Functionality

The fallback procedure you use depends on whether or not your system can be stopped temporarily for a system load:

• Fallback for Upgrading ServerNet Cluster Software Without a System Load to Obtain G06.12 Functionality on page 4-26
• Fallback for Upgrading ServerNet Cluster Software With System Loads to Obtain G06.12 Functionality
detailed steps, refer to Using the TSM Service Application to Download the ServerNet II Switch Firmware or Configuration on page 4-99.

Note. Note the following considerations for downloading the T0569AAA configuration:

• Downloading the configuration disrupts ServerNet communications across the X-fabric cluster switch.
Upgrading a ServerNet Cluster Fallback for Upgrading ServerNet Cluster Software Without a System Load to Obtain G06.12 8. Use the Configuration Update action of the TSM Service Application to download the T0569AAA M6770CL configuration to the nearest Y-fabric cluster switch. For detailed steps, refer to Using the TSM Service Application to Download the ServerNet II Switch Firmware or Configuration on page 4-99. Note.
Upgrading a ServerNet Cluster Fallback for Upgrading ServerNet Cluster Software Without a System Load to Obtain G06.12 12. On all nodes, use SCF to make sure direct ServerNet communication is possible on both fabrics between all nodes connected to the cluster switches: >SCF STATUS SUBNET $ZZSCL Note. Using SCF STATUS SUBNET $ZZSCL requires T0294AAA or a superseding SPR.
Upgrading a ServerNet Cluster Fallback for Upgrading ServerNet Cluster Software With System Loads to Obtain G06.12 Functionality Fallback for Upgrading ServerNet Cluster Software With System Loads to Obtain G06.12 Functionality Use this procedure if you need to restore the ServerNet II Switch firmware and configuration files after upgrading the nodes to the G06.12 SUT: 1.
d. Use the TSM Service Application to perform a hard reset of the nearest X-fabric cluster switch:

1. From the Cluster tab, click the plus (+) sign next to the External ServerNet X Fabric resource to display the Switch resource.
2. Right-click the Switch resource and select Actions. The Actions dialog box appears.
3.
Upgrading a ServerNet Cluster Fallback for Upgrading ServerNet Cluster Software With System Loads to Obtain G06.12 Functionality detailed steps, refer to Using the TSM Service Application to Download the ServerNet II Switch Firmware or Configuration on page 4-99. Note. If the cluster switch is currently running T0569AAA firmware, updating the configuration can change the fabric setting.
In the SCF display, check the “SvNet Node Number” values for the MSEB port and switch port to make sure they are the same. If the values are not the same, use one of the following recovery measures:

• Use the SCF PRIMARY PROCESS $ZZSMN command to force a takeover of the SANMAN process in the problem node.
Upgrading Software to Obtain G06.14 Functionality

This section contains the following procedures:

• Upgrading Software Without System Loads to Obtain G06.14 Functionality on page 4-35 — use this procedure if you want to obtain G06.14 functionality, but you do not need to create a tri-star topology.
• Upgrading Software With System Loads to Obtain G06.14 Functionality
Upgrading a ServerNet Cluster Upgrading Software Without System Loads to Obtain G06.14 Functionality Upgrading Software Without System Loads to Obtain G06.14 Functionality This procedure allows you to take advantage of defect repair and enhancements such as automatic fail-over for the split-star topology. However, if the cluster contains G06.09 through G06.12 nodes, this procedure does not prepare the cluster for upgrading to a tri-star topology.
Table 4-14. Upgrade Summary: Upgrading Software Without System Loads to Obtain G06.14 Functionality

Max. Nodes Supported: 8 (star topology) or 16 (split-star topology), both before and after the upgrade
Cluster Switches Per Fabric: 1 or 2, both before and after the upgrade
NonStop Kernel Operating System Release (before the upgrade): G06.09, G06.10, G06.11, G06.12, or G06.13
Figure 4-3 on page 4-37 shows an example of a four-node ServerNet cluster before and after a software upgrade without system loads to obtain G06.14 functionality.

Figure 4-3. Example of Upgrading Software Without System Loads to Obtain G06.14 Functionality
Upgrading a ServerNet Cluster Steps for Upgrading Software Without System Loads to Obtain G06.14 Functionality Steps for Upgrading Software Without System Loads to Obtain G06.14 Functionality 1. On the system consoles for all nodes in the cluster, upgrade the TSM client software to Compaq 2001D or a later version of the TSM client. For more information, refer to the NonStop System Console Installer Guide. 2. On all nodes in the cluster: a.
b. Shut down any applications using Expand-over-ServerNet connections between the node and the rest of the cluster.

c. Abort all Expand-over-ServerNet lines on the node for remote nodes in the cluster. On remote nodes, abort the Expand-over-ServerNet line for the node receiving the SPRs:

>SCF ABORT LINE $SCxxx

d. Stop the ServerNet cluster subsystem:

>SCF STOP SUBSYS $ZZSCL

e.
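Steps c and d above lend themselves to scripting when a cluster has many Expand-over-ServerNet lines. The sketch below merely assembles the SCF command strings; the line names are site-specific placeholders, and how you feed the commands to SCF depends on your environment:

```python
def spr_prep_commands(line_names):
    """Build the SCF commands for aborting Expand-over-ServerNet lines
    (step c) and stopping the ServerNet cluster subsystem (step d).

    line_names: Expand-over-ServerNet line names such as "$SC001";
    these values are purely illustrative.
    """
    cmds = ["SCF ABORT LINE %s" % name for name in line_names]
    cmds.append("SCF STOP SUBSYS $ZZSCL")
    return cmds
```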
Upgrading a ServerNet Cluster Steps for Upgrading Software Without System Loads to Obtain G06.14 Functionality q. Use the TSM Service Application to check the ServerNet cluster status. For information about using TSM, refer to Section 5, Managing a ServerNet Cluster. 3. Use SCF to make sure direct ServerNet communication is possible on both fabrics between all nodes connected to the cluster switches: >SCF STATUS SUBNET $ZZSCL, PROBLEMS Note.
Upgrading a ServerNet Cluster Steps for Upgrading Software Without System Loads to Obtain G06.14 Functionality This scenario does not change the ServerNet node number range for the cluster switch. Note. Downloading the firmware and configuration disrupts ServerNet connectivity through the X-fabric cluster switch temporarily. c. Use the TSM Service Application to verify that the X-fabric cluster switch is operational. d.
Upgrading Software With System Loads to Obtain G06.14 Functionality

This upgrade:

• Begins with a ServerNet cluster consisting of up to 16 nodes (star or split-star topologies) running any of the following RVUs: G06.09, G06.10, G06.11, G06.12, or G06.13
• Upgrades the TSM client software on all system consoles
• Migrates the operating system on all nodes to G06.
Table 4-15 summarizes this upgrade.

Table 4-15. Upgrade Summary: Upgrading Software With System Loads to Obtain G06.14 Functionality

Max. Nodes Supported: 8 (star topology) or 16 (split-star topology) before; 8 (star topology), 16 (split-star topology), or 24 (tri-star topology) after
Cluster Switches Per Fabric: 1 or 2 before; 1, 2, or 3 after
NonStop Kernel Operating System Release: G06.
Figure 4-4 shows an example of a four-node ServerNet cluster before and after a software upgrade to obtain G06.14 functionality. This upgrade requires a system load to migrate to G06.13 or G06.14 unless a node is already running G06.13, in which case SPRs can be applied.

Figure 4-4. Example of Upgrading Software With System Loads to Obtain G06.14 Functionality
Steps for Upgrading Software With System Loads to Obtain G06.14 Functionality

To perform the upgrade:

1. Upgrade any nodes running G06.12 or an earlier RVU to G06.14 or a later G-series RVU (recommended) or to G06.13. For information about migrating to a new RVU, refer to:

• Interactive Upgrade Guide
• G06.xx Software Installation Guide

Note. Upgrading the operating system to G06.
Upgrading a ServerNet Cluster Steps for Upgrading Software With System Loads to Obtain G06.14 Functionality Note. Because of time constraints unique to each production environment, installing these SPRs sometimes cannot be accomplished all at once for every node in a cluster. You might need to continue using a star or split-star topology while the SPRs are applied to some—but not all—nodes.
k. Use the TSM Service Application to update the SP firmware. For the detailed steps, refer to the online help.

l. Restart the MSGMON processes:

>SCF START PROCESS $ZZKRN.#MSGMON

m. Restart the SANMAN process:

>SCF START PROCESS $ZZKRN.#ZZSMN

n. Restart the SNETMON process:

>SCF START PROCESS $ZZKRN.#ZZSCL

o. Restart the ServerNet cluster subsystem:

>SCF START SUBSYS $ZZSCL

p.
cluster switches. (You cannot download firmware or a configuration across a four-lane link to a remote cluster switch.)

Caution. All nodes attached to a cluster switch whose firmware and configuration will be updated with T0569AAB or T0569AAE must be running a version of the operating system that is compatible with the T0569 SPR to be downloaded.
Upgrading a ServerNet Cluster Steps for Upgrading Software With System Loads to Obtain G06.14 Functionality d. Use SCF on all nodes to verify that direct ServerNet connectivity has been restored on the X fabric: >SCF STATUS SUBNET $ZZSCL Note. Direct ServerNet connectivity is automatically restored after an interval of approximately 50 seconds times the number of nodes in the cluster (25 seconds for nodes running G06.14 or a later G-series RVU).
Upgrading a ServerNet Cluster Fallback for Upgrading Software to Obtain G06.14 Functionality Fallback for Upgrading Software to Obtain G06.14 Functionality Use this procedure if you upgraded a cluster to G06.14 functionality and you now need to restore the ServerNet II Switch firmware and configuration files to an earlier version (G06.12, for example). This procedure uses one of the nodes to restore the old configuration files. The other nodes can continue operating as members of the cluster. Note.
Upgrading a ServerNet Cluster Fallback for Upgrading Software to Obtain G06.14 Functionality 5. If you fell back to earlier firmware from T0569AAE in Step 4, power cycle the ServerNet II Switch subcomponent of the nearest X-fabric cluster switch: a. On the ServerNet II Switch front panel, press the Power On button to remove power. (You must fully depress the button until it clicks.) b. Wait at least one minute. c. Press the Power On button again to reapply power to the ServerNet II Switch. 6.
10. If you need to fall back to earlier firmware, use the TSM Service Application to download the T0569AAB (or T0569AAA) M6770CL configuration to the nearest X-fabric cluster switch. For detailed steps, refer to Using the TSM Service Application to Download the ServerNet II Switch Firmware or Configuration on page 4-99. Otherwise, skip this step.

11.
Upgrading a ServerNet Cluster Fallback for Upgrading Software to Obtain G06.14 Functionality 16. If the requisite SPRs are to be removed, make sure the T0569AAB (or T0569AAA) M6770CL configuration has been downloaded to all cluster switches prior to removing the requisite SPRs. 17. If falling back to an earlier version of the operating system (G06.12, for example), perform the following optional step: a. Shut down any nodes for which fallback to an earlier version of the operating system is desired. b.
Upgrading a ServerNet Cluster Merging Clusters to Create a Split-Star Topology Merging Clusters to Create a Split-Star Topology To create a split-star topology, you must merge two clusters that use one cluster switch per fabric. Typically, you will merge two clusters that use the star topology. However, you can also merge valid subsets of other topologies to create a split-star topology.
Table 4-16. Upgrade Summary: Upgrading Software to Create a Split-Star Topology (G06.12 Functionality)

Max. Nodes Supported: 8 before the upgrade; 16 after
Cluster Switches Per Fabric: 1 before; 2 after
NonStop Kernel Operating System Release: G06.09, G06.10, or G06.11 before; after the upgrade, if the four-lane link is less than 80 meters, the cluster can contain G06.09, G06.10, G06.11, or G06.12 nodes.
Figure 4-5. Example of Merging Clusters Containing Pre-G06.13 Nodes
Upgrading a ServerNet Cluster Example: Merging Two Star Topologies to Create a Split-Star Topology Figure 4-6 shows a four-node ServerNet cluster that has been modified to support up to 16 nodes by upgrading software and adding cluster switches. In addition, two new nodes have been added to the cluster. Because all of the nodes in the modified cluster are running G06.11 or a later G-series RVU, up to 1-kilometer four-lane links can be used between the two halves of the splitstar topology.
Figure 4-6. Example of Upgrading a Cluster to a Release 2 Split-Star Topology and Adding Nodes
Upgrading a ServerNet Cluster Example: Merging Two Star Topologies to Create a Split-Star Topology Figure 4-7 shows a four-node ServerNet cluster that has been modified to support up to 16 nodes by upgrading software and adding cluster switches. In addition, one new server has been added to the cluster. Because the nodes in the modified cluster are running G06.09, G06.10, and G06.11 RVUs, the four-lane links must be 80 meters or less. Figure 4-7.
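The two examples above encode a simple rule for four-lane link lengths. The sketch below is illustrative only; it assumes G-series RVU strings follow the GXX.YY form used throughout this manual:

```python
def max_four_lane_link_meters(node_rvus):
    """Maximum four-lane link length for a split-star cluster.

    Per the examples above: if every node runs G06.11 or a later
    G-series RVU, up to 1-kilometer four-lane links can be used; if
    any node runs G06.09 or G06.10, the links must be 80 meters or
    less.
    """
    minor = lambda rvu: int(rvu.split(".")[1])  # "G06.11" -> 11
    return 1000 if all(minor(r) >= 11 for r in node_rvus) else 80
```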
Upgrading a ServerNet Cluster Steps for Merging Two Star Topologies to Create a Split-Star Topology Steps for Merging Two Star Topologies to Create a Split-Star Topology To merge two clusters that use one cluster switch per fabric to create a split-star topology: Caution. Do not connect the four-lane link until you are instructed to do so by the Add Switch guided procedure.
Upgrading a ServerNet Cluster Steps for Merging Two Star Topologies to Create a Split-Star Topology Table 4-17.
Upgrading a ServerNet Cluster Steps for Merging Two Star Topologies to Create a Split-Star Topology 3. Select one of the ServerNet clusters to be the X1/Y1 cluster. If necessary, upgrade the ServerNet cluster software on all nodes in the cluster by using the steps in Upgrading Software to Obtain G06.12 Functionality on page 4-17. 4. Select one of the ServerNet clusters to be the X2/Y2 cluster.
Upgrading a ServerNet Cluster Steps for Merging Two Star Topologies to Create a Split-Star Topology 12. On all nodes in the newly merged cluster, use SCF to verify ServerNet connectivity: >SCF STATUS SUBNET $ZZSCL, PROBLEMS Note. In order to use the PROBLEMS option, T0294AAG (or a superseding SPR) must be applied. If the PROBLEMS option is not available, use the SCF STATUS SUBNET $ZZSCL command on all nodes. SCF STATUS SUBNET $ZZSCL requires T0294AAA or a superseding SPR.
Upgrading a ServerNet Cluster Connecting the Four-Lane Links Connecting the Four-Lane Links When the guided procedure indicates that both clusters are ready to add remote cluster switches, you can connect the four-lane links between the X1/Y1 and X2/Y2 cluster switches. Caution. Before connecting ServerNet cables, inspect the cables as described in Connecting a Fiber-Optic Cable to an MSEB or ServerNet II Switch on page 3-18. Using defective connectors can cause ServerNet connectivity problems.
Upgrading a ServerNet Cluster Connecting the Four-Lane Links 5. One cable at a time, connect the cable ends. Table 4-19 on page 4-65 shows the cable connections. Note. To avoid generating an alarm, you must connect the four-lane links for both fabrics within four minutes. The TSM incident analysis (IA) software generates an alarm eventually if one external fabric has two cluster switches but the other external fabric has only one cluster switch.
Fallback for Merging Clusters to Create a Split-Star Topology

This fallback procedure divides a split-star topology into two ServerNet clusters that support no more than eight nodes each. In this scenario, you separate the clusters into two individual logical clusters with only one switch per fabric for each cluster. (Each star group of the split-star topology is a logical cluster.)

1.
Upgrading a ServerNet Cluster Fallback for Merging Clusters to Create a Split-Star Topology 6. If necessary, use one of the fallback procedures in Upgrading Software to Obtain G06.12 Functionality on page 4-17 to fall back from G06.12 to an earlier RVU in each of the two physically independent clusters. Note. If the cluster switch is currently running T0569AAA firmware, updating the configuration can change the fabric setting.
Merging Clusters to Create a Tri-Star Topology

To create a tri-star topology supporting up to 24 nodes, you can do one of the following:

• Merge three clusters that currently use one cluster switch per fabric (supporting up to eight nodes each).
• Merge a cluster that uses one cluster switch per fabric (supporting up to eight nodes) with another cluster that uses two cluster switches per fabric (supporting up to 16 nodes).
• Upgrades the firmware and configuration in the cluster switches
• Reconfigures the clusters to use ServerNet node numbers 1 through 8, 9 through 16, and 17 through 24

Following the upgrade, the merged cluster uses the tri-star topology and supports up to 24 nodes. Table 4-20 summarizes the upgrade.

Table 4-20.
Table 4-20. Upgrade Summary: Merging Three Star Topologies to Create a Tri-Star Topology (page 2 of 2)

SNETMON/MSGMON Version: before the upgrade, any of the following: T0294 (G06.09 or G06.10), T0294AAA (G06.09 or G06.10), T0294AAB (G06.11), or T0294AAE (G06.12 or G06.13); after the upgrade, T0294AAG (or superseding)
Service Processor (SP) Version
SCF
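The ServerNet node number ranges used by the three star groups (1 through 8, 9 through 16, and 17 through 24, as described above) follow a fixed pattern. This helper is purely illustrative:

```python
def star_group_node_range(group):
    """Return the ServerNet node numbers served by star group 1, 2,
    or 3 of a tri-star topology: 1-8, 9-16, and 17-24 respectively."""
    if group not in (1, 2, 3):
        raise ValueError("a tri-star topology has star groups 1, 2, and 3")
    low = (group - 1) * 8 + 1
    return list(range(low, low + 8))
```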
Upgrading a ServerNet Cluster Example: Merging Three Star Topologies to Create a Tri-Star Topology Figure 4-9 and Figure 4-10 show the merging of three ServerNet clusters into a tri-star topology that can support up to 24 nodes. Figure 4-9 shows three clusters using the star topology installed and ready for merging. Figure 4-9.
Upgrading a ServerNet Cluster Example: Merging Three Star Topologies to Create a Tri-Star Topology Figure 4-10 shows the cluster after the upgrade. The cluster switches have been reconfigured as X1, X2, and X3 in order to construct the new tri-star topology. The upgraded cluster uses 1-kilometer two-lane links. Figure 4-10.
Upgrading a ServerNet Cluster Steps for Merging Three Star Topologies to Create a Tri-Star Topology Steps for Merging Three Star Topologies to Create a Tri-Star Topology The following steps describe how to use the Add Switch guided procedure to merge three clusters that use one cluster switch per fabric to create a tri-star topology: Caution. Do not connect the two-lane links for the tri-star topology until the Add Switch guided procedure instructs you to do so.
Upgrading a ServerNet Cluster Steps for Merging Three Star Topologies to Create a Tri-Star Topology Table 4-21. Planning for Cluster Switches in the Tri-Star Topology Example X Fabric Y Fabric Fabric/Position X1 X____ Y____ GUID V0XE6Z _____________________ _____________________ Configuration Tag 0x10002 _____________________ _____________________ Firmware Rev. 3_0_81 _____________________ _____________________ Configuration Rev.
Upgrading a ServerNet Cluster Steps for Merging Three Star Topologies to Create a Tri-Star Topology Table 4-22.
Upgrading a ServerNet Cluster Steps for Merging Three Star Topologies to Create a Tri-Star Topology 4. If you have not already done so, upgrade all nodes in all clusters to G06.13 or a later G-series RVU. For nodes upgraded to G06.13, you must apply the release 3 SPRs indicated in Table 4-6 on page 4-8. Refer to Upgrading Software to Obtain G06.14 Functionality on page 4-34. 5. On any node connected to one of the clusters, run the Add Switch guided procedure: a.
You must log on to nodes attached to at least two different star groups of the tri-star topology in order for the guided procedure to test all of the remote connections on both fabrics.

If you log on to a node attached to . . . The guided procedure checks these remote connections . . .
Example: Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology

The following example begins with two ServerNet clusters running any of the following RVUs: G06.09, G06.10, G06.11, G06.12, or G06.13. One cluster uses the star topology and includes up to eight nodes. The other cluster uses the split-star topology and includes up to 16 nodes.
Upgrading a ServerNet Cluster Example: Merging A Split-Star Topology and a Star Topology to Create a Tri-Star Topology Table 4-23 summarizes the upgrade. Table 4-23. Upgrade Summary: Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology (page 1 of 2) Before the Upgrade After the Upgrade Max. Nodes Supported 8 or 16 24 Cluster Switches Per Fabric 1 or 2 3 NonStop Kernel Operating System Release G06.09, G06.10, G06.11, G06.12, or G06.13 G06.
Upgrading a ServerNet Cluster Example: Merging A Split-Star Topology and a Star Topology to Create a Tri-Star Topology Table 4-23. Upgrade Summary: Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology (page 2 of 2) SNETMON/MSGMON Version Before the Upgrade After the Upgrade Any of the following: T0294AAG (or superseding) • • Service Processor (SP) Version SCF T0294 (G06.09 or G06.10) T0294AAB (G06.11, G06.12, or G06.
Upgrading a ServerNet Cluster Example: Merging A Split-Star Topology and a Star Topology to Create a Tri-Star Topology Figure 4-11 and Figure 4-12 show the merging of two ServerNet clusters into a tri-star topology that can support up to 24 nodes. Figure 4-11 shows clusters using the star and split-star topologies installed and ready for merging. The split-star topology includes a four-lane link connecting the cluster switches. Figure 4-11.
Upgrading a ServerNet Cluster Steps for Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology Figure 4-12 shows the cluster after the upgrade. The cluster switches from the star topology have been reconfigured as X3 and Y3 in order to construct the new tri-star topology. The upgraded cluster uses 1-kilometer two-lane links. Figure 4-12.
Upgrading a ServerNet Cluster Steps for Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology 2. Decide the configuration tags to be used by the cluster switches in each cluster when they are combined in the tri-star topology. During the upgrade, the Add Switch guided procedure prompts you for the configuration tag. You can use Table 4-21 on page 4-74 to record this information. Table 4-3 on page 4-5 shows the supported configuration tags. Note.
Upgrading a ServerNet Cluster Steps for Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology Configuring the star topology cluster to support nodes 17 through 24 means you will not have to change the ServerNet node numbers supported by the cluster switches using the split-star topology. Note. This procedure assumes that the other two star groups already support nodes 1 through 8 and 9 through 16.
Upgrading a ServerNet Cluster Steps for Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology 6. On the other star group of the split-star cluster, log on to a node and run the Add Switch guided procedure to update the firmware and configuration for the X-fabric cluster switch. 7. When all of the cluster switches on the X fabric are updated, connect the two-lane links between the three cluster switches on the X fabric of the tri-star topology.
Upgrading a ServerNet Cluster Steps for Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology c. Log on to a node in the range of ServerNet node numbers 17 through 24, and use the TSM Service Application to verify the operation of the local X-fabric and Y-fabric cluster switches (X3 and Y3). 14. To configure and start Expand-over-ServerNet lines between the three star groups of the tri-star topology, use the guided procedure for configuring a ServerNet node.
Upgrading a ServerNet Cluster Connecting the Two-Lane Links Connecting the Two-Lane Links When the Add Switch guided procedure indicates that the two-lane links can be connected between the cluster switches in different star groups, connect the two-lane links for the specified fabric. Use the following steps: Caution. Do not connect the two-lane links for the tri-star topology until the Add Switch guided procedure instructs you to do so.
Upgrading a ServerNet Cluster Connecting the Two-Lane Links 5. One cable at a time, connect the cable ends. Table 4-24 shows the cable connections. Caution. During an upgrade from a split-star topology to a tri-star topology, you must first connect all cables on the X fabric and then wait for the guided procedure to prompt you to connect the cables on the Y fabric. Table 4-24. Two-Lane Link Connections for the Tri-Star Topology √ Cluster Switch Port Connects to Cluster Switch . . .
Fallback for Merging Clusters to Create a Tri-Star Topology

Use one of the following procedures to fall back from merging clusters to create a tri-star topology:

• Fallback for Merging Three Star Topologies to Create a Tri-Star Topology on page 4-89
• Fallback for Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology on page 4-89

Fallback for Merging Three Star Topologies to Create a Tri-Star Topology
A split-star configuration provides greater throughput than a subset of a tri-star topology, but the tri-star subset facilitates expansion to 24 nodes. For information about the topologies, refer to Planning for the Topology on page 2-8.

• Disconnects the two-lane links one fabric at a time.
• Connects four-lane links one fabric at a time to the split-star topology.

Note.
c. In the Available Actions box, select Configuration Update, and click Perform action. The Update Switch guided procedure is launched.

d. In the Guided Procedure - Update Switch window, click Start. The Configuration Update dialog box appears.

e. In the Topology box, click the radio button to select the Split-Star topology.

f.
Upgrading a ServerNet Cluster Fallback for Merging a Split-Star Topology and a Star Topology to Create a Tri-Star Topology 9. Repeat Step 3 on the other Y-fabric cluster switch that will be used for the split-star topology. You must change the configuration tags to support the split-star topology. 10. On all Y-fabric cluster switches in the tri-star topology, disconnect the two-lane links (the switch-to-switch cables connected to ports 8 through 11). 11.
Reference Information

This section contains reference information for upgrading a ServerNet cluster:

• Considerations for Upgrading SANMAN and TSM on page 4-93
• Considerations for Upgrading SNETMON/MSGMON and the Operating System on page 4-95
• Updating the Firmware and Configuration on page 4-97
• Updating Service Processor (SP) Firmware on page 4-105
• Updating the Subsystem Control Facility (SCF) on page 4-105

Considerations for Upgrading SANMAN and TSM
• Before the cluster switches are loaded with T0569AAB (G06.12) firmware and configuration files, all nodes connected to the switches must be running at least the G06.12 versions of SANMAN (T0502AAE) and the T7945AAW version of the TSM server software.
• Before the cluster switches are loaded with T0569AAE (G06.14) or T0569AAF (G06.
Considerations for Upgrading SNETMON/MSGMON and the Operating System

The following considerations apply when you upgrade SNETMON/MSGMON and the operating system:

• Since G06.09, SNETMON/MSGMON and the operating system support up to 16 nodes. This support allows the coexistence in a 16-node cluster of ServerNet nodes running G06.09 or a later G-series RVU.
• G06.
• If SNETMON/MSGMON abends due to a version mismatch error, it cannot start direct ServerNet connectivity between the local node and other remote nodes. However, this condition does not interfere with ServerNet connectivity between those other remote nodes.
• All nodes in a split-star topology containing 1-kilometer four-lane links must run G06.11 or a subsequent G-series RVU.
Upgrading a ServerNet Cluster Updating the Firmware and Configuration Updating the Firmware and Configuration This subsection contains the following information: • • • • • • • About the ServerNet II Switch Firmware and Configuration on page 4-97 Firmware and Configuration File Names on page 4-98 Using the TSM Service Application to Download the ServerNet II Switch Firmware or Configuration on page 4-99 Soft Reset and Hard Reset on page 4-101 T0569AAA Firmware and Configuration Files on page 4-101 Upgrad
Upgrading a ServerNet Cluster Updating the Firmware and Configuration The firmware and configuration are saved in flash memory in the ServerNet II Switch. Upon power on or a hard reset, the ServerNet II Switch starts running the firmware and configuration. Firmware and Configuration File Names Table 4-31 lists the firmware and configuration files provided on the SUT. These same files are available in the T0569AAB and T0569AAE SPRs. Table 4-31.
Upgrading a ServerNet Cluster Updating the Firmware and Configuration Table 4-32. Firmware and Configuration Compatibility With the NonStop Kernel If . . . Then all nodes must be running one of. . . The ServerNet II Switch will have its firmware and configuration updated with T0569AAB The ServerNet II Switch will have its firmware updated with T0569AAE and its configuration updated with one of the splitstar configuration tags from T0569AAE: • • • • G06.12 or a later G-series RVU G06.09, G06.
Upgrading a ServerNet Cluster Updating the Firmware and Configuration The Firmware Update and Configuration Update actions execute the Update Switch guided procedure. (There is no direct access to this guided procedure from the Start menu.) You can use the guided procedure to update either the firmware or the configuration, depending on the TSM action you used to start the guided procedure. Caution.
In general, alarms are not created during the reset period, and you can ignore any alarms that occur during these operations.

Note. TSM alarms are not suppressed when sensitive operations are performed on cluster switches using SCF commands.

Soft Reset and Hard Reset

You must perform a soft reset of the ServerNet II Switch subcomponent after a firmware download. After a configuration download, you must perform a hard reset.
Upgrading SANMAN Before Loading New Configuration

Beginning with the G06.12 configuration, all pass-through ServerNet data traffic is disabled by default on the ServerNet II Switch ports. The G06.12 and later versions of SANMAN enable switch ports after ensuring that neighbor checks for the ports have passed. The neighbor-check logic is new functionality implemented by SANMAN in G06.12.
Upgrading a ServerNet Cluster Updating the Firmware and Configuration Combinations of Firmware and Configuration Files For a ServerNet cluster production environment, the recommended combinations of firmware and configuration files are: This combination Is required for . . . And recommended for . . .
Upgrading a ServerNet Cluster Updating the Firmware and Configuration Table 4-33. Upgrading T0569: Sequence for Downloading ServerNet II Switch Firmware and Configuration Currently Running in Cluster Switch Node Number Range Change? T0569 Firmware T0569 Configuration Use This Sequence No Old Old 1. Download the firmware. 2. Download the configuration. Yes Old New Download the firmware only. New Old Download the configuration only. New New No action needed. Old Old 1.
Upgrading a ServerNet Cluster Updating Service Processor (SP) Firmware Falling Back to T0569: Sequence for Downloading the ServerNet II Switch Firmware and Configuration Table 4-34 shows the sequence for falling back to an earlier version of the T0569 firmware and configuration. Values shown in the table are relative. For example, if the running firmware is T0569AAB and the running configuration is T0569AAA, see the column for new firmware and an old configuration.
Part III.
5 Managing a ServerNet Cluster

This section describes how to monitor and control a ServerNet cluster. This section contains two subsections:

Heading             Page
Monitoring Tasks    5-1
Control Tasks       5-26

Monitoring Tasks

Monitoring tasks allow you to check the general health of the ServerNet cluster.
Managing a ServerNet Cluster Displaying Status Information Using the TSM Service Application To obtain ServerNet cluster-related information using the TSM Service Application: 1. Using a system console attached to any functioning node in the ServerNet cluster, log on to the TSM Service Application. (For details about logging on, refer to Appendix F, Common System Operations.) The Management Window appears, as shown in Figure 5-1. Figure 5-1. TSM Management Window VST110.
Managing a ServerNet Cluster Displaying Status Information Using the TSM Service Application 2. In the tree pane, click the GRP-1 resource and the GRP-1.MOD-1 resource to display all of the components in the group 01 processor enclosure. 3. Click the MSEB resource to select it. 4. In the details pane, click the Attributes tab. Figure 5-2 shows the attributes for this resource. Figure 5-2. Attributes for the MSEB VST016.vsd 5.
Managing a ServerNet Cluster Displaying Status Information Using the TSM Service Application 8. Click the Cluster tab. The tree pane displays the ServerNet Cluster resource preselected with the high-level cluster resources below it. See Figure 5-4. Note. The Cluster tab appears in the Management Window if the external ServerNet SAN manager process (SANMAN) can communicate with at least one of the cluster switches. The Cluster tab does not appear if SANMAN cannot communicate with any cluster switches.
Managing a ServerNet Cluster Displaying Status Information Using the TSM Service Application 9. In the details pane, click the Attributes tab. Figure 5-5 shows the attributes for the ServerNet Cluster resource. Figure 5-5. Attributes for the ServerNet Cluster Resource VST044.vsd 10. In the tree pane, click the local node to select it. Figure 5-6 shows the attributes for this resource. Figure 5-6. Attributes for the Local Node VST050.
Managing a ServerNet Cluster Displaying Status Information Using the TSM Service Application 11. In the tree pane, click a remote node to select it. Figure 5-7 shows the attributes for this resource. Figure 5-7. Attributes for the Remote Node Resource VST045.vsd 12. In the tree pane, click either the External_ServerNet_X_Fabric or the External_ServerNet_Y_Fabric to select it. Figure 5-8 shows the attributes for this resource. Figure 5-8. Attributes for the External Fabric Resource VST049.
Managing a ServerNet Cluster Displaying Status Information Using the TSM Service Application 13. In the tree pane, click the plus sign (+) next to the external fabric object to expand it, and then click the switch resource to select it. Figure 5-9 shows the attributes for this resource. Figure 5-9. Attributes for the Switch Resource VST090.
Managing a ServerNet Cluster Displaying Status Information Using the TSM Service Application 14. In the tree pane, click the plus sign (+) next to the switch resource to expand it, and then click any switch-to-node link (for example, Y_PIC_1_To_\name) to select it. The switch-to-node link represents the connection between a cluster switch and an MSEB. Figure 5-10 shows the attributes for this resource. Figure 5-10. Attributes for the Switch-to-Node Link VST097.vsd 15.
Managing a ServerNet Cluster Displaying Status Information Using the TSM Service Application 16. In the tree pane, click a remote switch. Figure 5-12 shows the attributes for this resource. Figure 5-12. Attributes for the Remote Switch Object VST099.
17. For more information about a specific attribute, select the attribute and then press F1 to view the online help. Figure 5-13 shows the F1 Help for the Service State attribute. Figure 5-13. F1 Help for Service State Attribute VST052.vsd
18. In the tree pane, click the External_ServerNet_X_Fabric or the External_ServerNet_Y_Fabric to select it.
19.
Managing a ServerNet Cluster Displaying Status Information Using the TSM Service Application Figure 5-14. Physical/Connection View of External Fabric in a Split-Star Topology VST087.vsd Figure 5-15. Physical/Connection View of External Fabric in a Tri-Star Topology VST133.
Managing a ServerNet Cluster Running SCF Remotely Running SCF Remotely The Subsystem Control Facility (SCF) provides commands that display general information about the ServerNet Cluster subsystem. Because the view of a ServerNet Cluster can change significantly from one node to another, you should gather data at each node by using SCF and the TSM client software, then compare the information.
Table 5-1. SCF Commands for Monitoring a ServerNet Cluster (page 2 of 2)

Use this SCF command . . .    To . . .                                                  See page
STATUS LINE, DETAIL           Check the status for the Expand-over-ServerNet line.      5-22
STATUS PATH, DETAIL           Display detailed information about the path.              5-23
STATUS SWITCH $ZZSMN          Display dynamic status information about the cluster switches on both external fabrics.
Example 5-1. INFO PROCESS Command

> INFO PROCESS $ZZKRN.#ZZSCL
NONSTOP KERNEL - Info PROCESS \MINDEN.$ZZKRN.#ZZSCL
Symbolic Name    *Name     *Autorestart    *Program
ZZSCL            $ZZSCL    10              $SYSTEM.SYSTEM.SNETMON

For more information about the SCF INFO PROCESS command, refer to the SCF Reference Manual for the Kernel Subsystem. You can also use the SCF LISTDEV command to display the SNETMON logical device (LDEV) number, name, and device type.
Managing a ServerNet Cluster Checking the Status of SNETMON Figure 5-16. SNETMON Status Displayed by TSM Service Application VST044.vsd Using SCF to Check the SNETMON Status You can use the Kernel subsystem SCF STATUS PROCESS command to check the status of SNETMON. Example 5-3 shows an SCF STATUS PROCESS command and its output. Example 5-3. STATUS PROCESS Command > STATUS PROCESS $ZZKRN.#ZZSCL NONSTOP KERNEL - Status Process \MINDEN.$ZZKRN.
Managing a ServerNet Cluster Checking the Status of the ServerNet Cluster Subsystem Checking the Status of the ServerNet Cluster Subsystem You can use the TSM Service Application or SCF to check the status of the ServerNet cluster subsystem. Using TSM to Check the ServerNet Cluster Subsystem Status 1. Log on using the TSM Service Application. 2. Click the Cluster tab to view information about the cluster. 3. Select the ServerNet Cluster resource. 4. In the Details pane, click the Attributes tab. 5.
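The SCF counterpart to the TSM steps above uses the ServerNet cluster (SCL) subsystem STATUS SUBSYS command. The following is a minimal sketch; the command form is assumed from the SUBSYS commands described in Section 8, and the output is not shown here:

-> STATUS SUBSYS $ZZSCL

A STARTED logical state indicates that ServerNet cluster services are running on the node.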
Checking ServerNet Cluster Connections

You can use the SCF STATUS SUBNET command to check connections to other systems in the ServerNet cluster. Example 5-6 shows the output of a STATUS SUBNET command.

Example 5-6.
Managing a ServerNet Cluster Generating Statistics Generating Statistics Each processor in every node of a ServerNet cluster keeps a set of statistical counters for each node in the cluster, including the local node. In addition, each processor keeps a set of generic counters that are not associated with any particular node. Each hour, SNETMON causes the statistical counters in each processor to be sent to the service log ($ZLOG).
Managing a ServerNet Cluster Generating Statistics Figure 5-17. Generate ServerNet Statistics Action vst018.vsd 3. Click Perform action. The Action Status window shows the progress of the action. 4. Click Close to close the Actions dialog box. 5. Use the TSM EMS Event Viewer Application to check the statistics event for the system you are validating: a. Start the event viewer and log on as described in Appendix F, Common System Operations.
Monitoring Expand-Over-ServerNet Line-Handler Processes

You can use the WAN subsystem SCF STATUS DEVICE command to obtain information about the state of a line-handler process. Example 5-8 shows a line-handler process in the STARTED state.

Example 5-8. STATUS DEVICE Command Showing STARTED Line-Handler Process

> STATUS DEVICE $ZZWAN.#SC001
WAN Manager STATUS DEVICE for DEVICE
State : ........... STARTED
LDEV number .......
Managing a ServerNet Cluster Monitoring Expand-Over-ServerNet Lines and Paths Using TSM to Monitor Expand-Over-ServerNet Lines 1. Log on using the TSM Service Application. 2. Click the Cluster tab to view information about the cluster. 3. For information about the line configured from the local node to a specific remote node, click the Remote Node resource. 4. In the Details pane, click the Attributes tab. 5. Check the Expand/ServerNet Line LDEV State attribute. Figure 5-18.
Figure 5-19. ServerNet Cluster Connection Status Dialog Box vst058.vsd

Using the SCF STATUS LINE, DETAIL Command

Use the STATUS LINE, DETAIL command to check the status for the Expand-over-ServerNet line. Example 5-10 shows an SCF STATUS LINE, DETAIL command and output for an Expand-over-ServerNet line named $SC003.

Example 5-10. STATUS LINE, DETAIL Command

> STATUS LINE $SC003, DETAIL
EXPAND Detailed Status LINE
PPID.......
Managing a ServerNet Cluster Monitoring Expand-Over-ServerNet Lines and Paths Using the SCF STATUS PATH, DETAIL Command Use the STATUS PATH, DETAIL command to display detailed information about the path. Example 5-11 shows this command. Example 5-11. STATUS PATH, DETAIL Command > STATUS PATH $SC003, DETAIL EXPAND Detailed Status PATH $SC003 PPID......... ( 0, 23) State......... STARTED Trace Status OFF Line LDEVs.. 126 BPID............ ( 1, Number of lines.. Superpath.........
Managing a ServerNet Cluster Monitoring Expand-Over-ServerNet Lines and Paths Using the SCF STATS PATH Command Use the STATS PATH command to display statistical information about the path. Example 5-13 shows a partial listing for this command. Example 5-13. STATS PATH Command > STATS PATH $SC004 EXPAND Stats PATH $SC004, PPID ( 0 Reset Time.... JUL 28,2000 07:25:28 Current Pool Pages Used Max Pool Pages Used Pool Size in Pages Total Number of Pool Fails 89 89 4177 0 20), BPID ( 1, 18) Sample Time.
Managing a ServerNet Cluster Monitoring Expand-Over-ServerNet Lines and Paths Using the SCF INFO PATH, DETAIL Command Use the INFO PATH, DETAIL command to display detailed information about the current or default attribute values for the path. Example 5-15 shows this command. Example 5-15. INFO PATH, DETAIL Command > INFO PATH $SC043, DETAIL EXPAND Detailed Info *Compress.... OFF *OStimeout... 0:00:03:00 *L4Timeout...
Managing a ServerNet Cluster Control Tasks Using the SCF INFO PROCESS $NCP, NETMAP Command Use the INFO PROCESS $NCP, NETMAP command to display the status of the network as seen from a specific system. Example 5-17 shows this command. Example 5-17.
Quick Reference: SCF Commands for Controlling a ServerNet Cluster

Table 5-2 lists SCF commands that can be used to control components of a ServerNet cluster.

Table 5-2. SCF Commands for Controlling a ServerNet Cluster

Use this SCF command . . .       To . . .         See page
START PROCESS $ZZKRN.#MSGMON     Start MSGMON     5-28
ABORT PROCESS $ZZKRN.#MSGMON     Abort MSGMON     5-28
START PROCESS $ZZKRN.
Managing a ServerNet Cluster Starting the Message Monitor Process (MSGMON) ServerNet cluster. SCF commands for configuring, starting, stopping, and displaying information about the SCL subsystem SUBSYS object are described in Section 8, SCF Commands for SNETMON and the ServerNet Cluster Subsystem. Starting the Message Monitor Process (MSGMON) Adding the message monitor process (MSGMON) to the configuration database is described in Section 3, Installing and Configuring a ServerNet Cluster.
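If MSGMON is configured but is in the STOPPED state, it can be started manually with the Kernel subsystem SCF command listed in Table 5-2. A minimal sketch:

-> START PROCESS $ZZKRN.#MSGMON

The corresponding command to stop it, also listed in Table 5-2, is ABORT PROCESS $ZZKRN.#MSGMON.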
Managing a ServerNet Cluster Starting the External ServerNet SAN Manager Process (SANMAN) Starting the External ServerNet SAN Manager Process (SANMAN) Adding the external ServerNet SAN manager process (SANMAN) to the configuration database is described in Section 3, Installing and Configuring a ServerNet Cluster. When you add SANMAN, HP recommends that you set the STARTMODE attribute to SYSTEM. If you do so, SANMAN starts automatically after a system load or a processor reload.
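If SANMAN is configured but not running, it can be started manually with the Kernel subsystem SCF command. This sketch assumes the $ZZKRN.#ZZSMN process name used elsewhere in this manual for SANMAN:

-> START PROCESS $ZZKRN.#ZZSMN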
Managing a ServerNet Cluster Restarting the External ServerNet SAN Manager Process (SANMAN) Restarting the External ServerNet SAN Manager Process (SANMAN) The external ServerNet SAN manager process may be restarted for the following reasons: • • • Both processors in which the ServerNet SAN manager process pair is running are stopped. The $ZPM persistence manager automatically restarts the process pair as soon as any processor in its processor list becomes available.
Managing a ServerNet Cluster Starting ServerNet Cluster Services Aborting SNETMON on a node does not change the state of ServerNet Cluster IPC connectivity to and from that node. However, while SNETMON is not running, the node will not be able to bring up or automatically repair remote IPC connectivity.
Managing a ServerNet Cluster When a System Joins a ServerNet Cluster 2. Then the ServerNet cluster monitor process checks the configuration of its associated ServerNet cluster (SCL) subsystem SUBSYS object: • • If the SUBSYS object is configured with a STARTSTATE attribute set to STOPPED—which is the default—the ServerNet cluster monitor process waits for an SCF START SUBSYS $ZZSCL command before starting ServerNet cluster services and joining the system to the ServerNet cluster.
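When the SUBSYS object's STARTSTATE attribute is STOPPED, an operator must start ServerNet cluster services manually with the command the ServerNet cluster monitor process is waiting for:

-> START SUBSYS $ZZSCL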
Managing a ServerNet Cluster Stopping ServerNet Cluster Services Stopping ServerNet Cluster Services You can use TSM or SCF to stop ServerNet cluster services. Using TSM to Stop ServerNet Cluster Services 1. Log on by using the TSM Service Application. 2. Click the Cluster tab to view information about the ServerNet cluster. 3. In the tree pane, right-click the ServerNet Cluster resource, and select Actions. 4. From the Actions list, click Stop ServerNet Cluster Services. 5. Click Perform action.
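The SCF equivalent of the TSM action above is the command used elsewhere in this manual to bring ServerNet cluster services to a STOPPED logical state:

-> STOP SUBSYS $ZZSCL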
Managing a ServerNet Cluster Switching the SNETMON or SANMAN Primary and Backup Processes 5. On remote systems, as the ServerNet cluster monitor processes receive word that a ServerNet cluster member has departed, they instruct their local processors to bring down the ServerNet connections with the departing system. These remote ServerNet cluster monitor processes then log the node disconnection to the event log.
Managing a ServerNet Cluster Switching the SNETMON or SANMAN Primary and Backup Processes 3. In the tree pane, right-click the ServerNet Cluster resource, and select Actions. The actions dialog box appears. 4. Click the Switch SNETMON Primary Processor action or the Switch SANMAN Primary Processor action. 5. Click Perform action. A confirmation dialog box asks if you are sure you want to perform the action. 6. Click OK. The Action Status window shows the progress of the action. 7.
6 Adding or Removing a Node This section describes how to change the size of an already-installed ServerNet Cluster or a node in a cluster.
The Configure ServerNet Node guided procedure:

• Verifies that the group 01 MSEBs are installed and ready.
• Tells you when to connect fiber-optic cables between the MSEBs and the cluster switches. Online help shows you how to make the cable connections. Section 3, Installing and Configuring a ServerNet Cluster, also contains information about how to connect cables.
• Starts ServerNet cluster services.
5. On all other nodes in the cluster:

a. Use the Expand subsystem SCF ABORT LINE command to abort the Expand-over-ServerNet line for the node being removed.
b. Use the WAN subsystem SCF STOP DEVICE command to stop the Expand-over-ServerNet line-handler process for the node being removed.
c.
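For example, on each remaining node, steps a and b above might look like the following sketch. The line name $SC011 is illustrative, and the line-handler device name is assumed to follow the $ZZWAN.#SC0nn pattern shown in Example 5-8:

-> ABORT LINE $SC011
-> STOP DEVICE $ZZWAN.#SC011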
Adding or Removing a Node Moving a Node From One ServerNet Cluster to Another Moving a Node From One ServerNet Cluster to Another If you have more than one ServerNet cluster, you might want to move a node from one cluster to another. Note the following considerations before moving a node from one cluster to another: • • You must ensure that software on the node being moved is compatible with the software on the cluster being joined.
Adding or Removing a Node Moving ServerNet Cables to Different Ports on the ServerNet II Switches Moving ServerNet Cables to Different Ports on the ServerNet II Switches The ServerNet II Switch is the main component of the cluster switch. Figure 6-1 shows the ServerNet II Switch extended for servicing. Figure 6-1. ServerNet II Switch Component of Cluster Switch VST075.
Adding or Removing a Node Moving ServerNet Cables to Different Ports on the ServerNet II Switches To move the fiber-optic cables at the ServerNet II Switches: 1. Make sure an unused port in the range 0 through 7 is available on the X-fabric and Y-fabric ServerNet II Switches. You must use the same port number on both switches. 2. Complete the Planning Form for Moving ServerNet Cables. See the Sample Planning Form for Moving ServerNet Cables on page 6-9.
Adding or Removing a Node Moving ServerNet Cables to Different Ports on the ServerNet II Switches b. On all other nodes in the cluster, use the SCF ABORT LINE command to abort the Expand-over-ServerNet line to the node being removed. Depending on your service LAN, you might have to log on to each node individually to do this. For example: -> ABORT LINE $SC011 c. On the node being removed, bring ServerNet cluster services to a STOPPED logical state: -> STOP SUBSYS $ZZSCL 6.
Adding or Removing a Node • Moving ServerNet Cables to Different Ports on the ServerNet II Switches Select all of the remote nodes in the ServerNet Cluster Connection Status dialog box and click Configure/Start to start the local and remote lines. If the automatic line-handler configuration feature is enabled on the remote nodes, the lines on those nodes are started automatically. d.
Adding or Removing a Node Moving ServerNet Cables to Different Ports on the ServerNet II Switches Sample Planning Form for Moving ServerNet Cables a. Identify the node whose ServerNet cables will be moved: System Name: \ PROD1 Expand Node Number: 011 b.
Adding or Removing a Node Moving ServerNet Cables to Different Ports on the ServerNet II Switches c. List the lines to abort on the node whose ServerNet cables will be moved and on all other nodes: On the node whose cables will be moved... On all other nodes...
Adding or Removing a Node Expanding or Reducing a Node in a ServerNet Cluster Expanding or Reducing a Node in a ServerNet Cluster Like any NonStop S-series server, a node in a ServerNet cluster can be expanded or reduced (enclosures can be added or removed) while the server is online. However, if online expansion requires changes to the MSEBs in the group 01 enclosure, the node’s connections to the cluster might not be in a fault-tolerant state for a short time.
Adding or Removing a Node Splitting a Large Cluster Into Multiple Smaller Clusters 3. In all nodes of the star group selected in Step 1: a. Stop any applications that depend on ServerNet cluster connectivity to nodes in the other star group(s). b. Stop Expand connectivity to the nodes in the other star group(s): 1. Use the Expand subsystem SCF LISTDEV command to identify the currently configured Expand-over-ServerNet lines: -> LISTDEV TYPE 63,4 2.
Adding or Removing a Node Splitting a Large Cluster Into Multiple Smaller Clusters 6. In all nodes of the star group selected in Step 1: a. Use the SCF START SUBSYS $ZZSCL command to bring up direct ServerNet connectivity between the nodes. -> START SUBSYS $ZZSCL b. If desired, start any applications that utilize ServerNet connectivity to nodes within the star group. 7.
7 Troubleshooting and Replacement Procedures This section describes how to use software tools to diagnose and troubleshoot a ServerNet Cluster. This section also contains replacement procedures for the main hardware components of a ServerNet cluster. This section contains the following subsections: Heading Page Troubleshooting Procedures 7-1 Replacement Procedures 7-35 Note. You can use OSM instead of TSM for any of the procedures described in this manual.
Troubleshooting and Replacement Procedures Troubleshooting Tips gather data at each node using SCF and the TSM client software and then compare the information.
Troubleshooting and Replacement Procedures Software Problem Areas Software Problem Areas Table 7-1 lists some common software problem areas, describes troubleshooting steps, and provides references for more information. Table 7-1. Software Problem Areas (page 1 of 3) Problem Area Symptom Recovery Compaq TSM Client Software The Cluster tab does not appear. See Troubleshooting the Cluster Tab in the TSM Service Application on page 7-9.
Troubleshooting and Replacement Procedures Software Problem Areas Table 7-1. Software Problem Areas (page 2 of 3) Problem Area Symptom Recovery ServerNet Communication Communication on an external fabric is disrupted. See Using the Fabric Troubleshooting Guided Procedure to Check the Internal ServerNet Fabrics on page 7-26 or do the following: Communication on an external fabric is disrupted by: • • • BTE timeouts CRC checksum errors 1.
Table 7-1. Software Problem Areas (page 3 of 3)

Problem Area: ServerNet Communication; Expand-Over-ServerNet Line-Handler Processes and Lines
Symptom: The TSM Service Application shows the remote node name as \Remote_Node_nnn, where nnn is the Expand node number.
Recovery: Verify that the Expand-over-ServerNet line-handler processes between the local node and the remote node are up.
Troubleshooting and Replacement Procedures Hardware Problem Areas Hardware Problem Areas Table 7-2 lists some common hardware problem areas, describes troubleshooting steps, and provides references for more information. Table 7-2. Hardware Problem Areas (page 1 of 3) Problem Area Symptom Recovery MSEB Any 1. Use TSM to check for alarms and repair actions for the MSEB resource. See Using TSM Alarms on page 7-12. 2.
Troubleshooting and Replacement Procedures Hardware Problem Areas Table 7-2. Hardware Problem Areas (page 2 of 3) Problem Area Symptom Recovery ServerNet Cable (SEB to SEB, SEB to MSEB, or MSEB to MSEB) Internal fabric communication 1. Use TSM to check for alarms and repair actions for the Internal Fabric resource. See Using TSM Alarms on page 7-12. 2. Perform SCF and TSM diagnostic actions to get more information. See Checking the Internal ServerNet X and Y Fabrics on page 7-26. 3.
Troubleshooting and Replacement Procedures Hardware Problem Areas Table 7-2. Hardware Problem Areas (page 3 of 3) Problem Area Symptom Recovery ServerNet II Switch Any 1. Use TSM to check for alarms and repair actions for the Switch resource. See Using TSM Alarms on page 7-12. 2. Check the link-alive LEDs. See MSEB and ServerNet II Switch LEDs on page 7-33. 3. See Replacing a ServerNet II Switch on page 7-38. 4.
Troubleshooting and Replacement Procedures Troubleshooting the Cluster Tab in the TSM Service Application Troubleshooting the Cluster Tab in the TSM Service Application The Cluster tab appears in the Management Window of the TSM Service Application if the external ServerNet SAN manager process (SANMAN) can communicate with at least one cluster switch. The cluster tab does not appear if SANMAN cannot communicate with a cluster switch.
Troubleshooting and Replacement Procedures Troubleshooting the Cluster Tab in the TSM Service Application If the Cluster tab does not appear, try the following: 1. Check the TSM client software version: a. From the Help menu, select About Compaq TSM. The About Compaq TSM dialog box appears. b. Verify that the TSM client software version is Version 10.0 or later.
Troubleshooting and Replacement Procedures Online Help for the Guided Procedures Online Help for the Guided Procedures The guided procedures interface includes the following online help files: File Name Contains online help for . . . CLUSTER.CHM Configure ServerNet Node procedure FABRICTS.CHM Troubleshoot a ServerNet Fabric procedure GRT.CHM All of the following guided procedures: • • • • Replace IOMF Replace PMF Replace Power Supply Replace ServerNet/DA PDK.
Troubleshooting and Replacement Procedures Using TSM Alarms Using TSM Alarms An alarm is a message, similar to an event message, that reports detected faults or abnormal conditions for a CRU or component. The tree pane of the TSM Service Application Management window displays a colored bell icon next to a resource causing an alarm. See Figure 7-2. Figure 7-2. Fabric Alarm Example Alarm Bell vst024.
Troubleshooting and Replacement Procedures Using TSM Alarms 4. In the Alarms tab, do one of the following: • • Double-click the alarm for more information. Right-click the alarm and select Details from the menu. The Alarm Detail dialog box appears, showing detailed information about the alarm. See Figure 7-4. For a list of ServerNet cluster-related alarms, see ServerNet Cluster-Related Alarms on page 7-15. Figure 7-4. Alarm Detail Example vst026.vsd 5.
Troubleshooting and Replacement Procedures Using TSM Alarms Figure 7-5. Repair Actions Example VST027.vsd 6. Perform the repair actions to fix the problem and remove the alarm. More detailed information is provided in the TSM alarm attachment file for the alarm. TSM alarm attachment files are named ZZAL* and are attached to problem incident reports. To view the ZZAL* files, refer to Using ZZAL* (Attachment) Files on page 7-15.
Troubleshooting and Replacement Procedures Using TSM Alarms Using ZZAL* (Attachment) Files To find the ZZAL* file for the alarm you are interested in, do the following: • List all of the ZZAL* files in the $SYSTEM.ZSERVICE subvolume using the FILEINFO command: TACL> FILEINFO $SYSTEM.ZSERVICE.ZZAL* • Look for the ZZAL* file with the same timestamp as the time shown in the Alarm time field on the Alarm Detail dialog box.
• Ground Failure
• Hardware Error
• IBC Driver
• Limit for ServerNet Switches exceeded
• Insufficient Backup Time on UPS
• Invalid Fabric Parameter Error
• Invalid Fabric Setting
• Invalid FLASH ID
• Invalid GUID
• Invalid MSEB Configuration Record
• Invalid ServerNet Switch Control Block
• Invalid ServerNet Switch PIC Type
• Link Receive Disabled on ServerNet Switch Port
• Link Tran
• SRAM Memory Test Failure
• Too Many ServerNet Switch Automatic Resets Because of Backpressure
• Upper Boot Block Section of FLASH Locked
• UPS Failure
• UPS Not Responding
• X Fabric Not Connected to the Same ServerNet Switch Port as Y Fabric

Troubleshooting SNETMON

For general information about SNETMON, refer to Section 1, ServerNet Cluster Description. The SNETMON process ($ZZKRN.
Figure 7-6. ServerNet Cluster Attributes Showing SNETMON and SANMAN States

2. If $ZZKRN.#ZZSCL is not configured, refer to Section 3, Installing and Configuring a ServerNet Cluster, for information about configuring and starting it. If $ZZKRN.#ZZSCL is configured but not started, try starting it by typing the following at an SCF prompt:

   -> START PROCESS $ZZKRN.#ZZSCL

3.
Troubleshooting and Replacement Procedures Troubleshooting MSGMON 4. If you continue to have problems, contact your service provider. Note. Systems using the Tetra 8 topology must have a version of SP firmware that supports clustering to participate in a ServerNet Cluster. Otherwise the ServerNet cluster processes $ZZKRN.#ZZSCL (SNETMON) and $ZZKRN.#ZZSMN (SANMAN) will abend repeatedly when a system load is performed with G06.09 or later.
Troubleshooting SANMAN

For general information about SANMAN, refer to Section 1, ServerNet Cluster Description. $ZZKRN.#ZZSMN is a persistent process that should be configured to be started at all times. SANMAN must be in the STARTED state on a system in order for the system to join a ServerNet cluster. Use the following steps to troubleshoot SANMAN:

1. Verify that SANMAN is started.
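For example, assuming the standard process name $ZZKRN.#ZZSMN, you can check whether SANMAN is started and start it if necessary at an SCF prompt (a sketch; STATUS PROCESS is the generic Kernel subsystem status command):

   -> STATUS PROCESS $ZZKRN.#ZZSMN
   -> START PROCESS $ZZKRN.#ZZSMN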
Troubleshooting Expand-Over-ServerNet Line-Handler Processes and Lines

For general information about Expand-over-ServerNet lines and line-handler processes, refer to Section 1, ServerNet Cluster Description, or the Expand Configuration and Management Manual.
Figure 7-7. Remote Node Attributes Showing Expand Information

5. If the Expand-over-ServerNet lines are stopped, start them by doing one of the following:

• Use the guided procedure for configuring a ServerNet node. For more information, see Section 3, Installing and Configuring a ServerNet Cluster.
• At an SCF prompt, type:

  -> START LINE $SC004

6.
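You can also verify the line state at an SCF prompt before and after starting it (a sketch using the line name from the example above; your Expand-over-ServerNet line names will differ):

   -> STATUS LINE $SC004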
Troubleshooting and Replacement Procedures Checking Communications With a Remote Node Checking Communications With a Remote Node Use the Node Responsive Test action in the TSM Service Application to test communications with a remote node. This action pings the remote node, verifying whether or not the node is connected and responding. To ping a remote node: 1. In the tree pane, right-click the X or Y fabric-to-node resource for the node that you want to ping, and select Actions.
Methods for Repairing ServerNet Connectivity Problems

The SCF PRIMARY PROCESS $ZZSMN command is noninvasive and is the recommended way to repair ServerNet connectivity. It does not cause the SANMAN process to stop running, and it does not cause any ServerNet connectivity that is already up to go down. It does repair any ServerNet connectivity that is down, except for cases in which connectivity is down because of ServerNet hardware failures.
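For example, the recommended repair can be issued at an SCF prompt as follows (modeled on the PRIMARY PROCESS examples in Sections 8 and 9, and assuming the standard SANMAN process name $ZZSMN with its backup in processor 3):

   -> PRIMARY PROCESS $ZZSMN, 3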
Troubleshooting and Replacement Procedures Automatic Fail-Over for Two-Lane and Four-Lane Links HP does not recommend stopping and starting the ServerNet cluster subsystem to repair ServerNet connectivity. The SCF STOP SUBSYS $ZZSCL command is used primarily to ensure the orderly removal of a node from the cluster. The SCF STOP SUBSYS $ZZSCL command normally is used prior to: • • • Physically disconnecting a node from the cluster.
In clusters using the split-star topology, if a lane fails, traffic is redirected as shown in Table 7-3.

Table 7-3. Automatic Fail-Over of ServerNet Traffic on a Four-Lane Link (X or Y Fabric)

If a lane fails for traffic using port . . .   Traffic is redirected to the lane using port . . .
8                                              9
9                                              10
10                                             11
11                                             8

Split-star topologies using G06.12 software (or G06.09 through G06.
Troubleshooting and Replacement Procedures Checking the Internal ServerNet X and Y Fabrics Using TSM to Check the Internal ServerNet Fabrics Use the Group Connectivity ServerNet Path Test action in the TSM Service Application to check the internal ServerNet X and Y fabrics for the local system. Use this test when you want to check the integrity of group-to-group connections along one ServerNet fabric at a time.
The system displays the NONSTOP KERNEL SERVERNET Status table for the X and Y fabrics. For each fabric, the table lists the path state from each source processor (FROM 00 through 15) to each destination processor (TO 0 through 15) as UP or DN. In this example, rows 00 through 07 show UP paths on both fabrics, and rows 08 through 15 are flagged <- DOWN, indicating that those processors are down.
Troubleshooting and Replacement Procedures Checking the External ServerNet X and Y Fabrics Checking the External ServerNet X and Y Fabrics You must use the TSM Service Application to check the external ServerNet X and Y fabrics. The guided procedure for troubleshooting a ServerNet fabric cannot troubleshoot the link between an MSEB and a ServerNet II Switch. The guided procedure can troubleshoot internal fabrics only. However, you can perform internal and external loopback tests on an NNA PIC in an MSEB.
Troubleshooting and Replacement Procedures Using the Internal Loopback Test Action Use the following procedure to check ServerNet connectivity on the external fabrics: Note. An error will be returned if you try to run this path test when another Node Connectivity ServerNet Path Test is in progress on the same fabric. The Path Test in Progress attribute indicates if a path test is currently being conducted on the fabric. 1. Log on to the TSM Service Application.
Troubleshooting and Replacement Procedures Using SCF to Check Processor-to-Processor Connections Typically, you use the Internal Loopback Test action to isolate the cause of a malfunctioning ServerNet path where a PIC is part of that path. You can perform the action with a ServerNet cable connected to the PIC. The action isolates the MSEB port occupied by the PIC, preventing the port from sending or receiving ServerNet traffic during the action.
Troubleshooting and Replacement Procedures Finding ServerNet Cluster Event Messages in the Event Log When you view events using the TSM EMS Event Viewer Application, the subsystem name (or, in rare cases, the subsystem number) is shown in the SSID column.
MSEB and ServerNet II Switch LEDs

You can use the LEDs on the MSEB and ServerNet II switch to help diagnose problems. Figure 7-8 describes the LEDs on the MSEB. Figure 7-9 on page 7-34 shows the LEDs on the ServerNet II Switch. For information about LEDs on the AC transfer switch or UPS, refer to the ServerNet Cluster 6770 Hardware Installation and Support Guide.

Figure 7-8. MSEB LEDs
Figure 7-9. ServerNet II Switch LEDs
[Callouts for the front panel and PIC LEDs not reproduced]

No.  LED Type  Color  Function
1    Power-On  Green  Lights to indicate the ServerNet II Switch is powered on.
2    Fault     Amber  Lights to indicate the ServerNet II Switch is not in a fully functional state.
Replacement Procedures

This subsection includes the following replacement procedures:

Procedure                                                                  Page
Replacing an MSEB                                                          7-35
Replacing a PIC in a ServerNet II Switch                                   7-35
Replacing a PIC in an MSEB                                                 7-36
Replacing a Fiber-Optic Cable Between an MSEB and a ServerNet II Switch    7-36
Replacing a Fiber-Optic Cable in a Multilane Link                          7-37
Replacing a ServerNet II Switch                                            7-38
Replacing an AC Transfer Switch                                            7-38
Replacing a UPS                                                            7-38
Replacing a PIC in an MSEB

PICs installed in an MSEB can be replaced, but the MSEB must be removed from the enclosure before the PIC can be replaced. To remove the MSEB safely, you must use the guided procedure. From the system console, choose Start>Programs>Compaq TSM>Guided Replacement Tools>Replace SEB or MSEB. Online help is available to assist you in performing the procedure.
Troubleshooting and Replacement Procedures Replacing a Fiber-Optic Cable in a Multilane Link Replacing a Fiber-Optic Cable in a Multilane Link Use this procedure to replace a fiber-optic cable in a multilane link between two cluster switches. Before starting, read this procedure all the way through, especially if your cluster switches are in different sites. Note.
Troubleshooting and Replacement Procedures Replacing a ServerNet II Switch 8. Direct ServerNet connectivity is automatically restored after an interval of approximately 50 seconds times the number of nodes in the cluster (25 seconds for nodes running G06.14 or later). If you do not want to wait, you can manually force recovery of ServerNet connectivity as follows: • • On nodes running G06.12 or later RVUs, issue the SCF START SERVERNET $ZNET command. On nodes running the G06.09 through G06.
Diagnosing Performance Problems

Diagnosis of performance problems in any environment involves multiple steps and requires extensive knowledge of performance fundamentals and methodologies.
Part IV.
8 SCF Commands for SNETMON and the ServerNet Cluster Subsystem This section describes the SCF commands that are supported specifically for SNETMON and the ServerNet cluster (SCL) subsystem. Note. Commands that are generally supported by SCF, such as the ASSUME and ENV commands, are documented in the SCF Reference Manual for G-Series RVUs.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem ServerNet Cluster SCF Objects Table 8-2. SCF Features for SNETMON and the SCL Subsystem by RVU RVU SNETMON/ MSGMON SPR G06.09 T0294 Introduced These New SCF Features • • G06.12 T0294AAE • • G06.14 T0294AAG • T0294AAA • ALTER, INFO, START, STATUS, STOP, and VERSION commands for the SUBSYS object PRIMARY, TRACE, and VERSION commands for the PROCESS Object. SUBNET object for STATUS command.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem SCL SUBSYS Object Summary States SCL SUBSYS Object Summary States The ServerNet cluster (SCL) subsystem state is maintained by the ServerNet cluster monitor process (SNETMON). There is no aggregate ServerNet cluster subsystem state; each ServerNet cluster monitor process maintains the state of objects relevant to the local system and its connection to the ServerNet cluster.
ServerNet Cluster Subsystem Start State (STARTSTATE Attribute)

Figure 8-1. ServerNet Cluster Subsystem States
[State diagram: the START and STOP commands move the subsystem among the STOPPED, STARTING, STARTED, and STOPPING states]
Set the STARTSTATE attribute by using the ALTER SUBSYS command. (See ALTER Command on page 8-5.)

If STARTSTATE is set to . . .   Then . . .
STARTED                         The ServerNet cluster subsystem automatically moves into the STARTED logical state and joins the system to the ServerNet cluster.
• If the ALTER SUBSYS command is entered correctly, an EMS message reports the command, the time it was executed, the terminal from which the command was entered, and the group and user numbers of the user issuing the command.

Example

The following command alters the STARTSTATE attribute for the ServerNet cluster subsystem.
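A sketch of such a command, assuming the desired value is STARTED (the SUBSYS object and STARTSTATE attribute are as described in this section; the exact value syntax may vary):

   -> ALTER SUBSYS $ZZSCL, STARTSTATE STARTED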
SCF Commands for SNETMON and the ServerNet Cluster Subsystem PRIMARY Command The fields returned by the INFO SUBSYS command are as follows: StartState shows the current value of the STARTSTATE attribute for the ServerNet cluster subsystem. Possible values are: STARTED The ServerNet cluster subsystem is configured to move into the STARTED state automatically and join the system to the ServerNet cluster after a system load.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem Consideration cpunum is the processor number of the current backup processor for the ServerNet cluster monitor process. Consideration Wild cards are not supported for the PRIMARY PROCESS command. Example The following command causes the previously configured backup processor for the ServerNet cluster monitor process (processor 3) to become the primary processor.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem Example If the configured STARTSTATE is STOPPED, the ServerNet cluster monitor process must wait for a START SUBSYS command before proceeding to start ServerNet cluster services.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS Command OUT file-spec causes any SCF output generated for this command to be directed to the specified file. DETAIL if specified with STATUS SUBNET, displays detailed status information on all internal and external ServerNet paths between processors for all nodes in the cluster. Currently, STATUS SUBSYS and STATUS SUBSYS, DETAIL show the same information.
Considerations

The following considerations apply to the STATUS SUBNET $ZZSCL command:

• If the DETAIL, LOCAL, and NODE parameters are not specified, a summary table of all ServerNet cluster subsystem connections appears.
• If detailed status information appears (a DETAIL, LOCAL, or NODE option was specified), the nodes known to the ServerNet cluster subsystem appear in numeric order regardless of the order requested.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS SUBNET Command Example The following paragraphs explain the fields returned by the STATUS SUBNET summary table: Node is the ServerNet node number (1 through 24) of the local or remote system, where: LCL indicates that the node is local. This ServerNet node’s SNETMON is providing the information in the display. RMT indicates that the node is remote. This is any ServerNet node other than the local node.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS SUBNET Command Example Remote ServerNet describes the status of a remote node’s ServerNet connection to the local node, where: LocLH is the status (UP or DOWN) of the local Expand-over-ServerNet line and its LDEV number. RemLH is the status (UP or DOWN) of the remote Expand-over-ServerNet line.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS SUBNET, PROBLEMS Command Example STATUS SUBNET, PROBLEMS Command Example The following example shows the STATUS SUBNET $ZZSCL command with the PROBLEMS option: > STATUS SUBNET $ZZSCL, PROBLEMS Node SysName Nodes With Connectivity Problems ---------------------------------------------------------------------------1) \SIERRA | ( 05 ) 2) \IGATE | ( Error 48 returned while accessing node ) 3) \SPEEDY | ( 05 ) 4) \.......
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS SUBNET, DETAIL Command Example (Partial Display) STATUS SUBNET, DETAIL Command Example (Partial Display) The following example shows a partial display of the STATUS SUBNET $ZZSCL, command with the DETAIL option: > STATUS SUBNET $ZZSCL, DETAIL Remote Node -- ServerNet Node Number: 14 System Name: \STAR2 Expand Node Number: 212 Remote Processors Up (via EXPAND): ( 0 1 2 3 ) Local LH Ldev Number: 122 Local LH Name: $SC212 Local LH Status: UP
External Path States For Y Fabric (OUT):

DST   00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15
SRC  -------------------------------------------------
00 |  36 36 36 36  5  5  5  5  5  5  5  5  5  5  5  5
01 |  36 36 36 36  5  5  5  5  5  5  5  5  5  5  5  5
02 |  36 36 36 36  5  5  5  5  5  5  5  5  5  5  5  5
03 |  36 36 36 36  5  5  5  5  5  5  5  5  5  5  5  5
04 |   4  4  4  4  4  4  4  4  4  4  4  4  4  4  4  4
05 |   4  4  4  4  4
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS SUBNET, DETAIL Command Example (Partial Display) SRC is the source processor (00 through 15) of ServerNet packets intended for processors listed in the top row of the table. The numeric values in the command output indicate the state of the paths between the source and destination processors. Table 8-4 describes each of the 38 path state values.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS SUBNET, DETAIL Command Example (Partial Display) Table 8-4. Path State Values Returned by the STATUS SUBNET, DETAIL Command (page 2 of 3) No. Path State Value Meaning 10 Dst strtng SNet allc The destination processor is starting direct ServerNet connectivity along the path and has already allocated the necessary ServerNet resources in the kernel.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS SUBNET, DETAIL Command Example (Partial Display) Table 8-4. Path State Values Returned by the STATUS SUBNET, DETAIL Command (page 3 of 3) No. Path State Value Meaning 24 Sft dwn max BTE timouts The path was downgraded from good to soft down state because the source processor detected more than 20 ServerNet block transfer engine (BTE) data packet timeouts along the path.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS SUBNET, DETAIL Command Example (Partial Display) About IPC Paths and Connections All processors in a ServerNet cluster are connected over a pair of physical ServerNet X and Y fabrics. The fabric at a processor is said to be up if the processor can communicate over that fabric; otherwise it is considered down. A path is a unidirectional ServerNet communication conduit between a pair of processors over one ServerNet fabric.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem STATUS SUBSYS Command Example STATUS SUBSYS Command Example The following example displays the current logical state of the ServerNet cluster subsystem: -> STATUS SUBSYS $ZZSCL Servernet Cluster - Status SUBSYS \SYS.$ZZSCL State.............STARTED where State is one of STARTING, STARTED, STOPPING, or STOPPED. See SCL SUBSYS Object Summary States on page 8-3 for additional information.
Considerations

• If the STOP SUBSYS command is entered correctly, the ServerNet cluster monitor process generates an EMS message that reports the command, the time it was executed, the terminal from which the command was entered, and the group and user numbers of the user issuing the command.
• Terminating access from the local system to the ServerNet cluster proceeds as follows:

  1.
TRACE Command

The TRACE command syntax is:

TRACE [ /OUT file-spec/ ] PROCESS $ZZSCL[#msgmon],
  { TO file-ID [, trace-option ... ] | STOP [, BACKUP ] }

trace-option is

  BACKUP
  { COUNT records | PAGES pages }
  RECSIZE bytes
  SELECT tracelevel
  { WRAP | NOWRAP }

OUT file-spec

causes any SCF output generated for this command to be directed to the specified file.
SCF Commands for SNETMON and the ServerNet Cluster Subsystem Considerations PAGES pages designates how much memory space is allocated in the extended data segment used for tracing. The trace will terminate after the number of pages of trace data has been collected. RECSIZE bytes specifies the maximum size for any trace data record. Larger records are truncated. SELECT tracelevel identifies the kind of trace data to be collected. Currently, only ALL is supported.
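Putting these options together, a trace of the ServerNet cluster monitor process might be started and later stopped as follows (a sketch; the trace file name SCLTRC is illustrative only):

   -> TRACE PROCESS $ZZSCL, TO SCLTRC, PAGES 64, SELECT ALL, WRAP
   -> TRACE PROCESS $ZZSCL, STOP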
SCF Commands for SNETMON and the ServerNet Cluster Subsystem VERSION Command VERSION Command The VERSION command displays version information about the ServerNet cluster monitor process. VERSION is a nonsensitive command. The VERSION command syntax is: VERSION [ /OUT file-spec / ] { PROCESS } $ZZSCL [ , DETAIL ] { SUBNET } { SUBSYS } OUT file-spec causes any SCF output generated for this command to be directed to the specified file. DETAIL designates that complete version information is to be returned.
SCF KERNEL - T9082G02 - (26JUN00)

identifies the version of the SCF Kernel (T9082G02) and the release date (26JUN00).

SCL PM - T0294G08 - (03JUL00)

identifies the version of the SCF product module (T0294G08) and the release date (03JUL00).
9 SCF Commands for the External ServerNet SAN Manager Subsystem This section describes the SCF commands that are supported specifically for the external ServerNet system area network (SAN) manager subsystem (SMN). The SMN subsystem is used to manage the external ServerNet SAN manager process (SANMAN). Note. Commands that are generally supported by SCF, such as the ASSUME and ENV commands, are documented in the SCF Reference Manual for G-Series RVUs.
SCF Commands for the External ServerNet SAN Manager Subsystem SANMAN SCF Objects Table 9-2. SCF Features for SANMAN by RVU RVU SANMAN SPR G06.09 T0502 • G06.12 T0502AAE • T0502AAG • • G06.14 Introduced These New SCF Features • • • G06.
SCF Commands for the External ServerNet SAN Manager Subsystem ALTER Command ALTER Command The ALTER command is a sensitive command. It allows you to: • • • Specify the fabric setting used by a cluster switch. Assign or change a locator string for the cluster switch. Command the LEDs on the ServerNet II Switch to blink or stop blinking. (The ServerNet II Switch is the principal subcomponent of a cluster switch.
SCF Commands for the External ServerNet SAN Manager Subsystem Considerations BLINK ALL Blink all switch port LEDs, including the fault LED. NONE Stop blinking all switch port LEDs, including the fault LED, and restore the normal operating state of the LED. (During normal operation, the port LED lights to indicate link alive.) Considerations • • • • • The ALTER command is a sensitive command and can be used only by a supergroup user (255, n) ID.
SCF Commands for the External ServerNet SAN Manager Subsystem INFO Command The following example blinks the LEDs on the nearest Y-fabric cluster switch: > ALTER SWITCH $ZZSMN, NEAREST Y, BLINK ALL The nearest ServerNet switch in the external ServerNet Y fabric has begun to blink the LEDs of all ports. INFO Command The INFO command obtains information about a cluster switch or about the external ServerNet fabric connection to the nearest cluster switch.
SCF Commands for the External ServerNet SAN Manager Subsystem INFO CONNECTION Command Example INFO CONNECTION Command Example The following example shows the INFO CONNECTION $ZZSMN command: > INFO CONN $ZZSMN INFO CONNECTION X Fabric Y Fabric |--------------------------------------------------------------------------| | Command Status | OK |OK | | Status Detail | No Status Detail |No Status Detail | |--------------------|---------------------------|------------|------------| | Configuration | MSEB Port |
SCF Commands for the External ServerNet SAN Manager Subsystem INFO CONNECTION Command Example TX/RX ENBLD transmit and receive are enabled. TX/RX AUTO no low-level neighbor checks are run. The port can still enable or disable ServerNet traffic. N/A does not apply. Neighbor ID Check indicates the type of neighbor checks to be performed to enable the port. Possible values are: NO QRY PASS perform no query of neighbor and assume it passes. NO QRY FAIL perform no query of neighbor and assume it fails.
SCF Commands for the External ServerNet SAN Manager Subsystem INFO CONNECTION Command Example NNA Version indicates the version of the NNA on the MSEB. A value of N/A (not applicable) appears for the cluster switch port. PIC Functional ID is the type of plug-in card (PIC) used in the MSEB or cluster switch.
SCF Commands for the External ServerNet SAN Manager Subsystem INFO SWITCH Command Example INFO SWITCH Command Example The following example shows the INFO SWITCH $ZZSMN command: > INFO SWITCH $ZZSMN INFO SWITCH X Fabric Y Fabric |----------------------------------------------------------------------------| | Command Status | OK | OK | | Status Detail | No Status Detail | No Status Detail | |--------------------|---------------------------|---------------------------| | Configuration | ServerNet Switch | S
SCF Commands for the External ServerNet SAN Manager Subsystem INFO SWITCH Command Example In this example: Command Status is the general condition of the connection. For a list of possible values, see Command Status Enumeration on page 9-37. Status Detail is the specific condition of the connection. For a list of possible values, see Status Detail Enumeration on page 9-37 Switch Locator is an identifier string of 0 to 32 ASCII characters that can be used to describe or help locate the cluster switch.
SCF Commands for the External ServerNet SAN Manager Subsystem INFO SWITCH Command Example PWA Number is the printed wiring assembly (PWA) number of the ServerNet II Switch subcomponent of the cluster switch. Conf Support Flags indicates the configuration support flags. Number of Ports is the number of ports on the ServerNet switch. This value is always 12 for the ServerNet II Switch. Capability Flag indicates the services required or provided by the node.
SCF Commands for the External ServerNet SAN Manager Subsystem INFO SWITCH Command Example Config Revision is the revision of the configuration data installed on the cluster switch. Config VPROC is the VPROC string for the version of configuration data running on the cluster switch. PIC Func ID (xx) is the type of plug-in card (PIC) installed in the ServerNet II Switch subcomponent of the cluster switch and the port number in which the PIC is installed.
SCF Commands for the External ServerNet SAN Manager Subsystem LOAD Command UPS Info Valid indicates whether the uninterruptible power supply (UPS) information is valid. Possible values are TRUE or FALSE. The value is FALSE if the UPS is disconnected from the ServerNet II Switch subcomponent. UPS Type is the VA rating (volts multiplied by amps) and the firmware version of the UPS subcomponent of the cluster switch. UPS ID is the number used to identify the UPS subcomponent of the cluster switch.
SCF Commands for the External ServerNet SAN Manager Subsystem Considerations FIRMWARE filename indicates the name of the firmware file to be downloaded. The file name should be specified in the standard file system external format. The file must be located on the local system. CONFIG filename indicates the name of the configuration file to be downloaded. The file name should be specified in the standard file system external format. The file must be located on the local system.
SCF Commands for the External ServerNet SAN Manager Subsystem LOAD SWITCH Command Examples LOAD SWITCH Command Examples The following example downloads the firmware file $SYSTEM.SYS65.M6770 to the nearest X-fabric cluster switch: > LOAD SWITCH $ZZSMN, NEAREST X, FIRMWARE $SYSTEM.SYS65.M6770 This command should only be issued by Compaq trained support personnel. Executing this command will load a new firmware image on the switch.
PRIMARY Command

The PRIMARY command is a sensitive command. It causes a processor switch, where the backup processor becomes the primary processor, and the primary processor becomes the backup processor. The PRIMARY command syntax is:

PRIMARY [ /OUT file-spec/ ] PROCESS $ZZSMN [, cpunum ]

OUT file-spec

causes any SCF output generated for this command to be directed to the specified file.
SCF Commands for the External ServerNet SAN Manager Subsystem Considerations NEAREST allows you to designate the ServerNet II Switch to which the firmware or configuration is downloaded by specifying the cluster switch that is directly connected to the server. fabric-ID specifies the cluster switch on the specified (X or Y) external ServerNet fabric. HARD | SOFT allows you to designate the type of reset.
The following example performs a hard reset of the ServerNet II Switch subcomponent of the nearest Y-fabric cluster switch:

> RESET SWITCH $ZZSMN, NEAREST Y, HARD

This command should only be issued by Compaq trained support personnel. Executing this command will force the switch into a hard reset, which is functionally equivalent to a power-on reset.

STATUS Command
Considerations

• The STATUS command displays connection or cluster switch information for both external ServerNet fabrics unless the ONLY option is specified.
• In addition to the values described in the STATUS command displays, you might see values of N/A or UNKNOWN. In general, a value of N/A means that a value is not applicable or not expected for the field.
SCF Commands for the External ServerNet SAN Manager Subsystem STATUS CONNECTION Command Example Fabric Access indicates the status of the external X and Y fabric connections.
SCF Commands for the External ServerNet SAN Manager Subsystem STATUS CONNECTION Command Example Link Lvl Prtcl Trn indicates whether Link Transmit is enabled. Possible values are ENABLED or DISABLED. Pckt Lvl Prtcl Rcv indicates whether Packet Receive is enabled. Possible values are ENABLED or DISABLED. Pckt Lvl Prtcl Trn indicates whether Packet Transmit is enabled. Possible values are ENABLED or DISABLED. Lost Optical Signl indicates whether a lost optical signal error occurred.
SCF Commands for the External ServerNet SAN Manager Subsystem STATUS CONNECTION, NNA Command Example Node Routing ID is the node number routing ID. For the MSEB, the ID is configured on the NNA PIC. For the cluster switch port, the ID is assigned by the external fabric. SvNet Node Number is a number in the range 1 through 24 used to route ServerNet packets across the external ServerNet X or Y fabrics.
SCF Commands for the External ServerNet SAN Manager Subsystem STATUS CONNECTION, NNA Command Example Status Detail is the specific condition of the connection. For a list of possible values, see Status Detail Enumeration on page 9-37 NNA Reg Data Valid indicates whether the Node Numbering Agent (NNA) register contents shown in the following lines are valid. Possible values are TRUE and FALSE. Node Routing ID shows the node number routing IDs (outbound and inbound) for both external fabrics.
SCF Commands for the External ServerNet SAN Manager Subsystem STATUS CONNECTION, NNA Command Example Accumulator shows the contents of the accumulators (outbound and inbound) for both external fabrics. DID Check Error indicates whether a Destination ServerNet ID check error occurred. Possible values are TRUE and FALSE. CRC Error indicates whether a Cyclic Redundancy Check error occurred. Possible values are TRUE and FALSE. Lost Optical Signal indicates whether a lost optical signal error occurred.
STATUS SWITCH Command Example

The following example shows the STATUS SWITCH $ZZSMN command:

> SCF STATUS SWITCH $ZZSMN
STATUS SWITCH                   X Fabric                    Y Fabric
|---------------------------------------------------------------------------|
|Command Status     | OK                        | OK                        |
|Status Detail      | No Status Detail          | No Status Detail          |
|-------------------|---------------------------|---------------------------|
|General Status     | ServerN
|Primary Power Rail | ON                        | ON                        |
|Scndary Power Rail | ON                        | ON                        |
|-------------------|---------------------------|---------------------------|
|Switch Port Status | ST LS LR LT PR PT OS NB TP| ST LS LR LT PR PT OS NB TP|
|-------------------|---------------------------|---------------------------|
|Switch Port (00)   | LD S2 DS DS DS DS PR NC DS| LD S2 DS DS DS DS LS NC DS|
|Switch Port (01)   | LA S2 EN EN EN EN PR OK
Unknown Error indicates whether an error of unknown type (hardware or firmware) occurred. Possible values are Error Detected or No Error Detected.
Switch Response indicates the success of the cluster switch response test. Possible values are PASSED or FAILED.
Switch Ownership indicates whether there is an owner for the cluster switch. Ownership is required for some sensitive commands.
Router Config CRC indicates the status of the router configuration cyclic redundancy check (CRC). Possible values are Status OK and Status Bad.
Firmware Images indicates whether the firmware images stored in FLASH memory are the same. Possible values are Images the Same and Images Different.
Config Images indicates whether the configuration images stored in FLASH memory are the same.
Flash ID String indicates whether the manufacturer code and device code stored in FLASH memory are the expected values. Possible values are Status OK and Status Bad.
Flash Boot Lckot 0 indicates whether an error occurred because the FLASH lower boot block section is locked. Possible values are No Error Detected and Error Detected.
Line Regulation indicates the status of line regulation. Possible values are:
• Normal Straight Through
• Step Down (Buck)
• Step Up (Boost)
Attention Required describes a condition on the UPS that requires attention. Possible values are:
• NONE
• Ground Failure
• Battery Failure
• Overloaded
Immediate Attn Req describes a condition on the UPS that requires urgent attention.
Table 9-4. Switch Port Status Codes and Possible Values (page 2 of 2)

Code  Description
DU    Disabled, Unknown Reason
UN    Uninitialized
UK    Unknown
EN    Enabled
DS    Disabled
UK    Unknown
TP    Target Port Enabled

Blinking LED Ports indicates which ports on the ServerNet II Switch have their LEDs blinking.
STATUS SWITCH, ROUTER Command Example

The following example shows the STATUS SWITCH command with the router option:

> SCF STATUS SWITCH $ZZSMN, ROUTER
STATUS SWITCH                   X Fabric                    Y Fabric
|---------------------------------------------------------------------------|
| Command Status     | OK                        | OK                        |
| Status Detail      | No Status Detail          | No Status Detail          |
|--------------------|---------------------------|------------
TRACE Command

The TRACE command is a sensitive command. The TRACE command:
• Starts a trace operation on a ServerNet SAN manager process
• Alters trace parameters set by a previous TRACE command
• Stops a previously requested trace operation

The TRACE command syntax is:

TRACE [ /OUT file-spec/ ] PROCESS $ZZSMN, { TO file-ID [, trace-option ...
RECSIZE bytes specifies the maximum size for any trace data record. Larger records are truncated.
SELECT tracelevel identifies the kind of trace data to be collected. Currently, only PROCESS is supported.
WRAP | NOWRAP
WRAP specifies that when the trace disk file end-of-file mark is reached, trace data wraps around to the beginning of the file and overwrites any existing data.
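Combining the options above, a trace session might be started and later stopped as follows. The trace file name $SYSTEM.ZTRACE.SMNTRC is purely illustrative, and the STOP form follows the general SCF TRACE convention of stopping a previously requested trace:

```
-> TRACE PROCESS $ZZSMN, TO $SYSTEM.ZTRACE.SMNTRC, RECSIZE 256, SELECT PROCESS, WRAP
-> TRACE PROCESS $ZZSMN, STOP
```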
VERSION Command

The VERSION command displays version information about the ServerNet SAN manager process. VERSION is a nonsensitive command. The VERSION command syntax is:

VERSION [ /OUT file-spec/ ] PROCESS $ZZSMN [ , DETAIL ]

OUT file-spec causes any SCF output generated for this command to be directed to the specified file.
DETAIL designates that complete version information is to be returned.
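For example, a detailed version request can be entered at the SCF prompt as follows; the version banners returned vary by release version update (RVU):

```
-> VERSION PROCESS $ZZSMN, DETAIL
```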
SCF KERNEL - T9082G02 - (03OCT01) (25SEP01) identifies the version of the SCF Kernel and the release date.
SANMAN PM - T0502G08 - (02JUL01) - AAG identifies the version of the SCF product module (T0502G08) and the release date.
SCF Commands for the External ServerNet SAN Manager Subsystem ServerNet Cluster Manual— 520575-002 9- 38 Status Detail Enumeration
10 SCF Error Messages This section describes the types of error messages generated by SCF and provides the cause, effect, and recovery information for the SCF error messages specific to the ServerNet cluster subsystem and the external system area network manager process (SANMAN).
SCF Error Messages Help

Like common errors, subsystem-specific error messages are divided into two classes—critical and noncritical.
• Critical messages can be serious, such as the notification of software errors for which there is no automatic recovery. Critical messages are preceded by an “E.”
• Noncritical messages are generally informational. Noncritical messages are preceded by a “W.”
ServerNet Cluster (SCL) Error Messages

SCL Error 00003

SCL E00003 Internal error. Case value out of range.

Cause. An invalid case value was generated with no associated case label.
Effect. The command is not executed. SCF waits for the next command.
Recovery. Contact your service provider. (See If You Have to Call Your Service Provider on page 10-12.)

SCL Error 00004

SCL E00004 Invalid MsgMon process qualifier.

Cause. The optional MSGMON qualifier was not formatted correctly.
SCL Error 00007

SCL E00007 Failure in service function. error: err-num, error detail: err-detail.

err-num is the error number returned from a system procedure.
err-detail is the error detail subcode associated with the system procedure error.

Cause. An unexpected error was returned from a system procedure that was called by the ServerNet cluster monitor process.
SCL Error 00009

SCL E00009 Processor switch failed.

Cause. A processor switch was not performed. The process pair continues to execute in the current processor(s).
Effect. The command is not executed. SCF waits for the next command.
Recovery. Verify the processors used by the ServerNet cluster monitor process. Retry the command if necessary.

SCL Error 00010

SCL E00010 MsgMon process does not exist.

Cause. The MSGMON process does not exist.
SCL Error 00013

SCL E00013 Trace command error.

Cause. The subsystem failed to execute the TRACE PROCESS command.
Effect. The command is not executed. SCF waits for the next command.
Recovery. Correct the command and reissue it. You can also check the event logs for additional error messages.

SCL Error 00014

SCL E00014 PROBLEMS attribute must be specified without any other attributes.

Cause.
SANMAN (SMN) Error Messages

The ServerNet SAN Manager subsystem SCF error messages are listed in numeric order.

SMN Error 00001

SMN E00001 Internal error: Call to system procedure failed.

Cause. An internal error was caused by an unexpected return code from a system procedure.
Effect. The command is not executed. SCF waits for the next command.
Recovery. Contact your service provider. (See If You Have to Call Your Service Provider on page 10-12.)
Recovery. It is not possible to request information from a system running an incompatible version of SCF. Contact your service provider to resolve the version mismatch. (See If You Have to Call Your Service Provider on page 10-12.)

SMN Error 00005

SMN E00005 Failure in service function. error: err-num, error detail: err-detail.

err-num is the error number returned from a system procedure.
SMN Error 00008

SMN E00008 Error returned from the external ServerNet SAN manager.

Cause. An unexpected error was returned from the external ServerNet SAN manager process ($ZZSMN) during the processing of a command. This error is often preceded by one or two lines of additional text providing more specific information about the error. For example:

***ERROR: Invalid Parameter
***ERROR DETAIL: Bad Fabric ID Setting

Effect. The command is not executed.
Recovery. Reissue the command and specify a NEAREST switch fabric attribute value (X or Y).

SMN Error 00012

SMN E00012 The command can only be executed interactively.

Cause. A sensitive command was executed noninteractively (for example, using an OBEY file).
Effect. The command is not executed. SCF waits for the next command.
Recovery. Reissue the command from an interactive SCF session.
Recovery. Reissue the command and specify just one load file attribute (either FIRMWARE or CONFIG).

SMN Error 00016

SMN E00016 The POSITION attribute cannot be used with the FIRMWARE attribute.

Cause. The POSITION attribute was specified in a command with the FIRMWARE attribute.
Effect. The command is not executed. SCF waits for the next command.
Recovery. Reissue the command without the POSITION attribute.
SMN Error 00020

SMN E00020 The specified POSITION and TOPOLOGY attributes are not compatible.

Cause. The specified POSITION attribute is not compatible with the specified TOPOLOGY attribute.
Effect. The command is not executed. SCF waits for the next command.
Recovery. Reissue the command and specify POSITION and TOPOLOGY attributes that are compatible. For example, a position of 3 is only allowed with a 24-node (24NODES) topology.
If You Have to Call Your Service Provider

4. Enter a DETAIL CMDBUFFER command to capture the contents of the SPI buffer. For example:

-> DETAIL CMDBUFFER, ON

5. Reproduce the sequence of commands that produced the SCF error.
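Steps 4 and 5 together form a short capture session. Assuming the failing command was, for example, STATUS SWITCH (an illustrative choice; substitute whatever command produced the error), the sequence might look like:

```
-> DETAIL CMDBUFFER, ON
-> STATUS SWITCH $ZZSMN
```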
A Part Numbers For an up-to-date list of part numbers, refer to: NTL Support and Service Library > Service Information > Part Numbers > Part Number List for NonStop S-Series Customer Replaceable Units (CRUs) > ServerNet Cluster (Model 6770).
(1 meter, 125/250 volts, 10 amps)

Rack Mount Kit for ServerNet Cluster Switch (model 6770):
° UPS, left fixed rail
° UPS, right fixed rail
° UPS, rail bracket
° Cable management tray
° ServerNet Switch slide rail bracket
° ServerNet Switch slide rails
° Cable management assembly arm
° Cable management assembly bracket
° Torx screws, M5
° Cage nuts, M5
° Nuts, 8-32 x .375
° Flat-head phillips screws, 8-32 x .
B Blank Planning Forms

This appendix contains blank copies of planning forms:
• Cluster Planning Work Sheet
• Planning Form for Moving ServerNet Cables

The Cluster Planning Work Sheet can accommodate information for up to eight nodes. For clusters with more than eight nodes, you need to make additional copies of the Cluster Planning Work Sheet to accommodate the number of nodes in your cluster.
Cluster Planning Work Sheet

Cluster Name: ____________   Date: ____________   Page: ____

Entries for each node:
System Name: \____________
Serial Number: ____________
Expand Node #: ____________
Location: ____________
# of Processors: ____________
Model: NonStop S-____
X/Y Switch #: ____________
X/Y Switch Port #: ____________
ServerNet Node #: ____________
Blank Planning Forms Planning Form for Moving ServerNet Cables a. Identify the node whose ServerNet cables will be moved: System Name: \_____________________ Expand Node Number: ________________ b.
Blank Planning Forms c. List the lines to abort on the node whose ServerNet cables will be moved and on all other nodes: On the node whose cables will be moved... On all other nodes...
C ESD Information

Observe these ESD guidelines whenever servicing electronic components:
• Obtain an electrostatic discharge (ESD) protection kit and follow the directions that come with the kit. You can purchase an ESD kit from HP or from a local electronics store.
• Ensure that your ESD wriststrap has a built-in series resistor and that the kit includes an antistatic table mat.
Figure C-1. Using ESD Protection When Servicing CRUs

Figure callouts: System Enclosure (Appearance Side); ESD wriststrap with grounding clip; ESD wriststrap clipped to door latch stud; ESD floor mats; ESD antistatic table mat (mat should be connected to a soft ground, 1 megohm min. to 10 megohm max.); clip 15-foot straight ground cord to screw on grounded outlet cover.
D Service Categories for Hardware Components

NonStop S-series hardware components fall into the service categories shown in Table D-1.

Table D-1. Service Categories for Hardware Components (page 1 of 2)

Category: Class 1 CRU
CRU or FRU*: Disk drives; Power cords; Fans; ECL ServerNet cables; System consoles; Tape drives (some)
Definition: A CRU that probably will not cause a partial or total system outage if the documented replacement procedure is not followed correctly.

Category: Class 2 CRU
Table D-1. Service Categories for Hardware Components (page 2 of 2)
E TACL Macro for Configuring MSGMON, SANMAN, and SNETMON The example macro in this appendix automates the process of adding MSGMON, SANMAN, and SNETMON to the system-configuration database. (The manual steps for adding these processes are documented in Section 3, Installing and Configuring a ServerNet Cluster).
Example Macro

?tacl macro
== This is a sample TACL macro which configures the $ZPM entries for
== Msgmon, Sanman, and Snetmon. HP recommends different
== configurations depending upon whether the system has 2 processors,
== 4 processors, or more than 4 processors. This macro starts by
== determining the number of processors currently loaded. If it is
== 5 or more, the macro configures the $ZPM entries as such.
#output Based on the number of processors currently running, it appears
#output that this system has a total of [numProcessors] processors.
+ abort process $zzkrn.#msgmon
#delay 200
+ delete process $zzkrn.#msgmon]
+ exit
#output
#output Verifying that the entries were successfully deleted...
#output
==
== Repeat the above process to verify that we really did delete all
== of the entries.
scf /name, outv scf^output/ ; info process $zzkrn.*
[#if [#charfindv scf^output 1 "MSGMON"] |THEN|
#output ERROR! The Msgmon entry was not successfully deleted.
+ add process $zzkrn.#msgmon, autorestart 10, cpu all, &
  hometerm $zhome, outfile $zhome, name $zim, &
  priority 199, program $system.system.msgmon, saveabend on, &
  startmode system, stopmode sysmsg
+ add process $zzkrn.#zzsmn, autorestart 10, priority 199, &
  program $system.system.
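Once the macro text is saved on the system, it can be invoked from a TACL prompt. Assuming the file is saved as ZPMCONF in the current subvolume (the file name and location are illustrative; use whatever name your site chose when saving the macro), the invocation might look like:

```
1> RUN ZPMCONF
```

Because the macro issues sensitive SCF ADD PROCESS commands against $ZZKRN, run it from a suitably privileged user ID (typically the super ID).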
TACL Macro for Configuring MSGMON, SANMAN, and SNETMON ServerNet Cluster Manual— 520575-003 E- 6 Example Macro
F Common System Operations This appendix contains procedures for common operations used to manage the ServerNet cluster. Procedure Page Logging On to the TSM Low-Level Link Application F-1 Logging On to the TSM Service Application F-2 Logging On to Multiple TSM Client Applications F-3 Starting a TACL Session Using the Outside View Application F-5 Using the TSM EMS Event Viewer F-6 Note. You can use OSM instead of TSM for any of the procedures described in this manual.
Common System Operations Logging On to the TSM Service Application b. In the Password box, type your password. c. Select the system you want to connect to. d. Click Log On or double-click the system name and number. 3. Click System Discovery to discover the system. Logging On to the TSM Service Application 1. From the Windows Start button, choose Programs>Compaq TSM>TSM Service Application. The TSM Service Application main window opens and the Log On to TSM Service Connection dialog box is displayed.
Common System Operations Logging On to Multiple TSM Client Applications Logging On to Multiple TSM Client Applications TSM client applications, such as the TSM Low-Level Link Application and the TSM Service Application, allow you to log on to only one system at a time. However, you can start multiple instances of each client application. Starting multiple instances of an application allows you to log on to multiple systems at the same time from one system console.
Figure F-3. Multiple TSM Client Applications
Common System Operations Starting a TACL Session Using the Outside View Application Starting a TACL Session Using the Outside View Application Sometimes you need to start a TACL session on the host system to perform certain system actions, such as reloading a processor. Depending on whether you are logged on to the TSM Low-Level Link Application or the TSM Service Application, you might be prompted to type an IP address.
Using the TSM EMS Event Viewer

1. To start the Event Viewer, do one of the following:
• From the TSM Service Application drop-down menu, choose Display and click Events to launch the TSM EMS Event Viewer Application.
• Click Start and select Programs>Compaq TSM>TSM Event Viewer.
2. From the File menu, choose Log On.
3. In the Choose System list, click the line containing the name of the system you want to log on to.
4.
G Fiber-Optic Cable Information

This appendix provides additional information about the fiber-optic cables that connect a cluster switch to either a node or another cluster switch. These fiber-optic cables conform to the IEEE 802.3z (Gigabit Ethernet) specification. For this release of the ServerNet Cluster product, the following HP cables are supported:

Cable Length (Feet)   Cable Length (Meters)
32.8                  10
131.2                 40
262.4                 80
262.4                 80 (plenum-rated)

HP does not supply fiber-optic cables longer than 80 meters.
Fiber-Optic Cabling Model

Figure G-2 and Figure G-3 show drawings of the fiber-optic cable. The zipcord cable depicted in Figure G-2 is a cross-over cable. The drawings are for reference only.

Figure G-2. Zipcord Cable Drawing (SC connectors, single-mode duplex cable, label)
Figure G-3. Ruggedized Cable Drawing (SC connectors, 2-fiber breakout)

Note.
Optical Characteristics

As specified by IEEE 802.3z, the fiber-optic cable requirements are satisfied by the fibers specified in IEC 793-2:1992 for the type B1 (10/125 µm single mode) with the exceptions noted in Table G-1.

Table G-1. Optical Fiber and Cable Characteristics

Description                              9 µm SMF
Nominal fiber specification wavelength   1310 nm
Fiber cable attenuation (max)            0.5 dB/Km
Zero dispersion wavelength (λ0)          1300 nm <= λ0 <= 1324 nm
0.
ServerNet MDI Optical Power Requirements

The ServerNet MDI optical power requirements are:

Transmitter output optical power   Minimum: -9.5 dBm   Maximum: -3 dBm
Receiver input optical power       Minimum: -20 dBm    Maximum: -3 dBm
Optical power budget               10.5 dBm

Connectors

The cluster switch and Modular ServerNet Expansion Board (MSEB) PMDs are coupled to the fiber optic cabling through a connector plug into the MDI optical receptacle.
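The stated budget follows directly from the worst-case pairing of these limits, that is, the minimum transmitter output against the minimum (most sensitive) receiver input:

```
optical power budget = min transmitter output - min receiver input
                     = (-9.5 dBm) - (-20 dBm)
                     = 10.5 dB
```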
Fiber-Optic Cable Information ServerNet Cluster Connections ServerNet Cluster Connections The following sections list other requirements for the two types of ServerNet cluster connections that use single-mode fiber-optic cables. Node Connections The node connections use ports 0 through 7 on the cluster switch. The range of the node connections is up to 80 meters.
H Using OSM to Manage the Star Topologies The contents of this section were formerly part of the ServerNet Cluster 6770 Supplement. That supplement has been incorporated into this manual for easier access and linking to the information. OSM supports all network topologies of a ServerNet cluster: the star topologies (star, split-star, and tri-star) and the newer layered topology.
Using OSM to Manage the Star Topologies Guided Procedures Have Changed Guided Procedures Have Changed Some of the TSM guided procedures used with a ServerNet cluster have been replaced by new actions in the OSM Service Connection. Other guided procedures are now launched directly by an action in the OSM Service Connection instead of from the guided procedures interface.
For More Information About OSM

• Follow the procedure for updating topologies presented in the OSM Service Connection online help.
• References to TSM alarms generally apply to alarms generated in the OSM Service Connection. The alarm behavior is similar. To get alarm details, follow the instructions in the OSM Service Connection online help.
I SCF Changes at G06.21 This section describes SCF changes made at G06.21 to the SNETMON and SANMAN product modules that might affect management of a cluster with one of the star topologies. The contents of this section were formerly part of the ServerNet Cluster 6770 Supplement. That supplement has been incorporated into this manual for easier access and linking to the information.
Using SCF Commands for SANMAN

This section describes the changes made to SCF commands for the SANMAN product module. The changes are backward compatible and still support the star topologies and the 6770 switch. The 6780 switch is also supported, but the syntax for specifying each type of switch is different. Also, the output of the commands differs depending on the switch type. For more details, consult the SCF help text for this product module.
Safety and Compliance Regulatory Compliance Statements The following warning and regulatory compliance statements apply to the products documented by this manual. FCC Compliance This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment.
Taiwan (BSMI) Compliance

Japan (VCCI) Compliance
DECLARATION OF CONFORMITY

Supplier Name: COMPAQ COMPUTER CORPORATION
Supplier Address: Compaq Computer Corporation, Non-Stop Division, 10300 North Tantau Ave, Cupertino, CA 95014 USA
Represented in the EU By: Compaq Computer EMEA BV, P.O.
Safety and Compliance Consumer Safety Statements Consumer Safety Statements Customer Installation and Servicing of Equipment The following statements pertain to safety issues regarding customer installation and servicing of equipment described in this manual. Do not remove the covers of an AC transfer switch, ServerNet II Switch, or uninterruptible power supply (UPS).
Index A C AC transfer switch described 1-29 replacement 7-38 Add Switch guided procedure 3-25, 3-27, 3-30, 3-32, 4-26, 4-30, 4-60, 4-73, 4-87 Adding a node 3-24, 6-1 Alarm Detail dialog box 7-13 Alarms 7-12, 7-15 ALGORITHM modifier 3-11 Attributes external fabric resource 5-6 local node resource 5-5 MSEB 5-3 online help for 5-10 PIC 5-3 remote node resource 5-6 ServerNet cluster resource 5-5 service state 5-10 switch resource 5-7 Automatic configuration of line-handler processes 3-18 Automatic fail-over o
Index D Cluster switch block diagram 1-29 components 1-28 connections between 1-30, 2-10, 2-19, G-5 defined 1-26 floor space for servicing 2-21 globally unique ID 9-10 installation 3-12 location 2-19 number for clustering 1-26 packaging 1-26 placement of 2-20 polling intervals 9-12 power cord length 2-20 power requirements 2-22 powering on 3-16 remote 5-1 updating firmware and configuration 3-15 CLUSTER.
Index F F F1 help 5-10 Fabrics checking external 7-29 checking internal 7-26 stopping data traffic on 5-33 FABRICTS.
Index I I M Installation planning 2-1 Installation tasks 3-1/3-24, 3-25 Installing cluster switch 3-12 fiber-optic cables 3-12 MSEBs 3-5 split-star topology 3-25 star topology 3-1/3-24 Internal fabrics defined 1-16 testing 7-27 Internal Loopback Test 7-6, 7-30 Inventory of required hardware 3-3 IPC subsystem 7-31 Managing multiple nodes F-3 Merging clusters 4-54/4-92 Message system traffic 1-43 Monitoring tasks 5-1/5-26 MSEB attributes 5-3 connectors 2-17 defined 1-19 diagram 2-15 guided procedure for
Index O reducing 6-11 removing from a cluster 6-2 ServerNet 1-13 Node Connectivity ServerNet Path Test 7-4, 7-7, 7-29 Node number 1-13 Node Responsive Test 7-23 Node routing ID 1-13 Node-number assignments 1-13, 1-14 Node-numbering agent 1-22, 2-16 NonStop Himalaya Cluster Switch see cluster switch NonStop S700 considerations 2-13 O One kilometer cables 2-11 Online expansion 6-11 Online help 5-10, 7-11, 10-2 Operating system requirements 3-4 OSM software package 1-44 OSM, using to manage cluster H-1 Outs
Index S S SANMAN abending 7-19, 7-20 aborting 5-29 and the TSM cluster tab 5-29 cluster switch polling intervals 9-12 compatibility with NonStop Kernel 4-94 considerations for upgrading 4-93 creating 1-39, 3-7 functions 1-38 recommended configuration 3-10 restarting 5-30 SCF commands 1-45, 9-1 starting 5-29 switching primary and backup 5-34 troubleshooting 7-20 upgrading before loading new configuration 4-102 SAVE CONFIGURATION command 3-5 SC cable connectors 2-11 SCF ABORT LINE command 6-6 ABORT PROCESS
Index S STOP SUBSYS command 5-31, 5-33, 6-3, 7-24, 8-21 TRACE PROCESS command 8-23, 9-34 VERSION command 5-16, 5-17, 8-25, 9-36 SCF, changes to SNETMON and SANMAN at G06.21 I-1 SCL subsystem 7-31 SEB compared to MSEB 1-19 connector 2-17 diagram 2-15 guided procedure for replacement 3-5 replacement 2-18, 7-35 SEB.
Index T Source ServerNet ID 1-23 SP firmware checking version 2-24 for Tetra 8 topology 2-26, 7-19, 7-20 minimum required 2-23, 3-15, 4-8 SP functions 1-18 SPEEDK modifier 1-42 SPI 1-33 Splitting a cluster 6-11 Split-star topology configuration tags 4-5 described 1-5 fallback for merging clusters to create 4-66/4-67 installing 3-25 merging clusters to create 4-54/4-63 upgrade paths 4-14/4-15 SPRs checking current levels 2-24, 4-11 G06.12 and G06.
Index U Tree pane 5-4 Tri-star topology configuration tags 4-5 described 1-7 fallback for merging clusters to create 4-89/4-92 installing 3-30 merging clusters to create 4-68/4-86 required releases 4-3 upgrade paths 4-16 Troubleshooting Cluster tab 7-9 Expand-over-ServerNet lines 7-21 Expand-over-ServerNet line-handler processes 7-21 external fabric 7-4 fiber-optic ServerNet cable 7-7 guided procedures interface 7-3 internal fabric 7-3 MSEB 7-6 MSGMON 7-19 PIC installed in MSEB 7-6 procedures 7-1/7-35 rem
Index V V Version of ServerNet cluster subsystem 5-17 Version procedure (VPROC) information 2-25, 4-11 W Website, ServerNet cluster 3-9 X X fabric 1-2, 1-16 Y Y fabric 1-2, 1-16 Z ZPMCONF macro 1-35, 1-37, 1-39, 3-8, 3-9, E-1 ZZAL attachment files 7-15 Special Characters $NCP 3-11 $NCP ALGORITHM modifier 3-11 $ZEXP 3-11 $ZLOG 5-18 $ZPM 8-4 $ZZKRN.#MSGMON 7-19 $ZZKRN.#ZZSCL 7-17 $ZZKRN.