AlphaServer GS80/160/320 User’s Guide

Order Number: EK-GS320-UG.B01

This guide is intended for those who manage, operate, or service the AlphaServer GS160/320 system and the AlphaServer GS80 rack system. It covers configuration guidelines, operation, system management, and basic troubleshooting.
First Printing, May 2000 © 2000 Compaq Computer Corporation. COMPAQ and the Compaq logo registered in U.S. Patent and Trademark Office. AlphaServer, OpenVMS, StorageWorks, and Tru64 are trademarks of Compaq Information Technologies Group, L.P. Portions of the software are © copyright Cimetrics Technology. Linux is a registered trademark of Linus Torvalds in several countries. UNIX is a registered trademark of The Open Group in the U.S. and other countries.
Contents

Preface
Chapter 1  Introduction
Chapter 2  GS160/320 System Overview
Chapter 3  GS160/320 System Configuration Rules
Chapter 4  GS80 Rack System Overview
Chapter 5  GS80 Rack System Configuration Rules
Chapter 6  Booting and Installing an Operating System
Chapter 7  SRM Console
Chapter 8
Appendix A  Jumpering Information
Glossary
Index
Preface

Intended Audience

This manual is for managers and operators of Compaq AlphaServer GS80/160/320 family systems.

Document Structure

This manual uses a structured documentation design. Topics are organized into small sections, usually consisting of two facing pages. Most topics begin with an abstract that provides an overview of the section, followed by an illustration or example. The facing page contains descriptions, procedures, and syntax definitions.
• Appendix A, Jumpering Information, calls out jumpers and their functions.
Chapter 1 Introduction The Compaq AlphaServer GS160/320 and GS80 systems are high-performance server platforms designed for enterprise-level applications. They offer a high degree of scalability and expandability. The GS160/320 system uses up to four Alpha microprocessors in each quad building block (QBB). Two QBBs are paired back to back, rotated 180 degrees with respect to each other, and enclosed in a system box. A system cabinet can hold up to two system boxes.
1.1 AlphaServer GS160/320 and GS80 Systems The AlphaServer GS160/320 system and GS80 rack system are separate, but related, in that they use the same switch technology. The CPU modules, memory modules, and power modules are also the same. In the GS160/320 system, the modules are in a system box in a cabinet. In the GS80 rack system, the modules are in a drawer.
AlphaServer GS160/320 System The AlphaServer GS160 system cabinet contains up to two system boxes supporting a maximum of 16 CPU modules. In an AlphaServer GS320 system, a second system cabinet is used to expand the system (up to four system boxes containing a maximum of 32 CPU modules). A power cabinet contains the power components, I/O boxes, and storage. Additional I/O and storage can be housed in expander cabinets.
1.2 Firmware and Utilities Overview Firmware residing in ROM on CPU and other modules in the system provides commands for booting the operating system, testing devices and I/O adapters, and other tasks useful in operating and maintaining a running system. You type commands at the console device. SRM Console Systems running the Tru64 UNIX or OpenVMS operating systems are configured from the SRM console, a command-line interface (CLI).
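As a brief sketch of a console session (the commands shown are among those summarized in Chapter 7; the device name dka0 is an example only):

```
P00>>> show device           
P00>>> show configuration    
P00>>> boot dka0             
```

The show device command lists bootable devices, show configuration displays the installed hardware, and boot loads the operating system from the named device.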
1.3 System Architecture Each QBB in a GS160/320 system and each QBB (system drawer) in a GS80 system has a backplane and a switch supporting the CPU modules, memory modules, and I/O riser modules. Figure 1–2 shows two QBBs in a single-box system.
Chapter 2 GS160/320 System Overview Each system cabinet contains one or two system boxes. The system box houses two quad building blocks, or QBBs. CPU modules, memory modules, power modules, and I/O riser modules plug into the QBB backplane. The power cabinet contains power components, PCI boxes, and storage shelves.
2.1 System Characteristics The illustration shows the BA51A-AA system box. Table 2–1 lists the system box characteristics. Table 2–2 lists power cabinet and environmental characteristics. Figure 2–1 System Box PK0611 Table 2–1 System Box Characteristics Characteristic Specification Size 535 mm H x 550 mm W x 475 mm D (21.06 in. x 21.65 in. x 18.7 in.) Weight 54.
Table 2–2 Power Cabinet and System Environmental Characteristics Power Cabinet Specifications Electrical Voltage 120/208 VAC (U.
2.2 System Box Architecture The system box houses two quad building blocks (QBBs). Each QBB has a backplane with a switch interconnect that supports up to four CPU modules, four memory modules, two power modules, two I/O riser modules, and a global port. Figure 2–2 shows two system boxes connected by the hierarchical switch. Figure 2–3 shows one system box and the distribution board.
The switch on the backplane connects the CPU modules, memory modules, I/O riser modules, and global port. In an 8-P system, the global ports connect the QBBs to the distribution board. In a 16-P or a 32-P system, the global ports connect the QBBs to the hierarchical switch.
2.3 Quad Building Block (QBB) Components Figure 2–4 shows two QBBs back to back in the system box.
The QBB backplanes are attached to a stiffener and mounted in a system box enclosure. Each backplane has a differently positioned cutout to accommodate the global port modules. A global port module is mounted on the front of one QBB and the other is mounted on the back of the other QBB, putting both global port modules near the distribution board (or the hierarchical switch) when the system box is installed in the cabinet.
2.3.1 Backplane Each QBB backplane is located at the center of the system box. Figure 2–5 shows an unpopulated backplane (no modules installed) as you would see it from the front of the system box.
The CPU, memory, power, and I/O riser modules plug into the backplane. Because of the orientation of the QBB backplanes, the modules are situated differently in the front and rear of the system box. See Section 3.5 for more information. The switch interconnect on the backplane allows any processor to access any memory on the QBB. The backplane also provides power to the modules.
2.3.2 CPU Module A CPU module contains an Alpha microprocessor chip with a Bcache, cache control and TAG comparison logic, clock logic, and a DC-DC power converter. Also included on the module is logic for implementing self-test diagnostics. Each module has a Run LED and a Hot Swap LED.
2.3.3 Memory Module A memory module has eight DIMM slots. See Section 3.7 for memory configuration guidelines.
2.3.4 Directory Module In a GS160/320 system, one directory module is required for each QBB in a system box. In a two-drawer GS80 system, a directory module is required in each system drawer. No directory module is needed in a one-drawer system. The directory module functions as a memory coherency manager.
2.3.5 Power Modules Two power modules are installed in the QBB backplane. The main power module and the auxiliary power module convert 48 VDC to the various voltages required to power the QBB.
2.3.6 Power System Manager Module Each QBB has one power system manager (PSM) module. This module monitors CPUs, voltages, temperatures, and blower speed in the cabinet and reports this information to the system control manager (SCM). Figure 2–10 Power System Manager Module PK0607 The PSM module is connected to other PSM modules and the SCM microprocessor (located on the standard I/O module) through the console serial bus (CSB). The SCM is the master; the PSM can only operate as a slave.
2.3.7 Clock Splitter Module The clock splitter module converts one global clock signal into identical copies that are distributed to the master phase-lock loops associated with the ASICs and the system processors within a QBB. It also generates independent clock signals for the I/O domain.
2.3.8 I/O Riser Module The I/O riser module is used to connect the QBB backplane to a PCI box. A “local” I/O riser module is located on the QBB backplane; a “remote” I/O riser module is in the PCI box.
2.3.9 Global Port Module The global port provides the interconnect to the other QBB(s) through the distribution board or the hierarchical switch.
2.3.10 Distribution Board In single-box systems, a distribution board connects the two QBBs through the global ports.
Figure 2–15 is a block diagram showing the distribution board as the interconnect between two QBBs.
2.3.11 Hierarchical Switch In two-box systems, a hierarchical switch links the QBBs through the global ports. The hierarchical switch connects QBBs in three- and four-box systems also.
The hierarchical switch links the QBBs in systems having more than one system box. Figure 2–16 shows cable connectors for each system box (a pair of connectors for two signal cables routed to each QBB global port in the system). The hierarchical switch power manager (HPM) module controls power and monitors the temperature inside the hierarchical switch housing. The HPM module, along with the PSM modules and PBM modules, reports status information to the SCM.
2.4 Power System Each system box has a power subrack with up to three 48 VDC power supplies. Figure 2–17 shows the power system for a 32-P system. See Section 3.3 for power configuration rules.
Power cables and components are color-coded to ensure proper identification and easy handling. NOTE: Color-coded components and power cables must match to ensure proper power distribution, particularly in hard-partitioned systems. Figure 2–17 shows each system box and its color-related power subrack and AC input box. The AC input box also has color-coded circuit breakers. Each AC input box provides power to the subracks, PCI boxes, and storage shelves.
2.4.1 AC Input Box A system has two AC input boxes. Figure 2–18 shows the circuit breakers (CB1–CB11), LEDs (L1–L3), and connectors (J1–J22) on the AC input box.
The three LEDs on the AC input box should be lit at all times, indicating that all three power phases are present in the 3-phase AC input. Table 2–3 lists the AC input box circuit breakers and the lines they protect. Table 2–3 AC Input Box Circuit Breakers Circuit Breaker Line(s) Protected CB1 (Main) All lines protected.
2.5 PCI I/O The power cabinet contains at least one PCI master box, and may contain PCI expansion boxes.
A PCI master box has a standard I/O module, a DVD/CD-ROM drive, and a floppy drive, as shown in Figure 2–19. PCI expansion boxes provide additional slots for options. Each PCI power supply has three LEDs: Vaux OK, Power OK, and Swap OK. BA54A-AA PCI Box The BA54A-AA PCI box is a PCI master box.
2.6 Control Panel The control panel is located at the top of the power cabinet. It has a three-position Off/On/Secure switch, three pushbuttons, three status LEDs, and a diagnostic display.
The callouts in Figure 2–20 point to these components on the control panel: ➊ Secure LED – When lit, indicates that the keyswitch is in the Secure position and system is powered on. All pushbuttons and SCM functions are disabled, including remote access to the system. ➋ Power OK LED – When lit, indicates at least one QBB is powered on and remote console operations are enabled. The keyswitch is in the On position.
2.6.1 Control Panel LEDs Figure 2–21 shows the various control panel LED status indications. The three LEDs (Secure, Power OK, and Halt) light in combinations that indicate the following states:
• System powered on; remote console disabled; pause mode.
• System powered on; remote console disabled.
• System powered on; remote console enabled; remote console halt or Halt button depressed.
• System powered on; remote console enabled.
Chapter 3 GS160/320 System Configuration Rules This chapter provides configuration rules for the following: • GS160 System Cabinet • GS320 System Cabinets • Power Cabinet • System Box • QBB Color Code • Memory Configurations • Memory Interleaving Guidelines • PCI Boxes • PCI Box Slot Configuration • Expander Cabinet GS160/320 System Configuration Rules 3-1
3.1 GS160 System Cabinet Figure 3–1 shows the front view of the system cabinet and the power cabinet. One system cabinet houses either one system box or two system boxes. In a one-box system, a distribution board connects the two QBBs. In a two-box system, a hierarchical switch connects the QBBs.
About the System Cabinet The cabinet contains the following components: • Vertical mounting rails • Wrist strap for static discharge protection GS160 Configuration Rules • System box 1 (see Figure 3–1) is mounted in the lower half of the cabinet, above the blower. • System box 2 is mounted in the upper half of the cabinet, over system box 1.
3.2 GS320 System Cabinets Figure 3–2 shows the front view of the system cabinets. Two system cabinets house either three system boxes or four system boxes. A hierarchical switch is used to connect the QBBs.
GS320 System Configuration Rules • In system cabinet 1, system box 1 (see Figure 3–2) is mounted in the lower half of the cabinet, above the blower. System box 2 is mounted in the upper half of the cabinet, above system box 1. • In system cabinet 2, system box 3 is mounted in the lower half of the cabinet; system box 4, the upper half of the cabinet.
3.3 Power Cabinet One power cabinet is required for all systems. The power cabinet houses the control panel, AC input boxes, power supplies, PCI I/O boxes, and storage.
Power System Requirements • Each system box requires a power subrack. • Each power subrack has three power supplies. The third power supply is always redundant. See Section 3.3.1 for power supply slot assignments. • Two AC input boxes are required. Cables, AC input boxes (including AC circuit breakers), power subracks, and system boxes are color-coded at cable connections to ensure proper cabling. Figure 3–3 shows the color coding scheme for a 32-P system.
3.3.1 Power Supply Slot Assignments Figure 3–4 shows the power supply slot assignments in each power subrack. The power cabinet holds four color-coded subracks (blue, green, orange, and brown), each with slots 1 through 3; R indicates the redundant power supply slot.
Power Supply Configuration Rules • Power subracks are always mounted in the same power cabinet location, regardless of the number of system boxes. • Power supply slot assignments remain the same in all systems, regardless of the number of system boxes. • A redundant power supply slot is always the last slot to be used in a subrack.
3.4 System Box The system box contains two QBBs. Figure 3–5 shows a fully populated QBB as seen from the front of the cabinet. Figure 3–6 shows the second QBB at the rear of the cabinet.
System Box Configuration Rules • A system box has two QBBs. • A QBB supports up to four CPU modules. • A QBB supports up to four memory modules. • A QBB has up to two I/O riser modules; each I/O riser module connects to one PCI box. • A system box supports up to four PCI boxes.
3.5 QBB Color Code Figure 3–7 and Figure 3–8 show the center bar color code for module placement in the QBB. Note that CPU and memory slots are color-coded to ensure the correct placement of each module.
Figure 3–8 QBB Center Bar Color Code (Cabinet Rear): Main Power (Yellow), Auxiliary Power (Red), CPU 0–3 (Blue), PSM (Orange), Memory 0–3 (Gray), Clock Splitter (Green), Global Port 0 and 1, Directory (White).
3.6 Memory Configurations A memory module has eight DIMM slots. Two arrays (Array 0 and Array 1), each consisting of four DIMMs, can be installed on each module. A directory DIMM is required for each array in systems having more than four processors. Directory DIMMs are installed on the directory module.
Memory Configuration Guidelines • On a memory module, DIMMs are divided into two groups of four called arrays. • A memory module must be populated on an array-by-array basis; that is, groups of four DIMMs must be installed. • DIMMs in an array must be the same size and type. • DIMM sizes include 256 Mbyte, 512 Mbyte, and 1 Gbyte. There are two types of DIMMs: single density (SD) and double density (DD). • Density does not affect interleaving.
3.7 Memory Interleaving Guidelines

Table 3–1 Interleaving Memory Modules

Memory    Interleaving Guidelines
4-way     The default interleave. One memory module with one array populated (or most mixes not discussed below).
8-way     One memory module with two arrays populated. Preferred method: two memory modules with one array populated on each module.
16-way    Two memory modules, each with two arrays populated. Preferred method: four memory modules with one array populated on each module.
Memory Interleaving Guidelines • The larger the interleaving factor, the better the system performance. • Avoid mixing memory sizes; this limits interleaving capability and potential bandwidth.
3.8 PCI Boxes A QBB supports up to two PCI boxes. A cable connects the QBB “local” I/O riser to the “remote” I/O riser in the PCI box. There are two I/O ports on a local I/O riser. Each I/O port is used to connect to one remote I/O riser. Figure 3–10 shows QBB0 connected to PCI box 0 and PCI box 1.
The I/O subsystem consists of the local I/O interface (QBB) and the remote I/O interface (PCI box) connected by I/O cables. A system can have up to 16 PCI boxes. To identify PCI boxes in a system, a node ID is set using the node ID switch located on the rear panel of each PCI box (see Figure 3–11).
3.9 PCI Box Slot Configuration Each QBB can have two I/O risers supporting up to two PCI boxes. A cable connects a local I/O riser (in the QBB) to a remote I/O riser (in the PCI box). Each PCI box can have up to two remote I/O risers in place. Cable connectors for the two remote I/O risers are shown as Riser 0 and Riser 1 in Figure 3–11. PCI slots and logical hoses are listed in Table 3– 2.
PCI Slot Configuration Guidelines • I/O riser 0 must be installed. • The standard I/O module is always installed in riser 0-slot 1. • Install high-powered modules in slots with one inch module pitch (all slots except riser 0-slot 5, riser 0-slot 6, riser 1-slot 5, and riser 1-slot 6). • Install high-performance adapters across multiple bus/hose segments to get maximum performance. • VGA graphics options must be installed in riser 0-slot 2 or riser 0-slot 3.
3.10 Expander Cabinet Additional PCI boxes and storage devices are housed in an expander cabinet. The same cabinet is used to expand GS160/320 systems and GS80 systems. Figure 3–12 shows five different PCI and BA356 storage configurations.
Chapter 4 GS80 Rack System Overview In the rack system, the BA52A system drawer has a QBB containing a backplane, CPU modules, memory modules, power modules, and I/O riser modules.
4.1 Rack System Characteristics Table 4–1 lists system drawer characteristics. Table 4–2 lists power and environmental specifications for the rack system. Figure 4–1 System Drawer PK-0633-99 Table 4–1 System Drawer Characteristics Characteristic Specification Size 40 cm H x 45 cm W x 65 cm D (15 in. x 18 in. x 25 in.
Table 4–2 Rack System Characteristics Electrical Voltage 120 VAC (U.S.) 220–240 VAC (Europe) 200–240 VAC (Japan) Phase Single Frequency 50–60 Hz Maximum input current/circuit 16 A (U.S.) 12 A (Europe) 13 A (Japan) Maximum power consumption 2.4 – 2.8 KVA (U.S.) 5.2 – 5.7 KVA (Europe) 4.8 – 5.
4.2 System Drawer Architecture The system drawer houses a QBB consisting of a backplane that supports four CPU modules, four memory modules, two power modules and two I/O riser modules. These modules are identical to those used in the box systems. The global port is part of the backplane. In a twodrawer system, the drawers are linked by a distribution board.
The switch that interconnects the CPU modules, memory modules, and I/O riser modules is built into the system drawer backplane. In a two-drawer system, the system drawers are linked together through the global ports and the distribution board. A directory module is required in each system drawer in a two-drawer system.
4.3 System Drawer Modules The modules plug into the system drawer backplane. Figure 4–3 shows a fully populated backplane. Figure 4–4 shows the backplane with no modules.
The CPU, memory, power, and I/O riser modules plug into the backplane located at the bottom of the system drawer.
Chapter 5 GS80 Rack System Configuration Rules This chapter provides configuration rules for the following: • Rack • Rack Power System GS80 Rack System Configuration Rules 5-1
5.1 Rack A rack houses a maximum of two system drawers.
About the Rack Cabinet The cabinet contains the following components: • One or two system drawers • Control panel (see Section 2.
5.2 Rack Power System Figure 5–2 shows a two-drawer rack power system: two AC input boxes and two H7504 power subracks at the bottom of the cabinet. Each subrack holds three power supplies. The system drawer power cables connect to the power subrack.
About the Power System • Each system drawer requires one power subrack. • Each system drawer requires two power supplies. • Each power subrack holds up to three power supplies. The third power supply is used for redundancy.
Chapter 6 Booting and Installing an Operating System This chapter provides basic operating instructions, including powering up the system and booting the operating system.
6.1 Powering Up the System To power up the system, set the keyswitch to On, or power up the system remotely. The SCM power-up display is shown at the system management console and the control panel, followed by the SRM power-up display.

6.1.1 SCM Power-Up Display
➊ The user issues a power on command.
➋ Messages denoted by ~I~ are informational and do not indicate a serious event. Other types of messages include:
• *** – Diagnostic format indicating an error has occurred.
• ### – Diagnostic format indicating a warning.
• ~E~ – An error has occurred; power-up continues, but the affected resource is dropped.
• ~W~ – An error has occurred; power-up continues, and the affected resource is questionable.
Example 6–1 SCM Power-Up Display (Continued) QBB2 Step(s)-0 1 2 3 4 5 Tested QBB3 Step(s)-0 1 2 3 4 5 Tested QBB0 Step(s)-0 1 2 3 4 5 Tested QBB1 Step(s)-0 1 2 3 4 5 Tested Phase 1 QBB0 IO_MAP0: 000000C101311133 QBB1 IO_MAP1: 0000000000000003 QBB2 IO_MAP2: 0000000000000003 QBB3 IO_MAP3: 000000C001311133 ➍ ~I~ QbbConf(gp/io/c/m)=fbbfffff Assign=ff SQbb0=00 PQbb=00 SoftQbbId=fedcba98 ~I~ SysConfig: 37 13 07 19 07 12 c7 13 37 13 f7 11 f7 13 37 13 SCM_E0> QBB1 now Testing Step-6 QBB1 now Testing Step-7 QBB1 n
Example 6–1 SCM Power-Up Display (Continued) Phase 3 ➏ ~I~ QbbConf(gp/io/c/m)=fbbfffff Assign=ff SQbb0=00 PQbb=00 SoftQbbId=fedcba98 ~I~ SysConfig: 37 13 07 19 07 12 c7 13 37 13 f7 11 f7 13 37 13 SCM_E0> QBB0 now Testing Step-D QBB1 now Testing Step-D QBB2 now Testing Step-D QBB3 now Testing Step-D.............
6.1.2 SRM Power-Up Display Following the initial SCM power-up and the five test phases, the SRM console takes control of the remaining portion of system power-up. Example 6–2 SRM Power-Up Display System Primary QBB0 : 0 System Primary CPU : 0 on QBB0 Par hrd/sft CPU Mem QBB# 3210 3210 IOR3 IOR2 IOR1 IOR0 (pci_box.rio) GP QBB Mod BP Dir PS Temp Mod 321 (:C) (-) (-) (-) (-) --.--.--.--.
➊ A snapshot of the system environment is displayed. See Section 8.7.3 for more information. ➋ PALcode is loaded and started. ➌ The size of the system is determined and mapped. This system has four QBBs and five CPUs.
Example 6–2 SRM Power-Up Display (Continued) CPU 0 speed is 731 MHz create dead_eater create poll create timer create powerup access NVRAM QBB 0 memory, 1 GB QBB 1 memory, 1 GB QBB 2 memory, 512 MB QBB 3 memory, 512 MB total memory, 3 GB copying PALcode to 103ffe8000 copying PALcode to 201ffe8000 copying PALcode to 301ffe8000 probe I/O subsystem probing hose 0, PCI probing PCI-to-ISA bridge, bus 1 bus 1, slot 0 -- dva—Floppy bus 0, slot 1 -- pka—QLogic ISP10x0 bus 0, slot 2 -- vga—ELSA GLoria Synergy bus 0,
Example 6–2 SRM Power-Up Display (Continued) CPU 9 speed is 731 MHz create powerup starting console on CPU 12 initialized idle PCB initializing idle process PID lowering IPL CPU 12 speed is 731 MHz create powerup initializing pka pkb pkc ewa dqa dqb dqc dqd initializing GCT/FRU at 1f2000 AlphaServer Console X5.7-6290, built on Feb 4 2000 at 01:41:06 P00>>>
➎ Distributed memory is sized and mapped.
➏ The I/O subsystem is mapped.
➐ The SRM console prompt is displayed.
6.2 Setting Boot Options You can set a default boot device, boot flags, and network boot protocols for Tru64 UNIX or OpenVMS using the SRM set command with environment variables. Once these environment variables are set, the boot command defaults to the stored values. You can override the stored values for the current boot session by entering parameters on the boot command line.
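As a short sketch of this workflow (a wildcard show displays the stored boot-related environment variables; the device name dkb0 and flags value are examples only):

```
P00>>> show boot*
P00>>> boot -flags 0,1 dkb0
```

The boot command line parameters override the stored values for that boot only; the environment variables themselves are unchanged.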
The syntax is:

set bootdef_dev boot_device

boot_device – The name of the device on which the system software has been loaded. To specify more than one device, separate the names with commas.

Example: In this example, two boot devices are specified. The system will try booting from dkb0 and, if unsuccessful, will boot from dka0.
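Although the command line itself does not appear above, a reconstruction consistent with the description (try dkb0 first, then dka0) would be:

```
P00>>> set bootdef_dev dkb0,dka0
```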
6.2.3 Boot_osflags The boot_osflags environment variable sets the default boot flags and, for OpenVMS, a root number. Boot flags contain information used by the operating system to determine some aspects of a system bootstrap. Under normal circumstances, you can use the default boot flag settings. To change the boot flags for the current boot only, use the flags_value argument with the boot command. The syntax is: set boot_osflags flags_value The flags_value argument is specific to the operating system.
OpenVMS Systems

OpenVMS systems require an ordered pair as the flags_value argument: root_number and boot_flags.

root_number – Directory number of the system disk on which OpenVMS files are located. For example:

root_number    Root Directory
0 (default)    [SYS0.SYSEXE]
1              [SYS1.SYSEXE]
2              [SYS2.SYSEXE]
3              [SYS3.SYSEXE]

boot_flags – The hexadecimal value of the bit number or numbers set. To specify multiple boot flags, add the flag values (logical OR).
Example In the following Tru64 UNIX example, the boot flags are set to autoboot the system to multiuser mode when you enter the boot command. P00>>> set boot_osflags a In the following OpenVMS example, root_number is set to 2 and boot_flags is set to 1. With this setting, the system will boot from root directory SYS2.SYSEXE to the SYSBOOT prompt when you enter the boot command. P00>>> set boot_osflags 2,1 In the following OpenVMS example, root_number is set to 0 and boot_flags is set to 80.
6.2.5 ei*0_protocols or ew*0_protocols The ei*0_protocols or ew*0_protocols environment variable sets network protocols for booting and other functions. To list the network devices on your system, enter the show device command. The Ethernet controllers start with the letters “ei” or “ew,” for example, eia0. The third letter is the adapter ID for the specific Ethernet controller. Replace the asterisk (*) with the adapter ID letter when entering the command.
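For example (assuming an Ethernet controller eia0, as in the examples elsewhere in this chapter, and the bootp protocol used for Tru64 UNIX RIS booting):

```
P00>>> set eia0_protocols bootp
P00>>> show eia0_protocols
```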
6.3 Booting Tru64 UNIX Tru64 UNIX can be booted from a DVD or CD-ROM on a local drive, from a local SCSI disk, or from a server. Example 6–3 Booting Tru64 UNIX from a Local SCSI Disk P00>>> sho dev dka0.0.0.1.0 dka100.1.0.1.1 dka200.2.0.1.1 dka300.3.0.1.1 dkc0.0.0.1.0 dkc100.1.0.1.0 dkc200.2.0.1.0 dkc300.3.0.1.0 dqa0.0.0.15.0 dva0.0.0.1000.
Example 6–3 Booting Tru64 UNIX from a Local SCSI Disk (Continued) Firmware revision: 5.6-6930 PALcode: Digital Tru64 UNIX version 1.60-1 Compaq AlphaServer GS320 6/731 . . . Digital Tru64 UNIX Version V4.0 login: Example 6–3 shows a boot from a local SCSI drive. The example is abbreviated. For complete instructions on booting Tru64 UNIX, see the Tru64 UNIX Installation Guide. Perform the following tasks to boot a system: 1. Power up the system. The system stops at the SRM console prompt, P00>>>. 2.
6.3.1 Booting Tru64 UNIX Over the Network To boot the system over the network, make sure the system is registered on a Remote Installation Services (RIS) server. See the Tru64 UNIX document entitled Sharing Software on a Local Area Network for registration information. Example 6–4 RIS Boot P00>>> show device dka0.0.0.1.1 DKA0 dka100.1.0.1.1 DKA100 dka200.2.0.1.1 DKA200 dkb0.0.0.3.1 DKB0 dqa0.0.0.15.0 DQA0 dva0.0.0.1000.0 DVA0 eia0.0.0.4.1 EIA0 eib0.0.0.2002.1 EIB0 pka0.7.0.1.1 PKA0 pkb0.7.0.3.
Systems running Tru64 UNIX support network adapters, designated ew*0 or ei*0. The asterisk stands for the adapter ID (a, b, c, and so on). 1. Power up the system. The system stops at the SRM console prompt, P00>>>. 2. Set boot environment variables, if desired. See Section 6.2. 3. Enter the show device command ➊ to determine the unit number of the drive for your device. 4. Enter the following commands. Example 6–4 assumes you are booting from eia0.
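A sketch of the commands for step 4, assuming the eia0 adapter and the bootp protocol used by RIS (set the protocol once, then boot over the network):

```
P00>>> set eia0_protocols bootp
P00>>> boot eia0
```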
6.4 Installing Tru64 UNIX Tru64 UNIX is installed from the DVD/CD-ROM drive connected to the system. Example 6–5 Tru64 UNIX Installation Display P00>>> b dqa0 (boot dqa0.0.0.15.0 -flags a block 0 of dqa0.0.0.15.0 is a valid boot block reading 16 blocks from dqa0.0.0.15.
There are two types of installations: o The Default Installation installs a mandatory set of software subsets on a predetermined file system layout. o The Custom Installation installs a mandatory set of software subsets plus optional software subsets that you select. You can customize the file system layout. The Tru64 UNIX Shell option puts your system in single-user mode with superuser privileges.
6.5 Booting OpenVMS

OpenVMS is booted from a local SCSI disk drive or from a DVD/CD-ROM drive on the InfoServer.

Example 6–6 Booting OpenVMS from a Local Disk

P00>>> show device
dka0.0.0.1.0       DKA0
dkb0.0.0.3.0       DKB0      COMPAQ
dkb100.1.0.3.0     DKB100    COMPAQ
dkb200.2.0.3.0     DKB200    COMPAQ
dkb300.3.0.3.0     DKB300    COMPAQ
dkc0.0.0.4.1       DKC0      COMPAQ
.
.
.
P00>>> boot -flags 0,0 dka0
(boot dka0.0.0.1.0 -flags 0,0)
block 0 of dka0.0.0.1.0 is a valid boot block
reading 924 blocks from dka0.0.0.1.
Example 6–6 shows a boot from a local disk. The example is abbreviated. For complete instructions on booting OpenVMS, see the OpenVMS installation document.

1. Power up the system. The system stops at the SRM console prompt, P00>>>.
2. Set boot environment variables, if desired. See Section 6.2.
3. Install the boot medium. For a network boot, see Section 6.2.4.
4. Enter the show device command ➊ to determine the unit number of the drive for your device.
5.
6.5.1 Booting OpenVMS from the InfoServer

You can boot OpenVMS from a LAN device on the InfoServer. The devices are designated EI*0 or EW*0. The asterisk stands for the adapter ID (a, b, c, and so on).

Example 6–7 InfoServer Boot

P00>>> show device ➊
dka0.0.0.1.1       DKA0      RZ2CA-LA
dka100.1.0.1.1     DKA100    RZ2CA-LA
dqa0.0.0.15.0      DQA0      TOSHIBA CD-ROM XM-6302B
dva0.0.0.1000.0    DVA0
eia0.0.0.6.1       EIA0      00-00-F8-10-D6-03
pka0.7.0.1.1       PKA0      SCSI Bus ID 7
P00>>>
.
.
.
Network Initial System Load Function
Version 1.2 ➌

FUNCTION ID    FUNCTION
1              Display Menu
2              Help
3              Choose Service
4              Select Options
5              Stop

Enter a function ID value: 3

OPTION ID    OPTION
1            Find Services
2            Enter known Service Name ➍

Enter an Option ID value: 2
Enter a Known Service Name: ALPHA_V71-2_SSB

OpenVMS (TM) Alpha Operating System, Version V7.1-2

1. Power up the system. The system stops at the P00>>> console prompt.
2.
6.6 Installing OpenVMS

After you boot the operating system DVD or CD-ROM, an installation menu is displayed on the screen. Choose item 1 (Install or upgrade OpenVMS Alpha). Refer to the OpenVMS installation document for information on creating the system disk.

Example 6–8 OpenVMS Installation Menu

OpenVMS (TM) Alpha Operating System, Version V7.1-2 ➊
Copyright © 1999 Digital Equipment Corporation. All rights reserved.

Installing required known files...
Configuring devices...
➊ The OpenVMS operating system DVD/CD-ROM is booted.

➋ Choose option 1 (Install or upgrade OpenVMS Alpha). To create the system disk, see the OpenVMS installation document.
Chapter 7 Operation

This chapter gives basic operating instructions.
7.1 SRM Console

The SRM console is located in an EEROM on the standard I/O module. From the console interface, you set up and boot the operating system, display the system configuration, and perform other tasks. For complete information on the SRM console, see the AlphaServer GS80/160/320 Firmware Reference Manual.

7.1.1 SRM Command Overview
Table 7–1 Summary of SRM Commands (Continued)

Command        Function
show config    Displays the logical configuration at the last system initialization.
show device    Displays a list of controllers and bootable devices in the system.
show error     Reports errors logged in the EEPROMs.
show fru       Displays the physical configuration of all field-replaceable units (FRUs).
show memory    Displays information about system memory.
show pal       Displays the versions of Tru64 UNIX and OpenVMS PALcode.
Table 7–2 Notation Formats for SRM Console Commands

Attribute       Conditions
Length          Up to 255 characters, not including the terminating carriage return or any characters deleted as the command is entered. To enter a command longer than 80 characters, use the backslash character for line continuation.
Case            Upper- or lowercase characters can be used for input. Characters are displayed in the case in which they are entered.
Abbreviation    Only by dropping characters from the end of words.
Table 7–3 Special Characters for SRM Console

Character          Function
Return or Enter    Terminates a command line. No action is taken on a command until it is terminated. If no characters are entered and this key is pressed, the console just redisplays the prompt.
Backslash (\)      Continues a command on the next line. Must be the last character on the line to be continued.
Delete             Deletes the previous character.
Ctrl/A             Toggles between insert and overstrike modes. The default is overstrike.
Table 7–3 Special Characters for SRM Console (Continued)

Character    Function
Ctrl/Q       Resumes output to the console device that was suspended by Ctrl/S.
Ctrl/R       Redisplays the current line. Deleted characters are omitted. This command is useful for hardcopy terminals.
Ctrl/S       Suspends output to the console device until Ctrl/Q is entered. Cleared by Ctrl/C.
Ctrl/U       Deletes the current line.
*            Wildcarding for commands such as show.
7.2 Displaying the System Configuration

View the system hardware configuration from the SRM console. It is useful to view the hardware configuration to ensure that the system recognizes all devices, memory configuration, and network connections.

Use the following SRM console commands to view the system configuration. Additional commands to view the system configuration are described in the AlphaServer GS80/160/320 Firmware Reference Manual.

show boot*    Displays the boot environment variables.
7.2.2 Show Config Command

Use the show config command to display the entire logical configuration. Example 7–3 shows a GS80 system configuration.

Example 7–3 Show Config

P00>>> sh conf
Compaq Computer Corporation
Compaq AlphaServer GS80 6/631
SRM Console X5.7-1838, built on Dec 1 1999 at 02:02:47
OpenVMS PALcode V1.71-2, Tru64 UNIX PALcode V1.
➊ Firmware. Version numbers of the SRM console, OpenVMS PALcode, and Tru64 UNIX PALcode. ➋ QBB0. Components listed include the quad switch and the following modules: CPUs, memory modules, directory module, IOP module, and global port. Chip revision numbers are also listed. Component information for each QBB in the system is displayed. ➌ PCI I/O information, PCI Box 0. In this example, QBB0 is connected to PCI Box 0 and PCI box 3 (see ➍).
Example 7–3 Show Config (Continued)

QBB 1 (Hard QBB 1)
  Quad Switch      QSA rev 2, QSD revs 0/0/0/0
  Duplicate Tag    Up To 4 MB Caches, DTag revs 1/1/1/1
  CPU 1            4 MB Cache, EV67 pass 2.2.2
  CPU 2            4 MB Cache, EV67 pass 2.2.
  Memory 0
  Memory 3
  Directory
  IOP
  Local Link 0    Remote Link 0    I/O Port 0    PCI Box 1    PCI Bus 0    PCI Bus 1
  Local Link 1    Remote Link 1    I/O Port 1    PCI Box 1    PCI Bus 0    PCI Bus 1
  Local Link 2
  Local Link 3
  Global Port
  QBB 0 1    Hose 0 4 8    Size 8 GB 4 GB    IOP 0 0 1
➎ QBB1. QBB1 components are listed. ➏ PCI I/O information, PCI box 1. QBB1 is connected to only one PCI box, PCI box 1. QBB1 I/O port 0 is linked to remote I/O riser 0 located on the right side of PCI box 1. Logical hose numbers are 8 and 9. QBB1 I/O port 1 is linked to remote I/O riser 1 located on the left side of PCI box 1. Logical hose numbers are 10 and 11. ➐ The total system memory size is reported. QBB0 has 8 Gbytes in a 32-way interleave; QBB1 has 4 Gbytes in a 16-way interleave.
Example 7–3 Show Config (Continued)

➓ PCI Box 0, Riser 0 ➆ ➇ ➀ ➁

Slot    Option                    Hose
        Standard I/O Module       0
        + Acer Labs M1543C        0
        + Acer Labs M1543C IDE    0
        + Acer Labs M1543C USB    0
        + QLogic ISP10x0          0
        DE500-BA Network Con      0
        ELSA GLoria Synergy       0
        DEGPA-SA                  1
        QLogic ISP10x0            2
        NCR 53C896                2
        + NCR 53C896              2
        09608086                  2
        + 19608086
Example 7–3 Show Config (Continued)

Slot    Option
7       Acer Labs M1543C
15      Acer Labs M1543C IDE    dqa.0.0.15.0    dqb.0.1.15.0
                                dqa0.0.0.15.0   COMPAQ CDR-8435
19      Acer Labs M1543C USB
        Bridge to Bus 1, ISA

        Option                  Hose 0, Bus 1, ISA
        Floppy                  dva0.0.0.1000.0

➓ PCI Box, Riser, Slot. Each PCI box in the system is identified by a number (0 to F hexadecimal). A system can have a maximum of 16 PCI boxes. The physical locations of options in the PCI box are identified by the remote I/O riser (0 or 1) and slot number in the PCI box.
Example 7–3 Show Config (Continued)

Slot    Option                 Hose 1, Bus 0, PCI
7       DEGPA-SA

Slot    Option                 Hose 2, Bus 0, PCI
1       QLogic ISP10x0         pkb0.7.0.1.2
                               dkb0.0.0.1.2      dkb100.1.0.1.2    dkb200.2.0.1.2
                               dkb400.4.0.1.2    dkb500.5.0.1.2    dkb600.6.0.1.2
2/0     NCR 53C896             pkc0.7.0.2.2
2/1     NCR 53C896             pkd0.7.0.102.2
3/0     09608086
3/1     19608086/0415129A

Slot    Option                 Hose 3, Bus 0, PCI
6       DEC KZPSA              pke0.7.0.6.3
                               dke100.1.0.6.3    dke200.2.0.6.
7       DEC PCI FDDI

Slot    Option
1       QLogic ISP10x0
7       Acer Labs M1543C
15      Acer Labs M1543C IDE    6302B
19
Example 7–3 Show Config (Continued)

Slot    Option
3       PowerStorm 350
7       Acer Labs M1543C
15      Acer Labs M1543C IDE    dqe.0.0.15.8    dqf.0.1.15.8
                                dqe0.0.0.15.8   COMPAQ CDR-8435
19      Acer Labs M1543C USB
        Bridge to Bus 1, ISA

        Option                  Hose 8, Bus 1, ISA
        Floppy                  dvc0.0.0.1000.8

Slot    Option                  Hose 9, Bus 0, PCI
5       DECchip 21154-AA
7       DEC PCI MC

Slot    Option                  Hose 9, Bus 2, PCI
4       DE602-AA                eia0.0.0.2004.9
5       DE602-AA                eib0.0.0.2005.
7.2.3 Show Device Command

Use the show device command to display the bootable devices. DK = SCSI drive; DQ = IDE drive; DV = diskette drive; EI or EW = Ethernet controller; PK = SCSI controller.

Example 7–4 Show Device

P00>>> sho dev
dka0.0.0.1.0
dkb0.0.0.7.1
dkb100.1.0.7.1
dkb200.2.0.7.1
dkb300.3.0.7.1
dqa0.0.0.15.0
dqc0.0.0.15.6
dqe0.0.0.15.8
dva0.0.0.1000.0
dvb0.0.0.1000.6
dvc0.0.0.1000.8
dvd0.0.0.1000.16
eia0.0.0.3.8
fwa0.0.0.4.1
fwb0.0.0.5.7
fwc0.0.0.1.10
pga0.0.0.7.7
pka0.7.0.1.0
pkb0.7.0.7.1
pkc0.
Table 7–4 Device Naming Conventions ➊

The device name dqa0.0.0.15.0 breaks down as follows:

Field    Category               Description
dq       Driver ID              Two-letter designator of port or class driver.
a        Storage adapter ID     One-letter designator of storage adapter (a, b, c, ...).
0        Device unit number     Unique number (MSCP unit number).
0        Bus node number
0        Channel number
15       Logical slot number
0        Hose number

Driver IDs:
dk    SCSI drive or CD
ew    Ethernet port
dq    IDE CD-ROM
fw    FDDI device
dr    RAID set device
mk    SCSI tape
ci    DSSI disk
ci    DSSI tape
dv    Diskette drive
pk    SCSI port
ei    Ethernet port
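The field breakdown in Table 7–4 can be sketched as a small parser. This is an illustration only, not part of the console firmware; the field order follows the table.

```python
# Parse an SRM device name such as "dka100.1.0.1.1" into its
# Table 7-4 fields: driver ID, storage adapter ID, unit number,
# bus node number, channel number, logical slot number, and hose.
def parse_srm_device(name):
    head, node, channel, slot, hose = name.split(".")
    driver = head[:2]        # two-letter port/class driver (dk, dq, ei, ...)
    adapter = head[2]        # one-letter storage adapter ID (a, b, c, ...)
    unit = int(head[3:])     # MSCP unit number
    return {
        "driver": driver,
        "adapter": adapter,
        "unit": unit,
        "node": int(node),
        "channel": int(channel),
        "slot": int(slot),
        "hose": int(hose),
    }

print(parse_srm_device("dka100.1.0.1.1"))
```

Applied to dka100.1.0.1.1 from Example 7–4, this yields driver dk, adapter a, unit 100, bus node 1, channel 0, logical slot 1, hose 1.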
7.2.4 Show Memory Command

The show memory command displays the main memory configuration.
➊ ➋ The total system memory size is reported.

➌ Each memory board has two sets (arrays) of DIMMs installed. A set is numbered 0 or 1. Each set consists of four DIMMs.

➍ In this example, all DIMMs have the same density (4 GB) and are in a 32-way interleave. The first array on board 0 is board 0, set 0, and is referred to as array 0; the second array on board 0 is board 0, set 1, and is referred to as array 4, and so forth.

➎ ➏ ➐ ➑ The size, or density, of the array.
7.4 Setting SRM Console Security

You can set the SRM console to secure mode to prevent unauthorized personnel from modifying the system parameters or otherwise tampering with the system from the console.
7.4.1 Setting Tru64 UNIX or OpenVMS Systems to Auto Start

The SRM auto_action environment variable determines the default action the system takes when the system is power cycled, reset, or experiences a failure. On systems that are factory configured for Tru64 UNIX or OpenVMS, the factory setting for auto_action is halt. The halt setting causes the system to stop in the SRM console. You must then boot the operating system manually.
7.6 Soft Partitioning

Soft partitioning allows you to run multiple instances of an OpenVMS operating system on one hardware system. Soft partitions are created by setting environment variables that define the number of partitions, as well as the CPU modules, I/O risers, memory size, and size of shared memory. Also, one partition is assigned to receive error interrupts. See the OpenVMS Alpha Galaxy Guide and GS80/160/320 Firmware Reference Manual for more information.
Table 7–5 SRM Environment Variables for Soft Partitions

Environment Variable    Definition
lp_count n              The number of soft partitions to create. Possible values are:
                        0    Default. All IOPs, CPUs, and memory are assigned to one soft partition. No shared memory is defined.
                        1    One soft partition is created (partition 0). You need to use the other environment variables to define the partition.
                        2-8  From two to eight soft partitions can be defined.
At the SRM console prompt, you set values for one environment variable to define the number of soft partitions in the system, one to set the memory mode, and two for each partition that define the CPU and I/O modules in each partition. The lpinit command completes the procedure.
➊ ➋ The number of soft partitions is set to 3.

➌ The set lp_cpu_mask0 f command assigns CPUs 0–3 to partition 0. The set lp_mem_size0 6GB command assigns 6 GB of memory to partition 0. The set lp_io_mask1 6 command defines QBB1 and QBB2 (and their I/O risers) as residing in partition 1. The set lp_cpu_mask1 cf0 command assigns CPUs 4–7, 10, and 11 to partition 1. The set lp_mem_size1 12GB command assigns 12 GB of memory to partition 1. The set lp_io_mask2 8 command defines QBB3 (and its I/O risers) as residing in partition 2.
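The CPU masks above are hexadecimal bit masks in which bit n selects CPU n. A small sketch of the decoding, for illustration only (the SRM console interprets the masks internally):

```python
# Decode a soft-partition CPU mask (a hex string such as "f" or "cf0")
# into the list of CPU numbers it selects: bit n set means CPU n.
def cpus_in_mask(mask_hex):
    mask = int(mask_hex, 16)
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

print(cpus_in_mask("f"))    # CPUs assigned to partition 0
print(cpus_in_mask("cf0"))  # CPUs assigned to partition 1
```

Mask f selects CPUs 0–3 and mask cf0 selects CPUs 4–7, 10, and 11, matching the assignments described in the callouts.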
7.7 Hard Partitioning

Hard partitioning allows you to run multiple operating systems on one hardware system. Table 7–6 lists the SCM environment variables used to create hard partitions. After entering the SCM commands, set the control panel switch to Off and then to On to create the hard partitions.

Table 7–6 SCM Environment Variables for Hard Partitions

Environment Variable    Definition
hp_count                A value from 0 to n indicating the number of hard partitions to be configured.
Partitioning a system is done at the QBB level: you can have a maximum of eight partitions. In hard partitioning mode:

• System partitions are independent.
• Each partition requires its own configuration tree.
• Hardware isolation is required.
• Address spaces are disjoint.
• Failures are isolated to a specific partition.
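Because hard partitions share no resources, any per-partition QBB assignment must not overlap. The check below is a hedged sketch of that rule; the mask format and the idea of per-partition QBB masks are illustrative assumptions, not commands taken from this guide.

```python
# Check that per-partition QBB bit masks describe valid hard partitions:
# no QBB may appear in two partitions, and a system has at most 8 QBBs
# (QBB0-QBB7). The hex-mask format here is an illustrative assumption.
def validate_hard_partitions(qbb_masks):
    seen = 0
    for mask_hex in qbb_masks:
        mask = int(mask_hex, 16)
        if mask == 0 or mask >= (1 << 8):   # empty, or beyond QBB7
            return False
        if seen & mask:                     # QBB already claimed
            return False
        seen |= mask
    return True

print(validate_hard_partitions(["3", "c"]))  # QBB0-1 and QBB2-3: valid
print(validate_hard_partitions(["3", "6"]))  # both claim QBB1: invalid
```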
Chapter 8 Using the System Control Manager

The system control manager (SCM) communicates with microprocessors on the console serial bus (CSB) to monitor and manage the system. The SCM also provides remote server management functions. This chapter explains the operation and use of the SCM.
8.1 Console Serial Bus Subsystem

The console serial bus (CSB) links microprocessors throughout the system, forming a monitoring and control subsystem managed by the system control manager (SCM). The SCM microprocessor is located on the standard I/O module in the master PCI box. Figure 8–1 shows a block diagram of the CSB, the microprocessors (or “managers”), and the SCM. Also shown is a redundant SCM available as a standby console.
The SCM communicates with the PCI backplane managers (PBMs), the hierarchical switch power manager (HPM), and the QBB power system managers (PSMs) distributed throughout the CSB subsystem. The subsystem has a power source separate from the rest of the system called auxiliary voltage (Vaux). Vaux enables the subsystem to be turned on from a remote site when the main system breaker is on and the system is plugged in.
8.2 System Control Manager Overview

With the SCM, you can monitor and control the system (reset, power on/off, halt) without using the operating system. You can enter SCM commands at a local or remote console device. You also manage hard partitions using the SCM.

The SCM:

• Controls the control panel display and writes status messages on the display.
• Detects conditions that require alerts, such as excessive temperature, fan failure, and power supply failure.
SCM Firmware

SCM firmware resides on the standard I/O module in flash memory. If the SCM firmware should ever become corrupted or obsolete, you can update it manually using the Loadable Firmware Update Utility.

The microprocessor can also communicate with the system power control logic to turn on or turn off power to the rest of the system. The SCM is powered by an auxiliary 5V supply.
8.3 SCM COM1 Operating Modes

The SCM can be configured to manage different data flow paths defined by the com1_mode environment variable. In through mode (the default), all data and control signals flow from the system COM1 port through the SCM to the active external port. You can also set bypass modes so that the signals partially or completely bypass the SCM. The com1_mode environment variable can be set from either SRM or the SCM. See Section 8.7.1.
Through Mode

Through mode is the default operating mode. The SCM routes every character of data between the internal system COM1 port and the active external port, either the local port or the modem port. If a modem is connected, the data goes to the modem. The SCM filters the data for a specific escape sequence. If it detects the default escape sequence, “scm”, it enters the SCM CLI. Figure 8–2 illustrates the data flow in through mode.
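In through mode the SCM is effectively scanning the character stream for its escape sequence while forwarding everything else. A minimal sketch of such a stream filter follows; this is an illustration of the idea, not the firmware's published logic.

```python
# Forward characters while watching for an escape sequence ("scm" by
# default). Returns the forwarded text and whether the sequence was
# seen -- the point at which the real SCM would enter its CLI.
def filter_stream(chars, escape="scm"):
    window = ""
    forwarded = []
    for ch in chars:
        forwarded.append(ch)
        window = (window + ch)[-len(escape):]  # keep the last few chars
        if window == escape:
            return "".join(forwarded), True
    return "".join(forwarded), False

print(filter_stream("boot dka0\nscm"))
```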
8.3.1 Bypass Modes

For modem connection, you can set the operating mode so that data and control signals partially or completely bypass the SCM. The bypass modes are snoop, soft bypass, and firm bypass.
Figure 8–3 shows the data flow in bypass mode. Note that the internal system COM1 port is connected directly to the modem port.

NOTE: You can connect a console device to the modem port in any of the bypass modes. The local console device is still connected to the SCM and can still enter the SCM to switch the COM1 mode if necessary. In any of the bypass modes, when the system loses power, it defaults to snoop mode.

Snoop Mode

In snoop mode, data partially bypasses the SCM.
After downloading binary files, you can set the com1_mode environment variable from the SRM console to switch back to snoop mode or other modes for accessing the SCM, or you can hang up the current modem session and reconnect it.

Firm Bypass Mode

Firm bypass mode effectively disables the SCM. All data and control signals are routed directly from the system COM1 port to the external modem port. The SCM does not configure or monitor the modem, and the SCM dial-in and callout features are disabled.
8.4 Console Device Setup

You can use the SCM from a console device connected to the system or from a modem hookup. As shown in Figure 8–4, a local connection is made from the local port ➊. You connect remotely from a modem connected to the modem port ➋.

Figure 8–4 Setups for SCM (PCI Box)

Note: Both the local port and the modem port are set at: 1 start bit, 8 data bits, 1 stop bit, no parity. Local port defaults: baud rate is 9600, flow control is soft.
8.5 Entering the SCM

Type an escape sequence to invoke the SCM from the SRM console. You can enter the SCM from a modem or from a local console device. From a VGA monitor, you can enter SCM commands from the SRM console.

• You can enter the SCM from the local console device regardless of the current operating mode.
• You can enter the SCM from the modem if the SCM is in through mode, snoop mode, or local mode. In snoop mode the escape sequence is passed to the system and displayed.
8.6 SRM Environment Variables for COM1

Several SRM environment variables allow you to set up the COM1 port for use with the SCM. Default values are read from shared RAM and set to whatever the SCM values are at console boot time.

com1_baud    Sets the baud rate of the COM1 port and the modem port. The default is 57600 for the local port and the modem port. Other allowable baud rates are: 38400, 19200, 9600, 7200, 4800, 3600, and 2400.

com1_flow    Specifies the flow control on the serial port.
8.7 SCM Command-Line Interface

The system control manager supports setup commands and commands for managing the system. See Table 8–1. For an SCM commands reference, see the AlphaServer GS80/160/320 Firmware Reference Manual.

Table 8–1 SCM Commands

Command                    Description
Clear {alert, error}       Clears firmware or hardware state (alert). Clears SDD and TDD error logs (error).
Crash                      Forces a crash dump at the operating system level.
Disable {alert, remote}    Disables system alerts or remote access.
Table 8–1 SCM Commands (Continued)

Command           Description
Set modem_baud    Sets the modem baud rate.
Set dial          Sets the string to be used by the SCM to dial out when an alert condition occurs.
Set local_baud    Sets the baud rate of the SCM-to-local console device UART.
Set com1_baud     Sets the baud rate of the SCM-to-system UART.
Set hp_count      Defines the number of hard partitions in the system.
Command Conventions

Follow these conventions when entering SCM commands:

• Enter enough characters to distinguish the command.

NOTE: The reset and quit commands are exceptions. You must enter the entire string for these commands to work.

• For commands consisting of two words, enter the entire first word and at least one letter of the second word. For example, you can enter disable a for disable alert.

• For commands that have parameters, you are prompted for the parameter.
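The two-word matching rule above can be sketched as follows. This is an illustration under the stated conventions, not the firmware's actual parser, and reset and quit would be special-cased as the note describes. The command list here is a small sample.

```python
# Resolve an abbreviated two-word SCM command: the first word must be
# entered in full, and the second word may be abbreviated to any
# unambiguous leading substring.
COMMANDS = ["clear alert", "clear error", "disable alert",
            "disable remote", "enable alert", "enable remote"]

def resolve(entry, commands=COMMANDS):
    words = entry.split()
    matches = []
    for cmd in commands:
        cmd_words = cmd.split()
        if (len(words) == len(cmd_words) and words[0] == cmd_words[0]
                and all(cw.startswith(w) for w, cw in zip(words, cmd_words))):
            matches.append(cmd)
    return matches[0] if len(matches) == 1 else None  # None if ambiguous

print(resolve("disable a"))  # resolves to "disable alert"
```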
8.7.1 Defining the COM1 Data Flow

Use the set com1_mode command from SRM or SCM to define the COM1 data flow paths. You can set com1_mode to one of the following values:

through        All data passes through SCM and is filtered for the escape sequence. This is the default.

snoop          Data partially bypasses SCM, but SCM taps into the data lines and listens passively for the escape sequence.

soft_bypass    Data bypasses SCM, but SCM switches automatically into snoop mode if loss of carrier occurs.
8.7.2 Displaying the System Status

The SCM status command displays the current SCM settings. Table 8–2 explains the status fields.
Table 8–2 Status Command Fields (Continued)

Field               Meaning
OCP power switch:
OCP halt:
OCP secure:         Indicates the position of the control panel keyswitch and Halt button. In this case, the keyswitch is in the Off position and the Halt button is not pushed in. The OCP is in a nonsecure state.
Remote access:      Remote access to the system is disabled or enabled.
Remote user:        A remote user is either not connected, as in the example, or connected.
Alerts:             Alert status is reported: Enabled or Disabled.
8.7.3 Displaying the System Environment

The SCM show system command provides a snapshot of the system environment.

SCM_E0> show system ➊ ➋ ➌ ➍ ➎

      Par   hrd/csb   CPU    Mem
      QBB#  3210      3210   IOR3   IOR2   IOR1   IOR0
                             (pci_box.rio)
PPPP PPPP PPPP PPPP PPPP PPPP PPPP PPPP
(-)   (-)   (-)   (-)
Px.x   P4.0   Px.x   --.
For QBB0: Px.x indicates that a remote I/O riser is present (P), but no I/O mapping (x.x) has been determined. P2.0 indicates that a hose from local I/O riser 2, port 2, (IOR2) is connected to PCI box 2, remote I/O riser 0. Pf.1 indicates that a hose from local I/O riser 1, port 1 (IOR1) is connected to PCI box F, remote I/O riser 1. Pf.0 indicates that a hose from local I/O riser 1, port 0 (IOR0) is connected to PCI box F, remote I/O riser 0. ➏ ➐ ➑ Global port module self-test results.
➅ RIO. Remote I/O modules. * indicates the presence of a remote I/O module; – indicates its absence. In the example, PCI box 10 has one remote I/O riser, I/O riser 0, installed.

➆ PS. PCI box power supplies 2 and 1. A P indicates a power supply is powered on and passed self-test; a – indicates no power supply is installed. Other indications include:

p    Power supply passed self-test and is powered off.
*    Power supply is present, but no pass/fail status yet.
F    Power supply failed.

➇ Temp.
Halt In and Continue

The halt in command halts the operating system. The continue command releases the halt. Issuing the continue command will restart the operating system even if the Halt button is latched in.

Reset

NOTE: The auto_quit environment variable must be enabled for the reset command to return to the console or operating system.

The SCM reset command restarts the system. The console device exits SCM and reconnects to the server’s COM1 port.
8.7.5 Configuring Remote Dial-In

Before you can dial in through the SCM modem port or enable the system to call out in response to system alerts, you must configure the system for remote dial-in.

Example 8–1 Dial-In Configuration

SCM_E0> enable remote              ➊
SCM_E0> set password wffirmware    ➋
SCM_E0> set init ate0v0&c1s0=2     ➌
SCM_E0> init                       ➍

Querying the modem port...modem detected
Initializing modem...
➊ Enables remote access to the SCM modem port.

➋ Sets the password (in the example, “wffirmware”) that is prompted for at the beginning of a modem session. The string cannot exceed 14 characters and is not case sensitive. For security, the password is not echoed on the screen. When prompted for verification, type the password again.

➌ Sets the modem initialization string. The string is limited to 31 characters and can be modified depending on the type of modem used.
8.7.6 Configuring Alert Dial-Out

Set a few parameters to configure alert dial-out. The dial string and alert string are set to send a message to a remote operator. When an alert condition is triggered, the dial string is sent first, followed by the alert string. Once the two strings are set, alert dial-out is enabled and the new parameters should be verified. The elements of the dial string and alert string are shown in Table 8–3. You must configure remote dial-in for the dial-out feature to be enabled.
➊ Sets the string to be used by the SCM to dial out when an alert condition occurs. The dial string must include the appropriate modem commands to dial the number.

➋ Sets the alert string that is transmitted through the modem when an alert condition is detected. Set the alert string to the phone number of the modem connected to the remote system. The alert string is appended after the dial string, and the combined string is sent to the modem.

➌ ➍ Alert dial-out is enabled.
Table 8–3 Elements of Dial String and Alert String

Dial String (the dial string is case sensitive; the SCM automatically converts all alphabetic characters to uppercase)

ATDT        AT = Attention. D = Dial. T = Tone (for touch-tone).
9,          The number for an outside line (in this example, 9). Enter the number for an outside line if your system requires it.
,           Pause for two seconds.
15551212    Phone number of the paging service.

Alert String

,,,,,       Each comma (,) provides a 2-second delay. In this example, a delay of 10 seconds is
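The dial-string elements in Table 8–3 can be exercised with a small helper that assembles the string from its parts. This is a hedged sketch: the AT fragments are the ones shown in the table, and the phone number is the table's placeholder.

```python
# Assemble a dial string from the parts listed in Table 8-3: the ATDT
# prefix, an optional outside-line digit followed by a "," (a 2-second
# pause), and the phone number to dial.
def build_dial_string(number, outside_line=None):
    parts = ["ATDT"]
    if outside_line:
        parts.append(outside_line + ",")  # "," = pause for two seconds
    parts.append(number)
    return "".join(parts).upper()         # alphabetic chars are uppercased

print(build_dial_string("15551212", outside_line="9"))  # ATDT9,15551212
```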
8.7.7 Resetting the Escape Sequence

The SCM set escape command allows the user to change the escape sequence. The default escape sequence is “scm”. The new escape sequence can be any printable character string, not to exceed six characters. Use the show status command to verify the new escape sequence.

SCM_E0> set escape
Escape Sequence: foofoo
SCM_E0> show status
.
.
.
8.8 Troubleshooting Tips

Table 8–4 lists possible causes and suggested solutions for symptoms you might see.

Table 8–4 SCM Troubleshooting

Symptom: You cannot enter the SCM from the modem.
Possible Cause: The SCM may be in soft bypass or firm bypass mode.
Suggested Solution: Issue the show com1_mode command from SRM and change the setting if necessary. If in soft bypass mode, you can disconnect the modem session and reconnect it.

Symptom: The console device cannot communicate with the SCM correctly.
Table 8–4 SCM Troubleshooting (Continued)

Symptom: SCM will not answer when the modem is called.
Possible Cause: On power-up, SCM defers initializing the modem for 30 seconds to allow the modem to complete its internal diagnostics and initializations.
Suggested Solution: Wait 30 seconds after powering up the system and SCM before attempting to dial in.

Symptom: After the system is powered up, the COM1 port seems to hang and then starts working after a few seconds.
Possible Cause: This delay is normal.
Appendix A Jumpering Information

This appendix contains jumpering information for the PCI backplane, the hierarchical switch power manager (HPM), and the standard I/O module.

A.1 PCI Backplane Jumpers

Table A–1 lists PCI backplane jumpers and their functions. These two jumpers are not normally installed.

Table A–1 PCI Backplane Jumpers

Jumper    Function
J60       If flash ROM is corrupt, installing this jumper will force the PCI backplane manager (PBM) into fail-safe loader mode.
A.2 HPM Jumpers

Table A–2 lists hierarchical switch power manager (HPM) jumpers and their functions. The HPM module has three 2-position jumpers, none of which are normally installed.

Table A–2 HPM Jumpers

Jumper    Function, When Installed
J2        Flash_Write_Inhibit. Prevents the hardware from writing to flash memory.
J3        Force_FSL. Causes the firmware to remain in the fail-safe loader code after HPM reset.
J4        HS_CSB_ID0. Sets ID0 of the microprocessor’s CSB address field to a 1.

A.
Glossary

AC off state
One of the system power states in which all power is removed from the system. See also Hot-swap, Cold-swap, and Warm-swap states.

Clock splitter module
Module that provides the system with multiple copies of the global and I/O reference clocks.

Cold-swap state
One of the system power states in which AC power and Vaux are present in the system, but power is removed from the area being serviced. See also AC off, Hot-swap, and Warm-swap states.

Console serial bus
See CSB.
Hard partition
A partition whose basic unit is a QBB and which shares no resources with any other QBB; defined by using the SCM command language. The smallest hard partition is one QBB; the maximum number of hard partitions in one system is eight.

Hierarchical switch
See H-switch.

Hose
The connection between a QBB and a PCI box; can be logical or physical.

Hot-swap state
One of the power states of the system in which all power is present in the system.
Local I/O riser module
The I/O riser module that is on the QBB backplane.

Local primary CPU
The CPU chosen to be the primary CPU in a QBB.

Local testing
Testing confined to the QBB on which the CPU doing the testing resides.

Memory directory module
See Directory module.

OCP
Operator control panel; used by the operator to control the system. It has a keyswitch, display screen, indicators, and buttons. The keyswitch is used to power the system up or down or to secure it from remote access.
Power cabinet
Cabinet in the GS160/320 systems that provides power for the system cabinets and houses PCI boxes and storage shelves.

Power input
Power is supplied to the entire system box through the QBB at the rear of the cabinet. Vaux and 48 volt input and return are supplied.

Power system manager
See PSM.

Primary PCI box
The PCI master box whose system control manager (SCM) controls the system. See also Secondary PCI box.
SCM
System control manager; a microprocessor on a standard I/O module that monitors and controls other microprocessors that monitor system state. The SCM provides a command language for an operator and allows for remote management of the system. A second standard I/O module with another SCM provides a backup control system.

Secondary PCI box
A PCI master box that serves as a backup to the primary PCI box. See also Primary PCI box.

SIO
Standard I/O module.
Warm-swap state
One of the power states of the system in which power is removed from a specified QBB but other segments of the system remain fully powered. In this state only the power going to the specified QBB is removed so that the QBB can be serviced. See also Hot-swap, Cold-swap, and AC off states.
Index

A
APB program, 6-25
Architecture, 1-5
auto_action environment variable, 7-21
Auxiliary power supply, SCM, 8-5

B
Boot flags
  OpenVMS, 6-13
  Tru64 UNIX, 6-12
Boot procedure
  OpenVMS, 6-23
  Tru64 UNIX, 6-17
boot_file environment variable, 6-11
boot_osflags environment variable, 6-12
bootdef_dev environment variable, 6-10
bootp protocol, 6-15
Bypass modes, 8-8

E
ei*_protocols environment variable, 6-15
ei*0_inet_init environment variable, 6-14
env command (SCM), 8-20
Environment, monitoring, 8-20
Escape sequ
L
Loadable firmware update utility, 1-4

M
Memory module, 2-11
Message conventions, SCM, 8-16
MOP protocol, 6-15

O
OpenVMS
  booting from InfoServer, 6-24

P
Pagers, 8-27
Partitioning
  hard, 7-26
  soft, 7-22
PCI backplane manager, 8-3
PCI box
  configuration guidelines, 3-20
PIC processor, 8-5
Power modules, 2-13
Power system manager, 8-3
Power system manager module, 2-14
Power-on/off, from SCM, 8-22

Q
QBB
  backplane, 2-8
  color code, 3-12
  power system manager, 8-3
quit command (SCM), 8-12

R
Remote power-on/off,
System box
  characteristics, 2-2
  configuration rules, 3-11
  QBB, 2-6
System drawer
  electrical and environmental parameters, 4-3

T
Through mode (SCM), 8-7
Troubleshooting
  SCM, 8-30

U
Updating firmware, 1-4