User Service Guide HP Integrity Superdome/sx2000 Server Second Edition Manufacturing Part Number : A9834-9001B September 2006
Legal Notices Copyright 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Contents
1. Overview
Server History and Specifications
Server Components
Power System
AC Power
Hardware Corrected Errors
Global Shared Memory Errors
Hardware Uncorrectable Errors
Fatal Errors
Installing and Verifying the PDCA
Voltage Check
Removing the EMI Panels
Connecting the Cables
Booting Red Hat Enterprise Linux 165
Booting SuSE Linux Enterprise Server 166
Shutting Down Linux 167
A. sx2000 LEDs
B. Management Processor Commands
MP Command: BO
Powering Off the System Using the pe Command 219
Turning On Housekeeping Power 222
Powering On the System Using the pe Command 225
D. Templates
Templates
Tables
Table 1-1. HSO LED Status Indicator Meaning 37
Table 1-2. Supported Processors and Minimum Firmware Version Required 42
Table 2-1. Server Component Dimensions 60
Table 2-2. I/O Expansion Cabinet Component Dimensions 60
Table 2-3.
Figures
Figure 1-1. Superdome Cabinet 25
Figure 1-2. UGUY 31
Figure 1-3. Management Processor 33
Figure 1-4. HUCB
Figure 3-31. Power Supply Indicator LED Detail
Figure 3-32. Removing Front EMI Panel Screw
Figure 3-33. Removing the Back EMI Panel
Figure 3-34. Cable Labeling
Figure C-16. Power Status First Window
Figure C-17. Power Status Window
Figure D-1. Cable Cutouts and Caster Locations
Figure D-2. SD16 and SD32 Space Requirements
About This Document This document contains a system overview, system-specific parameters, installation procedures, and operating system specifics for the HP Integrity Superdome/sx2000 server.
Intended Audience This document is intended for HP trained Customer Support Consultants. Document Organization This document is organized as follows: Chapter 1 This chapter presents an historical view of the Superdome server family, describes the various server components, and describes how the server components function together. Chapter 2 This chapter contains the dimensions and weights for the server and various components.
Typographic Conventions The following typographic conventions are used in this publication. WARNING A warning lists requirements that you must meet to avoid personal injury. CAUTION A caution provides information required to avoid losing data or avoid losing system functionality. IMPORTANT Provides essential information to explain a concept or to complete a task. NOTE A note highlights useful information such as restrictions, recommendations, or important details about HP product features.
Related Information You can find other information on HP server hardware management, Microsoft® Windows®, and diagnostic support tools at the following Web sites. Web Site for HP Technical Documentation: http://docs.hp.com This is the main Web site for HP technical documentation. This site offers comprehensive information about HP products available for free. Server Hardware Information: http://docs.hp.com/hpux/hw/ This Web site is the systems hardware portion of the docs.hp.com site.
Publishing History The publishing history of this document includes the following editions. Updates are made to this document on an unscheduled, as-needed basis. The updates consist of a complete replacement manual and pertinent Web-based or CD documentation. First Edition: March 2006. Second Edition: September 2006.
HP Encourages Your Comments HP welcomes your feedback on this publication. Address your comments to edit@presskit.rsn.hp.com and note that you will not receive an immediate reply. All comments are appreciated.
1 Overview The HP superscalable sx2000 processor chipset is the new chipset for the Superdome high-end platform. It supports up to 128 PA-RISC or Intel Itanium 2 processors and provides an enterprise server upgrade path for the Superdome line of systems. The sx2000 provides the final major hardware upgrade to the Superdome platform.
Overview - A new cell board - A new system backplane and its power board - New I/O backplanes and their power boards - New I/O backplane cables - The addition of a redundant, hot-swappable clock source.
Overview Server History and Specifications Server History and Specifications Superdome was introduced as the new platform architecture for HP high-end servers in 2000-2004. Superdome represented the first collaborative hardware design effort between traditional HP and Convex technologies. Superdome was designed to replace T and V Class servers and to prepare for the transition from PA-RISC to Intel Itanium 2 processors (IA).
Overview Server Components Server Components A Superdome system consists of the following types of cabinet assemblies: At least one Superdome left cabinet. The Superdome cabinets contain all of the processors, memory, and core devices of the system. They also house most (usually all) of the system's PCI cards. Systems can include both left and right cabinet assemblies containing a left or right backplane (SD64) respectively. One or more HP Rack System/E cabinets.
Overview Server Components When PA-RISC dual-core or Itanium dual-core processors are used, the CPU count is doubled by the dual-die processors, as supported on the Itanium cell boards. Up to 128 processors can be supported.
Overview Power System Power System The power subsystem consists of the following components: - One or two Power Distribution Control Assemblies (PDCA) - One Front End Power Supply (FEPS) - Up to six Bulk Power Supplies (BPS) - One power board per cell - An HIOB power system - Backplane power bricks - A power monitor (PM) on the Universal Glob of Utilities (UGUY) - Local power monitors (LPM) on the cell, the HIOB, and the backplanes. AC Power The AC power system includes one or two PDCAs and one FEPS.
Overview Power System - Inline connector: Mennekes ME532C6-16, 3-phase, 5-wire, 32 A, 400/415 V, VDE certified, color red, IEC309-1, IEC309-2, grounded at 6:00 o'clock. - Panel-mount receptacle: Mennekes ME532R6-1276, 3-phase, 5-wire, 32 A, 400/415 V, VDE certified, color red, IEC309-1, IEC309-2, grounded at 6:00 o'clock. - Fuse per phase: 25 A (valid for Germany). DC Power Each power supply output provides 48 V dc at up to 60 A (2.88 kW) and 5.3 V dc housekeeping.
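The per-supply DC figure works out from simple arithmetic, as the following sketch shows (the six-supply total is an extrapolation for illustration, not a number stated here):

```python
# Rough power arithmetic for one bulk power supply (BPS).
# Values from the text: 48 V dc output at up to 60 A per supply.
BPS_VOLTAGE_V = 48.0
BPS_MAX_CURRENT_A = 60.0

def bps_output_watts(volts: float = BPS_VOLTAGE_V, amps: float = BPS_MAX_CURRENT_A) -> float:
    """DC output power in watts (P = V * I)."""
    return volts * amps

per_supply_kw = bps_output_watts() / 1000.0
print(f"Per-BPS output: {per_supply_kw:.2f} kW")      # 2.88 kW
print(f"Six BPSs total: {6 * per_supply_kw:.2f} kW")  # 17.28 kW
```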
Overview Cooling System Cooling System The Superdome has four blowers and five I/O fans per cabinet. These components are all hot-swap devices. All have LEDs indicating the current status. These LEDs are self-explanatory. Temperature monitoring occurs for the following: - Inlet air for temperature increases above normal - BPS for temperature increases above normal - The I/O power board over-temperature signal is monitored.
Overview Cooling System If the failure causes a transition to N- I/O or main fans in a CPU cabinet, the cabinet is immediately powered off. If the failure causes a transition to N- I/O fans in an IOX cabinet, the I/O backplanes contained in the I/O Chassis Enclosure (ICE) containing that fan group are immediately powered off. Only inlet temperature increases are monitored by HP-UX; all other high-temperature-increase chassis codes will not activate the envd daemon to act as configured in the /etc/envd.conf file.
Overview Utilities Subsystem Utilities Subsystem The Superdome utilities subsystem is comprised of a number of hardware and firmware components located throughout the Superdome system. Platform Management The sx2000 platform management subsystem consists of a number of hardware and firmware components located throughout the sx2000 system. The sx2000 uses the sx1000 platform management components, with firmware changes to support new functionality.
Overview Utilities Subsystem - Supports USB for keyboard and mouse at boot - Supports VGA during boot - Enables global shared memory (GSM) - Supports PCI 2.3, PCI-X 1.0, and PCI-X 2.0 UGUY Every cabinet contains one UGUY. Refer to Figure 1-2. The UGUY plugs into the HUCB. It is not hot swappable. Its MP microprocessor controls power monitor functions, executing the Power Monitor 3 (PM3) firmware and the cabinet-level utility (CLU) firmware.
Overview Utilities Subsystem - Status LEDs for the SBA cable OL*, the cell OL*, and the I/O backplane OL* PM3 Functionality The PM3 performs the following functions: 1) FEPS control and monitoring for each of the BPSs in the FEPS. Superdome has six BPSs, and the UGUY sends 5 V to each BPS for use by the fault-collection circuitry. 2) Fan control and monitoring. In addition to the blowers, there are five I/O system fans (above and between I/O bays).
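The fan-monitoring duty described above can be sketched in outline; the data layout, threshold, and function names below are purely illustrative assumptions, not HP firmware values:

```python
# Hypothetical sketch of PM3-style fan monitoring; the RPM threshold and
# structure names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class FanStatus:
    fan_id: int
    rpm: int

def failed_fans(fans, min_rpm=1000):
    """Return the fans reporting below a minimum speed."""
    return [f for f in fans if f.rpm < min_rpm]

# Five I/O system fans; fan 2 has stalled in this example.
fans = [FanStatus(i, rpm) for i, rpm in enumerate([3200, 3150, 0, 3180, 3210])]
print([f.fan_id for f in failed_fans(fans)])  # [2]
```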
Overview Utilities Subsystem - The ability to process and store log entries (chassis codes) - Console functions to every partition - OL* functions - Virtual front panel and system alert notification - The ability to connect to the MP for maintenance, either locally or remotely - The ability to run diagnostics (ODE and scan) Figure 1-3 Management Processor The SBCH provides the physical and electrical interface to the SBC and the fanning out of the universal serial bus (USB) to internal and external connections.
Overview Utilities Subsystem HUCB The HUCB, shown in Figure 1-4, is the backplane of the utility subsystem. It provides cable distribution for all the utility signals except the clocks. It also provides the customer LAN interface and serial ports. The SMS connects to the HUCB. The system type switch is located on the HUCB. This board has no active circuits. It is not hot-swappable.
Overview Backplane (Fabric) Backplane (Fabric) The system backplane assembly provides the following functionality in an sx2000 system: - Interfaces the CLU subsystem to the system backplane and cell modules - Houses the system crossbar switch fabrics and cell modules - Provides switch fabric interconnect between multiple cabinets - Generates system clock sources - Performs redundant system clock source switching - Distributes the system clock to crossbar chips and cell modules - Distributes housekeeping po
Overview Backplane (Fabric) an additional crossbar in a second backplane for a dual backplane configuration. The connection is through a high-speed cable interface to the second backplane. This 12-cable high-speed interface replaces the flex cable interface previously used on the Superdome system. Backplane Monitor and Control The backplane implements the following monitor and control functions.
Overview Backplane (Fabric) System Clock Distribution The following system components receive the system clock: the eight cell boards that plug into the backplane and the six XBC crossbar switch chips on the system backplane. Two backplane clock power detectors, one for each 8-way sine clock power splitter, are on the RCS.
Overview Backplane (Fabric) The HSO connects to the system backplane through an HMZD2X10 right-angle receptacle. sx2000 RCS Module The sx2000 RCS module supplies clocks to the Superdome sx2000 backplane, communicates clock alarms to the RPM, and accepts control input from the RPM. It has an I2C EEPROM on the module so that the firmware can inventory the module on system power-up. The RCS supplies 16 copies of the sine wave system clock to the sx2000 system backplane.
Overview Backplane (Fabric) If one of the HSO outputs does not have the correct amplitude, the RCS uses the other one as the source of clocks and sends an alarm signal to the RPM indicating which oscillator failed. The green LED is lit on the good HSO, and the yellow LED is lit on the failed HSO. If an external clock coax cable is connected from the master backplane clock output MCX connector to the slave backplane clock input MCX connector, this overrides any firmware clock selections.
Overview Backplane (Fabric) The backplane has two slots for power supply modules. The power supply connector for each slot has a 1-bit slot address to identify the slot. The address bit for power supply slot 0 is grounded. The address bit for slot 1 is floating on the backplane. The power supply module provides a pull-up resistor on the address line on slot 1. The power supply module uses the slot address bit as bit A0 for generating a unique I2C address for the FRU ID prom.
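The slot-addressing scheme above can be sketched as follows. Only the A0 behavior (slot 0 grounded, slot 1 pulled up) comes from the text; the 7-bit base address is a typical EEPROM value assumed for illustration:

```python
# Sketch of the FRU ID PROM I2C addressing described above.
# The base address 0x50 is hypothetical; the text only specifies that
# the slot address bit becomes A0 (slot 0 = 0, slot 1 = 1).
FRU_PROM_BASE_7BIT = 0x50  # common serial-EEPROM base, assumed

def fru_prom_address(slot: int) -> int:
    """Build the 7-bit I2C address: the slot bit feeds address bit A0."""
    if slot not in (0, 1):
        raise ValueError("backplane has two power supply slots")
    return FRU_PROM_BASE_7BIT | slot

print(hex(fru_prom_address(0)))  # 0x50
print(hex(fru_prom_address(1)))  # 0x51
```

This is why each supply's FRU ID PROM appears at a unique address even though the two modules are identical: the address difference comes from the slot, not the module.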
Overview CPUs and Memories CPUs and Memories The cell provides the processing and memory resources required by each sx2000 system configuration.
Overview CPUs and Memories The remote I/O link provides a self-correcting, high-speed communication pathway between the cell and the I/O subsystem through a pair of cables. Sustained I/O bandwidth is 5.5 GB/s for a 50 percent inbound and outbound mix, and roughly 4.2 GB/s for a range of mixes. The CC interfaces to the cell's memory system. The memory interface is capable of providing a sustained bandwidth of 14 to 16 GB/s at 266.67 MHz to the cell controller.
Overview CPUs and Memories Other platforms may support DIMMs based on non-monolithic (stacked) DRAMs, which are incompatible with the sx2000. There is no support for the use of the older SDRAM DIMMs designed for Superdome. Cell memory is illustrated in Figure 1-9. Figure 1-9 Cell Memory DIMMs are named according to both physical location and loading order. The physical location is used for connectivity on the board, and is the same for all quads.
Overview CPUs and Memories industry-standard DIMM. This increase in height allows the DIMM to accommodate twice as many DRAMs as an industry-standard DIMM and to provide redundant address and control signal contacts not available on industry-standard DDR2 DIMMs. Memory Interconnect MID bus data is transmitted via the four 72-bit, ECC-protected MID buses, each with a clock frequency equal to the CC's core frequency.
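A back-of-the-envelope check shows the four MID buses are consistent with the 14 to 16 GB/s sustained figure quoted earlier. Two assumptions here are not stated explicitly in the text: that 64 of the 72 bits carry data (8 bits of ECC) and that data transfers on both clock edges:

```python
# Rough peak bandwidth for the four MID buses described above.
BUSES = 4
DATA_BYTES_PER_BUS = 8     # assumed: 64 data bits of each 72-bit bus
TRANSFERS_PER_CLOCK = 2    # assumed: double-data-rate signaling
CLOCK_HZ = 266.67e6        # from the text

peak_gb_s = BUSES * DATA_BYTES_PER_BUS * TRANSFERS_PER_CLOCK * CLOCK_HZ / 1e9
print(f"Peak: {peak_gb_s:.1f} GB/s")  # ~17.1 GB/s peak, which leaves
# headroom above the 14-16 GB/s sustained figure quoted in the text.
```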
Overview CPUs and Memories - Cellmap (across cells) - Link (across fabrics) Memory Bank Attribute Table The MBAT interleaving is done on a per-cell basis before the partition is rendezvoused. The cell map and fabric interleaving are done after the partition has rendezvoused. SDRAM on the cell board is installed in physical units called echelons. For the new sx2000, there are 16 independent echelons. Each echelon consists of two DDR DIMMs.
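The echelon organization above (16 echelons of two DDR DIMMs each) can be sketched as a simple mapping; the slot-numbering scheme here is hypothetical, chosen only to make the arithmetic concrete:

```python
# Sketch of the echelon organization: 16 independent echelons,
# each made of two DDR DIMMs (32 DIMM slots total).
ECHELONS = 16
DIMMS_PER_ECHELON = 2

def dimms_for_echelon(echelon: int):
    """Hypothetical slot numbering: echelon e owns slots 2e and 2e+1."""
    if not 0 <= echelon < ECHELONS:
        raise ValueError("echelon out of range")
    return (2 * echelon, 2 * echelon + 1)

print(dimms_for_echelon(0))                # (0, 1)
print(dimms_for_echelon(15))               # (30, 31)
print(ECHELONS * DIMMS_PER_ECHELON)        # 32 DIMM slots per cell
```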
Overview CPUs and Memories Memory Error Protection All of the CC cache lines are protected in memory by an error correction code (ECC). The sx2000 memory ECC scheme is significantly different from the sx1000 memory ECC scheme. An ECC code word is contained in each pair of 144-bit chunks. The memory data path (MDP) block is responsible for checking for and, if necessary, correcting any correctable errors.
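The detect-and-correct cycle the MDP performs can be illustrated with a toy single-error-correcting code. The real sx2000 code word (spanning a pair of 144-bit chunks) is far wider and uses a different code; this Hamming(7,4) sketch only shows the principle of computing a syndrome and flipping the faulted bit back:

```python
# Toy Hamming(7,4) single-error-correcting code, illustrating the kind
# of detect-and-correct cycle an ECC scheme performs. Not the sx2000 code.
def encode(d):                       # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

def correct(c):
    """Recompute parity; a nonzero syndrome is the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1         # flip the corrupted bit back
    return c

word = encode([1, 0, 1, 1])
corrupted = word.copy()
corrupted[4] ^= 1                    # single-bit fault at position 5
assert correct(corrupted) == word    # corrected transparently
```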
Overview CPUs and Memories Platform Dependent Hardware The platform-dependent hardware (PDH) includes functionality that is required by both system and management firmware. The PDH provides the following features: - An interface capable of passing multiple forms of information between system firmware and the management processor (MP, on the SBC) via the platform-dependent hardware controller (PDHC, on the PDH daughter card) - Flash EPROM for PDHC boot code storage.
Overview I/O Subsystem I/O Subsystem The sx2000 I/O backplane (SIOBP) is an update of the sx1000 I/O backplane, with a new set of chips that increase the board’s internal bandwidth and support the newer PCI-X 2.0 protocol. The sx2000 I/O backplane uses most of the same mechanical parts as the sx1000 I/O backplane. The connections between the I/O chassis and the rest of the system have changed.
Overview I/O Subsystem SBA Chip: CC-to-Ropes The SBA chip communicates with the CC on the cell board via a pair of high-speed serial unidirectional links known as HSS or E-links. Each unidirectional E-link consists of 20 serial 8b/10b encoded differential data bits operating at 2.36 GT/s. This yields a peak total bidirectional HSS link bandwidth of 8.5 GB/s. Internally, SBA routes this high-speed data to/from one of two rope units. Each rope unit spawns four single ropes and four fat ropes.
Overview I/O Subsystem PCI Slots For maximum performance and availability, each PCI slot is sourced by its own LBA chip and is supported by its own portion of a hot-plug controller. All slots are designed to Revision 2.2 of the PCI specification and Revision 2.0a of the PCI-X specification and can support full-size cards. Shorter and smaller cards are supported, as are 32-bit cards. Slot 0 support for the core I/O card has been removed on the SIOBP.
Overview I/O Subsystem The 5 V and 3.3 V auxiliary supplies are on whenever AC is applied. The SIOBP FPGA is responsible for ensuring that each voltage is stable before enabling the next voltage. The power-down sequence is the opposite of the power-up sequence: the 3.3 V supply is turned off first, and the two 12 V supplies are turned off last.
Overview New Server Cabling New Server Cabling Most of the Superdome cables remain unchanged except three cables designed for the sx2000 to improve data rate and electrical performance: an M-link cable, two types (lengths) of L-link cable, and a clock cable. M-Link Cable The M-link cable (A9834-2002A) is the primary backplane-to-second-cabinet-backplane high-speed interconnect. The M-link cable connects XBCs between system and I/O backplanes.
Overview New Server Cabling Figure 1-11 Backplane Cables
Overview Firmware Firmware The newer Intel Itanium® Processor firmware consists of many components loosely coupled by a single framework. These components are individually linked binary images that are bound together at run time. Internally, the firmware employs a software database called a device tree to represent the structure of the hardware platform and to provide a means of associating software elements with hardware functionality.
Overview Server Configurations Server Configurations Refer to the HP System Partitions Guide (5990-8170A) for extensive details on the topic of proper configurations. Also, an interactive program found on the PC SMS, titled “Superdome Partitions Revisited,” can be very useful.
Overview Server Errors Server Errors To support high availability (HA), the new chipset has included functionality to do error correction, detection and recovery.
Overview Server Errors are opened between PDs when it is established that the PDs are up and communication between them is open. When there is a failure in GSM, the goal is to close the sharing windows between those two cells but not to affect sharing windows to other cells. There are two methods to detect GSM errors. The first method is a software-only method, in which software wraps data with a CRC code and sequence number. Software checks this for each buffer transferred.
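The software-only method described above can be sketched as follows. The framing layout and the choice of CRC-32 are assumptions for illustration; the text does not specify which polynomial or header format the real software uses:

```python
# Sketch of the software-only GSM check: wrap each buffer with a CRC and
# a sequence number, then verify both on receipt. zlib.crc32 stands in
# for whatever checksum the real software uses (an assumption).
import struct
import zlib

def wrap(seq: int, payload: bytes) -> bytes:
    header = struct.pack(">I", seq)
    crc = zlib.crc32(header + payload)
    return header + payload + struct.pack(">I", crc)

def unwrap(frame: bytes, expected_seq: int) -> bytes:
    header, payload, crc = frame[:4], frame[4:-4], frame[-4:]
    if struct.unpack(">I", crc)[0] != zlib.crc32(header + payload):
        raise ValueError("CRC mismatch: data corrupted in transfer")
    if struct.unpack(">I", header)[0] != expected_seq:
        raise ValueError("sequence gap: a buffer was lost or reordered")
    return payload

frame = wrap(7, b"shared-memory buffer")
assert unwrap(frame, 7) == b"shared-memory buffer"
```

The sequence number catches lost or reordered buffers that a CRC alone would miss, which is why the text pairs the two.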
Overview Server Errors 1. Detection is the hardware checks that realize an error has occurred. 2. Transaction handling modifies how the hardware treats the transaction with the detected error. 3. Logging is storing the error indication in the primary error mode register, which sets the error state for the block. 4. State behavior is any special actions taken in the various error states.
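The four stages above can be modeled in outline; the class, register, and state names below are descriptive placeholders, not actual chipset or firmware identifiers:

```python
# Illustrative model of the four error-handling stages listed above.
from enum import Enum, auto

class ErrorState(Enum):
    OK = auto()
    CORRECTED = auto()
    FATAL = auto()

class ErrorBlock:
    def __init__(self):
        # Stands in for the primary error mode register plus its log.
        self.primary_error_log = []
        self.state = ErrorState.OK

    def handle(self, error_kind: str, correctable: bool):
        # Stage 1 (detection) has already fired before this is called.
        # Stage 2 (transaction handling) would poison or retry the
        # transaction; it is not modeled here.
        self.primary_error_log.append(error_kind)           # stage 3: logging
        self.state = (ErrorState.CORRECTED if correctable   # stage 4: state
                      else ErrorState.FATAL)                #          behavior

blk = ErrorBlock()
blk.handle("single-bit ECC", correctable=True)
print(blk.state)  # ErrorState.CORRECTED
```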
2 System Specifications The following specifications are based on ASHRAE Class 1. Class 1 is a controlled computer room environment, in which products are subject to controlled temperature and humidity extremes. Throughout this chapter, each specification is defined as thoroughly as possible to ensure that all data is considered for a successful site preparation and system installation.
System Specifications Dimensions and Weights Dimensions and Weights This section contains server component dimensions and weights for the system. Component Dimensions Table 2-1 lists the dimensions for the cabinet and components. Table 2-2 lists the dimensions for optional I/O expansion (IOX) cabinets.

Table 2-1 Server Component Dimensions
Component     Width (in / cm)    Depth (in / cm)    Height (in / cm)    Maximum Quantity per Cabinet
Cabinet       30 / 76.2          48 / 121.9         77.2 / 195.6        1
Cell board    16.5 / 41.
System Specifications Dimensions and Weights Component Weights Table 2-3 lists the server and component weights. Table 2-4 lists the weights for optional I/O expansion (IOX) cabinets. NOTE Refer to the appropriate documents to determine the weight of the Support Management Station (SMS) and any console that will be used with this server.

Table 2-3 System Component Weights
Component      Weight Per Unit (lb / kg)    Quantity    Weight (lb / kg)
Chassis (a)    745.17 / 338.1               1           745.17 / 338.
System Specifications Dimensions and Weights Shipping Dimensions and Weights Table 2-5 lists the dimensions and weights of the Support Management Station and a single cabinet with shipping pallet.

Table 2-5 Miscellaneous Dimensions and Weights
Equipment                           Width (in / cm)    Depth/Length (in / cm)    Height (in / cm)    Weight (lb / kg)
System on shipping pallet           39.00 / 99.06      48.63 / 123.5             73.25 / 186.7       1471.24 / 669.79
Blowers/frame on shipping pallet    40.00 / 101.6      48.00 / 121.9             62.00 / 157.5       99.2 / 45.
System Specifications Electrical Specifications Electrical Specifications The following specifications are based on ASHRAE Class 1. Class 1 is a controlled computer room environment, in which products are subject to controlled temperature and humidity extremes. Throughout this chapter, each specification is defined as thoroughly as possible to ensure that all data is considered for a successful site preparation and system installation.
System Specifications Electrical Specifications Table 2-6 Available Power Options (Continued)
Option: 7
Source Type: 3-phase
Source Voltage (Nominal): voltage range 200 to 240 V ac (a), phase-to-neutral, 50 Hz / 60 Hz
PDCA Required: 5-wire
Input Current Per Phase: 24 A maximum per phase
Power Receptacle Required: Connector and plug provided with a 2.5 meter (8.2 feet) power cable. Electrician must hard wire receptacle to 32 A site power.
a.
System Specifications Electrical Specifications Figure 2-1 PDCA Locations PDCA 1 PDCA 0 System Power Requirements Table 2-8 and Table 2-9 list the ac power requirements for an HP Integrity Superdome/sx2000 system. These tables provide information to help determine the amount of ac power needed for your computer room.
System Specifications Electrical Specifications Table 2-8 Power Requirements (Without Support Management Station)
Requirement                                         Value           Comments
Product label maximum current, 3-phase, 4-wire      44 A rms        Per phase at 200 to 240 V ac
Product label maximum current, 3-phase, 5-wire      24 A rms        Per phase at 200 to 240 V ac
Power factor correction                             0.95 minimum
Ground leakage current                              > 3.5 mA        See the following WARNING.
WARNING Beware of shock hazard.
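The product-label currents above can be turned into an approximate input power with the standard three-phase formula. The nominal line-to-line voltages chosen below (208 V for the 4-wire case, 400 V for the 5-wire case) are assumptions picked from within the stated ranges:

```python
# Rough three-phase input power from the product-label figures:
# P = sqrt(3) * V_line-to-line * I_phase * power_factor.
import math

def three_phase_kw(v_ll: float, i_phase: float, pf: float = 0.95) -> float:
    return math.sqrt(3) * v_ll * i_phase * pf / 1000.0

# 4-wire PDCA label: 44 A per phase, assuming a nominal 208 V line-to-line
print(f"{three_phase_kw(208, 44):.1f} kW")  # ~15.1 kW
# 5-wire PDCA label: 24 A per phase, assuming a nominal 400 V line-to-line
print(f"{three_phase_kw(400, 24):.1f} kW")  # ~15.8 kW
```

Both options land in the same 15 to 16 kW neighborhood, which is the point of offering two wiring configurations for the same cabinet.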
System Specifications Electrical Specifications Table 2-10 I/O Expansion Cabinet Power Requirements (Without Support Management Station)
Requirement                                  Value
Nominal input voltage                        200/208/220/230/240 V ac rms
Input voltage range (minimum to maximum)     170 to 264 V ac rms
Frequency range (minimum to maximum)         50/60 Hz
Number of phases                             1
Marked electrical input current              16 A
Maximum inrush current                       60 A (peak)
Power factor correction                      0.
System Specifications Environmental Requirements Environmental Requirements This section provides the environmental, power dissipation, noise emission, and air flow specifications.
System Specifications Environmental Requirements WARNING Do not connect a 380 to 415 V ac supply to a 4-wire PDCA. This is a safety hazard and will result in damage to the product. Line-to-line or phase-to-phase voltage measured at 380 to 415 V ac must always be connected using a 5-wire PDCA.
System Specifications Environmental Requirements b. These numbers are valid only for the specific configurations shown. Any upgrades may require a change to the breaker size. A 5-wire source utilizes a 4-pole breaker, and a 4-wire source utilizes a 3-pole breaker. The protective earth (PE) ground wire is not switched.
System Specifications Environmental Requirements Acoustic Noise Specification The acoustic noise specifications are as follows: • 8.2 bel (sound power level) • 65.1 dBA (sound pressure level at operator position) These levels are appropriate for dedicated computer room environments, not office environments.
System Specifications Environmental Requirements Airflow HP Integrity Superdome/sx2000 systems require the cabinet air intake temperature to be between 15°C and 32°C (59°F and 89.6°F) at 2900 CFM. Figure 2-2 illustrates the location of the inlet and outlet air ducts on a single cabinet. Approximately 5 percent of the system airflow is drawn from the rear of the system and exits the top of the system.
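As a unit sanity check, the 2900 CFM intake figure converts to metric as follows:

```python
# Convert the cabinet airflow requirement from CFM to cubic meters per hour.
CUBIC_FEET_PER_CUBIC_METER = 35.3147

def cfm_to_m3_per_hr(cfm: float) -> float:
    return cfm * 60.0 / CUBIC_FEET_PER_CUBIC_METER

print(f"{cfm_to_m3_per_hr(2900):,.0f} m3/hr")  # ~4,927 m3/hr
```

That is roughly 5.0 x 10^3 m3/hr, matching the metric airflow column in Table 2-17.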
System Specifications Environmental Requirements Table 2-17 Physical Environmental Specifications (Voltage 200-240 V ac)
Description              Typical Heat Release (W)    Airflow, Nominal (b)            Airflow, Maximum at 32°C (a,b)    Weight (lb / kg)    Overall System Dimensions W x D x H (in / cm)
Minimum Configuration    3423                        2900 CFM (5.0 x 10^3 m3/hr)     2900 CFM (5.0 x 10^3 m3/hr)       926.3 / 420.3       30 x 48 x 77.2 / 76.2 x 121.9 x 195.6
Maximum Configuration    9130                        2900 CFM (5.0 x 10^3 m3/hr)     2900 CFM (5.0 x 10^3 m3/hr)       1241.2 / 563.2      30 x 48 x 77.2 / 76.2 x 121.9 x 195.
3 Installing the System This chapter describes installation of an HP Integrity Superdome/sx2000 system. Installers must have received adequate training, be knowledgeable about the product, and have a good overall background in electronics and customer hardware installation.
Installing the System Introduction Introduction The instructions in this chapter are written for Customer Support Consultants (CSC) who are experienced at installing complex systems. It provides details about each step in the installation process. Some steps must be performed before others can be completed successfully. To avoid having to undo and redo an installation step, follow the installation sequence outlined in this chapter.
Installing the System Unpacking and Inspecting the System Unpacking and Inspecting the System This section describes what to do before unpacking the server and how to unpack the system itself. WARNING Do not attempt to move the cabinet, either packed or unpacked, up or down an incline of more than 15o. Verifying Site Preparation Verifying site preparation includes gathering LAN information and verifying electrical requirements.
Installing the System Unpacking and Inspecting the System Inspecting the Shipping Containers for Damage HP shipping containers are designed to protect their contents under normal shipping conditions. After the equipment arrives at the customer site, carefully inspect each carton for signs of shipping damage. WARNING Do not attempt to move the cabinet, either packed or unpacked, up or down an incline of more than 15°.
Installing the System Unpacking and Inspecting the System Figure 3-1 Normal Tilt Indicator Figure 3-2 Abnormal Tilt Indicator NOTE If the tilt indicator shows that an abnormal shipping condition has occurred, write "possible hidden damage" on the bill of lading and keep the packaging.
Installing the System Unpacking and Inspecting the System Inspection Precautions • When the shipment arrives, check each container against the carrier's bill of lading. Inspect the exterior of each container immediately for mishandling or damage during transit. If any of the containers are damaged, request the carrier's agent be present when the container is opened. • When unpacking the containers, inspect each item for external damage.
Installing the System Unpacking and Inspecting the System Unpacking the Cabinet WARNING Use three people to unpack the cabinet safely. HP recommends removing the cardboard shipping container before moving the cabinet into the computer room. NOTE If unpacking the cabinet in the computer room, be sure to position it so that it can be moved into its final position easily. Notice that the front of the cabinet (Figure 3-3) is the side with the label showing how to align the ramps.
Installing the System Unpacking and Inspecting the System Figure 3-4 Cutting Polystrap Bands Step 3. Lift the cardboard corrugated top cap off of the shipping box. Step 4. Remove the corrugated sleeves surrounding the cabinet. CAUTION Cut the plastic wrapping material off rather than pull it off. Pulling the plastic covering off represents an electrostatic discharge (ESD) hazard to the hardware. Step 5.
Figure 3-5 Removing the Ramps from the Pallet
Installing the System Unpacking and Inspecting the System Step 7. Remove the plastic anti-static bag by lifting it straight up off the cabinet. If the cabinet or any components are damaged, follow the claims procedure. Some damage can be repaired by replacing the damaged part. If extensive damage is found, it may be necessary to repack and return the entire cabinet to HP. Inspecting the Cabinet Inspect the cabinet exterior for signs of shipping damage. Step 1.
Installing the System Unpacking and Inspecting the System Step 3. Verify that the I/O chassis mounting screws are in place and secure (Figure 3-7). Inspect all components for signs of shifting during shipment or any signs of damage. Figure 3-7 I/O Chassis Mounting Screws Mounting screws I/O chassis Moving the Cabinet Off the Pallet Step 1. Remove the shipping strap that holds the BPSs in place during shipping (Figure 3-8 on page 86).
Installing the System Unpacking and Inspecting the System Figure 3-8 Shipping Strap Location Shipping strap Step 2. Remove the pallet mounting brackets and pads on the side of the pallet where the ramp slots are located (Figure 3-9).
Installing the System Unpacking and Inspecting the System WARNING Do not remove the bolts on the mounting brackets that attach to the pallet. These bolts prevent the cabinet from rolling off the back of the pallet. Step 3. On the other side of the pallet, remove only the bolt on each mounting bracket that is attached to the cabinet. Step 4. Insert the ramps into the slots on the pallet. CAUTION Make sure the ramps are parallel and aligned (Figure 3-10).
Installing the System Unpacking and Inspecting the System Step 5. Carefully roll the cabinet down the ramp (Figure 3-11). Figure 3-11 Rolling the Cabinet Down the Ramp Step 6. Unpack any other cabinets that were shipped. Unpacking the PDCA At least one power distribution control assembly (PDCA) is shipped with the system. In some cases, the customer may have ordered two PDCAs, the second to be used as a backup power source.
Table 3-1 Available Power Options

Option 6
  Source type: 3-phase
  Source voltage (nominal): voltage range 200 to 240 V ac, phase-to-phase, 50 Hz / 60 Hz, 4-wire
  PDCA required input current per phase (200 to 240 V ac): 44 A maximum per phase
  Power receptacle required: connector and plug provided with a 2.5 m (8.2 ft) power cable; an electrician must hard-wire the receptacle to 60 A site power.
Installing the System Unpacking and Inspecting the System WARNING Do not attempt to push the loaded cabinet up the ramp onto the pallet. Three people are required to push the cabinet up the ramp and position it on the pallet. Inspect the condition of the loading and unloading ramp before use. Repackaging To repackage the cabinet, perform the following steps: Step 1. Assemble the HP packing materials that came with the cabinet. Step 2. Carefully roll the cabinet up the ramp. Step 3.
Installing the System Setting Up the System Setting Up the System After a site has been prepared, the system has been unpacked, and all components have been inspected, the system can be prepared for booting. Moving the System and Related Equipment to the Installation Site Carefully move the cabinets and related equipment to the installation site but not into the final location. If the system is to be placed at the end of a row, you must add side bezels before positioning the cabinet in its final location.
Installing the System Setting Up the System This cardboard protects the housing baffle during shipping. If it is not removed, the fans will not work properly. Figure 3-13 Removing Protective Cardboard from the Housing Cardboard NOTE Double-check that the protective cardboard has been removed. Step 3.
Installing the System Setting Up the System Step 4. Using the handles on the housing labeled Blower 0 Blower 1, part number A5201-62030, align the edge of the housing over the edge at the top front of the cabinet, and slide it into place until the connectors at the back of each housing are fully mated (Figure 3-15). Then tighten the thumbscrews at the front of the housing. Figure 3-15 Installing the Front Blower Housing Step 5. Unpack each of the four blowers. Step 6.
Installing the System Setting Up the System Step 7. Tighten the thumbscrews at the front of each blower. Step 8. If required, install housings on any other cabinets that were shipped with the system. Attaching the Side Skins and Blower Side Bezels Two cosmetic side panels affix to the left and right sides of the system. In addition, each system has bezels that cover the sides of the blowers.
Installing the System Setting Up the System Step 3. Attach the skin without the lap joint (Front) over the top bracket and under the bottom bracket and gently slide the skin into position. Figure 3-18 Attaching the Front Side Skins Step 4. Push the side skins together, making sure the skins overlap at the lap joint.
Step 1. Place the side bezel slightly above the blower housing frame. Figure 3-19 Attaching the Side Bezels Lip Tab (2) Brackets Blower side bezel (See detail) Notches Brackets Step 2. Align the lower bezel tabs to the slots in the side panels. Step 3. Lower the bezel so the bezel top lip fits securely on the blower housing frame and the two lower tabs are fully inserted into the side panel slots.
Installing the System Setting Up the System Step 6. To secure the side bezels to the side skins, attach the blower bracket locks (HP part number A5201-00268) to the front and back blowers using a T-20 driver. There are two blower bracket locks on the front blowers and two on the rear. Attaching the Leveling Feet and Leveling the Cabinet After positioning the cabinet to its final position, attach and adjust the leveling feet using the following procedure: Step 1.
Installing the System Setting Up the System NOTE The procedure in this section requires two people and must be performed with the front metal chassis door open. To install the front door assembly: Step 1. Open the door, unsnap the screen, and remove all the filters held in place with Velcro. Step 2. Remove the cabinet keys that are taped inside the top front door bezel. Step 3. Insert the shoulder studs on the lower door bezel into the holes on the front door metal chassis (Figure 3-21).
Installing the System Setting Up the System Figure 3-22 Installing the Upper Front Door Assembly Front panel display cable Step 6. Feed the grounding strap through the door and attach it to the cabinet. Step 7. Insert the shoulder studs on the upper door bezel into the holes on the front door metal chassis. Step 8. Using a T-10 driver, secure the upper door bezel to the metal door with eight of the screws provided.
Installing the System Setting Up the System Figure 3-23 Installing the Rear Blower Bezel Step 3. Align the bezel over the nuts that are attached to the bracket at the rear of the cabinet. Step 4. Using a T-20 driver, tighten the two captive screws on the lower flange of the bezel. NOTE Tighten the screws securely to prevent them from interfering with the door. Step 5. Close the cabinet rear door.
Installing the System Setting Up the System Figure 3-24 Installing the Front Blower Bezel Step 3. Align the bezel over the nuts that are attached to the bracket at the front of the cabinet. Step 4. Using a T-20 driver, tighten the two captive screws on the lower flange of the bezel. NOTE Tighten the screws securely to prevent them from interfering with the door. Step 5. Close the front door.
• The required method of grounding is to connect the green power cord safety ground to the site ground point. This is accomplished through the power cord receptacle wiring. HP does not recommend relying on cabinet grounding alone; treat cabinet grounding as auxiliary or additional grounding over and above the ground wire included with the supplied power cord. • As a minimum, the green power cord safety ground must be connected to the site ground point.
Installing the System Setting Up the System Installing and Verifying the PDCA All systems are delivered with the appropriate cable plug for options 6 and 7 (Figure 3-25 on page 104). Check the voltages at the receptacle prior to plugging in the PDCA plug. • To verify the proper wiring for a 4-wire PDCA, use a DVM to measure the voltage at the receptacle.
Figure 3-25 PDCA Assembly for Options 6 and 7 Figure 3-26 A 4-Wire Connector L3 PE L2 L1
Installing the System Setting Up the System Figure 3-27 A 5-Wire Connector L3 L2 N L1 PE Use the following procedure to install the PDCA: WARNING Make sure the circuit breaker on the PDCA is OFF. Step 1. Remove the rear PDCA bezel by removing the four retaining screws. Step 2. Run the power cord down through the appropriate opening in the floor tile. Step 3. Insert the PDCA into its slot and secure with four screws (Figure 3-28 on page 106).
Installing the System Setting Up the System Figure 3-28 Installing the PDCA Step 4. Using a T-20 driver, attach the four screws that hold the PDCA in place. Step 5. If required, repeat step 2 through step 4 for the second PDCA. Step 6. Re-install the rear PDCA bezel. CAUTION Do not measure voltages with the PDCA breaker set to ON. Make sure the electrical panel breaker is ON and the PDCA breaker is OFF. Step 7. Plug in the PDCA connector. Step 8. Check the voltage at the PDCA: a.
Figure 3-29 Checking PDCA Test Points (5-Wire) (See detail) Detail A Detail B Test points Retaining screw

Table 3-3 4- and 5-Wire Voltage Ranges

4-Wire: L2 to L3: 200-240 V; L2 to L1: 200-240 V; L1 to L3: 200-240 V
5-Wire: L1 to N: 200-240 V; L2 to N: 200-240 V; L3 to N: 200-240 V; N to Ground: see note a

a. Neutral-to-ground voltage can vary from millivolts to several volts depending on the distance to the ground/neutral bond at the transformer.
Voltage Check The voltage check ensures that all phases (and neutral, for international systems) are wired correctly for the cabinet and that the AC input voltage is within limits. NOTE If a UPS is used, refer to the applicable UPS documentation for information on connecting the server and checking the UPS output voltage. UPS User Manual documentation is shipped with the UPS. Documentation may also be found at http://docs.hp.com. Step 1. Verify that site power is OFF.
Installing the System Setting Up the System WARNING SHOCK HAZARD Risk of shock hazard while testing primary power. Use properly insulated probes. Be sure to replace access cover when finished testing primary power. Step 10. Set the server power to ON. Step 11. Check that the indicator LED on each power supply is lit. See Figure 3-31.
Installing the System Setting Up the System Removing the EMI Panels Remove the front and back electromagnetic interference (EMI) panels to access ports and to visually check whether components are in place and the LEDs are properly illuminated when power is applied to the system. To remove the front and back EMI panels: Step 1. Using a T-20 driver, loosen the captive screw at the top center of the front EMI panel (Figure 3-32). Figure 3-32 Removing Front EMI Panel Screw Front EMI panel screw Step 2.
Installing the System Setting Up the System Figure 3-33 Removing the Back EMI Panel Back EMI panel screw Step 4. Use the handle provided to gently remove the EMI panel and set it aside. Connecting the Cables The I/O cables are attached and tied inside the cabinet. When the system is installed, these cables must be untied, routed, and connected to the cabinets where the other end of the cables terminate. Use the following guidelines and Figure 3-34 to route and connect cables.
Installing the System Setting Up the System Figure 3-34 Cable Labeling Routing the I/O Cables Routing the cables is a significant task in the installation process. Efficient cable routing is important not only for the initial installation, but also to aid in future service calls. Neatness counts. The most efficient use of space is to route cables so that they are not crossed or tangled. Figure 3-35 on page 113 illustrates an example of efficient I/O cable routing.
Installing the System Setting Up the System Figure 3-35 Routing I/O Cables Use the following procedure and guidelines to route cables through the cable groomer at the bottom rear of the cabinet. Step 1. Remove the cable access plate at the bottom of the groomer. Step 2. Beginning at the front of the cabinet, route the cables using the following pattern: a. Route the first cable on the left side of the leftmost card cage first.
Installing the System Setting Up the System Step 3. Connect the management processor cables last. Step 4. Reattach the cable access plate at the bottom of the cable groomer. Step 5. Reattach the cable groomer kick plate at the back of the cabinet. Step 6. Slip the L brackets under the power cord on the rear of the PDCA. Step 7. While holding the L bracket in place, insert the PDCA completely into the cabinet and secure the L bracket with one screw.
Installing the System Installing the Support Management Station Installing the Support Management Station The Support Management Station (SMS) ships in one of two ways: rack-mounted in the cabinet or separately in boxes for installation in the field. For field installation, see the Installation Guide that shipped in the box with the SMS. The SMS software is pre-loaded at the factory.
Installing the System Configuring the Event Information Tools Configuring the Event Information Tools There are three tools included in the Event Information Tools (EIT) bundle for the Support Management Station (SMS). They are the Console Logger, the IPMI Log Acquirer and the IPMI Event Viewer. These tools work together to collect, interpret, and display system event messages on the SMS.
Installing the System Turning On Housekeeping Power Turning On Housekeeping Power Use the following procedure to turn on housekeeping power to the system: Step 1. Verify that the ac voltage at the input source is within specifications for each cabinet being installed. Step 2. Ensure that: • The ac breakers are in the OFF position. • The cabinet power switch at the front of the cabinet is in the OFF position.
Figure 3-36 Front Panel with Housekeeping (HKP) Power On and Present LEDs Front panel Step 5. Examine the bulk power supply (BPS) LEDs (Figure 3-37). When on, the breakers on the PDCA distribute ac power to the BPSs. Power is present at the BPSs when: • The amber light next to the label AC0 Present is on (if the breakers on the PDCA are on the left side at the back of the cabinet).
Figure 3-37 BPS LEDs
Installing the System Connecting the MP to the Customer LAN Connecting the MP to the Customer LAN This section discusses how to connect, set up, and verify the management processor (MP) to the customer LAN. LAN information includes the MP network name (host name), the MP IP address, the subnet mask, and gateway address. The customer provides this information.
Installing the System Connecting the MP to the Customer LAN Setting the Customer IP Address NOTE The default IP address for the customer LAN port on the MP is 192.168.1.1. To set the customer LAN IP address: Step 1. From the MP Command Menu prompt (MP:CM>), enter lc (for LAN configuration). The screen displays the default values and asks if you want to modify them. It is a good idea to write down the information, as it may be required for future troubleshooting.
This is the host name for the customer LAN. You can use any name you like. The name can be up to 64 characters long and can include alphanumerics, dash (-), underscore (_), period (.), or space. HP recommends that the name be a derivative of the complex name. For example, Maggie.com_MP. Step 7. Enter the LAN parameters for Subnet mask and Gateway address. This information comes from the customer. Step 8.
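The LAN configuration steps above can be sketched as the following MP session. This is a hedged sketch: the exact prompts vary by MP firmware revision, and all values other than the documented default customer LAN IP address (192.168.1.1) are placeholders.

```
MP:CM> lc

Current configuration of MP customer LAN interface
  IP address  : 192.168.1.1
  Hostname    : (not set)
  Subnet mask : 255.255.255.0
  Gateway     : 192.168.1.1

Do you want to modify the configuration? (Y/[N]) y
```

Record the values you enter; they may be required for future troubleshooting.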
Installing the System Booting and Verifying the System Booting and Verifying the System After installing the system, verify that the proper hardware is installed and booted. This section describes how to power on the cabinet and boot and test each partition. A console window must be open for each partition. Two additional windows must also be open: one window for initiating reset on partitions and the other for monitoring system partition status. Initiate the management processor (MP) in each window.
The MP Main Menu appears as shown in Figure 3-42. Figure 3-42 Main MP Menu Step 3. Repeat the first two steps for each partition required. Step 4. In one window, bring up the command prompt by entering cm at the MP> prompt as shown in Figure 3-43. Figure 3-43 MP Command Option Step 5. In another window, bring up the Virtual Front Panel (VFP) by entering vfp as shown in Figure 3-44. Use this window to observe partition status.
Installing the System Booting and Verifying the System Figure 3-44 MP Virtual Front Panel Step 6. From the VFP menu, enter s to select the whole system, or enter the partition number to select a particular partition. An output similar to that shown in Figure 3-45 appears. In this example, no status is listed because the system 48 V has not been switched on. Figure 3-45 Example of Partition State—Cabinet Not Powered Up Step 7.
Installing the System Booting and Verifying the System Figure 3-46 MP Console Option Powering On the System 48 V Supply Step 1. Switch on the 48V supply from each cabinet front panel. If the complex has an IOX cabinet, power on this cabinet first. In a large complex, power on cabinets in one of the two following orders: 9, 8, 1, 0 or 8, 9, 0, 1. IMPORTANT The MP should be running in each window. As the cabinet boots, observe the partition activity in the window displaying the VFP. Step 2.
Booting the HP Integrity Superdome/sx2000 to an EFI Shell After powering on or using the CM bo command, all partition console windows show activity while the firmware initializes, then stop momentarily at an EFI Boot Manager menu (Figure 3-47). Figure 3-47 HP Integrity Superdome/sx2000 EFI Boot Manager Use the up and down arrow keys on the keyboard to highlight EFI Shell (Built-in) and press Enter. Do this for all partitions.
Figure 3-48 EFI Shell Prompt NOTE If autoboot is enabled for an nPartition, you must interrupt it to stop the boot process at the EFI firmware console. At this point, the Virtual Front Panel indicates that each partition is at system firmware console as indicated in Figure 3-49.
Installing the System Booting and Verifying the System Figure 3-49 HP Integrity Superdome/sx2000 Partitions at System Firmware Console Verifying the System Use the following procedure to verify the system: Step 1. From the CM> prompt, enter ps to observe the power status. A status screen similar to the one in Figure 3-50 should appear.
Installing the System Booting and Verifying the System Step 2. At the Select Device: prompt, enter b then the cabinet number to check the power status of the cabinet. Observe Power Switch: on and Power: enabled as shown in Figure 3-51. Figure 3-51 Power Status Window Figure 3-51 shows that cells are installed in slots 0 and 4. In the cabinet, verify that cells are physically located in slots 0 and 4. Step 3. Press one more time to observe the status as shown in Figure 3-52.
Installing the System Booting and Verifying the System IMPORTANT An asterisk (*) appears in the MP column only for cabinet 0; that is, the cabinet containing the MP. Only cabinet 0 contains the MP. Verify that there is an asterisk (*) for each of the cells installed in the cabinet by comparing what is in the Cells column with the cells located inside the cabinet.
Installing the System Running JET Software Running JET Software Ensure that network diagnostics are enabled at the MP Command Menu prompt (MP:CM> nd). This must be done before running scan or performing firmware updates on the system. The JTAG Utility for Scan Tests (JUST) Exploration Tool, or JET, collects system information for each system on a network and places it in files for use by other scan tools.
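The network-diagnostics prerequisite mentioned above can be sketched as the following MP session; the confirmation wording is illustrative and varies by MP firmware revision.

```
MP:CM> nd
Network diagnostics are currently disabled.
Do you want to enable network diagnostics? (Y/[N]) y
```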
Installing the System Offline Diagnostic Environment (ODE) Offline Diagnostic Environment (ODE) Now that scan has been run, you can run all the appropriate diagnostics for this system. See the appropriate diagnostic documentation for instructions.
Installing the System Attaching the Rear Kick Plates Attaching the Rear Kick Plates Kick plates protect cables from accidentally being disconnected or damaged and add an attractive cosmetic touch to the cabinet. You need to attach three metal kick plates to the bottom rear of the cabinet. To install the kick plates: Step 1. Hold the left kick plate in position and attach a clip nut (0590-2318) on the cabinet column next to the hole in the flange at the top of the kick plate (Figure 3-53). Step 2.
Installing the System Performing a Visual Inspection and Completing the Installation Performing a Visual Inspection and Completing the Installation After booting the system, carefully inspect it and reinstall the EMI panels. Here are the steps required to perform a final inspection and complete the installation: Step 1. Visually inspect the system to verify that all components are in place and secure. Step 2. Check that the cables are secured and routed properly. Step 3.
Installing the System Performing a Visual Inspection and Completing the Installation Step 4. Reinstall the front EMI panel (Figure 3-55). Figure 3-55 Front EMI Panel Flange and Cabinet Holes Hole Flange See detail a. Hook the flange at the lower corners of the EMI panel into the holes on the cabinet. b. Position the panel at the top lip, and lift the panel up while pushing the bottom into position. You might need to compress the EMI gasket to seat the panel properly. c.
Installing the System Performing a Visual Inspection and Completing the Installation a. Align the lip inside the cabinet with the lip on the EMI panel. Figure 3-56 Reinstalling the Back EMI Panel Cabinet EMI panel lip EMI panel lip b. Push the EMI panel up and in. The EMI gasket may have to be compressed at the top of the enclosure to get the panel to seat properly. c. Reattach the screw at the bottom of the EMI panel.
Installing the System Conducting a Post Installation Check Conducting a Post Installation Check After the system has been installed in a computer room and verified, conduct the post installation check. Before turning the system over to the customer, inspect the system visually and clean up the installation area. Do the following: • Inspect circuit boards. Verify that all circuit boards are installed and properly seated and that the circuit board retainers are reinstalled. • Inspect cabling.
4 Booting and Shutting Down the Operating System This chapter presents procedures for booting an operating system (OS) on an nPartition (hardware partition) and procedures for shutting down the OS.
Booting and Shutting Down the Operating System Operating Systems Supported on Cell-based HP Servers Operating Systems Supported on Cell-based HP Servers HP supports nPartitions on cell-based HP 9000 servers and cell-based HP Integrity servers. The following list describes the OSes supported on cell-based servers based on the HP sx2000 chipset.
Booting and Shutting Down the Operating System System Boot Configuration Options System Boot Configuration Options This section briefly discusses the system boot options you can configure on cell-based servers. You can configure boot options that are specific to each nPartition in the server complex.
Booting and Shutting Down the Operating System System Boot Configuration Options Manager utility) to manage boot options for your system disk. The OpenVMS I64 Boot Manager (BOOT_OPTIONS.COM) utility is a menu-based utility and is easier to use than EFI. To configure OpenVMS I64 booting on Fibre Channel devices, you must use the OpenVMS I64 Boot Manager utility (BOOT_OPTIONS.COM).
Booting and Shutting Down the Operating System System Boot Configuration Options To set the ACPI configuration value, issue the acpiconfig value command at the EFI Shell, where value is either default or windows. Then reset the nPartition by issuing the reset EFI Shell command for the setting to take effect. The ACPI configuration settings for the supported OSes are in the following list.
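The two commands described above can be combined in a brief EFI Shell session. This is a sketch; the echoed configuration line is illustrative rather than verbatim firmware output.

```
Shell> acpiconfig
acpiconfig settings: default

Shell> acpiconfig windows
Shell> reset
```

After the nPartition resets, entering acpiconfig with no arguments reports the new value.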
Booting and Shutting Down the Operating System System Boot Configuration Options CAUTION An nPartition on an HP Integrity server cannot boot HP-UX virtual partitions when in nPars boot mode. Likewise, an nPartition on an HP Integrity server cannot boot an operating system outside of a virtual partition when in vPars boot mode. To display or set the boot mode for an nPartition on a cell-based HP Integrity server, use any of the following tools as appropriate.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX Booting and Shutting Down HP-UX This section presents procedures for booting and shutting down HP-UX on cell-based HP servers and a procedure for adding HP-UX to the boot options list on HP Integrity servers. • To determine whether the cell local memory (CLM) configuration is appropriate for HP-UX, refer to “HP-UX Support for Cell Local Memory” on page 145.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX To add an HP-UX boot option when logged in to HP-UX, use the setboot command. For details, refer to the setboot (1M) manpage. Step 1. Access the EFI Shell environment. Log in to the management processor, and enter CO to access the system console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu).
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX CAUTION ACPI Configuration for HP-UX Must Be default On cell-based HP Integrity servers, to boot the HP-UX OS, an nPartition ACPI configuration value must be set to default. At the EFI Shell interface, enter the acpiconfig command with no arguments to list the current ACPI configuration. If the acpiconfig value is not set to default, then HP-UX cannot boot.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX HP-UX Booting (EFI Shell) From the EFI Shell environment, to boot HP-UX on a device first access the EFI System Partition for the root device (for example fs0:) and then enter HPUX to initiate the loader. The EFI Shell is available only on HP Integrity servers. Refer to “ACPI Configuration for HP-UX Must Be default” on page 147 for required configuration details. Step 1.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX To boot the HP-UX OS, do not type anything during the 10-second period given for stopping at the HPUX.EFI loader.
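Putting the EFI Shell steps together, a boot session might look like the following sketch. The file system number (fs0:) and the loader banner are illustrative, and the output is abridged.

```
Shell> fs0:
fs0:\> hpux

HP-UX Boot Loader for IPF
Press Any Key to interrupt Autoboot
Seconds left till autoboot - 10
```

Allowing the 10-second countdown to expire boots HP-UX with the default settings.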
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX Step 4. Boot to the HP-UX Boot Loader prompt (HPUX>) by pressing any key within the 10 seconds given for interrupting the HP-UX boot process. You will use the HPUX.EFI loader to boot HP-UX in single-user mode in the next step. After you press any key, the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>) is provided. For help using the HPUX.EFI loader, enter the help command. To return to the EFI Shell, enter exit.
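From the HPUX.EFI loader prompt, single-user mode is typically requested with the -is option. A hedged example follows; vmunix is the usual default kernel name, but verify it on your system.

```
HPUX> boot -is vmunix
```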
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX Step 1. Access the EFI Shell environment for the nPartition on which you want to boot HP-UX in LVM-maintenance mode. Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu).
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX Log in to the management processor for the server and use the Console menu to access the system console. Accessing the console through the MP enables you to maintain console access to the system after HP-UX has shut down. Step 2. Issue the shutdown command with the appropriate command-line options.
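Typical shutdown invocations are sketched below; see the shutdown(1M) manpage for the authoritative option list.

```
# shutdown -r 60      (reboot after a 60-second grace period)
# shutdown -h -y 0    (halt immediately, without interactive confirmation)
```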
Booting and Shutting Down the Operating System Booting and Shutting Down HP OpenVMS I64 Booting and Shutting Down HP OpenVMS I64 This section presents procedures for booting and shutting down HP OpenVMS I64 on cell-based HP Integrity servers and procedures for adding HP OpenVMS to the boot options list. • To determine whether the cell local memory (CLM) configuration is appropriate for HP OpenVMS, refer to “HP OpenVMS I64 Support for Cell Local Memory” on page 153.
Booting and Shutting Down the Operating System Booting and Shutting Down HP OpenVMS I64 To configure booting on Fibre Channel devices, you must use the OpenVMS I64 Boot Manager utility (BOOT_OPTIONS.COM). For more information on this utility and other restrictions, refer to the HP OpenVMS for Integrity Servers Upgrade and Installation Manual. Adding an HP OpenVMS Boot Option This procedure adds an HP OpenVMS item to the boot options list from the EFI Shell.
Booting and Shutting Down the Operating System Booting and Shutting Down HP OpenVMS I64 To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Booting HP OpenVMS To boot HP OpenVMS I64 on a cell-based HP Integrity server use either of the following procedures.
Booting and Shutting Down the Operating System Booting and Shutting Down HP OpenVMS I64 Booting HP OpenVMS (EFI Shell) From the EFI Shell environment, to boot HP OpenVMS on a device first access the EFI System Partition for the root device (for example fs0:), and enter \efi\vms\vms_loader to initiate the OpenVMS loader. Step 1. Access the EFI Shell environment for the system on which you want to boot HP OpenVMS. Log in to the management processor, and enter CO to select the system console.
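Putting the steps above together, a minimal EFI Shell session might look like the following; fs0: is illustrative, so substitute the EFI System Partition for your root device.

```
Shell> fs0:
fs0:\> \efi\vms\vms_loader
```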
Booting and Shutting Down the Operating System Booting and Shutting Down HP OpenVMS I64 Log in to the management processor (MP) for the server and use the Console menu to access the system console. Accessing the console through the MP enables you to maintain console access to the system after HP OpenVMS has shut down. Step 2. At the OpenVMS command line (DCL) issue the @SYS$SYSTEM:SHUTDOWN command and specify the shutdown options in response to the prompts given.
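For example, an orderly shutdown is started from the DCL prompt as shown below; the procedure then prompts for the number of minutes until shutdown, the reason, and whether to reboot. The prompt wording varies by OpenVMS version.

```
$ @SYS$SYSTEM:SHUTDOWN
```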
Booting and Shutting Down the Operating System Booting and Shutting Down Microsoft Windows Booting and Shutting Down Microsoft Windows This section presents procedures for booting and shutting down the Microsoft Windows OS on cell-based HP Integrity servers and a procedure for adding Windows to the boot options list. • To determine whether the cell local memory (CLM) configuration is appropriate for Windows, refer to “Microsoft Windows Support for Cell Local Memory” on page 158.
Booting and Shutting Down the Operating System Booting and Shutting Down Microsoft Windows Adding a Microsoft Windows Boot Option This procedure adds the Microsoft Windows item to the boot options list. Step 1. Access the EFI Shell environment. Log in to the management processor, and enter CO to access the system console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu).
Booting and Shutting Down the Operating System Booting and Shutting Down Microsoft Windows Step 6. Press Q to quit the NVRBOOT utility, and exit the console and management processor interfaces if you are finished using them. To exit the EFI environment press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Booting and Shutting Down the Operating System Booting and Shutting Down Microsoft Windows Step 3. Press Enter to initiate booting using the chosen boot option. Step 4. When Windows begins loading, wait for the Special Administration Console (SAC) to become available. The SAC interface provides a text-based administration tool that is available from the nPartition console. For details, refer to the SAC online help (type ? at the SAC> prompt). Loading.
Booting and Shutting Down the Operating System Booting and Shutting Down Microsoft Windows /a Abort a system shutdown. /t xxx Set the timeout period before shutdown to xxx seconds. The timeout period can range from 0 to 600 seconds, with a default of 30. For details, enter help shutdown at the Windows command line. On HP Integrity Superdome servers, the Windows shutdown /s command shuts down the system and keeps all cells at the boot-is-blocked (BIB) inactive state.
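As a sketch, the timeout handling described above can be wrapped in a small helper that composes the Windows shutdown command line and enforces the documented 0 to 600 second range (default 30). The function name and the composition approach are our own; only the flag syntax and the limits come from the text.

```shell
# Hypothetical helper: build a Windows "shutdown /s /t <seconds>" string,
# enforcing the documented timeout range (0-600 seconds, default 30).
win_shutdown_cmd() {
  t="${1:-30}"                      # default timeout is 30 seconds
  case "$t" in
    ''|*[!0-9]*) echo "invalid timeout: $t" >&2; return 1 ;;
  esac
  if [ "$t" -gt 600 ]; then
    echo "timeout must be 0-600 seconds" >&2
    return 1
  fi
  echo "shutdown /s /t $t"
}
```

For example, `win_shutdown_cmd 60` prints `shutdown /s /t 60`, which can then be issued at the Windows command prompt or through the SAC channel.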
Booting and Shutting Down the Operating System Booting and Shutting Down Linux Booting and Shutting Down Linux This section presents procedures for booting and shutting down the Linux OS on cell-based HP Integrity servers and a procedure for adding Linux to the boot options list. • To determine whether the cell local memory (CLM) configuration is appropriate for Red Hat Enterprise Linux or SuSE Linux Enterprise Server, refer to “Linux Support for Cell Local Memory” on page 163.
Booting and Shutting Down the Operating System Booting and Shutting Down Linux See “Boot Options List” on page 141 for additional information about saving, restoring, and creating boot options. NOTE: On HP Integrity servers, the OS installer automatically adds an entry to the boot options list. Adding a Linux Boot Option This procedure adds a Linux item to the boot options list. Step 1. Access the EFI Shell environment. Log in to the management processor, and enter CO to access the system console.
Booting and Shutting Down the Operating System Booting and Shutting Down Linux Booting Red Hat Enterprise Linux You can boot the Red Hat Enterprise Linux OS on HP Integrity servers using either of the methods described in this section. Refer to “Shutting Down Linux” on page 167 for details on shutting down the Red Hat Enterprise Linux OS.
Booting and Shutting Down the Operating System Booting and Shutting Down Linux From the system console, select the EFI Shell entry from the EFI Boot Manager menu to access the shell. Step 2. Access the EFI System Partition for the Red Hat Enterprise Linux boot device. Use the map EFI Shell command to list the file systems (fs0, fs1, and so on) that are known and have been mapped. To select a file system to use, enter its mapped name followed by a colon (:).
Booting and Shutting Down the Operating System Booting and Shutting Down Linux Refer to the procedure “Booting SuSE Linux Enterprise Server (EFI Shell)” on page 167 for details. After choosing the file system for the boot device (for example, fs0:), you can initiate the Linux loader from the EFI Shell prompt by entering the full path for the ELILO.EFI loader. On a SuSE Linux Enterprise Server boot device EFI System Partition, the full paths to the loader and configuration files are: \efi\SuSE\elilo.
Booting and Shutting Down the Operating System Booting and Shutting Down Linux Use the PE command at the management processor Command Menu to manually power on or power off server hardware, as needed. -r Reboot after shutdown. -c Cancel an already running shutdown. time When to shut down (required). You can specify the time option in any of the following ways: • Absolute time in the format hh:mm, in which hh is the hour (one or two digits) and mm is the minute of the hour (two digits).
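The time argument formats described above can be checked before the command is issued. The sketch below validates the absolute hh:mm form from the text, plus the relative +m and now forms that standard shutdown(8) also accepts; the helper name is hypothetical and this is not part of any HP tool.

```shell
# Hypothetical validator for the Linux shutdown "time" argument:
# "now", a relative "+minutes", or an absolute "hh:mm" (hh 0-23, mm 00-59).
shutdown_time_ok() {
  case "$1" in
    now) return 0 ;;
    +[0-9]*)                      # relative: "+" followed by digits only
      case "${1#+}" in *[!0-9]*) return 1 ;; *) return 0 ;; esac ;;
    [0-9]:[0-9][0-9]|[0-9][0-9]:[0-9][0-9])
      hh="${1%%:*}"; mm="${1##*:}"
      [ "$hh" -le 23 ] && [ "$mm" -le 59 ] ;;
    *) return 1 ;;
  esac
}
```

A wrapper script could call this before running `shutdown -h <time>` on a partition, rejecting malformed values such as `25:00` up front.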
A sx2000 LEDs Appendix A 169
sx2000 LEDs Table A-1 Front Panel LEDs

LED        Driven By   State        Meaning
48V Good   PM          On (green)   48V is good
HKP Good   PM          On (green)   Housekeeping is good.
sx2000 LEDs Table A-2 Power and OL* LEDs LED Location Driven By State Meaning Cell Power Chassis beside cell, and on cell Cell LPM On Green HKP, PWR_GOOD Cell Attention Chassis beside cell CLU On Yellow Cell OL* PDHC Post Cell PDHC 0x0 PDHC Post or run state 0xf 0xe->0x1 PM Post CLU Post On the UGUY board, driven by the PM On the UGUY board, driven by the CLU MOP SARG 0x0 No HKP 0xf MOP is reset or dead 0xe->0x1 PM Post or run state 0x0 No HKP 0xf CLU is reset or dead
sx2000 LEDs Table A-2 Power and OL* LEDs (Continued)

LED                          Location           Driven By   State       Meaning
Hot swap oscillators (HSO)   System Backplane   RPM         On Green    HSO supply running
                                                            On Yellow   HSO clock fault

Figure A-1 Utilities LEDs

Table A-3 OL* LED States

Power (Green)   OL* (Yellow)   Meaning
On              Off            Normal operation (powered)
On              Flashing       Fault detected, power on
On              On             Slot selected, power on, not ready for OLA/D
Off             Off            Power off or slot available
Off             Flashing       Fault detected, power off
sx2000 LEDs Figure A-2 PDH Status PDH STATUS LSB MSB BIB SM US Power HB Good A label is placed on the outside of the SDCPB frame to indicate PDH status, DC/DC converter faults that shut down the sx2000 cell, and loss of DC/DC converter redundancy. Figure A-2 illustrates the label, and Table A-4 describes each LED. Note: The Power Good LED is a bi-color LED (Green/Yellow).
B Management Processor Commands This Appendix summarizes the Management Processor commands. Notice that in the examples herein, MP is used as the command prompt. The term Guardian Service Processor has been changed to Management Processor, but some code already written uses the old term.
Management Processor Commands MP Command: BO MP Command: BO BO - Boot partition • Access level—Single PD user • Scope—partition This command boots the specified partition. It ensures that all the cells assigned to the target partition have valid complex profiles and then releases Boot-Is-Blocked (BIB).
Management Processor Commands MP Command: CA MP Command: CA CA - Configure Asynchronous & Modem Parameters • Access level—Operator • Scope—Complex This command allows the operator to configure the local and remote console ports. The parameters that can be configured are the baud rate, flow control, and modem type.
Management Processor Commands MP Command: CC MP Command: CC CC - Complex Configuration • Access level—Administrator • Scope—Complex This command performs an initial out-of-the-box complex configuration. The system can be configured either as a single (user-specified) cell in partition 0 (the genesis complex profile), or the last profile can be restored. The state of the complex prior to command execution has no bearing on the changes to the configuration.
Management Processor Commands MP Command: CP MP Command: CP CP - Cells Assigned by Partition • Access Level - Single Partition User • Scope - Complex The cp command displays a table of cells assigned to partitions and arranged by cabinets. This is for display only; no configuration is possible with this command.
Management Processor Commands MP Command: DATE MP Command: DATE DATE Command - Set Date and Time. • Access level—Administrator • Scope—Complex This command changes the value of the real time clock chip on the MP.
Management Processor Commands MP Command: DC MP Command: DC DC - Default Configuration • Access level—Administrator • Scope—Complex This command resets some or all of the configuration parameters to their default values. The clock setting is not affected by the DC command. The example below shows the various parameters and their defaults.
Management Processor Commands MP Command: DF MP Command: DF DF - Display FRUID • Access level—Single Partition User • Scope—Complex This command displays the FRUID data of the specified FRU. FRU information for the SBC, BPS, and processors is “constructed,” because these FRUs do not have a FRU ID EEPROM. Because of this, the list of FRUs is different from the list presented in the WF command.
Management Processor Commands MP Command: DI MP Command: DI DI - Disconnect Remote or LAN Console • Access level—Operator • Scope—Complex This command initiates separate remote console or LAN console disconnect sequences. For the remote console, the modem control lines are de-asserted, forcing the modem to hang up the telephone line. For the LAN console, the telnet connection is closed.
Management Processor Commands MP Command: DL MP Command: DL DL - Disable LAN Access • Access level—Administrator • Scope—Complex This command disables Telnet LAN access. Disabling Telnet access kills all of the current Telnet connections and causes future telnet connection requests to be given a connection refused message. Example B-9 DL Command Example: In this example, the administrator is connected via telnet to the MP. When DL executes, his/her telnet connection to the MP is closed.
Management Processor Commands MP Command: EL MP Command: EL EL - Enable LAN Access • Access level—Administrator • Scope—Complex This command enables Telnet LAN access. Example B-10 EL Command MP:CM> el Enable telnet access? (Y/[N]) y -> Telnet access enabled. MP:CM> • See also: DI, DL. Note that this command is deprecated and does not support SSH. Use the SA command instead.
Management Processor Commands MP Command: HE MP Command: HE HE - Help Menu • Scope—N/A • Access level—Single PD user This command displays a list of all MP commands available at the current MP access level (Administrator, Operator, or Single PD user). Commands available in manufacturing mode are displayed only if the MP is in manufacturing mode. In the following example, the MP is in manufacturing mode, and as a result the manufacturing commands are shown in the last screen.
Management Processor Commands MP Command: HE Example B-11 HE Command
Management Processor Commands MP Command: ID MP Command: ID ID - Configure Complex Identification • Access level—Operator • Scope—Complex This command configures the complex identification information. The complex identification information includes the following: • model number • model string • complex serial number • complex system name • original product number • current product number • enterprise ID and diagnostic license This command is similar to the SSCONFIG command in ODE.
Management Processor Commands MP Command: IO MP Command: IO IO - Display Connectivity Between Cells and I/O • Access level—Single Partition User • Scope—Complex This command displays a mapping of the connectivity between cells and I/O. • Example: MP:CM> io --------------------------+ Cabinet | 0 | 1 | --------+--------+--------+ Slot |01234567|01234567| --------+--------+--------+ Cell |XXXX....|........| IO Cab |0000....|........| IO Bay |0101....|........| IO Chas |1133....|........
Management Processor Commands MP Command: IT MP Command: IT IT - View / Configure Inactivity Timeout Parameters • Access level—Operator • Scope—Complex This command sets the two inactivity timeouts. The session inactivity timeout prevents a session to a partition from being inadvertently left open, which would prevent other users from logging on to the partition through that path.
Management Processor Commands MP Command: LC MP Command: LC LC - LAN Configuration • Access level—Administrator • Scope—Complex This command displays and modifies the LAN configurations. The IP address, Hostname, Subnet mask, and Gateway address can be modified with this command.
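Mistyping an address at the LC prompts is easy, and a quick local check of the dotted-quad fields (IP address, subnet mask, gateway) before entering them can save a round trip. The validator below is a generic sketch written for this guide; it is not part of the MP firmware.

```shell
# Hypothetical pre-check for LC input fields: accept only a well-formed
# IPv4 dotted quad (four decimal octets, each 0-255).
valid_ipv4() {
  case "$1" in
    *[!0-9.]*|*..*|.*|*.) return 1 ;;   # non-digits or empty octets
  esac
  oldifs="$IFS"; IFS=.
  set -- $1                             # split the address on the dots
  IFS="$oldifs"
  [ $# -eq 4 ] || return 1
  for octet in "$@"; do
    [ "$octet" -le 255 ] || return 1
  done
}
```

Run it against each value you plan to type at the LC prompts; a nonzero exit status means the value would be rejected or, worse, silently wrong.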
Management Processor Commands MP Command: LS MP Command: LS LS - LAN Status • Access level—Single Partition User • Scope—Complex This command displays all parameters and current connection status of the LAN interface.
Management Processor Commands MP Command: MA MP Command: MA MA - Main Menu • Access level—Single Partition User • Scope—N/A This command returns the user from the Command Menu to the Main Menu. Only the user who enters the command is returned to his or her private main menu.
Management Processor Commands MP Command: ND MP Command: ND ND - Network Diagnostics • Access level—Administrator • Scope—Complex This command enables/disables network diagnostics. This will enable or disable the Ethernet access to MP Ethernet ports other than the main telnet port (TCP port 23). Disabling the network diagnostic port prevents the user from accessing the system with diagnostic tools such as JUST, GDB, LDB and firmware update (FWUU).
Management Processor Commands MP Command: PD MP Command: PD PD - Set Default Partition • Access level—Operator • Scope—Complex This command sets the default partition. If a default partition already exists, then this command overrides the previously defined partition. Setting the default partition prevents the user from being forced to enter a partition in commands that require a partition for their operation. For example, this prevents a user from accidentally TOCing the wrong partition.
Management Processor Commands MP Command: PE MP Command: PE PE - Power Entity • Access level—Operator • Scope—Complex This command turns power on/off to the specified entity. If there is a default partition defined then the targeted entity must be a member of that partition. In the case when the entity being powered is an entire cabinet this command has some interesting interactions with the physical cabinet power switch.
Management Processor Commands MP Command: PE I - IO Chassis P - Partition Select Device: p # Name --- ---0) Partition 0 1) Partition 1 2) Partition 2 3) Partition 3 Select a partition number: 0 The power state is OFF for partition 0.
Management Processor Commands MP Command: PS MP Command: PS PS - Power and Configuration Status • Access level—Single Partition User • Scope—Cabinet This command displays the status of the specified hardware. This command adds new information beyond previous versions of the PS command in other systems. The user can retrieve a summary or more detailed information on one of the following: a cabinet, a cell, a core IO, or the MP.
Management Processor Commands MP Command: PS Example B-20 PS Command
Management Processor Commands MP Command: RE MP Command: RE RE - Reset Entity • Access level—Operator • Scope—Complex This command resets the specified entity. Exercise care when resetting entities. Resetting an entity has the following side effects.
Management Processor Commands MP Command: RL MP Command: RL RL - Re-key Complex Profile Lock • Access level—Operator • Scope—Complex This command re-keys the complex profile lock. It should only be used to recover from the error caused by the holder of the lock terminating before releasing the complex profile lock. It invalidates any outstanding key to the complex profile lock.
Management Processor Commands MP Command: RR MP Command: RR RR - Reset Partition for Re-configuration • Access level—Single Partition User • Scope—Partition This command resets the specified partition but does not automatically boot it. The utility system resets each cell that is a member of the specified partition. If the user is either Administrator or Operator, a choice of which partition to reset is offered.
Management Processor Commands MP Command: RS MP Command: RS RS - Reset Partition • Access level—Single PD user • Scope—Partition This command resets and boots the specified partition. The utility system resets each cell that is a member of the specified partition. Once all cells have completed reset, the partition is booted. If the user is either Administrator or Operator, a choice of which partition is offered.
Management Processor Commands MP Command: SA MP Command: SA SA - Set Access Parameters • Access level—Administrator • Scope—Complex This command modifies the enablement of interfaces including telnet, SSH, modem, network diagnostics, IPMI LAN, web console, etc. • Example: [spudome] MP:CM> sa This command displays and allows modification of access parameters.
Management Processor Commands MP Command: SO MP Command: SO SO - Security Options and Access Control Configuration • Access level—Administrator • Scope—Complex This command modifies the security options and access control to the MP handler.
Management Processor Commands MP Command: SYSREV MP Command: SYSREV SYSREV - Display System and Manageability Firmware Revisions • Access level—Single Partition User • Scope—Complex This command displays the firmware revisions of all of the entities in the complex. • Example: MP:CM> sysrev Manageability Subsystem FW Revision Level: 7.14 | Cabinet #0 | -----------------------+-----------------+ | SYS FW | PDHC | Cell (slot 0) | 32.2 | 7.6 | Cell (slot 1) | 32.2 | 7.6 | Cell (slot 2) | 32.2 | 7.
Management Processor Commands MP Command: TC MP Command: TC TC - TOC Partition • Access level—Single Partition User • Scope—Partition This command transfers the control (TOC) of the specified partition. The SINC on each cell in the specified partition asserts the sys_init signal to Dillon.
Management Processor Commands MP Command: TE MP Command: TE TE - Tell • Access level—Single Partition User • Scope—Complex This command treats all characters following the TE as a message that is broadcast when the Enter key is pressed. The message size is limited to 80 characters. Any extra characters are not broadcast. Also, any message that is written is not entered into the console log.
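Since TE silently drops everything past 80 characters, it can help to preview the truncation locally before broadcasting. The sketch below uses printf's precision modifier to apply the same limit; the function name is invented for this guide and is not an MP feature.

```shell
# Hypothetical preview of what TE will actually broadcast: printf's
# %.80s precision keeps at most the first 80 characters of the message.
te_preview() {
  printf '%.80s\n' "$*"
}
```

Anything the preview cuts off would likewise be cut off by TE, so shorten the message until the preview shows all of it.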
Management Processor Commands MP Command: VM MP Command: VM VM - Voltage Margin • Access level—Single Partition User • Scope—Cabinet This command adjusts the voltage of all marginable supplies within a range of +/- 5%. No reset is required for the command to become effective.
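When checking a margined rail with a meter, it is handy to know the window that a +/-5% margin implies for a nominal voltage. The helper below just does that arithmetic; it is not an MP command, and the rail values used in the example are illustrative only.

```shell
# Hypothetical arithmetic aid: print the low and high limits of a
# +/-5% margin window around a nominal supply voltage.
vm_window() {
  awk -v v="$1" 'BEGIN { printf "%.3f %.3f\n", v * 0.95, v * 1.05 }'
}
```

For example, a nominal 48 V rail margined by the full +/-5% would read between 45.600 V and 50.400 V.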
Management Processor Commands MP Command: WHO MP Command: WHO WHO - Display List of Connected Users • Access level—Single Partition User • Scope—Complex This command displays the login name of the connected console client user and the port on which they are connected. For LAN console clients the remote IP address is displayed.
Management Processor Commands MP Command: XD MP Command: XD XD - Diagnostic and Reset of MP • Access level—Operator • Scope—Complex This command tests certain functions of the SBC and SBCH boards. Some of the tests are destructive and should not be performed on a system running the operating system.
C Powering the System On and Off This appendix provides procedures to shut down and bring up a system. Choose the appropriate section for the desired task. Not all steps in a procedure may apply. For example, if you are checking the configuration as outlined in “Checking System Configuration” on page 214 and you have already connected to the host, step 1 is unnecessary.
Powering the System On and Off Shutting Down the System Shutting Down the System Use this procedure whenever the system must be shut down. Checking System Configuration To check the current system configuration, in preparation for shutdown, perform the following procedure: Step 1. Open a command prompt window and connect to the MP (Figure C-1): telnet Figure C-1 Connecting to Host Step 2. Enter the appropriate login and password at the MP prompt.
Powering the System On and Off Shutting Down the System Figure C-2 Main MP Menu Step 3. Invoke the Command Menu by entering cm at the MP prompt. Step 4. Make sure that no one else is using the system by entering who at the CM prompt. Only one user should be seen, as indicated in Figure C-3.
Powering the System On and Off Shutting Down the System Step 5. Read and save the current system configuration by entering cp at the CM prompt. Cabinet and partition information should be displayed as in Figure C-4. Figure C-4 Checking Current System Configuration Step 6. Go back to the Main Menu by entering ma at the CM prompt. Step 7. From the Main Menu, enter vfp to invoke the Virtual Front Panel (Figure C-5).
Powering the System On and Off Shutting Down the System Step 8. From the VFP, enter s to select the whole system or enter the partition number to select a particular partition. You should see an output similar to that shown in Figure C-6. Figure C-6 Example of Partition State Step 9. Enter ctrl+B to exit the Virtual Front Panel and bring up the Main Menu. Shutting Down the Operating System You must shut down the operating system on each partition.
Powering the System On and Off Shutting Down the System • Windows: Log in as Administrator. From the Special Administration Console (SAC> prompt) enter cmd to start a new command prompt. Press Esc+Tab to switch to the channel for the command prompt and log in. Step 3. At the console prompt, shut down and halt the operating system by entering the shutdown command.
Powering the System On and Off Shutting Down the System Step 6. Read the Cell PDH Controller status to determine if the partition is at BIB. Figure C-9 Using the de -s Command Boot-is-blocked Step 7. Repeat step 1 through step 6 for each partition. Powering Off the System Using the pe Command Perform the following steps to power off the system.
Powering the System On and Off Shutting Down the System Step 1. From the Command Menu, enter pe (Figure C-10). Figure C-10 Power Entity Command Step 2. Enter the number of the cabinet to power off. In Figure C-10, the number is 0. Step 3. When prompted for the state of the cabinet power, enter off. Step 4. Enter ps (power status command) at the CM> prompt to view the power status (Figure C-11).
Powering the System On and Off Shutting Down the System Step 5. Enter b at the Select Device prompt to verify that the cabinet power is off. The output should be similar to that in Figure C-12. The Power Switch should be on, but Power should not be enabled. Figure C-12 Power Status Second Window The cabinet is now powered off.
Powering the System On and Off Turning On Housekeeping Power Turning On Housekeeping Power Use the following procedure to turn on housekeeping power to the system: Step 1. Verify that the ac voltage at the input source is within specifications for each cabinet being installed. Step 2. Ensure that: • The ac breakers are in the OFF position. • The cabinet power switch at the front of the cabinet is in the OFF position.
Powering the System On and Off Turning On Housekeeping Power Figure C-13 Front Panel Display with Housekeeping (HKP) Power On, and Present LEDs HKP, Present, and Attention LEDs Step 5. Examine the bulk power supply (BPS) LEDs (Figure C-14). When the breakers on the PDCA are on, they distribute power to the BPSs. Power is present at the BPSs when: • The amber light next to the label AC0 Present is on (if the breakers are on the PDCA on the left side at the back of the cabinet).
Powering the System On and Off Turning On Housekeeping Power Figure C-14 BPS LEDs
Powering the System On and Off Powering On the System Using the pe Command Powering On the System Using the pe Command This section describes how to power on the system. Use the following procedures whenever the system needs to be powered on. Step 1. From the Command Menu, enter the pe command. IMPORTANT If the complex has an IOX cabinet, power on this cabinet first. In a large complex, cabinets should be powered on in one of the two following orders: • 9, 8, 1, 0 • 8, 9, 0, 1 Step 2.
Powering the System On and Off Powering On the System Using the pe Command Step 4. From the CM> prompt, enter ps to observe the power status. The status screen shown in Figure C-16 appears. Figure C-16 Power Status First Window Step 5. At the Select Device prompt, enter B then the cabinet number to check the power status of the cabinet. Observe that the Power switch is on and Power is enabled as shown in Figure C-17.
D Templates This appendix contains blank floor plan grids and equipment templates. Combine the necessary number of floor plan grid sheets to create a scaled version of the computer room floor plan.
Templates Templates Templates This section contains blank floor plan grids and equipment templates. Combine the necessary number of floor plan grid sheets to create a scaled version of the computer room floor plan. Figure D-1 illustrates the locations required for the cable cutouts. Figure D-2 on page 230 illustrates the overall dimensions required for SD16 and SD32 systems.
Templates Templates Figure D-3 on page 231 illustrates the overall dimensions required for an SD64 complex.
Templates Templates Figure D-2 SD16 and SD32 Space Requirements
Templates Templates Figure D-3 SD64 Space Requirements Equipment Footprint Templates Equipment footprint templates are drawn to the same scale as the floor plan grid (1/4 inch = 1 foot). These templates are provided to show basic equipment dimensions and space requirements for servicing. The service areas shown on the template drawings are lightly shaded. Use equipment templates with the floor plan grid to define the location of the equipment that will be installed in the computer room.
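Working at the 1/4 inch = 1 foot scale described above means every real-world dimension is divided by four to get inches on the drawing. A small conversion helper (ours, not HP's) keeps that arithmetic honest when sizing templates by hand:

```shell
# Hypothetical scale converter for the template drawings:
# at 1/4 inch = 1 foot, a dimension of F feet is F * 0.25 inches on paper.
scale_to_template() {
  awk -v feet="$1" 'BEGIN { printf "%.2f\n", feet * 0.25 }'
}
```

A 30-foot room wall, for instance, is drawn 7.50 inches long on the floor plan grid.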
Templates Templates NOTE Photocopying typically changes the scale of copied drawings. If any templates are copied, then all templates and floor plan grids must also be copied. Computer Room Layout Plan Use the following procedure to create a computer room layout plan: Step 1. Remove several copies of the floor plan grid. Step 2. Cut and join them together (as necessary) to create a scale model floor plan of the computer room. Step 3. Remove a copy of each applicable equipment footprint template. Step 4.
Templates Templates Figure D-4 Computer Floor Template
Figure D-5 Computer Floor Template
Figure D-6 Computer Floor Template
Figure D-7 Computer Floor Template
Figure D-8 Computer Floor Template
Figure D-9 SD32 and SD64, and I/O Expansion Cabinet Templates
Figure D-10 SD32 and SD64, and I/O Expansion Cabinet Templates
Figure D-11 SD32 and SD64, and I/O Expansion Cabinet Templates
Figure D-12 SD32 and SD64, and I/O Expansion Cabinet Templates
Figure D-13 SD32 and SD64, and I/O Expansion Cabinet Templates
Figure D-14 SD32 and SD64, and I/O Expansion Cabinet Templates
Index A ac power voltage check, 108 wiring check, 101 ac power verification 4-wire PDCA, 103 5-wire PDCA, 103 AC0 Present LED, 118, 223 AC1 Present LED, 118, 223 acoustic noise specifications sound power level, 72 sound pressure level, 72 American Society of Heating, Refrigerating and Air-Conditioning Engineers, See ASHRAE ASHRAE Class 1, 59, 63, 74 attention LED, 222 B bezel attaching front bezel, 100 attaching rear bezel, 99 attaching side bezels, 94 blower bezels (See also "bezel"), 94 blower housings in
Index interference communications, 76 inventory check, 77 IP address default values, 121 LAN configuration screen, 121 setting private and customer LAN, 121 J JTAG utility for scan test JUST, 132 JUST JTAG utility for scan test, 132 K kick plates attaching to cabinet, 134 shown on cabinet, 134 L LAN port 0, 121 port 1, 121 status, 121 LED AC0 Present, 118, 223 AC1 Present, 118, 223 Attention, 222 HKP (housekeeping), 117, 222 Present, 117, 222 leveling feet attaching, 97 M MAC address, 121 moving the system
Index W wiring check, 101 wrist strap usage, 76 247