HP Integrity rx8640 and HP 9000 rp8440 Servers User Service Guide HP Part Number: AB297-9013B Published: November 2011 Edition: 7
© Copyright 2006, 2011 Hewlett-Packard Development Company, L.P. Legal Notices The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S.
Contents About This Document.....................................................................................8 Book Layout.............................................................................................................................8 Intended Audience....................................................................................................................8 Publishing History..........................................................................................................
Typical Power Dissipation and Cooling.................................................................................31 Acoustic Noise Specification...............................................................................................31 Air Flow............................................................................................................................32 3 Installing the System..................................................................................
Additional Notes on Console Selection............................................................................75 Configuring AC Line Status..................................................................................................76 Booting the Server ............................................................................................................76 Selecting a Boot Partition Using the MP ...........................................................................
Server Management Overview...............................................................................................123 Server Management Behavior................................................................................................123 Thermal Monitoring..........................................................................................................124 Fan Control.....................................................................................................................
Preliminary Procedures.....................................................................................................152 Removing the PCI Smart Fan Assembly................................................................................153 Replacing the PCI Smart Fan Assembly...............................................................................153 Removing and Replacing a PCI-X Power Supply........................................................................153 Preliminary Procedures......
About This Document This document covers the HP Integrity rx8640 and the HP 9000 rp8440 server systems. This document does not describe system software or partition configuration in any detail. For detailed information concerning those topics, see the nPartition Administrator's Guide.
Server Hardware Information: The following website offers more system information: http://www.hp.com/go/integrity_servers-docs. It provides HP nPartition server hardware management information, including site preparation, installation, and more. Windows Operating System Information: You can find information about administration of the Microsoft® Windows® operating system at the following Web sites, among others: • http://www.hp.com/go/windows-on-integrity-docs • http://www.microsoft.
HP contact information For the name of the nearest HP authorized reseller: • In the United States, see the HP US service locator webpage (http://welcome.hp.com/country/us/en/wwcontact.html). • In other locations, see the Contact HP worldwide (in English) webpage: http://welcome.hp.com/country/us/en/wwcontact.html. For HP technical support: • In the United States, for contact options see the Contact HP United States webpage: (http://welcome.hp.com/country/us/en/contact_us.
1 HP Integrity rx8640 Server Overview The HP Integrity rx8640 server and the HP 9000 rp8440 server are members of the HP business-critical computing platform family of mid-range, mid-volume servers, positioned between the HP Integrity rx7640, HP 9000 rp7440, and HP Integrity Superdome servers. IMPORTANT: The differences between the HP Integrity rx8640 and the HP 9000 rp8440 are identified in Chapter 7 (page 158).
Figure 1 Server (Front View With Bezel) Figure 2 Server (Front View Without Bezel) Removable Media Drives PCI Power Supplies Power Switch Hard Disk Drives Front OLR Fans Bulk Power Supplies The server has the following dimensions: • Depth: Defined by cable management constraints to fit into a standard 36-inch deep rack: 25.
26.7 inches from front rack column to core I/O card connector surface 30 inches overall package dimension, including 2.7 inches protruding in front of the front rack columns • Width: 17.5 inches, constrained by EIA standard 19-inch racks • Height: 17 U (29.55 inches), constrained by package density The mass storage section located in the front enables access to removable media drives without removal of the bezel. The mass storage bay accommodates two 5.25-inch removable media drives and up to four 3.
The cell boards are located on the right side of the server behind a removable side cover. For rack-mounted servers on slides, the rack front door requires removal if it is hinged on the right side of the rack. Removal allows unrestricted access to the server sides after sliding the server out for service. The two redundant core I/O cards are positioned vertically end-to-end at the rear of the chassis. Redundant line cords attach to the AC power receptacles at the bottom rear.
Figure 5 Cell Board The server has a 48 V distributed power system and receives the 48 V power from the system backplane board. The cell board contains DC-to-DC converters to generate the required voltage rails. The DC-to-DC converters on the cell board do not provide N+1 redundancy.
Central Processor Units The cell board can hold up to four CPU modules. Each CPU module can contain up to two CPU cores on a single die. Modules are populated in increments of one. On a cell board, the processor modules must be of the same family, type, and clock frequency. Mixing of different processors on a cell or a partition is not supported. See Table 1 for the load order that must be maintained when adding processor modules to the cell board.
Figure 7 Memory Subsystem DIMM DIMM DIMM DIMM Address/ Controller Buffer Buffer Buffer Address/ Controller Buffer Buffer DIMM DIMM Front Side Bus 1 CPU 2 To Quad 1 Address/Controller Buffers To Quad 0 Address/Controller Buffers QUAD 2 DIMM DIMM Buffer Buffer Buffer DIMM DIMM To Quad 2 Address/Controller Buffers To Quad 3 Address/Controller Buffers DIMM DIMM Address/ Controller Buffer QUAD 1 Buffer QUAD 0 QUAD 3 PDH Riser Board DIMM DIMM Buffer Address/ Controller Buffer DIMM DIMM
The server complex includes all hardware within an nPartition server: all cabinets, cells, I/O chassis, I/O devices and racks, management and interconnecting hardware, power supplies, and fans. A server complex can contain one or more nPartitions, enabling the hardware to function as a single system or as multiple systems. NOTE: Partition configuration information is available at the following website: http://www.hp.com/go/bizsupport See the nPartition Administrator's Guide for details.
Table 3 Removable Media Drive Path
Removable Media Path
Slot 0 media 0/0/0/2/1.x.0
Slot 1 media 1/0/0/2/1.x.0
1. x equals 2 for a DVD drive; x equals 3 for a DDS-4 DAT drive.
Table 4 Hard Disk Drive Path
Hard Drive Path
Slot 0 drive 0/0/0/2/0.6.0
Slot 1 drive 0/0/0/3/0.6.0
Slot 2 drive 1/0/0/2/0.6.0
Slot 3 drive 1/0/0/3/0.6.
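The path rules in Tables 3 and 4 can be expressed programmatically. The following Python sketch is illustrative only — it is not HP-supplied tooling, and the function and dictionary names are invented for this example:

```python
# Illustrative sketch of the device-path rules in Tables 3 and 4.
# The "x" term in a removable media path depends on the drive type:
# 2 for a DVD drive, 3 for a DDS-4 DAT drive.

MEDIA_SLOT_PREFIX = {0: "0/0/0/2/1", 1: "1/0/0/2/1"}  # slot -> path prefix

def media_path(slot, drive_type):
    """Build the hardware path for a removable media slot."""
    x = {"DVD": 2, "DDS-4": 3}[drive_type]
    return f"{MEDIA_SLOT_PREFIX[slot]}.{x}.0"

# Hard disk drive paths are fixed per slot (Table 4).
HARD_DRIVE_PATHS = {
    0: "0/0/0/2/0.6.0",
    1: "0/0/0/3/0.6.0",
    2: "1/0/0/2/0.6.0",
    3: "1/0/0/3/0.6.0",
}

print(media_path(0, "DVD"))    # 0/0/0/2/1.2.0
print(media_path(1, "DDS-4"))  # 1/0/0/2/1.3.0
```

A fragment like this can help when scripting inventory checks against `ioscan` output, though the real paths should always be confirmed on the system itself.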
You can remove the core I/O cards from the system as long as you shut down the partition for the core I/O card before removing the card. The hot-plug circuitry that enables this feature is located on the system backplane near the core I/O sockets. System Backplane to PCI-X Backplane Connectivity The PCI-X backplane uses two connectors for the SBA link bus and two connectors for the high-speed data signals and the manageability signals.
Table 6 PCI-X Slot Boot Paths Cell 1 Cell PCI Slot Ropes Path 1 1 8/9 1/0/8/1/0 1 2 10/11 1/0/10/1/0 1 3 12/13 1/0/12/1/0 1 4 14/15 1/0/14/1/0 1 5 6/7 1/0/6/1/0 1 6 4/5 1/0/4/1/0 1 7 2/3 1/0/2/1/0 1 8 1 1/0/1/1/0 The server supports two internal SBAs. Each SBA provides the control and interfaces for eight PCI-X slots. The interface is through the rope bus (16 ropes per SBA).
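The slot-to-rope mapping in Table 6 follows a simple pattern: the boot path is formed from the cell number and the first rope of the pair assigned to the slot. A hedged Python sketch (invented names, not HP tooling):

```python
# Rope pairs per PCI-X slot on a cell, per Table 6 (slot 8 uses the
# single rope 1 reserved for core I/O).
SLOT_ROPES = {1: (8, 9), 2: (10, 11), 3: (12, 13), 4: (14, 15),
              5: (6, 7), 6: (4, 5), 7: (2, 3), 8: (1,)}

def boot_path(cell, slot):
    """Derive the boot path for a PCI-X slot on the given cell."""
    return f"{cell}/0/{SLOT_ROPES[slot][0]}/1/0"

print(boot_path(1, 1))  # 1/0/8/1/0
print(boot_path(1, 8))  # 1/0/1/1/0
```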
Table 7 PCI-X Slot Types (continued)
I/O Partition Slot1 Maximum MHz Maximum Peak Bandwidth Ropes Supported Cards PCI Mode Supported
4 266 2.13 GB/s 014/015 3.3 V or 1.5 V PCI-X Mode 2
3 266 2.13 GB/s 012/013 3.3 V or 1.5 V PCI-X Mode 2
2 133 1.06 GB/s 010/011 3.3 V PCI or PCI-X Mode 1
1 133 1.06 GB/s 008/009 3.3 V PCI or PCI-X Mode 1
82 66 533 MB/s 001 3.3 V PCI or PCI-X Mode 1
7 133 1.06 GB/s 002/003 3.3 V PCI or PCI-X Mode 1
6 266 2.13 GB/s 004/005 3.3 V or 1.
The ropes in each I/O partition are distributed as follows:
• One PCI-X ASIC is connected to each I/O chip with a single rope capable of peak data rates of 533 MB/s (PCIX-66).
• Three PCI-X ASICs are connected to each I/O chip with dual ropes capable of peak data rates of 1.06 GB/s (PCIX-133).
• Four PCIe ASICs are connected to each I/O chip with dual fat ropes capable of peak data rates of 2.12 GB/s (PCIe x8).
In addition, each I/O chip provides an external single rope connection for the core I/O.
Table 8 PCIe Slot Types (continued)
I/O Partition Slot1 Maximum MHz Maximum Peak Bandwidth Ropes Supported Cards PCI Mode Supported
2 133 1.06 GB/s 010/011 3.3 V PCI or PCI-X Mode 1
1 133 1.06 GB/s 008/009 3.3 V PCI or PCI-X Mode 1
1. Each slot auto-selects the proper speed for the installed card, up to the maximum speed of the slot.
2. Placing high-speed cards into slower slots causes the card to be driven at the slower speed.
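The auto-select behavior described above amounts to taking the minimum of the card's and the slot's maximum speeds. A trivial illustrative helper (not HP code):

```python
# The slot drives an installed card at the lesser of the card's maximum
# speed and the slot's maximum speed.
def effective_speed_mhz(card_max_mhz, slot_max_mhz):
    return min(card_max_mhz, slot_max_mhz)

print(effective_speed_mhz(266, 133))  # a 266 MHz card in a 133 MHz slot runs at 133
print(effective_speed_mhz(66, 266))   # a 66 MHz card is driven at 66 in any slot
```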
Table 9 Core I/O Boot Paths (continued) Core I/O Card Device Path Description 1 SCSI Drive 1/0/0/3/0.6.0 Hard drive located in the lower right disk bay. 1 SCSI Drive 1/0/0/3/1 SCSI drive connected to the external SCSI Ultra3 connector on the core I/O card. Mass Storage (Disk) Backplane Internal mass storage connections to disks are routed on the mass storage backplane, which has connectors and termination logic. All hard disks are hot-plug but removable media disks are not hot-plug.
2 System Specifications This chapter describes the basic system configuration, physical specifications and requirements for the server. Dimensions and Weights This section provides dimensions and weights of the server and server components. Table 10 gives the dimensions and weights for a fully configured server. Table 10 Server Dimensions and Weights Standalone Packaged Height - Inches (centimeters) 29.55 (75.00) 86.50 (219.70) Width - Inches (centimeters) 17.50 (44.50) 40.00 (101.
Table 12 Example Weight Summary (continued)
Component Quantity Weight (each) Weight (total)
Power supply (BPS) 6 12 lb (5.44 kg) 72 lb (32.66 kg)
DVD drive 2 2.2 lb (1.0 kg) 4.4 lb (2.0 kg)
Hard disk drive 4 1.6 lb (0.73 kg) 6.40 lb (2.90 kg)
Chassis with skins and front bezel cover 1 131 lb (59.42 kg) 131 lb (59.42 kg)
Total weight 322.36 lb 146.
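Weight summaries like Table 12 are straightforward to recompute when planning a lift. The sketch below is illustrative arithmetic only (your configuration will differ) and sums just the rows shown on this page; the table's running total of 322.36 lb also includes components listed on the preceding page:

```python
# Recompute the per-row totals for the components shown above.
# (component, quantity, weight each in lb)
components = [
    ("Power supply (BPS)", 6, 12.0),
    ("DVD drive", 2, 2.2),
    ("Hard disk drive", 4, 1.6),
    ("Chassis with skins and front bezel cover", 1, 131.0),
]

subtotal_lb = sum(qty * each for _, qty, each in components)
print(f"Subtotal for these rows: {subtotal_lb:.1f} lb")  # 213.8 lb
```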
Install a protective earthing (PE) conductor that is identical in size, insulation material, and thickness to the branch-circuit supply conductors. The PE conductor must be green with yellow stripes. The earthing conductor described is to be connected from the unit to the building installation earth or if supplied by a separately derived system, at the supply transformer or motor-generator set grounding point. Circuit Breaker The Marked Electrical for the server is 15 amps per line cord.
Table 15 AC Power Requirements (continued)
Requirements Value Comments
Power factor correction >0.98 At all loads of 50%–100% of supply rating.
>0.95 At all loads of 25%–50% of supply rating.
Ground leakage current (mA) <3.
Table 17 Example ASHRAE Thermal Report Condition Voltage 208 Volts Typical Heat Release Airflow, nominal Airflow, maximum at 32° C Weight Description Watts cfm (m3/hr) lb kg Inches mm Minimum configuration 971 960 1631 178 81 h=29.55 750.57 w=17.50 444.50 d=30.00 762.00 h=29.55 750.57 w=17.50 444.50 d=30.00 762.00 h=29.55 750.57 w=17.50 444.50 d=30.00 762.
Bulk Power Supply Cooling Cooling for the bulk power supplies (BPS) is provided by two 60-mm fans contained within each BPS. Air flows into the front of the BPS and is exhausted out of the top of the power supply through upward-facing vents near the rear of the supply. The air is then ducted out of the rear of the chassis. PCI/Mass Storage Section Cooling Six 92-mm fans located between the mass storage devices and the PCI card cage provide airflow through these devices.
to operator positions within the computer room or when adding servers to computer rooms with existing noise sources. Air Flow The recommended server cabinet air intake temperature is between 20° C and 25° C (68° F and 77° F) at 960 CFM. Figure 9 illustrates the location of the inlet and outlet airducts on a single cabinet. Air is drawn into the front of the server and forced out the rear.
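The airflow figures above pair CFM with the metric m³/hr used in the ASHRAE thermal report. The conversion is a constant factor; the sketch below is illustrative arithmetic only:

```python
# Convert airflow from cubic feet per minute to cubic meters per hour.
# 1 ft^3 = 0.0283168 m^3, and there are 60 minutes in an hour.
CFM_TO_M3_PER_HR = 0.0283168 * 60  # ~1.699

def cfm_to_m3_per_hr(cfm):
    return cfm * CFM_TO_M3_PER_HR

print(round(cfm_to_m3_per_hr(960)))  # nominal airflow: ~1631 m^3/hr
```

This matches the pairing of 960 cfm with roughly 1631 m³/hr that appears in the thermal report data.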
3 Installing the System Inspect shipping containers when the equipment arrives at the site. Check equipment after the packing has been removed. This chapter discusses how to receive, inspect and install the server. Receiving and Inspecting the Server Cabinet This section contains information about receiving, unpacking and inspecting the server cabinet. NOTE: The server will ship in one of three different configurations.
Figure 10 Removing the Polystraps and Cardboard
3. Remove the corrugated wrap from the pallet.
4. Remove the packing materials.
CAUTION: Cut the plastic wrapping material off rather than pulling it off. Pulling the plastic covering off represents an electrostatic discharge (ESD) hazard to the hardware.
5. Remove the four bolts holding down the ramps, and remove the ramps.
Figure 11 Removing the Shipping Bolts and Plastic Cover 6. Remove the six bolts from the base that attaches the rack to the pallet. Figure 12 Preparing to Roll Off the Pallet WARNING! Be sure that the leveling feet on the rack are raised before you roll the rack down the ramp, and any time you roll the rack on the casters. Use caution when rolling the cabinet off the ramp. A single server in the cabinet weighs approximately 508 lb.
After unpacking the cabinet, examine it for damage that might have been obscured by the shipping container. If you discover damage, document the damage with photographs and contact the transport carrier immediately. If the equipment has any damage, the customer must obtain a damage claim form from the shipping representative. The customer must complete the form and return it to the shipping representative.
Power Management • PDUs • Cordsets • Rackmounted UPS System Management • Console switches • Flat panel/keyboards Data cables • CAT 5 cables • Fibre optic cables • SCSI cables • SCSI terminators Lifting the Server Cabinet Manually Use this procedure only if no HP approved lift is available. CAUTION: This procedure must only be performed by four qualified HP Service Personnel utilizing proper lifting techniques and procedures. 1. 2.
WARNING! Use caution when using the lifter. To avoid injury, because of the weight of the server, center the server on the lifter forks before raising it off the pallet. Always rack the server in the bottom of a cabinet for safety reasons. Never extend more than one server from the same cabinet while installing or servicing another server product. Failure to follow these instructions could result in the cabinet tipping over. 1. 2. 3.
Figure 15 Raising the Server Off the Pallet Cushions
6. Carefully roll the lifter and server away from the pallet.
7. Do not raise the server any higher than necessary when moving it over to the rack.
Table 19 Wheel Kit Packing List (continued) Part Number Description Quantity 0515-2478 M4 x 0.
5. Remove the front cushion only (Figure 17). Do not remove any other cushions until further instructed.
Figure 17 Removing Cushion from Front Edge of Server Rear Cushion Side Cushion Front Cushion
6. Open the wheel kit box and locate the two front casters. The front casters are shorter in length than the two rear casters. Each front caster is designed to fit only one corner of the server (right front caster and left front caster).
7. Remove two of the eight screws from the plastic pouch.
Figure 18 Attaching a Caster Wheel to the Server Front Casters 8. Attach the remaining front caster to the server using two more screws supplied in the plastic pouch. 9. Remove the rear cushion at the rear of the server. Do not remove the remaining cushions. 10. Mount the two rear casters to the server using the remaining four screws. 11. Obtain the plywood ramp from the wheel kit. 12. The ramp has two predrilled holes (Figure 19).
13. Remove the two side cushions from the server (Figure 20), and unfold the cardboard tray so that it lies flat on the pallet.
Figure 20 Removing Side Cushion from Server Ramp Side Cushion
14. Carefully roll the server off the pallet and down the ramp.
15. Obtain the caster covers from the wheel kit. Note that the caster covers are designed to fit on either side of the server.
16. Insert the slot on the caster cover into the front caster (Figure 21).
Figure 21 Securing Each Caster Cover to the Server Captive Screw Caster Covers Rear Casters Front Casters 17. Snap the bezel cover into place on the front of the server. Figure 22 shows the server cabinet with the wheel kit installed. Figure 22 Completed Wheel Kit Installation Attached Caster Cover Installing the Top and Side Covers This section describes the procedures for installing the top and side server covers.
Figure 23 Cover Locations Top Cover Side Cover Front Bezel
Removing the Top Cover
The following section describes the procedure for removing the top cover.
1. Connect to ground with a wrist strap.
2. Loosen the blue retaining screws securing the cover to the chassis (Figure 24).
3. Slide the cover toward the rear of the chassis.
4. Lift the cover up and away from the chassis.
5. Place the cover in a safe location.
Figure 24 Top Cover Detail Retaining Screws
Installing the Top Cover
The following section describes the procedure for installing the top cover.
1. Orient the cover according to its position on the chassis.
2. Slide the cover into position using a slow, firm pressure to properly seat the cover.
3. Tighten the blue retaining screws securing the cover to the chassis.
Removing the Side Cover
The following section describes the procedure for removing the side cover.
1. Connect to ground with a wrist strap.
Figure 25 Side Cover Detail Retaining Screw
3. Slide the cover from the chassis toward the rear of the system.
4. Place the cover in a safe location.
Installing the Side Cover
The following section describes the procedure for installing the side cover.
1. Orient the cover according to its position on the chassis.
2. Slide the cover into position using a slow, firm pressure to properly seat the cover.
3. Tighten the blue retaining screw securing the cover to the chassis.
This PDU might be referred to as a Relocatable Power Tap outside HP. The PDU installation kit contains the following: • PDU with cord and plug • Mounting hardware • Installation instructions Installing Additional Cards and Storage This section provides information on additional products ordered after installation and any dependencies for these add-on products.
3. Press the front locking latch to secure the disk drive in the chassis.
4. If the server OS is running, spin up the disk by entering one of the following commands:
#diskinfo -v /dev/rdsk/cxtxdx
#ioscan -f
Removable Media Drive Installation
The DVD drive or DDS-4 tape drives are located in the front of the chassis.
Figure 27 Removable Media Drive Location Removable Media Drives
If an upper drive is installed, remove it before installing a lower drive.
1. Remove the filler panel.
2.
Table 20 HP Integrity rx8640 Server PCI-X/PCIe I/O Cards Part Number Card Description A4926A Gigabit Ethernet (1000b-SX) A4929A Gigabit Ethernet (1000b-T) A5158A FCMS - Tachlite A5230A 10/100b-TX (RJ45) A5506B 4-port 10/100b-TX A5838A 2-port Ultra2 SCSI/2-Port 100b-T Combo A6386A Hyperfabric II A6749A 64-port Terminal MUX A6795A 2G FC Tachlite B A6825A Next Gen 1000b-T b A6826A1 2-port 2Gb FC B A6828A 1-port U160 SCSI B B A6829A 2-port U160 SCSI B B A6847A Next Gen 1000b-
Table 20 HP Integrity rx8640 Server PCI-X/PCIe I/O Cards (continued) Part Number Card Description HP-UX Windows Linux VMS 1 Emulex 1050DC Fibre Channel B 1 AB467A Emulex 1050D Fibre Channel B AB545A 4-Port 1000b-T Ethernet AD167A1 Emulex 4Gb/s B B AD168A1 Emulex 4 Gb/s DC B B AD193A 1-port 4Gb FC & 1-port GbE HBA PCI-X Bb B AD194A 2-port 4Gb FC & 2 port GbE HBA PCI-X Bb B AD278A 8-Port Terminal MUX AD279A 64-Port Terminal MUX AD307A iLO (USB/VGA/RMP) B B AD331A PCI/PC
Table 21 HP 9000 rp8440 Server PCI-X I/O Cards Part Number Card Description A4926A Gigabit Ethernet (1000b-SX) A4929A Gigabit Ethernet (1000b-T) A5158A FCMS - Tachlite A5159B 2–port FWD SCSI A5230A 10/100b-TX (RJ45) A5506B 4-port 10/100b-TX A5838A 2-port Ultra2 SCSI/2-Port 100b-T Combo A6386A1 Hyperfabric II A6749A 64-port Terminal MUX 1 B B A6795A 2G FC Tachlite B A6825A1 Next Gen 1000b-T b A6826A1 2-port 2Gb FC B A6828A 1-port U160 SCSI B A6829A 2-port U160 SCSI B A684
Table 21 HP 9000 rp8440 Server PCI-X I/O Cards (continued)
Part Number Card Description HP-UX
AD279A1 64-Port Terminal MUX
AD331A1 PCI/PCI-X 1–port 1000b-T Adapter b
AD332A1 PCI/PCI-X 1–port 1000b-SX Adapter b
J3525A1 2-port Serial (X25/FR/SDLC) n/a n/a n/a
• B — Supports Mass Storage Boot
• b — Supports LAN Boot
• Bb — Supports Mass Storage and LAN Boot
1. Available with Factory Integration
IMPORTANT: The above list of part numbers is current and correct as of September 2007.
The LOA card has specific slotting requirements that must be followed for full card functionality:
• Must be placed in a mode 1 PCI/PCI-X slot
• Must be placed in an I/O chassis with a functional core I/O card
• Only one LOA card is supported in each partition
NOTE: HP recommends that you place the LOA card in the lowest numbered slot possible.
6. Press the attention button. The green power LED will start to blink.
Figure 28 PCI I/O Slot Details Manual Release Latch Closed Manual Release Latch Open OL* Attention Button Power LED (green) Attention LED (yellow)
7. Wait for the green power LED to stop blinking and turn on solid.
8. Check for errors in the hotplugd daemon log file (default: /var/adm/hotplugd.log).
Figure 29 PCI/PCI-X Card Location PCI/PCI-X Cards IMPORTANT: Some PCI I/O cards, such as the A6869B VGA/USB PCI card, cannot be added or replaced online (while Windows remains running). For these cards, you must shut down Windows on the nPartition before performing the card replacement or addition. See the section on Shutting Down nPartitions and Powering off Hardware Components in the appropriate service guide. 1. 2. 3. 4. 5. 6. 7.
Reference URL There are many features available for HP Servers at this website including links to download Windows Drivers. HP Servers Technical Support http://www.hp.com/support/itaniumservers System Console Selection Each operating system requires that the correct console type be selected from the firmware selection menu. The following section describes how to determine the correct console device.
VGA Consoles Any device that has a PCI section in its path and does not have a UART section will be a VGA device. If you require a VGA console, choose the device and unmark all others. Figure 30 shows that a VGA device is selected as the console. Interface Differences Between Itanium-based Systems Each Itanium-based system has a similar interface with minor differences. Some devices may not be available on all systems depending on system design or installed options.
Figure 31 Voltage Reference Points for IEC-320 C19 Plug
IMPORTANT: Perform these measurements for every power cord that plugs into the server.
1. Measure the voltage between L1 and L2. This is considered to be a phase-to-phase measurement in North America. In Europe and certain parts of Asia-Pacific, this measurement is referred to as a phase-to-neutral measurement. The expected voltage should be between 200–240 V AC regardless of the geographic region.
2. Measure the voltage between L1 and ground.
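The acceptance criterion for the phase measurement above can be captured in a small check. This is an illustrative helper for recording readings, not an HP diagnostic:

```python
# A reading between L1 and L2 (phase-to-phase in North America,
# phase-to-neutral in Europe and parts of Asia-Pacific) is acceptable
# when it falls in the 200-240 V AC window, regardless of region.
def l1_l2_in_range(volts_ac):
    return 200.0 <= volts_ac <= 240.0

print(l1_l2_in_range(208.0))  # True  (typical North American reading)
print(l1_l2_in_range(120.0))  # False (do not proceed; investigate wiring)
```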
Figure 32 Safety Ground Reference Check — Single Power Source WARNING! SHOCK HAZARD Risk of shock hazard while testing primary power. Use properly insulated probes. Be sure to replace access cover when finished testing primary power. 1. Measure the voltage between A0 and A1 as follows: 1. Take the AC voltage down to the lowest scale on the volt meter. 2. Insert the probe into the ground pin for A0. 3. Insert the other probe into the ground pin for A1. 4. Verify that the measurement is between 0-5 V AC.
Figure 33 Safety Ground Reference Check — Dual Power Source WARNING! SHOCK HAZARD Risk of shock hazard while testing primary power. Use properly insulated probes. Be sure to replace access cover when finished testing primary power. 1. Measure the voltage between A0 and A1 as follows: 1. Take the AC voltage down to the lowest scale on the volt meter. 2. Insert the probe into the ground pin for A0. 3. Insert the other probe into the ground pin for A1. 4. Verify that the measurement is between 0-5 V AC.
4. Measure the voltage between A1 and B1 as follows: 1. Take the AC voltage down to the lowest scale on the volt meter. 2. Insert the probe into the ground pin for A1. 3. Insert the other probe into the ground pin for B1. 4. Verify that the measurement is between 0-5 V AC. If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet.
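The ground-reference procedure repeats the same pass/fail test for each pin pair: every measurement must be below 5 V AC, and any reading of 5 V or greater must be escalated before the power cords are plugged in. A hedged sketch of that rule (invented names, not HP tooling):

```python
# Evaluate safety-ground reference readings between receptacle ground
# pins (e.g. A0-A1, A0-B0, A1-B1). All must be below 5 V AC before the
# power cords may be plugged into the cabinet.
def grounds_within_limit(readings_v_ac, limit=5.0):
    """readings_v_ac: dict mapping (pin, pin) tuples to measured V AC."""
    return all(v < limit for v in readings_v_ac.values())

measured = {("A0", "A1"): 0.4, ("A0", "B0"): 1.2, ("A1", "B1"): 0.9}
print(grounds_within_limit(measured))             # True: safe to proceed
print(grounds_within_limit({("A0", "B0"): 6.3}))  # False: escalate
```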
1. 2. For locking type receptacles, line up the key on the plug with the groove in the receptacle. Push the plug into the receptacle and rotate to lock the connector in place. WARNING! Do not set site AC circuit breakers serving the processor cabinets to ON before verifying that the cabinet has been wired into the site AC power supply correctly. Failure to do so can result in injury to personnel or damage to equipment when AC power is applied to the cabinet. 8. Set the site AC circuit breaker to ON. 9.
Figure 36 Distribution of Input Power for Each Bulk Power Supply WARNING! Voltage is present at various locations within the server whenever a power source is connected. This voltage is present even when the main power switch is in the off position. To completely remove power, all power cords must be removed from the server. Failure to observe this warning could result in personal injury or damage to equipment. CAUTION: Do not route data and power cables together in the same cable management arm.
IMPORTANT: The minimum supported N+1 BPS configuration for one cell board must have BPS slots 0, 1, and 3 populated. When selecting a single power source, the power cords are connected into A0 and A1.
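The N+1 requirement stated in the IMPORTANT note can be encoded directly: for one cell board, BPS slots 0, 1, and 3 must all be populated. Illustrative check only, not HP firmware logic:

```python
# Minimum supported N+1 bulk power supply (BPS) configuration for one
# cell board: slots 0, 1 and 3 populated (per the note above).
REQUIRED_BPS_SLOTS = {0, 1, 3}

def n_plus_1_ok(populated_slots):
    return REQUIRED_BPS_SLOTS.issubset(populated_slots)

print(n_plus_1_ok({0, 1, 3}))     # True
print(n_plus_1_ok({0, 2, 4, 5}))  # False: slots 1 and 3 are required
```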
Figure 37 Four Cell Line Cord Anchor (rp8400, rp8420, rp8440, rx8620, rx8640)
2. Tighten the captive thumbscrews to secure the line cord anchor to the chassis.
3. Weave the power cables through the line cord anchor. Leave enough slack that the plugs can be disconnected from the receptacles without removing the cords from the line cord anchor.
4. Use the supplied Velcro straps to attach the cords to the anchor.
External connections to the core I/O board include the following: • One Ultra 320 (320 MB/sec) 68-pin SCSI port for connection to external SCSI devices by a high-density cable interconnect (VHDCI) connector. • One RJ-45 style 10Base-T/100Base-T/1000Base-T system LAN connector. This LAN uses standby power and is active when AC is present and the front panel power switch is OFF. • One RJ-45 style 10Base-T/100Base-T MP LAN connector.
Connecting the CE Tool to the Local RS-232 Port on the MP This connection enables direct communications with the MP. Only one window can be created on the CE Tool to monitor the MP. When enabled, it provides direct access to the MP and to any partition. Use the following procedure to connect the CE Tool to the RS-232 Local port on the MP: 1.
2. If not already done, power on the serial display device. The preferred tool is the CE Tool running Reflection 1. To power on the MP, set up a communications link, and log in to the MP: 1. Apply power to the server cabinet. On the front of the server, the MP Status LED will illuminate yellow until the MP is booted successfully. Once the MP is booted successfully, and no other cabinet faults exist, the LED will change to solid green. See Figure 40. Figure 40 Front Panel Display 2.
3. Log in to the MP: 1. Enter Admin at the login prompt. (This term is case-sensitive.) It takes a few moments for the MP> prompt to appear. If the MP> prompt does not appear, verify that the laptop serial device settings are correct: 8 bits, no parity, 9600 baud, and None for both Receive and Transmit. Then try again. 2. Enter Admin at the password prompt. (This term is case-sensitive.
Figure 43 The lc Command Screen MP:CM> lc This command modifies the LAN parameters. Current configuration of MP customer LAN interface MAC address : 00:12:79:b4:03:1c IP address : 15.11.134.222 0x0f0b86de Hostname : metro-s Subnet mask : 255.255.248.0 0xfffff800 Gateway : 15.11.128.1 0x0f0b8001 Status : UP and RUNNING Link : Connected 100Mb Half Duplex Do you want to modify the configuration for the MP LAN? (Y/[N]) q 3. NOTE: The value in the IP address field has been set at the factory.
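The lc screen prints each dotted-quad value with its 32-bit hexadecimal form (for example, 15.11.134.222 as 0x0f0b86de). That conversion can be reproduced when cross-checking a transcribed configuration; this is an illustrative fragment, not part of the MP firmware:

```python
# Convert a dotted-quad IPv4 address or netmask to the 32-bit hex form
# shown on the MP's lc configuration screen.
def dotted_to_hex(addr):
    a, b, c, d = (int(octet) for octet in addr.split("."))
    return f"0x{(a << 24) | (b << 16) | (c << 8) | d:08x}"

print(dotted_to_hex("15.11.134.222"))  # 0x0f0b86de (IP address)
print(dotted_to_hex("255.255.248.0"))  # 0xfffff800 (subnet mask)
```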
11. A screen similar to Figure 44 will display allowing verification of the settings. Figure 44 The ls Command Screen To return to the MP Main menu, enter ma. To exit the MP, enter x at the MP Main Menu. Accessing the Management Processor via a Web Browser Web browser access is an embedded feature of the management processor (MP). The Web browser enables access to the server via the LAN port on the core I/O card. MP configuration must be done from an ASCII console.
Figure 45 Example sa Command
5. Launch a Web browser on the same subnet using the IP address for the MP LAN port.
6. Click anywhere on the Zoom In/Out title bar (Figure 46) to generate a full screen MP window.
Figure 46 Browser Window (Zoom In/Out Title Bar)
7. Select the emulation type you want to use.
8. Log in to the MP when the login window appears.
Access to the MP via a Web browser is now possible.
After logging in to the MP, verify that the MP detects the presence of all the cells installed in the server cabinet. It is important for the MP to detect the cell boards. If it does not, the partitions will not boot. To determine if the MP detects the cell boards: 1. At the MP prompt, enter cm. This displays the Command Menu. Among other things, the Command Menu enables you to view or modify the configuration and to look at utilities controlled by the MP.
2. Select the appropriate console device (deselect unused devices):
a. Choose the “Boot option maintenance menu” choice from the main Boot Manager Menu.
b. Select the Console Output, Input or Error devices menu item for the device type you are modifying:
• “Select Active Console Output Devices”
• “Select Active Console Input Devices”
• “Select Active Console Error Devices”
c. Available devices will be displayed for each menu selection.
are chosen, the OS may fail to boot or may boot with output directed to the wrong location. Therefore, any time new potential console devices are added to the system, or any time NVRAM on the system is cleared, console selections should be reviewed to ensure that they are correct. Configuring AC Line Status The MP utilities can detect whether power is applied to each of the AC input cords for the server by sampling the status of the bulk power supplies.
1. A window showing all activity in the complex. Following the installation procedure in this document causes a window to be open at startup. To display activity for the complex:
a. Open a separate Reflection window and connect to the MP.
b. From the MP Main Menu, select the VFP command with the s option.
2. A window showing activity for a single partition. To display activity for each partition as it powers on:
a. Open a separate Reflection window and connect to the MP.
Once the parameters have been verified, enter x to return to the EFI Main Menu.
Booting HP-UX Using the EFI Shell
If Instant Ignition was ordered, HP-UX will have been installed in the factory at the Primary Path address. If HP-UX is at a path other than the Primary Path, do the following:
1. Type cm to enter the Command Menu from the Main Menu.
2. Enter bo at the Command Menu prompt (MP:CM> bo). This command boots the selected partition. Select a partition to boot.
Table 24 Factory-Integrated Installation Checklist
(Columns: Procedure, In-process Initials, Comments, Completed Initials, Comments)
Obtain LAN information
Verify site preparation
Site grounding verified
Power requirements verified
Check inventory
Inspect shipping containers for damage
Unpack cabinet
Allow proper clearance
Cut polystrap bands
Remove cardboard top cap
Remove corrugated wrap from the pallet
Remove four bolts holding down the ramps and remove the ramps
Remove antistatic bag
Check for damage (exterior an
Table 24 Factory-Integrated Installation Checklist (continued)
Log in to MP
Set LAN IP address on MP
Connect customer console
Set up network on customer console
Verify LAN connection
Verify presence of cells
Power on cabinet (48 V)
Verify system configuration and set boot parameters
Set automatic system restart
Boot partitions
Configure remote login (if required)
Verify remote link (if required)
4 Booting and Shutting Down the Operating System This chapter presents procedures for booting an operating system (OS) on an nPartition (hardware partition) and procedures for shutting down the OS. Operating Systems Supported on Cell-based HP Servers HP supports nPartitions on cell-based HP 9000 servers and cell-based HP Integrity servers. The following list describes the OSes supported on cell-based servers based on the HP sx2000 chipset.
See “Booting and Shutting Down Linux” (page 106) for details. NOTE: On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter, which determines how firmware may interleave memory residing on the cell. The supported and recommended CLM setting for the cells in an nPartition depends on the OS running in the nPartition. Some OSes support using CLM, and some do not. For details on CLM support for the OS you will boot in an nPartition, see the booting section for that OS.
The EFI Boot Configuration menu provides the Add a Boot Option, Delete Boot Option(s), and Change Boot Order menu items. (If you must add an EFI Shell entry to the boot options list, use this method.) To save and restore boot options, use the EFI Shell variable command. The variable -save file command saves the contents of the boot options list to the specified file on an EFI disk partition. The variable -restore file command restores the boot options list from the specified file that was previously saved.
• Autoboot Setting You can configure the autoboot setting for each nPartition either by using the autoboot command at the EFI Shell, or by using the Set Auto Boot TimeOut menu item at the EFI Boot Option Maintenance menu. To set autoboot from HP-UX, use the setboot command. • ACPI Configuration Value—HP Integrity Server OS Boot On cell-based HP Integrity servers you must set the proper ACPI configuration for the OS that will be booted on the nPartition.
To change the nPartition behavior when an OS is shut down and halted, use either the acpiconfig enable softpowerdown EFI Shell command or the acpiconfig disable softpowerdown command, and then reset the nPartition to make the ACPI configuration change take effect.
To display or set the boot mode for an nPartition on a cell-based HP Integrity server, use any of the following tools as appropriate. See Installing and Managing HP-UX Virtual Partitions (vPars), Sixth Edition, for details, examples, and restrictions. ◦ parconfig EFI shell command The parconfig command is a built-in EFI shell command. See the help parconfig command for details.
noninterleaved memory, then use Partition Manager or the parstatus command to confirm the CLM configuration details. To set the CLM configuration, use Partition Manager or the parmodify command. For details, see the nPartition Administrator's Guide (http://www.hp.com/go/virtualization-manuals). Adding HP-UX to the Boot Options List This section describes how to add an HP-UX entry to the system boot options list. You can add the \EFI\HPUX\HPUX.
4. Exit the console and management processor interfaces if you are finished using them. To exit the EFI environment press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Booting HP-UX This section describes the following methods of booting HP-UX: • “Standard HP-UX Booting” (page 88) — The standard method to boot HP-UX. Typically, this results in booting HP-UX in multiuser mode.
Main Menu: Enter command or menu > path
Primary Boot Path:      0/0/2/0/0.13
                        0/0/2/0/0.d (hex)
HA Alternate Boot Path: 0/0/2/0/0.14
                        0/0/2/0/0.e (hex)
Alternate Boot Path:    0/0/2/0/0.0
                        0/0/2/0/0.0 (hex)
Main Menu: Enter command or menu >
3. Boot the device by using the boot command from the BCH interface. You can issue the boot command in any of the following ways:
• boot
Issuing the boot command with no arguments boots the device at the primary (PRI) boot path.
4. Exit the console and management processor interfaces if you are finished using them. To exit the BCH environment, press ^B (control B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter x at the Main Menu. Procedure 3 HP-UX Booting (EFI Boot Manager) From the EFI Boot Manager menu, select an item from the boot options list to boot HP-UX using that boot option. The EFI Boot Manager is available only on HP Integrity servers.
3. At the EFI Shell environment, issue the map command to list all currently mapped bootable devices. The bootable file systems of interest typically are listed as fs0:, fs1:, and so on. 4. Access the EFI System Partition for the device from which you want to boot HP-UX (fsX: where X is the file system number). For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
Procedure 5 Single-User Mode HP-UX Booting (BCH Menu) From the BCH Menu, you can boot HP-UX in single-user mode by issuing the boot command, stopping at the ISL interface, and issuing hpux loader options. The BCH Menu is available only on HP 9000 servers. 1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX in single-user mode. Log in to the management processor and enter co to access the Console list. Select the nPartition console.
Example 1 Single-User HP-UX Boot
ISL revision A.00.42 JUN 19, 1999
ISL> hpux -is boot /stand/vmunix
Boot
: disk(0/0/2/0/0.13.0.0.0.0.0;0)/stand/vmunix
8241152 + 1736704 + 1402336 start 0x21a0e8
....
INIT: Overriding default level with level 's'
INIT: SINGLE USER MODE
INIT: Running /sbin/sh
#
4. Exit the console and management processor interfaces if you are finished using them.
Seconds left till autoboot - 9 [User Types a Key to Stop the HP-UX Boot Process and Access the HPUX.EFI Loader ] Type ’help’ for help HPUX> 5. At the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>), enter the boot -is vmunix command to boot HP-UX (the /stand/vmunix kernel) in single-user (-is) mode. HPUX> boot -is vmunix > System Memory = 4063 MB loading section 0 ................................................... (complete) loading section 1 ........
4. Exit the console and management processor interfaces if you are finished using them. To exit the BCH environment, press ^B (Control B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter x at the Main Menu. Procedure 8 LVM-Maintenance Mode HP-UX Booting (EFI Shell) From the EFI Shell environment, boot in LVM-maintenance mode by stopping the boot process at the HPUX.
NOTE: On HP rx7620, rx7640, rx8620, and rx8640 servers, you can configure the nPartition behavior when an OS is shut down and halted (shutdown -h or shutdown -R -H). The two options are to have hardware power off when the OS is halted, or to have the nPartition be made inactive (all cells are in a boot-is-blocked state). The normal behavior for HP-UX shut down and halt is for the nPartition to be made inactive. For details, see “ACPI Softpowerdown Configuration—OS Shutdown Behavior” (page 84).
Booting and Shutting Down HP OpenVMS I64
This section presents procedures for booting and shutting down HP OpenVMS I64 on cell-based HP Integrity servers and procedures for adding HP OpenVMS to the boot options list.
• To determine whether the cell local memory (CLM) configuration is appropriate for HP OpenVMS, see “HP OpenVMS I64 Support for Cell Local Memory” (page 97).
• To add an HP OpenVMS entry to the boot options list, see “Adding HP OpenVMS to the Boot Options List” (page 97).
To add an HP OpenVMS boot option when logged in to OpenVMS, use the @SYS$MANAGER:BOOT_OPTIONS.COM command. 1. Access the EFI Shell environment. Log in to the management processor, and enter CO to access the system console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
Booting HP OpenVMS To boot HP OpenVMS I64 on a cell-based HP Integrity server use either of the following procedures. • “Booting HP OpenVMS (EFI Boot Manager)” (page 99) • “Booting HP OpenVMS (EFI Shell)” (page 99) CAUTION: ACPI Configuration for HP OpenVMS I64 Must Be default On cell-based HP Integrity servers, to boot the HP OpenVMS OS, an nPartition ACPI configuration value must be set to default.
2. At the EFI Shell environment, issue the map command to list all currently mapped bootable devices. The bootable file systems of interest typically are listed as fs0:, fs1:, and so on. 3. Access the EFI System Partition for the device from which you want to boot HP OpenVMS (fsX:, where X is the file system number). For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
2. At the OpenVMS command line (DCL) issue the @SYS$SYSTEM:SHUTDOWN command and specify the shutdown options in response to the prompts given.
IMPORTANT: Microsoft Windows supports using CLM on cell-based HP Integrity servers. For best performance in an nPartition running Windows, HP recommends that you configure the CLM parameter to 100 percent for each cell in the nPartition. To check CLM configuration details from an OS, use Partition Manager or the parstatus command.
(EFI Shell directory listing output omitted; the listing includes the Boot0001 file)
fs0:\>
4. At the EFI Shell environment, issue the \MSUtil\nvrboot.efi command to launch the Microsoft Windows boot options utility.
fs0:\> msutil\nvrboot
NVRBOOT: OS Boot Options Maintenance Tool [Version 5.2.3683]
1. SUSE SLES 9
2. HP-UX
* 3. Primary Boot: 0/0/1/0/0.2
4.
NOTE: Microsoft Windows Booting on HP Integrity Servers
The recommended method for booting Windows is to use the EFI Boot Manager menu to choose a Windows entry from the boot options list. Using the ia64ldr.efi Windows loader from the EFI Shell is not recommended.
Procedure 15 Windows Booting
From the EFI Boot Manager menu, select an item from the boot options list to boot Windows using that boot option. The EFI Boot Manager is available only on HP Integrity servers.
Shutting Down Microsoft Windows You can shut down the Windows OS on HP Integrity servers using the Start menu or the shutdown command. CAUTION: Do not shut down Windows using Special Administration Console (SAC) restart or shutdown commands under normal circumstances. Issuing restart or shutdown at the SAC> prompt causes the system to restart or shut down immediately and can result in the loss of data. Instead, use the Windows Start menu or the shutdown command to shut down without loss of data.
3. Issue the shutdown command and the appropriate options to shut down the Windows Server 2003 on the system. You have the following options when shutting down Windows: • To shut down Windows and reboot: shutdown /r Alternatively, you can select the Start —> Shut Down action and select Restart from the drop-down menu.
is the cell number) or the specified nPartition (-p#, where # is the nPartition number). For details, see the nPartition Administrator's Guide (http://www.hp.com/go/virtualization-manuals). To display CLM configuration details from the EFI Shell on a cell-based HP Integrity server, use the info mem command.
• bcfg boot mv #a #b — Move the item number specified by #a to the position specified by #b in the boot options list.
• bcfg boot add # file.efi "Description" — Add a new boot option to the position in the boot options list specified by #. The new boot option references file.efi and is listed with the title specified by Description. For example, bcfg boot add 1 \EFI\redhat\elilo.efi "Red Hat Enterprise Linux" adds a Red Hat Enterprise Linux item as the first entry in the boot options list.
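The bcfg operations above amount to simple edits of a 1-indexed list. The following toy model (hypothetical, for illustration only — it is not HP or EFI code) makes the mv and add semantics concrete:

```python
# Toy model of the EFI bcfg boot-options list (1-indexed positions),
# illustrating the mv/add semantics described above. This is an
# illustrative sketch, not part of any EFI implementation.
class BootOptions:
    def __init__(self, entries):
        self.entries = list(entries)          # (file_path, description) pairs

    def dump(self):                           # bcfg boot dump
        return list(self.entries)

    def mv(self, a, b):                       # bcfg boot mv #a #b
        item = self.entries.pop(a - 1)        # remove item at position #a
        self.entries.insert(b - 1, item)      # reinsert at position #b

    def add(self, pos, path, desc):           # bcfg boot add # file.efi "Desc"
        self.entries.insert(pos - 1, (path, desc))

opts = BootOptions([("\\EFI\\HPUX\\HPUX.EFI", "HP-UX")])
opts.add(1, "\\EFI\\redhat\\elilo.efi", "Red Hat Enterprise Linux")
print(opts.dump()[0][1])  # the Red Hat entry is now first in the list
```

Because positions are 1-indexed, `add 1` places the new entry at the head of the list, matching the Red Hat example above.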
\EFI\redhat\elilo.efi \EFI\redhat\elilo.conf By default the ELILO.EFI loader boots Linux using the kernel image and parameters specified by the default entry in the elilo.conf file on the EFI System Partition for the boot device. To interact with the ELILO.EFI loader, interrupt the boot process (for example, type a space) at the ELILO boot prompt. To exit the ELILO.EFI loader, use the exit command.
Use either of the following methods to boot SuSE Linux Enterprise Server: • Select a SuSE Linux Enterprise Server entry from the EFI Boot Manager menu. To load the SuSE Linux Enterprise Server OS at the EFI Boot Manager menu, choose its entry from the list of boot options. Choosing a Linux entry from the boot options list boots the OS using ELILO.EFI loader and the elilo.conf file. • Initiate the ELILO.EFI Linux loader from the EFI Shell.
On cell-based HP Integrity servers, this either powers down server hardware or puts the nPartition into a shutdown for reconfiguration state. Use the PE command at the management processor Command Menu to manually power on or power off server hardware, as needed. -r Reboot after shutdown. -c Cancel an already running shutdown. time When to shut down (required).
5 Server Troubleshooting This chapter contains tips and procedures for diagnosing and correcting problems with the server and its customer replaceable units (CRUs). Information about the various status LEDs on the server is also included. Common Installation Problems The following sections contain general procedures to help you locate installation problems. CAUTION: Do not operate the server with the top cover removed for an extended period of time.
The Server Powers On But Fails Power-On Self Test
Use this checklist when the server fails power-on self test (POST):
a. Check for error messages on the system console.
b. Check for fault LEDs.
c. Check for error messages in the MP logs.
Server LED Indicators
The server has LEDs that indicate system health. This section defines those LEDs.
Front Panel LEDs
There are seven (7) LEDs located on the front panel.
Table 25 Front Panel LEDs (continued) LED Status Description Green (flashing) Cell power Off Green (solid) Cell power On Amber (flashing) Cell fault warning. Check for: • Cell latches not latched • LPM not ready • Cell VRMs reporting not good or overtemp • Cell fan slow/failed Red (solid) Cell fault.
Table 26 BPS LEDs (continued) LED Indication Description Blink Yellow BPS in standby or run state and warnings present but no faults Yellow BPS in standby state and recoverable faults present but no non-recoverable faults Blink Red BPS state may be unknown, non-recoverable faults present Red This LED state is not used Off BPS fault or failure, no power cords installed or no power to chassis PCI-X Power Supply LEDs There are two active LEDs on the PCI-X power supply.
Figure 53 Fan LED Locations PCI I/O Fan LED Front OLR fan LED Rear OLR fan LED Table 28 Front, Rear, and I/O Fan LEDs LED Driven By State Description Fan Status Fan Solid Green Normal Flash Yellow Predictive Failure Flash Red Failed Off No Power OL* LEDs Cell Board LEDs There is one green power LED located next to each ejector on the cell board in the server that indicates the power is good.
Figure 54 Cell Board LED Locations Voltage Margin Active (Red) Standby (Green) BIB (Green) SM (Green) Manageability Fabric (Green) PDHC Heartbeat (Green) V3P3 Standby (Green) V12 Standby (Green) Cell Power (Green) Attention (Yellow) Cell Power (Green) Attention (Yellow) Table 29 Cell Board OL* LED Indicators Location LED On cell board Power (located in the server cabinet) Attention Driven by State Description Cell LPM On Green 3.3 V Standby and Cell_Power_Good Off 3.3 V Standby off, or 3.
Figure 55 PCI OL* LED Locations Slot Attention (Yellow) Slot Power (Green) Card Divider Table 30 OL* LED States State Power (Green) Attention (Yellow) Normal operation, slot power on On Off Slot selected, slot power on On Flashing Slot needs attention, slot power on On On Slot available, slot power off Off Off Ready for OL*, slot power off Off Flashing Fault detected, slot power off Off On Slot powering down or up Flashing Off Core I/O LEDs The core I/O LEDs are located on the bu
Figure 56 Core I/O Card Bulkhead LEDs SCSI Term SCSI LVD ATTN Power 10=OFF/100=GRN/1000=ORNG Act/Link Locate Reset 10=OFF/100=ON Act/Link Active MP Power Table 31 Core I/O LEDs LED (as silk-screened on the bulkhead) State Description SCSI TRM On Green SCSI termpower is on SCSI LVD On Green SCSI LVD mode (on = LVD, off = SE) ATTN On Yellow PCI attention PWR On Green I/O power on SYS LAN 10 BT On Green SYS LAN in 10 BT mode SYS LAN 100 BT On Green SYS LAN in 100 BT mode SYS LAN 1Gb
Table 31 Core I/O LEDs (continued)
LED (as silk-screened on the bulkhead) | State | Description
SYS LAN LINK | On Green | SYS LAN link is OK
Locate | On Blue | Locator LED
Reset | On Red | Indicates that the MP is being reset
MP LAN 10 BT | On Green | MP LAN in 10 BT mode
MP LAN 100 BT | On Green | MP LAN in 100 BT mode
MP LAN ACT | On Green | Indicates MP LAN activity
MP LAN LINK | On Green | MP LAN link is OK
Active | On Green | This core I/O is managing the system
MP Power | On Green | Indicates standby power is on
Table 32 Core I/O Buttons (continued) Button Identification (as Location silk-screened on the bulkhead) Function NOTE: If the MP RESET button is held for longer than five seconds, it will clear the MP password and reset the LAN, RS-232 (serial port), and modem port parameters to their default values. LAN Default Parameters • IP Address - 192.168.1.1 • Subnet mask - 255.255.255.0 • Default gateway - 192.168.1.
Table 33 Disk Drive LEDs (continued)
Activity LED | Status LED | Flash Rate | Description
Green | Off | Flutter at rate of I/O activity | Disk activity
Off | Yellow | Flashing at 1 Hz or 2 Hz | Predictive failure, needs immediate investigation
Off | Yellow | Flashing at 0.5 Hz or 1 Hz | Operator inducing manually
Off | Yellow | Steady | Module fault, critical
Off | Off | LEDs off | Unit not powered or installed
Interlock Switches
There are three interlock switches located in the server.
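The activity/status combinations in Table 33 above can be folded into a simple lookup. This is an illustrative sketch only, not service firmware; the key strings are hypothetical names chosen for this example.

```python
# Illustrative decoder for the disk drive LED states in Table 33 above.
# Keys are (activity_led, status_led, flash_rate) tuples; the string
# values for each field are example names, not firmware identifiers.
DISK_LED_STATES = {
    ("green", "off", "flutter"):    "Disk I/O activity",
    ("off", "yellow", "1-2 Hz"):    "Predictive failure, needs immediate investigation",
    ("off", "yellow", "0.5-1 Hz"):  "Operator inducing manually",
    ("off", "yellow", "steady"):    "Module fault, critical",
    ("off", "off", "off"):          "Unit not powered or installed",
}

def decode_disk_leds(activity, status, flash):
    """Map an observed LED combination to the Table 33 description."""
    return DISK_LED_STATES.get((activity, status, flash), "Unknown combination")

print(decode_disk_leds("off", "yellow", "steady"))  # Module fault, critical
```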
Server Management Overview Server management consists of four basic functional groups: • Chassis management • Chassis logging • Console and session redirection • Service access Chassis Management Chassis management consists of control and sensing the state of the server subsystems: • Control and sensing of bulk power • Control and sensing of DC-to-DC converters • Control and sensing of fans • Control of the front panel LEDs • Sensing temperature • Sensing of the power switch • Sensing c
Thermal Monitoring The manageability firmware is responsible for monitoring the ambient temperature in the server and taking appropriate action if this temperature becomes too high. To this end, the ambient temperature of the server is broken into four ranges: normal, overtemp low (OTL), overtemp medium (OTM), and overtemp high (OTH). Figure 59 shows the actions taken at each range transition. Actions for increasing temperatures are shown on the left; actions for decreasing temps are shown on the right.
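The four-range logic described above can be sketched as a simple classifier. The numeric thresholds below are placeholders only — the actual OTL/OTM/OTH trip points are model-specific firmware values not given in this text — so treat them as assumptions:

```python
# Sketch of the ambient-temperature range logic described above.
# The numeric thresholds are illustrative placeholders, NOT the real
# trip points, which are model-specific firmware values.
OTL_C, OTM_C, OTH_C = 32, 38, 45  # assumed range boundaries in degrees C

def ambient_range(temp_c):
    """Classify an ambient temperature reading into the four ranges."""
    if temp_c < OTL_C:
        return "normal"
    if temp_c < OTM_C:
        return "OTL"   # overtemp low
    if temp_c < OTH_C:
        return "OTM"   # overtemp medium
    return "OTH"       # overtemp high

print(ambient_range(25))  # normal
print(ambient_range(40))  # OTM
```

In the real firmware, each range transition triggers the actions shown in Figure 59, with different actions for rising and falling temperatures.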
Power Control If active, the manageability firmware is responsible for monitoring the power switch on the front panel. Setting this switch to the ON position is a signal to the MP to turn on 48 V DC power to the server. The PE command can also be used to send this signal. This signal does not always generate a transition to the powered state.
CAUTION: Instructions for updating the firmware are contained in the firmware release notes for each version of firmware. The procedure must be followed exactly for each firmware update; otherwise, the system could be left in an unbootable state. Figure 60 should not be used as an upgrade procedure and is provided only as an example.
Installing and Uninstalling on HP-UX
Install
Perform the following to update the firmware. Enter the swinstall command:
#swinstall -x autoreboot=true -s /tmp/FUTests/OSIFU.depot PHSS_28608
Figure 61 swinstall Output
Uninstall
Perform the following to downgrade the firmware. Enter the swremove command.
Figure 62 swremove Output Installing on Linux The firmware update is installed using the rpm. Enter the rpm command. # rpm -i FWPHSS_28608.rpm Figure 63 rpm output Installing on Windows An executable file must be downloaded, then executed in Windows. Upon running the utility, a setup wizard guides the user through the installation steps. The following are the various steps of the setup wizard. 1. Run the executable file. 2. Accept the terms of the agreement and click the Next button.
Figure 64 License Agreement 3. Carefully read the readme text and click the Next button.
4. The status of the installation is displayed in the Setup Status screen. Figure 66 Setup Status PDC Code CRU Reporting The processor dependent code (PDC) interface defines the locations for the CRUs. These locations are denoted in the following figures to aid in physically locating the CRU when the diagnostics point to a specific CRU that has failed or may be failing in the near future.
Figure 67 Server Cabinet CRUs (Front View) PCI Power 1 PCI Power 0 Cell 0 Cabinet Fan 0 Cabinet Fan 1 Cabinet Fan 2 Cell 1 Cell 2 Cell 3 Cabinet Fan 3 Cabinet Fan 4 Cabinet Fan 5 Cabinet Fan 6 Cabinet Fan 7 Cabinet Fan 8 BPS 0 BPS 1 BPS 2 BPS 3 BPS 4 BPS 5
Figure 68 Server Cabinet CRUs (Rear View) I/O Fan 2 I/O Fan 5 I/O Fan 1 I/O Fan 4 I/O Fan 0 I/O Fan 3 Cabinet Fan 9 Cabinet Fan 10 Cabinet Fan 11 Cabinet Fan 12 Cabinet Fan 13 Cabinet Fan 14 Core I/O (Cell 0) Cabinet Fan 15 Cabinet Fan 16 Cabinet Fan 17 Core I/O (Cell 1) Cabinet Fan 18 Cabinet Fan 19 Cabinet Fan 20 B1 A1 B0 A0
6 Removing and Replacing Components
This chapter provides a detailed description of the server customer replaceable unit (CRU) removal and replacement procedures. The procedures in this chapter are intended for use by trained and experienced HP service personnel only.
Customer Replaceable Units (CRUs)
The following section lists the different types of CRUs the server supports.
Communications Interference HP system compliance tests are conducted with HP supported peripheral devices and shielded cables, such as those received with the system. The system meets interference requirements of all countries in which it is sold. These requirements provide reasonable protection against interference with radio and television communications.
2. If the component you will power off is assigned to an nPartition, then use the Virtual Front Panel (VFP) to view the current boot state of the nPartition. Shut down HP-UX on the nPartition before you power off any of the hardware assigned to the nPartition. See Chapter 4 “Operating System Boot and Shutdown.” When you are certain the nPartition is not running HP-UX, you can power off components that belong to the nPartition.
Removing and Replacing Covers
It is necessary to remove one or more of the covers (Figure 69) to access many of the CRUs within the server chassis.
Figure 69 Cover Locations
Removing the Top Cover
1. Connect to ground with a wrist strap. See “Electrostatic Discharge” (page 134) for more information.
2. Loosen the retaining screws securing the cover to the chassis. See Figure 70.
3. Slide the cover toward the rear of the chassis.
4. Lift the cover up and away from the chassis.
Figure 70 Top Cover Removed (retaining screws)
Replacing the Top Cover
1. Orient the cover on the top of the chassis.
NOTE: Carefully seat the cover to avoid damage to the intrusion switch.
2. Slide the cover into position using a slow, firm pressure to properly seat the cover.
3. Tighten the blue retaining screws securing the cover to the chassis.
Removing the Side Cover
1. Connect to ground with a wrist strap. See “Electrostatic Discharge” (page 134) for more information.
2. Loosen the retaining screw securing the cover to the chassis.
Figure 71 Side Cover Removal Detail (retaining screw)
3. Slide the cover from the chassis toward the rear of the system.
4. Place the cover in a safe location.
Replacing the Side Cover
1. Orient the cover on the side of the chassis.
2. Slide the cover into position using a slow, firm pressure to properly seat the cover.
3. Tighten the blue retaining screw securing the cover to the chassis.
Figure 72 Bezel Removal and Replacement (grasp here)
Replacing the Front Bezel
1. If you are replacing the bezel, visually inspect the replacement part for the proper part number.
2. From the front of the server, grasp both sides of the bezel and push toward the server. The catches will secure the bezel to the chassis.
Removing and Replacing the Front Panel Board
The front panel board is located behind the front panel bezel. Both are located on the front of the chassis (Figure 73). The system power must be off before removing the front panel board.
Figure 73 Front Panel Assembly Location (front panel bezel)
Removing the Front Panel Board
1. Power off the system.
2. Remove the front bezel.
3. Remove the top cover.
4. Remove the left side cover.
5. Remove the two top disk drives.
6. Remove and retain the two screws securing the front panel bezel to the front panel.
7. Depress the front bezel center tab and pull straight out, away from chassis toward the front of the system (Figure 74).
10. Make note of the cable routing, and disconnect the cable assembly from the system board. See Figure 75 and Figure 76.
Figure 75 Front Panel Board Detail (front panel board)
Figure 76 Front Panel Board Location on Backplane (front panel board connection, system backplane)
Replacing the Front Panel Board
1. Position the front panel board within the front panel bezel.
2. Ensure that the standoffs on the board are aligned with the screw holes in the front panel bezel.
3. Route the cable the same way it was before removal and connect the cable to the system backplane (Figure 77).
4. Reinstall the front panel bezel.
5. Align the light pipes and screw back into place.
6. Replace the two top disk drives.
7. Replace the top cover.
8. Replace the left side cover.
9. Replace the front bezel.
10. Power on the system.
Figure 78 Front Smart Fan Assembly Location (front smart fan assembly)
Removing the Front Smart Fan Assembly
1. Remove the front bezel.
2. Identify the failed fan assembly. Table 34 defines the fan LED states.
Table 34 Smart Fan Assembly LED States
LED State | Meaning
Green | Fan is at speed and in sync, or not at speed for less than 12 seconds.
Flash Yellow | Fan is not keeping up with speed/sync pulse for longer than 12 seconds.
Red | Fan failed, stalled, or has run slow or fast for longer than 12 seconds.
Figure 79 Front Fan Removal
3. Loosen the two thumb screws securing the fan to the chassis.
4. Slide the fan from the chassis.
Replacing the Front Smart Fan Assembly
1. Position the fan assembly in the chassis.
2. Tighten the two thumb screws to secure the fan to the chassis.
3. Check the fan status LED. It should be green. See Table 34 for LED definitions.
4. Replace the front bezel.
Figure 80 Rear Smart Fan Assembly Location (rear fan assembly)
Removing the Rear Smart Fan Assembly
1. Identify the failed fan assembly. Table 34 defines the fan LED states.
2. Loosen the two thumb screws securing the fan to the chassis.
3. Slide the fan from the chassis (Figure 81).
Replacing the Rear Smart Fan Assembly
1. Position the fan assembly in the chassis.
2. Slide the fan into the connector.
3. Tighten the two thumb screws to secure the fan to the chassis.
4. The LED should be green. See Table 34 for a listing of LED definitions.
Removing and Replacing a Disk Drive
The disk drive is located in the front of the chassis. Internal disk drives are hot-plug components. See “Hot-Plug CRUs” (page 133) for a list and description of hot-plug CRUs.
Figure 83 Disk Drive Detail
Replacing the Disk Drive
1. Sometimes diskinfo and ioscan will display cached data. Running diskinfo on the device without a disk installed clears the cached data. Enter either of the following commands (for the diskinfo command, replace x with actual values):
• #diskinfo -v /dev/rdsk/cxtxdx
• #ioscan -f
2. Be sure the front locking latch is open, then carefully position the disk drive in the chassis.
Figure 84 Removable Media Drive Location (removable media drives)
Removing the Removable Media Drive
NOTE: When removing the bottom drive, remove the top drive first.
1. Identify the failed removable media drive.
2. Turn off the power to the server.
3. Connect to ground with a wrist strap. See “Electrostatic Discharge” (page 134) for more information.
4. Remove the front bezel.
5. Push the front locking tab inward to detach the drive from the chassis (Figure 85).
Replacing the Removable Media Drive
NOTE: If applicable, install the bottom drive before installing the top drive.
1. Attach the rails and clips to the drive.
2. Connect the cables to the rear of the drive.
3. Position the drive in the chassis.
4. Turn the power on to the server.
5. Verify operation of the drive. Enter the SEArch or INFO command at the EFI Shell to ensure that the system recognizes the drive.
olrad: The command line method of performing OL*.
Attention button: The hardware system slot-based method of performing OL*.
This procedure describes how to perform an online replacement of a PCI card using the attention button, for cards whose drivers support online addition or replacement (OLAR). The attention button is also referred to as the doorbell.
The following are prerequisites for this procedure:
• The replacement card uses the same drivers and is of the same type as the card being installed.
Replacing the PCI Card
1. Install the new replacement PCI card in the slot.
NOTE: Online addition using the attention button does not perform the pre-add sequence of olrad, which uses the olrad -a command.
2. Flip the PCI MRL for the card slot to the closed position.
3. Connect all cables to the replacement PCI card.
4. Press the attention button. The green power LED will start to blink.
5. Wait for the green power LED to stop blinking and turn solid green.
4. Execute the following EFI command: map -r
NOTE: Each I/O card type and firmware image update may require a different flash utility and procedure. Follow the instructions in the .txt file included with the latest HP IPF Offline Diagnostic & Utilities CD.
5. Load the HP IPF Offline Diagnostic & Utilities CD. The CD will contain the flash utility for each I/O card type, firmware images, and a .txt file that will include instructions and information about updating the firmware images.
Table 35 Smart Fan Assembly LED Indications (continued)

LED State   Meaning
Red         Fan failed, stalled, or has run slow or fast for longer than 12 seconds.
Off         Fan is not present, no power is applied to the fan, or the fan has failed.

Removing the PCI Smart Fan Assembly

1. Securely grasp the two tabs on the fan assembly (Figure 89).
2. Slide the fan upward from the chassis.

Figure 89 PCI Smart Fan Assembly Detail

Replacing the PCI Smart Fan Assembly

1. Position the fan assembly in the chassis.
Figure 90 PCI-X Power Supply Location

Preliminary Procedures

Complete these procedures before removing the PCI-X power supply.

1. Connect to ground with a wrist strap. See “Electrostatic Discharge” (page 134) for more information.
2. Remove the front bezel. See “Removing and Replacing the Front Bezel” (page 138).
3. Identify the failed power supply. Table 36 identifies the meaning of the PCI-X power supply LED state.
2. Firmly depress the securing thumb latch.
3. Slide the module from the chassis. See Figure 91.

Figure 91 PCI-X Power Supply Detail

Replacing the PCI-X Power Supply

1. Slide the power supply into the chassis until the thumb latch clicks into the locked position. The module slides easily into the chassis; apply slow, firm pressure to properly seat the connection.
2. Verify the status of the power supply LEDs. The green LED should be on and the fault LED should be off.
Table 37 N+1 BPS-to-Cell Board Configuration (continued)

Number of Cell Boards Installed in the Server   Number of Operational BPS Installed to Maintain N+1 Functionality
3                                               5
4                                               6

The power distribution for the bulk power supplies follows:

• A0 input provides power to BPS 0, BPS 1, and BPS 2
• A1 input provides power to BPS 3, BPS 4, and BPS 5
• B0 input provides power to BPS 0, BPS 1, and BPS 2
• B1 input provides power to BPS 3, BPS 4, and BPS 5

Figure 92 BPS Location (Front Bezel Removed)
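The rows visible in Table 37 suggest that maintaining N+1 redundancy requires two more operational BPS than there are installed cell boards. A minimal sketch of that pattern, assuming it holds for the smaller configurations as well; bps_required is a hypothetical helper name, not an HP utility:

```shell
# Operational BPS needed for N+1 redundancy, following the
# cells + 2 pattern visible in Table 37.
bps_required() {
  echo $(( $1 + 2 ))
}

bps_required 3   # prints 5, matching the 3-cell row of Table 37
bps_required 4   # prints 6, matching the 4-cell row
```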
Table 38 BPS LED Definitions (continued)

LED State   Description
Blink Red   BPS state may be unknown; nonrecoverable faults are present.
Red         This LED state is not used.
Off         BPS fault or failure (unless AC power is not connected to the server).

3. Depress the release latch on the upper front center portion of the BPS (Figure 93).
4. Slide the BPS forward using the handle to remove it from the chassis.

Figure 93 BPS Detail

Replacing the BPS
7 HP 9000 rp8440 Delta The following information describes material specific to the HP 9000 rp8440 server and the PA-8900 processor: • System power requirements for the HP 9000 rp8440 server. • Boot Console Handler (BCH) for the HP 9000 rp8440 server. • HP-UX for the HP 9000 rp8440 server. • The PA-8900 Processor Module. • System verification. System Specifications System specifications for the HP 9000 rp8440 are described in the following sections.
Table 40 Typical HP 9000 rp8440 Server Configurations

Cell    Memory per       PCI Cards           DVDs   Hard Disk   Core   Bulk Power   Typical     Typical
Boards  Cell Board (GB)  (assumes 10W each)         Drives      I/O    Supplies     Power (W)   Cooling (BTU/hr)
4       32               16                  2      4           2      6            3789        12936
4       16               16                  2      4           2      6            3533        12062
4       8                8                   0      2           2      6            3325        11352
2       32               16                  2      4           2      4            2702        9225
2       16               8                   0      2           2      4            2414        8241
2       8                8                   0      2           2      4            2350        8023
1       8                8                   0      1           1      3            1893        6463

The air-cond
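The Typical Cooling column tracks the Typical Power column through the standard watts-to-BTU/hr conversion (1 W is approximately 3.413 BTU/hr); the table values run a few BTU/hr higher, presumably from rounding or margin. A quick sketch of the conversion; watts_to_btu is an illustrative helper, not an HP tool:

```shell
# Convert typical power dissipation (watts) to a cooling load (BTU/hr),
# using 1 W = 3.413 BTU/hr, rounded to the nearest whole BTU/hr.
watts_to_btu() {
  awk -v w="$1" 'BEGIN { printf "%.0f\n", w * 3.413 }'
}

watts_to_btu 3789   # prints 12932; Table 40 lists 12936 for this row
```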
For information about any of the available BCH commands, use the he command.

HP-UX for the HP 9000 rp8440 Server

HP supports nPartitions on cell-based HP 9000 servers. The HP 9000 rp8440 server runs HP-UX 11i Version 1 (B.11.11).

HP 9000 Boot Configuration Options

On cell-based HP 9000 servers, the configurable system boot options include boot device paths (pri, haa, and alt) and the autoboot setting for the nPartition. To set these options from HP-UX, use the setboot command.
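As a sketch of the setboot usage, the boot options might be displayed and changed from the HP-UX command line as follows; the device path shown is illustrative, not taken from this guide:

```
# setboot                     Display the current pri, haa, alt, and autoboot settings
# setboot -p 0/0/2/0/0.13.0   Set the primary (pri) boot path
# setboot -b on               Enable autoboot for the nPartition
```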
3. Boot the device by using the boot command from the BCH interface. You can issue the boot command in any of the following ways: • boot Issuing the boot command with no arguments boots the device at the primary (pri) boot path. • boot bootvariable This command boots the device indicated by the specified boot path, where bootvariable is the pri, haa, or alt boot path. • boot lan install or boot lan.ip-address install The boot...
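For example, the three forms of the boot command entered at the BCH Main Menu prompt might look like the following; an already-configured install server is assumed for the last form:

```
Main Menu: Enter command or menu > boot               Boot the pri path device
Main Menu: Enter command or menu > boot alt           Boot the alt path device
Main Menu: Enter command or menu > boot lan install   Boot from an install server
```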
Single-User Mode HP-UX Booting (BCH Menu)

From the BCH Menu, you can boot HP-UX in single-user mode by issuing the boot command, stopping at the ISL interface, and issuing hpux loader options. The BCH Menu is available only on HP 9000 servers.

1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX in single-user mode. Log in to the management processor and enter co to access the console list. Select the nPartition console.
Example 2 Single-User HP-UX Boot ISL revision A.00.42 JUN 19, 1999 ISL> hpux –is boot /stand/vmunix Boot : disk(0/0/2/0/0.13.0.0.0.0.0;0)/stand/vmunix 8241152 + 1736704 + 1402336 start 0x21a0e8 .... INIT: Overriding default level with level ‘s’ INIT: SINGLE USER MODE INIT: Running /sbin/sh # 4. Exit the console and management processor interfaces if you are finished using them.
On nPartitions you have the following command options when shutting down HP-UX:

• To shut down HP-UX and reboot an nPartition: shutdown -r
• To shut down HP-UX and halt an nPartition: shutdown -h
• To perform a reboot for reconfiguration of an nPartition: shutdown -R
• To hold an nPartition at a shutdown for reconfiguration state: shutdown -R -H

Procedure for Shutting Down HP-UX

From the HP-UX command line, issue the shutdown command to shut down the HP-UX OS.
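In practice each of the forms above usually carries a grace period (in seconds) and the -y option to suppress interactive prompts. A sketch, assuming standard HP-UX shutdown semantics:

```
# shutdown -r -y 0       Reboot the nPartition immediately, without prompting
# shutdown -h -y 60      Halt the nPartition after a 60-second grace period
# shutdown -R -H -y 0    Hold at the shutdown-for-reconfiguration state
```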
System Backplane:
  GPM       001.001.000
  EMMUX     001.000.000

IO Backplane:
  IO_LPM-0  001.001.001
  IO_LPM-1  001.001.001

Core IO:
  CIO-0     001.002.000
  CIO-1     000.000.000 (not installed)

          CELL_LPM      CELL_JTAG     CELL_PDH
Cell 0 :  001.002.000   001.002.000   001.005.000
Cell 1 :  000.000.000   000.000.000   000.000.000 (not installed)
Cell 2 :  000.000.000   000.000.000   000.000.000 (not installed)
Cell 3 :  000.000.000   000.000.000   000.
Partition Total Processors: 8 Partition Active Processors: 8 Partition Deconfigured Processors: 0 166 HP 9000 rp8440 Delta
A Replaceable Parts

This appendix contains the server CRU list. For the most current list of part numbers, see the HP Part Surfer website at: http://partsurfer.hp.com/.

Replaceable Parts

Table 41 Server CRU Descriptions and Part Numbers

CRU Description                                  Replacement P/N   Exchange P/N
Pwr Crd, Jumper UPS-PDU 2.5m C19/C20             8120-6884         None
Pwr Crd, C19/unterminated intl-Europe            8120-6895         None
Pwr Crd, C19/IEC-309 L6-20 4.5 m BLACK CA Ay     8120-6897         None
Pwr Crd, C19/L6-20 4.
B Management Processor Commands

This appendix contains a list of the server management commands.

Server Management Commands

Table 42 lists the server management commands.
Table 44 System and Access Configuration Commands (continued) DC Reset parameters to default configuration DE Display entity status DI Disconnect Remote or LAN console DFW Duplicate firmware DU Display devices on bus FW Obsolete. FW is now available at the MP Main Menu.
C Templates This appendix contains blank floor plan grids and equipment templates. Combine the necessary number of floor plan grid sheets to create a scaled version of the computer room floor plan. Figure 94 illustrates the overall dimensions required for the servers. Figure 94 Server Space Requirements Equipment Footprint Templates Equipment footprint templates are drawn to the same scale as the floor plan grid (1/4 inch = 1 foot).
2. Cut and join them together (as necessary) to create a scale model floor plan of your computer room.
3. Remove a copy of each applicable equipment footprint template (Figure 95).
4. Cut out each template selected in step 3; then place it on the floor plan grid created in step 2.
5. Position the pieces until you obtain the desired layout, then fasten the pieces to the grid. Mark locations of computer room doors, air-conditioning floor vents, utility outlets, and so on.
Figure 96 Planning Grid 172 Templates
Figure 97 Planning Grid Computer Room Layout Plan 173
Index A ac power input, 63 AC power inputs A0, 63 A1, 63 B0, 63 B1, 63 AC power specifications, 28 access commands, 168 air ducts, 32 illustrated, 32 AR, 168 ASIC, 11 B backplane mass storage, 25, 26 system, 21, 24, 26, 30 BO, 168 BPS (Bulk Power Supply), 69 Bulk Power Supplies BPS, 64, 155 C CA, 168 cable, 112 cards core I/O, 122 CC, 168 cell board, 20, 26, 30, 64, 76, 116, 155 verifying presence, 73 cell controller, 11 checklist installation, 78 circuit breaker, 28 cm (Command Menu) command, 74 co (Cons
H HE, 168 high availability (HA), 122 hot-plug defined, 133 hot-swap defined, 133 housekeeping power, 68 HP-UX, 122 humidity, 29 I I/O Subsystem, 20 iCOD definition, 78 email requirements, 78 ID, 168 IF, 168 initial observations interval one, 65 interval three, 65 interval two, 65 installation checklist, 78 warranty, 33 installation problems, 112 interference, 134 IP address default, 70 lc Comand Screen, 70 IT, 168 L LAN, 122, 125 LAN status, 70 LC, 168 lc (LAN configuration) command, 71 LED, 112 Attentio
Reflection 1, 67, 76 RL, 168 RR, 168 RS, 168 RS-232, 122 RU, 168 S safety considerations, 133 serial display device connecting, 67, 68 recommended windows, 76 setting parameters, 67 server, 122 configuration, 122 front panel, 14 management, 122 management commands, 168 management overview, 123 status commands, 168 service processor, 11, 122 SO, 168 Standby power LED, 14 status LEDs, 14 subnet mask, 70 SYSREV, 168 system commands, 168 configuration, 122 power on, 135 system backplane, 21, 24, 26, 30 system