HP Integrity rx7640 and HP 9000 rp7440 Servers User Service Guide

HP Part Number: AB312-9010B
Published: November 2011
Edition: 5
© Copyright 2006, 2011 Hewlett-Packard Development Company, L.P.

Legal Notices

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
Contents

About this Document .......... 8
    Book Layout .......... 8
    Intended Audience .......... 8
    Publishing History
    Airflow .......... 30
System Requirements Summary .......... 31
    Power Consumption and Air Conditioning .......... 31
3 Installing the Server
    Booting HP-UX Using the EFI Shell .......... 72
    Adding Processors with Instant Capacity .......... 72
    Installation Checklist .......... 73
4 Booting and Shutting Down the Operating System
Updating Firmware .......... 119
    Firmware Manager .......... 119
    Using FTP to Update Firmware .......... 120
    Possible Error Messages
Installing the New LAN/SCSI Core I/O PCI-X Card(s) .......... 150
    PCI/PCI-X Card Replacement Preliminary Procedures .......... 151
    Removing a PCI/PCI-X Card .......... 151
    Replacing the PCI/PCI-X Card .......... 151
Option ROM
About this Document

This document covers the HP Integrity rx7640 and HP 9000 rp7440 Servers. This document does not describe system software or partition configuration in any detail. For detailed information concerning those topics, see the HP nPartition Administrator's Guide.
Windows Operating System Information

You can find information about administration of the Microsoft® Windows® operating system at the following Web sites, among others:
• http://www.hp.com/go/windows-on-integrity-docs
• http://www.microsoft.com/technet/

Diagnostics and Event Monitoring: Hardware Support Tools

Complete information about HP hardware support tools, including online and offline diagnostics and event monitoring tools, is at the www.hp.com/go/bizsupport Web site.
For HP technical support:
• In the United States, for contact options see the Contact HP United States webpage: http://welcome.hp.com/country/us/en/contact_us.html
• To contact HP by phone:
  ◦ Call 1-800-HP-INVENT (1-800-474-6836). This service is available 24 hours a day, 7 days a week. For continuous quality improvement, calls may be recorded or monitored.
  ◦ If you have purchased a Care Pack (service upgrade), call 1-800-633-3600.
1 HP Integrity rx7640 Server and HP 9000 rp7440 Server Overview The HP Integrity rx7640 and HP 9000 rp7440 Servers are members of HP’s business-critical computing platform family in the mid-range product line. The information in chapters one through six of this guide applies to the HP Integrity rx7640 and HP 9000 rp7440 Servers, except for a few items specifically denoted as applying only to the HP Integrity rx7640 Server. Chapter seven covers any information specific to the HP 9000 rp7440 Server only.
Figure 1 Server (Front View With Bezel)

Figure 2 Server (Front View Without Bezel)
The server has the following dimensions:
• Depth: defined by cable management constraints to fit into a standard 36-inch deep rack:
  25.5 inches from front rack column to PCI connector surface
  26.7 inches from front rack column to MP Core I/O connector surface
  30 inches overall package dimension, including 2.7 inches protruding in front of the front rack columns
• Width: 44.45 cm (17.5 inches), constrained by EIA standard 19-inch racks.
• Height: 10U – 0.54 cm = 43.91 cm (17.287 inches).
The PCI OLR fan modules are located in front of the PCI-X cards. These six 9.2-cm fans are housed in plastic carriers. They are configured in two rows of three fans. Four OLR system fan modules, externally attached to the chassis, are 15-cm (6.5-inch) fans. Two fans are mounted on the front surface of the chassis and two are mounted on the rear surface. The cell boards are accessed from the right side of the chassis behind a removable side cover.
• Management processor (MP) status LED (tri-color)
• Cell 0, 1 status (tri-color) LEDs

Figure 5 Front Panel LEDs and Power Switch

Cell Board

The cell board, illustrated in Figure 6, contains the processors, main memory, and the cell controller (CC) application-specific integrated circuit (ASIC), which interfaces the processors and memory with the I/O and with the other cell board in the server. The CC is the heart of the cell board, enabling communication with the other cell board in the system.
• Incoming and outgoing crossbar bus that goes off board to the other cell board • PDH bus that goes to the PDH and microcontroller circuitry All of these buses come together at the CC chip. Because of space limitations on the cell board, the PDH and microcontroller circuitry resides on a riser board that plugs into the cell board at a right angle. The cell board also includes clock circuits, test circuits, and de-coupling capacitors.
Figure 7 CPU Locations on Cell Board

Memory Subsystem

Figure 8 shows a simplified view of the memory subsystem. It consists of two independent access paths, each path having its own address bus, control bus, data bus, and DIMMs. Address and control signals are fanned out through register ports to the synchronous dynamic random access memory (SDRAM) on the DIMMs. The memory subsystem comprises four independent quadrants.
Figure 8 Memory Subsystem

DIMMs

The memory DIMMs used by the server are custom designed by HP. Each DIMM contains DDR-II SDRAM memory that operates at 533 MT/s. Industry standard DIMM modules do not support the high availability and shared memory features of the server. Therefore, industry standard DIMM modules are not supported. The server supports DIMMs with densities of 1, 2, 4, and 8 GB.
On the server, each nPartition has its own dedicated portion of the server hardware which can run a single instance of the operating system. Each nPartition can boot, reboot, and operate independently of any other nPartitions and hardware within the same server complex. The server complex includes all hardware within an nPartition server: all cabinets, cells, I/O chassis, I/O devices and racks, management and interconnecting hardware, power supplies, and fans.
• DC-to-DC converters • Power monitor logic • Two local bus adapter (LBA) chips that create internal PCI buses for communicating with the core I/O card The backplane also contains connectors for attaching the cell boards, the PCI-X backplane, the core I/O board set, SCSI cables, bulk power, chassis fans, the front panel display, intrusion switches, and the system scan card. Unlike Superdome or the HP Integrity rx8640, there are no Crossbar Chips (XBC) on the system backplane.
Table 3 PCI-X Paths for Cell 0 (continued)

Cell  PCI-X Slot  I/O Chassis  Path
0     7           0            0/0/2/1
0     8           0            0/0/1/1

Table 4 PCI-X Paths for Cell 1

Cell  PCI-X Slot  I/O Chassis  Path
1     1           1            1/0/8/1
1     2           1            1/0/10/1
1     3           1            1/0/12/1
1     4           1            1/0/14/1
1     5           1            1/0/6/1
1     6           1            1/0/4/1
1     7           1            1/0/2/1
1     8           1            1/0/1/1

The server supports two internal SBAs. Each SBA provides the control and interfaces for eight PCI-X slots. The interface is through the rope bus (16 ropes per SBA).
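For scripting against these hardware paths, the two tables above can be captured as a small lookup. This is only a convenience sketch; the dictionary and function names are illustrative, not an HP-provided interface:

```python
# Cell/slot to PCI-X hardware path, taken from Tables 3 and 4 above.
PCI_X_PATH = {
    (0, 7): "0/0/2/1",
    (0, 8): "0/0/1/1",
    (1, 1): "1/0/8/1",
    (1, 2): "1/0/10/1",
    (1, 3): "1/0/12/1",
    (1, 4): "1/0/14/1",
    (1, 5): "1/0/6/1",
    (1, 6): "1/0/4/1",
    (1, 7): "1/0/2/1",
    (1, 8): "1/0/1/1",
}

def slot_path(cell, slot):
    """Return the hardware path for a (cell, slot) pair, or None if unknown."""
    return PCI_X_PATH.get((cell, slot))

print(slot_path(1, 5))  # prints 1/0/6/1
```

A lookup like this is handy when cross-checking olrad or ioscan output against the expected slot-to-path mapping.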
Table 5 PCI-X Slot Types

Slot  Maximum MHz  Maximum Peak Bandwidth  Ropes    Supported Cards  PCI Mode Supported
8     133          533 MB/s                001      3.3 V            PCI or PCI-X Mode 1
7     133          1.06 GB/s               002/003  3.3 V            PCI or PCI-X Mode 1
6     266          2.13 GB/s               004/005  3.3 V or 1.5 V   PCI-X Mode 2
5     266          2.13 GB/s               006/007  3.3 V or 1.5 V   PCI-X Mode 2
4     266          2.13 GB/s               014/015  3.3 V or 1.5 V   PCI-X Mode 2
3     266          2.13 GB/s               012/013  3.3 V or 1.5 V   PCI-X Mode 2
2     133          1.06 GB/s               010/011  3.3 V            PCI or PCI-X Mode 1
controller chip on cell board 2, and the ASIC on cell location 1 connects to the cell controller chip on cell board 3 through external link cables. Downstream, the ASIC spawns 16 logical 'ropes' that communicate with the core I/O bridge on the system backplane, PCI interface chips, and PCIe interface chips. Each PCI chip produces a single 64–bit PCI-X bus supporting a single PCI or PCI-X add-in card. Each PCIe chip produces a single x8 PCI-Express bus supporting a single PCIe add-in card.
Table 6 PCI-X/PCIe Slot Types (continued)

Slot  Maximum MHz  Maximum Peak Bandwidth  Ropes    Supported Cards  PCI Mode Supported
7     133          1.06 GB/s               002/003  3.3 V            PCI or PCI-X Mode 1
6     266          2.13 GB/s               004/005  3.3 V            PCIe
5     266          2.13 GB/s               006/007  3.3 V            PCIe
4     266          2.13 GB/s               014/015  3.3 V            PCIe
3     266          2.13 GB/s               012/013  3.3 V            PCIe
2     133          1.06 GB/s               010/011  3.3 V            PCI or PCI-X Mode 1
1     133          1.06 GB/s               008/009  3.3 V            PCI or PCI-X Mode 1
2 Server Site Preparation

This chapter describes the basic server configuration and its physical specifications and requirements.

Dimensions and Weights

This section provides dimensions and weights of the system components. Table 7 gives the dimensions and weights for a fully configured server.

Table 7 Server Dimensions and Weights

                               Standalone   Packaged
Height - inches (centimeters)  17.3 (43.9)  35.75 (90.8)
Width - inches (centimeters)   17.5 (44.4)  28.0 (71.1)
Depth - inches (centimeters)   30.0 (76.2)
Table 9 Example Weight Summary (continued)

Component                                 Quantity  Multiply By  Weight lb (kg)
Hard disk drive                           4         1.6 (0.73)   6.40 (2.90)
Chassis with skins and front bezel cover  1         90.0 (41.0)  90.0 (41.0)
Total weight                                                     191.56 (87.0)

Table 10 Weight Summary

Component                                 Weight lb (kg)
Cell Board                                27.8 (12.16)
PCI Card                                  0.34 (0.153)
Power Supply (BPS)                        18 (8.2)
DVD Drive                                 2.2 (1.0)
Hard Disk Drive                           1.6 (0.73)
Chassis with skins and front bezel cover  90.0 (41.0)
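The weight worksheet above (quantity multiplied by unit weight, then summed) can be automated with a short sketch. The unit weights come from Table 10; the function name and example configuration are illustrative only:

```python
# Unit weights in pounds, from Table 10 above.
UNIT_WEIGHT_LB = {
    "cell board": 27.8,
    "pci card": 0.34,
    "power supply (bps)": 18.0,
    "dvd drive": 2.2,
    "hard disk drive": 1.6,
    "chassis with skins and front bezel cover": 90.0,
}

def total_weight_lb(config):
    """Sum quantity * unit weight for each component in the configuration."""
    return sum(qty * UNIT_WEIGHT_LB[name] for name, qty in config.items())

# Example configuration matching the rows shown in Table 9 above.
example = {"hard disk drive": 4, "chassis with skins and front bezel cover": 1}
print(f"{total_weight_lb(example):.2f} lb")  # prints 96.40 lb
```

A worksheet like this helps verify that a planned configuration stays within the floor-loading and lifting limits discussed in this chapter.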
Table 11 Power Cords

Part Number  Description                Where Used
8120-6895    Stripped end, 240 volt     International - Other
8120-6897    Male IEC309, 240 volt      International - Europe
8121-0070    Male GB-1002, 240 volt     China
8120-6903    Male NEMA L6-20, 240 volt  North America/Japan

System Power Specifications

Table 12 and Table 13 list the AC power requirements for the servers. These tables provide information to help determine the amount of AC power needed for your computer room.
Environmental Specifications

This section provides the environmental, power dissipation, noise emission, and airflow specifications for the server.

Temperature and Humidity

The cabinet is actively cooled using forced convection in a Class C1-modified environment. The recommended humidity level for Class C1 is 40 to 55% relative humidity (RH).

Operating Environment

The system is designed to run continuously and meet reliability goals in an ambient temperature of 5° to 35° C at sea level.
Non-Operating Environment

The system is designed to withstand ambient temperatures between -40° to 70° C (-40° to 158° F) under non-operating conditions.

Cooling

Internal Chassis Cooling

The cabinet incorporates front-to-back airflow across the cell boards and system backplane. Two 150 mm fans, mounted externally on the front chassis wall behind the cosmetic front bezel, push air into the cell section.
The air conditioning data is derived using the following equations:
• Watts x 0.860 = kcal/hour
• Watts x 3.414 = Btu/hour
• Btu/hour divided by 12,000 = tons of refrigeration required

NOTE: When determining power requirements, you must consider any peripheral equipment that will be installed during initial installation or as a later update. See the applicable documentation for such devices to determine the power and air conditioning required to support these devices.
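As an illustration, the three conversion factors above can be wrapped in a small helper. The function name is hypothetical, not part of any HP tool; the factors are exactly those given in the text:

```python
def cooling_requirements(watts):
    """Convert a power draw in watts to the heat-load figures
    using the factors given above."""
    kcal_per_hour = watts * 0.860
    btu_per_hour = watts * 3.414
    tons_refrigeration = btu_per_hour / 12000
    return kcal_per_hour, btu_per_hour, tons_refrigeration

# Example: a hypothetical 5,000 W configuration
kcal, btu, tons = cooling_requirements(5000)
print(f"{kcal:.0f} kcal/hr, {btu:.0f} Btu/hr, {tons:.2f} tons")
# prints 4300 kcal/hr, 17070 Btu/hr, 1.42 tons
```

Remember to add the draw of any peripheral equipment, per the NOTE above, before sizing the air conditioning.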
System Requirements Summary This section summarizes the requirements that must be considered in preparing the site for the server. Power Consumption and Air Conditioning To determine the power consumed and the air conditioning required, follow the guidelines in Table 15 (page 29). NOTE: When determining power requirements, consider any peripheral equipment that will be installed during initial installation or as a later update.
3 Installing the Server

Inspect shipping containers when the equipment arrives at the site. Check the equipment after the packing has been removed. This chapter discusses how to inspect and install the server.

Receiving and Inspecting the Server Cabinet

This section contains information about receiving, unpacking, and inspecting the server cabinet.

NOTE: The server will ship in one of three different configurations.
Figure 11 Removing the Polystraps and Cardboard

3. Remove the corrugated wrap from the pallet.
4. Remove the packing materials.

CAUTION: Cut the plastic wrapping material off rather than pulling it off. Pulling the plastic covering off represents an electrostatic discharge (ESD) hazard to the hardware.

5. Remove the four bolts holding down the ramps, and remove the ramps.
Figure 12 Removing the Shipping Bolts and Plastic Cover
6. Remove the six bolts from the base that attach the rack to the pallet.

Figure 13 Preparing to Roll Off the Pallet

WARNING! Be sure that the leveling feet on the rack are raised before you roll the rack down the ramp, and any time you roll the rack on the casters. Use caution when rolling the cabinet off the ramp. A single server in the cabinet may weigh in excess of 280 lbs. It is strongly recommended that two people roll the cabinet off the pallet.
Figure 14 Securing the Cabinet

Standalone and To-Be-Racked Systems

Servers shipped in a stand-alone or to-be-racked configuration must have the core I/O handles and the PCI towel bars attached at system installation. Obtain and install the core I/O handles and PCI towel bars from the accessory kit A6093-04046. The towel bars and handles are the same part. See service note A6093A-11.

Rack-Mount System Installation

Information is available to help with rack-mounting the server.
3. Remove the system's left and right side covers.

NOTE: The latest lift handles available for the 2-cell servers are symmetrical and can be installed on either side of the server.

4. Locate one handle and ensure the two thumbscrews are removed from its front flange.
5. Insert the two protruding tabs on the rear flange of the handle into the slotted keyways in the server's chassis. See Figure 15.

Figure 15 Inserting Rear Handle Tabs into Chassis
Figure 16 Attaching the Front of the Handle to the Chassis

7. Repeat steps 2 through 4 to install the other handle on the other side of the server.
8. After the handles are secured, the server is ready to lift.
9. Remove the handles in the reverse order of steps 2 through 4.
10. After moving the server, remove the lift handles from the chassis.
11. After the server is secured, replace the previously removed cell boards and bulk power supplies.
12. Reinstall the side covers and front bezel.
Figure 17 RonI Lifter

1. Obtain the HP J1530C Rack Integration Kit Installation Guide before proceeding with the rack mount procedure. This guide covers these important steps:
   • Installing the anti-tip stabilizer kit (A5540A)
   • Installing the ballast kit (J1479A)
   • Installing the barrel nuts on the front and rear columns
   • Installing the slides
2. Follow the instructions on the outside of the server packaging to remove the banding and carton top from the server pallet.
Figure 18 Positioning the Lifter to the Pallet

4. Carefully slide the server onto the lifter forks.
5. Slowly raise the server off the pallet until it clears the pallet cushions.

Figure 19 Raising the Server Off the Pallet Cushions

6. Carefully roll the lifter and server away from the pallet. Do not raise the server any higher than necessary when moving it over to the rack.
7. Follow the HP J1530C Rack Integration Kit Installation Guide to complete these steps:
   • Mounting the server to the slides
   • Installing the cable management arm (CMA)
   • Installing the interlock device assembly (if two servers are in the same cabinet)

Wheel Kit Installation

Compare the packing list (Table 16) with the contents of the wheel kit before beginning the installation. For an updated list of part numbers, go to the HP PartSurfer website at: http://www.partsurfer.
Figure 20 Component Locations

4. Unfold the bottom cardboard tray.
5. Carefully tilt the server and place one of the foam blocks (A6093-44002) under the left side of the server. Do not remove any other cushions until instructed to do so.

Figure 21 Left Foam Block Position

6. Carefully tilt the server and place the other foam block provided in the kit under the right side of the server.
Figure 22 Right Foam Block Position

7. Remove the cushions from the lower front and rear of the server. Do not disturb the side cushions.

Figure 23 Foam Block Removal

8. Locate and identify the caster assemblies. Use the following table to identify the casters.

NOTE: The caster part number is stamped on the caster mounting plate.
Table 17 Caster Part Numbers

Caster       Part Number
Right front  A6753-04001
Right rear   A6753-04005
Left front   A6753-04006
Left rear    A6753-04007

9. Locate and remove one of the four screws from the plastic pouch. Attach the caster to the server.

Figure 24 Attaching a Caster to the Server

10. Attach the remaining casters to the server using the screws supplied in the plastic pouch.
11. Remove the foam blocks from the left and right side of the server.
12. Locate the plywood ramp.
Figure 25 Securing Each Caster Cover to the Server

17. Wheel kit installation is complete when both caster covers are attached to the server, and the front bezel and all covers are installed.

Figure 26 Completed Server

Installing the Power Distribution Unit

The server may ship with a power distribution unit (PDU). Two 60 A PDUs are available for the server. Each PDU is 3U high and is mounted horizontally between the rear columns of the server cabinet. The 60 A PDUs are delivered with an IEC-309 60 A plug.
The 60 A IEC PDU has four 16 A circuit breakers and is constructed for international use. Each of the four circuit breakers has two IEC-320 C19 outlets, providing a total of eight IEC-320 C19 outlets. Each PDU is 3U high and is rack-mounted in the server cabinet. Documentation for installation will accompany the PDU. The documentation can also be found at the external Rack Solutions Web site at: http://www.hp.com/go/rackandpower

This PDU might be referred to as a Relocatable Power Tap outside HP.
Figure 27 Disk Drive and DVD Drive Location

Use the following procedure to install the disk drives:
1. Be sure the front locking latch is open, then position the disk drive in the chassis.
2. Slide the disk drive into the chassis; slow, firm pressure is needed to properly seat the connector.
3. Press the front locking latch to secure the disk drive in the chassis.
Figure 28 Removable Media Location

1. Remove the front bezel.
2. Remove the filler panel from the server.
3. Install the left and right media rails and clips to the drive.
4. Connect the cables to the rear of the drive.
5. Fold the cables out of the way and slide the drive into the chassis. The drive easily slides into the chassis; however, slow, firm pressure is needed for proper seating. The front locking tab will latch to secure the drive in the chassis.
6. Replace the front bezel.
Table 18 HP Integrity rx7640 PCI-X and PCIe I/O Cards (continued) Part Number Card Description A5506B 4-port 10/100b-TX A5838A 2-port Ultra2 SCSI/2-Port 100b-T Combo A6386A Hyperfabric II A6749A 64-port Terminal MUX A6795A 2G FC Tachlite B Next Gen 1000b-T b A6826A 2-port 2Gb FC B A6828A 1-port U160 SCSI B B A6829A 2-port U160 SCSI B B A6847A Next Gen 1000b-SX b b A6869B Obsidian 2 VGA/USB B A7011A 1000b-SX Dual Port b b b A7012A 1000b-T Dual Port b b b A7173A 2-p
Table 18 HP Integrity rx7640 PCI-X and PCIe I/O Cards (continued) Part Number Card Description HP-UX AD168A1 Emulex 4Gb/s DC AD193A 1 port 4Gb FC & 1 port GbE HBA PCI-X Bb B AD194A 2 port 4Gb FC & 2 port GbE HBA PCI-X Bb B AD278A 8-Port Terminal MUX AD279A 64-Port Terminal MUX AD307A LOA (USB/VGA/RMP) B B J3525A 2-port Serial 337972-B21 SA P600 (Redstone) Windows® Linux® B B B B VMS PCIe Cards A8002A Emulex 1–port 4Gb FC PCIe B B A8003A Emulex 2–port 4Gb FC PCIe B B A
NOTE: The PCI I/O card installation process varies depending on which version of the HP-UX operating system is running on your system. Download PCI I/O card installation procedures from the following website: www.hp.com/go/bizsupport. Select one of the following guides and enter its title in the search field for background information and procedures for adding a new PCI I/O card using online addition:
• nPartition Administrator's Guide
• Interface Card OL* Support Guide for HP-UX 11.
Adding a PCI I/O Card Using the Attention Button

The following are prerequisites for this procedure:
• Drivers for the card have already been installed.
• No drivers are associated with the slot.
• The green power LED is steady OFF. Should the empty slot be in the ON state, use the olrad command or the pdweb tool to power the slot OFF.
• The yellow attention LED is steady OFF, or is blinking if a user has requested the slot location.
Figure 29 PCI I/O Slot Details

7. Wait for the green power LED to stop blinking.
8. Check for errors in the hotplugd daemon log file (default: /var/adm/hotplugd.log).

The critical resource analysis (CRA) performed during an attention-button-initiated add action is very restrictive, and the action will fail rather than complete in order to protect critical resources from being impacted. For finer control over CRA actions, use pdweb or the olrad command.
Figure 30 PCI/PCI-X Card Location

IMPORTANT: Some PCI I/O cards, such as the A6869B VGA/USB PCI card, cannot be added or replaced online (while Windows® remains running). For these cards, you must shut down Windows on the nPartition before performing the card replacement or addition. See the section on Shutting Down nPartitions and Powering off Hardware Components in the appropriate service guide.
Reference URL

Many features for HP servers, including links to download Windows drivers, are available at the HP Servers Technical Support website:

http://www.hp.com/support/itaniumservers

Cabling and Power Up

After the system has been unpacked and moved into position, it must be connected to a source of AC power. The AC power must be checked for the proper voltage before the system is powered up. This chapter describes these activities.
1. Measure the voltage between L1 and L2. This is considered a phase-to-phase measurement in North America. In Europe and certain parts of Asia-Pacific, this measurement is referred to as a phase-to-neutral measurement. The expected voltage should be between 200-240 V AC regardless of the geographic region.
2. Measure the voltage between L1 and ground. In North America, verify that this voltage is between 100-120 V AC.
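The acceptance ranges in the steps above can be expressed as a quick checker. This is only a sketch; the function name and return convention are illustrative, and it covers just the two measurements described in the text:

```python
def check_voltages(l1_l2, l1_ground, region="north_america"):
    """Validate the measurements described above against their ranges.

    L1-L2 (phase-to-phase) must be 200-240 V AC in any region;
    in North America, L1-ground must be 100-120 V AC.
    Returns a list of failed checks (an empty list means all pass).
    """
    failures = []
    if not 200 <= l1_l2 <= 240:
        failures.append("L1-L2 out of range")
    if region == "north_america" and not 100 <= l1_ground <= 120:
        failures.append("L1-ground out of range")
    return failures

print(check_voltages(208, 118))  # prints []
```

Any non-empty result means the site power must be corrected before the server is plugged in.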
Figure 32 Safety Ground Reference Check

WARNING! SHOCK HAZARD. Risk of shock hazard while testing primary power. Use properly insulated probes. Be sure to replace the access cover when finished testing primary power.

1. Measure the voltage between A0 and A1 as follows:
   a. Take the AC voltage down to the lowest scale on the volt meter.
   b. Insert the probe into the ground pin for A0.
   c. Insert the other probe into the ground pin for A1.
   d. Verify that the measurement is between 0-5 V AC.
Figure 33 Safety Ground Reference Check

WARNING! SHOCK HAZARD. Risk of shock hazard while testing primary power. Use properly insulated probes. Be sure to replace the access cover when finished testing primary power.

1. Measure the voltage between A0 and A1 as follows:
   a. Take the AC voltage down to the lowest scale on the volt meter.
   b. Insert the probe into the ground pin for A0.
   c. Insert the other probe into the ground pin for A1.
   d. Verify that the measurement is between 0-5 V AC.
4. Measure the voltage between A1 and B1 as follows:
   a. Take the AC voltage down to the lowest scale on the volt meter.
   b. Insert the probe into the ground pin for A1.
   c. Insert the other probe into the ground pin for B1.
   d. Verify that the measurement is between 0-5 V AC. If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet.
8. Set the site power circuit breaker to ON.
9. Set the server power to ON.
10. Check that the indicator light on each power supply is lit.

Connecting AC Input Power

The server can receive AC input power from two different AC power sources. If two separate power sources are available, the server can be plugged into the separate power sources, increasing system reliability if one power source fails. The main power source is defined to be A0 and B0. The redundant power source is defined to be A1 and B1.
The current power grid configuration is: Single grid

Power grid configuration preference.
1. Single grid
2. Dual grid
Select Option:

Figure 36 Distribution of Input Power for Each Bulk Power Supply

WARNING! Voltage is present at various locations within the server whenever a power source is connected. This voltage is present even when the main power switch is in the off position. To completely remove power, all power cords must be removed from the server.
Two Cell Server Installation (rp7410, rp7420, rp7440, rx7620, rx7640)

There are three studs with thumb nuts located at the rear of the server chassis. The line cord anchor installs on these studs. To install the line cord anchor:
1. Remove and retain the thumb nuts from the studs.
2. Install the line cord anchor over the studs. See Figure 37: "Two Cell Line Cord Anchor (rp7410, rp7420, rp7440, rx7620, rx7640)".
3. Tighten the thumb nuts onto the studs.
4. Weave the power cables through the line cord anchor.
MP/SCSI I/O Connections The MP/SCSI board is required to update firmware, access the console, turn partition power on or off, access one of the HDDs and one of the removable media devices, and utilize other features of the system. For systems running a single partition, one MP/SCSI board is required. A second MP/SCSI board is required for a dual-partition configuration, or if you want to enable primary or secondary MP failover for the server.
If the CE Tool is a laptop using Reflection 1, ensure communications settings are in place, using the following procedure:
1. From the Reflection 1 Main screen, pull down the Connection menu and select Connection Setup.
2. Select Serial Port.
3. Select Com1.
4. Check the settings and change, if required. Go to More Settings to set Xon/Xoff, then click OK to close the More Settings window.
5. Click OK to close the Connection Setup window.
Figure 39 Front Panel Display

2. Check the bulk power supply LED for each BPS. When on, the breakers distribute power to the BPSs. AC power is present at the BPSs:
   • When power is first applied, the BPS LEDs will be flashing amber.
   • After 30 seconds has elapsed, the flashing amber BPS LED for each BPS becomes a flashing green LED.
   See power cord policies to interpret LED indicators.
3. Log in to the MP:
   a. Enter Admin at the login prompt. The login is case sensitive.
1. At the MP Main Menu prompt (MP>), enter cm to enter the MP Command Menu.

NOTE: If the Command Menu is not shown, enter q to return to the MP Main Menu, then enter cm.

2. From the MP Command Menu prompt (MP:CM>), enter lc (for LAN configuration). The screen displays the default values and asks if you want to modify them. Write down the information or log it in a file, as it may be required for future troubleshooting. See Figure 41 (page 66).
10. A screen similar to the following is displayed, allowing verification of the settings:

Figure 42 The ls Command Screen

11. To return to the MP Main Menu, enter ma.
12. To exit the MP, enter x at the MP Main Menu.

Accessing the Management Processor via a Web Browser

Web browser access is an embedded feature of the MP/SCSI card. The Web browser enables access to the server through the LAN port on the core I/O card. MP configuration must be done from an ASCII console connected to the local RS232 port.
Figure 43 Example sa Command

5. Enter W to modify web access mode.
6. Enter option 2 to enable web access.
7. Launch a Web browser on the same subnet using the IP address for the MP LAN port.

Figure 44 Browser Window

8. Select the emulation type you want to use.
9. Click anywhere on the Zoom In/Out title bar to generate a full screen MP window.
10. Log in to the MP when the login window appears.

Access to the MP via a Web browser is now possible.
After logging in to the MP, verify that the MP detects the presence of all the cells installed in the cabinet. It is important for the MP to detect the cell boards. If it does not, the partitions will not boot. To determine if the MP detects the cell boards: 1. At the MP prompt, enter cm. This displays the Command Menu. The Command Menu enables viewing or modifying the configuration and viewing the utilities controlled by the MP. To view a list of the commands available, enter he.
2. Select the appropriate console device (deselect unused devices):
   a. Choose the Boot option maintenance menu choice from the main Boot Manager Menu.
   b. Select the Console Output, Input, or Error devices menu item for the device type you are modifying:
      • Select Active Console Output Devices
      • Select Active Console Input Devices
      • Select Active Console Error Devices
   c. Available devices will be displayed for each menu selection.
Additional Notes on Console Selection

Each operating system decides where to send its output based on the Select Active Console selections in the EFI Boot Maintenance Manager menu. If incorrect console devices are chosen, the OS may fail to boot, or may boot with output directed to the wrong location. Therefore, any time new potential console devices are added to the system, or any time NVRAM on the system is cleared, review the console selections to ensure that they are correct.
4. Enter the partition number of the partition to boot.
5. Press Enter.

Selecting a Boot Partition Using the MP

At this point in the installation process, the hardware is set up, the MP is connected to the LAN, the AC and DC power have been turned on, and the self-test is completed. Now the configuration can be verified. After the DC power on and the self-test are complete, use the MP to select a boot partition.
1. From the MP Main Menu, enter cm.
2. From the MP Command Menu, enter bo.
CPUs reside in the purchased system and are tracked as HP-owned assets. A nominal “Right-To-Access Fee” is paid to HP for each Instant Capacity CPU in the system. Any number of Instant Capacity CPUs can be activated at any time. Activating an Instant Capacity CPU automatically and promptly transforms the Instant Capacity CPU into an instantly ordered and fulfilled CPU upgrade that requires payment.
Table 20 Factory-Integrated Installation Checklist (continued)
Procedure
• Remove corrugated wrap from the pallet
• Remove four bolts holding down the ramps, and remove the ramps
• Remove antistatic bag
• Check for damage (exterior and interior)
• Position ramps
• Roll cabinet off ramp
• Unpack the peripheral cabinet (if ordered)
• Unpack other equipment
• Remove and dispose of packaging material
• Move cabinet(s) and equipment to computer room
• Move cabinets into final position
• Position cabinets next to each other (approximatel
Table 20 Factory-Integrated Installation Checklist (continued) Procedure In-process Completed Configure remote login (if required). See Appendix B.
4 Booting and Shutting Down the Operating System This chapter presents procedures for booting an operating system (OS) on an nPartition (hardware partition) and procedures for shutting down the OS. Operating Systems Supported on Cell-based HP Servers HP supports nPartitions on cell-based HP 9000 servers and cell-based HP Integrity servers. The following list describes the OSes supported on cell-based servers based on the HP sx2000 chipset.
NOTE: SuSE Linux Enterprise Server 10 is supported on HP rx8640 servers, and will be supported on other cell-based HP Integrity servers with the Intel® Itanium® dual-core processor (rx7640 and Superdome) soon after the release of those servers. See “Booting and Shutting Down Linux” (page 100) for details. NOTE: On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter, which determines how firmware may interleave memory residing on the cell.
At the EFI Shell, the bcfg command supports listing and managing the boot options list for all OSs except Microsoft Windows. On HP Integrity systems with Windows installed the \MSUtil\nvrboot.efi utility is provided for managing Windows boot options from the EFI Shell. On HP Integrity systems with OpenVMS installed, the \efi\vms\vms_bcfg.efi and \efi\vms\vms_show utilities are provided for managing OpenVMS boot options.
Enabled means that Hyper-Threading will be active on the next reboot of the nPartition. Active means that each processor core in the nPartition has a second virtual core that enables simultaneously running multiple threads. • Autoboot Setting You can configure the autoboot setting for each nPartition either by using the autoboot command at the EFI Shell, or by using the Set Auto Boot TimeOut menu item at the EFI Boot Option Maintenance menu. To set autoboot from HP-UX, use the setboot command.
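As a hedged illustration of the two methods mentioned above (exact option syntax varies by firmware and HP-UX release; check help autoboot at the EFI Shell and the setboot(1M) manpage):

```
Shell> autoboot            (display the current autoboot setting)

# From HP-UX, enable the autoboot flag in stable storage:
#setboot -b on
```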
boot-is-blocked state). The normal OS shutdown behavior on these servers depends on the ACPI configuration for the nPartition. You can run the acpiconfig command with no arguments to check the current ACPI configuration setting; however, softpowerdown information is displayed only when different from normal behavior.
CAUTION: An nPartition on an HP Integrity server cannot boot HP-UX virtual partitions when in nPars boot mode. Likewise, an nPartition on an HP Integrity server cannot boot an operating system outside of a virtual partition when in vPars boot mode. To display or set the boot mode for an nPartition on a cell-based HP Integrity server, use any of the following tools as appropriate. See Installing and Managing HP-UX Virtual Partitions (vPars), Sixth Edition, for details, examples, and restrictions.
To display CLM configuration details from the EFI Shell on a cell-based HP Integrity server, use the info mem command. If the amount of noninterleaved memory reported is less than 512 MB, then no CLM is configured for any cells in the nPartition (and the indicated amount of noninterleaved memory is used by system firmware). If the info mem command reports more than 512 MB of noninterleaved memory, then use Partition Manager or the parstatus command to confirm the CLM configuration details.
4. Exit the console and management processor interfaces if you are finished using them. To exit the EFI environment press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Booting HP-UX This section describes the following methods of booting HP-UX: • “Standard HP-UX Booting” (page 83) — The standard ways to boot HP-UX. Typically, this results in booting HP-UX in multiuser mode.
Main Menu: Enter command or menu > path
 Primary Boot Path:       0/0/2/0/0.13
                          0/0/2/0/0.d    (hex)
 HA Alternate Boot Path:  0/0/2/0/0.14
                          0/0/2/0/0.e    (hex)
 Alternate Boot Path:     0/0/2/0/0.0
                          0/0/2/0/0.0    (hex)
Main Menu: Enter command or menu >
3. Boot the device by using the BOOT command from the BCH interface. You can issue the BOOT command in any of the following ways: • BOOT Issuing the BOOT command with no arguments boots the device at the primary (PRI) boot path.
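For example, the BOOT command can be issued in any of these forms (the device path shown is the primary path listed above; output is illustrative):

```
Main Menu: Enter command or menu > BOOT                 (boot the primary path)
Main Menu: Enter command or menu > BOOT PRI             (boot the primary path explicitly)
Main Menu: Enter command or menu > BOOT 0/0/2/0/0.13    (boot a specific device path)
```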
Procedure 3 HP-UX Booting (EFI Boot Manager) From the EFI Boot Manager menu, select an item from the boot options list to boot HP-UX using that boot option. The EFI Boot Manager is available only on HP Integrity servers. See “ACPI Configuration for HP-UX Must Be default” (page 83) for required configuration details. 1. Access the EFI Boot Manager menu for the nPartition on which you want to boot HP-UX. Log in to the management processor, and enter CO to access the Console list.
4. Access the EFI System Partition for the device from which you want to boot HP-UX (fsX: where X is the file system number). For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed. The file system number can change each time it is mapped (for example, when the nPartition boots, or when the map -r command is issued). 5.
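The steps above can be sketched as an EFI Shell session (illustrative; file system number 2 is an example and will differ on your system):

```
Shell> map -r        (rebuild the device mapping table)
Shell> fs2:          (access the EFI System Partition for the boot device)
fs2:\> hpux          (invoke the HP-UX boot loader, HPUX.EFI)
```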
1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX in single-user mode. Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the BCH Main Menu (the Main Menu: Enter command or menu> prompt). If you are at a BCH menu other than the Main Menu, then enter MA to return to the BCH Main Menu. 2.
4. Exit the console and management processor interfaces if you are finished using them. To exit the BCH environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Procedure 6 Single-User Mode HP-UX Booting (EFI Shell) From the EFI Shell environment, boot in single-user mode by stopping the boot process at the HPUX.
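A hedged sketch of the single-user boot interaction at the loader (assuming the HPUX.EFI loader prompt; fs2: is an example file system):

```
fs2:\> hpux                   (start the loader, then interrupt the automatic boot)
HPUX> boot -is vmunix         (the -is option boots HP-UX to single-user mode)
```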
Booting kernel... 6. Exit the console and management processor interfaces if you are finished using them. To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. LVM-Maintenance Mode HP-UX Booting This section describes how to boot HP-UX in LVM-maintenance mode on cell-based HP 9000 servers and cell-based HP Integrity servers.
1. Access the EFI Shell environment for the nPartition on which you want to boot HP-UX in LVM-maintenance mode. Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
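A hedged sketch of the loader interaction for LVM-maintenance mode (assuming the HPUX.EFI loader prompt; fs2: is an example file system):

```
fs2:\> hpux               (start the loader, then interrupt the automatic boot)
HPUX> boot -lm vmunix     (the -lm option boots HP-UX in LVM-maintenance mode)
```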
1. Log in to HP-UX running on the nPartition that you want to shut down. Log in to the management processor for the server and use the Console menu to access the system console. Accessing the console through the MP enables you to maintain console access to the system after HP-UX has shut down. 2. Issue the shutdown command with the appropriate command-line options.
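For example, standard shutdown(1M) usage (a grace period of 0 and the -y option are shown as typical choices, not requirements):

```
# Halt HP-UX immediately, pre-answering prompts with yes
shutdown -h -y 0

# Or reboot instead of halting
shutdown -r -y 0
```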
HP OpenVMS I64 Support for Cell Local Memory On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter, which determines how firmware interleaves memory residing on the cell. IMPORTANT: HP OpenVMS I64 does not support using CLM. Before booting OpenVMS on an nPartition, you must ensure that the CLM parameter for each cell in the nPartition is set to zero (0).
2. Access the EFI System Partition for the device from which you want to boot HP OpenVMS (fsX:, where X is the file system number). For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed. The full path for the HP OpenVMS loader is \efi\vms\vms_loader.efi, and it should be on the device you are accessing. 3.
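The two steps above can be sketched as an EFI Shell session (illustrative; fs2: is an example file system number):

```
Shell> fs2:                          (access the EFI System Partition for the boot device)
fs2:\> \efi\vms\vms_loader.efi       (launch the HP OpenVMS loader)
```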
Procedure 11 Booting HP OpenVMS (EFI Boot Manager) From the EFI Boot Manager menu, select an item from the boot options list to boot HP OpenVMS using the selected boot option. 1. Access the EFI Boot Manager menu for the system on which you want to boot HP OpenVMS. Log in to the management processor, and enter CO to select the system console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu).
%SMP-I-CPUTRN, CPU #02 has joined the active set. ... 5. Exit the console and management processor interfaces when you have finished using them. To exit the EFI environment press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Shutting Down HP OpenVMS This section describes how to shut down the HP OpenVMS OS on cell-based HP Integrity servers.
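As an illustrative example of standard OpenVMS practice (exact prompts vary by OpenVMS version), shutdown is typically invoked from a privileged DCL session:

```
$ @SYS$SYSTEM:SHUTDOWN
```

The command procedure then prompts for options such as the number of minutes until shutdown and whether to reboot afterward.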
Booting and Shutting Down Microsoft Windows This section presents procedures for booting and shutting down the Microsoft Windows OS on cell-based HP Integrity servers and a procedure for adding Windows to the boot options list. • To determine whether the cell local memory (CLM) configuration is appropriate for Windows, see “Microsoft Windows Support for Cell Local Memory” (page 96). • To add a Windows entry to the boot options list, see “Adding Microsoft Windows to the Boot Options List” (page 96).
1. Access the EFI Shell environment. Log in to the management processor, and enter CO to access the system console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading. From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell environment. 2.
Booting Microsoft Windows You can boot the Windows Server 2003 OS on an HP Integrity server by using the EFI Boot Manager to choose the appropriate Windows item from the boot options list. See “Shutting Down Microsoft Windows” (page 99) for details on shutting down the Windows OS. CAUTION: ACPI Configuration for Windows Must Be windows On cell-based HP Integrity servers, to boot the Windows OS, an nPartition ACPI configuration value must be set to windows.
Use the "ch -?" command for information about using channels. Use the "?" command for general help. SAC> 5. Exit the console and management processor interfaces if you are finished using them. To exit the console environment, press ^B (Control+B); this exits the console and returns to the management processor Main menu. To exit the management processor, enter X at the Main menu.
1. Log in to Windows running on the system that you want to shut down. For example, access the system console and use the Windows SAC interface to start a command prompt, from which you can issue Windows commands to shut down the system. 2. Check whether any users are logged in. Use the query user or query session command. 3. Issue the shutdown command and the appropriate options to shut down the Windows Server 2003 on the system.
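For example, from the command prompt (illustrative; the 60-second delay and comment text are arbitrary choices):

```
rem Check for logged-in users first
query user

rem Shut down after 60 seconds with an explanatory comment
shutdown /s /t 60 /c "Planned maintenance"

rem Abort a shutdown that is still counting down
shutdown /a
```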
report the CLM amount requested and CLM amount allocated for the specified cell (-c#, where # is the cell number) or the specified nPartition (-p#, where # is the nPartition number). For details, see the nPartition Administrator's Guide (http://www.hp.com/go/virtualization-manuals). To display CLM configuration details from the EFI Shell on a cell-based HP Integrity server, use the info mem command.
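For example, using the -c# and -p# options described above (cell 0 and nPartition 0 are arbitrary examples):

```
# Report CLM requested and allocated for cell 0
parstatus -V -c0

# Report CLM details for nPartition 0
parstatus -V -p0
```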
• bcfg boot mv #a #b — Move the item number specified by #a to the position specified by #b in the boot options list. • bcfg boot add # file.efi "Description" — Add a new boot option to the position in the boot options list specified by #. The new boot option references file.efi and is listed with the title specified by Description. For example, bcfg boot add 1 \EFI\redhat\elilo.efi "Red Hat Enterprise Linux"adds a Red Hat Enterprise Linux item as the first entry in the boot options list.
\EFI\redhat\elilo.efi \EFI\redhat\elilo.conf By default the ELILO.EFI loader boots Linux using the kernel image and parameters specified by the default entry in the elilo.conf file on the EFI System Partition for the boot device. To interact with the ELILO.EFI loader, interrupt the boot process (for example, type a space) at the ELILO boot prompt. To exit the ELILO.EFI loader, use the exit command.
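A minimal elilo.conf sketch illustrating the default-entry mechanism described above (file names, the root device, and the append string are assumptions for illustration, not values from this guide):

```
# Illustrative elilo.conf on the EFI System Partition
prompt
timeout=50
default=linux

image=vmlinuz
        label=linux
        initrd=initrd.img
        read-only
        append="root=/dev/sda2 console=ttyS0"
```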
Use either of the following methods to boot SuSE Linux Enterprise Server: • Choose a SuSE Linux Enterprise Server entry from the EFI Boot Manager menu. To load the SuSE Linux Enterprise Server OS at the EFI Boot Manager menu, choose its entry from the list of boot options. Choosing a Linux entry from the boot options list boots the OS using ELILO.EFI loader and the elilo.conf file. • Initiate the ELILO.EFI Linux loader from the EFI Shell.
On cell-based HP Integrity servers, this either powers down server hardware or puts the nPartition into a shutdown for reconfiguration state. Use the PE command at the management processor Command Menu to manually power on or power off server hardware, as needed. -r Reboot after shutdown. -c Cancel an already running shutdown. time When to shut down (required).
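The options above combine as in these typical invocations (illustrative; the five-minute delay is an arbitrary example):

```
shutdown -h +5      # halt in five minutes
shutdown -r now     # reboot immediately ("now" is shorthand for +0)
shutdown -c         # cancel an already running shutdown
```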
5 Server Troubleshooting This chapter contains tips and procedures for diagnosing and correcting problems with the server and its customer replaceable units (CRUs). Information about the various status LEDs on the server is also included. Common Installation Problems The following sections contain general procedures to help you locate installation problems. CAUTION: Do not operate the server with the top cover removed for an extended period of time.
a. Check the LED for each bulk power supply (BPS). The LED is located in the lower left-hand corner of the power supply face. Table 22 shows the states of the LEDs. b. Verify that the power supply and a minimum of two power cords are plugged into the chassis. A yellow LED indicates that the line cord connections are not consistent with the pwrgrd settings. NOTE: A minimum of two power cords must be connected to A0 and B0, or A1 and B1.
Table 21 Front Panel LEDs (continued)
LED                  Status           Description
                     Yellow (solid)   FPGA detects no MPs present or functioning
Cell 0 and Cell 1    Green (solid)    Cell power on
                     Off              Cell power off
                     Red (solid)      Cell fault
Locate
Table 22 BPS LEDs (continued)
Indication     Description
Blinking Red   BPS state might be unknown, non-recoverable fault(s) present
Red            Not used
Off            BPS fault or failure, no power cords installed, or no power to the chassis
PCI-X Power Supply LEDs There are two active LEDs on the PCI-X power supply: a green power LED and a multi-color LED that reports warnings and faults.
Figure 50 Front, Rear and PCI I/O Fan LEDs
Table 24 System and PCI I/O Fan LEDs
LED          Driven By   State          Description
Fan Status   Fan         On Green       Normal
                         Flash Yellow   Predictive failure
                         Flash Red      Failed
                         Off            No power
OL* LEDs Cell Board LEDs There is one green power LED located next to each ejector on the cell board in the server that indicates the power is good. When the LED is illuminated green, power is being supplied to the cell board and it is unsafe to remove the cell board from the server.
Figure 51 Cell Board LED Locations
Table 25 Cell Board OL* LED Indicators
Location                          LED         Driven by   State      Description
On cell board                     Power       Cell LPM    On Green   3.3 V Standby and Cell_Pwr_Good
(located in the server cabinet)   Attention               Off        3.3 V Standby off, or 3.
Figure 52 PCI-X OL* LED Locations Core I/O LEDs The core I/O LEDs are located on the bulkhead of the installed core I/O PCA. See Table 26 (page 113) to determine status and description. .
Figure 53 Core I/O Card Bulkhead LEDs
Table 26 Core I/O LEDs
LED (as silk-screened on the bulkhead)   State       Description
Power                                    On Green    I/O power on
Attention                                On Yellow   PCI attention
MP LAN 10 BT                             On Green    MP LAN in 10 BT mode
MP LAN 100 BT                            On Green    MP LAN in 100 BT mode
ACT/Link                                 On Green    MP LAN activity
Locate                                   On Blue     Locate LED
Reset                                    On Amber    Indicates that the MP is being reset
Active                                   On Green    This core I/O is managing the system
MP Power                                 On Green    Indicates standby power is on
Figure 54 Core I/O Button Locations
Table 27 Core I/O Buttons
Button Identification (as silk-screened on the bulkhead)   Location                      Function
MP RESET                                                   Center of the core I/O card   Resets the MP
114 Server Troubleshooting
Table 27 Core I/O Buttons (continued)
Button Identification (as silk-screened on the bulkhead)   Location   Function
NOTE: If the MP RESET button is held for longer than five seconds, it will clear the MP password and reset the LAN, RS-232 (serial port), and modem port parameters to their default values.
LAN Default Parameters
• IP Address—192.168.1.1
• Subnet mask—255.255.255.0
• Default gateway—192.168.1.
Figure 55 Disk Drive LED Location
Table 29 Disk Drive LEDs
Activity LED   Status LED   Flash Rate                        Description
Off            Green        Steady                            Normal operation, power applied
Green          Off          Steady                            Green stays on during foreground drive self-test
Green          Off          Flutter at rate of I/O activity   Disk activity
Off            Yellow       Flashing at 1 Hz or 2 Hz          Predictive failure, needs immediate investigation
Off            Yellow       Flashing at 0.
secondary MP and redirects the operating system gettys to the primary MP over an internal MP-to-MP link. All external connections to the MP must be to the primary MP in slot 1. The secondary MP ports will be disabled. The server configuration cannot be changed without the MP. In the event of a primary MP failure, the secondary MP automatically becomes the primary MP.
Server Management Behavior This section describes how the system responds to over-temperature situations, how the firmware controls and monitors fans, and how it controls power to the server. Thermal Monitoring The manageability firmware is responsible for monitoring the ambient temperature in the server and taking appropriate action if this temperature becomes too high.
normally. If the altimeter has failed, and the stable storage value has been lost because of a core I/O failure or replacement, the MP will adjust the fan speeds for sea-level operation. NOTE: Fans driven to a high RPM in dense air cannot maintain expected RPM and will be considered bad by the MP leading to a “False Fan Failure” condition. Power Control If active, the manageability firmware is responsible for monitoring the power switch on the front panel.
Using FTP to Update Firmware The following section contains instructions for using FTP to update firmware. • The user logs into the server console through the LAN, local serial, or remote serial locations. • The user gives the FW command to start the firmware update. NOTE: The LAN configuration for the server must be set for the FTP connection to function correctly regardless of whether the console LAN, local serial, or other connection is used to issue the FW command.
Figure 57 Firmware Update Command Sample Possible Error Messages • Could not ping host • Could not validate CRC of packet • Could not find firmware update • Invalid password PDC Code CRU Reporting The processor dependent code (PDC) interface defines the locations for the CRUs. These locations are denoted in the following figures to aid in physically locating the CRU when the diagnostics point to a specific CRU that has failed or may be failing in the near future.
Figure 58 Server Cabinet CRUs (Front View)
Figure 59 Server Cabinet CRUs (Rear View) Verifying Cell Board Insertion Cell Board Extraction Levers It is important that both extraction levers on the cell board be in the locked position. Both levers must be locked for the cell board to power up and function properly. Power to the cell board should only be removed using the MP:CM>PE command or by shutting down the partition or server.
Table 30 Ready Bit States Ready Bit State MP:CM> DE Command Power Status True “RDY” (denoted by upper case letters) All cell VRMs are installed and both cell latches are locked. False “rdy” (denoted by lower case letters) One or more VRMs are not installed or failed and/or one or more cell latches are not locked.
6 Removing and Replacing Components This chapter provides a detailed description of the server customer replaceable unit (CRU) removal and replacement procedures. The sections contained in this chapter are: Customer Replaceable Units (CRUs) The following section lists the different types of CRUs the server supports. Hot-plug CRUs A CRU is defined as hot-plug if it can be removed from the chassis while the system remains operational, but requires software intervention prior to removing the CRU.
Communications Interference HP system compliance tests are conducted with HP supported peripheral devices and shielded cables, such as those received with the system. The system meets interference requirements of all countries in which it is sold. These requirements provide reasonable protection against interference with radio and television communications.
See Chapter 4 “Operating System Boot and Shutdown” for details on determining the nPartition boot state and shutting down HP-UX. 3. Access the MP Command menu. From the MP Main menu, enter CM to access the Command Menu. 4. Use the MP Command Menu PS command to check details about the hardware component you plan to power off. The PS command enables you to check the status of the cabinet, system backplane, MP core I/O, PCI power domains—or bricks—in the I/O card cage and cells. 5.
Figure 61 Top Cover
Removing the Top Cover
Figure 62 Top Cover Retaining Screws
1. Connect to ground with a wrist strap and grounding mat. See “Electrostatic Discharge ” (page 126) for more information.
2. Loosen the retaining screws securing the cover to the rear of the chassis.
3. Slide the cover toward the rear of the chassis.
4. Lift the cover up and away from the chassis.
Replacing the Top Cover
1. Orient the cover on the top of the chassis.
NOTE: Carefully seat the cover to avoid damage to the intrusion switch.
2. Slide the cover into position using a slow firm pressure to properly seat the cover.
3. Tighten the retaining screws to secure the cover to the chassis.
Removing and Replacing a Side Cover It is necessary to remove and replace one or both of the side covers to access the components within the server chassis.
1. Connect to ground with a wrist strap and grounding mat. See “Electrostatic Discharge ” (page 126) for more information.
2. Loosen the retaining screw securing the cover to the rear of the chassis.
3. Slide the cover toward the rear of the chassis; then rotate outward and remove from the chassis.
Figure 65 Side Cover Removal Detail
Replacing a Side Cover
1. Slide the cover into position. The cover easily slides into position.
2. Use a slow firm pressure to properly seat the cover.
3. Tighten the retaining screw to secure the cover to the chassis.
Removing and Replacing the Front Bezel Figure 66 Bezel hand slots Removing the Front Bezel • From the front of the server, grasp both sides of the bezel and pull firmly toward you. The catches will release and the bezel will pull free. Replacing the Front Bezel • From the front of the server, grasp both sides of the bezel and push toward the server. The catches will secure the bezel to the chassis.
Removing the PCA Front Panel Board
1. Remove the front bezel and the top and left side covers.
2. Follow proper procedures to power off the server.
3. Disconnect the SCSI cables from the MSBP and move them out of the way. This helps provide access to the common tray cage cover.
4. Disconnect the DVD power cable from the mass storage backplane.
5. Disconnect the front panel cable from the system backplane (Figure 68).
6. Unscrew the captive fastener on the common tray cage cover.
Figure 68 Front Panel Board Detail Replacing the Front Panel Board 1. 2. Slide the front panel into its slot from inside the server. Insert the left side of the board into the slot first; the right side of the board is angled toward the rear of the chassis. Insert the right side of the board. Ensure that the power switch does not get caught in one of the many holes in the front of the chassis. Push the panel forward until the lock tabs click. 3. Attach the front panel bezel.
Figure 69 Front Panel Board Cable Location on Backplane Removing and Replacing a Front Smart Fan Assembly The Front Smart Fan Assembly is located in the front of the chassis. The fan assembly is a hot swappable component. CAUTION: Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD safety precautions could result in damage to the server.
Figure 70 Front Smart Fan Assembly Locations Table 31 Front Smart Fan Assembly LED Indications LED State Meaning On Green Fan is at speed and in sync or not at speed less than six seconds Flashing Yellow Fan is not keeping up with speed/sync pulse for greater than six seconds Flashing Red Fan failed/stalled or has run slow or fast for greater than six seconds Off Fan is not installed or no power is applied to fan Removing and Replacing a Front Smart Fan Assembly 135
Removing a Front Smart Fan Assembly Figure 71 Front Fan Detail 1. 2. 3. 4. Remove the front bezel. Pull the fan release pin upward away from the fan. Slide the fan away from the connector. Pull the fan away from the chassis. Replacing a Front Smart Fan Assembly 1. 2. 3. 4. Position the fan assembly on the chassis fan guide pins. Slide the fan into the connector. Verify that the fan release pin is in the locked position. Replace the front bezel. NOTE: The fan LED should show fan is operational (green).
Figure 72 Rear Smart Fan Assembly Locations Table 32 Rear Smart Fan Assembly LED Indications LED State Meaning On Green Fan is at speed and in sync or not at speed less than six seconds Flashing Yellow Fan is not keeping up with speed/sync pulse for greater than six seconds Flashing Red Fan failed/stalled or has run slow or fast for greater than six seconds Off Fan is not installed or no power is applied to fan Removing and Replacing a Rear Smart Fan Assembly 137
Removing a Rear Smart Fan Assembly Figure 73 Rear Fan Detail 1. 2. 3. Pull the fan release pin upward away from the fan. Slide the fan away from the connector. Pull the fan away from the chassis. Replacing a Rear Smart Fan Assembly 1. 2. 3. Carefully position the fan assembly on the chassis fan guide pins. Slide the fan into the connector. Verify that the fan release pin is in the locked position. NOTE: A green fan LED indicates the fan is operational.
Figure 74 Disk Drive Location Removing a Disk Drive Figure 75 Disk Drive Detail 1. 2. Disengage the front locking latch on the disk drive by pushing the release tab to the right and the latch lever to the left. Pull forward on the front locking latch and carefully slide the disk drive from the chassis.
Replacing a Disk Drive NOTE: Sometimes the diskinfo and ioscan commands produce cached data. To avoid this, run these commands while the disk drive is removed.
1. Before installing the disk drive, enter the following command: #diskinfo -v /dev/rdsk/cxtxdx
2. Enter the following command: #ioscan -f The response message after running this command is: NO_HW
3. Be sure the front locking latch is open, then position the disk drive in the chassis.
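After the replacement drive is seated, the same two commands can be re-run to confirm the drive is claimed (a hedged sketch; cxtxdx is the placeholder device name used above and must be replaced with the actual device):

```
#ioscan -f
#diskinfo -v /dev/rdsk/cxtxdx
```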
Figure 76 DVD/DAT Location Removing a DVD/DAT Drive 1. 2. 3. 4. 5. To remove the DVD/DAT, depress the front locking latch to loosen the drive from the chassis. Partially slide the drive out. Disengage the cables from the rear of the DVD/DAT. Remove the rails and clips from the drive. Completely slide the drive from the chassis. (Figure 77).
Installing a Half-Height DVD or DAT Drive. CAUTION: The following section describes precise instructions for removable media cable measurement and orientation. Failure to comply will damage drive(s), data, and power cables. Use this section to configure and install a half-height DVD or DAT drive. Internal DVD and DAT Devices That Are Not Supported In HP Integrity rx7640 Table 33 refers to DVD or DAT drives that are not supported in the HP Integrity rx7640 server.
8. 9. Carefully fold the Top DVD/DAT data cable and insert it into the media bay. The cable must extend out of the drive bay so the black line aligns with the front of the chassis. The cable terminator remains outside of the drive bay in the top of the chassis. Insert one power cable into the drive bay to the right of the data cable as shown in Figure 79. The power cable must extend out of the drive bay so the red flag on the red wire aligns with the front of the chassis. See Figure 79.
Installing the Half-Height DVD or DAT drive 1. Ensure the cables are the correct length. The black line on the SCSI cable and the red flag on the red power cable must align with the front of the front bezel. See Figure 81. Figure 81 SCSI and Power Cable Lengths 2. 3. 4. On the rear of the DVD drive, insert the removable media power cable through the keyed rectangular opening. See Figure 82. Plug the DVD drive power cable into the removable media power cable.
Figure 83 DVD Drive Location Removing a Slimline DVD Drive 1. 2. To remove the DVD drive, press the drive release mechanism to release the drive from the drive bay. Slide the drive out of the DVD carrier. Replacing a Slimline DVD Drive • Slide the drive into the DVD carrier until it clicks into place. Removing and Replacing a Dual Slimline DVD Carrier The Slimline DVD carrier is located in the front of the chassis.
Figure 84 Slimline DVD Carrier Location Removing a Slimline DVD Carrier To remove the carrier, use the following procedure: 1. Depress the front locking latch to loosen the carrier from the chassis. 2. Partially slide the carrier out. 3. Disengage the cables from the rear of the carrier. 4. Completely slide the carrier from the chassis. Installation of Two Slimline DVD+RW Drives. The HP Integrity rx7640 server can be configured with two slimline DVD+RW drives.
Figure 85 Data and Power Cable Configuration for Slimline DVD Installation The following procedure provides information on configuring the removable media drive bay cables for use with the slimline DVD+RW drives. 1. If the cable configuration appears as shown in figure Figure 85 with two power cables and both the Top DVD/DAT and Bottom DVD data cables, proceed with the installation of the drives as described in “Installing the Slimline DVD+RW Drives” (page 149). 2. Turn off power and remove the top cover.
in the top of the chassis. When correctly installed, the cables must be configured as shown in Figure 87. Figure 87 SCSI and Power Cables for Slimline DVD+RW Installation 10. Carefully position the metal removable media cover over the SCSI data and power cables and fasten into place. CAUTION: Ensure the service length of the cables remains fixed as described in steps 7 and 8 when securing the removable media cover. Failure to comply will damage the removable media drives, data, and power cables.
Installing the Slimline DVD+RW Drives 1. Ensure the cables are the correct length. The black line on the SCSI cables and the red flags on the red power cables must align with the front of the front bezel. See Figure 88. IMPORTANT: The SCSI connectors must be on the right and the power cables must be on the left when viewed from the front of the server for proper installation. See Figure 88.
Figure 89 PCI/PCI-X Card Location PCI/PCI-X I/O cards can be removed and replaced by using the SAM (/usr/sbin/sam) application or by using Partition Manager (/opt/parmgr/bin/parmgr). This procedure describes how to perform an online replacement of a PCI/PCI-X card using SAM, for cards whose drivers support online add or replacement (OLAR). IMPORTANT: Some PCI/PCI-X I/O cards cannot be added or replaced online (while HP-UX remains running).
PCI/PCI-X Card Replacement Preliminary Procedures
1. Run SAM (/usr/sbin/sam). From the main SAM Areas screen, select the Peripheral Devices area, then select the Cards area.
2. From the I/O Cards screen, select the card you will replace, then select the Actions->Replace menu item.
3. Wait for SAM to complete its critical resource analysis for the selected card, then review the analysis results.
To add all cards: search all
4. Execute the following EFI command: map -r
5. Enter the Boot Manager by executing the following command: exit
6. From the EFI Boot Manager Menu, select “Boot Option Maintenance Menu”; then, from the Main Menu, select “Add a Boot Option” and add the device as a new boot device.

Updating Option ROMs
The Option ROM on a PCI I/O card can be “flashed” (updated). The procedure to flash an I/O card follows.
1. Install the I/O card into the chassis.
2.
Figure 90 PCI Smart Fan Assembly Location

Table 34 Smart Fan Assembly LED Indications
LED State        Meaning
On Green         Fan is at speed and in sync, or not at speed for less than six seconds
Flashing Yellow  Fan has not kept up with the speed/sync pulse for more than six seconds
Flashing Red     Fan has failed, stalled, or run slow or fast for more than six seconds
Off              Fan is not installed or no power is applied to the fan

Removing a PCI Smart Fan Assembly

Figure 91 PCI Smart Fan Assembly Detail
1. 2.
3. Slide the fan upward out of the chassis.

Replacing a PCI Smart Fan Assembly
1. Carefully position the fan assembly in the chassis.
2. Slide the fan into the chassis (it slides in easily); use slow, firm pressure to properly seat the connection.
3. Replace the top cover.

NOTE: A green fan LED indicates the fan is operational.

Removing and Replacing a PCI-X Power Supply
The PCI-X power supply is located in the front of the chassis. It is an N+1, hot-swappable unit.
Table 35 PCI-X Power Supply LEDs (continued)
LED: Fault (driven by each supply)
  Flash Red: Power supply has shut down due to an over-temperature condition, a failure to regulate the power within expected limits, or a current-limit condition.
  Off: Normal operation.

Removing a PCI-X Power Supply

Figure 93 PCI Power Supply Detail
1. Securely grasp the handle on the front of the power supply.
2. Slide and hold the locking tab to the right and pull the PCI-X supply from the chassis.
Figure 94 BPS Location

IMPORTANT: When a BPS is pulled from the server and then immediately re-inserted, the server might report an overcurrent condition and shut down.

Removing a BPS
1. Remove the front bezel.
2. Press in on the extraction lever release mechanism and pull outward.

Figure 95 Extraction Levers
3. Slide the BPS forward using the extraction levers to remove it from the chassis.
Figure 96 BPS Detail

CAUTION: Use caution when handling the BPS. A BPS weighs 18 lb (8.2 kg).

Replacing a BPS
1. Verify that the extraction levers are in the open position, then insert the BPS into the empty slot.
2. Slide the BPS into the chassis (it slides in easily); use slow, firm pressure to properly seat the connection.
3. Ensure the BPS is seated by closing the extraction levers.
4. Replace the front bezel.

NOTE: The BPS LED should indicate that the BPS is operational with no fault; the LED should be green.
Table 36 Default Configuration for Management Processor LAN (continued)
MP LAN Subnet Mask   255.255.255.0
MP LAN Gateway       192.168.1.1

This procedure (Command menu, LC command) configures the management processor’s MP LAN network settings from the management processor Command menu.
1. Connect to the server complex management processor and enter CM to access the Command menu. Use telnet to connect to the management processor, if possible.
7 HP Integrity rp7440 Server The following information describes material specific to the HP Integrity rx7640 and HP 9000 rp7440 Servers and the PA-8900 processor.
Table 38 Typical Server Configurations for the HP 9000 rp7440 Server

Cell    Memory per       PCI Cards      DVDs  Hard Disk  Core  Bulk Power  Typical    Typical Cooling
Boards  Cell Board (GB)  (10 W each)          Drives     I/O   Supplies    Power (W)  (BTU/hr)
2       32               16             3     2          2     2           2078       7096
2       16               8              2     2          2     2           1908       6515
2       8                8              2     2          2     2           1871       6389
1       8                8              1     1          1     2           1237       4224

The air conditioning data is derived using the following equations:
• Watts x (0.860) = kcal/hour
• Watts x (3.413) = BTU/hour
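As a cross-check on the table, wattage converts to cooling load with the standard factors (1 W = 3.413 BTU/hr = 0.860 kcal/hr); small rounding differences from the published BTU/hr values are expected:

```shell
# Convert a typical power figure (watts) to cooling load using the
# standard conversion factors. Results may differ from the table by
# a few BTU/hr due to rounding in the published figures.
watts=2078
btu=$(awk -v w="$watts" 'BEGIN { printf "%.0f", w * 3.413 }')
kcal=$(awk -v w="$watts" 'BEGIN { printf "%.0f", w * 0.860 }')
echo "${watts} W = ${btu} BTU/hr = ${kcal} kcal/hr"
```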
HP 9000 Boot Configuration Options On cell-based HP 9000 servers the configurable system boot options include boot device paths (PRI, HAA, and ALT) and the autoboot setting for the nPartition. To set these options from HP-UX, use the setboot command. From the BCH system boot environment, use the PATH command at the BCH Main Menu to set boot device paths, and use the PATHFLAGS command at the BCH Configuration menu to set autoboot options.
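From a running HP-UX instance, the boot paths and autoboot flag described above are set with setboot. A hedged sketch follows: the device paths are hypothetical, and the `run` wrapper echoes each command rather than executing it, so remove the wrapper to apply the settings on a real system.

```shell
# Dry-run sketch of setboot usage; the paths below are hypothetical.
run() { echo "$*"; }

run setboot -p 0/0/2/0/0.13   # set the primary (PRI) boot path
run setboot -h 0/0/2/0/0.14   # set the high-availability alternate (HAA) path
run setboot -a 0/0/1/0/0.6    # set the alternate (ALT) boot path
run setboot -b on             # enable autoboot for the nPartition
```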
• BOOT LAN INSTALL or BOOT LAN.ip-address INSTALL
The BOOT... INSTALL commands boot HP-UX from the default HP-UX install server or from the server specified by ip-address.
• BOOT path
This command boots the device at the specified path. You can specify the path in HP-UX hardware path notation (for example, 0/0/2/0/0.13) or in path label format (for example, P0 or P1). If you specify the path in path label format, then path refers to a device path reported by the last SEARCH command.
Do you wish to stop at the ISL prompt prior to booting? (y/n) >> y Initializing boot Device. .... ISL Revision A.00.42 JUN 19, 1999 ISL> 3. From the ISL prompt, issue the appropriate Secondary System Loader (hpux) command to boot the HP-UX kernel in the desired mode. Use the hpux loader to specify the boot mode options and to specify which kernel to boot on the nPartition (for example, /stand/vmunix).
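The usual hpux loader invocations for step 3 look like the following. This is a dry-run sketch: the `run` wrapper echoes the ISL commands instead of executing them, and /stand/vmunix is the conventional kernel path.

```shell
# Dry-run sketch of ISL hpux loader usage; "run" echoes rather than
# executes, since these commands only exist at the ISL prompt.
run() { echo "ISL> $*"; }

run hpux boot /stand/vmunix        # normal (multi-user) boot
run hpux -is boot /stand/vmunix    # boot to single-user mode
run hpux -lm boot /stand/vmunix    # boot to LVM-maintenance mode
```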
1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX in LVM-maintenance mode. Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the BCH Main Menu (the Main Menu: Enter command or menu> prompt). If you are at a BCH menu other than the Main Menu, then enter MA to return to the BCH Main Menu. 2. 3.
2. Issue the shutdown command with the appropriate command-line options. The command-line options you specify dictate the way in which HP-UX is shut down, whether the nPartition is rebooted, and whether any nPartition configuration changes take place (for example, adding or removing cells). Use the following list to choose an HP-UX shutdown option for your nPartition:
• Shut down HP-UX and halt the nPartition.
• Shut down HP-UX and reboot the nPartition.
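The shutdown choices listed above correspond roughly to shutdown(1M) invocations like these. This is a dry-run sketch: the `run` wrapper echoes instead of executing, and the -y flag and zero grace period are illustrative, not required.

```shell
# Dry-run sketch of HP-UX shutdown options for an nPartition;
# "run" echoes each command instead of executing it.
run() { echo "$*"; }

run shutdown -h -y 0      # shut down HP-UX and halt the nPartition
run shutdown -r -y 0      # shut down HP-UX and reboot the nPartition
run shutdown -R -y 0      # reboot for reconfiguration (apply cell changes)
run shutdown -R -H -y 0   # hold in ready-for-reconfig state (nPartition inactive)
```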
FIRMWARE:
Core IO
  MP-0 : B.002.005.010
  ED-0 : 002.007.000
  MP-1 : B.002.005.010
  ED-1 : 002.007.000
Cell 0
  PDHC : B.023.003.033 - Active
  PDHC : B.023.003.030
  PDC_FW : 042.009.000 - Active
  PDC_FW : 042.009.000
Cell 1
  PDHC : B.023.003.033 - Active
  PDHC : B.023.003.030
  PDC_FW : 042.009.000 - Active
  PDC_FW : 042.009.000
A Replaceable Parts
This appendix contains the server CRU list. For an updated list of part numbers, go to the HP PartSurfer website:
http://www.partsurfer.hp.com

Table 39 Server CRU Descriptions and Part Numbers
CRU Description                                 Replacement P/N   Exchange P/N
                                                8120-6895         None
Pwr Crd C19/IEC-309 L6-20 4.5m BLACK CA ASSY    8120-6897         None
Pwr Crd C19/L6-20 4.5m BLACK C                  8120-6903         None
240V N.AMERICAN UPS 4.5M C19/L                  8120-8494         None
Pwr Crd C19/GB 1002 4.
Table 39 Server CRU Descriptions and Part Numbers (continued)
CRU Description                                 Replacement P/N   Exchange P/N
POWER
AC Power Supply                                 0957-2183         None
PCI-X N+1 Power Module                          0950-4637         None
OTHER COMPONENTS
Nameplate, rp7440                               A9959-3401A       None
Nameplate, rx7640                               AB312-2108A       None
Box, DVD Filler (Carbon)                        A6912-00014       None
Intrusion Switch                                5040-6317         None
Assy, Bezel, No NamePlate (Graphite)            A7025-04001       None
Assy, Front Panel Display Bezel                 AB312-2102A       None
Snap, Bezel Attach                              C2786-40002       None
B MP Commands This appendix contains a list of the Server Management Commands. Server Management Commands Table 40 lists the server management commands.
Table 42 System and Access Config Commands (continued)
CP       Display partition cell assignments
DC       Reset parameters to default configuration
DI       Disconnect Remote or LAN console
ID       Change certain stable complex configuration profile fields
IF       Display network interface information
IT       Modify command interface inactivity time-out
LC       Configure LAN connections
LS       Display LAN connected console status
PARPERM  Enable/Disable interpartition security
PD       Modify default Partition for this login session
C Templates This appendix contains blank floor plan grids and equipment templates. Combine the necessary number of floor plan grid sheets to create a scaled version of the computer room floor plan. Figure 97 illustrates the overall dimensions required for the server. Figure 97 Server Space Requirements Equipment Footprint Templates Equipment footprint templates are drawn to the same scale as the floor plan grid (1/4 inch = 1 foot).
3. Remove a copy of each applicable equipment footprint template.
4. Cut out each template selected in step 3; then place it on the floor plan grid created in step 2.
5. Position the pieces until the desired layout is obtained; then fasten the pieces to the grid. Mark the locations of computer room doors, air-conditioning floor vents, utility outlets, and so on.

NOTE: Attach a reduced copy of the completed floor plan to the site survey.
Figure 99 Planning Grid
Figure 100 Planning Grid
Index A access commands, 169 air ducts, 30 illustrated, 30 AR, 169 ASIC, 11 B backplane mass storage, 24, 25, 132 PCI, 20, 24 system, 14, 20, 24, 25, 29, 133 BO, 169 BPS (Bulk Power Supply), 65 C CA, 169 cards core I/O, 116 CC, 169 cell board, 13, 14, 15, 25, 28, 63, 68, 71, 110 verifying presence, 68 cell controller, 11 checklist installation, 73 cm (Command Menu) command, 69 co (Console) command, 71 command, 169 co (Console), 71 CTRL-B, 71 di (Display), 72 PE, 127 scsi default, 127 ser, 127 T, 127 vfp (
IT, 169 K Keystone system air ducts, 30 L LAN, 116 LC, 169 LED Attention, 64 Bulk Power Supply, 65 management processor, 14 remote port, 14 SP Active, 64 Standby Power Good, 64 traffic light, 14 login name MP, 65 LS, 169 M MA, 169 management hardware, 116 Management Processor (MP), 63 management processor (MP), 116 mass storage backplane, 24, 25, 132 memory, 11 MP login name, 65 password, 65 MP (Management Processor) logging in, 64 powering on, 64 MP core I/O, 13, 14, 20, 24, 62, 63 MP/SCSI, 13, 14, 20,
TE, 169 turbocoolers, 11 U update firmware, 121 V verifying system configuration, 72 W warranty, 32 web console, 116 WHO, 169 wrist strap, 126 X XD, 169 177