HP Cluster Platform InfiniBand Interconnect Installation and User's Guide HP Part Number: A-CPIBI-1E Published: October 2006
© Copyright 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Table of Contents (condensed)

About This Manual
1 InfiniBand Technology Overview
2 Installing and Maintaining the ISR 9024 Interconnect
3 Installing and Maintaining the ISR 9024 S/D Interconnect (RoHS Compliant)
4 Installing and Maintaining the ISR 9096 and ISR 9288 Interconnects
5 HP 4x DDR IB Switch Module for c-Class BladeSystems
6 Installing Field-Replaceable Units in the ISR 9XXX Chassis
7 Cabling the Interconnect
8 Installing and Maintaining HCA Cards and Mezzanines
9 Postinstallation Troubleshooting and Diagnostics
A ISR 9024 Interconnect Specifications
B ISR 9024 S/D Interconnect Specifications (RoHS Compliant)
C ISR 9096 Interconnect Specifications
D ISR 9288 Interconnect Specifications
E 4x DDR IB Switch Module Specifications
F HCA PCI Card Specifications
About This Manual

This manual describes how to install and operate an InfiniBand interconnect as part of an HP Cluster Platform. It does not duplicate detailed information and procedures for cluster components that have their own documentation, such as the cluster nodes, network switches, and system racks. Instead, it contains references to the component documentation, which should be readily available during the installation procedure.
Chapter | Description
Appendix E: 4x DDR IB Switch Module Specifications | Defines the operational and technical specifications for the 4x DDR InfiniBand Switch Module for the c-Class BladeSystem enclosure.
Appendix F: HCA PCI Card Specifications | Defines the operational and technical specifications for the HCA-400-series and other HCA cards installed in cluster nodes.
• Blades InfiniBand Interconnect Cabling Tables – Provides the point-to-point InfiniBand cabling tables for c-Class BladeSystems.
• c-Class Blade Cable Management Bracket Installation Guide – Provides the installation procedure for the c-Class Blade Cable Management Bracket.
• HP Cluster Platform InfiniBand Fabric Management and Diagnostic Guide – Provides test and diagnostic procedures that are specific to the application of Voltaire InfiniBand interconnects.

Important: Go to http://www.docs.hp.com.
Warning! Text set off in this manner indicates that failure to follow directions could result in bodily harm or loss of life.

Caution: Text set off in this manner indicates that failure to follow directions could result in equipment damage or data loss.

Important: Text set off in this manner presents clarifying information or specific instructions.

Note: Text set off in this manner presents commentary, sidelights, or interesting points of information.
Safety Considerations

To avoid bodily harm and damage to electronic components, read the following safety and comfort guidelines before unpacking and configuring the cluster components. Heed the following additional warnings, and refer to the HP Cluster Platform Overview and the HP Cluster Platform Site Preparation Guide to obtain specific information on safety issues.
Refer to the specifications section of the component documentation to find the weight of a component.

The following safety topics are covered in this section:
• Removing and Replacing Component Covers
• Using Cellular Phones and Other Wireless Technology
• Battery Safety Information
• Avoiding Hearing Damage
• Avoiding Burn Injuries
• Avoiding Static Electricity

For your safety, never remove the cover from a cluster component without first disconnecting the power cord from the power outlet and removing any connection to a telecommunications network.
Recycling

Shipping an integrated cluster generates far less packaging than shipping the individual components that it contains. However, large clusters use a substantial amount of packaging material that is not reusable. The bulk of the packaging material is recyclable and is labeled as such. You should plan on providing a number of dumpsters into which this packaging can be sorted and recycled. HP has a strong commitment to protecting the environment.
1 InfiniBand Technology Overview

InfiniBand™ is a specification of the InfiniBand® Trade Association, of which Hewlett-Packard is a member. The trade association has generated specifications for a high-bandwidth, low-latency communications protocol for server clusters. The same communications protocol can operate across all system components for computing, communications, and storage as a distributed fabric.
  — Security and partitioning
  — Integrated management tools
• A strong technical future, with support for:
  — PCI cards:
    ◦ Voltaire SDR PCI-X
    ◦ Topspin/Mellanox SDR PCI-X Rev B; Mellanox SDR PCI-X Rev C
    ◦ Topspin/Mellanox SDR PCI-Express Rev B; Mellanox SDR PCI-Express Rev C
    ◦ Mellanox Mem Free SDR PCI-Express
    ◦ Mellanox Mem Free DDR PCI-Express
    ◦ Mellanox DDR PCI-Express
  — Remote direct memory access (RDMA)
  — Faster link speeds

InfiniBand Protocol

InfiniBand fabrics provide a single point of management for all connected devices.
  — A PCI bus adapter that is installed in a host, such as a server.
  — Any networked device that includes an InfiniBand-compliant HCA chip on its motherboard.
• An interconnect (switch) connecting all devices (such as servers) in a switched point-to-point network.
• A router or gateway, providing connections to other InfiniBand or TCP/IP Ethernet subnets.
• InfiniBand cables of parallel copper design that link the InfiniBand components together.
• Support only single data rate (SDR) links.
• Are not compliant with the Restriction of Hazardous Substances (RoHS) directive, European Union Directive 2002/95/EC, which restricts the use of specific hazardous materials found in electrical and electronic products. All applicable products placed on the EU market after July 1, 2006 must pass RoHS compliance.

For information on newer models, refer to the following documents:
• Voltaire ISR 9024 S/D Installation Manual
• Voltaire ISR 9288 / ISR 9096 Installation Manual
Figure 1-3 ISR 9024 S/D (RoHS Compliant) Interconnect Front Panel View

The ISR 9024 S/D (RoHS Compliant) interconnect is a fully non-blocking interconnect with a theoretical throughput of 960 Gb/s (the arithmetic behind this figure is shown below). This device has the following physical and operational features:
• A 1U chassis, designed for industry-standard racks.
• 24 ports of 20 Gb/s throughput (DDR), 10 Gb/s (SDR).
• Fabric scalability from several nodes to hundreds of nodes.
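The 960 Gb/s figure is simple arithmetic: 24 ports, each carrying 20 Gb/s (DDR) in each direction simultaneously. The following Python sketch is provided only as a worked example of the throughput figures quoted in this manual; it is not part of any HP or Voltaire software.

    # Worked example: aggregate throughput of a fully non-blocking switch.
    def aggregate_throughput_gbps(ports: int, rate_gbps: float) -> float:
        """Each port carries rate_gbps in each direction at once (full duplex)."""
        return ports * rate_gbps * 2  # x2 for full duplex

    print(aggregate_throughput_gbps(24, 20))  # DDR: 960.0 Gb/s
    print(aggregate_throughput_gbps(24, 10))  # SDR: 480.0 Gb/s (see Appendix A)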
Important: Do not mix SDR and DDR components in an ISR 9096 chassis.

Figure 1-5 ISR 9096 Interconnect Rear Panel View

The ISR 9096 enables high-performance applications to run on distributed nodes, share common storage, and share network resources. This device has the following physical and operational features:
• A 6U chassis, designed for industry-standard racks.
• 96 ports, each providing 10 Gb/s of throughput.
• Latency of less than 420 nanoseconds.
Figure 1-6 ISR 9288 Interconnect Front Panel View

Important: Do not mix SDR and DDR components in an ISR 9288 chassis.

The ISR 9288 enables high-performance applications to run on distributed nodes, share common storage, and share network resources. This device has the following physical and operational features:
• A 14U chassis, designed for industry-standard racks.
• 288 ports, each providing 10 Gb/s of throughput.
• A fully non-blocking interconnect with a latency of less than 420 nanoseconds.
• Up to 11 router blade drawers, model sRBD, each of which supports up to three router blades of either of the following models:
  — TCP/IP Internet protocol router blade, model IPR, which provides four Gigabit Ethernet ports and an RJ-45 management port that includes both 10/100 Base-T Ethernet and serial ports.
  — Fibre Channel router blade, model FCR, which provides four ports of 1 or 2 Gb/s auto-sensing Fibre Channel.
• Up to five redundant power supplies (PSUs).
• A vertical fan unit, model sFU-4.
See the Voltaire InfiniBand Fabric Management and Diagnostic Guide and the HP Cluster Platform InfiniBand Fabric Management and Diagnostic Guide for information on how to launch and use the management interfaces.
2 Installing and Maintaining the ISR 9024 Interconnect

The hardware maintenance operations described in this chapter require appropriate training and knowledge of safety procedures. The safety procedures for Cluster Platform are described in the HP Cluster Platform Site Preparation Guide. The following interconnect maintenance topics are described:
• Overview of the ISR 9024 (Section 2.1).
• Unpacking the ISR 9024 (Section 2.2).
• Mounting the ISR 9024 in the rack (Section 2.3).
  — DC to DC conversion.
  — Temperature sensing mechanism.
• The management card, which is a processor PCI mezzanine card (PrPMC), is installed only in an internally managed ISR 9024. It provides the following features:
  — A CPU, operating at 400 MHz.
  — An EEPROM that stores configuration information using the I²C interface.
  — User flash memory on the CPU's device bus.
1. Power supply module that converts AC to DC supply. One or two power supplies are installed.
2. Serial port connector that provides access to the serial CLI console interface.
3. Ethernet RJ-45 port that provides the management interface over a local network. (The management data traffic is called out-of-band because it does not pass over the InfiniBand high-speed network.) You can use this port only when the rear RJ-45 connector is not in use.
2.1.3 Externally Managed ISR 9024 Front Panel

In this configuration, the I²C port panel replaces the management card panel. The I²C port provides access to channel management functions. Figure 2-3 shows the front panel of the externally managed configuration.

Figure 2-3 Externally Managed ISR 9024, Front Panel

Figure 2-3 shows the following front panel features:
1. Captive fasteners retaining the power supply unit.
2. Logical link indicator LED (amber) for port 13.
3.
The following features are shown:
1. Indicator (green LED) showing the physical status of each InfiniBand link.
2. Indicator (amber LED) showing the logical status of each InfiniBand link.
3. Reset button:
   • Press and hold this button for more than 2 seconds, but less than 6 seconds, to reset the management card function without affecting data traffic through the high-speed network.
   • Press and hold this button for more than 6 seconds to reset the interconnect.
8. Open the small box and verify its contents against the packing list as follows. Some parts are optional and might not be included in all shipments:
   a. Verify that the chassis is the correct configuration: internally managed or externally managed.
   b. Identify the rail kit, consisting of:
      1. Four angle brackets.
      2. Two chassis-mounted rails.
   c. If an internally managed unit, verify that a management cable adapter is present.
   d. At least one power cable, depending on the configuration.
2. Determine the correct mounting location for the interconnect in your model of cluster platform. The position is specified as a U-location, which corresponds to the marked locations on the vertical columns of an HP 10000-series rack. Each U-location has three holes: top, middle, and bottom. This rack kit uses the top and bottom holes of the specified location.
3. Mark the location with masking tape or a marker pen.
2.4 Installing the ISR 9024 Power Supply Unit

An ISR 9024 contains either one or two power supplies in the side bays of the unit chassis, as shown in Figure 2-3.

Warning: To maintain the correct cooling air flow, you must fill an unused power supply bay with the supplied blank filler panel.

Use the following procedure to install or remove a power supply. You can do this while the interconnect is running.
3. The PS LED is flashing. The PSU is not properly plugged in or not completely seated:
   • Check the power cord and its connections. Ensure that the cord is connected to the correct inlet in single-PSU configurations.
   • Check the installation of the PSU module.
4. The SYS LED does not illuminate. Check the status of the PS LED. If it is also not illuminated, it is a PSU problem; see Step 1. If the PS LED is illuminated, it is an over-temperature problem.
3 Installing and Maintaining the ISR 9024 S/D Interconnect (RoHS Compliant)

The hardware maintenance operations described in this chapter require appropriate training and knowledge of safety procedures. The safety procedures for HP Cluster Platform are described in the HP Cluster Platform Site Preparation Guide. The following interconnect maintenance topics are described:
• Overview of the ISR 9024 S/D (Section 3.1).
• Unpacking the ISR 9024 S/D (Section 3.2).
The ISR 9024 S/D chassis is a 1U-high, 19-inch wide, rack-mounted, ANSI, ETSI, and RoHS-compliant chassis that contains the following modules:
• The base board, which hosts the InfiniBand 4X SDR or DDR switch chip and provides the following:
  — 24-port 4X 10 Gb/s or 20 Gb/s switch chip.
  — 12 4X physical connections to base board ports.
  — Base board to management card interface.
  — DC to DC conversion.
  — Temperature sensing mechanism.
Figure 3-1 Internally Managed ISR 9024 D-M, Front Panel

Figure 3-1 shows the following front panel features:
1. Power supply indicator.
2. Hot-swappable power supply.
3. Fan unit indicator.
4. Hot-swappable fan unit (contains two fans for high availability) with auto-heat sensing to allow for silent fan operation. Front-to-rear cooling.
5. Reset button.
Figure 3-2 Externally Managed ISR 9024 S/D, Front Panel

The following list corresponds to the callouts shown in Figure 3-2:
1. Power supply indicator.
2. Power supply module.
3. Fan unit indicator.
4. Fan unit.
5. Reset button.
6. System status LEDs, including:
   • Info – user defined (for future use).
   • Pwr – system power status.
7. I²C connector, providing the serial I²C interface.
8. Power supply indicator.
9. Power supply module.
   • Info
   • Power
6. IEC power receptacle.

3.2 Unpacking the ISR 9024 S/D Interconnect

The chassis is shipped in a double-wall carton, surrounded by shock absorption material, as shown in Figure 3-4.

Figure 3-4 ISR 9024 S/D Packaging

Use the following procedure to unpack the ISR 9024 S/D (callout 3 in Figure 3-4):
1. Place the carton on a secure work platform at a safe working height.
2. Check the carton to ensure that it is factory sealed and undamaged.
8. Open the small box (callout 4 in Figure 3-4) and verify its contents against the packing list as follows. Some parts are optional and might not be included in all shipments:
   a. Verify that the chassis is the correct configuration: internally managed or externally managed.
   b. Identify the rail kit, consisting of:
      1. Four angle brackets.
      2. Two chassis-mounted rails.
   c. If an internally managed unit, verify that a management cable adapter is present.
3. Attach the front L-brackets to either side of the chassis, using the three 8/32-in screws provided, as shown by callout 1 in Figure 3-5.
4. Attach the rear L-brackets loosely to each rail, as shown by callout 2 in Figure 3-5. The screws should be just loose enough that you can easily adjust the position of the bracket on the rail.
5. Position the L-bracket so that the fasteners are approximately in the center of the elongated holes.
3.4.1 Replacing the Fan Unit

In normal operation, the two fans work at 50% utilization. In case of fan failure or high-temperature detection, the fans go into Turbo mode. In case of fan failure, the fan drawer LED and the PS/FAN LED on the rear panel blink. When the fan unit is removed, the system can continue to function for up to five (5) minutes before a new fan unit must be installed. Figure 3-6 shows the fan unit.

Figure 3-6 Replacing the Fan Unit

Use the following procedure to replace a fan unit:
2. Position the rear (connector end) of the power supply in the empty bay.
3. Holding the power supply level with the chassis, slide it into the slot until it meets resistance at the chassis connector. Do not force the unit if you feel resistance; ensure that it is level and square with the bay.
4. Push the module further until it is completely seated on the connector and its bezel is flush with the chassis front panel.
5. Use the captive fasteners at each side of the power supply to secure it in place.
5. The SM LED does not illuminate. The SM LED does not illuminate if the ISR 9024 S/D is externally managed (there is no management card present). If the SM indicator does not illuminate on an internally managed ISR 9024 S/D-M, the management card is not running the subnet management software. Call HP service. 6. The InfiniBand port links do not illuminate, as follows: • The green physical link indicator does not illuminate.
4 Installing and Maintaining the ISR 9096 and ISR 9288 Interconnects

This chapter describes the installation and maintenance of the Voltaire ISR 9096 and the ISR 9288 interconnects. The actual product name is used when referring to a specific feature or procedure. When features are common to both interconnects, they are referred to as ISR 9XXX. The hardware maintenance operations described in this chapter require appropriate training and knowledge of safety procedures.
  — ISR 9096: up to 96 InfiniBand 4X (10 Gb/s, 20 Gb/s DDR) ports or 32 12X (30 Gb/s, 3.84 Tb/s DDR) ports.
  — ISR 9288: up to 288 InfiniBand 4X (10 Gb/s, 20 Gb/s DDR) ports or 96 12X (30 Gb/s, 3.84 Tb/s DDR) ports.
• Modular architecture with any combination of InfiniBand ports and router blades.
• Bisectional switch bandwidth in a passive mid-plane architecture.
• Fat tree topology with less than 1 microsecond of latency between any two ports.
from the front of the chassis to the midplane. The sLB and router module signals are routed from the rear of the chassis to the midplane. The ISR 9096 has one non-swappable board, and the ISR 9288 has three non-swappable boards.
• Chassis backplane. The sPSU and sFU-8 modules are connected to the system through the backplane, which provides power and communication connections between the modules. It provides the mechanical, control, and electrical interface to the controllable AC/DC 48V power supplies.
• Low-level link and PHY function.
• Subnet management agent (SMA).
• Performance management agent (PMA).

You upgrade the firmware by using the Ethernet, RS-232, or I²C port to connect to the system on the sMB board. You can also upgrade the firmware in-band via any of the InfiniBand line board ports. See the Voltaire InfiniBand Fabric Management and Diagnostic Guide. The ISR 9XXX features an embedded InfiniBand subnet manager (SM) and management software.
Caution: Due to the weight of the chassis, two people are required to remove it from the shipping container. Before unpacking, ensure that the shipping containers are not damaged. Damage to shipping containers (other than the normal wear-and-tear of shipping) indicates that the contents might have been subjected to excessive shock.

4.2.1 Unpacking the ISR 9288

Use the following procedure to unpack the ISR 9288 container:
1. Remove the top cover of the crate after releasing the four clamps.
3. Remove all of the above components and the packing foam from the top of the crate.
4. Remove the documentation and the CD, which are wrapped in an antistatic bag on top of the chassis.

Figure 4-2 Documentation and CD Location

Item | Description
1 | Getting Started Short Guide
2 | ISR 9288 Product CD and other CDs, according to system configuration

5. Open the front door of the wooden crate by releasing the clamps (Figure 4-3). Two people are required to complete the next step.
6. Remove the chassis from the box and place it on a flat surface.
7. Open any additional boxes and remove the items inside. Verify the contents, which depend on the configuration of your cluster.

4.2.2 Unpacking the ISR 9096

Use the following procedure to unpack the ISR 9096 container:
1. Remove the top cover of the box.
2. Verify that the top compartment (see Figure 4-4) contains the following components:
   1. Cabling guide brackets (these brackets are not used for HP Cluster Platform installations).
Figure 4-5 ISR 9096 Accessories Box

Item | Description
1 | Ground stud kit
2 | Console cable
3 | ISR 9096 Product CD and other CDs, depending on system configuration
4 | Power cable
5 | Tray screws kit

4. Configurations that include cable management also have the following items in the accessories box:
   • Cabling guide bracket for cable management.
   • Cable management screws.
5. Remove all of the above components and the foam grips from the top of the chassis.
Note: Depending on the reason for the chassis replacement, the shipment might contain preinstalled modules. In other instances (such as a failed mid-plane in the original chassis) you might need to depopulate the failed chassis, and transfer the modules to the new chassis after you install the chassis in the rack. Figure 4-6 shows a front view of the chassis and Figure 4-7 shows the rear (port) view.
Figure 4-7 ISR 9288 Rear View

Item | Description
1 | 24-port line board
2 | Line board latch
3 | sRBD router blade drawer
4 | Router blade, installed in the sRBD router blade drawer
5 | One of 5 redundant power supplies
6 | sCTRL control module

4.3 Repacking the Chassis

If you need to return a defective chassis for servicing or replacement, it must be packed for shipping in its original container. Use the following procedure to pack the ISR 9XXX chassis:
14U in the rack, and the ISR 9096 chassis occupies 6U in the rack. Under normal circumstances, the only reason for performing an installation is when you remove a complete chassis from its location in an existing HP Cluster Platform. Note the following constraints when replacing a chassis: • Ensure that the replacement of a chassis does not disrupt the proper ventilation, so that ambient temperature is maintained for air intake at the front of the chassis and exhaust at the chassis vents.
The two L-brackets that are pre-assembled to the front sides of the interconnect chassis are also used to secure the chassis to the rack columns, as shown in Figure 4-9. Figure 4-9 ISR 9XXX Rack Mount Procedure 3 4 1 2 Figure 4-10 shows a view of the correct assembly of each telescoping rail unit from the rear of the rack.
3. Assemble each pair of rails, as shown in Figure 4-11.

Figure 4-11 Assembling 9XXX Rails

   a. Place the slotted rail (callout 1 in Figure 4-11) on the outside of the larger rail (callout 3 in Figure 4-11).
   b. Secure the slotted rail loosely, using eight of the machine screws provided. Two of these screws are shown by callout 2 in Figure 4-11.
   c. Repeat Steps 3a and 3b for the second rail assembly.
1. Align the left cable management hook assembly with the top of the chassis, as shown in Figure 4-12.

Figure 4-12 Attaching the Cable Management Hooks

2. For each of the 5 mounting holes in the hook assembly, clip an M6 cage nut into the back of the rack column.
3. Secure the hook assembly to the cage nuts by using 5 M6 machine screws.
4. Repeat Steps 2 and 3 for the right hook assembly.
4.4.3 Installing the Cabling Clamps

Additional cabling clamps are required if line boards are installed in slots 1 and 2 of the chassis (the topmost slots), or if the number of cables connected to the interconnect exceeds 128 and the interconnect is installed in a standard 600 mm wide cabinet. Figure 4-13 shows the cable clamp installation on the rack's rear right column.
Table 4-1 ISR 9XXX Standard Configurations

Description | ISR 9288 | ISR 9096
Fabric boards | sFB-12 (up to four) | sFB-4 (up to four)
Fan units | Horizontal: sFU-8; Vertical: sFU-4 | Vertical: N/A
Control (rear) | sCTRL | sCTRL
Power supply unit | sPSU (up to five) | sPSU (up to four)
High-memory sMB for extended fabric management | sMB-hi-mem | N/A
Cabling guide bracket kit (chassis gripping) | Chassis gripping bracket kit | Chassis gripping bracket kit
Cabling guide bracket kit (rack gripping) | Rack gripping bracket kit |
The front panel contains the following modules and features:
1. A master sMB module.
2. The sFU-8 eight-fan horizontal cooling module.
3. The sFU-4 four-fan vertical cooling module.
4. Up to 4 sFB-12 fabric boards.
5. A redundant slave sMB module.
6. L-brackets that you use to attach the chassis to the rack columns.

In an HP Cluster Platform, the ISR 9288 rear panel is configured as shown in Figure 4-15 and Figure 4-16.
Figure 4-16 Populated I/O Drawer

The following features are shown in Figure 4-16:
1. sRBD router blade drawer.
2. Blade installed in the sRBD.
3. Location of the locking levers for inserting and removing sRBD router blade drawers.
4. sLB-24 InfiniBand line board.

In an HP Cluster Platform, the ISR 9096 front panel is configured as shown in Figure 4-17:

Figure 4-17 ISR 9096 Chassis Front Panel Configuration

The following features are shown in Figure 4-17:
1. Master sMB module
2.
Figure 4-18 ISR 9096 InfiniBand Ports

The following features are shown in Figure 4-18:
1. Up to 4 sLB-24 InfiniBand line boards.
2. Locking levers for inserting and removing the line boards.
3. Blade installed in the sRBD.
4. sRBD router blade drawer.
5. A single sCTRL management module.
6. Ethernet slots.

4.6 Post-installation Tasks

Following the installation of the interconnect, your next tasks are as follows:
1. Cabling the interconnect, as described in Chapter 7.
5 HP 4x DDR IB Switch Module for c-Class BladeSystems

Overview

The 4x DDR IB switch module is a double-wide switch module for the HP BladeSystem c-Class enclosure. It is based on the Mellanox 24-port InfiniScale III 4x DDR InfiniBand switch chip. When an IB mezzanine HCA is plugged into the c-Class server blade, the mezzanine HCA is connected to the IB switch through the mid-plane in the c-Class enclosure.
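With 16 downlinks to the blade mezzanine HCAs and 8 uplinks to the external fabric (see Appendix E), the switch chip itself is non-blocking, but traffic leaving a fully populated enclosure shares the uplinks at a 2:1 ratio. The following Python sketch is a worked example of that arithmetic, assuming all ports run at the 4x DDR rate of 20 Gb/s; it is illustrative only.

    # Enclosure bandwidth for the 4x DDR IB switch module.
    # Port counts are from Appendix E; 20 Gb/s is the 4x DDR rate.
    downlinks, uplinks, rate_gbps = 16, 8, 20

    print(f"Blade-facing bandwidth:  {downlinks * rate_gbps} Gb/s")  # 320 Gb/s
    print(f"Fabric-facing bandwidth: {uplinks * rate_gbps} Gb/s")    # 160 Gb/s
    print(f"Uplink oversubscription: {downlinks // uplinks}:1")      # 2:1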
Note: You must remove the c-Class blade cable management bracket to remove or replace a 4x DDR IB switch module installed in the back of a c-Class BladeSystem enclosure. To remove the bracket, reverse the installation procedure described in the c-Class Blade Cable Management Bracket Installation Guide.

5.1.1 Subnet Manager

A subnet manager is required to establish the InfiniBand fabric and provide InfiniBand fabric services.
6 Installing Field-Replaceable Units in the ISR 9XXX Chassis

The ISR 9288 chassis is pre-assembled with the fan units (sFU-4 and sFU-8) and the rear panel sCTRL board. You must install all boards in their designated slots to enable their full functionality. All boards are hot-swap enabled (for some boards, only when the hot-swap indicator LED is illuminated), and you can insert and extract these boards without turning off electrical power.
Figure 6-1 Board Ejectors

Figure 6-1 shows the top ejector latches on the fabric boards, and includes the following information:
1. Security screws on each board complete the board seating and lock the board in place.
2. The latch release button, which also electronically signals a board extraction request when you press it.
3. The latch mechanism. After pressing the button (and waiting for the hot-swap LED to illuminate), lift both ejector latches simultaneously to eject the board.
4.
Table 6-1 sFB-12 Fabric Board LED Status

Indication | Description
Link | Each pair of LEDs indicates the status of six ports of each line board. The two LEDs in each pair are:
• Green – The physical link is up when this LED is illuminated.
• Amber – The logical link is up when this LED is illuminated.
You can also connect and disconnect InfiniBand cables while the ISR 9288 is powered on. If the node at the other end of a link is powered up, the link status LEDs illuminate to indicate the link status. The rear of the ISR 9288 chassis provides slots for up to 12 line boards. Figure 6-3 shows the front panel of the line board.

Figure 6-3 Line Board Front Panel

Figure 6-3 shows the following line board features:
1. The physical port number and link status LEDs.
2.
   c. Top left.
   d. Bottom right.
6. Verify the status of the following LEDs:
   a. The power LED is illuminated.
   b. The info LED is not illuminated.

Use the following procedure to extract a line board from the chassis:
1. Unsecure and disconnect all cables.
2. Release the security screws.
3. Press the red buttons to unlock the ejectors.
4. Press the ejectors outwards and slowly slide out the board.
Table 6-3 sMB Board LEDs (continued)

Indication | Description
Power (Green) | Illuminated: The voltage levels of the board are within nominal values. Off: There is a power problem if the chassis is powered up; otherwise, no power is applied to the chassis.
Info | This is a general-purpose green LED for management use. Its state (illuminated or off) indicates system status when interpreted with the state of other LEDs in the chassis.
Figure 6-5 Power Supply Unit

Figure 6-5 shows the following features of the PSU:
1. IEC power inlet.
2. Fan vents.
3. Power status signal LEDs.
4. Locking screw.

Table 6-4 lists and describes the power supply module LEDs.

Table 6-4 Power Supply Unit LEDs

Indication | Description
DC ON (Green) | Illuminated: The DC power source is present. Off: There is a problem with the DC supply; otherwise, power might not be applied to the chassis.
AC ON (Green) | Illuminated: The AC power source is present.
• Four small form-factor SFP GBIC GbE ports, providing a fast link to an IP network for devices on the InfiniBand fabric.
• An RJ-45 management port.

Figure 6-6 shows the front panel of the IPR blade.

Figure 6-6 IPR Blade Front Panel

Figure 6-6 shows the following features of the IPR blade:
1. The link/activity LEDs, labeled 1 – 4, one for each labeled port.
2. Gigabit Ethernet ports, labeled Et1 – Et4.
3.
Figure 6-7 FCR Blade Front Panel

Figure 6-7 shows the following features of the FCR blade:
1. The link/activity LEDs, labeled 1 – 4, one for each labeled FC port.
2. Fibre Channel ports, labeled FC1 – FC4.
3. 1, 2, and Sys LEDs that indicate the following status:
   • Link status of the combined interfaces in the integrated management port.
   • Overall system status.
4. Management RJ-45 port, with status LEDs.
1. Ensure that the lock handle is disengaged.
2. Holding the module by the corners of its bezel, align it so that it is level and square with the front panel of the chassis.
3. Slide in the module until most of the unit is in the slot, you feel the resistance of the midplane connector, and the lock handle begins to engage in its locking slot.
4. Press the lock handle in to complete the connection and fasten the unit in the slot.

Use the following procedure to remove a module from the sRBD drawer:
6.8 Installing and Replacing the sCTRL Board

The sCTRL board provides the chassis management ports (Ethernet and RS-232 DB9 serial) and includes a reset button for the chassis. You install the sCTRL board on the rear panel of the ISR 9288 chassis. Figure 6-10 shows the sCTRL board.

Figure 6-10 sCTRL Board Front Panel
Table 6-7 sCTRL Board LEDs (continued)

LED Label | Description
CM Active 2 (Green) | If illuminated, the chassis manager is running on management card #2.
Eth 1 and Eth 2 | Each Ethernet port (which provides access to the management interface) has two LEDs, which provide the following status information:
• Link (green): Illuminated if a link is detected.
• Activity (amber): Illuminated or flashing if data traffic is detected on the link.

The sCTRL board is installed at the factory.
Warning! Never remove the sFU-4 and sFU-8 at the same time; at least one fan module must be installed at all times.

Use the following procedure to replace a defective fan module by hot-swap:
1. Unpack and prepare all components on a convenient work surface adjacent to the chassis before you begin the swap procedure. This enables you to perform the swap quickly.
2. Before starting the procedure, ensure that the sFU-8 module is functioning correctly.
3.
Table 6-8 sFU-8 Fan Module LEDs

LED Label | Description
Temp (Amber) | This LED indicates an over-temperature fault on the chassis. When present, this usually indicates a problem with one of the fan modules.
• Illuminated: An over-temperature fault is detected.
• Off: Temperature levels are normal.
sFU-4 (Amber) | This LED indicates that at least one of the four fans has failed in the other fan module (the sFU-4).
sFU-8 (Amber) | This LED indicates that at least one of the eight sFU-8 fans has failed.
7 Cabling the Interconnect

Each model of HP Cluster Platform has a set of specific port-to-port cabling tables that describe the origin and destination of every link between the interconnect and the cluster nodes. When the cluster is integrated at the factory, the cables are installed and tested. Certain cables are then removed, and the cluster racks are packaged for shipping. At the receiving site, installation engineers cable up the system according to a system-specific cabling table.
Caution: Do not bend the InfiniBand cables too sharply. The minimum bend radius is 4 inches (10 cm).

• Follow the cabling procedure to avoid creating airflow dams, particularly for the ISR 9024, which employs passive cooling.
• Consider the requirements and constraints of the installation location. For example, if cooling air is supplied through floor vents, avoid covering the vents.
• Avoid air leaks into cable conduits by using the appropriate seals or baffles.
Note: Cluster Platform Express (CPE) configurations use a cable routing procedure that is specific to CPE models. Follow the cabling instructions in the Cluster Platform Express Installation Guide.

The cabling tables are specific to:
• The total number of nodes in the cluster.
• The models of server used as nodes.
• The node format, which determines rack density.
• Other configuration-specific factors, such as:
  — Whether the cluster is a bounded or federated design.
  — Whether the cluster is a bounded or federated design.
  — Whether the cluster is a full or constrained bandwidth design.
  — The presence of utility racks (UXR).

Use the following procedure to route and connect cables:
1. During factory integration, all cables are labeled with their origin and destination ports.
2. Begin at the bottom of the chassis with line board 4.
3.
Figure 7-2 Routing Model for a Fully Configured ISR 9288

Use the following procedure to route and connect cables:
1. During factory integration, all cables are labeled with their origin and destination ports.
2. Begin at the bottom with the lowest line board in the chassis, as shown in Figure 7-2.
3. Working upwards through the line boards, dress the cables in the bracket hook that is adjacent to the line board for each line card, as follows:
   a.
7.2.2 Connecting to FCR FC Ports

Use the following procedure to connect a cable to the Fibre Channel port:
1. Connect the small form-factor FC 1G/2G 850 nm LC transceiver connector to the FC port on the FCR module.
2. Connect the other end of the cable to the appropriate device.

7.3 Chassis Administrative Connections

You make the initial CLI connection to a chassis through a PC connected to a serial port. This connection enables you to perform basic configuration, such as setting up IP addresses. A scripted example of opening the serial console is sketched below.
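If you prefer to script the initial serial connection from the management PC rather than use a terminal emulator, the following minimal Python sketch (using the third-party pyserial package) illustrates the approach. The device path and line settings are assumptions for illustration only; use the serial parameters documented for your interconnect model.

    # Minimal sketch: opening the chassis serial console from a PC.
    # Assumed settings (not from this guide): /dev/ttyS0, 9600 baud, 8-N-1.
    import serial  # pyserial package

    console = serial.Serial(
        port="/dev/ttyS0",
        baudrate=9600,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=2,
    )
    console.write(b"\r\n")  # wake the CLI prompt
    print(console.read(256).decode("ascii", errors="replace"))
    console.close()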
Use the following procedure to connect to the management port Ethernet interface:
1. See the Ethernet Network Cabling Tables to determine where the management port connects to an in-rack ProCurve switch.
2. Connect the end of the management cable with the single RJ-45 connector to the management port on the interconnect.
3. Connect the end of the cable with the Ethernet connector to the appropriate ProCurve switch port.

After an IP address has been assigned, you can verify that the management interface responds over the network, as shown in the sketch below.
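The following Python sketch probes the common management service ports from a host on the management network. It is illustrative only; the address is a placeholder, and which services (Telnet, SSH, HTTP) are enabled depends on your interconnect configuration.

    # Minimal sketch: checking reachability of the management interface.
    import socket

    MGMT_IP = "192.0.2.10"  # placeholder; use your interconnect's address

    for port, name in [(23, "telnet"), (22, "ssh"), (80, "http")]:
        try:
            with socket.create_connection((MGMT_IP, port), timeout=3):
                print(f"{name} ({port}): reachable")
        except OSError:
            print(f"{name} ({port}): no response")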
8 Installing and Maintaining HCA Cards and Mezzanines

A feature of InfiniBand fabrics is that individual components can contain an InfiniBand host channel adapter (HCA) or, as in the case of an HP BladeSystem server blade, a mezzanine HCA. Every device containing an HCA is a visible and manageable part of the InfiniBand fabric.
Figure 8-1 HCA-400 PCI Card

The following features of the HCA-400 PCI card are identified by the numbered callouts in Figure 8-1:
1. An InfiniBand port, one of two.
2. The indicator LEDs for physical (green) and logical (amber) link status.
3. The card's metal bracket. Always handle the card by holding only the bracket. Do not touch the printed circuit board, card components, or the gold-plated connector.
4. Gold-plated connector fingers. Avoid touching the connector during installation.
5.
server's maintenance documentation and understand the procedures for installing a PCI card before you begin installing the HCA. Caution: The HCA-400 is PCI-X 2.2 and PCI 2.1 compliant (32/64 bit, 66/100/133 MHz). All server models that are qualified by HP are tested and configured to meet the requirements of this HCA card. You cannot use these installation instructions with alternate server models, or with alternate configurations of supported servers.
   a. Configure the node out of the cluster, as described in the operating environment documentation.
   b. Shut down the node, as described in the User Guide for the server.
   c. Disconnect any existing InfiniBand cable link. This step might require the removal of cable management components that are specific to the model of server and the InfiniBand interconnect.

Note: You might need to remove the server from the rack to install a PCI card.
11. Attach the metal bracket to the bulkhead with the retaining screw that you removed in Step 6.
12. Close the server chassis and slide it back into the rack.
13. Reinstall any cable management components that you removed in Step 4.
14. Reconnect the cables that you disconnected in Step 3.
15. You are now ready to cable the card, following the cabling instructions for your model of cluster platform and using the following sequence:
    a.
Figure 8-3 Topspin/Mellanox SDR PCI-X HCA

Features of the Topspin/Mellanox PCI-X HCA include:
• Two 4x InfiniBand 10 Gb/s ports (40 Gb/s InfiniBand bandwidth full duplex – 2 ports x 10 Gb/s/port x 2 for full duplex).
• 64-bit PCI-X v2.2, 133 MHz.
• 7 Gb/s transfer rates per port.
• Less than 6 μs latency.
• 128 MB local memory.

The port LEDs on the Topspin/Mellanox PCI-X HCA blink to indicate traffic on the InfiniBand link on initial setup.
The port LEDs on the Topspin/Mellanox PCI-Express HCA blink to indicate traffic on the InfiniBand link on initial setup. The installation and LEDs of the Topspin cards are similar to those of the Voltaire HCAs described previously in this chapter.

8.5 Mellanox PCI-X HCA (SDR)

The Mellanox PCI-X HCA supports InfiniBand protocols. It is a single data rate (SDR) card with two 4x InfiniBand 10 Gb/s ports and 128 MB memory.
Port 2 LEDs: Physical Link – Green; Data Activity – Yellow.

8.6 Mellanox PCI-Express HCA (SDR)

The Mellanox PCI-Express HCA supports InfiniBand protocols. It is a single data rate (SDR) card with two 4x InfiniBand 10 Gb/s ports and 128 MB memory. Figure 8-6 shows the Mellanox PCI-Express HCA. The Mellanox PCI-Express HCA has 128 MB of memory, a tall bracket, and is a RoHS-R5 HCA card.
no data is being passed. The activity LED blinks when data is being passed. If the LEDs are not active, either the physical or the logical connection (or both) has not been established.

Port 1 LEDs: Physical Link – Green; Data Activity – Yellow.
Port 2 LEDs: Physical Link – Green; Data Activity – Yellow.

8.7 Mellanox Memory Free PCI-Express HCA (SDR)

The Mellanox Memory Free PCI-Express HCA supports InfiniBand protocols. It is a single data rate (SDR) card with one 4x InfiniBand 10 Gb/s port.
• EEPROM used for storing Vital Product Data (VPD).
• Embedded InfiniRISC™ processors for management and subnet management agent (SMA).
• Integrated physical layer SerDes.
• Integrated GSA (General Service Agents).
• Low-latency communication technology.
• Flexible completion mechanism support (completion queue, event, or polled operation).
• I/O panel LEDs.

8.7.1 LEDs

The board has two LEDs located on the I/O panel for the 4X port.
• Hardware support for up to 16 million QPs, EEs, and CQs.
• Memory protection and translation tables fully implemented in hardware.
• IB native layer 4 DMA hardware acceleration.
• Multicast support.
• Programmable MTU size from 256 to 2K bytes.
• Four virtual lanes supported, plus management lane.
• Support for InfiniBand transport mechanisms (UC, UD, RC, RAW).
• EEPROM used for storing Vital Product Data (VPD).
• Embedded InfiniRISC™ processors for management and subnet management agent (SMA).
• Integrated physical layer SerDes.
Figure 8-9 Mellanox PCI-Express HCA (DDR)

Features of the Mellanox PCI-Express DDR HCA include:
• Two 4X InfiniBand copper ports for connecting InfiniBand traffic (4X IB connectors).
• 4X port supports 20 Gb/s.
• Third-generation HCA core.
• On-board DDR SDRAM memory (memory configurations vary).
• PCI-Express x8 edge connector.
• I/O panel LEDs.

Installation of the Mellanox PCI-Express DDR HCA is similar to that of the Voltaire HCAs described previously in this chapter.
8.9.2 HP HPC 4x DDR IB Mezzanine HCA

HP Cluster Platform supports a server blade mezzanine HCA card that has the following features:
• 4x DDR
• Single port
• PCI-Express
• 20 Gb/s
• Memory-free

The HPC 4x DDR IB Mezzanine HCA ships with the Voltaire GridStack software. Voltaire GridStack is a comprehensive set of drivers and protocols that enable applications to utilize the RDMA features in the hardware.
9 Postinstallation Troubleshooting and Diagnostics

This chapter provides information on the interconnect firmware's logging and monitoring functions. Use these functions to perform initial fabric debugging and to confirm device failures if you encounter a problem that is indicated by the interconnect or HCA LED status arrays.
3. Check the link LEDs for the various ports on the line board front panels.

In the event of a system lock-up in the ISR 9XXX, use the reset button to reset and reboot the system. The system has two reset buttons: one is located on the front panel of the sFU-8 fan assembly module, and one on the front panel of the sCTRL board. To reset the router, push the button using a thick wire or the tip of a pen until the system reboots. Remove the wire immediately afterwards.
4.
The following status values are associated with port failures:
• BROKEN_LINK: One end of the physical link is up, but the other end is unreachable.
• INVALID_LINK: The port shows different connectivity on both sides of the link.
• DUPLICATE_HCA_GUID: Another HCA (or interconnect) exists with the same GUID.
• DUPLICATE_SWITCH_GUID: Another interconnect exists with the same GUID.

Functioning ports are discovered automatically when the fabric is polled or initialized.
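When you collect port status values from the management interface, they lend themselves to simple scripted triage. The following Python helper is hypothetical (it is not part of the interconnect firmware or management software); it simply maps each status value described above to the first check suggested in this chapter.

    # Hypothetical triage helper for the port status values above.
    REMEDIES = {
        "BROKEN_LINK": "check the cable and the device at the far end",
        "INVALID_LINK": "reseat or replace the cable; verify the cabling tables",
        "DUPLICATE_HCA_GUID": "find and replace the HCA reporting the same GUID",
        "DUPLICATE_SWITCH_GUID": "find and replace the interconnect reporting "
                                 "the same GUID",
    }

    def triage(port: int, status: str) -> str:
        action = REMEDIES.get(status, "no action; port is functioning")
        return f"port {port}: {status}: {action}"

    print(triage(13, "BROKEN_LINK"))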
A ISR 9024 Interconnect Specifications

The IBTA 1.1 compliant InfiniBand technical specifications of the ISR 9024 are as follows:

Feature | Technical Specification
InfiniBand Ports | 4X and/or 12X factory-configurable copper ports. Up to 24 4X (10 Gb/s) InfiniBand non-blocking full-wire-speed ports. 140-nanosecond single-hop port-to-port switching latency. Overall throughput of up to 480 Gb/s.
Feature | Technical Specification
Externally managed | Over 400,000 hours
Environmental, Operating | Humidity: 15% to 80%, non-condensing. Altitude: 0 to 9843 ft (3000 m).
Environmental, Storage | Temperature: -13°F to 185°F (-25°C to +85°C). Humidity: 5% to 90%, non-condensing. Altitude: 0 ft to 15,000 ft (4570 m).
B ISR 9024 S/D Interconnect Specifications (RoHS Compliant)

The technical specifications of the ISR 9024 S/D interconnect (RoHS Compliant) are as follows:

Feature | Technical Specification
InfiniBand Ports | 24 4X Single Data Rate (SDR – 10 Gb/s) ports, or 24 4X Dual Data Rate (DDR – 20 or 10 Gb/s auto-negotiate) ports. Interconnect options: copper, with optional support for optical adaptors on the top row of 12 ports. Indicators: physical and logical status. All ports are located on the rear panel.
Management | Remote InfiniBand
Feature | Technical Specification
Power Rating | Power consumption*: ISR 9024S — 51W, max. ISR 9024S-M — 63W, max. ISR 9024D — 58W, max. ISR 9024D-M — 69W, max. BTU/hour = Watts x 3.413.
Power Factor | 90 Vac/60 Hz/Max Load = 0.998. 120 Vac/60 Hz/Max Load = 0.997. 230 Vac/60 Hz/Max Load = 0.973. 264 Vac/60 Hz/Max Load = 0.957.
Maximum Power Draw | 120W
Power Supply Efficiency | 82%
Leakage Current @ 264V | 0.737 mA
Power Cords | 2 power cords, 2-meter jumper, with a universal plug for PDU (Power Distribution Unit).
Feature | Technical Specification
EMC | EN55022:98 / EN55024:98 / EN61000-3-2:00 / EN61000-3-3:95. This device complies with Part 15 of the FCC rules. Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received.
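The BTU/hour formula in the power rating table above is useful when sizing rack cooling. A minimal Python example, using the 120W maximum power draw quoted for this interconnect:

    # Heat load from electrical power, per the table above:
    # BTU/hour = Watts x 3.413.
    def btu_per_hour(watts: float) -> float:
        return watts * 3.413

    print(btu_per_hour(120))  # 120 W maximum draw -> ~409.6 BTU/hour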
C ISR 9096 Interconnect Specifications

The technical specifications of the ISR 9096 are as follows:

Feature | Specification
Chassis Architecture | • Four slots, each of which can accommodate one line board or one router blade drawer (hosting up to 3 router blades). • Redundant power supplies. • Redundant fan trays. • Redundant management blades.
Feature | Specification
Router Blade Drawer | You can install up to three of the following InfiniBand form factor router blades in each router blade drawer:
• Voltaire IPR (IP Router):
  — IPR InfiniBand form factor for integration with the ISR 9096, providing high-speed layer 4-7 switching and TCP termination.
  — 4 interfaces: IEEE 802.3ab and 802.3ad (link aggregation) compliant.
  — Connectors: 4 ports, 1000BASE-SX mini-GBIC RJ-45 or optic (SMF); RJ-45 for management.
D ISR 9288 Interconnect Specifications

The technical specifications of the ISR 9288 are as follows:

Feature | Specification
Chassis Architecture | Based on passive midplane boards, which allow the IB differential signals, control signals, and power connectivity between the different boards in the chassis. Cables for InfiniBand, management, and power are all located at the rear. Four slots for fabric boards in front and 12 slots for a combination of line boards/router blade drawers in the rear.
Feature | Specification
Router Blades | You can install up to three of the following InfiniBand form factor router blades in each router blade drawer (sRBD):
• FCR InfiniBand form factor for integration with the ISR 9288 and OEM systems:
  — Four 1G/2G Fibre Channel ports.
  — RJ-45 port for management.
  — Indicators:
    ◦ Two InfiniBand link/activity LEDs.
    ◦ Four Fibre Channel activity LEDs.
    ◦ System LED.
    ◦ Management link LEDs.
Feature | Specification
Weight | 110 to 187 lbs (50 to 85 kg), depending on configuration
Environmental | • Ambient operating temperature: 32° to 113°F (0° to 45°C). • Operating humidity: 15 to 80%, non-condensing. • Operating altitude: 0 to 9843 ft (3000 m). • Storage temperature: -13° to 158°F (-25° to +70°C). • Storage humidity: 5 to 90%, non-condensing. • Storage altitude: 0 to 15,000 ft (4570 m).
E 4x DDR IB Switch Module Specifications

The technical specifications of the 4x DDR IB Switch Module are as follows:

Compliance
• IBTA version 1.1 compatible
• RoHS-R5

General Specifications
• Communications processor: MT25204A0-FCC-D, InfiniHost III Lx
• On-chip memory: 4 KB MTU
• Data path: Fully non-blocking 4X DDR switching with 16 downlinks and 8 uplinks.
• Dimensions (L x W): 15.3 x 10.6 inches

Power and Environmental Specifications
• Operating temperature: 0 to 40° C.
F HCA PCI Card Specifications

Appendix F includes specifications for the following HCA PCI cards and mezzanine HCAs:
• Voltaire HCA-400 PCI Card Specifications (Section F.1).
• Topspin/Mellanox PCI-X HCA Specifications (Section F.2).
• Topspin/Mellanox PCI-Express HCA Specifications (Section F.3).
• Mellanox PCI-X HCA Specifications (Section F.4).
• Mellanox PCI-Express HCA Specifications (Section F.5).
• Mellanox Memory Free PCI-Express HCA (SDR) Specifications (Section F.6).
Feature | Specification
Physical Characteristics | Low-profile MD2. Dimensions (H x D): 2.5 in. (64 mm) x 6.7 in. (170 mm).
Environmental, Operating | Temperature: 32°F to 122°F (0°C to 50°C). Humidity: 15 to 80%, non-condensing. Altitude: 0 to 9843 ft (3000 m).
Environmental, Storage | Temperature: -13°F to 158°F (-25°C to 70°C). Humidity: 5 to 90%, non-condensing. Altitude: 0 to 15,000 ft (4570 m).
64-bit Linux: Red Hat Enterprise Linux AS 3, 4; SuSE LINUX Enterprise Server 8, 9.

F.3 Topspin/Mellanox PCI-Express HCA Specifications

This section includes the specifications for the Topspin/Mellanox PCI-Express HCA. The HCA offers two 4x InfiniBand 10 Gb/s ports, providing an aggregate 40 Gb/s InfiniBand bandwidth full duplex (2 ports x 10 Gb/s/port x 2 for full duplex).
F.4 Mellanox PCI-X HCA Specifications

The specifications listed below cover the Mellanox PCI-X SDR HCA.

Physical: Size: 2.5 in. x 6.6 in. Connector: InfiniBand dual copper MicroGigaCN.
Power and Environmental: Air flow: 200 LFM @ 55°C. Voltage: 12V, 3.3V. Maximum power: 9.2W. Temperature: 0° to 55° Celsius.
Protocol Support: 10 Gb/s. InfiniBand: Auto-Negotiation (10 Gb/s, 2.5 Gb/s).
Safety: IEC/EN 60950-1:2001; ETSI EN 300 019-2-2. Environmental: IEC 60068-2-64, -29, -32.

F.6 Mellanox Memory Free PCI-Express HCA (SDR) Specifications

The specifications listed below cover the Mellanox Mem-Free PCI-Express HCA (SDR).

Physical: Size: 54 mm x 102 mm (2.13 in. x 4 in.). Connector: Amphenol InfiniBand MicroGigaCN.
Power and Environmental: Air flow: 200 LFM @ 55°C. Voltage: 3.3V. Maximum power: 4.
Protocol Support: 10 Gb/s.
F.8 Mellanox PCI-Express DDR HCA Specifications

The specifications listed below cover the Mellanox PCI-Express DDR HCA.

Physical: Size: 2.5 in. x 6.6 in. Connector: InfiniBand (copper).
Power and Environmental: Air flow: 200 LFM @ 55°C. Voltage: 12V, 3.3V. Maximum power: 10W. Temperature: 0° to 55° Celsius.
Protocol Support: 4X 20 Gb/s. InfiniBand: Auto-Negotiation (20 Gb/s, 5 Gb/s) or (10 Gb/s, 2.5 Gb/s).
4x DDR IB Mezzanine HCA Specifications
• Non-operating temperature: -40 to 70° C.
• Non-operating humidity (non-condensing): 5% to 95%.
• Power requirement: 1.35 A at 3.3V max (4.5W).
• Emissions classifications: FCC CFR 47 Part 15 Class A; CISPR 22 Class A; ICES-003 Class A; VCCI Class A; ACA CISPR 22 Class A.