HP Cluster Platform Workgroup System and Cluster Platform Express Overview and Hardware Installation Guide HP Part Number: A-CPCPE-1E Published: February 2009
© Copyright 2009 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Table of Contents
About This Document.........................................................................................................9
Intended Audience.................................................................................................................................9
Document Organization.........................................................................................................................9
Typographic Conventions.....................................................
3 CPE BladeSystem with c7000 Enclosure Configurations.........................................43
3.1 CPE c7000 Configuration Examples................................................................................................43
3.1.1 Server Blade Configuration Example with Three Enclosures in a 42U Rack.........................44
3.1.2 Server Blade Configuration Example with Two Enclosures in a 42U Rack............................46
3.1.
D.6 Administrative/Console Wiring Tables for Three c7000 Enclosures with Double-Density Server Blades and the InfiniBand Fabric..........................................................................................................84
D.7 One c7000 Enclosure with Single-Density Blades and an HP 4x DDR InfiniBand Switch Module..................................................................................................................................................86
D.
F.2.3 Using Two ISR 9024 Interconnects to Support 25-32 Nodes.................................................123
Index...............................................................................................................................
List of Figures
1-1 Zero-U Mounted Power Distribution Unit....................................................................................14
1-2 Zero-U Mounted Power Distribution Unit for c7000 Configurations...........................................15
1-3 42U Power Strip Installation in a 42U Cabinet............................................................
List of Tables
1-1 PDUs Per Rack...............................................................................................................................16
1-2 HP GbE2c Ethernet Blade Switch for c-Class BladeSystem User and Service Documentation....20
1-3 Rack-Mountable Server Control Nodes........................................................................................25
1-4 CPE Documentation for Supported Components..................................................
About This Document

This document provides overview and hardware installation information for the following HP Cluster Platform Express (CPE) configurations:
• HP Cluster Platform Workgroup System with HP BladeSystem c3000 Enclosure
• CPE BladeSystem with c7000 Enclosure
• CPE with rack-mountable servers

CPE configurations are easy-to-order, preconfigured, single-rack cluster solutions supporting either InfiniBand or Gigabit Ethernet system interconnects.
Typographic Conventions

This document uses the following typographic conventions:

Reader Notes
NOTE: Content of the note. Notes provide additional information, supplementing the adjoining content or emphasizing points of information.

Reader Warnings
WARNING! Content of the warning. A warning calls attention to important information that, if not understood or followed, will result in personal injury or nonrecoverable system problems.

Reader Cautions
CAUTION: Content of the caution.
1 Cluster Platform Express Overview HP Cluster Platform Express (CPE) systems are delivered to you factory assembled and ready for deployment. They are available with a choice of pre-installed software for cluster management. The extensive array of software development tools and applications featured in the Unified Cluster Portfolio are supported on Cluster Platform Express solutions, which follow the same design and quality standards established for HP Cluster Platform solutions.
1.2 CPE BladeSystem with c7000 Enclosure CPE BladeSystem configurations use integrated c7000 enclosures that can hold up to 16 half-height servers in each enclosure.
• An overview of the power strips used in CPE (see Section 1.4.3)
• An overview of the networks used in CPE (see Section 1.4.4)
• External control nodes used in CPE BladeSystems (see Section 1.4.5)
• Examples of making connections to an external network (see Section 1.4.6)
• Cable management components used in CPE (see Section 1.4.7)

1.4.1 Basic Components Used in CPE

There are several components that are common to most CPE configurations.
nodes to the folding keyboard and screen (TFT7600 KVM). These connections let you log on to the nodes and perform system and job management tasks from the local system console. See the HP Cluster Platform Core Components Overview for more information. • Optical media device: Access to a DVD RD or RW drive is required for software installation. When the configuration includes an external control node, that node must be configured with a DVD drive.
The following list describes the callouts shown in Figure 1-1: 1. Installation screws and cage nuts secure the PDU to the side of the rack column, adjacent to the rack's side panels for servicing. 2. The rack's front left column. 3. Main inlet power cable, connected to the site power supply. 4. Outlet power cables (up to 4) which connect to the power strips (see Figure 1-3). 5. Connector shield, providing support for the outlet power cables. 6. PDU main breaker switch. 7. Individual breaker switches (4). 8.
Table 1-1 PDUs Per Rack

CPE Configuration | Number of Power Supplies Installed in the c-Class Enclosure or Rack Components | Number of 0U Mounted PDUs Required
CP Workgroup System | The c3000 enclosure is configured with up to four power supplies. | Up to two 24A PDUs are necessary when an external control node and TFT are used.
CPE BladeSystem with c7000 enclosure | There are up to five power supplies installed in each c7000 enclosure. | Two PDUs for each c7000 enclosure.
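The PDU counts in Table 1-1 can be captured as a small sizing helper. This sketch is illustrative only: the function name and the single-PDU fallback for a Workgroup System without an external control node are assumptions, not HP sizing guidance.

```python
def pdus_required(config: str, enclosures: int = 1,
                  external_control_node: bool = False) -> int:
    """Rough 0U-mounted PDU count per Table 1-1 (illustrative only)."""
    if config == "workgroup":
        # Table 1-1: up to two 24A PDUs when an external control node
        # and TFT are used; the no-external-node count of one is assumed.
        return 2 if external_control_node else 1
    if config == "c7000":
        # Table 1-1: two PDUs for each c7000 enclosure.
        return 2 * enclosures
    raise ValueError(f"unknown configuration: {config!r}")

print(pdus_required("c7000", enclosures=3))  # 6
```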
Figure 1-3 42U Power Strip Installation in a 42U Cabinet 6 7 4 3 2 1 5 Figure 1-3 shows the following PDU installation features: 1. The first power strip is located in the ninth hole from the bottom of the rack. 2. The second power strip is located in the fifth hole from the top of the first power strip. 3. The third power strip is located in the third hole from the center of the cabinet rail. 4. The fourth power strip is located in the third hole from the top of the third power strip. 5.
Figure 1-4 42U Power Strip Installation 1 3 2 7 8 10 9 6 4 5 11 The following list describes the callouts shown in Figure 1-4: 1. M5 screw attached to power strip. 2. M5 screw attached to cable management bracket. 3. Power strip mounting plate. 4. Bracket attaching the power strip to the power strip mounting plate. 5. Power strip mounting plate. Attach to right side of rack (looking in from the rear of the rack). 6. Mounting hole location on cable management plate. 7.
1.4.4.1 Fast-Ethernet Switches The Fast-Ethernet network, which is documented in the HP Cluster Platform Gigabit Ethernet Interconnect Guide, is used for administrative and console networking in CPE solutions. You can also use the switches for MPI applications. Cluster management on server blade configurations is supported over an administration network with both console and operating-system level management branches. These branches share a physical switch that can be separated using VLANs, if required.
For user and service information for the HP GbE2c blade switch, see Table 1-2. Table 1-2 HP GbE2c Ethernet Blade Switch for c-Class BladeSystem User and Service Documentation Document Title Location GbE2c Ethernet Blade Switch for HP c-Class Quick Setup Instructions http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00700745/ c00700745.pdf HP GbE2c Ethernet Blade Switch for http://bizsupport.austin.hp.
Figure 1-6 Example InfiniBand Cable Routing for CPE BladeSystem with c7000 Enclosure Configurations 7 3 2 1 6 4 5 The following list describes the callouts in Figure 1-6: 1. c-Class enclosure in the bottom of a cabinet. 2. Second c-Class enclosure in a cabinet (there is a maximum of three enclosures in a 42U rack). 3. c-Class cable management bracket. 4. 24-node cable management bracket. 5. Direction of view, facing the rear of the cabinet. 6.
Figure 1-7 HP 4X DDR InfiniBand Switch Module 1 2 Note: When the 4X DDR InfiniBand switch module is used in a CP Workgroup System, it is installed in module bays 3 and 4 of the HP BladeSystem c3000 enclosure. Figure 1-8 shows the new HP BLc 4X DDR InfiniBand Gen2 Switch being installed in an HP BladeSystem c7000 enclosure.
Note: The HP BLc 4X DDR InfiniBand Gen2 Switch is not currently supported in HP BladeSystem c3000 enclosures. For more information, contact your HP representative. Figure 1-9 provides an example of the InfiniBand cable management in CPE rack-mountable server configurations. Note the InfiniBand cables running from a Voltaire ISR 9024D 24-port interconnect to each rack-mounted server in the rack.
Figure 1-10 Cable Management Plate 1 2 3 5 4 The following list describes the callouts in Figure 1-10: 1. Fabric cable management strap with a metal ring and a hook-and-loop closure to secure the cable bundles. 2. Single screw attaching the strap to the plate, enabling you to change to a longer length strap for larger cable bundles. 3. The strap passes through the cut-out slots in the cable management plate to provide full support for a cable bundle. 4.
Table 1-3 Rack-Mountable Server Control Nodes

HP ProLiant DLxxx Gx with Opteron Processors | HP ProLiant DLxxx Gx with Xeon Processors
DL145 Gx | DL140 Gx
DL165 Gx | DL160 Gx
DL385 Gx | DL380 Gx

Note: Rack-mountable control nodes are referred to as external control nodes and are considered an option for BladeSystem configurations. If the external control node option is not selected, one of the server blades in a c3000 or c7000 enclosure must be designated as a control node. 1.4.5.
5. Embedded network interface card NIC 2, RJ-45 port, which is optionally connected by a CAT5e cable to the local network (LAN).
6. Embedded network interface, connected by a CAT5e cable to the HP ProCurve administrative network switch. This connection enables cluster management tasks such as job submission and monitoring.
7. Optional redundant power supplies.

1.4.
Figure 1-12 CP Workgroup System Configuration with Single-Density Server Blade Control Node 3 1 2 4 5 The following list describes the callouts in Figure 1-12: 1. HP GbE2c switch 2. Connection from the Onboard Administrator's iLO port to the HP GbE2c port 22 3. Optional connection from a server blade control node through an HP GbE2c external port to an external campus network. 4. Onboard Administrator module 5.
Note: In all configurations with a server blade as the control node, NIC2 of the server blades installed in the same enclosure as the control node will also be connected to the switch used to provide external connectivity. Cluster management software or the system administrator for the cluster must determine what steps are necessary to provide appropriate connectivity to the external network.
Figure 1-14 CPE BladeSystem Configurations with a Rack-Mountable Server Control Node 11 2 2 8 9 6 3 12 4 6 10 1 7 5 6 The following list describes the callouts in Figure 1-14: 1. Four connections from an HP GbE2c switch installed in Interconnect Module Bay 1 (IMB1) of the first HP BladeSystem c7000 enclosure to an HP ProCurve 2824 switch (see the following note) 2. HP ProCurve 2824 switch 3.
10. Connection from the OA's external link to an administrative switch (HP ProCurve 2824 switch) 11. Optional connection from the control node's NIC2 port to an external network (such as a campus network) 12. Optional connection from the control node's iLO port to either an administrative switch or to an external network (such as a campus network) Note: Only one connection from each HP GbE2c switch to the HP ProCurve 2824 switch is required if the optional InfiniBand network is present. 1.4.6.
Figure 1-16 CPE BladeSystem (c7000) Configurations with a Single-Density Server Blade Control Node 2 6 3 4 6 8 1 7 9 5 10 6 The following list describes the callouts in Figure 1-16: 1. Four connections from an HP GbE2c switch installed in Interconnect Module Bay 1 (IMB1) of the first HP BladeSystem c7000 enclosure to an HP ProCurve 2824 switch (see the following note) 2. HP ProCurve 2824 switch 3.
Note: Only one connection from each HP GbE2c switch to the HP ProCurve 2824 switch is required if the optional InfiniBand network is present.

1.4.6.6 CPE BladeSystem Configurations with Double-Density Server Blade Control Node

Figure 1-17 shows the connections to an external network for configurations of c7000 enclosures with a double-density server blade control node.
documentation for each unique component in the cluster (such as an HP 10000 Series Rack, DLxxx Gx server, or Ethernet switch) are supplied with your shipment.

Table 1-4 CPE Documentation for Supported Components

Component | Document | Note
PDU and power distribution bars | HP Cluster Platform Core Components; HP Cluster Platform Site Preparation Guide | Safety information.
HP 10000 series racks; 22U or 42U | HP Cluster Platform Core Components | Safety information.
Table 1-4 CPE Documentation for Supported Components (continued)

Component | Document | Note
c-Class cable management bracket (used for the two c-Class InfiniBand switch modules with either 8 or 16 external uplink ports) | HP BladeSystem c-Class Enclosure Cable Management Bracket Installation Guide (436670-doc) | Installation and usage information for c-Class cable management.
2 Cluster Platform Workgroup System

The CP Workgroup System is a single-cabinet cluster containing a variety of components and a c3000 enclosure to meet the following specifications:
• A single rack in a 22U configuration
• An optional rack-mount (1U or 2U) control node (some restrictions apply)
• An optional InfiniBand interconnect
• A shared administration/Gigabit Ethernet network
• Configurations within one c3000 enclosure

The CP Workgroup System configuration architecture is based on HP Clu
Note: See Appendix B for the configuration diagrams and the cabling tables for CP Workgroup System. 2.1.1 22U Cabinet with a c3000 Enclosure and an External Control Node Figure 2-1 shows a typical CP Workgroup System with one HP BladeSystem c3000 Enclosure in a 22U rack with an external control node. Figure 2-1 CP Workgroup System with a c3000 in a 22U Rack 1 2 3 4 5 The following list describes the callouts shown in Figure 2-1: 1. HP 10000 Series 22U rack. 2. DL140 or DL145 G3 external control node.
Figure 2-2 CP Workgroup System in a 22U Rack Without an External Control Node 1 2 3 4 Figure 2-2 shows the following cluster components: 1. HP 10000 Series 22U rack. 2. Filler panels to close up reserved space on the front of the rack. 3. The HP BladeSystem c3000 enclosure 4. Filler panels to close up reserved space on the front of the rack. Additional storage may be provided by an HP StorageWorks SB40c storage blade installed in the server bay next to the control node server blade.
Note: The HP BladeSystem c3000 enclosure is not documented in the HP BladeSystem Site Planning Guide. However, the site preparation procedures for CP Workgroup System are similar to the site preparation procedures as currently described in the HP BladeSystem Site Planning Guide and the HP Cluster Platform Site Preparation Guide. See Appendix E for more information about the power configuration for CP Workgroup System. 2.
2.5 Post-Installation and Diagnostics CP Workgroup System uses the Insight Management interface to troubleshoot failures. For more information, see the HP BladeSystem c3000 Enclosure and c3000 Tower Enclosure Maintenance and Service Guide: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01126895/c01126895.pdf For more information on post-installation and diagnostics, see the HP BladeSystem c-Class Enclosure Troubleshooting Guide: http://h20000.www2.hp.
3 CPE BladeSystem with c7000 Enclosure Configurations

Cluster Platform Express (CPE) BladeSystem with c7000 enclosure configurations are single-cabinet clusters containing a variety of components and c7000 enclosures to meet the following specifications:
• A single rack (22U or 42U, depending on the configuration)
• An optional rack-mount (1U or 2U) control node (some restrictions apply)
• An optional InfiniBand interconnect
• A shared administration/Gigabit Ethernet network
• Configurations o
Note: At this time, HP does not plan to support joining Cluster Platform Express for BladeSystem configurations to adjacent cabinets due to potential service issues with the 0U (side-mounted) PDUs.

The following sections show CPE c7000 enclosure configuration examples in 42U and 22U cabinets:
• Three enclosures in an HP 10000 Series 42U rack (Section 3.1.1)
• Two enclosures in an HP 10000 Series 42U rack (Section 3.1.2)
• One enclosure in an HP 10000 Series 42U rack (Section 3.1.
Figure 3-1 Server Blade Configuration Example with Three Enclosures in a 42U Rack 1 2 3 4 5 3 6 3 8 7 Figure 3-1 shows the following cluster components: 1. HP 10000 Series 42U rack. 2. Filler panels to close up any unused space on the front of the rack. 3. An HP c7000 enclosure with 16 half-height blades (BL460c or BL465c servers). There are three c7000 enclosures in this configuration with a total of 48 half-height server blades. 4. A control node (this is optional in some configurations).
3.1.2 Server Blade Configuration Example with Two Enclosures in a 42U Rack Figure 3-2 shows a typical CPE BladeSystem with c7000 server blade solution with two enclosures in a 42U rack. Some of the components shown in this figure might be optional, or they might be installed in a different location or orientation with respect to the front and rear of the rack.
3.1.3 Server Blade Configuration Example with One Enclosure in a 42U Rack Figure 3-3 shows a typical CPE BladeSystem with c7000 enclosure solution with one enclosure in a 42U rack. Some of the components shown in this figure might be optional, or they might be installed in a different location or orientation with respect to the front and rear of the rack. Figure 3-3 Server Blade Configuration with One Enclosure in a 42U Rack 1 2 3 4 2 5 6 7 Figure 3-3 shows the following cluster components: 1.
Note: The configurations with one c7000 enclosure with an InfiniBand interconnect might not have an external control node. If your configuration does not have an external control node, one of the server blades in the enclosure is designated as the control node. 3.1.4 Server Blade Configuration with Two Enclosures in a 22U Rack Figure 3-4 shows a typical CPE BladeSystem with c7000 enclosure solution with two c7000 enclosures in a 22U rack.
Figure 3-5 Server Blade Configuration with One Enclosure in a 22U Rack 1 2 3 4 5 Figure 3-5 shows the following cluster components: 1. HP 10000 Series rack. 2. One TFT7600 rackmount keyboard and monitor. This unit combines a full 17-inch WXGA+ monitor and keyboard with touch pad in a 1U format. 3. HP ProLiant DL14x Gx control node. 4. Filler panels to close up any unused space on the front of the rack. 5. HP BladeSystem c7000 enclosure with 16 half-height server blades (BL460c or BL465c server blades).
Note: The instructions to remove a rack from a shock pallet are also printed on the side of the cardboard packaging. 3.4 Installation and Start-up To prepare for powering on the cluster, see the component documentation for your model of DLxxx-series server, HP ProCurve switch, and optional cluster components. Ensure that: • All component power switches are set to the off position. • You know the pattern of signal LEDs indicating that a component has powered up successfully.
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00702815/c00702815.pdf 3.5 Post-Installation and Diagnostics CPE BladeSystem with c7000 enclosure configurations use the Insight Management interface to troubleshoot failures. See the HP BladeSystem c7000 Enclosure Maintenance and Service Guide: http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00714237/c00714237.
Figure 3-6 Server Blade Access Through a Local KVM 4 1 3 5 6 2 The following components are shown in Figure 3-6: 1. Monitor 2. Keyboard 3. Mouse 4. Local I/O cable connected to the control node server blade 5. Half-height server blade 6. c-Class enclosure You will also need a local I/O cable to connect to the server blade, as shown in Figure 3-7.
Figure 3-7 Local I/O Cable 1 2 3 4 The I/O cable connections shown in Figure 3-7 are: 1. Server blade connector 2. Serial port 3. USB ports (2) 4. Video port 3.
4 CPE with Rack-Mountable Servers

HP Cluster Platform Express rack-mountable configurations are built around a system interconnect that might be an HP ProCurve Gigabit Ethernet or a 24-port InfiniBand interconnect. Chapter 4 discusses the following topics:
• CPE with rack-mountable servers — 42U rack configuration (see Section 4.1)
• CPE with rack-mountable servers — 22U rack configuration (see Section 4.2)
• CPE with rack-mountable servers site preparation overview (see Section 4.
Figure 4-1 shows the following cluster components: 1. Cable management harness. 2. DLxx control node. 3. A 24-port HP ProCurve Ethernet switch, providing the cluster's administrative and console management networks. In Gigabit Ethernet clusters, such switches also provide the high-speed message-passing interface (MPI) network. 4. Keyboard, video, and mouse (KVM) unit. 5. Several ProLiant DL14x-series servers, functioning as the cluster's compute nodes. 6.
Figure 4-2 Typical 22U CPE with Rack-Mountable Servers Configuration (the figure shows an HP ProLiant DL380 G4 control node, a KVM unit, and DL140 G2 compute nodes; front-panel label residue from the figure is omitted)
• PDUs and power strips
• Keyboard, video, mouse (KVM) and console switch

4.3 Site Preparation

Site preparation tasks are generally performed before delivery of your HP CPE system. For information to prepare your site for a CPE with rack-mountable servers, see the HP Cluster Platform Site Preparation Guide: http://www.docs.hp.com/en/A-CPSPG-1D/A-CPSPG-1D.pdf

4.4 Unpacking and Removing From a Shock Pallet

A preconfigured CPE cluster can weigh several hundred pounds (or kilos).
4.8 Operating System Information CPE systems come with the operating system software factory installed. Software options include Red Hat and SUSE Linux distributions, as well as HP's message passing library, HP-MPI. For Windows environments, clusters are available with Microsoft Windows Server (Microsoft Windows Compute Cluster Server was used in older HP CPE Microsoft Windows based systems).
A Port Labeling Syntax This appendix provides the port labeling syntax for all of the cabling tables provided in this document. Table A-1 provides the wiring table syntax for CPE configurations.
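The label forms used throughout the cabling tables (for example, E1-IMB1-P21, Admin-ES1-P3, IB-SW1-P1, Control-iLO) follow a regular pattern, so they can be parsed mechanically. The sketch below is a hypothetical helper, not part of the HP documentation; the pattern names and field names are assumptions inferred from the tables in Appendixes B and D.

```python
import re

# Label patterns inferred from the cabling tables (illustrative assumptions).
LABEL_PATTERNS = {
    # E<enclosure>-IMB<interconnect module bay>-P<port>, e.g. E1-IMB1-P21
    "enclosure_port": re.compile(r"^E(?P<enc>\d+)-IMB(?P<bay>\d+)-P(?P<port>\d+)$"),
    # Admin-ES<switch>-P<port>, e.g. Admin-ES1-P3 (ProCurve administrative switch)
    "admin_switch": re.compile(r"^Admin-ES(?P<sw>\d+)-P(?P<port>\d+)$"),
    # IB-SW<switch>-P<port>, e.g. IB-SW1-P24 (external InfiniBand switch)
    "ib_switch": re.compile(r"^IB-SW(?P<sw>\d+)-P(?P<port>\d+)$"),
    # E<enclosure>-OA<module>-iLO, e.g. E3-OA1-iLO (Onboard Administrator)
    "onboard_admin": re.compile(r"^E(?P<enc>\d+)-OA(?P<oa>\d+)-iLO$"),
    # Control-iLO, Control-NIC1, Control-P1 (control node connections)
    "control_node": re.compile(r"^Control-(?P<iface>iLO|NIC\d+|P\d+)$"),
}

def parse_label(label: str):
    """Return (kind, fields) for a wiring-table label, or raise ValueError."""
    for kind, pattern in LABEL_PATTERNS.items():
        m = pattern.match(label)
        if m:
            return kind, m.groupdict()
    raise ValueError(f"unrecognized label: {label!r}")

print(parse_label("E1-IMB1-P21"))  # ('enclosure_port', {'enc': '1', 'bay': '1', 'port': '21'})
```

A helper like this can validate hand-edited wiring tables before a cluster is recabled, catching typos such as a missing port number.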
B CP Workgroup System Configuration Diagrams and Cabling Tables

This appendix describes the following topics:
• Example configuration diagrams (see Section B.1)
• CP Workgroup System cabling tables (see Section B.2)

B.1 Example Configuration Diagrams

This section includes the following CP Workgroup System configuration examples:
• CP Workgroup System – basic configuration (see Section B.1.1)
• CP Workgroup System – external control node (see Section B.1.
7. DVD drive
8. Compute node 6
9. Compute node 5
10. Compute node 4
11. Administrative network and system interconnect (GigE only)
12. iLO access
13. Onboard Administrator
14. LAN or direct connect
15. External File Sharing (CIFS, NFS)
16. Clients (such as HP PCs or workstations)
17. GbE2c switch module
18. Connection from control node NIC1 to the HP GbE2c switch
19. Connection from control node NIC2 to the HP GbE2c switch

B.1.
Note: Compute nodes cannot be attached to the SB40c storage blade. Figure B-3 CP Workgroup System with an SB40c Storage Blade 15 16 14 17 6 1 19 7 2 3 11 4 8 9 5 10 12 18 13 The following list describes the callouts in Figure B-3: 1. c3000 enclosure 2. Compute node 2 3. Compute node 1 4. SB40c storage blade 5. Control node 6. Compute node 6 7. DVD drive installed in the external control node 8. Compute node 5 9. Compute node 4 10. Compute node 3 11.
B.1.4 CP Workgroup System with an Optional InfiniBand Interconnect Figure B-4 shows a CP Workgroup System with an optional InfiniBand interconnect. Figure B-4 CP Workgroup System with an Optional InfiniBand Interconnect 15 16 14 17 6 1 19 7 2 3 11 4 8 5 9 10 20 12 18 13 The following list describes the callouts in Figure B-4: 1. c3000 enclosure 2. Compute node 3 3. Compute node 2 4. Compute node 1 5. Control node 6. Compute node 7 7. DVD drive 8. Compute node 6 9. Compute node 5 10.
Note: It is possible to provide InfiniBand connections without using an external InfiniBand interconnect. However, at the time of this release, these configurations are not supported in a Microsoft Windows environment. Currently, if you plan to use a Microsoft Windows environment, the configuration requires an external InfiniBand interconnect for fabric management. Figure B-5 shows a c3000 configuration with an external control node that is not connected to the InfiniBand network.
Figure B-6 c3000 Configuration with External Control Node Connection to the InfiniBand Interconnect 2 5 3 2 4 6 1 7 The following list describes the callouts shown in Figure B-6: 1. Connection from the HP 4X DDR InfiniBand Switch to the Control Node InfiniBand HCA 2. HP GbE2c switch 3. Connection from the HP GbE2c to the control node's NIC1 port 4. Connection to HP GbE2c (IMB1) from the OA external link port 5. Optional connection to an external network (for example, a campus network) 6.
Note: Section B.2.1 also provides the administrative network wiring for a single c7000 enclosure with standard-density server blades.

B.2.1 HP BladeSystem c3000 and c7000 Network Cabling Tables

Table B-1 CP Workgroup System with Either Single-Density or Double-Density Server Blades and Single c7000 Enclosure with Standard-Density Server Blades

Origin | Label | To | Label | Comment
Enclosure 1, IMB1, Port 21 | E1-IMB1-P21 | External Control Node iLO | Control-iLO | Only present with external control node.
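A wiring table such as Table B-1 can also be represented as data and sanity-checked before cabling. The sketch below is a hypothetical approach (the tuple layout and function name are assumptions); the sample row is taken from Table B-1.

```python
# Each entry mirrors one wiring-table row:
# (origin, origin label, destination, destination label, comment)
WIRING = [
    ("Enclosure 1, IMB1, Port 21", "E1-IMB1-P21",
     "External Control Node iLO", "Control-iLO",
     "Only present with external control node."),
]

def check_unique_origins(wiring):
    """Ensure no origin port label is cabled twice (a common transcription error)."""
    seen = set()
    for _origin, origin_label, *_rest in wiring:
        if origin_label in seen:
            raise ValueError(f"duplicate origin port: {origin_label}")
        seen.add(origin_label)

check_unique_origins(WIRING)
print("no duplicate origin ports")
```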
From | Label | To | Label | Comment
Enclosure 1 Interconnect Module 3/4 Port 7 | E1-IMB3-P7 | N/A | N/A |
Enclosure 1 Interconnect Module 3/4 Port 8 | E1-IMB3-P8 | Control Node InfiniBand HCA | Control-P1 |

Optional External InfiniBand Switch Wiring

From | Label | To | Label
IB 24 Port Switch Port 1 | IB-SW1-P1 | Enclosure 1 Interconnect Module 3/4 Port 1 | E1-IMB3-P1
IB 24 Port Switch Port 2 | IB-SW1-P2 | N/A | N/A
IB 24 Port Switch Port 3 | IB-SW1-P3 | N/A | N/A
IB 24 Port Switch Port 4 | IB-SW1-P4 | N/A | N/A
IB 24 Port Switch P
C Configuring a VLAN for the Administrative Network for c3000 BladeSystem Configurations

If a c3000-based configuration is connected to an external network, both the Administrative/Console network and the external network are connected to the same switch. Although IP addressing and subnetting may be used to separate these networks, the default settings of the HP GbE2c Ethernet switch will transmit broadcast traffic between the Administrative/Console network and the external network.
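The separation requirement above can be expressed as a simple invariant: no VLAN may carry both administrative/console ports and external-network ports. The sketch below is a hypothetical planning check; the port numbers and VLAN IDs are illustrative assumptions, not GbE2c defaults or HP-recommended values.

```python
def check_vlan_separation(vlan_members, admin_ports, external_ports):
    """Verify that no VLAN mixes administrative and external ports.

    vlan_members maps VLAN ID -> set of switch port numbers.
    """
    for vlan_id, ports in vlan_members.items():
        if ports & admin_ports and ports & external_ports:
            raise ValueError(
                f"VLAN {vlan_id} mixes administrative and external traffic")

# Illustrative port plan for one switch (all numbers are assumptions):
admin_ports = set(range(1, 21))   # blade NIC downlinks and admin uplinks
external_ports = {21, 22}         # uplinks to the campus network
check_vlan_separation({10: admin_ports, 20: external_ports},
                      admin_ports, external_ports)
print("VLAN separation OK")
```

Running a check like this against a planned VLAN membership table catches the broadcast-leak scenario described above before the switch is configured.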
D Cabling Tables for CPE BladeSystem with c7000 Enclosure Configurations

This appendix provides cabling examples and tables for CPE BladeSystem with c7000 Enclosure configurations:

Network | Description | Section Number
N/A | Cabling examples for CPE BladeSystem with one or more c7000 enclosures | Section D.1
 | Wiring tables for one c7000 enclosure with single-density server blades1 | Section B.2.1
 | Wiring tables for one c7000 enclosure with double-density server blades | Section D.
Network | Description | Section Number
 | Wiring table for one c7000 enclosure with single-density server blades and the HP BLc 4x DDR InfiniBand Gen2 Switch | Section D.13
 | Wiring table for two c7000 enclosures with single-density server blades and the HP BLc 4X DDR IB Gen2 Switch and no external InfiniBand switch | Section D.14
InfiniBand with Gen2 Switch Module | Wiring table for two or three c7000 enclosures with single-density server blades and using the HP BLc 4X DDR InfiniBand Gen2 switch | Section D.
Figure D-1 Cabling Configuration for One c-Class Enclosure

The following list describes the callouts in Figure D-1:
1. Control node (optional). If the configuration has a control node that is connected to the InfiniBand fabric, then the HCA in the control node is also connected to the enclosure's InfiniBand switch. (This example uses a DL380 G5.
Figure D-2 Cabling Configuration for Two or More c-Class Enclosures

The following list describes the callouts in Figure D-2:
1. Control node (optional).
D.2 Administrative/Console Wiring Tables for One c7000 Enclosure with Double-Density Server Blades The following table provides the Administrative and Console wiring scheme for one c7000 enclosure with double-density server blades. Connections for the external control node and external InfiniBand switch are omitted if those components are not part of the configuration. Note: For the port labeling syntax for all of the cabling tables in this appendix, see Table A-1.
Table D-3 Administrative/Console Wiring Tables for Configurations with Two or Three c7000 Enclosures with Single-Density Server Blades (continued)

Origin | Label | Destination | Label | Comment
ProCurve 2824 Port 3 | Admin-ES1-P3 | Enclosure 1, IMB1, Port 23 | E1-IMB1-P23 | Only present if interconnect is GigE
ProCurve 2824 Port 4 | Admin-ES1-P4 | Enclosure 1, IMB1, Port 24 | E1-IMB1-P24 | Only present if interconnect is GigE
ProCurve 2824 Port 5 | Not Connected | | |
ProCurve 2824 Port 6 | Admin-ES1-P6 | Enclosure 2, IMB1,
D.4 Administrative/Console Wiring Tables for Two c7000 Enclosures with Double-Density Server Blades The following table provides the Administrative/Console wiring scheme for two c7000 enclosures with double-density server blades. Connections for the external control node and external InfiniBand switches are omitted if those components are not specified in the configuration.
Table D-4 Administrative/Console Wiring Tables for Configurations with Two c7000 Enclosures with Double-Density Server Blades (continued)

Origin | Label | Destination | Label | Comment
ProCurve 2824 Port 20 | Admin-ES1-P20 | Enclosure 2 OA1 OA/iLO | E3-OA1-iLO |
ProCurve 2824 Port 21 | Admin-ES1-P21 | Enclosure 1 OA1 OA/iLO | E3-OA1-iLO |
ProCurve 2824 Port 22 | Admin-ES1-P22 | Optional External Control Node NIC1 | Control-NIC1 | Only present if the control node is not a server blade
ProCurve 2824 Port 23 | Admin-ES1-P23
From | Label | To | Label
ProCurve 2848 Port 11 | Not Connected | |
ProCurve 2848 Port 12 | Not Connected | |
ProCurve 2848 Port 13 | Admin-ES1-P13 | Enclosure 3 Interconnect Module 1 Port 21 | E3-IMB1-P21
ProCurve 2848 Port 14 | Admin-ES1-P14 | Enclosure 3 Interconnect Module 1 Port 22 | E3-IMB1-P22
ProCurve 2848 Port 15 | Admin-ES1-P15 | Enclosure 3 Interconnect Module 1 Port 23 | E3-IMB1-P23
ProCurve 2848 Port 16 | Admin-ES1-P16 | Enclosure 3 Interconnect Module 1 Port 24 | E3-IMB1-P24
ProCurve 2848 Port 17 | Not Connected | |
Pr
From | Label | To | Label
ProCurve 2848 Port 30 | Admin-ES1-P30 | Enclosure 2 Interconnect Module 2 Port 24 | E2-IMB2-P24
ProCurve 2848 Port 31 | Not Connected | |
ProCurve 2848 Port 32 | Not Connected | |
ProCurve 2848 Port 33 | Not Connected | |
ProCurve 2848 Port 34 | Admin-ES1-P34 | Enclosure 3 Interconnect Module 2 Port 21 | E3-IMB2-P21
ProCurve 2848 Port 35 | Admin-ES1-P35 | Enclosure 3 Interconnect Module 2 Port 22 | E3-IMB2-P22
ProCurve 2848 Port 36 | Admin-ES1-P36 | Enclosure 3 Interconnect Module 2 Port 23 | E3-IMB2-P23
Pr
the interconnect. Connections for the external control node and external InfiniBand switches are omitted if those components are not specified in the configuration.
From | Label | To | Label | Comment
ProCurve 2824 Port 21 | Admin-ES1-P21 | Enclosure 1 OA1 OA/iLO | E1-OA1-iLO |
ProCurve 2824 Port 22 | Admin-ES1-P22 | Optional External Control Node NIC1 | Control-NIC1 | Only present if control node is not a server blade
ProCurve 2824 Port 23 | Admin-ES1-P23 | InfiniBand Switch 1 Management Port | IB-SW1-NIC | Only used when an external managed InfiniBand switch is present
ProCurve 2824 Port 24 | Admin-ES1-P24 | Optional external control node iLO | Control-iLO | Optional connection
D.
From | Label | To | Label | Comment
Enclosure 1 Interconnect Module 5/6 Port 7 | E1-IMB5-P7 | N/A | N/A |
Enclosure 1 Interconnect Module 5/6 Port 8 | E1-IMB5-P8 | Control node InfiniBand HCA | Control-P1 |

External InfiniBand Switch Wiring

From | Label | To | Label | Comment
IB 24 Port Switch Port 1 | IB-SW1-P1 | Enclosure 1 Interconnect Module 5/6 Port 1 | E1-IMB5-P1 |
IB 24 Port Switch Port 2 | IB-SW1-P2 | N/A | N/A |
IB 24 Port Switch Port 3 | IB-SW1-P3 | N/A | N/A |
IB 24 Port Switch Port 4 | IB-SW1-P4 | N/A | N/A |
IB 24 Port Switch Po
From | Label | To | Label | Comment
IB 24 Port Switch Port 18 | IB-SW1-P18 | N/A | N/A |
IB 24 Port Switch Port 19 | IB-SW1-P19 | N/A | N/A |
IB 24 Port Switch Port 20 | IB-SW1-P20 | N/A | N/A |
IB 24 Port Switch Port 21 | IB-SW1-P21 | N/A | N/A |
IB 24 Port Switch Port 22 | IB-SW1-P22 | N/A | N/A |
IB 24 Port Switch Port 23 | IB-SW1-P23 | N/A | N/A |
IB 24 Port Switch Port 24 | IB-SW1-P24 | N/A | N/A |
D.
From | Label | To | Label | Comments
Enclosure 1 Interconnect Module 5/6 Port 8 | E1-IMB5-P8 | Enclosure 2 Interconnect Module 5/6 Port 8 | E2-IMB5-P8 | Used if there is no rack-mount control node connected to the InfiniBand fabric
Enclosure 1 Interconnect Module 5/6 Port 8 | E1-IMB5-P8 | Control node InfiniBand HCA | Control-P1 | Only used if the control node is a rack-mountable server connected to the InfiniBand switch
D.
From | Label | To | Label
IB 24 Port Switch Port 14 | IB-SW1-P14 | Enclosure 2 Interconnect Module 5/6 Port 6 | E2-IMB5-P6
IB 24 Port Switch Port 15 | IB-SW1-P15 | Enclosure 2 Interconnect Module 5/6 Port 7 | E2-IMB5-P7
IB 24 Port Switch Port 16 | IB-SW1-P16 | Enclosure 2 Interconnect Module 5/6 Port 8 | E2-IMB5-P8
IB 24 Port Switch Port 17 | IB-SW1-P17 | Enclosure 3 Interconnect Module 5/6 Port 1 | E3-IMB5-P1
IB 24 Port Switch Port 18 | IB-SW1-P18 | Enclosure 3 Interconnect Module 5/6 Port 2 | E3-IMB5-P2
IB 24 Port Sw
From | Label | To | Label | Comment
Enclosure 1 Interconnect Module 5/6 Port 5 | E1-IMB5-P5 | Enclosure 1 Interconnect Module 7/8 Port 5 | E1-IMB7-P5 |
Enclosure 1 Interconnect Module 5/6 Port 6 | E1-IMB5-P6 | Enclosure 1 Interconnect Module 7/8 Port 6 | E1-IMB7-P6 |
Enclosure 1 Interconnect Module 5/6 Port 7 | E1-IMB5-P7 | Enclosure 1 Interconnect Module 7/8 Port 7 | E1-IMB7-P7 |
Enclosure 1 Interconnect Module 5/6 Port 8 | E1-IMB5-P8 | Enclosure 1 Interconnect Module 7/8 Port 8 | E1-IMB7-P8 | Not used if the control node is a rack-mount se
From | Label | To | Label
IB 24 Port Switch Port 13 | IB-SW1-P13 | Enclosure 1 Interconnect Module 7/8 Port 5 | E1-IMB7-P5
IB 24 Port Switch Port 14 | IB-SW1-P14 | Enclosure 1 Interconnect Module 7/8 Port 6 | E1-IMB7-P6
IB 24 Port Switch Port 15 | IB-SW1-P15 | Enclosure 1 Interconnect Module 7/8 Port 7 | E1-IMB7-P7
IB 24 Port Switch Port 16 | IB-SW1-P16 | Enclosure 1 Interconnect Module 7/8 Port 8 | E1-IMB7-P8
IB 24 Port Switch Port 17 | IB-SW1-P17 | N/A | N/A
IB 24 Port Switch Port 18 | IB-SW1-P18 | N/A | N/A
IB 24 Port Switch Por
From | Label | To | Label
IB Switch 1 24 Port Switch Port 4 | IB-SW1-P4 | Enclosure 1 Interconnect Module 5/6 Port 4 | E1-IMB5-P4
IB Switch 1 24 Port Switch Port 5 | IB-SW1-P5 | Enclosure 1 Interconnect Module 7/8 Port 1 | E1-IMB7-P1
IB Switch 1 24 Port Switch Port 6 | IB-SW1-P6 | Enclosure 1 Interconnect Module 7/8 Port 2 | E1-IMB7-P2
IB Switch 1 24 Port Switch Port 7 | IB-SW1-P7 | Enclosure 1 Interconnect Module 7/8 Port 3 | E1-IMB7-P3
IB Switch 1 24 Port Switch Port 8 | IB-SW1-P8 | Enclosure 1 Interconnect Module 7/8 Port 4 | E1-IMB7-P4
Wiring for Second External InfiniBand Switch

From | Label | To | Label | Comment
IB Switch 2 24 Port Switch Port 1 | IB-SW2-P1 | Enclosure 1 Interconnect Module 5/6 Port 5 | E1-IMB5-P5 |
IB Switch 2 24 Port Switch Port 2 | IB-SW2-P2 | Enclosure 1 Interconnect Module 5/6 Port 6 | E1-IMB5-P6 |
IB Switch 2 24 Port Switch Port 3 | IB-SW2-P3 | Enclosure 1 Interconnect Module 5/6 Port 7 | E1-IMB5-P7 |
IB Switch 2 24 Port Switch Port 4 | IB-SW2-P4 | Enclosure 1 Interconnect Module 5/6 Port 8 | E1-IMB5-P8 | Used if there is no rack-mount control node connected to the InfiniBand fabric
From | Label | To | Label | Comment
IB Switch 2 24 Port Switch Port 22 | IB-SW2-P22 | Enclosure 3 Interconnect Module 7/8 Port 6 | E3-IMB7-P6 |
IB Switch 2 24 Port Switch Port 23 | IB-SW2-P23 | Enclosure 3 Interconnect Module 7/8 Port 7 | E3-IMB7-P7 |
IB Switch 2 24 Port Switch Port 24 | IB-SW2-P24 | Enclosure 3 Interconnect Module 7/8 Port 8 | E3-IMB7-P8 |
D.
From | Label | To | Label | Comment
Enclosure 1 Interconnect Module 5/6 Port 12 | E1-IMB5-P12 | N/A | N/A |
Enclosure 1 Interconnect Module 5/6 Port 13 | E1-IMB5-P13 | N/A | N/A |
Enclosure 1 Interconnect Module 5/6 Port 14 | E1-IMB5-P14 | N/A | N/A |
Enclosure 1 Interconnect Module 5/6 Port 15 | E1-IMB5-P15 | N/A | N/A |
Enclosure 1 Interconnect Module 5/6 Port 16 | E1-IMB5-P16 | N/A | N/A |

Wiring for One c7000 Enclosure with Single-Density Server Blades, the HP BLc 4X DDR InfiniBand Gen2 Switch, and an External InfiniBand Switch

From | Label | To | Label
IB 24 Port Switch Port 1 | IB-SW1-P
From | Label | To | Label
IB 24 Port Switch Port 14 | IB-SW1-P14 | N/A | N/A
IB 24 Port Switch Port 15 | IB-SW1-P15 | N/A | N/A
IB 24 Port Switch Port 16 | IB-SW1-P16 | N/A | N/A
IB 24 Port Switch Port 17 | IB-SW1-P17 | N/A | N/A
IB 24 Port Switch Port 18 | IB-SW1-P18 | N/A | N/A
IB 24 Port Switch Port 19 | IB-SW1-P19 | N/A | N/A
IB 24 Port Switch Port 20 | IB-SW1-P20 | N/A | N/A
IB 24 Port Switch Port 21 | IB-SW1-P21 | N/A | N/A
IB 24 Port Switch Port 22 | IB-SW1-P22 | N/A | N/A
IB 24 Port Switch Port 23 | IB-SW1-P23 | N/A | N/A
From | Label | To | Label | Comments
Enclosure 1 Interconnect Module 5/6 Port 5 | E1-IMB5-P5 | Enclosure 2 Interconnect Module 5/6 Port 5 | E2-IMB5-P5 |
Enclosure 1 Interconnect Module 5/6 Port 6 | E1-IMB5-P6 | Enclosure 2 Interconnect Module 5/6 Port 6 | E2-IMB5-P6 |
Enclosure 1 Interconnect Module 5/6 Port 7 | E1-IMB5-P7 | Enclosure 2 Interconnect Module 5/6 Port 7 | E2-IMB5-P7 |
Enclosure 1 Interconnect Module 5/6 Port 8 | E1-IMB5-P8 |
Enclosure 1 Interconnect Module 5/6 Port 9 | E1-IMB5-P9 |
Enclosure 2 Interconnect Module
Note: For one c7000 enclosure wiring tables, see Section D.13.
From | Label | To | Label | Comment
IB Switch 1 24 Port Switch Port 21 | IB-SW1-P21 | Enclosure 3 Interconnect Module 5/6 Port 5 | E3-IMB5-P5 |
IB Switch 1 24 Port Switch Port 22 | IB-SW1-P22 | Enclosure 3 Interconnect Module 5/6 Port 6 | E3-IMB5-P6 |
IB Switch 1 24 Port Switch Port 23 | IB-SW1-P23 | Enclosure 3 Interconnect Module 5/6 Port 7 | E3-IMB5-P7 |
IB Switch 1 24 Port Switch Port 24 | IB-SW1-P24 | Enclosure 3 Interconnect Module 5/6 Port 8 | E3-IMB5-P8 |

Wiring Table for the Second InfiniBand Switch (IB-SW2)

From | Label
From | Label | To | Label
IB Switch 2 24 Port Switch Port 16 | IB-SW2-P16 | Enclosure 2 Interconnect Module 5/6 Port 16 | E2-IMB5-P16
IB Switch 2 24 Port Switch Port 17 | IB-SW2-P17 | Enclosure 3 Interconnect Module 5/6 Port 9 | E3-IMB5-P9
IB Switch 2 24 Port Switch Port 18 | IB-SW2-P18 | Enclosure 3 Interconnect Module 5/6 Port 10 | E3-IMB5-P10
IB Switch 2 24 Port Switch Port 19 | IB-SW2-P19 | Enclosure 3 Interconnect Module 5/6 Port 11 | E3-IMB5-P11
IB Switch 2 24 Port Switch Port 20 | IB-SW2-P20 | Enclosure 3 Interconne
From | Label | To | Label
Enclosure 1 Interconnect Module 5/6 Port 6 | E1-IMB5-P6 | Enclosure 1 Interconnect Module 7/8 Port 6 | E1-IMB7-P6
Enclosure 1 Interconnect Module 5/6 Port 7 | E1-IMB5-P7 | Enclosure 1 Interconnect Module 7/8 Port 7 | E1-IMB7-P7
Enclosure 1 Interconnect Module 5/6 Port 8 | E1-IMB5-P8 | Enclosure 1 Interconnect Module 7/8 Port 8 | E1-IMB7-P8
Enclosure 1 Interconnect Module 5/6 Port 9 | E1-IMB5-P9 | Enclosure 1 Interconnect Module 7/8 Port 9 | E1-IMB7-P9
Enclosure 1 Interconnect Module 5/6 Port
Wiring Table for the First InfiniBand Switch (IB-SW1)

From | Label | To | Label | Comment
IB 24 Port Switch 1 Port 1 | IB-SW1-P1 | Enclosure 1 Interconnect Module 5/6 Port 1 | E1-IMB5-P1 |
IB 24 Port Switch 1 Port 2 | IB-SW1-P2 | Enclosure 1 Interconnect Module 5/6 Port 2 | E1-IMB5-P2 |
IB 24 Port Switch 1 Port 3 | IB-SW1-P3 | Enclosure 1 Interconnect Module 5/6 Port 3 | E1-IMB5-P3 |
IB 24 Port Switch 1 Port 4 | IB-SW1-P4 | Enclosure 1 Interconnect Module 5/6 Port 4 | E1-IMB5-P4 |
IB 24 Port Switch 1 Port 5 | IB-SW1-P5 | Enclos
From | Label | To | Label | Comment
IB 24 Port Switch 1 Port 23 | IB-SW1-P23 | N/A | N/A |
IB 24 Port Switch 1 Port 24 | IB-SW1-P24 | Control node InfiniBand HCA | Control-P1 | Only used if the control node is a rack-mount server connected to the InfiniBand fabric

Wiring Table for the Second InfiniBand Switch (IB-SW2)

From | Label | To | Label
IB 24 Port Switch 2 Port 1 | IB-SW2-P1 | Enclosure 1 Interconnect Module 5/6 Port 9 | E1-IMB5-P9
IB 24 Port Switch 2 Port 2 | IB-SW2-P2 | Enclosure 1 Interconnect Module 5/6 Por
From | Label | To | Label | Comment
IB 24 Port Switch 2 Port 20 | IB-SW2-P20 | N/A | N/A |
IB 24 Port Switch 2 Port 21 | IB-SW2-P21 | N/A | N/A |
IB 24 Port Switch 2 Port 22 | IB-SW2-P22 | N/A | N/A |
IB 24 Port Switch 2 Port 23 | IB-SW2-P23 | N/A | N/A |
IB 24 Port Switch 2 Port 24 | IB-SW2-P24 | Control node InfiniBand HCA | Control-P1 | Only used if the control node is a rack-mount server connected to the InfiniBand fabric
D.
From | Label | To | Label
IB Switch 1 24 Port Switch Port 11 | IB-SW1-P11 | Enclosure 2 Interconnect Module 5/6 Port 3 | E2-IMB5-P3
IB Switch 1 24 Port Switch Port 12 | IB-SW1-P12 | Enclosure 2 Interconnect Module 5/6 Port 4 | E2-IMB5-P4
IB Switch 1 24 Port Switch Port 13 | IB-SW1-P13 | Enclosure 2 Interconnect Module 7/8 Port 1 | E2-IMB7-P1
IB Switch 1 24 Port Switch Port 14 | IB-SW1-P14 | Enclosure 2 Interconnect Module 7/8 Port 2 | E2-IMB7-P2
IB Switch 1 24 Port Switch Port 15 | IB-SW1-P15 | Enclosure 2 Interconnect
From | Label | To | Label
IB Switch 2 24 Port Switch Port 8 | IB-SW2-P8 | Enclosure 1 Interconnect Module 7/8 Port 8 | E1-IMB7-P8
IB Switch 2 24 Port Switch Port 9 | IB-SW2-P9 | Enclosure 2 Interconnect Module 5/6 Port 5 | E2-IMB5-P5
IB Switch 2 24 Port Switch Port 10 | IB-SW2-P10 | Enclosure 2 Interconnect Module 5/6 Port 6 | E2-IMB5-P6
IB Switch 2 24 Port Switch Port 11 | IB-SW2-P11 | Enclosure 2 Interconnect Module 5/6 Port 7 | E2-IMB5-P7
IB Switch 2 24 Port Switch Port 12 | IB-SW2-P12 | Enclosure 2 Interconnect Mod
From | Label | To | Label
IB Switch 3 24 Port Switch Port 5 | IB-SW3-P5 | Enclosure 1 Interconnect Module 7/8 Port 9 | E1-IMB7-P9
IB Switch 3 24 Port Switch Port 6 | IB-SW3-P6 | Enclosure 1 Interconnect Module 7/8 Port 10 | E1-IMB7-P10
IB Switch 3 24 Port Switch Port 7 | IB-SW3-P7 | Enclosure 1 Interconnect Module 7/8 Port 11 | E1-IMB7-P11
IB Switch 3 24 Port Switch Port 8 | IB-SW3-P8 | Enclosure 1 Interconnect Module 7/8 Port 12 | E1-IMB7-P12
IB Switch 3 24 Port Switch Port 9 | IB-SW3-P9 | Enclosure 2 Interconnect Mo
Wiring Table for the Fourth InfiniBand Switch (IB-SW4)

From | Label | To | Label | Comment
IB Switch 4 24 Port Switch Port 1 | IB-SW4-P1 | Enclosure 1 Interconnect Module 5/6 Port 13 | E1-IMB5-P13 |
IB Switch 4 24 Port Switch Port 2 | IB-SW4-P2 | Enclosure 1 Interconnect Module 5/6 Port 14 | E1-IMB5-P14 |
IB Switch 4 24 Port Switch Port 3 | IB-SW4-P3 | Enclosure 1 Interconnect Module 5/6 Port 15 | E1-IMB5-P15 |
IB Switch 4 24 Port Switch Port 4 | IB-SW4-P4 | Enclosure 1 Interconnect Module 5/6 Port 16 | E1-IMB5-P16 | Connect if there is no
From | Label | To | Label
IB Switch 4 24 Port Switch Port 22 | IB-SW4-P22 | Enclosure 3 Interconnect Module 7/8 Port 14 | E3-IMB7-P14
IB Switch 4 24 Port Switch Port 23 | IB-SW4-P23 | Enclosure 3 Interconnect Module 7/8 Port 15 | E3-IMB7-P15
IB Switch 4 24 Port Switch Port 24 | IB-SW4-P24 | Enclosure 3 Interconnect Module 7/8 Port 16 | E3-IMB7-P16

Cabling Tables for CPE BladeSystem with c7000 Enclosure Configurations
E Configuration Rules for CPE BladeSystems

This appendix provides the configuration and power distribution rules for CPE BladeSystem configurations. The following sections discuss:
• CP Workgroup System device and interconnect bays (see Section E.1)
• CP Workgroup System fan bays (see Section E.2)
• CP Workgroup System power and configuration rules (see Section E.3)
• CPE with BladeSystem c7000 enclosure device and interconnect module bay rules (see Section E.
A second, additional power strip is required for configurations that have an externally managed InfiniBand switch.
The following table summarizes the use of the interconnect module bays for c7000 enclosures in HP Cluster Platform Express BladeSystem configurations with double-density server blades.
Figure E-1 c7000 PSUs

HP Cluster Platform Express supports high-voltage power connections (200V-240V). Power requirements can be met by using up to eight single-phase PDUs per enclosure, as shown in the table below. PDUs are mounted in 0U positions.
F Cabling Tables for Rack-Mountable Servers

This appendix provides the following cabling tables:
• Ethernet and Gigabit Ethernet network cabling tables for rack-mountable servers (see Section F.1).
• InfiniBand cabling tables for rack-mountable servers (see Section F.2).
F.
From | Label | To | Label | Comment
ProCurve 2824 Port 13 | Admin-ES1-P13 | Enclosure 3 Interconnect Module 1 Port 21 | E3-IMB1-P21 |
ProCurve 2824 Port 14 | Admin-ES1-P14 | Enclosure 3 Interconnect Module 1 Port 22 | E3-IMB1-P22 | Only present if interconnect is GigE
ProCurve 2824 Port 15 | Admin-ES1-P15 | Enclosure 3 Interconnect Module 1 Port 23 | E3-IMB1-P23 | Only present if interconnect is GigE
ProCurve 2824 Port 16 | Admin-ES1-P16 | Enclosure 3 Interconnect Module 1 Port 24 | E3-IMB1-P24 | Only present if interconnect is GigE
From | Label | To | Label | Cable
ProCurve 2824 Port 15 | Admin-ES1-P15 | Embedded NIC1 (eth0) in Node 07 | Server07-NIC1 | C7533A
ProCurve 2824 Port 16 | Admin-ES1-P16 | Embedded NIC1 (eth0) in Node 06 | Server06-NIC1 | C7533A
ProCurve 2824 Port 17 | Admin-ES1-P17 | Embedded NIC1 (eth0) in Node 05 | Server05-NIC1 | C7533A
ProCurve 2824 Port 18 | Admin-ES1-P18 | Embedded NIC1 (eth0) in Node 04 | Server04-NIC1 | C7533A
ProCurve 2824 Port 19 | Admin-ES1-P19 | Embedded NIC1 (eth0) in Node 03 | Server03-NIC1 | C7533A
ProCurve 28
From | Label | To | Label | Cable
ProCurve 2848 Port 21 | Admin-ES1-P21 | Embedded NIC1 (eth0) in Node 17 | Server17-NIC1 | C7533A
ProCurve 2848 Port 22 | Admin-ES1-P22 | Embedded NIC1 (eth0) in Node 16 | Server16-NIC1 | C7533A
ProCurve 2848 Port 23 | Not connected | | |
ProCurve 2848 Port 24 | Not connected | | |
ProCurve 2848 Port 25 | Admin-ES1-P25 | Embedded NIC1 (eth0) in Node 15 | Server15-NIC1 | C7533A
ProCurve 2848 Port 26 | Admin-ES1-P26 | Embedded NIC1 (eth0) in Node 14 | Server14-NIC1 | C7533A
ProCurve 2848 Port 27 A
F.1.
F.1.
From | Label | To | Label | Cable
Server06-MP | C7533A
ProCurve 2650 Port 37 | Consol-ES1-P37 | Embedded MP in Node 05 | Server05-MP | C7533A
ProCurve 2650 Port 38 | Consol-ES1-P38 | Embedded MP in Node 04 | Server04-MP | C7533A
ProCurve 2650 Port 39 | Consol-ES1-P39 | Embedded MP in Node 03 | Server03-MP | C7533A
ProCurve 2650 Port 40 | Consol-ES1-P40 | Embedded MP in Node 02 | Server02-MP | C7533A
ProCurve 2650 Port 41 | Consol-ES1-P41 | Embedded MP in Node 01 | Server01-MP | C7533A
ProCurve 2650 Port 42 | Consol-ES1-P42 | Embedded MP i
If more than one interconnect is present, the interconnects are joined in a federated configuration. A typical cabling example of the federated cabling for two interconnects is shown in the following table:

Downlink Ports | Uplink Ports
9 – 12 | 21 – 24

In the cable labels and cabling tables, both the origin and the destination for these federated link cables is an ISR 9024 port, such as IB-SW1-P24 to IB-SW2-P24.
F.2.
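The uplink-to-uplink pattern described above can be sketched in a few lines. This is an illustrative helper, not part of the guide; the function name is an assumption, and the port range (21-24) comes from the uplink column of the table above:

```python
# Federated links pair the same-numbered uplink port on each ISR 9024,
# e.g. IB-SW1-P24 to IB-SW2-P24. Ports 21-24 are uplinks per the table.
UPLINK_PORTS = range(21, 25)

def federated_links(sw_a, sw_b):
    """Return (origin, destination) label pairs for the federated
    link cables between two switches."""
    return [(f"IB-SW{sw_a}-P{p}", f"IB-SW{sw_b}-P{p}") for p in UPLINK_PORTS]

for origin, dest in federated_links(1, 2):
    print(origin, "->", dest)
```

For two interconnects this yields the four cables IB-SW1-P21/IB-SW2-P21 through IB-SW1-P24/IB-SW2-P24, matching the label pairs that appear in the cabling tables.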
F.2.
Origin | Primary Destination | Secondary Destination
IB-SW2-P10 | IB-SW1-P10 |
IB-SW2-P11 | IB-SW1-P11 |
IB-SW2-P12 | IB-SW1-P12 |
IB-SW2-P13 | Server7-IB-P0 | Server8-IB-P0
IB-SW2-P14 | Server6-IB-P0 | Server7-IB-P0
IB-SW2-P15 | Server5-IB-P0 | Server6-IB-P0
IB-SW2-P16 | Server4-IB-P0 | Server5-IB-P0
IB-SW2-P17 | Server3-IB-P0 | Server4-IB-P0
IB-SW2-P18 | Server2-IB-P0 | Server3-IB-P0
IB-SW2-P19 | Server1-IB-P0 | Server2-IB-P0
IB-SW2-P20 | Control-IB-P0 | Server1-IB-P0
IB-SW2-P21 | IB-SW1-P20 |
IB-SW2-P22 | IB-SW1-P21
Index

Symbols
22U cabinet configuration
  BladeSystem, 43
  rack-mountable servers, 56
22U configuration example
  BladeSystem with c3000 enclosure, 38
  BladeSystem with one enclosure, 48
  BladeSystem with two enclosures, 48
22U configuration example – without external control node
  BladeSystem with c3000 enclosure, 38
42U cabinet configuration
  BladeSystem, 43
  rack-mountable, 55
  rack-mountable components, 55
42U configuration example
  BladeSystem with one enclosure, 47
  BladeSystem with three enclosures, 44
  BladeS

C
configurations, 9
console, 25
control node, 56, 57
cooling, 56, 57
Ethernet switches, 56, 57
example configuration, 56
KVM, 56, 57
rack-mountable components, 55
testing, 58
Cluster Platform Express
  BladeSystem configurations, 43
  nodes, 13
  overview, 11
  supported components, 36
comments, reader, 10
components, 55
  documentation, 34
  supported in Cluster Platform Express, 36
compute node, 56, 57
configuration
  BladeSystem, 43
  example, 56
  rule, 111
  Workgroup System, 37
configuring VLAN, 73
connection
KVM console switch, 56

L
LAN, 26 (see also local area network)
LED
  link state, 24
  power, 50, 58
legal notices and trademarks, 2
link, 25 (see also cable)
  status LED, 24
local I/O cable, 52
log-in Web site, 10

M
management port
  port, 25
message passing interface, 56, 57
MPI, 56, 57 (see also message passing interface)

N
network cabling tables, 115
node
  compute, 56, 57
  control, 57
nodes
  Cluster Platform Express, 13

O
operating system
  Microsoft Windows Compute Cluster,
operating system, 41, 51, 59
software management
  accessing server with KVM, 51
  BladeSystem requirements, 51
Start-up (see installation)
straps, 23
switch
  console, 13
  ethernet, 56, 57
  KVM, 56, 57
syntax
  wiring, 61

T
tables
  Ethernet cabling, 76, 116, 119
  InfiniBand cabling, 122
testing, 58
TFT7600, 25
thermal, 56, 57
topology, 121
typographic conventions, 10

U
Unified Cluster Portfolio, 10
Unpacking CPE systems, 40
Unpacking CPE BladeSystem with c7000 CPE systems, 49, 58
uplink, 122
using cabling tables, 12
*A-CPCPE-1E* Printed in the US