QLogic HCA and InfiniPath® Software Install Guide, Version 2.2
Information furnished in this manual is believed to be accurate and reliable. However, QLogic Corporation assumes no responsibility for its use, nor for any infringements of patents or other rights of third parties which may result from its use. QLogic Corporation reserves the right to change product specifications at any time without notice. Applications described in this document for any of these products are for illustrative purposes only.
Table of Contents

1 Introduction
    Who Should Read this Guide
    How this Guide is Organized
    Overview
    Interoperability
    Conventions Used in this Guide

4 Hardware Installation
    Hardware Installation for QLE7240, QLE7280, or QLE7140 with PCI Express Riser  4-9
        Dual Adapter Installation  4-9
        Installation Steps  4-9
    Hardware Installation for QHT7140 with HTX Riser

5 Software Installation
    Starting and Stopping the InfiniPath Software
    Rebuilding or Reinstalling Drivers After a Kernel Upgrade
    Rebuilding or Reinstalling Drivers if a Different Kernel is Installed
    Further Information on Configuring and Loading Drivers
    LED Link and Data Indicators

B Configuration Files

C RPM Descriptions
    InfiniPath and OpenFabrics RPMs
    Different Nodes May Use Different RPMs
    RPM Version Numbers and Identifiers
    RPM Organization

List of Figures

Figure                                                                        Page
4-1  QLogic QLE7280 with IBA7220 ASIC                                          4-7
4-2  QLogic QLE7140 Card with Riser, Top View                                  4-7
4-3  QLogic QHT7040/QHT7140 Full and Low Profile Cards with Riser, Top View    4-8
4-4  PCIe Slot in a Typical Motherboard
1 Introduction This chapter describes the contents, intended audience, and organization of the QLogic HCA and InfiniPath Software Install Guide. The QLogic HCA and InfiniPath Software Install Guide contains instructions for installing the QLogic Host Channel Adapters (HCAs) and the InfiniPath and OpenFabrics software.
Overview

Section 3, Step-by-Step Installation Checklist, provides a high-level overview of the hardware and software installation procedures. Section 4, Hardware Installation, includes instructions for installing the QLogic QLE7140, QLE7240, QLE7280, QHT7040, and QHT7140 HCAs. Section 5, Software Installation, includes instructions for installing the QLogic InfiniPath and OpenFabrics software.
Interoperability

InfiniPath OpenFabrics software is interoperable with other vendors' InfiniBand Host Channel Adapters (HCAs) running compatible OpenFabrics releases. There are several options for subnet management in your cluster:

- Use the embedded Subnet Manager (SM) in one or more managed switches supplied by your InfiniBand switch vendor.
- Use the open source Subnet Manager (OpenSM) component of OpenFabrics.
- Use a host-based Subnet Manager.
Table 1-1. Typographical Conventions (Continued)

Convention    Meaning
user input    Bold fixed-space font is used for literal items in commands or constructs that you type.
$             Indicates a command line prompt.
#             Indicates a command line prompt as root when using bash or sh.
[ ]           Brackets enclose optional elements of a command or program construct.
...           Ellipses indicate that a preceding element can be repeated.
Contact Information

Support Headquarters         QLogic Corporation
                             4601 Dean Lakes Blvd
                             Shakopee, MN 55379 USA
QLogic Web Site              www.qlogic.com
Technical Support Web Site   support.qlogic.com
Technical Support Email      support@qlogic.com
Technical Training Email     tech.training@qlogic.com

North American Region
Email                        support@qlogic.com
Phone                        +1-952-932-4040
Fax                          +1-952-974-4910

All other regions of the world
QLogic Web Site              www.qlogic.com

IB0056101-00 G
2 Feature Overview

This section describes the features of this release, the supported QLogic adapter models, the supported distributions and kernels, and the software components.

What's New in this Release

This release adds support for the QLE7240 and QLE7280 InfiniBand DDR Host Channel Adapters (HCAs), which offer twice the link bandwidth of SDR HCAs. The extra bandwidth improves performance for both latency-sensitive and bandwidth-intensive applications.
Table 2-1. QLogic Adapter Model Numbers (Continued)

QLogic Model Number: QLE7280
Description: Single port 20Gbps DDR 4X InfiniBand to PCI Express x16 adapter. Supported on systems with PCI Express x16 slots. The QLE7280 is backward compatible; it can also be used in PCI Express x8 slots.

Table Notes:
PCIe is Gen 1.
a. The QHT7140 has a smaller form factor than the QHT7040, but is otherwise the same.
A subset of the QLogic InfiniBand Fabric Suite, the enablement tools, is offered with this release. Two separate SCSI RDMA Protocol (SRP) modules are provided: the standard OpenFabrics (OFED) SRP and the QLogic SRP. QLogic MPI supports running exclusively on a single node without the HCA hardware installed.
Table 2-2. InfiniPath/OpenFabrics Supported Distributions and Kernels (Continued)

Distribution                                                   Supported Kernels
Red Hat Enterprise Linux 5.0 (RHEL 5.0), RHEL 5.1              2.6.18-8, 2.6.18-53 (x86_64)
CentOS 5.0, 5.1 (Rocks 5.0, 5.1)                               2.6.18, 2.6.18-53 (x86_64)
Scientific Linux 5.0, 5.1                                      2.6.18, 2.6.18-53 (x86_64)
SUSE® Linux Enterprise Server (SLES 10 GM, SP 1)               2.6.16.21, 2.6.16.
Included components are:

- InfiniPath driver
- InfiniPath Ethernet emulation (ipath_ether)
- InfiniPath libraries
- InfiniPath utilities, configuration, and support tools, including ipath_checkout, ipath_control, ipath_pkt_test, and ipathstats
- QLogic MPI
- QLogic MPI benchmarks and utilities
- OpenMPI and MVAPICH libraries built with the GNU, PGI, PathScale, and Intel compilers, with corresponding mpitests RPMs and mpi-selector
- OpenFabrics prot
More details about the hardware and software can be found in Section 4 and Section 5.
3 Step-by-Step Installation Checklist

This section provides an overview of the hardware and software installation procedures. Detailed steps are found in Section 4, Hardware Installation, and Section 5, Software Installation.

Hardware Installation

The following steps summarize the basic hardware installation procedure:

1. Check that the adapter hardware is appropriate for your platform. See Table 4-1.
2.
Software Installation

The following steps summarize the basic InfiniPath and OpenFabrics software installation and startup. These steps must be performed on each node in the cluster:

1. Make sure that the HCA hardware installation has been completed according to the instructions in "Hardware Installation" on page 4-1.
2. Verify that the Linux kernel software is installed on each node in the cluster.
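Step 2 can be scripted. The sketch below compares the running kernel against the version families listed in the supported-kernel tables; the pattern list is illustrative, not exhaustive, and is an assumption rather than part of the checklist:

```shell
# Compare the running kernel against the supported version families
# (patterns adapted from Table 5-1; adjust for your distribution).
running=$(uname -r)
case "$running" in
  2.6.9-*|2.6.16*|2.6.18*|2.6.22*)
    msg="kernel $running is in a supported family" ;;
  *)
    msg="kernel $running is not listed; check Table 5-1 before installing" ;;
esac
echo "$msg"
```

A node whose kernel falls outside these families needs a supported kernel installed before proceeding.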
10. Optimize your adapter for the best performance. See "Adapter Settings" on page 5-30. Also see the Performance and Management Tips section in the QLogic HCA and InfiniPath Software User Guide.
11. Perform the recommended health checks. See "Customer Acceptance Utility" on page 5-31.
12.
4 Hardware Installation This section lists the requirements and provides instructions for installing the QLogic InfiniPath Interconnect adapters. Instructions are included for the QLogic DDR PCI Express adapters, the QLE7240 and QLE7280; the QLogic InfiniPath PCIe adapter and PCIe riser card, QLE7140; and the QHT7040 or QHT7140 adapter hardware and HTX riser card. These components are collectively referred to as the adapter and the riser card in the remainder of this document.
Table 4-1. Adapter Models and Related Platforms (Continued)

QLogic Model Number   Platform                           Plugs Into
QLE7140               PCI Express systems                Standard PCI Express x8 or x16 slot
QHT7040               Motherboards with HTX connectors   HyperTransport HTX slot
QHT7140               Motherboards with HTX connectors   HyperTransport HTX slot

Installation of the QLE7240, QLE7280, QLE7140, QHT7040, or QHT7140 in a 1U or 2U chassis requires the use of a riser card.
Cabling and Switches

The cable installation uses a standard InfiniBand (IB) 4X cable. Any InfiniBand cable that has been qualified by the vendor should work. For SDR, the longest passive copper IB cable that QLogic has currently qualified is 20 meters. For DDR-capable adapters and switches, the DDR-capable passive copper cables cannot be longer than 10 meters. Active cables can eliminate some of the cable length restrictions.
Optical Fibre Option

The QLogic adapter also supports connection to the switch by means of optical fibres through optical media converters such as the EMCORE™ QT2400. Not all switches support these types of converters. For more information on the EMCORE converter, see www.emcore.com. Intel® and Zarlink™ also offer optical cable solutions. See www.intel.com and www.zarlink.com for more information.
Safety with Electricity

Observe these guidelines and safety precautions when working around computer hardware and electrical equipment:

- Locate the power source shutoff for the computer room or lab where you are working. This is where you will turn OFF the power in the event of an emergency or accident.
- Never assume that power has been disconnected for a circuit; always check first.
- Do not wear loose clothing.
The package contents for the QLE7280 adapter are:

- QLogic QLE7280
- Additional short bracket
- Quick Start Guide

Standard PCIe risers can be used, typically supplied by your system or motherboard vendor.

The package contents for the QLE7140 adapter are:

- QLogic QLE7140
- Quick Start Guide

Standard PCIe risers can be used, typically supplied by your system or motherboard vendor. The contents are illustrated in Figure 4-2.
Figure 4-1. QLogic QLE7280 with IBA7220 ASIC (showing the PCI Express edge connectors, InfiniBand connector, face plate, and IBA7220 ASIC)

Figure 4-2. QLogic QLE7140 Card with Riser, Top View (showing the PCI Express edge connectors, InfiniBand connector, face plate, and IBA6120 ASIC; the PCI Express riser card is not supplied, and is shown for reference)
Figure 4-3. QLogic QHT7040/QHT7140 Full and Low Profile Cards with Riser, Top View (showing the HTX riser card, HTX edge connectors, InfiniBand connectors, IBA6110 ASIC, and the face plates of the QHT7140 low profile card and the QHT7040 full height short card)

Unpacking the QLogic Adapter

Follow these steps when unpacking the QLogic adapter:

1.
Hardware Installation

This section contains hardware installation instructions for the QLE7240, QLE7280, QLE7140, QHT7040, and QHT7140.

Hardware Installation for QLE7240, QLE7280, or QLE7140 with PCI Express Riser

Installation for the QLE7240, QLE7280, and QLE7140 is similar. The following instructions are for the QLE7140, but can be used for any of these three adapters. Most installations will be in 1U and 2U chassis, using a PCIe right angle riser card.
4. Remove the cover screws and cover plate to expose the system's motherboard. For specific instructions on how to do this, follow the hardware documentation that came with your system.
5. Locate the PCIe slot on your motherboard. Note that the PCIe slot has two separate sections, with the smaller slot opening located towards the front (see Figure 4-4). These two sections correspond to the shorter and longer connector edges of the adapter and riser.
9. Connect the QLogic adapter and PCIe riser card together, forming the assembly that you will insert into your motherboard. First, visually line up the adapter slot connector edge with the edge connector of the PCIe riser card (see Figure 4-5).

Figure 4-5. QLogic PCIe HCA Assembly with Riser Card (showing the PCIe riser card, QLogic adapter, face plate, LEDs, and InfiniBand connector)

10.
13. Insert the riser assembly into the motherboard's PCIe slot, ensuring good contact. The QLogic adapter should now be parallel to the motherboard and about one inch above it (see Figure 4-6).

Figure 4-6. Assembled PCIe HCA with Riser

14. Secure the face plate to the chassis. The QLogic adapter has a screw hole on the side of the face plate that can be attached to the chassis with a retention screw.
To install the QLogic adapter with an HTX riser card:

1. The BIOS should already be configured properly by the motherboard manufacturer. However, if any additional BIOS configuration is required, it will usually need to be done before installing the QLogic adapter. See "Configuring the BIOS" on page 4-4.
2. Shut down the power supply to the system into which you will install the QLogic adapter.
3.
7. Remove the QLogic QHT7140 from the anti-static bag.

NOTE: Be careful not to touch any of the components on the printed circuit board during these steps. You can hold the adapter by its face plate or edges.

8. Locate the face plate on the connector edge of the card.
9. Connect the QLogic adapter and HTX riser card together, forming the assembly that you will insert into your motherboard.
13. Insert the HT riser assembly into the motherboard's HTX slot, ensuring good contact. The QLogic adapter should now be parallel to the motherboard and about one inch above it, as shown in Figure 4-9.

Figure 4-9. Assembled QHT7140 with Riser

14. Secure the face plate to the chassis. The QLogic adapter has a screw hole on the side of the face plate that can be attached to the chassis with a retention screw.
Hardware Installation for the QHT7140 Without an HTX Riser

Installing the QLogic QHT7140 without an HTX riser card requires a 3U or larger chassis. The card slot connectors on the QHT7140 fit into the HTX slot in a vertical installation.

To install the QLogic adapter without the HTX riser card:

1. The BIOS should already be configured properly by the motherboard manufacturer.
8. Insert the card by pressing firmly and evenly on the top of the horizontal bracket and the top rear corner of the card simultaneously. The card should insert evenly into the slot. Be careful not to push, grab, or put pressure on any other part of the card, and avoid touching any of the components. See Figure 4-10.

Figure 4-10. QHT7140 Without Riser Installed in a 3U Chassis

9. Secure the face plate to the chassis.
Completing the Installation

The QLE7240, QLE7280, QLE7140, QHT7040, and QHT7140 adapters are all cabled the same way.

To install the InfiniBand cables:

1. Check that you have removed the protector plugs from the cable connector ends.
2. Different vendor cables might have different latch mechanisms. Determine if your cable has a spring-loaded latch mechanism. If your cable is spring-loaded, grasp the metal shell and pull on the plastic latch to release the cable.
5 Software Installation This section provides instructions for installing the InfiniPath and OpenFabrics software. The InfiniPath software includes drivers, protocol libraries, QLogic’s implementation of the MPI message passing standard, and example programs, including benchmarks. A complete list of the provided software is in “Software Components” on page 2-4.
Supported Linux Distributions

The currently supported distributions and associated Linux kernel versions for InfiniPath and OpenFabrics are listed in Table 5-1. The kernels are the ones that shipped with the distributions.

Table 5-1. InfiniPath/OpenFabrics Supported Distributions and Kernels

Distribution                                                    Supported Kernels
Fedora 6                                                        2.6.22 (x86_64)
Red Hat® Enterprise Linux® 4.4, 4.5, 4.6 (RHEL4.4, 4.5, 4.6)    2.6.9-42 (U4), 2.6.
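A quick way to see which distribution a node is running, so it can be checked against Table 5-1, is to read the conventional release files. This is a sketch, not part of the guide; the file names are the standard ones for RHEL/CentOS and SLES:

```shell
# Report the distribution by reading the conventional release files
# (/etc/redhat-release on RHEL/CentOS, /etc/SuSE-release on SLES).
if [ -f /etc/redhat-release ]; then
  distro=$(cat /etc/redhat-release)
elif [ -f /etc/SuSE-release ]; then
  distro=$(cat /etc/SuSE-release)
else
  distro="unknown distribution; compare against Table 5-1 manually"
fi
echo "Detected: $distro"
```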
Make sure that all previously existing (stock) OpenFabrics RPMs are uninstalled. See "Removing Software Packages" on page 5-33 for more information on uninstalling. If you are using RHEL5, make sure that opensm-* is manually uninstalled. See "Version Number Conflict with opensm-* on RHEL5 Systems" on page A-4 for more information.
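The check for leftover stock RPMs can be sketched as follows; the package-name patterns are an assumption (common OFED package name prefixes), not a list taken from this guide, so adjust them for your distribution:

```shell
# List previously installed (stock) OpenFabrics packages.
# The grep patterns are illustrative, not exhaustive.
stock=$(rpm -qa 2>/dev/null | grep -E '^(openib|libib|opensm)' || true)
if [ -n "$stock" ]; then
  msg="uninstall these stock RPMs first: $stock"
else
  msg="no stock OpenFabrics RPMs found"
fi
echo "$msg"
```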
Table 5-2. Available Packages (Continued)

Package: Enablement tools
Description: Subset of the QLogic InfiniBand Fabric Suite. Has associated Readme, Release Notes, and documentation.
Comments: Available as a separate download. A CD may be purchased separately. Follow the links on the QLogic download page. Documentation is included.
NOTE: The files can be downloaded to any directory. The install process will create and install the files in the correct directories. The locations of the directories after installation are listed in "Installed Layout" on page 5-9.

The RPMs are organized as follows:

InfiniPath_license.txt, LEGAL.
Installing the InfiniPath and OpenFabrics RPMs

Linux distributions of InfiniPath and OpenFabrics software are installed from binary RPMs. RPM is a Linux packaging and installation tool used by Red Hat, SUSE, and CentOS. There are multiple interdependent RPM packages that make up the InfiniPath and OpenFabrics software. The OpenFabrics kernel module support is now part of the InfiniPath RPMs.
Choosing the RPMs to Install

Although QLogic recommends that all RPMs are installed on all nodes, some are optional depending on which type of node is being used. To see which RPMs are required or optional for each type of node, according to its function as a compute node, front end node, development machine, or Subnet Manager (SM), see Appendix C, RPM Descriptions.
Using rpm to Install InfiniPath and OpenFabrics

The RPMs need to be available on each node on which they will be used. One way to do this is to copy the RPMs to a directory on each node that will need them. Another way is to put the RPMs in a directory that is accessible (e.g., via Network File System (NFS)) to every node.
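A hedged sketch of a cluster-wide install from an NFS-shared directory follows. The directory, hosts file, and node names are placeholders, and RSH defaults to echo so the loop is a dry run until you set RSH=ssh:

```shell
RSH=${RSH:-echo}                        # dry run by default; set RSH=ssh to run
RPMDIR=${RPMDIR:-/srv/infinipath-rpms}  # hypothetical NFS-shared RPM directory
HOSTS=${HOSTS:-$(mktemp)}               # file with one node name per line
[ -s "$HOSTS" ] || printf 'node1\nnode2\n' > "$HOSTS"  # placeholder node list

# Install (or upgrade) every RPM in the shared directory on each node.
while read -r node; do
  $RSH "$node" "rpm -Uvh $RPMDIR/*.rpm"
done < "$HOSTS"
```

With RSH=echo each loop iteration only prints the command it would run, which makes it easy to review before installing for real.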
NOTE: Parallel command starters can be used for installation on multiple nodes, but this subject is beyond the scope of this document.

Installed Layout

The default installed layout for the InfiniPath software is described in the following paragraphs.
Other OFED-installed modules may also be in this directory; these are also renamed if found during the install process.

Starting the InfiniPath Service

If this is the initial installation of InfiniPath (see "Lockable Memory Error on Initial Installation of InfiniPath" on page A-7), or if you have installed VNIC with the OpenFabrics RPM set, reboot after installing. If this is an upgrade, you can restart the InfiniPath service without rebooting.
InfiniPath and OpenFabrics Driver Overview

The InfiniPath ib_ipath module provides low-level QLogic hardware support, and is the base driver for both the InfiniPath and OpenFabrics software components. The ib_ipath module does hardware initialization, handles InfiniPath-specific memory management, and provides services to other InfiniPath and OpenFabrics modules.
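A quick way to confirm the base driver is actually loaded is to look for ib_ipath in the loaded-module list; the module name is from this section, but the check itself is a sketch:

```shell
# Check for the base ib_ipath module among the loaded kernel modules.
if lsmod 2>/dev/null | grep -q '^ib_ipath'; then
  status="ib_ipath loaded"
else
  status="ib_ipath not loaded; start the infinipath service and re-check"
fi
echo "$status"
```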
Configuring the ib_ipath Driver

The ib_ipath module provides both low-level InfiniPath support and management functions for OpenFabrics protocols. The ib_ipath driver has several configuration variables that set reserved buffers for the software, define events to create trace records, and set the debug level.
Servers typically have two Ethernet devices present, numbered as 0 (eth0) and 1 (eth1). This example creates a third device, eth2.

NOTE: When multiple QLogic HCAs are present, the configuration for eth3, eth4, and so on follows the same format as for adding eth2 in the following example.

1.
4. Check whether the Ethernet driver has been loaded with:

$ lsmod | grep ipath_ether

5. Verify that the driver is up with:

$ ifconfig -a

ipath_ether Configuration on SLES

The following procedure causes the ipath_ether network interfaces to be automatically configured the next time you reboot the system. These instructions are for the SLES 10 distribution.
The Globally Unique IDentifier (GUID) can also be returned by running:

# ipath_control -i
$Id: QLogic Release2.2 $ $Date: 2007-09-05-04:16 $
00: Version: ChipABI 2.0, InfiniPath_QHT7140, InfiniPath1 3.
If you are using Dynamic Host Configuration Protocol (DHCP), add these lines to the file:

STARTMODE=onboot
BOOTPROTO=dhcp
NAME='InfiniPath Network Card'
_nm_name=eth-id-$MAC

Proceed to Step 6. If you are using static IP addresses (not DHCP), add these lines to the file:

STARTMODE=onboot
BOOTPROTO=static
NAME='InfiniPath Network Card'
NETWORK=192.168.5.0
NETMASK=255.255.255.0
BROADCAST=192.168.5.255
IPADDR=192.
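The static-IP fragment above can be written out with a heredoc. In this sketch, CFGDIR stands in for the real SLES configuration directory (/etc/sysconfig/network), the file name is illustrative, and the MAC and IPADDR values are placeholders, not values from the guide:

```shell
CFGDIR=${CFGDIR:-$(mktemp -d)}   # /etc/sysconfig/network on a real SLES system
MAC=00:11:75:00:00:00            # placeholder; use your adapter's GUID-derived MAC
cat > "$CFGDIR/ifcfg-eth2" <<EOF
STARTMODE=onboot
BOOTPROTO=static
NAME='InfiniPath Network Card'
NETWORK=192.168.5.0
NETMASK=255.255.255.0
BROADCAST=192.168.5.255
IPADDR=192.0.2.10
_nm_name=eth-id-$MAC
EOF
echo "wrote $CFGDIR/ifcfg-eth2"
```

IPADDR here uses the 192.0.2.0/24 documentation range; substitute the node's real address.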
Configuring the IPoIB Network Interface

The following instructions show you how to manually configure your OpenFabrics IPoIB network interface. This example assumes that you are using sh or bash as your shell, all required InfiniPath and OpenFabrics RPMs are installed, and your startup scripts have been run (either manually or at system boot). For this example, the IPoIB network is 10.1.17.
NOTE: The configuration must be repeated each time the system is rebooted.

IPoIB-CM (Connected Mode) is enabled by default. If you want to change this to datagram mode, edit the file /etc/sysconfig/infinipath. Change this line to:

IPOIB_MODE="datagram"

Restart infinipath (as root) by typing:

# /etc/init.d/infinipath restart

The default IPOIB_MODE setting is "CM" for Connected Mode.
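The mode switch can be scripted. In this sketch, CFG stands in for /etc/sysconfig/infinipath; a scratch copy is created so the example is self-contained rather than editing the real file:

```shell
CFG=${CFG:-$(mktemp)}            # stands in for /etc/sysconfig/infinipath
echo 'IPOIB_MODE="CM"' > "$CFG"  # the default setting, per the guide

# Flip Connected Mode to datagram mode, then show the result.
sed -i 's/^IPOIB_MODE=.*/IPOIB_MODE="datagram"/' "$CFG"
grep '^IPOIB_MODE' "$CFG"
```

On a real node, point CFG at /etc/sysconfig/infinipath and follow with the infinipath restart shown above.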
service[ 1]: 1000066a00000101 InfiniNIC.InfiniConSys.Data:01
. . .

NOTE: A VIO hardware card can contain up to six IOCs (and therefore up to six IOCGUIDs); one for each Ethernet port on the VIO hardware card. Each VIO hardware card contains a unique set of IOCGUIDs (e.g., IOC 1 maps to Ethernet Port 1, IOC 2 maps to Ethernet Port 2, IOC 3 maps to Ethernet Port 3, etc.).

2.
a. Format 1: Defining an IOC using the IOCGUID.
Each CREATE block must specify a unique NAME. The NAME represents the Ethernet interface name that will be registered with the Linux operating system.

c. Format 3: Starting VNIC using DGID. Following is an example of a DGID and IOCGUID VNIC configuration.
5. Start the QLogic VNIC driver and the QLogic VNIC interfaces. Once you have created a configuration file, you can start the VNIC driver and create the VNIC interfaces specified in the configuration file by running the following command:

# /etc/init.d/qlgc_vnic start

You can stop the VNIC driver and bring down the VNIC interfaces by running the following command:

# /etc/init.
If you want to restart the QLogic VNIC interfaces, run the following command:

# /etc/init.d/qlgc_vnic restart

You can get information about the QLogic VNIC interfaces by using the following script:

# ib_qlgc_vnic_info

This information is collected from the /sys/class/infiniband_qlgc_vnic/interfaces/ directory, under which there is a separate directory corresponding to each VNIC interface.
For example:

# Use the UPDN algorithm instead of the Min Hop algorithm.
OPTIONS="-R updn"

SRP

SRP stands for SCSI RDMA Protocol. It was originally intended to allow the SCSI protocol to run over InfiniBand for Storage Area Network (SAN) usage. SRP interfaces directly to the Linux file system through the SRP Upper Layer Protocol. SRP storage can be treated as another device.
ID: Data Direct Networks SRP Target System
service entries: 1
service[ 0]: f60b04ff01000021 / SRP.T10:21000001ff040bf6

Note that not all the output is shown here; the key elements are expected to show the match in Step 3.

3.
NOTE: Use sde1 rather than sde. See the mount(8) man page for more information on creating mount points.

MPI over uDAPL

Some MPI implementations, such as Intel MPI and HP-MPI, can be run over uDAPL. uDAPL is the user mode version of the Direct Access Provider Library (DAPL). Examples of these types of MPI implementations are Intel MPI and one option on Open MPI.
To check the configuration state, use the command:

$ chkconfig --list infinipath

To enable the driver, use the command (as root):

# chkconfig infinipath on 2345

To disable the driver on the next system boot, use the command (as root):

# chkconfig infinipath off

NOTE: This command does not stop and unload the driver if the driver is already loaded.

You can start, stop, or restart (as root) the InfiniPath support with:

# /etc/init.
An equivalent way to restart infinipath is to use the same sequence as above, except use the restart command instead of start and stop:

# /etc/init.d/opensmd stop
# ifdown eth2
# /etc/init.d/infinipath restart
# ifup eth2
# /etc/init.d/opensmd start

NOTE: Stopping or restarting InfiniPath terminates any QLogic MPI processes, as well as any OpenFabrics processes that are running at the time.
Rebuilding or Reinstalling Drivers if a Different Kernel is Installed

Installation of the InfiniPath driver RPM (infinipath-kernel-2.2-xxx-yyy) builds kernel modules for the currently running kernel version. These InfiniPath modules will work only with that kernel. If a different kernel is booted, you must reboot and then re-install or rebuild the InfiniPath driver RPM. Here is an example.
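Whether a rebuild is needed can be checked by looking for modules built against the running kernel. The module path used here is the conventional updates directory under /lib/modules and is an assumption, not a location stated in this guide:

```shell
# Look for InfiniPath kernel modules matching the running kernel.
KVER=$(uname -r)
MODDIR=${MODDIR:-/lib/modules/$KVER/updates}   # assumed install location
if [ -d "$MODDIR" ] && find "$MODDIR" -name 'ib_ipath*' 2>/dev/null | grep -q .; then
  echo "InfiniPath modules present for kernel $KVER"
else
  echo "no InfiniPath modules for $KVER; rebuild or reinstall infinipath-kernel"
fi
```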
Table 5-5. LED Link and Data Indicators (Continued)

LED States: Green ON, Amber OFF
Indication: Signal detected and the physical link is up. Ready to talk to the SM to bring the link fully up. If this state persists, the SM may be missing or the link may not be configured. Use ipath_control -i to verify the software state. If all HCAs are in this state, then the SM is not running. Check the SM configuration, or install and run opensmd.
Use a PCIe MaxReadRequest size of at least 512 bytes with the QLE7240 and QLE7280. The QLE7240 and QLE7280 adapters can support sizes from 128 bytes to 4096 bytes in powers of two. This value is typically set by the BIOS.

Use the largest available PCIe MaxPayload size with the QLE7240 and QLE7280. The QLE7240 and QLE7280 adapters can support 128, 256, or 512 bytes.
6. Verifies the ability to mpirun jobs on the nodes.
7. Runs a bandwidth and latency test on every pair of nodes and analyzes the results.

The options available with ipath_checkout are shown in Table 5-6.

Table 5-6. ipath_checkout Options

Command       Meaning
-h, --help    These options display help messages describing how a command is used.
Removing Software Packages

This section provides instructions for uninstalling or downgrading the InfiniPath and OpenFabrics software.

Uninstalling InfiniPath and OpenFabrics RPMs

To uninstall the InfiniPath software packages on any node, type the following command (as root) using a bash shell:

# rpm -e $(rpm -qa 'InfiniPath-MPI/mpi*' 'InfiniPath/infinipath*')

This command uninstalls the InfiniPath and MPI software RPMs on that node.
Installing Lustre

This InfiniPath release supports Lustre. Lustre is a fast, scalable Linux cluster file system that interoperates with InfiniBand.
For example, install all RPMs that relate to QLogic MPI in /usr/mpi/qlogic. Leave all remaining InfiniPath libraries and tools in their default installation location (/usr). This approach leaves InfiniPath libraries (such as libpsm_infinipath.so and libinfinipath.so) in standard system directories so that other MPIs can easily find them in their expected location.
NOTE: Using the override may not result in a buildable or working driver if your distribution/kernel combination is not similar enough to a tested and supported distribution/kernel pair.

The following example installation is for a Red Hat Enterprise Linux 4 Update 4 compatible kernel, where the /etc/redhat-release file indicates another distribution. If you are a bash or sh user, type:

# export IPATH_DISTRO=2.6.
Rocks is a way to manage the kickstart automated installation method created by Red Hat. By using the Rocks conventions, the installation process can be automated for clusters of any size. A Roll is an extension to the Rocks base distribution that supports different cluster types or provides extra functionality.
Use the following contents: a skeleton XML node file. This file is only a template, and is intended as an example of how to customize your Rocks cluster and use the InfiniPath InfiniBand software and MPI.
mode="create" perms="a+rx">
#!/bin/sh
cd /home/install/contrib/4.2.1/x86_64/RPMS
rpm -Uvh --force infinipath*.rpm `ls mpi*rpm | grep -v openmpi`
# If, and only if, OpenSM is needed, enable OpenSM on only one node.
rpm -Uvh --force opensm-2.2-*_qlc.x86_64.rpm \
    libibcommon-2.2-*_qlc.x86_64.rpm \
    libibmad-2.2-*_qlc.x86_64.rpm \
    libibumad-2.2-*_qlc.x86_64.rpm \
    opensm-devel-2.2-*_qlc.x86_64.rpm \
    opensm-libs-2.2-*_qlc.x86_64.
NOTE: If you intend to use OpenFabrics and are using RHEL4 or RHEL5, make sure you install rhel4-ofed-fixup-2.2-4081.772.rhel4_psc.noarch.rpm, which is in the OpenFabrics directory. This RPM fixes two conflicts. See “OpenFabrics Library Dependencies” on page A-4 for more information.

6. The installation is completed using the extend-compute.xml file.
A Installation Troubleshooting

The following sections contain information about issues that may occur during installation. Some of this material is repeated in the Troubleshooting appendix of the QLogic HCA and InfiniPath Software User Guide. Many programs and files are available that gather information about the cluster and can be helpful for debugging. See Appendix D, Useful Programs and Files, in the QLogic HCA and InfiniPath Software User Guide.
BIOS Settings

MTRR Mapping and Write Combining

MTRR is used by the InfiniPath driver to enable write combining to the QLogic on-chip transmit buffers. Write combining improves write bandwidth to the QLogic chip by writing multiple words in a single bus transaction (typically 64 bytes). Write combining applies only to x86_64 systems.
Some BIOSes do not have the MTRR mapping option. The option may have a different name, depending on the chipset, vendor, BIOS, or other factors; for example, it is sometimes referred to as 32 bit memory hole. This setting must be enabled. If there is no setting for MTRR mapping or 32 bit memory hole, and you have problems with degraded performance, contact your system or motherboard vendor and ask how to enable write combining.
Version Number Conflict with opensm-* on RHEL5 Systems

The older opensm-* packages that come with the RHEL5 distribution have a version number (3) that is greater than the InfiniPath version number (2.2). This prevents the newer InfiniPath packages from being installed. You may see an error message similar to the following when trying to install:

Preparing packages for installation... package opensm-3.1.8-0.1.
Missing Kernel RPM Errors

Install the kernel-source, kernel-devel, and, if using an older release, kernel-smp-devel RPMs for your distribution before installing the InfiniPath RPMs, as there are dependencies. Use uname -a to find out which kernel is currently running, so that you install the versions of these RPMs that match it.
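The kernel check described above can be sketched as a small shell fragment. The kernel-devel package naming convention shown here is an assumption that may differ between distributions, and the rpm/yum lines are left commented out:

```shell
# Sketch: derive a kernel-devel package name matching the running kernel.
# The name format is an assumption; check your distribution's conventions.
running=$(uname -r)
pkg="kernel-devel-$running"
echo "$pkg"
# Then, for example (not run here):
#   rpm -q "$pkg" || yum install "$pkg"
```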
Resolving Conflicts

Occasionally, conflicts may arise when trying to install on top of an existing set of files that come from a different set of RPMs. For example, if you install the QLogic MPI RPMs after having previously installed Local Area Multicomputer (LAM)/MPI, there will be conflicts, since both installations have versions of some of the same programs and documentation.
In newer distributions, glibc is an RPM name. The 32-bit glibc is named similarly to:

glibc-2.3.4-2.i686.rpm OR glibc-2.3.4-2.i386.rpm

Check your distribution for the exact RPM name.

ifup on ipath_ether on SLES 10 Reports "unknown device"

SLES 10 does not have all of the QLogic (formerly PathScale) hardware listed in its pciutils database.
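One hedged way to check whether a 32-bit glibc is present is to look for an i386/i686 entry in rpm's query output. The sketch below substitutes canned sample output for the live query so the matching pattern itself can be seen; on a real system you would feed it the output of the commented rpm command:

```shell
# Sketch: detect a 32-bit glibc entry. "sample" stands in for the output of
# a query such as:
#   rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' glibc
sample='glibc-2.3.4-2.x86_64
glibc-2.3.4-2.i686'
if printf '%s\n' "$sample" | grep -qE '\.i[36]86$'; then
  echo "32-bit glibc present"
fi
```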
B Configuration Files

Table B-1 contains descriptions of the configuration and configuration template files used by the InfiniPath and OpenFabrics software.

Table B-1. Configuration Files

/etc/infiniband/qlogic_vnic.cfg: VirtualNIC configuration file.
/etc/modprobe.conf: Specifies options for modules when added or removed by the modprobe command; also used for creating aliases. For Red Hat systems.
/etc/modprobe.conf.
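For illustration, an alias entry of the kind modprobe.conf carries might look like the following. The device name and module pairing are a hypothetical example; the driver-configuration section of this guide gives the exact lines to use:

```shell
# Hypothetical /etc/modprobe.conf fragment (example only): an alias so that
# bringing up eth2 loads the ipath_ether module.
alias eth2 ipath_ether
```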
Table B-1. Configuration Files (Continued)

/etc/sysconfig/network-scripts/ifcfg-: Network configuration file for network interfaces. When used for ipath_ether, the name is in the form ethX, where X is the number of the device (typically 2, 3, etc.). When used for VNIC configuration, the name is in the form eiocX, where X is the number of the device.
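As an illustration of the ifcfg naming described above, a minimal static configuration for an ipath_ether interface might look like the following. Every value here (device name and addresses) is a hypothetical example, not a prescribed setting:

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth2 for ipath_ether;
# all values are examples only.
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.1
NETMASK=255.255.255.0
```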
C RPM Descriptions

The following sections contain detailed descriptions of the RPMs for the InfiniPath and OpenFabrics software.

InfiniPath and OpenFabrics RPMs

For ease of installation, QLogic recommends that all RPMs be installed on all nodes. However, some RPMs are optional, and since cluster nodes can be used for different functions, it is possible to install RPMs selectively. For example, you can install the opensm package for use on the node that will act as a subnet manager.
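The idea of selective installation per node role can be sketched as follows. The role names and the package-group mapping are illustrative assumptions, not part of the shipped installer; in practice you would pass each chosen group's RPMs to rpm -Uvh:

```shell
# Sketch: pick RPM groups by node role before installing each set.
# Roles and group names are hypothetical examples.
role=compute
case "$role" in
  compute)  sets="InfiniPath InfiniPath-MPI" ;;
  frontend) sets="InfiniPath-MPI Documentation" ;;
  sm)       sets="InfiniPath OpenFabrics" ;;
  *)        sets="" ;;
esac
echo "$sets"   # for role=compute, prints: InfiniPath InfiniPath-MPI
```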
Non-InfiniPath components may also have their own version number:

mvapich_gcc-2.2-33597.832.1_0_0.sles10_qlc.x86_64.rpm

Here, 1_0_0 is the 1.0.0 build for mvapich. In all of the tables in this appendix, the build identifier is xxx and the distribution identifier is yyy. Using this convention, the previous RPMs would be listed as:

infinipath-2.2-xxx_yyy.x86_64.rpm
mvapich_gcc-2.2-xxx.1_0_0.yyy.x86_64.
Documentation and InfiniPath RPMs

The InfiniPath/ RPMs are listed in Table C-2.

Table C-2. InfiniPath/RPMs (Front End / Compute / Development)

infinipath-2.2-xxx_yyy.x86_64.rpm (Optional / Required / Optional): Utilities and source code; InfiniPath configuration files; contains ipath_checkout and ipathbug-helper.
infinipath-kernel-2.2-xxx_yyy.x86_64.rpm (Optional / Required / Optional): InfiniPath drivers, OpenFabrics kernel modules.
infinipath-libs-2.
The InfiniPath-MPI/ RPMs are listed in Table C-4.

Table C-4. InfiniPath-MPI/RPMs (Front End / Compute / Development)

mpi-benchmark-2.2-xxx_yyy.x86_64.rpm (Optional / Required / Optional): MPI benchmark binaries.
mpi-frontend-2.2-xxx_yyy.i386.rpm (Required / Required / Optional): MPI job launch scripts and binaries, including mpirun and MPD.
mpi-libs-2.2-xxx_yyy.i386. (Optional / Required / Required)
Table C-6. OpenFabrics/RPMs (Continued)

dapl-utils-2.2-xxx.2_0_7.yyy.x86_64.rpm: uDAPL support.
ib-bonding-2.2-xxx.0_9_0.yyy.x86_64.rpm: Utilities to manage and control the driver operation.
ibsim-2.2-xxx.0_4.yyy.x86_64.rpm: Voltaire InfiniBand Fabric Simulator.
ibutils-2.2-xxx.1_2.yyy.x86_64.rpm: Provides InfiniBand (IB) network and path diagnostics.
ibvexdmtools-2.2-xxx.0_0_1.yyy.x86_64.
Table C-6. OpenFabrics/RPMs (Continued)

libibverbs-2.2-xxx.1_1_1.yyy.x86_64.rpm: Library that allows userspace processes to use InfiniBand verbs as described in the InfiniBand Architecture Specification. This library includes direct hardware access for fast path operations. For this library to be useful, a device-specific plug-in module must also be installed.
libibverbs-utils-2.2-xxx.1_1_1.yyy.x86_64.
Table C-6. OpenFabrics/RPMs (Continued)

qlgc_vnic_daemon-2.2-xxx.0_0_1.yyy.x86_64.rpm: Used with the VNIC ULP service.
qlvnictools-2.2-xxx.0_0_1.yyy.x86_64.rpm: Startup script, sample config file, and utilities.
qperf-2.2-xxx.0_4_0.yyy.x86_64.rpm: IB performance tests.
rds-tools-2.2-xxx.1_1.yyy.x86_64.rpm: Supports RDS.
rhel4-ofed-fixup-2.2-xxx.2_2.yyy.noarch.rpm: Fixes conflicts with older versions of OpenFabrics for RHEL4 and RHEL5.
sdpnetstat-2.2-xxx.1_60.yyy.x86_64.
Table C-7. OpenFabrics-Devel/RPMs (Continued)

libibcommon-devel-2.2-xxx.1_0_8.yyy.x86_64.rpm: Development files for the libibcommon library.
libibmad-devel-2.2-xxx.1_1_6.yyy.x86_64.rpm: Development files for the libibmad library.
libibumad-devel-2.2-xxx.1_1_7.yyy.x86_64.rpm: Development files for the libibumad library.
libibverbs-devel-2.2-xxx.1_1_1.yyy.x86_64.rpm: Libraries and header files for the libibverbs verbs library.
libipathverbs-devel-2.2-xxx.1_1.yyy.
Table C-9. Other HCAs/RPMs (Continued)

These RPMs are optional for OpenFabrics:

libmlx4-2.2-xxx.1_0.yyy.x86_64.rpm: Userspace driver for Mellanox® ConnectX™ InfiniBand HCAs.
libmthca-2.2-xxx.1_0_4.yyy.x86_64.rpm: Provides a device-specific userspace driver for Mellanox HCAs for use with the libibverbs library.
libnes-2.2-xxx.0_5.yyy.x86_64.
Table C-11. OtherMPIs/RPMs (Continued)

mpitests_mvapich_pathscale-2.2-xxx.3_0.yyy.x86_64.
Table C-11. OtherMPIs/RPMs (Continued) (Front End / Compute / Development)

openmpi_intel-2.2-xxx.1_2_5.yyy.x86_64.rpm (Optional / Optional / Optional): Open MPI compiled with Intel.
openmpi_pathscale-2.2-xxx.1_2_5.yyy.x86_64.rpm (Optional / Optional / Optional): Open MPI compiled with PathScale.
openmpi_pgi-2.2-xxx.1_2_5.yyy.x86_64.rpm (Optional / Optional / Optional): Open MPI compiled with PGI.
qlogic-mpi-register-2.2-xxx.1_0.yyy.
Index

A
ACPI 4-4, A-1
Adapter, see HCA

B
BIOS
  configuring 4-4
  settings A-1
  settings to fix MTRR issues A-2

C
-c 5-32
Cables supported 4-3
Compiler support 2-4
Configuration files B-1
  ib_ipath 5-12
  ipath_ether on Fedora, RHEL4, RHEL5 5-12
  ipath_ether on SLES 5-14
  OpenSM 5-23
  VNIC 5-18
--continue 5-32
CPUs, HTX motherboards may require two or more CPUs A-1

D
-d 5-32
--debug 5-32
Distribution override, setting 5-35
Distributions supported 2-3
Document conventions 1-3
Documentation for Infin
K
-k 5-32
--keep 5-32
Kernel supported 2-3
Kernel, missing kernel RPM errors A-5
Kernels supported 2-3

L
LEDs, blink patterns 5-29
Linux, supported distributions 5-2
Lockable memory error A-7
Lustre, installing 5-34

M
Model numbers for HCAs 2-1
MPI, other MPIs/RPMs C-9
MPI over uDAPL 5-26
mpirun, installation requires 32-bit support A-6
mpi-selector 5-34
MTRR 4-4
  editing BIOS settings to fix A-2
  mapping and write combining A-2
  using ipath_mtrr script to fix issues A-3
MTU, changing the size 5-26

N
Node
Corporate Headquarters: QLogic Corporation, 26650 Aliso Viejo Parkway, Aliso Viejo, CA 92656, 949.389.6000

Europe Headquarters: QLogic (UK) LTD., Quatro House, Lyon Way, Frimley, Camberley, Surrey, GU16 7ER UK, +44 (0) 1276 804 670

www.qlogic.com

© 2006–2008 QLogic Corporation. Specifications are subject to change without notice. All rights reserved worldwide. QLA, QLogic, SANsurfer, the QLogic logo, InfiniPath, SilverStorm, and EKOPath are trademarks or registered trademarks of QLogic Corporation.