EMC® Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Fibre Channel over Ethernet Converged Network Adapters (CNAs) for the Linux Environment P/N 300-002-803 REV A20 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.EMC.
Copyright © 2001–2011 EMC Corporation. All rights reserved. Published December, 2011 EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Contents Preface............................................................................................................................ 11 Chapter 1 Introduction Purpose of this document................................................................ 16 Host connectivity .............................................................................. 16 Fibre Channel ..............................................................................16 Fibre Channel over Ethernet ............................
Contents Manually setting the topology for QLogic Fibre Channel adapters.............................................................................................. 43 Manually setting the data rate for QLogic Fibre Channel adapters.............................................................................................. 44 Chapter 4 Installing and Configuring the Linux Host with the QLogic Driver Introduction .......................................................................................
Contents Chapter 6 Connecting to the Storage Zoning and connection planning in a Fibre Channel or Fibre Channel over Ethernet environment ............................................ 134 Planning procedure ..................................................................134 Establishing connectivity to the storage array......................134 Zoning and connection planning in an iSCSI environment...... 135 Configuring the QLA40xx-Series HBA to discover iSCSI targets.....................................
Contents SLES 11 OS SAN-boot installation with QLogic FCoE adapters...................................................................................... 167 Configuring a Symmetrix boot device for iSCSI 3.x .................. 168 Preparing the Symmetrix storage array ................................ 168 Preparing the host .................................................................... 168 Configuring the QLogic BIOS for SAN boot ........................
Contents Unloading and reloading the modular QLogic driver ........203 Device reconfiguration: Device numbering ................................ 206 HPQ server-specific note................................................................ 207 (VNX series or CLARiiON Only) disconnected ghost LUNs ... 208 Appendix A Setting Up External Boot for IBM Blade Server HS40 (8839) Configure HS40 BladeCenter server to boot from external array ...
Contents 8 EMC Host Connectivity with QLogic FC and iSCSI HBAs and FCoE CNAs for the Linux Environment
Tables
Title                                                                     Page
1  Installation steps ..............................................................................................24
2  Slot requirements of EMC-supported QLogic adapters ............................32
3  QLogic BIOS settings for Fibre Channel HBAs ..........................................40
4  Supported FC and FCoE in kernel driver versions ....................................49
5  Supported FC and FCoE out of kernel driver versions .............................59
6  QLogic v7.
Preface As part of an effort to improve and enhance the performance and capabilities of its product line, EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.
Preface Related documentation The following related documents are available on Powerlink: ◆ EMC Host Connectivity Guide for Linux ◆ EMC Linux iSCSI Attach Release Notes ◆ The EMC Networked Storage Topology Guide has been divided into several TechBooks and reference manuals. These are available through the E-Lab Interoperability Navigator, Topology Resource Center tab, at http://elabnavigator.EMC.com. Conventions used in this document
Preface Used in procedures for: • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus) • What user specifically selects, clicks, presses, or types Where to get help Italic: Used in all text (including procedures) for: • Full titles of publications referenced in text • Emphasis (for example a new term) • Variables Courier Used for: • System output, such as an error message or script • URLs, complete paths, filenames, prompts, and syntax when shown outside of
Preface Your comments Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to: techpub_comments@EMC.
1 Introduction This document describes the procedures for installing an EMC-approved QLogic host bus adapter (HBA) or converged network adapter (CNA) into a Linux host environment and configuring the host for connection to an EMC storage array over Fibre Channel, Fibre Channel over Ethernet (FCoE), or iSCSI. ◆ ◆ ◆ ◆ ◆ Purpose of this document ................................................................. Host connectivity ............................................................
Introduction Purpose of this document This document is meant to assist in the installation and configuration of QLogic Fibre Channel host bus adapters (HBAs), Fibre Channel over Ethernet (FCoE) converged network adapters (CNAs), and iSCSI HBAs in Linux environments.
Introduction ◆ In virtualization environments, where several physical storage and network links are commonly required. The installation of the QLogic FCoE CNA provides the host with an Intel-based 10 gigabit Ethernet interface (using the existing in-box drivers), and a QLogic Fibre Channel adapter interface, which requires the installation of the supported driver revision.
Introduction Boot device support Linux hosts using QLogic adapters have been qualified for booting from EMC storage array devices interfaced through Fibre Channel and iSCSI as specified in the EMC Support Matrix. The EMC Symmetrix® , EMC VNX™ series, or EMC CLARiiON® device that is to contain the Master Boot Record (MBR) for the host must have a lower logical unit number (LUN) than any other device visible to the host.
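The lowest-LUN requirement can be sanity-checked from an already running Linux host. The sketch below sorts lsscsi-style host:channel:target:LUN identifiers and reports the lowest LUN; the three bracketed identifiers are inlined sample data, not output from a real array (on a live host they would come from `lsscsi | awk '{print $1}'`).

```shell
# Hedged sketch: find the lowest LUN among sample H:C:T:L identifiers.
# The IDs below are illustrative; LUN is the fourth colon-separated field.
lowest=$(printf '%s\n' '[0:0:0:2]' '[0:0:0:0]' '[0:0:0:5]' |
    tr -d '[]' | sort -t: -k4,4n | head -n1)
echo "lowest LUN device: $lowest"
```

If the device intended to hold the MBR does not own the lowest LUN reported, revisit the array-side LUN assignments before attempting a SAN boot.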
Introduction Zoning This section contains general configuration guidelines when connecting a Linux server via Fibre Channel or iSCSI to an EMC storage array. Note: Multi-initiator zones are not recommended in a Linux fabric environment. FC and FCoE When using Linux hosts in a fabric environment, the zoning must be set up as single initiator and single target zoning. A single initiator/single target zone is composed of one adapter and one EMC storage array port.
Introduction Figure 1 provides a zoning example. Linux Server HBA or NIC HBA or NIC sub-network SPA 0 sub-network SPA 1 SPB 0 SPB 1 Array Figure 1 Zoning example EMC storage array-specific settings Refer to the EMC Host Connectivity Guide for Linux, available at http://Powerlink.EMC.com, for EMC storage array-specific settings.
2 Installation Steps This chapter outlines the prerequisites for first-time installation, offers a summary of the installation steps with links to the appropriate sections, and provides information on installing the adapter. Review the EMC Support Matrix for the latest information on approved adapters and drivers. ◆ ◆ ◆ Prerequisites for first-time installation ........................................... 22 Summary of installation steps...............................................
Installation Steps Prerequisites for first-time installation In order to complete a first-time installation of the QLogic adapter in your server, you will need the following: ◆ “Operating system” on page 22 ◆ “QLogic SANSurfer and SANSurfer CLI” on page 22 ◆ “BIOS and firmware” on page 22 ◆ “Linux driver” on page 23 Operating system Before the adapter is installed, the Linux operating system must be installed and properly configured.
Installation Steps Follow the Downloads > EMC links to your adapter for the appropriate version. Linux driver Use the Linux driver for your HBA or CNA specified in the EMC Support Matrix for your supported configuration. EMC supports both in-kernel and out-of-kernel drivers. Note: The installation of the in-kernel driver occurs when you install your Linux distribution of choice.
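Before choosing between the in-kernel and out-of-kernel driver, it can help to see which qla2xxx revision, if any, is already loaded. A minimal sketch that reads the version from sysfs; the sysfs root is parameterized so the function can be exercised against a mock tree, and the directory and version string written below are purely illustrative.

```shell
# Hedged sketch: report the loaded qla2xxx driver version from sysfs.
# On a live host the default root /sys applies; it is parameterized here
# so the function can be tried against a mock directory tree.
qla_driver_version() {
    local root="${1:-/sys}"
    local f="$root/module/qla2xxx/version"
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo "qla2xxx not loaded"
    fi
}

# Exercise against a mock tree (illustrative version string):
mkdir -p /tmp/mocksys/module/qla2xxx
echo "8.01.07-k7" > /tmp/mocksys/module/qla2xxx/version
qla_driver_version /tmp/mocksys
```

Compare the reported version against the EMC Support Matrix entry for your configuration before deciding whether an out-of-kernel install is needed.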
Installation Steps Summary of installation steps Table 1 describes the procedures for installing an EMC-approved QLogic adapter into a Linux host and configuring the host for connection to an EMC storage array over Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE). Table 1 Installation steps (page 1 of 3) Step Instructions For Fibre Channel, refer to For Fibre Channel over Ethernet (FCoE), refer to For iSCSI, refer to 1 Install the adapter.
Installation Steps Table 1 Step Instructions 4 Install the driver. There are two states: Installation steps (page 2 of 3) For Fibre Channel, refer to For Fibre Channel over Ethernet (FCoE), refer to For iSCSI, refer to • In kernel For drivers listed in the EMC Support Matrix as in kernel drivers, there is no need to install a driver since the process of installing the operating system has already included the driver. Table 4 on page 49 lists supported QLogic driver versions.
Installation Steps Table 1 Step Instructions 5 Install the firmware. There are two states: • Wrong firmware Installation steps (page 3 of 3) For Fibre Channel, refer to For Fibre Channel over Ethernet (FCoE), refer to For iSCSI, refer to The adapter firmware is part of the Linux driver and cannot be altered. The adapter firmware is part of the Linux driver and cannot be altered. “Updating the QLogic firmware for iSCSI adapters” on page 131 Proceed to Step 6.
Installation Steps Installing the adapter Follow the instructions included with your adapter. The adapter installs into a single slot. To connect the cable to the adapter: 1. (Optical cable only) Remove the protective covers on each fiber-optic cable. 2. Plug one end of the cable into the connector on the adapter as shown in the appropriate figure in this step. (The hardware might be rotated 90 degrees clockwise from the orientation shown.
Installation Steps • Fibre Channel over Ethernet converged network adapter (CNA) connectivity options include LC optical and Cisco SFP+, shown next. – LC optical cable – Cisco SFP+ (Twinax cable) 3. Plug the other end of the cable into a connector on the storage system or a hub/switch port. 4. Label each cable to identify the adapter and the storage/switch/hub port to which it connects. 5. After connecting all adapters in the server, power up the server.
Installation Steps Servers have several different bus slot types for accepting adapters: ◆ PCI ◆ PCI-X ◆ PCI-X 2.0 ◆ PCI-Express PCI slots can be 32-bit or 64-bit (denoted by their 124-pin or 188-pin connectors). These slots have plastic "keys" that prevent certain adapters from fitting into them. These keys work with the cutout notches in the adapter edge connector so that only compatible adapters will fit into them. This is done because of the voltage characteristics of the adapter. Inserting a 3.
Installation Steps Figure 3 shows the adapter edge connectors compatible with the PCI slots shown in Figure 2 on page 29. Note adapter 5, which shows a universal adapter edge connector. Universal adapters are compatible with both 3.3 V and 5 V PCI slots. Figure 3 Adapter edge connectors PCI-X (or PCI Extended) slots increase the speed with which data travels over the bus. PCI-X slots appear identical to a 64-bit PCI slot keyed for 3.3 V. (Refer to number 3 in Figure 2 on page 29 and Figure 3.
Installation Steps throughput. Because of how PCI Express slots are keyed, an x1 adapter can be inserted in all four slot types, as the adapter will negotiate with the slot to determine the highest mutually supported number of lanes. However, an adapter requiring x16 lanes will not fit into a smaller slot. Figure 4 PCI Express slots Figure 5 shows x1, x4, and x16 lane slots aligned on a mainboard. You can see how the slots are keyed so that low-lane adapters can fit into larger slots.
Installation Steps QLogic offers adapters for each bus/slot type available. Table 2 shows each of the EMC-supported QLogic adapters, and their respective slot requirements. Be sure to consult both your server user guide and QLogic to ensure that the adapter you want to use is compatible with your server's bus. Slot requirements of EMC-supported QLogic adapters Table 2 Adapter model Protocol PCI spec BUS length Power Slot key QLA2200F FC PCI 2.1 64-bit 3.3V, 5V Universal QLA200 FC PCI-X 1.
Installation Steps most up-to-date information on which servers support these adapters.
3 Installing and Configuring the BIOS Settings This chapter describes the procedures for installing and configuring the BIOS settings. ◆ ◆ ◆ Verifying and configuring the BIOS settings ................................. 36 Manually setting the topology for QLogic Fibre Channel adapters ............................................................................................... 43 Manually setting the data rate for QLogic Fibre Channel adapters .......................................
Installing and Configuring the BIOS Settings Verifying and configuring the BIOS settings After the adapter is installed, follow these steps during system boot to verify and configure adapter firmware settings. To use SANsurfer or SANsurfer CLI for this function refer to the SANsurfer or SANsurfer CLI documentation you have downloaded. Refer to the EMC Support Matrix for required BIOS versions for qualified adapters.
Installing and Configuring the BIOS Settings d. Under Adapter Settings, note the BIOS version: – If the banner displays the required version, continue to “EMC recommended adapter BIOS settings” on page 39. – If the banner does not display the required version, upgrade the firmware as described under the “Upgrading the adapter BIOS” on page 37; then proceed to “EMC recommended adapter BIOS settings” on page 39.
Installing and Configuring the BIOS Settings Be sure to check the readme included with the BIOS files to make sure you have all of the appropriate files before proceeding. a. Insert a diskette into a Microsoft Windows 9x machine. b. Open any DOS window. c. At the DOS prompt, format the diskette by entering: format /s a: d. At the DOS prompt, change directory (cd) to the location of the saved zipped file, then extract the file to the diskette.
Installing and Configuring the BIOS Settings Method 3: Upgrading the adapter BIOS using QLogic SANsurfer CLI The SANsurfer CLI (scli) is installed as part of the qlinstaller or may be downloaded from the EMC-approved section of the QLogic website. To update the BIOS using the SANsurfer CLI, refer to the QLogic provided documentation on their website for detailed instructions.
Installing and Configuring the BIOS Settings heading are those that have been tested and determined to be applicable in a Linux environment. The settings are configurable in NVRAM using the Host Adapter Settings, Advanced Settings, and Extended Firmware Settings menus. To use SANsurfer or the SANsurfer CLI to modify the NVRAM settings, refer to the SANsurfer or SANsurfer CLI documentation from QLogic.
Installing and Configuring the BIOS Settings
Table 3    QLogic BIOS settings for Fibre Channel HBAs (page 2 of 2)

                                            EMC recommended settings
Setting                  QLogic default     No Multipath functionality                                 With Multipath functionality
Enable LIP Full Login    Yes                Yes                                                        Yes
Enable Target Reset      Yes                Yes                                                        Yes
Login Retry Count        8                  8                                                          8
Port Down Retry Count    8                  45                                                         30
Link Down Timeout        15                 45                                                         15
Extended Error Logging   Disabled           Disabled (Do not use debugging) / Enable (Use debugging)   Disabled (Do not use debugging) / Enable (Use debugging)
Installing and Configuring the BIOS Settings Note: For Linux attach, EMC recommends setting the Connection Options parameter to 1 when attached to a fabric and to 0 when attached to an EMC storage array directly. Fibre Channel over Ethernet (FCoE) CNAs EMC recommends the default settings for the QLogic CNAs. There are no settings to the BIOS or NVRAM to alter. iSCSI HBAs The only settings that are required to complete the installation are those of the intended iSCSI targets.
Installing and Configuring the BIOS Settings Manually setting the topology for QLogic Fibre Channel adapters The EMC default setting for the topology is set to 2 (Loop preferred; otherwise, point to point). For Linux environments, it is recommended that the Connection Options parameter be set to 1 when attached to a fabric and to 0 when directly attached to an EMC storage array. Follow these steps to set the NVRAM variables for the topology: 1. Boot the host.
Installing and Configuring the BIOS Settings Manually setting the data rate for QLogic Fibre Channel adapters The EMC default setting for the data rate on the QLA23xx/QLE23xx adapters is Auto Select mode. If necessary, the mode may be set manually to 1 GB, 2 GB, or Auto Select mode. The EMC default setting for the data rate on the QLA24xx/QLE24xx 4 GB capable adapters is Auto Select mode. If necessary, the mode may be set manually to 1 GB, 2 GB, 4 GB, or Auto Select mode.
4 Installing and Configuring the Linux Host with the QLogic Driver This chapter describes the procedures for installing and configuring the driver. It is divided into the following sections. ◆ ◆ ◆ ◆ ◆ ◆ Introduction ........................................................................................ QLogic SANsurfer and SANsurfer CLI .......................................... Fibre Channel and FCoE in kernel driver versions.......................
Installing and Configuring the Linux Host with the QLogic Driver Introduction Using the QLogic adapter with the Linux operating system requires adapter driver software. The driver functions at a layer below the Linux SCSI driver to present Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), or iSCSI devices to the operating system as if they were standard SCSI devices.
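Once the driver is loaded, the Fibre Channel ports it has registered appear under the fc_host sysfs class. A hedged sketch that enumerates them; the class directory is parameterized so the sketch can run against a mock tree, and the host number and WWPN below are illustrative, not values from a real adapter.

```shell
# Hedged sketch: list FC host ports registered under the fc_host class.
# On a live host the default /sys/class/fc_host applies; a mock tree is
# built below so the sketch runs anywhere.
list_fc_ports() {
    local root="${1:-/sys/class/fc_host}"
    local h
    for h in "$root"/host*; do
        [ -d "$h" ] || continue
        echo "$(basename "$h"): $(cat "$h/port_name" 2>/dev/null)"
    done
}

# Exercise against a mock tree (illustrative host number and WWPN):
mkdir -p /tmp/mockfc/host3
echo "0x210000e08b399a54" > /tmp/mockfc/host3/port_name
list_fc_ports /tmp/mockfc
```

On a real host, an empty listing after driver load usually indicates the adapter was not detected or the wrong driver module was built into the ramdisk.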
Installing and Configuring the Linux Host with the QLogic Driver QLogic SANsurfer and SANsurfer CLI QLogic's SANsurfer program is a GUI-based utility and the SANsurfer CLI is a text-based utility. Both applications may be installed on any Linux system and used to manage, configure, and update the EMC-approved QLogic adapters. Complete documentation and the EMC-qualified versions of SANsurfer and the SANsurfer CLI are available for download from the EMC-approved section of the QLogic website. http://www.
Installing and Configuring the Linux Host with the QLogic Driver 3. Enter: qioctl-install -install The following is an example of loading the IOCTL module on RHEL 4 U5 and above with the command: modprobe -v qioctlmod RPM packages needed for RHEL5 To run the SANsurfer installer under Red Hat Enterprise Linux 5, if the default install is selected, the following RPMs need to be installed: compat-libstdc++-33-3.2.3-61..rpm libXp-1.0.0-8..rpm Note: On x86_64, make sure to load the 32-bit libraries.
Installing and Configuring the Linux Host with the QLogic Driver Fibre Channel and FCoE in kernel driver versions The following installation information is contained in this section: ◆ “Supported in kernel driver versions” on page 49 ◆ “Installation instructions for the in kernel QLogic driver for Linux 2.4.x kernel” on page 53 ◆ “Installation Instructions for the in kernel QLogic driver in Linux 2.6.
Installing and Configuring the Linux Host with the QLogic Driver Table 4 OS 50 Supported FC and FCoE in kernel driver versions (page 2 of 4) Driver version Supported adapters 1/2 Gb 4 Gb 8 Gb CNA RHEL 4 U3 Miracle Linux SE 4.0 SP1 RedFlag DC Server 5.0 SP1 Haansoft Linux 2006 Server SP1 8.01.02-d4 √ √ SLES 9 SP3 8.01.02-sles √ √ RHEL 4 U4 Asianux 2.0 SP2 OEL 4 U4 8.01.04-d7 √ √ SLES 10 GA 8.01.04-k √ √ RHEL 4.5 OEL 4.5 8.01.04-d8 √ √ RHEL 4.6 OEL 4.6 8.01.07-d4 √ √ RHEL 4.
Installing and Configuring the Linux Host with the QLogic Driver Table 4 OS Supported FC and FCoE in kernel driver versions (page 3 of 4) Driver version Supported adapters 1/2 Gb 4 Gb 8 Gb CNA SLES 10 SP1 8.01.07-k3 √ √ RHEL 5.1 Asianux 3.0 SP1 OEL 5.1 8.01.07-k7 √ √ √ RHEL 5.2 OEL 5.2 8.02.00-k5-rhel5.2-03 √ √ √ RHEL 5.2 (errata kernels equal to or greater than 2.6.18-92.1.6.el5) OEL 5.2 (errata kernels equal to or greater than 2.6.18-92.1.6.0.1.el5) 8.02.00-k5-rhel5.
Installing and Configuring the Linux Host with the QLogic Driver Table 4 Supported FC and FCoE in kernel driver versions (page 4 of 4) OS Driver version Supported adapters 1/2 Gb 4 Gb 8 Gb CNA SLES 11 SP1 (kernel < 2.6.32.13-0.4.1) 8.03.01.06.11.1-k8 √ √ √ √a SLES 11 SP1 (kernel > 2.6.32.13-0.4.1 < 2.6.32.27-0.2.2) 8.03.01.07.11.1-k8 √ √ √ √a SLES 11 SP1 (kernel > 2.6.32.27-0.2.2) 8.03.01.08.11.1-k8 √ √ √ √a RHEL 6.0 8.03.01.05.06.0-k8 √ √ √ √a SLES 10 SP4 8.03.07.03.06.
Installing and Configuring the Linux Host with the QLogic Driver Installation instructions for the in kernel QLogic driver for Linux 2.4.x kernel This section contains the following instructions for enabling the QLogic driver: ◆ “Enabling the QLogic driver in RHEL 3.0” on page 53 ◆ “Enabling the QLogic driver in SLES 8” on page 54 Enabling the QLogic driver in RHEL 3.0 To enable this driver, follow these steps: 1. Ensure that the /etc/modules.
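The /etc/modules.conf alias entries used throughout this chapter can be added idempotently rather than edited by hand. A sketch operating on a temporary file; the existing contents and the qla2300 alias follow the document's own examples, and the mktemp path is a stand-in for /etc/modules.conf on a real host.

```shell
# Hedged sketch: append the QLogic scsi_hostadapter alias to a
# modules.conf-style file only if it is not already present.
conf=$(mktemp)
cat > "$conf" <<'EOF'
alias eth0 e1000
alias scsi_hostadapter mptscsih
EOF
grep -q '^alias scsi_hostadapter1 qla2300$' "$conf" ||
    echo 'alias scsi_hostadapter1 qla2300' >> "$conf"
grep scsi_hostadapter "$conf"
```

Because the append is guarded by the grep, re-running the step never duplicates the alias line.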
Installing and Configuring the Linux Host with the QLogic Driver Enabling the QLogic driver in SLES 8 In order for the driver to be loaded at boot time, the driver must be listed in the /etc/sysconfig/kernel and /etc/modules.conf files and the ramdisk must be updated to reflect the changes. To enable the driver: 1. Edit /etc/sysconfig/kernel: vi /etc/sysconfig/kernel a. Add a reference to the QLogic qla2300.o driver in the INITRD_MODULES line: INITRD_MODULES="scsi_mod sd_mod mptscsih qla2300 reiserfs" b.
Installing and Configuring the Linux Host with the QLogic Driver where $1 is the v2.4.x kernel version currently running. Example: cd /boot mkinitrd -k vmlinuz-2.4.21-295-smp -i initrd-2.4.21-295-smp 4. Reboot the system. Installation Instructions for the in kernel QLogic driver in Linux 2.6.x kernels If you are installing the OS after the adapter has been installed in the server, the OS will automatically detect the adapter, change the configure file, and build a RAM disk including the driver.
Installing and Configuring the Linux Host with the QLogic Driver Note: QLA2300 manages QLA2310, QLA2340, and QLA2342. QLA2322 manages QLE2360 and QLE2362. QLA2400 manages QLA2460, QLA2462, QLE2460, and QLE2462. QLA6312 manages QLE220. 2. Whenever /etc/modprobe.conf is modified, a new ramdisk should be created to reflect the changes made. Create a new ramdisk image to include the newly added references to the QLogic adapters: cd /boot mkinitrd -v initrd-$1.img $1
Installing and Configuring the Linux Host with the QLogic Driver where $1 is the v2.6.x kernel version currently running. Example: mkinitrd -v initrd-2.6.18-8.el5.img 2.6.18-8.el5 3. Reboot the host. Enabling the QLogic driver in SLES 9 In order for the driver to be loaded at boot time, the driver must be listed in the /etc/sysconfig/kernel file and the ramdisk must be updated to reflect the changes. To enable the driver: 1. Edit /etc/sysconfig/kernel: vi /etc/sysconfig/kernel a.
Installing and Configuring the Linux Host with the QLogic Driver b. Save the changes and quit from vi. 2. Create a new ramdisk to reflect the changes made: cd /boot mkinitrd -k vmlinuz-$1 -i initrd-$1 where $1 is the v2.6.x kernel version currently running. Example: cd /boot mkinitrd -k vmlinuz-2.6.16.21-0.8-smp -i initrd-2.6.16.21-0.8-smp 3. Reboot the system.
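The INITRD_MODULES edit shown above can also be scripted with sed instead of vi. A sketch on a temporary copy of the file; the module list is the document's own example, and the substitution inserts qla2xxx just before the filesystem module, matching the ordering used in this chapter's examples.

```shell
# Hedged sketch: insert qla2xxx into the INITRD_MODULES line of a
# /etc/sysconfig/kernel-style file, ahead of the last (filesystem) module.
f=$(mktemp)
echo 'INITRD_MODULES="scsi_mod sd_mod mptscsih reiserfs"' > "$f"
sed -i 's/^INITRD_MODULES="\(.*\) \([^ ]*\)"/INITRD_MODULES="\1 qla2xxx \2"/' "$f"
cat "$f"
```

After the edit, the ramdisk still has to be rebuilt and the system rebooted, exactly as in the numbered steps above.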
Installing and Configuring the Linux Host with the QLogic Driver Fibre Channel and FCoE out of kernel driver versions The following installation information is contained in this section: ◆ “Supported out of kernel driver versions” on page 59 ◆ “Installation instructions for the out of kernel QLogic driver” on page 61 ◆ “Uninstallation methods for the QLogic v7.xx.xx/v8.xx.xx driver” on page 76 ◆ “QLogic SANsurfer and SANsurfer CLI” on page 47 ◆ “QLogic v7.x and v8.
Installing and Configuring the Linux Host with the QLogic Driver Table 5 OS Supported FC and FCoE out of kernel driver versions (page 2 of 2) Driver version Supported adapters 1/2 Gb 4 Gb 8 Gb CNA Miracle Linux SE 4.0 RedFlag DC Server 5.0 Haansoft Linux 2006 Server 8.00.03b1 √ RHEL4 U3 Miracle Linux SE 4.0 SP1 RedFlag DC Server 5.0 SP1 Haansoft Linux 2006 Server SP1 8.01.02-d4 √ √ SLES9 SP3 8.01.02-sles √ √ RHEL 3 U2 RHEL 3 U3 RHEL 3 U4 RHEL 3 U5 SLES 8 SP3 SLES 8 SP4 7.03.
Installing and Configuring the Linux Host with the QLogic Driver Refer to the latest EMC Support Matrix for specific qualified kernel versions and distributions. Note: The support stated in the EMC Support Matrix supersedes versions listed in this document. Installation instructions for the out of kernel QLogic driver This section contains the following information for installing the out of kernel QLogic driver: ◆ “Downloading the QLogic v7.x/v8.x-series driver for the v2.4/v2.6.
Installing and Configuring the Linux Host with the QLogic Driver 4. Find the desired and supported driver for the kernel version and distribution, and click the associated Download link to save the file. Preinstallation instructions for the QLogic v7.xx.xx/v8.xx.xx driver Perform the following steps prior to the installation: 1. Stop all I/O. 2. Unmount all filesystems attached to the QLogic driver. 3.
Installing and Configuring the Linux Host with the QLogic Driver ◆ To create a modular v7.xx.xx/v8.xx.xx driver using the DKMS RPM, refer to “Method 1: Installing the QLogic v7.xx.xx/v8.xx.xx driver via the QLogic DKMS RPM” on page 63. Use the QLogic DKMS RPM to compile and install the modular driver for Dell servers attached to EMC storage arrays. This method requires no manual edits for Dell servers attached to EMC storage arrays.
Installing and Configuring the Linux Host with the QLogic Driver The following are example steps to integrate the QLogic driver. Also refer to the README file in the driver package. 1. Boot into the qualified and supported kernel onto which the driver will be installed. 2. Obtain the qla2xxx-v8.xx.xx1-2dkms.tgz package from the EMC-approved section of the QLogic website as instructed under the “Downloading the QLogic v7.x/v8.x-series driver for the v2.4/v2.6.x kernel” on page 61. 3.
Installing and Configuring the Linux Host with the QLogic Driver Method 2: Installing the QLogic v7.xx.xx/v8.xx.xx driver via the QLogic installation script This section guides you through the process of installing and utilizing the QLogic installation script. The script will build and install the driver and will modify the /etc/modprobe.conf.local and /etc/sysconfig/kernel files on SLES hosts.
Installing and Configuring the Linux Host with the QLogic Driver Proceed with the installation: cd qlafc-linux-8.xx.xx-1-install/ ./qlinstall -i -dp The qlinstall installation script provides the following features: ◆ Installs the driver source RPM which installs the driver source code in the following path: /usr/src/qlogic/ ◆ Builds and installs the QLogic driver and configuration module (qla2xxx_conf.o) for the QLogic adapter model(s) installed in the system.
Installing and Configuring the Linux Host with the QLogic Driver RHEL examples An example of the console output reported by the QLogic installation script on RHEL hosts is as follows: ./qlinstall -i -dp #*********************************************************# # QLogic HBA Linux Driver Installation # # Version: 1.00.00b2pre9 #*********************************************************# # Kernel version: 2.6.9-5.
Installing and Configuring the Linux Host with the QLogic Driver the saved configuration to take effect. Configuration saved on HBA port 1. Changes have been saved to persistent storage. Please reload the QLA driver module/rebuild the RAM disk for the saved configuration to take effect. Saved copy of /etc/modprobe.conf as /usr/src/QLogic/v8.00.03-3/backup/modprobe.conf-2.6.9-5.EL-050505-161350.bak Saved copy of /boot/efi/efi/redhat/initrd-2.6.9-5.EL.img as /usr/src/QLogic/v8.00.03-3/backup/initrd-2.6.9-5.
Installing and Configuring the Linux Host with the QLogic Driver Product Type : Disk Number of LUN(s) : 26 Status : Online --------------------------------------------------------------------------------------------------------------------------------------------------------HBA Port 1 - QLA2342 Port Name: 21-01-00-E0-8B-39-9A-54 Port ID: 6B-0E-00 ----------------------------------------------------------------------------Path : 0 Target : 0 Device ID : 0x81 Port ID : 49-1B-00 Product Vendor : DGC Product I
Installing and Configuring the Linux Host with the QLogic Driver ./qlinstall -i -dp #*********************************************************# # QLogic HBA Linux Driver Installation # # Version: 1.00.00b2pre4 # #*********************************************************# Kernel version: 2.6.5-7.151-smp Distribution: SUSE LINUX Enterprise Server 9 (i586) Found QLogic Fibre Channel Adapter in the system 1: QLA2312 Installation will begin for following driver(s) 1: qla2xxx version: v8.00.03 Preparing...
Installing and Configuring the Linux Host with the QLogic Driver Saved copy of /etc/sysconfig/kernel as /usr/src/qlogic/v8.00.03-1/backup/kernel-2.6.5-7.151-smp-042905-124100.bak Saved copy of /etc/modprobe.conf.local as /usr/src/qlogic/v8.00.03-1/backup/modprobe.conf-2.6.5-7.151-smp-042905-124100.ba k Saved copy of /boot/initrd-2.6.5-7.151-smp as /usr/src/qlogic/v8.00.03-1/backup/initrd-2.6.5-7.151-smp-042905-124100.bak QLA2XXX -- Rebuilding ramdisk image... Ramdisk created.
Installing and Configuring the Linux Host with the QLogic Driver Status : Online ----------------------------------------------------------------------------Path : 0 Target : 2 Device ID : 0x83 Port ID : 61-1A-13 Product Vendor : DGC Product ID : RAID 3 Product Revision : 0207 Node Name : 50-06-01-60-90-60-12-70 Port Name : 50-06-01-6A-10-60-12-70 Product Type : Disk Number of LUN(s) : 14 Status : Online ----------------------------------------------------------------------------Path : 0 Target : 3 Device
Installing and Configuring the Linux Host with the QLogic Driver Status : Online --------------------------------------------------------------------------------------------------------------------------------------------------------HBA Port 0 - QLA2340 Port Name: 21-00-00-E0-8B-13-77-20 Port ID: 74-3B-13 ----------------------------------------------------------------------------Path : 0 Target : 0 Device ID : 0x81 Port ID : 61-1A-13 Product Vendor : DGC Product ID : RAID 3 Product Revision : 0207 Node Na
Installing and Configuring the Linux Host with the QLogic Driver Port Name : 50-06-01-62-10-60-12-70 Product Type : Disk Number of LUN(s) : 14 Status : Online ----------------------------------------------------------------------------Path : 0 Target : 4 Device ID : 0x00 Port ID : 74-4A-13 Product Vendor : DGC Product ID : LUNZ Product Revision : 0206 Node Name : 50-06-01-60-90-60-12-5C Port Name : 50-06-01-62-10-60-12-5C Product Type : Disk Number of LUN(s) : 1 Status : Online ----------------------------
Method 3: Installing the QLogic v7.xx.xx driver via the QLogic RPM

This section guides you through installing the QLogic driver RPM. The RPM builds and installs the qla2300.o driver and modifies the /etc/modules.conf file, appending a host adapter alias line for the qla2300.o driver.
/lib/modules/2.4.21-32.0.1.ELsmp/kernel/drivers/scsi/
install -o root -g root qla2200_conf.o /lib/modules/2.4.21-32.0.1.ELsmp/kernel/drivers/scsi/
install -o root -g root qla2300_conf.o /lib/modules/2.4.21-32.0.1.ELsmp/kernel/drivers/scsi/
depmod -a
make: Nothing to be done for `/lib/modules/2.4.21-32.0.1.ELsmp/kernel/drivers/scsi/'.
depmod...
adding line: alias scsi_hostadapter2 qla2300_conf to /etc/modules.
◆ “Method 2: Uninstalling the QLogic v7.xx.xx/v8.xx.xx driver via the QLogic installation script” on page 77
◆ “Method 3: Uninstalling the QLogic v7.xx.xx driver via the QLogic RPM” on page 78

Method 1: Uninstalling the QLogic v7.xx.xx/v8.xx.xx driver via the QLogic DKMS RPM

This section provides guidance for uninstalling the QLogic v7.xx.xx/v8.xx.xx driver via the QLogic DKMS RPM package.
[root@l82bi116 qlafc-linux-8.xx.xx-install]# ./qlinstall -u

An example of the console output reported by the driver removal is as follows:

3. Verify that the /etc/modprobe.conf file contains the information necessary for the server to boot and that a new ramdisk has been created. If the ramdisk has not been created as in the example above, create one:
cd /boot
mkinitrd -v initrd-$1.img $1
where $1 is the currently running v2.6.x kernel version.
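The verification in step 3 can be sketched as a quick shell check. This is a minimal sketch, not part of the QLogic tooling: the CONF and KVER variables are overridable so the check can be pointed at a test file, and the default paths follow the RHEL conventions used above.

```shell
# Sanity check after driver removal: confirm no stale qla2xxx aliases
# remain and that a ramdisk exists for the running kernel.
CONF=${CONF:-/etc/modprobe.conf}
KVER=${KVER:-$(uname -r)}
if [ -f "$CONF" ] && grep -q 'qla2' "$CONF"; then
    echo "stale qla2xxx entries remain in $CONF"
else
    echo "no stale qla2xxx entries found"
fi
if [ -f "/boot/initrd-$KVER.img" ]; then
    echo "ramdisk present for $KVER"
else
    echo "ramdisk missing: cd /boot && mkinitrd -v initrd-$KVER.img $KVER"
fi
```

If either check fails, repeat the mkinitrd step above before rebooting.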
4. Reboot the host.

QLogic v7.x and v8.x series driver parameters

The QLogic driver contains a number of parameters that may be modified to enable failover functionality or to enhance performance.

QLogic v7.x series driver parameters

The QLogic and EMC recommended values are listed in Table 6; descriptions of the parameters follow the table.
Table 6 QLogic v7.x series driver parameters (continued)
Table 6 QLogic v7.x series driver parameters (page 3 of 3)

Parameters            QLogic default values   EMC default recommendations
qlFailoverNotifyType  0                       0
recoveryTime          10 seconds              10 seconds
failbackTime          5 seconds               5 seconds

Description of QLogic v7.x-series driver parameters

When attaching to VNX series, CLARiiON, or Symmetrix storage systems, EMC recommends that the ConfigRequired and ql2xfailover parameters be set to zero.
Installing and Configuring the Linux Host with the QLogic Driver ◆ ql2xintrdelaytimer: defines the amount of time for the firmware to wait before generating an interrupt to the host as notification of the request completion. ◆ retry_gnnft: defines the number of times to retry GNN_FT in order to obtain the Node Name and PortID of the device list. ◆ ConfigRequired: If set to 1, then only devices configured and passed through the ql2xopts parameter are presented to the OS.
Installing and Configuring the Linux Host with the QLogic Driver ◆ MaxRetriesPerIo: defines the total number of retries to perform before failing the command and returning a DID_NO_CONNECT selection timeout to the OS. ◆ qlFailoverNotifyType: defines the type of failover notification mechanism to use when a failover or failback occurs.
4. After the modification to /etc/modules.conf has been made, a new ramdisk needs to be created and the host rebooted. To create a new ramdisk, type the mkinitrd command:
• For Red Hat, type:
cd /boot
mkinitrd -v initrd-$1.img $1
where $1 is the v2.4.x kernel version currently running.
Example:
cd /boot
mkinitrd -v initrd-2.4.21-32.0.1.ELsmp.img 2.4.21-32.0.1.ELsmp
• For SuSE, type:
cd /boot
mkinitrd -i initrd-$1 -k vmlinuz-$1
where $1 is the v2.4.x kernel version currently running.
Installing and Configuring the Linux Host with the QLogic Driver Note: When attaching to VNX series, CLARiiON, or Symmetrix storage arrays, EMC recommends that the ConfigRequired and ql2xfailover parameters be set to zero in the /etc/modules.conf file.
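The recommendation in the note above can be sketched as a pair of /etc/modules.conf lines. This is a hedged example, not the installer's exact output: the qla2300 module name is taken from the RPM description earlier in this section, and a scratch file stands in for /etc/modules.conf so the sketch is safe to run as-is.

```shell
# Sketch of the modules.conf lines implementing the EMC recommendation
# (ConfigRequired=0, ql2xfailover=0). On a real host these lines go in
# /etc/modules.conf, followed by a new ramdisk and a reboot.
CONF=$(mktemp)   # stand-in for /etc/modules.conf
cat >> "$CONF" <<'EOF'
alias scsi_hostadapter qla2300
options qla2300 ConfigRequired=0 ql2xfailover=0
EOF
cat "$CONF"
```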
Installing and Configuring the Linux Host with the QLogic Driver where $1 is the currently running v2.6.x kernel version. Example: cd /boot mkinitrd -v initrd-2.6.9-22.ELsmp.img 2.6.9-22.ELsmp • For SuSE distributions, use: cd /boot mkinitrd -i initrd-$1 -k vmlinuz-$1 where $1 is the currently running v2.6.x kernel version. Example: cd /boot mkinitrd -i initrd-2.6.5-7.201smp -k vmlinuz-2.6.5-7.201smp 4. Reboot the host. Displaying the QLogic v8.
Installing and Configuring the Linux Host with the QLogic Driver An example of the console output displayed when modinfo is run on the qla2xxx module is as follows: [root@l82bi205 ~]# modinfo qla2xxx filename: /lib/modules/2.6.9-22.ELsmp/kernel/drivers/scsi/qla2xxx/qla2xxx.ko version: 8.01.06 license: GPL description: QLogic Fibre Channel HBA Driver author: QLogic Corporation parm: ql2xfdmienable:Enables FDMI registratons Default is 0 - no FDMI. 1 - perfom FDMI.
Installing and Configuring the Linux Host with the QLogic Driver parm: ql2xexcludemodel:Exclude device models from being marked as failover capable.Combine one or more of the following model numbers into an exclusion mask: 0x20 - HSV210, 0x10 - DSXXX, 0x04 - HSV110, 0x02 - MSA1000, 0x01 - XP128.
Installing and Configuring the Linux Host with the QLogic Driver iSCSI in kernel driver versions The following installation information is contained in this section: ◆ “iSCSI supported in kernel driver versions” on page 89 ◆ “Installation instructions for the in kernel QLogic driver in Linux 2.6.x kernels” on page 91 iSCSI supported in kernel driver versions Table 7 lists some examples of supported operating systems in kernel driver versions.
Table 7 Supported iSCSI in kernel driver versions (page 2 of 2)

OS                                                                    Driver version
RHEL 5.3, Asianux 3.0 SP2, OEL 5.3                                    5.01.00.01.05.03-k9
SLES 11 GA                                                            5.01.00-k8_sles11-04
SLES 11 GA (errata kernels equal to or greater than 2.6.27.23-0.1.1)  5.01.00-k9_sles11-04
SLES 11 SP1                                                           5.01.00.00.11.01-k14
RHEL 5.4, OEL 5.4, RHEL 5.5, OEL 5.5, AX3 SP3                         5.01.00.01.05.04-k9
SLES 10 SP3 (kernel errata 2.6.16.60-0.57.
Installation instructions for the in kernel QLogic driver in Linux 2.6.x kernels

CAUTION
The qla3xxx driver, which the QLogic iSCSI HBA uses for TCP/IP traffic, is automatically enabled along with the qla4xxx driver. If the qla3xxx driver is activated, it takes over the HBA; no iSCSI traffic can then be conducted through the HBA, and the server will appear to hang on boot. This is a known issue (Red Hat Bugzilla #249556).
Installing and Configuring the Linux Host with the QLogic Driver • “Enabling the QLogic driver in SLES10 and SLES 11” on page 94 Enabling the QLogic driver in RHEL 4 To enable this driver: 1. Ensure that the /etc/modprobe.conf file references an entry for each installed QLogic adapter.
where N is the sequential number of the QLogic adapter installed in the system, beginning with the number after the last host adapter entry in the file. (The first host adapter entry begins with zero.)
Example:
alias scsi_hostadapter1 qla4xxx
2. Whenever /etc/modprobe.conf is modified, a new ramdisk should be created to reflect the changes.
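The alias bookkeeping in step 1 can be sketched in shell. This is an illustrative sketch only; a scratch copy stands in for /etc/modprobe.conf so the example is safe to run as-is.

```shell
# Compute the next scsi_hostadapterN index and append the qla4xxx alias.
CONF=$(mktemp)   # stand-in for /etc/modprobe.conf
printf 'alias scsi_hostadapter mptbase\nalias scsi_hostadapter1 mptscsih\n' > "$CONF"
# The first entry has no numeric suffix, so the count of existing
# entries is itself the next available index.
N=$(grep -c '^alias scsi_hostadapter' "$CONF")
echo "alias scsi_hostadapter$N qla4xxx" >> "$CONF"
cat "$CONF"
```

With the two sample entries above, the appended line is `alias scsi_hostadapter2 qla4xxx`.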
3. Reboot the system.

Enabling the QLogic driver in SLES10 and SLES 11

If the OS was installed on the server before the adapter was present, then for the driver to be loaded at boot time it must be listed in the /etc/sysconfig/kernel file and the ramdisk must be updated to reflect the change. To enable the driver:
1. Edit /etc/sysconfig/kernel:
vi /etc/sysconfig/kernel
a. Add a reference to the QLogic qla4xxx driver.
Installing and Configuring the Linux Host with the QLogic Driver iSCSI out of kernel driver versions The following installation information is contained in this section: ◆ “iSCSI supported out of kernel driver versions” on page 95 ◆ “Installing the Linux v2.4.x host and the QLogic v3.x-Series iSCSI HBA driver” on page 96 ◆ “Installing the Linux v2.6.x host and the QLogic v5.
Table 8 Supported iSCSI out of kernel driver versions

OS                               Driver version
RHEL 4.6, RHEL 5.0               5.01.01.04
SLES 11 SP1, RHEL 6.0, RHEL 6.1  5.02.11.00.05.06-c3 a
SLES 10 SP4                      5.02.11.00.10.4-d2 a

a. For models QLE8240, QLE8242, QLE8250, and QLE8252 only.

Refer to the latest EMC Support Matrix for specific qualified kernel versions and distributions.
Installing and Configuring the Linux Host with the QLogic Driver This section provides the following instructions for installing the QLogic v3.x-Series iSCSI driver: ◆ “Preinstallation instructions,” next ◆ “Downloading the QLogic v3.x-Series iSCSI driver for the v2.4.x kernel” on page 98 ◆ “Installing QLogic v3.x-Series iSCSI driver via the QLogic DKMS RPM, Method one” on page 99 ◆ “Installing QLogic v3.
Installing and Configuring the Linux Host with the QLogic Driver To stop the iqlremote service, issue one of the two following commands: /etc/init.d/iqlremote stop service iqlremote stop Downloading the QLogic v3.x-Series iSCSI driver for the v2.4.x kernel Use the following procedure to download the EMC-approved QLogic iSCSI driver from the QLogic website: 1. Use a web browser to access the EMC-approved section of the QLogic website at the following url: http://www.qlogic.com 2.
Installing and Configuring the Linux Host with the QLogic Driver This method requires no manual edits for systems attached to EMC storage arrays. By installing the QLogic RPM, the necessary files will be edited and the driver will be compiled and installed automatically. Note: Refer to “Installing QLogic v3.x-Series iSCSI driver via the QLogic installation script, Method two” on page 101. Installing QLogic v3.
Installing and Configuring the Linux Host with the QLogic Driver qliscsi-linux-3.22-1dkms/README.dkms 4. Install the DKMS RPM: cd qliscsi-linux-3.22-1dkms rpm -ivh dkms-2.0.5-1.noarch.rpm Output example: Preparing... ########################################### [100%] 1:dkms ########################################### [100%] 5. Install the QLogic driver RPM: rpm -ivh qla4xxx-v3.22-1dkms.noarch.rpm An example of console output reported by the driver RPM installation is as follows: Preparing...
Installing and Configuring the Linux Host with the QLogic Driver - No original module exists within this kernel - Installation - Installing to /lib/modules/2.4.21-32.0.1.ELsmp/kernel/drivers/scsi/qla4xxx/ depmod.... Saving old initrd as /boot/initrd-2.4.21-32.0.1.ELsmp_old.img Making new initrd as /boot/initrd-2.4.21-32.0.1.ELsmp.img (If next boot fails, revert to the _old initrd image) mkinitrd.... DKMS: install Completed. An example of the modified /etc/modules.
In the /etc/modules.conf file, the host adapter line for the qla4010 driver will be appended. The options line containing the scsi_allow_ghost_devices and max_scsi_luns parameters will also be appended to the file. This allows the host to correctly identify the disconnected LUN 0 that is reported when attached to VNX series or CLARiiON storage systems, and allows the SCSI stack to scan up to 255 devices.
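The appended lines can be sketched as follows. This is a hedged example based on the description above, not the RPM's literal output: the exact parameter values (ghost devices enabled, 255 LUNs) are assumptions, and a scratch file stands in for /etc/modules.conf so the sketch is safe to run.

```shell
# Sketch of the qla4010 alias and SCSI mid-layer options the RPM
# appends, per the description above (values are assumptions).
CONF=$(mktemp)   # stand-in for /etc/modules.conf
cat >> "$CONF" <<'EOF'
alias scsi_hostadapter1 qla4010
options scsi_mod scsi_allow_ghost_devices=1 max_scsi_luns=255
EOF
cat "$CONF"
```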
Installing and Configuring the Linux Host with the QLogic Driver 2: qla4xxx version: v3.22 Preparing... ################################################## qla4xxx ################################################## Creating initial /usr/src/qlogic/v3.22-2/install.v3.22-2.log... Please wait: Preparing qla4xxx modular driver build building for SMP \ Installing driver in /lib/modules/2.4.21-32.0.1.ELsmp/kernel/drivers/scsi.... Building module dependency.... depmod... Loading module qla4010 version: v3.22....
ProdRv = 0217
LunSize = 4.176 GB

HBA/Target/Lun Number = 0/5/5
Vend = DGC
ProdID = RAID 5
ProdRv = 0217
LunSize = 4.176 GB

Target ID: 6
------------------------------
HBA/Target/Lun Number = 0/6/0
Vend = DGC
ProdID = RAID 3
ProdRv = 0217
LunSize = 17179869184.000 GB

HBA/Target/Lun Number = 0/6/1
Vend = DGC
ProdID = RAID 3
ProdRv = 0217
LunSize = 17179869184.

[The listing continues for the remaining targets and LUNs: RAID 3 and RAID 5 devices of 3.982 GB to 4.176 GB.]
Installing and Configuring the Linux Host with the QLogic Driver #*********************************************************# # INSTALLATION SUCCESSFUL!! # # QLogic HBA Linux driver installation completed. # #*********************************************************# An example of the modified /etc/modules.conf file is as follows: [root@l82bi114 qla2x00-v7.07.00]# more /etc/modules.
Installing and Configuring the Linux Host with the QLogic Driver mkinitrd -i initrd-2.4.21-286-smp -k vmlinuz-2.4.21-286-smp 5. Reboot the host. Refer to the latest EMC Support Matrix for specific qualified kernel versions and distributions. Note: The support stated in the EMC Support Matrix supersedes versions listed in this document. Installing the Linux v2.6.x host and the QLogic v5.
Installing and Configuring the Linux Host with the QLogic Driver Preinstallation instructions Prior to the installation: ◆ All I/O must be stopped. ◆ All filesystems attached to the QLogic driver must be unmounted. ◆ If the Naviagent/CLI is installed and enabled on the host, then the Naviagent/CLI service must be stopped. To stop the Naviagent/CLI service, issue one of the two following commands: /etc/init.
Installing and Configuring the Linux Host with the QLogic Driver 3. After selecting a category, find the HBA model being used and select the link to be transferred to the page of resources for that HBA. 4. Find the desired and supported driver for the kernel version and distribution, and click the associated Download link to save the file. The QLogic v5.x-series iSCSI driver can be installed onto a Linux v2.6.
Installing and Configuring the Linux Host with the QLogic Driver In the /etc/modprobe.conf file, the hostadapter line for the qla4xxx driver will be appended. Note: The Unisphere/Navisphere Host Agent requires that the disconnected LUN 0 be reported. The DKMS RPM will create the QLogic v5.x-series driver as a module. Follow these steps to integrate the QLogic driver into RHEL 4.0 hosts: 1. Boot into the qualified and supported kernel onto which the driver will be installed. 2. Obtain the qliscsi-linux-5.
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptscsih
alias scsi_hostadapter2 qla4xxx
alias usb-controller usb-uhci

As specified in the driver installation output, a new ramdisk is created automatically by the DKMS RPM installation. If additional changes to the /etc/modprobe.conf file are required, create a new ramdisk manually:
cd /boot
mkinitrd initrd-$1.img $1
where $1 is the currently running v2.6.x kernel version.
Installing and Configuring the Linux Host with the QLogic Driver 2. Obtain the qliscsi-linux-5.00.4-2-install.tgz package from the EMC-approved section of the QLogic website listed in “Downloading the QLogic v3.x-Series iSCSI driver for the v2.4.x kernel” on page 98. 3. Uncompress and extract the source files from the tar archive: tar zxvf qliscsi-linux-5.00.4-2-install.tgz The initial uncompression will provide you with the following: qlaiscsi-linux-5.00.04-2-install/ qlaiscsi-linux-5.00.
Installing and Configuring the Linux Host with the QLogic Driver Saved copy of /etc/modprobe.conf as /usr/src/qlogic/5.00.04-1/backup/modprobe.conf-2.6.9-22.ELsmp-122705-195448.bak Saved copy of /boot/initrd-2.6.9-22.ELsmp.img as /usr/src/qlogic/5.00.04-1/backup/initrd-2.6.9-22.ELsmp.img-122705-195448.bak qla4xxx -- Rebuilding ramdisk image... Ramdisk created.
HBA/Target/Lun Number = 0/2/29
Vend = DGC
ProdID = RAID 3
ProdRv = 0217
LunSize = 4.176 GB

HBA/Target/Lun Number = 0/2/30
Vend = DGC
ProdID = RAID 3
ProdRv = 0217
LunSize = 4.176 GB

HBA/Target/Lun Number = 0/2/31
Vend = DGC
ProdID = RAID 3
ProdRv = 0217
LunSize = 4.176 GB

[The listing continues for the remaining targets and LUNs: RAID 1, RAID 3, RAID 5, RAID 10, and DISK devices of 2.088 GB to 4.176 GB.]
Installing and Configuring the Linux Host with the QLogic Driver ProdID = LUNZ ProdRv = 0218 LunSize = 17179869184.000 GB #***************************************************# # INSTALLATION SUCCESSFUL!! # SANsurfer Driver installation for Linux completed #***************************************************# # # An example of the modified /etc/modprobe.conf file is as follows: [root@l82bi114 root]# more /etc/modules.
Installing and Configuring the Linux Host with the QLogic Driver Example: cd /boot mkinitrd -i initrd-2.6.5-7.201-smp -k vmlinuz-2.6.5-7.201-smp 5. Reboot the host. What’s next? Proceed to “Configuring the QLA40xx-Series HBA to discover iSCSI targets” on page 136.
5

Updating the CEE/Menlo or iSCSI Firmware

This chapter provides information on updating the CEE/Menlo or iSCSI firmware for Fibre Channel over Ethernet adapters.

◆ Updating the QLogic CEE/Menlo firmware for FCoE adapters ... 130
◆ Updating the QLogic firmware for iSCSI adapters.....................
Updating the QLogic CEE/Menlo firmware for FCoE adapters

FCoE adapters include an additional chip component which requires the latest supported firmware. This chip is commonly referred to as a CEE (Converged Enhanced Ethernet) or "Menlo" chip; its purpose is to handle the convergence of storage (FC) and network (IP) traffic over a single Ethernet interface. To update the CEE/Menlo firmware on the CNAs, follow these steps:
1.
Updating the QLogic firmware for iSCSI adapters

The adapter firmware for the QLogic iSCSI HBA is not part of the Linux driver and is installed in NVRAM on the HBA. To update the firmware on the iSCSI HBA, follow these steps:
1. Ensure that QLogic SANsurfer and the SANsurfer CLI are installed.
Note: Refer to “Upgrading the adapter BIOS” on page 33 for installation instructions.
2.
6

Connecting to the Storage

This chapter provides information on connecting to the storage.

◆ Zoning and connection planning in a Fibre Channel or Fibre Channel over Ethernet environment ............................................. 134
◆ Zoning and connection planning in an iSCSI environment ...... 135
◆ Configuring the QLA40xx-Series HBA to discover iSCSI targets .................................................................................................
Zoning and connection planning in a Fibre Channel or Fibre Channel over Ethernet environment

In a fabric environment, the user should plan the switch topology, the target-to-host mapping, and the zoning.

Planning procedure

The recommended procedure is as follows:
1. Draw the connectivity among the hosts, switches, and storage array to verify the correct fabric configuration.
2. Configure the zoning capability on the switch.
Connecting to the Storage Zoning and connection planning in an iSCSI environment The user should plan the connectivity of the EMC array to the QLogic iSCSI HBA based on the following considerations: Be sure to follow the configuration guidelines that EMC outlines. Using improper settings can cause erratic behavior.
Connecting to the Storage Configuring the QLA40xx-Series HBA to discover iSCSI targets The Ethernet IP and the iSCSI targets must be configured for the QLogic iSCSI QLA40xx-Series HBAs. To perform these tasks, knowledge is required of the Ethernet infrastructure topology, the IP addresses to be used for the HBA, and the IP addresses of the iSCSI ports on the targeted EMC storage arrays.
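A simple pre-check before configuring targets is to verify that the array's iSCSI port IP addresses are reachable from the host. This is a hedged sketch; the addresses below are placeholders taken from the CLI examples later in this chapter, so substitute the iSCSI port IPs of your own storage array.

```shell
# Verify IP reachability of the array's iSCSI ports before configuring
# the HBA to discover them. Addresses are placeholders.
for ip in 51.50.51.198 51.50.51.199; do
    if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip unreachable"
    fi
done
```

Note that this only checks the management path from the host's regular NIC; the QLA40xx HBA has its own TCP/IP stack, so the HBA's own Ethernet IP must still be configured and tested through SANsurfer.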
Connecting to the Storage Configuring persistent binding for the Linux QLogic iSCSI HBA This section provides the instructions for enabling persistent binding for the Linux QLogic iSCSI HBA v3.x- or v5.x-series drivers. Note: Future revisions of this driver will not contain the target level binding mechanism that is now present, and the Linux kernel udev() functionality will be used as a per device persistent binding mechanism.
Connecting to the Storage Configuring persistent binding using SANsurferCLI Note: This example uses a v5.x-series driver. The same basic steps would apply for the v3.x-series driver. QLogic SANsurferCLI is installed in the qliscsi-linux-5.00.4-2-install/ directory. In order to configure persistent binding using the SCLI, use the following command: qliscsi-linux-5.00.4-2-install/scix 1.
Connecting to the Storage 3. Unbind Target 4. Configure Target Parameters 5. Add A Target 6. Configure Target Authentication Menu 7. List LUN information 8. Save Target changes 9. Set Working Adapter 10. Refresh 11. Exit enter selection: 2 Target ID: 64 IP: 51.50.51.198 Port: 3260 ISCSI Name: iqn.1992-04.com.emc:cx.apm00033300794.a0 Alias: 0794.a0 State: Session Active Target ID: 65 IP: 51.50.51.199 Port: 3260 ISCSI Name: iqn.1992-04.com.emc:cx.apm00033300794.b0 Alias: 0794.
Connecting to the Storage Unconfiguring persistent binding using SANsurferCLI Note: This example uses a v5.x-series driver. The same basic steps would apply for the v3.x-series driver. In order to unconfigure persistent binding using the SCLI, use the following command: qliscsi-linux-5.00.4-2-install/scix 1.
Connecting to the Storage 4. Configure Target Parameters 5. Add A Target 6. Configure Target Authentication Menu 7. List LUN information 8. Save Target changes 9. Set Working Adapter 10. Refresh 11. Exit enter selection: 3 3. Select the desired target ID to be unbound: Target ID: 2 IP: 51.50.51.198 Port: 3260 ISCSI Name: Alias: State: No Connection Target ID: 3 IP: 51.50.51.199 Port: 3260 ISCSI Name: Alias: State: No Connection Target ID: 4 IP: 51.51.51.
Installing the SANsurfer iSCSI GUI

Note: The example in this section uses a v5.x-series driver. The same basic steps apply for the v3.x-series driver.

To install the SANsurfer iSCSI GUI, complete the following steps:
1. Download the QLogic iSCSI SANsurfer GUI package from the EMC-approved section of the QLogic website at www.qlogic.com, as shown below. The SANsurfer (iSCSI HBA Manager - Standalone) window displays.
2. Click Next.
Connecting to the Storage An Important Information screen displays. 3. Click Next. A Choose Product Features screen displays. 4. Choose iSCSI GUI and Agent and click Next.
The Choose Install Folder window displays.
5. Type the location where you want the iSCSI GUI and Agent to be installed and click Next.
The Pre-Installation Summary window displays.
6. Confirm the information and click Install.
Connecting to the Storage An Installing SANSurfer window displays showing the progress of the installation. Once the installation is completed, an Install Complete window displays. 7. Click Done.
Configuring persistent binding using the SANsurfer GUI

Note: The example in this section uses a v5.x-series driver. The same basic steps apply for the v3.x-series driver.

To configure persistent binding using the SANsurfer GUI, complete the following steps:
1. Launch the SANsurfer GUI utility:
# /opt/QLogic_Corporation/SANSurfer/SANSurfer
The following window displays:
2. Select Target Options > Target Settings.
3.
Connecting to the Storage An IP Address screen displays. 4. Fill out the IPv4 target address and click OK. The IP address now displays in the list. 5. Click Save Target Settings.
Connecting to the Storage An HBA Save Data Warnings window displays. 6. Click Yes. A Security Check window displays. 7. Enter the default password, config, and click OK.
Connecting to the Storage The State in the Target Options tab shows that the configuration is saving.
Connecting to the Storage Once it is saved, the State changes to Ready, Link Up, and an ISCSI Configuration Change box displays. 8. Click Yes.
7

Configuring a Boot Device on an EMC Storage Array

EMC supports booting Linux from an EMC storage array through an EMC-qualified QLogic Fibre Channel HBA, Fibre Channel over Ethernet CNA, or iSCSI HBA. (Refer to the EMC Support Matrix for specific HBAs, BIOS revisions, and drivers.)

◆ Introduction ......................................................................................
Introduction

This chapter discusses the installation of a QLogic HBA or CNA to be used to boot the Linux operating system from a device provided by an EMC storage array. This chapter is provided as a supplement to the EMC Linux Host Connectivity Guide, located on Powerlink, which provides greater detail on the installation of the Linux operating system on a boot device provided by EMC storage.
Cautions and restrictions for booting from an EMC storage array

! CAUTION
If Linux loses connectivity for long enough, the disks disappear from the system. To prevent further data loss in such a situation, EMC recommends that the error behavior be changed from continue to remount read-only. To make this change, consult the man page for tune2fs. A hard reboot is required to bring the system back to a usable state.
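The recommended change can be sketched with tune2fs. This is a minimal demonstration on a scratch filesystem image (tune2fs accepts plain files); on a real system you would target the boot-device partition, whose device name is host-specific.

```shell
# Switch the ext2/ext3 error behavior from "continue" to remount
# read-only, then verify the setting.
PATH="$PATH:/sbin:/usr/sbin"
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1024 count=1024 2>/dev/null
mke2fs -q -F "$IMG"                      # scratch filesystem for the demo
tune2fs -e remount-ro "$IMG" >/dev/null  # the recommended error behavior
tune2fs -l "$IMG" | grep -i 'errors behavior'
rm -f "$IMG"
```

After the change, `tune2fs -l` reports the errors behavior as "Remount read-only".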
◆ CLARiiON service and upgrade procedures, such as online CLARiiON FLARE upgrades and/or configuration changes.
◆ VNX series or CLARiiON SP failures, including failed lasers.
◆ VNX series or CLARiiON storage system power failure.
◆ Storage area network failures, such as failures in Fibre Channel switches, Ethernet switches, switch components, or switch power.
Configuring a Boot Device on an EMC Storage Array Limitations This section discusses limitations, including: ◆ “Common limitations” on page 145 ◆ “Symmetrix-specific limitations” on page 146 ◆ “CLARiiON-specific limitations” on page 147 Common limitations Boot configurations must not deviate from the following limitations established by EMC: ◆ The EMC Storage device must have enough disk space to hold the Linux operating system.
Configuring a Boot Device on an EMC Storage Array – An alternative to masking the VCM DB is to map the Volume Logix database device so that it is the highest LUN presented to the host. Be aware that the LUN number should not be higher than 254 (FE). Note: The EMC-recommended method is to use LUN masking. • VNX series or CLARiiON ghost LUN - If no LUN 0 exists in the storage group, a phantom device (LUNZ) will be presented by the array in its place.
Configuring a Boot Device on an EMC Storage Array ◆ When attached to a Symmetrix, the physical-to-logical split must provide the minimum disk space required to install the Linux operating system. Refer to your Linux distribution for these requirements. ◆ When booting RHEL 4.5 from a LUN on a Symmetrix where a VCM gatekeeper exists, you may receive an "unhandled exception with ZeroDivisionError" message when partitioning the boot LUN.
Configuring a Boot Device on an EMC Storage Array Configuring a Symmetrix boot device for FC or FCoE This section describes how to install an EMC-qualified version of Linux onto an EMC Symmetrix storage array connected to Intel-based x86 and x86_64 class systems or AMD Opteron-based x86_64 class systems. Preparing the Symmetrix storage array To prepare the Symmetrix storage array: ◆ It is recommended that Volume Logix be enabled on the Symmetrix storage array for LUN masking purposes.
Configuring a Boot Device on an EMC Storage Array ◆ For servers with IDE CD-ROM drives, disable the BIOS on the server's integrated SCSI adapter(s). The SCSI BIOS is not required to boot from the CD-ROM. ◆ Disable the BIOS on any adapters in the system other than the QLogic adapter designated for booting. Configuring the QLogic BIOS for SAN boot After the BIOS is installed and enabled, it must be configured for booting from the SAN.
Configuring a Boot Device on an EMC Storage Array 4. From the Fast!UTIL Options menu, select Configuration Settings and press Enter. 5. From the Configuration Settings menu, select Adapter Settings and press Enter. 6. From the Host Adapter Settings menu, select Host Adapter BIOS and press Enter to enable it if it is not already enabled. Note: Refer to Table 3 on page 36 for recommended settings. 7. Press ESC to exit the Configuration Settings menu. 8.
Configuring a Boot Device on an EMC Storage Array Configuring a VNX series or CLARiiON boot device for FC or FCoE This section describes how to install an EMC-qualified version of Linux onto an EMC VNX series or CLARiiON storage system connected to Intel-based x86 and x86_64 class systems or AMD Opteron-based x86_64 class systems.
Configuring a Boot Device on an EMC Storage Array ◆ The PCI Fibre Channel adapter must be in the lowest-numbered PCI slot in the server. For example, if there are three adapters in the system in slots 2, 4, and 5, connect the cable to the adapter in slot 2. Do not connect cables to other adapters until the installation is complete and the host has been rebooted. ◆ SCSI hard disks are allowed in SAN boot configurations. However, the BIOS for the disk's SCSI adapters must be disabled.
Configuring a Boot Device on an EMC Storage Array 3. After Fast!UTIL loads, the display depends on whether there are multiple QLogic adapters installed: • If there is only one QLogic adapter, the Fast!UTIL Options menu appears. • If there are multiple QLogic adapters, a list of addresses occupied by those adapters appears. Since the EMC storage array is attached to the lowest-numbered PCI slot, select the first adapter from the list; then press Enter. The Fast!UTIL Options menu appears. 4.
Configuring a Boot Device on an EMC Storage Array From the management host, manually register the host's adapter and add the host to the newly created Storage Group using Unisphere/Navisphere Management software. 12. Return to the BIOS configuration and reboot the host. 13. When the QLogic banner is displayed (as shown in step 2), press Ctrl-Q. 14. Once the Fast!UTIL loads, select the Configuration Settings menu and press Enter. 15.
Configuring a Boot Device on an EMC Storage Array Installing the Linux operating systems with out-of-kernel drivers onto a boot device using FCoE adapters EMC supports booting from an array device in FCoE environments with the RHEL 5, SLES 10, and SLES 11 operating systems.
Configuring a Boot Device on an EMC Storage Array 5. Insert the disk (described in Step 1) into either the floppy disk drive or the CD drive, depending on the option selected in Step 4. 6. Click OK, then press ENTER. The SCSI driver is loaded automatically. 7. The Disk Driver window displays, prompting for more drivers to install. Click NO, then press ENTER. 8. Insert the current Linux Red Hat product CD #1 in the CD drive (remove the iso-dd-kit CD first if necessary), then press ENTER. 9.
Configuring a Boot Device on an EMC Storage Array The following message displays: Make sure that CD number 1 is in your drive. 8. Put SLES10 CD 1 in the drive and press OK. 9. Follow the on-screen instructions to complete the installation. SLES 11 OS SAN-boot installation with QLogic FCoE adapters To install SLES 11 SAN-boot with QLogic FCoE adapters: 1. .
Configuring a Boot Device on an EMC Storage Array Configuring a Symmetrix boot device for iSCSI 3.x This section describes how to install an EMC-qualified version of Linux onto an EMC Symmetrix storage array connected to Intel-based x86 and x86_64 class systems or AMD Opteron-based x86_64 class systems. Preparing the Symmetrix storage array ◆ It is recommended that Volume Logix be enabled on the Symmetrix storage array for LUN masking purposes.
Configuring a Boot Device on an EMC Storage Array ◆ Disable the BIOS on any HBAs in the system other than the QLogic HBA designated for booting. Configuring the QLogic BIOS for SAN boot After the BIOS is installed and enabled, it must be configured for booting from EMC Symmetrix storage arrays. In cases where the host is booting from an internal drive and is being converted to boot from the SAN, QLogic SANsurfer may be used to configure the BIOS for SAN boot.
Configuring a Boot Device on an EMC Storage Array • If there are multiple QLogic HBAs, a list of addresses occupied by those HBAs appears. Since the EMC storage array is attached to the lowest-numbered PCI slot, select the first adapter from the list; then press ENTER. The Fast!UTIL Options menu appears. 4. From the Fast!UTIL Options menu, select Configuration Settings and press ENTER. 5. From the Configuration Settings menu, select Adapter Settings and press ENTER. 6.
Configuring a Boot Device on an EMC Storage Array 16. Select Save Changes and press ENTER. 17. Press ESC to exit the Fast!UTIL menu. 18. Reboot the host. Configuring a Symmetrix boot device for iSCSI 3.
Configuring a Boot Device on an EMC Storage Array Configuring a VNX series or CLARiiON boot device for iSCSI 3.x This section describes how to install an EMC-qualified version of Linux onto an EMC VNX series or CLARiiON storage system connected to Intel-based x86 and x86_64 class systems or AMD Opteron-based x86_64 class systems. Preparing the VNX series or CLARiiON storage system ◆ It is recommended that Access Logix be enabled on the VNX series or CLARiiON storage system for LUN masking purposes.
Configuring a Boot Device on an EMC Storage Array ◆ SCSI hard disks are allowed in SAN boot configurations. However, the BIOS for the disk's SCSI adapters must be disabled. Any SCSI disks attached to the host should be disconnected during the operating system installation. ◆ For servers with SCSI CD-ROM drives, ensure that the BIOS is enabled on the SCSI channel that includes the CD-ROM. Disable the BIOS on any other integrated SCSI channels.
Configuring a Boot Device on an EMC Storage Array • If there is only one QLogic HBA, the Fast!UTIL Options menu appears. • If there are multiple QLogic HBAs, a list of addresses occupied by those HBAs appears. Since the EMC storage array is attached to the lowest-numbered PCI slot, select the first adapter from the list; then press ENTER. The Fast!UTIL Options menu appears. 4. From the Fast!UTIL Options menu, select Configuration Settings and press ENTER. 5.
Configuring a Boot Device on an EMC Storage Array 12. Once the Fast!UTIL loads, select the Configuration Settings menu and press ENTER. 13. From the Configuration Settings menu, select the iSCSI Boot Settings menu and press ENTER. 14. From the iSCSI Boot Settings menu, select Primary and press ENTER to enable this option if it is not already enabled. The adapter will scan for attached storage devices and a list of the available LUN(s) will be displayed.
Configuring a Boot Device on an EMC Storage Array Installing onto the boot device with the QLogic HBA v3.x-Series driver To install the OS on an EMC storage array device, you will need to create a Device Driver Update Disk. To simplify the installation, EMC recommends having only one LUN presented by the targeted EMC storage array during the installation process. Additional LUNs should be added after the OS is completely installed and the host has been rebooted, to ensure proper operation.
Configuring a Boot Device on an EMC Storage Array Install kernel headers and sources The kernel sources must be installed on the system on which the driver diskette image will be built. If the kernel sources are not installed, install the kernel-source RPM from the Red Hat installation CD or from RHN prior to continuing. Note: The kernel sources must match the kernel version of the ISO images to be installed on the boot device. For example, the kernel version of RHEL 3.0 Update 5 is 2.4.21-32.EL.
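The version-match requirement above can be sanity-checked before building. This is a sketch: the rpm query is shown only as a comment, since the kernel-source package name and query format can vary by distribution, and check_sources is a helper invented here for illustration.

```shell
# The kernel version the driver diskette is being built for (from the text
# above: RHEL 3.0 Update 5 ships kernel 2.4.21-32.EL).
TARGET_KERNEL=2.4.21-32.EL

# Compare the wanted version against the version actually installed.
check_sources() {
    local want=$1 have=$2
    if [ "$want" = "$have" ]; then
        echo "kernel sources match $want"
    else
        echo "mismatch: installed sources are $have, need $want" >&2
        return 1
    fi
}

# On a real build host, obtain the installed version with something like:
#   rpm -q --qf '%{VERSION}-%{RELEASE}' kernel-source
check_sources "$TARGET_KERNEL" "2.4.21-32.EL"
```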
Configuring a Boot Device on an EMC Storage Array The tarball contains the configuration files required to configure the QLogic driver within the DD-kit development environment. It contains the following files: Makefile, disk-info.qla4xxx, modinfo.qla4xxx, pcitable.qla4xxx, and modules.dep.qla4xxx. Obtain and configure a generic Red Hat Driver Diskette Development Kit 1. Download the current Device Driver Update Disk Development Kit (mod_devel_kit.tgz) from: http://people.redhat.com/dledford/ 2.
Configuring a Boot Device on an EMC Storage Array [mod_devel_kit]# make IMPORT_TREE=/usr/src/linux-2.4.21-32.EL IMPORT_VER=2.4.21-32.EL import If your kernel version is other than the one mentioned above, execute the import command with your kernel version accordingly. ! IMPORTANT The IMPORT_TREE variable should be the path to the selected kernel sources, and the IMPORT_VER variable is the kernel version without any arch or platform additions; for example, 2.4.21-32.EL.
Configuring a Boot Device on an EMC Storage Array 5. Delete the Makefile, Makefile.kernel, and Config.in files using the following command: [scsi]# rm -f Makefile Makefile.kernel Config.in 6. Copy the qla4xxx_dd_config_files.tgz file from the QLogic sample DD-kit (retrieved in step 2 above) and untar it into the current directory (temp/mod_devel_kit/scsi/) using the following command: [scsi]# cp temp/sample/qla4xxx_dd_config_files.tgz . [scsi]# tar xvzf qla4xxx_dd_config_files.tgz 7.
Configuring a Boot Device on an EMC Storage Array a. To speed up the build process, build only for the architecture you wish to install. To accomplish this, you can go into the /mod_devel_kit/<kernel version>/configs directory and rename any of the configs you DO NOT wish to be compiled. For example, for kernel version 2.4.21-32.EL: [mod_devel_kit]# cd 2.4.21-32.EL/configs [configs]# mv kernel-2.4.21-athlon.config old_kernel-2.4.21-athlon.config [configs]# cd ../../
Configuring a Boot Device on an EMC Storage Array 6. Change to the system specific directory in the mod_devel_kit path. For RHEL 3.0: [mod_devel_kit]# cd rhel3 7. Build the architecture-specific RHEL 3.0 driver diskette image by decompressing the file dd.img-xx.gz, where xx denotes the specific type of architecture. An example for an IA32 driver diskette image is as follows: [rhel3]# gzip -d dd.img-i686.gz An example for a 64-bit driver diskette image is as follows: [rhel3]# gzip -d v1-dd.img.gz 8.
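The decompression step can be sketched end to end. The gzip file below is created locally so the commands are runnable as-is; the dd command that would write the image to a diskette is shown only as a comment, and /dev/fd0 is the conventional first floppy device on Linux, an assumption rather than a device named by this guide.

```shell
# Stand-in for a downloaded driver diskette image (real images are
# architecture-specific, e.g. dd.img-i686.gz).
printf 'driver disk payload' > /tmp/dd.img-i686
gzip -f /tmp/dd.img-i686           # now /tmp/dd.img-i686.gz exists

# Step 7: decompress the architecture-specific image.
gzip -d /tmp/dd.img-i686.gz

# The decompressed image would then be written to a blank diskette, e.g.:
#   dd if=dd.img-i686 of=/dev/fd0 bs=1440k   # /dev/fd0: assumed floppy device
ls -l /tmp/dd.img-i686
```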
Configuring a Boot Device on an EMC Storage Array Upgrading the kernel After successfully completing the installation and rebooting the host, the kernel may be upgraded to a newer kernel revision to take advantage of fixes and features incorporated into the newer kernel errata. Note: Please refer to the EMC Support Matrix for supported kernel revisions. EMC recommends installing the kernel packages, rather than upgrading them, so that either kernel version may be used for boot.
Configuring a Boot Device on an EMC Storage Array Configuring a Symmetrix boot device for iSCSI 5.x This section describes how to install an EMC-qualified version of Linux onto an EMC Symmetrix storage array connected to Intel-based x86 and x86_64 class systems or AMD Opteron-based x86_64 class systems. Preparing the Symmetrix storage array ◆ It is recommended that Volume Logix be enabled on the Symmetrix storage array for LUN masking purposes.
Configuring a Boot Device on an EMC Storage Array ◆ Disable the BIOS on any HBAs in the system other than the QLogic HBA designated for booting. Configuring the QLogic BIOS for SAN boot After the BIOS is installed and enabled, it must be configured for booting from EMC Symmetrix storage arrays. In cases where the host is booting from an internal drive and is being converted to boot from the SAN, QLogic SANsurfer may be used to configure the BIOS for SAN boot.
Configuring a Boot Device on an EMC Storage Array 4. From the Fast!UTIL Options menu, select Configuration Settings and press ENTER. 5. From the Configuration Settings menu, select Adapter Settings and press ENTER. 6. From the Host Adapter Settings menu, select Host Adapter BIOS and press ENTER to enable it if it is not already enabled. Note: Refer to “EMC recommended NVRAM settings for Linux” on page 37 for recommended settings. 7.
Configuring a Boot Device on an EMC Storage Array 19. Reboot the host. 20. Go to “Installing onto the boot device with the QLogic HBA v5.x-Series driver” on page 192. Configuring a Symmetrix boot device for iSCSI 5.
Configuring a Boot Device on an EMC Storage Array Configuring a VNX series or CLARiiON boot device for iSCSI 5.x This section describes how to install an EMC-qualified version of Linux onto an EMC VNX series or CLARiiON storage system connected to Intel-based x86 and x86_64 class systems or AMD Opteron-based x86_64 class systems. Preparing the VNX series or CLARiiON storage system ◆ It is recommended that Access Logix be enabled on the VNX series or CLARiiON storage system for LUN masking purposes.
Configuring a Boot Device on an EMC Storage Array ◆ SCSI hard disks are allowed in SAN boot configurations. However, the BIOS for the disk's SCSI adapters must be disabled. Any SCSI disks attached to the host should be disconnected during the operating system installation. ◆ For servers with SCSI CD-ROM drives, ensure that the BIOS is enabled on the SCSI channel that includes the CD-ROM. Disable the BIOS on any other integrated SCSI channels.
Configuring a Boot Device on an EMC Storage Array • If there is only one QLogic HBA, the Fast!UTIL Options menu appears. • If there are multiple QLogic HBAs, a list of addresses occupied by those HBAs appears. Since the EMC storage array is attached to the lowest-numbered PCI slot, select the first adapter from the list; then press ENTER. The Fast!UTIL Options menu appears. 4. From the Fast!UTIL Options menu, select Configuration Settings and press ENTER. 5.
Configuring a Boot Device on an EMC Storage Array When the QLogic banner is displayed (in Step 2), press CTRL-Q. 13. Once the Fast!UTIL loads, select the Configuration Settings menu and press ENTER. 14. From the Configuration Settings menu, select the iSCSI Boot Settings menu and press ENTER. 15. From the iSCSI Boot Settings menu, select Primary and press ENTER to enable this option if it is not already enabled.
Configuring a Boot Device on an EMC Storage Array Installing onto the boot device with the QLogic HBA v5.x-Series driver EMC supports only the Linux distributor's in-box driver that arrives with the kernel. This simplifies the process of installing the OS on an EMC storage array device. The Linux distributor's installer will detect the QLogic iSCSI HBA and select the proper driver for the installation.
8 Additional Notes This chapter provides additional notes to consider. ◆ Ethernet connectivity over the CNA ... 194 ◆ Device reconfiguration procedures for FC and FCoE ... 195 ◆ Device reconfiguration procedures for the iSCSI 3.x driver ... 196 ◆ Device reconfiguration procedures for the iSCSI 5.x driver ... 198 ◆ Adapter information for RHEL5, SLES10, and SLES 11 ...
Additional Notes Ethernet connectivity over the CNA The QLogic FCoE CNA delivers lossless 10 Gb/s Enhanced Ethernet support with dynamic allocation of networking and storage bandwidth that may be used for system Ethernet or iSCSI traffic, as well as FCoE. The Linux driver that supports the Ethernet and iSCSI traffic for this device is ixgbe. The driver is automatically installed and loaded by your supported Linux distribution.
Additional Notes Device reconfiguration procedures for FC and FCoE There are three methods to reconfigure devices added to or removed from the system. Method 1: Reboot the system: shutdown -r now Method 2: Remove and reinsert the modular driver. For example: modprobe -rv qla2400 modprobe -v qla2400 Method 3: Use the QLogic script to dynamically scan for devices. QLogic provides the QLogic FC HBA LUN Scan Utility, which is available from the EMC-approved site on the QLogic website.
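Method 2 can be wrapped with a simple safety check, since the driver must not be unloaded while SCSI devices are in use. This is a sketch under stated assumptions: the check only inspects mounted /dev/sd devices, driver_reload is a helper invented here, and qla2400 is the module name from the example above.

```shell
# Refuse to reload the driver while any /dev/sd device is still mounted.
driver_reload() {
    local module=$1
    if mount | grep -q '^/dev/sd'; then
        echo "SCSI devices are still mounted; unmount them first" >&2
        return 1
    fi
    modprobe -rv "$module" && modprobe -v "$module"
}

# Example (requires root and an idle adapter):
#   driver_reload qla2400
```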
Additional Notes Device reconfiguration procedures for the iSCSI 3.x driver The Linux v2.4.x kernel lacks a command built into the kernel that allows for a dynamic SCSI channel reconfiguration like drvconfig or ioscan. The methods of rescanning the SCSI bus in a Linux host are: ◆ Rebooting the host ◆ Unloading and reloading the modular QLogic iSCSI driver Rebooting the host Rebooting the host will reliably detect newly added devices.
Additional Notes To stop the PowerPath service, issue one of the two following commands: /etc/init.d/PowerPath stop or service PowerPath stop ◆ If the QLogic SANsurfer daemon iqlremote is installed and enabled on the host, then the iqlremote service must be stopped in order for the driver to be removed from the currently running kernel. To stop the iqlremote service, issue one of the two following commands: /etc/init.
Additional Notes Device reconfiguration procedures for the iSCSI 5.x driver The Linux v2.6.x kernel lacks a command built into the kernel that allows for a dynamic SCSI channel reconfiguration like drvconfig or ioscan. The methods of rescanning the SCSI bus in a Linux host are: ◆ Rebooting the host ◆ Unloading and reloading the modular QLogic iSCSI driver In either case, all I/O must be stopped and all other mounted filesystems must be unmounted before rebooting or removing the modular driver.
Additional Notes or service naviagentcli stop ◆ If PowerPath is installed and enabled on the host, then the PowerPath service must be stopped. To stop the PowerPath service, issue one of the two following commands: /etc/init.d/PowerPath stop or service PowerPath stop ◆ If the QLogic SANsurfer daemon iqlremote is installed and enabled on the host, then the iqlremote service must be stopped in order for the driver to be removed from the currently running kernel.
Additional Notes The unloading of the module can be accomplished with the modprobe (with the -r switch) command or the rmmod command. These commands are used to unload the loadable modules from the running kernel if they are not in use and if other modules are not dependent upon them.
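Before unloading, a module's use count and dependents can be inspected in lsmod output, whose third column is the use count and whose fourth lists dependent modules. A minimal sketch; module_use_count is a helper invented here, and qla4xxx is the iSCSI driver module discussed above.

```shell
# Print a module's use count from lsmod output, or nothing if not loaded.
module_use_count() {
    local mod=$1
    lsmod | awk -v m="$mod" '$1 == m { print $3; exit }'
}

# Unload only when the module is idle, e.g.:
#   [ "$(module_use_count qla4xxx)" = 0 ] && modprobe -r qla4xxx
```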
Additional Notes Adapter information for RHEL5, SLES10, and SLES 11 Starting with RHEL 5, SLES 10, and SLES 11, QLogic fully supports the upstream driver, which uses sysfs. QLogic adapter information is not available in the /proc file system. To get QLogic adapter information, you can manually probe the /sys file system for the necessary information. QLogic provides a script tool to help: you can download the QLogic FC HBA Information Utility from the EMC-approved site on the QLogic website.
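The sysfs probing described above can be sketched as a small function. The /sys/class/fc_host path and attribute names follow the standard Linux fc_host transport class layout; exact attributes vary by kernel, so treat this as an illustration rather than the QLogic utility itself, and list_fc_hosts as a name invented here.

```shell
# List FC adapter attributes for each host under the given sysfs class
# directory (default: the standard fc_host transport class path).
list_fc_hosts() {
    local base=${1:-/sys/class/fc_host}
    local host attr
    for host in "$base"/host*; do
        [ -d "$host" ] || continue
        echo "$(basename "$host"):"
        for attr in port_name node_name port_state speed; do
            [ -r "$host/$attr" ] && printf '  %s = %s\n' "$attr" "$(cat "$host/$attr")"
        done
    done
    return 0
}

list_fc_hosts
```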
Additional Notes SNIA API for third-party software (EMC Ionix ControlCenter and Solution Enabler) For OS versions that support the in-kernel driver, the SNIA API library must be installed on the host to display QLogic adapter information for EMC products such as EMC ControlCenter and Solution Enabler. For OS versions that support out-of-kernel drivers, the installation script installs the API library along with the FC driver.
Additional Notes OS upgrade from an OS version supporting the out-of-kernel driver to an OS version supporting the in-kernel driver When RHEL or SLES is upgraded from an OS version that supports the out-of-kernel driver to an OS version that supports the in-kernel driver, old entries in the configuration file are not deleted. For the QLogic in-kernel driver, the following features are disabled: ◆ Persistent binding ◆ QLogic failover The QLogic driver parameters ConfigRequired and ql2xfailover do not need to be set.
Additional Notes ◆ If PowerPath is installed and enabled on the host, then the PowerPath service must be stopped. To stop the PowerPath service, issue one of the two following commands: /etc/init.d/PowerPath stop or service PowerPath stop ◆ If the QLogic SANsurfer daemon qlremote is installed and enabled on the host, then the qlremote service must be stopped in order for the driver to be removed from the currently running kernel.
Additional Notes running kernel if they are not in use and if other modules are not dependent upon them. The v8.x series driver consists of multiple modules. For example, if the command lsmod is invoked on a server with QLA2340-E-SP adapters installed, the following three modules will be reported: ◆ qla2xxx_conf - The QLogic Linux driver configuration module containing information regarding persistent binding. ◆ qla2xxx - The low-level QLogic Linux adapter driver module.
Additional Notes Device reconfiguration: Device numbering In the Linux kernel, the SCSI addresses are not used in the device names as they are in other types of UNIX (Sun, SGI, HP-UX, and BSD, for example). Block device file names take the form /dev/sdln, where l is the letter denoting the physical drive and n is the number denoting the partition on that physical drive. Disk device file names and major and minor numbers are assigned dynamically at boot time or device loading time in the order of discovery.
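The naming scheme can be illustrated with a small helper. This is purely illustrative: sd_name is a hypothetical function, and only the first 26 drives (single-letter names) are handled.

```shell
# Map a zero-based drive index and a partition number to the /dev/sdln form
# described above: drive 0 -> sda, drive 1 -> sdb, and so on.
sd_name() {
    local drive=$1 part=$2 letters=abcdefghijklmnopqrstuvwxyz
    printf '/dev/sd%s%s\n' "${letters:$drive:1}" "$part"
}

sd_name 0 1    # first drive, first partition:  /dev/sda1
sd_name 1 3    # second drive, third partition: /dev/sdb3
```

Because these names are assigned in discovery order, the same LUN can surface under a different name after a reconfiguration, which is why the surrounding procedures stress unmounting before rescanning.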
Additional Notes HPQ server-specific note When using HPQ systems, it is highly recommended that the HPQ SmartStart CD be run to configure the HPQ server prior to installing the Linux operating system. The SmartStart CD is shipped by HPQ with their systems and is a bootable CD used to configure HPQ servers. If an operating system other than Linux is selected, there may be problems installing the operating system or using the drivers installed in the kernel.
Additional Notes (VNX series or CLARiiON Only) disconnected ghost LUNs When a Linux host is attached to both SPs in a VNX series or CLARiiON storage system, the driver will report a disconnected LUN 0 on SPB and a failure to read the capacity of the device. The Unisphere/Navisphere Host Agent requires that disconnected LUN 0 be reported properly. A device file name is allocated to the disconnected LUN 0 in the /dev filesystem, but the device cannot be mounted, partitioned, or otherwise accessed.
A Setting Up External Boot for IBM Blade Server HS40 (8839) This appendix contains information on setting up external boot for IBM Blade Server HS40. ◆ Configure HS40 BladeCenter server to boot from external array
Setting Up External Boot for IBM Blade Server HS40 (8839) Configure HS40 BladeCenter server to boot from external array IBM HS40 (8839) Blade Servers encounter a dual-port adapter conflict when attempting to configure the boot BIOS to boot from an external array. To configure an HS40 BladeCenter server to boot successfully, follow the steps below. 1. Create a single zone containing the adapter port from which you want to boot. This prevents any conflicts with the other fibre port. 2.
B Special Instructions This appendix contains special instructions for the following: ◆ CLARiiON CX200 direct-connect dual-host Oracle9i RAC or RHEL 2.1 Cluster Manager cluster configurations with QLA234x adapters ... 212 ◆ Setting the FC-AL loop ID for CLARiiON CX200 direct-connect Oracle9iRAC and RHEL 2.1 Cluster Manager configurations with QLogic QLA234x-Series adapters ...
Special Instructions CLARiiON CX200 direct-connect dual-host Oracle9i RAC or RHEL 2.1 Cluster Manager cluster configurations with QLA234x adapters For CLARiiON CX200 direct-connect dual-host Oracle9i RAC or RHEL 2.1 Cluster Manager cluster configurations with QLA234x adapters, the default adapter optic jumper position must be changed. ! CAUTION Modifying the jumper setting without using the recommended firmware and/or drivers may cause a loss of connectivity. 1.
Special Instructions Setting the FC-AL loop ID for CLARiiON CX200 direct-connect Oracle9iRAC and RHEL 2.1 Cluster Manager configurations with QLogic QLA234x-Series adapters The FC-AL Loop ID for QLA234x-series adapters must be set manually when directly attaching RHEL 2.1 hosts in Oracle9iRAC or RHEL 2.1 Cluster Manager configurations. Follow the steps below to enable hard addressing and to set the loop ID on each adapter. Perform this procedure on both nodes in the cluster connected to the CX200.
Special Instructions 214 EMC Host Connectivity with QLogic FC and iSCSI HBAs and FCoE CNAs for the Linux Environment
Index B BIOS Settings 32 Version 32 boot configuration 141 boot device cautions and restrictions 139 boot disk 139 booting from an EMC storage array 139 I insmod 190 M Menlo (CEE) firmware 122 message url http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/Oem_EMC.
Index S SANsurfer 32, 126, 131 SANsurferCLI 32, 126, 128, 131, 132, 134 system booting 139 crash events 139