This document and related products are distributed under licenses restricting their use, copying, distribution, and reverse-engineering. No part of this document may be reproduced in any form or by any means without prior written permission of Chelsio Communications. All third-party trademarks are the property of their respective owners.
Chapter I. Chelsio Unified Wire
1. Introduction

Thank you for choosing Chelsio T5/T4 Unified Wire adapters. These high-speed, single-chip, single-firmware cards provide enterprises and data centers with high-performance solutions for various network and storage related requirements. The Terminator 5 (T5) is Chelsio's next generation of highly integrated, hyper-virtualized 40/10GbE controllers.
install.py, dialog.py: Python scripts needed for the GUI installer.
EULA: Chelsio's End User License Agreement.
install.log: File containing the installation summary.
docs: Contains the support documents - README, Release Notes and User's Guide (this document) - for the software.
libs: Libraries required to install the WD-TOE, WD-UDP and iWARP drivers.
t4_perftune.sh: Shell script that tunes the system for higher performance by modifying the IRQ-CPU binding. It can also be used to change Tx coalescing settings.
t4-forward.sh: RFC2544 forward test tuning script.
uname_r: Used by the chstatus script to verify whether the Linux platform is supported.
wdload: UDP acceleration tool.
wdunload: Unloads all loaded Chelsio drivers.
2. Hardware Installation

Follow these steps to install a Chelsio adapter in your system:
1. Shutdown/power off your system.
2. Power off all remaining peripherals attached to your system.
3. Unpack the Chelsio adapter and place it on an anti-static surface.
4. Remove the system case cover according to the system manufacturer's instructions.
5. Remove the PCI filler plate from the slot where you will install the Ethernet adapter.
b. For T4 adapters:
[root@host]# lspci | grep -i Chelsio
03:00.0 Ethernet controller: Chelsio Communications Inc T420-CR Unified Wire Ethernet Controller
03:00.1 Ethernet controller: Chelsio Communications Inc T420-CR Unified Wire Ethernet Controller
03:00.2 Ethernet controller: Chelsio Communications Inc T420-CR Unified Wire Ethernet Controller
03:00.3 Ethernet controller: Chelsio Communications Inc T420-CR Unified Wire Ethernet Controller
The above outputs indicate the hardware configuration of the adapters as well as their serial numbers. As indicated by the x8, the card is properly installed in an x8 slot on the machine and is using MSI-X interrupts.
Note: Network device names for Chelsio's physical ports are assigned using the following convention: the port farthest from the motherboard will appear as the first network interface.
3. Software/Driver Installation

There are two main methods to install the Chelsio Unified Wire package: from source and from RPM. If you decide to use source, you can install the package using CLI or GUI mode. If you decide to use RPM, you can install the package using Menu or CLI mode. Irrespective of the installation method chosen, the machine needs to be rebooted for the changes to take effect.
3.1. Pre-requisites

Depending on the component you choose to install, please ensure that the following requirements are met before proceeding with the installation:
- If you want to install OFED with NFS-RDMA support, please refer to "Setting up NFS-RDMA" in iWARP (RDMA).
- If you are planning to install the iSCSI PDU Offload Initiator, please install the openssl-devel package.
- IPv6 should be enabled on the machine to use the RPM packages.
3.2. Installing Chelsio Unified Wire from source
v. Select "install" under "Choose an action".
vi. Select Enable IPv6-Offload to install drivers with IPv6 offload support, or Disable IPv6-Offload to continue installation without IPv6 offload support.
vii. Select the required T5/T4 configuration tuning option.
Note: The tuning options may vary depending on the Linux distribution.
viii. Under "Choose install components", select "all" to install all the related components for the option chosen in step (vii), or select "custom" to install specific components.
Important: To install the Bypass or FCoE PDU Offload Target drivers, please select Unified Wire in step (vii) and then select the "custom" option.
ix. Select the required performance tuning option.
a. Enable Binding IRQs to CPUs: Bind MSI-X interrupts to different CPUs and disable the IRQ balance daemon.
b.
x. If you already have the required version of OFED software installed, you can skip this step. To install OFED-3.5-2, choose the Install-OFED option. To install a different version, select Choose-OFED-Version and then select the appropriate version. To skip this step, select Skip-OFED.
Note: OFED is currently not supported on RHEL 6.5.
xi.
xii. After successful installation, a summary of the installed components will be displayed.
xiii. Select "View log" to view the installation log or "Exit" to continue.
xiv. Select "Yes" to exit the installer or "No" to go back.
xv. Reboot your machine for the changes to take effect.
Note: Press Esc or Ctrl+C to exit the installer at any point.
3.2.1.1. Installation on updated kernels

If the kernel version on your Linux distribution has been updated, follow the steps mentioned below to install the Unified Wire package:
i. Change your current working directory to the Chelsio Unified Wire package directory and run the following script to start the GUI installer:
[root@host]# ./install.py
ii. Select "Yes" to continue with the installation on the updated kernel or "No" to exit.
iii.
i. Download the tarball ChelsioUwire-x.xx.x.x.tar.gz from the Chelsio Download Center, http://service.chelsio.com/
ii. Untar the tarball using the following command:
[root@host]# tar -zxvfm ChelsioUwire-x.xx.x.x.tar.gz
iii. Change your current working directory to the Chelsio Unified Wire package directory and run the following script to start the installer:
[root@host]# ./install.py
iv.
i. Create a file (machinefilename) containing the IP addresses or hostnames of the nodes in the cluster. See the sample file, sample_machinefile, provided in the package for the format in which the nodes have to be listed.
ii. Now, execute the following command:
[root@host]# ./install.py -C -m <machinefilename>
iii. Select the required T5/T4 configuration tuning option. The tuning options may vary depending on the Linux distribution.
iv.
v. The default configuration tuning option is Unified Wire. The configuration tuning option can be selected using the following commands:
[root@host]# make CONF=<configuration_tuning_option>
[root@host]# make CONF=<configuration_tuning_option> install
Important: Steps (iv) and (v) mentioned above will NOT install the Bypass and FCoE PDU Offload Target drivers or the benchmark tools. They will have to be installed manually.
3.2.4. CLI mode (individual drivers)

You can also choose to install drivers individually. Provided here are steps to build and install the NIC, TOE, iWARP, Bypass, WD-UDP, WD-TOE, UDP Segmentation Offload and FCoE PDU Offload Target drivers and the benchmarking tools. To learn about other drivers, view the help by running make help.
To build and install the WD-TOE driver:
[root@host]# make wdtoe
[root@host]# make wdtoe_install
To build and install the WD-TOE and WD-UDP drivers together:
[root@host]# make wdtoe_wdudp
[root@host]# make wdtoe_wdudp_install
To build and install all drivers with DCB support:
[root@host]# make dcbx=1
[root@host]# make dcbx=1 install
The offload drivers support UDP Segmentation Offload with a limited number of connections (1024 connections).
To build and install drivers along with benchmarks:
[root@host]# make BENCHMARKS=1
[root@host]# make BENCHMARKS=1 install
Note: To view the different configuration tuning options, view the help by typing
[root@host]# make help
Note: If IPv6 is administratively disabled on the machine, by default the drivers will be built and installed without IPv6 offload support.
3.3. Installing Chelsio Unified Wire from RPM

3.3.1. Menu Mode

i.
Note: The installation options may vary depending on the configuration tuning option selected.
vii. The selected components will now be installed.
viii. Reboot your machine for the changes to take effect.
Note: If the installation aborts with the message "Resolve the errors/dependencies manually and restart the installation", please go through install.log to resolve the errors/dependencies and then start the installation again.
3.3.2. CLI mode

i.
iv. The default configuration tuning option is Unified Wire. The configuration tuning option can be selected using the following command:
[root@host]# ./install.py -i -c
Note: To view the different configuration tuning options, view the help by typing
[root@host]# ./install.py -h
v. To install OFED and Chelsio drivers built against OFED, run the above command with the -o option.
[root@host]# ./install.
3.4. Firmware update

The T5 and T4 firmware is installed on the system, typically in /lib/firmware/cxgb4, and the driver will automatically load new firmware if an update is required.
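To confirm which firmware version an interface is currently running, ethtool can be queried; the interface name eth2 below is an example:

```
[root@host]# ethtool -i eth2
```

The output reports the driver name (cxgb4) along with driver and firmware-version fields.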
4. Software/Driver Uninstallation

Similar to installation, the Chelsio Unified Wire package can be uninstalled using two main methods: from source and from RPM, based on the method used for installation. If you decide to use source, you can uninstall the package using CLI or GUI mode.
4.1. Uninstalling Chelsio Unified Wire from source

4.1.1. GUI mode (with Dialog utility)

i.
iv. The selected components will now be uninstalled.
v. After successful uninstallation, a summary of the uninstalled components will be displayed.
vi. Select "View log" to view the uninstallation log or "Exit" to continue.
vii. Select "Yes" to exit the installer or "No" to go back.
Note: Press Esc or Ctrl+C to exit the installer at any point.
4.1.2. CLI mode (without Dialog utility)

Run the following script with the -u option to uninstall the Unified Wire package:
[root@host]# ./install.py -u
Note: View help by typing [root@host]# ./install.py -h for more information.
4.1.3.
The above command will remove the Chelsio iWARP (iw_cxgb4) and TOE (t4_tom) drivers from all the nodes listed in the machinefilename file.
4.1.4. CLI mode (individual drivers/software)

You can also choose to uninstall drivers/software individually. Provided here are steps to uninstall the NIC, TOE, iWARP, Bypass, WD-TOE, UDP Segmentation Offload and FCoE PDU Offload Target drivers and Unified Wire Manager (UM).
To uninstall the WD-TOE and WD-UDP drivers together:
[root@host]# make wdtoe_wdudp_uninstall
To uninstall the FCoE Target driver:
[root@host]# make fcoe_pdu_offload_target_uninstall
To uninstall Unified Wire Manager (UM):
[root@host]# make uninstall UM_UNINST=1
OR
[root@host]# make tools_uninstall UM_UNINST=1
4.2.
4.2.1.1. iWARP driver uninstallation on Cluster nodes

To uninstall iWARP drivers on multiple cluster nodes with a single command, run the following:
[root@host]# ./install.py -C -m <machinefilename> -u
The above command will remove the Chelsio iWARP (iw_cxgb4) and TOE (t4_tom) drivers from all the nodes listed in the machinefilename file.
5. Configuring Chelsio Network Interfaces

Testing the features of Chelsio adapters requires two machines, each with a Chelsio (T5, T4 or both) network adapter installed. The two machines can be connected directly without a switch (back-to-back), or both can be connected to a switch. The interfaces have to be declared and configured.
iv. Select the required mode:
Possible T580 adapter modes:
|------------------------------|
| 1: 2x40G                     |
| 2: 4x10G                     |
|------------------------------|
Select mode for adapter (1,2):
v. Reload the network driver for the changes to take effect.
[root@host]# rmmod cxgb4
[root@host]# modprobe cxgb4
Note: In the case of T580-SO-CR adapters, reboot the machine for the changes to take effect.
5.2. Configuring network-scripts

A typical interface network-script (e.g. eth0) on RHEL 6.
The ifcfg-ethX files have to be created manually. They are required for bringing the interfaces up and down and for assigning the desired IP addresses.
5.3. Creating network-scripts

To spot the new interfaces, make sure the driver is unloaded first. At that point, ifconfig -a | grep HWaddr should display all non-Chelsio interfaces whose drivers are loaded, whether the interfaces are up or not.
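As an illustration, a minimal static-address network-script might look like the following; the interface name eth1 and the addresses are placeholders to adapt to your setup:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 (example values)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.100
NETMASK=255.255.255.0
ONBOOT=yes
```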
5.4. Checking Link

Once the network-scripts are created for the interfaces, you should check the link, i.e. make sure the interface is actually connected to the network. First, bring up the interface you want to test using ifup eth1. You should now be able to ping any other machine on your network, provided it has ping response enabled.
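The link check described above can be performed with commands such as the following, assuming eth1 is the interface under test and 192.168.1.200 is a reachable peer:

```
[root@host]# ifup eth1
[root@host]# ethtool eth1 | grep "Link detected"
[root@host]# ping 192.168.1.200
```

ethtool reports "Link detected: yes" when the physical link is up.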
6. Software/Driver Update

For any distribution-specific problems, please check the README and Release Notes included in the release for possible workarounds. Please visit the Chelsio support web site http://service.chelsio.com/ for regular updates on various software/drivers. You can also subscribe to our newsletter for the latest software updates.
Chapter II. Network (NIC/TOE)
1. Introduction

Chelsio's T5 and T4 series of Unified Wire adapters provide extensive support for NIC operation, including all stateless offload mechanisms for both IPv4 and IPv6 (IP, TCP and UDP checksum offload, LSO - Large Send Offload, aka TSO - TCP Segmentation Offload, and assist mechanisms for accelerating LRO - Large Receive Offload). A high-performance, fully offloaded and fully featured TCP/IP stack meets or exceeds software implementations in RFC compliance.
1.2. Software Requirements

1.2.1. Linux Requirements

Currently the Network driver is available for the following versions:
Redhat Enterprise Linux 5 update 9 kernel
Redhat Enterprise Linux 5 update 10 kernel
Redhat Enterprise Linux 6 update 4 kernel
Redhat Enterprise Linux 6 update 5 kernel
Suse Linux Enterprise Server 11 SP1 kernel
Suse Linux Enterprise Server 11 SP2 kernel
Suse Linux Enterprise Server 11 SP3 kernel
Ubuntu 12.04, 3.2.0-23
Ubuntu 12.04.2, 3.5.0-23*
Kernel.
2. Software/Driver Loading

The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail.
2.1. Loading in NIC mode (without full offload support)

To load the Network driver without full offload support, run the following command:
[root@host]# modprobe cxgb4
2.2.
3. Software/Driver Unloading

3.1. Unloading the NIC driver

To unload the NIC driver, run the following command:
[root@host]# rmmod cxgb4
3.2. Unloading the TOE driver

Please reboot the system to unload the TOE driver.
4. Software/Driver Configuration and Fine-tuning

4.1. Instantiate Virtual Functions (SR-IOV)

To instantiate the Virtual Functions, load the cxgb4 driver with the num_vf parameter set to a non-zero value. For example:
[root@host]# modprobe cxgb4 num_vf=1,0,0,0
The number(s) provided for the num_vf parameter specify the number of Virtual Functions to be instantiated per Physical Function. The Virtual Functions can be assigned to Virtual Machines (Guests).
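After loading the driver with num_vf, the instantiated Virtual Functions should appear as additional PCI functions alongside the Physical Functions; they can be listed with lspci (exact device names will vary by adapter model):

```
[root@host]# lspci | grep -i Chelsio
```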
Receiver Side Scaling (RSS)

Receiver Side Scaling enables receive processing of network traffic to scale with the number of available processors on a modern networked computer. RSS enables parallel receive processing and dynamically balances the load among multiple processors. Chelsio's T5/T4 network controllers fully support Receiver Side Scaling for IPv4 and IPv6.
Then, on the receiver host, look at the interrupt rate in /proc/interrupts:
[root@receiver_host]# cat /proc/interrupts | grep eth6
Id        CPU0     CPU1     CPU2     CPU3   type           interface
36:     115229        0        0        1   PCI-MSI-edge   eth6 (queue 0)
37:          0   121083        1        0   PCI-MSI-edge   eth6 (queue 1)
38:          0        0   105423        1   PCI-MSI-edge   eth6 (queue 2)
39:          0        0        0   115724   PCI-MSI-edge   eth6 (queue 3)
Now interrupts from eth6 are evenly distributed among the 4 CPUs.
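As a quick sanity check of such output, a small awk helper can report which CPU column serviced the most interrupts for each queue line; this is a sketch that assumes the interface name eth6 and the four-CPU column layout shown above:

```shell
# Report, for each eth6 queue line read on stdin, the CPU column
# (CPU0-CPU3) that took the most interrupts.
irq_summary() {
  awk '/eth6/ {
    max = 0; cpu = 0
    for (i = 2; i <= 5; i++)            # fields 2-5 are the CPU0-CPU3 counts
      if ($i + 0 > max) { max = $i + 0; cpu = i - 2 }
    q = $NF; gsub(/\)/, "", q)          # strip ")" from "(queue N)"
    printf "queue %s -> CPU%d (%d)\n", q, cpu, max
  }'
}
```

On a live system this would be fed with: grep eth6 /proc/interrupts | irq_summary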
Interrupt Coalescing

The idea behind Interrupt Coalescing (IC) is to avoid flooding the host CPUs with too many interrupts. Instead of raising one interrupt per incoming packet, IC waits for 'n' packets to be available in the Rx queues and placed into host memory through DMA operations before an interrupt is raised, reducing the CPU load at the cost of a small increase in latency.
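Coalescing timers and packet counts are generally inspected and adjusted through ethtool; for example (supported parameters and value ranges depend on the driver version):

```
[root@host]# ethtool -c eth6
[root@host]# ethtool -C eth6 rx-usecs 5
```

The first command shows the current coalescing settings; the second sets the Rx interrupt delay to 5 microseconds.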
[root@host]# lsmod | grep t4_tom
[root@host]# modprobe t4_tom
[root@host]# lsmod | grep t4_tom
t4_tom    88378  0 [permanent]
toecore   21618  1 t4_tom
cxgb4    225342  1 t4_tom
T5/T4's hardware GRO/LRO implementation is now enabled. If you would like to use the Linux GRO/LRO for any reason, the t4_tom kernel module first needs to be removed from the kernel module list. Please note you might need to reboot your system.
GROPackets is the number of held packets. These are candidate packets held by the kernel to be processed individually or to be merged into larger packets. This number is usually zero. GROMerged is the number of packets that were merged into larger packets. This number usually increases if a continuous traffic stream is present.
Chapter III. Virtual Function Network (vNIC)
1. Introduction

The ever-increasing network infrastructure of IT enterprises has led to a phenomenal increase in maintenance and operational costs. IT managers are forced to acquire more physical servers and other data center resources to satisfy storage and network demands.
1.2. Software Requirements

1.2.1. Linux Requirements

Currently the vNIC driver is available for the following versions:
Redhat Enterprise Linux 5 update 9 kernel
Redhat Enterprise Linux 5 update 10 kernel
Redhat Enterprise Linux 6 update 4 kernel
Redhat Enterprise Linux 6 update 5 kernel
Suse Linux Enterprise Server 11 SP1 kernel
Suse Linux Enterprise Server 11 SP2 kernel
Suse Linux Enterprise Server 11 SP3 kernel
Ubuntu 12.04, 3.2.0-23
Ubuntu 12.04.2, 3.5.
2. Software/Driver Loading

The vNIC driver must be loaded or unloaded on the Guest OS by the root user. Any attempt to load the driver as a regular user will fail.
2.1.
3. Software/Driver Unloading

3.1.
4. Software/Driver Configuration and Fine-tuning

4.1.
Chapter IV. iWARP (RDMA)
1. Introduction

Chelsio's T5/T4 engine provides a feature-rich RDMA implementation which adheres to the IETF standards, with optional markers and MPA CRC-32C. iWARP RDMA operation benefits from the virtualization, traffic management and QoS mechanisms provided by the T5/T4 engine. iWARP RDMA packets can also be subjected to ACL processing.
Chapter IV. iWARP (RDMA) Redhat Enterprise Linux 6 update 5 kernel Suse Linux Enterprise Server 11 SP1 kernel Suse Linux Enterprise Server 11 SP2 kernel Suse Linux Enterprise Server 11 SP3 kernel Ubuntu 12.04, 3.2.0-23 Kernel.org linux-3.4 Kernel.org linux-3.6* Kernel.org linux-3.7 Kernel.org linux-3.8* Kernel.org linux-3.9* Kernel.org linux-3.10* Kernel.org linux-3.11* Kernel.org linux-3.12* Kernel.org linux-3.13* (RHEL6.5), 2.6.32-431.el6* (SLES11SP1), 2.6.32.12-0.7 (SLES11SP2), 3.0.13-0.
2. Software/Driver Loading

2.1. Compiling and Loading the iWARP driver

The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail. Change your current working directory to the driver package directory and run the following commands:
[root@host]# make
[root@host]# make install
To load the iWARP driver, the NIC driver and core RDMA drivers need to be loaded first.
3. Software/Driver Unloading

4. Software/Driver Configuration and Fine-tuning

4.1. Testing connectivity with ping and rping

Load the NIC, iWARP and core RDMA modules as mentioned in the Software/Driver Loading section. After that, you will see two or four Ethernet interfaces for the T5/T4 device. Configure them with an appropriate IP address, netmask, etc. You can use the Linux ping command to test basic connectivity via the T5/T4 interface.
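Basic RDMA-level connectivity can then be verified with the rping utility from librdmacm; for example, with 192.168.1.100 as the server's T5/T4 interface address (the address, port and iteration count are examples):

```
[root@server]# rping -s -a 192.168.1.100 -p 9999
[root@client]# rping -c -a 192.168.1.100 -p 9999 -C 10 -v
```

The server side listens on the given address/port, and the client runs 10 verified ping-pong iterations over the RDMA connection.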
4.2. Enabling various MPIs

4.2.1. DAPL Library configuration for Intel MPI and Platform MPI

To run Intel MPI over an RDMA interface, DAPL 2.0 should be set up as follows:
Enable the Chelsio device by adding an entry at the beginning of the /etc/dat.conf file for the Chelsio interface. For instance, if your Chelsio interface name is eth2, then the following line adds a DAT version 2.0 device named "chelsio2" for that interface:
chelsio2 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.
4.2.3. Configuration of various MPIs (Installation and Setup)

Intel-MPI
i. Download the latest Intel MPI from the Intel website.
ii. Copy the license file (.lic file) into the l_mpi_p_x.y.z directory.
iii. Create machines.LINUX (list of node names) in l_mpi_p_x.y.z.
iv. Select advanced options during installation and register the MPI.
v. Install the software on every node:
[root@host]# ./install.py
vi. Set IntelMPI with mpi-selector (do this on all nodes).
Performance is best with the NIC MTU set to 9000 bytes.
Open MPI (Installation and Setup)

Open MPI iWARP support is only available in Open MPI version 1.3 or greater. Open MPI will work without any specific configuration via the openib btl. Users wishing to tune the configurable options may wish to inspect the receive queue values. Those can be found in the "Chelsio T4" section of mca-btl-openib-device-params.ini.
v. Next, create a shell script, mpivars.csh, with the following entries:
# path
if ("" == "`echo $path | grep /usr/mpi/gcc/openmpi-x.y.z/bin`") then
    set path=(/usr/mpi/gcc/openmpi-x.y.z/bin $path)
endif
# LD_LIBRARY_PATH
if ("1" == "$?LD_LIBRARY_PATH") then
    if ("$LD_LIBRARY_PATH" !~ */usr/mpi/gcc/openmpi-x.y.z/lib64*) then
        setenv LD_LIBRARY_PATH /usr/mpi/gcc/openmpi-x.y.z/lib64:${LD_LIBRARY_PATH}
    endif
else
    setenv LD_LIBRARY_PATH /usr/mpi/gcc/openmpi-x.y.z/lib64
endif
viii. Register Open MPI with mpi-selector:
[root@host]# mpi-selector --register openmpi --source-dir /usr/mpi/gcc/openmpi-x.y.z/bin
ix. Verify that it is listed in mpi-selector:
[root@host]# mpi-selector --list
x. Set Open MPI:
[root@host]# mpi-selector --set openmpi --yes
xi. Log out and log back in.
MVAPICH2 (Installation and Setup)

i. Download the latest MVAPICH2 software package from http://mvapich.cse.ohio-state.edu/
ii.
iv. Next, create a shell script, mpivars.csh, with the following entries:
# path
if ("" == "`echo $path | grep /usr/mpi/gcc/mvapich2-x.y/bin`") then
    set path=(/usr/mpi/gcc/mvapich2-x.y/bin $path)
endif
# LD_LIBRARY_PATH
if ("1" == "$?LD_LIBRARY_PATH") then
    if ("$LD_LIBRARY_PATH" !~ */usr/mpi/gcc/mvapich2-x.y/lib64*) then
        setenv LD_LIBRARY_PATH /usr/mpi/gcc/mvapich2-x.y/lib64:${LD_LIBRARY_PATH}
    endif
else
    setenv LD_LIBRARY_PATH /usr/mpi/gcc/mvapich2-x.y/lib64
endif
vi. Next, copy the two files created in steps (iv) and (v) to /usr/mpi/gcc/mvapich2-x.y/bin and /usr/mpi/gcc/mvapich2-x.y/etc.
vii. Add the following entries to the .bashrc file:
export MVAPICH2_HOME=/usr/mpi/gcc/mvapich2-x.y/
export MV2_USE_IWARP_MODE=1
export MV2_USE_RDMA_CM=1
viii. Register the MPI:
[root@host]# mpi-selector --register mvapich2 --source-dir /usr/mpi/gcc/mvapich2-x.y/bin/
ix. Verify that it is listed in mpi-selector:
[root@host]# mpi-selector --list
x.
iv. Next, build and install the benchmarks using:
[root@host]# gmake -f make_mpich
The above step will install the IMB-MPI1, IMB-IO and IMB-EXT benchmarks in the current working directory (i.e. src).
v. Change your working directory to the MPI installation directory. In the case of OpenMPI, it will be /usr/mpi/gcc/openmpi-x.y.z/
vi. Create a directory called tests, and then another directory called imb under tests.
vii.
Run an MVAPICH2 application as:
mpirun_rsh -ssh -np 8 -hostfile mpd.hosts $MVAPICH2_HOME/tests/imb/IMB-MPI1
4.3. Setting up NFS-RDMA

4.3.1. Starting NFS-RDMA

Server-side settings
Follow the steps mentioned below to set up an NFS-RDMA server.
i.
vii. Run exportfs to make local directories available for Network File System (NFS) clients to mount:
[root@host]# exportfs
Now the NFS-RDMA server is ready.
Client-side settings
Follow the steps mentioned below on the client side.
i. Load the iWARP modules and make sure peer2peer is set to 1. Make sure you are able to ping and ssh to the server's Chelsio interface through which the directories will be exported.
ii. Load the xprtrdma module:
[root@host]# modprobe xprtrdma
iii.
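The remaining client step is typically a mount of the exported directory over RDMA; a sketch, assuming the server's Chelsio interface address is 192.168.1.100, the export is /share, and the standard NFS-RDMA port 20049 is used:

```
[root@host]# mount -o rdma,port=20049 192.168.1.100:/share /mnt
```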
Chapter V. WD-UDP
1. Introduction

Chelsio WD-UDP (Wire Direct-User Datagram Protocol) with Multicast is a user-space UDP stack with Multicast address reception and socket acceleration that enables users to run their existing UDP socket applications unmodified.
Suse Linux Enterprise Server 11 SP2 kernel (SLES11SP2), 3.0.13-0.27
Suse Linux Enterprise Server 11 SP3 kernel (SLES11SP3), 3.0.76-0.11
Ubuntu 12.04, 3.2.0-23
Ubuntu 12.04.2, 3.5.0-23*
Kernel.org linux-3.4
Kernel.org linux-3.6*
Kernel.org linux-3.7
Kernel.org linux-3.8*
Kernel.org linux-3.9*
Kernel.org linux-3.10*
Kernel.org linux-3.11*
Kernel.org linux-3.12*
Kernel.org linux-3.13*
Other kernel versions have not been tested and are not guaranteed to work.
2. Software/Driver Loading

The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail.
i. Change your current working directory to the driver package directory and run the following commands:
[root@host]# make
[root@host]# make install
ii. RDMA core modules from the OFED package should be loaded before proceeding.
3. Software/Driver Unloading
4. Software/Driver Configuration and Fine-tuning

4.1. Accelerating UDP Socket communications

The libcxgb4_sock library is an LD_PRELOAD-able library that accelerates UDP socket communications transparently and without recompilation of the user application. This section describes how to use libcxgb4_sock. By preloading libcxgb4_sock, all sockets created by the application are intercepted and possibly accelerated based on the user's configuration. On egress, if the destination IP address will not route out via the T5/T4 device, then the traffic will likewise not be accelerated.
4.1.2. Using libcxgb4_sock

The libcxgb4_sock library utilizes the Linux RDMA Verbs subsystem, and thus requires the RDMA modules to be loaded.
# e.g.
#     endpoint { interface=eth2.5 port=8000 vlan=5 priority=1 }
#     endpoint { interface=eth2 port=9999 }
#
# endpoints that bind to port 0 (requesting the host allocate a port)
# can be accelerated with port=0:
#
#     endpoint { interface=eth1 port=0 }
Assume your T5/T4 interface is eth2. To accelerate all applications that preload libcxgb4_sock using eth2, you only need one entry in /etc/libcxgb4_sock.conf.
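With the configuration file in place, an existing UDP application is started with the library preloaded; ./your_udp_app below is a placeholder for your own binary:

```
[root@host]# LD_PRELOAD=libcxgb4_sock.so ./your_udp_app
```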
4.1.3. Running WD-UDP in debug mode

To use libcxgb4_sock's debug capabilities, use the libcxgb4_sock_debug library provided in the package. Follow the steps mentioned below:
i. Make the following entry in the /etc/syslog.conf file:
*.debug        /var/log/cxgb4.log
ii. Restart the service:
[root@host]# /etc/init.d/syslog restart
iii. Finally, preload libcxgb4_sock_debug using the command mentioned below when starting your application:
[root@host]# LD_PRELOAD=libcxgb4_sock_debug.
Chapter V. WD-UDP [root@r9 ~]# ifconfig eth1|grep inet inet addr:192.168.2.111 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::7:4300:104:465a/64 Scope:Link [root@r9 ~]# [root@r10 ~]# ifconfig eth1|grep inet inet addr:192.168.2.112 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::7:4300:104:456a/64 Scope:Link [root@r10 ~]# For this benchmark, we need a simple “accelerate all” configuration on both nodes: [root@r9 ~]# cat /etc/libcxgb4_sock.
Chapter V. WD-UDP 4.1.6. Performance tuning on 2.6.18 kernel To get better performance with WD-UDP using the 2.6.18 kernel, load the iw_cxgb4 modules with the ocqp_support=0 parameter. For example, modprobe iw_cxgb4 ocqp_support=0 4.1.7. Determining if the application is being offloaded To see if the application is being offloaded, open a window on one of the machines, and run tcpdump against the Chelsio interface.
Chapter VI. WD-TOE VI.
Chapter VI. WD-TOE 1. Introduction Chelsio WD-TOE (Wire Direct-Transmission Control Protocol) with a user-space TCP stack enables users to run their existing TCP socket applications unmodified. It features software modules that enable direct wire access from user space to the Chelsio T5/T4 network adapter with complete bypass of the kernel, which results in a low latency 10Gb Ethernet solution for high frequency trading and other delay-sensitive applications. 1.1. Hardware Requirements 1.1.1.
Chapter VI. WD-TOE Suse Linux Enterprise Server 11 SP2 kernel (SLES11SP2), 3.0.13-0.27 Suse Linux Enterprise Server 11 SP3 kernel (SLES11SP3), 3.0.76-0.11 Ubuntu 12.04.2, 3.5.0-23* Kernel.org linux-3.4 Kernel.org linux-3.6 Kernel.org linux-3.7 Other kernel versions have not been tested and are not guaranteed to work.
Chapter VI. WD-TOE 2. Software/Driver Loading Before proceeding, please ensure that the offload drivers are installed with WD-TOE support. They are installed by default with the Low Latency Networking, T5 Wire Direct Latency or T5 High Capacity WD configuration tuning options. With any other configuration tuning option, the installation needs to be customized. The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail.
Chapter VI. WD-TOE 3. Software/Driver Unloading Reboot the system to unload the driver.
Chapter VI. WD-TOE 4. Software/Driver Configuration and Fine-tuning 4.1. Running the application To run an application with WD-TOE, use the following command: [root@host]# PROT=TCP wdload Example: to run the netperf application with WD-TOE: i. Start netserver at the PEER, using the following command: [root@host]# PROT=TCP wdload netserver -D -4 ii. On the Test Machine, run the following command to run the netperf application.
Chapter VII. iSCSI PDU Offload Target VII.
Chapter VII. iSCSI PDU Offload Target 1. Introduction This section describes how to install and configure iSCSI PDU Offload Target software for use as a key element in your iSCSI SAN. The software runs on Linux-based systems that use Chelsio or non-Chelsio based Ethernet adapters. However, to guarantee the highest performance, Chelsio recommends using Chelsio adapters.
Chapter VII.
Chapter VII. iSCSI PDU Offload Target T440-LP-CR T420-BT T420-LL-CR T420-CX 1.2.2. Adapter Requirements The Chelsio iSCSI PDU Offload Target software can be used with or without hardware protocol offload technology. There are four modes of operation using the iSCSI PDU Offload Target software on Ethernet-based adapters: Regular NIC – The software can be used in non-offloaded (regular NIC) mode. Please note however that this is the least optimal mode of operating the software in terms of performance.
Chapter VII. iSCSI PDU Offload Target 1.3. Software Requirements chiscsi_base.ko is the iSCSI non-offload target mode driver and chiscsi_t4.ko is the iSCSI PDU offload target mode driver. The cxgb4, toecore, t4_tom and chiscsi_base modules are required by the chiscsi_t4.ko module to work in offloaded mode, whereas in iSCSI non-offloaded target (NIC) mode, only cxgb4 is needed by the chiscsi_base.ko module. 1.3.1.
Chapter VII. iSCSI PDU Offload Target To obtain the key file a binary program called “chinfotool” must be run on the host system where the iSCSI software is to be installed. That program generates an information file that contains data about the system. The information file must be sent back to Chelsio where a license key file will be generated. The key file will be sent back and must be installed on the system in order to unlock the software.
Chapter VII. iSCSI PDU Offload Target 2. Software/Driver Loading There are two main steps to installing the Chelsio iSCSI PDU Offload Target software. They are: 1. Installing the iSCSI software – The majority of this section deals with how to install the iSCSI software. 2. Configuring the iSCSI software – Information on configuring the software can be found in a section further into this user’s guide. 2.1.
Chapter VII. iSCSI PDU Offload Target Note i. While using rpm-tar-ball for installation a. Uninstallation renames the chiscsi.conf file to chiscsi.conf.rpmsave; if uninstallation is performed again, the old chiscsi.conf.rpmsave file will be overwritten. b. It is advised to take a backup of the chiscsi.conf file before you uninstall and install a new/same Unified Wire package, as re-installing/upgrading the Unified Wire package may lead to loss of the chiscsi.conf file. ii.
Chapter VII. iSCSI PDU Offload Target 2.2. Generating single RPM for T3 and T4 adapters Follow the procedure below to generate a single iSCSI PDU Offload Target RPM for both T3 and T4 adapters: 1. If you haven't already done so, download Unified Wire for Linux from the Chelsio Download Center, http://service.chelsio.com. 2. Untar the tarball using the following command: [root@host]# tar -zxvfm ChelsioUwire-x.xx.x.x.tar.gz 3. Browse to the ChelsioUwire-x.xx.x.
Chapter VII. iSCSI PDU Offload Target 2.3. Obtaining the iSCSI Software License A license file is required for each copy of the Chelsio iSCSI PDU Offload Target software installed. The license is tied to the selected Chelsio NIC present in the system. The license file will be generated depending on your requirement for a Chelsio iSCSI Target. 2.3.1. Linux Requirements To obtain an iSCSI license key file, which could be either a production or an evaluation version, please follow the steps below. 1.
Chapter VII. iSCSI PDU Offload Target [root@host]# chinfotool Scanning System for network devices.... License key will be tied to any of the following interfaces. Please select the interface 1. Interface eth1 with INTEL Adapter Linkspeed is 1000 Mbps/s MAC is: 00:30:48:00:00:10. 2. Interface eth2 with CHELSIO Adapter Linkspeed is 10000 Mbps/s MAC is: 00:07:43:00:00:10.
Chapter VII. iSCSI PDU Offload Target 3.
Chapter VII. iSCSI PDU Offload Target 4. Software/Driver Configuration and Fine-tuning The Chelsio iSCSI software needs configuration before it can become useful. The following sections describe how this is done. There are two main components used in configuring the Chelsio iSCSI software: the configuration file and the iSCSI control tool. This section describes in some detail what they are and the relationship they have with one another. 4.1.
Chapter VII. iSCSI PDU Offload Target There are many specific parameters that can be configured, some of which are iSCSI specific and the rest being Chelsio specific. An example of an iSCSI specific item is “HeaderDigest” which is defaulted to “None” but can be overridden to “CRC32C”. An example of a Chelsio specific configurable item is “ACL” (for Access Control List). “ACL” is one of the few items that have no default. Before starting any iSCSI target, an iSCSI configuration file must be created.
Chapter VII. iSCSI PDU Offload Target A target can serve multiple devices; each device will be assigned a Logical Unit Number (LUN) according to the order in which it is specified (i.e., the first device specified is assigned LUN 0, the second one LUN 1, and so on). Multiple TargetDevice key=value pairs are needed to indicate multiple devices. Here is a sample of a minimal iSCSI target configuration located at /etc/chelsioiscsi/chiscsi.conf: target: TargetName=iqn.2006-02.com.chelsio.diskarray.
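The minimal target configuration described above can be generated from a script. This is a sketch writing to a temporary file rather than the real chiscsi.conf; the target name and RAM-disk device mirror this chapter's samples, while the portal group number and address are illustrative placeholders.

```shell
# Sketch: write a minimal chiscsi.conf with one target serving a RAM disk.
# Target name and device mirror the chapter's samples; the portal group
# and IP:port are placeholders. A temp file stands in for the real config.
chconf="$(mktemp)"
cat > "$chconf" <<'EOF'
target:
        TargetName=iqn.2006-02.com.chelsio.diskarray.san1
        TargetDevice=ramdisk,MEM
        PortalGroup=1@192.0.2.178:3260
EOF
grep -E 'TargetName|TargetDevice|PortalGroup' "$chconf"
```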
Chapter VII. iSCSI PDU Offload Target To stop a specific target, execute iscsictl with "-s" followed by the target name. [root@host]# iscsictl -s target=iqn.2006-02.com.chelsio.diskarray.san1 View Configuration: To see the configuration of all the active iSCSI targets, execute iscsictl with the "-c" option. [root@host]# iscsictl -c To see more detailed configuration settings of a specific target, execute iscsictl with the "-c" option followed by the target name. [root@host]# iscsictl -c target=iqn.2006-02.
Chapter VII. iSCSI PDU Offload Target 4.4. The iSCSI Configuration File The iSCSI configuration file consists of a series of blocks consisting of the following types of iSCSI entity blocks: 1. global 2. target There can be only one global entity block whereas multiple target entity blocks are allowed. The global entity block is optional but there must be at least one target entity block. An entity block begins with a block type (global or target).
Chapter VII. iSCSI PDU Offload Target Table of Chelsio Global Entity Settings
Key: iscsi_offload_mode; Valid Values: "AUTO", "TOE", "ULP"; Default Value: "AUTO"; Multiple Values: No.
Description: Defines the offload mode. AUTO: the iSCSI software will make the decision; if the connection goes through a Chelsio HBA which has iSCSI acceleration enabled, then ULP is used. TOE: use the Chelsio HBA TCP Offloading Engine (TOE) capabilities.
Chapter VII.
Chapter VII. iSCSI PDU Offload Target
IFMarker: Valid Values "Yes"/"No"; Default "No"; Multiple No. Turns on or off the target to initiator markers on the connection. Chelsio only supports "No".
OFMarkInt: Valid Values 1 to 65535; Default 2048; Multiple No. Sets the interval for the initiator to target markers on a connection.
IFMarkInt: Valid Values 1 to 65535; Default 2048; Multiple No. Sets the interval for the target to initiator markers on a connection.
4.4.3. Chelsio Entity Settings Description Chelsio Entity Parameters pass control information to the Chelsio iSCSI module.
Chapter VII. iSCSI PDU Offload Target allowed. The secret must be between 6 and 255 characters; commas "," are not allowed. The initiator user id and secret are used by the target to authenticate the initiator. NOTE: The double quotes are required as part of the format.
Auth_CHAP_ChallengeLength: Valid Values 16 to 1024; Default 16; Multiple No.
Auth_CHAP_Policy:
Chapter VII. iSCSI PDU Offload Target
TargetSessionMaxCmd: Valid Values 1 to 2048; Default 64; Multiple No. The maximum number of outstanding iSCSI commands per session.
TargetDevice*: Optional flags [,FILE|MEM|BLK] [,NULLRW] [,SYNC] [,RO] [,size=xMB] [,ScsiID=xxxxxx] [,WWN=xxxxxxxxx]. A device served up by the associated target. The device mode can be a: Block Device (e.g. /dev/sda) Virtual Block Device (e.g.
Chapter VII. iSCSI PDU Offload Target WWN=xxxxxx is a 16 character unique value set for a multipath-aware iSCSI initiator host. When a multipath-aware initiator host is accessing the storage Logical Unit Number (LUN) via multiple iSCSI sessions, the ScsiID and WWN values must be set for the TargetDevice. These values will be returned in the Inquiry response (VPD 0x83). Multiple TargetDevice key=value pairs are needed to indicate multiple devices. There can be multiple devices for any particular target.
Chapter VII. iSCSI PDU Offload Target sip, and/or dip. lun=<lun_list>:<permission> controls how the initiators access the LUNs. The supported value for <lun_list> is ALL. <permission> can be: R: Read Only. RW or WR: Read and Write. If permissions are specified then the associated LUN list is required. If no lun=<lun_list>:[R|RW] is specified then it defaults to ALL:RW. NOTE: For the Chelsio Target Software release with lun-masking included, <lun_list> is in the format of <0..N | 0~N | ALL> Where: 0..
Chapter VII. iSCSI PDU Offload Target # lun 0: a ramdisk with default size of 16MB TargetDevice=ramdisk,MEM PortalGroup=5@192.0.2.178:3260 # # an iSCSI Target “iqn.2005-8.com.chelsio:diskarrays.san.328” # being served by the portal group "1" and "2" # target: # # iSCSI configuration # TargetName=iqn.2005-8.com.chelsio:diskarrays.san.
Chapter VII. iSCSI PDU Offload Target # Auth_CHAP_Policy=Mutual Auth_CHAP_target=“iTarget1ID”:“iTarget1Secret” Auth_CHAP_Initiator=“iInitiator1”:“InitSecret1” Auth_CHAP_Initiator=“iInitiator2”:“InitSecret2” Auth_CHAP_ChallengeLength=16 # # ACL configuration # # initiator “iqn.2006-02.com.chelsio.san1” is allowed full access # to this target ACL=iname=iqn.2006-02.com.chelsio.san1 # any initiator from IP address 102.50.50.101 is allowed full access # of this target ACL=sip=102.50.50.
Chapter VII. iSCSI PDU Offload Target For one-way CHAP, the initiator CHAP id and secret are configured and stored on a per-initiator basis with the Chelsio Entity parameter "Auth_CHAP_Initiator". 4.5.2. Mutual CHAP authentication With mutual CHAP (also called bidirectional CHAP), the target and initiator use CHAP to authenticate each other. For mutual CHAP, in addition to the initiator CHAP id and secret, the target CHAP id and secret are required.
Chapter VII. iSCSI PDU Offload Target Auth_CHAP_Policy=Mutual, the Chelsio iSCSI target will accept a relevant initiator if it does a) no CHAP or b) mutual CHAP. With AuthMethod=None, regardless of the setting of the key Auth_CHAP_Policy, the Chelsio iSCSI target will only accept a relevant initiator if it does no CHAP. With AuthMethod=CHAP, CHAP is enforced on the target: i. With Auth_CHAP_Policy=Oneway, the iSCSI target will accept a relevant initiator only if it does a) one-way CHAP or b) mutual CHAP ii.
Chapter VII. iSCSI PDU Offload Target ACL=iname=iqn.2006-02.com.chelsio.san1 # any initiator from IP address 102.50.50.101 is allowed full # read-write access of this target ACL=sip=102.50.50.101 # any initiator connected via the target portal 102.60.60.25 # is allowed full read-write access to this target ACL=dip=102.60.60.25 # initiator “iqn.2005-09.com.chelsio.san2” from 102.50.50.22 # and connected via the target portal 102.50.50.25 is allowed # read only access of this target ACL=iname=iqn.200602.com.
Chapter VII. iSCSI PDU Offload Target The details of the parameters for the key TargetDevice are found in the table of Chelsio Entity Settings section earlier in this document. 4.7.1. RAM Disk Details For the built-in RAM disk: The minimum size of the RAM disk is 1 Megabyte (MB) and the maximum is limited by system memory. To use a RAM disk with a Windows Initiator, it is recommended to set the size >= 16MB.
Chapter VII. iSCSI PDU Offload Target Where: <path> is the path to the actual storage device, such as /dev/sdb for a block device or /dev/md0 for a software RAID. The path must exist in the system. SYNC: when specified, the Target will flush all the data in the system cache to the storage driver before sending the response back to the Initiator. 4.7.3.
Chapter VII. iSCSI PDU Offload Target 4.8. Target Redirection Support An iSCSI Target can redirect an initiator to use a different IP address and port (often called a portal) instead of the current one to connect to the target. The redirected target portal can either be on the same machine, or a different one. 4.8.1. ShadowMode for Local vs. Remote Redirection The ShadowMode setting specifies whether the Redirected portal groups should be present on the same machine or not.
Chapter VII. iSCSI PDU Offload Target 4.8.2. Redirecting to Multiple Portal Groups The Chelsio iSCSI Target Redirection allows redirecting all login requests received on a particular portal group to multiple portal groups in a round robin manner. Below is an example Redirection to Multiple Portal Groups: target: # # any login requests received on 10.193.184.81:3260 will be # redirected to 10.193.184.85:3261 and 10.193.184.85:3262 in a # Round Robin Manner. PortalGroup=1@10.193.184.
Chapter VII. iSCSI PDU Offload Target 4.9.3. iscsictl options
-h: Display the help messages.
-v: Display the version.
-f <[path/]filename>: Specifies a pre-written iSCSI configuration text file, used to start, update, save, or reload the iSCSI node(s). This option must be specified with one of the following other options: "-S", "-U", or "-W". For the "-S" option, "-f" must be specified first. All other options will ignore this "-f" option.
Chapter VII. iSCSI PDU Offload Target If the target= option is specified, the -k option can optionally be specified along with this option to display only the selected entity parameter setting. Example: iscsictl -c target=iqn.com.cc.target1 -k HeaderDigest
-F target= [-k lun=]: Flush the cached data to the target disk(s). target= parameter: where name is the name of the target to be flushed.
Chapter VII. iSCSI PDU Offload Target If any of the specified var=const parameters is invalid, the command will reject only the invalid parameters, but will continue on and complete all other valid parameters if any others are specified.
-s target=: Stop the specified active iSCSI targets. target= parameter: see the description of option -c for the target= parameter definition. The target= parameter is mandatory.
Chapter VII. iSCSI PDU Offload Target currently active, they will be started. For Rules 2 & 3, please note the differences: they are not the same! The global settings are also reloaded from the configuration file with this option.
-r target= [-k initiator=]: Retrieve active iSCSI sessions under a target. target= parameter: where name must be a single target name.
Chapter VII. iSCSI PDU Offload Target In the first example, the minimum command set is given, where the IP address of the iSNS server is specified. In the second example, a fully qualified command is given by also setting three optional parameters: the mandatory IP address and the corresponding optional port number are specified, the iSNS entity ID is set to "isnscln2", and the query interval is set to 30 seconds. 4.10. Rules of Target Reload (i.e.
Chapter VII.
Chapter VII. iSCSI PDU Offload Target Note iscsi_offload_mode has no meaning when the iSCSI software is used on a non-TOE based NIC. 4.11.2. iscsi_auth_order Options: "ACL" or "CHAP", defaults to "CHAP" On an iSCSI target, when ACL_Enable is set to Yes, iscsi_auth_order decides whether to perform CHAP first then ACL, or to perform ACL first then CHAP.
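A global entity block selecting ACL-first authentication might look like the sketch below. The iscsi_offload_mode value shown is this chapter's documented default; a temporary file stands in for the real chiscsi.conf.

```shell
# Sketch: global entity block choosing ACL before CHAP.
# Written to a temp file; merge into your real chiscsi.conf in practice.
gconf="$(mktemp)"
cat > "$gconf" <<'EOF'
global:
        iscsi_offload_mode=AUTO
        iscsi_auth_order=ACL
EOF
grep 'iscsi_auth_order' "$gconf"
```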
Chapter VIII. iSCSI PDU Offload Initiator VIII.
Chapter VIII. iSCSI PDU Offload Initiator 1. Introduction The Chelsio T5/T4 series Adapters support iSCSI acceleration and iSCSI Direct Data Placement (DDP), where the hardware handles the expensive byte-touching operations, such as CRC computation and verification, and direct DMA to the final host memory destination: iSCSI PDU digest generation and verification On the transmit side, Chelsio hardware computes and inserts the Header and Data digests into the PDUs.
Chapter VIII. iSCSI PDU Offload Initiator T420-CR T440-CR T422-CR T404-BT T420-BCH T440-LP-CR T420-BT T420-LL-CR T420-CX 1.2. Software Requirements 1.2.1.
Chapter VIII. iSCSI PDU Offload Initiator 2. Software/Driver Loading The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail.
Chapter VIII. iSCSI PDU Offload Initiator ii.
Chapter VIII. iSCSI PDU Offload Initiator 3.
Chapter VIII. iSCSI PDU Offload Initiator 4. Software/Driver Configuration and Fine-tuning 4.1. Accelerating open-iSCSI Initiator The following steps need to be taken to accelerate the open-iSCSI initiator: 4.1.1. Configuring iscsid.conf file Edit the /etc/iscsi/iscsid.conf file and change the setting for MaxRecvDataSegmentLength: node.conn[0].iscsi.MaxRecvDataSegmentLength = 8192 The login would fail for a normal session if MaxRecvDataSegmentLength is too big.
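The edit above can be done non-interactively with sed. A sketch operating on a temporary copy (the real file is assumed to be /etc/iscsi/iscsid.conf, and the starting value below is just an example):

```shell
# Sketch: set MaxRecvDataSegmentLength to 8192 in a copy of iscsid.conf.
# The 262144 starting value is illustrative; the real file path is assumed.
idconf="$(mktemp)"
echo 'node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144' > "$idconf"
sed -i 's/^\(node\.conn\[0\]\.iscsi\.MaxRecvDataSegmentLength\) = .*/\1 = 8192/' "$idconf"
cat "$idconf"
```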
Chapter VIII. iSCSI PDU Offload Initiator E.g.:
iface.iscsi_ifacename = cxgb4i.00:07:43:04:5b:da
iface.hwaddress = 00:07:43:04:5b:da
iface.transport_name = cxgb4i
iface.net_ifacename = eth3
iface.ipaddress = 102.2.2.137
Alternatively, you can create the file automatically by executing the following command: [root@host]# iscsiadm -m iface Here, iface.iscsi_ifacename denotes the name of the interface file in /etc/iscsi/ifaces/. iface.
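Creating the iface file by hand can also be scripted. The sketch below reuses the example values above (the MAC address, eth3 and the IP are sample values that must match your adapter); a temporary directory stands in for /etc/iscsi/ifaces/.

```shell
# Sketch: create a cxgb4i iface file from the sample values above.
# Temp directory used instead of /etc/iscsi/ifaces/ for illustration.
ifdir="$(mktemp -d)"
ifacefile="$ifdir/cxgb4i.00:07:43:04:5b:da"
cat > "$ifacefile" <<'EOF'
iface.iscsi_ifacename = cxgb4i.00:07:43:04:5b:da
iface.hwaddress = 00:07:43:04:5b:da
iface.transport_name = cxgb4i
iface.net_ifacename = eth3
iface.ipaddress = 102.2.2.137
EOF
grep 'iface.transport_name' "$ifacefile"
```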
Chapter VIII. iSCSI PDU Offload Initiator ii. Discovering iSCSI Targets To discover an iSCSI target, execute a command in the following format: iscsiadm -m discovery -t st -p <target_ip>:<port> -I <iface_name> E.g.: [root@host]# iscsiadm -m discovery -t st -p 102.2.2.155:3260 -I cxgb4i.00:07:43:04:5b:da iii.
Chapter VIII. iSCSI PDU Offload Initiator 4.2. Auto login from cxgb4i initiator at OS bootup For iSCSI auto login (via cxgb4i) to work on OS startup, please add the following line to start() in /etc/rc.d/init.d/iscsid file on RHEL: modprobe -q cxgb4i E.g.
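The init-script edit described above can be automated with sed. The sketch below inserts the modprobe line right after the start() function header in a stripped-down stand-in for the script (the real path is assumed to be /etc/rc.d/init.d/iscsid, as stated above).

```shell
# Sketch: insert "modprobe -q cxgb4i" as the first statement of start()
# in a minimal stand-in for the RHEL iscsid init script.
script="$(mktemp)"
cat > "$script" <<'EOF'
start() {
        echo -n "Starting iscsid: "
}
EOF
sed -i '/^start() {/a\        modprobe -q cxgb4i' "$script"
grep 'modprobe' "$script"
```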
Chapter IX. Data Center Bridging (DCB) IX.
Chapter IX. Data Center Bridging (DCB) 1. Introduction Data Center Bridging (DCB) refers to a set of bridge specification standards aimed at creating a converged Ethernet network infrastructure shared by all storage, data networking and traffic management services. An improvement to the existing specification, DCB uses priority-based flow control to provide hardware-based bandwidth allocation and enhances transport reliability.
Chapter IX. Data Center Bridging (DCB) 2. Software/Driver Loading Before proceeding, please ensure that Unified Wire Installer is installed with DCB support as mentioned in CLI mode (individual drivers) section of Unified Wire Installer chapter. Network (cxgb4; t4_tom for full offload support) and FCoE Initiator (csiostor) drivers must be loaded in order to enable DCB feature. Also, the drivers must be loaded by the root user. Any attempt to load the drivers as a regular user will fail.
Chapter IX. Data Center Bridging (DCB) 3. Software/Driver Unloading To disable DCB feature, unload FCoE Initiator and Network drivers: [root@host]# rmmod csiostor [root@host]# rmmod cxgb4 Note If t4_tom is loaded, please reboot machine after unloading FCoE Initiator and Network drivers.
Chapter IX. Data Center Bridging (DCB) 4. Software/Driver Configuration and Fine-tuning 4.1. Configuring Cisco Nexus 5010 switch 4.1.1. Configuring the DCB parameters Note By default the Cisco Nexus switch enables DCB functionality and configures PFC for FCoE traffic, making it no-drop, with 50% of the bandwidth assigned to the FCoE class of traffic and the other 50% to the rest (such as NIC). If you wish to configure custom bandwidth, follow the procedure below.
Chapter IX. Data Center Bridging (DCB) v. Configure qos policy-maps. switch(config)#policy-map type qos policy-test switch(config-pmap-qos)#class type qos class-nic switch(config-pmap-c-qos)#set qos-group 2 vi. Configure queuing policy-maps and assign network bandwidth. Divide the network bandwidth between FCoE and NIC traffic.
Chapter IX. Data Center Bridging (DCB) i. The following steps enable FCoE services on a particular VLAN and map a VSAN to that VLAN. These steps need not be repeated every time, unless a new mapping has to be created. switch(config)# vlan 2 switch(config-vlan)# fcoe vsan 2 switch(config-vlan)#exit ii. The following steps create a virtual fibre channel (VFC) interface and bind it to an Ethernet interface so that the Ethernet port begins functioning as an FCoE port.
Chapter IX. Data Center Bridging (DCB) iv. Enabling DCB: switch(config)# interface ethernet 1/13 switch(config-if)# priority-flow-control mode auto switch(config-if)# flowcontrol send off switch(config-if)# flowcontrol receive off switch(config-if)# lldp transmit switch(config-if)# lldp receive switch(config-if)# no shutdown v.
Chapter IX. Data Center Bridging (DCB) ii. Create a CEE Map to carry LAN and SAN traffic if it does not exist. Example of creating a CEE map. switch(config)# cee-map default switch(conf-cee-map)#priority-group-table 1 weight 40 pfc switch(conf-cee-map)#priority-group-table 2 weight 60 switch(conf-cee-map)#priority-table 2 2 2 1 2 2 2 2 iii. Configure the CEE interface as a Layer 2 switch port. Example of configuring the switch port as a 10-Gigabit Ethernet interface.
Chapter IX. Data Center Bridging (DCB) v.
Chapter X. FCoE PDU Offload Target X.
Chapter X. FCoE PDU Offload Target 1. Introduction Chelsio FCoE PDU Offload Target driver supports existing FCF (BB-5) mode which allows communicating with FC and FCoE nodes using FCF (Fibre-Channel Forwarding) switch. It also supports the new VN2VN (BB-6) mode which allows communicating with FCoE nodes using regular switches, without the need for expensive FCF enabled switches. 1.1. Hardware Requirements 1.1.1.
Chapter X. FCoE PDU Offload Target i. Preparing License information file A license information file is required for each license. Run the following command to obtain the license information file: [root@host]# chinfotool The chinfotool scans and lists all the NICs in the system, and prompts the user to select one NIC, to which the keyfile will be tied. At the end, it prints out a summary of the license information file. Note Only one NIC per system needs to be selected.
Chapter X. FCoE PDU Offload Target The generated file hostname_chelsio_infofile needs to be sent to Chelsio at support@chelsio.com as instructed by the chinfotool. Note Please be sure that the selected NIC Adapter is present in the system at all times when using Chelsio’s FCoE software. If it is removed the license will be invalid and the process of obtaining a new license file will need to be restarted. That includes using chinfotool to rescan the system and obtaining a new keyfile from Chelsio support.
Chapter X. FCoE PDU Offload Target 2. Software/Driver Loading FCoE PDU Offload Target driver (chfcoe) is dependent on Network (cxgb4) and SCST (scst) drivers. SCST driver will be installed by default during Unified Wire Installation. Important Any existing version of SCST driver will be replaced by version 3.0.0-pre2 during installation. The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail.
Chapter X. FCoE PDU Offload Target 3. Software/Driver Configuration and Fine-tuning 3.1. Configuring Cisco Nexus 5010 switch Note Refer to the following sections only if you wish to configure the FCoE PDU Offload Target in FCF mode. 3.1.1. Configuring the DCB parameters Note By default the Cisco Nexus switch enables DCB functionality and configures PFC for FCoE traffic, making it no-drop, with 50% of the bandwidth assigned to the FCoE class of traffic and the other 50% to the rest (such as NIC).
Chapter X. FCoE PDU Offload Target iv. Configure network-qos class-maps. switch(config)#class-map type network-qos class-nic switch(config-cmap-nq)#match qos-group 2 v. Configure qos policy-maps. switch(config)#policy-map type qos policy-test switch(config-pmap-qos)#class type qos class-nic switch(config-pmap-c-qos)#set qos-group 2 vi. Configure queuing policy-maps and assign network bandwidth. Divide the network bandwidth between FCoE and NIC traffic.
Chapter X. FCoE PDU Offload Target 3.1.2. Configuring the FCoE ports In this procedure, you may need to adjust some of the parameters to suit your environment, such as VLAN IDs, Ethernet interfaces, and virtual Fibre Channel interfaces. i. The following steps enable FCoE services on a particular VLAN and map a VSAN to that VLAN. These steps need not be repeated every time, unless a new mapping has to be created. switch(config)# vlan 2 switch(config-vlan)# fcoe vsan 2 switch(config-vlan)#exit ii.
Chapter X. FCoE PDU Offload Target iv. Enabling DCBX: switch(config)# interface ethernet 1/13 switch(config-if)# priority-flow-control mode auto switch(config-if)# flowcontrol send off switch(config-if)# flowcontrol receive off switch(config-if)# lldp transmit switch(config-if)# lldp receive switch(config-if)# no shutdown v.
Chapter X. FCoE PDU Offload Target 3.2.1. Verifying local ports Use the following command to determine the local port information: [root@host]#cxgbtool stor -a --show-lnode If FCoE PDU Offload Target is operating in FCF mode, then the local node information will be available only after the target completes FLOGI to the switch. Note In order to identify a Chelsio target's WWPN among other vendors', note that the WWPN always begins with 0x5000743.
Chapter X. FCoE PDU Offload Target 3.2.2. Verifying remote ports To verify remote ports (fabric, name server, initiator ports etc.
Chapter X.
Chapter X. FCoE PDU Offload Target 3.3. Configuring LUNs on Target i. Determine the target and initiator WWPNs using the procedure mentioned in the previous section. ii. Create an SCST configuration file based on your setup. A sample configuration file will be available at /etc/chelsio-fcoe/ after Unified Wire installation. iii. Ensure that the SCST handler modules used in the configuration (e.g. scst_vdisk) are loaded before proceeding. iv. Configure LUNs on the target by running the following command.
Chapter X. FCoE PDU Offload Target The following configuration file adds three LUNs (ram disk, physical disk & nullio disk) for the target specified. Only initiators present in the group will be able to access the LUNs.
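Since the referenced file's contents are not reproduced here, the following is a hedged sketch of what such an SCST configuration could look like. All WWPNs and device paths are placeholders, and the TARGET_DRIVER name (chfcoe) is an assumption; adapt everything to your setup and consult the sample shipped under /etc/chelsio-fcoe/ before passing a file to scstadmin.

```shell
# Hedged sketch of an SCST config with three LUNs (ram disk via fileio,
# physical disk via blockio, and a nullio disk). WWPNs, device paths and
# the target-driver name are placeholders. Temp file for illustration.
sconf="$(mktemp)"
cat > "$sconf" <<'EOF'
HANDLER vdisk_fileio {
        DEVICE ramdisk0 {
                filename /dev/shm/ramdisk0
        }
}
HANDLER vdisk_blockio {
        DEVICE disk0 {
                filename /dev/sdb
        }
}
HANDLER vdisk_nullio {
        DEVICE null0
}
TARGET_DRIVER chfcoe {
        TARGET 50:00:74:30:00:00:00:01 {
                GROUP grp1 {
                        LUN 0 ramdisk0
                        LUN 1 disk0
                        LUN 2 null0
                        INITIATOR 21:00:00:24:ff:00:00:01
                }
                enabled 1
        }
}
EOF
grep -c 'LUN ' "$sconf"
```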
Chapter X. FCoE PDU Offload Target Here is a sample config file for two targets, each having two LUNs. Here, Logical Volumes are exposed as LUNs. The target's WWPN needs to be specified under TARGET_DRIVER. The initiator's WWPN needs to be specified under GROUP. 3.4. Configuring Persistent Target The chfcoe service is required to configure a persistent target and will be installed during Unified Wire installation. Please follow the procedure mentioned below: i. ii. iii.
Chapter X. FCoE PDU Offload Target iv. Configure Chelsio interfaces (used for FCoE traffic) to come up with a minimum MTU of 2180 (recommended) during boot. v. Configure the FCoE target mode of operation (FCF or VN2VN) by editing /etc/modprobe.d/chfcoe.conf. vi. By default the chfcoe service will be in the disabled state.
Chapter X. FCoE PDU Offload Target 3.6. Removing LUNs Execute the following command to remove the LUNs from the configuration file: [root@host]# scstadmin -force -clear_config <LUN Config file> 3.7. Performance tuning For performance tuning, enable hyperthreading and bind chfcoe workers and irqs to different CPUs. For irq binding to work, the irqbalance service should be disabled on the system.
Chapter X. FCoE PDU Offload Target For NUMA machines, determine the NUMA node of the interface and bind all the chfcoe workers and irqs to the CPUs of the same NUMA node.
Chapter X. FCoE PDU Offload Target Binding FCoE irqs to remaining CPUs Use the following command to bind chfcoe irqs to CPUs 0,1,16,17 [root@host]# chfcoe_perftune.
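Under the hood, binding an irq to a set of CPUs means writing a hexadecimal CPU mask to /proc/irq/<N>/smp_affinity (the chfcoe_perftune.sh script automates this). The mask arithmetic can be sketched as follows, using the CPU list 0,1,16,17 from the example above:

```shell
# Sketch: compute the hex smp_affinity mask for CPUs 0, 1, 16 and 17,
# as would be echoed into /proc/irq/<N>/smp_affinity by hand.
cpus="0 1 16 17"
mask=0
for c in $cpus; do
        mask=$(( mask | (1 << c) ))
done
printf '%x\n' "$mask"    # 0x30003 covers CPUs 0,1,16,17
```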
Chapter X. FCoE PDU Offload Target 4.
Chapter XI. FCoE Full Offload Initiator XI.
Chapter XI. FCoE Full Offload Initiator 1. Introduction Fibre Channel over Ethernet (FCoE) is a mapping of Fibre Channel over selected full duplex IEEE 802.3 networks. The goal is to provide I/O consolidation over Ethernet, reducing network complexity in the Datacenter. Chelsio FCoE initiator maps Fibre Channel directly over Ethernet while being independent of the Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet.
Chapter XI. FCoE Full Offload Initiator 2. Software/Driver Loading The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail.
Chapter XI. FCoE Full Offload Initiator 3. Software/Driver Unloading To unload the driver: [root@host]# modprobe -r Note csiostor If multipath services are running, unload of FCoE driver is not possible. Stop the multipath service and then unload the driver.
Chapter XI. FCoE Full Offload Initiator 4. Software/Driver Configuration and Fine-tuning 4.1. Configuring Cisco Nexus 5010 and Brocade switch To configure various Cisco and Brocade switch settings, please refer to the Software/Driver Configuration and Fine-tuning section of the Data Center Bridging (DCB) chapter. 4.2. FCoE fabric discovery verification 4.2.1. Verifying Local Ports Once connected to the switch, use the following command to see if the FIP has gone through and a VN_Port MAC address has been assigned.
Chapter XI.
Chapter XI. FCoE Full Offload Initiator 4.2.2. Verifying the target discovery To view the list of targets discovered on a particular FCoE port, use the following commands: i. Check for the adapter number using the following command: [root@host]# cxgbtool stor -s ii. To check the list of targets discovered on a particular FCoE port, first determine the WWPN of the initiator local port under sysfs. The hosts under fc_host depend on the number of ports on the adapter used.
After finding the WWPN of the local node, verify the list of discovered targets using the following command.
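The exact command the manual intends is elided in this extract; as a sketch, the WWPNs of local and remote ports can be read through the generic Linux fc_host sysfs interface (an assumption, not the documented Chelsio command):

```shell
# List the WWPN of each local FC host port (standard Linux fc_host sysfs)
for h in /sys/class/fc_host/host*; do
    [ -e "$h" ] && echo "$h: $(cat "$h"/port_name)"
done

# List the WWPNs of the remote (target) ports discovered on the fabric
cat /sys/class/fc_remote_ports/rport-*/port_name 2>/dev/null
```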
Alternatively, the LUNs discovered by the Chelsio FCoE initiators can be accessed via easily identifiable 'udev' path device files, such as:
[root@host]# ls /dev/disk/by-path/pci-0000:04:00.
4.4. Creating a Filesystem
Create an ext3 filesystem using the following command:
[root@host]# mkfs.
XII. Offload Bonding driver
1. Introduction
The Chelsio Offload Bonding driver provides a method to aggregate multiple network interfaces into a single logical bonded interface, effectively combining their bandwidth into a single connection. It also provides redundancy in case one of the links fails. Traffic running over the bonded interface can be fully offloaded to the T5/T4 adapter, freeing the CPU from TCP/IP overhead.
1.1. Hardware Requirements
1.1.1.
Suse Linux Enterprise Server 11 SP1 kernel (SLES11SP1), 2.6.32.12-0.7
Suse Linux Enterprise Server 11 SP2 kernel (SLES11SP2), 3.0.13-0.27
Suse Linux Enterprise Server 11 SP3 kernel (SLES11SP3), 3.0.76-0.11
Ubuntu 12.04, 3.2.0-23
Ubuntu 12.04.2, 3.5.0-23*
Kernel.org linux-3.4
Kernel.org linux-3.6*
Kernel.org linux-3.7
Kernel.org linux-3.8*
Other kernel versions have not been tested and are not guaranteed to work.
2. Software/Driver Loading
The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail.
3. Software/Driver Unloading
4. Software/Driver Configuration and Fine-tuning
4.1. Offloading TCP traffic over a bonded interface
The Chelsio Offload Bonding driver supports the active-backup (mode=1), 802.3ad (mode=4) and balance-xor (mode=2) modes. To offload TCP traffic over a bonded interface, use the following method:
i. Load the network driver with TOE support:
[root@host]# modprobe t4_tom
ii. Create a bonded interface:
[root@host]# modprobe bonding mode=1 miimon=100
iii.
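The remaining steps are elided in this extract; a sketch following the standard Linux bonding procedure is shown below. The interface names (bond0, eth0, eth1) and address are illustrative assumptions:

```shell
# Bring the bonded interface up (interface name and address are examples)
ifconfig bond0 192.168.1.10/24 up

# Enslave two Chelsio ports to the bond via the standard sysfs interface
echo +eth0 > /sys/class/net/bond0/bonding/slaves
echo +eth1 > /sys/class/net/bond0/bonding/slaves

# Verify the bond state
cat /proc/net/bonding/bond0
```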
XIII. Offload Multi-Adapter Failover (MAFO)
1. Introduction
Chelsio's T5- and T4-based adapters offer a complete suite of high-reliability features, including adapter-to-adapter failover. The patented offload Multi-Adapter Failover (MAFO) feature ensures that all offloaded traffic continues operating seamlessly in the face of a port failure. MAFO allows aggregating network interfaces across multiple adapters into a single logical bonded interface, providing effective fault tolerance.
1.2. Software Requirements
1.2.1. Linux Requirements
Currently the Offload Multi-Adapter Failover driver is available for the following versions:
Redhat Enterprise Linux 5 update 10 kernel
Redhat Enterprise Linux 5 update 9 kernel
Redhat Enterprise Linux 6 update 4 kernel
Redhat Enterprise Linux 6 update 5 kernel
Suse Linux Enterprise Server 11 SP1 kernel
Suse Linux Enterprise Server 11 SP2 kernel
Suse Linux Enterprise Server 11 SP3 kernel
Ubuntu 12.
2. Software/Driver Loading
The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail.
3. Software/Driver Unloading
4. Software/Driver Configuration and Fine-tuning
4.1. Offloading TCP traffic over a bonded interface
The Chelsio MAFO driver supports only the active-backup (mode=1) mode. To offload TCP traffic over a bonded interface, use the following method:
i. Load the network driver with TOE support:
[root@host]# modprobe t4_tom
ii. Create a bonded interface:
[root@host]# modprobe bonding mode=1 miimon=100
iii.
4.2. Network Device Configuration
Please refer to the operating system documentation for administration and configuration of network devices.
Note: Some operating systems may attempt to auto-configure the detected hardware, and some may not detect all ports on a multi-port adapter. If this happens, please refer to the operating system documentation for manually configuring the network device.
XIV. UDP Segmentation Offload and Pacing
1. Introduction
Chelsio's T5/T4 series of adapters provide UDP segmentation offload and per-stream rate shaping to drastically lower server CPU utilization, increase content delivery capacity, and improve service quality. Tailored for UDP content, UDP Segmentation Offload (USO) technology moves the processing required to packetize UDP data and rate-control its transmission from software running on the host to the network adapter.
1.1. Hardware Requirements
1.1.1. Supported Adapters
The following are the currently shipping Chelsio adapters that are compatible with the UDP Segmentation Offload and Pacing driver:
T502-BT, T580-CR, T520-LL-CR, T520-CR, T522-CR, T580-LP-CR, T540-CR, T420-CR, T440-CR, T422-CR, T404-BT, T420-BCH, T440-LP-CR, T420-BT, T420-LL-CR, T420-CX
1.2. Software Requirements
1.2.1.
Other kernel versions have not been tested and are not guaranteed to work.
*Limited QA performed.
2. Software/Driver Loading
The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail. Run the following commands to load the driver:
[root@host]# modprobe cxgb4
[root@host]# modprobe t4_tom
Though normally associated with the Chelsio TCP Offload Engine, the t4_tom module is required in order to allow for the proper redirection of UDP socket calls.
3. Software/Driver Unloading
Reboot the system to unload the driver.
4. Software/Driver Configuration and Fine-tuning
4.1. Modifying the application
To use the UDP offload functionality, the application needs to be modified. Follow the steps below:
i. Determine the UDP socket file descriptor in the application through which data is sent.
ii. Declare and initialize two variables in the application:
int fs=1316;
int cl=1;
Here, fs is the UDP packet payload size in bytes that is transmitted on the wire.
Here:
sockfd : the file descriptor of the UDP socket
&fs / &cl : pointers to the framesize and class variables
sizeof(fs) / sizeof(cl) : the size of the variables
v. Now, compile the application.
4.1.1. UDP offload functionality for RTP data
In the case of RTP data, the video server application sends the initial sequence number and the RTP payload. The USO engine segments the payload data, increments the sequence number and sends out the data.
4.2. Configuring UDP Pacing
Now that the application has been modified to associate its UDP socket with a particular UDP traffic class, the pacing of that socket's traffic can be set using the cxgbtool utility. The command and its parameters are explained below:
[root@host]# cxgbtool sched-class params type packet level cl-rl mode flow rate-unit bits rate-mode absolute channel
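As an illustration only (the interface name, channel, class and rate values below are assumptions, not taken from this manual), a class-rate-limit command following the syntax above might look like:

```shell
# Rate limit traffic class 0 on channel 0 of interface eth0
# (values are illustrative; the rate is given in the tool's
# expected units for rate-unit bits)
cxgbtool eth0 sched-class params type packet level cl-rl mode flow \
    rate-unit bits rate-mode absolute channel 0 class 0 max-rate 100000
```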
XV. Offload IPv6 driver
1. Introduction
The growth of the Internet has created a need for more addresses than are possible with IPv4. Internet Protocol version 6 (IPv6) is a version of the Internet Protocol (IP) designed to succeed Internet Protocol version 4 (IPv4). Chelsio's Offload IPv6 feature provides support to fully offload IPv6 traffic to the T5/T4 adapter.
1.1. Hardware Requirements
1.1.1.
Suse Linux Enterprise Server 11 SP2 kernel (SLES11SP2), 3.0.13-0.27
Suse Linux Enterprise Server 11 SP3 kernel (SLES11SP3), 3.0.76-0.11
Ubuntu 12.04, 3.2.0-23
Ubuntu 12.04.2, 3.5.0-23*
Kernel.org linux-3.4
Kernel.org linux-3.6*
Kernel.org linux-3.7
Kernel.org linux-3.8*
Other kernel versions have not been tested and are not guaranteed to work.
2. Software/Driver Loading
IPv6 must be enabled in your system (it is enabled by default) to use the Offload IPv6 feature. Also, the Unified Wire package must be installed with IPv6 support (see Software/Driver Installation). After installing the Unified Wire package and rebooting the host, load the NIC (cxgb4) and TOE (t4_tom) drivers. The drivers must be loaded by the root user. Any attempt to load the drivers as a regular user will fail.
3. Software/Driver Unloading
To disable the Offload IPv6 feature, unload the NIC and TOE drivers:
3.1. Unloading the NIC driver
To unload the NIC driver, run the following command:
[root@host]# rmmod cxgb4
3.2. Unloading the TOE driver
Please reboot the system to unload the TOE driver.
XVI. Bypass Driver
1. Introduction
Chelsio's B420 and B404 bypass adapters are Ethernet cards that provide bypass functionality and an integrated L2, L3, and L4 Ethernet switch. The integrated switch allows for selective bypass on a per-packet basis at line rate. To use the bypass adapters, you must have both the Chelsio NIC driver and the bypass CLI user space application loaded.
1.1.
Disconnect Mode
The bypass cards can also be programmed to drop all packets.
Selective Bypass
In Normal mode, the bypass adapters can be programmed to redirect packets based on certain portions of the packet. The specification of the match criteria is called a rule. When a rule is matched, an action is applied to the ingress packet. The supported actions are drop, forward and input. The drop action causes the packet to be discarded.
1.3. Software Requirements
1.3.1. Linux Requirements
Currently the Bypass driver is available for the following versions:
Redhat Enterprise Linux 5 update 9 kernel (RHEL5.9), 2.6.18-348.el5*
Redhat Enterprise Linux 5 update 10 kernel (RHEL5.10), 2.6.18-371.el5*
Redhat Enterprise Linux 6 update 4 kernel (RHEL6.4), 2.6.32-358.el6
Redhat Enterprise Linux 6 update 5 kernel (RHEL6.5), 2.6.32-431.el6*
Suse Linux Enterprise Server 11 SP1 kernel (SLES11SP1), 2.6.32.12-0.
2. Software/Driver Loading
Before proceeding, please ensure that the drivers are installed with Bypass support as mentioned in the CLI mode (individual drivers) section. The driver must be loaded by the root user. Any attempt to load the driver as a regular user will fail.
3. Software/Driver Unloading
4. Software/Driver Configuration and Fine-tuning
4.1. Starting the ba server
4.1.1. For IPv4 only
Execute the following command to start the ba server for IPv4 only:
[root@host]# ba_server -i ethX
4.1.2. For IPv4 and IPv6
Execute the following command to start the ba server for IPv4 and IPv6:
[root@host]# ba_server -6 -i ethX
4.2. Bypass API (CLI)
A CLI will be created that implements the Bypass API as specified below. This CLI will then communicate the requests to the SDK server.
Getting the default state:
[root@host]# bypass ethX get --default_state
Setting the default state:
[root@host]# bypass ethX set --default_state [bypass|disconnect]
4.2.4. Using the bypass watchdog timer
The watchdog timer is used to ensure that, if there is a software failure, the switch will enter the default state.
The redirect CLI has the following syntax:
[root@host]# redirect ethX command --key [value] ...
Redirect Command List:
redirect ethX list : return a list of all configured tables and rules
redirect ethX add (key: table, value: table id, defaults to table 1) : add a rule to a table
redirect ethX update : update the specified rule with new keys
redirect ethX match : match the specified keys to a rule in a table
redirect ethX delete (keys: table, value: table id; index, value: rule index) : delete a rule from a table
redirect ethX purge (key: table, value: table id) : delete the table
redirect ethX move (key: table, value: table id)
The redirect dump command can be used to save the currently configured tables and rules into a shell script. To make the currently configured rules and tables persistent, redirect the output to the /etc/ba.cfg file only:
[root@host]# redirect ethX dump > /etc/ba.cfg
where /etc/ba.cfg is read by the bad service at boot time. To apply the saved configuration after the machine reboots, start the bad service. This service is available only in IPv4 mode.
XVII. WD Sniffing and Tracing
1. Theory of Operation
The objective of these utilities (wd_sniffer and wd_tcpdump_trace) is to provide sniffing and tracing capabilities by making use of T4's hardware features.
Sniffer: involves targeting specific multicast traffic and sending it directly to user space.
a) Get a queue (raw QP) index.
b) Program a filter to redirect specific traffic to the raw QP queue.
Schematic diagram of the T4 sniffer and tracer
1.1. Hardware Requirements
1.1.1.
T440-LP-CR, T420-BT, T420-LL-CR, T420-CX
1.2. Software Requirements
1.2.1.
2. Installation and Usage
2.1. Installing basic support
The iw_cxgb4 (Chelsio iWARP driver) and cxgb4 (Chelsio NIC driver) drivers have to be compiled and loaded before running the utilities. Refer to the Software/Driver Loading section for each driver and follow the instructions mentioned there before proceeding.
2.2. Using Sniffer (wd_sniffer)
1. Setup: Wire filter sniffing requires 2 systems, with one machine having a T4 card.
PEER: Machine A (192.168.1.100) <-----> (port 0) DUT: Machine B (IP-dont-care) (port 1) <-----> PEER: Machine C (192.168.1.200)
2. Procedure: Run wd_tcpdump_trace -i iface on the command prompt, where iface is one of the interfaces whose traffic you want to trace. In the diagram above it is port 0 or port 1.
[root@host]# wd_tcpdump_trace -i
Try ping or ssh between machines A and B.
XVIII. Classification and Filtering
1. Introduction
The Classification and Filtering feature enhances network security by controlling incoming traffic as it passes through the network interface based on source and destination addresses, protocol, source and receiving ports, or the value of some status bits in the packet. This feature can be used in the ingress path to:
Steer ingress packets that meet ACL (Access Control List) accept criteria to a particular receive queue.
1.2. Software Requirements
1.2.1. Linux Requirements
Currently the Classification and Filtering feature is available for the following versions:
Redhat Enterprise Linux 5 update 9 kernel
Redhat Enterprise Linux 5 update 10 kernel
Redhat Enterprise Linux 6 update 4 kernel
Redhat Enterprise Linux 6 update 5 kernel
Suse Linux Enterprise Server 11 SP1 kernel
Suse Linux Enterprise Server 11 SP2 kernel
Suse Linux Enterprise Server 11 SP3 kernel
Ubuntu 12.04, 3.2.
2. Usage
2.1. Configuration
The Classification and Filtering feature is configured by specifying the filter selection combination set in the firmware configuration file (t5-config.txt for T5; t4-config.txt for T4) located in /lib/firmware/cxgb4/. The following combination is set by default, and packets will be matched accordingly:
i. For T5: filterMode = srvrsram, fragmentation, mpshittype, protocol, vlan, port, fcoe
ii.
iii. Now, create filter rules using cxgbtool:
[root@host]# cxgbtool ethX filter index action [pass|drop|switch]
Where:
ethX : Chelsio interface
index : positive integer set as the filter id
action : ingress packet disposition
pass : ingress packets will be passed through the set ingress queues
switch : ingress packets will be routed to an output port with optional header rewrite
drop : ingress packets will be dropped
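Following the syntax above, and modeled on the examples that appear later in this chapter, a couple of illustrative LE-TCAM rules (the interface name, filter ids and addresses are example assumptions):

```shell
# Drop all ingress packets arriving from foreign (source) IP 102.1.1.1,
# using filter index 0, with hit counting enabled
cxgbtool eth0 filter 0 fip 102.1.1.1 hitcnts 1 action drop

# Pass packets destined to local IP 102.1.1.2 through the ingress queues
cxgbtool eth0 filter 1 lip 102.1.1.2 hitcnts 1 action pass
```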
For offloaded ingress packets, use the prio argument with the above command:
[root@host]# cxgbtool ethX filter action prio 1
Note: For more information on additional parameters, refer to the cxgbtool manual by running the man cxgbtool command.
2.3. Listing Filter Rules
To list the filters set, run the following command:
[root@host]# cxgbtool ethX filter show
2.4.
2.5. Layer 3 example
Here's an example of how to achieve L3 routing functionality. A machine with a T5 adapter acts as the router: its eth0 (102.1.1.250/24) faces Node 1 (eth0 aliases 102.1.1.1/24, 102.1.1.2/24, 102.1.1.3/24) and its eth1 (102.1.2.250/24) faces Node 2 (eth0 aliases 102.1.2.1/24, 102.1.2.2/24, 102.1.2.3/24).
i. Follow these steps on Node 1. Configure IP addresses and enable the 3 interfaces:
[root@host]# ifconfig eth0 102.1.1.1/24 up
[root@host]# ifconfig eth0:2 102.1.1.2/24 up
[root@host]# ifconfig eth0:3 102.1.1.
ii. Set up a static or default route towards the T5 router to reach the 102.1.2.0/24 network:
[root@host]# route add -net 102.1.2.0/24 gw 102.1.1.250
i. Follow these steps on Node 2. Configure IP addresses and enable the 3 interfaces:
[root@host]# ifconfig eth0 102.1.2.1/24 up
[root@host]# ifconfig eth0:2 102.1.2.2/24 up
[root@host]# ifconfig eth0:3 102.1.2.3/24 up
[root@host]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:07:43:12:D4:88 inet addr:102.1.2.
i. Follow these steps on the machine with the T5 adapter. Configure IP addresses and enable the 2 interfaces:
[root@host]# ifconfig eth0 102.1.1.250/24 up
[root@host]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:07:43:04:96:40 inet addr:102.1.1.250 Bcast:102.1.1.255 Mask:255.255.255.
iii. Create a filter rule to send packets for the 102.1.1.0/24 network out via the eth0 interface:
[root@host]# cxgbtool eth0 filter 1 lip 102.1.1.0/24 hitcnts 1 action switch eport 0 smac 00:07:43:04:96:40 dmac 00:07:43:04:7D:50
Where smac is the MAC address of the eth0 interface on the T5 adapter machine and dmac is the MAC address of the eth0 interface on Node 1.
2.6. Layer 2 example
Here's an example of how to achieve L2 switching functionality.
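For traffic in the opposite direction a mirror-image rule is needed; the sketch below follows the same pattern. The filter id and the smac value (the T5 machine's eth1 address) are assumptions, not taken from this manual; the dmac shown is Node 2's eth0 address from the example above:

```shell
# Switch packets destined for the 102.1.2.0/24 network out via eport 1,
# rewriting smac to the T5 machine's eth1 MAC (hypothetical value) and
# dmac to Node 2's eth0 MAC
cxgbtool eth0 filter 2 lip 102.1.2.0/24 hitcnts 1 action switch eport 1 \
    smac 00:07:43:04:96:48 dmac 00:07:43:12:D4:88
```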
i. Follow these steps on Node 1. Configure the IP address and enable the interface:
[root@host]# ifconfig eth0 102.1.1.1/24 up
[root@host]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:07:43:04:7D:50 inet addr:102.1.1.1 Bcast:102.1.1.255 Mask:255.255.255.
ii. Set up an ARP entry to reach 102.1.1.1:
[root@host]# arp -s 102.1.1.1 00:07:43:04:7D:50
i. Follow these steps on the machine with the T5 adapter. Update the filterMode value with the combination below in /lib/firmware/cxgb4/t5-config.txt to enable matching based on macidx:
filterMode = fragmentation, macmatch, mpshittype, protocol, tos, port, fcoe
ii.
iii. Unload and re-load the cxgb4 driver.
3. Hash/DDR Filters
The default (Unified Wire) configuration tuning option allows you to create LE-TCAM filters, which have a limit of 496 filter rules. If you wish to create more, select the T5 Hash Filter configuration tuning option during installation, which allows you to create Hash/DDR filters with a capacity of ~5 million filter rules.
Note: Creating Hash/DDR filters is currently supported only on T5 adapters.
3.1.
Note: "source IP", "destination IP", "source port" and "destination port" are mandatory parameters. The "cap maskless" parameter should be appended in order to create Hash/DDR filter rules; otherwise the above command will create LE-TCAM filter rules. The filter index provided when creating a DDR filter is ignored.
3.1.1. Examples
Drop action
[root@host]# cxgbtool ethX filter 496 action drop lip 102.1.1.1 fip 102.1.1.
To list both the LE-TCAM and Hash/DDR filters set, run the following command:
[root@host]# cxgbtool ethX filter show
3.3. Removing Filter Rules
To remove a filter, run the following command with the cap maskless parameter and the corresponding filter rule index:
[root@host]# cxgbtool ethX filter index cap maskless
Note: The filter rule index can be determined by referring to the "hash_filters" file located in /proc/drivers/cxgb4//.
3.5. Hit Counters
For LE-TCAM filters, hit counters will work simply by adding the hitcnts 1 parameter to the filter rule. However, for Hash/DDR filters, you will have to make use of the tracing feature and RSS queues. Here's a step-by-step guide to enable hit counters for Hash/DDR filter rules:
i. Enable tracing on the T5 adapter:
[root@host]# cxgbtool ethX reg 0x09800=0x13
ii.
iii. Configure the RSS queue corresponding to the trace0 filter configured above. Determine the RspQ ID of the queues by looking at the Trace QType in the /sys/kernel/debug/cxgb4//sge_qinfo file.
[root@host]# cxgbtool ethX reg 0x09808=
iv.
XIX. Traffic Management
1. Introduction
Traffic Management capabilities built into Chelsio T5/T4 CNAs can shape transmit data traffic through the use of sophisticated queuing and scheduling algorithms built into the ASIC hardware. This provides fine-grained software control over latency and bandwidth parameters such as packet rate and byte rate. These features can be used in a variety of data center application environments to solve traffic management problems.
1.2. Software Requirements
1.2.1. Linux Requirements
Currently the Traffic Management feature is available for the following versions:
Redhat Enterprise Linux 5 update 9 kernel
Redhat Enterprise Linux 5 update 10 kernel
Redhat Enterprise Linux 6 update 4 kernel
Redhat Enterprise Linux 6 update 5 kernel
Suse Linux Enterprise Server 11 SP1 kernel
Suse Linux Enterprise Server 11 SP2 kernel
Suse Linux Enterprise Server 11 SP3 kernel
Ubuntu 12.04, 3.2.0-23
Ubuntu 12.04.2, 3.5.
2. Software/Driver Loading
Traffic Management can be performed on non-offloaded connections as well as on offloaded connections. The drivers must be loaded by the root user. Any attempt to load the drivers as a regular user will fail.
3. Software/Driver Unloading
Reboot the system to unload the driver.
4. Software/Driver Configuration and Fine-tuning
4.1. Traffic Management Rules
Traffic Management supports the following types of scheduler hierarchy levels, which can be configured using the cxgbtool utility:
i. Class Rate Limiting
ii. Class Weighted Round Robin
iii. Channel Rate Limiting
4.1.1. Class Rate Limiting
This scheduler hierarchy level can be used to rate limit individual traffic classes or individual connections (flows) in a traffic class.
4.1.2. Class Weighted Round Robin
Incoming traffic flows from various applications can be prioritized and provisioned using a weighted round-robin scheduling algorithm. Class weighted round robin can be configured using the following command:
[root@host]# cxgbtool sched-class params type packet level cl-wrr channel class weight
Here, ethX is the Chelsio interface, and channel is the port on which data is flowing (0-3).
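As an illustration of the syntax above (the interface name, channel, class indices and weight values are assumptions, not from this manual), two classes could be given a 2:1 scheduling ratio like this:

```shell
# Give traffic class 1 twice the round-robin weight of class 2 on port 0
# (illustrative values; adjust to your configuration)
cxgbtool eth0 sched-class params type packet level cl-wrr channel 0 class 1 weight 50
cxgbtool eth0 sched-class params type packet level cl-wrr channel 0 class 2 weight 25
```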
4.2. Configuring Traffic Management
4.2.1. For Non-offloaded connections
Traffic Management of non-offloaded connections is a 2-step process. In the first step, bind connections to the indicated NIC TX queue using the tc utility from the iproute2-3.9.0 package. In the second step, bind the indicated NIC TX queue to the specified TC scheduler class using the cxgbtool utility.
1. Bring up the interface:
[root@host]# ifconfig ethX up
2.
Both methods are described below:
Applying a COP policy
1. Bring up the interface:
[root@host]# ifconfig ethX up
2. Create a new policy file (say new_policy_file) and add the following line to associate connections with the given scheduling class, e.g.:
src host 102.1.1.1 => offload class 0
The above example will associate all connections originating from IP address 102.1.1.1 with scheduling class 0.
3.
1. Determine the TCP socket file descriptor in the application through which data is sent.
2. Declare and initialize a variable in the application:
int cl=1;
Here, cl is the TCP traffic class (scheduler-class-index) that the user wishes to assign the data stream to. This value needs to be in the range of 0 to 7. The application will function according to the parameters set for that traffic class.
3.
5. Usage
5.1. Non-Offloaded Connections
The following example demonstrates the method to rate limit all TCP connections on class 0 to a rate of 300 Mbps for non-offloaded connections:
1. Load the network driver in NIC mode:
[root@host]# modprobe cxgb4
2. Bind connections with destination IP address 192.168.5.3 to NIC TX queue 3:
[root@host]# tc qdisc add dev eth0 root handle 1: multiq
[root@host]# tc filter add dev eth0 parent 1: protocol ip prio 1 u32 ip dst 192.168.5.
2. Create a new policy file (say new_policy_file) and add the following line to associate connections with the given scheduling class:
src host 102.1.1.1 => offload class 0
3. Compile the policy file using COP:
[root@host]# cop -d -o
4. Apply the COP policy:
[root@host]# cxgbtool eth0 policy
5.
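The compile and apply arguments are elided above; a sketch with hypothetical file names (new_policy.bin and new_policy_file are example names, not from this manual) would be:

```shell
# Compile the plain-text policy into a binary policy file
# (input and output file names are hypothetical examples)
cop -d -o new_policy.bin new_policy_file

# Apply the compiled policy to the Chelsio interface
cxgbtool eth0 policy new_policy.bin
```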
XX. Unified Wire Manager (UM)
1. Introduction
Chelsio's Unified Wire Manager is a powerful management software tool, allowing you to view and configure different aspects of the system, including Chelsio hardware installed in the system. The software includes a command line interface (CLI) tool and a web management interface (Web GUI) to help you manage all Chelsio network adapters on the network across multiple operating systems.
1.2. Reference Architecture
Chelsio's Web GUI is a web-based management interface that lets you remotely manage several Chelsio CNAs from anywhere, at any time on the network using a web browser. The Web GUI provides a great amount of flexibility, efficiency and accessibility to system administrators in managing Network and SAN resources.
properties. You can use either the CLI or Web GUI client to manage agents based on your preference. It makes service requests based on the command issued by the user and returns the appropriate information.
CLI Client
The CLI Client (chelsio_uwcli) is an executable binary which allows you to manage and configure agents using the command-line interface.
2. Hardware and Software
2.1.
2.2. Platform/Component Matrix
The table below lists the Linux distributions and the supported UM components.
Distribution : Supported UM Components
RHEL 6.4, 2.6.32-358.el6 (64-bit) : Management Agent, Management Client, Web Management Interface
SLES11SP2, 3.0.13-0.27 (64-bit) : Management Agent, Management Client, Web Management Interface
2.3. Platform/Driver Matrix
The table below lists the Chelsio drivers and their supported versions:
Chelsio driver : Version
NIC : T3: 2.
3. Installing Unified Wire Manager
Chelsio Unified Wire has been designed to install Unified Wire Manager (UM) by default. All three UM components, i.e. the Management Agent, Client and Station, will be installed on selecting any of the Terminator 4/Terminator 5 configuration tuning options during installation. Hence, no separate installation is required.
4. Verifying UM components status
The following section explains how to verify the status of various UM components.
4.1. Verifying Management Agent
1. Execute the following query command:
[root@chelsio]# ps -eaf | grep UW
The above query should confirm that the Management Agent is running by displaying a similar result: a process list showing ./UWMgrServer and the related UW agent processes.
4.2. Verifying Management Client
Execute the following query command to determine if the Management Client is installed:
[root@host]# chelsio_uwcli -V
The above query should confirm that the Management Client is installed by displaying a similar result:
Unified Manager client CLI version : 2.x.yy
4.3. Verifying Management Station
Execute the following query command to determine the status of the Management Station:
[root@host]# /etc/init.
5. Management Agent
5.1. Communication
The agent uses a TCP connection over IP to communicate with the client. After the connection is established, SSL (Secure Sockets Layer) encryption is enabled using the OpenSSL libraries. The agent listens on a TCP port for new incoming connections from clients. This port is set to 35001 by default. It may be changed in the configuration file for the agent. The agent needs to be restarted after the change.
5.2.
5.3.2. Service start/stop/restart
You can start, stop or restart the service by using the following command:
[root@host]# /etc/init.d/chelsio-uwire_mgmtd [start|stop|restart]
5.4. Firewall
If the system has a firewall configured, such as iptables, it should be configured to allow traffic to the management agent TCP port configured above in the configuration section, or the default port that the management agent uses, 35001.
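As an illustration, an iptables rule opening the default agent port might look like the following. The chain placement and persistence command are assumptions; adapt them to the local firewall policy:

```shell
# Allow inbound TCP connections to the management agent's default port
iptables -A INPUT -p tcp --dport 35001 -j ACCEPT

# Persist the change (mechanism varies by distribution)
service iptables save
```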
6. CLI client
6.1. CLI Help system
Detailed help and usage documentation is built into the CLI and is accessible through its help system. The help can be invoked by the usual argument of /? or --help.
6.1.1. Viewing help
Use the chelsio_uwcli command to view the help file as shown below:
[root@host]# chelsio_uwcli /?
6.2. Client conflict resolution
The CLI and Web GUI cannot manage the same system at the same time by default.
7. Web GUI client
7.1. Management Station
In order to access the Web Management Interface, the Apache HTTP server should be installed and running on the machine. Also, cookies and JavaScript must be enabled in the browser.
7.1.1. Running Management Station on RHEL 6.x
1. Start/Restart the Apache httpd daemon:
[root@host]# service httpd [start|restart]
2. Start/Restart the Management Station:
[root@host]# /etc/init.d/chelsio-mgmtstd [start|restart]
7.1.2.
3. Start/Restart the Management Station:
[root@host]# /etc/init.d/chelsio-mgmtstd [start|restart]
7.2. Accessing Web Management Interface
1. To access the Web GUI, type in the URL https:// in a web browser.
2. The security certificate used by the web server is a generic one. It may cause the following types of prompts in different browsers. You will need to select the correct option to continue.
Figure 7.
Figure 7.2 (b) - Security Certificate prompt in Mozilla Firefox
Figure 7.2 (c) - Security Certificate prompt in Apple Safari
Figure 7.
3. The web interface requires password authorization to be accessed. Enter the username and corresponding password that was set up on the management station system and click on the Login button.
Figure 7.2 (e) - Web GUI Login page
Note: Not performing any operation/action for 5 minutes will result in session timeout. You will have to re-login and connect to the Agents again.
7.3. Layout and Navigation
The Web Management Interface consists of the following:
Title bar displaying the username on the left, the Unified Wire Manager logo and name in the centre, and a Logout button on the right.
Menu bar consisting of the Home, Add System, Remove System, Refresh, Subscribe and Bulk Configuration buttons.
The navigation pane with a cascading tree of links to various configuration modules for a UM Agent.
XX. Unified Wire Manager (UM) 7.4. Home page The home page is displayed by default on launching the Web GUI. It displays Bookmarks and History, Service Discovery and Bulk Driver Installation modules. Options to go back to home page, add/remove system, refresh and configure email alerts are also available. 7.4.1. Home This option will display the home page. Bookmarks and History A history of the last 128 systems that were managed from this system, by the current user, will be shown here in a list.
XX. Unified Wire Manager (UM) Connecting to a system Select the system from the Bookmark list and click Connect. Once successfully connected, the system will appear on the left pane with different related modules on the right to view and manage. Removing a system Select the system from the Bookmark list and click Delete system to remove it. Note Once removed, the system will no longer appear in the Bookmarks and History module.
XX. Unified Wire Manager (UM) Service Discovery Using this module, all the Unified Wire Manager agents connected in the same or different subnet can be discovered. One can choose to discover agents based on OS type or search for a particular agent if the agent's IP or hostname is known. Select the appropriate discovery method and provide the relevant information. For example, to search using hostname, select Hostname as the Input Type and provide the agent's hostname in the Search for Hostname/IP field.
XX. Unified Wire Manager (UM) Bulk Driver Installation This module allows you to install drivers for multiple systems simultaneously. Drivers available for installation for a particular system may differ depending on the network adapter (T5, T4 or T3) and operating system selected. Installing Driver 1. In the Choose the card fields, select T3 or T4/T5 depending on the chip revision of the network card. 2. Select the operating system for which drivers are to be installed in the Choose the OS Type field.
XX. Unified Wire Manager (UM) Note Agents that report errors or with incorrect login credentials will be automatically skipped during the driver installation. 7.4.2. Add System Use this option to connect to new Agents using their IP or Hostname. You can enter the TCP port for connection or leave it at its default value (35001). You will have to provide correct user credentials for the agent in order to connect successfully.
XX. Unified Wire Manager (UM) 7.4.3. Remove System Use this option to disconnect an Agent. To remove an agent, click on the name of the system in the tree menu on the left and click Remove System. Then click Yes to confirm. Figure 7.4.3 - Removing a UM Agent 7.4.4. Refresh This option can be used to reload the Web GUI or UM Agent. To reload the Web GUI, navigate to the Home page (by clicking on the “Home” button) and click Refresh.
XX. Unified Wire Manager (UM) Figure 7.4.5 - Subscribing to Email Alerts 7.4.6. Bulk Configuration The Bulk Configuration page allows you to execute common configuration changes to multiple agents and their network adapters simultaneously. You can conveniently perform bulk operations like installing option ROM, setting MTU and VLAN ID, changing adapter and port parameters on various devices, without having to access multiple modules, thus saving a considerable amount of administration time.
XX. Unified Wire Manager (UM) Before accessing these modules, you will have to create groups and then add members to that group. Once done, you can select the group in the modules and the new setting will be applied to all members of that particular group. Manage Groups This is where you can add, delete and manage groups. Use the Create a Group section to create a group by specifying the agent’s platform and group type.
XX. Unified Wire Manager (UM) Figure 7.4.6 (b) - Managing a group Boot Configuration Using this module, you can install option ROM or erase option ROM on Chelsio network devices. The Set Default Boot Settings button will reset the adapter to factory boot settings. Figure 7.4.
XX. Unified Wire Manager (UM) Network Configuration In the Network Configuration module, you can set the Maximum Transmission Unit (MTU), Virtual LAN (VLAN) ID and change the IP address type for the members (network interfaces) of the Network group. MTU can be set between 1500-9000 bytes. VLAN ID can be set for an adapter within the range 0-4094 (enter 0 to disable it). Figure 7.4.
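The ranges above can be expressed as simple checks. This is an illustrative sketch, not UM code; it mirrors the limits the GUI enforces (MTU 1500-9000 bytes, VLAN ID 0-4094, where 0 disables VLAN tagging):

```shell
#!/bin/sh
# Illustrative range checks (not part of UM) matching the documented limits.
valid_mtu()  { [ "$1" -ge 1500 ] && [ "$1" -le 9000 ]; }
valid_vlan() { [ "$1" -ge 0 ]    && [ "$1" -le 4094 ]; }
```

A wrapper script could call `valid_mtu "$mtu"` before applying a setting, rejecting out-of-range values the same way the module does.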
XX. Unified Wire Manager (UM) Card Configuration The Card Configuration module allows you to set various adapter settings including TCP Offload. Offload settings are only available when using the TOE capable drivers (t4_tom and toecore for T5 and T4 adapters; t3_tom and toecore for T3 adapters). Figure 7.4.
XX. Unified Wire Manager (UM) Port Configuration In the Port Configuration module, you can set various port settings like enabling Tx checksum and TCP segmentation offload, setting Link speed and link duplex mode, etc. The settings depend on the device driver installed. Figure 7.4.
XX. Unified Wire Manager (UM) Bypass Configuration Use the Bypass Configuration module to configure Chelsio’s bypass adapters like B420-SR and B404-BT. For more information on different bypass modes and configuration parameters, see the Bypass Driver chapter. Figure 7.4.
XX. Unified Wire Manager (UM) 7.5. System page The system page is displayed when the system hostname / IP address is selected in the tree menu on the left. On adding a system, this item is automatically selected, and this page is displayed. The system page contains generic system and support modules which are discussed below: 7.5.1. System Summary This module lists the system Hostname, Operating System, platform and also gives the count of the Chelsio cards found. Figure 7.5.
XX. Unified Wire Manager (UM) 7.5.2. Drivers Installation Using this module, one can install various Chelsio drivers for different operating systems. You can choose the configuration file type (Linux Agents only). Figure 7.5.
XX. Unified Wire Manager (UM) Figure 7.5.
XX. Unified Wire Manager (UM) 7.5.3. Driver Details A list of Chelsio device drivers with related information like driver description, version, current load status and installation date is shown in this module. To load or unload a particular driver, select the appropriate option (Yes to load, No to unload) in the corresponding cell of the Loaded column. To reload a driver, select Reload. Finally, click the Load/Unload Driver button. Click Refresh if changes are not reflected immediately.
XX. Unified Wire Manager (UM) 7.5.4. System Diagnostics Using this module, you can run various diagnostic tests on Chelsio adapters to troubleshoot adapter related issues. Select the adapter(s) from the list for which you want to run the test, select the operation (type of test; you can run more than one test at a time) and click Run Test. After the tests are completed, the results will be displayed in a tabular format. Figure 7.5.
XX. Unified Wire Manager (UM) 7.5.5. Unified Wire Manager Component Versions A list of the Unified Wire Manager agent components installed on the managed system is shown in this module. The versions of the components are useful in case of reporting an issue to support. Figure 7.5.5 - Unified Wire Manager Component Versions module 7.5.6. KVM Configuration (Linux) This module allows you to enable or disable KVM related operations.
XX. Unified Wire Manager (UM) ii. Next, reload them using modprobe: [root@host]# modprobe kvm allow_unsafe_assigned_interrupts=1 [root@host]# modprobe [root@host]# modprobe cxgb4 Loading the kvm module with the allow_unsafe_assigned_interrupts=1 option enables use of device assignment without interrupt remapping support. This is required in order to assign VFs to VMs. iii. Finally, access the Web GUI.
XX. Unified Wire Manager (UM) You can perform similar actions on multiple virtual machines. To do so, click on the machine names in the list. The properties box will display the domain state of the machines selected. Now, click on any of the system power actions provided at the bottom. Figure 7.5.7 (a) – VM Configurations module Figure 7.5.
XX. Unified Wire Manager (UM) 7.5.8. VF Configurations (Linux) The VF Configurations module lists all the VMs, Virtual Functions mapped to each Virtual Machine and all the available VFs. You can also add and remove VFs for a particular VM. Figure 7.5.
XX. Unified Wire Manager (UM) 7.5.9. Xen Configurations The Xen Configurations module allows you to view UUID, power state of Virtual Machines and Virtual Functions assigned to them. You can perform various system power options like start, resume (if VM is paused), turn off, restart or suspend (pause) a VM. You can perform similar actions on multiple virtual machines. To do so, click on the machine names in the list. The properties box will display the power state of the machines selected.
XX. Unified Wire Manager (UM) 7.5.10. Xen VF Properties Here you can view the list of virtual machines and list of available VFs. To assign a VF to a VM, select the guest name on the left and select the VF to be assigned on the right. You can assign more than one VF at a time. Finally, click “Assign VF” to add the selected VFs to the hosts. To enable SR-IOV support, IOMMU must be enabled. To do this, select “Enable” for IOMMU and then click “Set IOMMU”. Reboot the host machine for changes to take effect.
XX. Unified Wire Manager (UM) Figure 7.5.10 (b) – Xen VF Properties module: Adding Virtual Functions 7.5.11. Managed system application logs The management agent logs its activities and any errors that occur in /var/log/chelsio on Linux and FreeBSD, and in the Event Log on Windows. These logs can be viewed in this module. Only 20 entries can be obtained and viewed at a time. Logs can be viewed by either choosing from a list of fixed ranges or by specifying a custom starting point.
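The module's 20-entries-at-a-time paging can be sketched from a shell. This is illustrative only, not UM code; the function name is our own, and on Linux/FreeBSD agents the log file lives under /var/log/chelsio:

```shell
#!/bin/sh
# Illustrative sketch (not part of UM): print log entries START..START+19
# of a file, i.e. one 20-entry page, mirroring the module's paging.
log_page() {
    file="$1"; start="$2"
    tail -n "+$start" "$file" | head -n 20
}
```

For example, `log_page /var/log/chelsio/agent.log 21` would print the second page of 20 entries (the file name here is a hypothetical example).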
XX. Unified Wire Manager (UM) 7.6. Network page 7.6.1. Network summary The Network Summary module provides the total number of Chelsio adapters present, including the number of T5, T4 and T3 adapters. It also provides the total number of Network interfaces including corporate and Chelsio interfaces and VLANs. Figure 7.6.1 (a) – Network Summary module 7.6.2. Chelsio card page When a Chelsio card is selected in the tree menu on the left, this page is displayed.
XX. Unified Wire Manager (UM) Figure 7.6.
XX. Unified Wire Manager (UM) TCP Offload settings (Linux & FreeBSD) The TCP offload settings applicable to the card are shown here. These settings are only available when using the TOE capable drivers (t3_tom and toecore for T3 adapters; t4_tom and toecore for T4 and T5 adapters). After changing the settings, the new values may not be reflected immediately when the data is refreshed.
XX. Unified Wire Manager (UM) Figure 7.6.2 (c) - TCP Offload Settings module for a FreeBSD Agent Device Driver settings (Windows) The device driver settings applicable to the card are shown here. For Chelsio T5 and T4 adapters, only the MaxVMQueues field will be displayed. After changing the settings, the new values may not be reflected immediately when the data is refreshed.
XX. Unified Wire Manager (UM) Card statistics Certain statistics are maintained on a per card basis (instead of a per port basis), since the card has a TCP/IP offload capability. The statistics are for TCP and IP protocol processing done in the card's hardware. These statistics may only be applicable if the card is TOE enabled. Figure 7.6.
XX. Unified Wire Manager (UM) 7.6.2.1. Chelsio card's port The port page is displayed on selecting a port of a Chelsio card listed in the tree menu on the left. It provides details of the port and port settings. It also displays any port specific statistics that the hardware provides. The modules available on this page are as below: Port summary Port details such as the Ethernet adapter name, link status, etc. are shown in this module. Figure 7.6.2.
XX. Unified Wire Manager (UM) Port settings Port settings such as MTU, Link speed and others can be set in this module. The settings depend on the device driver installed. Figure 7.6.2.
XX. Unified Wire Manager (UM) Port statistics Ethernet statistics and additional hardware statistics for the port are displayed in this module. Figure 7.6.2.1 (c) - Port Statistics of T4/T5 CNA on Linux Agent 7.6.3. Networking Management page The system networking and teaming / bonding configurations are shown on this page. IP addresses, MTU, VLAN IDs, DNS and default gateway settings can be viewed and modified here. Network adapters can also be enabled or disabled as required.
XX. Unified Wire Manager (UM) addresses or aliases for the specified adapter. Use the option to add additional IP addresses with caution, since multiple IP addresses configured on the same adapter, for the same network, may result in unpredictable behavior of the system's networking stack. Maximum Transmission Unit (MTU) can be set between 1500-9000 bytes. VLAN ID can also be set for an adapter within the range 0-4094 (enter 0 to disable it).
XX. Unified Wire Manager (UM) Figure 7.6.
XX. Unified Wire Manager (UM) System network statistics Using this module, one can generate reports based on Throughput pkts/sec and Throughput Mbs (Receive, Transmit, Bi-direction) in Table and Graph format for a network adapter. A report for hardware statistics can be generated based on different parameters, only in the Table view in the Advanced NIC characteristics. The polling time field sets the average time (in seconds) based on which the table/graph updates the report. Figure 7.6.
XX. Unified Wire Manager (UM) Figure 7.6.3 (d) - Network Throughput Vs Time instant Graph Figure 7.6.
XX. Unified Wire Manager (UM) Default Gateway and DNS configuration The DNS servers list can be set here. The default gateway for remote networks and the Internet can also be set here. On Linux and FreeBSD, only one default gateway is allowed. On Windows, you may set multiple default gateways. Use the option to set multiple default gateways with caution, since it may cause the system to stop communicating with external networks. Figure 7.6.
XX. Unified Wire Manager (UM) Create a network team/bond device (Linux and FreeBSD) A list of regular network adapters is provided here, to create a Network Team / Bond device. The available modes for the team depend on the OS teaming / bonding driver in use. On Linux the team may be created with a DHCP or Static IP address. On Windows, only DHCP is allowed when creating the team, although both DHCP and Static IP addressing is supported for the team adapter, after it is created successfully.
XX. Unified Wire Manager (UM) Network troubleshooting This module allows detecting and troubleshooting various network connectivity issues. The Ping utility helps to contact a system by specifying IP address, Number of ICMP packets to send and packet timeout. The result of the ping can be viewed by clicking on the Ping Result button. Using TraceRoute one can determine the route taken by packets across an IP network. Use the GetConnections utility to view currently active TCP/UDP connections.
XX. Unified Wire Manager (UM) Figure 7.6.
XX. Unified Wire Manager (UM) Figure 7.6.3 (j) - GetConnections Utility 7.6.3.1. Hypervisor Xen Bridge Configuration The Xen Bridge Configuration module allows you to view and manage network bridges, virtual interfaces (vifs) and virtual machines to which those virtual interfaces are assigned. The left pane displays a list of different bridges created. Clicking on a bridge name will display related properties on the right.
XX. Unified Wire Manager (UM) Figure 7.6.3.1 (a) – Xen Bridge Configuration module Bridge Configuration (Linux) The Bridge Configuration module allows you to view and manage network bridges, virtual network interface (vnets) and virtual machines to which those virtual network interfaces are assigned. The left pane displays a list of different bridges created. Clicking on a bridge name will display related properties on the right.
XX. Unified Wire Manager (UM) Figure 7.6.3.1 (b) – Bridge Configuration module (Linux) Virtual Network Configuration (Linux) Using the Virtual Network Configuration module, you can create network bridges and attach them to virtual machines. You can also assign physical interfaces on the host to bridges. To create a bridge, enter a name and click “Create” in the “Create Bridge” section. All other parameters are optional. If not specified, the bridge will be created with default values.
XX. Unified Wire Manager (UM) Figure 7.6.3.1 (c) – Creating Bridge Figure 7.6.3.1 (d) – Adding Bridge to VM Figure 7.6.3.
XX. Unified Wire Manager (UM) Virtual Network Configuration (Xen) Using the Virtual Network Configuration module, you can create network bridges. You can also create and attach virtual interfaces to them. To create a bridge, enter a label for the bridge and click “Create”. The MTU and Name Description fields are optional. The bridge name will be generated automatically by the operating system. Once created, it will appear in the Xen Bridge Configuration module.
XX. Unified Wire Manager (UM) Virtual Switch Configuration (Windows) This module allows you to view and manage virtual networks. The left pane displays a list of different virtual networks created. Clicking on a virtual network name will display related properties on the right. If a virtual network is added to a virtual machine, a “+” link appears next to the virtual network name. Expanding the “+” link will display the virtual machines to which the network is attached.
XX. Unified Wire Manager (UM) Private Network: A Private Network is similar to Internal Network in that physical adapter is not required for setup and access to external networks is not provided. However, unlike Internal Network, guest operating systems can only communicate with guest operating systems in the same private network and not with the host. The host operating system cannot access the virtual machines on private network.
XX. Unified Wire Manager (UM) Virtual Switch Settings (Windows) To attach a virtual network to a virtual machine, select the virtual network from the Virtual Network list and the virtual machine from the VM list. Finally click Attach. Figure 7.6.3.1 (l) – Attaching virtual network to VM (Windows) 7.6.4. iWARP iWARP Settings On Linux Agents, iWARP parameter settings for Chelsio's RDMA capable NICs can be set using this module.
XX. Unified Wire Manager (UM) Figure 7.6.
XX. Unified Wire Manager (UM) Figure 7.6.
XX. Unified Wire Manager (UM) 7.6.5. Wire Direct WD-UDP Process Statistics & Attributes The WD-UDP module lists the process ids (pid) of UDP traffic running on the agent and displays the corresponding statistics and attributes. Note Please ensure that WD-UDP traffic is running on the agent before accessing this module. Figure 7.6.
XX. Unified Wire Manager (UM) Figure 7.6.
XX. Unified Wire Manager (UM) WD-TOE Process Statistics & Attributes The WD-TOE module lists the process ids (pid) of TOE traffic running on the agent and displays the corresponding statistics and attributes. Note Please ensure that WD-TOE traffic is running on the agent before accessing this module. Figure 7.6.5 (c) – WD-TOE Process Statistics Figure 7.6.
XX. Unified Wire Manager (UM) 7.7. Storage Page Storage Summary The Storage module lists the status of the configuration modules under the Storage section running on the agent. Figure 7.7 – Storage Summary Module 7.7.1. FCoE Initiator (Linux, Windows, XenServer) All supported Chelsio FCoE initiators available on the operating system can be managed from this page. FCoE support is extended on Linux, Windows and XenServer platforms.
XX. Unified Wire Manager (UM) 7.7.1.1. FCoE Initiator Card FCoE Card Summary Details pertaining to the card used such as model, firmware/hardware version etc, are provided in this module. Figure 7.7.1.1 (a) – FCoE Card Summary module FCoE Attributes Information such as Interrupt modes (MSI/MSI-X/INTx), SCSI mode and the card state are provided in this module. Figure 7.7.1.
XX. Unified Wire Manager (UM) 7.7.1.2. FCoE Port This is an actual N_Port which communicates with the fabric and performs FIP and FCoE device discovery. This page lets the user retrieve all the FCoE-specific port information and also extends NPIV management support. It contains the following sections: FCoE Port Summary The SCSI adapter name and the underlying ENODE MAC address of the physical port can be found here. Figure 7.7.1.
XX. Unified Wire Manager (UM) FCoE Port Attributes This module provides details about link status and port identifiers such as WWPN, WWNN, FC ID and NPort MAC Address. The module also contains fabric information such as fabric name, VLAN on which the FCoE service is currently running and the number of SCSI targets that are being discovered by this port. Port speed being mentioned in this section varies on the card type (10G/1G) being used.
XX. Unified Wire Manager (UM) FCoE NPIV management NPIV is a fibre channel facility allowing multiple N_Port IDs to share a single physical N_Port. This module allows the user to manage virtual ports on the corresponding FCoE Port. To create a virtual port, select the option Create and the GUI allows two ways of creating a virtual port. i. Manual: Where the user can manually create a virtual port by providing a value to the WWPN and WWNN fields. ii.
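For the manual creation path, the WWPN and WWNN values are 8-byte World Wide Names, commonly written as colon-separated hex pairs. The check below is an illustrative sketch, not UM code, and assumes that colon-separated notation (the sample WWN is a made-up example):

```shell
#!/bin/sh
# Illustrative check (not part of UM): accept an 8-byte WWN written as
# eight colon-separated hex pairs, e.g. 10:00:00:00:c9:00:00:01.
valid_wwn() {
    echo "$1" | grep -Eq '^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$'
}
```

A front-end could run `valid_wwn "$wwpn" && valid_wwn "$wwnn"` before submitting a manually entered virtual port.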
XX. Unified Wire Manager (UM) 7.7.1.3. FCoE Remote Port Remote ports are the SCSI targets that are discovered by their respective N_port/virtual ports. The GUI conveys the same via a tree structure so that the end user knows the initiator-target mapping. FCoE Remote Port Attributes This module provides details about the discovered target such as target’s FC ID, WWPN and WWNN so that the user can identify the discovered target accordingly. Figure 7.7.1.
XX. Unified Wire Manager (UM) FCoE Remote Port Lun Details This module provides the LUN information such as size of the LUN, SCSI address, and LUN address. For Linux, the SCSI address is displayed in H:C:T:L (Host:Channel:Target:Lun) format and for Windows, it is displayed in P:B:T:L(SCSI Port:Bus:Target:Lun) format. Figure 7.7.1.
XX. Unified Wire Manager (UM) 7.7.1.4. FCoE Virtual Port A virtual port allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in SAN design, especially where virtual SANs are called for. The virtual ports appear under their respective N_Ports after creation and the GUI conveys it via a tree structure so that the end user knows the N_port-VN_Port mapping.
XX. Unified Wire Manager (UM) FCoE Virtual Port Attributes The module provides details about link status and port identifiers such as WWPN, WWNN, FC ID and Virtual NPort MAC Address. The module also contains fabric information such as fabric name, VLAN on which the FCoE service is currently running and the number of SCSI targets that are being discovered by this virtual port. Port speed being mentioned in this section varies on the card type (10G/1G) being used.
XX. Unified Wire Manager (UM) FCoE Remote Port Attributes This module provides details about the discovered target for the remote port associated with a virtual port. Details such as the target’s FC ID, WWPN and WWNN are provided so that the user can identify the discovered target accordingly. Figure 7.7.1.4 (c) - FCoE Remote Port Attributes module FCoE Remote Port Lun Details This module provides LUN information for the remote port associated with a virtual port.
XX. Unified Wire Manager (UM) 7.7.2. iSCSI initiator (Linux, Windows) All supported iSCSI initiators can be managed from this page. The supported initiators on Windows are Microsoft and Chelsio iSCSI initiator (T5/T4 adapters). On Linux, Open iSCSI initiator is supported. The modules available on this page are: Initiator nodes This module lists the initiator nodes / virtual adapters configured in the initiator stack.
XX. Unified Wire Manager (UM) Figure 7.7.
XX. Unified Wire Manager (UM) Figure 7.7.
XX. Unified Wire Manager (UM) Discover targets iSCSI targets can be discovered by providing the IP address and TCP port (usually 3260) of the target. In Windows, you can specify the initiator HBA to use and its IP address. The discovery operation fetches the targets found at that Portal (combination of IP address and TCP port). The discovery operation also fetches all the other Portals that the target(s) are listening on. The discovered target can be deleted if required.
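A discovery portal is the combination of IP address and TCP port, with 3260 as the usual iSCSI default when no port is given. The helpers below are an illustrative sketch, not UM code (function names are our own, and IPv4 portals are assumed, since a bare `:` split breaks on IPv6 literals):

```shell
#!/bin/sh
# Illustrative helpers (not part of UM): split an IPv4 portal spec
# "IP[:port]"; the iSCSI default port 3260 applies when none is given.
portal_port() { case "$1" in *:*) echo "${1##*:}" ;; *) echo 3260 ;; esac; }
portal_ip()   { echo "${1%%:*}"; }
```

For example, `portal_ip 10.0.0.5:3261` and `portal_port 10.0.0.5:3261` split a hypothetical portal into its address and port before the discovery request is issued.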
XX. Unified Wire Manager (UM) Figure 7.7.
XX. Unified Wire Manager (UM) Targets The iSCSI targets that have been discovered, or are currently connected, are listed here. You may login, logout and delete the target from the initiator's configuration. In Windows, for the Microsoft iSCSI initiator, connections to an already established iSCSI session can be added or deleted. For the Microsoft iSCSI initiator or the Open iSCSI initiator, you may specify the authentication details and digest settings while logging in.
XX. Unified Wire Manager (UM) 7.7.3. FO iSCSI Initiator (Linux) Full Offload iSCSI Hardware Information PCI, firmware and other adapter related details are provided in this module. Select the Chelsio adapter for which you want to view properties from the Select a T4 Card drop-down list and the module will expand to display related properties. You can also view details like link id, status, enode mac, etc. of all the ports of the selected adapter. Figure 7.7.
XX. Unified Wire Manager (UM) FO iSCSI Manage Ports Here you can configure various port settings like VLAN ID, Maximum Transmission Unit (MTU) and IP. Select a Chelsio adapter from the Select a T4 Card drop-down list and then select the port for which you want to set any of the aforementioned properties. MTU can be set between 1500-9000 bytes. VLAN ID can be set within the range 0-4094 (enter 0 to disable it). The IP type can be IPv4 (static) or DHCP.
XX. Unified Wire Manager (UM) FO iSCSI Initiator Properties In the FO iSCSI Initiator Properties module, you can configure FO iSCSI Initiator by setting different properties like enabling/disabling CHAP authentication, setting Header and Data digest, etc. Figure 7.7.
XX. Unified Wire Manager (UM) FO iSCSI Manage Instances The FO iSCSI Initiator service maintains multiple instances of a target depending on the discovery method. In this module, you can set up to 8 instances. Configurable parameters include initiator node name (IQN), alias (friendly) name, Initiator (CHAP) Username and password. Figure 7.7.
XX. Unified Wire Manager (UM) FO iSCSI Discover Details iSCSI Targets can be discovered using this module. Select a Chelsio adapter and initiator instance using which you want to discover targets. Next, provide the source (initiator) and destination (target) IP. Finally, click Discover. After successful discovery, all the discovered targets will appear in the Discovered Targets section. To view more details, click on the Target name. Figure 7.7.
XX. Unified Wire Manager (UM) FO iSCSI Session Details The FO iSCSI Session Details module can be used to log onto targets and view details of established iSCSI sessions. You can also log out from a target. Use the Login section to connect to a target. Adapter, (initiator) instance, Target Name, Source (Initiator) IP, Destination (Target) IP and Destination Port are mandatory. After providing values for these fields, click Login. By default, no authentication mechanism is used while connecting to a target.
XX. Unified Wire Manager (UM) After successful login, details of the established iSCSI session will be displayed under the Established sessions section. Select the Adapter and session id. Details of the selected session will be displayed. To end the session, click Logout. Figure 7.7.
XX. Unified Wire Manager (UM) 7.7.4. iSCSI Target (Linux) This page allows you to create new Targets and manage them (add/delete portals, add/delete LUNs, add/delete ACLs). It also provides information on Session details. Viewing and modifying Target properties is also available. The modules available on this page are as below: Target Stack Globals This module displays various global properties of a currently connected iSCSI target. Authentication priority between CHAP and ACL can be set here. Figure 7.7.
XX. Unified Wire Manager (UM) Target properties Properties such as Target name and Alias, Max Data Receive Length, Authentication mode related to a specific iSCSI target can be viewed and modified here. iSCSI targets can be started/stopped or deleted.
XX. Unified Wire Manager (UM) Figure 7.7.
XX. Unified Wire Manager (UM) Session details Details including Session ID, Initiator IQN and Connections List of all discovered and currently connected iSCSI targets are listed here. Figure 7.7.
XX. Unified Wire Manager (UM) New Target Creation New iSCSI target can be created here by specifying the Target IQN and Target Alias name. Figure 7.7.
XX. Unified Wire Manager (UM) 7.7.5. LUNs Various Logical Units created in an iSCSI Target can be managed here. The modules available on this page are as below: View/Edit iSCSI Target LUNs This module displays various Logical Units created in an iSCSI Target. Selected LUNs can be deleted. Figure 7.7.
XX. Unified Wire Manager (UM) Add LUN New LUNs can be added here by providing various parameters like Target Name, Target Device, RAM Disk Size, etc. RW (Read-Write) and RO (Read Only) are the two kinds of permissions that can be set. If RAM Disk is selected, then a minimum of 16 MB should be provided. Figure 7.7.
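The Add LUN rules above reduce to two checks. This is an illustrative sketch, not UM code, mirroring the stated constraints (permission must be RW or RO; a RAM disk must be at least 16 MB):

```shell
#!/bin/sh
# Illustrative validation (not part of UM) of the documented Add LUN rules.
valid_lun_perm()   { [ "$1" = "RW" ] || [ "$1" = "RO" ]; }  # RW or RO only
valid_ramdisk_mb() { [ "$1" -ge 16 ]; }                     # min 16 MB
```

A provisioning script could gate its LUN creation call on `valid_lun_perm "$perm" && valid_ramdisk_mb "$size_mb"`.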
XX. Unified Wire Manager (UM) 7.7.6. Portal Groups Portal details for currently connected iSCSI Targets can be viewed and added here. The modules available on this page are as below: View/Edit iSCSI Target Portals Portal List on the left displays details of the portal group on which an iSCSI target is listening and the related info is displayed on the right under Portal Details. Selected portals can be deleted. Figure 7.7.
XX. Unified Wire Manager (UM) Add Portal New Portals can be added here by choosing the specific target and Portal IP address. The Port number should be 3260. Figure 7.7.
XX. Unified Wire Manager (UM) 7.7.7. ACLs ACLs configured for currently connected iSCSI Targets can be managed here. The modules available on this page are as below: View/Edit iSCSI Target ACLs This module displays details for all the ACLs configured for an iSCSI Target. Selected ACLs can be deleted. Figure 7.7.
XX. Unified Wire Manager (UM) Add ACL New ACLs can be configured by specifying Target name, initiator IQN name, IP address and permission type. Figure 7.7.
XX. Unified Wire Manager (UM) 7.8. Hardware Features The Hardware module lists the status of the configuration modules under the Hardware Features section running on the agent. Figure 7.8 – Hardware module for a Linux Agent 7.8.1. Filtering (Linux) The Filtering feature enhances network security by controlling incoming traffic as it passes through the network interface, based on source and destination addresses, protocol, source and receiving ports, or the value of some status bits in the packet.
XX. Unified Wire Manager (UM) Figure 7.8.1(a) – T3 Filtering Configuration module Note Results for actions like adding a new filter or setting maximum filters may take some time to reflect. Highlight the system item in the tree menu on the left and click "Refresh system" to refresh data from the system, in case the updated settings are not shown. T5/T4 Filtering configuration Filtering options can be set only when the offload driver (t4_tom) is not loaded.
XX. Unified Wire Manager (UM) Figure 7.8.1(b) – T5/T4 Filtering Configuration module 7.8.2. Traffic Management (Linux) Using this page, you can add/delete/modify offload policies only in the presence of offload driver (t3_tom for T3 adapters; t4_tom for T5 and T4 adapters). Traffic Management configuration The Chelsio Card section on the left displays all the cards available in the server and their corresponding policies on the right. Policies can be added and deleted.
XX. Unified Wire Manager (UM) Figure 7.8.2 - Traffic Management Configuration module 7.8.3. Boot T4/T5 Save Config File (Linux) This module displays the current T5/T4 configuration tuning option selected. You can also change the tuning option by selecting the config file for each option located in /ChelsioUwire-x.xx.x.x/src/network/firmware. For instance, to select Low Latency Networking for a T4 adapter, locate the file t4-config.txt in /ChelsioUwire-x.xx.x.x/src/network/firmware.
XX. Unified Wire Manager (UM) Figure 7.8.3 (a) – T4/T5 Save Config File module T5/T4 Boot Option ROM management This module allows managing the PXE and FCoE boot capability for Chelsio T5 and T4 adapters. The Option ROM (PXE and FCoE) may be installed to or erased from the card. The version of Option ROM flashed can be viewed here. Figure 7.8.
XX. Unified Wire Manager (UM) T5/T4 Boot Configuration This module can be used to view and configure PXE, FCoE and iSCSI Option ROM settings for Chelsio T5 and T4 adapters. PXE physical functions and order of ports for PXE boot can be selected using the PXE option. You can also enable/disable PXE BIOS and set VLAN. The FCoE option can be used to configure FCoE Option ROM settings. Using the Function parameter, you can set port order for target discovery and discovery timeout.
XX. Unified Wire Manager (UM) Figure 7.8.3 (d) - FCoE Boot configuration for T4 CNAs: Function parameter
XX. Unified Wire Manager (UM) Figure 7.8.3 (f) - FCoE Boot configuration for T4 CNAs: Show WWPN parameter
XX. Unified Wire Manager (UM) Figure 7.8.3 (j) - iSCSI Boot configuration for T4 CNAs: Boot Devices parameter 7.8.4. Bypass You can use the Bypass page to configure various settings for Chelsio’s bypass adapters like setting bypass operation mode, creating rules (filters), starting/stopping BA server, etc. There are two modules available: Bypass Configuration and Redirect Configuration.
XX. Unified Wire Manager (UM) For more information on the different bypass modes and configuration parameters, see the Bypass Driver chapter. Figure 7.8.4 (a) - Bypass Configuration module Redirect Configuration In the Redirect Configuration module, you can set rules (filters), based on which the bypass adapter will redirect packets. You can group rules into tables. You can save the currently configured tables and rules for a bypass adapter into a shell script using the Download Configuration button.
XX. Unified Wire Manager (UM) Create table: Create a new table. The new table created will be inactive by default. Use the Activate table option to enable it. You can create up to 5 tables. In the Rules Configuration tab, you can add, delete and configure rules. Use the Add a Filter row button to add a new rule by specifying the rule id in the INDEX field and providing the required parameters. Finally, click Save Changes.
XX. Unified Wire Manager (UM) 7.8.5. T4 Egress Class Schedulers Schedulers can be set only when the T5/T4 network driver (cxgb4) is loaded. Egress Queue Map Using this module, you can bind (map) NIC (non-offloaded) Tx queues to Tx Scheduler classes. Figure 7.8.5 (a) – Egress Queue Map module Egress Packet Scheduler Using this module, you can configure the different scheduler hierarchy levels (i.e. Class Rate Limiting, Class Weighted Round Robin and Channel Rate Limiting).
XX. Unified Wire Manager (UM) 8. Uninstalling Unified Wire Manager This section describes the method to uninstall components of Chelsio Unified Wire Manager. 8.1. Uninstalling Management Agent Use the following query command to determine the name of the agent RPM: [root@host]# rpm -qa | grep chelsio-uwire_mgmt-agent Now, execute the following command with the result from the above query to uninstall Management Agent: E.g. for RHEL 6.3: [root@host]# rpm -e chelsio-uwire_mgmt-agent-rhel6u3-2.2-xyz.x86_64 8.2.
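The query-and-erase pair above can be combined into one step. A dry-run sketch: the `rpm -qa` output is simulated with a here-doc (using the example package name from this section), and the erase is echoed rather than executed.

```shell
# Find the installed agent package name, then show the erase command.
# On a real system, replace the here-doc with `rpm -qa` and drop the
# `echo` to actually erase the package.
pkg=$(grep 'chelsio-uwire_mgmt-agent' <<'EOF'
kernel-2.6.32-279.el6.x86_64
chelsio-uwire_mgmt-agent-rhel6u3-2.2-xyz.x86_64
EOF
)
echo "rpm -e $pkg"
```

The same pattern applies to the Management Station RPM in section 8.3, substituting the chelsio-uwire_mgmt-station package name.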
XX. Unified Wire Manager (UM) 8.3. Uninstalling Management Station 1. Use the following query command to determine the name of the Management Station RPM: [root@host]# rpm -qa | grep chelsio-uwire_mgmt-station 2. Now, execute the following command with the result from the above query to uninstall Management Station: E.g. for RHEL 6.3: [root@host]# rpm -e chelsio-uwire_mgmt-station-rhel6u3-2.2-xyz.
Chapter XXI. Unified Boot
Chapter XXI. Unified Boot 1. Introduction PXE is short for Preboot eXecution Environment and is used for booting computers over an Ethernet network using a Network Interface Card (NIC). The FCoE SAN boot process involves installation of an operating system to an FC/FCoE disk and then booting from it. The iSCSI SAN boot process involves installation of an operating system to an iSCSI disk and then booting from it.
Chapter XXI. Unified Boot Mellanox SX_PPC_M460EX Other platforms/switches have not been tested and are not guaranteed to work. 1.1.3. Supported Adapters Following are the currently shipping Chelsio Adapters that are compatible with Chelsio Unified Boot software: T502-BT T580-CR T520-LL-CR T520-SO-CR* T520-CR T522-CR T540-CR T580-LP-CR T580-SO-CR* T420-CR T440-CR T422-CR T404-BT T420-BCH T420-SO-CR* T440-LP-CR T420-LL-CR T420-BT * Only PXE 1.2.
Chapter XXI. Unified Boot 2. Flashing firmware and option ROM Use any one of the methods below to flash the Option ROM onto Chelsio CNAs: Note Only the Legacy environment is currently supported on T5 and T4 adapters. 2.1. Using Flash Utility The Chelsio legacy flash utility (cfut4.exe) is used to program the PXE Option ROM image onto Chelsio CNAs. Example: This example assumes that you are using a USB flash drive as the storage medium for the necessary files. Follow the steps below: i.
Chapter XXI. Unified Boot x. Run the following command to list all Chelsio CNAs present in the system. The list displays a unique index for each CNA found. C:\CHELSIO>cfut4 -l xi. Delete any previous version of Option ROM flashed on the CNA: C:\CHELSIO>cfut4 -d <idx> -xb Here, idx is the CNA index found in step x (0 in this case) xii.
Chapter XXI. Unified Boot xiii. Run the following command to flash the appropriate firmware (t5fw-x.xx.xx.x.bin for T5 adapters; t4fw-x.xx.xx.x.bin for T4 adapters). C:\CHELSIO>cfut4 -d <idx> -uf <firmware_file>.bin Here, idx is the CNA index found in step x (0 in this case) and firmware_file is the firmware image file present in the CHELSIO folder.
Chapter XXI. Unified Boot xiv. Flash the unified option ROM onto the Chelsio CNA, using the following command: C:\CHELSIO>cfut4 -d <idx> -ub cuwlbt4.bin Here, idx is the CNA index found in step x (0 in this case) and cuwlbt4.bin is the unified option ROM image file. xv. Delete any previous Option ROM settings: C:\CHELSIO>cfut4 -d <idx> -xc Here, idx is the CNA index found in step x (0 in this case) xvi. Reboot the system for changes to take effect.
Chapter XXI. Unified Boot 2.2. Using cxgbtool i. If you haven't already done so, download and install ChelsioUwire-x.xx.x.x.tar.gz from the Chelsio Download Center, service.chelsio.com ii. Generate the default boot configuration file using the bootcfg utility: [root@host]# bootcfg default The boot configuration file generated will only enable PXE by default. iii. Load the Network driver using the following command: [root@host]# modprobe cxgb4 iv.
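Steps ii-iii above can be collected into a small script. A dry-run sketch: the `run` stub only echoes each command; replace its body with `"$@"` and run as root on a system with the Unified Wire package installed to execute for real.

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

run bootcfg default     # step ii: generate the default boot config (PXE only)
run modprobe cxgb4      # step iii: load the network driver
```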
Chapter XXI. Unified Boot 3. Configuring PXE Server The following components are required to configure a server as PXE Server: DHCP Server TFTP Server PXE server configuration steps for Linux can be found on following links: http://linux-sxs.org/internet_serving/pxeboot.html http://www.howtoforge.com/ubuntu_pxe_install_server PXE server configuration steps for Windows can be found on following links: http://technet.microsoft.com/en-us/library/cc771670%28WS.10%29.aspx http://tftpd32.
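On the DHCP side, a PXE setup needs the `next-server` and `filename` directives pointing at the TFTP server. An illustrative `dhcpd.conf` fragment (all addresses and the subnet are example values, not taken from this guide):

```shell
# Print an example dhcpd.conf fragment for a PXE server; holding it in a
# variable keeps the fragment easy to inspect or redirect to a file.
conf=$(cat <<'EOF'
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;   # TFTP server address
  filename "pxelinux.0";      # boot loader fetched over TFTP
}
EOF
)
echo "$conf"
```

Consult the linked guides for the full server setup; the exact directives depend on your DHCP server implementation.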
Chapter XXI. Unified Boot 4. PXE boot process Before proceeding, please ensure that the Chelsio CNA has been flashed with the provided firmware and option ROM (See Flashing firmware and option ROM). Note Only the Legacy environment is currently supported on T5 and T4 adapters. 4.1. Legacy PXE boot i. After configuring the PXE server, make sure the PXE server works. Then reboot the client machine. ii. Press [Alt+C] when the message Chelsio Unified Boot BIOS vX.X.X.
Chapter XXI. Unified Boot v. Enable the Adapter BIOS using arrow keys if not already enabled. Hit [ENTER]. Note Use the default values for Boot Mode, EDD and EBDA Relocation parameters, unless instructed otherwise. vi. Choose PXE from the list to configure. Hit [Enter].
Chapter XXI. Unified Boot vii. Use the arrow keys to highlight the appropriate function among the supported NIC functions and hit [Enter] to select. viii. Enable the NIC function BIOS if not already enabled. ix. Choose the boot port to try the PXE boot. It is recommended to only enable functions and ports which are going to be used. Please note that enabling NIC Func 00 will enable port 0 for PXE, enabling NIC Func 01 will enable port 1, and so on for each NIC function.
Chapter XXI. Unified Boot

NIC Function enabled    Ports enabled
NIC Func00              00
NIC Func01              01
NIC Func02              02
NIC Func03              03

x. Hit [F10] or [Esc] and then [Y] to save configuration changes. xi. Reboot the system. xii. Hit [F2] or [DEL] or any other key as mentioned during system startup to enter the system setup. xiii. Allow the Chelsio option ROM to initialize and set up PXE devices. DO NOT press [Alt+S] to skip the Chelsio option ROM.
Chapter XXI. Unified Boot xiv. In the system setup, choose any of the Chelsio PXE devices as the first boot device. xv. Reboot. DO NOT PRESS ALT-S to skip Chelsio option ROM, during POST. xvi. Hit [F12] key when prompted to start PXE boot.
Chapter XXI. Unified Boot 5. FCoE boot process Before proceeding, please ensure that the Chelsio CNA has been flashed with the provided firmware and option ROM (See Flashing firmware and option ROM). 5.1. Legacy FCoE boot i. Reboot the system. ii. Press [Alt+C] when the message Chelsio Unified Boot BIOS vX.X.X.XX, Copyright (C) 2003-2014 Chelsio Communications, Press <Alt+C> to Configure T4/T5 Card(s). Press <Alt+S> to skip BIOS appears on the screen to enter the configuration utility. iii.
Chapter XXI. Unified Boot v. Enable the Adapter BIOS if not already enabled. Hit [ENTER]. Note Use the default values for Boot Mode, EDD and EBDA Relocation parameters, unless instructed otherwise. vi. Choose FCoE from the list to configure and hit [Enter].
Chapter XXI. Unified Boot vii. Choose the first option (function parameters) from the list of parameter type and hit [Enter]. viii. Enable FCoE BIOS if not already enabled.
Chapter XXI. Unified Boot ix. Choose the order of the ports to discover FCoE targets. x. Set discovery timeout to a suitable value. Recommended value is >= 30.
Chapter XXI. Unified Boot xi. Hit [F10] or [Esc] and then [Y] to save the configuration. xii. Choose boot parameters to configure.
Chapter XXI. Unified Boot xiii. Select the first boot device and hit [Enter] to discover FC/FCoE targets connected to the switch. Wait till all reachable targets are discovered. xiv. List of discovered targets will be displayed. Highlight a target using the arrow keys and hit [Enter] to select.
Chapter XXI. Unified Boot xv. From the list of LUNs displayed for the selected target, choose one on which operating system has to be installed. Hit [Enter].
Chapter XXI. Unified Boot xvi. Hit [F10] or [Esc] and then [Y] to save the configuration. xvii. Reboot the machine. xviii. During POST, allow the Chelsio option ROM to discover FCoE targets.
Chapter XXI. Unified Boot xix. Enter BIOS setup and choose FCoE disk discovered via Chelsio adapter as the first boot device. xx. Reboot and boot from the FCoE disk or install the required OS using PXE.
Chapter XXI. Unified Boot 6. iSCSI boot process Before proceeding, please ensure that the Chelsio CNA has been flashed with the provided firmware and option ROM (See Flashing firmware and option ROM). 6.1. Legacy iSCSI boot i. Reboot the system. ii. Press [Alt+C] when the message Chelsio Unified Boot BIOS vX.X.X.XX, Copyright (C) 2003-2014 Chelsio Communications, Press <Alt+C> to Configure T4/T5 Card(s). Press <Alt+S> to skip BIOS appears on the screen to enter the configuration utility. iii.
Chapter XXI. Unified Boot v. Enable the Adapter BIOS if not already enabled. Hit [Enter]. Note Use the default values for Boot Mode, EDD and EBDA Relocation parameters, unless instructed otherwise. vi. Choose iSCSI from the list to configure and hit [Enter].
Chapter XXI. Unified Boot vii. Choose the first option (Configure Function Parameters) from the list of parameter types and hit [Enter]. viii. Enable iSCSI BIOS if not already enabled. CBFT (Chelsio Boot Firmware Table) will be selected by default. For Windows, select IBFT, since the Microsoft iSCSI Initiator will be used. You can also configure the number of iSCSI login attempts (retries) in case the network is unreachable or slow.
Chapter XXI. Unified Boot ix. Choose the order of the ports to discover iSCSI targets. x. Set discovery timeout to a suitable value. Recommended value is >= 30.
Chapter XXI. Unified Boot xi. Hit [Esc] and then [Y] to save the configuration. xii. Go back and choose Configure Initiator Parameters to configure initiator related properties.
Chapter XXI. Unified Boot xiii. Initiator properties like IQN, Header Digest, Data Digest, etc. will be displayed. Change the values appropriately or continue with the default values. Hit [F10] to save. xiv. CHAP authentication is disabled by default.
Chapter XXI. Unified Boot xv. Enable CHAP authentication by selecting ENABLED in the CHAP Policy field. Next, choose either one-way or mutual as the authentication method. Finally, provide Initiator and Target CHAP credentials according to the authentication method selected. Hit [F10] to save. xvi. Go back and choose Configure Network Parameters to configure iSCSI Network related properties.
Chapter XXI. Unified Boot xvii. Select the port using which you want to connect to the target. Hit [Enter]. xviii. Select Yes in the Enable DHCP field to configure port using DHCP or No to manually configure the port. Hit [F10] to save.
Chapter XXI. Unified Boot xix. Go back and choose Configure Target Parameters to configure iSCSI target related properties. xx. If you want to discover target using DHCP, select Yes in the Discover Boot Target via DHCP field. To discover target via static IP, select No and provide the target IP and Hit [F10] to save. The default TCP port selected is 3260.
Chapter XXI. Unified Boot xxi. Go back and choose Discover iSCSI Target(s) to connect to a target. xxii.
Chapter XXI. Unified Boot xxiii. A list of available targets will be displayed. Select the target you wish to connect to and hit [Enter]. xxiv. A list of LUNs configured on the selected target will be displayed. Select the LUN you wish to connect to and hit [Enter].
Chapter XXI. Unified Boot xxv. Hit [Esc] and then [Y] to save the configuration. xxvi. Reboot the machine. xxvii. During POST, allow the Chelsio option ROM to discover iSCSI targets.
Chapter XXI. Unified Boot xxviii. Enter BIOS setup and choose the iSCSI target LUN discovered via the Chelsio adapter as the first boot device. xxix. Reboot and boot from the iSCSI Target LUN or install the required OS using PXE.
Chapter XXI. Unified Boot 7. Creating Driver Update Disk (DUD) The following section describes the procedure to create Driver Update Disks for RHEL and SLES distributions for T5 adapters. For T4 adapters, you can skip this step and use the inbox drivers to install the operating system. 7.1. Creating DUD for Red Hat Enterprise Linux i. If you haven't already done so, download ChelsioUwire-x.xx.x.x.tar.gz from the Chelsio Download Center, service.chelsio.com ii.
Chapter XXI. Unified Boot iv. Format the USB drive: [root@host]# mkfs.vfat /dev/sda v. Depending on the distribution to be installed, copy the corresponding image file to the USB stick. For example, execute the following command for SLES11sp3: [root@host]# dd if=/root/ChelsioUwire-x.xx.x.x/Uboot/LinuxDUD/ChelsioDriverUpdateDisk-SLES11sp3-x86_64-x.x.x.x-y.
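Step v in generic form. The `-y.img` extension and the `of=/dev/sda` output device are assumptions (the device name follows step iv; always confirm with `lsblk` before writing, since `dd` will overwrite whatever device you name). The `echo` keeps this a dry run:

```shell
# Dry run of writing the DUD image to the USB stick; remove `echo`
# (and run as root) to actually write it.
IMG=/root/ChelsioUwire-x.xx.x.x/Uboot/LinuxDUD/ChelsioDriverUpdateDisk-SLES11sp3-x86_64-x.x.x.x-y.img
echo "dd if=$IMG of=/dev/sda"
```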
Chapter XXI. Unified Boot 8. OS Installation 8.1. Installation using Chelsio NIC DUD (PXE only) This is the recommended method for installing Linux OS using Chelsio PXE boot. The Chelsio Driver Update Disk (DUD) has support for all the new adapters. Use Network Boot (PXE Boot) media to install the OS, and provide the Driver Update Disk as per the detailed instructions for each OS. The DUD supports installation of RHEL and SLES distributions using Chelsio adapters over Network.
Chapter XXI. Unified Boot 8.1.1. RHEL 6.X installation using Chelsio DUD i. PXE boot prompt Please make sure that the USB drive with DUD image is inserted. Type dd at the boot prompt for the installation media. The dd option specifies that you will be providing a Driver Update Disk during the installation. ii. Driver disk prompt: The installer will load and prompt you for the driver update disk. Now select “Yes” and hit [Enter] to proceed.
Chapter XXI. Unified Boot iii. Driver disk source prompt: You will be asked to select the Driver Update Disk device from a list. USB drives usually show up as SCSI disks in Linux. So if there are no other SCSI disks connected to the system, the USB drive would assume the first drive letter “a”. Hence the drive name would be “sda”. You can view the messages from the Linux kernel and drivers to determine the name of the USB drive, by pressing [Alt] + [F3/F4] and [Alt] + [F1] to get back to the list. iv.
Chapter XXI. Unified Boot v. Load additional drivers prompt: The installer will ask if you wish to load more drivers. Choose “Yes” to load if you have any other drivers to load. Otherwise choose “No”. vi. Choose language and Keyboard type: Select the required language from the list.
Chapter XXI. Unified Boot vii. Select Keyboard type Select the type of keyboard you have from the list. viii. Select Installation method: In this step, you can choose the source which contains the OS installation ISO image. In this case, select “NFS directory”.
Chapter XXI. Unified Boot ix. Select Displayed Network Devices: The Chelsio Network Devices will be displayed. Select the appropriate Chelsio NIC interface to proceed with installation. x. Configure TCP/IP settings: Here you can specify if you want to configure your network interfaces using DHCP or manually using IPv4. IPv6 is currently not supported. Hence disable IPv6 before proceeding.
Chapter XXI. Unified Boot xi. Provide NFS/FTP/HTTP Server Name/IP and Path: Proceeding with the installation will bring up the NFS/FTP/HTTP setup page. Here, provide the NFS server details to proceed with the installation. The graphical installation screens for RHEL will then appear; proceed with the installation as usual. 8.1.2. SLES installation using Chelsio DUD i. PXE boot prompt: Please make sure that the USB drive with the DUD image is inserted. Type dd at the boot prompt for the installation media.
Chapter XXI. Unified Boot ii. Start Installation Select “Start Installation” and then “Start Installation or Update”.
Chapter XXI. Unified Boot iii. Select method of install: Select “Network” as the source of medium to install the SLES Operating System. iv. Select the Network protocol: Select the desired Network protocol from the list presented.
Chapter XXI. Unified Boot v. Select appropriate Chelsio network Interface: Select the appropriate Chelsio interface from the list to proceed with installation. You can view the messages from the Linux kernel and drivers to determine the name of NIC interface by pressing [Alt] + [F3] or [Alt] + [F4]. Press [Alt] + [F1] to get back to the list. vi. Configure DHCP IP Select “Yes” to configure the network interface selected in the previous step using DHCP.
Chapter XXI. Unified Boot vii. Provide NFS/FTP/HTTP/TFTP Server Name/IP and Path: Provide a valid NFS/FTP/HTTP/TFTP Server IP address to proceed. viii. Provide operating system Directory Path: Provide a valid directory path to the operating system to be installed. When the graphical Installation screen for SLES appears, proceed with the installation as usual. 8.2. Installation on FCoE LUN 8.2.1. Using CD/DVD ROM Please make sure that the USB drive with DUD image is inserted.
Chapter XXI. Unified Boot iv. Driver disk prompt: The installer will load and prompt you for the driver update disk. Now select “Yes” and hit [Enter] to proceed.
Chapter XXI. Unified Boot v. Driver disk source prompt: You will be asked to select the Driver Update Disk device from a list. USB drives usually show up as SCSI disks in Linux. So if there are no other SCSI disks connected to the system, the USB drive would assume the first drive letter “a”. Hence the drive name would be “sda”. You can view the messages from the Linux kernel and drivers to determine the name of the USB drive, by pressing [Alt] + [F3/F4] and [Alt] + [F1] to get back to the list. vi.
Chapter XXI. Unified Boot vii. Load additional drivers prompt: The installer will ask if you wish to load more drivers. Choose “Yes” to load if you have any other drivers to load. Otherwise choose “No” and hit [Enter]. viii. Testing Installation media If you want to test the installation media, choose OK or else Skip. Hit [Enter].
Chapter XXI. Unified Boot ix. Graphical Installer Red Hat graphical installer will now start. Click Next. x. Select Specialized Storage Devices radio button and click Next.
Chapter XXI. Unified Boot xi. Select storage device Select the FC/FCoE LUN which was saved as boot device in system BIOS and click Next. Then proceed with the installation as usual. 8.3. Installation on iSCSI LUN 8.3.1. Using CD/DVD ROM Please make sure that the USB drive with DUD image is inserted. Also, change the boot priority to boot from CD/DVD in the BIOS setup. i. Insert the OS installation disc into your CD/DVD ROM. ii.
Chapter XXI. Unified Boot iv. Driver disk prompt: The installer will load and prompt you for the driver update disk. Now select “Yes” and hit [Enter] to proceed.
Chapter XXI. Unified Boot v. Driver disk source prompt: You will be asked to select the Driver Update Disk device from a list. USB drives usually show up as SCSI disks in Linux. So if there are no other SCSI disks connected to the system, the USB drive would assume the first drive letter “a”. Hence the drive name would be “sda”. You can view the messages from the Linux kernel and drivers to determine the name of the USB drive, by pressing [Alt] + [F3/F4] and [Alt] + [F1] to get back to the list. vi.
Chapter XXI. Unified Boot vii. Load additional drivers prompt: The installer will ask if you wish to load more drivers. Choose “Yes” to load if you have any other drivers to load. Otherwise choose “No” and hit [Enter]. xii. Testing Installation media If you want to test the installation media, choose OK or else Skip. Hit [Enter].
Chapter XXI. Unified Boot xiii. Graphical Installer Red Hat graphical installer will now start. Click Next.
Chapter XXI. Unified Boot xiv. Select Specialized Storage Devices radio button and click Next. xv. The discovered LUNs will appear in the Basic Devices tab and you can proceed with installation as usual. If not, follow the steps mentioned below to add target.
Chapter XXI. Unified Boot xvi. Click the +Add Advanced Target button and select the Add iSCSI target radio button. Click the +Add drive button. xvii. Enter the target IP. The Initiator IQN will be auto-generated by the installer. Click Start Discovery.
Chapter XXI. Unified Boot xviii. On successful discovery, LUNs configured on the discovered target will be displayed. xix. Select the checkbox corresponding to the LUN on which you wish to install operating system and click Login. Note Make sure the same LUN discovered at the Option ROM stage is selected for OS installation.
Chapter XXI. Unified Boot xx. Next, choose the CHAP authentication method and click Login. xxi. On successful login, a dialog box will display the results along with the connected LUN name. Click OK.
Chapter XXI. Unified Boot xxii. Select the LUN under the Other SAN Devices tab, and click Next. xxiii. Proceed with the installation as usual.
Chapter XXII. Lustre File System
Chapter XXII. Lustre File System 1. Introduction The Lustre file system is a scalable, secure, robust, and highly-available cluster file system that addresses I/O needs, such as low latency and extreme performance, of large computing clusters. Lustre Clusters Lustre clusters contain three kinds of systems: File system clients, which can be used to access the file system. Object storage servers (OSS), which provide file I/O service.
Chapter XXII. Lustre File System T580-LP-CR T520-LL-CR T520-CR T522-CR T540-CR T420-CR T440-CR T422-CR T420-LL-CR 1.2. Software Requirements 1.2.1. Linux Requirements Currently, the Lustre File System is supported on the following distribution: SUSE Linux Enterprise Server 11 SP1 kernel (SLES11SP1), 2.6.32.12-0.7 Other kernel versions have not been tested and are not guaranteed to work.
Chapter XXII. Lustre File System 2. Creating/Configuring Lustre File System Follow the steps below to create a Lustre file system using a Chelsio adapter: i. Build the kernel with Lustre support by following the procedure at http://wiki.lustre.org/index.php/Building_and_Installing_Lustre_from_Source_Code ii. If you haven't already done so, install the Chelsio Unified Wire package. iii. Load the Network and iWARP drivers as per requirement:
Chapter XXII. Lustre File System vii. Create a combined MGS/MDT file system on a block device. Run the following command on the MDS node: [root@host]# mkfs.lustre --fsname=<fsname> --mgs --mdt <block device> viii. Mount the file system created in the previous step. Run the following command on the MDS node: [root@host]# mount -t lustre <block device> <mount point> ix. Create the OST on the OSS node by running the following command: [root@host]# mkfs.
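Steps vii-viii with hypothetical concrete names (file system lustre0, device /dev/sdb, mount point /mnt/mdt — none of these come from the guide; substitute your own). The `echo` stubs keep the sketch runnable without Lustre installed:

```shell
# Dry run of the MDS-side commands; drop `echo` on a real MDS node (as root).
echo mkfs.lustre --fsname=lustre0 --mgs --mdt /dev/sdb   # step vii
echo mount -t lustre /dev/sdb /mnt/mdt                   # step viii
```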
Chapter XXIII. Appendix
Chapter XXIII. Appendix 1. Troubleshooting Cannot bring up Chelsio interface Make sure you have created the corresponding network-script configuration file as stated in the Chelsio Unified Wire chapter (See Creating network-scripts). If the file does exist, make sure the structure and contents are correct. A sample is given in the Chelsio Unified Wire chapter (See Configuring network-scripts). Another reason may be that the IP address mentioned in the configuration file is already in use on the network.
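The address-in-use case can be checked with `arping -D` (duplicate address detection, from iputils), which probes the network before an address is assigned. A dry-run sketch; the interface name and address are examples, and the command is echoed rather than executed so it runs anywhere:

```shell
# arping -D exits 0 when no other host answers for ADDR (address is free);
# a non-zero exit means another host already holds it. Remove `echo` to
# run the probe on a live system.
IFACE=eth4
ADDR=10.1.1.1
echo arping -D -I "$IFACE" -c 2 "$ADDR"
```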
Chapter XXIII. Appendix priority-flow-control mode on the switch On the switch, make sure priority-flow-control mode is always set to auto and flow control is disabled. Configuring Ethernet interfaces on Cisco switch Always configure Ethernet interfaces on Cisco switch in trunk mode. Binding VFC to MAC If you are binding the VFC to MAC address in case of Cisco Nexus switch, then make sure you make the Ethernet interface part of both Ethernet VLAN and FCoE VLAN.
Chapter XXIII. Appendix 2. Chelsio End-User License Agreement (EULA) Installation and use of the driver/software implies acceptance of the terms in the Chelsio End-User License Agreement (EULA). IMPORTANT: PLEASE READ THIS SOFTWARE LICENSE CAREFULLY BEFORE DOWNLOADING OR OTHERWISE USING THE SOFTWARE OR ANY ASSOCIATED DOCUMENTATION OR OTHER MATERIALS (COLLECTIVELY, THE "SOFTWARE"). BY CLICKING ON THE "OK" OR "ACCEPT" BUTTON YOU AGREE TO BE BOUND BY THE TERMS OF THIS AGREEMENT.
Chapter XXIII. Appendix (including the related documentation), together with all copies or modifications in any form. 6. Limited Warranty. Chelsio warrants only that the media upon which the Software is furnished will be free from defects in material or workmanship under normal use and service for a period of thirty (30) days from the date of delivery to you. CHELSIO DOES NOT AND CANNOT WARRANT THE PERFORMANCE OR RESULTS YOU MAY OBTAIN BY USING THE SOFTWARE OR ANY PART THEREOF.
Chapter XXIII. Appendix Federal Acquisition Regulations and its successors and 49 C.F.R. 227.7202-1 of the DoD FAR Supplement and its successors. 12. General. You acknowledge that you have read this Agreement, understand it, and that by using the Software you agree to be bound by its terms and conditions.