Intel® Ethernet Adapters and Devices User Guide
Overview
Welcome to the User Guide for Intel® Ethernet Adapters and devices. This guide covers hardware and software installation, setup procedures, and troubleshooting tips for Intel network adapters, connections, and other devices.

Intended Audience
This document is intended for information technology professionals with a high level of knowledge, experience, and competency in Ethernet networking technology.
- Intel® Ethernet 10G 4P X710-k bNDC
- Intel® Ethernet 10G 2P X710-k bNDC
- Intel® Ethernet 10G X710-k bNDC
- Intel® Ethernet Converged Network Adapter X710
- Intel® Ethernet Converged Network Adapter X710-T
- Intel® Ethernet 10G 4P X710/I350 rNDC
- Intel® Ethernet 10G 4P X710 SFP+ rNDC
- Intel® Ethernet 10G X710 rNDC
- Intel® Ethernet Server Adapter X710-DA2 for OCP
- Intel® Ethernet 10G 2P X710 OCP
- Intel® Ethernet 10G 4P X710 OCP
- Intel® Ethernet 10G 2P X710-T2L-t OCP
- Intel® Ethernet 1
Installation
This chapter covers how to install Intel® Ethernet adapters, drivers, and other software.
At a high level, installation involves the following steps, which are covered in more detail later in this chapter. If you are installing a network adapter, follow this procedure from step 1. If you are upgrading the driver software, start with step 4.
NOTE: If you update the firmware, you must update the driver software to the same family version.
1. Review system requirements.
4. Insert the adapter, pushing it into the slot until the adapter is firmly seated. You can install a smaller PCI Express adapter in a larger PCI Express slot.
CAUTION: Some PCI Express adapters may have a short connector, making them more fragile than PCI adapters. Excessive force could break the connector. Use caution when pressing the board in the slot.
5. Secure the adapter bracket with a screw, if required.
6. Replace the computer cover and plug in the power cord.
7.
Connecting Network Cables
Connect the appropriate network cable, as described in the following sections.
Connect the RJ-45 Network Cable
Connect the RJ-45 network cable as shown. The following table shows the maximum lengths for each cable type at a given transmission speed.
Connector type: LC or SC
Cable type: Multi-mode fiber with 62.5µm core diameter
- 1 Gbps maximum cable length: 275 meters
- 10 Gbps (and faster) maximum cable length: 33 meters
Cable type: Multi-mode fiber with 50µm core diameter
- 1 Gbps maximum cable length: 550 meters
- 10 Gbps (and faster) maximum cable length: 300 meters

LR transceiver cabling specifications
Laser wavelength: 1310 nanometer (not visible)
Connector type: LC
Cable type: Single-mode fiber with 9.0µm core diameter
Installation Dell EMC QSFP28 Passive Breakout Cables 7VN5T, 8R4VM, D9YM8 XXV710, E810XXV Dell EMC TRIPLE RATE 1G/10G/40G QSFP+ SR (bailed) (1G and 10G not supported on XL710) 5NP8R, 7TCDN, 9GCCD, FC6KV, J90VN, NWGTV, V492M XL710 1 Not supported on adapters based on the Intel® X520 controller. on adapters based on the Intel® E810-XXV controller. 3 The Intel® Ethernet Server Adapter X710-DA2 for OCP only supports modules listed in the table below.
- 40 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
  - Length is 7 meters max.
- 25 Gigabit Ethernet over SFP28 Direct Attached Cable (Twinaxial)
  - Length is 5 meters max.
  - For optimal performance, must use CA-25G-L with RS-FEC and 25GBASE-CR.
- 10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
  - Length is 10 meters max.

Install Drivers and Software
Windows* Operating Systems
You must have administrative rights to the operating system to install the drivers.
1.
Device Features
This chapter describes the features available on Intel Ethernet devices. Major features are organized alphabetically.
NOTE: Available settings are dependent on your device and operating system. Not all settings are available on every device/OS combination.
Adaptive Inter-Frame Spacing
Compensates for excessive Ethernet packet collisions on the network. The default setting works best for most computers and networks.
- The Firmware LLDP (FW-LLDP) agent was disabled from a pre-boot environment (typically UEFI).
- This device is based on the Intel® Ethernet Controller X710 and the current link speed is 2.5 Gbps or 5 Gbps.
This setting is found on the Data Center tab of either the device's Device Manager property sheet or the Intel® PROSet Adapter Configuration Utility.
- your installation, for information on enabling or disabling the FW-LLDP agent.
- In software-based DCBX mode, you can configure DCB parameters using software LLDP/DCBX agents that interface with the Linux kernel's DCB Netlink API. We recommend using OpenLLDP as the DCBX agent when running in software mode. For more information, see the OpenLLDP man pages and https://github.com/intel/openlldp.
Device Features Higher DMA Coalescing values result in more energy saved but may increase your system's network latency. If you enable DMA Coalescing, you should also set the Interrupt Moderation Rate to 'Minimal'. This minimizes the latency impact imposed by DMA Coalescing and results in better peak network throughput performance. You must enable DMA Coalescing on all active ports in the system. You may not gain any energy savings if it is enabled only on some of the ports in your system.
Adapters Based on the Intel® Ethernet Controller 700 Series
FW-LLDP is enabled in NVM by default. To enable/disable the FW-LLDP Agent:
- Linux: Use ethtool to set or show the disable-fw-lldp private flag.
- ESX: Use the esxcfg-module command to set or get the LLDP module parameter.
- Microsoft Windows: Use the LLDP Agent attribute in UEFI HII to change the FW-LLDP setting.
NOTE: You must enable the UEFI HII "LLDP AGENT" attribute for the FW-LLDP setting to take effect.
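For example, on Linux the private flag can be changed and read back with ethtool (the interface name is a placeholder):
# ethtool --set-priv-flags <ethX> disable-fw-lldp on
# ethtool --show-priv-flags <ethX>
Setting disable-fw-lldp to "on" stops the firmware agent; setting it to "off" re-enables it.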
Default: Set in the device's NVM (usually "Disabled").
When an adapter is running in NPar mode, Flow Control is limited to the root partition of each port.
Device Features NOTE: A higher ITR rate also means that the driver has more latency in handling packets. If the adapter is handling many small packets, it is better to lower the ITR so that the driver can be more responsive to incoming and outgoing packets. Altering this setting may improve traffic throughput for certain network and system configurations, however the default setting is optimal for common network and system configurations.
Enable Jumbo Packets only if ALL devices across the network support them and are configured to use the same frame size. When setting up Jumbo Packets on other network devices, be aware that network devices calculate Jumbo Packet sizes differently. Some devices include the frame size in the header information while others do not. Intel adapters do not include frame size in the header information.
Restrictions
- Supported protocols are limited to IP (TCP, UDP).
Device Features Link State on Interface Down Sets if link is enabled or disabled when the interface is brought down. If this is set to Disabled and you bring an interface down (using an administrative tool, or in another way), then the port will lose link. This allows an attached switch to detect that the interface is no longer up. However, if Wake on LAN or manageability is enabled on this port, link will remain up.
Default: Enabled
Range: Enabled, Disabled
This setting is found on the Advanced tab of either the device's Device Manager property sheet or the Intel® PROSet Adapter Configuration Utility. To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet.
and reenables queues after MDD events Can disable automatic VF reset after MDD events Yes No No No

MDD Auto Reset VFs
Automatically resets the virtual machine immediately after the adapter detects a Malicious Driver Detection (MDD) event on the receive path.
Default: Disabled
Range: Disabled, Enabled
This setting is found on the Advanced tab of either the device's Device Manager property sheet or the Intel® PROSet Adapter Configuration Utility.
Device Features To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet. For example: Set-IntelNetAdapterSetting -Name "" -DisplayName "NVGRE Encapsulated Task Offload" -DisplayValue "Enabled" NIC Partitioning Network Interface Card (NIC) Partitioning (NPar) allows network administrators to create multiple partitions for each physical port on a network adapter card, and to set different bandwidth allocations on each partition.
Device Features nection for 2-3 seconds while the root partition (the first partition of the physical port) is initializing. NParEP Mode NParEP Mode is a combination of NPar and PCIe ARI, and increases the maximum number of partitions on an adapter to 16 per controller.
Device Features PCI Express Slot Dell EMC Platform OCP Mezz 1 2 3 4 T340 no no no no T430 no no yes yes yes yes T440 no yes yes yes yes T630 yes no T640 Rack NDC Slot yes 5 6 7 8 9 10 11 12 13 yes yes yes yes yes yes yes yes yes yes yes yes yes The following Dell EMC Platforms support NParEP mode on all slots.
Device Features Configuring NPar Mode Configuring NPar from the Boot Manager When you boot the system, press the F2 key to enter the System Setup menu. Select Device Settings from the list under System Setup Main Menu, then select your adapter from the list to get to the Device Configuration menu. Select Device Level Configuration in the list under Main Configuration Page. This brings up the Virtualization settings under Device Level Configuration.
pleted may be different than the values that were supposed to be set. To avoid this issue, set the values for minimum bandwidth percentage on all partitions using a single job and make sure the sum of the values is 100. Click the Back button when you have finished making your bandwidth allocation settings to return to the NIC Partitioning Configuration page. From there you may click on one of the Partition n Configuration list items under Global Bandwidth Allocation.
Device Features Power Management Settings Power Management settings are allowed only on the first partition of each physical port. If you select the Power Management tab while any partition other than the first partition is selected, you will be presented with text in the Power Management dialog stating that Power Management settings cannot be configured on the current connection. Clicking the Properties button will launch the property sheet for the root partition on the adapter.
To change the value for Min% or Max%, select a partition in the displayed list, then use the up or down arrows under "Selected Partition Bandwidth Percentages".
NOTE:
- If the sum of the minimum bandwidth percentages does not equal 100, then settings will be automatically adjusted so that the sum equals 100.
Device Features When an adapter is operating in NPar mode, only the first partition of each physical port may be configured with the virtualization settings. NOTE: Microsoft* Hyper-V* must be installed on the system in order for virtualization settings to be available. Without Hyper-V* being installed, the Virtualization tab in PROSet will not appear. To set this using Windows PowerShell, find the first partition using the Get-IntelNetAdapter cmdlet.
Write to max_bw to set the maximum bandwidth for this function. Read from min_bw to display the current minimum bandwidth setting. Write to min_bw to set the minimum bandwidth for this function. Write a '1' to commit to save your changes.
NOTES:
- commit is write only. Attempting to read it will result in an error.
- Writing to commit is only supported on the first function of a given port. Writing to a subsequent function will result in an error.
Performance Options
Optimizing Performance
You can configure Intel network adapter advanced settings to help optimize server performance.
Device Features RSS Queues If you have multiple 10 Gbps (or faster) ports installed in a system, the RSS queues of each adapter port can be adjusted to use non-overlapping sets of processors within the adapter's local NUMA Node/Socket. Change the RSS Base Processor Number for each adapter port so that the combination of the base processor and the max number of RSS processors settings ensure non-overlapping cores. For Microsoft Windows systems, do the following: 1.
Device Features Performance Profile Performance Profiles are supported on Intel® 10GbE adapters and allow you to quickly optimize the performance of your Intel® Ethernet Adapter. Selecting a performance profile will automatically adjust some Advanced Settings to their optimum setting for the selected application. For example, a standard server has optimal performance with only two RSS (ReceiveSide Scaling) queues, but a web server requires more RSS queues for better scalability.
Energy Efficient Ethernet
The Energy Efficient Ethernet (EEE) feature allows a capable device to enter Low-Power Idle between bursts of network traffic. Both ends of a link must have EEE enabled for any power to be saved. Both ends of the link will resume full power when data needs to be transmitted. This transition may introduce a small amount of network latency.
NOTES:
- Both ends of the EEE link must automatically negotiate link speed.
- EEE is not supported on every adapter.
Device Features Device Adapter Port(s) supporting WoL Intel® Ethernet 25G 2P XXV710 Adapter Intel® Ethernet Converged Network Adapter X710-T Not supported Intel® Ethernet Converged Network Adapter XL710-Q2 Intel® Ethernet 10G 2P X710-T2L-t Adapter Not Supported Intel® Ethernet 10G 4P X710-T4L-t Adapter Intel® Ethernet 10G 2P X520 Adapter Not Supported Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz Intel® Ethernet 10G 2P X540-t Adapter Not Supported Intel® Ethernet 10G 2P X550-t Adapter Not Sup
ACPI Power States
S0: On and fully operational.
S1: System is in low-power mode (sleep mode). The CPU clock is stopped, but RAM is powered on and being refreshed.
S2: Similar to S1, but power is removed from the CPU.
S3: Suspend to RAM (standby mode). Most components are shut down. RAM remains operational.
S4: Suspend to disk (hibernate mode). The memory contents are swapped to the disk drive and then reloaded into RAM when the system is awakened.
Device Features Wake-Up Address Patterns Remote wake-up can be initiated by a variety of user selectable packet types and is not limited to the Magic Packet format. For more information about supported packet types, see the operating system settings section. The wake-up capability of Intel adapters is based on patterns sent by the OS. You can configure the driver to the following settings using Intel® PROSet. For Linux*, WoL is provided through the ethtool* utility.
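As a minimal sketch of the Linux path, Magic Packet wake-up can be enabled and verified with ethtool (the interface name is a placeholder):
# ethtool -s <ethX> wol g
# ethtool <ethX>
The "Wake-on" field in the output of the second command shows the current setting; "g" indicates Magic Packet wake-up.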
Device Features Physical Installation Issues Slot Some motherboards will only support remote wake-up (or remote wake-up from S5 state) in a particular slot. See the documentation that came with your system for details on remote wake-up support. Power Some Intel PRO adapters are 3.3 volt and some are 12 volt. They are keyed to fit either type of slot. The 3.3 volt standby supply must be capable of supplying at least 0.2 amps for each Intel PRO adapter installed.
Device Features This setting is found on the Advanced tab of either the device's Device Manager property sheet or the Intel® PROSet Adapter Configuration Utility. To set this in Windows Powershell, first disable DCB, then set priority and VLAN tagging.
NOTES:
- On systems running a Microsoft Windows Server operating system, enabling *QoS/priority flow control will disable link level flow control.
- Devices based on the Intel® Ethernet Controller 800 Series do not support RDMA when operating in multiport mode with more than 4 ports.
- On Linux systems, RDMA and bonding are not compatible. If RDMA is enabled, bonding will not be functional.
Device Features RDMA for Microsoft Windows Network Direct (ND) User-Mode Applications Network Direct (ND) allows user-mode applications to use RDMA features. NOTE: User mode applications may have prerequisites such as Microsoft HPC Pack or Intel MPI Library, refer to your application documentation for more details. RDMA User Mode Installation The Intel® Ethernet User Mode RDMA Provider is supported on Microsoft Windows Server 2016 and later.
Device Features Get-NetAdapterRDMA Use the following PowerShell command to check if the network interfaces are RDMA capable and multichannel is enabled: Get-SmbClientNetworkInterface Use the following PowerShell command to check if Network Direct is enabled in the operating system: Get-NetOffloadGlobalSetting | Select NetworkDirect Use netstat to make sure each RDMA-capable network interface has a listener at port 445 (Windows Client OSs that support RDMA may not post listeners). For example: netstat.
10. Enable RDMA on the VF driver and Hyper-V Network Adapter using PowerShell in the VM:
Set-NetAdapterAdvancedProperty -Name <adapter_name> -RegistryKeyword RdmaVfEnabled -RegistryValue 1
Get-NetAdapterRdma | Enable-NetAdapterRdma
RDMA for NDK Features such as SMB Direct (Server Message Block)
NDK allows Windows components (such as SMB Direct storage) to use RDMA features.
You might choose to increase the number of Receive Buffers if you notice a significant decrease in the performance of received traffic. If receive performance is not an issue, use the default setting appropriate to the adapter.
Default: 512, for all adapters.
Range: 128-4096, in intervals of 64, for all adapters.
- 4 or more queues are used for applications that demand maximum throughput and transactions per second.
NOTES:
- Not all settings are available on all adapters.
- 8, or more, queues are only available when PROSet for Windows Device Manager or Intel® PROSet Adapter Configuration Utility (Intel® PROSet ACU) is installed. If PROSet is not installed, only 4 queues are available.
- Using 8 or more queues requires the system to reboot.
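For example, the queue count can be changed with the Set-IntelNetAdapterSetting cmdlet; the display name and value strings below are assumptions and may differ by device and driver version:
Set-IntelNetAdapterSetting -Name "<adapter_name>" -DisplayName "Maximum Number of RSS Queues" -DisplayValue "8 Queues"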
Device Features Windows The default setting is for auto-negotiation to be enabled. Only change this setting to match your link partner's speed and duplex setting if you are having trouble connecting. 1. In Windows Device Manager or the Intel® PROSet Adapter Configuration Utility, double-click the adapter you want to configure. 2. On the Link Speed tab, select a speed and duplex option from the Speed and Duplex drop-down menu. 3. Click OK.
- To resolve this issue, and to ensure isolation from unintended traffic streams, configure all SR-IOV enabled ports for VLAN tagging from the administrative interface on the PF. This configuration allows unexpected, and potentially malicious, frames to be dropped.
- SR-IOV must be enabled in the BIOS.
- You must enable VMQ for SR-IOV to function.
Device Features NDC, LOM, or Adapter 40Gbe 25Gbe 10Gbe 1Gbe Intel® Ethernet 25G 2P E810-XXV OCP Yes Intel® Ethernet 25G 2P XXV710 Mezz Yes Intel® Ethernet 25G 2P XXV710 Adapter Yes Intel® Ethernet 10G 4P X710-k bNDC Yes Intel® Ethernet 10G 2P X710-k bNDC Yes Intel® Ethernet 10G X710-k bNDC Yes Intel® Ethernet 10G 2P X710-T2L-t Adapter Yes Intel® Ethernet 10G 4P X710-T4L-t Adapter Yes Intel® Ethernet Network Adapter X710-TL Yes Intel® Ethernet Converged Network Adapter X710 Yes Intel®
Device Features NDC, LOM, or Adapter 40Gbe 25Gbe 10Gbe 1Gbe PowerEdge C4130 LOMs No PowerEdge C6320 LOMs Yes PowerEdge C6420 LOMs No PowerEdge T620 LOMs No PowerEdge T630 LOMs No PowerEdge FC430 LOMs No Yes PowerEdge R530XD LOMs Dell EMC Platform OCP Mezz No Rack NDC C4130 PCI Express Slot 1 2 no C6320 5 6 7 8 9 10 11 12 13 yes no yes yes C6420 yes yes R230 no no R240 no no R320 no yes R330 no no R340 no no 1x CPU no yes 2x CPU yes yes R430 yes ye
Device Features Dell EMC Platform OCP Mezz Rack NDC PCI Express Slot R640 yes yes yes yes R720XD yes yes yes yes yes yes yes R720 yes yes yes yes yes yes yes yes 1 2 3 4 5 6 7 R730 yes yes yes yes yes yes yes R730XD yes yes yes yes yes yes R740 yes R740XD2 R820 8 9 10 13 yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes R840 yes yes yes yes yes yes yes yes yes yes yes R920 yes yes yes yes yes yes yes yes yes yes yes R930 yes yes
Dell EMC Platform | Blade NDC | Mezzanine Slot B | Mezzanine Slot C
FC430 | yes | yes | yes
FC630 | yes | yes | yes
FC830 | yes | yes | yes
M420 | yes | yes | yes
M520 | no | yes | yes
M620 | yes | yes | yes
M630 | yes | yes | yes
M630 for VRTX | yes | |
M640 | yes | yes | yes
M640 for VRTX | yes | |
M820 | yes | yes | yes
M830 | yes | yes | yes
M830 for VRTX | yes | |
MX740c | yes | yes | yes
MX840c | yes | yes | yes
Supported platforms or slots are indicated by "yes." Unsupported are indicated by "no".
Device Features TCP/IP Offloading Options Thermal Monitoring Adapters and network controllers based on the Intel® Ethernet Controller I350 (and later controllers) can display temperature data and automatically reduce the link speed if the controller temperature gets too hot. NOTE: This feature is enabled and configured by the equipment manufacturer. It is not available on all adapters and network controllers. There are no user configurable settings.
Default: RX & TX Enabled
Range:
- Disabled
- RX Enabled
- TX Enabled
- RX & TX Enabled
This setting is found on the Advanced tab of either the device's Device Manager property sheet or the Intel® PROSet Adapter Configuration Utility. To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet.
Range:
- On
- Off
- Auto Detect
This setting is found on the Advanced tab of either the device's Device Manager property sheet or the Intel® PROSet Adapter Configuration Utility. To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet.
Microsoft* Windows* Driver and Software Installation and Configuration
Installing Windows Drivers and Software
NOTE: To successfully install or uninstall the drivers or software, you must have administrative privileges on the computer completing installation.
Install the Drivers
NOTES:
- This will update the drivers for all supported Intel® network adapters in your system.
/? or /h
Display the Update Package usage information.
/s
Suppress all graphical user interfaces of the Update Package.
/i
Do a fresh install of the drivers contained in the Update Package.
NOTE: Requires /s option
/e=<path>
Extract the entire Update Package to the folder defined in <path>.
NOTE: Requires /s option
/drivers=<path>
Extract only driver components of the Update Package to the folder defined in <path>.
Microsoft* Windows* Driver and Software Installation and Configuration Only install driver components Network_Driver_XXXXX_WN64_XX.X.X_A00.exe /s /driveronly Change from the default log location to C:\my path with spaces\log.txt Network_Driver_XXXXX_WN64_XX.X.X_A00.exe /l="C:\my path with spaces\log.txt" Force update to continue, even on "soft" qualification errors Network_Driver_XXXXX_WN64_XX.X.X_A00.exe /s /f Downgrading Drivers You can use the /s and /f options to downgrade your drivers.
Microsoft* Windows* Driver and Software Installation and Configuration NOTES: l If the restore system is not identical to the saved system, the script may not restore any settings when the -BDF option is specified. l Virtual Function devices do not support the -BDF option. l If you used Windows to set NPar minimum and maximum bandwidth percentages, you must specify /bdf during save and restore to keep those settings.
NOTES:
- The options available on the Power Management tab are adapter and system dependent. Not all adapters will display all options. There may be BIOS or operating system settings that need to be enabled for your system to wake up. In particular, this is true for Wake from S5 (also referred to as Wake from power off).
- The Intel® 10 Gigabit Network Adapters do not support power management.
Microsoft* Windows* Driver and Software Installation and Configuration NOTES: l The Get-IntelNetAdapterStatus -Status General cmdlet may report the status "Link Up - This device is not linked at its maximum capable speed". In that case, if your device is set to auto-negotiate, you can adjust the speed of the device's link partner to the device's maximum speed.
Linux* Driver Installation and Configuration
Overview
This release includes Linux Base Drivers for Intel® Network Connections.
Linux* Driver Installation and Configuration l l l l Intel® Ethernet Connection I354 1.
- Red Hat* Enterprise Linux* (RHEL) 8.3
- Red Hat* Enterprise Linux* (RHEL) 8.2
- Red Hat* Enterprise Linux* (RHEL) 7.9
SUSE Linux Enterprise Server (SLES):
- Novell* SUSE* Linux Enterprise Server (SLES) 15 SP2

NIC Partitioning
On Intel® 710 Series based adapters that support it, you can set up multiple functions on each physical port. You configure these functions through the System Setup/BIOS.
Read from min_bw to display the current minimum bandwidth setting. Write to min_bw to set the minimum bandwidth for this function. Write a '1' to commit to save your changes.
NOTES:
- commit is write only. Attempting to read it will result in an error.
- Writing to commit is only supported on the first function of a given port. Writing to a subsequent function will result in an error.
- Oversubscribing the minimum bandwidth is not supported.
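Putting these steps together, a minimal sketch looks like the following; the per-function sysfs directory is a placeholder, since the exact path depends on your kernel and device:
# cd <per-function sysfs directory>
# echo 50 > max_bw
# echo 10 > min_bw
# cat min_bw
# echo 1 > commit
The first two writes cap the partition at 50% and guarantee it 10% of the port's bandwidth; the final write commits the changes and, as noted above, is only valid on the first function of the port.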
igb Linux* Driver for the Intel® Gigabit Adapters
igb Overview
NOTE: In a virtualized environment, on Intel® Server Adapters that support SR-IOV, the virtual function (VF) may be subject to malicious behavior. Software-generated layer two frames, like IEEE 802.3x (link flow control), IEEE 802.1Qbb (priority based flow-control), and others of this type, are not expected and can throttle traffic between the host and the virtual switch, reducing performance.
Linux* Driver Installation and Configuration NOTES: l For the build to work properly it is important that the currently running kernel MATCH the version and configuration of the installed kernel source. If you have just recompiled your kernel, reboot the system. l RPM functionality has only been tested in Red Hat distributions. 1. Download the base driver tar file to the directory of your choice. For example, use '/home/username/igb' or '/usr/local/src/igb'. 2. Untar/unzip the archive, where
Linux* Driver Installation and Configuration l l l igb is the component name 1.3.8.6-1 is the component version x86_64 is the architecture type KMP RPMs are provided for supported Linux distributions. The naming convention for the included KMP RPMs is: intel--kmp--_..rpm For example, intel-igb-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm: l igb is the component name l default is the kernel type l 1.3.8.
Parameter Name: InterruptThrottleRate
Valid Range/Settings: 0, 1, 3, 100-100000 (0=off, 1=dynamic, 3=dynamic conservative)
Default: 3
Description: Interrupt Throttle Rate controls the number of interrupts each interrupt vector can generate per second. Increasing ITR lowers latency at the cost of increased CPU utilization, though it may help throughput in some circumstances.
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings Default Description LLISize 0-1500 0 (disabled) LLISize causes an immediate interrupt if the board receives a packet smaller than the specified size. IntMode 0-2 2 Interrupt mode controls the allowed load time control over the type of interrupt registered for by the driver. MSI-X is required for multiple queue support. Some kernels and combinations of kernel .
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings Default Description max_vfs 0-7 0 This parameter adds support for SR-IOV. It causes the driver to spawn up to max_vfs worth of virtual functions. If the value is greater than 0 it will also force the VMDq parameter to be 1 or more. The parameters for the driver are referenced by position.
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings Default Description The Node parameter allows you to choose which NUMA node you want to have the adapter allocate memory from. All driver structures, in-memory queues, and receive buffers will be allocated on the node specified.
Linux* Driver Installation and Configuration Viewing Link Messages Link messages will not be displayed to the console if the distribution is restricting system messages. In order to see network driver link messages on your console, set dmesg to eight by entering the following: # dmesg -n 8 NOTE: This setting is not saved across reboots. Configuring the Driver on Different Distributions Configuring a network driver to load properly when the system is started is distribution dependent.
Linux* Driver Installation and Configuration In the default mode, an Intel® Ethernet Network Adapter using copper connections will attempt to auto-negotiate with its link partner to determine the best setting. If the adapter cannot establish link with the link partner using auto-negotiation, you may need to manually configure the adapter and link partner to identical settings to establish link and pass packets.
Linux* Driver Installation and Configuration IGB_LRO is a compile time flag. The user can enable it at compile time to add support for LRO from the driver. The flag is used by adding CFLAGS_EXTRA="-DIGB_LRO" to the make file when it's being compiled.
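For example, a typical build invocation with the flag enabled:
# make CFLAGS_EXTRA="-DIGB_LRO" install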
Linux* Driver Installation and Configuration Hardware Issues For known hardware and troubleshooting issues, either refer to the "Release Notes" in your User Guide, or for more detailed information, go to http://www.intel.com. In the search box enter your devices controller ID followed by "spec update". The specification update file has complete information on known hardware issues.
Linux* Driver Installation and Configuration If you have multiple interfaces in a server, either turn on ARP filtering by entering: # echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter This only works if your kernel's version is higher than 2.4.5. NOTE: This setting is not saved across reboots. The configuration change can be made permanent by adding the following line to the file /etc/sysctl.conf: net.ipv4.conf.all.
Linux* Driver Installation and Configuration ixgbe Linux* Driver for the Intel® 10 Gigabit Server Adapters ixgbe Overview WARNING: By default, the ixgbe driver complies with the Large Receive Offload (LRO) feature enabled. This option offers the lowest CPU utilization for receives but is incompatible with routing/ip forwarding and bridging. If enabling ip forwarding or bridging is a requirement, it is necessary to disable LRO using compile time options as noted in the LRO section later in this section.
Linux* Driver Installation and Configuration l l l Install from Source Code Install Using KMP RPM Install Using KMOD RPM Install from Source Code To build a binary RPM* package of this driver, run 'rpmbuild -tb '. Replace with the specific filename of the driver. NOTES: l For the build to work properly it is important that the currently running kernel MATCH the version and configuration of the installed kernel source.
Linux* Driver Installation and Configuration Install Using KMP RPM The KMP RPMs update existing ixgbe RPMs currently installed on the system. These updates are provided by SuSE in the SLES release. If an RPM does not currently exist on the system, the KMP will not install. The RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is: intel--..rpm For example, intel-ixgbe-1.3.8.6-1.x86_64.
Command Line Parameters
If the driver is built as a module, the following optional parameters are used by entering them on the command line with the modprobe command using this syntax:
# modprobe ixgbe [<option>=<VAL1>,<VAL2>,...]
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings InterruptThrottleRate 956 - 488,281 (0=off, 1=dynamic) Default Description 1 0=off 1=dynamic - Interrupt Throttle Rate controls the number of interrupts each interrupt vector can generate per second. Increasing ITR lowers latency at the cost of increased CPU utilization, though it may help throughput in some circumstances.
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings Default Description NOTE: LLI is not supported on X550-based adapters. LLIPush 0-1 0 (disabled) LLIPush can be set to enabled or disabled (default). It is most effective in an environment with many small transactions. NOTE: Enabling LLIPush may allow a denial of service attack. LLI is not supported on X550-based adapters.
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings Default Description max_vfs 1 - 63 0 This parameter adds support for SR-IOV. It causes the driver to spawn up to max_vfs worth of virtual functions. If the value is greater than 0 it will also force the VMDq parameter to be 1 or more. NOTE: This parameter is only used on kernel 3.7.x and below. On kernel 3.8.x and above, use sysfs to enable VFs. Also, for Red Hat distributions, this parameter is only used on version 6.
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings Default Description With kernel 3.6, the driver supports the simultaneous usage of max_vfs and DCB features, subject to the constraints described below. Prior to kernel 3.6, the driver did not support the simultaneous operation of max_vfs greater than 0 and the DCB features (multiple traffic classes utilizing Priority Flow Control and Extended Transmission Selection).
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings Default Description l bined hw_rsc_flushed - counts the number of packets flushed out of LRO NOTE: IPv6 and UDP are not supported by LRO. EEE 0-1 0 = Disables EEE 1 = Enables EEE A link between two EEE-compliant devices will result in periodic bursts of data followed by periods where the link is in an idle state. This Low Power Idle (LPI) state is supported at 1 Gbps and 10 Gbps link speeds.
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings Default AQRate Description Devices that support AQRate (X550 and later) will include 2.5 Gbps and 5 Gbps in the speeds that the driver advertises during auto-negotiation, even though ethtool will not display 2.5 Gbps or 5 Gbps as "Supported link modes" or "Advertised link modes." These speeds are only available through unmodified auto-negotiation.
This setting is not saved across reboots. The setting change can be made permanent by adding 'MTU = 9000' to the following file:
- /etc/sysconfig/network-scripts/ifcfg-<ethX> for RHEL
- /etc/sysconfig/network/<config_file> for SLES
NOTES:
- The maximum MTU setting for Jumbo Frames is 9710 bytes. This value coincides with the maximum Jumbo Frames size of 9728 bytes.
- This driver will attempt to use multiple page sized buffers to receive each jumbo packet.
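To change the MTU at runtime instead (not persistent across reboots), a command like the following can be used, where <ethX> is the interface name:
# ip link set dev <ethX> mtu 9000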
# ethtool -A <ethX> rx <on|off> tx <on|off>
NOTE: This command only enables or disables Flow Control if auto-negotiation is disabled. If auto-negotiation is enabled, this command changes the parameters used for auto-negotiation with the link partner.
To enable or disable auto-negotiation:
# ethtool -s <ethX> autoneg <on|off>
NOTE: Flow Control auto-negotiation is part of link auto-negotiation.
- <ip-address>: the IP address to match on
- <port-number>: the port number to match on
- <queue-id>: the queue to direct traffic towards (-1 discards the matched traffic)
To delete a filter:
# ethtool -U <ethX> delete <N>
Where <N> is the filter ID displayed when printing all the active filters, and may also have been specified using "loc <N>" when adding the filter.
Examples:
To add a filter that directs packet to queue 2:
# ethtool -N <ethX> flow-type tcp4 src-ip 192.168.
Linux* Driver Installation and Configuration The maximum offset is 64. The hardware will only read up to 64 bytes of data from the payload. The offset must be even because the flexible data is 2 bytes long and must be aligned to byte 0 of the packet payload. The user-defined flexible offset is also considered part of the input set and cannot be programmed separately for multiple filters of the same type.
Linux* Driver Installation and Configuration Configuring VLAN Tagging on SR-IOV Enabled Adapter Ports To configure VLAN tagging for the ports on an SR-IOV enabled adapter, use the following command. The VLAN configuration should be done before the VF driver is loaded or the VM is booted. The VF is not aware of the VLAN tag being inserted on transmit and removed on received frames (sometimes called "port VLAN" mode).
Linux* Driver Installation and Configuration where "X" is the PF interface number and "n" is number of spoofed packets. This feature can be disabled for a specific VF: # ip link set vf spoofchk {off|on} Setting MAC Address, VLAN and Rate Limit Using IProute2 Tool You can set a MAC address of a Virtual Function (VF), a default VLAN and the rate limit using the IProute2 tool.
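As a sketch, the following commands set a MAC address, a default (port) VLAN, and a Tx rate limit in Mbps for VF 0; the interface name and values are placeholders:
# ip link set <ethX> vf 0 mac 00:1B:21:12:34:56
# ip link set <ethX> vf 0 vlan 100
# ip link set <ethX> vf 0 rate 1000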
# ethtool -C <ethX> rx-usecs N
Values for N:
- 0: no limit
- 1: adaptive (default)
- 2-1022: minimum microseconds between each interrupt
The range of 0-1022 microseconds provides an effective range of 978 to 500,000 interrupts per second. The underlying hardware supports granularity in 2us intervals at 1Gbps and 10Gbps and 20us at 100Mbps, so adjacent values may result in the same interrupt rate.
Linux* Driver Installation and Configuration This only works if your kernel's version is higher than 2.4.5. NOTE: This setting is not saved across reboots. The configuration change can be made permanent by adding the following line to the file /etc/sysctl.conf: net.ipv4.conf.all.arp_filter = 1 Another alternative is to install the interfaces in separate broadcast domains (either in different switches or in a switch partitioned to VLANs).
Linux* Driver Installation and Configuration Running ethtool -t ethX command causes break between PF and test client When there are active VFs, "ethtool -t" will only run the link test. The driver will also log in syslog that VFs should be shut down to run a full diagnostic test. Unable to obtain DHCP lease on boot with RedHat In configurations where the auto-negotiation process takes more than 5 seconds, the boot script may fail with the following message: "ethX: failed. No link present.
Linux* Driver Installation and Configuration ixgbevf Linux* Driver for the Intel® 10 Gigabit Server Adapters ixgbevf Overview SR-IOV is supported by the ixgbevf driver, which should be loaded on both the host and VMs. This driver supports upstream kernel versions 2.6.30 (or higher) x86_64. The ixgbevf driver supports 82599, X540, and X550 virtual function devices that can only be activated on kernels supporting SR-IOV. SR-IOV requires the correct platform and OS support.
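On kernels with the standard sysfs interface, VFs are created on the host by writing to sriov_numvfs; the interface name and VF count are placeholders:
# cat /sys/class/net/<ethX>/device/sriov_totalvfs
# echo 4 > /sys/class/net/<ethX>/device/sriov_numvfs
Writing 0 to sriov_numvfs removes the VFs again.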
Linux* Driver Installation and Configuration NOTE: For VLANs, there is a limit of a total of 32 shared VLANs to 1 or more virtual functions. There are three methods for installing the Linux driver: l Install from Source Code l Install Using KMP RPM l Install Using KMOD RPM Install from Source Code To build a binary RPM* package of this driver, run 'rpmbuild -tb '. Replace with the specific filename of the driver.
Linux* Driver Installation and Configuration The RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is: intel--..rpm For example, intel-ixgbevf-1.3.8.6-1.x86_64.rpm: l ixgbevf is the component name l 1.3.8.6-1 is the component version l x86_64 is the architecture type KMP RPMs are provided for supported Linux distributions.
Command Line Parameters
If the driver is built as a module, the following optional parameters are used by entering them on the command line with the modprobe command using this syntax:
# modprobe ixgbevf [<option>=<VAL1>,<VAL2>,...]
http://www.intel.com/design/network/applnots/ap450.htm. A descriptor describes a data buffer and attributes related to the data buffer. This information is accessed by the hardware.
Additional Configurations
Configuring the Driver on Different Distributions
Configuring a network driver to load properly when the system is started is distribution dependent. Typically, the configuration process involves adding an alias line to /etc/modules.conf or /etc/modprobe.conf.
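For example, a typical alias entry looks like this; the interface name varies by system:
alias eth0 ixgbevf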
Linux* Driver Installation and Configuration Known Issues MAC Address of Virtual Function Changes Unexpectedly If a Virtual Function's MAC address is not assigned in the host, then the VF (virtual function) driver will use a random MAC address. This random MAC address may change each time the VF driver is reloaded. You can assign a static MAC address in the host machine. This static MAC address will survive a VF driver reload.
Linux* Driver Installation and Configuration Prior to unloading the PF driver, you must first ensure that all VFs are no longer active. Do this by shutting down all VMs and unloading the VF driver.
Linux* Driver Installation and Configuration i40e Linux Driver for the Intel Ethernet Controller 700 Series i40e Overview NOTE: The kernel assumes that TC0 is available, and will disable Priority Flow Control (PFC) on the device if TC0 is not available. To fix this, ensure TC0 is enabled when setting up DCB on your switch. NOTE: If the physical function (PF) link is down, you can force link up (from the host PF) on any virtual functions (VF) bound to the PF.
Linux* Driver Installation and Configuration l l l l l l Intel® Ethernet 10G 2P X710-T2L-t Adapter Intel® Ethernet 10G 4P X710-T4L-t Adapter Intel® Ethernet 40G 2P XL710 QSFP+ rNDC Intel® Ethernet Converged Network Adapter XL710-Q2 Intel® Ethernet 25G 2P XXV710 Adapter Intel® Ethernet 25G 2P XXV710 Mezz Building and Installation There are three methods for installing the Linux driver: l Install from Source Code l Install Using KMP RPM l Install Using KMOD RPM Install from Source Code To build a binary R
Linux* Driver Installation and Configuration 9. Verify that the interface works. Enter the following, where is the IP address for another machine on the same subnet as the interface that is being tested: # ping Install Using KMP RPM The KMP RPMs update existing i40e RPMs currently installed on the system. These updates are provided by SuSE in the SLES release. If an RPM does not currently exist on the system, the KMP will not install.
Linux* Driver Installation and Configuration Command Line Parameters In general, ethtool and other OS specific commands are used to configure user changeable parameters after the driver is loaded. The i40e driver only supports the max_vfs kernel parameter on older kernels that do not have the standard sysfs interface. The only other module parameter is the debug parameter that can control the default logging verbosity of the driver.
Linux* Driver Installation and Configuration Parameter Name Valid Range/Settings Default Description When SR-IOV mode is enabled, hardware VLAN filtering and VLAN tag stripping/insertion will remain enabled. Please remove the old VLAN filter before the new VLAN filter is added.
Linux* Driver Installation and Configuration For example: vf008.rx_bytes: 0 vf008.rx_unicast: 0 vf008.rx_multicast: 0 vf008.rx_broadcast: 0 vf008.rx_discards: 0 vf008.rx_unknown_protocol: 0 vf008.tx_bytes: 0 vf008.tx_unicast: 0 vf008.tx_multicast: 0 vf008.tx_broadcast: 0 vf008.tx_discards: 0 vf008.tx_errors: 0 Configuring VLAN Tagging on SR-IOV Enabled Adapter Ports To configure VLAN tagging for the ports on an SR-IOV enabled adapter, use the following command.
Linux* Driver Installation and Configuration NOTE: It's important to set the VF to trusted before setting promiscuous mode. If the VM is not trusted, the PF will ignore promiscuous mode requests from the VF. If the VM becomes trusted after the VF driver is loaded, you must make a new request to set the VF to promiscuous. Once the VF is designated as trusted, use the following commands in the VM to set the VF to promiscuous mode.
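As a sketch, the host-side trust setting and the in-VM promiscuous commands typically look like the following; interface names and the VF number are placeholders:
On the host:
# ip link set dev <PF> vf 1 trust on
In the VM:
# ip link set <VF_iface> promisc on
For multicast promiscuous mode only:
# ip link set <VF_iface> allmulticast on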
Linux* Driver Installation and Configuration The Intel Ethernet Flow Director performs the following tasks: l Directs receive packets according to their flows to different queues l Enables tight control on routing a flow in the platform l Matches flows and CPU cores for flow affinity l Supports multiple parameters for flexible flow classification and load balancing (in SFP mode only) An included script (set_irq_affinity) automates setting the IRQ to CPU affinity.
- <ethX>: the Ethernet device to program
- <type>: can be ip4, tcp4, udp4, or sctp4
- <ip-address>: the IP address to match on
- <port-number>: the port number to match on
- <queue-id>: the queue to direct traffic towards (-1 discards the matched traffic)
To delete a filter:
# ethtool -U <ethX> delete <N>
Where <N> is the filter ID displayed when printing all the active filters, and may also have been specified using "loc <N>" when adding the filter.
Linux* Driver Installation and Configuration ... user-def 0x4FFFF ... tells the filter to look 4 bytes into the payload and match that value against 0xFFFF. The offset is based on the beginning of the payload, and not the beginning of the packet. Thus flow-type tcp4 ... user-def 0x8BEAF ... would match TCP/IPv4 packets which have the value 0xBEAF 8 bytes into the TCP/IPv4 payload. Note that ICMP headers are parsed as 4 bytes of header and 4 bytes of payload.
Linux* Driver Installation and Configuration Outer MAC L2 filter Inner MAC filter l Outer MAC, Tenant ID, Inner MAC l Application Destination IP l Application Source-IP, Inner MAC l ToQueue: Use MAC, VLAN to point to a queue L3 filters l Application Destination IP l l l Cloud filters are specified using ethtool's ntuple interface, but the driver uses user-def to determine whether to treat the filter as a cloud filter or a regular filter.
Linux* Driver Installation and Configuration the port may appear to hang on heavy traffic. Use ethtool to change the flow control settings. To enable or disable Rx or Tx Flow Control: # ethtool -A rx tx NOTE: This command only enables or disables Flow Control if auto-negotiation is disabled. If auto-negotiation is enabled, this command changes the parameters used for auto-negotiation with the link partner.
Linux* Driver Installation and Configuration l l l NVM version 6.01 or later ADQ cannot be enabled when the following features are enabled: Data Center Bridging (DCB), Multiple Functions per Port (MFP), or Sideband Filters. If another driver (for example, DPDK) has set cloud filters, you cannot enable ADQ. To create TCs on the interface: NOTE: Run all TC commands from the ../iproute2/tc/ directory. 1. Use the tc command to create traffic classes (TCs). You can create a maximum of 8 TCs per interface.
Linux* Driver Installation and Configuration # tc qdisc add dev ens4f0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 queues 16@0 16@16 hw 1 mode channel shaper bw_rlimit max_rate 1Gbit 3Gbit Where: l l map 0 0 0 0 1 1 1 1: Sets priorities 0-3 to use tc0 and 4-7 to use tc1 queues 16@0 16@16: Assigns 16 queues to tc0 at offset 0 and 16 queues to tc1 at offset 16 You can add multiple filters to the device, using the same recipe (and requires no additional recipe resources), either on the same interface or on diff
Linux* Driver Installation and Configuration NOTES: l The maximum MTU setting for Jumbo Frames is 9710 bytes. This value coincides with the maximum Jumbo Frames size of 9728 bytes. l This driver will attempt to use multiple page sized buffers to receive each jumbo packet. This should help to avoid buffer starvation issues when allocating receive packets. l Packet loss may have a greater impact on throughput when you use jumbo frames.
Linux* Driver Installation and Configuration DCB is a configuration Quality of Service implementation in hardware. It uses the VLAN priority tag (802.1p) to filter traffic. That means that there are 8 different priorities that traffic can be filtered into. It also enables priority flow control (802.1Qbb) which can limit or eliminate the number of dropped packets during network stress. Bandwidth can be allocated to each of these priorities, which is enforced at the hardware level (802.1Qaz).
Linux* Driver Installation and Configuration l l l l No FEC Auto FEC BASE-R FEC RS FEC Dynamic Device Personalization Dynamic Device Personalization (DDP) allows you to change the packet processing pipeline of a device by applying a profile package to the device at runtime. Profiles can be used to, for example, add support for new protocols, change existing protocols, or change default settings. DDP profiles can also be rolled back without rebooting the system.
Linux* Driver Installation and Configuration | +-- vlan_mirror | +-- egress_mirror | +-- ingress_mirror | +-- mac_anti_spoof | +-- vlan_anti_spoof | +-- loopback | +-- mac | +-- mac_list | +-- promisc | +-- vlan_strip | +-- stats | +-- link_state | +-- max_tx_rate | +-- min_tx_rate | +-- spoofcheck | +-- trust | +-- vlan NOTES: 1. kobject started from “sriov” is not available from existing kernel sysfs, and it requires device driver to implement this interface. 2. maximum number of SR-IOV instances is 256.
Linux* Driver Installation and Configuration Example 2: remove egress traffic mirroring on PF p1p2 VF 1 to VF 7. # echo rem 7 > /sys/class/net/p1p2/device/sriov/1/egress_mirror ingress_mirror l Supports ingress traffic mirroring. l Example 1: mirror ingress traffic on PF p1p2 VF 1 to VF 7. # echo add 7 > /sys/class/net/p1p2/device/sriov/1/ingress_mirror l Example 2: show current ingress mirroring configuration.
Linux* Driver Installation and Configuration Example 3: unset MCAST promiscuous on PF p1p2 VF 1. # echo rem mcast > /sys/class/net/p1p2/device/sriov/1/promisc l Example 4: show current promiscuous mode configuration. # cat /sys/class/net/p1p2/device/sriov/1/promisc vlan_strip l Supports enabling/disabling VF device outer VLAN stripping l Example 1: enable VLAN strip on VF 3. # echo ON > /sys/class/net/p1p1/device/sriov/3/vlan_strip l Example 2: disable VLAN striping VF 3.
Increase the number of Rx descriptors for each Rx ring using ethtool. This may help reduce Rx packet drops at the expense of system resources:
# ethtool -G <ethX> rx N
Where N is the desired number of descriptors.
Interrupt Rate Limiting
This driver supports an adaptive interrupt rate mechanism that is tuned for general workloads.
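The interrupt interval can be customized via ethtool. For example, to disable adaptive moderation and pin the interval to 125 microseconds (roughly 8000 interrupts per second); the interface name is a placeholder:
# ethtool -C <ethX> adaptive-rx off adaptive-tx off
# ethtool -C <ethX> rx-usecs 125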
Linux* Driver Installation and Configuration IPv6/UDP checksum offload does not work on some older kernels Some distributions with older kernels do not properly enable IPv6/UDP checksum offload. To use IPv6 checksum offload, it may be necessary to upgrade to a newer kernel. depmod warning messages about unknown symbol during installation During driver installation, you may see depmod warning messages referring to unknown symbols i40e_register_client and i40e_unregister_client.
Linux* Driver Installation and Configuration # ethtool -r Where is the PF interface in the host, for example: p5p1. You may need to run the command more than once to get link on all virtual ports. MAC Address of Virtual Function Changes Unexpectedly If a Virtual Function's MAC address is not assigned in the host, then the VF (virtual function) driver will use a random MAC address. This random MAC address may change each time the VF driver is reloaded.
Linux* Driver Installation and Configuration Unplugging Network Cable While ethtool -p is Running In kernel versions 2.6.32 and newer, unplugging the network cable while ethtool -p is running will cause the system to become unresponsive to keyboard commands, except for control-alt-delete. Restarting the system appears to be the only remedy. Rx Page Allocation Errors 'Page allocation failure. order:0' errors may occur under stress with kernels 2.6.25 and newer.
Linux* Driver Installation and Configuration SR-IOV virtual functions have identical MAC addresses in RHEL8 When you create multiple SR-IOV virtual functions on Red Hat Enterprise Linux 8, the VFs may have identical MAC addresses. Only one VF will pass traffic, and all traffic on other VFs with identical MAC addresses will fail. This is related to the "MACAddressPolicy=persistent" setting in /usr/lib/systemd/network/99-default.link. To resolve this issue, edit the /usr/lib/systemd/network/99-default.
Linux* Driver Installation and Configuration ice Linux Driver for the Intel Ethernet Controller 800 Series ice Overview NOTE: Devices based on the Intel® Ethernet Controller 800 Series may exhibit poor receive performance and dropped packets. The following steps may improve the situation: 1. In your system's BIOS/UEFI settings, select the "Performance" profile. 2. On RHEL 7.x/8.x, use the tuned power management tool to set the "latency-performance" profile. 3.
4. Compile the driver module:
# make install
The binary will be installed as: /lib/modules/<KERNEL_VER>/kernel/drivers/net/ice/ice.ko
The install locations listed above are the default locations. This might differ for various Linux distributions. For more information, see the ldistrib.txt file included in the driver tar.
5. Remove the old driver:
# rmmod ice
6. Install the module using the modprobe command:
# modprobe ice <parameter>=<value>
7.
Linux* Driver Installation and Configuration # rpm -i intel-ice-1.3.8.6-1.x86_64.rpm # rpm -i intel-ice-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm Install Using KMOD RPM The KMOD RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is: kmod---1..rpm For example, kmod-ice-2.3.4-1.x86_64.rpm: l ice is the driver name l 2.3.
Linux* Driver Installation and Configuration The DDP package loads during device initialization. The driver looks for intel/ice/ddp/ice.pkg in your firmware root (typically /lib/firmware/ or /lib/firmware/updates/) and checks that it contains a valid DDP package file. If the driver is unable to load the DDP package, the device will enter Safe Mode.
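A minimal installation sketch follows; the package file name is illustrative, and the firmware root may differ on your distribution:
# cp ice-1.3.30.0.pkg /lib/firmware/updates/intel/ice/ddp/
# ln -sf ice-1.3.30.0.pkg /lib/firmware/updates/intel/ice/ddp/ice.pkg
# rmmod ice
# modprobe ice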
- IPv4
- TCPv4
- UDPv4
- IPv6
- TCPv6
- UDPv6
For a given flow type, it supports valid combinations of IP addresses (source or destination) and UDP/TCP ports (source and destination). For example, you can supply only a source IP address, a source IP address and a destination port, or any combination of one or more of these four parameters.
- <ethX>: the Ethernet device to program
- <type>: can be ip4, tcp4, udp4, sctp4, ip6, tcp6, udp6, or sctp6
- <ip-address>: the IP address to match on
- <port-number>: the port number to match on
- <queue-id>: the queue to direct traffic towards (-1 discards the matched traffic)
To delete a filter:
# ethtool -U <ethX> delete <N>
Where <N> is the filter ID displayed when printing all the active filters, and may also have been specified using "loc <N>" when adding the filter.
Linux* Driver Installation and Configuration ... user-def 0x4FFFF ... tells the filter to look 4 bytes into the payload and match that value against 0xFFFF. The offset is based on the beginning of the payload, and not the beginning of the packet. Thus flow-type tcp4 ... user-def 0x8BEAF ... would match TCP/IPv4 packets which have the value 0xBEAF 8 bytes into the TCP/IPv4 payload. Note that ICMP headers are parsed as 4 bytes of header and 4 bytes of payload.
Linux* Driver Installation and Configuration but you may encounter unexpected results if there's a conflict between aRFS and ntuple requests. To set up aRFS: 1. Enable the Intel Ethernet Flow Director and ntuple filters using ethtool. # ethtool -K ntuple on 2. Set up the number of entries in the global flow table. For example: # NUM_RPS_ENTRIES=16384 # echo $NUM_RPS_ENTRIES > /proc/sys/net/core/rps_sock_flow_entries 3. Set up the number of entries in the per-queue flow table.
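A typical per-queue setup writes the entry count to each receive queue's rps_flow_cnt; the queue index and count below are placeholders:
# echo 2048 > /sys/class/net/<ethX>/queues/rx-0/rps_flow_cnt
Repeat for each rx-<N> queue on the interface.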
Linux* Driver Installation and Configuration Configuring VLAN Tagging on SR-IOV Enabled Adapter Ports To configure VLAN tagging for the ports on an SR-IOV enabled adapter, use the following command. The VLAN configuration should be done before the VF driver is loaded or the VM is booted. The VF is not aware of the VLAN tag being inserted on transmit and removed on received frames (sometimes called "port VLAN" mode).
Linux* Driver Installation and Configuration Virtual Function (VF) Tx Rate Limit Use the ip command to configure the maximum or minimum Tx rate limit for a VF from the PF interface.
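For example, to cap VF 0 at 1000 Mbps and guarantee it 100 Mbps (the max_tx_rate and min_tx_rate options require a recent iproute2; the interface name is a placeholder):
# ip link set <PF> vf 0 max_tx_rate 1000
# ip link set <PF> vf 0 min_tx_rate 100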
Linux* Driver Installation and Configuration NOTES: l The maximum MTU setting for Jumbo Frames is 9702 bytes. This value coincides with the maximum Jumbo Frames size of 9728 bytes. l This driver will attempt to use multiple page sized buffers to receive each jumbo packet. This should help to avoid buffer starvation issues when allocating receive packets. l Packet loss may have a greater impact on throughput when you use jumbo frames.
Linux* Driver Installation and Configuration l agents that interface with the Linux kernel's DCB Netlink API. We recommend using OpenLLDP as the DCBX agent when running in software mode. For more information, see the OpenLLDP man pages and https://github.com/intel/openlldp. iSCSI with DCB is not supported. FW-LLDP (Firmware Link Layer Discovery Protocol) Use ethtool to change FW-LLDP settings. The FW-LLDP setting is per port and persists across boots.
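For example, a hedged sketch assuming interface eth0 (the private flag name may vary by driver and release; check '--show-priv-flags' output first):
# ethtool --set-priv-flags eth0 fw-lldp-agent off
# ethtool --show-priv-flags eth0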
Linux* Driver Installation and Configuration
NAPI
This driver supports NAPI (Rx polling mode). For more information on NAPI, see https://wiki.linuxfoundation.org/networking/napi.
MACVLAN
This driver supports MACVLAN. Kernel support for MACVLAN can be tested by checking if the MACVLAN driver is loaded. You can run 'lsmod | grep macvlan' to see if the MACVLAN driver is loaded or run 'modprobe macvlan' to try to load the MACVLAN driver.
NOTE:
l In passthru mode, you can only set up one MACVLAN device.
Linux* Driver Installation and Configuration
Tunnel/Overlay Stateless Offloads
Supported tunnels and overlays include VXLAN, GENEVE, and others depending on hardware and software configuration. Stateless offloads are enabled by default. To view the current state of all offloads:
# ethtool -k <ethX>
UDP Segmentation Offload
Allows the adapter to offload transmit segmentation of UDP packets with payloads up to 64K into valid Ethernet frames.
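A hedged sketch of toggling the feature, assuming interface eth0 and that your kernel exposes the tx-udp-segmentation feature string:
# ethtool -K eth0 tx-udp-segmentation on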
Linux* Driver Installation and Configuration
Interrupt Rate Limiting
This driver supports an adaptive interrupt rate mechanism that is tuned for general workloads. The user can customize the interrupt rate control for specific workloads, via ethtool, adjusting the number of microseconds between interrupts. To set the interrupt rate manually, you must disable adaptive mode:
# ethtool -C <ethX> adaptive-rx off adaptive-tx off
For lower CPU utilization:
1.
Linux* Driver Installation and Configuration
Fiber optics and auto-negotiation
Modules based on 100GBASE-SR4, active optical cable (AOC), and active copper cable (ACC) do not support auto-negotiation per the IEEE specification. To obtain link with these modules, you must turn off auto-negotiation on the link partner's switch ports.
'ethtool -a' autonegotiate result may vary between drivers
For kernel versions 4.6 or higher, 'ethtool -a' will show the advertised and negotiated autoneg settings.
Linux* Driver Installation and Configuration MDD events in dmesg when creating maximum number of VLANs on the VF When you create the maximum number of VLANs on the VF, you may see MDD events in dmesg on the host. This is due to the asynchronous design of the iavf driver. It always reports success to any VLAN requests, but the requests may fail later.
Linux* Driver Installation and Configuration
iavf Linux Driver
iavf Overview
The i40evf driver was renamed to the iavf (Intel Adaptive Virtual Function) driver. This change reduces the impact of future Intel Ethernet controller releases: the iavf driver allows you to upgrade your hardware without needing to upgrade the virtual function driver in each of the VMs running on top of the hardware.
Linux* Driver Installation and Configuration
Install from Source Code
To build a binary RPM* package of this driver, run 'rpmbuild -tb <filename.tar.gz>'. Replace <filename.tar.gz> with the specific filename of the driver.
NOTES:
l For the build to work properly it is important that the currently running kernel MATCH the version and configuration of the installed kernel source. If you have just recompiled your kernel, reboot the system.
l RPM functionality has only been tested in Red Hat distributions.
1.
Linux* Driver Installation and Configuration
KMP RPMs are provided for supported Linux distributions. The naming convention for the included KMP RPMs is:
intel-<component name>-kmp-<kernel type>-<component version>_<kernel version>.<arch type>.rpm
For example, intel-iavf-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm:
l iavf is the component name
l default is the kernel type
l 1.3.8.6 is the component version
l 2.6.27.19_5 is the kernel version
Linux* Driver Installation and Configuration Setting VLAN Tag Stripping If you have applications that require Virtual Functions (VFs) to receive packets with VLAN tags, you can disable VLAN tag stripping for the VF. The Physical Function (PF) processes requests issued from the VF to enable or disable VLAN tag stripping. Note that if the PF has assigned a VLAN to a VF, then requests from that VF to set VLAN tag stripping will be ignored.
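A hedged sketch, run from inside the VM and assuming the VF interface is eth1:
# ethtool -K eth1 rxvlan off
Re-enable stripping with 'rxvlan on'. If the PF has assigned a VLAN to the VF, the request is ignored, as noted above.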
Linux* Driver Installation and Configuration l l l l hw 1 mode channel: 'channel' with 'hw' set to 1 is a new hardware offload mode in mqprio that makes full use of the mqprio options, the TCs, the queue configurations, and the QoS parameters. shaper bw_rlimit: For each TC, sets the minimum and maximum bandwidth rates. The totals must be equal to or less than the port speed. This parameter is optional and is required only to set up the Tx rates.
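A hedged sketch of channel mode with bandwidth shaping, assuming interface eth0 and illustrative queue counts and rates:
# tc qdisc add dev eth0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 queues 4@0 4@4 hw 1 mode channel shaper bw_rlimit min_rate 1Gbit 2Gbit max_rate 4Gbit 5Gbit
This creates two traffic classes, gives TC0 queues 0-3 and TC1 queues 4-7, and applies per-TC minimum and maximum rates.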
Linux* Driver Installation and Configuration
+-- [VF-id, 0 .. 255] (see 2 below)
|   +-- trunk
|   +-- vlan_mirror
|   +-- egress_mirror
|   +-- ingress_mirror
|   +-- mac_anti_spoof
|   +-- vlan_anti_spoof
|   +-- loopback
|   +-- mac
|   +-- mac_list
|   +-- promisc
|   +-- vlan_strip
|   +-- stats
|   +-- link_state
|   +-- max_tx_rate
|   +-- min_tx_rate
|   +-- spoofcheck
|   +-- trust
|   +-- vlan
NOTES:
1.
Linux* Driver Installation and Configuration
l egress_mirror
l Supports egress traffic mirroring.
l Example 1: add egress traffic mirroring on PF p1p2 VF 1 to VF 7.
# echo add 7 > /sys/class/net/p1p2/device/sriov/1/egress_mirror
l Example 2: remove egress traffic mirroring on PF p1p2 VF 1 to VF 7.
# echo rem 7 > /sys/class/net/p1p2/device/sriov/1/egress_mirror
l ingress_mirror
l Supports ingress traffic mirroring.
l Example 1: mirror ingress traffic on PF p1p2 VF 1 to VF 7.
Linux* Driver Installation and Configuration
l promisc
l Supports setting/unsetting VF device unicast promiscuous mode and multicast promiscuous mode.
l Example 1: set MCAST promiscuous on PF p1p2 VF 1.
# echo add mcast > /sys/class/net/p1p2/device/sriov/1/promisc
l Example 2: set UCAST promiscuous on PF p1p2 VF 1.
# echo add ucast > /sys/class/net/p1p2/device/sriov/1/promisc
l Example 3: unset MCAST promiscuous on PF p1p2 VF 1.
Linux* Driver Installation and Configuration
Do not unload port driver if VF with active VM is bound to it
NOTE: Do not unload a port's driver if a Virtual Function (VF) with an active Virtual Machine (VM) is bound to it. Doing so will cause the port to appear to hang. Once the VM shuts down, or otherwise releases the VF, the command will complete.
Using four traffic classes fails
Do not try to reserve more than three traffic classes in the iavf driver.
Linux* Driver Installation and Configuration Another alternative is to install the interfaces in separate broadcast domains (either in different switches or in a switch partitioned to VLANs). Rx Page Allocation Errors 'Page allocation failure. order:0' errors may occur under stress with kernels 2.6.25 and newer. This is caused by the way the Linux kernel reports this stressed condition. Host May Reboot after Removing PF when VF is Active in Guest Using kernel versions earlier than 3.
VMWare ESX Drivers and Support VMWare ESX Drivers and Support Driver types Intel provides the following types of drivers for VMware ESX: l Native mode drivers are the default driver for the VMware ESX environment. They are interrupt driven and developed using VMware’s native mode API. l Enhanced Network Stack (ENS) drivers are intended for use in VMware NSX-T deployments. These drivers are polling mode drivers. l Unified drivers support both interrupt and poll mode operation.
Remote Boot Remote Boot allows you to boot a system using only an Ethernet adapter. You connect to a server that contains an operating system image and use that to boot your local system. Flash Images "Flash" is a generic term for nonvolatile RAM (NVRAM), firmware, and option ROM (OROM). Depending on the device, it can be on the NIC or on the system board. NOTE: You cannot update the flash of a device in the "Pending Reboot" state. Reboot your system before attempting to update the device's flash.
Remote Boot
In the Boot Manager Boot Menu, Intel adapters are identified as follows:
l XL710-controlled adapters: "IBA 40G"
l Other 10G adapters: "IBA XE"
l 1G adapters: "IBA 1G"
Intel® Boot Agent Configuration
Boot Agent Client Configuration
The Boot Agent is enabled and configured from HII.
CAUTION: If spanning tree protocol is enabled on a switch port through which a port is trying to use PXE, the delay before the port starts forwarding can cause a DHCP timeout.
Remote Boot Intel Boot Agent Target/Server Setup Overview For the Intel® Boot Agent software to perform its intended job, there must be a server set up on the same network as the client computer. That server must recognize and respond to the PXE or BOOTP boot protocols that are used by the Intel Boot Agent software. NOTE: When the Intel Boot Agent software is installed as an upgrade for an earlier version boot ROM, the associated server-side software may not be compatible with the updated Intel Boot Agent.
Remote Boot
PXE-E05: The LAN adapter's configuration is corrupted or has not been initialized. The Intel Boot Agent cannot continue.
The adapter's EEPROM is corrupted. The Intel Boot Agent determined that the adapter EEPROM checksum is incorrect. The agent will return control to the BIOS and not attempt to remote boot. Try to update the flash image. If this does not solve the problem, contact your system administrator or Intel Customer Support.
Remote Boot PXE-EC3: BC ROM ID structure is invalid. Base code could not be installed. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image. PXE-EC4: UNDI ID structure was not found. UNDI ROM ID structure signature is incorrect. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image. PXE-EC5: UNDI ROM ID structure is invalid. The structure length is incorrect.
Remote Boot If you are having problems with the local (client) or network operating system, contact the operating system manufacturer for assistance. If you are having problems with some application program, contact the application manufacturer for assistance. If you are having problems with any of your computer's hardware or with the BIOS, contact your computer system manufacturer for assistance.
Remote Boot
Configuring Intel® Ethernet iSCSI Boot on a Linux* Client Initiator
1. Install the Open-iSCSI initiator utilities.
#yum -y install iscsi-initiator-utils
2. Refer to the README file found at https://github.com/mikechristie/open-iscsi.
3. Configure your iSCSI array to allow access.
a. Examine /etc/iscsi/initiatorname.iscsi for the Linux host initiator name.
b. Update your volume manager with this host initiator name.
4. Set iscsi to start on boot.
#chkconfig iscsid on
#chkconfig iscsi on
5.
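A hedged sketch of discovering and logging in to the target with the standard Open-iSCSI tools (the portal address is illustrative):
# iscsiadm -m discovery -t sendtargets -p 192.168.0.20
# iscsiadm -m node --login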
Remote Boot The usage of this menu is described below: l One network port in the system can be selected as the primary boot port by pressing the 'P' key when highlighted. The primary boot port will be the first port used by Intel® Ethernet iSCSI Boot to connect to the iSCSI target. Only one port may be selected as a primary boot port. l One network port in the system can be selected as the secondary boot port by pressing the 'S' key when highlighted.
Remote Boot Listed below are the options in the Intel® iSCSI Boot Configuration Menu: l Use Dynamic IP Configuration (DHCP) - Selecting this checkbox will cause iSCSI Boot to attempt to get the client IP address, subnet mask, and gateway IP address from a DHCP server. If this checkbox is enabled, these fields will not be visible. l Initiator Name - Enter the iSCSI initiator name to be used by Intel® iSCSI Boot when connecting to an iSCSI target.
Remote Boot The iSCSI CHAP Configuration menu has the following options to enable CHAP authentication: l Use CHAP - Selecting this checkbox will enable CHAP authentication for this port. CHAP allows the target to authenticate the initiator. After enabling CHAP authentication, a user name and target password must be entered. l User Name - Enter the CHAP user name in this field. This must be the same as the CHAP user name configured on the iSCSI target.
Remote Boot iSCSI Boot Target Configuration For specific information on configuring your iSCSI target system and disk volume, refer to instructions provided by your system or operating system vendor. Listed below are the basic steps necessary to setup Intel® Ethernet iSCSI Boot to work with most iSCSI target systems. The specific steps will vary from one vendor to another. NOTES: l To support iSCSI Boot, the target needs to support multiple sessions from the same initiator.
Remote Boot
iscsi:<server name>:<protocol>:<port>:<LUN>:<target name>
l Server name: DHCP server name or valid IPv4 address literal. Example: 192.168.0.20.
l Protocol: Transport protocol used by iSCSI. Default is tcp (6). No other protocols are currently supported.
l Port: Port number of the iSCSI target. A default value of 3260 will be used if this field is left blank.
l LUN: LUN ID configured on the iSCSI target system. Default is zero.
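For example, a hedged root path for a target at 192.168.0.20 over TCP (6), port 3260, LUN 0 (the IQN is illustrative):
iscsi:192.168.0.20:6:3260:0:iqn.1986-03.com.intel:target1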
Remote Boot
1. Setup Windows iSCSI Boot.
2. If you have not already done so, install the latest Intel Ethernet Adapter drivers and Intel PROSet.
3. Open Intel PROSet for Windows Device Manager or Intel® PROSet Adapter Configuration Utility and select the Boot Options Tab.
4. From Settings, select iSCSI Boot Crash Dump, set the Value to Enabled, and click OK.
iSCSI Troubleshooting
The table below lists problems that can possibly occur when using Intel® Ethernet iSCSI Boot.
Remote Boot
Error message displayed: "Failed to detect link"
Error message displayed: "DHCP Server not found!"
Error message displayed: "PnP Check Structure is invalid!"
Error message displayed: "Invalid iSCSI connection information"
Error message displayed: "Unsupported SCSI disk block size!"
Error message displayed: "ERROR: Could not establish TCP/IP connection with iSCSI target system."
Error message displayed: "ERROR: CHAP authentication with target failed.
Remote Boot
When installing Linux to NetApp Filer, after a successful target disk discovery, error messages may be seen similar to those listed below:
l Iscsi-sfnet:hostx: Connect failed with rc -113: No route to host
l Iscsi-sfnet:hostx: establish_session failed. Could not connect to target
If these error messages are seen, unused iscsi interfaces on the NetApp Filer should be disabled and "Continuous=no" should be added to the iscsi.conf file.
Error message displayed: "ERROR: iSCSI target not found.
Remote Boot
iSCSI Remote Boot Firmware may show 0.0.0.0 in DHCP server IP address field
With a Linux-based DHCP server, the iSCSI Remote Boot firmware shows 0.0.0.0 in the DHCP server IP address field. The iSCSI Remote Boot firmware reads the DHCP server IP address from the Next-Server field in the DHCP response packet. However, the Linux-based DHCP server may not set the field by default. Add "next-server <DHCP server IP address>;" in dhcpd.conf to show the correct DHCP server IP address.
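A hedged dhcpd.conf sketch (all addresses are illustrative):
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.199;
    next-server 192.168.0.20;
}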
Remote Boot I/OAT Offload may stop with Intel® Ethernet iSCSI Boot or with Microsoft Initiator installed A workaround for this issue is to change the following registry value to "0": HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\IOATDMA\Start Only change the registry value if iSCSI Boot is enabled and if you want I/OAT offloading. A blue screen will occur if this setting is changed to "0" when iSCSI Boot is not enabled.
Firmware Firmware Firmware is a layer of software that is programmed into a device's memory. It provides low level functionality for the device. In most cases you will not notice the firmware on your device at all. Firmware error states usually occur because of an unsuccessful update. Firmware Security Intel or your equipment manufacturer will occasionally release a firmware security patch.
Firmware -u -- Sets nvmupdate to update mode. -optinminsrev -- Tells the tool to update the MinSRev value. -l update.log -- Specifies the name of the log file. -o update.xml -- Specifies the name of the results file. This is an XML file that contains the inventory/update results. -c nvmupdate.cfg -- Specifies the name of the configuration file. This is a text file that contains descriptions of networking devices and firmware versions for those devices.
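Putting those options together, a hedged invocation sketch (the 64-bit Linux executable name, nvmupdate64e, is an assumption; use the tool name shipped in your package):
# nvmupdate64e -u -l update.log -o update.xml -c nvmupdate.cfg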
Firmware Resolving Firmware Recovery Mode Issues If your device is in Firmware Recovery mode you can restore it to factory defaults using the process for resolution of Firmware Recovery Mode Issues as outlined in the sub-sections below. NVM Self Check The process begins after power-on or reboot. At this time, the firmware will perform tests to assess whether there is damage or corruption of the device NVM image.
Firmware Actions: 1. Before initiating device recovery, the integrity of the host operating system, device drivers and firmware utilities must be verified and reinstalled if necessary. Fully functional operating system, device drivers and tools are required for device recovery. Please consult your operating system specific instructions on how to scan and repair potentially damaged system files. 2.
Troubleshooting
Common Problems and Solutions
Many network problems have simple, easy-to-fix causes. Review each of the following before going further.
l Check for recent changes to hardware, software, or the network that may have disrupted communications.
l Check the driver software.
l Make sure you are using the latest appropriate drivers for your adapter from the Dell support website.
l Disable (or unload), then re-enable (reload) the driver or adapter.
l Check for conflicting settings.
Troubleshooting Problem Solution Verify that you are running the latest operating system revision for your switch and that the switch is compliant with the proper IEEE standard: l IEEE 802.3ad-compliant (gigabit over copper) l IEEE 802.3an-compliant (10 gigabit over copper) The device does not connect at the expected speed. When Gigabit Master/Slave mode is forced to "master" mode on both the Intel adapter and its link partner, the link speed obtained by the Intel adapter may be lower than expected.
Troubleshooting Multiple Adapters When configuring a multi-adapter environment, you must upgrade all Intel adapters in the computer to the latest software. If the computer has trouble detecting all adapters, consider the following: l If you enable Wake on LAN* (WoL) on more than two adapters, the Wake on LAN feature may overdraw your system’s auxiliary power supply, resulting in the inability to boot the system and other unpredictable problems.
Troubleshooting Possible Misconfiguration of the Ethernet Port You may see an informational message stating that a potential misconfiguration of the Ethernet port was detected. This is to alert you that your device is being underutilized. If this was intentional, you may ignore this message. For example, setting your Intel® Ethernet 100G 2P E810-C adapter to 2x2x25 is valid, but it does not use the full capabilities of the device.
Troubleshooting
Testing from Windows PowerShell*
Intel provides two PowerShell cmdlets for testing your adapter.
l Test-IntelNetDiagnostics runs the specified test suite on the specified device. See the Test-IntelNetDiagnostics help inside PowerShell for more information.
l Test-IntelNetIdentifyAdapter blinks the LED on the specified device.
Linux Diagnostics
The driver utilizes the ethtool interface for driver configuration and diagnostics, as well as displaying statistical information.
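For example, a hedged Linux sketch assuming interface eth0 (the offline self-test takes the link down briefly):
# ethtool -t eth0 offline
# ethtool -S eth0
'-t' runs the adapter self-test and '-S' dumps the driver statistics counters.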
Troubleshooting
Event ID | Message | Severity
25 | PROBLEM: The MAC address on the network adapter is invalid. ACTION: Visit http://www.intel.com/support/go/network/adapter/home.htm for assistance. | Error
27 | Network link has been disconnected. | Warning
30 | PROBLEM: The network adapter is configured for auto-negotiation but the link partner is not. This may result in a duplex mismatch. ACTION: Configure the link partner for auto-negotiation.
Troubleshooting
Event ID | Message | Severity
52 | PROBLEM: The network adapter has been stopped because it has overheated. | Error
53 | Jumbo Frames cannot be configured when MACSec is enabled. | Informational
54 | PROBLEM: A malicious VF driver has been detected. | Warning
56 | The network driver has been stopped because the network adapter has been removed. | Informational
58 | Network link has been established at 25Gbps full duplex. | Informational
60 | Network link has been established at 50Gbps full duplex.
Troubleshooting
Event ID | Message | Severity
261 | Enhanced Transmission Selection feature on a device has changed to operational. | Informational
262 | Priority Flow Control feature on a device has changed to operational. | Informational
263 | Application feature on a device has changed to operational. | Informational
264 | Application feature has been disabled on a device. | Informational
265 | Application feature has been enabled on a device. | Informational
Troubleshooting
Event ID | Message | Severity
794 | Logical Link feature on a device has changed to non-operational. | Error
795 | Failed to open device. | Error
796 | DCB settings of the network adapter are invalid. | Error
797 | DCB settings of the network adapter are invalid - AppSelector. | Error
798 | Detected a non-optimal network adapter driver component. Please install network adapter driver version 3.5 or greater.
Troubleshooting Indicator Lights The Intel Server and Desktop network adapters feature indicator lights on the adapter backplate that serve to indicate activity and the status of the adapter board. The following tables define the meaning for the possible states of the indicator lights for each adapter board.
Troubleshooting
Dual Port SFP28 Adapters
The Intel® Ethernet 25G 2P E810-XXV Adapter and Intel® Ethernet 25G 2P XXV710 Adapter have the following indicator lights:
GRN 25G:
  Green - Linked at maximum port speed
  Yellow - Linked at less than maximum port speed
ACTIVITY:
  Blinking On/Off - Actively transmitting or receiving data
  Off - No link
The Intel® Ethernet 25G 2P E810-XXV OCP has the following indicator lights:
  Green - Operating at maximum port speed
Troubleshooting
Dual Port SFP/SFP+ Adapters
The Intel® Ethernet 10G 2P X710 OCP has the following indicator lights:
LNK:
  Green - Operating at maximum port speed
  Yellow - Linked at less than maximum port speed
ACT:
  Green flashing - Data activity
  Off - No activity
The Intel® Ethernet Converged Network Adapter X710 has the following indicator lights:
  Green - Linked at 10 Gb
  Yellow - Linked at 1 Gb
  Blinking On/Off - Actively transmitting or receiving data
Troubleshooting
The Intel® 10G 2P X520 Adapter has the following indicator lights:
GRN 10G (A or B): Green
  On - Linked to the LAN
  Off - Not linked to the LAN
ACT/LNK (A or B): Green
  Blinking On/Off - Actively transmitting or receiving data
  Off - No link
Quad Port SFP/SFP+ Adapters
The Intel® Ethernet 10G 4P X710 OCP has the following indicator lights:
  Green - Linked at maximum port speed
  Yellow - Linked at less than maximum port speed
  Blinking On/Off - Actively transmitting or receiving data
Troubleshooting
Dual Port Copper Adapters
The Intel® Ethernet 10G 2P X710-T2L-t OCP has the following indicator lights:
Link:
  Green - Linked at 10 Gbps.
  Yellow - Linked at slower than 10 Gbps.
  Off - No link.
Activity:
  Blinking On/Off - Actively transmitting or receiving data.
  Off - No link.
The Intel® Ethernet 10G 2P X710-T2L-t Adapter has the following indicator lights:
Link:
  Green - Linked at 10 Gbps.
  Yellow - Linked at slower than 10 Gbps.
Troubleshooting
The Intel® Ethernet 10G 2P X550-t Adapter has the following indicator lights:
Link:
  Green - Linked at 10 Gbps.
  Yellow - Linked at less than 10 Gbps.
  Off - No link.
Activity:
  Blinking On/Off - Actively transmitting or receiving data.
  Off - No link.
Troubleshooting
The Intel® Ethernet 10G 2P X540-t Adapter has the following indicator lights:
Link:
  Green - Linked at 10 Gb.
  Yellow - Linked at less than 10 Gb.
  Off - No link.
Activity:
  Blinking On/Off - Actively transmitting or receiving data.
  Off - No link.
The Intel® Gigabit 2P I350-t Adapter has the following indicator lights:
ACT/LNK:
  Green on - The adapter is connected to a valid link partner.
Troubleshooting
Quad Port Copper Adapters
The Intel® Ethernet 10G 4P X710-T4L-t OCP has the following indicator lights:
Link:
  Green - Linked at 10 Gbps.
  Yellow - Linked at slower than 10 Gbps.
  Off - No link.
Activity:
  Blinking On/Off - Actively transmitting or receiving data.
  Off - No link.
The Intel® Ethernet 10G 4P X710-T4L-t Adapter has the following indicator lights:
Link:
  Green - Linked at 10 Gbps.
  Yellow - Linked at slower than 10 Gbps.
Troubleshooting
The Intel® Ethernet Converged Network Adapter X710 and Intel® Ethernet Converged Network Adapter X710-T have the following indicator lights:
Labels: ACT, LNK
  Green on - The adapter is connected to a valid link partner.
  Green flashing - Data activity.
  Off - No link.
Troubleshooting
rNDC (Rack Network Daughter Cards)
The Intel® Ethernet 40G 2P XL710 QSFP+ rNDC has the following indicator lights:
LNK (green/yellow):
  Green on - Operating at maximum port speed.
  Off - No link.
ACT (green):
  Green flashing - Data activity.
  Off - No activity.
Troubleshooting
The Intel® Ethernet 1G 4P I350-t OCP, Intel® Ethernet 10G 4P X550/I350 rNDC, Intel® Gigabit 4P X550/I350 rNDC, Intel® Ethernet 10G 4P X550 rNDC, Intel® Ethernet 10G 4P X540/I350 rNDC, Intel® Gigabit 4P X540/I350 rNDC and Intel® Gigabit 4P I350-t rNDC have the following indicator lights:
LNK (green/yellow):
  Green on - Operating at maximum port speed.
  Yellow on - Operating at lower port speed.
  Off - No link.
ACT (green):
  Green flashing - Data activity.
  Off - No activity.
Troubleshooting
The Intel® Ethernet 10G 4P X520/I350 rNDC, Intel® Gigabit 4P X520/I350 rNDC, Intel® Gigabit 4P X710/I350 rNDC, and Intel® Ethernet 10G 4P X710/I350 rNDC have the following indicator lights:
LNK (green/yellow):
  Green on - Operating at maximum port speed.
  Yellow on - Operating at lower port speed.
  Off - No link.
ACT (green):
  Green flashing - Data activity.
  Off - No activity.
Transitioning from i40evf to iavf Overview Intel created the Intel® Adaptive Virtual Function (iavf) driver to provide a consistent, future-proof virtual function (VF) interface for Intel® Ethernet controllers. Previously, when you upgraded your network hardware, you replaced the drivers in each virtual machine (VM) with new drivers that were capable of accessing the new VF device provided by the new hardware.
Transitioning from i40evf to iavf
4. Install the driver.
a. RHEL: rpm -i /root/rpmbuild/RPMS/x86_64/iavf-[version]-1.x86_64.rpm
b. SLES: rpm -i /usr/src/packages/RPMS/x86_64/iavf-[version]-1.x86_64.rpm
5. Load the new driver module.
modprobe iavf
Install Using Linux tarball
1. Copy the iavf driver tar file to your VM image.
2. Untar the file.
tar zxf iavf-<x.x.x>.tar.gz
where <x.x.x> is the version number for the driver tar file.
3. Change to the src directory under the unzipped driver file.
4.
Transitioning from i40evf to iavf
This is not absolutely necessary, but we do recommend that you uninstall the i40evf driver.
Is there any possibility of conflicts or situations where both drivers may exist in a system?
Both drivers can be installed in the system. Installing the iavf driver tells the system that the iavf driver should be used instead of the i40evf driver.
Known Issues NOTE: iSCSI Known Issues are located in their own section of this manual. Fiber optics and auto-negotiation Modules based on 100GBASE-SR4, 40GBASE-SR4, 25GBASE-SR, active optical cable (AOC), and active copper cable (ACC) do not support auto-negotiation per the IEEE specification. To obtain link with these modules, you must turn off autonegotiation on the link partner's switch ports.
Known Issues "Rx/Tx is disabled on this device because the module does not meet thermal requirements." error during POST This error is caused by installing a module in an X710 based device that does not meet thermal requirements for that device. To resolve the issue, please install a module that meets the device's thermal requirements. See the section "SFP+ and QSFP+ Devices" in this document. "Rx/Tx is disabled on this device because an unsupported SFP+ module type was detected.
Known Issues Traffic does not transmit through VXLAN tunnel On a system running Microsoft* Windows Server* 2016, traffic may fail to transmit through a VXLAN tunnel. Enabling Transmit Checksum Offloads for the appropriate traffic type will resolve the issue.
Known Issues Intel drivers must be installed by Dell EMC Update Package before configuring Microsoft Hyper-V features Prior to configuring the Microsoft* Hyper-V features, the Intel® NIC drivers must be installed by the Dell EMC Update Package. If the Microsoft* Hyper-V feature is configured on an unsupported NIC partition on an Intel® X710 device prior to using the Dell EMC Update Package to install Intel® NIC Drivers, the driver installation may not complete.
Known Issues Enabling WOL in Linux Using Ethtool and BootUtil By default, WOL is disabled. In a Linux environment, WOL is enabled using ethtool and, in some instances, using BootUtil is also required. Only port A (port 0) can be enabled through ethtool without using BootUtil. To enable WOL using ethtool on other ports, WOL must be enabled with BootUtil first.
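A hedged sketch, assuming port A is eth0 in Linux and NIC index 2 in BootUtil (names, the index, and the option syntax are examples; confirm against the BootUtil documentation in your package):
# bootutil64e -NIC=2 -WOLENABLE
# ethtool -s eth0 wol g
'wol g' enables wake on MagicPacket*, and running 'ethtool' with just the interface name shows the currently supported and active WOL modes.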
Known Issues Other Intel 10GbE Network Adapter Known Issues The System H/W Inventory (iDRAC) indicates that Auto-negotiation on the Embedded NIC is Disabled, but elsewhere link speed and duplex auto-negotiation is Enabled If an optical module is plugged into the Intel® Ethernet 10G X520 LOM on a PowerEdge-C6320, the System H/W Inventory (iDRAC) will indicate that Auto-negotiation is Disabled. However, Windows Device Manager and HII indicate that link speed and duplex Auto-negotiation is Enabled.
Known Issues When trying to identify the adapter, the Activity LED blinks and the Link LED is solid If you use the Identify Adapter feature with the following adapters, the Activity LED blinks instead of the Link LED. The Link LED may display a solid green light for 10G ports even if a network link is not present.
Known Issues System does not boot Your system may run out of I/O resources and fail to boot if you install more than four quad port server adapters. Moving the adapters to different slots or rebalancing resources in the system BIOS may resolve the issue.
Regulatory Compliance Statements FCC Class A Products 40 Gigabit Ethernet Products l l Intel® Ethernet 40G 2P XL710 QSFP+ rNDC Intel® Ethernet Converged Network Adapter XL710-Q2 25 Gigabit Ethernet Products l l l Intel® Ethernet 25G 2P E810-XXV OCP Intel® Ethernet 25G 2P XXV710 Mezz Intel® Ethernet 25G 2P XXV710 Adapter 10 Gigabit Ethernet Products l l l l l l l l l l l l l l l l l l l l l Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz Intel® Ethernet 10G 2P X540-t Adapter Intel® Ethernet 10G 2P X550
Regulatory Compliance Statements FCC Class B Products 25 Gigabit Ethernet Products l Intel® Ethernet 25G 2P E810-XXV Adapter 10 Gigabit Ethernet Products l l l l Intel® Ethernet 10G 2P X710-T2L-t Adapter Intel® Ethernet 10G 4P X710-T4L-t Adapter Intel® Ethernet 10G 2P X520 Adapter Intel® Ethernet 10G X520 LOM Gigabit Ethernet Products l l Intel® Gigabit 2P I350-t Adapter Intel® Gigabit 4P I350-t Adapter Safety Compliance The following safety standards apply to all products listed above: l UL 60950-1,
Regulatory Compliance Statements l l l l l l l l l EN55022: 2010 – Radiated & Conducted Emissions (European Union) EN55024: 2010 – Immunity (European Union) EN55032: 2015 Class B Radiated and Conducted Emissions requirements (European Union) EMC Directive 2004/108/EC (European Union) VCCI (Class B)– Radiated & Conducted Emissions (Japan) (excluding optics) CNS13438 (Class B)-2006 – Radiated & Conducted Emissions (Taiwan) (excluding optics) AS/NZS CISPR 22:2009 + A1:2010 Class B and CISPR 32:2015 for Radi
Regulatory Compliance Statements
VCCI Class A Statement
BSMI Class A Statement
KCC Notice Class A (Republic of Korea Only)
BSMI Class A Notice (Taiwan)
Regulatory Compliance Statements FCC Class B User Information This equipment has been tested and found to comply with the limits for a Class B digital device pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications.
Regulatory Compliance Statements
KCC Notice Class B (Republic of Korea Only)
EU WEEE Logo
Manufacturer Declaration
European Community Manufacturer Declaration
Intel Corporation declares that the equipment described in this document is in conformance with the requirements of the European Council Directives listed below:
l Low Voltage Directive 2006/95/EC
l EMC Directive 2004/108/EC
l RoHS Directive 2011/65/EU
These products follow the provisions of the European Directive 1999/5/EC.
Regulatory Compliance Statements Dette produktet er i henhold til bestemmelsene i det europeiske direktivet 1999/5/EC. Este produto cumpre com as normas da Diretiva Européia 1999/5/EC. Este producto cumple con las normas del Directivo Europeo 1999/5/EC. Denna produkt har tillverkats i enlighet med EG-direktiv 1999/5/EC. This declaration is based upon compliance of the Class A products listed above to the following standards: EN 55022:2010 (CISPR 22 Class A) RF Emissions Control.
Regulatory Compliance Statements China RoHS Declaration Class 1 Laser Products Server adapters listed above may contain laser devices for communication use. These devices are compliant with the requirements for Class 1 Laser Products and are safe in the intended use. In normal operation the output of these laser devices does not exceed the exposure limit of the eye and cannot cause harm.
Customer Support Web and Internet Sites http://support.dell.com/ Customer Support Technicians If the troubleshooting procedures in this document do not resolve the problem, please contact Dell, Inc. for technical assistance (refer to the "Getting Help" section in your system documentation). Before you call... You need to be at your computer with your software running and the product documentation at hand.
Adapter Specifications
Intel® 40 Gigabit Network Adapter Specifications
Feature | Intel® Ethernet Converged Network Adapter XL710-Q2
Bus Connector | PCI Express 3.0
Bus Speed | x8
Transmission Mode/Connector | QSFP+
Cabling | 40GBase-SR4, Twinax DAC (7m max)
Power Requirements | 6.5 W Maximum @ +12 V
Dimensions (excluding bracket) | 5.21 x 2.71 in; 13.3 x 6.9 cm
Operating Temperature | 32 - 131 deg. F (0 - 55 deg.
Adapter Specifications
Bus Speed | x8
Transmission Mode/Connector | QSFP+
Cabling | 40GBase-SR4, Twinax DAC (7m max)
Power Requirements | 6.2 W Maximum @ +12 V
Dimensions (excluding bracket) | 3.66 x 6.081 in; 9.3 x 15.5 cm
Operating Temperature | 32 - 140 deg. F (0 - 60 deg. C)
MTBF at 55°C | 112 years
Available Speeds | 40 Gbps
Duplex Modes | Full only
Indicator Lights | Two per port: Link and Activity
Standards Conformance | IEEE 802.3ba, SFF-8436, PCI Express 3.
Adapter Specifications
3.63 W maximum @ +3.3 V
Dimensions (excluding bracket) | 2.54 in. x 6.6 in.; 6.44 cm x 16.76 cm
Operating Temperature | 32 - 140°F (0 - 60°C)
MTBF at 55°C | 271 years
Available Speeds | 25 Gbps/10 Gbps/1 Gbps
Duplex Modes | Full only
Indicator Lights | Two per port: Link and Activity
Standards Conformance | PCI Express 4.0, SFF-8419, IEEE 802.
Adapter Specifications SFF-8431 PCI Express 3.
Adapter Specifications l CISPR 22 - Radiated & Conducted Emissions (International) l l EN55032-2015- Radiated & Conducted Emissions (European Union) EN55024 - 2010- (Immunity) (European Union) l REACH, WEEE, RoHS Directives (European Union) l VCCI - Radiated & Conducted Emissions (Japan) l CNS13438 - Radiated & Conducted Emissions (Taiwan) l l AS/NZS CISPR - Radiated & Conducted Emissions (Australia/New Zealand) KN22 -Radiated & Conducted Emissions (Korea) l RoHS (China) Feature Intel® Eth
Adapter Specifications Intel® 10 Gigabit Network Adapter Specifications Feature Intel® Ethernet 10G 2P X710 OCP Intel® Ethernet 10G 2P X710T2L-t OCP Intel® Ethernet 10G 4P X710T4L-t OCP Bus Connector PCI Express 3.0 OCP NIC 3.0 OCP NIC 3.0 Bus Speed x8 x8 PCI Express v3.0 x8 PCI Express v3.
Adapter Specifications Feature Intel® Ethernet 10G 2P X710-T2L-t Adapter Intel® Ethernet 10G 4P X710-T4L-t Adapter Bus Connector PCI Express 3.0 PCI Express 3.0 Bus Speed x8 x8 Transmission Mode/Connector RJ45 BASE-T Connector RJ45 BASE-T Connector Cabling 10GBASE-T: CAT6A (100m max), CAT6 (55m max) 1000BASE-T: CAT6A, CAT6, CAT5e (100m max) 10GBASE-T: CAT6A (100m max), CAT6 (55m max) 1000BASE-T: CAT6A, CAT6, CAT5e (100m max) Power Requirements 9.6 W Maximum @ +12 V 14.
Adapter Specifications Feature Intel® Ethernet Converged Network Adapter X710-T Intel® Ethernet Converged Network Adapter X710 Ethernet Server Adapter X710-DA2 for OCP 10GBase-SR/LR 10GBASE-SR Power Requirements 8.53 W (idle) @ 12V Main 6.7 Watts (maximum) @ 12 V 3.08 Watts (max) @ 5V Main Dimensions (excluding bracket) 6.578 x 4.372 in 16.708 x 11.107 cm 6.578 x 4.372 in 16.708 x 11.107 cm 2.67 x 4.59 in 6.78 x 11.658 cm Operating Temperature 32 - 131 deg. F (0 - 55 deg. C) 41 - 131 deg.
Adapter Specifications Intel® Ethernet 10G 2P X540-t Adapter Intel® Ethernet 10G 2P X520 Adapter Intel® Ethernet 10G 2P X550-t Adapter Dimensions (excluding bracket) 5.7 x 2.7 in 14.5 x 6.9 cm 5.7 x 2.7 in 14.5 x 6.9 cm 5.13 x 2.7 in 13.0 x 6.9 cm Operating Temperature 32 - 131 deg. F (0 - 55 deg. C) 32 - 131 deg. F (0 - 55 deg. C) 32 - 131 deg. F (0 - 55 deg. C) MTBF at 55°c 108 years 83.
Adapter Specifications Operating Temperature 32 - 131 deg. F (0 - 55 deg. C) MTBF at 55°c 147 years Available Speeds 10 Gbps/1 Gbps Duplex Modes Full only Standards Conformance IEEE 802.1p IEEE 802.1Q IEEE 802.3ac IEEE 802.3ad IEEE 802.3ae IEEE 802.3x ACPI v1.0 PCI Express 2.0 Regulatory and Safety Safety Compliance l UL 60950 Third Edition- CAN/CSA-C22.2 No.
Adapter Specifications Duplex Modes Full only Full only Standards Conformance IEEE 802.1p IEEE 802.1Q IEEE 802.3ac IEEE 802.3ad IEEE 802.3ae IEEE 802.3x ACPI v1.0 PCI Express 3.0 IEEE 802.1p IEEE 802.1Q IEEE 802.3ac IEEE 802.3ad IEEE 802.3ae IEEE 802.3x ACPI v1.0 PCI Express 3.0 Regulatory and Safety Safety Compliance l UL 60950 Third Edition- CAN/CSA-C22.2 No.
Adapter Specifications PCI Express 2.0a PCI Express 2.0a PCI Express 2.0a Regulatory and Safety Safety Compliance l UL 60950 Third Edition- CAN/CSA-C22.2 No.
Adapter Specifications l FCC Part 15 - Radiated & Conducted Emissions (USA) l ICES-003 - Radiated & Conducted Emissions (Canada) l CISPR 22 - Radiated & Conducted Emissions (International) l EN55022-1998 - Radiated & Conducted Emissions (European Union) l EN55024 - 1998 - (Immunity) (European Union) l CE - EMC Directive (89/336/EEC) (European Union) l VCCI - Radiated & Conducted Emissions (Japan) l CNS13438 - Radiated & Conducted Emissions (Taiwan) l AS/NZS3548 - Radiated & Conducted Emiss
Adapter Specifications l CE - EMC Directive (89/336/EEC) (European Union) l VCCI - Radiated & Conducted Emissions (Japan) l CNS13438 - Radiated & Conducted Emissions (Taiwan) l AS/NZS3548 - Radiated & Conducted Emissions (Australia/New Zealand) MIC notice 1997-41, EMI and MIC notice 1997-42 - EMS (Korea) l Feature Intel® Gigabit 2P I350-t Adapter and Intel® Gigabit 4P I350-t Adapter Bus Connector PCI Express 2.
Adapter Specifications
l AS/NZS3548 - Radiated & Conducted Emissions (Australia/New Zealand)
l MIC notice 1997-41, EMI and MIC notice 1997-42 - EMS (Korea)
Intel® Gigabit Network Mezzanine Card Specifications
Feature | Intel® Gigabit 4P I350-t Mezz
Bus Connector | PCI Express 2.0
Bus Speed | x4
Power Requirements | 3.425 Watts (maximum) @ 3.3 V
Dimensions | 3.65 x 3.3 in.
Operating Temperature | 32 - 131 deg. F (0 - 55 deg.
Adapter Specifications Bus Speed x8 Transmission Mode/Con- Twisted copper/RJ-45 nector x8 x8 Twisted copper/RJ-45 Twisted copper/RJ-45 Cabling Cat-5e Cat-5e Cat-5e Power Requirements 10.7W Maximum @ +12 V 15.39 W (max) @ +12 V 5.5W (max)@ +3.3 V Dimensions (excluding bracket) 4.331 x 3.661 in 11.007 x 9.298 cm 5.86 x 4.35 in 14.882 x 11.04 cm 5.33 x 2.71 in 13.54 x 6.59 cm Operating Temperature 32 - 131 deg. F (0 - 55 deg. C) 32 - 60 deg. F (0- 16 deg C.) 32 - 60 deg. F (0 - 16 deg.
Standards l l l l l l l l l l l l l l IEEE 802.1p: Priority Queuing (traffic prioritizing) and Quality of Service levels IEEE 802.1Q: Virtual LAN identification IEEE 802.3ab: Gigabit Ethernet over copper IEEE 802.3ac: Tagging IEEE 802.3ad: SLA (FEC/GEC/Link Aggregation - static mode) IEEE 802.3ad: Dynamic mode IEEE 802.3ae: 10 Gbps Ethernet IEEE 802.3an: 10GBase-T 10 Gbps Ethernet over unshielded twisted pair IEEE 802.3ap: Backplane Ethernet IEEE 802.3u: Fast Ethernet IEEE 802.3x: Flow Control IEEE 802.
X-UEFI Attributes This section contains information about X-UEFI attributes and their expected values. List of Multi-controller Devices The adapters listed below contain more than one controller. On these adapters, configuring controller based settings will not affect all ports. Only ports bound to the same controller will be affected.
Display Name X-UEFI Name Supported Adapters I350 X520 X540 User Configurable X550 X710 XL710 XXV710 User Configurable Values Values that can be displayed Dependencies for Values E810 I/O Identity Optimization (iDRAC 8/9) Information Supported Partition State Interpretation PartitionStateInterpretation X RDMA Support RDMASupport LLDP Agent INTEL_LLDPAgent SR-IOV Support SRIOVSupport X X X VF Allocation Basis VFAllocBasis X X VF Allocation Multiple VFAllocMult X X NParEP Mod
Display Name X-UEFI Name Supported Adapters I350 X520 X540 X550 X710 XL710 XXV710 E810 User Configurable User Configurable Values Values that can be displayed Dependencies for Values I/O Identity Optimization (iDRAC 8/9) Information CHAP Secret FirstTgtChapPwd X X X X X X X X Yes string string Yes Specifies the Challenge-Handshake Authentication Protocol secret (CHAP password) of the first iSCSI storage target. The string value is limited to alphanumeric characters, '.
Display Name X-UEFI Name Supported Adapters I350 X520 X540 X550 X710 XL710 XXV710 E810 User Configurable User Configurable Values Values that can be displayed Dependencies for Values I/O Identity Optimization (iDRAC 8/9) Information TcpIpViaDHCP Disabled Yes Specifies the IPv4 Subnet Mask of the iSCSI initiator. Subnet Mask IscsiInitiatorSubnet X X X X X X X X Yes X.X.X.X X.X.X.
Display Name X-UEFI Name Supported Adapters I350 X520 X540 X550 X710 XL710 XXV710 E810 User Configurable User Configurable Values Values that can be displayed Dependencies for Values I/O Identity Optimization (iDRAC 8/9) Information Legacy Virtual LAN ID VLanId X X X X X X X X Yes 0-4094 0-4094 No Specifies the ID (tag) to be used for PXE VLAN Mode. The VLAN ID must be in the range from 0 to 4094. PXE VLAN is disabled if value is set to 0.
Display Name X-UEFI Name Supported Adapters I350 X520 X540 User Configurable X550 X710 XL710 XXV710 User Configurable Values Values that can be displayed Dependencies for Values E810 I/O Identity Optimization (iDRAC 8/9) Information PCI Device ID PCIDeviceID[Partition:n] X X No XXXX No Reports the PCI Device ID of the partition. Port Number PortNumber[Partition:n] X X No 1-4 No Reports the port to which the partition belongs, where n is the number of the partitions.
Legal Disclaimers Software License Agreement INTEL SOFTWARE LICENSE AGREEMENT (Final, License) IMPORTANT - READ BEFORE COPYING, INSTALLING OR USING. Do not use or load this software and any associated materials (collectively, the "Software") until you have carefully read the following terms and conditions. By loading or using the Software, you agree to the terms of this Agreement. If you do not wish to so agree, do not install or use the Software.
Legal Disclaimers LIMITATION OF LIABILITY. IN NO EVENT SHALL INTEL OR ITS SUPPLIERS BE LIABLE FOR ANY DAMAGES WHATSOEVER (INCLUDING, WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, OR LOST INFORMATION) ARISING OUT OF THE USE OF OR INABILITY TO USE THE SOFTWARE, EVEN IF INTEL HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. SOME JURISDICTIONS PROHIBIT EXCLUSION OR LIMITATION OF LIABILITY FOR IMPLIED WARRANTIES OR CONSEQUENTIAL OR INCIDENTAL DAMAGES, SO THE ABOVE LIMITATION MAY NOT APPLY TO YOU.
Legal Disclaimers * Half-to-Single and Single-to-Half conversions are covered by the following license: Copyright (c) 2002, Industrial Light & Magic, a division of Lucas Digital Ltd. LLC. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: l Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Legal Disclaimers All statements or claims regarding the properties, capabilities, speeds or qualifications of the part referenced in this document are made by the supplier and not by Dell EMC. Dell EMC specifically disclaims knowledge of the accuracy, completeness or substantiation for any such statements. All questions or comments relating to such statements or claims should be directed to the supplier.