Intel® Ethernet Adapters and Devices User Guide
Overview

Welcome to the User Guide for Intel® Ethernet Adapters and devices. This guide covers hardware and software installation, setup procedures, and troubleshooting tips for Intel network adapters, connections, and other devices.

Installing the Network Adapter

If you are installing a network adapter, follow this procedure from step 1. If you are upgrading the driver software, start with step 4.

NOTE: If you update the firmware, you must update the driver software to the same family version.
Supported 10 Gigabit Network Adapters
l Intel® Ethernet 10G 2P X520 Adapter
l Intel® Ethernet 10G X520 LOM
l Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
l Intel® Ethernet 10G 2P X540-t Adapter
l Intel® Ethernet 10G 2P X550-t Adapter
l Intel® Ethernet 10G 4P X550 rNDC
l Intel® Ethernet 10G 4P X550/I350 rNDC
l Intel® Ethernet 10G 4P X540/I350 rNDC
l Intel® Ethernet 10G 4P X520/I350 rNDC
l Intel® Ethernet 10G 2P X520-k bNDC
l Intel® Ethernet 10G 4P X710-k bNDC
l Intel® Ethernet 10G
l Novell* SUSE* Linux Enterprise Server (SLES) 15 SP1

Hardware Compatibility
Before installing the adapter, check your system for the following:
l The latest BIOS for your system
l One open PCI Express slot (see the specifications of your card for slot compatibility)

Cabling Requirements
Please see the section Connecting Network Cables.

Installation Overview
Installing the Adapter
1. Turn off the computer and unplug the power cord.
l Optimized for quick response and low latency – useful for video, audio, and High Performance Computing Cluster (HPCC) servers
l Optimized for throughput – useful for data backup/retrieval and file servers
l Optimized for CPU utilization – useful for application, web, mail, and database servers

NOTES:
l Linux users, see the Linux section of this guide and the README file in the Linux driver package for Linux-specific performance enhancement details.
CPU Affinity When passing traffic on multiple network ports using an I/O application that runs on most or all of the cores in your system, consider setting the CPU Affinity for that application to fewer cores. This should reduce CPU utilization and in some cases may increase throughput for the device. The cores selected for CPU Affinity must be local to the affected network device's Processor Node/Group. You can use the PowerShell command Get-NetAdapterRSS to list the cores that are local to a device.
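On Linux, a comparable approach is to discover the device's NUMA node through sysfs and pin the application with numactl. The following is a minimal sketch; eth0, node 0, and the application name are illustrative placeholders, not values from this guide:

cat /sys/class/net/eth0/device/numa_node
numactl --cpunodebind=0 --membind=0 ./my_io_app

The first command prints the NUMA node local to the device; the second runs the I/O application with both its CPUs and its memory allocations bound to that node.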
NOTE: When a device is in NPar mode, you can only configure DCB through the System Setup/BIOS.

iSCSI Over DCB
Intel® Ethernet adapters support iSCSI software initiators that are native to the underlying operating system. Intel® 82599 and X540-based adapters support iSCSI within a Data Center Bridging cloud. Used in conjunction with switches and targets that support the iSCSI/DCB application TLV, this solution can provide guaranteed minimum bandwidth for iSCSI traffic between the host and target.
l Intel® Ethernet Converged Network Adapter X710
l Intel® Ethernet 10G 2P X710 OCP
l Intel® Ethernet 10G 4P X710 OCP
l Intel® Ethernet 10G 2P X710-k bNDC
l Intel® Ethernet 10G 4P X710-k bNDC
l Intel® Ethernet 10G X710-k bNDC
l Intel® Ethernet 10G 4P X710/l350 rNDC
l Intel® Ethernet 10G 4P X710 SFP+ rNDC
l Intel® Ethernet 10G X710 rNDC
l Intel® Converged Network Adapter X710-T
l Intel® Ethernet Server Adapter X710-DA2 for OCP
l Intel® Ethernet 25G 2P XXV710 Adapter
l Intel® Ethernet 25G 2P XXV710 Mezz
Virtual Machine Queue Offloading Enabling VMQ offloading increases receive and transmit performance, as the adapter hardware is able to perform these tasks faster than the operating system. Offloading also frees up CPU resources. Filtering is based on MAC and/or VLAN filters. For devices that support it, VMQ offloading is enabled in the host partition on the adapter's Device Manager property sheet, under Virtualization on the Advanced Tab.
l Physical Function (PF) is a full featured PCI Express function that can be discovered, managed, and configured like any other PCI Express device.
l Virtual Function (VF) is similar to a PF but cannot be configured and only has the ability to transfer data in and out. The VF is assigned to a Virtual Machine.

NOTES:
l SR-IOV must be enabled in the BIOS.
l Intel® Ethernet 10G X710-k bNDC
l Intel® Ethernet Converged Network Adapter X710
l Intel® Converged Network Adapter X710-T
l Intel® Ethernet Server Adapter X710-DA2 for OCP

NOTES:
l Adapters support NPar in NIC (LAN) mode only.
l The following are supported on the first partition of each port only:
l PXE Boot
l iSCSI Boot
l Speed and Duplex settings
l Flow Control
l Power Management settings
l SR-IOV
l NVGRE processing
l Some adapters only support Wake on LAN on the first partition of the first port.
Dell EMC Platform  OCP Mezz  Rack NDC  PCI Express Slots 1-13
R340 no no
R430 yes yes
R440 yes yes yes
R530 yes yes yes no
R530XD yes yes no
R540 yes yes yes yes yes no no
R630 yes yes yes yes
R640 yes yes yes yes
R730 yes yes yes yes yes yes yes yes
R730XD yes yes yes yes yes yes yes
R740 yes yes yes yes yes yes yes yes yes
R740XD2 no yes yes yes yes yes no
R830 yes yes yes yes yes yes yes
R840 yes yes yes yes yes yes yes yes ye
Dell EMC Platform  Blade NDC  Mezzanine Slot B  Mezzanine Slot C
M640 for VRTX yes
M830 yes
M830 for VRTX yes
MX740c yes yes
MX840c yes yes
Supported platforms or slots are indicated by "yes." Unsupported are indicated by "no." Not applicable are indicated by blank cells.

Configuring NPar Mode
Configuring NPar from the Boot Manager
When you boot the system, press the F2 key to enter the System Setup menu.
The Maximum Bandwidth percentage represents the maximum transmit bandwidth allocated to the partition as a percentage of the full physical port link speed. The accepted range of values is 0-100. The value can be used as a limiter if you want to ensure that a particular partition cannot consume 100% of a port's bandwidth, even when that bandwidth is available. The sum of all the Maximum Bandwidth values is not restricted, because no more than 100% of a port's bandwidth can ever be used.
Power Management Settings Power Management settings are allowed only on the first partition of each physical port. If you select the Power Management tab in the Device Manager property sheets while any partition other than the first partition is selected, you will be presented with text in the Power Management dialog stating that Power Management settings cannot be configured on the current connection. Clicking the Properties button will launch the property sheet for the root partition on the adapter.
To change the value for Min% or Max%, select a partition in the displayed list, then use the up or down arrows under "Selected Partition Bandwidth Percentages".

NOTE: If the sum of the minimum bandwidth percentages does not equal 100, then settings will be automatically adjusted so that the sum equals 100.
To set this using Windows PowerShell, find the first partition using the Get-IntelNetAdapter cmdlet. Once you know the port with partition number 0, use that port name with the Get-IntelNetAdapterSetting and Set-IntelNetAdapterSetting cmdlets. Configuring NPAR in Linux On Intel® 710 Series based adapters that support it, you can set up multiple functions on each physical port. You configure these functions through the System Setup/BIOS.
l commit is write only. Attempting to read it will result in an error.
l Writing to commit is only supported on the first function of a given port. Writing to a subsequent function will result in an error.
l Oversubscribing the minimum bandwidth is not supported. The underlying device's NVM will set the minimum bandwidth to supported values in an indeterminate manner. Remove all of the directories under config and reload them to see what the actual values are, as sketched below.
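For example, to read back the values the NVM actually accepted after a commit (a minimal sketch, assuming the configfs directory is /config and eth6 is the first function on the port, as in the example later in this guide):

# rmdir /config/eth6
# mkdir /config/eth6
# cat /config/eth6/min_bw
# cat /config/eth6/max_bw

Removing and re-creating the partition directory forces the driver to repopulate min_bw and max_bw with the values currently stored in the NVM.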
Installing the Adapter Select the Correct Slot One open PCI-Express slot, x4, x8, or x16, depending on your adapter. NOTE: Some systems have physical x8 PCI Express slots that actually only support lower speeds. Please check your system manual to identify the slot. Insert the Adapter into the Computer 1. If your computer supports PCI Hot Plug, see your computer documentation for special installation instructions. 2. Turn off and unplug your computer. Then remove the cover.
The following table shows the maximum lengths for each cable type at a given transmission speed.

Speed     Category 5  Category 6  Category 6a  Category 7
1 Gbps    100m        100m        100m         100m
10 Gbps   NA          55m         100m         100m
25 Gbps   NA          NA          NA           50m
40 Gbps   NA          NA          NA           50m

CAUTION: If using less than 4-pair cabling, you must manually configure the speed and duplex setting of the adapter and the link partner. In addition, with 2- and 3-pair cabling the adapter can only achieve speeds of up to 100Mbps.
l Connector type: LC
l Cable type: Single-mode fiber with 9.0µm core diameter
l Maximum cable length: 10 kilometers

Most Intel® Ethernet Server Adapters support the following modules:

NOTE: Intel® 710 Series based devices do not support third party modules.
THIRD PARTY OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE LISTED ONLY FOR THE PURPOSE OF HIGHLIGHTING THIRD PARTY SPECIFICATIONS AND POTENTIAL COMPATIBILITY, AND ARE NOT RECOMMENDATIONS OR ENDORSEMENT OR SPONSORSHIP OF ANY THIRD PARTY'S PRODUCT BY INTEL. INTEL IS NOT ENDORSING OR PROMOTING PRODUCTS MADE BY ANY THIRD PARTY AND THE THIRD PARTY REFERENCE IS PROVIDED ONLY TO SHARE INFORMATION REGARDING CERTAIN OPTIC MODULES AND CABLES WITH THE ABOVE SPECIFICATIONS.
4. Lower the locking lever until it clicks into place over the card or cards.
5. Replace the blade server cover and put the blade back into the server chassis.
6. Turn the power on.

Install a Network Daughter Card in a Server
See your server documentation for detailed instructions on how to install a bNDC or rNDC.
1. Turn off the server and then remove its cover.
CAUTION: Failure to turn off the server could endanger you and may damage the card or server.
Microsoft* Windows* Driver and Software Installation and Configuration

Installing Windows Drivers and Software

NOTE: To successfully install or uninstall the drivers or software, you must have administrative privileges on the computer on which you are performing the installation.

Install the Drivers

NOTES:
l This will update the drivers for all supported Intel® network adapters in your system.
/i Do a fresh install of the drivers contained in the Update Package.
NOTE: Requires /s option
/e=<path> Extract the entire Update Package to the folder defined in <path>.
NOTE: Requires /s option
/drivers=<path> Extract only the driver components of the Update Package to the folder defined in <path>.
NOTE: Requires /s option
/driveronly Install or update only the driver components of the Update Package.
/f Force update to continue, even on "soft" qualification errors.

Network_Driver_XXXXX_WN64_XX.X.X_A00.exe /s /f

Downgrading Drivers
You can use the /s and /f options to downgrade your drivers. For example, if you have the 17.0.0 drivers loaded and you want to downgrade to 16.5.0, type the following:

Network_Driver_XXXXX_WN64_16.5.0_A00.exe /s /f
Examples

Save Example
To save the adapter settings to a file on a removable media device, do the following.
1. Open a Windows PowerShell Prompt.
2. Navigate to the directory where SaveRestore.ps1 is located (generally c:\Program Files\Intel\Wired Networking\PROSET).
3. Type the following: SaveRestore.ps1 –Action Save –ConfigPath e:\settings.txt

Restore Example
To restore the adapter settings from a file on removable media, do the following:
1. Open a Windows PowerShell Prompt.
2. Navigate to the directory where SaveRestore.ps1 is located (generally c:\Program Files\Intel\Wired Networking\PROSET).
3. Type the following: SaveRestore.ps1 –Action Restore –ConfigPath e:\settings.txt
Configuring with Intel® PROSet for Windows PowerShell* software
The Intel® PROSet for Windows PowerShell* software contains several cmdlets that allow you to configure and manage the Intel® Ethernet Adapters and devices present in your system. For a complete list of these cmdlets and their descriptions, type get-help IntelNetCmdlets at the Windows PowerShell prompt. For detailed usage information for each cmdlet, type get-help <cmdlet_name> at the Windows PowerShell prompt.
For iSCSI Crash Dump configuration, use the Intel® PROSet for Windows PowerShell* software and refer to the aboutIntelNetCmdlets.help.txt help file.
NOTE: Support for the Intel PROSet command line utilities (prosetcl.exe and crashdmp.exe) has been removed, and is no longer installed. This functionality has been replaced by the Intel® PROSet for Windows PowerShell* software. Please transition all of your scripts and processes to use the Intel® PROSet for Windows PowerShell* software.
To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet. For example:

Set-IntelNetAdapterSetting -Name "<adapter_name>" -DisplayName "DMA Coalescing" –DisplayValue "Enabled"

Forward Error Correction (FEC) Mode
Allows you to set the Forward Error Correction (FEC) mode. FEC improves link stability but increases latency. Many high-quality optics, direct attach cables, and backplane channels provide a stable link without FEC.
l Force Slave Mode
l Auto Detect

NOTE: Some multi-port devices may be forced to Master Mode. If the adapter is connected to such a device and is configured to "Force Master Mode," link is not established.

On the device's Device Manager property sheet, this setting is found on the Advanced tab. To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet.
To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet. For example:

Set-IntelNetAdapterSetting -Name "<adapter_name>" -DisplayName "IPv4 Checksum Offload" –DisplayValue "Tx Enabled"

Jumbo Frames
Enables or disables Jumbo Packet capability. The standard Ethernet frame size is about 1514 bytes, while Jumbo Packets are larger than this. Jumbo Packets can increase throughput and decrease CPU utilization. However, additional latency may be introduced.
Range: 0000 0000 0001 - FFFF FFFF FFFD
Exceptions:
l Do not use a multicast address (Least Significant Bit of the high byte = 1). For example, in the address 0Y123456789A, "Y" cannot be an odd number. (Y must be 0, 2, 4, 6, 8, A, C, or E.)
l Do not use all zeros or all Fs.
If you do not enter an address, the address is the original network address of the adapter.
To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet. For example:

Set-IntelNetAdapterSetting -Name "<adapter_name>" -DisplayName "Low Latency Interrupts" –DisplayValue "Port-Based"

Network Virtualization using Generic Routing Encapsulation (NVGRE)
Network Virtualization using Generic Routing Encapsulation (NVGRE) improves the efficiency of routing network traffic within a virtualized or cloud environment.
Reduce Power if Cable Disconnected & Reduce Link Speed During Standby Enables the adapter to reduce power consumption when the LAN cable is disconnected from the adapter and there is no link. When the adapter regains a valid link, adapter power usage returns to its normal state (full power usage). The Hardware Default option is available on some adapters. If this option is selected, the feature is disabled or enabled based on the system hardware.
Device  Adapter Port(s) supporting WoL
Intel® Ethernet Converged Network Adapter X710-2
Intel® Ethernet Converged Network Adapter X710
Intel® Ethernet 25G 2P XXV710 Mezz  Not supported
Intel® Ethernet 25G 2P XXV710 Adapter
Intel® Ethernet Converged Network Adapter X710-T  Not supported
Intel® Ethernet Converged Network Adapter XL710-Q2
Intel® Ethernet 10G 2P X710-T2L-t Adapter  Not Supported
Intel® Ethernet 10G 2P X520 Adapter  Not Supported
Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
Intel® Ether
Advanced Configuration and Power Interface (ACPI) ACPI supports a variety of power states. Each state represents a different level of power, from fully powered up to completely powered down, with partial levels of power in each intermediate state. ACPI Power States Power State Description S0 On and fully operational S1 System is in low-power mode (sleep mode). The CPU clock is stopped, but RAM is powered on and being refreshed. S2 Similar to S1, but power is removed from the CPU.
Wake-Up Address Patterns
Remote wake-up can be initiated by a variety of user-selectable packet types and is not limited to the Magic Packet format. For more information about supported packet types, see the operating system settings section.
The wake-up capability of Intel adapters is based on patterns sent by the OS. You can configure the driver with the following settings using Intel® PROSet for Windows Device Manager. For Linux*, WoL is provided through the ethtool* utility.
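For example, on Linux you can check and set the wake-up options with ethtool. This is a minimal sketch; eth0 is an illustrative interface name, and "g" selects Magic Packet wake-up:

ethtool eth0 | grep Wake-on
ethtool -s eth0 wol g

The first command shows the supported and currently enabled wake-on modes; the second enables wake on Magic Packet for the interface.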
Physical Installation Issues Slot Some motherboards will only support remote wake-up (or remote wake-up from S5 state) in a particular slot. See the documentation that came with your system for details on remote wake-up support. Power Newer Intel PRO adapters are 3.3 volt and some are 12 volt. They are keyed to fit either type of slot. The 3.3 volt standby supply must be capable of supplying at least 0.2 amps for each Intel PRO adapter installed.
PTP Hardware Timestamp
Allows applications that use PTPv2 (Precision Time Protocol) to use hardware generated timestamps to synchronize clocks throughout your network.

Default: Disabled
Range: Enabled, Disabled

On the device's Device Manager property sheet, this setting is found on the Advanced tab. To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet.
LAN RSS
LAN RSS applies to a particular TCP connection.
NOTE: This setting has no effect if your system has only one processing unit.

LAN RSS Configuration
RSS is enabled on the Advanced tab of the adapter property sheet. If your adapter does not support RSS, or if the SNP or SP2 is not installed, the RSS setting will not be displayed. If RSS is supported in your system environment, the following will be displayed:
l Port NUMA Node. This is the NUMA node number of a device.
l Starting RSS CPU.
In the default mode, an Intel network adapter using copper connections will attempt to auto-negotiate with its link partner to determine the best setting. If the adapter cannot establish link with the link partner using auto-negotiation, you may need to manually configure the adapter and link partner to the identical setting to establish link and pass packets.
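If you do need to force a specific speed and duplex on Linux, ethtool can set them. This is a sketch; eth0 and the values are illustrative, and the link partner must be configured to the identical setting:

ethtool -s eth0 speed 1000 duplex full autoneg off

To return the adapter to the default behavior, re-enable auto-negotiation:

ethtool -s eth0 autoneg on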
Software Timestamp
Allows applications that use PTPv2 (Precision Time Protocol) to use software generated timestamps to synchronize clocks throughout your network.

Default: Disabled
Range: Disabled, RxAll, TxAll, RxAll & TxAll, TaggedTx, RxAll & TaggedTx

On the device's Device Manager property sheet, this setting is found on the Advanced tab. To change this setting in Windows PowerShell, use the Set-IntelNetAdapterSetting cmdlet.
Enabling SR-IOV on the Server
You must enable SR-IOV in the system's BIOS or HII.

BIOS
1. Enter the system BIOS at POST.
2. Enable Global SR-IOV.
3. Enable Virtualization Technology.
4. Save the changes and exit.

HII
1. During POST, press F2 to enter Device Settings.
2. Navigate to NIC -> Device Level Settings.
3. Set Virtualization Mode to either "SR-IOV" or "NPAR + SR-IOV".
4. Save the changes and exit.
NDC, LOM, or Adapter  40GbE  25GbE  10GbE  1GbE
Intel® Ethernet 10G 4P X540/I350 rNDC  Yes No
Intel® Ethernet 10G 4P X520/I350 rNDC  Yes No
Intel® Ethernet 10G 2P X520-k bNDC  Yes
Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz  Yes
Intel® Ethernet 1G 4P I350-t OCP  Yes
Intel® Gigabit 4P I350-t rNDC  Yes
Intel® Gigabit 4P I350 bNDC  Yes
Intel® Gigabit 4P I350-t Mezz  Yes
Intel® Gigabit 2P I350-t Adapter  Yes
Intel® Gigabit 4P I350-t Adapter  Yes
PowerEdge C4130 LOMs  No
PowerEdge C6320 LOMs  Yes
Dell EMC Platform OCP Mezz Rack NDC PCI Express Slot 1 2 3 4 5 6 7 8 9 10 11 12 13 CPU 2x CPU yes yes yes yes R530 yes yes yes no no R540 yes yes yes yes yes no R530XD yes yes no R620 yes yes yes R630 yes yes yes R640 yes yes yes yes R720XD yes yes yes yes yes yes yes R720 yes yes yes yes yes yes yes yes R730 yes yes yes yes yes yes yes R730XD yes yes yes yes yes yes R740 yes R740XD2 R820 yes yes yes yes yes yes yes yes yes yes yes yes yes no yes R830 yes yes
Dell EMC Platform  Blade NDC  Mezzanine Slot B  Mezzanine Slot C
FC430 yes yes yes
FC630 yes yes yes
FC830 yes yes yes
M420 yes yes yes
M520 no yes yes
M620 yes yes yes
M630 yes yes yes
M630 for VRTX yes
M640 yes yes yes
M640 for VRTX yes
M820 yes yes yes
M830 yes yes yes
M830 for VRTX yes
MX740c yes yes yes
MX840c yes yes yes
Supported platforms or slots are indicated by "yes." Unsupported are indicated by "no." Not applicable are indicated by blank cells.
NOTE: This feature is enabled and configured by the equipment manufacturer. It is not available on all adapters and network controllers. There are no user-configurable settings.

Monitoring and Reporting
Temperature information is displayed on the Link tab in Intel® PROSet for Windows* Device Manager. There are three possible conditions:
l Temperature: Normal. Indicates normal operation.
Wait for Link Determines whether the driver waits for auto-negotiation to be successful before reporting the link state. If this feature is off, the driver does not wait for auto-negotiation. If the feature is on, the driver does wait for auto-negotiation. If this feature is on and the speed is not set to auto-negotiation, the driver will wait for a short time for link to be established before reporting the link state.
Linux* Driver Installation and Configuration Overview This release includes Linux Base Drivers for Intel® Network Connections.
Devices Supported by the ixgbe Linux Base Driver
l Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
l Intel® Ethernet 10G 2P X540-t Adapter
l Intel® Ethernet 10G 2P X550-t Adapter
l Intel® Ethernet 10G 4P X550 rNDC
l Intel® Ethernet 10G 4P X550/I350 rNDC
l Intel® Ethernet 10G 4P X540/I350 rNDC
l Intel® Ethernet 10G 4P X520/I350 rNDC
l Intel® Ethernet 10G 2P X520-k bNDC
l Intel® Ethernet 10G 2P X520 Adapter
l Intel® Ethernet 10G X520 LOM

Devices Supported by the i40e Linux Base Driver
Minimum TX Bandwidth is the guaranteed minimum data transmission bandwidth, as a percentage of the full physical port link speed, that the partition will receive. The bandwidth the partition is awarded will never fall below the level you specify here.
Example of setting the minimum and maximum bandwidth (assume there are four functions on the port, eth6-eth9, and that eth6 is the first function on the port):
# mkdir /config/eth6
# mkdir /config/eth7
# mkdir /config/eth8
# mkdir /config/eth9
# echo 50 > /config/eth6/min_bw
# echo 100 > /config/eth6/max_bw
# echo 20 > /config/eth7/min_bw
# echo 100 > /config/eth7/max_bw
# echo 20 > /config/eth8/min_bw
# echo 100 > /config/eth8/max_bw
# echo 10 > /config/eth9/min_bw
# echo 25 > /config/eth9/max_bw
# echo 1 > /config/eth6/commit
igb Linux* Driver for the Intel® Gigabit Adapters igb Overview NOTE: In a virtualized environment, on Intel® Server Adapters that support SR-IOV, the virtual function (VF) may be subject to malicious behavior. Software- generated layer two frames, like IEEE 802.3x (link flow control), IEEE 802.1Qbb (priority based flow-control), and others of this type, are not expected and can throttle traffic between the host and the virtual switch, reducing performance.
l Install from Source Code
l Install Using KMP RPM
l Install Using KMOD RPM

Install from Source Code
To build a binary RPM* package of this driver, run 'rpmbuild -tb <filename.tar.gz>'. Replace <filename.tar.gz> with the specific filename of the driver.

NOTES:
l For the build to work properly it is important that the currently running kernel MATCH the version and configuration of the installed kernel source. If you have just recompiled your kernel, reboot the system.
For example, intel-igb-1.3.8.6-1.x86_64.rpm: igb is the component name; 1.3.8.6-1 is the component version; and x86_64 is the architecture type.
KMP RPMs are provided for supported Linux distributions. The naming convention for the included KMP RPMs is:
intel-<component name>-kmp-<kernel type>-<component version>_<kernel version>.<arch type>.rpm
For example, intel-igb-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm: igb is the component name; default is the kernel type; 1.3.8.6 is the component version; 2.6.27.19_5-1 is the kernel version; and x86_64 is the architecture type.
Parameter Name Valid Range/Settings Default Description
0 = Setting InterruptThrottleRate to 0 turns off any interrupt moderation and may improve small packet latency. However, this is generally not suitable for bulk throughput traffic due to the increased CPU utilization of the higher interrupt rate.
NOTES:
- On 82599, X540, and X550-based adapters, disabling InterruptThrottleRate will also result in the driver disabling HW RSC.
Parameter Name Valid Range/Settings Default Description
0 = Legacy Interrupts.
1 = MSI Interrupts.
2 = MSI-X interrupts (default).
RSS 0-8 1
0 = Assign up to whichever is less: the number of CPUs or the number of queues.
X = Assign X queues, where X is less than or equal to the maximum number of queues. The driver caps the value at the maximum number of supported queues. For example, I350-based adapters allow RSS=8, where 8 is the maximum number of queues allowed.
Parameter Name Valid Range/Settings Default Description
max_vfs 0-7 0
This parameter adds support for SR-IOV. It causes the driver to spawn up to max_vfs virtual functions. If the value is greater than 0, it will force the VMDQ parameter to equal 1 or more.
NOTE: When either SR-IOV mode or VMDq mode is enabled, hardware VLAN filtering and VLAN tag stripping/insertion will remain enabled. Please remove the old VLAN filter before the new VLAN filter is added.
Parameter Name Valid Range/Settings Default Description
DMAC 0, 250, 500, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000 0 (disabled)
Enables or disables the DMA Coalescing feature. Values are in microseconds and set the DMA Coalescing feature's internal timer. Direct Memory Access (DMA) allows the network device to move packet data directly to the system's memory, reducing CPU utilization.
Jumbo Frames
Jumbo Frames support is enabled by changing the MTU to a value larger than the default of 1500 bytes. Use the ifconfig command to increase the MTU size. For example:
ifconfig eth<x> mtu 9000 up
This setting is not saved across reboots. The setting change can be made permanent by adding MTU=9000 to the file /etc/sysconfig/network-scripts/ifcfg-eth<x> in Red Hat distributions. Other distributions may store this setting in a different location.
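On distributions that ship the iproute2 tools, the MTU can also be set with the ip command. This is a sketch; eth0 is illustrative, and like the ifconfig form it does not persist across reboots:

ip link set dev eth0 mtu 9000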
Multiqueue In this mode, a separate MSI-X vector is allocated for each queue and one for “other” interrupts such as link status change and errors. All interrupts are throttled via interrupt moderation. Interrupt moderation must be used to avoid interrupt storms while the driver is processing one interrupt. The moderation value should be at least as large as the expected time for the driver to process an interrupt. Multiqueue is off by default. MSI-X support is required for Multiqueue.
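To confirm that per-queue MSI-X vectors were actually allocated, you can inspect /proc/interrupts (a sketch; eth0 is an illustrative interface name):

grep eth0 /proc/interrupts

With Multiqueue active, each queue typically appears with its own MSI-X vector line; if only one line is present, the driver has fallen back to MSI or Legacy interrupts.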
MAC and VLAN anti-spoofing feature When a malicious driver attempts to send a spoofed packet, it is dropped by the hardware and not transmitted. An interrupt is sent to the PF driver notifying it of the spoof attempt. When a spoofed packet is detected the PF driver will send the following message to the system log (displayed by the "dmesg" command): Spoof event(s) detected on VF(n) Where n=the VF that attempted to do the spoofing.
If you have multiple interfaces in a server, turn on ARP filtering by entering:
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
This only works if your kernel's version is higher than 2.4.5.
NOTE: This setting is not saved across reboots. The configuration change can be made permanent by adding the following line to the file /etc/sysctl.conf:
net.ipv4.conf.all.arp_filter = 1
ixgbe Linux* Driver for the Intel® 10 Gigabit Server Adapters
ixgbe Overview
WARNING: By default, the ixgbe driver compiles with the Large Receive Offload (LRO) feature enabled. This option offers the lowest CPU utilization for receives but is incompatible with routing/IP forwarding and bridging. If enabling IP forwarding or bridging is a requirement, it is necessary to disable LRO using the compile-time options noted in the LRO section later in this section.
See SFP+ and QSFP+ Devices for more information.

Building and Installation
There are three methods for installing the Linux driver:
l Install from Source Code
l Install Using KMP RPM
l Install Using KMOD RPM

Install from Source Code
To build a binary RPM* package of this driver, run 'rpmbuild -tb <filename.tar.gz>'. Replace <filename.tar.gz> with the specific filename of the driver.
The RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is:
intel-<component name>-<component version>.<arch type>.rpm
For example, intel-ixgbe-1.3.8.6-1.x86_64.rpm: ixgbe is the component name; 1.3.8.6-1 is the component version; and x86_64 is the architecture type.
KMP RPMs are provided for supported Linux distributions. The naming convention for the included KMP RPMs is:
intel-<component name>-kmp-<kernel type>-<component version>_<kernel version>.<arch type>.rpm
Command Line Parameters
If the driver is built as a module, the following optional parameters are used by entering them on the command line with the modprobe command using this syntax:
modprobe ixgbe [<option>=<VAL1>,<VAL2>,...]
Parameter Name Valid Range/Settings Default Description
InterruptThrottleRate 956 - 488,281 (0=off, 1=dynamic) 1
Interrupt Throttle Rate controls the number of interrupts each interrupt vector can generate per second. Increasing ITR lowers latency at the cost of increased CPU utilization, though it may help throughput in some circumstances.
0 = Setting InterruptThrottleRate to 0 turns off any interrupt moderation and may improve small packet latency.
Parameter Name Valid Range/Settings Default Description
LLISize 0 - 1500 0 (disabled)
LLISize causes an immediate interrupt if the board receives a packet smaller than the specified size.
NOTE: LLI is not supported on X550-based adapters.
LLIEType 0 - 0x8FFF 0 (disabled)
Low Latency Interrupt Ethernet Protocol Type.
LLIVLANP 0 - 7 0 (disabled)
Low Latency Interrupt on VLAN Priority Threshold.
NOTE: LLI is not supported on X550-based adapters.
Flow Control
Parameter Name Valid Range/Settings Default Description Perfect filter is an interface to load the filter table that funnels all flow into queue_0 unless an alternative queue is specified using "action." In that case, any flow that matches the filter criteria will be directed to the appropriate queue. Support for Virtual Function (VF) is via the user-data field. You must update to the version of ethtool built for the 2.6.40 kernel. Perfect Filter is supported on all kernels 2.6.30 and later.
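For example, a minimal sketch of loading a perfect filter that funnels a specific flow to queue 2 (eth0, the address, and the port are illustrative placeholders):

ethtool -U eth0 flow-type tcp4 src-ip 192.168.10.1 dst-port 5000 action 2

Any TCP/IPv4 packet matching the given source IP and destination port is then directed to receive queue 2 instead of queue_0.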
Parameter Name Valid Range/Settings Default Description
udp6 UDP over IPv6
f Hash on bytes 0 and 1 of the Layer 4 header of the rx packet.
n Hash on bytes 2 and 3 of the Layer 4 header of the rx packet.
Parameter Name Valid Range/Settings Default Description
max_vfs 1 - 63 0
This parameter adds support for SR-IOV. It causes the driver to spawn up to max_vfs virtual functions. If the value is greater than 0, it will also force the VMDq parameter to be 1 or more.
NOTE: When either SR-IOV mode or VMDq mode is enabled, hardware VLAN filtering and VLAN tag stripping/insertion will remain enabled. Please remove the old VLAN filter before the new VLAN filter is added.
Parameter Name Valid Range/Settings Default Description When DCB is enabled, network traffic is transmitted and received through multiple traffic classes (packet buffers in the NIC). The traffic is associated with a specific class based on priority, which has a value of 0 through 7 used in the VLAN tag. When SR-IOV is not enabled, each traffic class is associated with a set of RX/TX descriptor queue pairs. The number of queue pairs for a given traffic class depends on the hardware configuration.
Parameter Name Valid Range/Settings Default Description
EEE 0-1
0 = Disables EEE
1 = Enables EEE
A link between two EEE-compliant devices will result in periodic bursts of data followed by periods where the link is in an idle state. This Low Power Idle (LPI) state is supported at both 1 Gbps and 10 Gbps link speeds.
NOTES:
l EEE support requires auto-negotiation. Both link partners must support EEE.
l EEE is not supported on all Intel® Ethernet Network devices or at all link speeds.
dmesg -n 8
NOTE: This setting is not saved across reboots.

Jumbo Frames
Jumbo Frames support is enabled by changing the MTU to a value larger than the default of 1500 bytes. The maximum value for the MTU is 9710. Use the ifconfig command to increase the MTU size. For example, enter the following where <x> is the interface number:
ifconfig eth<x> mtu 9000 up
This setting is not saved across reboots.
HW RSC
82599-based adapters support hardware-based receive side coalescing (RSC), which can merge multiple frames from the same IPv4 TCP/IP flow into a single structure that can span one or more descriptors. It works similarly to the software large receive offload technique. By default HW RSC is enabled, and SW LRO cannot be used for 82599-based adapters unless HW RSC is disabled. IXGBE_NO_HW_RSC is a compile-time flag that can be enabled at compile time to remove support for HW RSC from the driver.
-N --config-nfc Configures the receive network flow classification.
rx-flow-hash tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 m|v|t|s|d|f|n|r...
Configures the hash options for the specified network traffic type.
udp4 UDP over IPv4
udp6 UDP over IPv6
f Hash on bytes 0 and 1 of the Layer 4 header of the rx packet.
n Hash on bytes 2 and 3 of the Layer 4 header of the rx packet.
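For example, to hash UDP over IPv4 traffic on source/destination IP addresses and ports, and then display the resulting setting (eth0 is an illustrative interface name):

ethtool -N eth0 rx-flow-hash udp4 sdfn
ethtool -n eth0 rx-flow-hash udp4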
A potential workaround is to use the Cisco IOS command "no errdisable detect cause all" from the Global Configuration prompt which enables the switch to keep the interfaces up, regardless of errors. Rx Page Allocation Errors 'Page allocation failure. order:0' errors may occur under stress with kernels 2.6.25 and newer. This is caused by the way the Linux kernel reports this stressed condition.
ixgbevf Linux* Driver for the Intel® 10 Gigabit Server Adapters ixgbevf Overview SR-IOV is supported by the ixgbevf driver, which should be loaded on both the host and VMs. This driver supports upstream kernel versions 2.6.30 (or higher) x86_64. The ixgbevf driver supports 82599, X540, and X550 virtual function devices that can only be activated on kernels supporting SR-IOV. SR-IOV requires the correct platform and OS support. The ixgbevf driver requires the ixgbe driver, version 2.0 or later.
l Install from Source Code
l Install Using KMP RPM
l Install Using KMOD RPM

Install from Source Code
To build a binary RPM* package of this driver, run 'rpmbuild -tb <filename.tar.gz>'. Replace <filename.tar.gz> with the specific filename of the driver.
NOTES:
l For the build to work properly it is important that the currently running kernel MATCH the version and configuration of the installed kernel source. If you have just recompiled your kernel, reboot the system.
For example, intel-ixgbevf-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm: ixgbevf is the component name; default is the kernel type; 1.3.8.6 is the component version; 2.6.27.19_5-1 is the kernel version; and x86_64 is the architecture type.
To install the KMP RPM, type the following two commands:
rpm -i <rpm filename>
rpm -i <kmp rpm filename>
For example, to install the ixgbevf KMP RPM package, type the following:
rpm -i intel-ixgbevf-1.3.8.6-1.x86_64.rpm
rpm -i intel-ixgbevf-kmp-default-1.3.8.6_2.6.27.19_5-1.x86_64.rpm
Command Line Parameters
If the driver is built as a module, the following optional parameters are used by entering them on the command line with the modprobe command using this syntax:
modprobe ixgbevf [<option>=<VAL1>,<VAL2>,...]
Parameter Name Valid Range/Settings Default Description
NOTES:
l Dynamic interrupt throttling is only applicable to adapters operating in MSI or Legacy interrupt mode, using a single receive queue.
l When ixgbevf is loaded with default settings and multiple adapters are in use simultaneously, the CPU utilization may increase non-linearly.
MACVLAN ixgbevf supports MACVLAN on those kernels that have the feature included. Kernel support for MACVLAN can be tested by checking if the MACVLAN driver is loaded. The user can run 'lsmod | grep macvlan' to see if the MACVLAN driver is loaded or run 'modprobe macvlan' to try to load the MACVLAN driver. It may be necessary to update to a recent release of the iproute2 package to get support of MACVLAN via the 'ip' command.
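A minimal sketch of creating a MACVLAN interface on top of the VF with the 'ip' command (eth0, the macvlan0 name, and the address are illustrative placeholders):

modprobe macvlan
ip link add link eth0 name macvlan0 type macvlan mode bridge
ip addr add 192.168.1.50/24 dev macvlan0
ip link set macvlan0 up

This creates a second MAC address on top of eth0 in bridge mode, assigns it an address, and brings the new interface up.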
i40e Linux Driver for the Intel X710 Ethernet Controller Family i40e Overview NOTE: The kernel assumes that TC0 is available, and will disable Priority Flow Control (PFC) on the device if TC0 is not available. To fix this, ensure TC0 is enabled when setting up DCB on your switch. NOTE: If the physical function (PF) link is down, you can force link up (from the host PF) on any virtual functions (VF) bound to the PF. Note that this requires kernel support (Redhat kernel 3.10.0-327 or newer, upstream kernel 3.
l l Intel® Ethernet 25G 2P XXV710 Adapter Intel® Ethernet 25G 2P XXV710 Mezz SFP+ Devices with Pluggable Optics NOTE: For SFP+ fiber adapters, using "ifconfig down" turns off the laser. "ifconfig up" turns the laser on. See SFP+ and QSFP+ Devices for more information.
Install Using KMP RPM
The KMP RPMs update existing i40e RPMs currently installed on the system. These updates are provided by SuSE in the SLES release. If an RPM does not currently exist on the system, the KMP will not install.
The RPMs are provided for supported Linux distributions. The naming convention for the included RPMs is:
intel-<component name>-<component version>.<arch type>.rpm
For example, intel-i40e-1.3.8.6-1.x86_64.rpm: i40e is the component name; 1.3.8.6-1 is the component version; and x86_64 is the architecture type.
Parameter Name Valid Range/Settings Default Description
max_vfs 1 - 63 0
This parameter adds support for SR-IOV. It causes the driver to spawn up to max_vfs virtual functions.
NOTES:
l This parameter is only used on kernel 3.7.x and below. On kernel 3.8.x and above, use sysfs to enable VFs, as sketched below. Also, for Red Hat distributions, this parameter is only used on version 6.6 and older. For version 6.7 and newer, use sysfs.
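A sketch of the sysfs method (eth0 and the VF count of 4 are illustrative):

echo 4 > /sys/class/net/eth0/device/sriov_numvfs

Writing 0 to the same file removes the VFs, and the maximum number of VFs the device supports can be read back:

cat /sys/class/net/eth0/device/sriov_totalvfs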
Parameter Name Valid Range/Settings Default Description
To set a VF as trusted or untrusted, enter the following command in the Hypervisor:
# ip link set dev eth0 vf 1 trust [on|off]
Once the VF is designated as trusted, use the following commands in the VM to set the VF to promiscuous mode.
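The in-VM commands themselves did not survive in this text; the following is a minimal sketch using standard iproute2 commands, assuming the VF appears as eth0 inside the guest:

# ip link set dev eth0 promisc on

enables unicast promiscuous mode, and

# ip link set dev eth0 allmulticast on

enables multicast promiscuous mode on the trusted VF.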
Parameter Name Valid Range/Settings Default Description l type, it supports valid combinations of IP addresses (source or destination) and UDP/TCP ports (source and destination). For example, you can supply only a source IP address, a source IP address and a destination port, or any combination of one or more of these four parameters. The Linux i40e driver allows you to filter traffic based on a user-defined flexible two-byte pattern and offset by using the ethtool user-def and mask fields.
Parameter Name Valid Range/Settings Default Description
ethtool -U <ethX> flow-type <type> src-ip <ip> dst-ip <ip> src-port <port> dst-port <port> action <queue>
Where:
<ethX> - the ethernet device to program
<type> - can be ip4, tcp4, udp4, or sctp4
<ip> - the IP address to match on
<port> - the port number to match on
<queue> - the queue to direct traffic towards (-1 discards the matched traffic)
Use the following command to display all of the active filters:
ethtool -u <ethX>
Use the following command to delete a filter:
ethtool -U <ethX> delete <filter id>
Parameter Name Valid Range/Settings Default Description
This flexible data is specified using the "user-def" field of the ethtool command in the following way:
Bits 31-16: offset into packet payload
Bits 15-0: 2 bytes of flexible data
For example, ... user-def 0x4FFFF ... tells the filter to look 4 bytes into the payload and match that value against 0xFFFF. The offset is based on the beginning of the payload, and not the beginning of the packet. Thus flow-type tcp4 ... user-def 0x8BEAF ...
Parameter Name Valid Range/Settings Default Description
l device is in Single Function per Port mode.
l The "action -1" option, which drops matching packets in regular Flow Director filters, is not available to drop packets when used with cloud filters.
l For IPv4 and ether flow-types, cloud filters cannot be used for TCP or UDP filters.
l Cloud filters can be used as a method for implementing queue splitting in the PF.
Parameter Name Valid Range/Settings Default Description
Redirect traffic coming from 192.168.42.13 port 12344 with destination 192.168.42.33 port 12344 into VF id 1, and call this "rule 3".
For cloud filters (tunneled packets):
l All other filters, including where Tenant ID/VNI is specified.
l The lower 32 bits of the user-def field can carry the tenant ID/VNI if required.
l The VF can be specified using the "action" field, just as regular filters described in the Flow Director Filter section above.
NOTE: This setting is not saved across reboots.

Jumbo Frames
Jumbo Frames support is enabled by changing the MTU to a value larger than the default of 1500 bytes. The maximum value for the MTU is 9710. Use the ifconfig command to increase the MTU size. For example, enter the following where <x> is the interface number:
ifconfig eth<x> mtu 9000 up
This setting is not saved across reboots.
#ethtool -N <ethX> rx-flow-hash <type> <options>
Adapter firmware implements LLDP and DCBX protocol agents as per 802.1AB and 802.1Qaz, respectively. The firmware-based DCBX agent runs in willing mode only and can accept settings from a DCBX-capable peer. Software configuration of DCBX parameters via dcbtool/lldptool is not supported.
NOTE: Firmware LLDP can be disabled by setting the private flag disable-fw-lldp, as sketched below.
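A sketch using ethtool private flags (eth0 is an illustrative interface name; the flag name comes from the note above):

ethtool --set-priv-flags eth0 disable-fw-lldp on
ethtool --show-priv-flags eth0

The second command lists the device's private flags so you can confirm the new state.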
1. Disable adaptive ITR and lower the Rx and Tx interrupt rates. The example below affects every queue of the specified interface.
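A sketch using the standard ethtool coalescing options (eth0 and the 125 microsecond values are illustrative; tune them for your workload):

ethtool -C eth0 adaptive-rx off adaptive-tx off rx-usecs 125 tx-usecs 125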
Incomplete messages in the system log The NVMUpdate utility may write several incomplete messages in the system log. These messages take the form: in the driver Pci Ex config function byte index 114 in the driver Pci Ex config function byte index 115 These messages can be ignored. Bad checksum counter incorrectly increments when using VxLAN When passing non-UDP traffic over a VxLAN interface, the port.rx_csum_bad counter increments for the packets.
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
This only works if your kernel's version is higher than 2.4.5.
NOTE: This setting is not saved across reboots. The configuration change can be made permanent by adding the following line to the file /etc/sysctl.conf:
net.ipv4.conf.all.arp_filter = 1
Another alternative is to install the interfaces in separate broadcast domains (either in different switches or in a switch partitioned to VLANs).
NOTE: Link time can vary. Adjust the LINKDELAY value accordingly.
Alternatively, NetworkManager can be used to configure the interfaces, which avoids the set timeout. For configuration instructions for NetworkManager, refer to the documentation provided by your distribution.

Loading the i40e driver in 3.2.x and newer kernels displays a kernel tainted message
Due to recent kernel changes, loading an out-of-tree driver causes the kernel to be tainted.
iavf Linux Driver iavf Overview The i40evf driver was renamed to the iavf (Intel Adaptive Virtual Function) driver. This was done to reduce the impact of future Intel Ethernet Controllers. The iavf driver allows you to upgrade your hardware without needing to upgrade the virtual function driver in each of the VMs running on top of the hardware.
1. Download the base driver tar file to the directory of your choice. For example, use '/home/username/iavf' or '/usr/local/src/iavf'.
2. Untar/unzip the archive, where <x.x.x> is the version number for the driver tar:
tar zxf iavf-<x.x.x>.tar.gz
3. Change to the driver src directory, where <x.x.x> is the version number for the driver tar:
cd iavf-<x.x.x>/src/
4. Compile the driver module:
make install
The binary will be installed as:
/lib/modules/<KERNEL VERSION>/kernel/drivers/net/iavf/iavf.ko
For example, kmod-iavf-2.3.4-1.x86_64.rpm:
l iavf is the driver name
l 2.3.4 is the version
l x86_64 is the architecture type
To install the KMOD RPM, go to the directory of the RPM and type the following command:
rpm -i <rpm filename>
For example, to install the iavf KMOD RPM package, type the following:
rpm -i kmod-iavf-2.3.4-1.x86_64.rpm

Command Line Parameters
The iavf driver does not support any command line parameters.
Known Issues

Virtual machine does not get link
If the virtual machine has more than one virtual port assigned to it, and those virtual ports are bound to different physical ports, you may not get link on all of the virtual ports. The following command may work around the issue:
ethtool -r <ethX>
Where <ethX> is the PF interface in the host, for example: p5p1. You may need to run the command more than once to get link on all virtual ports.
Data Center Bridging (DCB) for Intel® Network Connections
Data Center Bridging provides a lossless data center transport layer for using LANs and SANs in a single unified fabric. Data Center Bridging includes the following capabilities:
l Priority-based flow control (PFC; IEEE 802.1Qbb)
l Enhanced transmission selection (ETS; IEEE 802.1Qaz)
l Congestion notification (CN)
l Extensions to the Link Layer Discovery Protocol standard (IEEE 802.1AB)
iSCSI Over DCB
Intel® Ethernet adapters support iSCSI software initiators that are native to the underlying operating system. Data Center Bridging is most often configured at the switch. If the switch is not DCB capable, the DCB handshake will fail but the iSCSI connection will not be lost.
NOTE: DCB does not install in a VM. iSCSI over DCB is only supported in the base OS. An iSCSI initiator running in a VM will not benefit from DCB Ethernet enhancements.
Remote Boot Remote Boot allows you to boot a system using only an Ethernet adapter. You connect to a server that contains an operating system image and use that to boot your local system. Flash Images "Flash" is a generic term for nonvolatile RAM (NVRAM), firmware, and option ROM (OROM). Depending on the device, it can be on the NIC or on the system board. Updating the Flash from Linux The BootUtil command line utility can update the flash on an Intel Ethernet network adapter.
In the Boot Manager Boot Menu, Intel adapters are identified as follows:
l X710-controlled adapters: "IBA 40G"
l Other 10G adapters: "IBA XE"
l 1G adapters: "IBA 1G"

Intel® Boot Agent Configuration
Boot Agent Client Configuration
The Boot Agent is enabled and configured from HII.
CAUTION: If spanning tree protocol is enabled on a switch port through which a port is trying to use PXE, the delay before the port starts forwarding can cause a DHCP timeout.
NOTE: When the Intel Boot Agent software is installed as an upgrade for an earlier version boot ROM, the associated server-side software may not be compatible with the updated Intel Boot Agent. Contact your system administrator to determine if any server updates are necessary. Linux* Server Setup Consult your Linux* vendor for information about setting up the Linux Server. Windows* Deployment Services Nothing is needed beyond the standard driver files supplied on the media.
PXE-E06: Option ROM requires DDIM support. The system BIOS does not support DDIM. The BIOS does not support the mapping of the PCI expansion ROMs into upper memory as required by the PCI specification. The Intel Boot Agent cannot function in this system. The Intel Boot Agent returns control to the BIOS and does not attempt to remote boot. You may be able to resolve the problem by updating the BIOS on your system.
PXE-EC6: UNDI driver image is invalid.
The UNDI driver image signature was invalid. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.
PXE-EC8: !PXE structure was not found in UNDI driver code segment.
The Intel Boot Agent could not locate the needed !PXE structure resource. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.
PXE-EC9: PXENV+ structure was not found in UNDI driver code segment.
If this does not work, the problem may be occurring before the Intel Boot Agent software even begins operating. In this case, there may be a BIOS problem with your computer. Contact your computer manufacturer's customer support group for help in correcting your problem. There are configuration/operation problems with the boot process If your PXE client receives a DHCP address, but then fails to boot, you know the PXE client is working correctly.
Intel® Ethernet iSCSI Boot Port Selection Menu The first screen of the Intel® iSCSI Boot Setup Menu displays a list of Intel® iSCSI Boot-capable adapters. For each adapter port the associated PCI device ID, PCI bus/device/function location, and a field indicating Intel® Ethernet iSCSI Boot status is displayed. Up to 10 iSCSI Boot-capable ports are displayed within the Port Selection Menu. If there are more Intel® iSCSI Boot-capable adapters, these are not listed in the setup menu.
Intel® Ethernet iSCSI Boot Port Specific Setup Menu
The port-specific iSCSI setup menu has four options:
l Intel® iSCSI Boot Configuration - Selecting this option will take you to the iSCSI Boot Configuration Setup Menu. The iSCSI Boot Configuration Menu is described in detail in the section below and will allow you to configure the iSCSI parameters for the selected network port.
l CHAP Configuration - Selecting this option will take you to the CHAP configuration screen.
Listed below are the options in the Intel® iSCSI Boot Configuration Menu:
l Use Dynamic IP Configuration (DHCP) - Selecting this checkbox will cause iSCSI Boot to attempt to get the client IP address, subnet mask, and gateway IP address from a DHCP server. If this checkbox is enabled, these fields will not be visible.
l Initiator Name - Enter the iSCSI initiator name to be used by Intel® iSCSI Boot when connecting to an iSCSI target.
The iSCSI CHAP Configuration menu has the following options to enable CHAP authentication:
l Use CHAP - Selecting this checkbox will enable CHAP authentication for this port. CHAP allows the target to authenticate the initiator. After enabling CHAP authentication, a user name and target password must be entered.
l User Name - Enter the CHAP user name in this field. This must be the same as the CHAP user name configured on the iSCSI target.
l Target Secret - Enter the CHAP password in this field.
NOTES: l To support iSCSI Boot, the target needs to support multiple sessions from the same initiator. Both the iSCSI Boot firmware initiator and the OS High Initiator need to establish an iSCSI session at the same time. Both these initiators use the same Initiator Name and IP Address to connect and access the OS disk but these two initiators will establish different iSCSI sessions. In order for the target to support iSCSI Boot, the target must be capable of supporting multiple sessions and client logins.
Configure option 12 with the hostname of the iSCSI initiator. DHCP Option 3, Router List: Configure option 3 with the gateway or Router IP address, if the iSCSI initiator and iSCSI target are on different subnets. Creating a Bootable Image for an iSCSI Target There are two ways to create a bootable image on an iSCSI target: l Install directly to a hard drive in an iSCSI storage array (Remote Install).
l Intel® Ethernet iSCSI Boot does not load on system startup and the sign-on banner is not displayed.
l After installing Intel Ethernet iSCSI Boot, the system will not boot to a local disk or network boot device.
l The system becomes unresponsive after Intel Ethernet iSCSI Boot displays the sign-on banner or after connecting to the iSCSI target.
l "Intel® iSCSI Remote Boot" does not show up as a boot device in the system BIOS boot device menu.
Error message displayed: "PnP Check Structure is invalid!"
Error message displayed: "Invalid iSCSI connection information"
Error message displayed: "Unsupported SCSI disk block size!"
Error message displayed: "ERROR: Could not establish TCP/IP connection with iSCSI target system."
Error message displayed: "ERROR: CHAP authentication with target failed."
Error message displayed: "ERROR: Login request rejected by iSCSI target system."
Error message displayed: "ERROR: iSCSI target has reported an error."
l An error has occurred on the iSCSI target. Inspect the iSCSI target to determine the source of the error and ensure it is configured properly.
Error message displayed: "ERROR: There is an IP address conflict with another system on the network."
l A system on the network was found using the same IP address as the iSCSI Option ROM client.
HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<InstanceID>\Parameters
DumpMiniport REG_SZ iscsdump.sys

Moving iSCSI adapter to a different slot:
In a Windows* installation, if you move the iSCSI adapter to a PCI slot other than the one that it occupied when the drivers and MS iSCSI Remote Boot Initiator were installed, then a System Error may occur during the middle of the Windows Splash Screen. This issue goes away if you return the adapter to its original PCI slot.
Linux Known Issues Channel Bonding Linux Channel Bonding has basic compatibility issues with iSCSI Boot and should not be used. Authentications errors on EqualLogic target may show up in dmesg when running Red Hat* Enterprise Linux 4 These error messages do not indicate a block in login or booting and may safely be ignored. LRO and iSCSI Incompatibility LRO (Large Receive Offload) is incompatible with iSCSI target or initiator traffic.
Troubleshooting

Common Problems and Solutions
Many network problems have simple, easy-to-fix causes. Review each one of these before going further.
l Check for recent changes to hardware, software, or the network that may have disrupted communications.
l Check the driver software.
l Make sure you are using the latest appropriate drivers for your adapter from the Intel support website.
l Disable (or unload), then re-enable (reload) the driver or adapter.
l Check for conflicting settings.
Problem: Another adapter stops working after you installed the Intel® Network Adapter
Solution: Make sure your PCI BIOS is current. See PCI / PCI-X / PCI Express Configuration. Check for interrupt conflicts and sharing problems. Make sure the other adapter supports shared interrupts. Also, make sure your operating system supports shared interrupts. Unload all PCI device drivers, then reload all drivers.

Problem: Adapter unable to connect to switch at correct speed.
l Configure interrupts for level-triggering, as opposed to edge-triggering.
l Reserve interrupts and/or memory addresses. This prevents multiple buses or bus slots from using the same interrupts. Check the BIOS for IRQ options for PCI / PCI-X / PCIe.

Problem: Driver message: "Rx/Tx is disabled on this device because an unsupported SFP+ module type was detected."
Solution: You installed an unsupported module in the device. See Supported SFP+ and QSFP+ Modules for a list of supported modules.
Resolving Firmware Recovery Mode Issues If your device is in Firmware Recovery mode you can restore it to factory defaults using the process for resolution of Firmware Recovery Mode Issues as outlined in the sub-sections below. NVM Self Check The process begins after power-on or reboot. At this time, the firmware will perform tests to assess whether there is damage or corruption of the device NVM image.
repair potentially damaged system files. 2. If your device is in Firmware Recovery mode you can restore it to factory defaults using the latest Dell EMC Update Package for Intel Adapter Firmware (FW-DUP) or Intel NIC Family ESXi Firmware Update Package. Download the latest Dell EMC Update Package for Intel Adapter Firmware (FW-DUP) or Intel NIC Family ESXi Firmware Update Package from Dell’s support website and follow the instructions in them.
Testing the Adapter Intel's diagnostic software lets you test the adapter to see if there are problems with the adapter hardware, the cabling, or the network connection. Testing from Windows Intel PROSet allows you to run three types of diagnostic tests. l Connection Test: Verifies network connectivity by pinging the DHCP server, WINS server, and gateway. l Cable Tests: Provide information about cable properties.
Intel® Network Adapter Messages Below is a list of custom event messages that appear in the Windows Event Log for Intel® Ethernet adapters: Event Message ID Severity 1 The Hyper-V role was disabled on the system. All Intel® Ethernet devices configured with a Virtualization performance profile were changed to a more appropriate performance profile. Informational 6 PROBLEM: Unable to allocate the map registers necessary for operation. ACTION: Reduce the number of transmit descriptors and restart.
Event Message ID Severity 47 PROBLEM: Could not map the network adapter flash. ACTION: Install the latest driver from http://www.intel.com/support/go/network/adapter/home.htm. ACTION: Try another slot. Error 48 PROBLEM: The fan on the network adapter has failed. ACTION: Power off the machine and replace the network adapter. Error 49 PROBLEM: The driver was unable to load due to an unsupported SFP+ module installed in the adapter. ACTION: Replace the module.
Intel DCB Messages Below is a list of intermediate driver custom event messages that appear in the Windows Event Log: Event ID Message Severity 256 Service debug string Informational 257 Enhanced Transmission Selection feature has been enabled on a device. Informational 258 Enhanced Transmission Selection feature has been disabled on a device. Informational 259 Priority Flow Control feature has been enabled on a device.
Event ID 787 (Error): Service experienced a receive state machine error.
Event ID 789 (Error): Service connection to LLDP protocol driver failed.
Event ID 790 (Error): Enhanced Transmission Selection feature on a device has changed to non-operational.
Event ID 791 (Error): Priority Flow Control feature on a device has changed to non-operational.
Event ID 792 (Error): Application feature on a device has changed to non-operational.
Event ID 793 (Error): Service rejected configuration - multiple link strict bandwidth groups were detected.
Indicator Lights
The Intel Server and Desktop network adapters feature indicator lights on the adapter backplate that indicate activity and the status of the adapter board. The following tables define the meaning of the possible states of the indicator lights for each adapter board.
The Intel® Ethernet 10G 2P X710 OCP has the following indicator lights:
LNK
l Green: Operating at maximum port speed.
l Yellow: Linked at less than maximum port speed.
ACT
l Green flashing: Data activity.
l Off: No activity.

The Intel® Ethernet Converged Network Adapter X710 has the following indicator lights:
LNK
l Green: Linked at 10 Gb.
l Yellow: Linked at 1 Gb.
ACT
l Blinking on/off: Actively transmitting or receiving data.
l Off: No link.
The Intel® Ethernet 10G 2P X520 Adapter has the following indicator lights:
GRN 10G (A or B), green
l On: Linked to the LAN.
l Off: Not linked to the LAN.
ACT/LNK (A or B), green
l Blinking on/off: Actively transmitting or receiving data.
l Off: No link.

Dual Port Copper Adapters
The Intel® Ethernet 10G 2P X710-T2L-t OCP has the following indicator lights:
Link
l Green: Linked at 10 Gbps.
l Yellow: Linked at slower than 10 Gbps.
l Off: No link.
The Intel® Ethernet 10G 2P X710-T2L-t Adapter has the following indicator lights:
Link
l Green: Linked at 10 Gbps.
l Yellow: Linked at slower than 10 Gbps.
l Off: No link.
Activity
l Blinking on/off: Actively transmitting or receiving data.
l Off: No link.

The Intel® Ethernet 10G 2P X550-t Adapter has the following indicator lights:
Link
l Green: Linked at 10 Gbps.
l Yellow: Linked at 1 Gbps or 100 Mbps.
l Off: No link.
The Intel® Ethernet 10G 2P X540-t Adapter has the following indicator lights:
Link
l Green: Linked at 10 Gb.
l Yellow: Linked at 1 Gb.
l Off: No link.
Activity
l Blinking on/off: Actively transmitting or receiving data.
l Off: No link.

The Intel® Gigabit 2P I350-t Adapter has the following indicator lights:
ACT/LNK
l Green on: The adapter is connected to a valid link partner.
l Green flashing: Data activity.
l Off: No link.
10/100/1000
Quad Port Copper Adapters
The Intel® Ethernet Converged Network Adapter X710 and Intel® Ethernet Converged Network Adapter X710-T have the following indicator lights:
ACT/LNK
l Green on: The adapter is connected to a valid link partner.
l Green flashing: Data activity.
l Off: No link.
rNDC (Rack Network Daughter Cards)
The Intel® Ethernet 40G 2P XL710 QSFP+ rNDC has the following indicator lights:
LNK (green/yellow)
l Green on: Operating at maximum port speed.
l Off: No link.
ACT (green)
l Green flashing: Data activity.
l Off: No activity.

The Intel® Ethernet 10G 2P X710 OCP has the following indicator lights:
LNK (green/yellow)
l Green on: Operating at maximum port speed.
l Off: No link.
ACT (green)
l Green flashing: Data activity.
The Intel® Ethernet 1G 4P I350-t OCP, Intel® Ethernet 10G 4P X550/I350 rNDC, Intel® Gigabit 4P X550/I350 rNDC, Intel® Ethernet 10G 4P X550 rNDC, Intel® Ethernet 10G 4P X540/I350 rNDC, Intel® Gigabit 4P X540/I350 rNDC and Intel® Gigabit 4P I350-t rNDC have the following indicator lights:
LNK (green/yellow)
l Green on: Operating at maximum port speed.
l Yellow on: Operating at lower port speed.
l Off: No link.
ACT (green)
l Green flashing: Data activity.
l Off: No activity.
Transitioning from i40evf to iavf
Overview
Intel created the Intel® Adaptive Virtual Function (iavf) driver to provide a consistent, future-proof virtual function (VF) interface for Intel® Ethernet controllers. Previously, when you upgraded your network hardware, you replaced the drivers in each virtual machine (VM) with new drivers capable of accessing the new VF device provided by the new hardware.
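Before transitioning a VM, you can confirm which VF driver an interface is currently bound to. A minimal check, with eth0 as a placeholder interface name: "i40evf" in the output indicates the legacy VF driver, "iavf" the adaptive one.

    # Report the driver bound to the interface (eth0 is a placeholder).
    ethtool -i eth0 | grep '^driver'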
5. Load the new driver module.
modprobe iavf

Install Using Linux tarball
1. Copy the iavf driver tar file to your VM image.
2. Untar the file.
tar zxf iavf-<x.x.x>.tar.gz
where <x.x.x> is the version number of the driver tar file.
3. Change to the src directory under the untarred driver directory.
4. Compile and install the driver module.
make
make install
5. Make sure that any older i40evf driver is removed from the kernel before loading the new module.
rmmod i40evf
6. Load the new driver module.
modprobe iavf
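The tarball steps above can be collected into a single shell session. This is a sketch only; it assumes the tarball has already been copied to the VM, that the source unpacks to iavf-<x.x.x>/src as described in step 3, and that build tools and kernel headers are installed.

    VERSION=x.x.x                  # substitute the actual driver version
    tar zxf "iavf-${VERSION}.tar.gz"
    cd "iavf-${VERSION}/src"
    make && sudo make install      # compile and install the module
    sudo rmmod i40evf              # remove the older driver first (step 5)
    sudo modprobe iavf             # load the new driver module (step 6)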
Transition i40evf to iavf on Microsoft Windows Operating Systems
NOTE: Do not use the i40evf device as your primary interface to access the VM. You must have another way to interact with the VM so you don't lose the connection when you disable the i40evf driver.
1. Copy the iavf installer package to your VM image.
2. Use Add/Remove Programs to remove the i40evf driver.
3. Run the iavf install package to install the iavf driver.
Known Issues
NOTE: iSCSI known issues are located in their own section of this manual.
The get-netadaptervmq PowerShell cmdlet displays fewer than the expected number of receive queues
After installing the Dell Update Package (DUP), the get-netadaptervmq PowerShell cmdlet reports 31 queues per port. This is expected behavior. The DUP changes the queue pooling default from pairs to groups of four. Before the DUP is installed, queues are paired into pools of two. After the DUP is installed, queues are put into groups of four.
Throughput Reduction After Hot-Replace
If an Intel gigabit adapter is under extreme stress and is hot-swapped, throughput may drop significantly. This may be due to the PCI property configuration by the Hot-Plug software. If this occurs, throughput can be restored by restarting the system.

CPU Utilization Higher Than Expected
Setting RSS Queues to a value greater than 4 is only advisable for large servers with several processors.
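The RSS Queues setting above is a Windows advanced setting. On Linux, the comparable control is the channel (queue) count, which can be inspected and adjusted with ethtool; a hedged example, with eth0 as a placeholder interface name:

    # Show the current and maximum queue/channel counts.
    ethtool -l eth0
    # Keep the combined queue count at 4, per the guidance above.
    sudo ethtool -L eth0 combined 4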
Intel drivers must be installed by Dell EMC Update Package before configuring Microsoft Hyper-V features
Prior to configuring the Microsoft* Hyper-V features, the Intel® NIC drivers must be installed by the Dell EMC Update Package. If the Microsoft* Hyper-V feature is configured on an unsupported NIC partition on an Intel® X710 device prior to using the Dell EMC Update Package to install Intel® NIC drivers, the driver installation may not complete.
Configuring the Driver on Different Distributions
Configuring a network driver to load properly when the system is started is distribution dependent. Typically, the configuration process involves adding an alias line to /etc/modules.conf or /etc/modprobe.conf, as well as editing other system startup scripts and/or configuration files. Some drivers also accept an interrupt mode parameter on such a line (0=legacy, 1=MSI, 2=MSI-X). Many popular Linux distributions ship with tools to make these changes for you.
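As an illustration of the paragraph above, a minimal modprobe configuration might look like the following. The driver name (ixgbe) and the parameter name (InterruptType) are assumptions for this sketch; check your driver's README for the parameter it actually accepts.

    # /etc/modprobe.d/ixgbe.conf (hypothetical example)
    # Bind eth0 to the ixgbe driver and request MSI-X interrupts
    # (0=legacy, 1=MSI, 2=MSI-X), one value per port.
    alias eth0 ixgbe
    options ixgbe InterruptType=2,2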
Other Intel 10GbE Network Adapter Known Issues
The System H/W Inventory (iDRAC) indicates that Auto-negotiation on the Embedded NIC is Disabled, but elsewhere link speed and duplex auto-negotiation is Enabled
If an optical module is plugged into the Intel® Ethernet 10G X520 LOM on a PowerEdge-C6320, the System H/W Inventory (iDRAC) will indicate that Auto-negotiation is Disabled. However, Windows Device Manager and HII indicate that link speed and duplex Auto-negotiation is Enabled.
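If the system is running Linux, the OS-side auto-negotiation state can be cross-checked with ethtool; a minimal sketch, with em1 as a placeholder interface name:

    # The Auto-negotiation line reflects the operating system's view,
    # which can be compared against the iDRAC inventory report.
    ethtool em1 | grep -i "auto-negotiation"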
When trying to identify the adapter, the Activity LED blinks and the Link LED is solid
If you use the Identify Adapter feature with the following adapters, the Activity LED blinks instead of the Link LED. The Link LED may display a solid green light for 10G ports even if a network link is not present.
Regulatory Compliance Statements
FCC Class A Products
40 Gigabit Ethernet Products
l Intel® Ethernet 40G 2P XL710 QSFP+ rNDC
l Intel® Ethernet Converged Network Adapter XL710-Q2
25 Gigabit Ethernet Products
l Intel® Ethernet 25G 2P XXV710 Mezz
l Intel® Ethernet 25G 2P XXV710 Adapter
10 Gigabit Ethernet Products
l Intel® Ethernet X520 10GbE Dual Port KX4-KR Mezz
l Intel® Ethernet 10G 2P X540-t Adapter
l Intel® Ethernet 10G 2P X550-t Adapter
l Intel® Ethernet 10G 4P X550 rNDC
Gigabit Ethernet Products
l Intel® Gigabit 2P I350-t Adapter
l Intel® Gigabit 4P I350-t Adapter

Safety Compliance
The following safety standards apply to all products listed above:
l UL 60950-1, 2nd Edition, 2011-12-19 (Information Technology Equipment - Safety - Part 1: General Requirements)
l UL 62368-1, 2nd Edition (Information Technology Equipment - Safety Requirements)
l CSA C22.2 No.
l EU REACH Directive
l EU WEEE Directive
l EU RoHS Directive
l China RoHS Directive
l BSMI CNS15663: Taiwan RoHS

Regulatory Compliance Markings
When required, these products are provided with the following product certification markings:
l UL Recognition Mark for USA and Canada
l CE Mark
l EU WEEE Logo
l FCC markings
l VCCI marking
l Australian C-Tick Mark
l Korea MSIP mark
l Taiwan BSMI mark
l People's Republic of China "EFUP" mark

FCC Class A User Information
The Class A products listed above comply with Part 15 of the FCC Rules.
BSMI Class A Statement
KCC Notice Class A (Republic of Korea Only)
BSMI Class A Notice (Taiwan)

FCC Class B User Information
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation.
NOTE: This device complies with Part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation.
EU WEEE Logo

Manufacturer Declaration
European Community Manufacturer Declaration
Intel Corporation declares that the equipment described in this document is in conformance with the requirements of the European Council Directives listed below:
l Low Voltage Directive 2006/95/EC
l EMC Directive 2004/108/EC
l RoHS Directive 2011/65/EU
These products follow the provisions of the European Directive 1999/5/EC.
Dette produkt er i overensstemmelse med det europæiske direktiv 1999/5/EC.
Tämä tuote noudattaa EU-direktiivin 1999/5/EC määräyksiä. Ce produit est conforme aux exigences de la Directive Européenne 1999/5/EC. Dieses Produkt entspricht den Bestimmungen der Europäischen Richtlinie 1999/5/EC. Þessi vara stenst reglugerð Evrópska Efnahags Bandalagsins númer 1999/5/EC. Questo prodotto è conforme alla Direttiva Europea 1999/5/EC. Dette produktet er i henhold til bestemmelsene i det europeiske direktivet 1999/5/EC. Este produto cumpre com as normas da Diretiva Européia 1999/5/EC.
China RoHS Declaration
Class 1 Laser Products
Server adapters listed above may contain laser devices for communication use. These devices are compliant with the requirements for Class 1 Laser Products and are safe in their intended use. In normal operation the output of these laser devices does not exceed the exposure limit of the eye and cannot cause harm.
Customer Support
Web and Internet Sites
http://support.dell.com/
Customer Support Technicians
If the troubleshooting procedures in this document do not resolve the problem, please contact Dell, Inc. for technical assistance (refer to the "Getting Help" section in your system documentation).
Before you call...
You need to be at your computer with your software running and the product documentation at hand.
Adapter Specifications
Intel® 40 Gigabit Network Adapter Specifications
Intel® Ethernet Converged Network Adapter XL710-Q2
l Bus Connector: PCI Express 3.0
l Bus Speed: x8
l Transmission Mode/Connector: QSFP+
l Cabling: 40GBase-SR4, Twinax DAC (7 m max)
l Power Requirements: 6.5 W maximum @ +12 V
l Dimensions (excluding bracket): 5.21 x 2.71 in (13.3 x 6.9 cm)
l Operating Temperature: 32 - 131 deg. F (0 - 55 deg. C)
l Cabling: 40GBase-SR4, Twinax DAC (7 m max)
l Power Requirements: 6.2 W maximum @ +12 V
l Dimensions (excluding bracket): 3.66 x 6.081 in (9.3 x 15.5 cm)
l Operating Temperature: 32 - 140 deg. F (0 - 60 deg. C)
l MTBF: 112 years
l Available Speeds: 40 Gbps
l Duplex Modes: Full only
l Indicator Lights: Two per port: Link and Activity
l Standards Conformance: IEEE 802.3ba, SFF-8436, PCI Express 3.0
Regulatory and Safety
Safety Compliance
l UL 60950 Third Edition - CAN/CSA-C22.2 No.
l Available Speeds: 25 Gbps/10 Gbps/1 Gbps
l Duplex Modes: Full only
l Indicator Lights: Two per port: Link and Activity
l Standards Conformance: IEEE 802.3-2015, SFF-8431, PCI Express 3.0
EMC Compliance
l EN55032: 2015 - Radiated & Conducted Emissions (European Union)
l EN55024: 2010 - Immunity (European Union)
l REACH, WEEE, RoHS Directives (European Union)
l VCCI - Radiated & Conducted Emissions (Japan)
l CNS13438 - Radiated & Conducted Emissions (Taiwan)
l AS/NZS CISPR - Radiated & Conducted Emissions (Australia/New Zealand)
l KN22 - Radiated & Conducted Emissions (Korea)
l RoHS (China)

Intel® 10 Gigabit Network Adapter Specifications
Intel® Ethernet 10G 2P X710 OCP
Intel® Ethernet 10G 2P X710-T2L-t Adapter
l Bus Connector: PCI Express 3.0
l Bus Speed: x8
l Transmission Mode/Connector: RJ45 BASE-T connector
l Cabling: 10GBASE-T: CAT6A (100 m max), CAT6 (55 m max); 1000BASE-T: CAT6A, CAT6, CAT5e (100 m max)
l Power Requirements: 9.6 W maximum @ +12 V
l Dimensions (excluding bracket): 2.70 x 6.74 in (6.86 x 17.12 cm)
l Operating Temperature: 32 - 131 deg. F (0 - 55 deg. C)
Intel® Ethernet Converged Network Adapter X710-T / Intel® Ethernet Converged Network Adapter X710 / Ethernet Server Adapter X710-DA2 for OCP
l Dimensions (excluding bracket): 6.578 x 4.372 in (16.708 x 11.107 cm); 6.578 x 4.372 in (16.708 x 11.107 cm); 2.67 x 4.59 in (6.78 x 11.658 cm), respectively
l Operating Temperature: 32 - 131 deg. F (0 - 55 deg. C); 41 - 131 deg. F (5 - 55 deg. C); 32 - 131 deg. F (0 - 55 deg. C), respectively
Intel® Ethernet 10G 2P X540-t Adapter / Intel® Ethernet 10G 2P X520 Adapter / Intel® Ethernet 10G 2P X550-t Adapter
l MTBF: 108 years; 83.9 years; 127 years, respectively
l Available Speeds: 10 Gbps/1 Gbps (all three)
l Duplex Modes: Full only (all three)
l Indicator Lights: Two per port: Link and Activity (all three)
l Standards Conformance (varies by adapter): IEEE 802.1p, IEEE 802.1Q, IEEE 802.3ac, IEEE 802.3ad, IEEE 802.3an, IEEE 802.3x, ACPI v1.0
IEEE 802.3x, ACPI v1.0, PCI Express 2.0
Regulatory and Safety
Safety Compliance
l UL 60950 Third Edition - CAN/CSA-C22.2 No.
EMC Compliance
l FCC Part 15 - Radiated & Conducted Emissions (USA)
l ICES-003 - Radiated & Conducted Emissions (Canada)
l CISPR 22 - Radiated & Conducted Emissions (International)
l EN55022-1998 - Radiated & Conducted Emissions (European Union)
l EN55024-1998 - Immunity (European Union)
l CE - EMC Directive (89/336/EEC) (European Union)
l VCCI - Radiated & Conducted Emissions (Japan)
l CNS13438 - Radiated & Conducted Emissions (Taiwan)
l AS/NZS3548 - Radiated & Conducted Emissions (Australia/New Zealand)
Intel® Ethernet 10G 4P X710-k bNDC / Intel® Ethernet 10G 4P X710/I350 rNDC / Intel® Ethernet 10G 4P X710 SFP+ rNDC
l Bus Connector: Dell EMC bNDC; PCI Express 3.0; PCI Express 3.0, respectively
l Bus Speed: x8 (all three)
l Transmission Mode/Connector: KX/KR; SFP+; SFP+, respectively
l Cabling: Backplane; Twinax, 10GBase-SR/LR; Twinax, 10GBase-SR/LR, respectively
l Power Requirements: 3.3 W @ 3.3 V (AUX), 12.6 W @ 12 V (AUX); 10.7 W maximum @ +12 V; 9.5 W maximum @ +12 V, respectively
l Dimensions: 3.000 x 2.449 in (7.62 x 6.220 cm); 4.331 x 3.661 in (11.0 x 9.298 cm)
l Cabling: Cat 5e
l Dimensions (excluding bracket): Standard OCP 3.0 Small Form Factor; 2.99 x 4.53 in (7.6 x 11.5 cm)
l Operating Temperature: 23 - 149 deg. F (-5 to 65 deg. C)
l MTBF: TBD
l Available Speeds: 10/100/1000 Mbps auto-negotiate
l Duplex Modes: Full or half at 10/100 Mbps; full only at 1000 Mbps
l Standards Conformance: IEEE 802.3, IEEE 802.3ab, IEEE 802.3u, PCI Express 2.1, OCP NIC 3.0
l Available Speeds: 10/100/1000 auto-negotiate
l Duplex Modes: Full or half at 10/100 Mbps; full only at 1000 Mbps
l Standards Conformance: IEEE 802.1p, IEEE 802.1Q, IEEE 802.3ab, IEEE 802.3ac, IEEE 802.3ad, IEEE 802.3az, IEEE 802.3u, IEEE 802.3x, IEEE 802.3z, ACPI v1.0, PCI Express 2.0
l Indicator Lights: Two per port: Activity and Speed
Safety Compliance
l UL 60950 Third Edition - CAN/CSA-C22.2 No.
l UL 60950 Third Edition - CAN/CSA-C22.2 No.
l CISPR 22 - Radiated & Conducted Emissions (International) l EN55022-1998 - Radiated & Conducted Emissions (European Union) l EN55024 - 1998 - (Immunity) (European Union) l CE - EMC Directive (89/336/EEC) (European Union) l VCCI - Radiated & Conducted Emissions (Japan) l CNS13438 - Radiated & Conducted Emissions (Taiwan) l AS/NZS3548 - Radiated & Conducted Emissions (Australia/New Zealand) l MIC notice 1997-41, EMI and MIC notice 1997-42 - EMS (Korea)
Standards
l IEEE 802.1p: Priority Queuing (traffic prioritizing) and Quality of Service levels
l IEEE 802.1Q: Virtual LAN identification
l IEEE 802.3ab: Gigabit Ethernet over copper
l IEEE 802.3ac: Tagging
l IEEE 802.3ad: SLA (FEC/GEC/Link Aggregation - static mode)
l IEEE 802.3ad: Dynamic mode
l IEEE 802.3ae: 10 Gbps Ethernet
l IEEE 802.3an: 10GBase-T 10 Gbps Ethernet over unshielded twisted pair
l IEEE 802.3ap: Backplane Ethernet
l IEEE 802.3u: Fast Ethernet
l IEEE 802.3x: Flow Control
l IEEE 802.
X-UEFI Attributes
This section contains information about X-UEFI attributes and their expected values.
List of Multi-controller Devices
The adapters listed below contain more than one controller. On these adapters, configuring controller-based settings will not affect all ports. Only ports bound to the same controller will be affected.
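These attributes are typically read and set through Dell EMC management interfaces such as the iDRAC RACADM CLI. The sketch below is hypothetical: the FQDD (NIC.Slot.1-1-1) and the attribute addressing are placeholders, and the exact syntax should be confirmed in the iDRAC RACADM guide for your system.

    # Hypothetical RACADM usage - FQDD and attribute path are placeholders.
    racadm get NIC.Slot.1-1-1                      # list attributes for one port
    racadm set NIC.Slot.1-1-1.VFAllocBasis Device  # X-UEFI name from the tables below
    # Schedule a reboot job so the new setting is applied.
    racadm jobqueue create NIC.Slot.1-1-1 -r pwrcycle -s TIME_NOW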
The following attributes are covered (display name, with the X-UEFI name in parentheses); adapter support, configurability, and allowed values vary by attribute and controller family (I350, X520, X540, X550, X710, XL710, XXV710):
l Partition State Interpretation (PartitionStateInterpretation)
l RDMA Support (RDMASupport)
l LLDP Agent (INTEL_LLDPAgent)
l SR-IOV Support (SRIOVSupport)
l VF Allocation Basis (VFAllocBasis)
l VF Allocation Multiple (VFAllocMult)
l NParEP Mode (NParEP)
l Partition n Maximum TX Bandwidth
l TCP Port (FirstTgtTcpPort). Supported adapters: I350, X520, X540, X550, X710, XL710, XXV710. User configurable: Yes. Values: 1024-65535. I/O Identity Optimization (iDRAC 8/9): Yes. Specifies the TCP port number of the first iSCSI target.
l iSCSI Dual IP Version Support (iSCSIDualIPVersionSupport). Supported adapters: I350, X520, X540, X550, X710, XL710, XXV710. User configurable: No. Values displayed: Available/Unavailable. I/O Identity Optimization (iDRAC 8/9): No. Indicates support for simultaneous IPv4 and IPv6 configurations of the iSCSI initiator and the iSCSI primary and secondary targets.
l Supported adapters: I350, X520, X540, X550, X710, XL710, XXV710. User configurable: Yes. Values: I350: 1-8; X520/X540/X550: 1-64; X710/XL710/XXV710: 0-127. Dependencies for values: VirtualizationMode - SR-IOV. I/O Identity Optimization (iDRAC 8/9): No. Specifies the number of PCI Virtual Functions (VFs) to be advertised in non-NPAR mode.
Legal Disclaimers
Software License Agreement
INTEL SOFTWARE LICENSE AGREEMENT (Final, License)
IMPORTANT - READ BEFORE COPYING, INSTALLING OR USING.
Do not use or load this software and any associated materials (collectively, the "Software") until you have carefully read the following terms and conditions. By loading or using the Software, you agree to the terms of this Agreement. If you do not wish to so agree, do not install or use the Software.
LIMITATION OF LIABILITY. IN NO EVENT SHALL INTEL OR ITS SUPPLIERS BE LIABLE FOR ANY DAMAGES WHATSOEVER (INCLUDING, WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, OR LOST INFORMATION) ARISING OUT OF THE USE OF OR INABILITY TO USE THE SOFTWARE, EVEN IF INTEL HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. SOME JURISDICTIONS PROHIBIT EXCLUSION OR LIMITATION OF LIABILITY FOR IMPLIED WARRANTIES OR CONSEQUENTIAL OR INCIDENTAL DAMAGES, SO THE ABOVE LIMITATION MAY NOT APPLY TO YOU.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
All statements or claims regarding the properties, capabilities, speeds or qualifications of the part referenced in this document are made by the supplier and not by Dell EMC. Dell EMC specifically disclaims knowledge of the accuracy, completeness or substantiation for any such statements. All questions or comments relating to such statements or claims should be directed to the supplier.