User's Guide
Ethernet iSCSI Adapters and Ethernet FCoE Adapters
Marvell BCM57xx and BCM57xxx
Third party information brought to you courtesy of Dell.
BC0054508-00 M
October 16, 2019
Marvell
For more information, visit our website at: http://www.marvell.com

Notice
THIS DOCUMENT AND THE INFORMATION FURNISHED IN THIS DOCUMENT ARE PROVIDED "AS IS" WITHOUT ANY WARRANTY.
Table of Contents (condensed; dot leaders and page numbers were lost in extraction)

Preface: Intended Audience; What Is in This Guide; Related Materials; Documentation Conventions; Downloading Documents
3 Virtual LANs in Windows: VLAN Overview; Adding VLANs to Teams
4 Installing the Hardware: System Requirements; Hardware Requirements
Linux Driver Software: Packaging; Installing Linux Driver Software (Installing the Source RPM Package; Installing the KMP Package); driver parameters (last_active_tcp_port; ooo_enable; bnx2fc Driver Parameter: debug_logging; cnic Driver Parameters; native_eee; num_queues; pri_map; tx_switching; full_promiscous; offload_flags; rx_filters; rxqueue_nr; rxring_bd_nr; txqueue_nr); Driver Defaults (bnx2; bnx2x; qfle3)
FCoE Troubleshooting: Driver Fails Handshake with FCoE Offload Enabled C-NIC Device; No Valid License to Start FCoE; Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits; Session Offload Failures
iSCSI Boot: iSCSI Boot Configuration; Enabling CHAP Authentication; Configuring the DHCP Server to Support iSCSI Boot; DHCP iSCSI Boot Configuration for IPv4; DHCP iSCSI Boot Configuration for IPv6; Configuring the DHCP Server
Teaming: Supported Features by Team Type; Selecting a Team Type; Teaming Mechanisms (Architecture; Outbound Traffic Flow); Application Considerations (Teaming and Clustering; Microsoft Cluster Software; High-Performance Computing Cluster; Oracle)
FCoE Boot: RHEL 7 Installation; Linux: Adding Boot Paths; VMware ESXi FCoE Boot Installation; Configuring FCoE Boot from SAN on VMware; Booting from SAN After Installation
Regulatory: FCC Notice (FCC, Class B; FCC, Class A); VCCI Notice
Troubleshooting: Troubleshooting Checklist; Checking if Current Drivers Are Loaded (Windows; Linux)

List of Figures (condensed): 3-1 Example of Servers Supporting Multiple VLANs with Tagging; 6-1 CCM MBA Configuration Menu; 6-2 System Setup, Device Settings; 6-3 Device Settings; 13-9 FCoE Boot Configuration Menu, FCoE General Parameters; 13-10 FCoE Boot; 13-11 Starting SLES Installation

List of Tables (condensed): 1-1 Network Link and Activity Indicated by the RJ45 Port LEDs; 1-2 Network Link and Activity Indicated by the Port LED; 16-13 BCM5709 and BCM5716 Environmental Specifications; 16-14 BCM957810A1006G Environmental Specifications; 16-15 BCM957810A1008G Environmental Specifications; 16-16 BCM957840A4007G Environmental Specifications
Preface This section provides information about this guide’s intended audience, content, document conventions, and laser safety information. NOTE Marvell® now supports QConvergeConsole® (QCC) GUI as the only GUI management tool across all Marvell adapters. QLogic Control Suite™ (QCS) GUI is no longer supported for the Marvell adapters based on 57xx/57xxx controllers, and has been replaced by the QCC GUI management tool. The QCC GUI provides single-pane-of-glass GUI management for all Marvell adapters.
Related Materials
For additional information, refer to the Migration Guide: QLogic®/Broadcom NetXtreme I/II Adapters, document number BC0054606-00. The migration guide presents an overview of Marvell's acquisition of specific Broadcom® Ethernet assets and its end-user impact, and was written in cooperation between Broadcom and Marvell.

Documentation Conventions
This guide uses the following documentation conventions:
NOTE provides additional information.
CAUTION indicates the presence of a hazard that could cause damage to equipment or loss of data.
Key names and keystrokes are indicated in UPPERCASE: Press the CTRL+P keys. Press the UP ARROW key.
Text in italics indicates terms, emphasis, variables, or document titles. For example: What are shortcut keys? To enter the date, type mm/dd/yyyy (where mm is the month, dd is the day, and yyyy is the year).
7. Click Go.
8. In the Documentation section, click the corresponding document title.

Laser Safety Information
This product may use Class 1 laser optical transceivers to communicate over the fiber optic conductors. The U.S. Department of Health and Human Services (DHHS) does not consider Class 1 lasers to be hazardous.
1 Functionality and Features This chapter covers the following for the adapters: Functional Description “Features” on page 2 “Supported Operating Environments” on page 6 “Network Link and Activity Indication” on page 6 Functional Description The Marvell BCM57xx and BCM57xxx adapter is a new class of gigabit Ethernet (GbE) and 10GbE converged network interface controller (C-NIC) that can simultaneously perform accelerated data networking and storage networking on a standard Ethernet network.
The Marvell BCM57xx and BCM57xxx adapters include a 10Mbps, 100Mbps, 1000Mbps, or 10Gbps Ethernet MAC with both half-duplex and full-duplex capability and a 10Mbps, 100Mbps, 1000Mbps, or 10Gbps physical layer (PHY). The transceiver is fully compatible with the IEEE 802.3 standard for auto-negotiation of speed.
- Adaptive interrupts (see "Adaptive Interrupt Frequency" on page 5)
- Receive side scaling (RSS)
- Manageability:
  - QLogic Control Suite (QCS) CLI diagnostic and configuration software (see "QLogic Control Suite CLI" on page 6)
  - QConvergeConsole (QCC) GUI diagnostics and configuration software for Linux® and Windows®
  - QCC PowerKit diagnostics and configuration software extensions to Microsoft® PowerShell® for Linux, VMware®, and Windows
  - QCC vSphere®
- High-speed on-chip reduced instruction set computer (RISC) processor (see "ASIC with Embedded RISC Processor" on page 6)
- Integrated 96KB frame buffer memory
- Quality of service (QoS)
- Serial gigabit media independent interface (SGMII), gigabit media independent interface (GMII), and media independent interface (MII) management interface
- 256 unique MAC unicast addresses
- Support for multicast addresses through a 128-bit hashing hardware function
FCoE
FCoE allows Fibre Channel protocol to be transferred over Ethernet. FCoE preserves existing Fibre Channel infrastructure and capital investments. The following FCoE features are supported:
- Full stateful hardware FCoE offload
- Receiver classification of FCoE and FCoE initialization protocol (FIP) frames. FIP is used to establish and maintain connections.
ASIC with Embedded RISC Processor
The core control for Marvell BCM57xx and BCM57xxx adapters resides in a tightly integrated, high-performance ASIC. The ASIC includes a RISC processor, which provides the flexibility to add new features to the adapter and adapt it to future network requirements through software downloads.
For fiber optic Ethernet connections and SFP+, the state of the network link and activity is indicated by a single LED located adjacent to the port connector, as described in Table 1-2.

Table 1-2. Network Link and Activity Indicated by the Port LED
2 Configuring Teaming in Windows Server Teaming configuration in a Microsoft Windows Server® system includes an overview of the QLogic Advanced Server Program (QLASP), load balancing, and fault tolerance. Windows Server 2016 and later do not support Marvell’s QLASP teaming driver. QLASP Overview “Load Balancing and Fault Tolerance” on page 9 NOTE This chapter describes teaming for adapters in Windows Server systems.
For more information on network adapter teaming concepts, see Chapter 11, Marvell Teaming Services.

NOTE
Windows Server 2012 and later provide built-in teaming support, called NIC Teaming. Marvell recommends that users do not enable teams through NIC Teaming and QLASP at the same time on the same adapters. Windows Server 2016 does not support Marvell's QLASP teaming driver.

Smart Load Balancing and Failover
Smart Load Balancing and Failover is the Broadcom® implementation of switch-independent NIC teaming load balancing based on IP flow. This feature supports balancing IP traffic across multiple adapters (team members) in a bidirectional manner. In this type of team, all adapters in the team have separate MAC addresses.

Generic Trunking (FEC/GEC)/802.3ad-Draft Static
The Generic Trunking (FEC/GEC)/802.3ad-Draft Static type of team is very similar to the Link Aggregation (802.3ad) type of team in that all adapters in the team are configured to receive packets for the same MAC address. The Generic Trunking (FEC/GEC)/802.3ad-Draft Static type of team, however, does not provide LACP or marker protocol support.

Limitations of Smart Load Balancing and Failover and SLB (Auto-Fallback Disable) Types of Teams
Smart Load Balancing (SLB) is a protocol-specific scheme. The level of support for IP is listed in Table 2-1.

Table 2-1.
3 Virtual LANs in Windows This chapter provides information about VLANs in Windows for teaming. VLAN Overview “Adding VLANs to Teams” on page 16 VLAN Overview Virtual LANs (VLANs) allow you to split your physical LAN into logical parts, to create logical segmentation of work groups, and to enforce security policies for each logical segment.
Although VLANs are commonly used to create individual broadcast domains and separate IP subnets, it is sometimes useful for a server to have a simultaneous presence on more than one VLAN. Marvell adapters support multiple VLANs on a per-port or per-team basis, allowing very flexible network configurations.

Figure 3-1. Example of Servers Supporting Multiple VLANs with Tagging

Figure 3-1 shows an example network that uses VLANs.

Table 3-1. Example VLAN Network Topology (Continued)

Main Server: A high-use server that needs to be accessed from all VLANs and IP subnets. The Main Server has a Marvell adapter installed. All three IP subnets are accessed through the single physical adapter interface. The server is attached to one of the switch ports, which is configured for VLANs #1, #2, and #3. Both the adapter and the connected switch port have tagging turned on.

Adding VLANs to Teams
Each Marvell QLASP adapter team supports up to 64 VLANs (63 tagged and 1 untagged). Note that only Marvell adapters and Alteon® AceNIC adapters can be part of a team with VLANs. With multiple VLANs on an adapter, a server with a single adapter can have a logical presence on multiple IP subnets. With multiple VLANs in a team, a server can have a logical presence on multiple IP subnets and benefit from load balancing and failover.
4 Installing the Hardware This chapter applies to Marvell BCM57xx and BCM57xxx add-in network interface cards. Hardware installation covers the following: System Requirements “Safety Precautions” on page 19 “Preinstallation Checklist” on page 19 “Installation of the Add-In NIC” on page 20 NOTE Service Personnel: This product is intended only for installation in a Restricted Access Location (RAL).
Operating System Requirements

NOTE
Because the Dell Update Packages Version xx.xx.xxx User's Guide is not updated in the same cycle as this Ethernet adapter user's guide, consider the operating systems listed in this section as the most current.

This section describes the requirements for each supported OS.

General
The following host interface is required: PCI Express v1.

Safety Precautions

! WARNING
The adapter is being installed in a system that operates with voltages that can be lethal. Before you open the case of your system, observe the following precautions to protect yourself and to prevent damage to the system components:
- Remove any metallic objects or jewelry from your hands and wrists.
- Make sure to use only insulated or nonconducting tools.

Installation of the Add-In NIC
The following instructions apply to installing the Marvell BCM57xx and BCM57xxx adapters (add-in NIC) in most systems. Refer to the manuals that were supplied with your system for details about performing these tasks on your specific system.

Installing the Add-In NIC
1. Review Safety Precautions and Preinstallation Checklist.

Copper Wire
To connect a copper wire:
1. Select an appropriate cable. Table 4-1 lists the copper cable requirements for connecting to 100 and 1000BASE-T and 10GBASE-T ports.

Table 4-1.

Fiber Optic
To connect a fiber optic cable:
1. Select an appropriate cable. Table 4-2 lists the fiber optic cable requirements for connecting to 1000 and 2500BASE-X ports. See also the tables in "Supported SFP+ Modules Per NIC" on page 260.

Table 4-2.
5 Manageability Information about manageability includes: CIM “Host Bus Adapter API” on page 24 CIM The common information model (CIM) is an industry standard defined by the Distributed Management Task Force (DMTF). Microsoft implements CIM on Windows Server platforms. Marvell supports CIM on Windows Server and Linux platforms. The Marvell implementation of CIM provides various classes to provide information to users through CIM client applications.
where TargetInstance ISA "QLGC_ExtraCapacityGroup"
SELECT * FROM __InstanceCreationEvent where TargetInstance ISA "QLGC_NetworkAdapter"
SELECT * FROM __InstanceDeletionEvent where TargetInstance ISA "QLGC_NetworkAdapter"
SELECT * FROM __InstanceCreationEvent where TargetInstance ISA "QLGC_ActsAsSpare"
SELECT * FROM __InstanceDeletionEvent where TargetInstance ISA "QLGC_ActsAsSpare"

For detailed information about these events, see the CIM documentation: http://www.dmtf.
6 Boot Agent Driver Software
This chapter covers how to set up MBA in both client and server environments:
Overview
"Setting Up MBA in a Client Environment" on page 26
"Setting Up MBA in a Linux Server Environment" on page 32

Overview
Marvell BCM57xx and BCM57xxx adapters support preboot execution environment (PXE), remote program load (RPL), iSCSI, and bootstrap protocol (BOOTP).
Setting Up MBA in a Client Environment
Setting up MBA in a client environment involves the following steps:
1. Configuring the MBA Driver.
2. Setting Up the BIOS for the boot order.

Configuring the MBA Driver
This section pertains to configuring the MBA driver (located in the adapter firmware) on add-in NIC models of the Marvell network adapter.

Using Comprehensive Configuration Management
To use CCM to configure the MBA driver:
1. Restart the system.
2. Press the CTRL+S keys within four seconds after you are prompted to do so. A list of adapters appears.
   a. Select the adapter to configure, and then press the ENTER key. The Main Menu appears.
   b. Select MBA Configuration to view the MBA Configuration Menu, as shown in Figure 6-1.

Figure 6-1. CCM MBA Configuration Menu

3. To access the Boot Protocol item, press the UP ARROW and DOWN ARROW keys. If other boot protocols besides Preboot Execution Environment (PXE) are available, press RIGHT ARROW or LEFT ARROW to select the boot protocol of choice: FCoE or iSCSI.

NOTE
For iSCSI and FCoE boot-capable LOMs, set the boot protocol through the BIOS. See your system documentation for more information.

3. Select the device on which you want to change MBA settings (see Figure 6-3).

Figure 6-3. Device Settings

4. On the Main Configuration Page, select NIC Configuration (see Figure 6-4).

Figure 6-4.

5. In the NIC Configuration page (see Figure 6-5), use the Legacy Boot Protocol drop-down menu to select the boot protocol of choice, if boot protocols other than Preboot Execution Environment (PXE) are available. If available, other boot protocols include iSCSI and FCoE. The BCM57800's fixed speed, 1GbE ports support only PXE and iSCSI remote boot.

Figure 6-5.

Setting Up the BIOS
To boot from the network with the MBA, make the MBA-enabled adapter the first bootable device under the BIOS. This procedure depends on the system BIOS implementation. Refer to the user manual for the system for instructions.

Setting Up MBA in a Linux Server Environment
The Red Hat Enterprise Linux distribution has PXE Server support.
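PXE clients locate their boot image through DHCP. The excerpt below is an illustrative ISC dhcpd.conf fragment, not taken from this guide; the subnet, addresses, and boot file name are placeholders that you would replace with values for your own network:

```
# Illustrative ISC dhcpd.conf excerpt for a PXE subnet (all values are placeholders)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;      # TFTP server that holds the boot image
  filename "pxelinux.0";         # boot loader file served over TFTP
}
```

With a fragment like this in place, an MBA-enabled adapter set to the PXE boot protocol requests an address, then fetches the named boot loader from the TFTP server.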
7 Linux Driver Software Information about the Linux driver software includes: Introduction “Limitations” on page 34 “Packaging” on page 35 “Installing Linux Driver Software” on page 36 “Unloading or Removing the Linux Driver” on page 42 “Patching PCI Files (Optional)” on page 43 “Network Installations” on page 44 “Setting Values for Optional Properties” on page 44 “Driver Defaults” on page 51 “Driver Messages” on page 52 “Teaming with Channel Bonding” on page 57
Table 7-1. Marvell BCM57xx and BCM57xxx Linux Drivers (Continued)

bnx2x: Linux driver for the BCM57xxx 1Gb/10Gb network adapters. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the Linux host networking stack. The driver also receives and processes device interrupts, both on behalf of itself (for Layer 2 networking) and on behalf of the bnx2fc (FCoE) and C-NIC drivers.
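As a quick sanity check (not a procedure from this guide), you can see whether any of the drivers in Table 7-1 are currently loaded by inspecting the kernel's module list:

```shell
# List any loaded bnx2*/cnic modules; /proc/modules may be absent
# (for example, inside some containers), hence the fallback message.
loaded=$(grep -E '^(bnx2|bnx2x|bnx2i|bnx2fc|cnic) ' /proc/modules 2>/dev/null)
if [ -n "$loaded" ]; then
    echo "$loaded"
else
    echo "no bnx2/cnic modules loaded"
fi
```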
RHEL 5.4 and later have special backported code to support the C-NIC driver; these distributions are supported.

bnx2x Driver Limitations
The current version of the driver has been tested on 2.6.x kernels, starting from the 2.6.9 kernel. The bnx2x driver may not compile on kernels older than 2.6.9. Testing is concentrated on i386 and x86_64 architectures. Only limited testing has been done on some other architectures.

Source Packages
Identical source files to build the driver are included in both RPM and TAR source packages. The supplemental TAR file contains additional utilities such as patches and driver diskette images for network installation. The following is a list of included files:
- netxtreme2-version.src.rpm: RPM package with BCM57xx and BCM57xxx bnx2, bnx2x, cnic, bnx2fc, bnx2i, libfc, and libfcoe driver source.
- netxtreme2-version.tar.

2. Change the directory to the RPM path and build the binary RPM for your kernel.

NOTE
For RHEL 8, install the kernel-rpm-macros and kernel-abi-whitelists packages before building the binary RPM.

For RHEL:
cd ~/rpmbuild
rpmbuild -bb SPECS/netxtreme2.spec

For SLES:
cd /usr/src/packages
rpmbuild -bb SPECS/netxtreme2.spec

3. Install the newly compiled RPM:
rpm -ivh RPMS//netxtreme2-..

7. For FCoE offload, after rebooting, create configuration files for all FCoE ethX interfaces:
cd /etc/fcoe
cp cfg-ethx cfg-

NOTE
Note that your distribution might have a different naming scheme for Ethernet devices (that is, pXpX or emX instead of ethX).

8. For FCoE offload or iSCSI-offload-TLV, modify /etc/fcoe/cfg- by changing DCB_REQUIRED=yes to DCB_REQUIRED=no.

9. Turn on all ethX interfaces.

};
};

13. For FCoE offload and iSCSI-offload-TLV, restart the lldpad service to apply the new settings:
service lldpad restart

14. For FCoE offload, restart the FCoE service to apply the new settings:
service fcoe restart
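The DCB_REQUIRED edit in step 8 can be scripted with sed. The snippet below demonstrates the substitution on a throwaway temporary file; on a real system you would run the same sed expression against the appropriate /etc/fcoe/cfg- file (the guide leaves the interface suffix elided, so none is assumed here):

```shell
# Demonstrate the DCB_REQUIRED=yes -> no substitution on a temporary copy.
cfg=$(mktemp)
printf 'DCB_REQUIRED=yes\n' > "$cfg"
sed -i 's/^DCB_REQUIRED=yes$/DCB_REQUIRED=no/' "$cfg"
result=$(cat "$cfg")
echo "$result"    # prints DCB_REQUIRED=no
rm -f "$cfg"
```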
7–Linux Driver Software Installing Linux Driver Software 3. Test the driver by loading it (first unload the existing driver, if necessary): rmmod bnx2x (or bnx2fc or bnx2i) insmod bnx2x/src/bnx2x.ko (or bnx2fc/src/bnx2fc.ko, or bnx2i/src/bnx2i.ko) 4. For iSCSI offload and FCoE offload, load the C-NIC driver (if applicable): insmod cnic.ko 5. Install the driver and man page: make install NOTE See the RPM instructions in the preceding for the location of the installed driver. 6.
7–Linux Driver Software Installing Linux Driver Software Verify that your network adapter supports iSCSI by checking the message log. If the message bnx2i: dev eth0 does not support iSCSI appears in the message log after loading the bnx2i driver, iSCSI is not supported. This message may not appear until the interface is opened, as with: ifconfig eth0 up 4. To use iSCSI, refer to “Load and Run Necessary iSCSI Software Components” on page 42 to load the necessary software components.
7–Linux Driver Software Load and Run Necessary iSCSI Software Components Load and Run Necessary iSCSI Software Components The Marvell iSCSI Offload software suite consists of three kernel modules and a user daemon. Required software components can be loaded either manually or through system services. 1. Unload the existing driver, if necessary. To do so manually, issue the following command: rmmod bnx2i 2. Load the iSCSI driver. To do so manually, issue one of the following commands: insmod bnx2i.
If the driver was installed using RPM, issue the following command to remove it:
rpm -e netxtreme2

Removing the Driver from a TAR Installation

NOTE
The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i drivers.

If the driver was installed using make install from the TAR file, manually delete the bnx2x.ko driver file from the operating system.
Next, back up the old files and rename the new files for use:
cp /usr/share/hwdata/pci.ids /usr/share/hwdata/old.pci.ids
cp /usr/share/hwdata/pci.ids.new /usr/share/hwdata/pci.ids
cp /usr/share/hwdata/pcitable /usr/share/hwdata/old.pcitable
cp /usr/share/hwdata/pcitable.new /usr/share/hwdata/pcitable
bnx2x Driver Parameters
Parameters for the bnx2x driver are described in the following sections.

int_mode
Use the optional parameter int_mode to force using an interrupt mode other than MSI-X. By default, the driver tries to enable MSI-X if it is supported by the kernel. If MSI-X is not attainable, the driver tries to enable MSI if it is supported by the kernel. If MSI is not attainable, the driver uses the legacy INTx mode.
or
modprobe bnx2x dropless_fc=1

autogreen
The autogreen parameter forces the specific AutoGrEEEn behavior. AutoGrEEEn is a proprietary, pre-IEEE-standard Energy Efficient Ethernet (EEE) mode supported by some 1000BASE-T and 10GBASE-T RJ45-interfaced switches. By default, the driver uses the NVRAM configuration settings per port.
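Rather than passing these options on every insmod or modprobe invocation, they can be made persistent in a modprobe options file. The file name and option values below are illustrative assumptions only, not recommendations from this guide:

```
# /etc/modprobe.d/bnx2x.conf (example file name; example values)
options bnx2x dropless_fc=1 int_mode=1
```

Options set this way take effect the next time the bnx2x module is loaded.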
tx_switching
The tx_switching parameter sets the L2 Ethernet send direction to test each transmitted packet. If a packet is intended for the transmitting NIC port, it is hairpin-looped back by the adapter. This parameter is relevant only in multifunction (NPAR) mode, especially in virtualized environments.
bnx2i Driver Parameters
Optional parameters en_tcp_dack, error_mask1, and error_mask2 can be supplied as command line arguments to the insmod or modprobe command for bnx2i.

error_mask1 and error_mask2
Use the error_mask (Configure firmware iSCSI error mask #) parameters to configure a specific iSCSI protocol violation to be treated either as a warning or a fatal error.
sq_size
Use the sq_size parameter to choose the send queue (SQ) size for offloaded connections; the SQ size determines the maximum number of SCSI commands that can be queued. SQ size also has a bearing on the quantity of connections that can be offloaded; as QP size increases, the quantity of connections supported decreases. With the default values, the BCM5708 adapters can offload 28 connections.
ooo_enable
The ooo_enable (enable TCP out-of-order) parameter enables and disables the TCP out-of-order RX handling feature on offloaded iSCSI connections. Default: the TCP out-of-order feature is ENABLED. For example:
insmod bnx2i.ko ooo_enable=1
or
modprobe bnx2i ooo_enable=1

bnx2fc Driver Parameter
You can supply the optional parameter debug_logging as a command line argument to the insmod or modprobe command for bnx2fc.
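The bnx2i and bnx2fc parameters can be persisted the same way as the bnx2x options, through a modprobe options file. The file name and the values shown are illustrative assumptions, not settings recommended by this guide:

```
# /etc/modprobe.d/bnx2-offload.conf (example file name; example values)
options bnx2i ooo_enable=1
options bnx2fc debug_logging=0xff
```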
7–Linux Driver Software Driver Defaults cnic_dump_kwqe_enable The cnic_dump_kwqe_enable parameter enables and disables single work-queue element (kwqe) message logging. By default, this parameter is set to 1 (disabled).
7–Linux Driver Software Driver Messages bnx2x Driver Defaults Speed: Autonegotiation with all speeds advertised Flow control: Autonegotiation with RX and TX advertised MTU: 1500 (range is 46–9600) RX Ring Size: 4078 (range is 0–4078) TX Ring Size: 4078 (range is (MAX_SKB_FRAGS+4)–4078). MAX_SKB_FRAGS varies on different kernels and different architectures. On a 2.6 kernel for x86, MAX_SKB_FRAGS is 18.
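The TX ring lower bound follows directly from the formula above. A minimal sketch, assuming the MAX_SKB_FRAGS value of 18 quoted for a 2.6 x86 kernel (the validator function is illustrative, not part of the driver):

```python
# Validate a requested bnx2x TX ring size against the documented range:
# (MAX_SKB_FRAGS + 4) through 4078. MAX_SKB_FRAGS is kernel- and
# architecture-dependent; 18 is the value the guide quotes for 2.6 x86.

MAX_SKB_FRAGS = 18
TX_RING_MAX = 4078

def tx_ring_size_valid(size: int) -> bool:
    """True if size falls within the documented bnx2x TX ring range."""
    return (MAX_SKB_FRAGS + 4) <= size <= TX_RING_MAX

print(MAX_SKB_FRAGS + 4)          # minimum TX ring size: 22
print(tx_ring_size_valid(4078))   # True (the default)
print(tx_ring_size_valid(16))     # False (below the minimum)
```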
7–Linux Driver Software Driver Messages NIC Detected eth#: QLogic BCM57xx and BCM57xxx xGb (B1) PCI-E x8 found at mem f6000000, IRQ 16, node addr 0010180476ae cnic: Added CNIC device: eth0 Link Up and Speed Indication bnx2x: eth# NIC Link is Up, 10000 Mbps full duplex Link Down Indication bnx2x: eth# NIC Link is Down MSI-X Enabled Successfully bnx2x: eth0: using MSI-X bnx2i Driver Messages The bnx2i driver messages include the following.
7–Linux Driver Software Driver Messages Exceeds Maximum Allowed iSCSI Connection Offload Limit bnx2i: alloc_ep: unable to allocate iscsi cid bnx2i: unable to allocate iSCSI context resources Network Route to Target Node and Transport Name Binding Are Two Different Devices bnx2i: conn bind, ep=0x...
7–Linux Driver Software Driver Messages bnx2i: iscsi_error - F-bit not set bnx2i: iscsi_error - invalid TTT bnx2i: iscsi_error - invalid DataSN bnx2i: iscsi_error - burst len violation bnx2i: iscsi_error - buf offset violation bnx2i: iscsi_error - invalid LUN field bnx2i: iscsi_error - invalid R2TSN field bnx2i: iscsi_error - invalid cmd len1 bnx2i: iscsi_error - invalid cmd len2 bnx2i: iscsi_error - pend r2t exceeds MaxOutstandingR2T value bnx2i: iscsi_error - TTT is rsvd bnx2i: iscsi_error - MBL violatio
7–Linux Driver Software Driver Messages [20]: 2a 0 0 2 ffffffc8 14 0 0 [28]: 40 0 0 0 0 0 0 0 Open-iSCSI Daemon Handing Over Session to Driver bnx2i: conn update - MBL 0x800 FBL 0x800 MRDSL_I 0x800 MRDSL_T 0x2000 bnx2fc Driver Messages The bnx2fc driver messages include the following. BNX2FC Driver Signon QLogic FCoE Driver bnx2fc v0.8.7 (Mar 25, 2011) Driver Completes Handshake with FCoE Offload Enabled C-NIC Device bnx2fc [04:00.
7–Linux Driver Software Teaming with Channel Bonding Session Upload Failures bnx2fc: ERROR!! destroy timed out bnx2fc: Disable request timed out.
7–Linux Driver Software Linux iSCSI Offload Linux iSCSI Offload iSCSI offload information for Linux includes the following: Open iSCSI User Applications User Application iscsiuio Bind iSCSI Target to Marvell iSCSI Transport Name VLAN Configuration for iSCSI Offload (Linux) Making Connections to iSCSI Targets Maximum Offload iSCSI Connections Linux iSCSI Offload FAQ Open iSCSI User Applications Install and run the inbox Open-iSCSI initiator programs from the DVD.
7–Linux Driver Software Linux iSCSI Offload Bind iSCSI Target to Marvell iSCSI Transport Name By default, the Open-iSCSI daemon connects to discovered targets using the software initiator (transport name = 'tcp'). Users who want to offload the iSCSI connection onto the C-NIC device should explicitly change the transport binding of the iSCSI iface. Perform the binding change using the iscsiadm CLI utility as follows: iscsiadm -m iface -I -n iface.
7–Linux Driver Software Linux iSCSI Offload iface.port = 0 #END Record NOTE Although not strictly required, Marvell recommends configuring the same VLAN ID in the iface.iface_num field for iface file identification purposes. Making Connections to iSCSI Targets Refer to the Open-iSCSI documentation for a comprehensive list of iscsiadm commands. The following is a sample list of commands to discover targets and to create iSCSI connections to a target.
7–Linux Driver Software Linux iSCSI Offload Linux iSCSI Offload FAQ Not all Marvell BCM57xx and BCM57xxx adapters support iSCSI offload. The iSCSI session will not recover after a hot remove and hot plug. For Microsoft Multipath I/O (MPIO) to work properly, you must enable iSCSI noopout on each iSCSI session. For procedures on setting up noop_out_interval and noop_out_timeout values, refer to Open-iSCSI documentation.
8 VMware Driver Software This chapter covers the following for the VMware driver software: Introduction “Packaging” on page 63 “Download, Install, and Update Drivers” on page 64 “FCoE Support” on page 87 “iSCSI Support” on page 89 NOTE Information in this chapter applies primarily to the currently supported VMware versions: ESXi 6.5 and ESXi 6.7. ESXi 6.7 uses native drivers for all protocols.
8–VMware Driver Software Packaging Table 8-1. Marvell BCM57xx and BCM57xxx VMware Drivers (Continued) VMware Driver Description bnx2x VMware legacy driver for the BCM57xxx 1/10Gb network adapters. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the VMware host networking stack.
8–VMware Driver Software Download, Install, and Update Drivers The VMware driver is released in the packaging formats shown in Table 8-2. Table 8-2. VMware Driver Packaging Format Drivers Compressed ZIP QLG-bnx-6.0-offline_bundle-.zip (legacy ESXi 6.5) Compressed ZIP QLG-qcnic-6.5-offline_bundle-.zip (native ESXi 6.5) Compressed ZIP QLG-qcnic-6.7-offline_bundle-.zip (native ESXi 6.
8–VMware Driver Software Driver Parameters Marvell recommends setting the disable_msi parameter to 1 to always disable MSI/MSI-X on all QLogic adapters in the system. Issue one of the following commands: insmod bnx2.ko disable_msi=1 modprobe bnx2 disable_msi=1 This parameter can also be set in the modprobe.conf file. See the man page for more information. bnx2x Driver Parameters You can supply several optional parameters as a command line argument to the vmkload_mod command.
8–VMware Driver Software Driver Parameters dropless_fc The dropless_fc parameter is set to 1 (by default) to enable a complementary flow control mechanism on BCM57xxx adapters. The normal flow control mechanism is to send pause frames when the on-chip buffer (BRB) is reaching a specific level of occupancy, which is a performance-targeted flow control mechanism.
8–VMware Driver Software Driver Parameters pri_map On earlier versions of Linux that do not support tc-mqprio, use the optional parameter pri_map to map the VLAN PRI value or the IP DSCP value to a different or the same class of service (CoS) in the hardware. This 32-bit parameter is evaluated by the driver as eight values of 4 bits each. Each nibble sets the required hardware queue number for that priority.
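The nibble packing described above can be sketched as follows. The helper is illustrative (not driver code) and assumes priority 0 occupies the least significant nibble of the 32-bit value:

```python
# Pack eight per-priority CoS/queue values (4 bits each) into a 32-bit
# pri_map, as the driver interprets it: eight values of 4 bits each,
# where each nibble holds the hardware queue number for that priority.
# Assumption: priority 0 maps to the least significant nibble.

def pack_pri_map(cos_per_priority):
    """cos_per_priority: sequence of eight CoS values, one per VLAN PRI 0-7."""
    if len(cos_per_priority) != 8:
        raise ValueError("expected one CoS value per priority 0-7")
    pri_map = 0
    for priority, cos in enumerate(cos_per_priority):
        if not 0 <= cos <= 0xF:
            raise ValueError("each nibble holds a 4-bit value")
        pri_map |= cos << (4 * priority)
    return pri_map

# Map priority 3 to hardware queue 1, all other priorities to queue 0:
print(hex(pack_pri_map([0, 0, 0, 1, 0, 0, 0, 0])))  # 0x1000
```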
8–VMware Driver Software Driver Parameters use_random_vf_mac When this parameter is enabled (set to 1), all created VFs will have a random forced MAC. By default, this parameter is disabled (set to 0). debug The debug parameter sets the default message level (msglevel) on all adapters in the system at one time. To set the message level for a specific adapter, issue the ethtool -s command. RSS Use the optional RSS parameter to specify the quantity of receive side scaling queues. For VMware ESXi 6.
8–VMware Driver Software Driver Parameters enable_live_grcdump Use the enable_live_grcdump parameter to indicate which firmware dump is collected for troubleshooting. Valid values are:
0x0: Disable live global register controller (GRC) dump
0x1: Enable parity/live GRC dump (default)
0x2: Enable transmit timeout GRC dump
0x4: Enable statistics timeout GRC dump
The default setting is appropriate for most situations. Do not change the default value unless requested by the support team.
8–VMware Driver Software Driver Parameters bnx2i Driver Parameters Optional parameters en_tcp_dack, error_mask1, and error_mask2 can be supplied as command line arguments to the insmod or modprobe command for bnx2i. error_mask1 and error_mask2 Use the error_mask (Configure firmware iSCSI error mask #) parameters to configure a specific iSCSI protocol violation to be treated either as a warning or a fatal error. All fatal iSCSI protocol violations will result in session recovery (ERL 0).
8–VMware Driver Software Driver Parameters sq_size Use the sq_size parameter to set the send queue size for offloaded connections. The SQ size determines the maximum number of SCSI commands that can be queued. SQ size also affects the number of connections that can be offloaded; as the QP size increases, the number of connections supported decreases. With the default values, BCM5708 adapters can offload 28 connections.
8–VMware Driver Software Driver Parameters ooo_enable The ooo_enable (enable TCP out-of-order) parameter enables and disables TCP out-of-order RX handling on offloaded iSCSI connections. By default, the TCP out-of-order feature is enabled. For example: insmod bnx2i.ko ooo_enable=1 or modprobe bnx2i ooo_enable=1 bnx2fc Driver Parameter You can supply the optional parameter debug_logging as a command line argument to the insmod or modprobe command for bnx2fc.
8–VMware Driver Software Driver Parameters cnic_dump_kwqe_en The cnic_dump_kwqe_en parameter enables and disables single work-queue element (kwqe) message logging. By default, this parameter is set to 1 (disabled).
8–VMware Driver Software Driver Parameters 0x00100000 /* debug vlan */ 0x00200000 /* state machine */ 0x00400000 /* nvm access */ 0x00800000 /* SRIOV */ 0x01000000 /* mgmt interface */ 0x02000000 /* CNIC */ 0x04000000 /* DCB */ 0xFFFFFFFF /* all enabled */ enable_fwdump The enable_fwdump parameter enables and disables the firmware dump file. Set to 1 to enable the firmware dump file. Set to 0 (default) to disable the firmware dump file.
8–VMware Driver Software Driver Parameters offload_flags This parameter specifies the offload flags:
1: CSO
2: TSO
4: VXLAN offload
8: Geneve offload
15: Default. All tunneled offloads (CSO, TSO, VXLAN, Geneve) are enabled.
rx_filters The rx_filters parameter defines the number of receive filters per NetQueue. Set to 1 to use the default number of receive filters based on availability. Set to 0 to disable use of multiple receive filters.
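The offload_flags values listed above are bit flags, so the default value of 15 is simply the bitwise OR of all four individual offloads. A short illustration:

```python
# offload_flags bit values as documented: CSO=1, TSO=2, VXLAN=4, Geneve=8.
# The default (15) enables all four; any subset can be ORed together.

CSO, TSO, VXLAN, GENEVE = 0x1, 0x2, 0x4, 0x8

default_flags = CSO | TSO | VXLAN | GENEVE
print(default_flags)                # 15, matching the documented default
print(bool(default_flags & VXLAN))  # True: VXLAN offload is enabled

# Enabling only checksum and TCP segmentation offload:
print(CSO | TSO)                    # 3
```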
8–VMware Driver Software Driver Parameters DRSS The DRSS parameter sets the number of RSS queues associated with the default queue. The minimum number of RSS queues is 2; the maximum number is 4. To disable this parameter, set it to 0 (default). This parameter is used for VXLAN gateways, where multiple unknown MAC addresses may be received by the default queue. rss_engine_nr The rss_engine_nr parameter sets the number of RSS engines. Valid values are 0 (Disabled) or 1–4 (fixed number of RSS engines).
8–VMware Driver Software Driver Parameters qfle3i Driver Parameters For a list of qfle3i driver parameters, issue one of the following commands: # esxcli system module parameters list -m qfle3i # esxcfg-module -i qfle3i To change a parameter’s value, issue one of the following commands: # esxcli system module parameters set -m qfle3i -p = # esxcfg-module -s = qfle3i qfle3i_chip_cmd_max The qfle3i_chip_cmd_max parameter sets the maximum I/Os queued to the BCM57xx and BCM57xxx
8–VMware Driver Software Driver Parameters Certain iSCSI targets do not handle ACK piggybacking. If this parameter is enabled with these types of targets, the host cannot log in to the target. If this occurs, Marvell recommends disabling this parameter. error_mask1, error_mask2 Use the error_mask (Configure firmware iSCSI error mask #) parameters to configure a specific iSCSI protocol violation to be treated either as a warning or a fatal error.
8–VMware Driver Software Driver Parameters The following debug logs can be masked (values in hex):
DEFAULT_LEVEL: 0x001
Initialization: 0x002
Conn Setup: 0x004
TMF: 0x008
iSCSI NOP: 0x010
CNIC IF: 0x020
ITT CLEANUP: 0x040
CONN EVT: 0x080
SESS Recovery: 0x100
Internal: 0x200
IO Path: 0x400
APP INTERFACE: 0x800
rq_size Use the rq_size parameter to choose the size of the asynchronous buffer queue per offloaded connection.
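The debug-log values above are hex bit flags, so several categories can be enabled at once by ORing their values. A short illustration (the dictionary is only a convenience for the example):

```python
# qfle3i debug-log mask bits as documented in the guide.
DEBUG_FLAGS = {
    "DEFAULT_LEVEL": 0x001, "Initialization": 0x002, "Conn Setup": 0x004,
    "TMF": 0x008, "iSCSI NOP": 0x010, "CNIC IF": 0x020, "ITT CLEANUP": 0x040,
    "CONN EVT": 0x080, "SESS Recovery": 0x100, "Internal": 0x200,
    "IO Path": 0x400, "APP INTERFACE": 0x800,
}

# Enable connection-setup and session-recovery logging together:
mask = DEBUG_FLAGS["Conn Setup"] | DEBUG_FLAGS["SESS Recovery"]
print(hex(mask))  # 0x104
```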
8–VMware Driver Software Driver Parameters tcp_buf_size The tcp_buf_size parameter sets the TCP send and receive buffer size. The default is 64 × 1,024. time_stamps The time_stamps parameter enables and disables TCP time stamps. Set to 0 to disable time stamps. Set to 1 (default) to enable time stamps.
8–VMware Driver Software Driver Parameters qfle3f_autodiscovery The qfle3f_autodiscovery parameter controls auto-FCoE discovery during system boot. Set to 0 (default) to disable auto-FCoE discovery. Set to 1 to enable auto-FCoE discovery. qfle3f_createvmkMgmt_Entry The qfle3f_createvmkMgmt_Entry parameter creates the vmkMgmt interface. Set to 0 if the vmkMgmt interface will not be used. Set to 1 (default) to create the vmkMgmt interface.
8–VMware Driver Software Driver Parameters Table 8-3. bnx2 Driver Defaults (Continued) Parameter Default Coalesce Tx frames 20 (range 0–255) Coalesce Tx frames IRQ 2 (range 0–255) Coalesce stats μsecs 999936 (approximately 1 second) (range 0–16776960 in 256 increments) MSI/MSI-X Enabled (if supported by 2.6/3.x kernel and interrupt test passes) TSO Enabled on 2.6/3.x kernels WoL Initial setting based on NVRAM's setting. bnx2x Defaults for the bnx2x VMware ESXi driver are listed in Table 8-4.
8–VMware Driver Software Driver Parameters qfle3 Defaults for the qfle3 VMware ESXi driver are listed in Table 8-5. Table 8-5.
8–VMware Driver Software Driver Parameters If the cnic driver is loaded, it must be unloaded first before the bnx2 driver can be unloaded. If the driver was installed using rpm, issue the following command to remove it: rpm -e bnx2 If the driver was installed using make install from the tar file, the driver bnx2.o (or bnx2.ko) must be manually deleted from the system.
8–VMware Driver Software Driver Parameters MSI-X Enabled Successfully bnx2x 0000:01:00.0: vmnic0: using MSI-X fp[7] 35 IRQs: sp 16 fp[0] 28 ... Link Up and Speed Indication bnx2x 0000:01:00.0: vmnic0: NIC Link is Up, 10000 Mbps full duplex, Flow control: ON - receive & transmit Link Down Indication bnx2x 0000:01:00.1: vmnic0: NIC Link is Down Memory Limitation Messages such as the following in the log file indicate that the ESXi host is severely strained. To relieve the strain, disable NetQueue.
8–VMware Driver Software Driver Parameters Otherwise, allow the bnx2x driver to select the quantity of NetQueues to use by issuing the following command: esxcfg-module -s "num_queues=0" bnx2x Optimally, the quantity of NetQueues matches the quantity of CPUs on the machine. bnx2 BNX2 Driver Sign-on QLogic Gigabit Ethernet Driver bnx2 v1.1.3 (Jan. 13, 2005) CNIC Driver Sign-on QLogic CNIC Driver cnic v1.1.
8–VMware Driver Software FCoE Support iSCSI/FCoE Driver Stuck This message may appear during shutdown; no action is needed. cnic: eth0: Failed waiting for ULP up call to complete. Hardware Error, Reload Drivers, or Reboot System cnic: eth0: KCQ index not resetting to 0. FCoE Support This section describes the contents and procedures associated with installation of the VMware software package for supporting Marvell FCoE C-NICs. Drivers Marvell BCM57712/578xx FCoE drivers include the bnx2x and the bnx2fc.
8–VMware Driver Software FCoE Support Output example: vmnic4 User Priority: 3 Source MAC: FF:FF:FF:FF:FF:FF Active: false Priority Settable: false Source MAC Settable: false VLAN Range Settable: false VN2VN Mode Enabled: false 2. Enable the FCoE interface as follows: # esxcli fcoe nic discover -n vmnicX Where X is the interface number determined in Step 1. 3.
8–VMware Driver Software iSCSI Support NOTE The label Software FCoE is a VMware term used to describe initiators that depend on the inbox FCoE libraries and utilities. Marvell’s FCoE solution is a fully stateful, connection-based hardware offload solution designed to significantly reduce the CPU burden imposed by a non-offload software initiator. The native qfle3f driver starts FCoE initialization automatically and does not require these steps.
8–VMware Driver Software iSCSI Support VLAN Configuration for iSCSI Offload (VMware) iSCSI traffic on the network may be isolated in a VLAN to segregate it from other traffic. When this is the case, you must make the iSCSI interface on the adapter a member of that VLAN. To configure the VLAN using the vSphere Client (GUI): 1. Select the ESXi host. 2. Click the Configuration tab. 3. On the Configuration page, select the Networking link, and then click Properties. 4.
8–VMware Driver Software iSCSI Support 5. (Optional) On the VM Network Properties, General page, assign a VLAN number in the VLAN ID box. Figure 8-1 and Figure 8-2 show examples. Figure 8-1.
8–VMware Driver Software iSCSI Support Figure 8-2. VM Network Properties: Example 2 6. Configure the VLAN on VMkernel.
9 Windows Driver Software Windows driver software information includes the following: Supported Drivers “Installing the Driver Software” on page 94 “Modifying the Driver Software” on page 98 “Repairing or Reinstalling the Driver Software” on page 99 “Removing the Device Drivers” on page 100 “Viewing or Changing the Properties of the Adapter” on page 100 “Setting Power Management Options” on page 100 “Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, an
9–Windows Driver Software Installing the Driver Software Installing the Driver Software NOTE These instructions are based on the assumption that your Marvell BCM57xx and BCM57xxx adapters were not factory installed. If your controller was installed at the factory, the driver software has been installed for you.
9–Windows Driver Software Installing the Driver Software Using the Installer In addition to the Marvell device drivers, the installer installs the management applications. The following are installed when running the installer: QLogic Device Drivers installs the Marvell device drivers. Control Suite is the QLogic Control Suite (QCS) CLI. QCC is the QConvergeConsole GUI. QLASP installs the QLogic Advanced Server Program. SNMP installs the SNMP sub agent.
9–Windows Driver Software Installing the Driver Software 3. At the InstallShield Wizard prompt (Figure 9-1), select the adapter management utility that you want to use: Click Yes to use QConvergeConsole GUI. Click No to use QLogic Control Suite. Figure 9-1. InstallShield Wizard Prompt for Management Utility 4.
9–Windows Driver Software Installing the Driver Software To install the Microsoft iSCSI Software Initiator for iSCSI Crash Dump: If supported and if you will use the Marvell iSCSI Crash Dump utility, it is important to follow the installation sequence: 1. Run the installer. 2. Install Microsoft iSCSI Software Initiator along with the patch (MS KB939875).
9–Windows Driver Software Modifying the Driver Software To perform a silent install by feature: Use the ADDSOURCE to include any of the following features.
9–Windows Driver Software Repairing or Reinstalling the Driver Software 4. Click Modify, Add, or Remove to change program features. NOTE This option does not install drivers for new adapters. For information on installing drivers for new adapters, see “Repairing or Reinstalling the Driver Software” on page 99. 5. Click Next to continue. 6. Click on an icon to change how a feature is installed. 7. Click Next. 8. Click Install. 9. Click Finish to close the wizard. 10.
9–Windows Driver Software Removing the Device Drivers Removing the Device Drivers When removing the device drivers, any management application that is installed is also removed. To remove the device drivers: 1. In the Control Panel, double-click Add or Remove Programs. 2. Click QLogic Drivers and Management Applications, and then click Remove. Follow the on-screen prompts. 3. Reboot your system to completely remove the drivers.
9–Windows Driver Software Setting Power Management Options To have the controller stay on at all times: On the adapter properties’ Power Management page, clear the Allow the computer to turn off the device to save power check box, as shown in Figure 9-2. NOTE Power management options are not available on blade servers. Figure 9-2. Device Power Management Options NOTE The Power Management page is available only for servers that support power management.
9–Windows Driver Software Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI There are two main components of the QCC GUI, QCC PowerKit, and QCS CLI management applications: the RPC agent and the client software. An RPC agent is installed on a server, or managed host, that contains one or more Converged Network Adapters.
10 iSCSI Protocol This chapter provides the following information about the iSCSI protocol: iSCSI Boot “iSCSI Crash Dump” on page 130 “iSCSI Offload in Windows Server” on page 130 iSCSI Boot Marvell BCM57xx and BCM57xxx gigabit Ethernet (GbE) adapters support iSCSI boot to enable network boot of operating systems to diskless systems. iSCSI boot allows a Windows, Linux, or VMware operating system boot from an iSCSI target machine located remotely over a standard IP network.
10–iSCSI Protocol iSCSI Boot Supported Operating Systems for iSCSI Boot The Marvell BCM57xx and BCM57xxx gigabit Ethernet adapters support iSCSI boot on the following operating systems: Windows Server 2012 and later 32-bit and 64-bit (supports offload and non-offload paths) Linux RHEL 6 and later and SLES 11.1 and later (supports offload and non-offload paths) SLES 10.x and SLES 11 (only supports non-offload path) VMware ESXi 5.
10–iSCSI Protocol iSCSI Boot Target IP address Target TCP port number Target LUN Initiator IQN CHAP ID and secret Configuring iSCSI Boot Parameters To configure the iSCSI boot parameters: 1. In the NIC Configuration page, in the Legacy Boot Protocol drop-down menu, select iSCSI (see Figure 10-1). Figure 10-1. Legacy Boot Protocol Selection As shown in Figure 10-1, UEFI is not supported for the iSCSI protocol for the BCM57xx and BCM57xxx adapters.
10–iSCSI Protocol iSCSI Boot 2. Configure the iSCSI boot software for either static or dynamic configuration in the CCM, UEFI (see Figure 10-2), QCC GUI, or QCS CLI. Figure 10-2.
10–iSCSI Protocol iSCSI Boot The configuration options available on the General Parameters window (see Figure 10-3) are listed in Table 10-1. Figure 10-3. UEFI, iSCSI Configuration, iSCSI General Parameters Table 10-1 lists parameters for both IPv4 and IPv6. Parameters specific to either IPv4 or IPv6 are noted. NOTE Availability of IPv6 iSCSI boot is platform and device dependent. Table 10-1. Configuration Options Option TCP/IP parameters through DHCP Description This option is specific to IPv4.
10–iSCSI Protocol iSCSI Boot Table 10-1. Configuration Options (Continued) Option Description IP Autoconfiguration This option is specific to IPv6. Controls whether the iSCSI boot host software will configure a stateless link-local address and/or stateful address if DHCPv6 is present and used (Enabled). Router Solicit packets are sent out up to three times with 4 second intervals in between each retry. Or use a static IP configuration (Disabled).
10–iSCSI Protocol iSCSI Boot Table 10-1. Configuration Options (Continued) Option Description LUN Busy Retry Count Controls the quantity of connection retries the iSCSI Boot initiator will attempt if the iSCSI target LUN is busy. IP Version This option is specific to IPv6. Toggles between the IPv4 or IPv6 protocol. All IP settings will be lost when switching from one protocol version to another.
10–iSCSI Protocol iSCSI Boot LUN Busy Retry Count: 0 IP Version: IPv6 (for IPv6, non-offload) HBA Boot Mode: Disabled NOTE For initial OS installation to a blank iSCSI target LUN from a CD/DVD-ROM or mounted bootable OS installation image, set Boot from Target to One Time Disabled. This setting causes the system not to boot from the configured iSCSI target after establishing a successful login and connection. This setting will revert to Enabled after the next system reboot.
10–iSCSI Protocol iSCSI Boot 4. On the iSCSI Initiator Parameters window (Figure 10-4), type values for the following: IP Address (unspecified IPv4 and IPv6 addresses should be 0.0.0.0 and ::, respectively) NOTE Carefully enter the IP address. There is no error-checking performed against the IP address to check for duplicates or incorrect segment or network assignment.
10–iSCSI Protocol iSCSI Boot 7. On the iSCSI First Target Parameters window (Figure 10-5): a. Enable Connect to connect to the iSCSI target. b. Type values for the following using the values used when configuring the iSCSI target: IP Address TCP Port Boot LUN iSCSI Name CHAP ID CHAP Secret 8. Press ESC to return to the Main menu. 9. (Optional) Configure a secondary iSCSI target by repeating these steps in the iSCSI Second Target Parameter window. 10.
10–iSCSI Protocol iSCSI Boot If DHCP Option 17 is used, the target information is provided by the DHCP server, and the initiator iSCSI name is retrieved from the value programmed on the Initiator Parameters window. If no value was selected, the controller defaults to the following name: iqn.1995-05.com.qlogic.<11.22.33.44.55.66>.iscsiboot Where the string 11.22.33.44.55.66 corresponds to the controller’s MAC address.
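The fallback naming rule above (dotted MAC address embedded in the IQN) can be sketched as a small helper; the function name is illustrative, not part of any Marvell tool:

```python
# Build the default initiator IQN the controller falls back to when no
# initiator name is configured: the MAC address, dotted, embedded in
# iqn.1995-05.com.qlogic.<mac>.iscsiboot as described in the guide.

def default_iscsiboot_iqn(mac: str) -> str:
    """mac: colon-separated MAC address, e.g. '00:10:18:04:76:ae'."""
    dotted = mac.lower().replace(":", ".")
    return f"iqn.1995-05.com.qlogic.{dotted}.iscsiboot"

print(default_iscsiboot_iqn("11:22:33:44:55:66"))
# iqn.1995-05.com.qlogic.11.22.33.44.55.66.iscsiboot
```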
10–iSCSI Protocol iSCSI Boot Enabling CHAP Authentication Ensure that CHAP authentication is enabled on the target and initiator. To enable CHAP authentication: 1. On the iSCSI General Parameters window, set CHAP Authentication to Enabled. 2. On the iSCSI Initiator Parameters window, type values for the following: CHAP ID (up to 128 bytes) CHAP Secret (if authentication is required, and must be a minimum of 12 characters; the maximum length is 16 characters) 3.
10–iSCSI Protocol iSCSI Boot DHCP Option 17, Root Path Option 17 is used to pass the iSCSI target information to the iSCSI client. The format of the root path as defined in IETF RFC 4173 is: "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>" Table 10-2 lists the parameters and definitions. Table 10-2.
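A minimal parser for this root-path format, following RFC 4173 conventions, can illustrate how the fields are split. The fallback values for empty protocol, port, and LUN fields (TCP, 3260, and 0) are the commonly used defaults; this sketch is not the firmware's actual parser:

```python
# Parse an RFC 4173 iSCSI root path of the form
# iscsi:<servername>:<protocol>:<port>:<LUN>:<targetname>
# as carried in DHCP option 17. Empty middle fields fall back to the
# usual defaults (protocol 6 = TCP, port 3260, LUN 0).

def parse_iscsi_root_path(root_path: str) -> dict:
    prefix = "iscsi:"
    if not root_path.startswith(prefix):
        raise ValueError("not an iSCSI root path")
    # maxsplit=4 keeps any colons inside the target IQN intact.
    server, protocol, port, lun, target = root_path[len(prefix):].split(":", 4)
    return {
        "server": server,
        "protocol": protocol or "6",
        "port": port or "3260",
        "lun": lun or "0",
        "target": target,
    }

fields = parse_iscsi_root_path(
    "iscsi:192.168.1.20::3260:0:iqn.2002-03.com.example:target1")
print(fields["server"], fields["port"], fields["target"])
```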
10–iSCSI Protocol iSCSI Boot Table 10-3 lists the suboption. Table 10-3. DHCP Option 43 Suboption Definition Suboption 201 Definition First iSCSI target information in the standard root path format "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>" Using DHCP option 43 requires more configuration than DHCP option 17, but it provides a richer environment with more configuration options.
10–iSCSI Protocol iSCSI Boot The content of Option 16 should be <2-byte length> . DHCPv6 Option 17, Vendor-Specific Information DHCPv6 Option 17 (vendor-specific information) provides more configuration options to the iSCSI client. In this configuration, three additional suboptions are provided that assign the initiator IQN to the iSCSI boot client along with two iSCSI target IQNs that can be used for booting. Table 10-4 lists the suboption. Table 10-4.
10–iSCSI Protocol iSCSI Boot Windows Server 2012, 2012 R2, and 2016 iSCSI Boot Setup Windows Server 2012/2012 R2 and 2016 support booting as well as installing in either the offload or non-offload paths. Marvell requires the use of a “slipstream” DVD with the latest Marvell drivers injected (see “Injecting (Slipstreaming) Marvell Drivers into Windows Image Files” on page 124). Also refer to the Microsoft knowledge base topic KB974072 at support.microsoft.com.
10–iSCSI Protocol iSCSI Boot 12. Select Next to proceed with Windows Server 2012 or 2016 installation. A few minutes after the Windows Server 2012 or 2016 DVD installation process starts, a system reboot occurs. After the reboot, the Windows Server 2012 or 2016 installation routine should resume and complete the installation. 13. Following another system restart, verify that the remote system is able to boot to the desktop. 14.
10–iSCSI Protocol iSCSI Boot 11. Continue installation as needed. A drive will be available at this point. After file copying is done, remove the CD or DVD and reboot the system. 12. When the system reboots, enable “boot from target” in iSCSI Boot Parameters and continue with installation until it is done. At this stage, the initial installation phase is complete. To create a new customized initrd for any new components update: 1. Update the iSCSI initiator if needed.
10–iSCSI Protocol iSCSI Boot 17. Continue booting into the iSCSI boot image and select one of the images you created (non-offload or offload). Your choice must correspond with your choice in the iSCSI Boot parameters section. If HBA Boot Mode was enabled in the iSCSI Boot Parameters section, you must boot the offload image. NOTE Marvell supports Host Bus Adapter (offload) starting in SLES 11 SP1 and later. Marvell does not support iSCSI boot in Host Bus Adapter (offload) mode for SLES 10.x and SLES 11.
10–iSCSI Protocol iSCSI Boot ISCSIUIO=/sbin/iscsiuio CONFIG_FILE=/etc/iscsid.conf DAEMON=/sbin/iscsid ARGS="-c $CONFIG_FILE" # Source LSB init functions . /etc/rc.status # # This service is run right after booting. So all targets activated # during mkinitrd run should not be removed when the open-iscsi # service is stopped.
10–iSCSI Protocol iSCSI Boot rc_failed 6 rc_exit fi fi case "$1" in start) echo -n "Starting iSCSI initiator for the root device: " iscsi_load_iscsiuio startproc $DAEMON $ARGS rc_status -v iscsi_mark_root_nodes ;; stop|restart|reload) rc_failed 0 ;; status) echo -n "Checking for iSCSI initiator service: " if checkproc $DAEMON ; then rc_status -v else rc_failed 3 rc_status -v fi ;; *) echo "Usage: $0 {start|stop|status|restart|reload}" exit 1 ;; esac rc_exit Removing Inbox Drivers from Windows OS Image 1.
10–iSCSI Protocol iSCSI Boot 4. Open the Windows Automated Installation Kit (AIK) command prompt in elevated mode from All program, and then issue the following command: attrib -r D:\Temp\Win2008R2Copy\sources\boot.wim 5. Issue the following command to mount the boot.wim image: dism /Mount-WIM /WimFile:D:\Temp\Win2008R2Copy\sources\boot.wim /index:1 / MountDir:D:\Temp\Win2008R2Mod 6. The boot.wim image was mounted in the Win2008R2Mod folder.
10–iSCSI Protocol iSCSI Boot Finally, inject these drivers into the Windows Image (WIM) files and install the applicable Windows Server version from the updated images. To inject Marvell drivers into Windows image files: 1. For Windows Server 2008 R2 and SP2, install the Windows Automated Installation Kit (AIK). Or, for Windows Server 2012 and 2012 R2, install the Windows Assessment and Deployment Kit (ADK). 2.
10–iSCSI Protocol iSCSI Boot 10. Issue the following command to determine the index of the SKU that you want in the install.wim image: dism /get-wiminfo /wimfile:.\src\sources\install.wim For example, in Windows Server 2012, index 2 is identified as “Windows Server 2012 SERVERSTANDARD.” 11. Issue the following command to mount the install.wim image: dism /mount-wim /wimfile:.\src\sources\install.wim /index:X /mountdir:.\mnt Note: X is a placeholder for the index value that you obtained in the previous
10–iSCSI Protocol iSCSI Boot 3. To boot through an offload path, set the HBA Boot Mode to Enabled. To boot through a non-offload path, set the HBA Boot Mode to Disabled. (This parameter cannot be changed when the adapter is in multi-function mode.) If CHAP authentication is needed, enable CHAP authentication after determining that booting is successful (see “Enabling CHAP Authentication” on page 114).
10–iSCSI Protocol iSCSI Boot 6. Install the bibt package on your Linux system. You can get this package from the QLogic CD. 7. Delete all ifcfg-eth* files. 8. Configure one port of the network adapter to connect to the iSCSI target (for instructions, see “Configuring the iSCSI Target” on page 104). 9. Connect to the iSCSI target. 10. Issue the dd command to copy from the local hard drive to the iSCSI target. 11.
10–iSCSI Protocol iSCSI Boot Troubleshooting iSCSI Boot The following troubleshooting tips are useful for iSCSI boot. Problem: The Marvell iSCSI Crash Dump utility will not work properly to capture a memory dump when the link speed for iSCSI boot is configured for 10Mbps or 100Mbps. Solution: The iSCSI Crash Dump utility is supported when the link speed for iSCSI boot is configured for 1Gbps or 10Gbps. 10Mbps and 100Mbps are not supported.
10–iSCSI Protocol iSCSI Crash Dump Problem: In Windows Server 2012, toggling between iSCSI Host Bus Adapter offload mode and iSCSI software initiator boot can leave the machine in a state where the Host Bus Adapter offload miniport bxois will not load. Solution: Manually edit [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\bxois\StartOverride] from 3 to 0. Modify the registry key before toggling back from the NDIS path to the Host Bus Adapter path in CCM. NOTE Microsoft recommends against this method.
10–iSCSI Protocol iSCSI Offload in Windows Server Configuring Marvell iSCSI Using QCC Configuring Microsoft Initiator to Use the Marvell iSCSI Offload Installing Marvell Drivers and Management Applications Install the Windows drivers and management applications. Installing the Microsoft iSCSI Initiator For Windows Server 2012 and later, the iSCSI initiator is included inbox.
On this page, you can change the iSCSI-Offload MTU size, the iSCSI-Offload VLAN ID, the IPv4/IPv6 DHCP setting, the IPv4/IPv6 Static Address/Subnet Mask/Default Gateway settings, and the IPv6 Process Router Advertisements setting (see Figure 10-6). Figure 10-6. Configuring iSCSI Using QCC 4. DHCP is the default for IP address assignment, but you can change it to a static IP address assignment if that is your preferred method.
10–iSCSI Protocol iSCSI Offload in Windows Server Configuring Microsoft Initiator to Use the Marvell iSCSI Offload After you have configured the IP address for the iSCSI adapter, you must use Microsoft Initiator to configure and add a connection to the iSCSI target using a Marvell iSCSI adapter. See Microsoft’s user guide for more details on the Microsoft Initiator. 1. Open Microsoft Initiator. 2. Configure the initiator IQN name according to your setup.
10–iSCSI Protocol iSCSI Offload in Windows Server 3. In the Initiator Node Name Change dialog box (see Figure 10-8), type the initiator IQN name, and then click OK. Figure 10-8. Changing the Initiator Node Name 4. On the iSCSI Initiator Properties (Figure 10-9), click the Discovery tab, and then under Target Portals, click Add. Figure 10-9.
10–iSCSI Protocol iSCSI Offload in Windows Server 5. On the Add Target Portal dialog box (Figure 10-10), type the IP address of the target, and then click Advanced. Figure 10-10. Add Target Portal Dialog Box 6. On the Advanced Settings dialog box, complete the General page as follows: a. For the Local adapter, select the Marvell BCM57xx and BCM57xxx C-NIC iSCSI adapter. b. For the Source IP, select the IP address for the adapter. c.
Figure 10-11 shows an example. Figure 10-11.
10–iSCSI Protocol iSCSI Offload in Windows Server 7. On the iSCSI Initiator Properties, click the Discovery tab, and then on the Discovery page, click OK to add the target portal. Figure 10-12 shows an example. Figure 10-12. iSCSI Initiator Properties: Discovery Page 8. On the iSCSI Initiator Properties, click the Targets tab.
10–iSCSI Protocol iSCSI Offload in Windows Server 9. On the Targets page, select the target, and then click Log On to log into your iSCSI target using the Marvell iSCSI adapter. Figure 10-13 shows an example. Figure 10-13. iSCSI Initiator Properties: Targets Page 10. On the Log On To Target dialog box (Figure 10-14), click Advanced. Figure 10-14.
10–iSCSI Protocol iSCSI Offload in Windows Server 11. On the Advanced Settings dialog box, General page, select the Marvell BCM57xx and BCM57xxx C-NIC iSCSI adapters as the Local adapter, and then click OK. Figure 10-15 shows an example. Figure 10-15. Advanced Settings: General Page, Local Adapter 12. Click OK to close the Microsoft Initiator.
10–iSCSI Protocol iSCSI Offload in Windows Server 13. To format your iSCSI partition, use Disk Manager. NOTE Teaming does not support iSCSI adapters. Teaming does not support NDIS adapters that are in the boot path. Teaming supports NDIS adapters that are not in the iSCSI boot path, but only for the SLB or switch-independent team type. iSCSI Offload FAQs Question: How do I assign an IP address for iSCSI offload? Answer: Use the Configurations page in the applicable management utility.
10–iSCSI Protocol iSCSI Offload in Windows Server Table 10-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity Message 3 Error Maximum command sequence number is not serially greater than expected command sequence number in login response. Dump data contains Expected Command Sequence number followed by Maximum Command Sequence number. 4 Error MaxBurstLength is not serially greater than FirstBurstLength. Dump data contains FirstBurstLength followed by MaxBurstLength.
10–iSCSI Protocol iSCSI Offload in Windows Server Table 10-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity 20 Error Connection to the target was lost. The initiator will attempt to retry the connection. 21 Error Data Segment Length specified in the header exceeds MaxRecvDataSegmentLength declared by the target. 22 Error Header digest error was detected for the specified PDU. Dump data contains the header and digest.
10–iSCSI Protocol iSCSI Offload in Windows Server Table 10-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity 38 Error Initiator requires CHAP for logon authentication, but target did not offer CHAP. 39 Error Initiator sent a task management command to reset the target. The target name is specified in the dump data. 40 Error Target requires logon authentication through CHAP, but Initiator is not configured to perform CHAP.
10–iSCSI Protocol iSCSI Offload in Windows Server Table 10-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity Message 57 Error Initiator could not allocate required resources for processing a request resulting in I/O failure. 58 Error Initiator could not allocate a tag for processing a request resulting in I/O failure. 59 Error Target dropped the connection before the initiator could transition to Full Feature Phase.
11 Marvell Teaming Services This chapter describes teaming for adapters in Windows Server systems (excluding Windows Server 2016 and later). For more information on similar technologies on other operating systems (for example, Linux Channel Bonding), refer to your operating system documentation. Microsoft recommends using their in-OS NIC teaming service instead of any adapter vendor-proprietary NIC teaming driver on Windows Server 2012 and later.
11–Marvell Teaming Services Executive Summary This section describes the technology and implementation considerations when working with the network teaming services offered by the Marvell software shipped with Dell’s servers and storage products. The goal of Marvell teaming services is to provide fault tolerance and link aggregation across a team of two or more adapters.
Table 11-1.
11–Marvell Teaming Services Executive Summary Network Addressing To understand how teaming works, it is important to understand how node communications work in an Ethernet network. This document is based on the assumption that the reader is familiar with the basics of IP and Ethernet network communications. The following information provides a high-level overview of the concepts of network addressing used in an Ethernet network.
Teaming and Network Addresses A team of adapters functions as a single virtual network interface and does not appear any different to other network devices than a non-teamed adapter. A virtual network adapter advertises a single Layer 2 and one or more Layer 3 addresses. When the teaming driver initializes, it selects one MAC address from one of the physical adapters that make up the team to be the Team MAC address.
Table 11-2 shows a summary of the teaming types and their classification.

Table 11-2. Available Teaming Types

Teaming Type | Switch-Dependent a | LACP Support Required on the Switch | Load Balancing | Failover
Smart Load Balancing and Failover (with two to eight load balance team members) | — | — | ✔ | ✔
SLB (Auto-Fallback Disable) | — | — | ✔ | ✔
Link Aggregation (802.3ad) | ✔ | ✔ | ✔ | ✔
Generic Trunking (FEC/GEC)/802.
Transmit load balancing is achieved by creating a hashing table using the source and destination IP addresses and TCP/UDP port numbers. The same combination of source and destination IP addresses and TCP/UDP port numbers generally yields the same hash index and therefore points to the same port in the team.
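The flow-to-port mapping described above can be sketched as follows. This is a simplified illustration, not QLASP's actual (proprietary) hash; the function name and the use of SHA-256 are assumptions for the example:

```python
# Simplified illustration of hash-based transmit load balancing: the same
# source/destination IP and TCP/UDP port combination always maps to the
# same team member, so a given flow stays on one physical port.
import hashlib

def select_team_port(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                     num_ports: int) -> int:
    """Map a flow's addressing fields to one physical port in the team."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    hash_index = int.from_bytes(digest[:4], "big")
    return hash_index % num_ports

# A given flow is always transmitted on the same port...
a = select_team_port("10.0.0.1", "10.0.0.2", 49152, 80, num_ports=4)
b = select_team_port("10.0.0.1", "10.0.0.2", 49152, 80, num_ports=4)
assert a == b
# ...while different flows may be spread across different team members.
```

Because only the addressing fields feed the hash, balancing is statistical: many concurrent flows spread across the team, but a single large flow never exceeds one port's bandwidth.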
11–Marvell Teaming Services Executive Summary The reason is that ARP is not a routable protocol. It does not have an IP header and therefore, is not sent to the router or default gateway. ARP is only a local subnet protocol. In addition, because the G-ARP is not a broadcast packet, the router will not process it and will not update its own ARP cache.
11–Marvell Teaming Services Executive Summary Link Aggregation (IEEE 802.3ad LACP) Link aggregation is similar to generic trunking except that it uses the link aggregation control protocol (LACP) to negotiate the ports that will make up the team. LACP must be enabled at both ends of the link for the team to be operational. If LACP is not available at both ends of the link, 802.3ad provides a manual aggregation that only requires both ends of the link to be in a link up state.
11–Marvell Teaming Services Executive Summary Software Components Teaming is implemented through an NDIS intermediate driver in the Windows operating system environment. This software component works with the miniport driver, the NDIS layer, and the protocol stack to enable the teaming architecture (see Figure 11-2 on page 161). The miniport driver controls the host LAN controller directly to enable functions such as sends, receives, and interrupt processing.
11–Marvell Teaming Services Executive Summary Hardware Requirements Hardware requirements for teaming include the following: Repeater Hub Switching Hub Router The various teaming modes described in this document place specific restrictions on the networking equipment used to connect clients to teamed systems. Each type of network interconnect technology has an effect on teaming as described in the following sections.
11–Marvell Teaming Services Executive Summary Router A router is designed to route network traffic based on Layer 3 or higher protocols, although it often also works as a Layer 2 device with switching capabilities. The teaming of ports connected directly to a router is not supported. Teaming Support by Processor All team types are supported by the IA-32 and EM64T processors.
11–Marvell Teaming Services Executive Summary Table 11-4. Comparison of Team Types (Continued) Type of Team Fault Tolerance Load Balancing Switch-Dependent Static Trunking Switch-Independent Dynamic Link Aggregation (IEEE 802.
Table 11-4. Comparison of Team Types (Continued)

Type of Team | Fault Tolerance | Load Balancing | Switch-Dependent Static Trunking | Switch-Independent Dynamic Link Aggregation (IEEE 802.3ad)
Function | SLB with Standby a | SLB | Generic Trunking | Link Aggregation
Load balancing by IP address | No | Yes | Yes | Yes
Load balancing by MAC address | No | Yes (used for no-IP/IPX) | Yes | Yes

a SLB with one primary and one standby member.
Figure 11-1 shows a flow chart for determining the team type. Figure 11-1.
11–Marvell Teaming Services Teaming Mechanisms Teaming Mechanisms This section provides the following information about teaming mechanisms: Architecture; Types of Teams; Attributes of the Features Associated with Each Type of Team; and Speeds Supported for Each Type of Team.
11–Marvell Teaming Services Teaming Mechanisms Architecture The QLASP is implemented as an NDIS intermediate driver (see Figure 11-2). It operates below protocol stacks such as TCP/IP and IPX and appears as a virtual adapter. This virtual adapter inherits the MAC Address of the first port initialized in the team. A Layer 3 address must also be configured for the virtual adapter.
11–Marvell Teaming Services Teaming Mechanisms Outbound Traffic Flow The Marvell intermediate driver manages the outbound traffic flow for all teaming modes. For outbound traffic, every packet is first classified into a flow, and then distributed to the selected physical adapter for transmission. The flow classification involves an efficient hash computation over known protocol fields. The resulting hash value is used to index into an Outbound Flow Hash Table.
11–Marvell Teaming Services Teaming Mechanisms When an inbound IP Datagram arrives, the appropriate Inbound Flow Head Entry is located by hashing the source IP address of the IP Datagram. Two statistics counters stored in the selected entry are also updated. These counters are used in the same fashion as the outbound counters by the load-balancing engine periodically to reassign the flows to the physical adapter.
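The inbound flow table described above can be modeled with a short sketch. The class, field names, and toy hash are hypothetical; the real tables and hash are internal driver structures:

```python
# Simplified model of the inbound flow table: entries are located by hashing
# the source IP address, and each entry carries statistics counters that the
# load-balancing engine consults periodically when reassigning flows.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    physical_port: int   # team member currently assigned to this flow
    packets: int = 0     # statistics counters updated on every datagram,
    octets: int = 0      # used later to rebalance flows across adapters

class InboundFlowTable:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.table = {}

    def _hash(self, src_ip: str) -> int:
        # Toy hash over the source IP address (the field named in the text).
        return sum(int(octet) for octet in src_ip.split(".")) % 251

    def record(self, src_ip: str, length: int) -> int:
        """Locate (or create) the flow entry for a datagram, update its
        counters, and return the physical port currently assigned to it."""
        idx = self._hash(src_ip)
        entry = self.table.setdefault(idx, FlowEntry(idx % self.num_ports))
        entry.packets += 1
        entry.octets += length
        return entry.physical_port

table = InboundFlowTable(num_ports=2)
first = table.record("192.168.1.10", 1500)
assert first == table.record("192.168.1.10", 512)  # same source, same port
```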
11–Marvell Teaming Services Teaming Mechanisms The actual assignment between adapters may change over time, but any protocol that is not TCP/UDP based goes over the same physical adapter because only the IP address is used in the hash. Performance Modern network interface cards provide many hardware features that reduce CPU utilization by offloading specific CPU intensive operations (see “Teaming and Other Advanced Networking Properties” on page 171).
Network Communications Key attributes of SLB include:

Failover mechanism—Link loss detection.
Load Balancing Algorithm—Inbound and outbound traffic are balanced through a Marvell proprietary mechanism based on Layer 4 flows.
Outbound Load Balancing using MAC Address—No
Outbound Load Balancing using IP Address—Yes
Multivendor Teaming—Supported (must include at least one Marvell Ethernet adapter as a team member).
11–Marvell Teaming Services Teaming Mechanisms The attached switch must support the appropriate trunking scheme for this mode of operation. Both the QLASP and the switch continually monitor their ports for link loss. In the event of link loss on any port, traffic is automatically diverted to other ports in the team.
11–Marvell Teaming Services Teaming Mechanisms Dynamic Trunking (IEEE 802.3ad Link Aggregation) This mode supports link aggregation through static and dynamic configuration through the link aggregation control protocol (LACP). With this mode, all adapters in the team are configured to receive packets for the same MAC address. The MAC address of the first adapter in the team is used and cannot be substituted for a different MAC address.
LiveLink LiveLink is a feature of QLASP that is available for the Smart Load Balancing (SLB) and SLB (Auto-Fallback Disable) types of teaming. The purpose of LiveLink is to detect link loss beyond the switch and to route traffic only through team members that have a live link. This function is accomplished through the teaming software.
11–Marvell Teaming Services Teaming Mechanisms Table 11-5. Teaming Attributes (Continued) Feature Attribute Failover event Loss of link Failover time <500ms Fallback time 1.
11–Marvell Teaming Services Teaming Mechanisms Table 11-5. Teaming Attributes (Continued) Feature Attribute Hot remove Yes Link speed support Different speeds Frame protocol All Incoming packet management Switch Outgoing packet management QLASP Failover event Loss of link only Failover time < 500ms Fallback time 1.5s (approximate) a MAC address Same for all adapters Multivendor teaming Yes a Make sure that Port Fast or Edge Port is enabled.
11–Marvell Teaming Services Teaming and Other Advanced Networking Properties Teaming and Other Advanced Networking Properties This section covers the following teaming and advanced networking properties: Checksum Offload IEEE 802.1p QoS Tagging Large Send Offload Jumbo Frames IEEE 802.
A team does not necessarily inherit adapter properties; rather, various properties depend on the specific capability. For example, flow control is a physical adapter property, has nothing to do with QLASP, and is enabled on a specific adapter if the miniport driver for that adapter has flow control enabled.
Jumbo Frames The use of jumbo frames was originally proposed by Alteon Networks, Inc. in 1998, increasing the maximum size of an Ethernet frame to 9600 bytes. Though never formally adopted by the IEEE 802.3 Working Group, support for jumbo frames has been implemented in Marvell BCM57xx and BCM57xxx adapters.
11–Marvell Teaming Services General Network Considerations Wake on LAN Wake on LAN (WoL) is a feature that allows a system to be awakened from a sleep state by the arrival of a specific packet over the Ethernet interface. Because a Virtual Adapter is implemented as a software only device, it lacks the hardware features to implement Wake on LAN and cannot be enabled to wake the system from a sleeping state through the virtual adapter.
11–Marvell Teaming Services General Network Considerations Teaming with Microsoft Virtual Server 2005 The only supported QLASP team configuration when using Microsoft Virtual Server 2005 is with a Smart Load Balancing team-type consisting of a single primary Marvell adapter and a standby Marvell adapter. Make sure to unbind or deselect “Virtual Machine Network Services” from each team member prior to creating a team and prior to creating virtual networks with Microsoft Virtual Server.
11–Marvell Teaming Services General Network Considerations The figures show the secondary team member sending the ICMP echo requests (yellow arrows) while the primary team member receives the respective ICMP echo replies (blue arrows). This send-receive illustrates a key characteristic of the teaming software. The load balancing algorithms do not synchronize how frames are load balanced when sent or received.
11–Marvell Teaming Services General Network Considerations Furthermore, a failover event would cause additional loss of connectivity. Consider a cable disconnect on the Top Switch port 4. In this case, Gray would send the ICMP Request to Red 49:C9, but because the Bottom Switch has no entry for 49:C9 in its CAM Table, the frame is flooded to all its ports but cannot find a way to get to 49:C9. Figure 11-3.
11–Marvell Teaming Services General Network Considerations The addition of a link between the switches allows traffic from and to Blue and Gray to reach each other without any problems. Note the additional entries in the CAM table for both switches. The link interconnect is critical for the proper operation of the team. As a result, Marvell highly advises that you have a link aggregation trunk to interconnect the two switches to ensure high availability for the connection. Figure 11-4.
Figure 11-5 represents a failover event in which the cable is unplugged on the Top Switch port 4. This event is a successful failover with all stations pinging each other without loss of connectivity. Figure 11-5.
Spanning Tree Algorithm In Ethernet networks, only one active path may exist between any two bridges or switches. Multiple active paths between switches can cause loops in the network. When loops occur, some switches recognize stations on both sides of the switch. This situation causes the forwarding algorithm to malfunction, allowing duplicate frames to be forwarded.
Topology Change Notice (TCN) A bridge or switch creates a forwarding table of MAC addresses and port numbers by learning the source MAC addresses of frames received on a specific port. The table is used to forward frames to a specific port rather than flooding the frame to all ports. The typical maximum aging time of entries in the table is 5 minutes. Only when a host has been silent for 5 minutes would its entry be removed from the table.
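The learning and 5-minute aging behavior described above can be modeled as follows (a simplified software model; real switches implement the CAM table in hardware):

```python
# Simplified model of a switch forwarding (CAM) table with entry aging,
# illustrating learning by source MAC address and the 5-minute timeout.
AGING_TIME = 300.0  # seconds: the typical 5-minute maximum aging time

class ForwardingTable:
    def __init__(self):
        self._entries = {}  # MAC address -> (port, last_seen timestamp)

    def learn(self, src_mac: str, port: int, now: float) -> None:
        """Record which port a source MAC address was last seen on."""
        self._entries[src_mac] = (port, now)

    def lookup(self, dst_mac: str, now: float):
        """Return the port for a MAC, or None (flood) if unknown or aged out."""
        entry = self._entries.get(dst_mac)
        if entry is None:
            return None
        port, last_seen = entry
        if now - last_seen > AGING_TIME:
            del self._entries[dst_mac]  # host silent too long: entry removed
            return None
        return port

table = ForwardingTable()
table.learn("00:11:22:33:44:55", port=3, now=0.0)
assert table.lookup("00:11:22:33:44:55", now=10.0) == 3       # known: forward
assert table.lookup("00:11:22:33:44:55", now=400.0) is None   # aged out: flood
```

A TCN shortens this aging time so that stale entries (for example, a teamed MAC that has moved to another port after failover) are flushed quickly instead of blackholing traffic for up to 5 minutes.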
Layer 3 Routing and Switching The switch that the teamed ports are connected to must not be a Layer 3 switch or router. The ports in the team must be in the same network. Teaming with Hubs (for Troubleshooting Purposes Only) SLB teaming can be used with 10Mbps and 100Mbps hubs, but Marvell recommends using it only for troubleshooting purposes, such as connecting a network analyzer in the event that switch port mirroring is not an option.
11–Marvell Teaming Services General Network Considerations SLB Team Connected to a Single Hub SLB teams configured as shown in Figure 11-6 maintain their fault tolerance properties. Either server connection could potentially fail, and network functionality is maintained. Clients could be connected directly to the hub, and fault tolerance would still be maintained; server performance, however, would be degraded. Figure 11-6. Team Connected to a Single Hub Generic and Dynamic Trunking (FEC/GEC/IEEE 802.
11–Marvell Teaming Services Application Considerations Application Considerations Application considerations covered: Teaming and Clustering Teaming and Network Backup Teaming and Clustering Teaming and clustering information includes: Microsoft Cluster Software High-Performance Computing Cluster Oracle Microsoft Cluster Software Dell Server cluster solutions integrate Microsoft Cluster Services (MSCS) with PowerVault™ SCSI or Dell and EMC Fibre Channel-based storage, Dell servers, stor
11–Marvell Teaming Services Application Considerations Figure 11-7 shows a two-node Fibre-Channel cluster with three network interfaces per cluster node: one private and two public. On each node, the two public adapters are teamed, and the private adapter is not. Teaming is supported across the same switch or across two switches. Figure 11-8 on page 187 shows the same two-node Fibre-Channel cluster in this configuration. Figure 11-7.
11–Marvell Teaming Services Application Considerations High-Performance Computing Cluster Gigabit Ethernet is typically used for the following purposes in high-performance computing cluster (HPCC) applications: Inter-process communications (IPC): For applications that do not require low-latency, high-bandwidth interconnects (such as Myrinet™ or InfiniBand®), Gigabit Ethernet can be used for communication between the compute nodes.
11–Marvell Teaming Services Application Considerations Oracle In the Marvell Oracle® solution stacks, Marvell supports adapter teaming in both the private network (interconnect between Real Application Cluster [RAC] nodes) and public network with clients or the application layer above the database layer, as shown in Figure 11-8. Figure 11-8.
11–Marvell Teaming Services Application Considerations Teaming and Network Backup When you perform network backups in a nonteamed environment, overall throughput on a backup server adapter can be easily impacted due to excessive traffic and adapter overloading. Depending on the quantity of backup servers, data streams, and tape drive speed, backup traffic can easily consume a high percentage of the network link bandwidth, thus impacting production data and tape backup performance.
Because there are four client servers, the backup server can simultaneously stream four backup jobs (one per client) to a multidrive autoloader. Because of the single link between the switch and the backup server, however, a four-stream backup can easily saturate the adapter and link.
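A back-of-the-envelope calculation shows how quickly concurrent streams saturate a single link. The per-stream rate below is an assumed figure for illustration, not a measured value:

```python
# Illustrative only: estimate link utilization for concurrent backup streams.
# The per-stream throughput is an assumed number, not a Marvell measurement.
link_capacity_mbps = 1000   # one Gigabit Ethernet link to the backup server
stream_rate_mbps = 300      # assumed throughput of one backup stream
num_streams = 4             # one backup job per client server

offered_load = num_streams * stream_rate_mbps
utilization = offered_load / link_capacity_mbps
print(f"Offered load: {offered_load} Mbps "
      f"({utilization:.0%} of a single link)")
# -> Offered load: 1200 Mbps (120% of a single link): the link is saturated.
```

Teaming the backup server's adapters raises the aggregate link capacity, which is exactly the scenario the following sections address.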
11–Marvell Teaming Services Application Considerations The designated path is determined by two factors: Client-Server ARP cache points to the backup server MAC address. This address is determined by the Marvell intermediate driver inbound load balancing algorithm. The physical adapter interface on Client-Server Red transmits the data.
11–Marvell Teaming Services Application Considerations Fault Tolerance If a network link fails during tape backup operations, all traffic between the backup server and client stops and backup jobs fail. If, however, the network topology was configured for both Marvell SLB and switch fault tolerance, this configuration would allow tape backup operations to continue without interruption during the link failure. All failover processes within the network are transparent to tape backup software applications.
11–Marvell Teaming Services Application Considerations To understand how backup data streams are directed during network failover process, consider the topology in Figure 11-10. Client-Server Red is transmitting data to the backup server through Path 1, but a link failure occurs between the backup server and the switch.
11–Marvell Teaming Services Troubleshooting Teaming Problems Troubleshooting Teaming Problems When running a protocol analyzer over a virtual adapter teamed interface, the MAC address shown in the transmitted frames may not be correct. The analyzer does not show the frames as constructed by QLASP and shows the MAC address of the team and not the MAC address of the interface transmitting the frame.
A team that requires maximum throughput should use LACP or GEC\FEC. In these cases, the intermediate driver is only responsible for the outbound load balancing while the switch performs the inbound load balancing. Aggregated teams (802.3ad\LACP and GEC\FEC) must be connected to only a single switch that supports IEEE 802.3ad, LACP, or GEC/FEC.
11–Marvell Teaming Services Frequently Asked Questions 5. Check that the adapters and the switch are configured identically for link speed and duplex. 6. If possible, break the team and check for connectivity to each adapter independently to confirm that the problem is directly associated with teaming. 7. Check that all switch ports connected to the team are on the same VLAN. 8. Check that the switch ports are configured properly for Generic Trunking (FEC/GEC)/802.
11–Marvell Teaming Services Frequently Asked Questions Question: Can I connect the teamed adapters to a hub? Answer: Teamed ports can be connected to a hub for troubleshooting purposes only. However, this practice is not recommended for normal operation because the performance would be degraded due to hub limitations. Connect the teamed ports to a switch instead. Question: Can I connect the teamed adapters to ports in a router? Answer: No.
11–Marvell Teaming Services Frequently Asked Questions Question: How do I upgrade the intermediate driver (QLASP)? Answer: The intermediate driver cannot be upgraded through the Local Area Connection Properties. It must be upgraded using the Setup installer. Question: How can I determine the performance statistics on a virtual adapter (team)? Answer: In QLogic Control Suite, click the Statistics tab for the virtual adapter.
11–Marvell Teaming Services Event Log Messages Question: Why does my team lose connectivity for the first 30 to 50 seconds after the primary adapter is restored (fall-back after a failover)? Answer: During a fall-back event, link is restored causing Spanning Tree Protocol to configure the port for blocking until it determines that it can move to the forwarding state. You must enable Port Fast or Edge Port on the switch ports connected to the team to prevent the loss of communications caused by STP.
Base Driver (Physical Adapter or Miniport) The base driver is identified by source L2ND. Table 11-8 lists the event log messages supported by the base driver, explains the cause for the message, and provides the recommended action. NOTE In Table 11-8, message numbers 1 through 17 apply to both NDIS 5.x and NDIS 6.x drivers; message numbers 18 through 23 apply only to the NDIS 6.x driver. Table 11-8.
11–Marvell Teaming Services Event Log Messages Table 11-8. Base Driver Event Log Messages (Continued) Message Number Severity Message Cause 6 Informational Network controller configured for 10Mb half-duplex link. The adapter has been manually configured for the selected line speed and duplex settings. No action is required. 7 Informational Network controller configured for 10Mb full-duplex link. The adapter has been manually configured for the selected line speed and duplex settings.
11–Marvell Teaming Services Event Log Messages Table 11-8. Base Driver Event Log Messages (Continued) Message Number Severity Message Cause Corrective Action 15 Error Unable to map I/O space. The device driver cannot allocate memory-mapped I/O to access driver registers. Remove other adapters from the system, reduce the amount of physical memory installed, and replace the adapter. 16 Informational Driver initialized successfully. The driver has successfully loaded. No action is required.
11–Marvell Teaming Services Event Log Messages Table 11-8. Base Driver Event Log Messages (Continued) Message Number 23 Severity Error Message Cause Corrective Action Network controller failed to exchange the interface with the bus driver. The driver and the bus driver are not compatible. Update to the latest driver set, ensuring the major and minor versions for both NDIS and the bus driver are the same.
11–Marvell Teaming Services Event Log Messages Table 11-9. Intermediate Driver Event Log Messages (Continued) System Event Message Number Severity Message Cause Corrective Action 7 Error Could not allocate memory for internal data structures. The driver cannot allocate memory from the operating system. Close running applications to free memory. 8 Warning Could not bind to adapter. The driver could not open one of the team physical adapters.
11–Marvell Teaming Services Event Log Messages Table 11-9. Intermediate Driver Event Log Messages (Continued) System Event Message Number Severity Message Cause 14 Informational Network adapter does not support Advanced Failover. The physical adapter does not support the Marvell NIC Extension (NICE). Replace the adapter with one that does support NICE. 15 Informational Network adapter is enabled through management interface.
11–Marvell Teaming Services Event Log Messages Virtual Bus Driver (VBD) Table 11-10 lists VBD event log messages. Table 11-10. Virtual Bus Driver (VBD) Event Log Messages Message Number Severity Message Cause Corrective Action 1 Error Failed to allocate memory for the device block. Check system memory resource usage. The driver cannot allocate memory from the operating system. Close running applications to free memory. 2 Informational The network link is down.
11–Marvell Teaming Services Event Log Messages Table 11-10. Virtual Bus Driver (VBD) Event Log Messages (Continued) Message Number Severity Message Cause Corrective Action 8 Informational Network controller configured for 1Gb half-duplex link. The adapter has been manually configured for the selected line speed and duplex settings. No action is required. 9 Informational Network controller configured for 1Gb full-duplex link.
12 NIC Partitioning and Bandwidth Management NIC partitioning and bandwidth management covered in this chapter includes: Overview “Configuring for NIC Partitioning” on page 208 Overview NIC partitioning (NPAR) divides a Marvell BCM57xx and BCM57xxx 10-gigabit Ethernet NIC into multiple virtual NICs by having multiple PCI physical functions per port. Each PCI function is associated with a different virtual NIC. To the OS and the network, each physical function appears as a separate NIC port.
12–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning Supported Operating Systems for NIC Partitioning The Marvell BCM57xx and BCM57xxx 10-gigabit Ethernet adapters support NIC partitioning on the following operating systems:

Windows: 2012 Server and later family; 2016 Server; 2019 Server
Linux: RHEL 8.x and later family; RHEL 7.x and later family; SLES 12.x and later family; SLES 15.x and later family
VMware: ESX 6.
12–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning NIC partitioning can also be configured using pre-boot CCM, Linux and Windows QCC GUI, Linux and Windows QCS CLI, and the VMware QCC vSphere GUI plug-in. See the respective user's guides for more information. NOTE In NPAR mode, SR-IOV cannot be enabled on any partition or PF (VNIC) on which storage offload (FCoE or iSCSI) is configured. This does not apply to adapters in Single Function (SF) mode.
12–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning Table 12-2 describes the functions available from the PF# X window.
Table 12-2. Function Description
Function                 Description                                    Option
Ethernet Protocol        Enables and disables the Ethernet protocol.    Enable, Disable
iSCSI Offload Protocol   Enables and disables the iSCSI protocol.       Enable, Disable
FCoE Offload Protocol    Enables and disables the FCoE protocol.
12–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning Consider this example configuration: Four functions (or partitions) are configured with a total of six protocols, as shown in the following. Function 0 Ethernet FCoE Function 1 Ethernet Function 2 Ethernet Function 3 Ethernet iSCSI 1. If Relative Bandwidth Weight is configured as “0” for all four physical functions (PFs), all six offloads share the bandwidth equally.
13 Fibre Channel Over Ethernet Fibre Channel over Ethernet (FCoE) information includes: Overview “FCoE Boot from SAN” on page 213 “Configuring FCoE” on page 245 “N_Port ID Virtualization (NPIV)” on page 247 Overview In today’s data center, multiple networks, including network attached storage (NAS), management, IPC, and storage, are used to achieve the performance and versatility that you require.
13–Fibre Channel Over Ethernet FCoE Boot from SAN
 Data center bridging (DCB) provides lossless behavior with priority flow control (PFC).
 DCB allocates a share of link bandwidth to FCoE traffic with enhanced transmission selection (ETS).
 DCB consolidates storage, management, computing, and communications fabrics onto a single physical fabric that is simpler to deploy, upgrade, and maintain than standard Ethernet networks.
13–Fibre Channel Over Ethernet FCoE Boot from SAN Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM) CCM is available only when the system is set to legacy boot mode; it is not available when the system is set to UEFI boot mode. The UEFI device configuration pages are available in both modes. 1. Invoke the CCM utility during POST. At the QLogic Ethernet Boot Agent banner (Figure 13-1), press the CTRL+S keys. Figure 13-1. Invoking the CCM Utility 2.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 3. Ensure that DCB and DCBX are enabled on the device (Figure 13-3). FCoE boot is only supported on DCBX-capable configurations. As such, DCB and DCBX must be enabled, and the directly attached link peer must also be DCBX-capable with parameters that allow for full DCBX synchronization. Figure 13-3. CCM Device Hardware Configuration 4.
13–Fibre Channel Over Ethernet FCoE Boot from SAN For all other devices, use the CCM MBA Configuration Menu to set the Boot Protocol option to FCoE (Figure 13-4). Figure 13-4. CCM MBA Configuration Menu 5. Configure the boot target and LUN. From the Target Information menu, select the first available path (Figure 13-5). Figure 13-5.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 6. Enable the Connect option, and then enter the target WWPN and Boot LUN information for the target to be used for boot (Figure 13-6). Figure 13-6. CCM Target Parameters The target information shows the changes (Figure 13-7). Figure 13-7.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 7. Press the ESC key until prompted to exit and save changes. To exit CCM, restart the system, and apply changes, press the CTRL+ALT+DEL keys. 8. Proceed to OS installation after storage access has been provisioned in the SAN. Preparing Marvell Multiple Boot Agent for FCoE Boot (UEFI) To prepare the Marvell multiple boot agent for FCoE boot (UEFI): 1.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 5. In the FCoE Configuration menu, select FCoE General Parameters. The FCoE General Parameters menu appears (see Figure 13-9). Figure 13-9. FCoE Boot Configuration Menu, FCoE General Parameters 6. In the FCoE General Parameters menu: a. Select the desired Boot to FCoE Target mode (see One-Time Disabled).
13–Fibre Channel Over Ethernet FCoE Boot from SAN Provisioning Storage Access in the SAN Storage access consists of zone provisioning and storage selective LUN presentation, each of which is commonly provisioned per initiator WWPN.
13–Fibre Channel Over Ethernet FCoE Boot from SAN When the initiator boot starts, it begins DCBX sync, FIP Discovery, Fabric Login, Target Login, and LUN readiness checks. As each of these phases completes, if the initiator is unable to proceed to the next phase, MBA presents the option to press the CTRL+R keys. 3. Press the CTRL+R keys. 4.
13–Fibre Channel Over Ethernet FCoE Boot from SAN For OS installation over the FCoE path, you must instruct the Option ROM to bypass FCoE and skip to CD or DVD installation media. As instructed in “Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM)” on page 214, the boot order must be configured with Marvell boot first and installation media second. Furthermore, during OS installation, it is necessary to bypass the FCoE boot and pass through to the installation media for boot.
13–Fibre Channel Over Ethernet FCoE Boot from SAN Windows Server 2012, 2012 R2, 2016, and 2019 FCoE Boot Installation Windows Server 2012, 2012 R2, and 2016 boot from SAN installation requires the use of a “slipstream” DVD or ISO image with the latest Marvell drivers injected (see “Injecting (Slipstreaming) Marvell Drivers into Windows Image Files” on page 124). Also, refer to the Microsoft Knowledge Base topic KB974072 at support.microsoft.
13–Fibre Channel Over Ethernet FCoE Boot from SAN e. Click Installation to proceed (Figure 13-11). Figure 13-11.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 2. Follow the prompts to choose the driver update medium (Figure 13-12) and load the drivers (Figure 13-13). Figure 13-12. Selecting Driver Update Medium Figure 13-13. Loading the Drivers 3. After the driver update is complete, select Next to continue with OS installation.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 4. When requested, click Configure FCoE Interfaces (Figure 13-14). Figure 13-14.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 5. Ensure that FCoE Enable is set to yes on the 10GbE Marvell initiator ports that you want to use as the SAN boot paths (Figure 13-15). Figure 13-15. Enabling FCoE 6. For each interface to be enabled for FCoE boot: a. Click Change Settings. b. On the Change FCoE Settings window (Figure 13-16), ensure that FCoE Enable and Auto_VLAN are set to yes. c. Ensure that DCB Required is set to no. d. Click Next to save the settings.
13–Fibre Channel Over Ethernet FCoE Boot from SAN Figure 13-16. Changing FCoE Settings 7. For each interface to be enabled for FCoE boot: a. Click Create FCoE VLAN Interface. b. On the VLAN interface creation dialog box, click Yes to confirm and trigger automatic FIP VLAN discovery. If successful, the VLAN is displayed under FCoE VLAN Interface. If no VLAN is visible, check your connectivity and switch configuration.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 8. After completing the configuration of all interfaces, click OK to proceed (Figure 13-17). Figure 13-17. FCoE Interface Configuration 9. Click Next to continue installation.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 10. YaST2 prompts you to activate multipath. Answer as appropriate (Figure 13-18). Figure 13-18. Disk Activation 11. Continue installation as usual.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 12. On the Expert page on the Installation Settings window, click Booting (Figure 13-19). Figure 13-19.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 13. Click the Boot Loader Installation tab, and then select Boot Loader Installation Details. Make sure you have one boot loader entry here; delete all redundant entries (Figure 13-20). Figure 13-20. Boot Loader Device Map 14. Click OK to proceed and complete installation. RHEL 6 Installation To install Linux FCoE boot on RHEL 6: 1. Boot from the installation medium. Instructions vary for RHEL 6.3 and 6.4. For RHEL 6.3: a.
13–Fibre Channel Over Ethernet FCoE Boot from SAN For details about installing the Anaconda update image, refer to the Red Hat Installation Guide, Section 28.1.3: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/ap-admin-options.html#sn-boot-options-update For RHEL 6.4 and later: No updated Anaconda is required. a. On the installation splash window, press the TAB key. b. Add the dd option to the boot command line, as shown in Figure 13-21. c.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 2. When prompted Do you have a driver disk, select Yes (Figure 13-22). NOTE RHEL does not allow driver update media to be loaded through the network when installing driver updates for network devices. Use local media. Figure 13-22. Selecting a Driver Disk 3. When drivers are loaded, proceed with installation. 4. When prompted, select Specialized Storage Devices. 5. Click Add Advanced Target.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 6. Select Add FCoE SAN, and then click Add drive (Figure 13-23). Figure 13-23. Adding FCoE SAN Drive 7. For each interface intended for FCoE boot, select the interface, clear the Use DCB check box, select Use auto vlan, and then click Add FCoE Disk(s) (Figure 13-24). Figure 13-24. Configuring FCoE Parameters 8. Repeat the preceding steps for all initiator ports.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 9. Confirm that all FCoE disks are visible on the Multipath Devices or Other SAN Devices pages (Figure 13-25). Figure 13-25. Confirming FCoE Disks 10. Click Next to proceed. 11. Click Next and complete installation as usual. Upon completion of installation, the system reboots. 12. When booted, ensure all boot path devices are set to start on boot. Set ONBOOT=yes in each network interface configuration file under /etc/sysconfig/network-scripts. 13.
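The boot-path startup check above can be scripted. The following is a minimal sketch, assuming RHEL-style ifcfg files; the directory argument and interface names are examples, not values from this guide:

```shell
# Hypothetical helper: ensure each boot-path interface is set to start at boot.
# Usage: set_onboot_yes <network-scripts dir> <ifname>...
set_onboot_yes() {
    dir="$1"; shift
    for ifname in "$@"; do
        cfg="$dir/ifcfg-$ifname"
        [ -f "$cfg" ] || continue
        if grep -q '^ONBOOT=' "$cfg"; then
            # Replace any existing ONBOOT setting with "yes".
            sed -i 's/^ONBOOT=.*/ONBOOT=yes/' "$cfg"
        else
            echo 'ONBOOT=yes' >> "$cfg"
        fi
    done
}
# On a live system: set_onboot_yes /etc/sysconfig/network-scripts eth0 eth1
```

Run it once per boot-path NIC, then verify the files before rebooting.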
13–Fibre Channel Over Ethernet FCoE Boot from SAN 3. Add the option dd to the boot command line, as shown in Figure 13-26. Figure 13-26. Adding the “dd” Installation Option 4. Press the ENTER key to proceed. 5. At the Driver disk device selection prompt: a. Refresh the device list by pressing the R key. b. Type the appropriate number for your media. c. Press the C key to continue.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 12. On the Installation Destination window (Figure 13-27) under Other Storage Options, select your Partitioning options, and then click Done. Figure 13-27. Selecting Partitioning Options 13. On the Installation Summary window, click Begin Installation. Linux: Adding Boot Paths RHEL requires updates to the network configuration when adding new boot paths through an FCoE initiator that was not configured during installation. RHEL 6.2 and Later On RHEL 6.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 3. Create a /etc/fcoe/cfg- file for each new FCoE initiator by duplicating the /etc/fcoe/cfg- file that was already configured during initial installation. 4. Issue the following command: nm-connection-editor 5. In nm-connection-editor: a. Open Network Connection and choose each new interface. b. Configure each interface as needed, including DHCP settings. c. Click Apply to save.
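The duplication in step 3 amounts to a file copy, because the open-fcoe tools bind each /etc/fcoe/cfg-<interface> file to its interface by file name alone. The following is a hedged sketch; the interface names eth4 and eth5 are examples, not values from this guide:

```shell
# Hypothetical helper for step 3: clone the working FCoE config for a new
# initiator port. The cfg file's name, not its contents, selects the interface.
clone_fcoe_cfg() {
    dir="$1" src_if="$2" dst_if="$3"
    cp "$dir/cfg-$src_if" "$dir/cfg-$dst_if"
}
# On a live system:
#   clone_fcoe_cfg /etc/fcoe eth4 eth5
#   service fcoe restart    # pick up the new interface
```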
13–Fibre Channel Over Ethernet FCoE Boot from SAN 4. On the Select a Disk window (Figure 13-28), scroll to the boot LUN for installation, and then press ENTER to continue. Figure 13-28. ESXi Disk Selection 5. On the ESXi and VMFS Found window (Figure 13-29), select the installation method. Figure 13-29. ESXi and VMFS Found 6. Follow the prompts to: a. Select the keyboard layout. b. Enter and confirm the root password.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 7. On the Confirm Install window (Figure 13-30), press the F11 key to confirm the installation and repartition. Figure 13-30. ESXi Confirm Install 8. After successful installation (Figure 13-31), press ENTER to reboot. Figure 13-31. ESXi Installation Complete 9. On 57800 and 57810 boards, the management network is not vmnic0.
13–Fibre Channel Over Ethernet FCoE Boot from SAN 10. For BCM57800 and BCM57810 boards, the FCoE boot devices must have a separate vSwitch other than vSwitch0. This switch allows DHCP to assign the IP address to the management network rather than to the FCoE boot device. To create a vSwitch for the FCoE boot devices, add the boot device vmnics in vSphere Client on the Configuration page under Networking. Figure 13-33 shows an example. Figure 13-33.
13–Fibre Channel Over Ethernet Booting from SAN After Installation Booting from SAN After Installation After boot configuration and OS installation are complete, you can reboot and test the installation. On this and all future reboots, no other user interactivity is required. Ignore the CTRL+D prompt and allow the system to boot through to the FCoE SAN LUN, as shown in Figure 13-34. Figure 13-34.
13–Fibre Channel Over Ethernet Booting from SAN After Installation 3. Issue the following command to update the ramdisk: On RHEL 6.x systems, issue: dracut --force On SLES 11 SPx systems, issue: mkinitrd 4. If you are using a different name for the initrd under /boot: a. Overwrite it with the default, because dracut/mkinitrd updates the ramdisk with the default original name. b.
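The ramdisk update above can be wrapped in one distribution-aware helper. The following is a hedged sketch; it assumes the stock dracut/mkinitrd tools and the default initrd name:

```shell
# Hedged sketch: regenerate the initial ramdisk after adding boot paths.
# Rebuilds under the default initrd name, as noted above.
rebuild_ramdisk() {
    if command -v dracut >/dev/null 2>&1; then
        dracut --force     # RHEL 6.x: rebuild the initramfs for the running kernel
    elif command -v mkinitrd >/dev/null 2>&1; then
        mkinitrd           # SLES 11 SPx: rebuild the initrd
    else
        echo "no ramdisk tool found" >&2
        return 1
    fi
}
# Run rebuild_ramdisk as root, then reboot to exercise the new boot path.
```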
13–Fibre Channel Over Ethernet Configuring FCoE To avoid any of the preceding error messages, you must ensure that there is no USB flash drive attached until the setup asks for the drivers. When you load the drivers and see your SAN disks, detach or disconnect the USB flash drive immediately before selecting the disk for further installation. Configuring FCoE By default, DCB is enabled on BCM57712/578xx FCoE-, DCB-compatible C-NICs. The BCM57712/578xx FCoE requires a DCB-enabled interface.
13–Fibre Channel Over Ethernet Configuring FCoE To enable and disable the FCoE-offload instance on Windows using QCC GUI: 1. Open QCC GUI. 2. In the tree pane on the left, under the port node, select the port’s virtual bus device instance. 3. In the configuration pane on the right, click the Resource Config tab. The Resource Config page appears (see Figure 13-36). Figure 13-36. Resource Config Page 4.
13–Fibre Channel Over Ethernet N_Port ID Virtualization (NPIV) 5. (optional) To enable or disable FCoE-Offload or iSCSI-Offload in single function or NPAR mode on Windows or Linux using QCS CLI, see the User’s Guide, QLogic Control Suite CLI (part number BC0054511-00). To enable or disable FCoE-Offload or iSCSI-Offload in single function or NPAR mode on Windows or Linux using the QCC PowerKit, see the User’s Guide, PowerShell (part number BC0054518-00).
14 Data Center Bridging This chapter provides the following information about the data center bridging feature: Overview “DCB Capabilities” on page 249 “Configuring DCB” on page 250 “DCB Conditions” on page 250 “Data Center Bridging in Windows Server 2012 and Later” on page 251 Overview Data center bridging (DCB) is a collection of IEEE specified standard extensions to Ethernet to provide lossless data delivery, low latency, and standards-based bandwidth sharing of data center physical
14–Data Center Bridging DCB Capabilities DCB Capabilities DCB capabilities include ETS, PFC, and DCBX, as described in this section. Enhanced Transmission Selection (ETS) Enhanced transmission selection (ETS) provides a common management framework for assignment of bandwidth to traffic classes. Each traffic class or priority can be grouped in a priority group (PG), and it can be considered as a virtual link or virtual interface queue.
14–Data Center Bridging Configuring DCB Data Center Bridging Exchange (DCBX) Data center bridging exchange (DCBX) is a discovery and capability exchange protocol that is used for conveying capabilities and configuration of ETS and PFC between link partners to ensure consistent configuration across the network fabric. In order for two devices to exchange information, one device must be willing to adopt network configuration from the other device.
14–Data Center Bridging Data Center Bridging in Windows Server 2012 and Later In NIC partitioned enabled configurations, ETS (if operational) overrides the Bandwidth Relative (minimum) Weights assigned to each function. Transmission selection weights are per protocol per ETS settings instead. Maximum bandwidths per function are still honored in the presence of ETS.
14–Data Center Bridging Data Center Bridging in Windows Server 2012 and Later To revert to standard QCS CLI or QCC GUI control over the Marvell DCB feature set, uninstall the Microsoft QoS feature or disable quality of service in the QCS CLI, QCC GUI, or Device Manager NDIS advance properties page. NOTE Marvell recommends that you do not install the DCB feature if SR-IOV will be used.
15 SR-IOV This chapter provides information about single-root I/O virtualization (SR-IOV): Overview Enabling SR-IOV “Verifying that SR-IOV is Operational” on page 256 “SR-IOV and Storage Functionality” on page 257 “SR-IOV and Jumbo Packets” on page 257 Overview Virtualization of network controllers allows users to consolidate their networking hardware resources and run multiple virtual machines concurrently on consolidated hardware.
15–SR-IOV Enabling SR-IOV To enable SR-IOV: 1. Enable the feature on the adapter using either QCC GUI, QCS CLI, QCC PowerKit, Dell pre-boot UEFI, or pre-boot CCM. If using Windows QCC GUI: a. Select the network adapter in the Explorer View pane. Click the Configuration tab and select SR-IOV Global Enable. b.
15–SR-IOV Enabling SR-IOV g. If in SR-IOV (with NPAR mode), each partition has a separate Number of VFs Per PF control window. Press ESC to return to the Main Configuration Page, and then select the NIC Partitioning Configuration menu (which appears only if NPAR mode is selected in the Virtualization Mode control). In the NIC Partitioning Configuration page, select each Partition “N” Configuration menu and set the Number of VFs per PF control.
15–SR-IOV Verifying that SR-IOV is Operational In ESX: a. Install one of the following drivers: bnx2x (ESXi 6.5 or earlier) qfle3 (ESXi 6.5 or later) b. Ensure that the lspci command output on ESXi lists the desired adapter. c. From lspci, select the 10G NIC sequence number for which SR-IOV is required. For example: ~ # lspci | grep -i Broadcom 0000:03:00.0 Network Controllers: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet [vmnic0] Following is a sample output. 0000:03:00.
15–SR-IOV SR-IOV and Storage Functionality To verify SR-IOV in VMware vSphere 6.0 U2 Web Client: 1. Confirm that the VFs appear as regular VMDirectPath devices by selecting Host, Manage, Settings, Hardware, and then PCI Devices. 2. Right-click VM, Edit settings, New Device, Select Network, and Add. Click New Network and then select SR-IOV as the adapter type. Click OK. To verify SR-IOV in ESXi CLI: 1. Issue the lspci command: ~ # lspci | grep -i ether Following is a sample output. 0000:03:01.
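On a Linux (non-ESXi) host, an equivalent check uses the kernel's standard sysfs SR-IOV interface. The following is a hedged sketch; the interface name and VF count are examples, and root privileges plus BIOS/adapter SR-IOV support are assumed:

```shell
# Hedged sketch: instantiate VFs through sysfs, then list them with lspci.
# SYSFS_NET is overridable so the helper can be exercised off-target.
enable_vfs() {
    base="${SYSFS_NET:-/sys/class/net}"
    dev="$base/$1/device"
    want="$2"
    total=$(cat "$dev/sriov_totalvfs")   # maximum VFs the PF supports
    if [ "$want" -gt "$total" ]; then
        echo "PF supports only $total VFs" >&2
        return 1
    fi
    echo "$want" > "$dev/sriov_numvfs"   # instantiate the VFs
}
# enable_vfs eth0 4 && lspci | grep -i "virtual function"
```

Each VF then appears as its own PCI function in the lspci output.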
15–SR-IOV SR-IOV and Jumbo Packets If there is a mismatch in the values, the SR-IOV function is shown as being in the degraded state in Hyper-V, Networking Status.
16 Specifications Specifications, characteristics, and requirements include: 10/100/1000BASE-T and 10GBASE-T Cable Specifications “Interface Specifications” on page 262 “NIC Physical Characteristics” on page 263 “NIC Power Requirements” on page 263 “Wake on LAN Power Requirements” on page 264 “Environmental Specifications” on page 265 10/100/1000BASE-T and 10GBASE-T Cable Specifications Table 16-1.
16–Specifications 10/100/1000BASE-T and 10GBASE-T Cable Specifications Table 16-2. 10GBASE-T Cable Specifications
Port Type     Connector   Media          Maximum Distance
10GBASE-T a   RJ45        CAT-6 a UTP    131ft (40m)
                          CAT-6A a UTP   328ft (100m)
a 10GBASE-T signaling requires four twisted pairs of CAT-6 or CAT-6A (augmented CAT-6) balanced cabling, as specified in ISO/IEC 11801:2002 and ANSI/TIA/EIA-568-B
Supported SFP+ Modules Per NIC Table 16-3.
16–Specifications 10/100/1000BASE-T and 10GBASE-T Cable Specifications Table 16-4. BCM57810 Supported Modules Module Type Optic Modules (SR) Direct Attach Cables Dell Part Number Module Vendor Module Part Number W365M Avago AFBR-703SDZ-D1 N743D Finisar Corp. FTLX8571D3BCL R8H2F Intel Corp. AFBR-703SDZ-IN2 R8H2F Intel Corp. FTLX8571D3BCV-IT K585N Cisco-Molex Inc. 74752-9093 J564N Cisco-Molex Inc. 74752-9094 H603N Cisco-Molex Inc. 74752-9096 G840N Cisco-Molex Inc.
16–Specifications Interface Specifications Table 16-5. BCM57840 Supported Modules Module Type Optic Modules (SR) Direct Attach Cables Dell Part Number Module Vendor R8H2F Module Part Number Intel Corp. AFBR-703SDZ-IN2 Intel Corp. FTLX8571D3BCV-IT K585N Cisco-Molex Inc. 74752-9093 J564N Cisco-Molex Inc. 74752-9094 H603N Cisco-Molex Inc. 74752-9096 G840N Cisco-Molex Inc.
16–Specifications NIC Physical Characteristics NIC Physical Characteristics Table 16-8. NIC Physical Characteristics
NIC Type                               NIC Length       NIC Width
BCM57810S PCI Express x8 low profile   6.6in (16.8cm)   2.54in (6.5cm)
NIC Power Requirements Table 16-9. BCM957810A1006G NIC Power Requirements
Link             NIC 12V Current Draw (A)   NIC 3.3V Current Draw (A)   NIC Power (W) a
10G SFP Module   1.00                       0.004                       12.0
a Power, measured in watts (W), is a direct calculation of total current draw (A) multiplied by voltage (V).
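The footnote's arithmetic can be checked directly, using the 12V and 3.3V current-draw values from the table row above:

```shell
# Power (W) = 12V rail draw x 12 + 3.3V rail draw x 3.3, per the footnote.
awk 'BEGIN { printf "%.1f\n", 12 * 1.00 + 3.3 * 0.004 }'   # prints 12.0
```

The result matches the 12.0W entry in the NIC Power column.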
16–Specifications Wake on LAN Power Requirements Table 16-11. BCM957840A4006G Mezzanine Card Power Requirements
Link                   Total Power (12V and 3.3VAUX) (W) a
10G SFP+               12.0
Standby WoL Enabled    5.0
Standby WoL Disabled   0.5
a Power, measured in watts (W), is a direct calculation of total current draw (A) multiplied by voltage (V). The maximum power consumption for the adapter will not exceed 25W. Table 16-12. BCM957840A4007G Mezzanine Card Power Requirements Link a Total Power (3.
16–Specifications Environmental Specifications Environmental Specifications Table 16-13. BCM5709 and BCM5716 Environmental Specifications Parameter Condition Operating Temperature 32°F to 131°F (0°C to 55°C) Air Flow Requirement (LFM) 0 Storage Temperature –40°F to 149°F (–40°C to 65°C) Storage Humidity 5% to 95% condensing Vibration and Shock IEC 68, FCC Part 68.302, NSTA, 1A Electrostatic/Electromagnetic Susceptibility EN 61000-4-2, EN 55024 Table 16-14.
16–Specifications Environmental Specifications Table 16-16. BCM957840A4007G Environmental Specifications Parameter Condition Operating Temperature 32°F to 149°F (0°C to 65°C) Air Flow Requirement (LFM) 200 Storage Temperature –40°F to 149°F (–40°C to 65°C) Storage Humidity 5% to 95% condensing Vibration and Shock IEC 68, FCC Part 68.
17 Regulatory Information Regulatory information covered in this chapter includes the following: Product Safety AS/NZS (C-Tick) “FCC Notice” on page 268 “VCCI Notice” on page 270 “CE Notice” on page 275 “Canadian Regulatory Information (Canada Only)” on page 276 “Korea Communications Commission (KCC) Notice (Republic of Korea Only)” on page 278 “BSMI” on page 281 “Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G (E03D001)” on page 281 Pr
17–Regulatory Information FCC Notice FCC Notice FCC, Class B Marvell BCM57xx and BCM57xxx gigabit Ethernet controller BCM95708A0804F BCM95709A0907G BCM95709A0906G BCM957810A1008G Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA The equipment complies with Part 15 of the FCC Rules.
17–Regulatory Information FCC Notice FCC, Class A Marvell BCM57xx and BCM57xxx gigabit Ethernet controller: BCM95709A0916G Marvell BCM57xx and BCM57xxx 10-gigabit Ethernet controller: BCM957800 BCM957710A1022G BCM957710A1021G BCM957711A1113G BCM957711A1102G BCM957810A1006G BCM957840A4006G BCM957840A4007G Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA This device complies with Part 15 of the FCC Rules.
17–Regulatory Information VCCI Notice Do not make mechanical or electrical modifications to the equipment. NOTE If the device is changed or modified without permission of Marvell, the user may void his or her authority to operate the equipment. VCCI Notice The following tables provide the VCCI notice physical specifications for the Marvell BCM57xx and BCM57xxx adapters for Dell. Table 17-1.
17–Regulatory Information VCCI Notice Table 17-2. Marvell 57800S Quad RJ-45, SFP+, or Direct Attach Rack Network Daughter Card Physical Characteristics (Continued) Item Connectors Description Two ports SFP+ (10GbE) Two ports RJ45 (1GbE) Certifications RoHS, FCC A, UL, CE, VCCI, BSMI, C-Tick, KCC, TUV, and ICES-003 Table 17-3. Marvell 57810S Dual 10GBASE-T PCI-e Card Physical Characteristics Item Description Ports Dual 10Gbps BASE-T Ethernet ports Form Factor PCI Express short, low-profile card 6.
17–Regulatory Information VCCI Notice Table 17-4. Marvell 57810S Dual SFP+ or Direct Attach PCIe Physical Characteristics (Continued) Item Supported Servers Description 13th Generation: R630, R730, R730xd, and T630 12th Generation: R220, R320, R420, R520, R620, R720, R720xd, R820, R920, T420, and T620 Certifications RoHS, FCC A, UL, CE, VCCI, BSMI, C-Tick, KCC, TUV, and ICES-003 Table 17-5.
17–Regulatory Information VCCI Notice Table 17-7. Marvell 57840S Quad 10GbE SFP+ or Direct Attach Rack Network Daughter Card Physical Characteristics Item Description Ports Dual 10Gbps Ethernet Form Factor PCI Express short, low-profile card 6.60in×2.71in (67.64mm×68.
17–Regulatory Information VCCI Notice The equipment is a Class B product based on the standard of the Voluntary Control Council for Interference from Information Technology Equipment (VCCI). If used near a radio or television receiver in a domestic environment, it may cause radio interference. Install and use the equipment according to the instruction manual.
17–Regulatory Information CE Notice VCCI Class A Statement (Japan) CE Notice Marvell BCM57xx and BCM57xxx gigabit Ethernet controller BCM95708A0804F BCM95709A0907G BCM95709A0906G BCM95709A0916G BCM957810A1008G Marvell BCM57xx and BCM57xxx 10-gigabit Ethernet controller BCM957710A1022G BCM957710A1021G BCM957711A1113G BCM957711A1102G BCM957840A4006G BCM957840A4007G This product has been determined to be in compliance with 2006/95/EC (Low Voltage Directive), 2004/108/EC (EMC Directi
17–Regulatory Information Canadian Regulatory Information (Canada Only) Canadian Regulatory Information (Canada Only) Industry Canada, Class B Marvell BCM57xx and BCM57xxx gigabit Ethernet controller BCM95708A0804F BCM95709A0907G BCM95709A0906G Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA This Class B digital apparatus complies with Canadian ICES-003.
17–Regulatory Information Canadian Regulatory Information (Canada Only) Industry Canada, classe B Marvell BCM57xx and BCM57xxx gigabit Ethernet controller BCM95708A0804F BCM95709A0907G BCM95709A0906G Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA Cet appareil numérique de la classe B est conforme à la norme canadienne ICES-003.
17–Regulatory Information Korea Communications Commission (KCC) Notice (Republic of Korea Only) Korea Communications Commission (KCC) Notice (Republic of Korea Only) B Class Device Marvell BCM57xx and BCM57xxx gigabit Ethernet controller BCM95708A0804F BCM95709A0907G BCM95709A0906G Marvell Semiconductor, Inc.
17–Regulatory Information Korea Communications Commission (KCC) Notice (Republic of Korea Only) Note that this device has been approved for non-business purposes and may be used in any environment, including residential areas.
17–Regulatory Information Korea Communications Commission (KCC) Notice (Republic of Korea Only) Marvell Semiconductor, Inc.
17–Regulatory Information BSMI BSMI Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G (E03D001) This section is included on behalf of Dell, and Marvell is not responsible for the validity or accuracy of the information.
17–Regulatory Information Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G Canadian Regulatory Information, Class A (Canada) Korea Communications Commission (KCC) Notice (Republic of Korea) FCC Notice FCC, Class A Marvell BCM57xx and BCM57xxx gigabit Ethernet controller BCM95709SA0908G Marvell BCM57xx and BCM57xxx 10-gigabit Ethernet controller BCM957710A1023G BCM957711A1123G (E03D001) E02D001 Dell Inc.
17–Regulatory Information Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G Move the system away from the receiver. Plug the system into a different outlet so that the system and receiver are on different branch circuits. Do not make mechanical or electrical modifications to the equipment. NOTE If the device is changed or modified without permission of Dell Inc, the user may void his or her authority to operate the equipment.
17–Regulatory Information Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G VCCI Class A Statement (Japan) CE Notice Class A Marvell BCM57xx and BCM57xxx gigabit Ethernet controller BCM95709SA0908G Marvell BCM57xx and BCM57xxx 10-gigabit Ethernet controller BCM957710A1023G BCM957711A1123G (E03D001) E02D001 Dell Inc.
17–Regulatory Information Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G Marvell BCM57xx and BCM57xxx 10Gbt Ethernet Controller BCM957710A1023G BCM957711A1123G (E03D001) E02D001 Dell Inc. Worldwide Regulatory Compliance, Engineering and Environmental Affairs One Dell Way PS4-30 Round Rock, Texas 78682, USA 512-338-4400 This Class A digital apparatus complies with Canadian ICES-003.
17–Regulatory Information Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G Korea Communications Commission (KCC) Notice (Republic of Korea Only) A Class Device Marvell BCM57xx and BCM57xxx gigabit Ethernet controller BCM95709SA0908G (5709s-mezz) Marvell BCM57xx and BCM57xxx 10-gigabit Ethernet controller BCM957710A1023G BCM957711A1123G (E03D001) E02D001 Dell Inc.
18 Troubleshooting Troubleshooting topics cover the following: Hardware Diagnostics “Checking Port LEDs” on page 290 “Troubleshooting Checklist” on page 290 “Checking if Current Drivers Are Loaded” on page 291 “Running a Cable Length Test” on page 292 “Testing Network Connectivity” on page 292 “Microsoft Virtualization with Hyper-V” on page 293 “Removing the Marvell BCM57xx and BCM57xxx Device Drivers” on page 296 “Upgrading Windows Operating Systems” on page 297 “Mar
18–Troubleshooting Hardware Diagnostics QCS CLI and QCC GUI Diagnostic Tests Failures If any of the following tests fail while running the diagnostic tests from QCS CLI or QCC GUI, this may indicate a hardware issue with the NIC or LOM that is installed in the system. Control Registers MII Registers EEPROM Internal Memory On-Chip CPU Interrupt Loopback - MAC Loopback - PHY Test LED Troubleshooting steps that may help correct the failure: 1.
18–Troubleshooting Checking Port LEDs Checking Port LEDs To check the state of the network link and activity, see “Network Link and Activity Indication” on page 6. Troubleshooting Checklist CAUTION Before you open the cabinet of your server to add or remove the adapter, review “Safety Precautions” on page 19. The following checklist provides recommended actions to take to resolve problems installing the Marvell BCM57xx and BCM57xxx adapter or running it in your system.
18–Troubleshooting Checking if Current Drivers Are Loaded Checking if Current Drivers Are Loaded Follow the appropriate procedure for your operating system to confirm if the current drivers are loaded. Windows See the QCC GUI online help for information on viewing vital information about the adapter, link status, and network connectivity. Linux To verify that the bnx2.
The following is sample output:

driver: bnx2x
version: 1.78.07
firmware-version: bc 7.8.6
bus-info: 0000:04:00.2

If you loaded a new driver but have not yet rebooted, the modinfo command does not show the updated driver information.
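Fields in that output can be picked out programmatically. A sketch, piping in the guide's sample output (on a live system you would pipe `ethtool -i <interface>` instead; the interface name is system-specific):

```shell
# Print only the driver and firmware-version fields from
# `ethtool -i`-style output.
printf 'driver: bnx2x\nversion: 1.78.07\nfirmware-version: bc 7.8.6\nbus-info: 0000:04:00.2\n' |
  awk -F': ' '$1 == "driver" || $1 == "firmware-version" { print $1 "=" $2 }'
```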
Linux
To verify that the Ethernet interface is up and running, issue ifconfig to check the status of the Ethernet interface. You can use netstat -i to check the statistics on the Ethernet interface. For information on ifconfig and netstat, see Chapter 7, Linux Driver Software.

Ping an IP host on the network to verify that a connection has been established. From the command line, issue the ping command, and then press ENTER.
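A script can act on the ping result either through ping's exit status or by parsing its summary line. A sketch; the sample summary line below is piped in as a stand-in for real ping output:

```shell
# Simplest form on a live system (ping exits nonzero on no replies):
#   ping -c 4 <host> && echo "connectivity OK"

# Parse the packet-loss figure from ping's summary line:
printf '4 packets transmitted, 4 received, 0%% packet loss, time 3004ms\n' |
  awk -F', ' '{
    loss = $3
    sub(/% packet loss/, "", loss)
    if (loss + 0 == 0) print "connectivity OK"
    else               print "packet loss: " loss "%"
  }'
```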
Table 18-1. Configurable Network Adapter Hyper-V Features (Continued)

Feature       Supported in Windows Server Version 2012 and Later   Comments and Limitations
Jumbo frames  Yes                                                  * OS limitation.
RSS           Yes                                                  * OS limitation.
RSC           Yes                                                  * OS limitation.
SR-IOV        Yes                                                  * OS limitation.

NOTE
For full functionality, ensure that Integrated Services, which is a component of Hyper-V, is installed in the guest operating system (child partition).
Teamed Network Adapters

Table 18-2 identifies Hyper-V supported features that are configurable for BCM57xx and BCM57xxx teamed network adapters. This table is not an all-inclusive list of Hyper-V features. The Marvell QLASP NIC teaming driver is not supported in Windows Server 2016 or later.

Table 18-2.
Table 18-2. Configurable Teamed Network Adapter Hyper-V Features (Continued)

Feature                      Supported in Windows Server Version 2012
Virtual machine queue (VMQ)  Yes. See “Configuring VMQ with SLB Teaming” on page 296.
If you manually uninstalled the device drivers with Device Manager and attempted to reinstall the device drivers but could not, run the Repair option from the InstallShield wizard. For information on repairing Marvell BCM57xx and BCM57xxx device drivers, see “Repairing or Reinstalling the Driver Software” on page 99.

Upgrading Windows Operating Systems

This section covers Windows upgrades from Windows Server 2008 R2 to Windows Server 2012.
QLASP

Problem: Adding a Network Load Balancing-enabled BCM57xx and BCM57xxx adapter to a team may cause unpredictable results.
Solution: Before creating the team, unbind Network Load Balancing from the BCM57xx and BCM57xxx adapter, create the team, and then bind Network Load Balancing to the team.

Problem: A system containing an 802.3ad team causes a Netlogon service failure in the system event log and prevents the system from communicating with the domain controller during boot up.
Problem: The advanced properties of a team do not change after you change the advanced properties of an adapter that is a member of the team.
Solution: If an adapter is included as a member of a team and you change any advanced property, you must rebuild the team to ensure that the team’s advanced properties are properly set.

Linux

Problem: On BCM57xx and BCM57xxx devices with SFP+, Flow Control defaults to Off rather than Rx/Tx Enable.
Problem: iSCSI-offload boot from SAN fails to boot after installation.
The iSCSI boot from SAN process is divided into two parts: pre-switch-root and post-switch-root. During pre-switch-root, when the drivers load, the open-iSCSI tool iscsistart establishes the connection with the target and discovers the remote LUN; iscsistart then starts a session using the iBFT information. The iscsistart utility is not designed to manage the connection with the target.
Kernel Debugging Over Ethernet

Problem: When attempting to perform kernel debugging over an Ethernet network on a Windows 8.0 or Windows Server 2012 system, the system does not boot. This problem may occur with some adapters on systems where the Windows 8.0 or Windows Server 2012 OS is configured for unified extensible firmware interface (UEFI) mode.
Miscellaneous

Problem: Performance is degraded when multiple BCM57710 network adapters are used in a system.
Solution: Ensure that the system has at least 2 GB of main memory when using up to four network adapters and 4 GB of main memory when using four or more network adapters.

Problem: The network adapter has shut down and an error message appears indicating that the fan on the network adapter has failed.
Solution: The network adapter was shut down to prevent permanent damage.
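On Linux, the memory guideline above can be checked against /proc/meminfo. A sketch with sample contents piped in (on a live system, pass /proc/meminfo to awk directly):

```shell
# MemTotal in /proc/meminfo is reported in kB; convert to GB and
# compare against the guideline (2 GB for up to four BCM57710
# adapters, 4 GB for four or more).
printf 'MemTotal:        8167332 kB\n' |
  awk '/^MemTotal:/ {
    gb = $2 / 1048576
    if (gb >= 4)      verdict = "meets the 4 GB guideline"
    else if (gb >= 2) verdict = "meets the 2 GB guideline"
    else              verdict = "below the guideline"
    printf "MemTotal: %.1f GB (%s)\n", gb, verdict
  }'
```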
A Revision History

Document Revision History
Revision A, February 18, 2015
Revision B, July 29, 2015
Revision C, March 24, 2016
Revision D, April 8, 2016
Revision E, February 2, 2017
Revision F, August 25, 2017
Revision G, December 19, 2017
Revision H, March 15, 2018
Revision J, April 13, 2018
Revision K, October 25, 2018
Revision L, June 7, 2019
Revision M, October 16, 2019

Changes: In the first paragraph, clarified the last sentence to “These images reside in the adapter’s firmware and provide flexibili
Contact Information Marvell Technology Group http://www.marvell.com Marvell.