User's Guide
Table of Contents
- Table of Contents
- Preface
- 1 Functionality and Features
- 2 Configuring Teaming in Windows Server
- 3 Virtual LANs in Windows
- 4 Installing the Hardware
- 5 Manageability
- 6 Boot Agent Driver Software
- 7 Linux Driver Software
- Introduction
- Limitations
- Packaging
- Installing Linux Driver Software
- Load and Run Necessary iSCSI Software Components
- Unloading or Removing the Linux Driver
- Patching PCI Files (Optional)
- Network Installations
- Setting Values for Optional Properties
- Driver Defaults
- Driver Messages
- bnx2x Driver Messages
- bnx2i Driver Messages
- BNX2I Driver Sign-on
- Network Port to iSCSI Transport Name Binding
- Driver Completes Handshake with iSCSI Offload-enabled C-NIC Device
- Driver Detects iSCSI Offload Is Not Enabled on the C-NIC Device
- Exceeds Maximum Allowed iSCSI Connection Offload Limit
- Network Route to Target Node and Transport Name Binding Are Two Different Devices
- Target Cannot Be Reached on Any of the C-NIC Devices
- Network Route Is Assigned to Network Interface, Which Is Down
- SCSI-ML Initiated Host Reset (Session Recovery)
- C-NIC Detects iSCSI Protocol Violation - Fatal Errors
- C-NIC Detects iSCSI Protocol Violation - Non-Fatal, Warning
- Driver Puts a Session Through Recovery
- Reject iSCSI PDU Received from the Target
- Open-iSCSI Daemon Handing Over Session to Driver
- bnx2fc Driver Messages
- BNX2FC Driver Signon
- Driver Completes Handshake with FCoE Offload Enabled C-NIC Device
- Driver Fails Handshake with FCoE Offload Enabled C-NIC Device
- No Valid License to Start FCoE
- Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits
- Session Offload Failures
- Session Upload Failures
- Unable to Issue ABTS
- Unable to Recover the IO Using ABTS (Due to ABTS Timeout)
- Unable to Issue I/O Request Due to Session Not Ready
- Drop Incorrect L2 Receive Frames
- Host Bus Adapter and lport Allocation Failures
- NPIV Port Creation
- Teaming with Channel Bonding
- Statistics
- Linux iSCSI Offload
- 8 VMware Driver Software
- Introduction
- Packaging
- Download, Install, and Update Drivers
- Driver Parameters
- FCoE Support
- iSCSI Support
- 9 Windows Driver Software
- Supported Drivers
- Installing the Driver Software
- Modifying the Driver Software
- Repairing or Reinstalling the Driver Software
- Removing the Device Drivers
- Viewing or Changing the Properties of the Adapter
- Setting Power Management Options
- Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI
- 10 Citrix XenServer Driver Software
- 11 iSCSI Protocol
- iSCSI Boot
- Supported Operating Systems for iSCSI Boot
- iSCSI Boot Setup
- Configuring the iSCSI Target
- Configuring iSCSI Boot Parameters
- MBA Boot Protocol Configuration
- iSCSI Boot Configuration
- Enabling CHAP Authentication
- Configuring the DHCP Server to Support iSCSI Boot
- DHCP iSCSI Boot Configuration for IPv4
- DHCP iSCSI Boot Configuration for IPv6
- Configuring the DHCP Server
- Preparing the iSCSI Boot Image
- Booting
- Other iSCSI Boot Considerations
- Troubleshooting iSCSI Boot
- iSCSI Crash Dump
- iSCSI Offload in Windows Server
- iSCSI Boot
- 12 Marvell Teaming Services
- Executive Summary
- Teaming Mechanisms
- Teaming and Other Advanced Networking Properties
- General Network Considerations
- Application Considerations
- Troubleshooting Teaming Problems
- Frequently Asked Questions
- Event Log Messages
- 13 NIC Partitioning and Bandwidth Management
- 14 Fibre Channel Over Ethernet
- Overview
- FCoE Boot from SAN
- Preparing System BIOS for FCoE Build and Boot
- Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM)
- Preparing Marvell Multiple Boot Agent for FCoE Boot (UEFI)
- Provisioning Storage Access in the SAN
- One-Time Disabled
- Windows Server 2016/2019/Azure Stack HCI FCoE Boot Installation
- Linux FCoE Boot Installation
- VMware ESXi FCoE Boot Installation
- Booting from SAN After Installation
- Configuring FCoE
- N_Port ID Virtualization (NPIV)
- 15 Data Center Bridging
- 16 SR-IOV
- 17 Specifications
- 18 Regulatory Information
- 19 Troubleshooting
- Hardware Diagnostics
- Checking Port LEDs
- Troubleshooting Checklist
- Checking if Current Drivers Are Loaded
- Running a Cable Length Test
- Testing Network Connectivity
- Microsoft Virtualization with Hyper-V
- Removing the Marvell 57xx and 57xxx Device Drivers
- Upgrading Windows Operating Systems
- Marvell Boot Agent
- Linux
- NPAR
- Kernel Debugging Over Ethernet
- Miscellaneous
- A Revision History
12–Marvell Teaming Services
Executive Summary
Smart Load Balancing and Failover
The Smart Load Balancing and Failover type of team provides both load
balancing and failover when configured for load balancing, and only failover when
configured for fault tolerance. This type of team works with any Ethernet switch
and requires no trunking configuration on the switch. The team advertises multiple
MAC addresses and one or more IP addresses (when using secondary IP
addresses). The team MAC address is selected from the list of load balance
members. When the system receives an ARP request, the software networking
stack always sends an ARP reply containing the team MAC address. To begin the
load balancing process, the teaming driver modifies this ARP reply, changing
the source MAC address to match one of the physical adapters.
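A minimal Python sketch of this rewrite is shown below, assuming the standard
Ethernet/IPv4 ARP payload layout (RFC 826). The MAC addresses are hypothetical
and the driver's actual internal logic is not published, so this is purely
illustrative:

    import struct

    # Hypothetical team and member MAC addresses, for illustration only.
    TEAM_MAC   = bytes.fromhex("02aabbccdd01")
    MEMBER_MAC = bytes.fromhex("02aabbccdd02")

    def rewrite_arp_reply(arp_payload: bytes) -> bytes:
        # In an Ethernet/IPv4 ARP payload, the operation code occupies
        # bytes 6-7 and the sender hardware address (SHA) bytes 8-13.
        op = struct.unpack("!H", arp_payload[6:8])[0]
        if op == 2 and arp_payload[8:14] == TEAM_MAC:  # 2 = ARP reply
            # Substitute the physical MAC of the member port chosen to
            # carry this client's traffic for the advertised team MAC.
            return arp_payload[:8] + MEMBER_MAC + arp_payload[14:]
        return arp_payload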
Smart Load Balancing enables both transmit and receive load balancing based on
the Layer 3 addresses (source and destination IP) and the Layer 4 TCP/UDP port
numbers. In other words,
the load balancing is not done at a byte or frame level but on a TCP/UDP session
basis. This methodology is required to maintain in-order delivery of frames that
belong to the same socket conversation. Load balancing is supported on two to
eight ports. These ports can include any combination of add-in adapters and LAN
on motherboard (LOM) devices.
Transmit load balancing is achieved by creating a hashing table using the source
and destination IP addresses and TCP/UDP port numbers. The same combination
of source and destination IP addresses and TCP/UDP port numbers generally
yields the same hash index and therefore points to the same port in the team. When
a port is selected to carry all the frames of a specific socket, the unique MAC
address of the physical adapter is included in the frame, and not the team MAC
address. This inclusion is required to comply with the IEEE 802.3 standard. If two
adapters transmit using the same MAC address, a duplicate MAC address
situation would occur that the switch could not handle.
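As an illustration of this flow-to-port mapping, the short Python sketch below
hashes the L3/L4 tuple to a member index. The hash function the teaming driver
actually uses is not documented; CRC32 is an assumption made only for this
example:

    import zlib

    def select_member(src_ip: str, dst_ip: str,
                      src_port: int, dst_port: int,
                      num_members: int) -> int:
        # Hash the L3/L4 conversation tuple to one team member index.
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
        return zlib.crc32(key) % num_members

    # Every frame of this session produces the same index, so the whole
    # TCP/UDP conversation stays on one physical port (in-order delivery).
    port = select_member("10.0.0.5", "10.0.0.9", 49152, 80, num_members=4)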
Receive load balancing is achieved through an intermediate driver by sending
gratuitous ARPs on a client-by-client basis using the unicast address of each
client as the destination address of the ARP request (also known as a directed
ARP). This practice is considered client load balancing and not traffic load
balancing. When the intermediate driver detects a significant load imbalance
between the physical adapters in an SLB team, it generates G-ARPs in an effort
to redistribute incoming frames. It is important to understand that receive load
balancing is a function of the number of clients connecting to the system
through the team interface.
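The Python sketch below is a plausible reconstruction of such a directed G-ARP:
the frame is unicast to one client rather than broadcast, and it pairs the team
IP address with the MAC of the newly selected member port. The exact frame the
driver emits is not published, and all addresses here are hypothetical:

    import socket
    import struct

    def build_directed_garp(team_ip: str, member_mac: bytes,
                            client_ip: str, client_mac: bytes) -> bytes:
        # Ethernet header: unicast to the client (directed, not broadcast),
        # sourced from the newly chosen member port. 0x0806 = ARP.
        eth = client_mac + member_mac + struct.pack("!H", 0x0806)
        # ARP request: htype=1 (Ethernet), ptype=0x0800 (IPv4),
        # hlen=6, plen=4, oper=1.
        arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
        arp += member_mac + socket.inet_aton(team_ip)    # sender: team IP, new MAC
        arp += client_mac + socket.inet_aton(client_ip)  # target: the client
        return eth + arp

    # Hypothetical addresses, for illustration only.
    frame = build_directed_garp("10.0.0.7",  bytes.fromhex("02aabbccdd02"),
                                "10.0.0.50", bytes.fromhex("02eeff001122"))

On receiving such a frame, the client updates its ARP cache with the new mapping
and addresses its subsequent frames to that member port's MAC, which moves its
receive traffic onto a different physical adapter.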
NOTE
IPv6 addressed traffic will not be load balanced by SLB because ARP is not a
feature of IPv6.