User's Guide
Table Of Contents
- Table of Contents
- Preface
- 1 Functionality and Features
- 2 Configuring Teaming in Windows Server
- 3 Virtual LANs in Windows
- 4 Installing the Hardware
- 5 Manageability
- 6 Boot Agent Driver Software
- 7 Linux Driver Software
- Introduction
- Limitations
- Packaging
- Installing Linux Driver Software
- Load and Run Necessary iSCSI Software Components
- Unloading or Removing the Linux Driver
- Patching PCI Files (Optional)
- Network Installations
- Setting Values for Optional Properties
- Driver Defaults
- Driver Messages
- bnx2x Driver Messages
- bnx2i Driver Messages
- BNX2I Driver Sign-on
- Network Port to iSCSI Transport Name Binding
- Driver Completes Handshake with iSCSI Offload-enabled C-NIC Device
- Driver Detects iSCSI Offload Is Not Enabled on the C-NIC Device
- Exceeds Maximum Allowed iSCSI Connection Offload Limit
- Network Route to Target Node and Transport Name Binding Are Two Different Devices
- Target Cannot Be Reached on Any of the C-NIC Devices
- Network Route Is Assigned to Network Interface, Which Is Down
- SCSI-ML Initiated Host Reset (Session Recovery)
- C-NIC Detects iSCSI Protocol Violation - Fatal Errors
- C-NIC Detects iSCSI Protocol Violation—Non-FATAL, Warning
- Driver Puts a Session Through Recovery
- Reject iSCSI PDU Received from the Target
- Open-iSCSI Daemon Handing Over Session to Driver
- bnx2fc Driver Messages
- BNX2FC Driver Signon
- Driver Completes Handshake with FCoE Offload Enabled C-NIC Device
- Driver Fails Handshake with FCoE Offload Enabled C-NIC Device
- No Valid License to Start FCoE
- Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits
- Session Offload Failures
- Session Upload Failures
- Unable to Issue ABTS
- Unable to Recover the IO Using ABTS (Due to ABTS Timeout)
- Unable to Issue I/O Request Due to Session Not Ready
- Drop Incorrect L2 Receive Frames
- Host Bus Adapter and lport Allocation Failures
- NPIV Port Creation
- Teaming with Channel Bonding
- Statistics
- Linux iSCSI Offload
- 8 VMware Driver Software
- Introduction
- Packaging
- Download, Install, and Update Drivers
- Driver Parameters
- FCoE Support
- iSCSI Support
- 9 Windows Driver Software
- Supported Drivers
- Installing the Driver Software
- Modifying the Driver Software
- Repairing or Reinstalling the Driver Software
- Removing the Device Drivers
- Viewing or Changing the Properties of the Adapter
- Setting Power Management Options
- Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI
- 10 Citrix XenServer Driver Software
- 11 iSCSI Protocol
- iSCSI Boot
- Supported Operating Systems for iSCSI Boot
- iSCSI Boot Setup
- Configuring the iSCSI Target
- Configuring iSCSI Boot Parameters
- MBA Boot Protocol Configuration
- iSCSI Boot Configuration
- Enabling CHAP Authentication
- Configuring the DHCP Server to Support iSCSI Boot
- DHCP iSCSI Boot Configuration for IPv4
- DHCP iSCSI Boot Configuration for IPv6
- Configuring the DHCP Server
- Preparing the iSCSI Boot Image
- Booting
- Other iSCSI Boot Considerations
- Troubleshooting iSCSI Boot
- iSCSI Crash Dump
- iSCSI Offload in Windows Server
- iSCSI Boot
- 12 Marvell Teaming Services
- Executive Summary
- Teaming Mechanisms
- Teaming and Other Advanced Networking Properties
- General Network Considerations
- Application Considerations
- Troubleshooting Teaming Problems
- Frequently Asked Questions
- Event Log Messages
- 13 NIC Partitioning and Bandwidth Management
- 14 Fibre Channel Over Ethernet
- Overview
- FCoE Boot from SAN
- Preparing System BIOS for FCoE Build and Boot
- Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM)
- Preparing Marvell Multiple Boot Agent for FCoE Boot (UEFI)
- Provisioning Storage Access in the SAN
- One-Time Disabled
- Windows Server 2016/2019/Azure Stack HCI FCoE Boot Installation
- Linux FCoE Boot Installation
- VMware ESXi FCoE Boot Installation
- Booting from SAN After Installation
- Configuring FCoE
- N_Port ID Virtualization (NPIV)
- 15 Data Center Bridging
- 16 SR-IOV
- 17 Specifications
- 18 Regulatory Information
- 19 Troubleshooting
- Hardware Diagnostics
- Checking Port LEDs
- Troubleshooting Checklist
- Checking if Current Drivers Are Loaded
- Running a Cable Length Test
- Testing Network Connectivity
- Microsoft Virtualization with Hyper-V
- Removing the Marvell 57xx and 57xxx Device Drivers
- Upgrading Windows Operating Systems
- Marvell Boot Agent
- Linux
- NPAR
- Kernel Debugging Over Ethernet
- Miscellaneous
- A Revision History
12–Marvell Teaming Services
Application Considerations
The designated path is determined by two factors (a conceptual sketch of both factors follows this list):
- The client-server ARP cache, which points to the backup server MAC address. This address is determined by the Marvell intermediate driver inbound load-balancing algorithm.
- The physical adapter interface on Client-Server Red that transmits the data. The Marvell intermediate driver outbound load-balancing algorithm determines which interface is used (see “Outbound Traffic Flow” on page 160 and “Inbound Traffic Flow (SLB Only)” on page 160).
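The interaction of these two factors can be pictured with a small model. The following Python sketch is purely illustrative and is not the Marvell driver's actual algorithm: the adapter names, MAC addresses, server IP, and the hash used for outbound selection are assumptions made for the example.

```python
import hashlib

# Hypothetical client team with two physical adapters (names are examples;
# real values come from the installed hardware).
CLIENT_ADAPTERS = ["Adapter A", "Adapter B"]

# Factor 1: the client's ARP cache. The backup server's inbound
# load-balancing algorithm chooses which teamed MAC address the client
# learns for the server IP (delivered by G-ARP, as described below).
arp_cache = {"10.0.0.50": "00:10:18:aa:bb:01"}   # server IP -> advertised MAC

# Factor 2: outbound adapter selection on the client. Real drivers hash on
# address information; a stable digest stands in for that hash here.
def outbound_adapter(dst_ip: str) -> str:
    digest = hashlib.sha256(dst_ip.encode()).digest()
    return CLIENT_ADAPTERS[digest[0] % len(CLIENT_ADAPTERS)]

server_ip = "10.0.0.50"
print(f"frames addressed to MAC {arp_cache[server_ip]} "
      f"leave through {outbound_adapter(server_ip)}")
```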
The teamed interface on the backup server transmits a gratuitous address
resolution protocol (G-ARP) frame to Client-Server Red, which in turn causes the
client-server ARP cache to be updated with the backup server MAC address.
The load-balancing mechanism within the teamed interface determines the MAC
address embedded in the G-ARP. The selected MAC address is essentially the
destination for data transfers from the client-server.
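The G-ARP itself is an ordinary broadcast ARP frame in which the sender and target protocol addresses are both the advertised IP address, so that receivers refresh their ARP cache entry for that address. The teaming driver generates it internally; the standalone Python sketch below only illustrates the frame format. The interface name, MAC address, and IP address are placeholders, and AF_PACKET raw sockets are Linux-only and require root privileges.

```python
import socket
import struct

def send_gratuitous_arp(iface: str, src_mac: str, src_ip: str) -> None:
    """Broadcast a gratuitous ARP so that peers refresh their ARP cache
    entry for src_ip with src_mac."""
    mac = bytes.fromhex(src_mac.replace(":", ""))
    ip = socket.inet_aton(src_ip)
    broadcast = b"\xff" * 6

    # Ethernet header: destination, source, EtherType 0x0806 (ARP)
    eth_hdr = broadcast + mac + struct.pack("!H", 0x0806)

    # ARP payload: hardware type 1 (Ethernet), protocol type 0x0800 (IPv4),
    # address lengths 6/4, opcode 1 (request), sender MAC and IP,
    # target MAC zeroed, target IP equal to sender IP (gratuitous)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1) + mac + ip + b"\x00" * 6 + ip

    with socket.socket(socket.AF_PACKET, socket.SOCK_RAW) as s:
        s.bind((iface, 0))
        s.send(eth_hdr + arp)

# Example (placeholder values): advertise Adapter B's MAC for the team IP.
# send_gratuitous_arp("eth1", "00:10:18:aa:bb:02", "192.168.1.100")
```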
On Client-Server Red, the SLB teaming algorithm determines which of the two
adapter interfaces is used to transmit data. In this example, data from
Client-Server Red is received on the backup server Adapter A interface. To
demonstrate the SLB mechanisms when additional load is placed on the teamed
interface, consider the scenario in which the backup server initiates a second
backup operation, so that two are running: one to Client-Server Red and one to
Client-Server Blue. The route that Client-Server Blue uses to send data to the
backup server depends on its ARP cache, which points to the backup server MAC
address. Because Adapter A of the backup server is already under load from its
backup operation with Client-Server Red, the backup server invokes its SLB
algorithm to inform Client-Server Blue (through a G-ARP) to update its ARP
cache to reflect the backup server Adapter B MAC address. When Client-Server
Blue needs to transmit data, it uses either one of its adapter interfaces, as
determined by its own SLB algorithm. What is important is that data from
Client-Server Blue is received by the backup server Adapter B interface, and
not by its Adapter A interface. This behavior matters because, with both backup
streams running simultaneously, the backup server must load balance data
streams from different clients. With both backup streams running, each adapter
interface on the backup server processes an equal load, thus load balancing
data across both adapter interfaces.
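The inbound side of this behavior can be summarized as "advertise the least-loaded team member to each new client." The sketch below is a simplified model of that idea, not the driver's actual bookkeeping; the adapter MAC addresses, the extra client names, and the least-loaded selection rule are assumptions made for the example.

```python
# Hypothetical teamed interface on the backup server: two members with
# per-adapter flow counts (MAC addresses are examples only).
team = {"00:10:18:aa:bb:01": 0,   # Adapter A
        "00:10:18:aa:bb:02": 0}   # Adapter B

def advertise_mac_for(client: str) -> str:
    """Pick the least-loaded team member for a new client and record the
    assignment. In a real SLB team, the chosen MAC would be announced to
    the client with a G-ARP so its ARP cache points at that adapter."""
    mac = min(team, key=team.get)
    team[mac] += 1
    print(f"{client}: G-ARP advertises {mac}")
    return mac

# Two concurrent backup streams land on different adapters; hypothetical
# third and fourth clients continue to alternate between the two members,
# matching the scaling behavior described in the next paragraph.
for client in ("Client-Server Red", "Client-Server Blue",
               "Client-Server Green", "Client-Server Yellow"):
    advertise_mac_for(client)
```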
The same algorithm applies if third and fourth backup operations are initiated
from the backup server. The teamed interface on the backup server transmits a unicast
G-ARP to backup clients to inform them to update their ARP cache. Each client
then transmits backup data along a route to the target MAC address on the
backup server.