User's Guide
Table of Contents
- Table of Contents
- Preface
- 1 Functionality and Features
- 2 Configuring Teaming in Windows Server
- 3 Virtual LANs in Windows
- 4 Installing the Hardware
- 5 Manageability
- 6 Boot Agent Driver Software
- 7 Linux Driver Software
  - Introduction
  - Limitations
  - Packaging
  - Installing Linux Driver Software
  - Load and Run Necessary iSCSI Software Components
  - Unloading or Removing the Linux Driver
  - Patching PCI Files (Optional)
  - Network Installations
  - Setting Values for Optional Properties
  - Driver Defaults
  - Driver Messages
    - bnx2x Driver Messages
    - bnx2i Driver Messages
      - BNX2I Driver Sign-on
      - Network Port to iSCSI Transport Name Binding
      - Driver Completes Handshake with iSCSI Offload-enabled C-NIC Device
      - Driver Detects iSCSI Offload Is Not Enabled on the C-NIC Device
      - Exceeds Maximum Allowed iSCSI Connection Offload Limit
      - Network Route to Target Node and Transport Name Binding Are Two Different Devices
      - Target Cannot Be Reached on Any of the C-NIC Devices
      - Network Route Is Assigned to Network Interface, Which Is Down
      - SCSI-ML Initiated Host Reset (Session Recovery)
      - C-NIC Detects iSCSI Protocol Violation - Fatal Errors
      - C-NIC Detects iSCSI Protocol Violation - Non-Fatal, Warning
      - Driver Puts a Session Through Recovery
      - Reject iSCSI PDU Received from the Target
      - Open-iSCSI Daemon Handing Over Session to Driver
    - bnx2fc Driver Messages
      - BNX2FC Driver Sign-on
      - Driver Completes Handshake with FCoE Offload-Enabled C-NIC Device
      - Driver Fails Handshake with FCoE Offload-Enabled C-NIC Device
      - No Valid License to Start FCoE
      - Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits
      - Session Offload Failures
      - Session Upload Failures
      - Unable to Issue ABTS
      - Unable to Recover the I/O Using ABTS (Due to ABTS Timeout)
      - Unable to Issue I/O Request Due to Session Not Ready
      - Drop Incorrect L2 Receive Frames
      - Host Bus Adapter and lport Allocation Failures
      - NPIV Port Creation
  - Teaming with Channel Bonding
  - Statistics
  - Linux iSCSI Offload
- 8 VMware Driver Software
  - Introduction
  - Packaging
  - Download, Install, and Update Drivers
  - Driver Parameters
  - FCoE Support
  - iSCSI Support
- 9 Windows Driver Software
  - Supported Drivers
  - Installing the Driver Software
  - Modifying the Driver Software
  - Repairing or Reinstalling the Driver Software
  - Removing the Device Drivers
  - Viewing or Changing the Properties of the Adapter
  - Setting Power Management Options
  - Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI
- 10 Citrix XenServer Driver Software
- 11 iSCSI Protocol
  - iSCSI Boot
    - Supported Operating Systems for iSCSI Boot
    - iSCSI Boot Setup
      - Configuring the iSCSI Target
      - Configuring iSCSI Boot Parameters
      - MBA Boot Protocol Configuration
      - iSCSI Boot Configuration
      - Enabling CHAP Authentication
      - Configuring the DHCP Server to Support iSCSI Boot
      - DHCP iSCSI Boot Configuration for IPv4
      - DHCP iSCSI Boot Configuration for IPv6
      - Configuring the DHCP Server
      - Preparing the iSCSI Boot Image
      - Booting
    - Other iSCSI Boot Considerations
    - Troubleshooting iSCSI Boot
  - iSCSI Crash Dump
  - iSCSI Offload in Windows Server
  - iSCSI Boot
- 12 Marvell Teaming Services
  - Executive Summary
  - Teaming Mechanisms
  - Teaming and Other Advanced Networking Properties
  - General Network Considerations
  - Application Considerations
  - Troubleshooting Teaming Problems
  - Frequently Asked Questions
  - Event Log Messages
- 13 NIC Partitioning and Bandwidth Management
- 14 Fibre Channel Over Ethernet
  - Overview
  - FCoE Boot from SAN
    - Preparing System BIOS for FCoE Build and Boot
    - Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM)
    - Preparing Marvell Multiple Boot Agent for FCoE Boot (UEFI)
    - Provisioning Storage Access in the SAN
    - One-Time Disabled
    - Windows Server 2016/2019/Azure Stack HCI FCoE Boot Installation
    - Linux FCoE Boot Installation
    - VMware ESXi FCoE Boot Installation
  - Booting from SAN After Installation
  - Configuring FCoE
  - N_Port ID Virtualization (NPIV)
- 15 Data Center Bridging
- 16 SR-IOV
- 17 Specifications
- 18 Regulatory Information
- 19 Troubleshooting
  - Hardware Diagnostics
  - Checking Port LEDs
  - Troubleshooting Checklist
  - Checking if Current Drivers Are Loaded
  - Running a Cable Length Test
  - Testing Network Connectivity
  - Microsoft Virtualization with Hyper-V
  - Removing the Marvell 57xx and 57xxx Device Drivers
  - Upgrading Windows Operating Systems
  - Marvell Boot Agent
  - Linux
  - NPAR
  - Kernel Debugging Over Ethernet
  - Miscellaneous
- A Revision History
4 Installing the Hardware
This chapter applies to Marvell 57xx and 57xxx add-in network interface cards. 
Hardware installation covers the following:
- System Requirements
- "Safety Precautions" on page 19
- "Preinstallation Checklist" on page 19
- "Installation of the Add-In NIC" on page 20
System Requirements
Before you install Marvell 57xx and 57xxx adapters, verify that your system meets 
the hardware and operating system requirements described in this section. 
Hardware Requirements
- IA32- or EM64T-based computer that meets operating system requirements
- One open PCI Express slot. Depending on the PCI Express support on your adapter, the slot may be one of the following types:
  - PCI Express 1.0a x1
  - PCI Express 1.0a x4
  - PCI Express Gen2 x8
  - PCI Express Gen3 x8
  Full dual-port 10Gbps bandwidth is supported on PCI Express Gen2 x8 or faster slots. One way to check the available slots on a Linux host is sketched after the note below.
- 128 MB of RAM (minimum)
NOTE
Service Personnel: This product is intended only for installation in a 
Restricted Access Location (RAL).
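The following Linux command-line sketch shows one way to confirm that a suitable PCI Express slot is present and that the minimum memory requirement is met. It is an illustrative example only, not a Marvell-supplied procedure; it assumes the standard dmidecode, lspci, and free utilities are installed, and <bus:dev.fn> is a placeholder for the adapter's PCI address as reported by lspci.

  # List the physical PCI Express slots with their type and width (requires root).
  sudo dmidecode -t slot

  # After the adapter is installed, verify the negotiated PCIe link speed and width.
  # Replace <bus:dev.fn> with the adapter's PCI address (for example, 03:00.0).
  sudo lspci -s <bus:dev.fn> -vv | grep -i "LnkCap\|LnkSta"

  # Confirm that at least 128 MB of system RAM is present.
  free -m

A link reported under LnkSta as 5GT/s (Gen2) or faster at Width x8 indicates a slot that can supply full dual-port 10Gbps bandwidth.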