User Guide
Table of Contents
- Table of Contents
- Chapter 1: Introduction
- Chapter 2: Installation
- Chapter 3: Configuration
- 3.1 ESXi Command Line Tool Transition
- 3.2 FC Driver Configuration
- 3.2.1 Configuration Methods for FC Driver Parameters
- 3.2.2 Emulex ExpressLane Support
- 3.2.3 FC-SP-2 Authentication (DH-CHAP) Support
- 3.2.4 Trunking Support
- 3.2.5 Dynamically Adding LUNs
- 3.2.6 Dynamically Adding Targets
- 3.2.7 FC Driver Module Parameters
- 3.2.8 Creating an FC Remote Boot Disk
- 3.2.9 Managing Devices through the CIM Interface
- 3.2.10 Installing the Emulex CIM Provider
- 3.2.11 Creating, Deleting, and Displaying vPorts
- 3.2.12 Configuring VVols
- 3.2.13 Adjusting the LUN Queue Depth
- 3.3 Configuring NVMe over FC on a NetApp Target
- 3.4 Configuring NVMe over FC on an Initiator System
- Chapter 4: Troubleshooting the FC Driver
- Chapter 5: Troubleshooting the NVMe Driver
- Appendix A: esxcli Management Tool
- Appendix B: lpfc Driver BlockGuard Functionality
- Appendix C: Using the VMID Feature on a Brocade Switch
- Appendix D: Using the VMID Feature on a Cisco Switch
- Appendix E: NPIV Configuration
- Appendix F: License Notices
NOTE: The following information applies to vPorts:
- Ensure that you are using the latest recommended firmware for vPort functionality. Check the Broadcom website for the latest firmware.
- Loop devices and NPIV are not supported on the same port at the same time. If you are running a loop topology and you create a vPort, the vPort's link state is offline; you can confirm the link state as shown in the example after this note. VMware ESXi supports fabric mode only.
- You can create vPorts only on 8GFC, 16GFC, and 32GFC adapters.
- The Emulex HBA Manager application sees all vPorts created by the driver, but the application has read-only access to them.
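One way to confirm the physical port's link state from the ESXi shell is with the standard esxcli storage namespace. This is a general ESXi check, not an Emulex-specific command, and the output columns vary by release:

esxcli storage core adapter list

The Link State column shows whether each FC adapter port is link-up or link-down.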
3.2.12 Configuring VVols
The Emulex native mode FC driver supports the VVols feature released with ESXi. VMware's VVols feature allows storage to be provisioned dynamically, based on the needs of a VM. VM disks, also called VVols, allow VMware administrators to manage storage arrays through the vSphere APIs for Storage Awareness (VASA). Arrays are logically partitioned into storage containers, and VVols are stored natively in those storage containers. I/O from the ESXi host to the array is managed through an access point, called a protocol endpoint (PE), and the storage provider.
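Each of the VVols objects described in the following subsections can be inspected from the ESXi shell with the esxcli utility (see Appendix A, esxcli Management Tool). As a starting point, running the vvol namespace without a subcommand lists the available object types; this is standard esxcli behavior, shown here as a sketch assuming an ESXi 6.0 or later host:

esxcli storage vvol

Typical sub-namespaces include storagecontainer, protocolendpoint, and vasaprovider; the exact set depends on the ESXi release.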
3.2.12.1 Storage Containers
Storage containers are a logical abstraction and hold groups of VVols that are physically provisioned in the storage array.
Storage containers are an alternative to traditional storage based upon LUNs or NFS shares. Storage containers are set up
by a storage administrator. Storage container capacity is based on physical storage capacity. The minimum is one storage
container per array, and the maximum number depends upon the array. One storage container can be simultaneously
accessed through multiple PEs. When the storage provider and PEs are in place, the storage container is visible to ESXi
hosts.
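When the storage container is expected to be visible, you can confirm it from the ESXi shell. A minimal check, assuming an ESXi 6.0 or later host with VVols support configured:

esxcli storage vvol storagecontainer list

Each entry reports the container name, UUID, and backing array. An empty list usually means that the storage provider is not registered in vCenter or that no PE for the array is visible to the host.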
3.2.12.2 Protocol Endpoints
A PE is an access point that enables communication between an ESXi host and a storage array system. A PE is not a datastore; it is the I/O transport mechanism used to access the storage container. A PE is part of the physical storage fabric and is created by a storage administrator.
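You can verify that the host has discovered the PEs. A hedged example, again assuming the standard esxcli storage vvol namespace on an ESXi 6.0 or later host:

esxcli storage vvol protocolendpoint list

For FC arrays, each PE is also reported as a SCSI device by esxcli storage core device list; in recent ESXi releases, the device properties include an Is VVOL PE field that identifies it.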
3.2.12.3 Storage Providers
Storage providers are also referred to as VASA providers. Out-of-band communication between vCenter and the storage
array is achieved through the storage provider. The storage provider creates the VVols.
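You can list the storage providers that the host is currently using. A minimal check, assuming an ESXi 6.0 or later host:

esxcli storage vvol vasaprovider list

Each entry shows the provider name, URL, and status. Note that registering a storage provider is done in vCenter (under the Storage Providers configuration), not on the ESXi host.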
For more information about VVols and instructions on configuring VVols, refer to the VMware and target vendor-supplied
documentation.