Emulex Drivers for Linux User Guide
Broadcom DRVLin-UG128-100
The following vPort configuration limits have been tested with and are supported by the Emulex driver. Configurations that
exceed one or more of these limits are unsupported.
- The maximum number of vPorts configurable on a physical port is 255.
- The maximum number of LUNs supported on each driver port is 256.
- The maximum number of targets supported for each driver port is 255.
- The maximum number of driver ports in one zone is 64. This limit is based on the system's ability to recover from link events within the time constraints of the default timers.
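As a sketch of checking a configuration against these limits, the following compares the number of currently configured vPorts (as exposed through the Linux FC transport class under /sys/class/fc_vports) with the tested per-port limit above. On systems without FC hardware the directory is absent and the count is simply zero:

```shell
# Tested vPort limit per physical port, from the list above.
VPORT_LIMIT=255

# Count vPorts known to the FC transport class; the directory may not
# exist on systems without FC hardware, in which case the count is 0.
vport_count=$(ls /sys/class/fc_vports 2>/dev/null | wc -l)

if [ "$vport_count" -gt "$VPORT_LIMIT" ]; then
    echo "unsupported: $vport_count vPorts exceed the tested limit of $VPORT_LIMIT"
else
    echo "vPort count ($vport_count) is within the tested limit"
fi
```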
The NPIV use cases that involve a virtual server environment include associating a vPort with a virtual machine and placing the virtual machine in its own zone, which results in one vPort per zone. In load-balanced environments, this typically increases to two vPorts per virtual machine, with a practical limit well below 50.
In NPIV use cases not related to virtual server environments, zoning is typically initiator zoning, which again results in one vPort, or a small number of vPorts with load balancing, within a given zone. If too many vPorts exist within a single zone, devices might be lost after link events.
The minimum lifetime of a vPort is 60 seconds; this limit between the creation and the deletion of the same vPort is not enforced. vPorts are designed to exist in the system for a long time. In addition, vPort creation is asynchronous, which means that a vPort might not have completed FC or SCSI discovery when the command that creates it returns.
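As a sketch, a vPort can be created by writing a WWPN:WWNN pair to the vport_create attribute of the FC transport class. The host path (host5) and the WWPN/WWNN values below are hypothetical placeholders; substitute the values for your system. Note that the write returning does not mean discovery on the new vPort has finished:

```shell
# Hypothetical FC host path and example WWPN:WWNN values; adjust for your system.
HOST=/sys/class/fc_host/host5
NEW_VPORT="10000000c9abcdef:20000000c9abcdef"

if [ -w "$HOST/vport_create" ]; then
    # Creation is asynchronous: this write can return before FC or SCSI
    # discovery on the new vPort completes.
    echo "$NEW_VPORT" > "$HOST/vport_create"
    echo "vPort creation requested on $HOST"
else
    echo "NPIV vport_create not available on $HOST"
fi
```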
3.3 FC Driver Performance Tuning
This section describes how to tune the FC driver for best performance.
3.3.1 Overview
The configurable parameters lpfc_hdw_queue and lpfc_irq_chann can enhance performance on supported RHEL and SLES operating systems. These parameters are available as module parameters defined in the FC driver and as sysfs entries defined by the Linux kernel.
This section provides more information about how the tuning parameters and script can improve Emulex adapter
performance.
NOTE: The parameters in this section do not apply to LPe12000-series adapters.
3.3.1.1 lpfc_hdw_queue
The lpfc_hdw_queue module parameter can be configured at driver load time. It defines the number of hardware queues that the driver supports for each port. The driver can support parallel I/O paths, and each I/O path can post and complete FCP and NVMe commands independently of the others. Each hardware queue comprises a unique pair of a completion queue and a work queue.
NOTE: The Emulex LPe12000-series adapters support only one I/O path, so this parameter has no effect on them.
By default, lpfc_hdw_queue is set to an automatically determined, recommended value based on system resources. The driver also limits the number of hardware queues so that it does not exceed the number of online logical CPUs (as reported by /proc/cpuinfo). For best performance, one hardware queue per CPU is highly desirable.
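As a sketch of the clamping behavior described above, the effective queue count is the smaller of the requested value and the number of online logical CPUs. The modprobe.d file name below is an arbitrary choice, and the requested value of 8 is only an example:

```shell
# Persistent setting (run as root; the .conf file name is arbitrary):
#   echo "options lpfc lpfc_hdw_queue=8" > /etc/modprobe.d/lpfc.conf
# After reloading the driver, the value the module actually took can be read:
#   cat /sys/module/lpfc/parameters/lpfc_hdw_queue

# The driver clamps the queue count to the number of online logical CPUs,
# so the effective value is min(requested, online CPUs):
requested=8
online_cpus=$(getconf _NPROCESSORS_ONLN)
effective=$(( requested < online_cpus ? requested : online_cpus ))
echo "effective hardware queues: $effective"
```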