Troubleshooting
Understanding queue depth
Each port on an EMC storage array has a maximum queue depth. In a large 
fabric environment with many HBAs (initiators) generating I/O, a storage 
port's queue can quickly fill to that maximum. When this happens, the array 
notifies the HBA with queue-full (QFULL) messages, and response times 
become very poor. Operating systems handle queue-full conditions in 
different ways.
Windows operating systems with Storport drivers throttle I/O down to a 
minimum in an attempt to keep the queue from filling. When the queue-full 
messages subside, Storport increases the queue depth again. Depending on 
the load, this can take up to a minute in some instances. The performance 
of the server's applications will be affected, sometimes to the point of 
hanging or crashing if the condition recurs or persists for a prolonged 
period of time.
To avoid overloading the storage array's ports, you can calculate the 
maximum queue depth from the number of initiators per storage port and the 
number of LUNs presented to each host. Other initiators are likely to be 
sharing the same SP ports, so their queue depths also need to be limited. 
The formula for the maximum queue depth is:
QD = Maximum Port Queue Length / (Initiators * LUNs per initiator)
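If you prefer to script this calculation, the following minimal Python sketch 
expresses the same formula. The function name and parameter names are 
illustrative only and are not part of any EMC or QLogic tool.

def max_hba_queue_depth(port_queue_length, initiators, luns_per_initiator):
    # port_queue_length: maximum outstanding commands on the array port
    # initiators: number of HBA ports logged in to that array port
    # luns_per_initiator: number of LUNs masked to each initiator
    # Integer division keeps the result on the conservative side.
    return port_queue_length // (initiators * luns_per_initiator)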
For example, there are 4 servers, each with a single HBA port, connected to 
a single port on the storage array, with 20 LUNs masked to each server. The 
storage port's maximum queue length is 1600 outstanding commands. This 
leads to the following queue depth calculation:
HBA Queue Depth = 1600 / (4 * 20) = 20
In this example, the calculated HBA queue depth is 20. A certain amount of 
over-subscription can be tolerated because all of the LUNs assigned to the 
servers are unlikely to be busy at the same time, especially if additional 
HBA ports and load-balancing software are used. So in the example above, a 
queue depth of 32 should not cause queue-full conditions; however, a queue 
depth value of 256 or higher could cause performance issues.
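As a quick check, this Python sketch reproduces the example above; the 
values are the illustrative numbers from this section, not readings from a 
real array.

port_queue_length = 1600     # maximum outstanding commands on the array port
initiators = 4               # four servers, one single-port HBA each
luns_per_initiator = 20      # LUNs masked to each server

hba_queue_depth = port_queue_length // (initiators * luns_per_initiator)
print(hba_queue_depth)       # prints 20, matching the calculation above

# A modest over-subscription, such as setting the HBA queue depth to 32, is
# generally tolerable because all LUNs are rarely busy at once; a value such
# as 256 could push the port back into QFULL conditions.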
Using this example, it is easy to extrapolate the potential performance 
implications for environments with large numbers of servers and initiators. 
This includes virtualized environments, such as Hyper-V, that use synthetic 
or virtual Fibre Channel adapters and NPIV.