Converged networks with Fibre Channel over Ethernet and Data Center Bridging
notification messages to the server CNAs. The switch selects the server CNA by statistically sampling the
congested queue. The notification intensity adapts dynamically: the switch sends higher feedback quanta to
CNAs producing the most traffic and lower feedback quanta to sources producing less traffic. As a result,
the CNAs throttle down their transmit rates on congested traffic flows. The decrease in traffic flow rates
reduces the number of frames in the congested queue in the switch to achieve a more sustainable, balanced
level of performance. As the congestion eases, the switch reduces or stops sending notifications and the
CNAs start to accelerate the throughput rate. This active feedback protocol continuously balances traffic
flow.
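The feedback loop described above can be sketched in a few lines. This is a simplified model of the 802.1Qau congestion-point calculation and the CNA's multiplicative rate decrease; the constants (Q_EQ, W_QDELTA, GD) and function names are illustrative assumptions, not taken from any product API.

```python
Q_EQ = 26      # assumed equilibrium queue setpoint, in frames
W_QDELTA = 2   # weight on the queue-growth term
GD = 1 / 128   # multiplicative-decrease gain at the CNA

def congestion_feedback(q_len, q_old):
    """Feedback value Fb computed by the congested switch queue.

    Fb combines how far the queue sits above its setpoint (q_off)
    with how fast it is growing (q_delta).  A more negative Fb means
    more severe congestion, so the sampled CNA receives a larger
    quantum and throttles harder.
    """
    q_off = q_len - Q_EQ
    q_delta = q_len - q_old
    return -(q_off + W_QDELTA * q_delta)  # notify only when negative

def throttle(rate, fb):
    """CNA reaction to a notification: multiplicative rate decrease."""
    return rate * (1 - GD * min(-fb, 64))  # |Fb| carried as a 6-bit quantum

# A queue above its setpoint and still growing yields strong negative
# feedback, and the sampled CNA cuts its transmit rate accordingly.
fb = congestion_feedback(q_len=40, q_old=30)   # -(14 + 2*10) = -34
new_rate = throttle(10_000, fb)                # rate in Mb/s, illustrative
```

As congestion eases, Fb returns toward zero, notifications stop, and the CNA's recovery logic (not shown) ramps the rate back up.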
It is possible to construct simple converged networks on one or two switch hops without QCN. In fact, the
FCoE protocol does not require the use of QCN in DCB-enabled Ethernet equipment. However, the general
understanding is that building relatively complex multi-hop or end-to-end, data-center-wide converged
networks based on DCB-enabled Ethernet equipment requires enabling QCN in this infrastructure. Networks
that use the QCN protocol face several challenges:
• QCN protocol complexity – Implementing the flow tagging, statistical sampling, and congestion
messaging is relatively complex. Identifying the proper timing and quanta of notification feedback to
satisfy a wide variety of operating conditions is also difficult.
• Difficult interoperability process – Perfecting multi-vendor interoperability could take several years
because of protocol complexity.
• No QCN support in current generation products – No DCB/FCoE products shipping today support the
QCN protocol. Furthermore, most, if not all, products will require a hardware upgrade to support QCN.
Products claiming QCN support have unproven, untested hardware implementations, and vendors have not
performed rigorous interoperability tests with production-level QCN software.
• Complete end-to-end support requirement – To enable QCN in a network, the entire data path must
support the QCN protocol. All hardware across the DCB-enabled network must support QCN. This poses
a significant problem because upgrading existing first-generation, DCB-based converged networks
requires replacing or upgrading all DCB components.
Because of these challenges, only one-hop and two-hop networks will be reliable until next generation
hardware becomes available to support QCN. Most currently shipping hardware cannot support QCN and
cannot be software upgraded to add this support. Therefore, support for larger DCB-based network
deployments will require hardware upgrades.
Data Center Bridging Exchange
The Data Center Bridging Exchange (DCBX) protocol provides two primary functions:
• Lets DCB-enabled Ethernet devices/ports advertise their DCB capabilities to their link partners
• Lets DCB-enabled Ethernet devices push preferred parameters to their link partners
DCBX supports discovery and exchange of network configuration information between DCB-compliant peer
devices. DCBX extends the Link Layer Discovery Protocol (LLDP) with additional network status information
and parameters. The specification separates DCBX parameters into administered and
operational groups. The administered parameters contain network device configurations. The operational
parameters describe the operational status of network device configurations. Devices can also specify a
willingness to accept DCBX parameters from the attached link partner. This is most commonly supported in
CNAs that allow the attached DCB-enabled switch to set up their parameters.
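The willing-bit negotiation can be sketched as follows. This is a minimal model of how one DCBX feature (here PFC) resolves its operational configuration; the class and field names are illustrative, and in a real exchange these values travel as TLVs inside LLDP frames.

```python
from dataclasses import dataclass

@dataclass
class DcbxFeature:
    willing: bool   # ready to accept the link partner's configuration
    config: dict    # administered (locally configured) parameters

def resolve(local: DcbxFeature, peer: DcbxFeature) -> dict:
    """Operational configuration after the DCBX exchange.

    A port advertising willing=True adopts the configuration of a
    non-willing peer -- the common case of a CNA accepting settings
    pushed by its DCB-enabled switch.  Otherwise it keeps its own
    administered settings (the both-willing tie-break is omitted here).
    """
    if local.willing and not peer.willing:
        return peer.config
    return local.config

# The switch is non-willing and pushes its PFC settings; the willing
# CNA takes them as its operational state.
switch = DcbxFeature(willing=False, config={"pfc_priorities": [3]})
cna = DcbxFeature(willing=True, config={"pfc_priorities": []})
assert resolve(cna, switch) == {"pfc_priorities": [3]}
```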