Concept Guide
DCBx Example
The following figure shows how to use DCBx.
The external 40GbE ports on the base module (ports 33 and 37) of two switches are used for uplinks configured as DCBx auto-upstream ports. The device is connected to third-party, top-of-rack (ToR) switches through 40GbE uplinks. The ToR switches are part of a Fibre Channel storage network.
The internal ports (ports 1-32) connected to the 10GbE backplane are configured as auto-downstream ports.
Figure 31. DCBx Sample Topology
DCBx Prerequisites and Restrictions
The following prerequisites and restrictions apply when you configure DCBx operation on a port:
• For DCBx, enable LLDP on the port interface in both Send (TX) and Receive (RX) mode (the protocol lldp mode command; refer to the example in CONFIGURATION versus INTERFACE Configurations in the Link Layer Discovery Protocol (LLDP) chapter). If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
• The CIN version of DCBx supports only PFC, ETS, and FCoE; it does not support iSCSI, backward congestion notification (BCN), logical link down (LLDF), or network interface virtualization (NIV).
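As a minimal sketch, enabling LLDP on a port might look as follows. The interface name and prompts are placeholders for this example topology; by default, LLDP both transmits and receives once enabled, so remove any mode restriction (for example, with no mode) if one was previously applied. Verify the exact syntax for your platform and software release.

    DellEMC(conf)# interface fortyGigE 0/33
    DellEMC(conf-if-fo-0/33)# protocol lldp
    DellEMC(conf-if-fo-0/33-lldp)# no disable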
Configuring DCBx
To configure DCBx, follow these steps.
Before configuring DCBx, enable LLDP so that DCBx TLVs can be advertised to peers. For more information, refer to Link Layer Discovery Protocol (LLDP).
1 Configure ToR- and FCF-facing interfaces as auto-upstream ports.
2 Configure server-facing interfaces as auto-downstream ports.
3 Configure a port to operate in a configuration-source role.
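The steps above can be sketched with the dcbx port-role command under LLDP interface configuration mode. The interface names below are placeholders matching the example topology (40GbE uplinks as auto-upstream, internal 10GbE ports as auto-downstream, one uplink as the configuration source); confirm the interface naming and command syntax for your platform.

    ! Step 1: ToR/FCF-facing uplink as auto-upstream
    DellEMC(conf)# interface fortyGigE 0/33
    DellEMC(conf-if-fo-0/33)# protocol lldp
    DellEMC(conf-if-fo-0/33-lldp)# dcbx port-role auto-upstream

    ! Step 2: server-facing internal port as auto-downstream
    DellEMC(conf)# interface tenGigabitEthernet 0/1
    DellEMC(conf-if-te-0/1)# protocol lldp
    DellEMC(conf-if-te-0/1-lldp)# dcbx port-role auto-downstream

    ! Step 3: designate one port as the configuration source
    DellEMC(conf)# interface fortyGigE 0/37
    DellEMC(conf-if-fo-0/37)# protocol lldp
    DellEMC(conf-if-fo-0/37-lldp)# dcbx port-role config-source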
Data Center Bridging (DCB)