Administrator Guide
DCBx Example
The following figure shows how to use DCBx.
The external 40GbE ports on the base module (ports 33 and 37) of two switches are used for uplinks configured as DCBx auto-
upstream ports. The device is connected to third-party, top-of-rack (ToR) switches through 40GbE uplinks. The ToR switches are part of a
Fibre Channel storage network.
The internal ports (ports 1-32) connected to the 10GbE backplane are configured as auto-downstream ports.
On the S4048, PFC and ETS use DCBx to exchange link-level configuration with DCBx peer devices.
Figure 30. DCBx Sample Topology
DCBx Prerequisites and Restrictions
The following prerequisites and restrictions apply when you configure DCBx operation on a port:
• For DCBx, enable LLDP on the port interface in both Send (TX) and Receive (RX) mode (the protocol lldp mode command; refer to the example in CONFIGURATION versus INTERFACE Configurations in the Link Layer Discovery Protocol (LLDP) chapter). If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
• The CIN version of DCBx supports only PFC, ETS, and FCoE; it does not support iSCSI, backward congestion notification (BCN), logical link down (LLDF), or network interface virtualization (NIV).
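The LLDP prerequisite above can be met with an interface-level configuration. The following is a sketch using Dell OS-style CLI syntax; the interface name and prompts are illustrative, and once LLDP is enabled on the interface it runs in both TX and RX mode by default:

```
! Enable LLDP (TX and RX) on a candidate DCBx port
DellEMC(conf)# interface tengigabitethernet 1/1
DellEMC(conf-if-te-1/1)# protocol lldp
DellEMC(conf-if-te-1/1-lldp)# no disable
```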
Configuring DCBx
To configure DCBx, follow these steps.
To advertise DCBx TLVs to peers, enable LLDP. For more information, refer to Link Layer Discovery Protocol (LLDP).
Configure DCBx operation at the interface level or globally on the switch. To configure the S4810 system for DCBx operation in
a data center network, you must:
1 Configure ToR- and FCF-facing interfaces as auto-upstream ports.
2 Configure server-facing interfaces as auto-downstream ports.
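The two steps above can be sketched as follows. This is an illustrative Dell OS-style configuration, not a verbatim transcript; the interface numbers match the sample topology (port 33 as a 40GbE uplink, port 1 as a server-facing backplane port), and the prompts are assumed:

```
! ToR/FCF-facing 40GbE uplink: auto-upstream port role
DellEMC(conf)# interface fortyGigE 0/33
DellEMC(conf-if-fo-0/33)# protocol lldp
DellEMC(conf-if-fo-0/33-lldp)# dcbx port-role auto-upstream

! Server-facing internal port: auto-downstream port role
DellEMC(conf)# interface tengigabitethernet 0/1
DellEMC(conf-if-te-0/1)# protocol lldp
DellEMC(conf-if-te-0/1-lldp)# dcbx port-role auto-downstream
```

Auto-upstream ports receive the DCB configuration from the ToR peers; auto-downstream ports then propagate that internally learned configuration toward the servers.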
Data Center Bridging (DCB)