
DCBx Example
The following gure shows how to use DCBx.
The external 40GbE ports on the base module (ports 33 and 37) of two switches are used for uplinks configured as DCBx auto-upstream ports. The device is connected to third-party, top-of-rack (ToR) switches through 40GbE uplinks. The ToR switches are part of a
Fibre Channel storage network.
The internal ports (ports 1-32) connected to the 10GbE backplane are configured as auto-downstream ports.
On the S4048, PFC and ETS use DCBx to exchange link-level configuration with DCBx peer devices.
Figure 30. DCBx Sample Topology
DCBx Prerequisites and Restrictions
The following prerequisites and restrictions apply when you configure DCBx operation on a port:
For DCBx, on a port interface, enable LLDP in both Send (TX) and Receive (RX) mode (the protocol lldp mode command; refer to the example in CONFIGURATION versus INTERFACE Configurations in the Link Layer Discovery Protocol (LLDP) chapter, and to the sketch after these restrictions). If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
The CIN version of DCBx supports only PFC, ETS, and FCoE; it does not support iSCSI, backward congestion management (BCN), logical link down (LLDF), or network interface virtualization (NIV).
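The following is a minimal sketch of the LLDP prerequisite on one port. The prompt and the interface name (tengigabitethernet 1/1) are placeholder assumptions, and the exact sub-mode syntax can vary by release; Dell Networking OS both sends and receives LLDP PDUs by default once the protocol is enabled, so no explicit mode change is shown. Verify the commands against the Link Layer Discovery Protocol (LLDP) chapter.

Dell(conf)#interface tengigabitethernet 1/1
Dell(conf-if-te-1/1)#protocol lldp
Dell(conf-if-te-1/1-lldp)#no disable
Dell(conf-if-te-1/1-lldp)#exit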
Conguring DCBx
To congure DCBx, follow these steps.
For DCBx, to advertise DCBx TLVs to peers, enable LLDP. For more information, refer to Link Layer Discovery Protocol (LLDP).
Congure DCBx operation at the interface level on a switch or globally on the switch. To congure the S4810 system for DCBx operation in
a data center network, you must:
1 Congure ToR- and FCF-facing interfaces as auto-upstream ports.
2 Congure server-facing interfaces as auto-downstream ports.