Figure 33. DCBx Sample Topology
DCBx Prerequisites and Restrictions
The following prerequisites and restrictions apply when you configure DCBx operation on a port:
• For DCBx, on a port interface, enable LLDP in both Send (TX) and Receive (RX) mode (the protocol lldp mode 
command; refer to the example in CONFIGURATION versus INTERFACE Configurations in the Link Layer Discovery Protocol 
(LLDP) chapter). If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
• The CIN version of DCBx supports only PFC, ETS, and FCoE; it does not support iSCSI, backward congestion notification 
(BCN), logical link down (LLDF), and network interface virtualization (NIV).
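As a point of reference, the following is a minimal sketch of the LLDP prerequisite on a single port; the interface name and CLI prompts are illustrative, and LLDP transmit and receive are assumed to be active once protocol lldp is enabled on the interface.
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#protocol lldp
Dell(conf-if-te-0/1-lldp)#exit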
Configuring DCBx
To configure DCBx, follow these steps.
To advertise DCBx TLVs to peers, LLDP must be enabled. For more information, refer to Link Layer Discovery Protocol (LLDP).
Configure DCBx operation at the interface level on a switch or globally on the switch. To configure the S4820T system for DCBx 
operation in a data center network, you must perform the following tasks (a sample configuration sketch follows the list):
1. Configure ToR- and FCF-facing interfaces as auto-upstream ports.
2. Configure server-facing interfaces as auto-downstream ports.
3. Configure a port to operate in a configuration-source role.
4. Configure ports to operate in a manual role.
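Port roles are assigned in LLDP configuration mode on each interface. The sketch below illustrates the intent of tasks 1 and 2 only; the dcbx port-role keywords, interface names, and prompts are assumptions not shown in this excerpt, so verify them against the DCBx command reference before use.
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#protocol lldp
Dell(conf-if-te-0/1-lldp)#dcbx port-role auto-upstream
Dell(conf-if-te-0/1-lldp)#exit
Dell(conf-if-te-0/1)#exit
Dell(conf)#interface tengigabitethernet 0/20
Dell(conf-if-te-0/20)#protocol lldp
Dell(conf-if-te-0/20-lldp)#dcbx port-role auto-downstream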
1. Enter INTERFACE Configuration mode.
CONFIGURATION mode
interface type slot/port
2. Enter LLDP Configuration mode to enable DCBx operation.
INTERFACE mode
[no] protocol lldp
3. Configure the DCBx version used on the interface. The auto keyword configures the port to operate using the DCBx version 
received from a peer.
PROTOCOL LLDP mode
[no] dcbx version {auto | cee | cin | ieee-v2.5}
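Putting steps 1 through 3 together, a minimal interface-level session might look like the following; the interface name and CLI prompts are illustrative only.
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#protocol lldp
Dell(conf-if-te-0/1-lldp)#dcbx version auto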