
DCBx Example
The following figure shows how to use DCBx.
The external 40GbE ports on the base module (ports 33 and 37) of two switches are used for uplinks configured as DCBx auto-upstream ports. The device is connected to third-party, top-of-rack (ToR) switches through 40GbE uplinks. The ToR switches are part of a Fibre Channel storage network.
The internal ports (ports 1-32) connected to the 10GbE backplane are configured as auto-downstream ports (a sample configuration follows the figure).
On the S4048, PFC and ETS use DCBx to exchange link-level configuration with DCBx peer devices.
Figure 30. DCBx Sample Topology
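The following is a minimal configuration sketch for this topology, assuming Dell Networking OS CLI syntax and interface names (fortyGigE 0/33 and 0/37 for the uplinks, and tengigabitethernet 0/1 as a representative server-facing internal port). The exact interface identifiers and defaults depend on your platform and software release.

! Uplink ports facing the ToR switches: DCBx auto-upstream role
interface fortyGigE 0/33
 protocol lldp
  dcbx port-role auto-upstream
!
interface fortyGigE 0/37
 protocol lldp
  dcbx port-role auto-upstream
!
! Representative server-facing internal port (repeat for ports 1-32):
! DCBx auto-downstream role
interface tengigabitethernet 0/1
 protocol lldp
  dcbx port-role auto-downstream

With these roles, the auto-upstream uplink ports receive the DCB configuration from the ToR switches, and the switch internally propagates it to the auto-downstream server-facing ports.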
DCBx Prerequisites and Restrictions
The following prerequisites and restrictions apply when you configure DCBx operation on a port:
For DCBx, on a port interface, enable LLDP in both Send (TX) and Receive (RX) mode (the protocol lldp mode command; refer to the example in CONFIGURATION versus INTERFACE Configurations in the Link Layer Discovery Protocol (LLDP) chapter, and see the sketch after these restrictions). If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
The CIN version of DCBx supports only PFC, ETS, and FCoE; it does not support iSCSI, backward congestion notification (BCN), logical link down (LLDF), or network interface virtualization (NIV).
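As a minimal sketch of the LLDP prerequisite, assuming Dell Networking OS CLI syntax (the interface name is a placeholder; LLDP transmits and receives by default once the protocol is enabled on the port):

! Enable LLDP on the port interface so DCBx TLVs can be sent and received
interface tengigabitethernet 0/1
 protocol lldp
  no disable
! LLDP operates in both TX and RX mode by default; if a transmit-only or
! receive-only mode was configured earlier, remove it so both directions
! are active before enabling DCBx on the port.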
Configuring DCBx
To configure DCBx, follow these steps.
To advertise DCBx TLVs to peers, enable LLDP. For more information, refer to Link Layer Discovery Protocol (LLDP).
Configure DCBx operation at the interface level or globally on the switch. To configure the S4810 system for DCBx operation in a data center network, you must:
1 Configure ToR- and FCF-facing interfaces as auto-upstream ports.
2 Configure server-facing interfaces as auto-downstream ports.
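After completing these steps, you can verify DCBx operation on an uplink and a server-facing port. This is a hedged sketch (command forms assume Dell Networking OS; output fields vary by release) that checks the configured port role, the negotiated DCBx version, and the peer status:

! Verify DCBx operation on an auto-upstream uplink and an auto-downstream port
show interface fortyGigE 0/33 dcbx detail
show interface tengigabitethernet 0/1 dcbx detail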