
dcb-map linecard 0 backplane all <name>
dcb-map linecard all backplane all <name>
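For example, the following commands apply a DCB map to the backplane ports on linecard 0, and then on all linecards. The map name SAN_DCB_MAP is illustrative; this is a minimal sketch based on the syntax shown above:
dcb-map linecard 0 backplane all SAN_DCB_MAP
dcb-map linecard all backplane all SAN_DCB_MAP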
NOTE: Dell Networking OS Behavior: DCB is not supported if you enable link-level flow control on one or more interfaces. For more information, refer to Ethernet Pause Frames.
Ethernet Enhancements in Data Center Bridging
The following section describes DCB.
The device supports the following DCB features:
Data center bridging exchange protocol (DCBx)
Priority-based ow control (PFC)
Enhanced transmission selection (ETS)
NOTE: DCB is not supported on the Port Extender ports and Cascade ports.
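The following configuration sketch shows how these features typically work together: a DCB map defines ETS priority groups with bandwidth shares, enables PFC on the lossless group, maps 802.1p priorities to the groups, and is applied to an interface, while DCBx advertises the result to connected peers. The map name, bandwidth percentages, priority-to-group mapping, and interface number are illustrative, and exact syntax can vary by platform and release:
dcb enable
dcb-map SAN_DCB_MAP
! ETS: group 0 gets 60% of link bandwidth, lossy; group 1 gets 40%, lossless via PFC
 priority-group 0 bandwidth 60 pfc off
 priority-group 1 bandwidth 40 pfc on
! Map 802.1p priority 3 (commonly used for storage) to group 1; all other priorities to group 0
 priority-pgid 0 0 0 1 0 0 0 0
interface tengigabitethernet 0/1
 dcb-map SAN_DCB_MAP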
DCB refers to a set of IEEE Ethernet enhancements that provide data centers with a single, robust, converged network to support multiple traffic types, including local area network (LAN), server, and storage traffic. Through network consolidation, DCB results in reduced operational cost, simplified management, and easy scalability by avoiding the need to deploy separate application-specific networks.
For example, instead of deploying an Ethernet network for LAN traffic, additional storage area networks (SANs) to ensure lossless Fibre Channel traffic, and a separate InfiniBand network for high-performance inter-processor computing within server clusters, only one DCB-enabled network is required in a data center. The Dell Networking switches that support a unified fabric and consolidate multiple network infrastructures use a single input/output (I/O) device called a converged network adapter (CNA).
A CNA is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC). Multiple adapters on different devices for several traffic types are no longer required.
Data center bridging satises the needs of the following types of data center trac in a unied fabric:
LAN traffic
LAN traffic consists of a large number of flows that are generally insensitive to latency requirements, while certain applications, such as streaming video, are more sensitive to latency. Ethernet functions as a best-effort network that may drop packets in case of network congestion. IP networks rely on transport protocols (for example, TCP) for reliable data transmission with the associated cost of greater processing overhead and performance impact.
Storage trac Storage trac based on Fibre Channel media uses the Small Computer System Interface (SCSI) protocol for data
transfer. This trac typically consists of large data packets with a payload of 2K bytes that cannot recover from
frame loss. To successfully transport storage trac, data center Ethernet must provide no-drop service with
lossless links.
InterProcess Communication (IPC) traffic
Servers use InterProcess Communication (IPC) traffic within high-performance computing clusters to share information. Server traffic is extremely sensitive to latency requirements.
To ensure lossless delivery and latency-sensitive scheduling of storage and service traffic and I/O convergence of LAN, storage, and server traffic over a unified fabric, IEEE data center bridging adds the following extensions to a classical Ethernet network:
802.1Qbb — Priority-based Flow Control (PFC)
802.1Qaz — Enhanced Transmission Selection (ETS)
802.1Qau — Congestion Notication