
Ethernet Enhancements in Data Center Bridging
The following section describes DCB.
The device supports the following DCB features:
Data center bridging exchange protocol (DCBx)
Priority-based flow control (PFC)
Enhanced transmission selection (ETS)
NOTE: DCB is not supported on the Port Extender ports and Cascade ports.
DCB refers to a set of IEEE Ethernet enhancements that provide data centers with a single, robust, converged network to
support multiple traffic types, including local area network (LAN), server, and storage traffic. Through network consolidation,
DCB results in reduced operational cost, simplified management, and easy scalability by avoiding the need to deploy separate
application-specific networks.
For example, instead of deploying a separate Ethernet network for LAN traffic, additional storage area networks (SANs) to
ensure lossless Fibre Channel traffic, and an InfiniBand network for high-performance inter-processor computing within
server clusters, only one DCB-enabled network is required in a data center. The Dell Networking switches that support a unified
fabric and consolidate multiple network infrastructures use a single input/output (I/O) device called a converged network
adapter (CNA).
A CNA is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface
controller (NIC). Multiple adapters on different devices for several traffic types are no longer required.
Data center bridging satisfies the needs of the following types of data center traffic in a unified fabric:
LAN traffic
LAN traffic consists of a large number of flows that are generally insensitive to latency requirements, while certain
applications, such as streaming video, are more sensitive to latency. Ethernet functions as a best-effort network that may
drop packets in case of network congestion. IP networks rely on transport protocols (for example, TCP) for reliable data
transmission with the associated cost of greater processing overhead and performance impact.
Storage traffic
Storage traffic based on Fibre Channel media uses the Small Computer System Interface (SCSI) protocol for data transfer.
This traffic typically consists of large data packets with a payload of 2K bytes that cannot recover from frame loss. To
successfully transport storage traffic, data center Ethernet must provide no-drop service with lossless links.
InterProcess Communication (IPC) traffic
Servers within high-performance computing clusters use InterProcess Communication (IPC) traffic to share information.
This server traffic is extremely sensitive to latency requirements.
To ensure lossless delivery and latency-sensitive scheduling of storage and server traffic, and I/O convergence of LAN, storage,
and server traffic over a unified fabric, IEEE data center bridging adds the following extensions to a classical Ethernet network:
802.1Qbb Priority-based Flow Control (PFC)
802.1Qaz Enhanced Transmission Selection (ETS)
802.1Qau Congestion Notification
Data Center Bridging Exchange (DCBx) protocol
NOTE: Dell Networking OS supports only the PFC, ETS, and DCBx features in data center bridging.
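DCBx itself is carried in LLDP frames: each peer advertises its PFC and ETS settings as IEEE 802.1Qaz organizationally
specific TLVs and learns the peer's settings the same way. The following Python sketch is illustrative only and is not part of
the switch configuration; it shows the layout of the PFC Configuration TLV that a DCBx-capable port advertises (the willing
bit, the PFC capability, and the per-priority enable bitmap). The function name and the example values are placeholders.

import struct

LLDP_ORG_SPECIFIC = 127                    # LLDP TLV type for organizationally specific TLVs
IEEE_8021_OUI = bytes.fromhex("0080c2")    # IEEE 802.1 OUI used by DCBx TLVs
PFC_CONFIG_SUBTYPE = 0x0B                  # 802.1Qaz PFC Configuration TLV subtype

def build_pfc_config_tlv(willing, pfc_cap, enabled_priorities):
    """Encode a DCBx (IEEE) PFC Configuration TLV as carried in LLDP.

    willing: advertise willingness to accept the peer's PFC configuration.
    pfc_cap: number of traffic classes that can support PFC simultaneously.
    enabled_priorities: 802.1p priorities (0-7) with PFC turned on.
    """
    flags = (0x80 if willing else 0) | (pfc_cap & 0x0F)   # willing bit + PFC capability
    enable_bitmap = 0
    for prio in enabled_priorities:                       # bit N => PFC enabled on priority N
        enable_bitmap |= 1 << prio
    info = IEEE_8021_OUI + struct.pack("!BBB", PFC_CONFIG_SUBTYPE, flags, enable_bitmap)
    header = struct.pack("!H", (LLDP_ORG_SPECIFIC << 9) | len(info))   # 7-bit type, 9-bit length
    return header + info

# Example: advertise PFC on priority 3 only, willing to accept the peer's settings.
print(build_pfc_config_tlv(willing=True, pfc_cap=2, enabled_priorities=[3]).hex())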
Priority-Based Flow Control
In a data center network, priority-based flow control (PFC) manages large bursts of one traffic type in multiprotocol links so
that it does not affect other traffic types and no frames are lost due to congestion.
When PFC detects congestion on a queue for a specified priority, it sends a pause frame for the 802.1p priority traffic to the
transmitting device. In this way, PFC ensures that PFC-enabled priority traffic is not dropped by the switch.
PFC enhances the existing 802.3x pause and 802.1p priority capabilities to enable flow control based on 802.1p priorities
(classes of service). Instead of stopping all traffic on a link (as performed by the traditional Ethernet pause mechanism), PFC
pauses only the traffic in the congested priority class, while traffic assigned to other priorities continues to flow on the link.
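The frame PFC sends reuses the 802.3 MAC Control format of the classic pause frame but carries a different opcode, a
class-enable vector, and a separate pause time for each of the eight 802.1p priorities. The Python sketch below reconstructs
that on-the-wire layout for illustration only; in practice the switch or CNA hardware generates PFC frames itself, and the
source MAC address and pause quanta shown are placeholder values.

import struct

def build_pfc_frame(src_mac, pause_quanta):
    """Build an 802.1Qbb priority-based flow control (PFC) frame.

    pause_quanta maps an 802.1p priority (0-7) to a pause time in units
    of 512 bit times; priorities not listed are left unpaused.
    """
    dst_mac = bytes.fromhex("0180c2000001")   # reserved MAC Control multicast address
    ethertype = 0x8808                        # MAC Control
    opcode = 0x0101                           # PFC opcode (classic 802.3x pause uses 0x0001)
    enable_vector = 0                         # bit N set => pause time for priority N is valid
    times = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio
        times[prio] = quanta
    payload = struct.pack("!HH8H", opcode, enable_vector, *times)
    frame = dst_mac + src_mac + struct.pack("!H", ethertype) + payload
    return frame.ljust(60, b"\x00")           # pad to minimum size; FCS is added by the MAC

# Example: ask the peer to pause priority 3 (for instance, FCoE) for the maximum quanta.
print(build_pfc_frame(bytes.fromhex("aabbccddeeff"), {3: 0xFFFF}).hex())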