Dell EqualLogic Best Practices Series

Transition to the Data Center Bridging Era with EqualLogic PS Series Storage Solutions

A Dell Technical Whitepaper

This document has been archived and will no longer be maintained or updated. For more information, go to the Storage Solutions Technical Documents page on Dell TechCenter or contact support.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. © 2012 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Table of Contents
1 Introduction
2 Data Center Bridging overview
   2.1 Why DCB was created
3 iSCSI in a converged data center
4 Transitioning to DCB
5 Conclusion
6 Glossary
7 References
Acknowledgements This whitepaper was produced by the PG Storage Infrastructure and Solutions team between November 2011 and February 2012 at the Dell Labs facility in Round Rock, Texas.
1 Introduction
Ethernet is the most widely deployed networking technology today. It is a standards-based technology found in virtually every data center throughout the world. Despite Ethernet's traditional advantages, special-purpose switching fabric technologies had to be developed to address specialized requirements, such as storage and data networking. Using distinct networks for data, management, and storage can be more complex and costly than using a single converged network.
2 Data Center Bridging overview
2.1 Why DCB was created
Ethernet, being a pervasive technology, continues to scale in bandwidth to 10/40/100Gb while its cost continues to drop. This creates a great opportunity for other technologies, such as Fibre Channel and Remote Direct Memory Access networking solutions, to leverage Ethernet as a converged universal fabric. By comparison, while 10GbE is becoming more common, 8Gb Fibre Channel has been shipping for only a couple of years, and 16Gb Fibre Channel is just beginning to ship now.
Power and cooling
The benefits realized from reducing the number of cables and network cards extend to the power and cooling needs of the data center. As the traffic flows converge onto one network instead of several, the number of network switches goes down. Along with the switch count, the power and cooling requirements for the data center also decrease. Using fewer cables also improves the airflow characteristics and cooling efficiency in data center racks.
2.3.1 Priority-based Flow Control (PFC)
Priority-based Flow Control is an evolution of the flow control concept originally implemented in the Ethernet MAC PAUSE frame feature (IEEE 802.3x). Pause frames provide a simple way to control traffic on a segment by allowing a NIC to request that an adjacent port stop transmitting for a specific time period. Because no granularity is applied to this request, all Ethernet frames between the two ports are stopped during the pause.
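To make the contrast concrete, the sketch below (not from this paper; the chosen values are illustrative) packs the payload of an IEEE 802.1Qbb PFC frame next to a classic 802.3x PAUSE payload, showing how PFC adds a per-priority enable vector and eight independent pause timers where 802.3x has a single timer for the whole link.

```python
# Illustrative sketch: payload layouts for 802.3x PAUSE vs. 802.1Qbb PFC.
# The values used (priority 4, 0xFFFF quanta) are example choices only.
import struct

PAUSE_OPCODE = 0x0001  # 802.3x: pauses ALL traffic between the two ports
PFC_OPCODE = 0x0101    # 802.1Qbb: pauses only the priorities selected below

def pfc_payload(pause_quanta_per_priority):
    """pause_quanta_per_priority: dict {priority 0-7: pause quanta 0-65535}."""
    enable_vector = 0
    quanta = [0] * 8
    for prio, q in pause_quanta_per_priority.items():
        enable_vector |= 1 << prio       # mark this priority as paused
        quanta[prio] = q
    # opcode + priority-enable vector + eight 16-bit pause timers
    return struct.pack("!HH8H", PFC_OPCODE, enable_vector, *quanta)

def pause_payload_8023x(pause_quanta):
    # opcode + a single 16-bit timer: every frame on the segment stops
    return struct.pack("!HH", PAUSE_OPCODE, pause_quanta)

# Pause only priority 4 (for example, a lossless iSCSI class) while the
# other seven priorities keep flowing.
print(pfc_payload({4: 0xFFFF}).hex())
print(pause_payload_8023x(0xFFFF).hex())
```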
2.3.2 Enhanced Transmission Selection (ETS)
Enhanced Transmission Selection is a mechanism for guaranteeing a minimum percentage of bandwidth to a traffic class. A traffic class contains one or more Classes of Service (CoS) defined using the VLAN Q-tag. Each traffic class is then assigned a percentage of bandwidth (with setting granularity down to 10%). All traffic class bandwidths must add up to 100%; no oversubscription is allowed. The percentage defined is a minimum guaranteed bandwidth for that traffic class.
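As a quick illustration of the ETS rules just described, the following sketch (the class names and percentages are assumptions chosen for the example) validates that a bandwidth allocation sums to 100% and translates each share into the minimum throughput guaranteed on a 10GbE link.

```python
# Illustrative ETS allocation check; traffic class names and shares are
# example values, not a recommended configuration.
LINK_GBPS = 10

ets_allocation = {        # traffic class -> minimum bandwidth share (%)
    "LAN":        30,
    "iSCSI SAN":  50,
    "Management": 20,
}

def validate_ets(allocation, granularity=10):
    total = sum(allocation.values())
    if total != 100:
        raise ValueError(f"ETS shares must sum to 100%, got {total}%")
    for name, pct in allocation.items():
        if pct % granularity:
            raise ValueError(f"{name}: {pct}% violates the {granularity}% granularity")

validate_ets(ets_allocation)
for name, pct in ets_allocation.items():
    # Minimums only: bandwidth unused by an idle class can be borrowed
    # by the busy classes.
    print(f"{name}: at least {LINK_GBPS * pct / 100:.1f} Gbps")
```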
2.3.3 Congestion Notification (CN)
Congestion Notification is a mechanism for managing congestion throughout a DCB fabric or domain. Ideally, that fabric consists of interconnected switches and end devices that all conform to the same settings for PFC, ETS, and CN. Frames in a CN-conforming fabric are tagged with a Flow Identifier. CN then relays messages between two types of devices, Congestion Points (CPs) and Reaction Points (RPs), to control the flows.
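The toy model below is a conceptual sketch only (the constants and the feedback formula are simplified assumptions, not the 802.1Qau specification): a Congestion Point samples its queue and computes a feedback value, and a Reaction Point throttles its transmit rate when the feedback is negative and slowly recovers otherwise.

```python
# Conceptual CN model: the CP generates feedback from queue depth, the RP
# adjusts its rate. All constants are illustrative.
class CongestionPoint:
    def __init__(self, target_queue_kb=26, weight=2):
        self.target = target_queue_kb
        self.weight = weight
        self.prev_len = 0

    def feedback(self, queue_len_kb):
        # Negative when the queue is above target and/or still growing.
        q_off = self.target - queue_len_kb
        q_delta = self.prev_len - queue_len_kb
        self.prev_len = queue_len_kb
        return q_off + self.weight * q_delta

class ReactionPoint:
    def __init__(self, line_rate_gbps=10, gain=1 / 128, recovery_gbps=0.5):
        self.line_rate = line_rate_gbps
        self.rate = line_rate_gbps
        self.gain = gain
        self.recovery = recovery_gbps

    def on_feedback(self, fb):
        if fb < 0:   # congestion notification: multiplicative decrease
            self.rate *= max(0.5, 1 - self.gain * abs(fb))
        else:        # no congestion: creep back toward line rate
            self.rate = min(self.line_rate, self.rate + self.recovery)

cp, rp = CongestionPoint(), ReactionPoint()
for queue_kb in [10, 30, 60, 40, 20, 15]:     # sampled queue depth over time
    rp.on_feedback(cp.feedback(queue_kb))
    print(f"queue={queue_kb}KB rate={rp.rate:.2f} Gbps")
```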
2.3.4 Data Center Bridging Capability Exchange (DCBx)
Data Center Bridging Capability Exchange is an extension of the IEEE 802.1AB standard for Link Layer Discovery Protocol (LLDP). It uses the existing LLDP framework for network devices to advertise their identity and capabilities. LLDP relies on Type-Length-Values (TLVs) to advertise a device's identity and its capabilities for a multitude of Ethernet functions. DCBx defines new TLVs specific to the DCB functions.
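To show how DCBx rides on LLDP, here is a small sketch of an organizationally specific LLDP TLV (type 127), whose value starts with an OUI and a subtype identifying the DCB function being advertised. The header packing follows the LLDP TLV layout; treat the subtype table and the two-byte PFC body as illustrations rather than a normative reference.

```python
# Illustrative encoding of an LLDP organizationally specific TLV as used
# for DCBx advertisements. Subtype values and the sample body are shown
# for illustration only.
import struct

LLDP_ORG_SPECIFIC_TYPE = 127
IEEE_8021_OUI = bytes.fromhex("0080C2")

DCBX_SUBTYPES = {
    "ets_configuration":    0x09,
    "ets_recommendation":   0x0A,
    "pfc_configuration":    0x0B,
    "application_priority": 0x0C,
}

def org_specific_tlv(oui, subtype, info):
    value = oui + bytes([subtype]) + info
    # LLDP TLV header: 7-bit type and 9-bit length packed into 16 bits
    header = (LLDP_ORG_SPECIFIC_TYPE << 9) | len(value)
    return struct.pack("!H", header) + value

# Example: advertise a PFC configuration with an illustrative two-byte body.
tlv = org_specific_tlv(IEEE_8021_OUI, DCBX_SUBTYPES["pfc_configuration"], b"\x08\x10")
print(tlv.hex())
```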
3 iSCSI in a converged data center
3.1 Challenge for iSCSI in a shared network
In a traditional, non-DCB Ethernet iSCSI environment, it is a recommended practice to have iSCSI packets flow on a physically isolated SAN (with dedicated switches, cabling, etc.), so that the iSCSI SAN traffic is minimally affected by other, lower priority Ethernet traffic. In virtualized environments, it is a recommended practice to employ multiple NICs for traffic isolation (iSCSI, vMotion, production, management, etc.).
Figure 5 Sample enterprise iSCSI solution with DCB

3.4 EqualLogic DCB support
For EqualLogic environments using DCB over a shared network infrastructure, support for PFC, ETS, and the iSCSI TLV is required. For FCoE, it is easy for the network to assign a higher priority to its frames because they carry a separate EtherType. For iSCSI, however, the end station needs a way to distinguish iSCSI frames from other, non-iSCSI TCP/IP traffic.
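One way to picture how that identification works with the iSCSI TLV: the switch advertises an application-to-priority mapping keyed by the iSCSI TCP port, and the end station tags matching flows accordingly. The sketch below is an assumption-laden illustration of that lookup (the priority value 4 is an example, not Dell's configuration).

```python
# Illustrative application-priority lookup: map iSCSI (TCP port 3260) to an
# 802.1p priority so end stations can tag it into the lossless class.
ISCSI_TCP_PORT = 3260

application_priority_table = [
    # (protocol selector, protocol id, priority) -- example entry only
    ("tcp", ISCSI_TCP_PORT, 4),
]

def priority_for_flow(protocol, port, default_priority=0):
    for selector, proto_id, priority in application_priority_table:
        if selector == protocol and proto_id == port:
            return priority
    return default_priority      # everything else stays best effort

print(priority_for_flow("tcp", ISCSI_TCP_PORT))   # -> 4
print(priority_for_flow("tcp", 80))               # -> 0
```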
4 Transitioning to DCB
4.1 Classes of DCB switches
DCB capable switches can be categorized into different classes: "core DCB" switches, "bridge DCB" switches, and "non-DCB" switches. Figure 6 illustrates the relationship of these switch classes when used together to provide a transitional DCB data center environment.

Figure 6 DCB switch classes

4.1.1 Core DCB switch
DCB "core" switches must have full DCB feature support, as specified in section 2.3.
to that bridge DCB switch (such as non-DCB aware iSCSI devices or non-DCB downstream switches). In the transitional DCB data center, the non-DCB aware devices and switches can be attached to bridge DCB switches to gain access to the shared data center network, or the non-DCB switches can be replaced entirely by either a core or bridge switch (with the addition of a core switch) to achieve DCB in a non-DCB environment.
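As a rough illustration of the switch classes above (the feature sets here are assumptions drawn from sections 2.3 and 3.4, not a Dell qualification list), a simple classifier might look like this:

```python
# Hypothetical classifier for the "core / bridge / non-DCB" switch classes.
# The required feature sets are assumptions for illustration only.
CORE_REQUIRED = {"PFC", "ETS", "DCBx", "iSCSI TLV"}
BRIDGE_REQUIRED = {"PFC", "ETS", "DCBx"}

def classify_switch(advertised_features):
    features = set(advertised_features)
    if CORE_REQUIRED <= features:
        return "core DCB"
    if BRIDGE_REQUIRED <= features:
        return "bridge DCB"
    return "non-DCB"

print(classify_switch({"PFC", "ETS", "DCBx", "iSCSI TLV"}))  # -> core DCB
print(classify_switch({"PFC", "ETS"}))                       # -> non-DCB
```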
4.3 Data center in transition
4.3.1 Deployment considerations for DCB network
Consider the following before deploying a DCB network in the data center:
Goals: Identify why DCB needs to be implemented in the existing network infrastructure. For instance, some areas where DCB fits are:
• Highly virtualized environments: As virtualized environments grow, where a single server may host many distinct OS and application stacks, using higher bandwidth 10GbE becomes increasingly important.
4.3.2 Deployment example scenarios
As discussed in section 4.3.1, how you transition an existing data center to DCB depends on your goals. Below are a few example scenarios and the steps involved in transitioning to DCB.
4.3.2.1 Scenario 1: Simplify server (host) network connections
If the goal is to simplify the host's connections to several networks, the following steps lead to a converged DCB network. Figure 8 shows a host's connection paths to the LAN and SAN with no DCB support.
Figure 9 Data center that has transitioned to DCB, minimizing required host connection paths

4.3.2.2 Scenario 2: DCB support in a 1Gb SAN storage array network
If the goal is to transition the data center to DCB while still maintaining the use of a 1GbE iSCSI SAN, the steps below lead to a converged DCB network.
Figure 10 Data center transitioned to DCB

4.4 Future network design based on end-to-end DCB and 10GbE
In this scenario, all of the different traffic types share a single network based on 10Gb Ethernet. The enablers here are 10GbE bandwidth combined with the lossless capability provided by full DCB support, CNAs, and core DCB switches. The result is an end-to-end converged, DCB aware data center. An example of such a future data center is shown in Figure 11.
5 Conclusion Data Center Bridging is a new network standard that can bring to networks the same consolidation benefits that storage and servers have enjoyed in recent years—higher utilization rates, simpler management, and lower total cost of ownership. It is reliable, offers predictable performance, and can segregate and prioritize traffic types. Administrators can now implement standard Ethernet, Data Center Bridging, or a combination of both.
6 Glossary iSCSI: Internet Small Computer System Interface.
7 References The following resources are referenced in this document. • “iSCSI Design Considerations and Deployment Guide,” VMware 2007, http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf • “Creating a DCB Compliant EqualLogic iSCSI SAN with Mixed Traffic,” Dell Inc. August, 2011, http://en.community.dell.com/techcenter/storage/w/wiki/creating-a-dcb-compliantequallogic-iscsi-san-with-mixed-traffic.