Dell EqualLogic Best Practices Series

EqualLogic iSCSI SAN Concepts for the Experienced Fibre Channel Storage Professional

A Dell Technical Whitepaper

This document has been archived and will no longer be maintained or updated. For more information, go to the Storage Solutions Technical Documents page on Dell TechCenter or contact support.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. © 2012 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Contents
1 Overview
2 Audience
3 Fibre Channel and iSCSI basic comparison
Acknowledgements
This whitepaper was produced by the PG Storage Infrastructure and Solutions team of Dell Inc.
1 Overview
As data storage has become central to any successful computer-based solution, advanced technologies for connecting computing resources to data have matured over the last four decades.
3 Fibre Channel and iSCSI basic comparison
The roots of both the FC and iSCSI protocols are the same – SCSI. Both protocols were developed to improve upon this well-established standard and to allow it to work with modern hardware technologies. Whereas SCSI was designed as a parallel bus architecture, FC and iSCSI both transmit data serially in frames or packets.
4 Fabric architecture comparison 4.1 Network congestion and prevention FC utilizes a credit or token method of dealing with congestion, or more accurately of preventing congestion altogether. When end points (hosts and storage) initiate communication, they exchange operating parameters to inform each other about their capabilities and decide how much data they can send to each other. Each device can only send data if it has “credit”.
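The credit mechanism described above can be illustrated with a minimal sketch. This toy model is not the actual FC-2 implementation; the class name, credit count, and frame labels are hypothetical, and real HBAs and switches negotiate buffer-to-buffer credits during fabric login (FLOGI/PLOGI):

```python
from collections import deque

class CreditSender:
    """Toy model of FC buffer-to-buffer credit flow control: the sender
    may transmit only while it holds credits, and each R_RDY returned
    by the receiver restores one credit."""

    def __init__(self, bb_credit):
        self.credits = bb_credit      # negotiated at login
        self.sent = deque()

    def try_send(self, frame):
        if self.credits == 0:
            return False              # must wait: blocked, never dropped
        self.credits -= 1
        self.sent.append(frame)
        return True

    def receive_r_rdy(self):
        self.credits += 1             # receiver freed a buffer

sender = CreditSender(bb_credit=2)
assert sender.try_send("frame1")
assert sender.try_send("frame2")
assert not sender.try_send("frame3")  # out of credit
sender.receive_r_rdy()
assert sender.try_send("frame3")      # credit returned, send proceeds
```

The key property the sketch shows is that a credit scheme prevents congestion up front: a sender without credit simply stops transmitting, rather than sending frames that a congested receiver would have to drop.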
4.2 Packet and frame size
A typical FC frame is around 2148 bytes (up to 2112 bytes in the data field). Ethernet packets are typically around 1518 bytes; however, if “Jumbo Frames” are configured, that allows for an MTU (Maximum Transmission Unit) of up to 9000 bytes. The configurable Jumbo Frame size may be 9000 or slightly higher, usually depending on whether the device includes the packet header information in that value.
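The arithmetic behind these figures can be sketched as follows. This is a simplification that assumes an untagged Ethernet frame (14-byte header plus 4-byte FCS) and ignores the iSCSI, TCP, and IP headers that also consume part of the MTU:

```python
ETH_OVERHEAD = 18          # 14-byte Ethernet header + 4-byte FCS, untagged

def on_wire_size(mtu):
    """Ethernet frame size on the wire for a full-MTU packet."""
    return mtu + ETH_OVERHEAD

print(on_wire_size(1500))  # 1518 -- the typical standard frame size
print(on_wire_size(9000))  # 9018 -- a common jumbo frame on the wire

# An FC frame carries up to 2112 bytes of data. A standard Ethernet MTU
# cannot carry that much payload in a single packet; a jumbo MTU can.
FC_MAX_PAYLOAD = 2112
print(FC_MAX_PAYLOAD <= 1500)   # False
print(FC_MAX_PAYLOAD <= 9000)   # True
```

This is one practical reason Jumbo Frames are commonly recommended for dedicated iSCSI networks: larger payloads per packet mean fewer packets, and therefore less per-packet protocol overhead, for the same amount of SCSI data.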
Some Ethernet switches have stacking abilities, where a dedicated (usually proprietary) interface is connected between two or more switches, allowing them to be managed as a single switch and also to forward traffic as if they were a single switch. Non-stackable switches can be connected by configuring standard ports or higher-speed uplink ports as a Link Aggregation Group (LAG) – although sometimes these are also referred to as ISLs.
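A LAG distributes traffic across its member links by hashing packet fields so that every frame of a given flow uses the same physical link. The sketch below illustrates only that flow-pinning idea; real switches hash vendor-specific combinations of MAC addresses, IP addresses, and TCP/UDP ports, and the addresses shown here are hypothetical:

```python
def lag_member(src_ip, dst_ip, links):
    """Pick a LAG member link by hashing the flow's addresses.
    The exact hash inputs and algorithm are vendor-specific; this
    shows only that a flow always maps to the same link."""
    return hash((src_ip, dst_ip)) % links

# Every packet of the same flow lands on the same member link, so
# frames within a flow are never reordered across the LAG.
a = lag_member("10.0.0.1", "10.0.0.50", links=4)
b = lag_member("10.0.0.1", "10.0.0.50", links=4)
assert a == b
```

A consequence worth noting for SAN design: because a single flow (for example, one iSCSI session) stays on one member link, a LAG raises aggregate bandwidth across many flows but does not speed up any individual flow.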
5 Host adapters/interface card comparison 5.1 FC host bus adapters In the FC architecture, each host has (preferably) at least two FC HBAs, which provide the hardware and logic to connect the host server to the FC SAN. A number of vendors produce FC HBAs with varying capabilities including speed, number of ports, connector types (optical, copper, etc.), bus interfaces (PCI-X, PCI-e), and buffer memory sizes.
6 Storage array architecture comparison 6.1 Fibre Channel arrays FC storage systems have been around for over a decade and there are numerous choices when selecting a FC environment. While these systems range from large, highly scalable models to smaller FC arrays, many of them are monolithic designs. By that, we mean that they consist of a controller unit (or units) that contain storage processors, cache memory, and front-end (for host/switch connectivity) and back-end interfaces (for disks).
An FC system may have a mix of drive types (such as FC, SAS, SATA, and SSD disk), or it may only have a single type. Depending on the type of array, it may have multiple ports for front-end and back-end connectivity. Some may even support additional connectivity options (such as SAS, iSCSI, etc.) or NAS features (for support of NFS and CIFS protocols over Ethernet). Typical systems are built around redundant hardware components throughout and cache memory that is protected against power loss.
With EqualLogic PS Series storage, each array contains a number of disks that are usually of the same type – one exception being “hybrid” arrays, which contain a mix of SAS and SSD disks. When an array is added to a pool, all of the disks in that array are consumed and automatically added to the pool as available space. When a RAID policy is assigned, it is assigned to all disks in that array (enclosure) except for those automatically designated as hot spares.
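The pool behavior described above can be sketched in a few lines. This is a toy model, not the PS Series firmware logic; the member names, disk counts, and hot-spare counts are hypothetical (actual spare counts depend on the array model and the RAID policy chosen):

```python
class Pool:
    """Toy model: adding a member contributes all of its disks to the
    pool, and the RAID policy spans every disk in that enclosure
    except the automatically designated hot spares."""

    def __init__(self):
        self.members = []

    def add_member(self, name, disks, hot_spares):
        self.members.append((name, disks - hot_spares))

    def raid_disks(self):
        """Total disks across the pool carrying a RAID policy."""
        return sum(d for _, d in self.members)

pool = Pool()
pool.add_member("array1", disks=24, hot_spares=2)  # hypothetical counts
pool.add_member("array2", disks=16, hot_spares=1)
print(pool.raid_disks())  # 37
```

The point of the model is the all-or-nothing granularity: unlike many FC arrays, where an administrator carves RAID groups from arbitrary subsets of disks, a PS Series member contributes its entire enclosure to the pool under one RAID policy.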
From a host perspective, a group of EqualLogic arrays appears as a single storage system. A built-in network load balancer transparently redirects iSCSI requests to the proper member as needed. However, a host-side software component is also available to automatically manage and optimize iSCSI sessions.
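The redirection behavior can be sketched as follows. This is a conceptual model only, with hypothetical addresses and volume names; in practice the group uses standard iSCSI login redirection (a “target moved temporarily” login response) to steer the initiator from the group address to a member port:

```python
def login(group_ip, volume, volume_map):
    """Toy model of iSCSI login redirection: the host always dials the
    single group address, and the group answers with a redirect to the
    member port currently best placed to serve that volume."""
    member_ip = volume_map[volume]
    return {"status": "redirect", "target": member_ip}

# Hypothetical group: one advertised address, two members behind it.
volume_map = {"vol-sql": "10.10.5.11", "vol-exch": "10.10.5.12"}
resp = login("10.10.5.10", "vol-sql", volume_map)
print(resp["target"])  # 10.10.5.11
```

Because the redirection happens at login time and is transparent to the initiator, hosts need only be configured with the one group address even as members are added to or removed from the group.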
7 Security concept comparison
Security within the context of data storage provides benefits including:
• Ensuring that hostile hosts do not gain access to stored sensitive data
• Limiting access to storage devices to only the devices that are designated and allocated to each host (i.e., LUN masking and mapping)

Because Ethernet switches are common to both LAN and iSCSI SAN connectivity, it is tempting to put them all together on the same fabric.
8 Storage management Many traditional enterprise storage systems utilize licensing tiers or add-ons to enable advanced features such as snapshots, clones, replication, and advanced monitoring or systems management. When one of these features is required, a license must be purchased to enable that additional functionality. Sometimes even the addition of storage capacity requires a license or feature upgrade.
9 Conclusion
Both FC and iSCSI Storage Area Networks require proper planning and deployment to ensure optimal functionality and performance. There are various hardware choices available for iSCSI connectivity, just as there are with FC SANs. Proper configuration and monitoring of the entire SAN is necessary to ensure deterministic performance for both FC and iSCSI environments.
Appendix A Common terminology
The following terms are used throughout this paper:

DCB: Data Center Bridging – a set of enhancements to Ethernet protocols to prioritize specific types of traffic in data center environments.
E-port: Expansion port – a port on a FC switch that connects to another FC switch, joining the switches into a single fabric.
Fabric: The switching hardware in a FC or iSCSI SAN.
F-port: Fabric port – a port on a FC switch that connects to a node or end device.
HBA: Host Bus Adapter.
ISL: Inter-Switch Link.
Appendix B Related publications
The following Dell publications are referenced in this document and are recommended sources for additional information.

• Dell EqualLogic PS Series SANs Validated Component List:
  http://en.community.dell.com/techcenter/storage/w/wiki/equallogic-validatedcomponents.aspx
• Dell EqualLogic Configuration Guide:
  http://en.community.dell.com/techcenter/storage/w/wiki/equallogic-configurationguide.aspx
• PS Series Online Help:
  http://psonlinehelp.equallogic.com/V5.1/groupmanager.