NETWORKING BEST PRACTICES FOR VMWARE® INFRASTRUCTURE 3 ON DELL™ POWEREDGE™ BLADE SERVERS
April 2009
Dell Virtualization Solutions Engineering
www.dell.
Information in this document is subject to change without notice. © Copyright 2009 Dell Inc. All rights reserved. Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden. This white paper is for informational purposes only and may contain typographical errors or technical inaccuracies. The content is provided as is, without express or implied warranties of any kind.
Contents
1 Introduction
2 Overview
2.1 Fabrics
1 Introduction
This whitepaper provides an overview of the networking architecture for VMware® Infrastructure 3 on Dell™ PowerEdge blade servers and presents best practices for deploying and configuring the network in a VMware environment. References to other guides with step-by-step instructions are provided.
To use an I/O module in the Fabric B slots, a blade must have a matching mezzanine card installed in a Fabric B mezzanine card location. You can install modules designed for Fabric A in the Fabric B slots. Fabric C is a 1 to 10 Gb/sec dual-port, redundant fabric that supports I/O module slots C1 and C2. Fabric C currently supports Gigabit Ethernet, InfiniBand, and Fibre Channel modules.
• simplifies management, keeps server-to-server traffic within the Virtual Blade Switch (VBS) domain rather than congesting the core network, and can significantly consolidate external cabling
o Optional software license key upgrades to IP Services (advanced Layer 3 protocol support) and Advanced IP Services (IPv6)
• Dell Ethernet Pass-Through Module: supports 16 x 10/100/1000 Mb copper RJ45 connections.
Figure 3: Adapter and I/O module connections in the chassis for full-height blades
For more information on port mapping, see the Hardware Owner's Manual for your blade server model at http://support.dell.com.
2.4 Mapping between ESX Physical Adapter Enumeration and I/O Modules
The following table shows how ESX/ESXi 3.5 servers enumerate the physical adapters and the I/O modules they connect to.
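On the ESX service console, `esxcfg-nics -l` lists the physical adapters in enumeration order, which can be checked against this mapping. The sketch below parses sample output into a vmnic-to-module listing; the sample lines and the alternating A1/A2 assignment are illustrative assumptions for a half-height blade's onboard adapters, not output captured from a real host.

```shell
# Illustrative sample of the first two columns of `esxcfg-nics -l` output.
# On a live ESX 3.5 host, pipe the real command output instead.
esxcfg_sample='vmnic0 03:00.00 bnx2 Up 1000Mbps
vmnic1 05:00.00 bnx2 Up 1000Mbps'

# Print one "vmnic -> module" line per adapter. The A1/A2 alternation
# assumes the half-height LOM ordering described in this section.
echo "$esxcfg_sample" | awk '{ printf "%s -> module %s\n", $1, (NR % 2 ? "A1" : "A2") }'
```

Comparing this listing against the table above confirms which external I/O module each vmnic reaches before any vSwitch uplinks are assigned.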
3 Network Architecture
Network traffic can be divided into two primary types: Local Area Network (LAN) and iSCSI Storage Area Network (SAN). LAN traffic consists of virtual machine, ESX/ESXi management (service console for ESX), and VMotion traffic. iSCSI SAN traffic consists of iSCSI storage network traffic.
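A minimal service-console sketch of this separation follows; it is a configuration fragment, not a complete procedure, and the vSwitch names and vmnic assignments are assumptions to be adapted to the enumeration on your chassis.

```shell
# Hypothetical sketch: one vSwitch for LAN traffic, one for iSCSI SAN
# traffic, each with two uplinks for redundancy.
esxcfg-vswitch -a vSwitch1            # LAN: VM, management, VMotion
esxcfg-vswitch -L vmnic0 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -a vSwitch2            # iSCSI SAN only
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
```

Keeping the two traffic types on separate vSwitches with separate uplinks ensures storage traffic never contends with LAN traffic on the same physical ports.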
• High iSCSI SAN Bandwidth: In this configuration, two I/O modules are dedicated to the LAN and four I/O modules are dedicated to the iSCSI SAN. This configuration is useful for environments with high back-end SAN bandwidth requirements, such as database workloads, and low LAN bandwidth requirements.
3.3.1 Traffic Isolation using VLANs
The traffic on the LAN network is separated into three VLANs, one VLAN each for management, VMotion, and virtual machine traffic. Network traffic is tagged with the respective VLAN ID for each traffic type in the virtual switch. This is achieved through Virtual Switch Tagging (VST) mode, in which a VLAN is assigned to each of the three port groups.
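The VST assignment can be sketched as the following service-console configuration fragment; the port group names follow VMware conventions, but the vSwitch name and the VLAN IDs 10, 20, and 30 are assumptions for illustration.

```shell
# Hypothetical VST sketch: one port group per traffic type on the LAN
# vSwitch, each tagged with its own VLAN ID.
esxcfg-vswitch -A "Service Console" vSwitch1
esxcfg-vswitch -v 10 -p "Service Console" vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vswitch -v 20 -p "VMotion" vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
esxcfg-vswitch -v 30 -p "VM Network" vSwitch1
```

For VST to work, the corresponding blade switch ports must be configured as trunk ports carrying all three VLANs.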
You can also connect multiple blade chassis together. If the total number of front-end switches is fewer than 8 for Cisco, or 12 for Dell, you can stack all the switches into a single Virtual Blade Switch. Multiple Virtual Blade Switches can be daisy-chained together by creating two EtherChannels.
4 References
iSCSI overview - A "Multivendor Post" to help our mutual iSCSI customers using VMware
http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-usingvmware.html
Integrating Blade Solutions with EqualLogic SANs
http://www.dell.com/downloads/global/partnerdirect/apj/Integrating_Blades_to_EqualLogic_SAN.pdf
Cisco Products
http://www.cisco.