6.1 Dell Blade Network Architecture
The Dell blade chassis has three separate fabrics, referred to as A, B, and C. Each fabric has two I/O
modules, making a total of six I/O module slots in the chassis. The I/O modules are A1, A2, B1, B2, C1,
and C2. Each I/O module can be an Ethernet physical switch, an Ethernet pass-through module, a Fibre
Channel (FC) switch, or an FC pass-through module. Each half-height blade server has a Blade Network
Daughter Card (bNDC) that replaces the conventional LAN on Motherboard (LOM); the bNDC can be
selected from several options depending on solution requirements.
Chassis Fabric A contains 10 GbE PowerConnect Pass-Through-k modules and is used for LAN traffic.
Fabric B contains Dell 8|4 Gbps SAN modules and is used for SAN traffic. Fabric C is unused.
PowerEdge M620 blade servers use a Broadcom 57810-k dual-port 10Gb KR bNDC to connect to
Fabric A. The Pass-Through-k modules uplink to Dell Force10 S4810 network switches, providing LAN
connectivity. QLogic QME2572 8 Gbps Fibre Channel I/O mezzanine cards connect to Fabric B and its
Dell 8|4 Gbps SAN modules. The uplinks of the Dell 8|4 Gbps SAN modules connect to Brocade 5100
switches, providing SAN connectivity.
The network traffic on each blade includes iSCSI as well as traffic for the parent partition (hypervisor),
Live Migration, cluster heartbeat and cluster shared volume (CSV), and child partitions (virtual
machines). A Smart Load Balancing (SLB) with Failover team is created using the two 10 GbE ports, and
VLAN adapters are created on top of the team for each traffic type. A Hyper-V virtual network switch
is then created and bound to the virtual machine VLAN adapter, as sketched below.
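The following Python sketch is purely illustrative: it models the layering described above (two physical
10 GbE ports, an SLB with Failover team, one VLAN adapter per traffic type, and a virtual switch bound
to the virtual machine adapter). The class names, port names, and VLAN IDs are hypothetical examples,
not values prescribed by this architecture.

    # Illustrative model of the blade network layering:
    # physical ports -> SLB with Failover team -> VLAN adapters -> virtual switch.
    # All names and VLAN IDs are hypothetical examples.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Team:
        name: str
        mode: str                  # e.g. "SLB with Failover"
        members: List[str]         # physical 10 GbE ports in the team

    @dataclass
    class VlanAdapter:
        traffic_type: str
        vlan_id: int
        team: Team

    team = Team("Team0", "SLB with Failover", ["10GbE-Port1", "10GbE-Port2"])

    # One VLAN adapter per traffic type, all layered on the same team.
    vlan_adapters = [
        VlanAdapter("Parent partition (hypervisor)", 10, team),
        VlanAdapter("iSCSI", 20, team),
        VlanAdapter("Live Migration", 30, team),
        VlanAdapter("Cluster heartbeat / CSV", 40, team),
        VlanAdapter("Virtual machines", 50, team),
    ]

    # The virtual switch binds only to the virtual machine VLAN adapter.
    vm_adapter = next(a for a in vlan_adapters if a.traffic_type == "Virtual machines")
    virtual_switch = {"name": "VSwitch0", "bound_to": vm_adapter}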
6.2 Server / Blade Network Connectivity
Each host network adapter in both the compute and management hosts uses network teaming to provide
highly available connectivity to each layer of the networking stack. The teaming architecture closely
follows the Hyper-V: Live Migration Network Configuration Guide, but extends it to provide highly
available networking for each traffic type used in the architecture.
The 10GbE bNDC supports Network Partitioning (NPAR), which allows the 10GbE pipe to be split with no
specific configuration required on the switches. With NPAR, administrators can split each 10GbE port
into four separate partitions, or physical functions, and allocate bandwidth and resources to them as
needed. Each partition is enumerated as a PCI Express function that appears as a separate physical NIC
to the server's system ROM, operating systems, and hypervisor.
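As a minimal, purely illustrative sketch of the bandwidth-allocation arithmetic (not a configuration
tool for the BCM57810), the following Python snippet splits a 10GbE port across four partitions using
hypothetical relative weights that are assumed to sum to 100.

    # Illustrative only: split a 10 Gb port across four NPAR partitions
    # using relative bandwidth weights. Weights are hypothetical examples
    # and are assumed to sum to 100.
    PORT_SPEED_GBPS = 10
    weights = {"Partition1": 40, "Partition2": 30, "Partition3": 20, "Partition4": 10}

    if sum(weights.values()) != 100:
        raise ValueError("Relative bandwidth weights must sum to 100")

    for partition, weight in weights.items():
        gbps = PORT_SPEED_GBPS * weight / 100
        print(f"{partition}: {weight}% of the port = {gbps:.1f} Gbps")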
As mentioned previously, each PowerEdge M620 blade server is configured with a Broadcom BCM57810
bNDC providing two 10GbE ports. These ports are wired to the pass-through modules in Fabric A, and
the corresponding ports on the A1 and A2 modules are connected to the two Dell Force10 S4810 switches
outside the blade chassis enclosure. Each PowerEdge R620 rack server is configured with a Broadcom
BCM57810 add-in NIC providing two 10Gb SFP+ ports, which are also connected to the two Force10 S4810
switches. The two Force10 S4810 switches are configured with an Inter Switch Link (ISL) using two
40 Gbps QSFP+ links; a Link Aggregation Group (LAG) is created across the two 40 Gbps QSFP+ ports,
providing a path for communication between the switches. Network connectivity for the M620 is
illustrated in Figure 4 below.