vStart 1000v for Enterprise Virtualization using VMware vSphere: Reference Architecture
and two optional dual-port mezzanine I/O cards. The NDC connects to Fabric A. One mezzanine I/O
card attaches to Fabric B, and the other attaches to Fabric C.
In this solution, chassis Fabric A contains 10 GbE Pass-Through-k modules and carries LAN traffic.
Fabric B contains Dell 8|4 Gbps SAN modules and carries SAN traffic. Fabric C is unused.
PowerEdge M620 blade servers use a Broadcom 57810-k dual-port 10 GbE KR bNDC (blade Network
Daughter Card) to connect to Fabric A. The Pass-Through-k modules uplink to Dell Force10 S4810
network switches, providing LAN connectivity. QLogic QME2572 8 Gbps Fibre Channel mezzanine I/O
cards connect to the Dell 8|4 Gbps SAN modules, whose uplinks connect to Brocade 5100 switches,
providing SAN connectivity.
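The fabric-to-switch wiring described above can be captured as a simple lookup table, useful in inventory or validation scripting. This is an illustrative sketch only; the dictionary structure and function name are assumptions, not Dell-provided identifiers:

```python
# Sketch of the vStart 1000v blade I/O fabric mapping described above.
# Keys and field names are illustrative assumptions for this example.
FABRIC_MAP = {
    "A": {
        "traffic": "LAN",
        "blade_adapter": "Broadcom 57810-k 10GbE KR bNDC",
        "chassis_module": "10 GbE Pass-Through-k",
        "upstream_switch": "Dell Force10 S4810",
    },
    "B": {
        "traffic": "SAN",
        "blade_adapter": "QLogic QME2572 8Gbps FC mezzanine",
        "chassis_module": "Dell 8|4 Gbps SAN module",
        "upstream_switch": "Brocade 5100",
    },
    "C": None,  # Fabric C is unused in this solution
}

def upstream_switch(fabric: str):
    """Return the upstream LAN/SAN switch for a fabric, or None if unused."""
    entry = FABRIC_MAP.get(fabric)
    return entry["upstream_switch"] if entry else None

print(upstream_switch("A"))  # Dell Force10 S4810
print(upstream_switch("C"))  # None
```

A table like this makes the LAN/SAN separation explicit: every path to Ethernet switching goes through Fabric A, and every Fibre Channel path goes through Fabric B.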
Figure 3 below illustrates how the fabrics are populated in a Dell blade server chassis and how the I/O
modules are utilized.
Figure 3: I/O Connectivity for PowerEdge M620 Blade Server
[Figure: a PowerEdge M620 blade with a Broadcom 57810-k 10 Gb KR NDC connected to Fabrics A1/A2 (10 Gb Ethernet Pass-Through-k modules), a QLogic QME2572 on Mezzanine B connected to Fabrics B1/B2 (Dell 8|4 I/O SAN modules), and Mezzanine C unused (Fabrics C1/C2 unused).]
Network Interface Card Partitioning (NPAR): NPAR allows splitting the 10 GbE pipe on the NDC with no
specific configuration required in the switches. With NPAR, administrators can split each 10 GbE
port of an NDC into four separate partitions, or physical functions, and allocate bandwidth
and resources as needed. Each partition is enumerated as a PCI Express function and appears
as a separate physical NIC to the server's operating system and hypervisor. The vStart 1000v solution