Deployment Guide

Dell Networking FC Flex IOM: Deployment of FCoE with Dell FC Flex IOM, Brocade FC switches,
and Dell Compellent Storage Array
Deployment/Configuration Guide
2 Dell PowerEdge M1000e Overview
The PowerEdge M1000e modular server enclosure supports up to 32 server modules and 6
network I/O modules. The M1000e contains a high-performance, highly available passive midplane
that connects the server modules to the infrastructure components: power supplies, fans, integrated
KVM, and Chassis Management Controllers (CMCs). The PowerEdge M1000e uses redundant,
hot-pluggable components throughout to provide maximum uptime, and its six I/O module slots
allow a greater diversity of roles for the enclosed blade servers.
The six I/O slots in the back of the chassis are grouped into three separate fabrics, each containing
two slots (A1/A2, B1/B2, C1/C2); these fabric I/O slots map to the ports on the server-side network
adaptors. The fabrics can be used independently of each other, but both I/O modules within a fabric
must use the same technology. For example, fabric A is hardwired to the two network adaptors on the
blade server mainboards, which means the I/O modules in fabric A must support Ethernet; fabrics B
and C can be used for Ethernet, Fibre Channel, or InfiniBand. Figure 2 below shows the I/O mappings
between the server-side dual/quad-port network adaptors and the I/O modules.
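The fabric rules above can be summarized in a short sketch. This is purely illustrative (the names `FABRIC_SLOTS`, `ALLOWED_TECH`, and `validate_fabric` are hypothetical, not a Dell tool or API): each fabric spans two redundant slots, both modules in a fabric must use the same technology, and fabric A is restricted to Ethernet.

```python
# Hypothetical sketch of the M1000e fabric constraints described above.
# Names are illustrative only, not part of any Dell management interface.

# Each fabric spans two redundant I/O slots in the back of the chassis.
FABRIC_SLOTS = {
    "A": ("A1", "A2"),  # hardwired to the mainboard network adaptors
    "B": ("B1", "B2"),
    "C": ("C1", "C2"),
}

# Technologies each fabric may carry: fabric A is Ethernet-only;
# fabrics B and C accept Ethernet, Fibre Channel, or InfiniBand.
ALLOWED_TECH = {
    "A": {"Ethernet"},
    "B": {"Ethernet", "Fibre Channel", "InfiniBand"},
    "C": {"Ethernet", "Fibre Channel", "InfiniBand"},
}

def validate_fabric(fabric: str, module_1_tech: str, module_2_tech: str) -> bool:
    """Both I/O modules in a fabric must use the same, permitted technology."""
    if module_1_tech != module_2_tech:
        return False
    return module_1_tech in ALLOWED_TECH[fabric]
```

For example, placing a Fibre Channel module in fabric A fails the check, while a matched pair of Fibre Channel modules in fabric B passes.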
Note: The network adaptors in fabric A are also described as LOMs (LAN on Motherboard) or
bNDCs (blade Network Daughter Cards). All of these terms describe the same device: a network
adaptor that performs Ethernet/iSCSI/FCoE tasks on behalf of the server and its operating
system.
Figure 2 M1000e Front and Back View