Administrator Guide
NPIV Proxy Gateway for FC Flex IO Modules
The N-port identifier virtualization (NPIV) Proxy Gateway (NPG) feature provides FCoE-FC bridging capability on the M I/O
Aggregator and MXL 10/40GbE Switch with the FC Flex IO module, allowing server converged network adapters (CNAs) to
communicate with SAN fabrics over the M I/O Aggregator and MXL 10/40GbE Switch with the FC Flex IO module.
To configure the M I/O Aggregator and MXL 10/40GbE Switch with the FC Flex IO module to operate as an NPIV proxy gateway,
use the following commands:
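As a minimal sketch of what such a configuration can look like (the map name SAN_FABRIC_A, VLAN 1002, FC-MAP 0efc00, and priority value below are assumed example values, not platform defaults), an FCoE map that associates the gateway's FCoE VLAN with an upstream SAN fabric might be defined as:

```
! Illustrative FCoE map; names and values are examples only
fcoe-map SAN_FABRIC_A
 fabric-id 1002 vlan 1002   ! fabric ID and dedicated FCoE VLAN
 fc-map 0efc00              ! FC-MAP prefix used to build FCoE MAC addresses
 fcf-priority 128           ! FCF priority advertised to server CNAs
```

The same map is later applied to the server-facing Ethernet ports and the FC N ports so that both sides share one fabric definition.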
NPIV Proxy Gateway Configuration on FC Flex IO Modules
The Fibre Channel (FC) Flex IO module is supported on the MXL 10/40GbE Switch and M I/O Aggregator (IOA). The MXL and IOA
switches, installed with the FC Flex IO module, function as a top-of-rack edge switch that supports Converged Enhanced Ethernet
(CEE) traffic — Fibre Channel over Ethernet (FCoE) for storage, Interprocess Communication (IPC) for servers, and Ethernet local
area network (LAN) (IP cloud) for data — as well as FC links to one or more storage area network (SAN) fabrics.
The N-port identifier virtualization (NPIV) proxy gateway (NPG) provides FCoE-FC bridging capability on the MXL 10/40GbE Switch
and M I/O Aggregator with the FC Flex IO module.
This chapter describes how to configure and use an NPIV proxy gateway on an MXL 10/40GbE Switch and M I/O Aggregator with
the FC Flex IO module in a SAN.
NPIV Proxy Gateway Operations and Capabilities
Benefits of an NPIV Proxy Gateway
The MXL 10/40GbE Switch and M I/O Aggregator with the FC Flex IO module function as a top-of-rack edge switch that supports
Converged Enhanced Ethernet (CEE) traffic — FCoE for storage, Interprocess Communication (IPC) for servers, and Ethernet LAN
(IP cloud) for data — as well as Fibre Channel (FC) links to one or more SAN fabrics.
Using an NPIV proxy gateway (NPG) helps resolve the following problems in a storage area network:
• Fibre Channel storage networks typically consist of servers connected to edge switches, which are connected to SAN core
switches. As the SAN grows, it is necessary to add more ports and SAN switches. This results in an increase in the required
domain IDs, which may surpass the upper limit of 239 domain IDs supported in the SAN network. An NPG avoids the need for
additional domain IDs because it is deployed outside the SAN and uses the domain IDs of core switches in its FCoE links.
• With the introduction of 10GbE links, FCoE is being implemented for server connections to optimize performance. However, a
SAN traditionally uses Fibre Channel to transmit storage traffic. FCoE servers require an efficient and scalable bridging feature to
access FC storage arrays, which an NPG provides.
NPIV Proxy Gateway Operation
Consider a sample scenario of NPG operation. An M1000e chassis configured as an NPG does not join a SAN fabric, but functions as
an FCoE-FC bridge that forwards storage traffic between servers and core SAN switches. The core switches forward SAN traffic to
and from FC storage arrays.
An M1000e chassis FC port is configured as an N (node) port that logs in to an F (fabric) port on the upstream FC core switch and
creates a channel for N-port identifier virtualization. NPIV allows multiple N-port fabric logins at the same time on a single, physical
Fibre Channel link.
Converged Network Adapter (CNA) ports on servers connect to the M1000e chassis Ten-Gigabit Ethernet ports and log in to an
upstream FC core switch through the FC Flex IO module N port. Server fabric login (FLOGI) requests are converted into fabric
discovery (FDISC) requests before being forwarded by the FC Flex IO module to the FC core switch.
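Once servers have logged in through the gateway, the resulting NPIV sessions (ENode MAC, WWPN, and the FC ID assigned via the converted FDISC) can be inspected on the switch. A sketch, assuming the Dell Networking OS show commands for NPIV devices:

```
! Summary of logged-in server devices behind the NPG
show npiv devices brief
! Detailed per-device view, including FCoE MAC, WWPN/WWNN, and fabric mapping
show npiv devices
```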
Servers use CNA ports to connect over FCoE to an Ethernet port in ENode mode on the NPIV proxy gateway. FCoE transit with FIP
snooping is automatically enabled and configured on the M1000e gateway to prevent unauthorized access and data transmission to
the SAN network (see FCoE Transit). FIP is used by server CNAs to discover an FCoE switch operating as an FCoE forwarder
(FCF).
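To tie the pieces together, the FCoE map is applied both to the FC N port facing the core switch and to the server-facing Ten-Gigabit Ethernet ports. A hedged sketch, assuming the example map SAN_FABRIC_A from above and illustrative interface numbers (adjust to the installed FC Flex IO slot and server-facing ports):

```
! Upstream FC N port toward the SAN core switch (interface number is an example)
interface fibrechannel 0/41
 fabric SAN_FABRIC_A
 no shutdown
!
! Server-facing ENode port carrying converged LAN + FCoE traffic
interface tengigabitethernet 0/4
 portmode hybrid
 switchport
 fcoe-map SAN_FABRIC_A
 no shutdown
```

Because FIP snooping is enabled automatically on the gateway, the active sessions established through these ports can then be reviewed with the switch's FIP-snooping show commands.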