Network Virtualization using Extreme Fabric Connect
Table of Contents
- Table of Contents
- Table of Figures
- Table of Tables
- Conventions
- Introduction
- Reference Architecture
- Guiding Principles
- Architecture Components
- User to Network Interface
- Network to Network Interface
- Backbone Core Bridge
- Backbone Edge Bridge
- Customer MAC Address
- Backbone MAC Address
- SMLT-Virtual-BMAC
- IS-IS Area
- IS-IS System ID
- IS-IS Overload Function
- SPB Bridge ID
- SPBM Nick-name
- Dynamic Nick-name Assignment
- Customer VLAN
- Backbone VLAN
- Virtual Services Networks
- I-SID
- Inter-VSN Routing
- Fabric Area Network
- Fabric Attach / Auto-Attach
- FA Server
- FA Client
- FA Proxy
- FA Standalone Proxy
- VPN Routing and Forwarding Instance
- Global Router Table
- Distributed Virtual Routing
- Zero Touch Fabric (ZTF)
- Foundations for the Service Enabled Fabric
- IP Routing and L3 Services over Fabric Connect
- L2 Services Over SPB IS-IS Core
- Fabric Attach
- IP Multicast Enabled VSNs
- Extending the Fabric Across the WAN
- Distributed Virtual Routing
- Quality of Service
- Consolidated Design Overview
- High Availability
- Fabric and VSN Security
- Fabric as Best Foundation for SDN
- Glossary
- Reference Documentation
- Revisions
(RFC6325), as this limited the technology to: a) only extending L2 VLANs and b) a maximum of 4095
VLANs. A later TRILL standard (RFC7172) introduced Fine-Grained Labelling, which adds an inner
label split into a high part and a low part, each encoded over 12 bits, but at the price of defining yet another
packet encapsulation that is incompatible with the previous one.
Tip
SPBM’s Mac-in-Mac encapsulation does away with the VLAN used as a service ID field and
instead replaces it with a more scalable 24-bit Service-ID (I-SID), which can be extended to
offer any service type, much like on an MPLS network, while theoretically scaling up
to 16 million services.
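As a rough illustration of the scaling difference, the sketch below compares the usable identifier space of a classic 12-bit VLAN ID, TRILL Fine-Grained Labelling’s two 12-bit parts, and SPBM’s 24-bit I-SID. It is plain arithmetic in Python, not configuration from any platform.

```python
# Rough comparison of service-identifier address space, using the field widths
# discussed above (12-bit VLAN ID, 2 x 12-bit TRILL FGL parts, 24-bit I-SID).
# Purely illustrative; values are theoretical maxima before reserved IDs.

FIELD_WIDTHS_BITS = {
    "802.1Q VLAN ID": 12,            # 4096 values, minus reserved IDs
    "TRILL Fine-Grained Label": 24,  # high part (12 bits) + low part (12 bits), RFC 7172
    "SPBM I-SID": 24,                # service identifier in the Mac-in-Mac header
}

for name, bits in FIELD_WIDTHS_BITS.items():
    print(f"{name:26s} {bits:2d} bits -> {2**bits:,} identifiers")

# Expected output:
#   802.1Q VLAN ID             12 bits -> 4,096 identifiers
#   TRILL Fine-Grained Label   24 bits -> 16,777,216 identifiers
#   SPBM I-SID                 24 bits -> 16,777,216 identifiers
```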
TRILL’s unlimited equal-cost multi-pathing capability is the reason it was developed and proposed
for the data center market: it fits well in large spine-leaf architectures (where SPBM instead requires as
many BVLANs as there are spines) and in environments where the virtualization requirements are limited to L2 only.
There are two TRILL implementations to note:
• Cisco FabricPath: Uses TRILL’s IS-IS control plane but a Cisco proprietary packet encapsulation.
• Extreme Networks VCS (Virtual Cluster Switching): Originally developed by Brocade, this
implementation does use the TRILL packet encapsulation (including support for Fine-Grained Labelling)
but uses Fibre Channel’s FSPF (Fabric Shortest Path First) instead of IS-IS as the TRILL control
plane.
Neither of these TRILL implementations is fully standards-based, and they do not interoperate with each other.
In summary, TRILL-based fabrics offer solutions only for the data center, cover only L2 Virtualization
without scaling beyond the 4095 VLAN IDs, and completely lack multi-tenant L3 Virtualization.
Ethernet VPN
Ethernet VPN (EVPN) running over a VXLAN overlay brings together much of the MPLS VPLS and MPLS-
VPN functionality in a new architecture focused on the data center. EVPN tightly integrates L2
Virtualization with L3 Virtualization and at the same time allows active-active multi-homing of hosts (which
was not possible with VPLS). The EVPN control plane remains BGP and is almost identical to the MPLS-VPN
architecture based on Route Distinguishers and Route Targets, but it introduces new BGP NLRIs to advertise
host IP and MAC addresses.
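As a conceptual illustration only, the sketch below models in Python the kind of information such an EVPN MAC/IP advertisement carries: the Route Distinguisher and Route Targets familiar from MPLS-VPN, plus the host MAC and IP addresses that the new NLRI adds. The field and class names are illustrative assumptions, not the RFC 7432 wire encoding or any vendor API.

```python
# Illustrative (not wire-accurate) model of an EVPN MAC/IP advertisement,
# showing how MPLS-VPN concepts (RD, RT) are reused while the NLRI itself
# now advertises host MAC/IP bindings reachable via a VTEP.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EvpnMacIpRoute:
    route_distinguisher: str          # e.g. "65001:100" - keeps tenant routes unique
    route_targets: List[str]          # import/export policy, exactly as in MPLS-VPN
    mac_address: str                  # host MAC learned at the leaf (ToR) node
    ip_address: Optional[str] = None  # optional host IP, enabling integrated L2+L3
    vni: Optional[int] = None         # VXLAN network identifier used in the data plane
    next_hop_vtep: str = ""           # VTEP address the underlay must provide reachability to

# Hypothetical route a leaf might originate for a locally attached host:
route = EvpnMacIpRoute(
    route_distinguisher="65001:100",
    route_targets=["target:65001:100"],
    mac_address="00:11:22:33:44:55",
    ip_address="10.1.100.20",
    vni=10100,
    next_hop_vtep="192.0.2.11",
)
print(route)
```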
Tip
EVPN is a huge improvement over VPLS because it was built on consensus and cooperation
between different router vendors and service providers to ensure interoperability. VPLS had
several different operating modes, which made it prone to interoperability issues.
The EVPN standard is defined for operation over traditional MPLS, but in practice all implementations
deploy EVPN over a VXLAN overlay, which eliminates the MPLS complexity.
The underlay network only needs to provide IP reachability for the VXLAN tunnel end points (VTEPs). The
BGP control plane remains complex and requires MP-BGP to run on every ToR leaf node. Most EVPN
vendors use BGP not only as the EVPN control plane but also as the IGP for the underlay. BGP is designed to
be scalable, but it is not naturally fast and must therefore always be combined with BFD to achieve the kind
of resilience expected in the data center. The resulting BGP configuration is fairly complex and becomes a
spine-leaf mesh of either eBGP peerings (where each leaf is a separate AS number) or iBGP peerings towards
BGP Route Reflectors running on the spine nodes.
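To make that peering complexity concrete, the short sketch below compares the two underlay designs described above: eBGP with a distinct AS number per leaf, and iBGP towards Route Reflectors running on the spines. The spine and leaf counts are arbitrary example values, not recommendations.

```python
# Back-of-the-envelope sizing of the BGP control plane in a spine-leaf EVPN
# fabric, for the two common underlay designs mentioned above.
spines = 4
leaves = 64

# Design 1: eBGP, each leaf in its own AS, peering with every spine.
ebgp_sessions = spines * leaves      # one session per leaf-spine link
ebgp_as_numbers = leaves + 1         # one AS per leaf, plus the spine AS

# Design 2: iBGP, single AS, Route Reflectors on the spines.
route_reflectors = spines
ibgp_sessions = route_reflectors * leaves
ibgp_as_numbers = 1

print(f"eBGP design: {ebgp_sessions} sessions, {ebgp_as_numbers} AS numbers to manage")
print(f"iBGP design: {ibgp_sessions} sessions, {ibgp_as_numbers} AS, "
      f"{route_reflectors} Route Reflectors to operate")

# Either way, each of these sessions is typically paired with BFD to achieve
# the failure-detection speed expected in the data center, as noted above.
```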
In large, self-service-oriented data centers, where automation becomes a key element of the design, the
EVPN provisioning complexity can also be automated.