Network Virtualization using Extreme Fabric Connect

© 2019 Extreme Networks, Inc. All rights reserved.
| | TRILL/FabricPath/VCS | EVPN | Fabric Connect |
|---|---|---|---|
| Multi-homing of host (NIC teaming) into >2 ToRs | No, if MC-LAG/SMLT is required on ToR. Yes, with Extreme VCS Fabric. Yes, if hypervisor hashing mode negates use of MC-LAG/SMLT on ToR | Yes (but not all EVPN vendors offer this capability; Extreme SLX/VDX platforms do not) | No, if MC-LAG/SMLT is required on ToR. Yes, if hypervisor hashing mode negates use of MC-LAG/SMLT on ToR |
| Flooding of Broadcast, Unknowns and Multicast (BUM) | Via elected root bridge; not shortest path, but using efficient replication | Inefficient ingress replication performed on the VXLAN overlay | Efficient service-specific shortest path multicast trees |
| Control plane advertises host MACs | No, data plane MAC learning | Yes, via BGP EVPN Route Type 2 | No, data plane MAC learning performed within the L2 VSN service |
| Control plane advertises host IPs | No, L2 only | Yes, via BGP EVPN Route Type 2 | Yes, with DVR |
| Technology scalability limit | 4059 VLANs | BGP scaling limits | Extreme SPB Fabric is typically limited to a maximum of 500 FC nodes (though some VSP platforms can scale higher) |
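The BUM-flooding comparison above can be illustrated with a rough packet-copy count. This is a simplified model of my own, not vendor data: ingress replication forces the ingress VTEP to emit one unicast copy per remote VTEP, whereas a shortest-path multicast tree forwards one copy per tree link, spreading the load across the fabric.

```python
# Simplified, illustrative model of BUM replication cost (not vendor data).

def ingress_replication_copies(num_vteps: int) -> int:
    """Ingress VTEP unicasts one copy to every other VTEP in the service."""
    return num_vteps - 1

def multicast_tree_copies(tree_links: int) -> int:
    """A shortest-path multicast tree forwards one copy per tree link."""
    return tree_links

# Example: 50 VTEPs in one L2 service. With ingress replication the single
# ingress VTEP emits 49 copies on its own uplinks; a tree spanning the same
# 50 nodes also uses 49 copies, but each link carries only one of them.
print(ingress_replication_copies(50))  # 49 copies, all from one node
print(multicast_tree_copies(49))       # 49 copies, spread across the tree
```

The totals are similar; the difference the table points at is where the replication burden lands: concentrated on the ingress VTEP's uplinks versus distributed along the tree.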
Cisco’s Campus Fabric
On the campus side, another fabric technology worth mentioning is Cisco's Campus Fabric, part of Cisco's Digital Network Architecture (DNA) framework. This fabric technology is very similar to EVPN in that it is also based on a VXLAN overlay, but it differs from EVPN in its choice of the overlay's signalling control plane, which is based not on BGP but on LISP (Locator/Identifier Separation Protocol). According to Cisco, LISP offers an alternative to BGP that can run with smaller tables and less CPU power. But where EVPN only has VTEPs, these become LISP Tunnel Routers in the Cisco fabric, which additionally requires a LISP Map Server/Resolver as well as LISP Proxy Tunnel Routers for external connectivity. These functions add complexity to the solution and must be deployed in a redundant fashion to avoid single points of failure.
Like EVPN, Cisco's Campus Fabric can support tight integration of L2 and L3 virtualization as well as a distributed anycast gateway. Unlike EVPN, there is no requirement for a spine-leaf architecture: all topologies are supported. IP Multicast is also supported (both PIM-SM and PIM-SSM), but it is again implemented using inefficient ingress replication in the underlay.