Network Virtualization using Extreme Fabric Connect

Tip
Extreme Networks does offer data center EVPN-based solutions, and these can be fully
automated via Extreme Workflow Composer (EWC) as well as with the integrated Embedded
Fabric Automation (EFA) on SLX platforms.
Use of IP ECMP in the underlay underpins the VXLAN overlay's ability to provide equal-cost multipath
between the leaf VTEPs. The EVPN model also ensures load balancing toward multiple leaf VTEPs
to which hosts are multi-homed, but this form of multipath is guaranteed to be shortest path only if the
underlay topology is spine-leaf (and hence all remote leaf VTEPs are an equal number of hops away). This is
why EVPN deployments are invariably spine-leaf and why EVPN is positioned only for the data center.
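To make the underlay ECMP behavior concrete, the following is a minimal Python sketch (all addresses and names are illustrative, not any vendor's implementation) of how a node typically hashes a flow's 5-tuple to pick one of several equal-cost next hops, keeping every packet of a flow on one path while spreading distinct flows across all spine uplinks:

```python
import hashlib

def ecmp_next_hop(next_hops, src_ip, dst_ip, src_port, dst_port, proto):
    """Pick one of several equal-cost next hops for a flow.

    Hashing the 5-tuple keeps all packets of a flow on the same
    path while spreading distinct flows across the uplinks.
    """
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(flow).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

# Two spines reachable at equal cost from a leaf VTEP (hypothetical IPs).
spines = ["10.0.0.1", "10.0.0.2"]
print(ecmp_next_hop(spines, "192.0.2.10", "192.0.2.20", 49152, 4789, 17))
```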
VXLAN brings its own constraints to the EVPN model in the way flooded L2 Broadcast, Unknown
unicast, and Multicast (BUM) traffic is handled. In theory, IP multicast could be enabled in the underlay IP
network so that the VXLAN overlay can lean on the underlay's more efficient replication of flooded
packets. In practice, running IP Multicast in what is a traditional routed IP underlay presents such scaling
and operational challenges that all vendors proposing EVPN solutions recommend deploying VXLAN with
ingress replication instead, which is far less efficient.
As a result, EVPN is not a suitable architecture for handling IP Multicast in the data center.
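The inefficiency of ingress replication can be illustrated with a short, hypothetical Python sketch: the ingress VTEP itself produces one unicast VXLAN-encapsulated copy per remote VTEP in the VNI's flood list, so a single received BUM frame becomes N frames on the uplinks:

```python
def ingress_replicate(bum_frame, local_vtep, remote_vteps):
    """Head-end (ingress) replication: the ingress VTEP unicasts one
    VXLAN-encapsulated copy of a BUM frame to every remote VTEP in
    the flood list -- N frames on the uplinks for 1 frame received."""
    return [
        {"outer_src": local_vtep, "outer_dst": vtep, "payload": bum_frame}
        for vtep in remote_vteps
    ]

flood_list = [f"10.1.0.{i}" for i in range(1, 101)]  # 100 remote leaf VTEPs
copies = ingress_replicate("ARP-request", "10.1.1.1", flood_list)
print(len(copies))  # 100 copies leave the ingress leaf for one ingress frame
```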
Note
IP Multicast is typically not a requirement for the large data centers deploying EVPN.
The same constraints apply to the way EVPN handles L2 broadcast and unknown unicast packets.
One Mbps of BUM traffic ingressing a leaf node on a single access port can end up consuming 100
Mbps on the uplinks if those L2 BUM packets need to be replicated to 100 other leaf VTEPs, and the
effect compounds quickly when many end-points emit BUM traffic. EVPN therefore goes to considerable
lengths to reduce BUM traffic, on the one hand by having BGP advertise host MACs (so that unknown
unicast flooding is minimized), and on the other by implementing ARP-suppression mechanisms to reduce
the number of ARP broadcasts. Yet in scaled environments the BGP control plane can end up holding
more MAC/ARP entries than can be programmed into the hardware forwarding tables, so these
mechanisms are often combined with conversational learning techniques that limit hardware entries to
just active conversations or traffic flows.
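As an illustration of the ARP-suppression idea (the table and function below are hypothetical sketches, not any vendor's API), a leaf VTEP can answer ARP requests locally from IP/MAC bindings learned via BGP EVPN MAC/IP advertisements and only flood when the binding is unknown:

```python
# Hypothetical ARP-suppression table, populated from BGP EVPN
# Type-2 (MAC/IP) advertisements rather than from data-plane flooding.
arp_table = {"192.0.2.50": "00:11:22:33:44:55"}

def handle_arp_request(target_ip, flood_list):
    """Answer locally from the EVPN-learned table when possible;
    only flood to remote VTEPs when the binding is unknown."""
    mac = arp_table.get(target_ip)
    if mac is not None:
        return f"proxy ARP reply: {target_ip} is-at {mac}"   # no flooding
    return f"flood ARP request to {len(flood_list)} remote VTEPs"

print(handle_arp_request("192.0.2.50", flood_list=["10.1.0.1"] * 100))
print(handle_arp_request("192.0.2.99", flood_list=["10.1.0.1"] * 100))
```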
The EVPN model also needs extra complexity to handle BUM replication toward multi-homed hosts,
implementing a Designated Forwarder election and Local Bias rules; without these, BUM traffic would be
duplicated toward the host and reflected back to it.
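The Designated Forwarder concept can be sketched with the default modulo-based service-carving election described in RFC 7432: the VTEPs attached to a multi-homed Ethernet Segment are ordered by IP address, and VLAN V is assigned to ordinal V mod N, so exactly one VTEP forwards BUM traffic toward the host (IPs below are illustrative):

```python
import ipaddress

def designated_forwarder(candidate_vteps, vlan_id):
    """Default DF election from RFC 7432 (service carving): order the
    VTEPs attached to the Ethernet Segment by IP address, then pick
    ordinal (vlan_id mod N). Only the DF forwards BUM traffic toward
    the multi-homed host; the others drop it, preventing duplicates."""
    ordered = sorted(candidate_vteps, key=lambda ip: int(ipaddress.ip_address(ip)))
    return ordered[vlan_id % len(ordered)]

# Two leaf VTEPs multi-home the same host (same Ethernet Segment).
print(designated_forwarder(["10.1.0.2", "10.1.0.1"], vlan_id=100))  # 10.1.0.1
print(designated_forwarder(["10.1.0.2", "10.1.0.1"], vlan_id=101))  # 10.1.0.2
```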
Still, even if born out of necessity, EVPN’s ability to control and reduce Ethernet broadcasts brings new
capabilities that can be relevant in some environments. Control plane IP/MAC learning can provide a
consistent forwarding database in a network of any size, instead of relying on flooding and learning.
Control plane learning also offers greater control over MAC learning through its ability to apply policies,
that is, “who learns what.” This provides benefits in Data Center Interconnect (DCI) over WAN
connections, where the level of L2 broadcasts can be greatly reduced.
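A purely hypothetical sketch of such a “who learns what” policy, for example on a DCI border node that only installs control-plane MAC advertisements for the services it is allowed to extend (route structure and VNI values are invented for illustration):

```python
def import_mac_route(route, allowed_vnis):
    """Illustrative 'who learns what' policy: a DCI border node only
    installs control-plane MAC advertisements for services it is
    allowed to extend, instead of learning every flooded MAC."""
    return route["vni"] in allowed_vnis

routes = [{"mac": "00:11:22:33:44:55", "vni": 10100},
          {"mac": "00:aa:bb:cc:dd:ee", "vni": 10200}]
print([r for r in routes if import_mac_route(r, allowed_vnis={10100})])
```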
Note
Extreme Fabric Connect does not currently offer any mechanism to suppress or
eliminate Ethernet broadcasts beyond the traditional multicast and broadcast rate limiters.
Within SPB L2 VSN services, MAC learning occurs in the data plane via the flooding of
BUM traffic, because this is the most effective and scalable approach for SPB. Unlike with
EVPN, this poses no operational issues, and BUM traffic is not magnified when
transmitted across the Fabric.
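As a generic illustration (not SPB-specific code) of the data-plane flood-and-learn behavior the note above describes, a bridge learns each source MAC against its ingress port and floods only frames whose destination is still unknown:

```python
mac_table = {}  # MAC -> port, built purely from the data plane

def forward(frame, ingress_port, all_ports):
    """Classic data-plane flood-and-learn: learn the source MAC
    against the ingress port, then unicast if the destination is
    known, otherwise flood to all other ports."""
    mac_table[frame["src_mac"]] = ingress_port          # learn
    egress = mac_table.get(frame["dst_mac"])
    if egress is not None:
        return [egress]                                 # known unicast
    return [p for p in all_ports if p != ingress_port]  # flood BUM

frame = {"src_mac": "00:aa:aa:aa:aa:01", "dst_mac": "00:bb:bb:bb:bb:02"}
print(forward(frame, ingress_port=1, all_ports=[1, 2, 3]))  # flood: [2, 3]
```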