Network Virtualization using Extreme Fabric Connect

Note
Fabric Extend with IP fragmentation is supported on the Fabric Connect VPN XA1400
platforms as well as on the Extreme Networks VSP4450/4850 platforms (the latter
require use of the Open Network Adapter, or ONA).
Note
Fabric Extend with IPsec encryption is supported with the Fabric Connect VPN XA1400
platforms.
Data Center Architecture
Extreme Networks Fabric Connect provides an ideal multitenant architecture for the data center: both
L2 and L3 VSN service types can be tightly integrated, IP Multicast applications are fully supported,
and the fabric remains topology agnostic, adapting to small- and large-scale data centers alike.
For smaller data centers, the ToR switches can be intermeshed using high-speed 40GbE interconnects,
which reduces the overall cost of the solution: the data center distribution layer no longer needs to
aggregate as many ToR uplinks, because east-west traffic flows are handled over the high-speed ToR
interconnects.
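The trade-off between the two topologies can be quantified by link count: a full mesh of n ToR switches needs n(n-1)/2 interconnects, while a spine-leaf design needs only leaves x spines uplinks. The sketch below is illustrative arithmetic only; the switch counts are hypothetical examples, not sizing guidance from this guide.

```python
def full_mesh_links(n: int) -> int:
    """Direct interconnects needed to fully mesh n ToR switches (n choose 2)."""
    return n * (n - 1) // 2

def spine_leaf_links(leaves: int, spines: int) -> int:
    """Uplinks in a spine-leaf design where every leaf connects to every spine."""
    return leaves * spines

# A small pod of 6 ToR switches meshes economically...
print(full_mesh_links(6))        # 15 interconnects
# ...but mesh cabling grows quadratically with scale,
print(full_mesh_links(24))       # 276 interconnects
# while spine-leaf grows linearly in the number of leaves.
print(spine_leaf_links(24, 4))   # 96 uplinks
```

This quadratic growth is why meshed ToR designs stay cost-effective only at small scale.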
Figure 10 Smaller (Meshed) vs Larger (Spine-Leaf) Topologies
For larger-scale data centers, where it becomes impractical to fully mesh or interconnect the ToR switches,
or where the east-west bandwidth requirements exceed the speed of a single high-speed ToR uplink,
the topology of choice is spine-leaf, which ensures a consistent lowest-equal-cost-path and multi-path
capability between any pair of ToR switches. To achieve this multi-path capability in a spine-leaf topology,
Fabric Connect must be deployed with as many BVLANs as there are spines.
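The multi-path mechanism works because each BVLAN carries its own shortest-path tree, and services are spread across the BVLANs. A simplified sketch of that load-spreading idea (the I-SID values and B-VID numbers below are hypothetical placeholders, and real SPB implementations may use a different assignment policy):

```python
def bvlan_for_isid(isid: int, bvlans: list) -> int:
    """Simplified illustration: pin each service instance (I-SID) to one
    B-VLAN, so different services ride different equal-cost paths."""
    return bvlans[isid % len(bvlans)]

# Two BVLANs, as in Extreme's current SPB implementation.
bvlans = [4051, 4052]
for isid in (10100, 10101, 10102, 10103):
    print(f"I-SID {isid} -> B-VID {bvlan_for_isid(isid, bvlans)}")
```

With consecutive I-SIDs alternating between the two B-VIDs, traffic for different services is balanced across the equal-cost spine paths.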
Caution
Extreme’s SPB implementation currently supports two BVLANs. This could be increased
to 16 BVLANs in the future.
In larger data centers, and wherever multi-tenancy is required, east-west flows are often IP-routed L3
flows between VMs located in different IP subnets of the same "tenant" VRF domain. Yet VMs are highly
mobile and can be moved to any hypervisor within or across data centers. Redirecting those L3 east-west
flows to a centralized default gateway node in the Spine/Distribution layer does not necessarily yield the
shortest path and lowest latency (both VMs could even be connected to the same ToR switch, on the same
or different hypervisors).
The only way to ensure that such east-west L3 flows are also switched along the shortest path, like L2
flows, is for the ToR switches to implement a distributed anycast gateway function whereby the ToR switch