Network Virtualization Using Extreme Fabric Connect Concept and Design Abstract: This document serves as a design reference for implementing a virtualized fabric network architecture, named Fabric Connect, based on Shortest Path Bridging (SPB). It is intended for network architects and engineering staff responsible for the technical design and implementation of the network infrastructure. This document draws parallels with MPLS- and EVPN/VXLAN-based architectures.
Network Virtualization Using Extreme Fabric Connect Table of Contents: Conventions; Introduction; Reference Architecture; Fabric Attach / Auto-Attach (FA Server, FA Client); Inter VSN IP Multicast; Extending the Fabric Across the WAN (Fabric Extend); Concealment of the Core Infrastructure (Extreme's Fabric Connect Stealth Networking); Resistance to Attacks.
Network Virtualization Using Extreme Fabric Connect Conventions This section describes the text and command conventions used in this document. Tip: Highlights a solution benefit. Note: Highlights important information. Caution: Highlights important factors that need to be accounted for in solution designs.
Network Virtualization Using Extreme Fabric Connect Introduction Over the years, campus and data center networks have had to dramatically evolve to keep up with new trends in the industry. Multi-tenancy requirements, once a prerogative of carrier networks, are becoming more common in large enterprises, particularly in outsourcing environments, while privately owned data centers increasingly need to handle network virtualization.
Network Virtualization Using Extreme Fabric Connect It is for these reasons that Extreme has developed Fabric Connect (FC), which is fundamentally a new networking foundation based on Ethernet. FC utilizes a single control plane that combines essential multicast capabilities and state-of-the-art link state routing with virtualization capabilities to match and exceed those of MPLS- or EVPN-based solutions while significantly reducing complexity.
Network Virtualization Using Extreme Fabric Connect Reference Architecture The goal of network virtualization is to decouple the physical infrastructure from the network services used to interconnect distinct user communities and their applications. Users and devices connected to the network will only see the virtual network to which they belong and are allowed to communicate only with other devices in the same virtual network.
Network Virtualization Using Extreme Fabric Connect The SPB (IEEE802.1aq) standard leverages this addressing hierarchy, replacing the Spanning Tree protocols, and bringing to Ethernet-based networks the well-known and much appreciated IS-IS Link State Routing protocol. IS-IS computes the shortest paths between all BMACs within the Ethernet fabric.
Network Virtualization Using Extreme Fabric Connect Figure 1 SPBM’s Mac-in-Mac Encapsulation Note As a comparison, MPLS always pushes two or more labels onto an Ethernet (or other L2 technology) packet. But not all MPLS labels are the same. The outermost label is a packet forwarding label that is used for label switching the packet across the MPLS backbone. (In an SPB-based Fabric, this role is undertaken by the BVLAN ID + BMAC destination address.)
Network Virtualization Using Extreme Fabric Connect The third standard is IEEE802.1ag, which is the new foundation for Operations, Administration & Management (OAM) over Ethernet-based networks for Connectivity and Fault Management (CFM). Defined by carriers for use on carrier-grade networks (including MPLS-based ones), this standard brings to Ethernet and SPB a far more sophisticated troubleshooting toolkit than Enterprise customers are used to with IP.
Network Virtualization Using Extreme Fabric Connect configuration or identity-based networking authentication. These VLANs are associated with a VRF on the BEB nodes where a given VLAN can belong to one and only one VRF. Multiple VRFs can be configured, each belonging to a different Fabric wide service identifier (I-SID), thus providing multi-tenancy in L3 VSN.
Network Virtualization Using Extreme Fabric Connect Comparison of Multi-VRF (VRF-Lite), MPLS (RFC 4364) and Fabric Connect. Applicability: Multi-VRF (VRF-Lite) – small networks; MPLS – service provider and carrier networks, some large enterprises; Fabric Connect – small to large enterprise networks, campus, core, IoT, small service providers. VRF Scalability: Multi-VRF (VRF-Lite) – few VRFs (2-4) because of the need to support an instance of the IGP within each VRF; MPLS – as many as the PE device supports (typically 512-4000 on carrier platforms); Fabric Connect – as many as the BEB device supports (typically 24-512 on enterprise platforms)
Network Virtualization Using Extreme Fabric Connect Layer 2 Virtualization Overview SPB natively offers very flexible L2 connectivity over the Ethernet Fabric to achieve Transparent LAN Services (TLS). Transparent LAN Services provide the ability to connect Ethernet segments that are geographically separate. These networks appear as a single Ethernet or L2 VLAN domain. This basically allows any edge L2 VLAN to be extended to any other node in the SPB fabric.
Network Virtualization Using Extreme Fabric Connect Tip Benefits of SPB L2 VSNs over EoMPLS and MPLS-VPLS: • Simple service definition via Service id (I-SID) configured on end-point VLAN and/or UNI port combination (literally one CLI command), instead of specifying a Virtual Circuit (VC) on EoMPLS or a Virtual Switch Instance (VSI) on VPLS. • SPB L2 VSNs natively offer E-LAN (any-to-any), E-LINE (point-point) and E-TREE (private-VLAN) service types.
Network Virtualization Using Extreme Fabric Connect discovery (VSI-ID is 32 bits); no theoretical limit if VPLS is used with T-LDP; in practice the number of Pseudowires a PE can handle will be the limit. Service Type Flexibility: E-LAN only and impossible to re-map VLAN-IDs | E-LINE, E-LAN, E-TREE combined with a rich selection of UNI interface types | E-LINE, E-LAN, E-TREE combined with a rich selection of UNI interface types. Control Plane: Spanning Tree (MSTP) | IGP (OSPF or IS-IS) and LDP on P and PE nodes.
Network Virtualization Using Extreme Fabric Connect architecture that ensures shortest path and therefore lowest latency across any data center network topology. When the bandwidth demands from the single ToR leaf node are such that traffic needs to be distributed across multiple 40GbE or 100GbE core links, as is the case in large scale data centers, the topology of choice is spine-leaf.
Network Virtualization Using Extreme Fabric Connect Note Extreme’s SMLT clustering supports dual homing of server/hypervisors when the NIC teaming configuration requires link aggregation on the network side. Multiple homing ( > 2) is only possible when the NIC teaming configuration does not require link aggregation on the network side.
Network Virtualization Using Extreme Fabric Connect (RFC6325), as this limited the technology to: a) only extending L2 VLANs and b) being limited to 4095 VLANs at that. A later TRILL standard (RFC7172) introduced Fine-Grained Labelling, which encodes an Inner label as a high part and a low part of 12 bits each, but at the price of defining yet another packet encapsulation incompatible with the previous one.
Network Virtualization Using Extreme Fabric Connect Tip Extreme Networks does offer data center EVPN-based solutions and these can be fully automated via Extreme Workflow Composer (EWC) as well as with integrated Embedded Fabric Automation (EFA) on SLX platforms. Use of IP ECMP in the underlay underpins the ability for the VXLAN overlay to provide equal cost multipath between the leaf VTEPs.
Network Virtualization Using Extreme Fabric Connect In summary EVPN is an architecture that is almost exclusively aimed at the data center and should only be deployed in spine-leaf architectures. EVPN’s greatest benefits are in the very large data centers where the use of BGP allows not only to scale beyond what an SPB based data center could scale to, but also interconnect private cloud with public cloud.
Network Virtualization Using Extreme Fabric Connect Comparison of TRILL/FabricPath/VCS, EVPN and Fabric Connect. Multi-homing of host (NIC teaming) into >2 ToRs: TRILL/FabricPath/VCS – No, if MC-LAG/SMLT is required on the ToR; Yes, with Extreme VCS Fabric; Yes, if the hypervisor hashing mode negates use of MC-LAG/SMLT on the ToR. EVPN – Yes (but not all EVPN vendors offer this capability; Extreme SLX/VDX platforms do not offer this capability). Fabric Connect – No, if MC-LAG/SMLT is required on the ToR; Yes, if the hypervisor hashing mode negates use of MC-LAG/SMLT on the ToR. Flooding of Broad
Network Virtualization Using Extreme Fabric Connect Figure 6 Overview of Fabric Layers and Overlays
Network Virtualization Using Extreme Fabric Connect Guiding Principles The goal of network virtualization is to decouple the network users and their applications from the underlying infrastructure so that those users and applications can be segmented from other users and applications while still sharing the same physical network infrastructure resources (bandwidth and connectivity). One network must therefore support many different virtual networks.
Network Virtualization Using Extreme Fabric Connect based Fabric Attach signalling. Fabric Attach (FA) is the foundation for the elastic nature of the Fabric Connect architecture as it allows SPB’s service I-SID to be seamlessly extended to users, applications and IoT devices without any manual intervention. Fabric Attach can be deployed in a number of different ways, including an edge-only provisioning model with automated core service connectivity.
Network Virtualization Using Extreme Fabric Connect Tip If we compare this to a traditional IP/MPLS design, we note that we no longer need any dedicated redundant IP routers to act as BGP Route Reflectors and we also no longer need any dedicated WAN IP routers. The wiring closet access switches remain L2 switches where the user-VLANs are defined.
Network Virtualization Using Extreme Fabric Connect Fabric Connect and Fabric Attach The Fabric Connect name is often used to designate the entire Extreme SPB-based fabric solution, including the other variants, Fabric Attach & Fabric Extend, touched on below. However, in this section, Fabric Connect (FC) is the Extreme Networks name given to the core SPB-based fabric technology that is based on IS-IS and Mac-in-Mac (SPBM).
Network Virtualization Using Extreme Fabric Connect Tip Fabric Attach includes secure message authentication, which results in a secure deployment where IoT device MACs cannot be spoofed by attackers trying to penetrate the network. Clearly for certain devices that are not network switches (server hypervisors, ExtremeWireless Access Points, end stations), the benefits of using Fabric Attach are obvious.
Network Virtualization Using Extreme Fabric Connect • Requirement to terminate a large number of I-SIDs on a per-access switch basis. Since Fabric Attach leverages LLDP for service signalling, the Fabric Attach TLV cannot be larger than a given size. Caution LLDP’s TLV maximum size equates to the ability to request a maximum of 94 I-SIDs with Fabric Attach and not more. Should there be a requirement for an access switch to terminate more than 94 I-SIDs then the Fabric Connect mode could be used.
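A rough back-of-the-envelope check of the 94-binding ceiling is sketched below. The byte figures are assumptions chosen only so that the arithmetic reproduces the documented limit; the exact Fabric Attach TLV encoding is not reproduced here.

```python
# Illustrative only: assumed sizes, not the exact Fabric Attach TLV layout.
LLDP_TLV_MAX_INFO = 511      # bytes assumed available in one LLDP TLV information string
FA_TLV_FIXED_OVERHEAD = 37   # bytes assumed for OUI/subtype/state fields (illustrative)
BYTES_PER_BINDING = 5        # assumed size of one VLAN:I-SID assignment element

max_bindings = (LLDP_TLV_MAX_INFO - FA_TLV_FIXED_OVERHEAD) // BYTES_PER_BINDING
print(max_bindings)  # -> 94, matching the documented Fabric Attach limit
```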
Network Virtualization Using Extreme Fabric Connect Because Fabric Extend is in effect extending the SPB Ethernet Fabric foundation, the WAN provider is no longer aware of the customer virtualized VSN services and, in the case of IPVPN WAN service types, is no longer required to participate in IP route advertising with the end customer. Tip With Fabric Extend only one WAN service needs to be purchased.
Network Virtualization Using Extreme Fabric Connect Note Fabric Extend with IP fragmentation is supported with the Fabric Connect VPN XA1400 platforms as well as with the Extreme Networks VSP4450/4850 platforms (latter require use of the Open Network Adapter - ONA). Note Fabric Extend with IPsec encryption is supported with the Fabric Connect VPN XA1400 platforms.
Network Virtualization Using Extreme Fabric Connect becomes not only the default gateway for any traffic flow that is not L2 but also has knowledge about every other host IP within the data center. In the Extreme Fabric Connect architecture these enhanced capabilities for the data center come under the name of Distributed Virtual Routing (DVR). Caution Currently DVR is only supported with IPv4 but will be extended to IPv6 in the future.
Network Virtualization Using Extreme Fabric Connect Figure 11 DVR Architecture Caution If deploying FA Proxy switches in DVR architectures, these can only be connected either to the DVR controllers or to DVR leaf nodes. In both cases Extreme’s vIST SMLT clustering can be leveraged.
Network Virtualization Using Extreme Fabric Connect Configuration of the DVR Gateway IP for the segment has similarities with VRRP in that each DVR controller has a unique physical IP interface on the segment and all DVR controllers share the same virtual DVR Gateway IP, which ultimately is configured as default gateway on the servers and VMs residing in the segment.
Network Virtualization Using Extreme Fabric Connect The Extreme Networks ToR switch platforms are able to support any of the above-mentioned hashing schemes. It is however recommended to provision the ToR switches in vIST SMLT clustering from the start, so that both link aggregation (SMLT with or without LACP) and non-link aggregation server connectivity can be supported. Tip Extreme Networks supports all server hypervisor NIC Teaming schemes.
Network Virtualization Using Extreme Fabric Connect Architecture Components This section covers the various components making up an SPB Ethernet fabric as well as the terminology. Figure 13 SPB Fabric Architecture Components User to Network Interface These are the interfaces at the edge of the SPB fabric where regular Ethernet traffic is sent and received to and from end stations. There is no MAC-in-MAC encapsulation, nor any IS-IS, used on User to Network Interfaces (UNIs).
Network Virtualization Using Extreme Fabric Connect Figure 14 IS-IS NNI Parallel Links In the Extreme Networks implementation, IS-IS interface authentication is also possible using either simple password authentication or HMAC-MD5 or HMAC-SHA256 authentication. This increases security and ensures that only trusted SPB nodes are allowed to form an IS-IS adjacency in the SPB fabric. Backbone Core Bridge A Backbone Core Bridge (BCB) is an SPB node with only NNIs.
Network Virtualization Using Extreme Fabric Connect Customer MAC Address A Customer MAC address (CMAC) is a MAC address that belongs to an end station, server/VM or in general any device that is not running SPB. CMACs only exist within Fabric Connect VSN services and are never seen or used within the SPB Backbone VLANs.
Network Virtualization Using Extreme Fabric Connect then ensure that equal cost shortest path load balancing can be leveraged across the SPB Fabric to reach the SMLT cluster BEB nodes. Tip The SMLT-Virtual-BMAC can be either auto generated or manually provisioned. If auto provisioned, the system will simply use the SMLT’s Primary nodal BMAC with the least significant byte set to 0x80.
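As a minimal illustration of the auto-generation rule described in the Tip above, the sketch below derives a virtual BMAC from an assumed primary nodal BMAC by forcing the least significant byte to 0x80; the example address is hypothetical.

```python
def smlt_virtual_bmac(primary_bmac: str) -> str:
    """Derive the SMLT Virtual BMAC from the primary nodal BMAC by
    setting the least significant byte to 0x80 (auto-generation rule)."""
    octets = primary_bmac.split(":")
    octets[-1] = "80"
    return ":".join(octets)

# Hypothetical primary nodal BMAC, used purely for illustration.
print(smlt_virtual_bmac("00:51:00:ca:fe:01"))  # -> 00:51:00:ca:fe:80
```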
Network Virtualization Using Extreme Fabric Connect IS-IS Overload Function The Overload bit is a special bit in the IS-IS LSP that is used to inform the network that the advertising router is not yet ready to forward transit traffic. The IS-IS Overload bit can be automatically set on SPB nodes during their boot up phase, to ensure that they are not considered by other fabric nodes when computing Shortest Path First (SPF) path calculations until after they have completed their initialization.
Network Virtualization Using Extreme Fabric Connect Figure 15 How the SPBM Nick-name is Used to Construct a Multicast BMAC Another use of the SPBM nick-name is, when configuring IS-IS IP Accept policies, it is possible to single out IP routes advertised by specific BEB nodes by specifying the BEB’s nick-name. This provides a similar approach to OSPF Accept policies where OSPF Router IDs are specified.
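The sketch below illustrates the concatenation shown in Figure 15, assuming the multicast BMAC is built from a leading 4-bit field, the 20-bit SPBM nickname and the 24-bit I-SID. The leading nibble value and the example nickname/I-SID are assumptions for illustration; the normative IEEE 802.1aq flag-bit placement is not reproduced here.

```python
def spbm_multicast_bmac(nickname_20bit: int, isid_24bit: int, leading_nibble: int = 0x3) -> str:
    """Pack an assumed leading nibble, the 20-bit nickname and the 24-bit
    I-SID into a 48-bit multicast BMAC (illustrative layout only)."""
    value = (leading_nibble << 44) | (nickname_20bit << 24) | isid_24bit
    return ":".join(f"{b:02x}" for b in value.to_bytes(6, "big"))

# Hypothetical nickname 0x01040 and I-SID 20200, for illustration.
print(spbm_multicast_bmac(0x01040, 20200))  # -> 30:10:40:00:4e:e8
```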
Network Virtualization Using Extreme Fabric Connect Database. Also, a BVLAN takes no notice of the Spanning Tree state of member ports (e.g., if Spanning Tree was for some reason enabled on NNIs). The unicast MAC table of a BVLAN will contain every BMAC in the Ethernet fabric and these entries will point to the local NNI port that corresponds to the IS-IS computed shortest path to reach that BMAC across the fabric and from which all traffic from that BMAC is expected to arrive.
Network Virtualization Using Extreme Fabric Connect Furthermore, the resources to store those forwarding records are only consumed on the BEB nodes where the VSN services terminate.
Network Virtualization Using Extreme Fabric Connect Figure 16 SPB Fabric Inter-VSN Routing Conceptually this is no different from a switch that IP routes between VLANs where an IP address was configured on each VLAN (in the same VRF). However, with L2 VSN this implies that the SPB node must be capable of handling (de-capsulating and then re-encapsulating) the Mac-in-Mac encapsulation before and after performing the IP routing function.
Network Virtualization Using Extreme Fabric Connect Fabric Attach / Auto-Attach Fabric Attach (FA) extends the Fabric Connect VSN service I-SID all the way to the end user or IoT device in the campus access as well as to the application in the data center onto devices for which it would make no sense to implement Fabric Connect and IS-IS (like wireless APs, server hypervisors, etc.).
Network Virtualization Using Extreme Fabric Connect Fabric Attach on page 77. FA Server FA Servers are Extreme Networks switches that connect directly to the SPB Fabric Connect core network and thus act as BEB nodes. FA Client devices can be connected directly to the FA Server or via an FA Proxy switch. In both cases, the FA Server can accept FA Client or FA Proxy service binding requests to attach the user/IoT/application to a VLAN:I-SID pair.
Network Virtualization Using Extreme Fabric Connect 16 ONA-SDN: Open Network Adapter in SDN/Surge solution; 17 ONA-spb-over-ip: Open Network Adapter used on VSP4000 for Fabric Extend. FA Proxy An FA Proxy is an Ethernet switch which allows Fabric Connect L2 I-SID services to be extended to locally attached clients.
Network Virtualization Using Extreme Fabric Connect of IPVPNs or L3 VSNs, a VRF is thus a repository of all the IP routes that belong to a given virtualized routing domain. A VRF consists of an IP routing table, a forwarding table, and interfaces assigned to it. Common routing protocols such as OSPF, RIP, BGP, and IS-IS can be used to advertise and learn routes within the VRFs.
Network Virtualization Using Extreme Fabric Connect Distributed Virtual Routing Distributed Virtual Routing (DVR) is an extension to Fabric Connect that allows the creation, on DVR controller nodes, of anycast default gateway IPs for fabric L2 VSN segments and their distribution across the fabric access layer on DVR leaf nodes.
Network Virtualization Using Extreme Fabric Connect reachability of DVR host IP routes across different DVR domains as well as to campus BEBs that may have been activated to join the same DVR Backbone. DVR Leaf The DVR leaf is typically deployed as a data center ToR Fabric Connect switch. It is a switch with a dual personality. From a provisioning and management perspective, the DVR leaf is an L2 BEB where the only configuration possible is to associate server access ports to L2 VSN segments.
Network Virtualization Using Extreme Fabric Connect DVR Backbone The DVR Backbone is a dedicated point-multipoint IS-IS control plane I-SID based signalling tree that allows all DVR controllers belonging to different DVR domains to communicate DVR host IP routes. This allows DVR controllers, for their L2 VSN interfaces configured for DVR, to have full visibility of all host IP routes for the L2 VSN segment including host routes which are located in different DVR domains.
Network Virtualization Using Extreme Fabric Connect Foundations for the Service Enabled Fabric SPB Service primitives SPB uses IS-IS as its link state routing protocol. IS-IS was originally developed as the routing protocol for the ISO Connectionless Network Protocol (CLNP) and was later extended to operate with IP around the same time OSPF was first defined.
Network Virtualization Using Extreme Fabric Connect play when leveraging 802.1ag (CFM), as it ensures that when injecting Ethernet OAM packets along the path, on a given BVLAN, the responses will come back on the same path. Figure 18 Unicast Shortest Path Between Two Nodes; Forward & Reverse Congruent The second primitive is what makes SPB so much better for multicast than any other networking protocol to date.
Network Virtualization Using Extreme Fabric Connect Tip With today’s CPUs, the time it takes to perform these SPF runs is less significant than the time it takes to program the switch hardware. A further important property of SPB is that the path used by a multicast tree from a root node to a leaf node will always match (be congruent with) the unicast path between those same nodes in the same BVLAN.
Network Virtualization Using Extreme Fabric Connect used, which will only forward the multicast stream to BEBs that have IGMP registered receivers (as opposed to using the L2 VSN I-SID tree, which would multicast the traffic to all BEBs in the L2 VSN). The second primitive is also leveraged for Fabric Connect control plane enhancements, such as DVR and PIM Gateway (which will be covered later in this document), where reserved I-SID ranges are used to exchange IS-IS multicast based signalling.
Network Virtualization Using Extreme Fabric Connect Figure 20 SPB path calculation and suggested link metrics Caution Extreme Networks VSP platforms currently do not automatically set the SPB metric on NNI ports based on link speed. Instead a default metric of 10 is used on all IS-IS interfaces when initially created. When more than one path is found with shortest and equal cost, SPB can provision each available BVLAN into one of these paths.
Network Virtualization Using Extreme Fabric Connect paths have the same path cost. Hence any paths with a higher hop count will not be considered if a path exists with a lower hop count and same metric. In the event that IS-IS still sees a number of equal cost shortest paths (with the same metric and hop count) there has to be a deterministic way for all nodes in the SPB fabric to unequivocally agree which path to program in BVLAN#1 and which to program in BVLAN#2.
Network Virtualization Using Extreme Fabric Connect core node in the topology, then the network administrator can influence the ECT Algorithms only by intervening on the NNI link metrics or on the SPB node Bridge IDs. In the example depicted in Figure 21, the less desirable path was discarded by ensuring that one core node had the lowest numerical Bridge ID (01) in the network, while the other core node had the highest (F2).
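The following sketch illustrates, under simplified assumptions, how such a deterministic tie-break between equal-cost, equal-hop-count paths could be expressed: every node ranks the candidate paths by their sequence of Bridge IDs and assigns the lowest-ranked path to BVLAN#1 and the highest-ranked to BVLAN#2. This mirrors the Bridge ID example above (01 vs. F2) but is not the exact IEEE 802.1aq ECT mask algorithm.

```python
def assign_ect_paths(equal_cost_paths):
    """equal_cost_paths: list of paths, each a tuple of Bridge IDs (ints) along
    the path. Returns the path chosen for each BVLAN.
    Simplified illustration of a deterministic, fabric-wide tie-break."""
    ranked = sorted(equal_cost_paths, key=lambda path: sorted(path))
    return {"BVLAN#1": ranked[0], "BVLAN#2": ranked[-1]}

# Two hypothetical equal-cost paths through core nodes with Bridge IDs 0x01 and 0xF2.
paths = [(0x10, 0x01, 0x20), (0x10, 0xF2, 0x20)]
print(assign_ect_paths(paths))
# -> BVLAN#1 via the core node with the lowest Bridge ID (0x01),
#    BVLAN#2 via the core node with the highest Bridge ID (0xF2)
```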
Network Virtualization Using Extreme Fabric Connect IP Routing and L3 Services over Fabric Connect Core IGP / GRT IP Shortcuts With Fabric Connect, IP routing (whether IPv4 or IPv6) is just a service and is in no way acting as a foundation to the architecture. The Global Router Table (GRT) represents the default IP routing instance of a router and will usually be used for user traffic, sometimes even in multi-tenancy deployments leveraging VRFs and L3 VSNs.
Network Virtualization Using Extreme Fabric Connect Figure 23 Different Encapsulation Used by GRT IP Shortcuts Virtualized L3 VPNs / L3 VSNs L3 VSNs represent virtualized IP routing domains as a service over an SPB fabric. Conceptually they offer the same service as MPLS-based IPVPNs but over a considerably simpler architecture. An L3 VSN is terminated in VRF instances located on the BEB nodes typically acting as distribution layer.
Network Virtualization Using Extreme Fabric Connect Figure 24 Relevant SPB L3 VSN Forwarding Tables Tip One of the nice properties of L3 VSNs (and IP Shortcuts also) is that inspection of the VRF (GRT) IP routing table will show IS-IS routes with next-hop an easily recognizable IS-IS ASCII system-name of the egress BEB (instead of just an IP address as is the case with a traditional OSPF IGP, or MPLS-VPN). The example in Figure 23 is showing next-hop BMACs to explain how L3 VSNs actually work.
Network Virtualization Using Extreme Fabric Connect Figure 25 Security Zones with Common Services There are two design approaches that are equally possible and can complement each other. If the requirement is for all inter L3 VSN communications to be secured by a firewall, then each L3 VSN will need to be head-ended by a dedicated firewall instance, which will allow the creation of security policies specific to all traffic leaving or entering the L3 VSN (including NAT, if necessary).
Network Virtualization Using Extreme Fabric Connect Tip Benefits of SPB IS-IS Accept policies over MPLS-VPN Route Target manipulation • In SPB an I-SID-IPv4 (or I-SID-IPv6) route can only belong to one and only one ISID. This is not true with MPLS-VPNs where the same BGP VPN-IPv4 (or VPN-IPv6) route can be tagged with any number of export Route Targets (RT) as Extended Communities. • The MPLS-VPN approach is more sophisticated as one can define multiple import and export RTs.
Network Virtualization Using Extreme Fabric Connect Caution Extreme Networks Fabric L3 VSN BEBs are capable of IP routing traffic received from the fabric back into the fabric in a single IP routed hop.
Network Virtualization Using Extreme Fabric Connect Table 9 – IS-IS Internal and External IP Route Tie Breaking. Tie-breaking criteria when the same IP route is advertised by more than one distant BEB: • Same IP route seen as both internal and external type ➢ the IS-IS Internal route is preferred over the IS-IS External route. • Both IP routes are internal type ➢ the IP route with the lowest internal metric is preferred. • Both IP routes are internal type AND have the same internal metric.
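A minimal sketch of the first two tie-breaking rules from Table 9, expressed as a route-selection function; the route object and field names are hypothetical, and only the rules stated above (internal preferred over external, then lowest internal metric) are implemented.

```python
from dataclasses import dataclass

@dataclass
class IsisIpRoute:
    advertising_beb: str   # hypothetical field names, for illustration only
    is_internal: bool      # True = IS-IS internal route, False = external
    metric: int

def prefer(a: IsisIpRoute, b: IsisIpRoute) -> IsisIpRoute:
    """Apply the documented tie-breaks: internal beats external,
    then the lowest internal metric wins."""
    if a.is_internal != b.is_internal:
        return a if a.is_internal else b
    return a if a.metric <= b.metric else b

r1 = IsisIpRoute("BEB-A", True, 200)
r2 = IsisIpRoute("BEB-B", False, 100)
print(prefer(r1, r2).advertising_beb)  # -> BEB-A (internal preferred over external)
```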
Network Virtualization Using Extreme Fabric Connect Note The same delicate exercise equally applies when redistributing between any two IP routing protocols. The very same challenges had to be dealt with in the days when RIP networks were connected or migrated to OSPF.
Network Virtualization Using Extreme Fabric Connect preference. To put it another way, if there really was an IP route conflict and the same IP route was to be seen in both clouds, would you want the smaller IP cloud to interfere and potentially replace that same IP route of your core network? In the example at hand we will assume that the core network is the Extreme Fabric Connect SPB cloud and that it should therefore use a higher protocol preference than OSPF/RIP or BGP. This is step (1).
Network Virtualization Using Extreme Fabric Connect The protocol preference plays also on the reverse side, namely an IS-IS Internal route, once redistributed into OSPF/RIP/BGP by one border router will be seen by other border routers from both protocols: IS-IS on one side and OSPF/RIP/BGP on the other. However, in this case the globally set (and default in this case) protocol preferences will always ensure that the IS-IS Internal routes cannot be replaced by an OSPF/RIP/BGP one.
Network Virtualization Using Extreme Fabric Connect L2 Services Over SPB IS-IS Core E-LAN / L2 VSNs with CVLAN/Switched UNI An L2 VSN is the interconnection at Layer 2 (bridging) of distant Ethernet segments into one single Ethernet or L2 VLAN domain. A L2 VSN is natively an any-to-any (E-LAN) service type which means that it can have any number of end-points and that MAC learning is performed within the VSN service.
Network Virtualization Using Extreme Fabric Connect Tip Use of the SPB multicast trees means that L2 VSNs are able to correctly handle flooded traffic within the VSN and perform replication along the multicast tree in an efficient manner. This is not true with MPLS VPLS, which uses a full mesh of EoMPLS circuits to deliver an any-to-any service and hence needs to ingress replicate flooded traffic across all EoMPLS circuits.
Network Virtualization Using Extreme Fabric Connect UNIs and Switched UNIs to the same I-SID, in which case the user-VLAN that constitutes the CVLAN UNI will be bridged together with the Switched UNIs. This is also possible on the same local BEB. Note Switched-UNI is in fact what is being used with Fabric Attach by the FA Server BEB and is also the only available UNI type on a DVR leaf node.
Network Virtualization Using Extreme Fabric Connect Caution Transparent UNIs should not be assigned to the same I-SID as CVLAN or Switched UNI. Figure 32 Transparent UNI E-TREE / L2 VSNs with Private-VLAN UNI An E-TREE service type is an enhanced version of an E-LAN L2 VSN that uses a CVLAN UNI where the VLAN used is a Private-VLAN.
Network Virtualization Using Extreme Fabric Connect Figure 33 E-TREE Private-VLAN L2 VSN Tip An E-TREE L2 VSN can mix end-point BEBs using Private-VLAN UNI as well as regular CVLAN UNIs. This means all devices connected to the CVLAN UNI will have Promiscuous-like connectivity within the L2 segment. If an IP interface needs to be configured to IP route the traffic on and off the E-TREE segment, that IP interface should be configured on a CVLAN UNI that belongs to the same service.
Network Virtualization Using Extreme Fabric Connect Fabric Attach Fabric Attach (FA) complements and extends the Fabric Connect architecture to bring the Fabric services directly to the end-users in the wired and wireless access as well as to the business applications located in the data center. Note Fabric Attach is the Extreme naming of IEEE 802.1Qcj Auto Attach standard.
Network Virtualization Using Extreme Fabric Connect Extreme Networks VOSS VSP platforms provide FA Server support, including in redundant MLAG SMLT clustering configurations. ERS 5900/4900/4800 platforms can support FA Server but do not support MLAG / SMLT clustering. • • FA Client: This is an end-device supporting Fabric Attach LLDP extensions and is therefore able to automatically negotiate directly with the fabric access the terms of its virtual service connection.
Network Virtualization Using Extreme Fabric Connect becomes particularly important in implementing onboarding policies for newly connected devices and achieving the goal of an automated edge fabric. Therefore, if an ExtremeWireless AP FA Client is detected, its connecting port can automatically be configured for 802.
Network Virtualization Using Extreme Fabric Connect Figure 37 Fabric Attach LLDP Service Signalling TLV Note A FA Standalone Proxy node will only process FA signalling TLVs where the I-SID value is null There are a number of possible ways in which a Fabric Attach I-SID binding can be assigned: • FA Client expressly asks for one, or more, VLAN:I-SID bindings. The FA Proxy or FA Server will then handle the assignment request.
Network Virtualization Using Extreme Fabric Connect Figure 39 FA Zero-Touch-Client Assigns VLAN:I-SID Binding to Discovered FA client Note Only Extreme Networks ERS and VSP platforms support FA Zero Touch Client. • FA Proxy or FA Server have 802.1X NAC enabled on the access port where the FA Client is detected.
Network Virtualization Using Extreme Fabric Connect In all cases, the VLAN-ID requested via FA signalling remains locally significant to the FA Server access port, where all FA bindings are implemented using Switched-UNI, and only the I-SID is globally significant. If the FA Server is providing IP routing and or IP Multicast support for the L2 I-SID, then a platform CVLAN must be provisioned and bound to the same L2 I-SID on the FA Server.
Network Virtualization Using Extreme Fabric Connect message authentication prevent such a rogue device from getting access to Fabric VSN services, but even those services which were accessible to the displaced IoT device will be retracted as the rogue device will not be accepted as a valid FA Client device.
Network Virtualization Using Extreme Fabric Connect Figure 43 FA Message Authentication Hardening RADIUS MAC-Based Authentication For many deployments, the Extreme Networks FA message authentication pre-configured key can be used as long as it remains secret and cannot be found in the public domain. This has the advantage that onboarding newly deployed FA Client devices can be done without any prior pre-staging of those devices to pre-set on them the customer-defined FA secret key.
Network Virtualization Using Extreme Fabric Connect FA Zero Touch Provisioning The main focus of Fabric Attach is to attach end users and applications or IoT devices to Fabric I-SID services and in the examples covered above we have seen that this is achieved by either the FA Client or the FA Proxy requesting service attachment to some L2 I-SID and VLAN pair towards the FA Server.
Network Virtualization Using Extreme Fabric Connect Consequently, a freshly deployed wiring closet switch can automatically: • • Discover the FA Server. Become an FA Proxy device. Tip Extreme ERS and ExtremeXOS platforms operate in FA Proxy mode by default. • Configure its uplink ports to the FA Server for VLAN tagging. If a management VLAN is advertised by the FA Server, it can automatically: • • • Create the management VLAN Assign the management VLAN on the uplink to the FA Server.
Network Virtualization Using Extreme Fabric Connect In the example depicted in Figure 44, as soon as the ExtremeWireless AP boots up it will discover the FA advertised management VLAN and will therefore perform DHCP on that VLAN (using q-tagged frames). If instead no FA management VLAN was discovered, then the AP will perform DHCP untagged and the switch it is connected to will need to be able to handle all AP management traffic as untagged.
Network Virtualization Using Extreme Fabric Connect This option can be useful for onboarding FA Client devices that are also VLAN aware (and are thus likely to signal for additional VLAN:I-SIDs) but where all communication, including over the FA management VLAN, needs to remain tagged. For a given FA Client type, this option is incompatible with options auto-port-mode-fa-client, autopvid-mode-fa-client, and auto-client-attach.
Network Virtualization Using Extreme Fabric Connect IP Multicast Enabled VSNs Initial Considerations Multicast allows information to be efficiently forwarded from a source device to many receivers who have expressed an interest in joining the multicast group.
Network Virtualization Using Extreme Fabric Connect Tip In the Extreme Networks Fabric Connect SPB implementation, each BEB reserves I-SIDs in the range 16,000,001 – 16,600,000 for IP Multicast streams. Each allocated I-SID in that range defines one unique IP Multicast stream (1 Group address + 1 Source IP address) rooted at that BEB node. I-SIDs in that range are not allowed for use when creating L2 VSNs or L3 VSNs.
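A small sketch of the reservation rule stated in the Tip above: validating that an administratively assigned VSN I-SID does not fall into the range each BEB reserves for dynamically allocated IP Multicast stream I-SIDs. The helper name is hypothetical.

```python
# Range reserved on each BEB for dynamically allocated IP Multicast stream I-SIDs.
MCAST_ISID_LOW, MCAST_ISID_HIGH = 16_000_001, 16_600_000

def valid_vsn_isid(isid: int) -> bool:
    """Return True if the I-SID may be used for an L2/L3 VSN,
    i.e. it does not collide with the reserved multicast range."""
    return not (MCAST_ISID_LOW <= isid <= MCAST_ISID_HIGH)

print(valid_vsn_isid(20200))        # True  - usable for an L2/L3 VSN
print(valid_vsn_isid(16_000_123))   # False - reserved for IP Multicast streams
```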
Network Virtualization Using Extreme Fabric Connect Tip In a design where the source is SMLT connected to two ingress BEBs, both BEBs forming the SMLT cluster will IS-IS announce availability of the IP multicast stream. In the absence of receivers, the SPB Fabric does not set up any multicast tree and the ingress BEB silently discards the multicast packets it receives from the source.
Network Virtualization Using Extreme Fabric Connect Tip In a design where the receiver is SMLT connected to two egress BEBs, the IGMP receiver table is automatically synchronized between the BEBs over the IST (or vIST). If a multicast tree is set up, both egress BEBs will set up a tree, each over one of the available BVLANs.
Network Virtualization Using Extreme Fabric Connect Figure 46 IP Multicast Over L3 VSN that Comprises L2 VSNs A more common requirement is for the L2 VSN segment to be allowed to perform IP multicast routing to any other BEB in the L3 domain (L3 VSN or IP Shortcuts) as well as efficient snooping within the L2 VSN segment. This is possible but requires, on the L2 BEB, configuration of an IP interface on which SPB Multicast is enabled.
Network Virtualization Using Extreme Fabric Connect path for the multicast traffic its involvement in forwarding it would purely be from a BCB transport perspective. In each of the deployment scenarios described, IGMP Access policies can be applied on the BEB VLAN interfaces to specify which streams will be accepted from a source device as well as which streams a receiver is allowed to join.
Network Virtualization Using Extreme Fabric Connect Tip Fabric Connect PIM Gateway is able to interoperate with both PIM-SM and PIM-SSM. We’ll start by looking at the most common approach, dealing with PIM-SM, and then revisit PIM-SSM in a later section below. PIM-SM is a very complex protocol.
Network Virtualization Using Extreme Fabric Connect Figure 47 illustrates at a high level how PIM Gateway interacts with a PIM-SM network. The first striking difference is that on the Fabric Connect side, the PIM Gateway functionality is in reality running off one of possibly many L3 VSNs or IP Shortcut service instances, whereas PIM-SM is almost never virtualized in VRFs (except in Draft Rosen Multicast VPNs).
Network Virtualization Using Extreme Fabric Connect default behavior. To this effect, MSDP offers the ability to associate route policies to limit which multicast sources are advertised. Tip The Extreme Networks PIM Gateway MSDP implementation allows for both MSDP redistribution and SA-filters. The former allows policies to control, at a global level, what sources are announced by the PIM Controller over MSDP.
Network Virtualization Using Extreme Fabric Connect role should the active RP fail. All this essentially means that when implementing PIM Gateway, it might be necessary to provision MSDP peerings with more than one PIM router. In the figure, we are assuming that two RPs exist. Since this is a redundant design, we also have two PIM Controllers and two PIM Gateways.
Network Virtualization Using Extreme Fabric Connect source to one of the PIM Gateways. The hash ensures a distribution of PIM sources across all the available PIM Gateways. The selected PIM Gateway for a given PIM source will then be responsible for advertising and making available the PIM multicast stream into the IS-IS LSDB using the appropriate multicast TLV. So, for PIM sources it is the PIM Controller that determines the allocation of PIM Gateway.
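A simplified sketch of how the PIM Controller's load distribution could be expressed: hashing each PIM (source, group) pair onto the list of available PIM Gateways. The actual hash used by the implementation is not documented here, so a generic hash is used purely for illustration.

```python
import hashlib

def select_pim_gateway(source_ip: str, group_ip: str, gateways: list[str]) -> str:
    """Deterministically map a PIM (source, group) pair onto one of the
    available PIM Gateways (illustrative hash, not the product algorithm)."""
    digest = hashlib.sha256(f"{source_ip}/{group_ip}".encode()).digest()
    return gateways[int.from_bytes(digest[:4], "big") % len(gateways)]

# Hypothetical gateways and multicast stream, for illustration.
print(select_pim_gateway("10.1.1.10", "239.1.1.1", ["PIM-GW-1", "PIM-GW-2"]))
```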
Network Virtualization Using Extreme Fabric Connect access to certain video surveillance multicast streams. As both VSNs ultimately operate on the same Ethernet Fabric infrastructure, it would not make sense to duplicate the surveillance cameras IP Multicast streams across both VSNs. This would be inefficient, as it would potentially result in the same IP Multicast packets being transmitted twice over the Ethernet Fabric core.
Network Virtualization Using Extreme Fabric Connect source VLAN and another L2 segment (VLAN) for the VSN users attached to the FA Proxy access switch performing MVR. Note that this implies that the VSN that owns the IP multicast source is an L3 VSN (or IP Shortcut instance) which can thus have multiple L2 segments on the VRF/GRT that terminates the VSN. Caution The Extreme Networks MVR implementation on ERS platforms does not allow IGMP receivers on the MVR source VLAN.
Network Virtualization Using Extreme Fabric Connect Extending the Fabric Across the WAN This section will explore the possible design choices available for extending the Extreme Networks Fabric Connect architecture across the WAN / Internet. Applications include extending the fabric from the campus to the branch offices as well as interconnecting larger locations, for example geo-redundant data centers for Data Center Interconnect (DCI).
Network Virtualization Using Extreme Fabric Connect Both Fabric Extend and the use of VXLAN Gateway have the advantage of being able to traverse a WAN L3 cloud without having to request multiple circuits from the WAN provider. However, use of the VXLAN Gateway approach does present some limitations with respect to Fabric Extend.
Network Virtualization Using Extreme Fabric Connect Extreme Networks Fabric Connect VPN XA1400 (XA1440 & XA1480) platforms natively support Fabric Extend in IP VXLAN mode but can also support Fabric Extend with an IPSec encapsulation; both are implemented with software switching.
Network Virtualization Using Extreme Fabric Connect Figure 53 Fabric Extend IP Mode (VXLAN / IPsec) MTU Considerations Tip WAN providers will typically use MPLS to deliver their WAN services. MPLS equipment is perfectly capable of handling oversize frames in excess of 1600 bytes. However, the WAN provider may choose whether or not to allow larger MTUs in their service offerings. For Data Center Interconnect (DCI), it is also possible to deploy Fabric Extend with jumbo frame sizes.
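The oversize-frame requirement can be sanity-checked with the overhead arithmetic below; the per-header sizes are the commonly quoted ones (customer Ethernet frame, Mac-in-Mac/802.1ah, VXLAN, UDP, IPv4) and are stated here as assumptions rather than taken from this document.

```python
# Assumed sizes in bytes (typical values, for illustration only).
CUSTOMER_FRAME = 1518          # maximum untagged customer Ethernet frame
MAC_IN_MAC     = 22            # 802.1ah BMAC-DA/SA + B-TAG + I-TAG overhead
VXLAN_UDP_IPV4 = 8 + 8 + 20    # VXLAN + UDP + IPv4 headers for Fabric Extend IP mode

wan_frame = CUSTOMER_FRAME + MAC_IN_MAC + VXLAN_UDP_IPV4
print(wan_frame)  # -> 1576; hence the WAN must carry frames well beyond the
                  #    standard 1518-byte Ethernet maximum (circa 1600 bytes)
```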
Network Virtualization Using Extreme Fabric Connect Note Fabric Connect VPN is a licensed software application hosted on XA1400 hardware and based on VSP Operating System Software (VOSS). Two licensing tiers are available offering either WAN Bandwidth of 100Mbps (applicable to both XA1440 and XA1480) or 500Mbps (applicable to XA1480 only).
Network Virtualization Using Extreme Fabric Connect Tip Extreme Networks VSP platforms can support up to 255 IS-IS adjacencies over Fabric Extend IS-IS logical interfaces. The following sections will cover each of the possible ways in which Fabric Extend can be deployed. It should be noted that while certain technical trade-offs exist between the various modes, each mode is designed to adapt over a particular WAN service type.
Network Virtualization Using Extreme Fabric Connect routes) such that the customer IP subnets can be re-advertised by the WAN IPVPN between the distant sites. With Fabric Extend the IP address provided by the WAN provider will become the Fabric Extend source IP and will be used to originate and terminate all IP (VXLAN) tunnels. Thus the only IP traffic which will be seen by the WAN provider will be VXLAN encapsulated traffic between the IP addresses it supplied.
Network Virtualization Using Extreme Fabric Connect Tip The Extreme Networks Fabric Connect VPN software (e.g.
Network Virtualization Using Extreme Fabric Connect and use this VRF to allocate the Fabric Extend IP interfaces used to send and receive traffic from the Internet. This VRF will never be L3 VSN enabled. Note Use of the GRT (VRF-0) or an L3 VSN enabled VRF for terminating Fabric Extend tunnels should be avoided when deploying Fabric Extend over the Internet.
Network Virtualization Using Extreme Fabric Connect Figure 58 Fabric Extend over WAN L2 Any-to-Any E-LAN Service The WAN service remains an underlay to Fabric Connect, so it is again necessary that the IP interfaces used by Fabric Extend to build the IP tunnels should be isolated from any other IP interface used to carry VSN services above the fabric.
Network Virtualization Using Extreme Fabric Connect Figure 60 Fabric Extend over WAN L2 Point-to-Point E-LINE Services This Fabric Connect mode is different from the other modes covered above in that there is no need to use an IP (VXLAN) encapsulation and tunnels, because the nature of the WAN service is already L2 and point-to-point, which means that IS-IS and SPB’s Mac-in-Mac encapsulation can run natively over these WAN circuits. We shall refer to this as the Fabric Extend L2 mode.
Network Virtualization Using Extreme Fabric Connect Caution Fabric Extend in L2 mode does not support IEEE 802.1ad Q-in-Q; only IEEE 802.1Q-tags are supported. This has some implications in that we know that an SPB Ethernet Fabric will emit traffic with 802.1Q-tags for any and all of the BVLANs it was provisioned with.
Network Virtualization Using Extreme Fabric Connect The VXLAN Gateway BEB thus also becomes a VXLAN termination point (VTEP) where a Circuitless IP address is assigned to be the VTEP source IP and a number of static VXLAN tunnels can be provisioned towards remote VTEPs across the VXLAN cloud. The VXLAN virtual segments (VXLAN Network Identifiers - VNI) are then simply mapped to a list of remote VTEP tunnels on one side and to a Fabric L2 I-SID on the other.
Network Virtualization Using Extreme Fabric Connect Figure 62 Extending an L2 VSN Across Fabrics with VXLAN Gateway Figure 62 shows a possible use of VXLAN Gateway to extend a Fabric L2 VSN across two separate Extreme SPB Fabrics where the VTEPs are Extreme VSP platforms at both ends. The L2 VSN L2 I-SIDs are mapped to the VXLAN VNI on the VXLAN Gateways so the I-SID value can be different at both ends.
Network Virtualization Using Extreme Fabric Connect Figure 63 Extending an L3 VSN across Fabrics with VXLAN Gateway Running IP Multicast across the extended L3 domain also presents challenges. It would require IP Multicast enabling the Fabric L3 VSN at each end and running PIM + PIM Gateway on top of the VXLAN Gateways. Tip Prefer a Fabric Extend approach if there is a requirement for IP Multicast.
Network Virtualization Using Extreme Fabric Connect Distributed Virtual Routing This section will go into greater depth about Distributed Virtual Routing (DVR) in the context of the Extreme Fabric Connect architecture. DVR is an enhancement of Fabric Connect that allows both the use of a distributed anycast gateway and the ability to compute the shortest path to the individual host IP. This becomes hugely important in environments where the host IPs are mobile.
Network Virtualization Using Extreme Fabric Connect gateway, and these flows might be forced to go across to the other data center unnecessarily resulting in a sub-optimal forwarding path, which is no longer the desired shortest path.
Network Virtualization Using Extreme Fabric Connect become a “pod” building block within that data center and thus the single data center can be built with multiple DVR domains. The ToR switches become DVR leaf nodes while the DVR controllers should strategically be placed in a position where all traffic is expected to transit in and out of the DVR domain.
Network Virtualization Using Extreme Fabric Connect Caution The DVR leaf will most likely be an Extreme Networks 10GbE VSP 7200 or 25GbE VSP 7400 or VSP 8k platform, which provides the maximum scaling for DVR. For GbE access connectivity, the VSP 4k platform can be used as a DVR leaf but with a reduced scaling of not more than 6000 IPv4 hosts within the DVR domain.
Network Virtualization Using Extreme Fabric Connect Note The DVR Backbone I-SID uses reserved value 16678216. The DVR domain I-SID uses reserved range 16678216 + DVR domain ID; hence the DVR domain ID 1 I-SID will take value 16678217, domain ID 2, 16678218 and so on. Also, if DVR leaf nodes are SMLT enabled, I-SID range 16677215 + Cluster ID is used for the vIST connection. Tip There is no need to configure the DVR domain & Backbone I-SIDs.
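The reserved-value arithmetic in the Note above can be captured in a couple of one-line helpers; these simply restate the offsets given (16678216 for the DVR Backbone and DVR domains, 16677215 for vIST) and the function names are hypothetical.

```python
DVR_BACKBONE_ISID = 16678216   # reserved DVR Backbone I-SID
VIST_ISID_BASE    = 16677215   # base value for SMLT vIST I-SIDs

def dvr_domain_isid(domain_id: int) -> int:
    """DVR domain I-SID = 16678216 + DVR domain ID (auto-assigned, not configured)."""
    return DVR_BACKBONE_ISID + domain_id

def vist_isid(cluster_id: int) -> int:
    """vIST I-SID = 16677215 + SMLT Cluster ID."""
    return VIST_ISID_BASE + cluster_id

print(dvr_domain_isid(1), dvr_domain_isid(2))  # -> 16678217 16678218
print(vist_isid(1))                            # -> 16677216
```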
Network Virtualization Using Extreme Fabric Connect first-hop IP routing for any L3 flow received from those hosts. In case IP Multicast for the segment was enabled on the DVR controller, the DVR leaf nodes will also automatically activate IGMP on the same segment using the requested IGMP version. When the DVR leaf activates a new DVR interface for the DVR segment, it will allocate a local VLAN-ID and VRF-ID out of its local pool.
Network Virtualization Using Extreme Fabric Connect Figure 66 – DVR Gateway IP Provisioning on DVR Controllers Only The configuration pushed down to the DVR leaf will be limited to the creation of a DVR interface and associated IP routes for the VRF context (L3 VSN). This configuration will not include configuration of the DVR leaf access ports which will need to be provisioned separately to associate those ports with one or more DVR segments via Switched-UNI (VLAN-ID + L2 I-SID pair) bindings.
Network Virtualization Using Extreme Fabric Connect DVR Host Tracking and Traffic Forwarding It is worth looking at how DVR learns and tracks data center hosts as well as how it handles traffic forwarding for L3 flows within the data center from which it will become apparent how powerful DVR is compared to competing data center architectures.
Network Virtualization Using Extreme Fabric Connect When it comes to traffic forwarding, any L3 flow emitted by the server/VMs will always have destination MAC the DVR Gateway MAC and will thus benefit from efficient data plane IP routing from the immediately attached DVR leaf. In the example shown in Figure 69, all VMs are located in one of two DVR segments which both belong to the same L3 VSN tenant.
Network Virtualization Using Extreme Fabric Connect one makes use of GARP, which allows an IP host binding to be changed or updated and the other makes use of Reverse ARP (RARP), which allows a host MAC to be seen to move. In both cases, it is the destination hypervisor itself that generates these GARP/RARP messages after completion of a VM move and on behalf of that VM. It is not the VM itself that generates these packets.
Network Virtualization Using Extreme Fabric Connect entries will age out fairly quickly but by that time all DVR hosts in the DVR domain will have an updated host IP route for the VM. Figure 72 DVR with VM Migration Across DVR domains In the case of a VM migration across DVR domains, the mechanism is much the same. Again, the GARP/RARP will be received by all DVR nodes, across both DVR domains.
Network Virtualization Using Extreme Fabric Connect data center(s). So only a small proportion of data center host IPs will need to be optimized against north-south traffic tromboning. There are two possible approaches with DVR. In the first case, Extreme’s Fabric Connect is extended end-to-end from the data centers all the way to the wider campus, as depicted in Figure 73.
Network Virtualization Using Extreme Fabric Connect Caution Because these host routes will point to the nearest DVR controller of the DVR domain where the host is located, this form of traffic tromboning optimization will only work if the two data centers have been deployed into different DVR domains. If the two data centers are part of one single DVR domain, then this optimization is not possible.
Network Virtualization Using Extreme Fabric Connect DVR limitations and Design Alternatives To date there are still some limitations in the use of DVR, so this section will detail what those limitations are and how to design around them. Some of these limitations have already been mentioned in the preceding sections but will all be covered again here in order.
Network Virtualization Using Extreme Fabric Connect Quality of Service Initial Considerations The objective of network Quality of Service (QoS) is to allow different types of traffic to contend inequitably for shared network resources. The goal is to converge applications such as voice, video and data over the same network infrastructure. Voice is low constant bandwidth but is a real-time application and thus does not tolerate delay (latency).
Network Virtualization Using Extreme Fabric Connect With today’s platforms, which can act both as L2 and L3 switches, when an ingress packet is received carrying both DSCP and 802.1Q-Tag 802.1p-bit QoS markings, the default behavior is to QoS classify using the markings that correspond to the way in which the packet will be forwarded.
Network Virtualization Using Extreme Fabric Connect Tip All of Extreme Networks VSP series of Core and Distribution platforms are pre-configured for eight QoS classes. Extreme Networks ERS series access platforms can be configured to use QoS queue-sets with one to eight QoS classes (the default queue-set uses two queues) ExtremeXOS platforms can support up to eight Egress QoS Profiles (QP1-8) of which only QP1 & QP8 are configured by default.
Network Virtualization Using Extreme Fabric Connect Caution Extreme Networks ERS and VSP series platforms do not currently make use of the DEI bit. Figure 77 QoS SPB Model Because SPB uses a transport encapsulation with its own QoS markings, there are two possible approaches to combining the DiffServ model with an SPB backbone; we shall refer to these as the Provider and Uniform models.
Network Virtualization Using Extreme Fabric Connect class marking, so that subsequent transport (BCB) hops along the SPB path will be able to derive the correct PHB by only looking at those Backbone p-bits. This will include the egress BEB node, which will remove the Mac-in-Mac encapsulation, and at which point the DSCP markings will continue to be relevant to any subsequent IP routing hops (either external or internal to the SPB Fabric) encountered.
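A minimal sketch of the Uniform model's ingress re-marking step: deriving the Backbone 802.1p value from the customer DSCP. The mapping shown (p-bit taken from the DSCP class selector, i.e. the top three bits) is a common default and is an assumption here, not a statement of the exact platform QoS maps.

```python
def backbone_pbits_from_dscp(dscp: int) -> int:
    """Uniform model (illustrative): copy the DSCP class selector
    (top 3 of the 6 DSCP bits) into the Backbone VLAN 802.1p field."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be 0-63")
    return dscp >> 3

print(backbone_pbits_from_dscp(46))  # EF (voice)   -> p-bit 5
print(backbone_pbits_from_dscp(34))  # AF41 (video) -> p-bit 4
print(backbone_pbits_from_dscp(0))   # best effort  -> p-bit 0
```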
Network Virtualization Using Extreme Fabric Connect QoS Considerations with Fabric Extend When using Fabric Extend with IP (VXLAN) encapsulation, we are yet again adding an additional packet header to transparently cross a WAN provider cloud. In terms of QoS, every time a new encapsulation is added, there is a need to reflect the QoS of the packet payload in that new header because that new header will be used by downstream routers to forward the packet and we need to ensure a consistent PHB.
Network Virtualization Using Extreme Fabric Connect Figure 80 QoS Marking Over Fabric Extend The Fabric Extend DSCP QoS markings become more relevant when Fabric Extend is not used to extend Fabric Connect over a WAN provider, but instead is used to extend Fabric Connect over an enterprise’s self-owned campus IP core. This approach can be used as a migration strategy to fabric-enable the access of the network before SPB-enabling the core of the network.
Network Virtualization Using Extreme Fabric Connect Consolidated Design Overview This section will illustrate how the various VSN service types should come together into the reference architecture outlined in the Guiding Principles section on page 27.
Network Virtualization Using Extreme Fabric Connect The same picture illustrates how at the same time some other fabric-wide L2 VSN segments (e.g., for Guest WLAN users where the default gateway is implemented by a captive portal in the data center and thus no IP interface is needed in the SPB fabric) can also be extended to the same wiring closet switches in the same exact manner.
Network Virtualization Using Extreme Fabric Connect Data Center Distribution The data center distribution model is slightly different, as here there is a real benefit in extending Fabric Connect all the way to the ToR switches. This allows high-speed interconnection of the ToRs in smaller data center designs, as well as the use of DVR, which in all cases ensures the shortest path and lowest latency for all data center east-west and north-south traffic flows.
Network Virtualization Using Extreme Fabric Connect Figure 84 Data Center Top of Rack (ToR) BEB with SMLT clustering From a configuration perspective, the DVR leaf ToR switch remains purely an L2 switch and only needs to be provisioned with the L2 VSN I-SID to attach to a given server access port (even this configuration can be automated if the server hypervisor is capable of Fabric Attach via the use of OVS). © 2019 Extreme Networks, Inc. All rights reserved.
Network Virtualization Using Extreme Fabric Connect High Availability High Availability (HA) is the function of eliminating single points of failure and detecting failures as they occur. System Level Resiliency System level resiliency includes having nodes with redundant hardware components and enhanced software features that detect system and hardware failures and provide recovery mechanisms.
Network Virtualization Using Extreme Fabric Connect Clearly for Fabric Connect fast-rerouting to be possible, the SPB core needs to be architected with sufficient redundant paths and switching capacity such that loss of a component (link or node) does not impact connectivity and/or switching capacity after a failure.
Network Virtualization Using Extreme Fabric Connect • Unidirectional link. A link that is only able to transmit but not receive (or vice-versa), which would thus result in traffic getting black-holed in one direction. In the case of a single fibre strand failure, Ethernet auto-negotiation Far End Fault Indication (FEFI) and 10 Gigabit Ethernet Remote Fault Indication (RFI) will be able to detect this and take the Ethernet interface down even on the side where light is still received.
Network Virtualization Using Extreme Fabric Connect Figure 85 Multi-Link Trunking (MLT) Used in Core and Access In the Extreme Networks VSP and ERS series terminology, this is referred to as Multi-Link Trunking (MLT) when the link aggregation is statically configured and as Link Aggregation Groups (LAG) when the aggregation is done dynamically by the LACP protocol.
Network Virtualization Using Extreme Fabric Connect Caution All links forming the link aggregation (MLT/LAG) must have the same interface speed (with LACP this is a mandatory requirement; with static MLT, some platforms may tolerate different speeds, but it is not recommended). Otherwise this would undermine the hashing algorithm’s ability to distribute load on the aggregate links (slower links will congest, while faster links will be under-utilized).
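The congestion effect warned about in the Caution above can be shown with a back-of-the-envelope calculation. The sketch below assumes the hash distributes flows evenly across member links by flow count, with no awareness of link speed (typical of static hashing schemes); the traffic figures are illustrative only.

def offered_load_per_member(total_gbps, member_speeds_gbps):
    # Even flow distribution: each member link receives the same share of
    # offered traffic regardless of its speed.
    share = total_gbps / len(member_speeds_gbps)
    report = []
    for speed in member_speeds_gbps:
        status = "congested" if share > speed else "under-utilized" if share < speed else "full"
        report.append((speed, share, status))
    return report

# 24 Gbps of aggregate traffic over a mixed 10G + 10G + 1G aggregation:
for speed, share, status in offered_load_per_member(24, [10, 10, 1]):
    print(f"{speed} Gbps member gets {share:.1f} Gbps offered -> {status}")
# The 1 Gbps member is offered 8 Gbps and congests, while the 10 Gbps members
# remain under-utilized.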
Network Virtualization Using Extreme Fabric Connect Figure 86 Split Multi-Link Trunking (SMLT) Used in SPB Fabric Access SMLT provides the ability to have two active/active switches in a cluster. The two SMLT cluster switches appear as a single switch to the attached device, such as a server, L2 switch, or L3 switch or router, which in all cases is configured with either a static MLT (or a third-party static link aggregation scheme such as Cisco EtherChannel) or LACP.
Network Virtualization Using Extreme Fabric Connect Tip On earlier platforms using the original IST implementation (before vIST), the IST DMLT would be provisioned to also act as an SPB NNI MLT interface. The introduction of Virtual IST (vIST) on the latest Extreme Networks VSP series platforms takes advantage of the SPB Fabric and removes the above IST restrictions. With a vIST, the IST is no longer tied to any physical MLT instance, but is instead associated with an L2 VSN I-SID.
Network Virtualization Using Extreme Fabric Connect For user VLANs that need to be IP routed on the Distribution SMLT cluster, it is therefore necessary to implement some form of IP Gateway Redundancy on the IP interfaces of that VLAN on each switch forming the cluster.
Network Virtualization Using Extreme Fabric Connect Tip An IP router will only IP route a packet if the packet was sent to its MAC address (or any other MAC address it has been programmed to “own”). By letting both VRRP routers “own” the VRRP MAC address, both routers are able to perform IP routing for it in an active/active fashion. Tip Extreme Networks supports VRRP with Backup-Master for both IPv4 (VRRPv2 or VRRPv3) and IPv6 (VRRPv3) in the VSP series of Fabric routing switches.
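The active/active routing behavior highlighted in the first Tip above can be modelled conceptually as follows. This is not a VRRP implementation; it only captures the rule that a router routes a frame whose destination MAC it owns, and that with Backup-Master enabled both SMLT cluster members own the VRRP virtual MAC. All names and addresses below are hypothetical.

VRRP_VIRTUAL_MAC = "00:00:5e:00:01:0a"

class ClusterRouter:
    def __init__(self, name, chassis_mac, owns_vrrp_mac):
        self.name = name
        self.owned_macs = {chassis_mac}
        if owns_vrrp_mac:                  # Backup-Master: the backup also routes
            self.owned_macs.add(VRRP_VIRTUAL_MAC)

    def handle(self, dest_mac):
        # A router only IP routes frames sent to a MAC address it "owns".
        return (f"{self.name}: routed" if dest_mac in self.owned_macs
                else f"{self.name}: not routed")

master = ClusterRouter("BEB-1", "00:01:aa:bb:cc:01", owns_vrrp_mac=True)
backup = ClusterRouter("BEB-2", "00:01:aa:bb:cc:02", owns_vrrp_mac=True)

# A frame hashed to either SMLT cluster member is routed locally:
print(master.handle(VRRP_VIRTUAL_MAC))   # BEB-1: routed
print(backup.handle(VRRP_VIRTUAL_MAC))   # BEB-2: routed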
Network Virtualization Using Extreme Fabric Connect Figure 88 SMLT with RSMLT-Edge With RSMLT, each switch in the SMLT cluster possesses its own IP address on a given VLAN. End user devices can be configured with either IP address (it does not matter which) as their default gateway. The VLAN IP interface of the first switch is tied to a VLAN-allocated MAC address derived from the chassis MAC of that switch; likewise for the second switch in the cluster.
Network Virtualization Using Extreme Fabric Connect Configuration-wise it is similar to VRRP in that both a local IP and a virtual DVR Gateway IP need to be configured (the latter to be used as default gateway by end-stations). Like VRRP, DVR does not have any SMLT IST dependencies and thus can also be used on user segments which span multiple SMLT clusters. Like RSMLT, DVR does not use or need a Hello protocol and can therefore scale to the maximum number of IP interfaces that the platform supports.
Network Virtualization Using Extreme Fabric Connect enabled, and will thus learn from IS-IS the availability of IP net 2 via both BEB-1 and BEB-2. As the SPB Fabric is operating with two BVLANs, the two IP paths multiply by the number of BVLANs and become 4 distinct IP ECMP paths, which will create as many entries in the VRF (or GRT) IP routing table. Tip Use of SMLT on the edge BEBs allows equal cost paths provided by the SPB BVLANs to be multiplied by a factor of two.
Network Virtualization Using Extreme Fabric Connect • Secondary split-BEB: Node with the highest IS-IS System ID; will always transmit traffic into BVLAN#2. Tip On the receive side, both nodes forming the SMLT cluster are able to receive from either BVLAN. In fact, all traffic received from an SMLT interface is Mac-in-Mac encapsulated using a source SMLT-Virtual-BMAC which is jointly “owned” by both nodes forming the SMLT cluster.
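The multiplication of equal-cost paths discussed in this section reduces to simple arithmetic, sketched below for the example of a destination IP network advertised by both members of an SMLT cluster across a fabric running two BVLANs.

# Each split-BEB advertising the route contributes one next hop, and each
# BVLAN provides a distinct equal-cost tree towards it.
split_bebs_advertising_route = 2   # BEB-1 and BEB-2
bvlans = 2                         # BVLAN#1 and BVLAN#2
ecmp_paths = split_bebs_advertising_route * bvlans
print(ecmp_paths)                  # -> 4 entries in the VRF (or GRT) routing table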
Network Virtualization Using Extreme Fabric Connect Tip From a CFM standpoint, an L2 ping, traceroute, or tracetree on the ingress BEB must be performed on the corresponding BVLAN. This remains deterministic as required by SPB, whereby BEB-1 uses BVLAN1 while BEB-2 uses BVLAN2. We’ll conclude this section by looking at how IP Multicast load balancing is handled with Fabric Connect.
Network Virtualization Using Extreme Fabric Connect In the reverse direction, Source-Group streams 3 through 6 originate from the SMLT cluster side. Provided that the SMLT edge device is capable of performing an MLT hash for Multicast traffic, we can expect on average that BEB-1 will receive 50% of those streams and BEB-2 the other 50%. As streams 3 & 4 are hashed to BEB-1, they will get assigned to I-SIDs 16000001 and 16000002, respectively, both on BVLAN1.
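The stream placement described above can be sketched as follows. The hash function, I-SID numbering, and per-BEB allocation shown here are illustrative assumptions only; real platforms allocate multicast data I-SIDs from a reserved range and hash according to their own algorithms, while the BVLAN used remains deterministic per split-BEB.

import zlib

BEB_BVLAN = {"BEB-1": "BVLAN1", "BEB-2": "BVLAN2"}   # deterministic transmit BVLAN
next_isid = {"BEB-1": 16000001, "BEB-2": 16000101}   # illustrative data I-SID pools

def hash_to_beb(source, group):
    # Stand-in for the SMLT edge device's MLT multicast hash.
    return "BEB-1" if zlib.crc32(f"{source},{group}".encode()) % 2 == 0 else "BEB-2"

for source, group in [("10.1.1.10", "239.1.1.3"), ("10.1.1.11", "239.1.1.4"),
                      ("10.1.1.12", "239.1.1.5"), ("10.1.1.13", "239.1.1.6")]:
    beb = hash_to_beb(source, group)
    isid = next_isid[beb]
    next_isid[beb] += 1
    print(f"({source}, {group}) ingresses on {beb}: data I-SID {isid} on {BEB_BVLAN[beb]}")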
Network Virtualization Using Extreme Fabric Connect routing domain end-point is provisioned (VRF/L3 VSN). The same is true when activating IP Multicast within a service. Tip With MPLS architectures, the end-point provisioning goes only as far as the PE distribution node and in no cases to the access switches. This is true for both IPVPN and VPLS service types. In the case of Draft Rosen Multicast VPNs, the MPLS core will need to be PIM enabled.
Network Virtualization Using Extreme Fabric Connect access and 10Gbps in the data center access ToR, any loop will fully consume those rates on the links involved and will impact CPU utilization on the nodes that own the looped links, as the source MAC of the looping packet will be in constant conflict with the real location of that MAC.
Network Virtualization Using Extreme Fabric Connect • Simple Loop Prevention Protocol (SLPP) on distribution (IP gateway) nodes. Extreme Networks’ best design guidelines mandate that user and server access ports must always be configured with Spanning Tree enabled as MSTP Edge ports (or STP FastStart).
Network Virtualization Using Extreme Fabric Connect c. If instead the fault condition persists, the second distribution node will also disable its uplink and generate a trap; the fault condition is then corrected by isolating the offending wiring closet switch from the rest of the network. However, all users connected to that access switch will lose connectivity. 4. If instead we are experiencing two different VLANs collapsed together: a.
Network Virtualization Using Extreme Fabric Connect Fabric and VSN Security This section covers the security aspects of managing and maintaining a virtualized network architecture such as Extreme’s Fabric Connect SPB. We start by covering the benefits that separation and segmentation offer in a virtualized architecture, and how such an architecture must conceal itself from outside users to repel potential attacks.
Network Virtualization Using Extreme Fabric Connect Concealment of the Core Infrastructure Concealment of the core infrastructure is a very important property for virtualized network architectures, as it makes it much harder for potential outside attackers to gain any useful information which could be used in an attack to compromise the availability and security of the network.
Network Virtualization Using Extreme Fabric Connect establish the initial service path (or sections thereof) upon which all additional path notions, such as BGP and MPLS, depend, meaning that these networks are potentially vulnerable to IP scanning techniques. Strong access control lists can mask the environment from the general routed core, but this carries its own set of conundrums: because path behavior depends upon reachability, there is only so much that can be masked.
Network Virtualization Using Extreme Fabric Connect The most stealthy VSN is an L2 VSN where no IP interface has been defined, as this is a totally closed L2 environment where nothing can enter or exit unless otherwise provisioned. It is invisible to the IP protocol as IP is not being used. IP can still run ‘inside’ the L2 VSN, but with the IP subnet being established by the devices that attach to it.
Network Virtualization Using Extreme Fabric Connect Figure 95 L3 VSN Topology as Seen by IP Scanning Tools Resistance to Attacks By resistance to attacks we mean the ability of the core to withstand denial of service (DoS) attacks from the outside. It is essential that the core itself not be accessible to such attacks and that, if an attack is launched from within a VSN, no other VSNs are affected by it.
Network Virtualization Using Extreme Fabric Connect Tip As a matter of comparison, consider that there are two basic ways an MPLS core can be attacked: by attacking the provider-edge router, or by attacking the signalling mechanisms of MPLS. Both types of attacks require specific router configuration via ACLs to be repelled. In the Fabric Connect model the latter is simply neither applicable nor possible.
Network Virtualization Using Extreme Fabric Connect Spoofing MAC addresses is even easier than spoofing IP addresses, as Ethernet transparent learning bridges (in VLANs and L2 VSN services alike) inspect source MAC addresses and immediately update their forwarding tables if a given MAC address is seen arriving from a different port.
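To make the point concrete, the following minimal sketch models the transparent learning behavior just described, showing how a spoofed source MAC instantly moves the forwarding entry. It is a conceptual model, not switch code; the MAC addresses and port names are hypothetical.

fdb = {}   # forwarding database: MAC address -> port

def learn(src_mac, ingress_port):
    # Transparent bridging: the table is updated as soon as a source MAC is
    # seen arriving on a different port, with no validation of legitimacy.
    previous = fdb.get(src_mac)
    fdb[src_mac] = ingress_port
    if previous and previous != ingress_port:
        print(f"{src_mac} moved from {previous} to {ingress_port}")

learn("aa:bb:cc:dd:ee:01", "port1")   # legitimate host
learn("aa:bb:cc:dd:ee:01", "port7")   # attacker spoofing the same source MAC
# Frames destined to aa:bb:cc:dd:ee:01 are now forwarded to port7.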
Network Virtualization Using Extreme Fabric Connect By default, all innate IP services are based on the GRT. This is the only viable channel of management communication without the use of dedicated VSNs and physical loopback of management interfaces, which quickly results in an obtuse implementation. By removing the client user and device communities from the GRT we provide for a very clean and dedicated IP environment for the management of the fabric and security infrastructure.
Network Virtualization Using Extreme Fabric Connect Figure 98 GRT (VRF0) L2 VSN The L2 VSN is now an extension of the GRT and as such is part of that security domain. If this is done unintentionally, a significant security breach could occur. There may be valid reasons for doing this, so it is important to consider the exceptions and, more importantly, to document, monitor, and audit them as you would any security exception policy. As a general practice, however, this should be avoided.
Network Virtualization Using Extreme Fabric Connect Layer 3 Virtual Service Networks There are instances where you require a fully private IP routing environment. This is provided for by L3 Virtual Service Networks. In this service instance, the I-SID is used as the peering mechanism for the individual VRFs that exist within the constricted community of interest. In traditional networking, the VRF is typically peered using an IGP such as OSPF, or with a protocol like BGP for a true peering model.
Network Virtualization Using Extreme Fabric Connect Figure 100 L3 VSN Extension However, when IP multicast is not required and it is desired to have an end-to-end L2 service, the L2 VSN extension method is often selected. There is an important principle to remember regarding L2 VSNs. Any IP address assigned to the VLAN gets placed onto the GRT. This obviously creates a security issue with segmentation. In the figure above, note that BEB-A has a local VLAN termination to a VRF.
Network Virtualization Using Extreme Fabric Connect Fabric as Best Foundation for SDN Software-Defined Networking (SDN) offers enterprises significant potential to not only increase efficiency and reduce operational cost, but fundamentally alter the way applications and infrastructure interoperate, consequently improving business agility, innovation, and competitive positioning.
Network Virtualization Using Extreme Fabric Connect • Unleashing the power of the open ecosystem. Building upon the automated core and Fabric Attach-enabled edge, open SDN edge devices, such as any Open vSwitch-based device, can automatically and zero-touch attach to the desired virtual service network.
Network Virtualization Using Extreme Fabric Connect Glossary ACL Access Control List AF Assured Forwarding (DSCP PHB) AFI Authority and Format Identifier (NSAP addresses) AP Access Point (WLAN) ARP Address Resolution Protocol (IPv4) AS Autonomous System ASCII American Standard Code for Information Interchange ATM Asynchronous Transfer Mode BCB Backbone Core Bridge BEB Backbone Edge Bridge BFD Bidirectional Forwarding Detection BGP Border Gateway Protocol (RFC 4271) BMAC Backbone MAC
Network Virtualization Using Extreme Fabric Connect DMZ Demilitarized Zone DNA Digital Network Architecture (Cisco) DNN Dynamic Nick-Name assignment DNS Domain Name System DoS Denial of Service (attack) DSCP Differentiated Services Code Point DVR Distributed Virtual Routing EAP Extensible Authentication Protocol EAPoL Extensible Authentication Protocol (EAP) over LAN EAP-TLS EAP Transport Layer Security ECT Equal Cost Tree (SPB) eBGP External BGP ECMP Equal Cost Multi-Path EF Expedited Forwarding (DSCP PHB)
Network Virtualization Using Extreme Fabric Connect GRE Generic Routing Encapsulation GRT Global Routing Table (VRF-0) HA High Availability HMAC Hash-based Message Authentication Code HTTP Hypertext Transfer Protocol HTTPS Hypertext Transfer Protocol Secure iBGP Internal BGP ICMP Internet Control Message Protocol I-DEI Instance Drop Eligible Indicator (IEEE 802.1ah)
Network Virtualization Using Extreme Fabric Connect MAC Media Access Control MLAG Multi-chassis Link Aggregation Group MD5 Message-Digest algorithm MDT Multicast Distribution Tree MEF Metro Ethernet Forum MHMA Multiple Host Multiple Authentication (IEEE 802.1X) MHSA Multiple Host Single Authentication (IEEE 802.1X)
Network Virtualization Using Extreme Fabric Connect PE Provider Edge (MPLS) PHB Per Hop Behavior (DSCP) PHP Penultimate Hop Popping (MPLS) PIM Protocol Independent Multicast PIM-SM Protocol Independent Multicast Sparse Mode PIM-SSM Protocol Independent Multicast source specific mode PKI Public Key Infrastructure PoE Power over Ethernet (IEEE 802.
Network Virtualization Using Extreme Fabric Connect SPBV Shortest Path Bridging VID (uses IEEE 802.1ad Q-in-Q encapsulation) SPF Shortest Path First (IS-IS and OSPF Dijkstra’s algorithm) SPSourceID Shortest Path Source Identifier (IEEE 802.1aq) SSH Secure Shell SSID Service Set Identifier (WLAN IEEE 802.11)
Network Virtualization Using Extreme Fabric Connect VTEP VXLAN Tunnel Endpoint VXLAN Virtual Extensible LAN (RFC 7348) WAN Wide Area Network WLAN Wireless LAN WRR Weighted Round Robin ZTC Zero Touch Client (Fabric Attach) ZTF Zero Touch Fabric © 2019 Extreme Networks, Inc. All rights reserved.
Network Virtualization Using Extreme Fabric Connect Revisions
Rev 01 (19 February 2015). Content: Ludovico Stevens, John Vant Erve, Ed Koehler, Andrew Rufener. Review: Goeran Friedl, Didier Ducarre. Remarks: Published as Avaya external document.
Rev 02 (29 September 2018). Content: Ludovico Stevens, Ed Koehler, Didier Ducarre. Review: Steve Emert, Gregory Deffenbaugh, Mikael Holmberg, Scott Fincher, Alexander Nonikov, John Hopkins, Roger Lapuh, Goeran Friedl, Stephane Grosjean. Remarks: Extreme N