Ethernet Routing Switch Virtual Services Platform Engineering > Technical Configuration Guide for Microsoft Network Load Balancing Extreme Networks Document Date: November 2020 Part Number: 9036882-00 Revision AA
© 2020, Extreme Networks, Inc. All Rights Reserved. Notice While reasonable efforts have been made to ensure that the information in this document is complete and accurate at the time of printing, Extreme Networks, Inc. assumes no liability for any errors. Extreme Networks, Inc. reserves the right to make changes and corrections to the information in this document without the obligation to notify any person or organization of such changes.
Copyright Except where expressly stated otherwise, no use should be made of materials on this site, the Documentation, Software, Hosted Service, or hardware provided by Extreme Networks.
Users are not permitted to use such Marks without prior written consent from Extreme Networks or such third party which may own the Mark. Nothing contained in this site, the Documentation, Hosted Service(s) and product(s) should be construed as granting, by implication, estoppel, or otherwise, any license or right in and to the Marks without the express written permission of Extreme Networks or the applicable third party. Extreme Networks is a registered trademark of Extreme Networks, Inc.
Table of Contents
1. Overview ................................................. 11
1.1 Architecture ............................................ 12
1.2 Operation ...............................................
Figures
Figure 1-1 – Network Load Balancing Cluster ................. 11
Figure 1-2 – Example Network Load Balancing Cluster ......... 11
Figure 1.1 – Network Load Balancing Stack ................... 12
Figure 1.2.1-1 – Unicast Virtual MAC Assignment .............
Tables
Table 1.5 – MAC Address Formats ............................. 19
Table 2.1.1 – Single L2 Switch Supported Platforms .......... 26
Table 2.2.1 – Centralized L3 Stackable Switch Supported Platforms ... 28
Table 2.3.1 – Single L3 Modular Switch Supported Platforms ..
Conventions
This section describes the text, image, and command conventions used in this document.

Symbols
Tip – Highlights a configuration or technical tip.
Note – Highlights important information to the reader.
Warning – Highlights important information about an action that may result in equipment damage, configuration loss, or data loss.

Text
Bold text indicates emphasis.
1. Overview Network Load Balancing is a clustering technology available with the Microsoft Windows 2000 and Windows 2003 Server families of operating systems. Network Load Balancing uses a distributed algorithm to load balance TCP/IP network traffic across a number of hosts, enhancing the scalability and availability of mission-critical, IP-based services such as web, VPN, streaming media, and firewall services.
1.1 Architecture Network Load Balancing uses a fully distributed software architecture: an identical copy of the Network Load Balancing driver runs in parallel on each cluster host. The drivers arrange for all cluster hosts on a single subnet to concurrently detect incoming network traffic for the cluster's virtual IP address.
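The fully distributed filtering idea can be sketched as follows: every host sees every frame for the cluster address, but only the "owner" of a hash bucket accepts it. Microsoft's actual hash and bucket-map algorithm is proprietary; the `bucket_for`/`accepts` names, the CRC32 hash, and the 60-bucket split below are illustrative assumptions only.

```python
# Conceptual sketch of distributed load balancing: each host independently
# hashes the client address and accepts the frame only if it owns the
# resulting bucket. NOT Microsoft's actual (proprietary) algorithm.
import zlib

def bucket_for(client_ip: str, num_buckets: int = 60) -> int:
    """Map a client IP to one of num_buckets load-balancing buckets."""
    return zlib.crc32(client_ip.encode()) % num_buckets

def accepts(host_buckets: set, client_ip: str) -> bool:
    """A host accepts a frame only if it owns the client's bucket."""
    return bucket_for(client_ip) in host_buckets

# Two hosts split the buckets evenly; exactly one host accepts each client.
host1 = set(range(0, 30))
host2 = set(range(30, 60))
for ip in ("192.168.50.10", "10.0.0.7", "172.16.1.99"):
    assert accepts(host1, ip) != accepts(host2, ip)
```

Because every host runs the same function over the same inputs, no inter-host coordination is needed per packet, which is what makes the architecture scale.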
1.2 Operation Microsoft Network Load Balancing can be deployed in unicast (default), multicast and IGMP-multicast modes. These modes are configured on the MSNLB server cluster. The following sections highlight the three options for MSNLB configuration. 1.2.1 Unicast Unicast mode is the default option for Network Load Balancing. With unicast mode, Network Load Balancing replaces the network adapter’s real MAC address with a cluster virtual MAC address.
If each network adapter's MAC address is unique, how are frames delivered to all members of the cluster? Microsoft Network Load Balancing solves this problem with IP. A client learns the cluster's MAC address using the Address Resolution Protocol (ARP). When a client sends an ARP request for the MAC address of the cluster's virtual IP address, the ARP response contains the cluster's virtual MAC address and not the masked per-host MAC addresses.
1.2.2 Multicast / IGMP Multicast Multicast and IGMP-multicast modes are optional modes for Network Load Balancing. With multicast mode, a multicast virtual MAC address with the prefix 03-bf is bound to all cluster hosts, but each network adapter's real MAC address is retained. The multicast MAC address is used for client-to-cluster traffic, and the adapter's real MAC address is used for network traffic specific to the host computer. MAC addresses whose first octet is odd (that is, with the individual/group bit set) are multicast addresses.
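The 03-bf multicast virtual MAC is derived from the cluster's virtual IP address, with the four IP octets forming the last four octets of the MAC. A minimal sketch (the function name is illustrative):

```python
def nlb_multicast_mac(cluster_ip: str) -> str:
    """Multicast-mode cluster MAC: 03-bf followed by the cluster IP octets."""
    octets = [int(o) for o in cluster_ip.split(".")]
    return ":".join(f"{b:02x}" for b in [0x03, 0xbf] + octets)

print(nlb_multicast_mac("192.168.50.150"))  # 03:bf:c0:a8:32:96
```

For example, a cluster virtual IP of 192.168.50.150 yields the multicast MAC 03:bf:c0:a8:32:96 (c0.a8.32.96 being the hex form of the IP).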
IGMP-multicast mode implements IGMP: all hosts in the cluster send IGMPv1 group membership reports. IGMP allows the Ethernet switches to prune the multicast traffic, limiting the flooding to only the ports that connect to the cluster hosts. In IGMP-multicast mode, the traffic uses a multicast address at both Layer 2 and Layer 3 of the packet.
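The Layer 2 address a snooping switch tracks follows the standard IPv4 multicast-to-MAC mapping (RFC 1112): the fixed prefix 01-00-5e plus the low-order 23 bits of the group address. The sketch below applies that mapping to the 239.255.x.y groups this guide's examples use; the function name is illustrative.

```python
def ip_multicast_to_mac(group_ip: str) -> str:
    """Standard IPv4 multicast-to-MAC mapping (RFC 1112): 01-00-5e
    plus the low-order 23 bits of the multicast group address."""
    o = [int(x) for x in group_ip.split(".")]
    return ":".join(f"{b:02x}"
                    for b in [0x01, 0x00, 0x5e, o[1] & 0x7F, o[2], o[3]])

# The multicast group shown in the verification steps later in this guide:
print(ip_multicast_to_mac("239.255.1.50"))  # 01:00:5e:7f:01:32
```

Because only 23 bits of the IP group address survive the mapping, groups such as 239.255.1.50 land on a MAC in the 01:00:5e:7f range.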
Frames from clients are forwarded to the cluster's virtual IP address with the destination MAC address set to the cluster's virtual multicast MAC address. Depending on the multicast mode, the frames are either flooded to all ports in the broadcast domain or forwarded to only the ports that the cluster hosts are connected to. Figure 1.2.2-4 – IGMP-Multicast Traffic Flow
1.4 Convergence Network Load Balancing hosts periodically exchange multicast or broadcast heartbeat messages within the cluster. This allows the hosts to monitor the status of the cluster.
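The convergence mechanism amounts to a missed-heartbeat detector: a host is presumed failed once it has been silent for longer than the tolerance window, at which point the remaining hosts redistribute the load. The one-second heartbeat period and five-message tolerance below are commonly cited NLB defaults, assumed here for illustration; the function name is also illustrative.

```python
# Hedged sketch of convergence detection: a peer is declared failed after
# missing several consecutive heartbeats. Period/tolerance are assumed
# defaults, not values confirmed by this guide.
HEARTBEAT_PERIOD_S = 1.0
MISSED_TOLERANCE = 5

def is_failed(last_heartbeat_s: float, now_s: float) -> bool:
    """True once a peer has been silent longer than the tolerance window."""
    return (now_s - last_heartbeat_s) > HEARTBEAT_PERIOD_S * MISSED_TOLERANCE

assert not is_failed(last_heartbeat_s=10.0, now_s=14.0)  # 4 s silent: alive
assert is_failed(last_heartbeat_s=10.0, now_s=16.5)      # >5 s silent: converge
```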
MAC Address Type  MAC Address Range
Multicast         x1-xx-xx-xx-xx-xx, x3-xx-xx-xx-xx-xx, x5-xx-xx-xx-xx-xx,
                  x7-xx-xx-xx-xx-xx, x9-xx-xx-xx-xx-xx, xB-xx-xx-xx-xx-xx,
                  xD-xx-xx-xx-xx-xx, xF-xx-xx-xx-xx-xx (exception: the broadcast address)
Broadcast         FF-FF-FF-FF-FF-FF

Table 1.5 – MAC Address Formats

1.5.1 Globally Unique Globally unique addresses are allocated by the IEEE in blocks containing 2^24 (16,777,216) addresses and start with even first octets.
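The multicast ranges in Table 1.5 share one property: the least-significant bit of the first octet (the IEEE individual/group bit) is 1, which is why every odd first octet is multicast. A minimal check, with an illustrative function name:

```python
def is_multicast_mac(mac: str) -> bool:
    """A MAC is multicast when the I/G bit (least-significant bit of
    the first octet) is 1 - i.e. the first octet is odd."""
    return int(mac.split(":")[0], 16) & 0x01 == 1

assert is_multicast_mac("03:bf:c0:a8:32:96")      # NLB multicast-mode MAC
assert is_multicast_mac("ff:ff:ff:ff:ff:ff")      # broadcast also has I/G=1
assert not is_multicast_mac("02:bf:c0:a8:32:96")  # NLB unicast-mode MAC
```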
1.5.4 Network Load Balancing Unicast When NLB is deployed in unicast mode, the globally unique MAC address on each cluster host's network adapter is replaced with a locally administered MAC address assigned by Microsoft. The locally administered MAC address starts with an 02:xx prefix, and the second octet contains the host-id of the host in the cluster. The cluster's virtual MAC address is also a locally administered MAC address and starts with an 02:bf prefix. Figure 1.5.4 – Unicast MAC Format
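Both unicast-mode addresses embed the cluster's virtual IP octets, which is why FDB entries such as 02:01:c0:a8:32:96 appear in the SMLT verification output later in this guide. A sketch of the derivation (function names are illustrative):

```python
def nlb_unicast_cluster_mac(cluster_ip: str) -> str:
    """Unicast-mode cluster MAC: 02-bf followed by the cluster IP octets."""
    o = [int(x) for x in cluster_ip.split(".")]
    return ":".join(f"{b:02x}" for b in [0x02, 0xbf] + o)

def nlb_unicast_host_mac(host_id: int, cluster_ip: str) -> str:
    """Masked per-host source MAC: 02-<host-id> followed by the IP octets,
    used as the frame source so switches never learn the cluster MAC."""
    o = [int(x) for x in cluster_ip.split(".")]
    return ":".join(f"{b:02x}" for b in [0x02, host_id] + o)

print(nlb_unicast_cluster_mac("192.168.50.150"))  # 02:bf:c0:a8:32:96
print(nlb_unicast_host_mac(1, "192.168.50.150"))  # 02:01:c0:a8:32:96
```

Using a distinct per-host source MAC prevents the Ethernet switch from binding the shared cluster MAC to a single port, preserving the flooding behavior unicast NLB relies on.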
1.6 Implementation Models Microsoft's Network Load Balancing can be deployed using one of four models. This section provides a brief overview of the supported models along with the advantages and disadvantages of each. 1.6.1 Single Network Adapter in Unicast Mode The single network adapter unicast model is suitable for a cluster in which ordinary network communication among cluster hosts is not required and there is limited dedicated traffic from outside the cluster subnet to specific cluster hosts.
1.6.2 Single Network Adapter in Multicast / IGMP Multicast Mode The single network adapter multicast / IGMP-multicast model is suitable for a cluster in which ordinary network communication among cluster hosts is necessary or desirable, but in which there is limited dedicated traffic from outside the cluster subnet to specific cluster hosts. Figure 1.6.2 – Single Adapter Multicast / IGMP Multicast Mode
Advantages: Only one network adapter per cluster host is required.
1.6.3 Multiple Network Adapters in Unicast Mode The multiple network adapter unicast model is suitable for a cluster in which ordinary network communication among cluster hosts is necessary or desirable. It is also appropriate when you want to separate the traffic used to manage the cluster from the traffic occurring between the cluster and client computers. Figure 1.6.3 – Multiple Adapters Unicast Mode
Advantages: Network communication between cluster hosts is permitted.
1.6.4 Multiple Network Adapters in Multicast / IGMP Multicast Mode The multiple network adapter multicast model is suitable for a cluster in which ordinary network communication among cluster hosts is necessary and in which there is heavy dedicated traffic from outside the cluster subnet to specific cluster hosts. Figure 1.6.4 – Multiple Adapters Multicast / IGMP Multicast Mode
Advantages: Network communication between cluster hosts is permitted.
2. Supported Topologies and Releases The following section outlines the tested and supported Network Load Balancing topologies on Extreme Ethernet switching platforms. This section provides information on specific releases of software that may be required as well as any features that may need to be enabled on Extreme Ethernet switching platforms to support Microsoft Network Load Balancing clusters.
2.1.1 Supported Extreme Switching Platforms The following table lists the Extreme Ethernet switching platforms that can be deployed to support this topology:

Extreme Switch Model  Unicast Mode  Multicast Mode  IGMP-Multicast Mode
VSP 9000              Yes           Yes             Yes
ERS 8600 / 8800       Yes           Yes             Yes
ERS 8300              Yes           Yes             Yes
ERS 5000              Yes           Yes             Yes
ERS 4000              Yes           Yes             Yes
ERS 3500              Yes           Yes             Yes
ERS 2500              Yes           Yes             Yes

Table 2.1.1 – Single L2 Switch Supported Platforms
2.2 Centralized Layer 3 Stackable Switch The following topology is supported when an Ethernet Routing Switch 5000 or 4000 is used to route between server and client VLANs. The Network Load Balancing cluster hosts must be connected to a Layer 2 subtended Ethernet Switch. The clients may be connected directly to the Core switch or to a Layer 2 subtended Ethernet Switch. The subtended Ethernet switch can use a single uplink port or a multi-port MLT/DMLT trunk.
2.2.2 Configuration Example To support this topology, the following configuration steps need to be performed on the Ethernet switching platforms: Mandatory Configuration Steps No mandatory configuration steps need to be performed. By default, Extreme Ethernet switching platforms flood Network Load Balancing cluster traffic, so no additional configuration is required.
2.2.2.2 ERS 5000 Core Verification Steps
1 The following CLI command displays the ARP entry for 192.168.50.150:

ERS5530-24TFD(config)# show ip arp 192.168.50.150
===============================================================================
                                    IP ARP
===============================================================================
IP Address      Age (min)  MAC Address  VLAN-Unit/Port/Trunk  Flags
-------------------------------------------------------------------------------
192.168.50.
2 Display multicast group membership for VLAN 1300. In this example the cluster hosts are connected to ports 1/23 and 1/24:

4550T-PWR# show vlan multicast membership 1300
Number of groups: 1
Multicast Group Address  Port
-----------------------  ----
239.255.1.50             23
239.255.1.50             24
2.3 Single Layer 3 Modular Switch The following topology is supported when a Virtual Services Platform 9000 or an Ethernet Routing Switch 8600/8800 or 8300 is used to route between server and client VLANs, with both the Network Load Balancing cluster hosts and the clients directly connected to the Extreme switch and IP routing enabled. This topology supports Network Load Balancing clusters in unicast, multicast, and IGMP-multicast modes. Figure 2.3 – Single Ethernet Routing Switch
2.3.2 Configuration Example To support this topology, the following configuration steps need to be performed on the Ethernet Routing Switch 8600/8800, 8300, and Virtual Services Platform 9000 switching platforms: Mandatory Configuration Steps The Extreme Ethernet Routing Switch VLAN NLB mode must match the Microsoft server NLB mode. The ERS 8600 must run software release 4.1.1 or later, the ERS 8300 release 4.0 or later, and the VSP 9000 release 3.0 or later. Optional Configuration Steps None
2.3.2.2 Per VLAN NLB Verification Steps
1 The following command displays the status of the per VLAN NLB support, showing the status when NLB unicast support is enabled for VLAN 1300 and cluster hosts are connected to ports 1/47 and 1/48.
2.3.2.3 Global ARP Multicast MAC Flooding Configuration Steps In ERS 8600 / 8800 software releases prior to 3.7.15, Network Load Balancing can be deployed in multicast mode by enabling the global ARP multicast MAC flooding feature. For the VSP 9000, the global ARP multicast MAC flooding feature is supported in 3.
2.4 Switch Clustering Topologies Section 2.4 covers the various SMLT cluster topologies and the type of configuration required on the SMLT cluster switches. Section 2.5 covers the configuration details.
2.4.2 Switch Clustering – Topology 2 The following topology is supported when Ethernet Routing Switch 8600s are deployed as an SMLT core, Network Load Balancing cluster hosts are directly connected and distributed between the ERS 8600s, and the clients are connected to a Layer 2 Ethernet switch that is SMLT connected to the SMLT cluster. This topology supports Network Load Balancing clusters in unicast and multicast modes. Figure 2.4.2 – Switch Clustering Topology 2
2.4.3 Switch Clustering – Topology 3 The following topology is supported when Ethernet Routing Switch 8600s are deployed as an SMLT core and Network Load Balancing cluster hosts and clients are directly connected and distributed between the ERS 8600s in the SMLT cluster. This topology supports Network Load Balancing clusters in unicast and multicast modes. Figure 2.4.3 – Switch Clustering Topology 3
2.4.4 Switch Clustering – Topology 4 The following topology is supported when Ethernet Routing Switch 8600s are deployed as an SMLT core and Network Load Balancing cluster hosts and clients are connected to a Layer 2 Ethernet switch that is SMLT connected to the SMLT cluster core. This topology supports Network Load Balancing clusters in unicast and multicast modes. Figure 2.4.4 – Switch Clustering Topology 4
2.4.5 Switch Clustering – Topology 5 The following topology is supported when Ethernet Routing Switch 8600s are deployed as an SMLT core using RSMLT edge, where Network Load Balancing cluster hosts and clients are connected to Layer 2 Ethernet switches that are SMLT connected to the SMLT cluster core. This topology supports Network Load Balancing clusters in unicast and multicast modes. Figure 2.4.5 – Switch Clustering Topology 5
2.4.6 Switch Clustering Configuration Examples 2.4.6.1 Per VLAN NLB Unicast Configuration Steps The following commands enable or disable per VLAN NLB unicast, assuming VLAN 1300 is used to connect to the Microsoft NLB servers. For this example, both ERS-8600-1 and ERS-8600-2 are using the CLI.
ERS-8600-2# show vlan info nlb-mode
================================================================================
                                   Vlan Nlb
================================================================================
VLAN_ID  NLB_ADMIN_MODE  NLB_OPER_MODE  PORT_LIST    MLT_GROUPS
--------------------------------------------------------------------------------
1300     unicast         unicast        4/1-4/2,5/1

2 The following command displays the ARP entry for the NLB unicast IP address used in this example.
ERS-8600-2# show vlan info fdb-entry 1300
================================================================================
                                   Vlan Fdb
================================================================================
VLAN ID  STATUS   MAC ADDRESS        INTERFACE  MONITOR  QOS LEVEL  SMLT REMOTE
--------------------------------------------------------------------------------
1300     self     00:00:5e:00:01:82  Port-cpp   false    1          false
1300     self     00:01:81:29:1e:1c  Port-cpp   false    1          false
1300     learned  00:80:2d:be:
1300     learned  02:01:c0:a8:32:96  IST        false    1          true
1300     learned  02:03:c0:a8:32:96  Port-1/1   false    1          false

Note – If NLB servers are directly connected to the SMLT cluster switches, only the locally attached server MAC address will be learned via the local port, whereas the remote NLB server MAC will be learned via the IST.
2.4.6.2.1 Per VLAN NLB Multicast Verification Steps
1 The following command displays the status on the SMLT cluster when Network Load Balancing multicast support is enabled for VLAN 1300.
ERS-8600-2# show ip arp info 192.168.50.0
================================================================================
                          IP Arp - GlobalRouter
================================================================================
IP_ADDRESS      MAC_ADDRESS        VLAN  PORT  TYPE   TTL(10 Sec)
--------------------------------------------------------------------------------
192.168.50.3    00:e0:7b:bc:22:30  1300  -     LOCAL  2160
192.168.50.255  ff:ff:ff:ff:ff:ff  1300  -     LOCAL  2160
192.168.50.
Note – The Microsoft NLB servers' real IP addresses should be displayed. Because the ERS 8600 with the per VLAN NLB multicast parameter enabled only supports servers attached locally to the SMLT cluster, the Microsoft NLB server MAC addresses will be learned either via the local port or from the IST. 2.4.6.3 Static Multicast Entries Configuration Steps If NLB multicast is enabled on the Microsoft NLB servers, the multicast MAC address can be statically entered on both ERS 8600 cluster switches.
2.4.6.3.1 Static Multicast Entries Verification Steps
1 The following command displays the ARP entry for the NLB cluster IP address used in this example.
ERS-8600-2# show ip arp info 192.168.50.0
================================================================================
                          IP Arp - GlobalRouter
================================================================================
IP_ADDRESS      MAC_ADDRESS        VLAN  PORT  TYPE   TTL(10 Sec)
--------------------------------------------------------------------------------
192.168.50.3    00:01:81:29:1e:07  1300  -     LOCAL  2160
192.168.50.255  ff:ff:ff:ff:ff:ff  1300  -     LOCAL  2160
192.168.50.
Note – If NLB servers are directly connected to the SMLT cluster switches, only the locally attached server MAC address will be learned via the local port, whereas the remote NLB server MAC will be learned via the IST. 2.4.6.4 IP ARP Multicast MAC Flooding Configuration Steps For multicast mode Network Load Balancing, the IP ARP multicast MAC flooding parameter can be enabled on both SMLT cluster switches.
2.4.6.4.1 Global ARP Multicast MAC Flooding Verification Steps 1 Verify that the NLB mode configured is set to multicast.
ERS-8600-2# show ip arp info 192.168.50.0
================================================================================
                          IP Arp - GlobalRouter
================================================================================
IP_ADDRESS      MAC_ADDRESS        VLAN  PORT  TYPE   TTL(10 Sec)
--------------------------------------------------------------------------------
192.168.50.3    00:01:81:29:1e:07  1300  -     LOCAL  2160
192.168.50.255  ff:ff:ff:ff:ff:ff  1300  -     LOCAL  2160
192.168.50.
ERS-8600-2# show vlan info fdb-entry 1300
================================================================================
                                   Vlan Fdb
================================================================================
VLAN ID  STATUS   MAC ADDRESS        INTERFACE  MONITOR  QOS LEVEL  SMLT REMOTE
--------------------------------------------------------------------------------
1300     self     00:00:5e:00:01:82  Port-cpp   false    1          false
1300     self     00:01:81:29:1e:1c  Port-cpp   false    1          false
1300     learned  00:06:5b:79:
ERS-8600-2# show vlan info fdb-entry 1300
================================================================================
                                   Vlan Fdb
================================================================================
VLAN ID  STATUS   MAC ADDRESS        INTERFACE  MONITOR  QOS LEVEL  SMLT REMOTE
--------------------------------------------------------------------------------
1300     self     00:00:5e:00:01:82  Port-cpp   false    1          false
1300     self     00:01:81:29:1e:1c  Port-cpp   false    1          false
1300     learned  00:1b:25:e8:
3. Appendix 3.1 Creating a Network Load Balancing Cluster The following section demonstrates how to create a Network Load Balancing cluster using two Windows 2003 servers to provide highly available HTTP web services. The Windows 2003 servers used in the following examples were configured as follows:
• The Windows 2003 servers have been updated with Service Pack 1 and all current updates applied.
2 To create a new cluster, in the Microsoft Network Load Balancing Manager application click Cluster then New:
3 In the Cluster Parameters window, specify the cluster's virtual IP address, subnet mask, and optionally the Full Internet name that will be used to address this cluster. The Full Internet name is used only for reference. Specify the operational mode for the cluster, which can be set to unicast (default), multicast, or IGMP-multicast.
4 Click Next to skip adding a cluster IP address. This is only required if you need additional IP addresses to be load balanced, for example across multiple sites using different IP addresses.
5 The Port Rules window defines the traffic that the load balancing cluster will service as well as how traffic is distributed between hosts. The default port rule load balances all TCP and UDP traffic using ports 0 through 65535. Administrators may specify a single rule or multiple port rules if the application requires it, such as a web server that requires both HTTP and HTTPS.
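Conceptually, a port rule is a match on protocol and port range; only traffic covered by some rule is load balanced. The sketch below is illustrative only (real NLB rules also carry affinity and load-weight settings), with hypothetical names:

```python
# Illustrative sketch of port-rule matching for a web cluster that load
# balances HTTP and HTTPS. Real NLB port rules also include affinity and
# load-weight parameters not modeled here.
RULES = [
    {"start": 80, "end": 80, "proto": "TCP"},    # HTTP
    {"start": 443, "end": 443, "proto": "TCP"},  # HTTPS
]

def matching_rule(port: int, proto: str):
    """Return the first rule covering this port/protocol, else None."""
    for rule in RULES:
        if rule["start"] <= port <= rule["end"] and rule["proto"] == proto:
            return rule
    return None

assert matching_rule(80, "TCP") is not None   # HTTP is load balanced
assert matching_rule(8080, "TCP") is None     # not covered by these rules
```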
6 The default port rule has now been modified so that the Network Load Balancing cluster will load balance HTTP and HTTPS traffic. Click Next:
7 In the Connect window we add the first host to the cluster. For this example we have two Windows 2003 servers with the hostnames w3kserver1 and w3kserver2. In the Host field, type the hostname for the first server in the cluster and click Connect. Once connected, a list of interfaces will be displayed. Highlight the interface to which Network Load Balancing will be bound and click Next:
8 In the Host Parameters window, set the Priority for the host to 1. This value must be unique for each host in the cluster. Optionally modify the Default state for the cluster host if you do not want Network Load Balancing to start immediately. Click Finish:
9 To add the second host to the cluster, in the Network Load Balancing Manager highlight the domain name of the cluster, then right-click and select Add Host To Cluster:
10 In the Connect window, in the Host field type the hostname for the second server in the cluster and click Connect. Once connected, a list of interfaces will be displayed. Highlight the interface to which Network Load Balancing will be bound and click Next:
11 In the Host Parameters window, set the Priority for the host to 2. This value must be unique for each host in the cluster. Optionally modify the Default state for the cluster host if you do not want Network Load Balancing to start immediately. Click Finish:
12 The cluster is created, and once converged all operational hosts will be displayed in the Network Load Balancing Manager window in a green state. Additionally, details for all known clusters as well as log entries are displayed in this window: Note – The above configuration assumes you have DNS configured on both NLB servers with the appropriate server names. If DNS is not enabled, you will need to modify the hosts file C:\winnt\system32\drivers\etc\hosts and add the appropriate name of each server.
The following table provides a detailed overview of the port rule parameters available in the Add/Edit Port Rule window:

Parameter           Description
Cluster IP Address  Specifies which cluster IP addresses the port rule should cover.
All                 Specifies whether the port rule is a global port rule covering all cluster IP addresses associated with the particular Network Load Balancing cluster.
Port Range          Specifies the start and end of the port range for the selected port rule.
4. Software Baseline The following table provides the baseline software releases for each switching platform used to validate the topologies in this guide. If a prior version of software is being used, refer to the product release notes and product documentation for known issues or limitations with that specific software release. Use of older switching software is at your own risk.
5. Reference Documentation
Ethernet Routing Switch 8600 / 8800 Technical Configuration Guide for SMLT – https://extremeportal.force.com
Technical Configuration Guide for VRRP – https://extremeportal.force.com
Network Design Guidelines (per major release) – https://extremeportal.force.com
Configuring IP Routing Operations – https://extremeportal.force.com
Configuring VLANs, Spanning Tree, and Link Aggregation – https://extremeportal.force.