53-1003037-02 9 December, 2013 Multi-Service IronWare QoS and Traffic Management Configuration Guide Supporting Multi-Service IronWare R05.6.
Copyright © 2013 Brocade Communications Systems, Inc. All Rights Reserved. ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, ICX, MLX, MyBrocade, OpenScript, VCS, VDX, and Vyatta are registered trademarks, and HyperEdge, The Effortless Network, and The On-Demand Data Center are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of their respective owners.
Contents About This Document Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 Supported hardware and software . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Supported software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Document conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Text formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Chapter 2 Configuring Traffic Policing for the Brocade NetIron XMR and Brocade MLX series Traffic policing on the Brocade device. . . . . . . . . . . . . . . . . . . . . . . . 15 Applying traffic policing parameters directly to a port. . . . . . . . 16 Applying traffic policing parameters using a policy map. . . . . . 17 Configuration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Configuring traffic policing on Brocade devices . . . . . . . . . . . .
Configuring QoS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Configuring QoS procedures applicable to Ingress and Egress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Configuring a force priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Configuring extended-qos-mode . . . . . . . . . . . . . . . . . . . . . . . . . 49 Configuring port-level QoS commands on LAG ports . . . . . . . .
Configuring Ingress decode policy maps . . . . . . . . . . . . . . . . . . 87 Binding Ingress decode policy maps . . . . . . . . . . . . . . . . . . . . . 93 Configuring a force priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 Configuring Egress encode policy maps. . . . . . . . . . . . . . . . . . . 99 Binding an Egress encode EXP policy map . . . . . . . . . . . . . . .102 Enabling a port to use the DEI bit for Ingress and Egress processing. . . . . . . . . . . . . . . . . . . . . . .
Displaying TM statistics from the multicast queue . . . . . . . . .152 Showing collected aggregated TM VOQ statistics . . . . . . . . . .152 Clearing the TM VOQ statistics . . . . . . . . . . . . . . . . . . . . . . . . .153 Displaying TM VOQ depth summary . . . . . . . . . . . . . . . . . . . . .153 Displaying TM buffer utilization. . . . . . . . . . . . . . . . . . . . . . . . .154 Clearing TM VOQ depth summary. . . . . . . . . . . . . . . . . . . . . . .155 Clearing TM buffer utilization . . . . . . . .
About This Document Audience This document is designed for system administrators with a working knowledge of Layer 2 and Layer 3 switching and routing. If you are using a Brocade device, you should be familiar with the following protocols if applicable to your network – IP, RIP, OSPF, BGP, ISIS, IGMP, PIM, MPLS, and VRRP.
Supported hardware and software

The following hardware platforms are supported by this release of this guide:

TABLE 1 Supported devices

Brocade NetIron XMR Series: Brocade NetIron XMR 4000, Brocade NetIron XMR 8000, Brocade NetIron XMR 16000, Brocade NetIron XMR 32000
Brocade MLX Series: Brocade MLX-4, Brocade MLX-8, Brocade MLX-16, Brocade MLX-32, Brocade MLXe-4, Brocade MLXe-8, Brocade MLXe-16, Brocade MLXe-32
NetIron CES 2000 and NetIron CER 2000 Series: Brocade NetIron CES 2024C, Brocade NetIron CES 2024F, Bro
Document conventions This section describes text formatting conventions and important notice formats used in this document.
Notice to the reader This document may contain references to the trademarks of the following corporations. These trademarks are the properties of their respective companies and corporations. These references are made for informational purposes only.
Getting technical help or reporting errors To contact Technical Support, go to http://www.brocade.com/services-support/index.page for the latest e-mail and telephone contact information.
Chapter 1: Configuring Traffic Policing for the Brocade NetIron CES and Brocade NetIron CER

Traffic policing on Brocade NetIron CES and Brocade NetIron CER devices

Brocade NetIron CES and Brocade NetIron CER devices provide line-rate traffic policing in hardware on inbound and outbound ports. You can configure a device to use one of the following traffic policing modes: • Port-based – Limits the rate on an individual physical port to a specified rate.
Maximum burst

Maximum burst allows a higher-than-average rate for traffic that meets the rate limiting criteria. Traffic is allowed to pass through the port for a short period of time. The unused bandwidth can be accumulated up to a maximum equal to the maximum burst value. The maximum burst size is adjusted according to the configured average line rate.
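As a rough illustration only (this is not the device's hardware algorithm, and all names are hypothetical), the burst behavior described above can be modeled as a token bucket whose unused credit accumulates up to the maximum burst value:

```python
# Illustrative sketch: a token bucket in which unused bandwidth accumulates
# as credit, capped at the configured maximum burst.
class TokenBucket:
    def __init__(self, rate_bps, max_burst_bits):
        self.rate = rate_bps           # average line rate in bits/sec
        self.cap = max_burst_bits      # maximum accumulated credit, in bits
        self.credits = max_burst_bits  # start with a full bucket

    def refill(self, elapsed_sec):
        # Unused bandwidth accumulates, but never beyond the burst cap.
        self.credits = min(self.cap, self.credits + self.rate * elapsed_sec)

    def conforms(self, packet_bits):
        # A packet passes only if enough credit has accumulated.
        if packet_bits <= self.credits:
            self.credits -= packet_bits
            return True
        return False
```

With a 500 Mbps rate and a 750,000-bit burst cap, for example, a quiet port can absorb a short burst above 500 Mbps before packets stop conforming.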
The CIR bucket

The CIR rate limiting bucket is defined by two parameters: the CIR rate and the Committed Burst Size (CBS) rate. The CIR rate is the maximum number of bits a port is allowed to receive or send during a one-second interval. The rate of the traffic that matches the traffic policing policy cannot exceed the CIR rate.
1 Traffic policing on Brocade NetIron CES and Brocade NetIron CER devices Limitations In the Brocade NetIron CES and Brocade NetIron CER, UDP rate-limiting is applicable only in the following scenarios: • When sending 1% of 1G traffic with packet size of 64 bytes to the device for configured Burst-max value (up to 8000) • When sending 10% of 1G traffic with packet size of 64 bytes to the device for configured Burst-max value (up to 1500) • When sending 100% of 1G traffic with packet size of 64 bytes to
For drop precedence, the Brocade MLX Series and Brocade NetIron XMR have 4 levels, while the Brocade NetIron CER and Brocade NetIron CES have 3 levels. The Brocade NetIron CER and Brocade NetIron CES internally convert the 4 levels as follows: 0 -> 0, 1 -> 1, 2 -> 1, 3 -> 2. The excess-dscp parameter specifies that traffic whose bandwidth requirements exceed what is available in the CIR bucket is sent to the EIR bucket.
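The level conversion above is a fixed table; as a sketch:

```python
# The 4-level to 3-level drop-precedence conversion described above:
# 0 -> 0, 1 -> 1, 2 -> 1, 3 -> 2.
XMR_TO_CES_DROP_PRECEDENCE = {0: 0, 1: 1, 2: 1, 3: 2}

def convert_drop_precedence(xmr_level: int) -> int:
    # Map an XMR/MLX 4-level drop precedence to the CER/CES 3-level scale.
    return XMR_TO_CES_DROP_PRECEDENCE[xmr_level]
```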
1 Traffic policing on Brocade NetIron CES and Brocade NetIron CER devices These commands configure a traffic policing policy for outbound traffic on port 1/1. The policy limits the average rate of all outbound traffic to 500 Mbps with a maximum burst size of 750 bits. Configuring port-based traffic policing using a policy map To configure port based traffic policing policy through a policy map, enter commands such as the following.
Traffic policing on Brocade NetIron CES and Brocade NetIron CER devices 1 Brocade(config)#access-list 50 permit host 1.1.1.2 Brocade(config)#access-list 50 deny host 1.1.1.3 Brocade(config)#access-list 60 permit host 2.2.2.3 Brocade(config-if-1/1)# rate-limit input access-group 50 500000000 20480 Brocade(config-if-1/1)# rate-limit input access-group 60 100000000 24194240 These commands first configure access-list groups that contain the ACLs that will be used in the traffic policing policy.
1 Traffic policing on Brocade NetIron CES and Brocade NetIron CER devices You can configure the device to drop traffic that is denied by the ACL instead of forwarding the traffic, on an individual port basis. NOTE Once you configure an ACL-based rate limiting policy on a port, you cannot configure a regular (traffic filtering) ACL on the same port. To filter traffic, you must enable the strict ACL option.
TABLE 2 Rate limit counters parameters (continued)

This field... Displays...
Re-mark The number of packets for which priority has been remarked as a result of exceeding the bandwidth available in the CIR bucket for this rate limit policy.
Total Total traffic (in bytes) that has been carried on this interface for the defined rate limit policy since the device was started or the counter was reset.
Output such as the following is displayed.

policy-map pmap1
 cir 106656 bps cbs 24000 bytes
 eir 53328 bps ebs 20000 bytes
 excess-priority 2 excess-dscp 43
policy-map pmap2
 cir 106656 bps cbs 24000 bytes
 eir 53328 bps ebs 30000 bytes
 excess-priority 1 excess-dscp 30

This display shows the following information.

TABLE 4 Rate limit policy map parameters
This field... Displays...
Rate limiting BUM packets 1 • When the port reaches the configured rate limit, the device will check if the shutdown option is enabled for the port. • The device counts the rate of BUM packets received on the port, for every port configured for shutdown. • A single drop counter moves over each port to check for the shutdown option in a round robin fashion. • If the drop counter finds the BUM packets dropped on a port, the port will be shut down until the port is explicitly enabled.
The following syslog message is displayed with the port shutdown information in the output of the show log command.

Brocade# show log
Nov 4 23:07:52:I:BUM rate-limit is shutting down port 0 on PPCR 0
Nov 4 23:07:52:I:System: Interface ethernet 1/1, state down - shut down by rate-limiting broadcast, unknown unicast & multicast

To re-enable the shut-down port, delete the previous rate limit by entering the clear rate-limit bum interface slot/port command.
Rate limiting BUM packets 1 Table 6 describes the output parameters of the show rate-limit command. TABLE 6 Output parameters of the show rate-limit command Field Description interface Shows the interface for which the BUM rate limit policy information is displayed. rate-limit input broadcast unknown-unicast multicast Shows the average rate configured for the inbound broadcast, unknown-unicast, and multicast traffic on the interface.
Clearing accounting information for the BUM rate limit

To clear the accounting information for the BUM rate limit, enter the following command.
Chapter 2: Configuring Traffic Policing for the Brocade NetIron XMR and Brocade MLX series

Traffic policing on the Brocade device

The Brocade device provides line-rate traffic policing in hardware on inbound ports and outbound ports. You can configure a Brocade device to use one of the following modes of traffic policing policies: • Port-based – Limits the rate on an individual physical port to a specified rate.
on source and destination TCP or UDP addresses and protocol information. These policies can be applied to inbound and outbound traffic. Up to 990 Port-and-ACL-based policies can be configured for a port under normal conditions, or 3960 policies if priority-based traffic policing is disabled as described in “Configuring for no priority-based traffic policing” on page 25.
Traffic policing on the Brocade device 2 Maximum burst Maximum burst provides a higher than average rate to traffic that meet the rate limiting criteria. Traffic will be allowed to pass through the port for a short period of time. The unused bandwidth can be accumulated up to a maximum of “maximum burst” value expressed in bits. Credits and credit total Each rate limiting policy is assigned a class.
2 Traffic policing on the Brocade device The CIR bucket The CIR rate limiting bucket is defined by two separate parameters: the CIR rate, and the Committed Burst Size (CBS) rate. The CIR rate is the maximum number of bits a port is allowed to receive or send during a one-second interval. The rate of the traffic that matches the traffic policing policy can not exceed the CIR rate.
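The division of traffic between the CIR and EIR buckets described in this chapter can be sketched as follows. This is a simplification, not the hardware algorithm: credit refresh and priority remarking are omitted, and all names are hypothetical.

```python
# Hypothetical two-bucket (CIR/EIR) policing decision: a packet first draws
# on committed (CIR) credit; if that is exhausted it draws on excess (EIR)
# credit and becomes eligible for remarking; otherwise it is dropped.
def police(packet_bits, cir_credits, eir_credits):
    """Return (action, cir_credits, eir_credits) after policing one packet."""
    if packet_bits <= cir_credits:
        return 'conform', cir_credits - packet_bits, eir_credits
    if packet_bits <= eir_credits:
        return 'excess', cir_credits, eir_credits - packet_bits
    return 'drop', cir_credits, eir_credits
```

A packet classified 'excess' is the case the excess-priority and excess-dscp parameters act on; a 'drop' result corresponds to traffic exceeding both buckets.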
Traffic policing on the Brocade device 2 Configuring traffic policing on Brocade devices The following sections show examples of how to configure each traffic policing policy type. Configuring a policy map To configure a policy map, enter a command such as the following.
2 Traffic policing on the Brocade device Configuring port-based traffic policing for inbound and outbound ports Port-based traffic policing limits the rate on an individual inbound or outbound physical port to a specified rate. To configure port-based traffic policing policy for outbound ports, enter commands such as the following at the interface level.
Traffic policing on the Brocade device 2 Configuring a port and priority-based traffic policing policy for inbound and outbound ports To configure port based traffic policing policy directly, enter a command such as the following. Brocade(config)# interface ethernet 1/1 Brocade(config-if-1/1)# rate-limit input priority q1 500000000 33553920 The commands configure a traffic policing policy for inbound traffic on port 1/1.
2 Traffic policing on the Brocade device These commands configure two traffic policing policies that limit the average rate of all inbound traffic on port 1/1 with VLAN tag 10 and all outbound traffic on port 1/2 VLAN tag 20. The first policy limits packets with VLAN tag 10 to an average rate of 500000000 bits per second (bps) with a maximum burst size of 33553920 bits on port 1/1. The second policy limits packets with VLAN tag 20 to values defined in policy map map1.
Traffic policing on the Brocade device 2 3. Create a policy for the VLAN group and apply it to the interface you want. Enter commands such as the following. Brocade(config)# interface ethernet 1/1 Brocade(config-if-1/1)# rate-limit input group 10 500000000 33553920 These commands configure a traffic policing policy that limits the average rate of all inbound traffic on port 1/1 from vlan group VlanGroupA.
2 Traffic policing on the Brocade device Brocade(config)#access-list 50 permit host 1.1.1.2 Brocade(config)#access-list 50 deny host 1.1.1.3 Brocade(config)#access-list 60 permit host 2.2.2.3 Brocade(config-if-1/1)# rate-limit input access-group 50 priority q1 500000000 33553920 Brocade(config-if-1/1)# rate-limit input access-group 60 100000000 268431230 These commands first configure access-list groups that contain the ACLs that will be used in the traffic policing policy.
Traffic policing on the Brocade device 2 Using ACLs for filtering in addition to rate limiting When you use the ACL-based mode, the permit and deny conditions in an ACL you use in a rate limiting policy work as follows: • Permit – The traffic is rate limited according to the other parameters in the rate limiting policy. • Deny – The traffic is forwarded instead of dropped, by default.
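The permit/deny dispositions just described (including the strict ACL option, which makes unmatched or denied traffic drop instead of forward) can be summarized in a small sketch; function and value names here are hypothetical:

```python
# Disposition of a packet under ACL-based rate limiting:
#  - 'permit' match -> packet is rate limited by the policy
#  - 'deny' match or no match -> forwarded un-policed by default,
#    or dropped when the strict ACL option is enabled
def handle(match, strict=False):
    if match == 'permit':
        return 'rate-limit'
    if strict:
        return 'drop'
    return 'forward'
```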
Configuring rate limiting for Copied-CPU-bound traffic

You can limit the rate of Copied-CPU-bound packets from applications such as sFlow, ACL logging, RPF logging, and source MAC address learning (with known destination address).
Traffic policing on the Brocade device 2 Displaying accounting information for rate limit usage To display accounting information for rate limit usage, enter the following command. Brocade# show rate-limit counters Syntax: show rate-limit counters [interface slot/port] The interface slot/port option allows you to get accounting information for a specified interface only. Output such as the following will display.
Resetting the rate limit counters

You can reset all of the rate limit counters using the following command.

Brocade# clear rate-limit counters

Syntax: clear rate-limit counters [interface]

The interface variable specifies an interface that you want to clear the rate limit counters for. If you do not specify an interface, all rate limit counters on the device will be reset.
IPv6 ACL-based rate limiting 2 Syntax: show policy-map [map-name] The map-name variable limits the display of policy map configuration information to the map specified. If this variable is not used, configuration information will be displayed for all policy maps configured on the device. Output such as the following will display.
2 IPv6 ACL-based rate limiting • Multiple IPv6 ACL based rate-limiting policies can be applied to a single port. • Once a matching ACL clause is hit, subsequent rules and subsequent rate-limiting bindings on the interface are not evaluated. • An undefined ACL can be used in a rate-limiting configuration. • When “force-delete-bound-acl” is enabled, an ACL can be deleted even if in use by a rate-limiting policy.
Create Policy-map

The following example configures the traffic policing policy-map map5 to limit the CIR rate to 1000000, the CBS rate to 2000000, the EIR rate to 1000000, and the EBS to 2000000.
2 IPv6 ACL-based rate limiting Configure VRF specific rate-limit IPv6 access-list based rate-limiting can be configured for a specific VRF. Rate-limiting is applied to the inbound traffic for the interfaces which are part of the configured VRF. The following command configures rate-limiting for inbound traffic on the VRF “data” using the access-list “fdry”.
IPv6 ACL-based rate limiting 2 Brocade(config-if-1/1)# rate-limit output access-group name ipv6 fdry priority q0 policy-map map5 Configure strict-ACL rate-limiting on the interface By default, rate-limiting is applied to traffic that matches a permit clause. If the traffic does not match any clause or if the traffic matches a deny clause, it is forwarded normally (neither dropped nor rate-limited).
Configuring rate-limit using non-existing access-list

Rate-limiting can be configured using a non-existing or empty IPv6 access-list. When the access-list is created, or when filters are added to the access-list and an explicit rebind is performed, the rate-limit parameters will be programmed on the interface.

Output of show commands to verify configuration

Following is the output of the show commands that confirm the configuration and functionality.
Layer 2 ACL-based rate limiting 2 Clearing rate-limit counters You can clear rate-limit counters using the following command: Brocade# clear rate-limit counters ipv6-subnet Syntax: clear rate-limit counters ipv6-subnet Layer 2 ACL-based rate limiting Layer 2 ACL-based rate limiting enables devices to limit the rate of incoming traffic in hardware, without CPU intervention. Rate limiting in hardware enables the device to manage bandwidth at line-rate speed.
2 Layer 2 ACL-based rate limiting Editing a Layer 2 ACL Table You can make changes to the Layer 2 ACL table definitions without unbinding and rebinding the rate limit policy. For example, you can add a new clause to the ACL table, delete a clause from the table, or delete the ACL table that is used by a rate limit policy.
Rate limiting protocol traffic using Layer 2 inbound ACLs

Using interface-level Layer 2 inbound ACLs, you can rate limit the following types of protocol traffic by explicitly configuring a filter to match the traffic:
• STP/RSTP/BPDU
• MRP
• VSRP
• LACP
• GARP
• UDLP

To rate-limit all such control traffic, enter commands such as the following:
Brocade(config)#access-list etype any
Brocade(config)#access-list etype any
Brocade(config)#access-list etype any
Brocade(config
To bind an ACL that rate limits broadcast traffic and forwards all other traffic without rate limiting, enter commands such as the following:

Brocade(config)#int eth 14/1
Brocade(config-if-e10000-14/1)#rate-limit in access-gr 411 8144 100

Rate limiting ARP packets

You can limit the rate of ARP traffic that requires CPU processing on Brocade devices, such as ARP request traffic and ARP responses addressed to the device.
TABLE 13 Rate limit ARP display parameters

Parameter Description
Re-mark The ARP traffic in bytes whose priority has been remarked as a result of exceeding the bandwidth available in the CIR bucket for the ARP rate limit policy since the device was started up or the counter was reset.
Total The total ARP traffic in bytes that has been subjected to the ARP rate limit policy since the device was started up or the counter was reset.
Chapter 3: Configuring Quality of Service (QoS) for the Brocade NetIron CES and Brocade NetIron CER Series

Quality of Service (QoS)

The Quality of Service (QoS) features offer many options for the Brocade NetIron CES and Brocade NetIron CER devices. Quality of Service (QoS) provides preferential treatment to specific traffic, possibly at the expense of other traffic.
Traffic types

Data – Data packets can be either network-to-network traffic or traffic from the CPU. Network-to-network traffic is considered data traffic. QoS parameters can be assigned and modified for data traffic.

Control – Packets to and from the CPU are considered control traffic. The QoS parameters for this traffic are preassigned and not configurable.

Setting packet header QoS fields

The device supports setting or modifying the packet header IEEE 802.
3. Force the priority and drop precedence value based on the value configured for the physical port.
4. Force the priority value based on an ACL look-up. This is used for setting a specific priority for an L2, L3, or L4 traffic flow.

NOTE
The DEI value will remain 0 regardless of the PCP or DSCP value.
3 Forcing the priority of a packet Forcing the priority of a packet Once a packet’s ingress priority has been mapped, the values that will be used for processing on the device are determined by either forcing or merging. There are a variety of commands to “force” the priority of a packet based on the following criteria: • Forced to a priority configured for a specific ingress port. The priority force command is configured at the interface where you want it to be applied.
Forcing the priority of a packet 3 Custom decode support User defined decode maps are supported on the Brocade NetIron CES and Brocade NetIron CER. The custom decode maps have the following implication for QoS handling: • Per port custom decode maps are not supported. Only a single global QoS map is supported. • A number of custom decode maps can be defined in the Multi-Service IronWare, but only one can be active at any time in the hardware.
3 Forcing the drop precedence of a packet Forcing the drop precedence of a packet Once a packet’s ingress drop precedence has been mapped, the values that will be used for processing on the device are determined by either forcing or merging. There are a variety of commands to “force” the drop precedence of a packet based on the following criteria: • Forced to a drop precedence configured for a specific ingress port.
Configuring QoS • • • • 3 Force to the values configured on a port Force to the value in the DSCP bits Force to the value in the PCP bits Force to a value specified within an ACL Configuring a force priority for a port You can configure an ingress port with a priority to apply to packets that arrive on it using the priority command. To configure an ingress port with a priority, use the priority command as shown in the following.
3 Configuring QoS Configuring force priority to the DSCP value You can configure an ingress port (using the qos dscp force command) to force the configured DSCP value when determining the priority relative to other priority values of incoming packets. To configure an ingress port to force the DSCP value, use the qos dscp force command as shown in the following.
Configuring QoS 3 Configuring extended-qos-mode The extended-qos-mode command should only be turned on when deploying CES/CER as MPLS PE devices, if preserving passenger DSCP is required, when terminating VPLS/VLL traffic at the egress end point. NOTE You must write this command to memory and perform a system reload for this command to take effect. NOTE This command will reduce the hardware table size by half.
2. If you have already formed a LAG with the same configuration, you can change the configuration by making changes to the LAG’s primary port.
3. If the LAG configuration is deleted, each of the ports in the LAG (primary and secondary) will inherit the QoS configuration of the primary port.
Configuring QoS 3 Configuring port-level QoS commands on CPU ports The control packets destined to the CPU are assigned fixed priorities. The data and control packets that are processed by the CPU are prioritized, scheduled, and rate-shaped so that the higher priority control packets are handled before any lower priority control and data packets. The enhanced control packet prioritization and scheduling scheme ensures the proper transmission or reception of time-sensitive control and protocol packets.
• “Configuring strict priority-based traffic scheduling”
• “Configuring WRR weight-based traffic scheduling”
• “Configuring mixed strict priority- and weight-based traffic scheduling”

Configuring strict priority-based traffic scheduling

To configure strict priority-based scheduling, enter the following command.

Brocade(config-cpu-port)# scheduler strict

Syntax: [no] scheduler strict

Strict priority-based scheduling is the default traffic scheduling scheme.
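As a behavioral sketch only (the hardware scheduler itself is not shown in this guide, and all names here are hypothetical), strict service always drains the highest-priority non-empty queue, while the mixed scheme described next serves queues 5 through 7 strictly before sharing slots among queues 0 through 4:

```python
# Hypothetical sketch of mixed strict- plus weight-based queue service.
# queue_backlog maps queue number (0-7) to packets waiting; wrr_order is a
# precomputed slot list for the weighted queues (heavier queues appear in
# more slots), e.g. [4, 3, 4, 2, 1, 0].
def pick_next(queue_backlog, wrr_order):
    for q in (7, 6, 5):                   # strict-priority queues drain first
        if queue_backlog.get(q, 0) > 0:
            return q
    for q in wrr_order:                   # then the weighted queues, by slot
        if queue_backlog.get(q, 0) > 0:
            return q
    return None                           # nothing to send
```

Under pure strict scheduling the same function would simply scan queues 7 down to 0, which is why low-priority queues can starve when high-priority traffic is continuous.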
Configuring mixed strict priority- and weight-based traffic scheduling

When configuring the mixed strict priority- and weight-based scheduling scheme, queue 5 to queue 7 are allocated to strict priority-based scheduling and queue 0 to queue 4 are allocated to weight-based scheduling. To configure mixed strict priority- and weight-based scheduling, enter the following command.
3 Displaying QoS information Displaying QoS information You can display the following QoS information as described: • QoS Configuration Information – Using the show qos-map decode-map and show qos-map encode-map commands, you can display the priority and drop-precedence values mapped between values internal to the device and values that are received at the device or marked on packets leaving the device. This is described in “Displaying QoS Configuration information” on page 54.
Scheduling traffic for forwarding 3 Configuring traffic scheduling Traffic scheduling can be configured on a per-port basis. It affects the outgoing traffic on the configured port when bandwidth congestion occurs on that port. The following sections describe how to configure each of the traffic scheduling schemes: • “Configuring strict priority-based traffic scheduling” This option is the default traffic scheduling method if traffic scheduling is not configured on a port.
• Queue 0 = 10, Queue 1 = 15, Queue 2 = 20, Queue 3 = 25, Queue 4 = 30, Queue 5 = 35, Queue 6 = 40, and Queue 7 = 45.

To determine the weight of q3:

Weight of q3 = 25 / (10 + 15 + 20 + 25 + 30 + 35 + 40 + 45)

The weight of q3 is 11.4%. Consequently, q3 will get 11.4% of the port’s total bandwidth. The values of the remaining queues are calculated to be the following: q7 = 20.5%, q6 = 18.2%, q5 = 15.9%, q4 = 13.6%, q3 = 11.4%, q2 = 9.1%.
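The calculation above can be checked with a short sketch, using the queue weights from the example (each queue's share is its weight divided by the sum of all configured weights):

```python
# WRR bandwidth shares: weight of queue i divided by the total weight.
weights = [10, 15, 20, 25, 30, 35, 40, 45]   # q0 .. q7, as in the example
total = sum(weights)                          # 220

def share_percent(queue_index):
    # Percentage of port bandwidth the queue receives, to one decimal place.
    return round(100 * weights[queue_index] / total, 1)
```

For instance, q3's share is 25/220, which is the 11.4% quoted above.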
Egress port and priority based rate shaping 3 Configuring mixed strict priority and weight-based scheduling When configuring the mixed strict priority and weight-based scheduling option, queues 5 - 7 are allocated to strict priority-based scheduling and queues 0 - 4 are allocated to weight-based scheduling. To configure mixed priority and weight-based scheduling use a command such as the following.
3 Egress port and priority based rate shaping Configuring port-based rate shaping When setting rate shaping for a port, you can limit the amount of bandwidth available on a port within the limits of the port’s rated capacity. NOTE The egress rate shaping on a port-based and priority based rate shaper is configured in increments of 1Kbps These limits provide a minimum and maximum rate that the port can be set to. They also provide the increments at which the port capacity can be set.
Example of configuring Prioritized Voice over Data

When configuring Prioritized Voice over Data, use the strict priority method. In the example below, DSCP 46 (Voice) is assigned to the high-priority queue on the ingress port and leaves the egress port with the DSCP value in the packet unchanged.
DSCP 24 to priority … DSCP 25 to priority … (decode-map output continues, one line per DSCP value through DSCP 46)
To change the profile from strict to weighted WRR0, use the qos scheduler WRR0 command on the interface (in this case, on the outgoing interface, i.e., 1/13). Remember, on the Brocade NetIron CES and Brocade NetIron CER the QoS scheduler is on the egress interface only.
3 Egress port and priority based rate shaping STP configured to ON, Priority is level0, flow control enabled mirror disabled, monitor disabled Not member of any active trunks Not member of any configured trunks No port name MTU 1544 bytes, encapsulation ethernet 300 second input rate: 754303848 bits/sec, 1473249 packets/sec, 89.57% utilization 300 second output rate: 754304283 bits/sec, 1473250 packets/sec, 89.
Egress port and priority based rate shaping 3 Configuring multicast flow control Flow controls are available from egress to Ingress, and from fabric to Ingress. At the egress of each Traffic Manager, there are pre-determined thresholds for consumed resources and available resources and separate thresholds for guaranteed multicast or broadcast traffic and best-effort multicast or broadcast traffic.
3 Egress port and priority based rate shaping NOTE When a qos multicast shaper command is configured for a port, the configuration command is placed in the running config for all ports that belong to the same Traffic Manager. In the example, that would mean that the qos multicast shaper best-effort rate 10000 command would appear in the interface configuration section for ports 1 and 2 on the Interface Module.
Egress port and priority based rate shaping 3 Ingress traffic shaping per multicast stream Internet Protocol Television (IPTV) multicast streams on an individual inbound physical port are rate shaped to a specified rate and are prioritized over the broadcast or unknown-unicast traffic. Each IPTV multicast stream is queued separately and is scheduled independently to the outbound ports. The IPTV rate shaping reduces burstiness in the source stream.
3 Egress port and priority based rate shaping FIGURE 1 IPTV Bandwidth Requirements Configuring multicast traffic policy maps You can define profiles to match the IPTV multicast traffic of the individual ingress streams. To configure a policy map for the multicast streams, enter the following command.
Egress port and priority based rate shaping 3 Binding multicast traffic policy maps NOTE A profile must exist in the configuration before it can be used for binding. A standard or an extended ACL is used to define the IPTV streams that can be bound to a defined profile. The profile binding associates the properties of the profile to all the IPTV streams identified by the ACL. Binding of multicast streams can be done for Layer 3 multicast routing and Layer 2 multicast snooping.
3 Egress port and priority based rate shaping The ip multicast policy-map specifies the ACL binding for IPv4 multicast snooping. Syntax: [no] ipv6 multicast policy-map profile_name acl_id | acl_name The ipv6 multicast policy-map specifies the ACL binding for IPv6 multicast snooping. The no form of the command removes the profile binding with the ACL on the VLAN or VPLS. In the following example, binding for Layer 2 multicast snooping is applied to VPLS instance V1.
Tuning multicast parameters

NOTE
The following commands are specific to the Brocade NetIron CER and Brocade NetIron CES devices:
• Multicast to Unicast descriptor ratio
• Multicast weight
• Multicast descriptor limit
• Multicast Traffic class mapping to Fabric Traffic class

Multicast to Unicast descriptor ratio
The qos-multicast mcast-unicast-desc-ratio command defines the number of Multicast Descriptor duplications per Unicast Descriptor.
Multicast descriptor limit
The Brocade NetIron CES and Brocade NetIron CER require a descriptor to place a packet in a queue. If the descriptor limit is reached due to traffic conditions, subsequent packets are dropped until descriptors become available again. This parameter specifies the maximum number of descriptors that can be allocated for multicast packets.
Chapter Configuring Quality of Service for the Brocade NetIron XMR and Brocade MLX series 4 This chapter describes how QoS is implemented and configured in the Brocade device. The chapter contains the following sections: • Ingress Traffic Processing through a device – This section describes the QoS operation on ingress traffic of a Brocade device. Refer to “Ingress Traffic processing through a device” on page 72.
4 Ingress Traffic processing through a device • Configuring Packet Drop Priority using WRED – This section describes how to configure Weighted Random Early Detection (WRED). Refer to “Configuring packet drop priority using WRED” on page 120. • Scheduling Traffic for Forwarding – The Brocade supports six different schemes for prioritizing traffic for forwarding in a congested network. This section describes each of these schemes and how to configure them.
7. Merge or force the priority value based on an ACL look-up. This is used for setting a specific priority for an L2, L3, or L4 traffic flow. This process is described in Figure 2.
• To assist the device in the decoding process described in Stage 1, decode-map tables are defined.

Stage 2: Determine whether a priority value should be forced or merged
• If a packet's EtherType matches 0x8100 or the port's EtherType, derive a priority value and drop precedence by decoding the PCP value.
• If the qos pcp force command is configured on the port, the priority and drop precedence values are set to the value read from the PCP bits.
• Forced to a priority configured for a specific VLAN. The priority force command is configured at the VLAN where you want it to be applied.
• Forced to a priority that is obtained from the DSCP priority bits. The qos dscp force command is configured at the interface where you want it to be applied.
• Forced to a priority that is obtained from the EXP priority bits.
• Forced to a drop precedence that is based on an ACL match. The drop-precedence-force keyword can be used within an ACL to apply a priority to specified traffic.

If multiple commands containing the force keyword are specified, the command with the highest precedence takes effect, as determined by the following order.
1. ACL match
2. Physical port's drop precedence value
3. DSCP value in an incoming IPv4 or IPv6 packet
4. EXP value in an incoming MPLS packet
5.
Backward compatibility with pre-03.8.00
A number of the commands used in prior releases for QoS configuration have been deprecated, and the functions performed by them have been taken over by new commands. The qos-tos-trust and qos-tos mark commands are still operative, although their use is discouraged. Additionally, the qos-tos map dscp-priority commands that are in a current configuration are converted to new commands during a software upgrade.
• The primary use of these commands is for packet remarking (without changing the internal priority of the packet, if desired):
  - qos-tos trust indicates the priority to be trusted on the Ingress interface (cos, dscp, or ip-prec).
  - qos-tos mark indicates which priority is to be marked on the Egress interface (cos or dscp).
• qos-tos trust and qos-tos mark commands are both applied at the Ingress interface (physical or virtual).
qos-mapping
dscp decode-map USER_DSCP_MAP
  dscp-value 32 to priority 0
  dscp-value 0 to priority 1
  dscp-value 2 3 to priority 1
  dscp-value 4 to priority 1
  dscp-value 24 to priority 2
  dscp-value 48 to priority 3
  dscp-value 16 to priority 4
  dscp-value 8 to priority 5
  dscp-value 56 to priority 6
  dscp-value 40 to priority 7
qos dscp decode-policy USER_DSCP_MAP

NOTE
If the port-priority command was not configured in the pre-converted configuration, the qos dscp encode-policy USER_DSCP_MA
TABLE 18    PCP encode table

In the 7P1D, 6P2D, and 5P3D formats, merged internal priorities share a pair of PCP codepoints; the first value listed is used when the packet is not drop eligible and the second when it is drop eligible.

Priority    8P0D (default)    7P1D     6P2D     5P3D
7           7                 7        7        7
6           6                 6        6        6
5           5                 5 / 4    5 / 4    5 / 4
4           4                 5 / 4    5 / 4    5 / 4
3           3                 3        3 / 2    3 / 2
2           2                 2        3 / 2    3 / 2
1           1                 1        1        1 / 0
0           0                 0        0        1 / 0

Table 19 lists the default PCP Decode mappings.
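The Table 18 mappings can be modeled in a few lines. The following Python sketch is illustrative only (it is not device code); it assumes a packet with non-zero drop precedence is treated as drop eligible, and it returns the PCP codepoint an egress frame would carry for a given internal priority and encoding format.

```python
# Illustrative model of the default PCP encode mappings (Table 18).
# For each format, map internal priority -> (PCP when not drop
# eligible, PCP when drop eligible). In the merged formats, the lower
# PCP of a pair marks the frame as drop eligible.
PCP_ENCODE = {
    "8P0D": {p: (p, p) for p in range(8)},
    "7P1D": {7: (7, 7), 6: (6, 6), 5: (5, 4), 4: (5, 4),
             3: (3, 3), 2: (2, 2), 1: (1, 1), 0: (0, 0)},
    "6P2D": {7: (7, 7), 6: (6, 6), 5: (5, 4), 4: (5, 4),
             3: (3, 2), 2: (3, 2), 1: (1, 1), 0: (0, 0)},
    "5P3D": {7: (7, 7), 6: (6, 6), 5: (5, 4), 4: (5, 4),
             3: (3, 2), 2: (3, 2), 1: (1, 0), 0: (1, 0)},
}

def encode_pcp(priority, drop_eligible, fmt="8P0D"):
    """Return the PCP codepoint marked in the outgoing frame."""
    not_de, de = PCP_ENCODE[fmt][priority]
    return de if drop_eligible else not_de
```

For example, in 7P1D format, internal priorities 4 and 5 both encode to PCP 5, or to PCP 4 when the packet is drop eligible.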
TABLE 21    Default DSCP decode table

DSCP decimal (binary)   Priority decimal (binary)   Drop-precedence decimal (binary)   DSCP decimal (binary)   Priority decimal (binary)   Drop-precedence decimal (binary)
0 (000000)              0 (000)                     0 (00)                             16 (010000)             2 (010)                     0 (00)
1 (000001)              0 (000)                     0 (00)                             17 (010001)             2 (010)                     0 (00)
2 (000010)              0 (000)                     1 (01)                             18 (010010)             2 (010)                     1 (01)
3 (000011)              0 (000)                     1 (01)                             19 (010011)             2 (010)                     1 (01)
4 (000100)              0 (000)                     2 (10)                             20 (010100)             2 (010)
TABLE 22    Default DSCP decode table (cont.)

DSCP decimal (binary)   Priority decimal (binary)   Drop-precedence decimal (binary)   DSCP decimal (binary)   Priority decimal (binary)   Drop-precedence decimal (binary)
46 (101110)             5 (101)                     3 (11)                             62 (111110)             7 (111)                     3 (11)
47 (101111)             5 (101)                     3 (11)                             63 (111111)             7 (111)                     3 (11)

Table 23 lists the default EXP Encode mappings. Note that software-forwarded VPLS packets do not use the EXP encode table.
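The default DSCP decode rows shown in Tables 21 and 22 follow a simple bit pattern: the internal priority is the upper three bits of the 6-bit DSCP, and the drop precedence tracks the next two bits. The sketch below is an observation derived from the table rows shown here, not a statement of the hardware algorithm.

```python
# Illustrative model of the default DSCP decode mapping: priority is
# the upper 3 bits of the DSCP; drop precedence appears to follow
# bits 2..1 of the DSCP (an inference from the table rows).
def dscp_decode(dscp):
    """Return (priority 0-7, drop precedence 0-3) for a DSCP 0-63."""
    priority = dscp >> 3                    # upper 3 bits
    drop_precedence = (dscp & 0b111) >> 1   # bits 2..1
    return priority, drop_precedence
```

For example, DSCP 46 (101110) decodes to priority 5 with drop precedence 3, matching the table row above.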
TABLE 24    Default EXP encode table

Priority decimal (binary)   Drop-precedence decimal (binary)   EXP value   Priority decimal (binary)   Drop-precedence decimal (binary)   EXP value
1 (001)                     1 (01)                             1           5 (101)                     1 (01)                             5
1 (001)                     2 (10)                             1           5 (101)                     2 (10)                             5
1 (001)                     3 (11)                             1           5 (101)                     3 (11)                             5
2 (010)                     0 (00)                             2           6 (110)                     0 (00)                             6
2 (010)                     1 (01)                             2           6 (110)                     1 (01)                             6
2 (010)                     2 (10)                             2           6 (110)                     2 (10)                             6
2 (010)                     3 (11)                             2           6 (110)                     3 (11)                             6
3 (011)                     0 (00)
TABLE 26    Default prioritized protocol table

Protocol    Packets
IPv4/L2     ARP, STP/RSTP/BPDU, MRP, VSRP, LACP, GARP, UDLD, IGMP, OSPF / OSPF over GRE, BGP / BGP over GRE, RIP, IS-IS, ES-IS, VRRP, VRRPE, PIM / PIM over GRE, DVMRP, MSDP / MSDP over GRE, RSVP, LDP basic, LDP extended, BOOTP/DHCP, IPv4 Router Alert, ISIS over GRE or GRE Keep Alive Packets, BFD (Bidirectional Forwarding Detection)
IPv6        OSPF / OSPF in 6to4, BGP / BGP in 6to4, RIPNG, MLD, ND6 / ND6 in 6to4, VRRP
TABLE 26    Default prioritized protocol table (cont.)

Protocol    Packets
IPv6        VRRPE, PIM / PIM in 6to4, BFD (Bidirectional Forwarding Detection)

Enhanced control packet prioritization
The Traffic Manager (TM) allows prioritization and scheduling of packets destined for the CPU to guarantee optimal control packet processing and to reduce protocol flapping. The TM achieves physical separation of CPU-bound data and control packets.
4 Configuring QoS TABLE 27 Network processor prioritized protocol packets (Continued) Priority categorization Protocols P1 New unassigned protocols. P0 Existing unassigned protocols: GARP, L2-Trace. Configuring QoS The QoS configuration process involves separate procedures for Ingress and Egress QoS Processing as described in the following major sections.
Configuring QoS 4 • Enabling a Port to Use the DEI bit – You can configure the device to use the DEI bit when computing the drop precedence value for an incoming packet or encoding the DEI bit for transmitted frame as described in “Enabling a port to use the DEI bit for Ingress and Egress processing” on page 107. • Specifying the Trust Level and Enabling Marking – If you want to use the qos-tos trust and qos-tos mark commands from pre-03.8.
4 Configuring QoS NOTE The name “default-map” cannot be used because it is reserved for standard mappings as described in “Default QoS mappings” on page 79. Configuring an Ingress decode DSCP policy map Once you have named an Ingress Decode DSCP Policy Map using the dscp decode-map command, you can set the values of the named Ingress Decode DSCP Policy Map.
Configuring QoS 4 After this command is executed, the priority and drop-precedence values for dscp-value 40 will be returned to their default values as described in the default map tables that are defined in “Default QoS mappings” on page 79. 2. You can negate the drop-precedence value (returning it to its default value) without changing the currently configured priority value. This is done by using the [no] option with the original command that includes both the priority and drop-precedence values.
4 Configuring QoS Configuring an Ingress decode PCP policy map Once you have named an Ingress PCP Decode Policy Map using the pcp decode-map command, you can set the values of the named policy map. Setting the values in a policy map involves specifying the value of the PCP bits of an incoming packet and setting them to correspond to a value of 0 to 7 of the device’s internal priority. Optionally, you can set a drop precedence value of 0 to 3 in addition to the internal priority value.
Configuring QoS 4 For example: the following command has been used to set the priority map to assign an internal priority of “4” and a drop precedence of “2” to Ingress packets that have a PCP value of “6”. Brocade(config-qos-mapping-pcp-decode)# pcp-value 6 to priority 4 drop-precedence 2 To set the drop-precedence value back to the default value, use the [no] option with the previous command, as shown in the following.
4 Configuring QoS Brocade(config)# qos-mapping Brocade(config-qos-mapping)# exp decode-map Customer1 Brocade(config-qos-mapping-exp-decode)# exp-value 7 to priority 5 drop-precedence 2 Syntax: [no] exp-value exp-value [exp-value ] to priority priority-value [drop-precedence dp-value] The exp-value variable specifies the value of the EXP bits within the packet header of the incoming packets.
Configuring QoS 4 Brocade(config-qos-mapping-exp-decode)# no exp-value 7 to priority 5 drop-precedence 2 After this command is executed, the priority value will remain at 5 and the drop-precedence value will be returned to the default drop-precedence value for exp-value 7, as described in the default map tables that are defined in “Default QoS mappings” on page 79.
4 Configuring QoS The encode-map-name variable is the name assigned to the Ingress Decode DSCP Policy Map that you want applied to the port whose configuration this is under. The default-map option assigns the default Ingress Decode DSCP Policy Map to the port whose configuration this is under. Since the default Ingress Decode DSCP Policy Map is the global default setting, this option is only required when the device’s global map has been set to a Ingress Decode DSCP Policy Map other than the default.
Brocade(config)# qos pcp decode-policy Customer1
Brocade(config)# interface ethernet 10/1
Brocade(config-if-e10000-10/1)# qos pcp decode-policy Customer1

Syntax: [no] qos pcp decode-policy decode-map-name | default-map | all-zero-map | 7P1D | 6P2D | 5P3D

The decode-map-name variable is the name assigned to the Ingress Decode PCP Policy Map that you want applied to the port whose configuration this is under.
4 Configuring QoS The all-zero-map option assigns an Ingress Decode EXP Policy Map where all EXP values are mapped to priority 0 and drop precedence 0. Binding an Ingress decode EXP policy map to a port You can bind an Ingress Decode EXP Policy Map to a specified port on a Brocade device using the qos exp decode-policy command as shown in the following.
To configure an Ingress port to force the port-configured priority, use the priority force command as shown in the following:

Brocade(config)# interface ethernet 10/1
Brocade(config-if-e10000-10/1)# priority force

Syntax: [no] priority force

Configuring a force drop precedence for a port
You can configure an Ingress port with a drop precedence to apply to packets that arrive on it using the drop-precedence command.
To configure an Ingress port to force the VLAN-configured priority, use the priority force command as shown in the following.

Brocade(config)# vlan 20
Brocade(config-vlan-20)# priority force

Syntax: [no] priority force

Configuring force priority to the DSCP value
You can configure an Ingress port (using the qos dscp force command) to force the configured DSCP value when determining the priority relative to other priority values of incoming packets.
Configuring QoS 4 Configuring Egress encode policy maps Egress Encode Policy Maps are created globally and are applied later either globally for all ports on a device or locally to specific port. To create an Egress Encode Policy Map, you must first enter the QoS mapping configuration level of the command interface using the qos-mapping command, as shown in the following.
4 Configuring QoS To set the values of an Egress Encode DSCP Policy Map, first specify name of the policy map and then populate the values in the Egress Encode DSCP Policy Map using the priority command as shown in the following.
Configuring QoS 4 Configuring an Egress encode PCP policy map Once you have named an Egress Encode PCP Policy Map using the pcp encode-map command, you can set the values of the named policy map. Setting the values in an Egress Encode PCP Policy Map involves specifying a PCP value to be marked in outgoing packets for a specified internal priority value (0 - 7) and optionally a drop precedence value (0 - 3).
4 Configuring QoS NOTE The name “default-map” cannot be used because it is reserved for standard mappings as described in “Default QoS mappings” on page 79. Configuring an Egress encode EXP policy map Once you have named an Egress Encode EXP Policy Map using the exp encode-map command, you can set the values of the named encode policy map.
Configuring QoS 4 Globally binding an Egress encode DSCP policy map You can bind an Egress Encode DSCP Policy Map globally for a Brocade device using the qos dscp encode-policy command as shown in the following. Brocade(config)# qos dscp encode-policy Customer1 Syntax: [no] qos dscp encode-policy encode-map-name | default-map | all-zero-map The encode-map-name variable is the name assigned to the Egress Encode DSCP Policy Map that you want applied globally on the device.
4 Configuring QoS The no option allows you to withdraw a previously configured encode policy. If the qos pcp encode-policy command is not configured, then the no qos pcp encode-policy command will generate an error message. The no option allows you to withdraw a previously configured Egress Encode DSCP Policy Map.
Configuring QoS 4 The default-map option assigns the default Egress Encode PCP Policy Map globally on the device. Since the default Egress Encode PCP Policy Map is the default setting, this option is only required when the device has been previously set to a different Egress Encode PCP Policy Map. When configured globally, the qos pcp encode-policy default-map command will not be displayed within the configuration even if it is explicitly configured.
The no option allows you to withdraw a previously configured Egress Encode PCP Policy Map. If the qos pcp encode-policy command is not configured, then the no qos pcp encode-policy command will generate an error message.
Configuring QoS 4 The no option allows you to withdraw a previously configured Egress Encode EXP Policy Map. If the qos exp encode-policy default-map command is not configured, the no qos exp encode-policy default-map command will still be allowed because qos exp encode-policy default-map is the default configuration.
The semantics and structure of the S-TAG are identical to those of the C-TAG, with the exception that bit 5 in octet 1, the Drop Eligible Indicator (DEI) bit, is used to indicate whether the packet is drop eligible. This allows all 3 bits in the PCP ID to be used for indicating the priority of the packet, with the drop precedence indicated by the DEI bit. IEEE 802.1ad requires that if this capability is provided, it must be independently manageable for each port.
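The field layout described above can be illustrated by parsing the 16-bit Tag Control Information (TCI) of an IEEE 802.1ad S-TAG: the PCP occupies the top 3 bits, the DEI the next bit, and the VLAN ID the remaining 12 bits. This sketch is a generic illustration of the tag format, not device code.

```python
# Illustrative sketch: extract PCP, DEI, and VID from the 16-bit TCI
# of an IEEE 802.1ad S-TAG.
def parse_stag_tci(tci):
    """Return (pcp, drop_eligible, vid) from a 16-bit TCI value."""
    pcp = (tci >> 13) & 0x7     # top 3 bits: priority code point
    dei = (tci >> 12) & 0x1     # next bit: drop eligible indicator
    vid = tci & 0x0FFF          # low 12 bits: VLAN ID
    return pcp, bool(dei), vid
```

A TCI of (5 << 13) | (1 << 12) | 100, for instance, carries priority 5, drop eligible, on VLAN 100.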
Configuring QoS 4 NOTE In versions of the Multi-Service IronWare prior to 03.8.00, before configuring the qos-tos trust and qos-tos mark commands, you had to configure the port-priority command at global CONFIG level. Beginning with version 03.8.00, the port-priority command is no longer supported. You can now directly configure the qos-tos trust and qos-tos mark commands at the interface-level.
NOTE
In release 03.8.00 and later, the qos pcp encode-policy on command must be configured when the qos-tos mark cos command is configured. The qos pcp encode-policy command is on by default and does not require explicit configuration unless it has been configured to be off.

NOTE
You cannot apply an ACL to an interface in the outbound direction to change the priority of certain types of traffic.

Packet mapping commands
The qos-tos trust command, which is retained from pre-03.8.
Configuring QoS 4 Changing the IP precedence –> DSCP mappings The IP precedence –> DSCP mappings are used if the trust level is IP Precedence as set by the qos-tos trust command. To change the IP precedence –> DSCP mappings, enter commands such as the following at the global CONFIG level of the CLI.
4 Configuring QoS Configuring support for super aggregate VLANs In a super-aggregate VLAN application, you can optionally configure an untagged interface to copy the QOS bits from the tag value set by the edge device to the tag value set by the core device. This is only supported if the incoming packet has ETYPE 0x8100. This can be configured using the qos decode-cvlan-pcp command as shown in the following.
LAG configuration rules for QoS configurations using commands that begin with the qos keyword
In port-level QoS configurations that use commands beginning with the qos keyword, the following considerations must be observed.
1. The secondary ports configured in the LAG must not have any QoS values configured on them.
2. The qos commands that are configured on the primary port are applied to all ports in the LAG.
3.
4 Displaying QoS information Displaying QoS Decode Policy Map configurations To display QoS Decode Policy Map configuration information, enter the following command at any level of the CLI.
Displaying QoS information 4 Displaying QoS Egress Encode Policy Map configurations To display QoS Egress Encode Policy Map configuration information, enter the following command at any level of the CLI.
Displaying QoS Binding configurations
To display QoS Binding configuration information, enter the following command at any level of the CLI.

Brocade(config)# show qos-map binding global
qos pcp decode-policy pcp-t2
qos exp decode-policy exp-t1
qos dscp decode-policy dscp-t3
qos dscp encode-policy dscp-d3

Syntax: show qos-map binding global | slot/port

The global option is used to display all QoS Policy Map bindings configured on the device.
Displaying QoS information 4 Displaying QoS packet and byte counters You can enable the collection of statistics for Ingress and Egress packet priorities using the enable-qos-statistics command. Once the collection of statistics is enabled, the show np qos statistics command can be used to display a count of the packet priorities of Ingress and Egress packets as shown in the following.
4 Weighted Random Early Discard (WRED) TABLE 28 QoS counter information (Continued) This field... Displays... COS : packets The number of packets leaving the device on the specified port or module with a DSCP, EXP, or PCP value equal to the value of the variable. COS : bytes The number of bytes contained in the packets leaving the device on the specified port or module with a DSCP, EXP, or PCP value equal to the value of the variable.
Weighted Random Early Discard (WRED) 4 • Pkt-Size-Max – The packet size to which the current packet's size is compared as shown in the algorithm below. This variable is user configured. How the WRED algorithm operates The graph in Figure 4 describes the interaction of the previously described variables in the operation of WRED.
Pdrop = ((avg-q-size - min-avg-q-size) / (max-avg-q-size - min-avg-q-size)) * Pmax * (pkt-size / pkt-size-max)

Applying the WRED algorithm to device traffic
Packets are assigned to an Ingress queue type based on their individual destination port and one of the 8 (0 - 7) internal priorities.
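The drop-probability formula above can be sketched in Python. This is an illustrative model of the formula only, not device code; the clamping below the minimum and above the maximum average queue size is the conventional WRED behavior and is assumed here.

```python
# Illustrative model of the WRED drop probability: it rises linearly
# between the minimum and maximum average queue sizes, scaled by Pmax
# and by the packet's size relative to pkt-size-max.
def wred_drop_probability(avg_q, min_avg_q, max_avg_q,
                          p_max, pkt_size, pkt_size_max):
    if avg_q < min_avg_q:
        return 0.0      # below the minimum threshold: never drop
    if avg_q >= max_avg_q:
        return 1.0      # above the maximum threshold: always drop
    fill = (avg_q - min_avg_q) / (max_avg_q - min_avg_q)
    return fill * p_max * (pkt_size / pkt_size_max)
```

With the queue-type 0, drop-precedence 0 defaults from Table 31 (min 320 KB, max 1024 KB, Pmax 2%, pkt-size-max 16384), a maximum-size packet seen when the average queue is halfway between the thresholds would be dropped with probability 1%.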
Configuring packet drop priority using WRED 4 Enabling WRED WRED must be enabled for the queue type of any forwarding queue that you want it to operate on. To enable WRED for the forwarding queues with a queue type of 3, enter the following command. Brocade(config)#qos queue-type 3 wred enable Syntax: [no] qos queue-type queue-number wred enable The queue-type variable is the number of the forwarding queue that you want to enable WRED for. There are eight forwarding queues on Brocade devices.
The avg-weight-value variable is the weight ratio between instantaneous and average queue sizes. It is described as the Wq parameter in "Weighted Random Early Discard (WRED)" on page 118. It can be one of the 13 values, expressed as 1 to 13, described in Table 30. The default value is 9, which maps to a Wq value of 0.19%.

Configuring the maximum instantaneous queue size
You can set the maximum size to which a queue is allowed to grow.
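The weighting between instantaneous and average queue sizes described by the Wq parameter is conventionally an exponentially weighted moving average. The sketch below assumes that standard behavior; it is not confirmed device internals.

```python
# Illustrative EWMA update (assumed behavior): a small Wq weights the
# instantaneous queue size lightly, so the average responds slowly to
# short bursts. The default avg-weight value of 9 corresponds to a Wq
# of 0.19% per this section.
def update_avg_queue_size(avg_q, instantaneous_q, wq=0.0019):
    return (1.0 - wq) * avg_q + wq * instantaneous_q
```

With the default Wq, a sudden burst moves the computed average only fractionally per sample, which is what keeps WRED from overreacting to transient congestion.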
Configuring packet drop priority using WRED 4 The queue-type variable is the number of the forwarding queue type that you want to configure drop-precedence for. There are eight forwarding queue types on Brocade devices. They are numbered 0 to 7. The drop-precedence-value variable for the drop-precedence parameter is the TOS or DSCP value in the IPv4 or IPv6 packet header. It determines drop precedence on a scale from 0 - 3.
4 Configuring packet drop priority using WRED Brocade(config)#qos queue-type 1 wred drop-precedence 0 min-avg-queue-size 16 Syntax: [no] qos queue-type queue-type wred drop-precedence drop-precedence-value min-avg-queue-size min-size The queue-type variable is the number of the forwarding queue type that you want to configure drop-precedence for. There are eight forwarding queue types on Brocade devices. They are numbered 0 to 7.
Configuring packet drop priority using WRED TABLE 31 4 WRED default settings Queue type Drop precedence Minimum average queue size (KByte) Maximum average queue size (KByte) Maximum packet size (Byte) Maximum drop probability Maximum instantaneous queue size (Kbyte) Average weight 0 0 320 1024 16384 2% 1024 6.25% 1 256 1024 16384 4% 2 256 1024 16384 9% 3 192 1024 16384 10% 0 320 1024 16384 2% 1024 6.
4 Scheduling traffic for forwarding • If you enter the min-avg-queue-size equal to what is already configured as the max-avg-queue-size, then the min-avg-queue-size will be decremented by 64 to make it different from the max-avg-queue-size, the following warning is displayed: “Warning - min-avg-queue-size is decreased to (min-avg-queue-size - 64) as min and max should be different to be effective.
Scheduling traffic for forwarding 4 • WFQ weight-based traffic scheduling – With WFQ destination-based scheduling enabled, some weight-based bandwidth is allocated to all queues. With this scheme, the configured weight distribution is guaranteed across all traffic leaving an egress port and an input port is guaranteed allocation in relationship to the configured weight distribution.
To determine the weight of q3, use the following formula.

Weight of q3 = 25 / (10 + 15 + 20 + 25 + 30 + 35 + 40 + 45)

The weight of q3 is 11.4%. Consequently, q3 will get 11.4% of the port's total bandwidth. The values of the remaining queues are calculated to be the following: q7 = 20.5%, q6 = 18.2%, q5 = 15.9%, q4 = 13.6%, q3 = 11.4%, q2 = 9.1%, q1 = 6.8%, and q0 = 4.5%.
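The weight calculation above can be sketched as follows: each queue's share of egress bandwidth is its configured weight divided by the sum of all configured weights. The example weights are taken from the text; the function itself is an illustration, not device code.

```python
# Illustrative WFQ share calculation: each queue's bandwidth fraction
# is its weight divided by the sum of all weights.
def wfq_shares(weights):
    """Map queue index -> fraction of port bandwidth."""
    total = sum(weights)
    return [w / total for w in weights]

# Example weights for q0..q7 from the text: 10, 15, 20, 25, 30, 35, 40, 45.
shares = wfq_shares([10, 15, 20, 25, 30, 35, 40, 45])
# q3's share is 25/220, roughly 11.4% of the port bandwidth.
```

Because the shares are relative, doubling every weight leaves the bandwidth distribution unchanged; only the ratios matter.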
Egress port and priority based rate shaping 4 Brocade(config)# interface ethernet 1/1 Brocade(config-if-e1000-1/1)# qos scheduler mixed 100 80 60 40 20 Syntax: qos scheduler mixed queue4-weight queue3-weight queue2-weight queue1-weight queue0-weight The queue4-weight variable defines the relative value for queue4 in calculating queue4’s allocated bandwidth. The queue3-weight variable defines the relative value for queue3 in calculating queue3’s allocated bandwidth.
4 Egress port and priority based rate shaping Configuring port-based rate shaping When setting rate shaping for a port, you can limit the amount of bandwidth available on a port within the limits of the port’s rated capacity. Within that capacity, you can set the bandwidth at increments within the ranges described in Table 32.
Egress port and priority based rate shaping 4 NOTE The egress rate shaping burst size for a port and priority-based shaper is 3072 bytes. To set the capacity for priority 2 traffic on a 10 Gbps port to the incremental capacity over 2 Gbps, use the following command.
4 Egress port and priority based rate shaping Configuring multicast flow control Flow controls are available from egress to Ingress, and from fabric to Ingress. At the egress of each Traffic Manager, there are pre-determined thresholds for consumed resources and available resources and separate thresholds for guaranteed multicast or broadcast traffic and best-effort multicast or broadcast traffic.
Egress port and priority based rate shaping 4 Brocade(config)# interface ethernet 1/1 Brocade(config-if-e10000-1/1)# qos multicast shaper best-effort rate 10000 In this example, the configuration will apply to Ingress traffic that arrives on either port 1/1 or port 1/2 of the Interface module. NOTE When a qos multicast shaper command is configured for a port, the configuration command is placed in the running config for all ports that belong to the same Traffic Manager.
The best-effort_max_buffer specifies the maximum buffer size allowed for best-effort traffic flow (multicast port priorities 2 - 0), specified as a percentage of the total buffer size.

Ingress traffic shaping per multicast stream
Internet Protocol Television (IPTV) multicast streams on an individual inbound physical port are rate shaped to a specified rate and are prioritized over the broadcast or unknown-unicast traffic. Each IPTV multicast stream is queued separately and is scheduled independently to the outbound ports. The IPTV rate shaping reduces burstiness in the source stream.
Egress port and priority based rate shaping FIGURE 6 4 IPTV Bandwidth Requirements Configuring multicast traffic policy maps You can define profiles to match the IPTV multicast traffic of the individual ingress streams. To configure a policy map for the multicast streams, enter the following command.
4 Egress port and priority based rate shaping Binding multicast traffic policy maps NOTE A profile must exist in the configuration before it can be used for binding. A standard or an extended ACL is used to define the IPTV streams that can be bound to a defined profile. The profile binding associates the properties of the profile to all the IPTV streams identified by the ACL. Binding of multicast streams can be done for Layer 3 multicast routing and Layer 2 multicast snooping.
Egress port and priority based rate shaping 4 The ip multicast policy-map specifies the ACL binding for IPv4 multicast snooping. Syntax: [no] ipv6 multicast policy-map profile_name acl_id | acl_name The ipv6 multicast policy-map specifies the ACL binding for IPv6 multicast snooping. The no form of the command removes the profile binding with the ACL on the VLAN or VPLS. In the following example, binding for Layer 2 multicast snooping is applied to VPLS instance V1.
4 Traffic manager statistics display Traffic manager statistics display Counters have been introduced to track the packets and bytes that enter the Ingress traffic manager and exit the egress traffic manager. Data from these counters can be displayed as described in the following sections. Displaying all traffic manager statistics for a device The following command displays all traffic manager statistics for a device by port groups that belong to each traffic manager.
EnQue Byte Count:            51907696
DeQue Pkt Count:             464454
DeQue Byte Count:            51907696
TotalQue Discard Pkt Count:  0
TotalQue Discard Byte Count: 0
Oldest Discard Pkt Count:    0
Oldest Discard Byte Count:   0
Egress Counters:
  EnQue Pkt Count:           701866
  EnQue Byte Count:          78791072
  Discard Pkt Count:         0
  Discard Byte Count:        0

Syntax: show tm statistics ethernet slot/port

The slot/port variable specifies the slot and port number of the port group that you want to display traffic manager statistics
4 Traffic manager statistics display TABLE 34 Traffic manager statistics This field... Displays... Ingress Statistics Total Ingress Pkt Count A count of all packets entering into this traffic manager. A traffic manager contains a specific number of ports depending on the Interface module as described in Table 35. EnQue Pkt Count A count of all packets entering Ingress queues on this traffic manager.
Traffic manager statistics display 4 NOTE The byte counts displayed from the show tm statistics command incorporate proprietary internal headers of various lengths.
4 Traffic manager statistics display Displaying traffic manager statistics for NI-MLX-10Gx8-M and NI-MLX-10Gx8-D modules The following command displays traffic manager statistics for the NI-MLX-10Gx8-M module, and the NI-MLX-10Gx8-D module identified by its slot number.
Traffic manager statistics display 4 Displaying traffic manager statistics for the 4x10G module The following command displays traffic manager statistics for the 4x10G module identified by its slot number.
4 Traffic manager statistics display Displaying traffic manager statistics for the 20x1G module The following command displays traffic manager statistics for the 20x1G module.
Traffic manager statistics display 4 Displaying traffic manager statistics for IPTV multicast queue The following command displays traffic manager statistics for the IPTV Multicast queue on an Ethernet module.
4 Traffic manager statistics display TABLE 37 Output parameters of the show tm-voq-stat src_port eth 3/21 fid 8004 command (Continued) Field Description Current Queue Depth Shows the current queue depth. Maximum Queue Depth since Last read Shows the maximum queue depth since last access to read.
QoS for NI-MLX-1Gx48-T modules 4 utilization 1015230949 packets input, 64974783168 bytes, 0 no buffer Received 0 broadcasts, 0 multicasts, 1015230949 unicasts 0 input errors, 0 CRC, 0 frame, 0 ignored 0 runts, 0 giants NP received 1039220106 packets, Sent to TM 1039220442 packets NP Ingress dropped 0 packets 1015231660 packets output, 64974824768 bytes, 0 underruns Transmitted 0 broadcasts, 0 multicasts, 1015231660 unicasts 0 output errors, 0 collisions NP transmitted 1039221393 packets, Received from TM
4 Aggregated TM VOQ statistics collection

NOTE
When changing the number of priority queues from 8 to 4, or from 4 to 8, the system displays the following message:
Reload required. Please write memory and then reload or power cycle. Failure to reload could cause system instability or failure.
The NP continues to map all inbound packets to 8 internal priorities. If the system-init max-tm-queues command is configured, the NP right-shifts this priority number by one bit before sending the packet to the TM.
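The one-bit right shift described in the note collapses the 8 NP internal priorities into 4 TM queues, so priorities 2p and 2p+1 share queue p. A minimal Python sketch of that mapping, for illustration only (the function name is not a Brocade API):

```python
def tm_queue_for_priority(internal_priority: int, max_tm_queues: int = 4) -> int:
    """Map an NP internal priority (0-7) to a TM queue.

    With 8 TM queues the priority is used as-is; in 4-queue mode
    (system-init max-tm-queues 4) the NP right-shifts the priority
    by one bit before sending the packet to the TM.
    """
    if not 0 <= internal_priority <= 7:
        raise ValueError("internal priority must be 0-7")
    if max_tm_queues == 8:
        return internal_priority
    return internal_priority >> 1  # one-bit right shift in 4-queue mode

# Priorities 0-7 fold pairwise into TM queues 0-3.
print([tm_queue_for_priority(p) for p in range(8)])  # [0, 0, 1, 1, 2, 2, 3, 3]
```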
Enabling aggregated TM VOQ statistics collection
The tm-voq-collection command allows you to enable and disable aggregated TM VOQ statistics collection.
Brocade(config)# statistics
Brocade(config-statistics)# tm-voq-collection
Syntax: [no] tm-voq-collection
NOTE
If priority queues are configured with system-init max-tm-queues 4, the TM queues packets based on Table 40.
NOTE
The enable-qos-statistics command must be enabled along with the snmp-server enable mib np-qos-stat command to enable SNMP support for retrieving NP QoS statistics.
NOTE
NP QoS statistics are supported for physical ports only.
Brocade(config)# enable-qos-statistics
Syntax: [no] enable-qos-statistics

Displaying TM statistics from one queue or all queues
Use the following command to display traffic manager statistics for Ethernet ports.
  WRED Dropped Pkt Count
  WRED Dropped Bytes Count
  Current Queue Depth
  Maximum Queue Depth since Last read
  Priority = 2
  .... 0 21 0 0
Syntax: show tm-voq-stat src_port source-port dst_port ethernet destination-port priority
Specifying a source-port and a destination-port is required. You can optionally specify a priority to limit the display to a single priority.
Displaying TM statistics from the multicast queue
Use the following command to display traffic manager statistics from the Multicast queue for priority 1 on a module.
• Use the show tm-voq-stats dst_port all P command to display priority P counters for all ports in the system.
• Use the show tm-voq-stats dst_port all command to display aggregated counters for all ports in the system.
• Use the show tm-voq-stats dst_port all all command to display all priorities and aggregated counters for all ports in the system.
NOTE
These statistics are shown only when aggregated TM VOQ statistics collection is enabled.
  7       0            0%
  --------- Ports 3/25 - 3/48 ---------
  QType   Max Depth    Max Util    Destination Port
  0       0            0%          NA
  1       0            0%          NA
  2       0            0%          NA
  3       0            0%          NA
  4       0            0%          NA
  5       0            0%          NA
  6       0            0%          NA
  7       0            0%          NA

Syntax: show tm-voq-stat max-queue-depth slot slot-number

The show tm-voq-stat max-queue-depth slot command displays the following information:

TABLE 43 TM VOQ maximum queue depth summary

Field    Description
QType    Specifies the queue priority.
Clearing TM VOQ depth summary
Use the clear tm-voq-stat max-queue-depth slot command to clear the maximum queue depth summary of any queue.
Brocade# clear tm-voq-stat max-queue-depth slot
Syntax: clear tm-voq-stat max-queue-depth slot slot-number
The slot-number specifies the decimal value of the slot number of the group from which you want to clear the traffic manager queue depth summary.
Displaying QoS packet and byte counters
You can enable the collection of statistics for Ingress and Egress packet priorities using the enable-qos-statistics command. Once the collection of statistics is enabled, the show np statistics command can be used to display a count of the packet priorities of Ingress and Egress packets, as shown in the following example.
Brocade# show np statistics
TD: Traffic Descriptor.
QoS commands affected by priority queues
• Priority-based Rate Shaping
• Weighted Random Early Discard (WRED)
• Weight-based Scheduling and Mixed Strict Priority
• CPU Copy Queue
• Traffic Manager Statistics

Priority-based rate shaping
If the user specifies a priority of 4-7 when the max-tm-queues parameter is configured to use 4 queues, the qos shaper priority command is accepted, but a warning message is displayed.
The following example displays the warning shown when the qos scheduler weighted command is configured using 4 queues.
Brocade(config-ethe-1/1)# qos scheduler weighted 7 6 5 4 3 2 1 1
Current max TM queues is 4 - weights "7", "6", "5", "4" for priority 7-4 will not have any effect.
The following example displays the warning shown when the qos scheduler mixed command is configured using 4 queues.
Enhanced buffer management for NI-MLX-10Gx8 modules and NI-X-100Gx2 modules
The prioritized buffer-pool feature establishes two buffer pools: a gold buffer pool for high-priority traffic and a bronze buffer pool for low-priority traffic. Each internal priority can be associated with either the gold buffer pool or the bronze buffer pool. High-priority traffic is guaranteed buffers even in the presence of bursty low-priority traffic.
FIGURE 7 System Default Priorities and Corresponding Buffer Pools
(By default, the high-priority buffer pool serves the VOQs for internal priority 7, and the low-priority buffer pool serves the VOQs for internal priorities 6-0.)

Configuration of buffer-pool priority to queue type
The following table displays the traffic types that are associated with each priority.
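The default association in Figure 7 (internal priority 7 in the gold pool, priorities 6-0 in the bronze pool) can be expressed as a simple lookup. This sketch only illustrates the default mapping; the function name is not a Brocade API:

```python
def default_buffer_pool(internal_priority: int) -> str:
    """Return the system-default buffer pool for an internal priority.

    Per Figure 7, only priority 7 uses the high-priority (gold) pool;
    priorities 6-0 use the low-priority (bronze) pool.
    """
    if not 0 <= internal_priority <= 7:
        raise ValueError("internal priority must be 0-7")
    return "gold" if internal_priority == 7 else "bronze"

print(default_buffer_pool(7))  # gold
print(default_buffer_pool(3))  # bronze
```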
TABLE 46 Strict vs. weighted queues

Queue Type (Prioritization Category)    Protocol
P7                                      LACP, UDLD (802.3ah), STP/RSTP/BPDU, VSRP, MRP, BFD, GRE-KA/IS-IS over GRE, G.8032, LLDP, non-CCM 802.1ag (Eth + MPLS-enc.
Configuring buffer-pool size
Brocade(config)# qos buffer-pool bronze 50
Syntax: [no] qos buffer-pool gold | bronze max-percentage
The buffer-pool type is either gold or bronze. The max-percentage specifies the maximum percentage of memory allocated for that buffer-pool type.

Configuration considerations
• The percentage of memory allocated for the Bronze buffer pool cannot exceed 95%.
Buffer Type    Memory(%)    Min. Guarantee(%)
BRONZE         95           0
GOLD           100          5

Module Type    Total Memory    Max. Gold    Min. Gold    Max. Bronze    Min. Bronze
8x10           1392 MB         1392 MB      64 MB        1328 MB        0 MB
2x100          1472 MB         1472 MB      73 MB        1398 MB        0 MB

Configuring Virtual Output Queue (VOQ) queue size

Modules with 256MB VOQ size support
The following modules support a maximum VOQ size of 256MB.
The queue-number variable specifies the queue-type/internal priority for which the max-queue-size is changed. The queue-number can range from 0 to 7. The max-queue variable specifies the maximum queue size, in KBytes, for the queue-type/internal priority. The max-queue can range from 0 to 65536 KBytes. Setting the max-queue-size to 0 implicitly sets the max-queue-size to the maximum value.
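The max-queue-size rule above (a 0-65536 KByte range, where 0 implicitly selects the maximum) can be sketched as a small helper; this is an illustration of the stated semantics, not device code:

```python
MAX_QUEUE_SIZE_KB = 65536  # upper bound for max-queue-size, in KBytes

def effective_max_queue_size(configured_kb: int) -> int:
    """Return the effective max-queue-size in KBytes.

    Valid configured values are 0-65536 KBytes; configuring 0
    implicitly sets the queue size to the maximum value.
    """
    if not 0 <= configured_kb <= MAX_QUEUE_SIZE_KB:
        raise ValueError("max-queue-size must be 0-65536 KBytes")
    return MAX_QUEUE_SIZE_KB if configured_kb == 0 else configured_kb

print(effective_max_queue_size(0))     # 65536
print(effective_max_queue_size(2048))  # 2048
```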
Commands
• show tm buffer-pool-stats slot
• show tm-voq-stat max-queue-depth slot
• clear tm-voq-stat max-queue-depth slot
• clear tm buffer-pool-stats slot
show tm buffer-pool-stats slot
Displays the maximum buffer utilization from the TM.
Syntax: show tm buffer-pool-stats slot slot-number
Parameters: slot-number specifies the decimal value of the slot number of the traffic manager.
Command Modes: Privileged EXEC mode
Usage Guidelines: The show tm buffer-pool-stats slot command displays the maximum buffer utilization from the TM and provides additional information for debugging purposes.
show tm-voq-stat max-queue-depth slot
Displays a summary of the maximum queue depth of any TM queue.
Syntax: show tm-voq-stat max-queue-depth slot slot-number
Parameters: slot-number specifies the decimal value of the slot number of the traffic manager.
History
Release                          Command History
Multi-Service IronWare R05.5c    This command was introduced.
Related Commands
clear tm-voq-stat max-queue-depth slot
Clears the summary of the maximum queue depth of any TM queue.
Syntax: clear tm-voq-stat max-queue-depth slot slot-number
Parameters: slot-number specifies the decimal value of the slot number of the traffic manager.
Command Modes: Privileged EXEC mode
Usage Guidelines: The clear tm-voq-stat max-queue-depth slot command clears the maximum queue depth of any queue from the TM.
clear tm buffer-pool-stats slot
Clears the maximum buffer utilization from the TM.
Syntax: clear tm buffer-pool-stats slot slot-number
Parameters: slot-number specifies the decimal value of the slot number of the traffic manager.
Command Modes: Privileged EXEC mode
Usage Guidelines: The clear tm buffer-pool-stats slot command clears the maximum buffer utilization from the TM.
Chapter 5 Hierarchical Quality of Service (HQoS)
Table 47 displays the individual Brocade devices that support HQoS.
5 Hierarchical QoS (HQoS) for 8x10G modules Hierarchical QoS (HQoS) for 8x10G modules NOTE HQoS is supported on the egress of 10G ports of the NI-MLX-10GX8-M and BR-MLX-10GX8-X modules. HQoS is not supported on the NI-MLX-10GX8-D module. Hierarchical QoS (HQoS) allows a carrier to consolidate different services on the same physical device running on the same physical infrastructure.
FIGURE 8 HQoS model
(The figure shows Services 1-4 mapping to Customers 1-4; Customers 1 and 2 attach to Logical Port 1, Customers 3 and 4 attach to Logical Port 2, and both logical ports feed the Physical Port.)

HQoS Components
The following HQoS components are supported in this release.

Supported levels of scheduling
At every scheduling/shaping level, the sum of the shaping rates going into a scheduler element does not need to add up to less than the shaping rate out of that scheduler element; oversubscription is allowed.
HQoS towards the customers
HQoS can shape the traffic towards the downstream 1GE links using the "Logical" port level of HQoS on Brocade devices. In Figure 9 on page 174, two logical ports would be defined and shaped to 1 Gb/s. The HQoS policy is configured with customer traffic connected to the appropriate "Logical" port in the HQoS hierarchy.
HQoS towards the core network
Using HQoS towards the Service Provider core network on 10GE ports ensures high levels of QoS for higher-priority traffic classes for the customer. The core network in this case can be a PB or PBB network.
• Level 3: The Service level provides the scheduler/shaper for individual customer services, e.g., a VLAN. The SLA applied here applies to the individual service for that customer. For example, a customer may have two distinct point-to-point services, identified by SVLANs, sharing the same physical link to the customer. The Service level schedules traffic from individual priority levels for a particular SVLAN.
Supported deployment models

HQoS for Local VPLS
Figure 11 shows a Local VPLS HQoS model. This model can be used for any kind of VLAN, such as Customer 802.1Q VLANs (CVLAN), Provider Bridging 802.1ad VLANs (SVLAN), or Provider Backbone Bridging 802.1ah VLANs (BVLAN). The type of VLAN being used is defined by the port Ethertype configuration. As Figure 11 shows, HQoS for Local VPLS supports single-tagged and dual-tagged endpoint queuing on the same egress port.
HQoS for PBB traffic
PBB ports can use either BVLAN-based queuing (the BVLAN HQoS model, as shown in Figure 11 on page 177) or I-SID-based queuing as shown in Figure 12, where the egress port is an 802.1ah (PBB) port and packets are queued per I-SID. A BVLAN may carry a large number of services identified by distinct I-SID values.
Bypassing hierarchy levels
Figure 13 is an example where the Service Provider does not require the Logical Port level.
Other traffic
Figure 14 on page 180 displays a Local VPLS HQoS model concurrently supporting non-customer traffic, which is referred to as "other traffic". Customer traffic is always queued and scheduled through the customer portion of the Layer 2 HQoS scheduler, irrespective of what the customer Layer 2 traffic is carrying. The "other queues" are used for non-customer traffic, that is, traffic that is not explicitly mapped to an HQoS queue.
Configuring HQoS
The HQoS configuration procedure goes through the following phases:
• Create scheduling entities and configure forwarding profiles for them.
• Associate the scheduling entities with the forwarding profiles.
• Configure the match criteria for each node.
• Apply the organized scheduler policy to an interface.
• Continuous bursts from 2K HQoS customer streams of jumbo packet sizes at line rate cannot be sustained by the XPP's small egress FIFOs.
FIGURE 17 HQoS scheduler policy
The following HQoS scheduler policy examples use Figure 16 on page 182.
Brocade(config-hqos-scheduler-policy customer-group-type1)# scheduler-flow Customer1 scheduler-input 3 scheduler-policy customer-type1
Brocade(config-hqos-scheduler-policy customer-group-type1)# scheduler-flow Customer2 scheduler-input 2 scheduler-policy customer-type1

Level 3 policy
The following is an example of how to configure a Level 3 policy.
HQoS queue policy
An HQoS hierarchy is achieved by creating a set of queues.
• A queue is rate shaped and the resulting traffic is referenced as a scheduler flow.
• The scheduler flow out of a queue is referenced by a scheduler policy.
• A queue stores packets based on a selected matching criterion.
FIGURE 18 HQoS Queue Policy
Figure 16 on page 182 defines queue policies named "Q-7-6", "Q-5-4", "Q-3-2", and "Q-1-0".
The shaper-rate is an optional parameter. The shaping rate can be set with a minimum of 1 Mbps and a maximum of 10 Gbps. If no shaper-rate is specified, the traffic is not subject to shaping. The shaper-burst-size is an optional parameter. The shaper burst size can be set with a minimum of 2 KBytes and a maximum of 256 KBytes. The default shaper burst size is 10 KBytes.
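The ranges above can be checked with a small validator, assuming the shaper-rate is expressed in kbps as in the configuration examples (1 Mbps = 1,000 kbps, 10 Gbps = 10,000,000 kbps); the function is illustrative, not part of the CLI:

```python
def validate_shaper(rate_kbps=None, burst_kb=10):
    """Validate HQoS shaper parameters against the documented ranges.

    rate_kbps: optional shaping rate; 1 Mbps to 10 Gbps when given.
               None means the traffic is not subject to shaping.
    burst_kb:  shaper burst size, 2-256 KBytes; defaults to 10 KBytes.
    """
    if rate_kbps is not None and not 1_000 <= rate_kbps <= 10_000_000:
        raise ValueError("shaper-rate must be between 1 Mbps and 10 Gbps")
    if not 2 <= burst_kb <= 256:
        raise ValueError("shaper-burst-size must be 2-256 KBytes")
    return rate_kbps, burst_kb

print(validate_shaper(rate_kbps=20_000))  # a 20 Mbps shaper with the default burst
```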
Brocade(config-if-e10000-1/1)# hqos-map LogicalPort2.CustomerGrp1.Customer1 match vlan 500
Brocade(config-if-e10000-1/1)# hqos-map LogicalPort2.CustomerGrp1.Customer2 match vlan 600
Brocade(config-if-e10000-1/1)# hqos-map LogicalPort2.CustomerGrp2. ... match vlan 700
Brocade(config-if-e10000-1/1)# ... match vlan 800
Brocade(config-if-e10000-1/1)# ... shaper-rate 15000
Brocade(config-if-e10000-1/1)# ... 10000
Brocade(config-if-e10000-1/1)# ... 10000
PBB HQoS
FIGURE 19 PBB HQoS model
(The figure shows per-customer Level 3 queues CoS1-CoS4 (priorities 7,6 / 5,4 / 3,2 / 1,0) with mixed strict and weighted scheduling, feeding the Level 2 Customer Group, Level 1 Logical Port, and Level 0 Physical Port (10GE) scheduler levels.)
At the Level 1 scheduler policy configuration, there are two customer groups competing in a weighted fair queue.
• CustomerGrp1 gets preferential treatment, with twice the bandwidth of CustomerGrp2, because of the weight values associated with them.
• CustomerGrp1 can receive up to 666 Mbps and CustomerGrp2 up to 333 Mbps of the total 1 Gbps shaped at this level.
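The 666/333 Mbps split follows directly from weighted fair queuing: each flow receives bandwidth in proportion to its weight out of the shaped total. A sketch of the arithmetic (the weights 2 and 1 are assumed from the stated 2:1 ratio; this is not a Brocade API):

```python
def wfq_shares(total_kbps: int, weights: dict) -> dict:
    """Split a shaped rate among WFQ flows in proportion to their weights."""
    total_weight = sum(weights.values())
    return {name: total_kbps * w // total_weight for name, w in weights.items()}

# 1 Gbps shaped at Level 1, split 2:1 between the two customer groups:
shares = wfq_shares(1_000_000, {"CustomerGrp1": 2, "CustomerGrp2": 1})
print(shares)  # {'CustomerGrp1': 666666, 'CustomerGrp2': 333333} (~666 and ~333 Mbps)
```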
Brocade(config-if-e10000-1/1)# match vlan 100 isid 1000
Brocade(config-if-e10000-1/1)# match vlan 200 isid 2000
Brocade(config-if-e10000-1/1)# match vlan 300 isid 3000
Brocade(config-if-e10000-1/1)# match vlan 400 isid 4000
Brocade(config-if-e10000-1/1)# match vlan 500 isid 5000
Brocade(config-if-e10000-1/1)# match vlan 600 isid 6000
Brocade(config-if-e10000-1/1)# match vlan 700 isid 7000
Brocade(config-if-e10000-1/1)# match vlan 800 isid 8000
hqos-map Logical-
Scheduler-Node Type: Root
Scheduler-Node Name: vlan-business
Scheduler-Node ID: 0x310000
Scheduler-Node Scheduler Type: Strict
Scheduler-Node Shaper Rate: 2000000 Kbps
Scheduler-Node Shaper Rate Burst Size: 128 KB

In this example, the HQoS policy tree has been applied to an interface.
Brocade# show hqos interface eth 1/1 scheduler-node vlan-business.
Scheduler-Node Shaper Rate Burst Size: 128 KB
Scheduler-Node-Parent Node Name: Logical-Port1
Scheduler-Node-Parent Node ID: 0x310008

Syntax: show hqos interface ethernet slot/port [scheduler-node scheduler-node-name | scheduler-node-id] [child]

Displaying the HQoS Max Queue Size
Use the show hqos max-queue-size command to display the priority for customer and default traffic queues.
Buffer Type    Memory(%)    Min. Guarantee(%)
BRONZE         95           0
GOLD           100          5

Module Type    Total Memory    Max. Gold    Min. Gold    Max. Bronze    Min. Bronze
8x10           1392 MB         1392 MB      69 MB        1322 MB        0 MB

Syntax: show hqos buffer-pool

Displaying HQoS global resource information
Use the show hqos resource global command to display the HQoS resources for a specified slot.
Displaying HQoS statistics
Use the show hqos statistics command to display the specified flow information for a specified interface.
Brocade# show hqos statistics ethernet 1/1 queue
Queue name: vlan-business.Logical-Port1.Customer1.
Total Discard Byte Count               0
Oldest Discard Packet Count            0
Oldest Discard Byte Count              0
Current Queue Depth                    0
Maximum Queue Depth Since Last Read    0

Node: LogicalPort1.CustomerGrp1.

Node: implicit_match_all
Queue index: 2
Priorities: 2
EnQueue Packet Count                   0
EnQueue Byte Count                     0
DeQueue Packet Count                   0
DeQueue Byte Count                     0
Total Discard Packet Count             0
Total Discard Byte Count               0
Oldest Discard Packet Count            0
Oldest Discard Byte Count              0
Current Queue Depth                    0
Maximum Queue Depth Since Last Read    0

Node: implicit_match_all
Queue index: 3
Priorities: 3
EnQueue Packet Count                   0
EnQueue Byte Count                     0
DeQueue Packet Count                   0
DeQueue Byte Count                     0
Total Discard Packet Count             0
Total Discard Byte Count               0
Oldest Discard Packet Count            0
Oldest Discard Byte Count              0
Current Queue Depth                    0
Maximum Queue Depth Since Last Read    0

Node: implicit_match_all
Queue index: 7
Priorities: 7
EnQueue Packet Count                   0
EnQueue Byte Count                     0
DeQueue Packet Count                   0
DeQueue Byte Count                     0
Total Discard Packet Count             0
Total Discard Byte Count               0
Oldest Discard Packet Count            0
Oldest Discard Byte Count              0
Current Queue Depth                    0
Maximum Queue Depth Since Last Read    0
Clearing HQoS statistics
Use the clear hqos statistics command to clear the statistics for a specified flow.
Brocade# clear hqos statistics ethernet 1/1 queue default-other
Syntax: clear hqos statistics ethernet slot/port queue hqos-scheduler-node-name index index

Sample configurations
NOTE
All VPLS HQoS traffic in the TM will drop after changing the loopback IP address in the MPLS configuration.
scheduler-flow LogicalPort2 scheduler-input 6 scheduler-policy logical-port-type1
scheduler-flow Other-traffic scheduler-input 5 scheduler-policy other-policy
!
hqos scheduler-policy logical-port-type1 level level-1
 shaper-rate 1000000
 shaper-burst-size 10
 scheduler-type weighted
 scheduler-flow CustomerGrp1 scheduler-input 3 scheduler-policy customer-group-type1
 scheduler-flow CustomerGrp2 scheduler-input 2 scheduler-policy customer-group-type1
!
hqos scheduler-
shaper-rate 10000000
shaper-burst-size 10
!
!
router mpls
 vpls Customer1 1 vlan 100 tagged ethe 2/1 ethe 1/1
 vpls Customer2 2 vlan 200 tagged ethe 2/1 ethe 1/1
 vpls Customer3 3 vlan 300 tagged ethe 2/1 ethe 1/1
 vpls Customer4 4 vlan 400 tagged ethe 2/1 ethe 1/1
 vpls Customer5 5 vlan 500 tagged ethe 2/1 ethe 1/1
 vpls Customer6 6 vlan 600 tagged ethe 2/1 ethe 1/1
 vpls Customer7 7 vlan 700 tagged ethe 2/1 ethe 1/1
 vpls Customer8 8 vlan 800 tagged ethe 2/1 ethe 1/1
PBB HQoS example configuration
FIGURE 21 PBB HQoS deployment example
(The figure shows per-I-SID queuing for I-SIDs on BVLAN 10 (e.g., I-SID 1000, 2000, 3000), each with CoS1-CoS4 queues (priorities 7,6 / 5,4 / 3,2 / 1,0), scheduled through the Customer, Customer Group, Logical Port, and Physical Port (10GE) levels.)
scheduler-flow Customer1 scheduler-input 3 scheduler-policy customer-type1
scheduler-flow Customer2 scheduler-input 2 scheduler-policy customer-type1
!
hqos scheduler-policy customer-type1 level level-3
 shaper-rate 20000
 shaper-burst-size 10
 scheduler-type strict
 scheduler-flow CoS1 scheduler-input 3 scheduler-policy
 scheduler-flow CoS2 scheduler-input 2 scheduler-policy
 scheduler-flow CoS3 scheduler-input 1 scheduler-policy
 scheduler-flow CoS4 scheduler-input 0
vpls Customer7 7 pbb
 vlan 700 tagged ethe 3/1
 vlan 10 isid 7000 tagged eth 1/1 eth 2/8
vpls Customer8 8 pbb
 vlan 800 tagged ethe 3/1
 vlan 10 isid 8000 tagged eth 1/1 eth 2/8
!
interface ethernet 1/1
 hqos service-policy output pbb-port
 hqos-map Logical-Port1.CustomerGrp1.Customer1
 hqos-map Logical-Port1.CustomerGrp1.Customer2
 hqos-map Logical-Port1.CustomerGrp2.Customer1
 hqos-map Logical-Port1.CustomerGrp2.Customer2
 hqos-map Logical-Port2.CustomerGrp1.
Scheduler and queue policy configuration templates
Scheduler and queue policy configuration templates are available for creating the HQoS tree. The configuration does not take effect until the templates are bound to an interface supporting HQoS. Once a policy is bound to an interface, you cannot make any changes to the policy (except the shaper rate and burst size); the policy must be unbound, changed, and then rebound.
HQoS queue scheduler models

Strict priority (SP)
Figure 22 is an example of the scheduling model for HQoS other traffic. All 8 scheduler inputs are SP.
FIGURE 22 Strict priority scheduler (SP)
HQoS-QT stands for hqos-queue-type. The range is <0-7>.
IP stands for internal priority. The range is <0-7>.
PR stands for scheduler-input (the ordering of a flow with respect to a scheduler, which is specified in the hqos scheduler policy). The range is <0-7>.
Mixed Strict Priority and Weighted Fair Queue (SP/WFQ)
Figure 23 is an example of the mixed SP and WFQ scheduling model for HQoS customer traffic. In this example, the top three scheduler inputs are SP and the bottom five scheduler inputs are WFQ.
FIGURE 23 Mixed Strict Priority and Weighted Fair Queue
The supported weight range for each input is <1-64>.
HQoS-QT stands for hqos-queue-type. The range is <0-7>.
IP stands for internal priority. The range is <0-7>.
Weighted Fair Queue and Fair Queue (WFQ/FQ)
Figure 24 is an example of the WFQ and FQ scheduling model for HQoS other traffic. In this example, all 8 scheduler inputs are WFQ. If all 8 scheduler inputs are equal, the scheduling model is Fair Queue (FQ). The supported weight range for each input is <1-64>.
FIGURE 24 WFQ/FQ scheduling model for HQoS other traffic
HQoS-QT stands for hqos-queue-type. The range is <0-7>.
IP stands for internal priority.
TABLE 48 (Continued)
HQoS "Other Traffic" queue-type    Internal priority    8x10G family buffer-pool    8x10G family default queue size
3                                  3                    Bronze                      1 MB
2                                  2                    Bronze                      1 MB
1                                  1                    Bronze                      1 MB
0                                  0                    Bronze                      1 MB

TABLE 49 HQoS "Customer Traffic" queue types
HQoS "Customer Traffic" queue-type    Internal priority    8x10G family buffer-pool    8x10G family default queue size
11                                    7,6                  Gold                        1 MB
10                                    5,4                  Bronze                      1 MB
9                                     3,2                  Bronze                      1 MB
8                                     0,1
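Table 49's pairing of customer queue-types with internal priorities can be captured as a lookup. This sketch is for illustration only (queue-type 8's buffer-pool and queue-size columns are truncated in the extracted table and are therefore omitted here):

```python
# HQoS "Customer Traffic" queue-type -> internal priority pair (Table 49)
CUSTOMER_QUEUE_PRIORITIES = {
    11: (7, 6),
    10: (5, 4),
    9: (3, 2),
    8: (0, 1),
}

def queue_type_for_priority(internal_priority: int) -> int:
    """Return the customer-traffic queue-type that serves an internal priority."""
    for qtype, prios in CUSTOMER_QUEUE_PRIORITIES.items():
        if internal_priority in prios:
            return qtype
    raise ValueError("internal priority must be 0-7")

print(queue_type_for_priority(6))  # 11
print(queue_type_for_priority(0))  # 8
```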
HQoS Queue Schedulers - Customer Traffic

Strict Priority (SP)
Figure 25 depicts the SP scheduling model for HQoS customer traffic. All 4 scheduler inputs are SP.
FIGURE 25 SP scheduling model for HQoS customer traffic
HQoS-QT stands for hqos-queue-type. The range is <8-11>.
IP stands for internal priority. The range is <0-7>.
PR stands for scheduler-input (the ordering of a flow with respect to a scheduler, which is specified in the hqos scheduler policy).
Mixed Strict Priority and Weighted Fair Queue (SP/WFQ)
Figure 26 depicts the mixed SP/WFQ scheduling model for HQoS customer traffic. The top scheduler input is SP and the bottom 3 scheduler inputs are WFQ.
FIGURE 26 Mixed Strict Priority and Weighted Fair Queue
The supported weight range for each input is <1-64>.
HQoS-QT stands for hqos-queue-type. The range is <8-11>.
IP stands for internal priority. The range is <0-7>.
Weighted Fair Queue and Fair Queue (WFQ/FQ)
Figure 27 depicts the Weighted Fair Queue (WFQ) scheduling model for HQoS customer traffic. All 4 scheduler inputs are WFQ. If all 4 scheduler inputs are equal, the scheduling model is FQ.
FIGURE 27 Weighted Fair Queue and Fair Queue
The supported weight range for each input is <1-64>.
HQoS-QT stands for hqos-queue-type. The range is <8-11>.
IP stands for internal priority. The range is <0-7>.
HQoS egress port scheduler
Figure 28 depicts the Egress Port Scheduler for a port on which HQoS is enabled.
FIGURE 28 HQoS egress port scheduler
The Egress Port Scheduler has 9 SP inputs. The scheduler and queue setup for the first 8 inputs is exactly the same as the current egress port scheduler without HQoS.