Connectivity Guide

Table 51. Default settings for PFC
Speed                                                   10G    25G    40G    50G    100G
Default reserved buffer for S4000, S4048-ON, S6010-ON   9KB    NA     9KB    NA     NA
Default reserved buffer for S41xx, Z9100-ON             9KB    9KB    18KB   18KB   36KB
Default Xoff threshold                                  36KB   45KB   75KB   91KB   142KB
Default Xon threshold                                   9KB    9KB    9KB    9KB    9KB
Default dynamic shared buffer threshold (alpha value)   9KB    9KB    9KB    9KB    9KB
NOTE: The supported speed varies for different platforms. After the reserved buffers are used, each PFC starts consuming
shared buffers from the lossless pool, with the alpha value determining the threshold.
You can override the default priority group settings when you enable LLFC or PFC.
1 Create a network-qos type class-map to match the traffic classes. For LLFC, match all the traffic classes from 0 to 7. For PFC, match
the required traffic class.
OS10(config)# class-map type network-qos tc
OS10(config-cmap-nqos)# match qos-group 0-7
2 Create a network-qos type policy-map to define the actions for traffic classes, such as a buffer configuration and threshold.
OS10(config)# policy-map type network-qos buffer
OS10(config-pmap-network-qos)# class tc
OS10(config-pmap-c-nqos)# pause buffer-size 300 pause-threshold 200 resume-threshold 100
OS10(config-pmap-c-nqos)# queue-limit thresh-mode dynamic 5
Congure egress buer
All port queues are allocated with reserved buffers. When the reserved buffers are consumed, each queue starts using the shared buffers
from the default pool.
The reserved buffer per queue is 1664 bytes for the speeds of 10G, 25G, 40G, 50G, and 100G. The default dynamic shared buffer threshold
is 8.
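The dynamic threshold works the same alpha-based way as on ingress: a queue that has exhausted its reserved buffer may keep growing only while its shared-pool usage stays below alpha times the currently free shared buffer. The following Python sketch illustrates that admission check; the mapping from the configured `thresh-mode dynamic` index to an alpha multiplier is platform-specific, so the alpha value used here is an illustrative assumption, not an OS10 constant.

```python
# Illustrative sketch of alpha-based dynamic shared-buffer thresholding.
# The 0.5 alpha below is an assumed example value; the real mapping from
# the CLI "thresh-mode dynamic <n>" index to alpha is platform-specific.

def can_enqueue(queue_used, reserved, free_shared, alpha):
    """Decide whether a queue may accept more data into the buffer."""
    # Traffic within the queue's reserved allocation is always admitted.
    if queue_used < reserved:
        return True
    # Beyond the reservation, the queue may only grow while its shared-pool
    # usage stays under alpha * (currently free shared buffer).
    return (queue_used - reserved) < alpha * free_shared
```

Because the limit scales with the free shared buffer, every queue's ceiling shrinks as the pool drains, so no single congested queue can monopolize the shared pool.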
1 Create a queuing type class-map to match the queue.
OS10(config)# class-map type queuing q1
OS10(config-cmap-queuing)# match queue 1
2 Create a queuing type policy-map to define the actions for queues, such as a buffer configuration and threshold.
OS10(config)# policy-map type queuing q-buffer
OS10(config-pmap-queuing)# class q1
OS10(config-pmap-c-que)# queue-limit queue-len 200 thresh-mode dynamic 5
Congestion avoidance
Congestion avoidance anticipates congestion and takes action to avoid it. The following mechanisms avoid congestion:
Tail drop—Packets are buffered at traffic queues. When the buffers are exhausted or reach the configured threshold, excess packets
are dropped. By default, OS10 uses tail drop for congestion avoidance.
Random early detection (RED)—Tail drop does not consider individual flows in buffer utilization. When multiple hosts start
retransmission at the same time, tail drop causes TCP global re-synchronization. Instead of waiting for the queue to fill up completely, RED starts
dropping excess packets with a certain drop probability when the average queue length exceeds the configured minimum threshold.
The early drop ensures that only some of the TCP sources slow down, which avoids global TCP re-synchronization.
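The RED behavior described above can be sketched in a few lines of Python. The parameter names (minimum/maximum threshold, maximum drop probability, EWMA weight) follow the classic RED description and are not OS10 CLI keywords.

```python
# Sketch of classic RED: drop probability rises linearly with the
# averaged queue length between a minimum and maximum threshold.
# All names and values here are illustrative, not OS10 syntax.

def red_drop_probability(avg_qlen, min_th, max_th, max_p):
    """Probability of dropping an arriving packet."""
    if avg_qlen < min_th:
        return 0.0          # below minimum threshold: no early drop
    if avg_qlen >= max_th:
        return 1.0          # above maximum threshold: behaves like tail drop
    # Between the thresholds, probability rises linearly up to max_p.
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def update_avg(avg, qlen, weight=0.002):
    """EWMA of the instantaneous queue length, smoothing out bursts."""
    return (1 - weight) * avg + weight * qlen
```

Because the drop decision is probabilistic, only a random subset of flows loses packets at any instant, which is exactly what prevents all TCP sources from backing off in lockstep.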
Weighted random early detection (WRED)—Allows different drop probabilities and thresholds for each color — red, yellow, green
— of traffic. You can configure the drop characteristics for three different flows by assigning the colors to the flows. Assign colors to a
particular flow or traffic using various methods, such as ingress policing, qos input policy-maps, and so on.