Network congestion and QoS design
When you provide QoS in a network, one of the major elements you must consider is congestion and the traffic management behavior during congestion. Congestion in a network is caused by many different conditions and events, including node failures, link outages, broadcast storms, and user traffic bursts.
At a high level, three main types or stages of congestion exist:
1. no congestion
2. bursty congestion
3. severe congestion
In a noncongested network, QoS actions ensure that delay-sensitive applications, such as real-
time voice and video traffic, are sent before lower-priority traffic. The prioritization of delay-
sensitive traffic is essential to minimize delay and reduce or eliminate jitter, which has a
detrimental impact on these applications.
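As a rough illustration of this behavior, the following Python sketch models strict-priority scheduling: the scheduler always services the highest-priority non-empty queue, so delay-sensitive traffic is sent ahead of lower-priority traffic. This is a minimal sketch, not the Virtual Services Platform 4000 implementation; the priority names and queue count are assumptions for the example.

```python
from collections import deque

# Hypothetical priority levels; real platforms map DSCP/802.1p values to queues.
EF_VOICE, AF_VIDEO, BEST_EFFORT = 2, 1, 0

class StrictPriorityScheduler:
    """Always serve the highest-priority non-empty queue first."""

    def __init__(self, num_queues=3):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        # Walk queues from highest to lowest priority.
        for queue in reversed(self.queues):
            if queue:
                return queue.popleft()
        return None  # no traffic waiting

sched = StrictPriorityScheduler()
sched.enqueue("bulk-data", BEST_EFFORT)
sched.enqueue("voice-frame", EF_VOICE)
print(sched.dequeue())  # voice-frame is sent before bulk-data
```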
A network can experience momentary bursts of congestion for various reasons, such as
network failures, rerouting, and broadcast storms. Virtual Services Platform 4000 has sufficient
capacity to handle bursts of congestion in a seamless and transparent manner. If the burst is
not sustained, the traffic management and buffering process on the switch allows all the traffic
to pass without loss.
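The following minimal sketch illustrates why a short burst above line rate passes without loss while the same overload, if sustained, does not. The link rate, buffer size, and burst figures are assumed values for illustration only and do not reflect the actual buffer capacity of the switch.

```python
# Assumed figures: a 10 Gb/s link drained at line rate with a 2 MB packet buffer.
LINK_RATE_BPS = 10e9          # drain rate (bits per second)
BUFFER_BITS   = 2e6 * 8       # assumed 2 MB of packet buffer

def bits_lost(offered_bps, duration_s):
    """Bits dropped when traffic arrives at offered_bps for duration_s."""
    excess = max(offered_bps - LINK_RATE_BPS, 0) * duration_s
    return max(excess - BUFFER_BITS, 0)

# A 10 ms burst at 11 Gb/s fits in the buffer: no loss.
print(bits_lost(11e9, 0.010))   # 0.0
# The same overload sustained for 1 s overflows the buffer: traffic is lost.
print(bits_lost(11e9, 1.0))     # ~9.84e8 bits dropped
```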
Severe congestion is defined as a condition where the network or certain elements of the
network experience a prolonged period of sustained congestion. Under such congestion
conditions, congestion thresholds are reached, buffers overflow, and a substantial amount of
traffic is lost.
After Virtual Services Platform 4000 detects severe congestion, the switch discards traffic based on drop precedence values. This mode of operation ensures that high-priority traffic is not discarded before lower-priority traffic.
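A simplified way to picture discard based on drop precedence is a set of queue-depth thresholds, one per drop precedence value, where more-droppable traffic is discarded at a lower depth. The threshold values and precedence levels in this sketch are assumptions for illustration, not the Virtual Services Platform 4000 defaults.

```python
# Minimal sketch of discard based on drop precedence. Higher drop precedence
# means the packet is more eligible to be discarded during congestion.
QUEUE_LIMIT = 1000  # packets

# Packets with higher drop precedence are dropped at lower queue depths,
# so lower-precedence (more important) traffic survives longer.
DROP_THRESHOLDS = {
    2: 600,          # high drop precedence: discarded first
    1: 800,
    0: QUEUE_LIMIT,  # low drop precedence: discarded only when the queue is full
}

def admit(queue_depth, drop_precedence):
    """Return True if the packet is enqueued, False if it is discarded."""
    return queue_depth < DROP_THRESHOLDS[drop_precedence]

print(admit(700, 2))  # False: droppable traffic is discarded first
print(admit(700, 0))  # True: high-priority traffic is still admitted
```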
When you perform traffic engineering and link capacity analysis for a network, the standard design rule is to size network links and trunks for a maximum average-peak utilization of no more than 80%. That is, traffic can momentarily peak to 100% of link capacity, but the average-peak utilization must not exceed 80%. This headroom allows the network to absorb momentary peaks of offered traffic that exceed link capacity.
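The following sketch shows the arithmetic behind this rule, assuming per-interval peak utilization samples for a single link; the sample values are hypothetical.

```python
# Minimal sketch of the 80% average-peak design check, assuming each sample is
# the peak utilization observed in one measurement interval.
DESIGN_LIMIT = 0.80

def average_peak_utilization(peak_samples):
    return sum(peak_samples) / len(peak_samples)

# Hypothetical per-interval peak utilization on a link (1.0 = 100% of capacity).
peaks = [0.65, 0.70, 1.00, 0.75, 0.80]
avg_peak = average_peak_utilization(peaks)

print(f"average-peak utilization: {avg_peak:.0%}")  # 78%
print("within design rule" if avg_peak <= DESIGN_LIMIT else "exceeds 80% design rule")
```

In this example one interval peaks at 100% of capacity, but the average of the peaks is 78%, so the link still satisfies the 80% design rule.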
QoS examples and recommendations
The sections that follow present QoS network scenarios for bridged and routed traffic over the
core network.