Effects of virtualization and cloud computing on data center networks
Distributing storage and databases across multiple servers, sending requests to those servers, and aggregating the responses are all E/W traffic-intensive operations.
For example, consider how you plan a vacation. You visit the dynamic travel website of your choice and enter your trip details: when and where you want to travel, and whether you need a hotel, a flight, or a car. The site pulls together the appropriate responses from multiple databases, along with related ads, and shows you the options within a matter of seconds. Not only is this process heavy in E/W traffic, because it pulls data from multiple servers, but it is also latency-sensitive. If a travel website cannot serve the data to you within a matter of seconds, you’re likely to go to a competitor.
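The pattern behind that page load is a scatter-gather fan-out. The sketch below is a minimal illustration, not the design of any particular site: the back-end names, the simulated latencies, and the one-second budget are all assumptions.

  import asyncio
  import random

  # Hypothetical back ends; on a real site each would be an E/W call to a
  # separate service or database (flights, hotels, rental cars, ads).
  async def query_backend(name: str) -> str:
      await asyncio.sleep(random.uniform(0.05, 0.3))  # simulated service latency
      return f"{name}: results"

  async def plan_trip() -> list:
      backends = ["flights", "hotels", "cars", "ads"]
      # Scatter: issue every E/W request in parallel.
      tasks = [query_backend(b) for b in backends]
      # Gather: collect all responses, but abandon the fan-out if it exceeds
      # the page's overall latency budget (assumed to be one second here).
      return await asyncio.wait_for(asyncio.gather(*tasks), timeout=1.0)

  if __name__ == "__main__":
      print(asyncio.run(plan_trip()))

Every one of those parallel lookups crosses the data center fabric server-to-server, which is why the traffic profile is dominated by E/W flows rather than by the single N/S request that started it.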
Mobile access devices  
Finally, consider the effect of mobile access devices on data center traffic. Hundreds of thousands of smartphone applications act as thin clients that pull much of their application logic and data from private or public clouds in a data center, placing tremendous loads on the data center’s Ethernet fabrics. These E/W traffic loads are not only bandwidth-sensitive; they are also latency-sensitive. Many internet-based applications, such as travel websites, give their back-end systems only a limited time window in which to retrieve the requested data. If your network infrastructure cannot handle these traffic loads, application responses suffer and customers move on to a competitor’s services.
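As a rough illustration of that time window, the sketch below shows a thin client that grants the back end a fixed deadline; the endpoint URL and the two-second budget are placeholders, not values from any real application.

  import urllib.request

  API_URL = "https://api.example.com/search?dest=lisbon"  # placeholder endpoint
  DEADLINE_S = 2.0  # assumed end-to-end latency budget granted by the client

  def fetch_results() -> str:
      # The thin client does little work itself; the back end must finish its
      # E/W fan-out across servers before this timeout expires.
      try:
          with urllib.request.urlopen(API_URL, timeout=DEADLINE_S) as resp:
              return resp.read().decode("utf-8")
      except OSError:  # covers URLError and socket timeouts
          return "timed out or failed: show cached or partial results instead"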
Limitations of a hierarchical networking structure  
The more E/W traffic you have in a network, the more limitations you may face with a hierarchical 
network structure designed primarily for N/S flow. The challenges include traditional Spanning Tree 
Protocol (STP) limitations, oversubscription, port extension technology, and increased latency. 
STP limitations  
STP detects and prevents loops in L2 networks. Loops can occur when there are multiple active paths between any pair of non-adjacent switches in the network. (Multiple paths between adjacent switches can be bundled with link aggregation technology such as IEEE 802.3ad LACP.) To eliminate loops, STP allows only one active path from one switch to another. If the active path fails, STP automatically selects a backup connection and makes that the active path. Thus, STP blocks all parallel paths to a destination except the one it has selected as active, regardless of how many physical paths exist in the network. Even when the network is operating normally, STP typically reduces the effective available bandwidth by 50% or more. Activating a backup link can also be slow, because STP must re-converge on a new path before traffic flows again.
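To make the 50% figure concrete, here is a back-of-the-envelope sketch; the two-uplink, 10 Gbps topology is purely illustrative.

  # Illustrative only: capacity between two switches when STP blocks every
  # parallel path except one, versus what the installed links could carry.
  LINK_SPEED_GBPS = 10   # assumed speed of each uplink
  PARALLEL_LINKS = 2     # assumed number of uplinks between the two switches

  installed = PARALLEL_LINKS * LINK_SPEED_GBPS  # 20 Gbps of installed capacity
  usable_under_stp = LINK_SPEED_GBPS            # STP forwards on one path only
  blocked = installed - usable_under_stp

  print(f"Installed: {installed} Gbps, usable under STP: {usable_under_stp} Gbps")
  print(f"Blocked: {blocked} Gbps ({blocked / installed:.0%} of capacity idle)")
  # With two equal uplinks, half the capacity sits idle; add more parallel
  # links and the blocked share only grows.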
As businesses move away from client-server applications to more dynamic, latency-sensitive
applications, the limitations of STP-based protocols become more burdensome. As E/W traffic volume 
increases, so does the need to use all available bandwidth and links. STP itself has no capability to 
do dynamic load balancing over multiple paths. Enhancements to STP such as Rapid Spanning Tree 
Protocol (RSTP) and Multiple Spanning Tree Protocol (MSTP) help resolve some of these issues, but at 
the cost of complex manual management. It’s clear the industry requires a new approach.  
Oversubscription  
Depending on the data center architecture you choose, oversubscription can be a problem. For example, if you use the Cisco Unified Computing System (UCS) architecture, you may see oversubscription rates of anywhere from 4:1 to 32:1 into the aggregation layer. Oversubscription can be an especially critical issue if your applications cause a lot of storage movement because of large