Administrator Guide

Copyright © 2020 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries
UPI Utilization
UPI traffic exists because the CPUs constantly communicate with each other to keep up with user requests. SNAP I/O relieves the UPI of this additional overhead by supplying a direct path to both CPUs that does not require traversing the UPI, thereby freeing up UPI bandwidth. It should come as no surprise, then, that UPI traffic loading utilization with SNAP I/O is as low as 7%, while with the standard riser it is 63%.
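The tech note does not say which tool produced these UPI figures. As one hedged sketch, Intel's open-source Processor Counter Monitor (PCM) can report per-link UPI traffic and utilization on Xeon systems, so a comparable measurement on your own server might look like this (the tool names are assumptions, not the authors' stated method):

```shell
# Observe UPI link utilization while a network workload runs.
# Assumes Intel PCM (github.com/intel/pcm) is installed and the
# workload is already generating traffic.

# Sample uncore counters once per second; the UPI columns report
# incoming data traffic and utilization per link.
sudo pcm 1

# Alternatively, watch remote (cross-socket) memory accesses,
# which are what generate UPI traffic in the first place:
sudo pcm-numa 1
```

Comparing the UPI utilization columns with the NIC installed in a standard riser versus the SNAP I/O riser would reproduce the style of comparison shown in Figure 7.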
CPU Utilization
While iperf was running for the latency/bandwidth testing, CPU utilization was monitored. As Figure 8 shows, sender-remote utilization is identical for the SNAP I/O and non-SNAP I/O configurations, so SNAP I/O had no impact there. Receiver-remote utilization, however, improved significantly, dropping from 55% in the non-SNAP I/O configuration to 32% with SNAP I/O. This is due to the even distribution of TCP streams reducing the average cache-miss count on both CPUs.
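The measurement pattern described above can be sketched as follows. The hostname, stream count, and duration here are illustrative assumptions, not values taken from the tech note:

```shell
# Drive traffic with iperf3 while sampling per-CPU utilization
# with mpstat (from the sysstat package).

# On the receiver host, start an iperf3 server:
iperf3 -s &

# On the sender host, run 8 parallel TCP streams for 60 seconds
# (replace receiver-host with the receiver's address):
iperf3 -c receiver-host -P 8 -t 60 &

# Meanwhile, on each host, sample per-CPU utilization once per
# second for the duration of the run; averaging %usr + %sys over
# the run yields utilization figures of the kind shown in Figure 8.
mpstat -P ALL 1 60 > cpu_util.log
```

Pinning the iperf3 processes to specific sockets (for example with `numactl --cpunodebind`) would further isolate the local-socket versus remote-socket behavior the tech note compares.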
Who Will Benefit from SNAP I/O
SNAP I/O is most useful when total cost of ownership (TCO) is the priority and maximum bandwidth and card-level redundancy are not. Customers using a 100GbE NIC who need more than 50 Gb/s per CPU, or who require two-card redundancy, may prefer a two-card solution, which achieves the same latency. SNAP I/O should be used in environments where low latency is a priority and single-NIC bandwidth is unlikely to be the bottleneck. Environments such as containers and databases will thrive with SNAP I/O configured, whereas virtualization environments are not yet compatible with the SNAP I/O riser.
Conclusion
Dual-socket servers using a Non-SNAP I/O riser configuration may suffer from unbalanced I/O or a higher
TCO. Having data travel from the remote socket across the UPI channel to reach the NIC introduces
additional overhead that can degrade performance.
The SNAP I/O solution provides an innovative riser that lets data bypass the UPI channel by connecting a single NIC directly to both CPUs. As shown throughout this tech note, this direct connection delivers higher network bandwidth, lower latency, lower CPU utilization, and lower UPI traffic. Additionally, the SNAP I/O solution is more cost-effective than purchasing a second NIC, cable, and switch port.
Figure 7: Comparison of UPI traffic loading percentages
Figure 8: Bar graphs comparing CPU utilization of sender and receiver remotes for non-SNAP I/O and SNAP I/O configurations