Chelsio Communications www.chelsio.com firstname.lastname@example.org +1-408-962-3600
10GbE iWARP Adapter
RDMA TCP/IP Cluster Computing
CHELSIO’S R310E 10GbE iWARP Adapter is a protocol-offloading 10 Gigabit Ethernet adapter with a PCI Express host bus interface for servers and storage systems. The third-generation technology from Chelsio provides the highest 10GbE performance available and dramatically lowers host-system CPU communications overhead.
With on-board hardware that offloads iWARP RDMA processing from its
host system, the R310E frees up host CPU cycles for useful applications.
The system gets increased bandwidth, improved overall performance,
and reduced message latency across all applications.
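The key mechanism behind that offload is direct data placement: the adapter writes incoming payloads straight into pre-registered application buffers, skipping intermediate copies. As a rough software analogy only (this sketch uses ordinary loopback TCP sockets and Python's standard library, not Chelsio's hardware or any RDMA API), `socket.recv_into` similarly fills a caller-supplied buffer in place instead of allocating a new one per receive:

```python
import socket
import threading

def serve(listener: socket.socket, payload: bytes) -> None:
    # Accept one connection and send the payload.
    conn, _ = listener.accept()
    with conn:
        conn.sendall(payload)

# Loopback server on an ephemeral port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
payload = b"direct data placement demo"
t = threading.Thread(target=serve, args=(listener, payload))
t.start()

# Client: pre-allocate the destination buffer once, then let
# recv_into() fill it in place -- no per-receive allocation or
# extra user-space copy, loosely analogous to DDP placing data
# into registered application memory.
buf = bytearray(len(payload))
with socket.create_connection(listener.getsockname()) as sock:
    view = memoryview(buf)
    received = 0
    while received < len(buf):
        n = sock.recv_into(view[received:])
        if n == 0:
            break
        received += n

t.join()
listener.close()
print(bytes(buf))  # b'direct data placement demo'
```

The difference, of course, is that with iWARP the placement happens in adapter hardware, so neither the kernel TCP stack nor the host CPU touches the data path.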
This combination makes it practical to converge other networks that traditionally used niche technologies onto 10GbE. High bandwidth and extremely low latency make 10GbE the best technology for high-performance cluster computing (HPCC) fabrics.
By using Chelsio’s R310E, enterprises can cost-effectively connect servers and storage systems directly to the 10GbE backbone over lower-cost CX4 copper or active optical cabling and switching infrastructure.
As an upgrade or alternative to aggregated Gigabit Ethernet links, 10GbE boosts connection bandwidth and simplifies cabling, installation, and maintenance. The R310E also provides the additional bandwidth needed to consolidate server functions on fewer, more powerful systems, simplifying management and reducing costs for servers, rack space, power consumption, and maintenance.
Applications with large data sets benefit from a high-speed distributed platform, including video rendering and distribution, data visualization such as remote medical imaging and climatic modeling, and bioinformatics applications such as DNA sequencing.
• RDMA-enabled NIC (RNIC) specifically optimized for cluster computing
• Reduces host CPU utilization by up to 90% compared to NICs without full protocol offload
• PCI Express x8 host bus interface
• Line-rate 10Gbps full-duplex
• Integrated traffic manager and QoS
• Supports RNIC-PI, kDAPL, and OpenFabrics 1.2
• Powerful per-connection, per-server, and per-interface configuration and management
• Scale up servers and NAS systems
• Link servers in multiple facilities to synchronize data centers
• Consolidate LAN, SAN, and cluster networks
• Deploy Ethernet-only networking for cluster fabric, LAN, and SAN
• Standards-compliant iWARP RDMA plus direct data placement (DDP)
• Seamlessly runs existing InfiniBand applications
• Very low latency Ethernet
• Increase cluster fabric bandwidth
High Performance Cluster Computing