Product Brief
NetEffect® Server Cluster Adapters
Low-latency 10 Gigabit Ethernet adapters for high-performance applications

Intel's NetEffect Server Cluster Adapters provide accelerated
10 Gigabit Ethernet processing to benefit some of the most
demanding and latency-sensitive applications, including high
performance computing (HPC) clustering and financial market
data systems. The product line is optimized for scalability to
take advantage of the multi-core environments typically used
with these high performance computing applications.
Powered by second-generation accelerated 10 Gigabit
Ethernet technology, the NetEffect NE020 network controller
provides the protocol processing needed to deliver that
low-latency, scalable performance.
iWARP and Kernel-Bypass
The NetEffect Server Cluster Adapters support iWARP, the
Internet Wide Area RDMA Protocol. iWARP provides a low-latency,
kernel-bypass solution on Ethernet by using RDMA (Remote
Direct Memory Access) semantics. RDMA enables direct access
to a remote system's memory, a capability that can be abstracted
to various application APIs. iWARP is built on top of the TCP/IP
protocol and therefore provides datacenter-compatible connectivity
over standard network infrastructure. It also works with the
standard IP-based management software and Ethernet-based
switches used in datacenters today.
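As a rough illustration of RDMA semantics, the sketch below registers a
memory region with the OFED verbs API (libibverbs), the interface commonly
used to program iWARP adapters under Linux. It is a minimal example, not
code from this brochure: the buffer size is arbitrary, and queue-pair setup
and the exchange of the buffer address and key with the peer are omitted.

/* Minimal sketch (illustrative only): registering a buffer for RDMA
 * with the OFED verbs API (libibverbs).
 * Build with: gcc rdma_reg.c -o rdma_reg -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) {
        fprintf(stderr, "no RDMA device found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register 4 KB so the adapter can DMA into it and a remote peer
     * can target it with RDMA writes, bypassing the kernel entirely. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    /* A peer would need buf's address and mr->rkey to issue RDMA writes;
     * exchanging them (and setting up a queue pair) is omitted here. */
    printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}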
Kernel-bypass (or OS-bypass) is a key element of iWARP because
of its RDMA semantics, but kernel-bypass can also be used
without iWARP. The NetEffect Server Cluster Adapters support
a mode that implements the bypass operation without the
RDMA protocol. This enables standard APIs, like UDP sockets,
to be used with existing applications while also benefiting from
the latency improvements of kernel-bypass.
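To illustrate the point, the sketch below is an ordinary UDP sender written
against the standard sockets API; the address and port are placeholders. A
kernel-bypass stack services these same unmodified calls, so no
iWARP-specific code is needed.

/* Minimal sketch: a plain UDP sender using the standard sockets API.
 * The application code does not change; a kernel-bypass mode handles
 * the same calls while avoiding kernel overhead. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);                       /* placeholder port */
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);  /* placeholder address */

    const char msg[] = "market data tick";
    if (sendto(fd, msg, sizeof msg, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");

    close(fd);
    return 0;
}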
Both modes of operation provide lower latency and more
deterministic behavior with reduced jitter. The end result is a
more efficient network implementation that delivers more
performance to the application.
Multiple media types are supported:

Connector Type   Interconnect Cabling          Maximum Distance   Notes
CX4              Twinax CX4 cables             12 meters          Copper
SFP+             850 nm multimode fiber        300 meters         Requires fiber optic transceiver
SFP+             Twinax direct attach cables   7 meters           Copper
HPC Clustering
High-Performance Computing (HPC) describes a class of
computing that extracts the most performance from the cluster's
compute and fabric resources.
The majority of HPC implementations are now commodity x86
server clusters. In turn, Ethernet and InfiniBand are the prevalent
commodity fabrics of choice.
Workload examples include: Computational Fluid Dynamics,
Computational Chemistry & Material Sciences, Finite Element
Analysis, Bio-Informatics, Climate & Weather Simulation, and
Reservoir Simulation & Visualization.
iWARP provides a low-latency option for Ethernet. NetEffect
Server Cluster Adapters deliver an RDMA interface for various
Upper Layer Protocols (ULPs) including Intel-MPI, Microsoft-MPI,
Open-MPI, MVAPICH2, and uDAPL. For Linux, this is provided
through the OpenFabrics Enterprise Distribution (OFED)
open-source releases that are adopted by commercial distributors,
like Red Hat*. For Windows*, Microsoft* supports the Network
Direct interface in Windows HPC Server 2008.
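As an illustration of how an MPI application exercises the fabric's latency,
here is a minimal ping-pong sketch in C. It is generic MPI code, nothing
adapter-specific, and would be launched across two nodes with any of the
MPI implementations listed above.

/* Minimal MPI ping-pong sketch: rank 0 times one round trip to rank 1.
 * Run with two ranks, e.g.: mpirun -np 2 ./pingpong */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[8] = "ping";
    if (rank == 0) {
        double t0 = MPI_Wtime();
        MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("round trip: %.1f us\n", (MPI_Wtime() - t0) * 1e6);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}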