
Dell - Internal Use - Confidential
Introducing 100Gbps with Intel® Omni-Path Fabric in HPC
By Munira Hussain, Deepthi Cherlopalle
This blog introduces Intel® Omni-Path Fabric, a cluster network fabric used for inter-node application, management, and storage communication in High Performance Computing (HPC). It is part of the Intel® Scalable System Framework and builds on IP from the QLogic TrueScale and Cray Aries interconnects. The goal of Omni-Path is to eventually meet the performance and scalability demands of exascale data centers.
Dell provides a complete, validated, and supported solution offering that includes Dell Networking H-Series Fabric switches and Host Fabric Interface (HFI) adapters. The Omni-Path HFI is a PCIe Gen3 x16 adapter capable of 100 Gbps unidirectional bandwidth per port; each port runs four lanes at 25 Gbps per lane.
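As a back-of-the-envelope check (a sketch using standard link math, not Dell- or Intel-published figures), the arithmetic below confirms that four 25 Gbps lanes add up to the 100 Gbps port rate, and that a PCIe Gen3 x16 slot (8 GT/s per lane with 128b/130b encoding) has enough usable bandwidth to feed the port:

```python
# Back-of-the-envelope bandwidth check for the Omni-Path HFI.
# All figures are generic PCIe/link arithmetic, not vendor specifications.

# Omni-Path port: 4 lanes at 25 Gbps per lane
op_lanes = 4
op_lane_gbps = 25
op_port_gbps = op_lanes * op_lane_gbps           # 100 Gbps unidirectional

# PCIe Gen3: 8 GT/s per lane, 128b/130b encoding overhead
pcie_lanes = 16
pcie_gtps_per_lane = 8
encoding_efficiency = 128 / 130
pcie_usable_gbps = pcie_lanes * pcie_gtps_per_lane * encoding_efficiency

print(f"Omni-Path port rate:   {op_port_gbps} Gbps")
print(f"PCIe Gen3 x16 usable:  {pcie_usable_gbps:.1f} Gbps")
print(f"Slot can feed the port: {pcie_usable_gbps > op_port_gbps}")
```

The x16 slot provides roughly 126 Gbps of usable unidirectional bandwidth, so the PCIe link is not the bottleneck for a single 100 Gbps port.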
HPC Program Overview with Omni-Path:
The current solution program is based on Red Hat Enterprise Linux 7.2 (kernel version 3.10.0-327.el7.x86_64). The Intel® Fabric Suite (IFS) drivers are integrated into the current software solution stack, Bright Cluster Manager 7.2, which helps deploy, provision, install, and configure an Omni-Path cluster seamlessly.
The following Dell servers support Intel® Omni-Path Host Fabric Interface (HFI) cards:
PowerEdge R430, PowerEdge R630, PowerEdge R730, PowerEdge R730XD, PowerEdge R930, PowerEdge C4130, PowerEdge C6320
The fabric is managed and monitored using the Fabric Manager (FM) GUI available from Intel®. The FM GUI provides in-depth analysis and a graphical overview of fabric health, including a detailed breakdown of port status, topology mapping, and investigative reports on errors.
Figure 1: Fabric Manager GUI
