High-Performance Computing Cluster
Gigabit Ethernet is typically used for the following purposes in high-performance
computing cluster (HPCC) applications:

Inter-process communications (IPC): For applications that do not require
low-latency, high-bandwidth interconnects (such as Myrinet™ or InfiniBand®),
Gigabit Ethernet can be used for communication between the compute nodes
(see the sketch following this list).

I/O: Ethernet can be used for file sharing and for serving data to the compute
nodes through an NFS server or a parallel file system such as PVFS.

Management and administration: Ethernet is used for out-of-band (Dell
Embedded Remote Access [ERA]) and in-band (Dell OpenManage™ Server
Administrator [OMSA]) management of the cluster nodes. It can also be used for
job scheduling and monitoring.
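As an illustration of the IPC role, the following is a minimal sketch (not part of
the Dell or Marvell offering) of an MPI round-trip message between two compute
nodes over the Gigabit Ethernet interconnect. It assumes an MPI implementation
such as Open MPI or MPICH is installed; the host names used to launch the job
are placeholders.

/* Launch across two compute nodes, for example:
 *   mpirun -np 2 -host node1,node2 ./pingpong
 * (node1 and node2 are placeholder host names)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, msg = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    if (rank == 0) {
        /* Rank 0 sends a message and waits for the echo. */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Round trip over the cluster interconnect: %f s\n",
               MPI_Wtime() - t0);
    } else if (rank == 1) {
        /* Rank 1 echoes the message back. */
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}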
In Dell’s current HPCC offerings, only one of the on-board adapters is used. If
Myrinet or InfiniBand is present, this adapter serves I/O and administration
purposes; otherwise, it is also responsible for IPC. In case of an adapter failure,
the administrator can use the Felix¹ package to easily configure the second
(standby) adapter. Adapter teaming on the host side is neither tested nor
supported in HPCC.
Advanced Features
PXE is used extensively for the deployment of the cluster (installation and
recovery of compute nodes). Teaming is typically not used on the host side, nor is
it part of the Marvell standard offering. Link aggregation is commonly used
between switches, especially for large configurations. Jumbo frames, although not
part of the Marvell standard offering, may provide a performance improvement for
some applications because of the reduced CPU overhead.
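Because any benefit from jumbo frames depends on the MTU actually configured
on each node, the following minimal sketch, assuming a Linux compute node,
reads the MTU of an interface through the SIOCGIFMTU ioctl so that a
jumbo-frame setting (typically MTU 9000) can be verified. The interface name
eth0 is an assumption; substitute the adapter in use.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket works for this ioctl */
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* assumed interface name */

    if (ioctl(fd, SIOCGIFMTU, &ifr) < 0) {
        perror("SIOCGIFMTU");
        close(fd);
        return 1;
    }

    printf("%s MTU = %d (9000 is typical when jumbo frames are enabled)\n",
           ifr.ifr_name, ifr.ifr_mtu);
    close(fd);
    return 0;
}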
¹ The 32-bit HPCC configurations from Dell come with the Felix 3.1 Deployment
solution stack. Felix is a collaborative effort between MPI Software Technologies
Inc. (MSTI) and Dell.