User's Guide

Chapter IV. iWARP (RDMA)
Chelsio T5/T4 Unified Wire For Linux Page 78
iii. Edit the make_mpich file and set the MPI_HOME variable to the MPI installation against
which you want to build the benchmarks. For example, for OpenMPI-1.6.4, set the variable as:
MPI_HOME=/usr/mpi/gcc/openmpi-1.6.4/
iv. Next, build and install the benchmarks using:
[root@host]# gmake -f make_mpich
The above step will install the IMB-MPI1, IMB-IO and IMB-EXT benchmarks in the current working
directory (i.e. src).
v. Change your working directory to the MPI installation directory. In the case of OpenMPI,
this is /usr/mpi/gcc/openmpi-x.y.z/
vi. Create a directory called tests and then another directory called imb under tests.
vii. Copy the benchmarks built and installed in step (iv) to the imb directory.
viii. Repeat steps (v), (vi) and (vii) on all the nodes.
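Taken together, steps (v) through (vii) can be sketched as a small shell script to run once per node. This is an illustrative sketch, not part of the product: MPI_ROOT and SRC_DIR are assumptions you should point at your actual OpenMPI installation and IMB src directory.

```shell
#!/bin/sh
# Sketch of steps (v)-(vii): stage the IMB benchmarks under the MPI install
# on one node. Repeat on every node, per step (viii).

# MPI_ROOT is an assumption: on a real node point it at your OpenMPI install,
# e.g. MPI_ROOT=/usr/mpi/gcc/openmpi-1.6.4. The fallback below is only a
# scratch location so the sketch runs anywhere.
MPI_ROOT=${MPI_ROOT:-/tmp/openmpi-demo}

# Directory where step (iv) left the freshly built benchmarks.
SRC_DIR=${SRC_DIR:-src}

# Step (vi): create tests/ and, under it, tests/imb/.
mkdir -p "$MPI_ROOT/tests/imb"

# Step (vii): copy each benchmark that was actually built.
for b in IMB-MPI1 IMB-IO IMB-EXT; do
    if [ -f "$SRC_DIR/$b" ]; then
        cp "$SRC_DIR/$b" "$MPI_ROOT/tests/imb/"
    fi
done

echo "Benchmarks staged under $MPI_ROOT/tests/imb"
```

Running it on each node (for example over ssh from the head node) covers step (viii).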
4.2.5. Running MPI applications
Run an Open MPI application as:
mpirun --host node1,node2 -mca btl openib,sm,self /usr/mpi/gcc/openmpi-x.y.z/tests/imb/IMB-MPI1
For OpenMPI/RDMA clusters with 8 or more nodes and 64 or more processes, you may
experience the following RDMA address resolution error when running MPI jobs with the
default OpenMPI settings:
The RDMA CM returned an event error while attempting to make a connection.
This type of error usually indicates a network configuration error.
Local host: core96n3.asicdesigners.com
Local device: Unknown
Error name: RDMA_CM_EVENT_ADDR_ERROR
Peer: core96n8
Note