Intel® MPI Library for Linux* OS User’s Guide
Copyright © 2003–2014 Intel Corporation. All Rights Reserved.
Document Number: 315398-012
Contents

1. Introduction
   1.1. Introducing Intel® MPI Library
   1.2. Intended Audience
   1.3. Notational Conventions
Disclaimer and Legal Notices INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT.
Optimization Notice Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors.
1. Introduction This User’s Guide explains how to use the Intel® MPI Library to compile and run a simple MPI program. This guide also includes basic usage examples and troubleshooting tips. To quickly start using the Intel® MPI Library, print this short guide and walk through the example provided.
[ items ]          Optional items
{ item | item }    Selectable items separated by vertical bar(s)
(SDK only)         For Software Development Kit (SDK) users only

1.4. Related Information
To get more information about the Intel® MPI Library, explore the following resources:
See the Intel® MPI Library Release Notes for updated information on requirements, technical support, and known limitations.
2. Using the Intel® MPI Library
This section describes the basic Intel® MPI Library usage model and demonstrates typical usage of the Intel® MPI Library.

2.1. Usage Model
Using the Intel® MPI Library involves the following steps:
Figure 1: Flowchart representing the usage model for working with the Intel® MPI Library.

2.2. Before You Begin
Before using the Intel® MPI Library, ensure that the library, scripts, and utility applications are installed.
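The flowchart itself is not reproduced in this extract, but the workflow it depicts can be sketched as a shell session. The install path, source file, and process count below are placeholders for illustration, not values from this guide:

```
$ source <installdir>/bin/mpivars.sh    # set up the environment (section 2.5)
$ mpiicc -o myprog test.c               # (SDK only) compile the program
$ mpirun -n 4 ./myprog                  # run the program with 4 processes
```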
To compile your MPI program:
1. (SDK only) Make sure you have a compiler in your PATH. To find the path to your compiler, run the which command on the desired compiler. For example:
   $ which icc
   /opt/intel/composerxe-2013/bin/intel64/icc
2. (SDK only) Compile a test program using the appropriate compiler driver. For example:
   $ mpiicc -o myprog /test/test.c
To run your MPI program:
1.
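Step 1 above can be scripted. The helper below is a hypothetical convenience, not part of the Intel MPI Library, and icc may not be present on every machine, so the check is written to report rather than fail:

```shell
# Check whether a compiler driver is available in PATH (step 1 above).
check_compiler() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found $1: $(command -v "$1")"
    else
        echo "$1 not found in PATH"
        return 1
    fi
}

check_compiler icc || true   # Intel compiler from the text; may be absent here
check_compiler sh            # always resolvable, shown for contrast
```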
2.5. Setting up the Intel® MPI Library Environment
The Intel® MPI Library uses the Hydra process manager. To run programs compiled with the mpiicc (or related) commands, make sure your environment is set up correctly.
1. Set up the environment variables with appropriate values and directories. For example, in the .cshrc or .bashrc files:
   Ensure that the PATH variable includes the //bin directory. Use the mpivars.[c]sh scripts included with the Intel MPI Library to set up this variable.
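For reference, the equivalent manual setup in a .bashrc file might look like the following sketch. The install prefix is an assumption for illustration only; sourcing mpivars.sh is the supported way to achieve the same result:

```shell
# Hypothetical Intel MPI install prefix -- adjust to your installation.
I_MPI_ROOT=/opt/intel/impi
# mpivars.sh performs equivalent updates; shown manually here for clarity.
export PATH="$I_MPI_ROOT/bin:$PATH"
export LD_LIBRARY_PATH="$I_MPI_ROOT/lib:$LD_LIBRARY_PATH"
```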
Arguments

Argument     Definition
<fabric>     Define a network fabric. Possible values:
  shm        Shared-memory
  dapl       DAPL-capable network fabrics, such as InfiniBand*, iWarp*, Dolphin*, and XPMEM* (through DAPL*)
  tcp        TCP/IP-capable network fabrics, such as Ethernet and InfiniBand* (through IPoIB*)
  tmi        Network fabrics with tag matching capabilities through the Tag Matching Interface (TMI), such as Intel® True Scale Fabric and Myrinet*
  ofa        Network fabric, such as InfiniBand* (through OFED* verbs)
If you are using a network fabric different from the default fabric, use the -genv option to assign a value to the I_MPI_FABRICS variable. For example, to run an MPI program using the shm fabric, type in the following command:
$ mpirun -genv I_MPI_FABRICS shm -n <# of processes> ./myprog
For a dapl-capable fabric, use the following command:
$ mpirun -genv I_MPI_FABRICS dapl -n <# of processes> ./myprog
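I_MPI_FABRICS also accepts a colon-separated intra-node:inter-node pair, so the library uses one fabric within a node and another between nodes. The shm:dapl pairing shown below is a common combination; verify against your library version before relying on it:

```
$ mpirun -genv I_MPI_FABRICS shm:dapl -n <# of processes> ./myprog
```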
#PBS -l walltime=1:30:00
#PBS -q workq
#PBS -V
# Set Intel MPI environment
mpi_dir=//bin
cd $PBS_O_WORKDIR
source $mpi_dir/mpivars.sh
# Launch application
mpirun -n <# of processes> ./myprog
2. Submit the job using the PBS qsub command:
$ qsub pbs_run.sh
When using mpirun under a job scheduler, you do not need to determine the number of available nodes.
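Because Hydra can query the scheduler for the granted allocation, the -n option may be omitted so that all allocated slots are used. The resource request below is an illustrative PBS Pro style example, not taken from this guide:

```
#PBS -l select=2:ncpus=8
#PBS -V
cd $PBS_O_WORKDIR
mpirun ./myprog    # uses the slots granted by the scheduler
```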
Hello world: rank 1 of 4 running on clusternode1
Hello world: rank 2 of 4 running on clusternode2
Hello world: rank 3 of 4 running on clusternode2
Alternatively, you can explicitly set the number of processes to be executed on each host through the use of argument sets. One common use case is when employing the master-worker model in your application. For example, the following command equally distributes the four processes on clusternode1 and on clusternode2:
$ mpirun -n 2 -host clusternode1 ./myprog : -n 2 -host clusternode2 ./myprog
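Argument sets can also launch different executables per host, which is how a master-worker layout is typically expressed. The program names master and worker below are placeholders, not binaries from this guide:

```
$ mpirun -n 1 -host clusternode1 ./master : -n 3 -host clusternode2 ./worker
```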
2.9.2. Running an MPI Application
To run an MPI application on the host node and the Intel® Xeon Phi™ coprocessor, do the following:
1. Ensure that NFS is properly set up between the hosts and the Intel® Xeon Phi™ coprocessor(s). For information on how to set up NFS on the Intel® Xeon Phi™ coprocessor(s), visit the Intel® Xeon Phi™ coprocessor developer community at http://software.intel.com/en-us/mic-developer.
2.
3. Troubleshooting
This section explains how to test the Intel® MPI Library installation and how to run a test program.

3.1. Testing the Installation
To ensure that the Intel® MPI Library is installed and functioning correctly, complete the general testing below, in addition to compiling and running a test program.
To test the installation (on each node of your cluster):
1.
You should see one line of output for each rank, as well as debug output indicating the shared-memory and DAPL-capable network fabrics are being used. Test any other fabric using:
$ mpirun -n 2 -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS <fabric> ./myprog
where <fabric> is a supported fabric. For more information, see Selecting a Network Fabric.