Parallel Programming Guide for HP-UX Systems

HP MPI prints the output from running the hello_world executable in
non-deterministic order. The following is an example of the output:
Hello world! I'm 2 of 4 on wizard
Hello world! I'm 0 of 4 on jawbone
Hello world! I'm 3 of 4 on wizard
Hello world! I'm 1 of 4 on jawbone
Notice that processes 0 and 1 run on jawbone, the local host, while processes 2 and
3 run on wizard. HP MPI guarantees that the ranks of the processes in
MPI_COMM_WORLD are assigned and sequentially ordered according to the order
the programs appear in the appfile. The appfile in this example, my_appfile,
describes the local host on the first line and the remote host on the second line.
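For reference, an appfile of the kind described might look like the following sketch. The host names and process counts mirror this example; the exact path to the executable is illustrative:

```
# my_appfile: one line per host; ranks in MPI_COMM_WORLD are
# assigned sequentially in the order the lines appear.
-h jawbone -np 2 /path/to/hello_world
-h wizard  -np 2 /path/to/hello_world
```

With this file, ranks 0 and 1 run on jawbone and ranks 2 and 3 run on wizard, matching the output shown above.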
Running on multiple hosts using prun (Quadrics system)
This example teaches you to run the hello_world.c application that you built in Building
Applications (above) using two hosts to achieve four-way parallelism on a Quadrics system.
For this example, the local host is named jawbone and a remote host is named wizard. To run
hello_world.c on two hosts, use the following procedure, replacing jawbone and wizard with
the names of your machines:
Step 1. Ensure that the executable is accessible from each host, either by placing it in a
shared directory or by copying it to a local directory on each host.
Step 2. Run the hello_world executable file:
% $MPI_ROOT/bin/mpirun -prun -N 2 -n 4 /path/to/hello_world
All options after -prun are passed directly to prun. In this example, the -N option tells
prun to use 2 hosts, and the -n option starts 4 processes in total.
Types of applications
HP MPI supports two programming styles: SPMD applications and MPMD applications.
Running SPMD applications
A single program multiple data (SPMD) application consists of a single program that is
executed by each process in the application. Each process normally acts upon different
data. Even though this style simplifies the launching of an application, using SPMD can
also make the executable larger and more complicated.
Each process calls MPI_Comm_rank to distinguish itself from all other processes in the
application. It then determines what processing to do.
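As a sketch, the rank-based dispatch described above typically looks like the following in C. The MPI calls are standard; the branch bodies are placeholders for application-specific work (compiling and running this requires an MPI installation):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* initialize the MPI library   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes    */

    if (rank == 0) {
        /* rank 0 often coordinates: distributes work, gathers results */
        printf("Coordinator: %d processes total\n", size);
    } else {
        /* other ranks act upon their own portion of the data */
        printf("Worker %d of %d\n", rank, size);
    }

    MPI_Finalize();
    return 0;
}
```

Every process runs the same executable; the value returned by MPI_Comm_rank is what determines which branch each process takes.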
To run an SPMD application, use the mpirun command like this:
% $MPI_ROOT/bin/mpirun -np # program
where # is the number of processes to start and program is the name of the executable.