For example, a sequence of operations (labeled "Deadlock") as illustrated in Table 2-6 would
result in such a deadlock. Table 2-6 also illustrates the sequence of operations (labeled
"No Deadlock") that would avoid the deadlock.
Propagation of environment variables When working with applications that run on
multiple hosts, you must set values for environment variables on each host that participates
in the job.
A recommended way to accomplish this is to use the -e option in the appfile:

-h remote_host -e var=val [-np #] program [args]
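For example, the following appfile line (the host name remote1, the variable MY_VAR, and the
program my_prog are hypothetical names used only for illustration) starts two processes of
my_prog on remote1 with MY_VAR set to 1 in their environment:

-h remote1 -e MY_VAR=1 -np 2 my_prog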
Alternatively, you can set environment variables using the .cshrc file on each remote host if
you are using a /bin/csh-based shell.
Interoperability Depending upon what server resources are available, applications may
run on heterogeneous systems.
For example, suppose you create an MPMD application that calculates the average
acceleration of particles in a simulated cyclotron. The application consists of a four-process
program called sum_accelerations and an eight-process program called calculate_average.
Because you have access to a K-Class server called K_server and a V-Class server called
V_server, you create the following appfile:
-h K_server -np 4 sum_accelerations
-h V_server -np 8 calculate_average
Then, you invoke mpirun passing it the name of the appfile you created. Even though the two
application programs run on different platforms, all processes can communicate with each
other, resulting in twelve-way parallelism. The four processes belonging to the
sum_accelerations application are ranked 0 through 3, and the eight processes belonging to
the calculate_average application are ranked 4 through 11 because HP MPI assigns ranks in
MPI_COMM_WORLD according to the order the programs appear in the appfile.
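You would launch the job with a command such as mpirun -f my_appfile, where my_appfile is
whatever name you gave the file above (a hypothetical name). The following minimal C sketch
(a hypothetical show_rank routine, not part of the example programs) shows how any of the
twelve processes can confirm its global rank and the total process count in MPI_COMM_WORLD;
built into sum_accelerations it would report ranks 0 through 3 of 12, and built into
calculate_average it would report ranks 4 through 11 of 12.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* global rank: 0..11 in this example */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* 12, the total across both programs  */

        printf("Process %d of %d in MPI_COMM_WORLD\n", rank, size);

        MPI_Finalize();
        return 0;
    }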
Table 2-6    Non-buffered messages and deadlock

    Deadlock                                No Deadlock
    Process 1          Process 2            Process 1          Process 2
    MPI_Send(2,....)   MPI_Send(1,....)     MPI_Send(2,....)   MPI_Recv(1,....)
    MPI_Recv(2,....)   MPI_Recv(1,....)     MPI_Recv(2,....)   MPI_Send(1,....)
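The following C sketch (hypothetical; the buffer contents and message size are arbitrary)
implements the "No Deadlock" ordering from Table 2-6 for two processes: rank 0 plays the role
of Process 1 (send, then receive) and rank 1 plays the role of Process 2 (receive, then send),
so each blocking call has a matching call already in progress on the other side.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value_out = 42, value_in = 0;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* "Process 1" column: send first, then receive. */
            MPI_Send(&value_out, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&value_in, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            /* "Process 2" column: receive first, then send. */
            MPI_Recv(&value_in, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&value_out, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        if (rank < 2)
            printf("Rank %d received %d\n", rank, value_in);

        MPI_Finalize();
        return 0;
    }

If both processes instead called MPI_Send first, as in the "Deadlock" column, each
standard-mode send of a non-buffered message could block waiting for a receive that the other
process never reaches.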