Parallel Programming Guide for HP-UX Systems

Introduction to parallel environments
Parallel programming model
In the message-passing model, a parallel application consists of a number of processes that run
concurrently. Each process has its own local memory and communicates with other processes by
sending and receiving messages. When data is passed in a message, both processes must
cooperate to transfer the data from the local memory of one to the local memory of the other.
Under the message-passing paradigm, functions allow you to explicitly spawn parallel
processes, communicate data among them, and coordinate their activities. Unlike the
previous model, there is no shared memory. Each process has its own private 16-terabyte
(Tbyte) address space, and any data that must be shared must be explicitly passed between
processes. Figure 1-2 shows a layout of the message-passing paradigm.
Figure 1-2 Message-passing programming model
Support for message passing allows programs written under this paradigm for distributed-memory
systems to be easily ported to HP servers. Programs that require more per-process memory
than is possible under shared memory also benefit from the manually tuned message-passing style.
For more information about HP MPI, see the HP MPI User’s Guide and the MPI Reference.
[Figure 1-2 depicts a distributed memory model: four nodes, each with its own CPU, memory, and I/O.]