ParaStation MPI's management and task handling facilities solve all of the nasty problems seen on a compute cluster while handling parallel applications. ParaStation MPI implements a fast and robust startup mechanism for parallel tasks. In addition the ParaStation MPI daemons running on each node of the cluster control the various remotely started processes forming the Parallel Task in a combined effort. They also clean up the whole task if one of the processes dies unexpectedly.
Within this chapter it is assumed that an executable ready to run on a ParaStation MPI cluster does already exist. It might have been built by the end-user using the MPI framework as described in Chapter 4, have been provided by an independent software vendor or can be a native ParaStation MPI application only making use of the low-level communication and management functions as described in Chapter 5. Anyhow, it has to make use of the ParaStation MPI management library as described in the API reference.
This chapter describes how to start a ParaStation MPI aware executable, how the processes will be distributed within the cluster, how to control the distribution strategy used by ParaStation MPI and how the mechanism of input and output redirection works.
Starting a parallel task consisting of N processes is as simple as executing:

$ mpiexec -np N /some/path/to/program

Given the default settings of ParaStation MPI, the N instances of program will be distributed to N free processors in the cluster. For more details, refer to mpiexec(1).
The executable /some/path/to/program has to exist and must be executable for the user on each node. Likewise, the current working directory must be accessible on each node. Otherwise, the user's home directory will be used as current working directory and a warning will be issued. This may lead to unexpected results.
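These per-node prerequisites can be verified before launching the task. The following sketch wraps the two checks in a small shell function; /bin/true merely stands in for the real application binary, and in practice the function would be run on every node, e.g. via ssh:

```shell
#!/bin/sh
# Sketch: check the prerequisites mpiexec relies on, on one node.
# The binary path is a stand-in; run this on each node of the task.
check_prereqs() {
    prog=$1
    # The binary must exist and be executable for the user on this node.
    if [ -x "$prog" ]; then
        echo "executable: yes"
    else
        echo "executable: NO"
    fi
    # The current working directory must be accessible on this node;
    # otherwise the user's home directory is used and a warning is issued.
    if [ -d "$PWD" ] && [ -x "$PWD" ]; then
        echo "cwd: accessible"
    else
        echo "cwd: NOT accessible - home directory will be used"
    fi
}

check_prereqs /bin/true   # stand-in for /some/path/to/program
```

Running the same check via ssh on each node of the intended node list catches missing binaries or unmounted working directories before the parallel task fails at startup.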
Let's have a brief look behind the scenes to understand what happens while starting a parallel task, i.e., when distributing the processes forming the task within the cluster.
The locally started process (the process actually created when mpiexec is executed) connects to the local ParaStation MPI daemon psid(8) and issues a spawn request. This request, which includes the number of processes to start and other necessary information, is forwarded to the master node, which provides a temporary node list. The local process then starts the remote processes via the psid on each of these nodes and afterwards converts itself into an I/O-handling process, the so-called ParaStation MPI Logger. The I/O handling mechanisms are discussed in detail in the section called “Redirecting standard input and output”.
Of course, the distribution of the processes forming the parallel task can be controlled, either to match the site's policy or to tailor the process placement to a given algorithm. Since very different types of applications run on a cluster, ParaStation MPI provides far-reaching features to control the distribution and spawning of processes. The manual pages process_placement(7) and mpiexec(1) discuss this in detail.
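As a brief sketch of what such control looks like, placement can be narrowed through environment variables evaluated at task startup. The variable name PSI_NODES below is taken from process_placement(7) and should be verified against the local installation; the mpiexec call itself is commented out since it requires a running cluster:

```shell
#!/bin/sh
# Sketch: restrict process placement before launching a task.
# PSI_NODES (documented in process_placement(7)) is assumed here;
# check your installation's manual pages for the exact semantics.
export PSI_NODES="0,1,2,3"      # candidate nodes for the spawned processes
echo "placement restricted to nodes: $PSI_NODES"
# mpiexec -np 4 /some/path/to/program   # actual launch on a live cluster
```

Further variables and mpiexec options allow sorting the candidate nodes by load or assigning processes in a round-robin fashion; the details are given in process_placement(7).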