Controlling process placement

ParaStation MPI includes sophisticated functions to control the placement of the processes of newly created parallel and serial tasks. Each such process typically requires a dedicated CPU (core). Upon task startup, the environment variables PSI_NODES, PSI_HOSTS and PSI_HOSTFILE are looked up, in this order, to obtain a predefined node list; if none of them is defined, all currently known nodes are taken into account. In addition, the variables PSI_NODES_SORT, PSI_LOOP_NODES_FIRST, PSI_EXCLUSIVE and PSI_OVERBOOK are observed. Based on these variables and the list of currently active processes, a sorted list of nodes is constructed, forming the final node list for the new task.
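The lookup order described above can be sketched in the shell. This is a minimal illustration only: the node IDs, hostnames, file path and sort value shown are assumptions for the example, not values taken from a real installation; consult ps_environment(5) for the values your setup accepts.

```shell
# Only one of the three node-list variables is needed; they are
# consulted in the order PSI_NODES, PSI_HOSTS, PSI_HOSTFILE.
export PSI_NODES="0-3"               # select nodes by node ID (assumed syntax)
# export PSI_HOSTS="node01 node02"   # or select nodes by hostname
# export PSI_HOSTFILE="$HOME/nodes"  # or read hostnames from a file
export PSI_NODES_SORT="PROC"         # sort criterion (assumed value)
echo "using nodes: $PSI_NODES"
```

Any task started afterwards (e.g. via mpiexec) inherits these settings from the environment.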

Besides these environment variables, node reservations for users and groups are also observed. See psiadmin(1).

In addition, only nodes that are currently available will be used to start processes; nodes that are currently unavailable are ignored.

Obeying all these restrictions, the processes forming a parallel task are spawned on the nodes of the final node list. On SMP systems, all available CPUs (cores) of a node may be used for consecutive ranks, depending on the environment variable PSI_LOOP_NODES_FIRST.
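The effect of PSI_LOOP_NODES_FIRST on consecutive ranks can be sketched as follows, assuming a final node list of two hypothetical 2-core SMP nodes (node0, node1); the exact semantics are defined in ps_environment(5), this only models the two placement patterns.

```shell
# PSI_LOOP_NODES_FIRST is assumed unset here (the default behaviour):
# the cores of one node are filled before moving to the next node.
# With PSI_LOOP_NODES_FIRST set, nodes are cycled for consecutive ranks.
map=""
for r in 0 1 2 3; do
  if [ -n "$PSI_LOOP_NODES_FIRST" ]; then
    node=$(( r % 2 ))   # round-robin over the two nodes
  else
    node=$(( r / 2 ))   # fill both cores of node0 first
  fi
  map="$map rank$r:node$node"
done
echo "$map"   # default: rank0:node0 rank1:node0 rank2:node1 rank3:node1
```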

Note

For administrative tasks that do not require a dedicated CPU (core), e.g. processes spawned using pssh, different strategies apply. Since such processes are intended to run on particular nodes predefined by the user, the procedure described above is circumvented and the processes are started on the user-defined nodes.

For a detailed discussion of placing processes within ParaStation, please refer to process placement(7), ps_environment(5), pssh(8) and mpiexec(8).