Chapter 4. Configuration

Table of Contents

Configuration of the ParaStation MPI system
Enable optimized network drivers
Testing the installation

After successfully installing the ParaStation MPI software, only a few modifications to the configuration file parastation.conf(5) are required to enable ParaStation MPI on the local cluster.

Configuration of the ParaStation MPI system

This section describes the basic configuration procedure to enable ParaStation MPI. It covers the configuration of ParaStation MPI using TCP/IP (Ethernet) and the optimized ParaStation protocol p4sock.

The configuration work is primarily reduced to editing the central configuration file parastation.conf, which is located in /etc.

A template file can be found in /opt/parastation/config/parastation.conf.tmpl. Copy this file to /etc/parastation.conf and edit it as appropriate.

This section describes all parameters of /etc/parastation.conf necessary to customize ParaStation MPI for a basic cluster environment. A detailed description of all possible configuration parameters in the configuration file can be found within the parastation.conf(5) manual page.

The following steps have to be executed on the frontend node to configure the ParaStation MPI daemon psid(8):

  1. Copy template

    Copy the file /opt/parastation/config/parastation.conf.tmpl to /etc/parastation.conf.

    The template file contains all parameters known by the ParaStation MPI daemon psid(8). Most of these parameters are set to their default values within lines marked as comments; only those that have to be modified in order to adapt ParaStation MPI to the local environment are enabled. Additionally, every parameter is explained by accompanying comments. A more detailed description of all the parameters can be found in the parastation.conf(5) manual page.

    The template file is a good starting point for a working ParaStation MPI configuration on your cluster. Besides basic information about the cluster, it defines all hardware components ParaStation MPI is able to handle. Since these definitions require deeper knowledge of ParaStation MPI, it is easiest to simply copy the template file.
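
    On the frontend node, copying the template thus boils down to:

      # cp /opt/parastation/config/parastation.conf.tmpl /etc/parastation.conf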

  2. HWType

    In order to tell ParaStation MPI which general kind of communication hardware should be used, the HWType parameter has to be set. It can be overridden on a per-node basis within the Nodes section (see below).

    For clusters running ParaStation MPI with the optimized ParaStation MPI communication stack on Ethernet hardware of any flavor, this parameter has to be set to:

      HWType { p4sock ethernet }

    This will use the optimized ParaStation MPI protocol, if available. Otherwise, TCP/IP will be used.

    The values that might be assigned to the HWType parameter have to be defined within the parastation.conf configuration file. Have a brief look at the various Hardware sections of this file in order to find out which hardware types are actually defined.

    Other possible types are: mvapi, openib, gm, ipath, elan, dapl.
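
    As an illustration only, a cluster communicating over InfiniBand via the openib layer, with TCP/IP as a fallback, might use a setting like the following, provided the corresponding Hardware sections are defined in the configuration file:

      HWType { openib ethernet }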

    Note

    To enable shared memory communication within SMP nodes, no dedicated hardware entry is required; shared memory support is always enabled by default. As there are no options for shared memory, no dedicated Hardware section for this kind of interconnect is provided.

  3. Define Nodes

    Furthermore, ParaStation MPI has to be told which nodes are part of the cluster. The usual way of using the Nodes parameter is the environment mode, which is already enabled in the template file.

    The general syntax of the Nodes environment is one entry per line. Each entry has the form

      hostname id [HWType] [runJobs] [starter] [accounter]

    This will register the node hostname with the ParaStation MPI system under the ParaStation MPI ID id. The ParaStation MPI ID has to be an integer between 0 and the maximum number of nodes minus one.

    For each cluster node defined within the Nodes environment at least the hostname of the node and the ParaStation MPI ID of this node have to be given. The optional parameters HWType, runJobs, starter and accounter may be ignored for now. For a detailed description of these parameters refer to the parastation.conf(5) manual page.

    Usually, the nodes are listed in order of increasing ParaStation MPI IDs, beginning with 0 for the first node. If a frontend node exists and should also be integrated into the ParaStation MPI system, it should usually be configured with ID 0.

    Within a cluster, the mapping between hostnames and ParaStation MPI IDs is completely unrestricted.
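
    As an illustrative sketch, a small cluster consisting of a frontend node and three compute nodes might be declared as follows; the hostnames are placeholders for the real node names, and parastation.conf(5) describes the exact syntax of the entries:

      Nodes {
          frontend  0
          node01    1
          node02    2
          node03    3
      }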

  4. More options

    More configuration options may be set as described in the configuration file parastation.conf. For details refer to the parastation.conf(5) manual page.

    Note

    If using the vapi (HWType mvapi) or DAPL (HWType dapl) layers for communication, e.g. for InfiniBand or 10G Ethernet, the amount of lockable memory must be increased. To do so, use the option rlimit memlock within the configuration file.

  5. Copy configuration file to all other nodes

    The modified configuration file must be copied to all other nodes of the cluster, e.g. using psh, as sketched below. Afterwards, restart the ParaStation MPI daemons on all nodes.
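
    A minimal sketch of this distribution step, assuming passwordless root ssh and the placeholder hostnames from the Nodes example above (psh allows such commands to be run on all nodes in parallel; see its documentation for the exact invocation):

      # for n in node01 node02 node03 ; do scp /etc/parastation.conf $n:/etc/ ; done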

In order to verify the configuration, the command

  # /opt/parastation/bin/test_config

can be run. This command analyzes the configuration file and reports any configuration errors. After finishing these steps, the configuration of ParaStation MPI is complete.