Chapter 3. Installation

Table of Contents

Prerequisites
Directory structure
Installation via RPM packages
Installing the documentation
Installing MPI
Further steps
Uninstalling ParaStation MPI

This chapter describes the installation of ParaStation MPI. First, the prerequisites for using ParaStation MPI are discussed. Next, the directory structure of all installed components is explained. Finally, the installation using RPM packages is described in detail.

Of course, the less automated the chosen installation method is, the more opportunities for customization it offers. On the other hand, even the most automated method, installation via RPM, will give a suitable result in most cases.

For a quick installation guide, refer to Appendix A.

Prerequisites

In order to prepare a set of nodes for the installation of the ParaStation MPI communication system, a few prerequisites have to be met.

Hardware

The cluster must have a homogeneous processor architecture. The currently supported architectures are:

  • i586: Intel IA32 (including AMD Athlon)

  • ia64: Intel IA64

  • x86_64: Intel EM64T and AMD64

  • ppc: IBM Power4 and Power5

Multi-core CPUs are supported, as well as single- and multi-CPU (SMP) nodes.
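Since a homogeneous processor architecture is required, it may be worth verifying the architecture of every node before installing. A minimal sketch, assuming passwordless ssh access and the hypothetical host names node01 through node04:

    for node in node01 node02 node03 node04; do ssh $node uname -m; done

All nodes should report the same architecture string, for example x86_64.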

Furthermore, the nodes need to be interconnected. In principle, ParaStation MPI uses two different kinds of interconnects:

  • First, a so-called administration network, which is used to handle all the administrative tasks within a cluster. Besides commonly used services like sharing of NFS partitions or NIS tables, on a ParaStation MPI cluster this also includes the inter-daemon communication used to implement the cluster administration and parallel task handling mechanisms. This administration network is usually implemented using a Fast or Gigabit Ethernet network.

  • Second, a high-speed interconnect, which is required for high-bandwidth, low-latency communication within parallel applications. While historically this kind of communication was usually done using specialized high-speed networks like Myrinet, nowadays Gigabit Ethernet is a much cheaper and only slightly slower alternative. ParaStation MPI currently supports Ethernet (Fast, Gigabit and 10G Ethernet), Myrinet, InfiniBand, QsNetII and Shared Memory.

If IP connections over the high-speed interconnect are available, two physically distinct networks are not strictly required; one physical network may serve both purposes. IP connections are usually configured by default in the case of Ethernet. For other networks, particular measures have to be taken in order to enable IP over these interconnects.
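Whether IP over the high-speed interconnect is already working can be verified with standard tools. A minimal sketch for an InfiniBand installation, assuming the IP-over-IB interface is named ib0 and a peer node is reachable under the hypothetical name node02-ib:

    ip addr show ib0       # the interface should carry an IP address
    ping -c 3 node02-ib    # the peer should answer via the interconnect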

Software

ParaStation MPI requires an RPM-based Linux installation, as the ParaStation MPI software is distributed as installable RPM packages.
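Whether a given system is RPM-based can usually be checked directly from a shell; a minimal sketch:

    rpm --version             # succeeds only if the RPM package manager is installed
    cat /etc/redhat-release   # or /etc/SuSE-release, depending on the distribution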

All current distributions from Novell and Red Hat are supported, such as:

  • SuSE Linux Enterprise Server (SLES) 10 and 11

  • OpenSuSE up to version 11.3

  • Red Hat Enterprise Linux (RHEL) 4 and 5

  • Fedora Core, up to version 11

  • CentOS 4 and 5

For other distributions and non-RPM-based installations, please contact support.

In order to use high-speed networks, additional libraries and kernel modules may be required. These packages are typically provided by the hardware vendors.
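Whether the vendor-provided kernel modules are already loaded can typically be checked with lsmod. A minimal sketch for an InfiniBand stack; the module name prefix ib_ is common for OFED-based drivers but may differ for other vendor stacks:

    lsmod | grep ib_          # lists loaded InfiniBand kernel modules, if any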

Kernel version

When only TCP is used as the high-speed interconnect protocol, no dedicated kernel modules are required. This is the ParaStation MPI default communication path and is always enabled.

For other interconnects and protocols, additional kernel modules are required. In particular, the optimized ParaStation p4sock protocol loads a couple of additional modules. Refer to the section called “Installing the RPMs” for details. The ParaStation MPI modules can be compiled for all major kernel versions.
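After installation, it can be checked whether the p4sock-related modules are actually loaded; a minimal sketch (the exact module names depend on the installed ParaStation MPI version):

    lsmod | grep p4           # lists loaded ParaStation p4sock modules, if any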

Using InfiniBand and Myrinet requires additional modules and may restrict the supported kernels.
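Because these vendor modules are built for specific kernel releases, the running kernel should be compared against the versions supported by the vendor's driver package; a minimal sketch:

    uname -r                       # running kernel release
    ls /lib/modules/$(uname -r)    # kernel module tree for the running kernel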