
OpenMPI

Currently installed versions of OpenMPI and their supported MPI versions are:

OpenMPI version   Subversions       MPI version
1.6               1.6.5             2.1
1.8               1.8.1             3.0
1.10              1.10.2, 1.10.3    3.1

Since version 1.10 includes various performance enhancements, please use this version. For OpenMPI versions 1.10 and up, please use the ModuleFile mpi/openmpi/1.10/…, unless you need a specific feature that is only present in a particular subversion. The ModuleFile for version 1.10 always links to the most recent patch level. Using mpi/openmpi/1.10.[23]/… will prompt a warning.
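
For example, to check which OpenMPI subversions are installed and then load the recommended 1.10 module (module names as listed in the compiler table below), you can use:

module avail mpi/openmpi
module load mpi/openmpi/1.10/gcc/4.4.7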

To compile your code with OpenMPI you need a compiler module and the OpenMPI module:

module load <compiler/of/your/choice>
module load <mpi/openmpi/1.10/compiler/version>
mpicc [compilation_parameters] -o <executable> <input_file.c> [input_file2.c ...]
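
For illustration, compiling a hypothetical C source file hello_mpi.c with the GCC 4.9.3 toolchain could look like this (the file and executable names are placeholders; the module names are taken from the compiler table below):

module load gcc/4.9.3
module load mpi/openmpi/1.10/gcc/4.9.3
mpicc -O2 -o hello_mpi hello_mpi.c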

Generally you can use the OpenMPI build compiled with the system compiler, which is currently gcc/4.4.7. You can use this module together with a different compiler to compile your program.

Since Fortran 90 has some problems with the OpenMPI build compiled with the system compiler, it is necessary to load the MPI module matching the compiler you intend to use. At the moment there are modules for the following compilers:

Compiler version            OpenMPI module
gcc/4.4.7                   mpi/openmpi/1.10/gcc/4.4.7
gcc/4.9.3                   mpi/openmpi/1.10/gcc/4.9.3
gcc/5.1.0                   mpi/openmpi/1.10/gcc/5.1.0
gcc/5.3.0                   mpi/openmpi/1.10/gcc/5.3.0
intel/composer/2013_4_183   mpi/openmpi/1.10/intel/composer/2013
intel/composer/2016         mpi/openmpi/1.10/intel/composer/2016

Load the matching pair of modules:

module load <compiler/of/your/choice>
module load <mpi/openmpi/1.10/compiler/version>
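
For illustration, compiling a hypothetical Fortran 90 source file prog.f90 with the Intel 2016 toolchain might look like this (file names are placeholders; the module names are taken from the table above, and mpif90 is OpenMPI's Fortran compiler wrapper):

module load intel/composer/2016
module load mpi/openmpi/1.10/intel/composer/2016
mpif90 -o prog prog.f90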

To execute your program, you need to have the correct OpenMPI module loaded (see Compilation above).

You execute your program by running it with mpirun. To use it on Mogon, you submit your job like any other job and execute
mpirun [mpi_options] ./<executable> [input_parameter]:

module load mpi/openmpi/1.10/gcc/4.4.7
bsub ... mpirun ./<executable> [input_parameter]


or, if you use Fortran 90:

module load <mpi/openmpi/1.10/compiler/version>
bsub ... mpirun ./<executable> [input_parameter]
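
A complete submission could look like the following sketch; the core count, wall clock limit and output file name are placeholders, and -n, -W and -o are standard LSF options:

module load mpi/openmpi/1.10/gcc/4.4.7
bsub -n 64 -W 02:00 -o mpi_job.%J.out mpirun ./<executable> [input_parameter]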

Tuning options are explained in man mpirun (available once the module is loaded). These options are appended to the mpirun call, e.g.

bsub ... mpirun --bind-to core --map-by core -mca btl openib,sm,self ./<executable> [input_parameter]

In order to run larger OpenMPI jobs it might be necessary to increase the memory limit of your job. Here are a few hints on how much memory OpenMPI needs to function. Since this largely depends on the application you are running, consider these values a guideline for the overhead MPI uses for communication.

Number of cores   Memory demand (-M <value>)
64                default
128               512 MByte (-M 512000)
256               768 MByte (-M 768000)
512               1280 MByte (-M 1280000)
1024              may be problematic, see below
2048              may be problematic, see below
4096              may be problematic, see below
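
For example, a 256-core job following the memory guideline from the table above could be submitted as follows (all other bsub options as in the examples above):

bsub -n 256 -M 768000 ... mpirun ./<executable> [input_parameter]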

Attention: for jobs with more than 512 cores there might be problems with execution. Depending on the communication scheme used by MPI, the job might fail due to memory limits.

Every MPI program needs some time to start up and get ready for communication. This time increases with the number of cores. Here are some rough numbers on how long MPI needs to start up properly.

Number of cores   Startup time
up to 256         5 - 10 sec
up to 2048        20 - 30 sec
4096              ~40 sec