
OpenMPI

To compile your code with OpenMPI, you need to load an OpenMPI module:

module load mpi/OpenMPI/<version-compiler-version>
mpicc [compilation_parameter] -o <executable> <input_file.c> [input_file2.c ...]
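For example, compiling a single source file could look like this (a sketch: the module version is the one from the Mogon I job script example below, and the file and executable names are placeholders; check module avail for the versions available to you):

module load mpi/OpenMPI/2.0.2-GCC-6.3.0
mpicc -O2 -o my_mpi_app my_mpi_app.c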

To execute your program, you need to have the correct OpenMPI module loaded (the same one used for compilation above).

You execute your program by running it with srun, which behaves like mpirun/mpiexec, because the MPI modules are compiled against Slurm.

One example would be:

#!/bin/bash
 
#SBATCH -N 2 # the number of nodes
#SBATCH -p nodeshort # on Mogon I
#SBATCH -p parallel  # on Mogon II
#SBATCH -A <your slurm account>
#SBATCH -t <sufficient time>
#SBATCH --mem <sufficient memory, if the default per node is not sufficient>
#SBATCH -J <jobname>
 
# Mogon I example
#module load mpi/OpenMPI/2.0.2-GCC-6.3.0

# Mogon II example
#module load mpi/OpenMPI/2.0.2-GCC-6.3.0-2.27-opa
 
srun -N 2 -n <number of MPI ranks> <your application>

In the case of hybrid applications (multiprocessing + threading), see to it that -c in your Slurm parameterization, which is the number of threads per process, multiplied by -n, the number of ranks, matches the number of cores available to your job, i.e. fills up whole nodes. This might not always be the best setting, as some applications might profit from the hyperthreading on Mogon II or saturate the FPUs on Mogon I. In this case you should experiment to find the optimal performance. Please do not hesitate to ask for advice if you do not know how to approach this problem.
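A minimal sketch of such a hybrid parameterization (the numbers assume nodes with 64 cores and are purely illustrative; account, time limit and application are placeholders):

#!/bin/bash

#SBATCH -N 2                  # 2 nodes
#SBATCH -n 32                 # 32 MPI ranks in total, i.e. 16 ranks per node
#SBATCH -c 4                  # 4 threads per rank: 16 ranks * 4 threads = 64 cores per node
#SBATCH -p parallel           # on Mogon II
#SBATCH -A <your slurm account>
#SBATCH -t <sufficient time>
#SBATCH -J hybrid_example

module load mpi/OpenMPI/2.0.2-GCC-6.3.0-2.27-opa

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun -N 2 -n 32 -c 4 <your application>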

In order to run larger OpenMPI jobs it might be necessary to increase the memory for your job. Here are a few hints on how much memory OpenMPI itself needs to function, i.e. the memory MPI uses for communication. Since the total demand largely depends on the application you are running, consider these values a guideline, not a fixed requirement.

Number of cores    Memory demand (--mem <value>)
64                 default
128                512 MByte (--mem 512M)
256                768 MByte (--mem 768M)
512                1280 MByte (--mem 1280M)
1024               may be problematic, see below
2048               may be problematic, see below
4096               may be problematic, see below
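For instance, requesting the suggested memory for a 256-core job could look like this (a sketch: the application is a placeholder, and your application's own memory demand may require a higher value):

#SBATCH -n 256
#SBATCH --mem 768M            # guideline value from the table above
srun -n 256 <your application>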

Attention: For jobs with more than 512 cores there might be problems with execution: depending on the communication scheme used by MPI, the job might fail due to memory limits.

Every MPI program needs some time to start up and get ready for communication. This time increases with the number of cores. Here are some rough numbers on how much time MPI needs to start up properly.

Number of cores    Startup time
256                5 - 10 sec
2048               20 - 30 sec
4096               ~40 sec
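When choosing the -t limit for large jobs, it can therefore be sensible to add this startup time on top of the expected run time, for example (a sketch with illustrative numbers):

# expected compute time of 1 hour plus a small buffer for
# MPI startup on 4096 cores (~40 sec, rounded up to a minute)
#SBATCH -t 01:02:00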