====== OpenMPI ======

===== Compilation =====

To compile your code with OpenMPI, you first need to load an OpenMPI module.

<code bash>
module load mpi/OpenMPI/<version-compiler-version>
mpicc [compilation_parameters] -o <executable> <input_file.c> [input_file2.c ...]
</code>
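As a concrete sketch (the module version shown below is only an example -- check ''module avail'' for the versions actually installed, and ''hello_mpi.c'' is a hypothetical source file):

<code bash>
# List the available OpenMPI modules (names and versions are site-specific)
module avail mpi/OpenMPI

# Load one of them -- this version string is only an example
module load mpi/OpenMPI/2.0.2-GCC-6.3.0

# Compile with the MPI compiler wrapper, which adds the include and
# library paths of the loaded OpenMPI automatically
mpicc -O2 -o hello_mpi hello_mpi.c
</code>

The ''mpicc'' wrapper must come from the same module (and thus the same compiler toolchain) that you later load in your job script, otherwise the program may fail to start.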
===== Execution =====

To execute your program, you need to have the correct OpenMPI module loaded (see [[software:openmpi#Compilation]]).

You execute your program by running it with ''srun'', which behaves like ''mpirun''/''mpiexec'', as the MPI modules are compiled against Slurm.

One example would be:

<code bash>
#!/bin/bash

#SBATCH -N 2          # the number of nodes
#SBATCH -p nodeshort  # on Mogon I
#SBATCH -p parallel   # on Mogon II
#SBATCH -A <your slurm account>
#SBATCH -t <sufficient time>
#SBATCH --mem <sufficient memory, if the default per node is not sufficient>
#SBATCH -J <jobname>

# Mogon I - example
#module load mpi/OpenMPI/2.0.2-GCC-6.3.0

# Mogon II - example
#module load mpi/OpenMPI/2.0.2-GCC-6.3.0-2.27-opa

srun -N2 -n <should fit the number of MPI ranks> <your application>
</code>

In the case of hybrid applications (multiprocessing plus threading), see to it that ''-c'' in your Slurm parameterization (the number of threads per process) times ''-n'' (the number of ranks) equals the number of cores per node. This might not always hold, as some applications may profit from hyperthreading on Mogon II or saturate the FPUs on Mogon I; in such cases you should experiment to find the optimal performance. Please do not hesitate to ask for advice if you do not know how to approach this problem.
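As a sketch of such a hybrid setup: on a hypothetical 64-core node, 8 ranks per node with 8 threads each fill the node exactly (the node size, time limit, and module version here are illustrative, not a recommendation):

<code bash>
#!/bin/bash

#SBATCH -N 2            # two nodes
#SBATCH -n 16           # 16 MPI ranks in total, i.e. 8 per node
#SBATCH -c 8            # 8 threads per rank: 8 ranks x 8 threads = 64 cores per node
#SBATCH -p parallel     # on Mogon II
#SBATCH -A <your slurm account>
#SBATCH -t 01:00:00
#SBATCH -J hybrid_example

# Example version string -- load the module your code was compiled with
module load mpi/OpenMPI/2.0.2-GCC-6.3.0-2.27-opa

# Pass the per-rank CPU count on to OpenMP
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

srun <your application>
</code>

Starting from a layout like this, you can then vary the ''-n''/''-c'' split while keeping their product constant to see which ratio performs best for your application.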
==== Memory ====

In order to run larger OpenMPI jobs, it might be necessary to increase the memory requested for your job. Below are a few hints on how much memory OpenMPI itself needs for communication. Since the total demand largely depends on the application you are running, consider these values a lower bound.

^ Number of cores ^ Memory demand (''--mem <value>'') ^
| 64 | default |
| 128 | 512 MByte (''--mem 512M'') |
| 256 | 768 MByte (''--mem 768M'') |
| 512 | 1280 MByte (''--mem 1280M'') |
| 1024 | **may be problematic, see below** |
| 2048 | ::: |
| 4096 | ::: |

**Attention:** For jobs with more than 512 cores there might be problems with execution. Depending on the communication scheme used by MPI, the job might fail due to memory limits.
==== Startup time ====

Every MPI program needs some time to start up and get ready for communication. This time increases with the number of cores. Here are some rough numbers on how long MPI needs to start up properly.

^ Number of cores ^ Startup time ^
| up to 256 | 5 - 10 sec |
| up to 2048 | 20 - 30 sec |
| 4096 | ~40 sec |
  • Last modified: 2020/09/02 14:05
  • by jrutte02