start:working_on_mogon:slurm_submit

Differences

This shows you the differences between two versions of the page.

Previous revision: start:working_on_mogon:slurm_submit [2021/09/14 15:02] by ntretyak [Hybrid MPI Jobs (using GROMACS as an example)]
Current revision:  start:working_on_mogon:slurm_submit [2022/01/20 14:54] by henkela [Trivial example - single core job]
Line 203:
 #SBATCH -p smp                   # Queue name 'smp' or 'parallel' on Mogon II
 #SBATCH -n 1                     # Total number of tasks, here explicitly 1
+#SBATCH --mem 300M               # The default is 300M memory per job. You'll likely have to adapt this to your needs
 #SBATCH -t 00:30:00              # Run time (hh:mm:ss) - 0.5 hours
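
For orientation, a minimal complete version of the single-core script this hunk modifies could look as follows; the job name, account and executable are placeholders, not taken from the page:

<file bash>
#!/bin/bash
#SBATCH -J mysimplejob           # Job name (placeholder)
#SBATCH -A <account>             # Project account (placeholder)
#SBATCH -p smp                   # Queue name 'smp' or 'parallel' on Mogon II
#SBATCH -n 1                     # Total number of tasks, here explicitly 1
#SBATCH --mem 300M               # The default is 300M memory per job. You'll likely have to adapt this to your needs
#SBATCH -t 00:30:00              # Run time (hh:mm:ss) - 0.5 hours

srun ./my_program                # placeholder for the actual executable
</file>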
  
Line 275 → 276:
 </file>
  
-==== Hybrid MPI Jobs (using GROMACS as an example) ====
+==== Hybrid MPI-OpenMP Jobs (using GROMACS as an example) ====
  
 Whereas plain MPI applications adhere to the standard MPI idea of parallelization by multiprocessing and exchanging messages between the created processes, hybrid applications additionally use threads inside each process (e.g. OpenMP threads within the MPI tasks).
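
In SLURM terms this amounts to requesting several CPUs per task and pinning the application's thread count to that value; a minimal sketch with illustrative numbers, not taken from the page:

<file bash>
#SBATCH -n 4                                 # 4 MPI tasks in total
#SBATCH -c 8                                 # 8 CPUs per task, i.e. 4*8 = 32 cores
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK  # one OpenMP thread per allocated CPU
</file>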
Line 286 → 287:
 # Example SLURM job script to run GROMACS with MPI on MOGON.
 #
-# This script requests 128 cores on two node. The job
+# This script requests 64 cores on two nodes. The job
 # will have access to all the memory in the nodes.
 #-----------------------------------------------------------------
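
For context, a complete hybrid submission script matching the corrected comment (64 cores on two nodes) might look roughly like this; the task/thread split, module name and input file are assumptions, not taken from the page:

<file bash>
#!/bin/bash
#-----------------------------------------------------------------
# Example SLURM job script to run GROMACS with MPI on MOGON.
#
# This script requests 64 cores on two nodes. The job
# will have access to all the memory in the nodes.
#-----------------------------------------------------------------
#SBATCH -J gromacs_hybrid          # Job name (placeholder)
#SBATCH -p parallel                # Multi-node queue on Mogon II
#SBATCH -N 2                       # Two nodes
#SBATCH --ntasks-per-node 16       # 16 MPI tasks per node
#SBATCH -c 2                       # 2 CPUs per task: 2 * 16 * 2 = 64 cores
#SBATCH --mem 0                    # request all memory on the nodes
#SBATCH -t 02:00:00                # Run time (hh:mm:ss)

module load <gromacs module>       # placeholder, check 'module avail'

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun gmx_mpi mdrun -ntomp $OMP_NUM_THREADS -deffnm <run name>   # placeholder input
</file>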