 </file>
  
==== Hybrid MPI-OpenMP Jobs (using GROMACS as an example) ====
  
Whereas MPI applications frequently adhere to the standard MPI idea of parallelization by multiprocessing and exchanging messages between the created processes, hybrid applications combine this with internally threaded processes (e.g. OpenMP threads running inside each MPI task).
 # Example SLURM job script to run GROMACS with MPI on MOGON.
 #
 # This script requests 64 cores on two nodes. The job
 # will have access to all the memory in the nodes.
 #-----------------------------------------------------------------
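A fuller hybrid job script along these lines might look like the following sketch. The partition name, module name, and input file name (`topol`) are assumptions and must be adapted to your MOGON environment; the core idea is that `--ntasks-per-node` sets the MPI tasks and `--cpus-per-task` sets the OpenMP threads per task (here 2 nodes x 8 tasks x 4 threads = 64 cores).

```shell
#!/bin/bash
#SBATCH --job-name=gromacs-hybrid
#SBATCH --partition=parallel      # assumed partition name; adjust to your cluster
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8       # MPI tasks per node
#SBATCH --cpus-per-task=4         # OpenMP threads per MPI task (8 x 4 = 32 cores/node)
#SBATCH --time=02:00:00

# Hand SLURM's per-task CPU allocation to the OpenMP runtime
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Module name is an assumption; check `module avail` for the GROMACS build on your system
module load bio/GROMACS

# Launch one MPI rank per task; -ntomp tells mdrun how many OpenMP threads to use.
# "topol" is a placeholder for your .tpr input basename.
srun gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -deffnm topol
```

Pinning the thread count via `OMP_NUM_THREADS` and `-ntomp` keeps GROMACS from oversubscribing cores when SLURM's allocation and the OpenMP runtime disagree.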
  • start/working_on_mogon/slurm_submit.txt
  • Last modified: 2021/09/14 16:48
  • by ntretyak