===== Hybrid MPI Jobs (using GROMACS as an example) =====

Whereas MPI applications frequently adhere to the standard MPI idea of parallelization by multiprocessing and exchanging messages between the created processes, hybrid applications use internally threaded processes, i.e. each MPI-task additionally runs several threads (e.g. OpenMP-threads).
  
For this example we assume you want to run GROMACS on 2 Skylake nodes (32 cores per node) with 32 MPI-tasks, each running on 2 cores with 2 OpenMP-threads. The job script could look something like this:
  
<file bash myjobscript.slurm>
#!/bin/bash
#-----------------------------------------------------------------
# Example SLURM job script to run GROMACS with MPI on MOGON.
#
# This script requests 64 cores on two nodes. The job
# will have access to all the memory in the nodes.
#-----------------------------------------------------------------

#SBATCH -J mygromacsjob          # Job name
#SBATCH -o mygromacsjob.%j.out   # Specify stdout output file (%j expands to jobId)
#SBATCH -p parallel              # Partition/Queue name
#SBATCH -C skylake               # select 'skylake' architecture
#SBATCH -N 2                     # Total number of nodes requested (32 cores/node)
#SBATCH -t 00:30:00              # Run time (hh:mm:ss) - 0.5 hours
  
# ... (further setup, e.g. loading the required GROMACS module, omitted in this excerpt)
  
# Launch the executable
srun -n 32 -c 2 gmx_mpi mdrun -ntomp 2 -deffnm em
</file>
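
The same geometry can also be requested per node instead of via a total task count. The following is a minimal sketch, not an official MOGON example: it reuses the parallel partition and skylake constraint from above, assumes a GROMACS module providing gmx_mpi has been loaded, and uses the hypothetical filename myjobscript_pernode.slurm; the --ntasks-per-node and --cpus-per-task options are standard SLURM.

<file bash myjobscript_pernode.slurm>
#!/bin/bash
#-----------------------------------------------------------------
# Alternative sketch: 2 nodes x 16 MPI-tasks per node = 32 tasks,
# each task reserving 2 cores for its 2 OpenMP-threads
# (2 x 32 = 64 cores in total).
#-----------------------------------------------------------------

#SBATCH -J mygromacsjob          # Job name
#SBATCH -o mygromacsjob.%j.out   # Specify stdout output file (%j expands to jobId)
#SBATCH -p parallel              # Partition/Queue name
#SBATCH -C skylake               # select 'skylake' architecture
#SBATCH -N 2                     # Total number of nodes requested (32 cores/node)
#SBATCH --ntasks-per-node=16     # MPI-tasks per node (16 x 2 nodes = 32 tasks)
#SBATCH --cpus-per-task=2        # cores reserved per MPI-task (for its OpenMP-threads)
#SBATCH -t 00:30:00              # Run time (hh:mm:ss) - 0.5 hours

# Match the OpenMP thread count to the cores reserved per task
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Launch the executable with the same geometry as in the example above
srun -n 32 -c ${SLURM_CPUS_PER_TASK} gmx_mpi mdrun -ntomp ${OMP_NUM_THREADS} -deffnm em
</file>

Either variant is submitted with sbatch followed by the script name (e.g. sbatch myjobscript.slurm); in both cases the product of MPI-tasks and OpenMP-threads per task (32 x 2) matches the 64 cores reserved on the two nodes.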
  