Job Examples

How to use Slurm for submitting batch jobs to MOGON

Trivial job script example - single core job

#!/bin/bash
#-----------------------------------------------------------------
# Example SLURM job script to run single core applications on
# MOGON.
#
# This script requests one core (out of 20) on one Broadwell-node. The job
# will have access to the default memory of the partition.
#-----------------------------------------------------------------
#SBATCH -J mysimplejob           # Job name
#SBATCH -o mysimplejob.%j.out    # Specify stdout output file (%j expands to jobId)
#SBATCH -p smp                   # Partition ('queue') name: 'smp' or 'parallel' on MOGON II
#SBATCH -n 1                     # Total number of tasks, here explicitly 1
#SBATCH --mem 300M               # Memory per job; 300M is the default. You will likely have to adapt this to your needs
#SBATCH -t 00:30:00              # Run time (hh:mm:ss) - 0.5 hours

#SBATCH -A <account>             # Specify allocation to charge against

# Load all necessary modules if needed, e.g.
# module load <appropriate module(s)>
# Loading modules in the script ensures a consistent environment.

# Launch the executable
srun <myexecutable>
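
Assuming the script above is saved as mysimplejob.sh (a placeholder name), it is submitted with sbatch and its status can be checked with squeue:

sbatch mysimplejob.sh   # submit; Slurm prints the assigned job ID
squeue -u $USER         # list your pending and running jobs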

Trivial example - full node job - threaded application

In contrast to the previous example, the following will launch a single task on 20 cores. Be careful: most applications do not scale to that many cores.

#!/bin/bash
#-----------------------------------------------------------------
# Example SLURM job script to run a threaded (multi-core) application
# on MOGON.
#
# This script requests all 20 cores of one Broadwell-node for a
# single task. The job will have access to all the memory in the
# node and will be charged for the full node.
#-----------------------------------------------------------------

#SBATCH -J mysimplejob           # Job name
#SBATCH -o mysimplejob.%j.out    # Specify stdout output file (%j expands to jobId)
#SBATCH -p parallel              # Queue name
#SBATCH -N 1                     # Total number of nodes requested (20 cores/node on a standard Broadwell-node)
#SBATCH -n 1                     # Total number of tasks
#SBATCH -c 20                    # Total number of cores for the single task
#SBATCH -t 00:30:00              # Run time (hh:mm:ss) - 0.5 hours

#SBATCH -A <account>             # Specify allocation to charge against

# Load all necessary modules if needed (these are examples)
# Loading modules in the script ensures a consistent environment.
module load gcc/6.3.0

# Launch the executable as a single task with access to all 20 cores:
srun <myexecutable>
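
If <myexecutable> is an OpenMP program, it usually needs to be told explicitly how many threads to start. A minimal sketch, assuming an OpenMP application (SLURM_CPUS_PER_TASK is set because -c was requested above):

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # 20 here, taken from the -c request above
srun <myexecutable>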

Simple MPI Job

#!/bin/bash
#-----------------------------------------------------------------
# Example SLURM job script to run MPI Job on MOGON.
#
# This script requests 40 MPI tasks on two Broadwell-nodes (20 cores
# per node). The job will have access to all the memory in the nodes.
#-----------------------------------------------------------------

#SBATCH -J mympijob              # Job name
#SBATCH -o mympijob.%j.out       # Specify stdout output file (%j expands to jobId)
#SBATCH -p parallel              # Partition/Queue name
#SBATCH -N 2                     # Total number of nodes requested (20 tasks/node)
#SBATCH -n 40                    # Total number of tasks
#SBATCH -t 00:30:00              # Run time (hh:mm:ss) - 0.5 hours

#SBATCH -A <account>             # Specify account to charge against

# Load all necessary modules if needed (these are examples)
# Loading modules in the script ensures a consistent environment.
module load <appropriate module(s)>

# Launch the executable
srun <myexecutable>
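
To check how Slurm distributes the MPI tasks across the two nodes, you can temporarily replace the executable with a simple command, for example:

srun hostname | sort | uniq -c   # shows how many tasks ran on each node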

Hybrid MPI-OpenMP Jobs (using GROMACS as an example)

Whereas pure MPI applications parallelize by starting multiple processes that exchange messages, hybrid applications combine this with threading: each MPI task is itself internally threaded (e.g. with OpenMP).
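
In Slurm terms this means requesting several tasks with more than one CPU per task. A minimal sketch of the relevant lines (placeholder values, to be adapted to your application):

#SBATCH -N 2                     # Total number of nodes
#SBATCH -n 32                    # Total number of MPI tasks
#SBATCH -c 2                     # Cores (threads) per MPI task

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # tell OpenMP how many threads to start
srun <myexecutable>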

For this example we assume you want to run GROMACS on 2 Skylake-nodes (32 cores per node) with 32 MPI tasks, each using 2 cores and running 2 OpenMP threads (32 × 2 = 64 cores in total). The job script could look like this:

#!/bin/bash
#-----------------------------------------------------------------
# Example SLURM job script to run GROMACS with MPI on MOGON.
#
# This script requests 64 cores on two Skylake-nodes. The job
# will have access to all the memory in the nodes.
#-----------------------------------------------------------------

#SBATCH -J mygromacsjob          # Job name
#SBATCH -o mygromacsjob.%j.out   # Specify stdout output file (%j expands to jobId)
#SBATCH -p parallel              # Partition/Queue name
#SBATCH -C skylake               # select 'skylake' architecture
#SBATCH -N 2                     # Total number of nodes requested (32 cores/node)
#SBATCH -t 00:30:00              # Run time (hh:mm:ss) - 0.5 hours

#SBATCH -A <account>             # Specify allocation to charge against

# Load all necessary modules if needed (these are examples)
# Loading modules in the script ensures a consistent environment.
module load bio/GROMACS # you can select a specific version, too

# Launch the executable
srun -n 32 -c 2 gmx_mpi mdrun -ntomp 2 -deffnm em
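
If you prefer not to hard-code the thread count in two places, you can keep it in a shell variable (a sketch; the variable name is arbitrary):

THREADS_PER_TASK=2
srun -n 32 -c ${THREADS_PER_TASK} gmx_mpi mdrun -ntomp ${THREADS_PER_TASK} -deffnm em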