Node-local scheduling

There are some use cases where you simply want to request a full cluster node from the LSF batch system and then run many small tasks on it, i.e. far more tasks than the node has cores (e.g. well over 64), each taking only a fraction of the total job runtime. In that case you need some local scheduling on the node to ensure proper utilization of all cores.

To accomplish this, we suggest using the GNU Parallel program. It is installed in /cluster/bin, but you can also simply load the modulefile software/gnu_parallel, which additionally gives you access to its man page.
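
If you go the modulefile route, loading it in your shell or at the top of a job script might look like this (a sketch assuming the usual environment-modules command is available):

$ module load software/gnu_parallel
$ parallel --version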

For more documentation on how to use GNU Parallel, please read man parallel and man parallel_tutorial, where you'll find a great number of examples and explanations.
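
As a minimal taste of the syntax (a made-up example; the arguments after ::: are only placeholders): GNU Parallel runs the given command once per argument, replacing {} with each argument, and -k keeps the output in input order:

$ parallel -k "echo processing {}" ::: one two three
processing one
processing two
processing three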

Let's say we have a number of input data files that contain differing parameters that are going to be processed independently by our program:

$ ls data_*.in
data_001.in
data_002.in
[...]
data_149.in
data_150.in
$ cat data_001.in
1 2 3 4 5
6 7 8 9 0

Of course we could submit 150 separate jobs to LSF, or use one job that processes the files one after another, but the most elegant way is to submit a single job requesting 64 cores (i.e. a whole node) and process the files in parallel. This is especially convenient since we can then use the nodelong queue, which has better scheduling characteristics than long.

Let's further assume that our program can itself run in parallel using OpenMP, and that we have determined OMP_NUM_THREADS=4 to be the optimal thread count for one set of input data. This means we can launch 64/4 = 16 concurrent processes with GNU Parallel on our one node.
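
Instead of hard-coding 16, the same arithmetic can be done inside the job script, so the number of parallel slots follows whatever core count LSF actually allocated. A small sketch using the LSB_DJOB_NUMPROC variable that LSF sets inside a job:

export OMP_NUM_THREADS=4
# Concurrent GNU Parallel slots = allocated cores / OpenMP threads per task
NSLOTS=$((LSB_DJOB_NUMPROC / OMP_NUM_THREADS))   # 64 / 4 = 16 on a full node

The computed value could then be passed to GNU Parallel via -j "$NSLOTS" instead of a fixed -j 16, as noted in the comments of the job script below.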

parallel_job
#!/bin/bash
# LSF Job parameters (could also be given on the bsub command line)
# Job name
#BSUB -J parallel_job
# Queue
#BSUB -q nodelong
# Number of cores
#BSUB -n 64
# Memory reservation
#BSUB -app Reserve1800M
# Allowed job runtime (maximum)
#BSUB -W 7200
 
# Store working directory to be safe
SAVEDPWD=$(pwd)
 
# First, we copy the input data files and the program to the local filesystem of our node
cp "${SAVEDPWD}"/data_*.in "/jobdir/${LSB_JOBID}/"
cp "${SAVEDPWD}"/program "/jobdir/${LSB_JOBID}/"
 
# Change directory to jobdir
cd "/jobdir/${LSB_JOBID}/"
 
export OMP_NUM_THREADS=4
 
# -t enables verbose output to stderr
# We could also set -j $((LSB_DJOB_NUMPROC/OMP_NUM_THREADS)) to be more dynamic
# The --delay parameter is used to distribute I/O load at the beginning of program execution by
#   introducing a delay of 1 second before starting the next task
# --progress will output the current progress of the parallel task execution
# {/} will be replaced by the basename of each input file
# {/.} will be replaced by the basename without its file extension
# Both variants produce the same results:
#parallel -t -j 16 --delay 1 --progress "./program {/} > {/.}.out" ::: data_*.in
find . -name 'data_*.in' | parallel -t -j 16 --delay 1 --progress "./program {/} > {/.}.out"
# See the GNU Parallel documentation for more examples and explanation
 
# Now capture exit status code, parallel will have set it to the number of failed tasks
STATUS=$?
 
# Copy output data back to the previous working directory
cp "/jobdir/${LSB_JOBID}"/data_*.out "${SAVEDPWD}/"
 
exit $STATUS

For this example, the program simply sleeps for a few seconds and then counts the lines, words and characters in the input data file using wc.
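
Such a stand-in program could be a tiny shell script along these lines (purely hypothetical; in practice ./program is whatever executable you want to run on each input file):

#!/bin/bash
# Hypothetical stand-in for ./program: pretend to work for a few seconds,
# then let wc report lines, words and characters of the given input file.
sleep $((RANDOM % 10 + 1))
wc "$1"

Make the script executable (chmod +x program), then submit the job: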

$ bsub < parallel_job

After this job has run, we should have the results/output data (in this case, it's just the output of wc, for demonstration):

$ ls data_*.out
data_001.out
data_002.out
[...]
data_149.out
data_150.out
$ cat data_001.out
2 10 20 data_001.in

Within a job, LSF provides the environment variable $LSB_HOSTS, which lists the hosts assigned to that job. This can be used to execute a command distributed over those hosts:

parallel --no-notice --onall -S $(echo $LSB_HOSTS | tr ' ' ',') echo ::: foo bar

will run echo on every host assigned to the job, printing 'foo' and 'bar' once per host (not necessarily in that order).
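
If you also want to see which host produced which output line, GNU Parallel's --tag option can be combined with --onall; it then prefixes each line with the remote host it ran on (foo and bar are again just placeholder arguments):

parallel --no-notice --tag --onall -S $(echo $LSB_HOSTS | tr ' ' ',') echo ::: foo bar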
