====== Node-local scheduling ======

There are use cases where you simply want to request a **full cluster node** from the LSF batch system and then run **many** //(e.g. many more than 64)// **smaller** //(e.g. each running only for a fraction of the total job runtime)// tasks on this node. You then need some **local scheduling** on the node to ensure proper utilization of all cores.

To accomplish this, we suggest you use the [[http://www.gnu.org/software/parallel/|GNU Parallel]] program. It is installed in ''/cluster/bin''; alternatively, you can load the [[modules|modulefile]] ''software/gnu_parallel'', which also gives you access to its man page.

For more documentation on how to use GNU Parallel, please read ''[[http://www.gnu.org/software/parallel/man.html#name|man parallel]]'' and ''[[http://www.gnu.org/software/parallel/parallel_tutorial.html#gnu_parallel_tutorial|man parallel_tutorial]]'', where you will find a great number of examples and explanations.
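
For instance, after loading the modulefile you can run a quick sanity check (the ''echo'' example below is just an illustration; ''-k'' keeps the output in input order):

<file bash>
$ module load software/gnu_parallel
$ parallel -k echo ::: one two three
one
two
three
</file>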

===== Mogon Usage Examples =====

The following example shows a typical Mogon use case.

Let's say we have a number of input data files containing differing parameters, each of which is to be processed independently by our program:

<file bash>
$ ls data_*.in
data_001.in
data_002.in
[...]
data_149.in
data_150.in
$ cat data_001.in
1 2 3 4 5
6 7 8 9 0
</file>

Now of course we could submit 150 jobs using LSF, or we could use one job which processes the files one after another, but the most elegant way is to submit one job for 64 cores (i.e. a whole node) and process the files in parallel. This is especially convenient since we can then use the ''nodelong'' queue, which has better scheduling characteristics than ''long''.

Let's further assume that our program can itself work in parallel using OpenMP.
We have determined that ''OMP_NUM_THREADS=4'' is the optimal degree of parallelism for one set of input data.
This means we can launch ''64/4=16'' processes using GNU Parallel on the one node we have.

<file bash parallel_job>
#!/bin/bash
# LSF job parameters (could also be given on the bsub command line)
# Job name
#BSUB -J parallel_job
# Queue
#BSUB -q nodelong
# Number of cores
#BSUB -n 64
# Memory reservation
#BSUB -app Reserve1800M
# Allowed job runtime (maximum, in minutes)
#BSUB -W 7200

# Store the working directory to be safe
SAVEDPWD=$(pwd)

# First, copy the input data files and the program to the local filesystem of our node
cp "${SAVEDPWD}"/data_*.in "/jobdir/${LSB_JOBID}/"
cp "${SAVEDPWD}"/program "/jobdir/${LSB_JOBID}/"

# Change directory to the jobdir
cd "/jobdir/${LSB_JOBID}/"

export OMP_NUM_THREADS=4

# We could also set --jobs $((LSB_DJOB_NUMPROC/OMP_NUM_THREADS)) to be more dynamic
# The --delay parameter distributes I/O load at the beginning of program execution by
#   introducing a delay of 1 second before starting the next task
# -t enables verbose output to stderr
# {} will be replaced by each filename
# {#} will be replaced by the sequence number of the job
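# {.} will be replaced by each filename with its extension removed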
# The following two variants run the same tasks; note that they name their output files differently:
#parallel --jobs 16 --delay 1 -t "./program {} > output_data_{#}" ::: data_*.in
find . -name 'data_*.in' | parallel --jobs 16 --delay 1 -t "./program {} > {.}.out"
# See the GNU Parallel documentation for more examples and explanations

# Capture the exit status code; parallel sets it to the number of failed tasks
STATUS=$?

# Copy the output data back to the previous working directory
cp "/jobdir/${LSB_JOBID}"/data_*.out "${SAVEDPWD}/"

exit $STATUS
</file>

For this example, the program just sleeps for a few seconds and then counts the words in the input data file using ''wc''.
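
A minimal sketch of what such a ''program'' could look like (the sleep duration is arbitrary and just simulates computation time):

<file bash program>
#!/bin/bash
# Simulate some computation time, then count lines, words and characters
sleep 5
wc "$1"
</file>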

<file bash>
$ bsub < parallel_job
</file>
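
While the job is pending or running, you can check its state with the standard LSF tools, e.g.:

<file bash>
$ bjobs
</file>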

After this job has finished, we should have the output data (in this case just the output of ''wc'', for demonstration purposes):

<file bash>
$ ls data_*.out
data_001.out
data_002.out
[...]
data_149.out
data_150.out
$ cat data_001.out
2 10 20 data_001.in
</file>