lsf_gpu

====== GPU Queues ======

The titan queues (''titanshort/long'') currently include the hosts i0001-i0009, while the gpu queue (''infogpushort'') includes the hosts g0001-g0009 ((Formerly the ''gpushort/long'' queues ran on these nodes; access to them, however, is restricted.)). Each titan host carries 4 GeForce GTX TITAN cards, so a usage request of up to ''cuda=4'' can be selected (see below). The gpu hosts (for the ''gpushort/long'' queues), in contrast, are equipped with GeForce GTX 480 cards. Finally, the hosts of the tesla queues (''teslashort/long'') each hold 4 Tesla K20m cards.
- 
====== GPU Usage ======

To use a GPU you have to reserve it explicitly as a resource in the submission script:

<code bash>
#!/bin/bash
# ... other SBATCH statements
#SBATCH --gres=gpu:4
#SBATCH -p <appropriate partition>
</code>
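As a minimal sketch, a complete submission script could look as follows; the job name, walltime, and partition choice are placeholders and have to be adapted to your entitlement:

<code bash>
#!/bin/bash
#SBATCH -J gpu_example          # job name (placeholder)
#SBATCH -p titanshort           # partition -- adjust to the queue you may use
#SBATCH -N 1                    # one node
#SBATCH --gres=gpu:4            # claim all 4 GPUs of the node
#SBATCH -t 00:30:00             # walltime (placeholder)

# show the GPUs that were actually granted to this job
nvidia-smi
</code>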
- 
===== Using multiple nodes and multiple GPUs =====

In order to use multiple nodes, you have to request entire nodes and entire GPU sets, e.g.:

<code bash>
$ bsub -q titanshort -n 2 -R 'span[ptile=1]' -R 'affinity[core(16)]' -R 'rusage[cuda=4]'
</code>

In this example 2 entire titan nodes (including their full CPU sets) will be used.

Your job script / job command has to export the environment of your job. ''mpirun'' implementations provide an option for this (see your ''mpirun'' man page).
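As a sketch: with Open MPI, the ''-x'' option forwards named environment variables to all ranks, while MPICH's Hydra launcher offers ''-genvlist'' for the same purpose. The application name ''./my_gpu_app'' and the rank count are placeholders:

<code bash>
# Open MPI: export selected variables from the submission environment to all ranks
mpirun -x CUDA_VISIBLE_DEVICES -x LD_LIBRARY_PATH -np 8 ./my_gpu_app

# MPICH (Hydra launcher): comma-separated list of variables to forward
mpiexec -genvlist CUDA_VISIBLE_DEVICES,LD_LIBRARY_PATH -np 8 ./my_gpu_app
</code>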
- 
<WRAP alert>
//**Attention**//

Using multiple GPU nodes requires taking entire nodes: the entire GPU set has to be claimed (''rusage[cuda=4]''), and the entire CPU set as well, either via ''affinity[core(16)]'' or via ''span[ptile=1]''.
</WRAP>
  
  • Last modified: 2017/06/09 20:11
  • by meesters