====== GPU Queues ======

There are three different [[partitions|partitions (SLURM lingo for 'queues')]] inside the cluster that support GPU usage: the titan queues (''titanshort/long'') currently include the hosts i0001-i0009, while the gpu queue (''infogpu'') includes the hosts g0001-g0009 ((formerly the ''gpushort/long'' queues ran on these nodes; access to them, however, is restricted.)). The titan hosts each carry 4 GeForce GTX TITAN cards, so up to ''cuda=4'' can be requested (see below). In contrast, the gpu hosts (for the ''gpushort/long'' queues) are equipped with GeForce GTX 480 cards. Finally, the hosts of the tesla queues (''teslashort/long'') have 4 Tesla K20m cards installed.
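Which partitions currently provide GPUs, and how many per node, can be checked directly with SLURM. A minimal sketch, assuming a standard SLURM setup (the columns are partition, generic resources, and node list):

<code bash>
# list all partitions together with their generic resources (GRES) and nodes
sinfo -o "%P %G %N"
</code>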

====== GPU Usage ======

To use a GPU, you have to explicitly reserve it as a resource in the submission script:

<code bash>
#!/bin/bash
# ... other SBATCH statements
#SBATCH --gres=gpu:<number>
#SBATCH -p <appropriate partition>
</code>

''<number>'' can be anything from 1 to 4 on our GPU nodes. In order to use more than 1 GPU, the application has to support using that many, of course.
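
As a concrete sketch, a job requesting two of the TITAN cards could look like the following (the GPU count and the application name are placeholders to adapt to your own job):

<code bash>
#!/bin/bash
#SBATCH -p titanshort        # one of the GPU partitions listed above
#SBATCH --gres=gpu:2         # request 2 of the 4 GPUs on the node
#SBATCH -N 1                 # a single node

# placeholder application call; replace with your actual GPU program
srun ./my_gpu_application
</code>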

===== Using multiple nodes and multiple GPUs =====

In order to use multiple nodes, you have to request more than one node in addition to the GPUs per node; see the sketch below.
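
A minimal sketch for a two-node job using all four GPUs on each node, assuming an MPI-capable GPU application (the binary name is a placeholder):

<code bash>
#!/bin/bash
#SBATCH -p titanlong            # GPU partition with enough nodes
#SBATCH -N 2                    # two nodes
#SBATCH --gres=gpu:4            # 4 GPUs on each node
#SBATCH --ntasks-per-node=4     # e.g. one MPI rank per GPU

# placeholder application call; replace with your actual multi-GPU program
srun ./my_multi_gpu_application
</code>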