GPU Queues

The titan queues (titanshort/long) currently include the hosts i0001-i0009, while the gpu queues (infogpushort) include the hosts g0001-g0009 1). The titan hosts each carry 4 GeForce GTX TITAN cards, hence a usage request of up to cuda=4 can be selected (see below). The gpu hosts, in contrast, are equipped with GeForce GTX 480 cards. Finally, the hosts of the tesla queues (teslashort/long) carry 4 Tesla K20m cards each.
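
For instance, a job reserving two of the four Tesla cards on a single node could be submitted as follows. This is only a sketch: the core count and the application name are placeholders.

$ bsub -q teslashort -n 1 -R 'rusage[cuda=2]' ./my_gpu_application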

GPU Usage

To use a GPU you have to explicitly reserve it as a resource in the submission script:

#!/bin/bash
# ... other SBATCH statements
#SBATCH --gres=gpu:<number>
#SBATCH -p <appropriate partition>

The number can be anything from 1 to 4 on our GPU nodes. To use more than one GPU, the application of course needs to support multi-GPU execution.
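
Putting this together, a minimal submission script might look like the following sketch; the partition, GPU count, runtime, and application name are placeholders and need to be adapted:

#!/bin/bash
#SBATCH -p titanshort           # partition; placeholder, adapt to your needs
#SBATCH -N 1                    # request one node
#SBATCH --gres=gpu:2            # reserve 2 of the node's GPUs
#SBATCH -t 00:30:00             # runtime limit; placeholder

# my_gpu_application is a placeholder and must itself support using 2 GPUs
srun ./my_gpu_application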

Using multiple nodes and multiple GPUs

In order to use multiple nodes, you have to request entire nodes and entire GPU sets, e.g.

$ bsub -q titanshort -n 2 -R 'span[ptile=1]' -R 'affinity[core(16)]' -R 'rusage[cuda=4]'

In this example, 2 entire titan nodes will be used (including their entire CPU sets).

Your job script / job command has to export the environment of your job. mpirun implementations provide an option for this (see your mpirun man page), as sketched below.
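
As a sketch, with Open MPI the -x option forwards environment variables to all ranks; the chosen variable and the application name are assumptions for illustration:

$ mpirun -np 2 -npernode 1 -x LD_LIBRARY_PATH ./my_gpu_application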

Attention

Using multiple GPU nodes requires taking entire nodes: the entire GPU set has to be claimed, and the entire CPU set as well - the latter either by an affinity[core(<n>)] or a span[ptile=<n>] request.

1)
Formerly, the gpushort/long queues existed on these nodes; access, however, is restricted.