====== start:working_on_mogon:gpu ======

^ Partition    ^ Nodes                                      ^ GPUs                  ^ RAM    ^ Access              ^
| ''m2_gpu''   | [[start:mogon_cluster:nodes|s[0001-0030]]] | 6 GeForce GTX 1080 ti | 11550  | project on Mogon II |
  
  
Notes:
  * RAM displays the default memory per node in MiB.
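
Jobs go to the ''m2_gpu'' partition with the usual SLURM tools plus a ''--gres'' GPU request. The following one-liner is a minimal sketch for an interactive test; the account name ''m2_yourproject'' is a placeholder, not a real account:

<code bash>
# Request one GPU on the m2_gpu partition for a short interactive session.
# 'm2_yourproject' is a placeholder - use your own project account.
srun -p m2_gpu -A m2_yourproject -N 1 -n 1 --gres=gpu:1 -t 00:10:00 --pty bash -i
</code>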
  
  
<callout type="warning" icon="true">
# Example SLURM job script to run serial applications on Mogon.
#
# This script requests one task using cores on one GPU node.
#-----------------------------------------------------------------
  
# Load all necessary modules if needed (these are examples)
# Loading modules in the script ensures a consistent environment.
module load system/CUDA
  
# Launch the executable
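
Pieced together, a complete single-task, single-GPU script could look like the following sketch (job name, account, and executable are placeholders; the resource values are illustrative):

<code bash>
#!/bin/bash
#-----------------------------------------------------------------
# Sketch of a serial single-GPU job on the m2_gpu partition.
#-----------------------------------------------------------------
#SBATCH -J my_gpu_job            # Job name (placeholder)
#SBATCH -p m2_gpu                # Partition with the GeForce GTX 1080 ti nodes
#SBATCH -A m2_yourproject        # Placeholder: your project account
#SBATCH -N 1                     # One node
#SBATCH -n 1                     # One task
#SBATCH -t 00:30:00              # Run time (hh:mm:ss)
#SBATCH --gres=gpu:1             # Reserve one GPU

# Load the default CUDA version, as above
module load system/CUDA

# Launch the executable ('my_gpu_program' is a placeholder)
srun ./my_gpu_program
</code>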
# Load all necessary modules if needed (these are examples)
# Loading modules in the script ensures a consistent environment.
module load system/CUDA
  
# Launch the executable
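
To verify which devices a job actually received, it can help to print ''CUDA_VISIBLE_DEVICES'' (set by SLURM for ''--gres=gpu:...'' allocations) or to call ''nvidia-smi'' inside the job, for example:

<code bash>
# Inside a job script or an interactive job:
echo "$CUDA_VISIBLE_DEVICES"   # GPU indices assigned by SLURM, e.g. 0,1
nvidia-smi                     # Utilization and memory of the visible GPUs
</code>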
#SBATCH -N 1                     # Total number of nodes requested (48 cores/node per GPU node)
#SBATCH -n 6                     # Total number of tasks
#SBATCH -c 8                     # CPUs per task (6 tasks x 8 cores = 48 cores/node)
#SBATCH -t 00:30:00              # Run time (hh:mm:ss) - 0.5 hours
#SBATCH --gres=gpu:6             # Reserve 6 GPUs
# Load all necessary modules if needed (these are examples)
# Loading modules in the script ensures a consistent environment.
module load system/CUDA
  
# Launch the tasks
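
The launch line itself is not shown in this fragment. With 6 tasks and 6 GPUs reserved, one common pattern (a sketch, not necessarily this page's original command) is to let ''srun'' start one task per GPU:

<code bash>
# Start all 6 tasks; each task uses the GPU assigned to it by SLURM.
# 'my_gpu_program' is a placeholder executable.
srun -n 6 ./my_gpu_program
</code>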