
Out of Service Clusters

Please note

that the clusters listed below have been taken out of service and the information provided may be out of date. They are listed here only for the sake of completeness.
The information on this page will not be updated!

MOGON I End Of Life

MOGON I will be shut down at the end of September 2020.

General purpose CPUs

Partition | Nodes | Max wall time | % of nodes | Interconnect | Constraints
short | a-nodes | 5 hours | 25 | InfiniBand | jobs using n < 64 cores; Max. running jobs per user: 10,000
long | a-nodes | 5 days | 20 | InfiniBand | jobs using n < 64 cores; Max. running jobs per user: 3,000
nodeshort | a-nodes | 5 hours | 100 | InfiniBand | jobs using n*64 cores, for 1 < n < all of MOGON
nodelong | a-nodes | 5 days | 30 | InfiniBand | jobs using n*64 cores, for 1 < n < all of MOGON; Max. running jobs per association: 100
parallel | a-nodes | 5 days | 20 | InfiniBand | jobs using n*64 cores
smp | a-nodes | 5 days | 25 | InfiniBand | jobs using n < 64 cores
devel | a-nodes | 4 hours | 1 | InfiniBand | Max. running jobs per user: 1
visualize | a-nodes | 5 hours | 1 | InfiniBand | Max. TRES per user: cpu=129
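
As an illustration of how these partitions are addressed, the following sketch shows a batch script for a full-node job on the parallel partition. All concrete values (job name, node count, runtime, account, program) are placeholders and not taken from this page.

  #!/bin/bash
  #SBATCH --job-name=myjob            # placeholder job name
  #SBATCH --partition=parallel        # full-node partition from the table above
  #SBATCH --nodes=2                   # whole nodes only, i.e. n*64 cores
  #SBATCH --ntasks-per-node=64        # the a-nodes provide 64 cores each
  #SBATCH --time=05:00:00             # placeholder; well within the 5-day limit
  #SBATCH --account=<your project>    # placeholder account/association

  srun ./my_mpi_program               # placeholder executable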

Partitions deprecated

The short, long, nodeshort and nodelong partitions are deprecated and will be removed soon. Please use the parallel and smp partitions instead.

The default memory for a partition, along with further details, can be displayed with the command scontrol show partition <partition name>.
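
For example (the partition name is one of those listed above; the output fields are standard Slurm and may differ in detail):

  # show the full definition of the smp partition, including DefMemPerCPU and MaxTime
  scontrol show partition smp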

If you require more memory per node than the defaults provide, the MOGON I a-nodes offer:

Memory [MiB] | No. of nodes 1)
115500 | 444
242500 | 96
497500 | 15
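
A minimal sketch of requesting one of the larger-memory nodes; the memory value is taken from the table above, everything else is a placeholder:

  #!/bin/bash
  #SBATCH --partition=smp          # shared-node partition from the table above
  #SBATCH --ntasks=8               # placeholder core count (n < 64)
  #SBATCH --time=02:00:00          # placeholder runtime
  #SBATCH --mem=242500M            # target the memory of the 96 medium-memory a-nodes

  srun ./my_program                # placeholder executable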

Partition Limits

We have put limits in place to prevent single users or groups from taking up all resources in a given partition. These limits may be adjusted over time to improve system utilization and are therefore not given in detail here. They may result in pending jobs; the possible pending reasons are listed in our wiki.
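
Whether one of these limits is holding a job back can be seen in the reason column of squeue, for example (the exact reason strings depend on the Slurm configuration):

  # the %R column prints the pending reason, e.g. QOSMaxJobsPerUserLimit
  squeue -u $USER -o "%.18i %.9P %.8j %.8T %.10M %R"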

Partitions for Applications using Accelerators

Partition | Nodes | Max wall time | Interconnect | Accelerators | Comment
titanshort | i-nodes | 5 hours | InfiniBand | 4 GeForce GTX TITAN per node | see "Using GPUs under Slurm"
titanlong | i-nodes | 5 days | InfiniBand | 4 GeForce GTX TITAN per node | see "Using GPUs under Slurm"
teslashort | h-nodes | 5 hours | InfiniBand | - | see "Using GPUs under Slurm"
teslalong | h-nodes | 5 days | InfiniBand | - | see "Using GPUs under Slurm"
deeplearning | dgx-nodes | 12 hours | InfiniBand | 8 Tesla V100-SXM2 per node | for access, get in touch with us
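
As a sketch of the GPU request referred to in the comment column (all values other than the partition name are placeholders):

  #!/bin/bash
  #SBATCH --partition=titanshort   # GPU partition from the table above
  #SBATCH --gres=gpu:1             # request one of the four GTX TITAN cards of a node
  #SBATCH --ntasks=1               # placeholder task count
  #SBATCH --time=01:00:00          # placeholder; the partition limit is 5 hours

  srun ./my_gpu_program            # placeholder executable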

1) if all nodes are functional