Login Nodes

Calculations are to be submitted as jobs to the SLURM scheduler. In particular, this means that jobs must not be run on the login nodes. Processes running directly on these nodes should be limited to tasks such as editing, data transfer and management, data analysis, compiling code, and debugging, as long as these tasks are not resource intensive (in terms of memory, CPU, network, or I/O). Any resource-intensive work must be run on the compute nodes through the batch system.
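As a point of reference, a minimal batch script might look like the following (job name, resource values, and program name are placeholders; adjust them to your workload):

```shell
#!/bin/bash
#SBATCH --job-name=example      # placeholder job name
#SBATCH --ntasks=1              # one task
#SBATCH --mem=2G                # memory request (placeholder)
#SBATCH --time=00:10:00         # wall-clock limit (placeholder)

# launch the actual computation on the allocated compute node
srun ./my_program
```

Such a script is submitted with `sbatch job.sh`; the computation then runs on a compute node rather than on the login node.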

In order to give everyone a fair share of the login nodes' resources, we impose the following limits.

Resource limits on login nodes

| Resource  | Limit |
|-----------|-------|
| Memory    | 10 GB |
| CPU cores | 4     |

Any process consuming excessive resources on a login node may be killed, especially when it begins to impact other users on that node. If a process causes significant problems for the system, it will be killed immediately and the user will be contacted via e-mail.

SLURM commands are resource intensive! They are the user's interface to the scheduling system, and every invocation queries the scheduler. Please avoid placing them in loops, like

```shell
for i in $(seq ...); do
  sbatch ... cmd $i
done
```

This could be replaced by a job array.
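For example, a loop over indices 1 to 100 can be expressed as a single array job (the range and script name here are illustrative):

```shell
# one submission creates 100 array tasks instead of 100 separate sbatch calls
sbatch --array=1-100 cmd.sh

# inside cmd.sh, each task reads its index from the environment:
#   $SLURM_ARRAY_TASK_ID
```

This keeps the load on the scheduler low and makes the tasks easier to manage (e.g. `scancel` on the whole array).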

Login and Service Nodes

You can use the following service nodes to log in to MOGON:

| Service Node | FQDN                     | Description |
|--------------|--------------------------|-------------|
| login21      | miil01.zdv.uni-mainz.de  | Login Node  |
| login22      | miil02.zdv.uni-mainz.de  | Login Node  |
| login23      | miil03.zdv.uni-mainz.de  | Login Node  |
| hpcgate      | hpcgate.zdv.uni-mainz.de | Jump Host   |

Since you access the MOGON service nodes through the HPCGATE, you can omit zdv.uni-mainz.de, e.g. for login21, miil01 is sufficient.
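With a standard OpenSSH client, the jump through hpcgate can be configured once so that `ssh miil01` works directly (a sketch; replace `<username>` with your own account name):

```
# ~/.ssh/config
Host hpcgate
    HostName hpcgate.zdv.uni-mainz.de
    User <username>

Host miil01 miil02 miil03
    User <username>
    ProxyJump hpcgate
```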

Interactive work

Please note that interactive work, such as testing code, should be executed in interactive jobs.
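An interactive job can be requested directly from the command line; the resource values below are placeholders to adjust to your needs:

```shell
# request an allocation and open a shell on a compute node
salloc --ntasks=1 --mem=4G --time=01:00:00

# alternatively, start an interactive shell in one step
srun --ntasks=1 --mem=4G --time=01:00:00 --pty bash -i
```

Either way, the work then runs on a compute node under the scheduler's control instead of on a login node.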

Running something like watch squeue to act upon a job's status puts strain on the scheduler, particularly if invoked by numerous users. If the goal is to start subsequent work once a job has finished, this can be implemented using job dependencies.
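A dependency chain can be set up at submission time, so no polling is needed (the script names here are hypothetical):

```shell
# submit the first job and capture its job ID
jobid=$(sbatch --parsable preprocess.sh)

# the second job starts only after the first has completed successfully
sbatch --dependency=afterok:${jobid} analysis.sh
```

Other dependency types exist as well, e.g. `afterany` to start regardless of the first job's exit status.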

Also, the scheduler needs time to work: the default interval between watch cycles (2 seconds) is shorter than the scheduler's update period, so such polling mostly returns unchanged information.

Repeated abuse of the login nodes may result in notification of your group administrator and, potentially, in your account being locked.

There is almost always a better solution; do not hesitate to contact us.