For testing or visualization it is sometimes handy to allocate resources for an interactive job. SLURM provides two commands for this:
Simple Interactive Work with srun
To get an interactive shell, run:
srun --pty -p <partition name> -A <account name> bash -i
You can also request more time, memory, or CPUs.
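As a sketch, such a request might look like the following; the partition and account names are placeholders, and the specific values (2 CPUs, 4 GB of memory, 2 hours) are only examples:

```shell
# Hypothetical example: interactive shell with 2 CPUs, 4 GB RAM, 2 h walltime
srun --pty -p <partition name> -A <account name> \
     --cpus-per-task=2 --mem=4G --time=02:00:00 bash -i
```

The shell is released, and the job ends, as soon as you exit it.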
Allocation with salloc
To quote the official documentation:
salloc is used to allocate a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node). When salloc successfully obtains the requested allocation, it then runs the command specified by the user.
salloc -N 2 -p parallel -A zdvhpc
salloc: Granted job allocation 3242
salloc: Waiting for resource configuration
salloc: Nodes z[0001-0002] are ready for job
# now you can use two nodes and start the desired application
srun -N1 [other parameters] <some application confined on one node>
srun [other parameters] <some application triggered on all nodes>
srun [other parameters] <some mpi application>
# do not forget to type 'exit', or else you will keep working in a subshell
During a session with salloc you may log in to the allocated nodes via SSH and monitor their behaviour. This can be handy for estimating memory usage, too.
ssh <allocated node>
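Once logged in, standard tools can be used to observe memory consumption; as an alternative, the sstat command reports live statistics for a running job. The following is only a sketch (the job ID placeholder must be replaced with the ID printed by salloc):

```shell
# On the allocated node: inspect overall and per-process memory usage
free -h
top -u $USER

# Or, from a login node: query live statistics for the running job
sstat -j <jobid> --format=JobID,MaxRSS,AveRSS
```

MaxRSS here reports the peak resident memory of the job's steps, which is usually the figure of interest when sizing future batch jobs.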