ORCA

ORCA is an electronic structure program package written by F. Neese, with contributions from many current and former coworkers and several collaborating groups. The ORCA binaries are available free of charge to academic users for a variety of platforms.

Website: https://orcaforum.kofo.mpg.de/app.php/portal

ORCA comes with a license that does not allow us to simply install it and make it available to everyone. We therefore need individual groups to print and sign the linked ORCA license and send it via internal mail (“Hauspost”) to the “ZDV HPC group”. We will then set the permissions accordingly.

When doing so, please include the following information (the signed EULA does not contain it):

  • name
  • username
  • e-mail address (in case we need to contact you)
  • and, in particular, the project name of your MOGON project.

Running ORCA in Parallel

To start multiple ORCA processes, add the %PAL block to your input file:

! [...]
%PAL NPROCS <NUM> END

where <NUM> is the number of processes you want to start.

Launch ORCA with its full path, $EBROOTORCA/orca, in the job script. This is necessary in parallel mode so that ORCA can correctly find its executables.
Make sure you use localscratch for your calculations, or even the ramdisk for I/O-intensive jobs!
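
As a minimal sketch, the relevant lines of a job script could look as follows (the input and output file names are illustrative, and the ramdisk location /dev/shm is only an assumption; check the MOGON documentation for the recommended path):

# Change into the node-local scratch directory created for this job
cd /localscratch/$SLURM_JOB_ID

# For very I/O-intensive jobs, a ramdisk (assumed here to be the tmpfs at /dev/shm)
# could be used instead of localscratch:
# cd /dev/shm

# Launch ORCA with its full path so that its parallel helper executables are found
$EBROOTORCA/orca my_input.inp > my_output.out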

Hint on the Use of Parallel ORCA from the ORCA Manual

Thus, all major modules are parallelized in the present version. The efficiency is such that for RI-DFT perhaps up to 16 processors are a good idea while for hybrid DFT and Hartree-Fock a few more processors are appropriate. Above this, the overhead becomes significant and the parallelization loses efficiency. Coupled-cluster calculations usually scale well up to at least 8 processors but probably it is also worthwhile to try 16. For Numerical Frequencies or Gradient runs it makes sense to use as many processors as 3*Number of Atoms. If you run a queuing system you have to make sure that it works together with ORCA in a reasonable way.
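
As an illustration of the 3*(number of atoms) rule of thumb, a numerical frequency run on a hypothetical four-atom molecule could use up to 3 * 4 = 12 processes. The method and basis set below are placeholders, not a recommendation:

# Numerical frequency calculation; up to 3 * (number of atoms) processes are useful
! BP86 def2-SVP NumFreq
%PAL NPROCS 12 END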

Memory Usage

Memory usage is controlled by %maxcore. The keyword specifies the memory per core, in MB, that ORCA is allowed to use. As a rule of thumb, you should not request more than 75% of the physically available memory per core, since ORCA occasionally uses more than the %maxcore setting indicates.

Example

In the following example, 2000 MB of memory per core are allocated to ORCA. Since 4 CPU cores are requested, the total amount of memory required by ORCA is $4 \cdot 2000\,\mathrm{MB} = 8000\,\mathrm{MB}$.

! [...]
%maxcore 2000
%PAL NPROCS 4 END
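
A matching SLURM request could look like the following sketch; the --mem-per-cpu value of 2700 MB is only an example choice, picked so that %maxcore 2000 stays at roughly 75% of the memory granted per core ($2000\,\mathrm{MB} / 0.75 \approx 2667\,\mathrm{MB}$):

#SBATCH --ntasks=4                                  # matches %PAL NPROCS 4 END
#SBATCH --mem-per-cpu=2700                          # %maxcore 2000 is about 75% of 2700 MB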

Example ORCA Job Script

#!/bin/bash
#-----------------------------------------------------------------
# Example SLURM job script to run ORCA on MOGON II.
#
# This script requests 16 cores on one node and 500MB RAM per core
#-----------------------------------------------------------------

####### Job Information / Mail Notify #######
#SBATCH --job-name="Orca_Example"                   # Set Job Name
#SBATCH --mail-user=<Joe_User>@uni-mainz.de         # Specify Receiving Mail Address
#SBATCH --mail-type=ALL                             # Specify Type of Mail: NONE, BEGIN, END, FAIL, REQUEUE, ALL
#SBATCH -A <Account>

####### Job Output #######
#SBATCH --output %x-%j.out                          # Stdout Output File (%j expands to JobID)
#SBATCH --error %x-%j.err                           # Stderr Output File (%x expands to JobName)

####### Partition #######
#SBATCH --partition=smp                             # Queue Name on MOGON II

####### Parallelism #######
#SBATCH --nodes=1                                   # Run All Tasks on a Single Node
#SBATCH --ntasks=16                                 # Total Number of Cores

####### Resources #######
#SBATCH --mem-per-cpu=500                           # Amount of Memory per CPU (in MB)
#SBATCH --time=0-00:05:00                           # Run Time in 'Days-Hours:Minutes:Seconds'

####### Modules #######
module purge
module load chem/ORCA/4.2.1-gompi-2019b

### Localscratch ###
# Save Directory
OUT_DIR=$(pwd)/Orca_Output/h20_$SLURM_JOB_ID
# Make Output Directory
mkdir -p $OUT_DIR

# Set Job Directory to Local Scratch Space
export JOBDIR=/localscratch/$SLURM_JOB_ID

# Copy Files to Scratch
cp $SLURM_SUBMIT_DIR/*.txt $JOBDIR
#cp $SLURM_SUBMIT_DIR/*.xyz $JOBDIR
#cp $SLURM_SUBMIT_DIR/*.inp $JOBDIR

# Change to Scratch Directory
cd $JOBDIR

### ORCA Run ###
# Run Calculation
# For ORCA versions >= 5 use '$EBROOTORCA/bin/orca' instead
$EBROOTORCA/orca ${JOBDIR}/*.txt > ${OUT_DIR}/Orca_Output_h20_$SLURM_JOB_ID.out

# Copy Everything from Scratch to Output Directory
cp $JOBDIR/* $OUT_DIR

Example ORCA Input File

The job script above expects an ORCA input file in the submit directory (here with a .txt extension, matching the cp command in the script), for example:

! B3LYP def2-SVP Opt
%maxcore 500
%PAL NPROCS 16 END

*xyz 0 1
H 0.0 0.0 0.0
H 0.0 0.0 1.0
*
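
Assuming the job script is saved as orca_job.sh and the input file as h2.txt (both names are illustrative; the script copies every *.txt file from the submit directory to the scratch space), the job is submitted with sbatch:

# Submit from the directory containing the job script and the input file
sbatch orca_job.sh

# Check the job status
squeue -u $USER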