
Python

Currently, we have a variety of Python versions available as module files. To list them all, run

$ module avail|& grep 'lang/Python'

Python2 < 2.7.16 and Python3 < 3.7.4

The Python versions available as module files provide numpy, scipy, pandas, cython and more. However, a matplotlib module in particular is most likely missing, because our installation framework installs it separately. Hence, the matplotlib functionality has to be loaded as an additional module file.

The intel versions are linked against Intel's MKL. Exporting OMP_NUM_THREADS enables multithreaded matrix handling with numpy.
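
As a minimal sketch (the thread count of 4 is an arbitrary example): the variable has to be set before numpy is imported, e.g. via export OMP_NUM_THREADS=4 in your job script, or from within Python:

import os

# set the thread count before numpy (and thus MKL) is loaded;
# "4" is an arbitrary example value
os.environ["OMP_NUM_THREADS"] = "4"

import numpy as np

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
c = np.dot(a, b)  # this matrix product now runs multithreaded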

Python2 >= 2.7.16 and Python3 >= 3.7.4

Our installation framework altered its policies to avoid cluttering the set of module files. Hence, when loading a Python module:

$ module load lang/Python/<python-version>

only the bare Python with a few additional libraries (or “modules” in Python-speak) is available. To use the scientific modules, load:

$ module load lang/SciPy-bundle/<bundle-version>

The toolchain of the bundle version has to match the compiler toolchain of the Python version. The same is true for the matplotlib modules, which can be loaded as:

$ module load vis/matplotlib/<matplotlib-version>

In this case the Python version of the matplotlib module has to match the Python version of the Python module, and their toolchain versions have to match as well.

If you intend to use Python in combination with another module, ensure that the toolchain and the toolchain version of the additional module match your selected Python module. With regard to the Python version, try to stay as current as possible.

If you need additional Python packages, you can easily install them yourself either "globally" in your home directory or inside of a virtual environment.

Your Personal Environment (Additional Packages)

In general, having a personal Python environment where you can install third-party packages yourself (without needing root privileges) is very easy. The preparation steps needed on MOGON are described below.

See also the xkcd comic on Python environments: https://xkcd.com/1987/

While the first variant (installing "globally" into your home directory) is already sufficient, we recommend using virtualenvs, since they are a lot easier to work with. Most importantly, virtual environments bear the potential to avoid the setup hell you might otherwise experience.

Please refrain from installing software that is already provided as a module file.
Do not use any of the modules ending in -bare to construct your virtual environment; they are installed as special dependencies for particular modules (or actually installed by accident).
We strongly discourage using any *conda setup on our clusters: it has often messed up existing environments, and frequently this is only discovered as a source of interference when switching back to our modules.
  1. First load an appropriate Python module, see the implications above.
  2. Then navigate to your home directory (if in doubt, type cd).

A so-called virtualenv can be seen as an isolated, self-contained Python environment of third-party packages.
Different virtualenvs do not interfere with each other nor with the system-wide installed packages.

It is advised to make use of virtualenvs in Python, especially if you intend to install different combinations or versions of various Python packages. Virtualenvs can also be shared between users if created in your group's project directory.

In the following section we will be using <ENV> as a placeholder for the environment name you intend to use. Feel free to choose a name to your liking. We recommend naming the environment after its purpose and/or the Python version you intend to use.

Create

You can simply create, activate, use, deactivate and destroy as many virtual environments as you want:

When using Python 3

please use the built-in venv module of Python 3 (python3 -m venv) instead of virtualenv --python=$(which python).

Creating a virtualenv will simply set up a directory structure and install some baseline packages:

$ python -m venv <ENV>

Now your virtual environment could be activated, but the respective modules would always have to be loaded first. We take a SciPy-bundle as an example - you can take any. We take a SciPy-bundle because you will probably need it anyway and it resolves other Python dependencies.

$ # we shall make the activation script writable - temporarily:
$ chmod +w <ENV>/bin/activate
$ # then we append the module load statement we have been using earlier to the activation script
$ #      NOTE: the -s flag prevents the module load statement from cluttering your terminal or log file
$ echo module -s load lang/SciPy-bundle/<version> >> <ENV>/bin/activate
$ # likewise you can add a matching matplotlib version, if you would like to have that functionality
$ echo module -s load vis/matplotlib/<version> >> <ENV>/bin/activate
$ # in the end we need to protect the activate script from accidental modification:
$ chmod -w <ENV>/bin/activate

Activate

To work in a virtualenv, you first have to activate it, which sets some environment variables for you:

$ source <ENV>/bin/activate
(<ENV>)$ # Note the name of the virtualenv in front of your prompt - nice, heh?

Use

Now you can use your virtualenv: newly installed packages will be installed inside the virtualenv and will only be visible to the Python interpreter you start from within the virtualenv:

(<ENV>)$ pip install dill
Collecting dill
  Downloading dill-0.3.3-py2.py3-none-any.whl (81 kB)
     |████████████████████████████████| 81 kB 244 kB/s 
Installing collected packages: dill
Successfully installed dill-0.3.3

And now compare what happens with the python interpreter from inside the virtualenv and with the system python interpreter:

(<ENV>)$ python -c 'import dill'
(<ENV>)$ /usr/bin/python -c 'import dill'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named dill

Deactivate

Deactivating a virtualenv reverts the activation step and all its changes to your environment:

(<ENV>)$ deactivate
$

Destroy

To destroy a virtualenv, simply delete its directory:

$ rm -r <ENV>

You can use virtual environments exactly as described when working with accelerators. However, as the architecture is different, you need to maintain a virtual environment that is created not on the login node, but on an accelerator node.

Please note: The set of modules (technically, the MODULEPATH search path) on the s-nodes differs from the rest of MOGON I. Hence, you cannot expect to find all modules available on the login nodes on the s-nodes as well.
Work in progress.

Load Environment Modules (module load [mod])

To load environment modules from within Python:

# Python 2:
execfile('/usr/share/Modules/init/python.py')
# Python 3 (execfile no longer exists there):
exec(open('/usr/share/Modules/init/python.py').read())

module('load', '<modulename>')

Multiprocessing

Smaller numbers of tasks can be divided amongst workers on a single node. In a high-level language like Python this can be accomplished with little more effort than a serial loop; lower-level languages would use threading constructs such as OpenMP. This example also demonstrates using Python as the script interpreter for a Slurm batch script. Note that since Slurm copies and executes batch scripts from a private directory, it is necessary to manually add the runtime directory to the Python search path.

#!/usr/bin/env python
 
#SBATCH --job-name=multiprocess
#SBATCH --output=logs/multiprocess_%j.out
#SBATCH --time=01:00:00
#SBATCH --partition=parallel  # Mogon II
#SBATCH --account=<youraccount>
#SBATCH --nodes=1
#SBATCH --exclusive
 
import multiprocessing
import sys
import os
 
# necessary to add cwd to path when script run 
# by slurm (since it executes a copy)
sys.path.append(os.getcwd()) 
 
def some_worker_function(some_input):
    pass  # placeholder: process a single work item here
 
# get number of cpus available to job
ncpus = int(os.environ["SLURM_JOB_CPUS_PER_NODE"])
 
# create pool of ncpus workers
pool = multiprocessing.Pool(ncpus)
 
# apply work function in parallel, then shut the pool down cleanly
pool.map(some_worker_function, range(100))
pool.close()
pool.join()

Process and thread level parallelism is limited to a single machine. To scale beyond a single node, you can use MPI-based parallelism with the mpi4py module:

#!/usr/bin/env python
 
#SBATCH --job-name=mpi
#SBATCH --output=logs/mpi_%j.out
#SBATCH --time=01:00:00
#SBATCH --partition=parallel  # Mogon II
#SBATCH --ntasks=128 # e.g. 2 skylake nodes on Mogon II
 
from mpi4py import MPI
 
def some_worker_function(rank, size):
    pass  # placeholder: do this rank's share of the work
 
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
 
some_worker_function(rank, size)

MPI programs, including Python scripts using mpi4py, must be launched using srun, as shown in this Slurm batch script:

#!/bin/bash
 
#SBATCH --job-name=mpi
#SBATCH --output=logs/mpi_%j.out
#SBATCH --time=01:00:00
#SBATCH --partition=parallel  # Mogon II
#SBATCH --account=<your account>
#SBATCH --ntasks=128 # two skylake nodes on Mogon II
 
module load <python module with mpi4py>
 
srun --mpi=pmi2  python mpi_pk.py

In this case we are only using MPI as a mechanism to remotely launch tasks on distributed nodes. All processes must start and end at the same time, which can lead to a waste of resources if some job steps take longer than others.

Performance Hints

Many of the hints are inspired by O'Reilly's Python Cookbook chapter on performance (Chapter 14) 1). We only discuss a few points explicitly here; the entire chapter is worth reading. If you need help getting performance out of Python scripts, contact us.

Better than guessing is to profile how much time a certain program, or a task within it, takes. Guessing bottlenecks is hard; profiling is often worth the effort. The above-mentioned Cookbook covers this topic as well.
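
For example, a minimal sketch using the standard library's cProfile (the function work() is just a stand-in for your own code):

import cProfile
import pstats

def work():
    # stand-in for the code you actually want to profile
    return sum(i * i for i in range(10**6))

# run under the profiler and write the raw statistics to a file
cProfile.run('work()', 'profile.out')

# print the ten most expensive calls, sorted by cumulative time
stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative').print_stats(10)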

Avoid regular expressions as much as you can. If you have to use them, compile them prior to any looping, e.g.:

import re

myreg = re.compile(r'\d')
# stringlist: placeholder for your list of strings
for stringitem in stringlist:
    re.search(myreg, stringitem)
    # or
    myreg.search(stringitem)

A little-known fact is that code defined in the global scope runs slower than code defined in a function. The speed difference has to do with the implementation of local versus global variables (operations involving locals are faster). So, if you want to make the program run faster, simply put the scripting statements in a function (also: see O'Reilly's Python Cookbook chapter on performance).

The speed difference depends heavily on the processing being performed.
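
A minimal sketch of the pattern (the loop is an arbitrary stand-in for your scripting statements):

# slow: statements at module (global) scope; every access to 'total'
# and 'i' is a dictionary lookup in the module namespace
total = 0
for i in range(10**6):
    total += i

# faster: the same loop inside a function, where 'total' and 'i'
# are fast, array-indexed local variables
def main():
    total = 0
    for i in range(10**6):
        total += i
    return total

main()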

Every use of the dot (.) operator to access attributes comes with a cost. Under the covers, this triggers special methods, such as __getattribute__() and __getattr__(), which often lead to dictionary lookups.

You can often avoid attribute lookups by using the from module import name form of import as well as making selected use of bound methods. See the illustration in O'Reilly's Python Cookbook chapter on performance.
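
A minimal sketch of both techniques, using math.sqrt as an arbitrary example:

import math

values = range(10**6)

# attribute lookup on math in every iteration
result = [math.sqrt(v) for v in values]

# avoid the lookup by importing the name directly ...
from math import sqrt
result = [sqrt(v) for v in values]

# ... or by binding the function to a local name first
msqrt = math.sqrt
result = [msqrt(v) for v in values]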

To avoid constant flushing (particularly in Python 2.x), use buffered output instead: either use Python's logging module, as it supports buffered output, or write to sys.stdout and only flush at the end of a logical block.

In Python 3.x the print()-function comes with a keyword argument flush, which defaults to False. However, use of the logging module is still recommended.
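
A minimal sketch of buffered logging with the standard library (the capacity and names are arbitrary examples): a MemoryHandler collects records and only flushes them to its target once its buffer is full.

import logging
import logging.handlers

# buffer up to 1000 records before flushing them to stdout
target = logging.StreamHandler()
buffered = logging.handlers.MemoryHandler(capacity=1000, target=target)

logger = logging.getLogger('myjob')
logger.setLevel(logging.INFO)
logger.addHandler(buffered)

for i in range(10000):
    logger.info('processed item %d', i)

buffered.flush()  # flush whatever remains at the end of the logical block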

Any constant scalar is best not calculated in any loop - regardless of the programming language. Compilers might(!) optimize this away, but are not always capable of doing so.

One example (timings for the module tools/IPython/6.2.1-foss-2017a-Python-3.6.4 on Mogon I; results on Mogon II may differ, but the message still holds):

Every trivial constant is re-computed if the interpreter is asked to do so:

In [1]: from math import pi
 
In [2]: %timeit [1*pi for _ in range(1000)]
   ...: 
149 µs ± 6.5 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
 
In [3]: %timeit [pi for _ in range(1000)]
87.1 µs ± 2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

The effect is more pronounced if division is involved 2):

In [4]: some_scalar = 300
 
In [5]: pi_2 = pi / 2
 
In [6]: %timeit [some_scalar / (pi / 2) for _ in range(1000)]
249 µs ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
 
In [7]: %timeit [some_scalar / pi_2 for _ in range(1000)]
224 µs ± 5.62 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Solution: Some evaluations are best placed outside of loops and bound to a variable.

Remember that every Python Module on Mogon comes with Cython. Cython is an optimising static compiler for both the Python programming language and the extended Cython programming language.

While we cannot give a comprehensive introduction in this wiki document, we recommend using Cython whenever possible and give this little example:

Imagine you have a (tested) script you need to call frequently. Then create modules your main script can import and write a setup script like this:

# script: setup.py 
#!/usr/bin/env python
 
import os
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
 
named_extension = Extension(
    "<name_of_your_extension>",
    ["directory_of_your_module/<module_name1>.pyx",
     "directory_of_your_module/<module_name2>.pyx"],
    extra_compile_args=['-fopenmp'],
    extra_link_args=['-fopenmp'],
    include_dirs=os.environ['CPATH'].split(':')
)
 
setup(
    name = "some_name",
    cmdclass = {'build_ext': build_ext},
    ext_modules = [named_extension] 
)

Replace named_extension with a name of your liking, and fill in all placeholders. You can now call the setup script like this:

$ python ./setup.py build_ext --inplace

This will create a file directory_of_your_module/<module_name1>.c; the subsequent compilation step results in a file directory_of_your_module/<module_name1>.so, which can be imported like any ordinary Python module.
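
As a minimal, hypothetical illustration (all names are placeholders): a file directory_of_your_module/module_name1.pyx could look like this, and after building it can be imported like any other Python module (assuming directory_of_your_module contains an __init__.py):

# directory_of_your_module/module_name1.pyx (hypothetical example)
def csum(long n):
    # plain C loop variables thanks to static typing
    cdef long i, total = 0
    for i in range(n):
        total += i
    return total

# usage from your main script, after 'python ./setup.py build_ext --inplace':
from directory_of_your_module.module_name1 import csum
print(csum(10**8))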

In Cython you can release the global interpreter lock (GIL) when not dealing with pure Python objects; see the Cython documentation on parallelism.
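
A minimal Cython sketch (hypothetical, and it requires the -fopenmp flags from the setup script above): prange releases the GIL and distributes the iterations over OpenMP threads, while the in-place addition is recognized by Cython as a reduction.

from cython.parallel import prange

def parallel_sum(long n):
    cdef long i
    cdef long total = 0
    # nogil=True releases the GIL for the duration of the loop;
    # only C-level operations are allowed inside
    for i in prange(n, nogil=True):
        total += i
    return total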

In particular, Cython works well with numpy.

Profiling memory is a special topic in itself. There is, however, the Python module memory_profiler, which is really helpful if you have an idea where to look. There is also Pympler, yet another such module.
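
A minimal sketch with the aforementioned memory_profiler (a third-party package, installable via pip; the function and sizes are arbitrary examples):

from memory_profiler import profile

@profile  # prints line-by-line memory usage for this function
def allocate():
    big = [0] * 10**7    # allocate a large list
    del big              # ... and release it again
    return [1] * 10**5

if __name__ == '__main__':
    allocate()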

Things to consider

Python is an interpreted language and, as such, should not be used for lengthy runs in an HPC environment. Please use the opportunity to compile your own modules with Cython; consult the relevant Cython documentation. If you do not know how to start, attend a local Python course or schedule a meeting at our local HPC workshop.


1)
As the link is frequently broken, please report when this happens - apparently O'Reilly does not like seeing the chapter online elsewhere, but had it online for free in the past.
2)
particularly for compiled functions - in interpreted code, as shown here, the effect is limited, as every number is a Python object, too