
# Julia Programming Language

#### Work In Progress


Julia is a high-performance, high-level, dynamic programming language.

Distinctive aspects of Julia's design include a type system with parametric polymorphism, with multiple dispatch as its core programming paradigm. Julia supports concurrent, composable parallel, and distributed computing (with or without MPI and/or the built-in "OpenMP-style" threads), and can call C and Fortran libraries directly without glue code. Julia uses a just-in-time (JIT) compiler that the Julia community refers to as "just-ahead-of-time" (JAOT), since Julia by default compiles all code to machine code before running it.
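To illustrate what multiple dispatch means in practice, here is a minimal sketch (the function `describe` is only an example, not part of any MOGON setup):

```julia
# Multiple dispatch: Julia selects the method based on the runtime
# types of *all* arguments, not just the first one.
describe(x::Integer) = "an integer: $x"
describe(x::AbstractFloat) = "a float: $x"
describe(x, y) = "two things: $x and $y"

println(describe(42))      # dispatches on the Integer method
println(describe(3.14))    # dispatches on the AbstractFloat method
println(describe(1, 2.0))  # dispatches on the two-argument method
```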

We currently offer the following Julia versions:

JGU HPC Modules:

```
lang/Julia/1.5.3-linux-x86_64    lang/Julia/1.6.0-linux-x86_64 (D)
```

One can load a specific version with the following command:

```shell
module load lang/Julia/1.6.0-linux-x86_64
```

# Submitting a Serial Julia Job

hello_mogon.jl

```julia
println("Hello MOGON!")
```

serial_julia_job.slurm

```bash
#!/bin/bash
#SBATCH --partition=smp
#SBATCH --account=<YourHPC-Account>
#SBATCH --time=0-00:01:00
#SBATCH --mem=512 #0.5GB
#SBATCH --job-name=julia_serial_example
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err

module purge
module load lang/Julia

julia hello_mogon.jl
```

Once the job has finished, you can display the result with the following command:

```shell
cat julia_serial_example_*.out
```

```
Hello MOGON!
```

# Submitting a Parallel Julia Job

Julia offers two main approaches to parallel computing: multi-threading, which is essentially shared-memory parallelism, and distributed processing, which parallelizes code across separate Julia processes.

The number of execution threads is controlled either with the `-t`/`--threads` command line argument or with the `JULIA_NUM_THREADS` environment variable. When both are specified, `-t`/`--threads` takes precedence.

Therefore, to start Julia with four threads, execute the following command:

```shell
julia --threads 4
```

But let's explore the basics of Julia's multi-threading capabilities with an example:

hello_mogon_smp.jl

```julia
Threads.@threads for i = 1:20
    println("Hello MOGON! The Number of iteration is $i from Thread $(Threads.threadid())/$(Threads.nthreads())")
end
```

julia_smp_job.slurm

```bash
#!/bin/bash
#SBATCH --partition=smp
#SBATCH --account=<Your-ZDV-Account>
#SBATCH --time=0-00:02:00
#SBATCH --mem-per-cpu=1024 #1GB
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=6
#SBATCH --job-name=smp_julia
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err

module purge
module load lang/Julia

julia --threads $SLURM_CPUS_PER_TASK hello_mogon_smp.jl
```

Once the job is finished, you can display the result with the following command:

```shell
cat smp_julia_*
```

The output should be similar to the following lines:

```
Hello MOGON! The Number of iteration is 5 from Thread 2/6
Hello MOGON! The Number of iteration is 9 from Thread 3/6
Hello MOGON! The Number of iteration is 15 from Thread 5/6
Hello MOGON! The Number of iteration is 10 from Thread 3/6
Hello MOGON! The Number of iteration is 11 from Thread 3/6
Hello MOGON! The Number of iteration is 12 from Thread 4/6
Hello MOGON! The Number of iteration is 16 from Thread 5/6
Hello MOGON! The Number of iteration is 17 from Thread 5/6
Hello MOGON! The Number of iteration is 6 from Thread 2/6
Hello MOGON! The Number of iteration is 13 from Thread 4/6
Hello MOGON! The Number of iteration is 7 from Thread 2/6
Hello MOGON! The Number of iteration is 14 from Thread 4/6
Hello MOGON! The Number of iteration is 1 from Thread 1/6
Hello MOGON! The Number of iteration is 8 from Thread 2/6
Hello MOGON! The Number of iteration is 2 from Thread 1/6
Hello MOGON! The Number of iteration is 3 from Thread 1/6
Hello MOGON! The Number of iteration is 18 from Thread 6/6
Hello MOGON! The Number of iteration is 4 from Thread 1/6
Hello MOGON! The Number of iteration is 19 from Thread 6/6
Hello MOGON! The Number of iteration is 20 from Thread 6/6
```

Starting Julia with `julia -p n` provides `n` worker processes on the local machine. Generally it makes sense for `n` to equal the number of CPU threads (logical cores) on the machine. Note that the `-p` argument implicitly loads the `Distributed` module.

Julia Documentation, Multi-processing and Distributed Computing
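As a minimal sketch of the `Distributed` API (here the workers are added with `addprocs` instead of `-p`; the squaring workload is only illustrative):

```julia
using Distributed

addprocs(2)  # start two local worker processes (same effect as `julia -p 2`)

println("Processes: ", nprocs(), ", workers: ", nworkers())

# pmap sends each work item to a free worker and collects the results in order
squares = pmap(x -> x^2, 1:10)
println(squares)
```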

distributed_julia_example.jl

```julia
using Distributed  # loaded implicitly by --procs, stated here for clarity

@everywhere begin
    using LinearAlgebra
    a = zeros(200, 200);
end

# read the Slurm allocation from the environment
slurm_cores = parse(Int, ENV["SLURM_CPUS_PER_TASK"])
println("Number of requested Slurm CPUs per Task is: ", slurm_cores)

println("Number of available Julia Processes: ", nprocs())
println("Number of available Julia Worker Processes: ", nworkers())

calctime = @elapsed @sync @distributed for i = 1:200
    a[i] = maximum(abs.(eigvals(rand(500, 500))))
end

println("With ", slurm_cores, " CPUs per Task the calculation took ", calctime, " seconds.")
```
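One caveat about this example: `a` is a regular `Array`, so each worker writes into its own copy and the master's `a` stays untouched; the loop serves only to time the work. If the computed values themselves are needed, a `SharedArray` (shared between processes on one node) is one way to collect them. A minimal sketch with smaller matrices:

```julia
using Distributed, SharedArrays

addprocs(2)
@everywhere using LinearAlgebra

# A SharedArray lives in shared memory, so assignments made by the
# workers inside the @distributed loop are visible on the master process.
a = SharedArray{Float64}(20)

@sync @distributed for i in 1:20
    a[i] = maximum(abs.(eigvals(rand(50, 50))))
end

println("largest value: ", maximum(a))
```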

julia_distributed_job.slurm

```bash
#!/bin/bash
#SBATCH --partition=smp
#SBATCH --account=<Your-HPC-Account>
#SBATCH --time=0-00:03:00
#SBATCH --mem-per-cpu=4096 #4GB
#SBATCH --nodes=1
#SBATCH --cpus-per-task=2
#SBATCH --job-name=dist_julia
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err

module purge
module load lang/Julia

julia --procs $SLURM_CPUS_PER_TASK distributed_julia_example.jl
```

Once the job has finished, you can display the result with the following command:

```shell
cat dist_julia*.out
```

```
[ ... ]
With 2 CPUs per Task the calculation took 34.029345377 seconds.
```

| CPUs per Task | Runtime (s) |
|---------------|-------------|
| 2             | 34.03       |
| 4             | 19.27       |
| 6             | 14.58       |
| 8             | 12.02       |

# Julia MPI Job

First, Julia must be configured to use MPI. For this purpose the MPI wrapper for Julia (MPI.jl) is used. Log in to one of our service nodes and load Julia and the desired MPI module via:

```shell
module load mpi/OpenMPI/4.0.5-GCC-10.2.0
module load lang/Julia
```

Next, you need to build the MPI package for Julia with Pkg:

```shell
julia -e 'ENV["JULIA_MPI_BINARY"]="system"; using Pkg; Pkg.add("MPI"); Pkg.build("MPI", verbose=true)'
```

If installation and build were successful, the output should be similar to the following:

```
[ ... ]
[ Info: using system MPI ]
┌ Info: Using implementation
│   libmpi = "libmpi"
│   mpiexec_cmd = mpiexec
└   MPI_LIBRARY_VERSION_STRING = "Open MPI v4.0.5, package: Open MPI henkela@login22.mogon Distribution, ident: 4.0.5, repo rev: v4.0.5, Aug 26, 2020\0"
┌ Info: MPI implementation detected
│   impl = OpenMPI::MPIImpl = 2
│   version = v"4.0.5"
└   abi = "OpenMPI"
```

Now that MPI and Julia have been set up correctly, we can proceed to the example.

hello_mogon_mpi.jl

```julia
using MPI

MPI.Init()

comm = MPI.COMM_WORLD
my_rank = MPI.Comm_rank(comm)
comm_size = MPI.Comm_size(comm)

println("Hello MOGON! I am Rank ", my_rank, " of ", comm_size, " on ", gethostname())

MPI.Finalize()
```

julia_mpi_job.slurm

```bash
#!/bin/bash
#SBATCH --partition=parallel
#SBATCH --account=<YourHPC-Account>
#SBATCH --time=0-00:02:00
#SBATCH --mem-per-cpu=2048 #2GB
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --job-name=mpi_julia
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err

module purge
module load mpi/OpenMPI/4.0.5-GCC-10.2.0
module load lang/Julia

export JULIA_MPI_PATH=$EBROOTOPENMPI
```

```bash
srun julia hello_mogon_mpi.jl
```

Once the job has finished, you can display the result with the following command:

```shell
cat mpi_julia_*.out
```
```
Hello MOGON! I am Rank 9 of 16 on z0278.mogon
Hello MOGON! I am Rank 2 of 16 on z0277.mogon
Hello MOGON! I am Rank 11 of 16 on z0278.mogon
Hello MOGON! I am Rank 12 of 16 on z0278.mogon
Hello MOGON! I am Rank 15 of 16 on z0278.mogon
Hello MOGON! I am Rank 10 of 16 on z0278.mogon
Hello MOGON! I am Rank 13 of 16 on z0278.mogon
Hello MOGON! I am Rank 8 of 16 on z0278.mogon
Hello MOGON! I am Rank 14 of 16 on z0278.mogon
Hello MOGON! I am Rank 1 of 16 on z0277.mogon
Hello MOGON! I am Rank 3 of 16 on z0277.mogon
Hello MOGON! I am Rank 5 of 16 on z0277.mogon
Hello MOGON! I am Rank 4 of 16 on z0277.mogon
Hello MOGON! I am Rank 0 of 16 on z0277.mogon
Hello MOGON! I am Rank 6 of 16 on z0277.mogon
Hello MOGON! I am Rank 7 of 16 on z0277.mogon
```
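Beyond hello-world, MPI.jl also exposes the usual collective operations. A minimal sketch of a reduction (the positional `MPI.Reduce(value, op, root, comm)` form matches the MPI.jl generation shown in the build output above; newer releases also accept `root` as a keyword argument):

```julia
using MPI

MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Sum the rank numbers of all processes onto the root (rank 0)
total = MPI.Reduce(rank, +, 0, comm)

if rank == 0
    println("Sum of all ranks: ", total)
end

MPI.Finalize()
```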