
# Paraview

To use more hosts, set the value of the `-N` parameter accordingly. Additional parameters can be appended at the end of the `salloc` line.

Please remember that the client and server versions must match! It is also very important to reserve the node(s) exclusively!

```
module add vis/ParaView/5.9.0-intel-2020a-osmesa-mpi-binary
salloc -A <your_account> -p parallel -N 2 --exclusive --time 1-0:0:0 -C anyarch --mem=100G
```

#### Output (example)

```
salloc: Granted job allocation 9279358
salloc: Waiting for resource configuration
salloc: Nodes z[0823-0824] are ready for job
```
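As noted above, the number of hosts is controlled by `-N`, and further parameters are simply appended. A minimal sketch (the account name `my_account` and the appended `--ntasks-per-node` value are placeholder assumptions, not taken from this page); the `echo` only prints the resulting `salloc` line so you can inspect it before actually running it:

```shell
# Build a larger reservation from the example line above; echo prints it for review.
NODES=4                              # scale up the reservation via -N
EXTRA="--ntasks-per-node=32"         # hypothetical additional parameter
CMD="salloc -A my_account -p parallel -N $NODES --exclusive --time 1-0:0:0 -C anyarch --mem=100G $EXTRA"
echo "$CMD"
```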

Now you can start the Paraview server:

```
$ mpirun pvserver --force-offscreen-rendering --mpi
```

The command `mpirun` provides the MPI environment, so that `pvserver` starts the number of MPI instances corresponding to the SLURM job reservation; in this case, one MPI instance per core on both reserved nodes. If you want to start fewer serving MPI instances, simply use `mpirun -n <number> …`

#### Output (example)

```
Waiting for client...
Connection URL: cs://z0823.mogon:11111
Accepting connection(s): z0823.mogon:11111
```

After the ParaView server has started successfully, use the information from its output to establish an SSH tunnel to the listening port of the pvserver. The part below shows how to establish a tunnel from your Linux/macOS machine to the cluster node.

The command looks like:

```
ssh -L <portnum-on-your-PC>:<node-name>:<listening-portnum-on-the-node> <login-node>
```

The port number `<portnum-on-your-PC>` can be any free port on your Linux/macOS machine. For simplicity, you can choose the same port number as the listening port of the pvserver.
Let's take the data from the previous example:

• you are working on the cluster via the login node miil01
• you've got the first node z0823.mogon, where an instance of the pvserver is running and listening on port 11111.

Run the following command on your Linux/macOS machine to establish the tunnel connection to the first node:

```
ssh -L 11111:z0823.mogon:11111 miil01
```

#### Output (example)

```
privacyIDEA_Authentication:
```

(followed by the ASCII-art login banner)
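The mapping from the pvserver output to the tunnel command can also be scripted. A minimal sketch, assuming the example values from above (`Connection URL` line with `cs://z0823.mogon:11111`, login node `miil01`); paste your own `Connection URL` line in place of the example:

```shell
# Derive the ssh tunnel command from the pvserver "Connection URL" line
# using plain parameter expansion.
url_line='Connection URL: cs://z0823.mogon:11111'   # paste your own line here
hostport=${url_line#*cs://}    # strip everything up to cs:// -> z0823.mogon:11111
node=${hostport%%:*}           # part before the colon        -> z0823.mogon
port=${hostport##*:}           # part after the colon         -> 11111
login=miil01                   # your login node
echo "ssh -L ${port}:${node}:${port} ${login}"
# -> ssh -L 11111:z0823.mogon:11111 miil01
```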

Once the tunnel is established successfully, you can connect your ParaView client to the server.

Regardless of where you connect from, you need an established SSH tunnel to the first node, where the listening pvserver instance is running.
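Before connecting the client, you can check locally that the tunnel is actually up. A small sketch (assumes the local port 11111 from the example; uses bash's built-in `/dev/tcp`, so no extra tools are needed):

```shell
# Return success if something accepts TCP connections on the given local port.
port_open() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_open 11111; then
  echo "tunnel is up: localhost:11111"
else
  echo "no listener on localhost:11111 - is the ssh tunnel running?"
fi
```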

1. Start the client on your desired (local) machine.
2. Select "Connect …".
3. Enter localhost as the host name and your chosen port, then click "Configure".
4. Leave the settings as they are and click "Save".
5. Now select the created entry and click "Connect".
6. If no error message pops up, you should be connected. To check, open the memory viewer.
7. To disconnect, simply select File » Disconnect.

After disconnection, the pvserver terminates automatically; you will see the message "Exiting..." from every MPI instance of the pvserver in the SSH session with the running interactive SLURM job, or in the job's log file:

```
Exiting...
Exiting...
Exiting...
...
```

#### Easy method: Configure VPN
