====== Paraview ======

===== Starting an interactive job via command line manually =====

To use more hosts, set the value of the parameter -N accordingly. Additional parameters can be appended at the end of the salloc line. Please remember that the client and server versions must match! **It is very important to reserve the node(s) exclusively!**

==== Commands for Intel MPI ====
<code bash>
module add vis/ParaView/5.9.0-intel-2020a-osmesa-mpi-binary
salloc -A <your_account> -p parallel -N 2 --exclusive --time 1-0:0:0 -C anyarch --mem=100G
</code>
\\
=== Output (example) ===
<code bash>
salloc: Granted job allocation 9279358
salloc: Waiting for resource configuration
salloc: Nodes z[0823-0824] are ready for job
</code>
\\
Now you can start the Paraview server:
<code bash>
tests$ mpirun pvserver --force-offscreen-rendering --mpi
</code>
\\
The command "mpirun" provides the MPI environment, so that pvserver starts as many MPI instances as the SLURM job reservation specifies: in this case, one MPI instance per core on both reserved nodes. If you want to start fewer serving MPI instances, simply use "mpirun -n <number> ..."

=== Output (example) ===
<code bash>
Waiting for client...
Connection URL: cs://z0823.mogon:11111
Accepting connection(s): z0823.mogon:11111
</code>
\\
After the Paraview server has started successfully, use the information from this output to establish the ssh-tunnel to the listening port of the pvserver. See the [[start:software:visualization:paraview#establishment_of_the_ssh-tunnel|part below]] for how to establish a tunnel from your Linux/MacOS machine to the cluster node.
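The interactive salloc workflow above can also be wrapped in a SLURM batch script, so that pvserver starts automatically once the allocation is granted. A minimal sketch using the same reservation parameters (the script itself is not from the cluster documentation; adapt the account and resources to your needs):
<code bash>
#!/bin/bash
#SBATCH -A <your_account>
#SBATCH -p parallel
#SBATCH -N 2
#SBATCH --exclusive
#SBATCH --time=1-0:0:0
#SBATCH -C anyarch
#SBATCH --mem=100G

module add vis/ParaView/5.9.0-intel-2020a-osmesa-mpi-binary

# One pvserver instance per reserved core; look for the
# "Connection URL" line in the job log to set up the ssh-tunnel.
mpirun pvserver --force-offscreen-rendering --mpi
</code>
Submit it with "sbatch" and read the "Connection URL" from the job's log file instead of the interactive session.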
===== Establishment of the ssh-tunnel =====

The command looks like:
<code bash>
ssh -L <portnum-on-your-PC>:<node-name>:<listening-portnum-on-the-node> <login-node>
</code>
\\
The port number <portnum-on-your-PC> can be any free port on your Linux/MacOS machine. For simplicity you can choose the same port number as the listening port of the pvserver.
\\
Let's take the data from the previous example:
\\
  * you are working on the cluster via the login node miil01
  * the first node is z0823.mogon, where an instance of the pvserver is listening on port 11111
\\
The following command should be run on your Linux/MacOS machine in order to establish the tunnel connection from it to the first node:
<code bash>
ssh -L 11111:z0823.mogon:11111 miil01
</code>
\\
=== Output (example) ===
<code bash>
privacyIDEA_Authentication:
# (the cluster's ASCII-art login banner follows)
</code>
\\
Once the tunnel has been established successfully, you can connect your Paraview client to the server.

===== Connecting to the server =====

Regardless of where you try to connect from, you need an established ssh-tunnel connection to the first node, where the listening instance of the pvserver runs.
  - Start the client on your desired (local) machine.
  - Select "Connect ..."
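If port 11111 is already occupied on your machine, you can first pick a free local port and use it as <portnum-on-your-PC>. A minimal sketch for Linux/MacOS (the python3 helper is an assumption, not part of the cluster tooling; the commented ssh line reuses the example host names):
<code bash>
# Ask the OS for an unused TCP port by binding port 0,
# then print the port number it assigned.
LOCAL_PORT=$(python3 -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
echo "Tunnel will use local port ${LOCAL_PORT}"
# ssh -L ${LOCAL_PORT}:z0823.mogon:11111 miil01
</code>
The Paraview client then connects to localhost with ${LOCAL_PORT} instead of 11111.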
\\
{{start:software:visualization:pv59step1.png?300|}}
  - Click on "Add Server"
\\
{{start:software:visualization:pv59step2.png?300|}}
  - Enter localhost as the hostname and your chosen port, then click "Configure"
\\
{{start:software:visualization:pv59step3.png?300|}}
  - Leave the settings as they are and click on "Save"
\\
{{start:software:visualization:pv59step4.png?300|}}
  - Now select the created entry and click on "Connect"
\\
{{start:software:visualization:pv59step5.png?300|}}
  - If no error message pops up, you are connected. To check, open the memory viewer
\\
{{:pv_step5.png?300|}}
  - The result should look like this:
\\
{{start:software:visualization:pv59step8.png?300|}}
  - To disconnect, simply select File >> Disconnect
\\
{{:pv59step7.png?300|}}

After disconnecting, the pvserver terminates automatically, so you will see the message "Exiting..." from every MPI instance of the pvserver in the ssh session with the running interactive SLURM job, or in the job's log file:
<code bash>
Exiting...
Exiting...
Exiting...
...
</code>
\\
=== Easy method: Configure VPN ===

start/software/visualization/paraview.txt · Last modified: 2021/06/16 18:49 by noskov