Parallel Visualization with ParaView

Access and reservation of multiple GPU nodes

Our supercomputer Mistral currently includes 21 GPU nodes; this built-in visualization server is described in more technical detail here. While a single node is sufficient for the post-processing and visualization of data in most cases, extremely large data sets may require more hardware. The reservation/login procedure for a single node is described here. A standard user can reserve 2 nodes with 8 GPUs in total. If you need more GPUs for the visualization of your data, please contact Beratung@DKRZ.

The recommended way to reserve multiple nodes is to download this script, which needs to be executed locally on your own Linux or Mac machine. You may need to adjust this script to include your project/user name, as well as to point to your local vncviewer installation. A public-key-based login to Mistral is also required. The script automatically connects to Mistral, reserves several GPU nodes, starts a VNC server on the first node, and connects to this server using your local vncviewer.
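If you want to understand or adapt what the wrapper script does, its core steps can be sketched roughly as follows. This is a hypothetical illustration, not the actual DKRZ script: the option letters, host name, and commands are assumptions.

```shell
#!/bin/bash
# Hypothetical sketch of the reservation wrapper -- the option letters,
# host name, and commands below are assumptions, not the DKRZ script.

PROJECT=""
NODES=2                       # default: reserve 2 nodes

parse_args() {
  local OPTIND opt
  while getopts "A:n:" opt; do
    case "$opt" in
      A) PROJECT="$OPTARG" ;;
      n) NODES="$OPTARG" ;;
    esac
  done
}

parse_args "$@"

if [ -z "$PROJECT" ]; then
  echo "usage: $0 -A <ProjectID> [-n <NumberOfNodes>]" >&2
else
  # Log in via public key, reserve the nodes; a VNC server would then be
  # started on the first node and the local vncviewer attached to it
  # (all hypothetical commands):
  ssh -t mistral.dkrz.de \
      "salloc -N $NODES --exclusive -p gpu -A $PROJECT -t 12:00:00"
  # vncviewer <first-node>:1 &
fi
```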

To start the reservation, simply execute the above script in a local terminal, adding your project ID and the number of nodes you want to reserve (the default is 2).

somewhere:~> ./ -A <ProjectID> -n <NumberOfNodes>

Manual Reservation: If you are working under Windows or cannot execute the above script, you need to reserve the nodes manually. Afterwards, you need to start a VNC session and connect to that VNC session as described here. To manually allocate GPU nodes for 12 hours, execute the following line on Mistral:

mistral:~> salloc -N <NumberOfNodes> --exclusive -p gpu -A <projectID> -t12:00:00 -- /bin/bash -c 'ssh -X $(scontrol show hostnames $SLURM_JOB_NODELIST | head -n1)'

Starting ParaView Server and ParaView Client: ParaView Client/Server can currently be used with ParaView 5.2 and 5.4.1 (the latest version). Please open two terminal windows in your remote VNC session and enter the following command in the first terminal window to start the ParaView servers:

mgXXX:~> /work/kv0653/bin/ <Processes>

Starting ParaView 5.4.1 on 2 nodes with 4 GPUs and 4 processes per GPU.
Waiting for client...
Connection URL: cs://mgXXX:11111
Accepting connection(s): mgXXX:11111

This little script checks the number of reserved nodes and, by default, starts 4 processes per GPU on each node. With the optional argument <Processes> you can specify a different number of processes per GPU. With 4 GPUs per node, this amounts to 16 ParaView servers per node by default. Note the connection address printed in the terminal: mgXXX:11111.
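The arithmetic behind these numbers can be sketched as a small helper function. This is a hypothetical illustration mirroring the start script's defaults (4 GPUs per node on Mistral, 4 processes per GPU), not the script itself:

```shell
# Hypothetical helper mirroring the start script's process arithmetic:
# total servers = nodes x GPUs per node (4 on Mistral) x processes per GPU.
total_servers() {
  local nodes=$1 procs_per_gpu=${2:-4} gpus_per_node=4
  echo $(( nodes * gpus_per_node * procs_per_gpu ))
}

total_servers 2 4   # 2 nodes x 4 GPUs x 4 processes -> prints 32
```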

Now start the ParaView client in the second terminal window:

mgXXX:~> /work/kv0653/bin/

Alternatively, you can start ParaView from the menu, i.e. Applications -> Graphics -> ParaView5.4.1. To use ParaView 5.2 as client/server, simply replace 54 with 52 in the names of both start scripts above. The second command starts the ParaView client with the familiar user interface. Here you need to connect to the previously started ParaView server, which can be done by clicking on the Connect icon in the menu (marked in red below).


After this, a dialog opens, in which you need to select the correct server. If it does not yet exist from a previous session, you can click on Add Server to add a menu entry for your server's address.


Leave everything as is, just add the correct server address mgXXX:11111 from above as the host and specify a suitable name. Then click on Configure and, in the following dialog, press Save.
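For reference, ParaView persists such server entries in a servers.pvsc XML file. A saved configuration for the setup above would look roughly like this; the exact schema can vary between ParaView versions, and the server name here ("mistral-gpu") is just an example:

```xml
<Servers>
  <!-- "mistral-gpu" is an example name; the resource is the cs:// URL
       printed by the server start script. -->
  <Server name="mistral-gpu" resource="cs://mgXXX:11111">
    <ManualStartup/>
  </Server>
</Servers>
```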


Now click on this menu item in the Choose Server Configuration dialog and click on Connect. If everything works, you can see that the client is connected to the server, and you can start by loading (possibly large) data sets. The example below shows the entire ICON ocean data set in 3D at 10 km horizontal resolution. In 3D this accumulates to around 250 million cells, which can now be explored interactively; visualized is the temperature variable. ParaView behaves more or less the same as in single mode, except that in parallel mode many computations and visualizations are tremendously faster. Volume rendering works as well, but is not recommended for irregular data sets such as ICON, due to the grid layout and a necessary internal reordering of the grid.


This last visualization also shows the 3D ICON ocean data, but this time the data is colored according to the ID of the process that handles a particular area of the data. In this example we have 2 nodes with 4 GPUs and 4 processes per GPU, i.e. 32 processes in total.