Containers on Mistral

In some cases, there is a need to bundle or package your application(s) into isolated, portable environments (containers). Typical use cases include:

  • Testing new code

  • Reproducing and sharing results (workflows)

  • Workaround for missing system libraries (see ML example)

At DKRZ we have started providing containerization tools. Currently, Singularity is available on Mistral. On this page you will learn how to use, create, and run Singularity containers.


Container support on Mistral is at an early stage. In particular, there are a few pitfalls due to the old RHEL 6 kernel, so not all common recipes from the web will work! We encourage you to share your containers or recipes so that other Mistral users can benefit. We will inform you how and where you can do this.

How to use Singularity?


Please check the official Singularity user guide here for detailed information.

To start using Singularity commands on Mistral, you need to load the singularity environment module first:

module load singularity/3.6.1-gcc-9.1.0

For now, only version 3.6 is available, but this may change when we upgrade to newer versions. Run the following command to show the Singularity versions available on Mistral:

module avail singularity

and to check the version:

singularity --version

Check the help:

singularity --help

File system inside containers

Inside the container, some system-defined bind paths are mounted automatically, e.g. host directories like $HOME (the user's home directory) or /tmp are already available. If you want to mount other host directories in the container:

  • use the option --bind/-B src[:dest[:opts]]

  • src and dest are paths outside and inside of the container respectively

  • opts are mount options (ro: read-only, rw: read-write)
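For example, to make a project directory from the host available read-only inside the container (the paths and image name below are placeholders):

```shell
# Mount the host directory /work/myproject read-only at /data in the container.
# /work/myproject and CONTAINER.sif are placeholder names.
singularity shell --bind /work/myproject:/data:ro CONTAINER.sif
```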

It is also possible to bind directories using environment variables:

$ export SINGULARITY_BINDPATH="/scratch,/work"
$ singularity shell CONTAINER.sif

More details can be found here.

GPU/CUDA support

The version of CUDA inside the container must be supported by the compute node. You can always use

module avail

to check the CUDA versions available where you run the Singularity container. The same rule applies to other software, e.g. MPI.

To run Singularity with CUDA/GPU support, you only need to add the --nv flag to the run/exec commands, e.g.:

singularity run --nv ...
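For example, assuming a container image with PyTorch installed (pytorch.sif is a placeholder name) and a GPU node with a compatible CUDA version, you can check whether the GPU is visible:

```shell
# --nv makes the host's NVIDIA driver and devices available in the container
singularity exec --nv pytorch.sif \
    python -c "import torch; print(torch.cuda.is_available())"
```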

More details can be found here.

MPI support


  • The version of MPI must be the same on the host and in the container.

  • The host's InfiniBand devices and libraries must be mounted in the container.

An MPI program inside a container is launched via srun, e.g.:

srun --mpi=pmi2 Options singularity exec Container_Image.sif Command
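A minimal sketch, assuming a container whose MPI version matches the host's and which contains an MPI program at /opt/app/mpi_hello (both placeholder names):

```shell
# Launch 4 MPI tasks; each task runs the program inside the container
srun --mpi=pmi2 -n 4 singularity exec mpi_container.sif /opt/app/mpi_hello
```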

Singularity with batch support

It is possible, and even recommended, to run Singularity containers as an interactive job or as a Slurm batch job. For an interactive session:

<userid>@mlogin1% srun --pty -A <Account> -p <PARTITION> singularity shell CONTAINER.sif

You can also run the container as a batch job, e.g.:

#!/bin/bash
#SBATCH -J mistral_singularity
#SBATCH -o mistral_singularity.out
#SBATCH -p shared
#SBATCH -t 01:00:00

module purge
module load singularity
module load cuda

# run your application inside the container
singularity exec container.sif Command

If you name it mistral_script.sbatch, then you can submit it with:

sbatch mistral_script.sbatch

The output log will be saved to mistral_singularity.out.

Singularity and Docker

It is even possible to pull and run Docker images:

singularity pull docker://centos:7
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob ab5ef0e58194 done
Copying config 0a7908e1b9 done
Writing manifest to image destination
INFO:    Creating SIF file...
INFO:    Build complete: centos_7.sif

To access and run a shell within the container:

$ singularity shell centos_7.sif

You are now running inside the container, where you can check the kernel and OS:

Singularity> uname -r

Singularity> cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID_LIKE="rhel fedora"
PRETTY_NAME="CentOS Linux 7 (Core)"
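For one-off commands, an interactive shell is not needed; singularity exec runs a single command inside the container:

```shell
# Print the container's OS release without opening an interactive shell
singularity exec centos_7.sif cat /etc/os-release
```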



Containerized Machine learning with PyTorch/GPU/CUDA

PyTorch deep learning Singularity container

Please check this repository; we have built some Singularity images that are supported on Mistral.

Build your own image

Currently, it is not possible to build container images directly on Mistral: building requires sudo privileges, which are not granted to Mistral users. However, you can build the image on your laptop, copy it (scp) to Mistral, and run it there.
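A typical workflow looks like this; mycontainer.def, mycontainer.sif, the login host name, and the target path are placeholders:

```shell
# On your laptop: build the image (requires sudo)
sudo singularity build mycontainer.sif mycontainer.def

# Copy the image to Mistral; <userid> and the target directory are placeholders
scp mycontainer.sif <userid>@mistral.dkrz.de:/work/<project>/<userid>/
```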

CACHE and TMP directories

To avoid filling your $HOME with Singularity cache and temporary files, you can override the default locations by exporting:

  • SINGULARITY_TMPDIR: Used with the build command, to consider a temporary location for the build.

  • SINGULARITY_CACHEDIR: Specifies the directory for image downloads to be cached in.

before running Singularity commands, for example:

mkdir -p /scratch/{a,z}/$USER/singularity/{cache,tmp}
export SINGULARITY_TMPDIR=/scratch/{a,z}/$USER/singularity/tmp
export SINGULARITY_CACHEDIR=/scratch/{a,z}/$USER/singularity/cache
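The same pattern works with any writable location; a generic sketch, where SCRATCH_BASE is a placeholder for your writable base directory:

```shell
# Point Singularity's cache and tmp at a writable base directory.
# SCRATCH_BASE is a placeholder; on Mistral you would use your scratch space.
SCRATCH_BASE="${SCRATCH_BASE:-/tmp/$USER}"
mkdir -p "$SCRATCH_BASE/singularity/cache" "$SCRATCH_BASE/singularity/tmp"
export SINGULARITY_CACHEDIR="$SCRATCH_BASE/singularity/cache"
export SINGULARITY_TMPDIR="$SCRATCH_BASE/singularity/tmp"
```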


FATAL: kernel too old

This error is caused by the kernel version on the host where the container runs. The operating system on the Mistral cluster is Red Hat Enterprise Linux release 6.4 (RHEL6). Since there is no plan to upgrade the OS on Mistral, a workaround is to use an older OS in the container. For example, containers with the following OS can be used:

  • CentOS 7

  • Ubuntu 14 (trusty) or Ubuntu 16 (xenial)
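For example, a minimal Singularity definition file based on CentOS 7 sidesteps the kernel problem (a sketch; the package installed in %post is a placeholder):

```
Bootstrap: docker
From: centos:7

%post
    # install whatever your application needs (placeholder package)
    yum -y install python3
```

Such an image must be built on a machine where you have sudo.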

Tips & Tricks

Mounting software tree

To use software/modules from Mistral in your container, there are some specific directories that need to be mounted first:

  • /sw/spack-rhel6

  • /sw/rhel6-x64

  • /mnt/lustre01/spack-workplace

To bind the above paths when you shell into the container:

$ singularity shell --bind /mnt/lustre01/spack-workplace/spack-rhel6/:/sw/spack-rhel6 --bind /mnt/lustre01 --bind /sw/rhel6-x64/:/sw/rhel6-x64 CONTAINER.sif

Once these directories are mounted in the container, you can set up the spack environment:

$ . /sw/spack-rhel6/spack/share/spack/setup-env.sh

To check whether spack and the modules are available:

$ module av

$ spack list


Please report any issue using one of our contact channels.