Singularity#
On this page you will learn how to use Singularity containers on Levante.
How to use Singularity?#
Note
Please check the official Singularity user guide here for detailed information.
To start using Singularity commands on Levante, you need to first load the singularity module:
module load singularity/3.8.5-gcc-11.2.0
To check for different versions use the following command:
module avail singularity
and to verify the loaded version:
singularity --version
and to get help:
singularity --help
File system inside containers#
Inside the container, some system-defined bind paths are mounted automatically, e.g. host directories like $HOME (the user's home directory) or /tmp are already available. In case you want to mount other directories from the host in the container, use the option
--bind/-B src[:dest[:opts]]
src and dest are the paths outside and inside the container, respectively
opts are mount options (ro: read-only, rw: read-write)
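For example, to bind a host directory read-only to a different path inside the container (the paths below are placeholders; adjust them to your project):
$ singularity shell --bind /work/<project>:/work:ro CONTAINER.sif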
It is also possible to bind directories using environment variables:
$ export SINGULARITY_BINDPATH="/scratch,/work"
$ singularity shell CONTAINER.sif
More details can be found here.
GPU/CUDA support#
The CUDA version inside the container must be supported by the compute node. You can always use
module avail
to check which CUDA versions are available where you run the Singularity container. The same applies to other software, e.g. MPI.
To run Singularity with CUDA/GPU support, you only need to add the --nv flag to the run/exec commands, e.g.:
singularity run --nv ...
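For instance, a quick way to verify that the GPU is visible from inside the container is to run nvidia-smi through a CUDA-enabled image on a GPU node (the partition, account, and image names are placeholders):
$ srun -p <GPU_PARTITION> -A <Account> --gpus=1 singularity exec --nv container.sif nvidia-smi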
More details can be found here.
MPI support#
Requirements:
The version of MPI must be the same on the host and in the container.
The InfiniBand libraries must be mounted into the container.
The general command pattern is:
srun --mpi=pmi2 <options> singularity exec <container_image>.sif <command>
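A minimal sketch of a batch script for an MPI job in a container, assuming a suitable host MPI module and image (the partition, account, module, image, and program names are placeholders):
#!/bin/bash
#SBATCH -J mpi_singularity
#SBATCH -o mpi_singularity.out
#SBATCH -A <Account>
#SBATCH -p <PARTITION>
#SBATCH -N 2
#SBATCH --ntasks-per-node=8
#SBATCH -t 00:30:00

module load singularity/3.8.5-gcc-11.2.0
# The host MPI must match the MPI version inside the container
module load <MPI_MODULE>

# Launch one containerized MPI rank per Slurm task via the PMI-2 interface
srun --mpi=pmi2 singularity exec <container_image>.sif <mpi_program>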
Singularity with batch support#
It is possible, and even recommended, to run Singularity containers as an interactive job or as a Slurm batch job. For an interactive job:
<userid>@levante1% srun --pty -A <Account> -p <PARTITION> singularity shell CONTAINER.sif
You can also run the container as a batch job, e.g.:
#!/bin/bash
#SBATCH -J levante_singularity
#SBATCH -o levante_singularity.out
#SBATCH -A <Account>
#SBATCH -p shared
#SBATCH -t 01:00:00

module purge
module load singularity
module load cuda

# Singularity command line options; replace bash with the command
# you want to run inside the container
singularity exec container.sif bash
If you name it levante_script.sbatch, you can submit it with:
sbatch levante_script.sbatch
The output log will be saved in levante_singularity.out.
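You can monitor the job with the usual Slurm commands and inspect the log once it has finished, e.g.:
$ squeue -u $USER
$ cat levante_singularity.out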
Singularity and Docker#
It is even possible to pull and run Docker images:
singularity pull docker://centos:8
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
Getting image source signatures
Copying blob ab5ef0e58194 done
Copying config 0a7908e1b9 done
Writing manifest to image destination
...
INFO: Creating SIF file...
INFO: Build complete: centos_8.sif
To access and run a shell within the container:
$ singularity shell centos_8.sif
Singularity>
Now you are inside the container. You can check the kernel and OS in the container (note that containers share the host's kernel, so uname -r reports the host kernel, while the user space comes from the container):
Singularity> uname -r
4.18.0-513.24.1.el8_9.x86_64
Singularity> cat /etc/os-release
NAME="CentOS Linux"
VERSION="8 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PRETTY_NAME="CentOS Linux 8 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="8"
Build your own image#
Currently, it is not possible to build container images directly on Levante, since building requires root permissions, which are not granted to users. However, you can build the image on your own machine and then copy it (e.g. with scp) to Levante and use it.
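A minimal sketch of this workflow, assuming a definition file my_container.def on your machine and the usual Levante login address (file names and target paths are placeholders):
# On your own machine (root permissions are required for the build)
sudo singularity build my_container.sif my_container.def
# Copy the image to Levante
scp my_container.sif <userid>@levante.dkrz.de:/work/<project>/<userid>/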
CACHE and TMP directories#
To avoid filling up your $HOME directory with Singularity cache and temporary files, you can override the default locations by exporting the following variables:
SINGULARITY_TMPDIR: Used with the build command to set a temporary location for the build.
SINGULARITY_CACHEDIR: Specifies the directory in which downloaded images are cached.
before running any singularity command, for example:
# On Levante, scratch paths have the form /scratch/<first letter of userid>/<userid>
mkdir -p /scratch/${USER:0:1}/$USER/.singularity/{cache,tmp}
export SINGULARITY_TMPDIR=/scratch/${USER:0:1}/$USER/.singularity/tmp
export SINGULARITY_CACHEDIR=/scratch/${USER:0:1}/$USER/.singularity/cache
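Any subsequent pull or build will then use these locations; you can verify this, for example, by pulling a small image and listing the cache directory:
$ singularity pull docker://alpine:latest
$ ls $SINGULARITY_CACHEDIR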
Tips & Tricks#
Mounting software tree#
To use software/modules from Levante in your container, some specific directories need to be mounted/bound first:
/sw/spack-levante
/sw/spack-workplace
/usr/share/Modules
To bind the above paths when you shell into the container:
$ singularity shell --bind /sw/spack-workplace/,/sw/spack-levante/,/usr/share/Modules CONTAINER.sif
Or you can add the path(s) to the SINGULARITY_BINDPATH environment variable as described in File system inside containers.
Once these directories are mounted in the container, you can set up spack:
> . /sw/spack-workplace/spack-0.20.1/share/spack/setup-env.sh
To check whether spack and module are available:
$ module av
$ spack list
Contact#
Report any issue to support@dkrz.de.