Kernels are programming-language-specific processes that run independently and interact with the Jupyter applications and their user interfaces.

Here you will learn how to use the default kernels and how to enable your own customized environments based on, e.g., conda or virtualenv. Please contact us for any issue or request.

System-wide kernels



On Levante, we provide the following kernels:

  • Python 3 (based on the module python3/2022.01-gcc-11.2.0)

    • Widely used open-source packages are already installed

    • More packages may be added when the corresponding modules become available

  • R 4.1.2 (based on the module r/4.1.2)

  • Julia (based on the module julia/1.7.0); see this blog post for more details

  • ESMValTool (based on the latest module esmvaltool)

Most of these kernels/modules are based on conda environments; after activating one, you can run conda list to check all installed packages and their versions.


You cannot install or update packages in the system-wide modules. We do not recommend using the --user flag to install new packages in your $HOME directory; it will not always work.


For testing new libraries/packages, we suggest that you use your own conda or virtualenv environment as described below and turn it into a kernel.

Wrapper packages

Some Python libraries/kernels are just wrappers around software binaries, e.g. the Python wrapper for CDO. In this case, the binary needs to be loaded before using the wrapper; otherwise you will get a Module 'xyz' not found error. In the Python 3 kernel, some well-known binaries are already loaded: CDO, SLK, git.

For any other wrapper, you can load modules or set environment variables in a file named .kernel_env, which you need to create in your home directory. This file is sourced every time you start the default Python 3 kernel.

For instance, pynco requires the netCDF Operators (NCO) module. You can load it by adding this line to the .kernel_env file:

module load nco
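A minimal .kernel_env could look like this (the extra module and variable shown here are only examples; use whatever your wrappers require):

```shell
# ~/.kernel_env -- sourced every time the default Python 3 kernel starts
module load nco           # needed by pynco, for example
export MY_VARIABLE=value  # example environment variable for the kernel
```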

Use your own kernel

To get full control of the Python interpreter and packages, we recommend that you create your own environment. Please follow these steps:

  • With conda

% module load python3
% conda create -n env-name ipykernel python=3.x
% source activate env-name
% python -m ipykernel install --user --name my-kernel --display-name="My Kernel"
% conda deactivate
  • With virtualenv

If virtualenv is not available, install it before trying the following steps. The best way to install virtualenv is with pip:

% module load python3
% python -m pip install --user virtualenv
% python -m virtualenv --system-site-packages /path/to/new-kernel
% source /path/to/new-kernel/bin/activate
% pip install ipykernel
% python -m ipykernel install --user --name my-kernel --display-name="New Kernel"

You can now add/install additional packages that you need in your new environment and then:

(new-kernel) % deactivate


  • (Re)start the server (jupyter notebook)

  • Refresh the browser (jupyterlab)

Now, the new kernel should be available.
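To verify, you can list the registered kernel specifications from a terminal (the kernel name my-kernel is the one you chose above):

```shell
% jupyter kernelspec list      # the new kernel should appear in this list
% cat ~/.local/share/jupyter/kernels/my-kernel/kernel.json
```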

Kernel specifications are in ~/.local/share/jupyter/kernels/.

More details on kernels can be found here.


You can even go further with the configuration of your new kernel by updating the kernel.json. The content looks like this:

 "argv": [
 "display_name": "new-kernel",
 "language": "python"

It is possible to specify additional environment variables:

 "argv": [
 "display_name": "new-kernel",
 "language": "python",
 "env": {
     "variable": "value",

Best practices

  • Where to install the new environment?

Depending on the number and size of the Python packages installed in your new environment, disk usage in your $HOME directory can easily exceed the quota. Therefore, it is preferable to create conda/virtualenv environments in /work (in the corresponding project directory). For example, to create a conda environment in /work:

% conda create --prefix /work/project_id/$USER ...
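An environment created with --prefix can be registered as a kernel in the same way as above; a sketch (project_id and the environment name are placeholders):

```shell
% module load python3
% conda create --prefix /work/project_id/$USER/my-env ipykernel
% source activate /work/project_id/$USER/my-env
% python -m ipykernel install --user --name my-env --display-name="My Kernel (/work)"
% conda deactivate
```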
  • Helper script for kernel.json

You can create a shell script and make it executable (chmod +x). Inside the script you can put all the configuration you want in your new kernel, for example loading system modules. The structure of the script can look like this:


source /etc/profile
module purge

module load netcdf_c/4.3.2-gcc48
module load python/3.5.2
python -m ipykernel_launcher -f "$1"

And the kernel.json:

 "argv": [
 "display_name": "new-kernel",
 "language": "python"
  • Uninstall/remove a kernel

    • jupyter kernelspec remove kernel-name

    • delete the corresponding conda/virtual environment if you don’t need it anymore
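For example, removing a kernel named my-kernel and the conda environment env-name it was built from (both names are placeholders) could look like this:

```shell
% jupyter kernelspec remove my-kernel
% conda env remove -n env-name
```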

Share kernels

For workshops or trainings, you can create and share kernels (based on conda environments) across users. The process is quite similar to "Use your own kernel", except that some paths need to be changed:

  1. create the conda environment in a directory readable by the users (generally in /work/project_id)

conda create --prefix /work/project_id/env_name python ipykernel
  2. create a kernel specification

python -m ipykernel install --prefix /work/project_id/ ...

Finally, users have to create a symbolic link in their $HOME directory to the kernel specification:

ln -s /work/project_id/share/jupyter/kernels/kernel_name ~/.local/share/jupyter/kernels/



Troubleshooting

If activating a conda environment fails, it is usually because conda is not (yet) in your PATH. There are two solutions for this issue:

  • use source activate instead of conda activate

  • type this before using conda:

    . `dirname $(which conda)`/../etc/profile.d/conda.sh