Kernels#
Kernels are programming-language-specific processes that run independently and interact with the Jupyter Applications and their user interfaces [1].
Here you will learn how to use the default kernels and how to enable your own customized environments based on, e.g., conda or virtualenv. Please contact support@dkrz.de
for any issue or request.
System-wide kernels#
Note
Most of the kernels/modules are based on conda environments; after activation you can use conda list
to check all installed packages and their versions.
On Levante, we provide the following kernels:
Python 3 based on the module python3/unstable (see details here)
Python 3 based on the module python3/2023.01-gcc-11.2.0
Widely used open-source packages are already installed
More may be added when corresponding modules become available
R 4.1.2 (based on the module r/4.1.2)
Julia (based on the module julia/1.7.0) -> see this blog post for more details
ESMValTool (based on the latest module esmvaltool)
Bash (execute bash commands in Jupyter cells)
ML (based on the latest module pytorch)
More information on Python modules can be found here.
Note
You cannot install or update packages in the system-wide modules. We do not recommend using the --user
flag to install new packages in your $HOME
directory, as it will not always work.
Note
For testing new libraries/packages, we suggest that you create your own conda or virtualenv environment as described below and turn it into a kernel.
Wrapper packages#
Some Python libraries/kernels are just wrappers around software binaries, e.g. (py)CDO. In this case, the binary needs to be loaded before using the wrapper; otherwise you will get a Module ‘xyz’ not found error. In the Python 3 kernel, some well-known binaries are already loaded: CDO, SLK, and git.
For any other wrapper, you can load modules or set environment variables in a file named .kernel_env (which you need to create in your home directory). This file is sourced every time you start the default Python 3 kernel.
For instance, pynco requires the netCDF Operators (NCO) module. You can load it by adding this line to the .kernel_env file:
module load nco
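If several settings are needed, the file can simply contain multiple lines. A minimal sketch of such a .kernel_env (the exported variable is only a placeholder):
# ~/.kernel_env: sourced when the default Python 3 kernel starts
module load nco
export MY_VARIABLE=value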
Use your own kernel#
To get full control of the Python interpreter and packages, we recommend that you create your own environment. Please follow these steps:
With conda
% module load python3
% conda create -n env-name -c conda-forge ipykernel python=3.x
% source activate env-name
% python -m ipykernel install --user --name my-kernel --display-name="My Kernel"
% conda deactivate
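For example, you can reactivate the environment at any time to install additional packages (xarray and netcdf4 below are only example packages):
% source activate env-name
% conda install -c conda-forge xarray netcdf4
% conda deactivate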
With virtualenv
If virtualenv is not available, you have to install it before trying the following steps. The best way to install virtualenv is with pip:
% module load python3
% python -m pip install --user virtualenv
% python -m virtualenv --system-site-packages /path/to/new-kernel
% source /path/to/new-kernel/bin/activate
% pip install ipykernel
% python -m ipykernel install --user --name my-kernel --display-name="New Kernel"
You can now install any additional packages you need in your new environment and then:
(new-kernel) % deactivate
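For example, while the environment is still active, extra packages can be installed with pip (xarray is only a placeholder):
(new-kernel) % pip install xarray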
Finally:
(Re)start the server (jupyter notebook)
Refresh the browser (jupyterlab)
Now, the new kernel should be available.
Kernel specifications are in ~/.local/share/jupyter/kernels/.
More details on kernels can be found here.
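To check which kernels are currently registered, you can list the installed kernel specifications:
% jupyter kernelspec list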
Advanced#
You can even go further with the configuration of your new kernel by updating the kernel.json. The content looks like this:
{
"argv": [
"/home/user/kernels/new-kernel/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "new-kernel",
"language": "python"
}
It is possible to specify additional environment variables:
{
"argv": [
"/home/user/kernels/new-kernel/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "new-kernel",
"language": "python",
"env": {
"variable": "value",
}
}
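For instance, the env block can be used to make a project path visible to the kernel (the variable name and path below are only placeholders):
"env": {
"PROJECT_DATA": "/work/project_id/data"
}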
Best practices#
Where to install the new environment?
Depending on the number and size of the Python packages installed in your new environment, disk usage in your $HOME
directory can easily exceed the limit.
Therefore, it is preferable to create conda/virtualenv environments in /work
(in the corresponding project).
For example, to create a conda environment in /work:
% conda create --prefix /work/project_id/$USER -c conda-forge [PACKAGES] ...
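A prefix-based environment is activated via its path rather than a name. A minimal sketch of registering it as a kernel, assuming it was created under /work/project_id/$USER:
% source activate /work/project_id/$USER
% python -m ipykernel install --user --name my-kernel --display-name="My Kernel"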
Helper script for kernel.json
You can create a start-kernel.sh shell script and make it executable (chmod +x start-kernel.sh). Inside the script you can put all the configuration you want for your new kernel, for example loading system modules. The structure of the script can look like this:
#!/bin/bash
# Load the login environment so the module command is available
source /etc/profile
# Start from a clean module environment and load what the kernel needs
module purge
module load netcdf_c/4.3.2-gcc48
module load python/3.5.2
# Launch the kernel; Jupyter passes the connection file as the first argument
python -m ipykernel_launcher -f "$1"
And the kernel.json:
{
"argv": [
"start-kernel.sh",
"{connection_file}"
],
"display_name": "new-kernel",
"language": "python"
}
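Note that Jupyter must be able to find the first entry of argv when it launches the kernel, so it is safest to reference start-kernel.sh by its absolute path (or place it somewhere on your PATH).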
Uninstall/remove a kernel
jupyter kernelspec remove kernel-name
delete the corresponding conda/virtual environment if you don’t need it anymore
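For example, a sketch of removing the environment itself (env-name for a conda environment, /path/to/new-kernel for a virtualenv; both are the placeholders used above):
% conda env remove -n env-name
% rm -rf /path/to/new-kernel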
Troubleshooting#
CommandNotFoundError#
This happens when you try to activate a conda environment but conda is not (yet) in the path. There are two solutions for this issue:
use source activate instead of conda activate
type this before using conda:
. `dirname $(which conda)`/../etc/profile.d/conda.sh