Posts by Sofiane Bendoukha

How to re-enable the deprecated python kernels?

As you probably know, we will rename or remove some unused/outdated Python modules; please see the details here. Since the Jupyterhub kernels are based on modules, the deprecated kernels will no longer be available as default kernels in Jupyter notebooks/labs.

No panic: if you have been working with those deprecated kernels and want to continue using them in your notebooks, please follow the steps below.
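In practice, re-enabling a kernel boils down to making its old kernelspec visible to Jupyter again. The sketch below is an illustration, not the exact procedure from the post: it copies an existing kernelspec directory into a user kernels directory (by default Jupyter looks in ~/.local/share/jupyter/kernels); the function name and layout are assumptions.

```python
import shutil
from pathlib import Path

def reenable_kernel(old_spec_dir: str, kernels_dir: str, name: str) -> Path:
    """Copy an existing kernelspec directory (containing kernel.json) into
    a Jupyter user kernels directory so Jupyterhub lists it again."""
    target = Path(kernels_dir) / name
    shutil.copytree(old_spec_dir, target)
    return target
```

You can find the system-wide kernelspec directories with `jupyter kernelspec list` and pass one of them as `old_spec_dir`.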

Read more ...


How to install R packages in different locations?

The default location for R packages is not writable, so you cannot install new packages there. On demand, we install new packages system-wide for all users. However, it is possible to install packages in a different location, and here are the steps:

create a directory in $HOME e.g. ~/R/libs
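The directory step above can be automated. R picks up a personal library from the R_LIBS_USER variable, which can be set in ~/.Renviron; the following Python sketch creates the directory and writes that file (the helper name and the ~/R/libs path are just the example from above, adapt as needed):

```python
from pathlib import Path

def setup_r_user_lib(home: str, lib_subdir: str = "R/libs") -> Path:
    """Create a personal R library directory and point R at it by setting
    R_LIBS_USER in ~/.Renviron, so install.packages() installs there."""
    home_path = Path(home)
    lib_dir = home_path / lib_subdir
    lib_dir.mkdir(parents=True, exist_ok=True)
    renviron = home_path / ".Renviron"
    line = f'R_LIBS_USER="{lib_dir}"\n'
    # append the setting only once
    if not renviron.exists() or line not in renviron.read_text():
        with renviron.open("a") as f:
            f.write(line)
    return lib_dir
```

After that, a new R session (and the R kernel) will install packages into ~/R/libs instead of the read-only system location.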

Read more ...


How to install jupyter kernel for Matlab

In this tutorial, I will describe the steps to get the matlab_kernel working in Jupyterhub on Levante.

conda environment with python 3.9


Read more ...


Requested MovieWriter (ffmpeg) not available

Do you want to create videos/animations with ffmpeg from your Jupyter notebook? You need ffmpeg-python (conda), which requires the ffmpeg software on Mistral (module).

conda env with ffmpeg-python and ipykernel
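Matplotlib's MovieWriter error usually just means the ffmpeg binary is not on PATH inside the kernel. A small stdlib sketch to check this from the notebook before rendering (the helper and the hint about loading a module are illustrative):

```python
import shutil

def ensure_ffmpeg() -> str:
    """Return the path of the ffmpeg binary found on PATH, or raise a
    RuntimeError hinting that the ffmpeg module must be loaded first."""
    path = shutil.which("ffmpeg")
    if path is None:
        raise RuntimeError(
            "ffmpeg not found on PATH -- load the ffmpeg module before "
            "starting the kernel, or add it to the kernel's env"
        )
    return path
```

If this raises inside a notebook, the kernel was started without the module environment; fixing PATH (or the kernelspec's env) makes `matplotlib.animation.FFMpegWriter` available again.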

Read more ...


How to containerize your jupyter kernel?

We have seen in this blog post how to encapsulate a Jupyter notebook (server) in a Singularity container. In this tutorial, I am going to describe how you can run a Jupyter kernel in a container and make it available in Jupyter notebook/lab.

A possible use case is installing a supported PyTorch version and working with Jupyter notebooks (see GLIBC and the container-based workaround).
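The core trick is a kernelspec whose argv launches the kernel through `singularity exec` instead of a host Python. A stdlib sketch that writes such a kernel.json (directory name, display name and image path are placeholders, not the post's exact values):

```python
import json
from pathlib import Path

def write_container_kernelspec(kernels_dir: str, image: str) -> Path:
    """Write a kernel.json whose argv starts the IPython kernel inside a
    Singularity image instead of on the host."""
    spec = {
        "argv": [
            "singularity", "exec", image,
            "python", "-m", "ipykernel_launcher", "-f", "{connection_file}",
        ],
        "display_name": "Python (container)",
        "language": "python",
    }
    target = Path(kernels_dir) / "container-python"
    target.mkdir(parents=True, exist_ok=True)
    (target / "kernel.json").write_text(json.dumps(spec, indent=2))
    return target
```

The `{connection_file}` placeholder is filled in by Jupyter at launch time; the image must have ipykernel installed.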

Read more ...


Create a kernel from your own Julia installation

We already provide a kernel for Julia based on the module julia/1.7.0.

In order to use it, you only need to install IJulia:

Read more ...


Connect Spyder IDE to a remote kernel on Mistral

I am just describing spontaneously what worked for me to connect my local Spyder instance to a remote node on Mistral, i.e. a node that you can reach via SSH from your local machine.

This is just a draft tutorial that will be updated/optimized afterwards.


Read more ...


Python environment locations

Kernels are based on Python environments created with conda, virtualenv, or another package manager. Depending on the installed packages, the size of an environment can grow tremendously. The default location for Python files is the $HOME directory, which can quickly fill your quota. To avoid this, we suggest that you create/store Python environments in other directories of the filesystem on Mistral.

The following are two alternative locations where you can create your Python environment:
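Whichever location you choose, conda can be pointed at it once via ~/.condarc instead of passing --prefix every time. A sketch, with placeholder paths that you would replace by your actual project directory:

```yaml
# ~/.condarc -- keep environments and the package cache out of $HOME
envs_dirs:
  - /work/<project>/<user>/conda/envs
pkgs_dirs:
  - /work/<project>/<user>/conda/pkgs
```

With this in place, `conda create -n myenv ...` lands in the listed directory automatically.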

Read more ...


How to quickly create a test kernel

This is a follow-up on Kernels. In some cases, the process of publishing new Python modules can take long. In the meantime, you can create a test kernel to use in Jupyterhub. Creating new conda environments and using them as kernels has already been described here. In this example, we are not going to create a new conda env, but only the kernel configuration files.

In this tutorial, I will take the module python3/2021-01 as an example.
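The configuration file in question is a single kernel.json placed under ~/.local/share/jupyter/kernels/<name>/. A minimal sketch of its content, where `<path-to-module-python>` is a placeholder for the interpreter of the loaded module (e.g. the output of `which python` after loading python3/2021-01):

```json
{
  "argv": ["<path-to-module-python>", "-m", "ipykernel_launcher",
           "-f", "{connection_file}"],
  "display_name": "Python 3 test kernel",
  "language": "python"
}
```

Once the file exists, the kernel shows up in Jupyterhub under the given display name without creating any new environment.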

Read more ...


CF Python package added to the software tree

According to this link:

The Python cf package is an Earth Science data analysis library that is built on a complete implementation of the CF data model. The cf package implements the CF data model for its internal data structures and so is able to process any CF-compliant dataset. It is not strict about CF-compliance, however, so that partially conformant datasets may be ingested from existing datasets and written to new datasets. This is so that datasets that are partially conformant may nonetheless be modified in memory.

Read more ...


SLURM update / Memory use

The Slurm config on Mistral has been updated to fix an issue related to memory use.

Prior to the update, some Slurm jobs continued consuming the available memory (and even swap) of the allocated node, exceeding the memory allocation set in sbatch or srun. When this occurred, it also affected other jobs/users.

error message

Read more ...


Dask jobqueue on Mistral

According to the official website, Dask jobqueue can be used to deploy Dask on job queuing systems like PBS, Slurm, MOAB, SGE, LSF, and HTCondor. Since the queuing system on Mistral is Slurm, we are going to show how to start a Dask cluster there. The idea is simple, as described here. The difference is that the workers can be distributed across multiple nodes of the same partition. Dask jobqueue runs the Dask cluster as Slurm jobs.

In this case, Jupyterhub often plays the role of an interface, and Dask can use more resources than those allocated to your Jupyterhub session (profiles).

Dask jobqueue

Read more ...


Jupyter notebook/lab extensions

Extensions bring additional interesting features to Jupyter*. Depending on the workflow in the notebook, users can install/enable extensions when required. Although it is easy to add extensions to both Jupyter notebook and lab, the process can sometimes be annoying depending on where Jupyter is served from.

In general, installing and enabling extensions on your laptop or via the start-jupyter script is straightforward, especially when the developers describe their extensions well. There should be no restrictions or permission issues; just follow the instructions.

Extensions configurator

Read more ...


Enable NCL Kernel in Jupyterhub

You can't use NCL (Python) as a kernel in Jupyter.

This tutorial won’t work

Read more ...


Single jupyter notebooks in containers

you are using Singularity containers

you need Jupyter notebooks

Read more ...


Spawner options now savable

We introduced a new feature to the preset and advanced options forms. It is especially handy for the advanced options form, which contains many fields. You can also reset the options to their initial values by clicking reset. The form options are saved in the client's browser every 10 seconds and are not lost if:

the browser crashes


Read more ...


New Singularity module deployed

Recently, we deployed a new version of Singularity: 3.6.1. The old version is no longer available due to the many bugs reported by some users.

Errors like these are now fixed:

Read more ...


VS Code Remote on Mistral

VS Code is your favorite IDE

you are interested in using the Remote extension


Read more ...


Jupyterhub log file

Each Jupyter notebook runs as a Slurm job on Mistral. By default, stdout and stderr of the Slurm batch job spawned by Jupyterhub are written to your HOME directory on the HPC system. To make the log file simple to locate:

if you use the preset options form: the log file is named jupyterhub_slurmspawner_preset_<id>.log.

Read more ...


GLIBC and the container-based workaround

Have you ever tried to install or use software on Mistral and seen a message like this?

This is, for example, one of the reasons why PyTorch is not available in our python3 module. Such software packages require a newer version of glibc. Unfortunately, most Mistral nodes run a CentOS 6 kernel. To check the version of glibc:
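From Python itself, the standard library can report the glibc version the interpreter is linked against, which is usually what matters for wheels like PyTorch:

```python
import platform

# platform.libc_ver() inspects the running interpreter binary and reports
# the C library it was linked against.
libc, version = platform.libc_ver()
print(libc, version)  # e.g. "glibc 2.12" on a CentOS 6 node
```

Alternatively, `ldd --version` on the command line reports the system-wide glibc.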

Read more ...


Simple Dask clusters in Jupyterhub

There are multiple ways to create a Dask cluster; the following is only an example, so please consult the official documentation. The Dask library is installed and can be found in any of the python3 kernels in Jupyterhub. Of course, you can also use your own Python environment.

The simplest way to create a Dask cluster is to use the distributed module:
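A minimal sketch (worker counts are arbitrary examples, not a recommendation for Mistral):

```python
from dask.distributed import Client, LocalCluster

# A small cluster confined to the node the notebook runs on.
cluster = LocalCluster(n_workers=2, threads_per_worker=1)
client = Client(cluster)

# Submit a trivial task to verify the cluster works.
total = client.submit(sum, range(10)).result()
print(total)

client.close()
cluster.close()
```

`Client(cluster)` also prints a dashboard link in the notebook, which is where the Dask labextension mentioned below hooks in.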

Dask Labextension

Read more ...


DKRZ Tech Talks

It is our great pleasure to introduce the DKRZ Tech Talks. In this series of virtual talks, we will present DKRZ services and provide a forum for questions and answers. The talks will cover technical aspects of using our compute systems as well as procedures such as compute time applications, and introduce teams relevant to DKRZ such as our machine learning specialists. The talks will be recorded and uploaded afterwards for further reference.

Go here for more information.

Read more ...