Posts by Sofiane Bendoukha
- 24 October 2022
This content is based on this notebook.
The jupyter-dash package makes it easy to develop Plotly Dash apps
from the Jupyter Notebook and JupyterLab.
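As a hedged illustration (not taken from the notebook itself), a minimal Dash app served from a notebook cell with jupyter-dash could look like the following; the layout and figure are purely illustrative:

```python
# Minimal sketch of a Dash app running inside a notebook via jupyter-dash.
from jupyter_dash import JupyterDash
from dash import html, dcc
import plotly.express as px

app = JupyterDash(__name__)
fig = px.line(x=[0, 1, 2, 3], y=[0, 1, 4, 9], title="Example figure")

app.layout = html.Div([
    html.H3("Dash from JupyterLab"),
    dcc.Graph(figure=fig),
])

# mode="inline" renders the app in the output cell;
# mode="jupyterlab" opens it in a separate JupyterLab tab.
app.run_server(mode="inline")
```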
- 25 October 2021
The default location for R packages is not writable, so you cannot install new packages yourself. On demand, we install new packages system-wide for all users. However, it is possible to install packages in locations other than the system library, and here are the steps:
create a directory in
- 10 June 2021
In this tutorial, I will describe i) the steps to create a kernel for Matlab and ii) how to get the
matlab_kernel working in Jupyterhub on Levante.
conda environment with python 3.9
- 06 May 2021
Requested MovieWriter (ffmpeg) not available
conda env with
- 04 May 2021
Containers are not supported on Levante at this point.
We have seen in this blog post how to encapsulate a Jupyter notebook (server) in a Singularity container. In this tutorial, I am going to describe how you can run a Jupyter kernel in a container and make it available in Jupyterhub.
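As a hedged sketch of the general idea (the image path and kernel name below are hypothetical placeholders, not the ones used in the tutorial), a containerized kernel can be exposed to Jupyter by writing a kernelspec whose launch command wraps the kernel in singularity exec:

```python
# Hedged sketch: register a kernelspec that starts the Python kernel inside a
# Singularity image. The image path and kernel name are hypothetical placeholders.
import json
import os

spec = {
    "argv": [
        "singularity", "exec", "/work/<project>/<user>/images/jupyter-kernel.sif",
        "python", "-m", "ipykernel_launcher", "-f", "{connection_file}",
    ],
    "display_name": "Python (singularity)",
    "language": "python",
}

# Write the kernelspec where Jupyter looks for user kernels.
kernel_dir = os.path.expanduser("~/.local/share/jupyter/kernels/singularity-python")
os.makedirs(kernel_dir, exist_ok=True)
with open(os.path.join(kernel_dir, "kernel.json"), "w") as f:
    json.dump(spec, f, indent=2)
```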
- 23 March 2021
We already provide a kernel for Julia based on the module julia/1.7.0.
In order to use it, you only need to install IJulia:
- 04 March 2021
Kernels are based on Python environments created with
virtualenv or another package manager. In some cases, the size of the
environment can grow tremendously depending on the installed packages.
The default location for these Python files is the
$HOME directory, so they will quickly fill your quota. To avoid this, we
suggest that you create/store Python environments in other directories of the
filesystem on Levante.
The following are two alternative locations where you can create your Python environment:
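As a hedged sketch (the /work path below is only a placeholder, not one of the official locations listed in the post), creating an environment outside $HOME and registering it as a kernel could look like this:

```python
# Hedged sketch: build a virtualenv outside $HOME and register it as a Jupyter
# kernel. The path is a hypothetical placeholder; use your project directory instead.
import subprocess
import venv

env_path = "/work/<project>/<user>/kernels/myenv"
venv.create(env_path, with_pip=True)

# Install ipykernel into the new environment and expose it to Jupyterhub.
subprocess.run([f"{env_path}/bin/pip", "install", "ipykernel"], check=True)
subprocess.run([f"{env_path}/bin/python", "-m", "ipykernel", "install", "--user",
                "--name", "myenv", "--display-name", "Python (myenv)"], check=True)
```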
- 16 November 2020
According to the official website,
Dask jobqueue can be used to
deploy Dask on job queuing systems like PBS, Slurm, MOAB, SGE,
LSF, and HTCondor. Since the queuing system on Levante is Slurm, we are
going to show how to start a Dask cluster there. The idea is simple, as
described here. The difference is that the workers can be distributed
across multiple nodes of the same partition. Using Dask jobqueue, you can launch
Dask clusters/workers as Slurm jobs. In this case, Jupyterhub acts as an interface,
and Dask can use more resources than those allocated to your Jupyterhub session.
Load the required clients
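A hedged sketch of what this can look like (the partition name, resources, and job counts are hypothetical; Levante may also require a project account option, which is omitted here):

```python
# Hedged sketch: launch Dask workers as Slurm jobs with dask_jobqueue.
from dask_jobqueue import SLURMCluster
from dask.distributed import Client

cluster = SLURMCluster(
    queue="compute",      # hypothetical Slurm partition
    cores=8,              # cores per worker job
    memory="32GiB",       # memory per worker job
    walltime="00:30:00",
)
cluster.scale(jobs=2)     # submit two Slurm jobs, each starting Dask workers

client = Client(cluster)  # connect the notebook session to the cluster
```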
- 05 November 2020
Can’t use NCL (Python) as kernel in Jupyter
This tutorial won’t work
- 07 October 2020
you are using singularity containers
you need jupyter notebooks
- 07 October 2020
- 01 August 2022
See Wrapper packages here.
- 02 October 2020
Recently, we deployed a new version of Singularity: 3.6.1. The old version is no longer available due to the many bugs reported by users.
Errors like these are now fixed:
- 01 October 2020
VS Code is your favorite IDE
interested in using the remote extension
- 25 September 2020
Each Jupyter notebook is running as a SLURM job on Levante. By default,
stderr of the SLURM batch job that is spawned by
Jupyterhub is written to your
HOME directory on the HPC system. In
order to make it simple to locate the log file:
if you use the
preset options form: the log file is named
- 18 September 2020
There are multiple ways to create a Dask cluster; the following is only an example. Please consult the official documentation. The Dask library is installed and can be found in any of the python3 kernels in Jupyterhub. Of course, you can use your own Python environment.
The simplest way to create a Dask cluster is to use the distributed module:
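A minimal sketch, assuming the distributed module is available in the kernel (the worker and thread counts are just illustrative):

```python
# Start a small Dask cluster on the node running the notebook.
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=4, threads_per_worker=2)  # illustrative sizes
client = Client(cluster)
client  # shows a cluster summary and dashboard link in the notebook
```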
- 03 September 2020
It is our great pleasure to introduce the DKRZ Tech Talks. In this series of virtual talks, we will present DKRZ services and provide a forum for questions and answers. They will cover technical aspects of using our compute systems, as well as procedures such as compute time applications, and introduce teams relevant to DKRZ, such as our machine learning specialists. The talks will be recorded and uploaded afterwards for further reference.
Go here for more information.