Retrievals from tape#
file version: 25 Jun 2025
current software versions: slk version 3.3.91; slk_helpers version 1.16.4; slk wrappers 2.4.0
Warning
Please avoid using slk retrieve and please do not use slk recall anymore. Instead, please use slk_helpers recall + slk_helpers retrieve or our new “watcher” scripts.
Introduction and Summary#
The retrieval of files from the DKRZ tape archive is split into two steps: first, the files are copied from tape into a cache and, then, transferred from the cache to the user. In StrongLink, the first process is denoted as recall and the second as retrieval. The recalls are managed by a job scheduler within StrongLink, similar to SLURM jobs on Levante. Thus, you submit a recall job and come back later to collect the files. The retrievals, instead, require slk / slk_helpers to actively transfer the data via the Levante node on which you run the retrieval command. While a file is being transferred, it has a temporary filename [filename][transfer-id]slkretrieve. After successful retrieval, it is renamed.
StrongLink natively provides the command slk retrieve for retrieving data from the tape archive. This command automatically submits a recall job and retrieves the requested files to you as soon as they are in the cache. This works well if you require only a couple of files from the tape archive and no error arises in the process. Due to shortcomings of the StrongLink software, the whole StrongLink system might slow down considerably for all users if one user submits multiple retrieval requests targeting multiple tapes. Additionally, the error messages of slk retrieve are commonly not meaningful.
We strongly recommend using the tools provided by DKRZ for your data retrieval. They clearly split recall (transfer: tape to cache) and retrieval (transfer: cache to you) and restructure your retrieval requests into subrequests which can be handled more efficiently by StrongLink. The legacy slk retrieve command is kept because we do not want to break established workflows. Moreover, it is safe to use for the retrieval of a low number of files.
We provide different tools for retrieving up to four files and more than four files. These limits do not apply when all files are cached, but only if they need to be recalled from tape. If you need 40 files, please do not manually split your request into ten requests of four files each or into 40 requests of one file each. This might cause the whole StrongLink system to become slow for all users. The scripts we provide for retrieving more than four files split your request into subrequests and submit them time-delayed to allow StrongLink to process them efficiently.
The file transfer from cache to you/Lustre requires up to 6 GB of memory. When our tools submit SLURM jobs automatically for you, they allocate sufficient memory. When you run slk_helpers retrieve without SLURM job submission, or slk retrieve, please do not do this on the Levante login nodes. Instead, please do it via a batch job on the shared partition or via an interactive batch session on the interactive partition (Run slk in the “interactive” partition). Please always allocate 6 GB of memory (--mem=6GB). If your slk is killed with a message like /sw/[...]/bin/slk: line 16: [...] Killed, then the allocated amount of memory was too low.
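One way to run such a retrieval interactively might look like the following sketch; the project account, file paths and time limit are placeholders and need to be adapted to your setup:

# hypothetical example: interactive session with enough memory for the file transfer
salloc --partition=interactive --account=ab1234 --mem=6GB --time=02:00:00
module load slk
slk_helpers retrieve /arch/ab1234/file01.nc -d /work/ab1234/data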
The number of tape drives and the processing capacity of StrongLink are limited. Therefore, recall jobs might be queued until resources are available, which causes waiting time for you. You can check the status of your recall jobs with slk_helpers job_status <job id> and print the queue length with slk_helpers job_queue. For details, please refer to our section Waiting and processing time of retrievals.
Resume interrupted retrievals#
While a file is being transferred, it has a temporary filename [filename][transfer-id]slkretrieve. After successful retrieval, it is renamed. If a retrieval process is canceled/killed, this temporary file might remain in the destination location. When you restart a retrieval, the transfer is not resumed from the temporary file; instead, a new temporary file with a new id is created. You need to clean up these file artefacts manually.
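A minimal sketch for finding such leftovers, assuming the naming scheme described above and a hypothetical destination path:

# list possible temporary-file leftovers from interrupted retrievals
find /work/ab1234/data -name '*slkretrieve*'
# remove them only after you have double-checked the list, e.g.:
# find /work/ab1234/data -name '*slkretrieve*' -delete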
The retrieval tools which we recommend skip files automatically when they exist in the destination location and have the same size and timestamp as the source file. Our tool slk_helpers retrieve offers different options to alter this default behaviour. slk retrieve will automatically overwrite existing files if -s is not set.
Recommended retrieval workflow#
At the beginning of 2025, the slk_helpers got several new commands and existing commands were extended. In addition, new scripts were introduced to automate the recall/retrieval of files. These new/updated commands and scripts are easier to use than the scripts from 2023 and 2024 and the original slk commands. If you use scripts/workflows from before 2025 for your retrievals, please adapt them to our current recommendations. If you need any help in this process, please contact us (support@dkrz.de).
We provide different tools for retrieving up to four files and more than four files. This is because StrongLink does not efficiently organize multiple recall jobs targeting multiple tapes. If you need 40 files, please do not split your request into ten requests of four files each or into 40 requests of one file each. This might cause the whole StrongLink system to become slow for all users. The scripts we provide for retrieving more than four files split your request into subrequests and submit them time-delayed to allow StrongLink to process them efficiently.
retrieve up to four files#
We assume that two files file01.nc and file02.nc should be recalled and retrieved. If your files are already cached, you can skip the recall and directly start with slk_helpers retrieve. You can retrieve more than four files with this procedure if they are stored on four tapes or less and/or are cached.
module load slk
file1=/arch/ab1234/file01.nc
file2=/arch/cd5678/file02.nc
destination=/work/ab1234/data
slurm_job_account=ab1234
## start recall job: copy from tape to HSM-cache
slk_helpers recall ${file1} ${file2} -d ${destination}
# job ID is returned
# you can check the status of the job by
slk_helpers job_status <job_id>
# when the job failed, please run the same recall command again.
## start retrieval job: copy from HSM-cache to Lustre / 'destination' as soon as files are back
## this command can be run immediately after the previous recall command has been started
slk_helpers retrieve ${file1} ${file2} -d ${destination} --slurm ${slurm_job_account}
# slurm job is submitted; details are printed on how to stop the job and where a log is located
# '--run-as-slurm-job-with-account' and '--slurm' are equal
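Since the retrieval is submitted as a regular SLURM job, you can monitor or cancel it with the standard SLURM commands; the exact job id and log location are printed by slk_helpers retrieve itself:

squeue -u $USER          # check whether the retrieval job is pending or running
scancel <slurm job id>   # stop the retrieval job if needed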
The slk_helpers recall command can submit a recall job to get files from up to four tapes at once. It fails with an error when more than four tapes are targeted. However, you may use it to recall ten files which are stored on one tape.
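For example, if ten files residing on a single tape are listed in a text file, you could pipe that list into slk_helpers recall; this reuses the pattern shown in the section Search and retrieve files further below, and the destination path is hypothetical:

# file_list.txt contains one full archive path per line
cat file_list.txt | slk_helpers recall -d /work/ab1234/data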
retrieve more than four files#
Note
If all files you wish to retrieve are already located in the HSM cache, then you may ignore the limit of four files and directly use slk_helpers retrieve.
We provide a new set of scripts which recall and retrieve files. They are included in all slk modules on Levante which contain a slk_wrappers version greater than or equal to 2.0.0 (e.g. slk/3.3.91_h1.13.3_w2.0.1). The new retrieval setup allows you to retrieve:
all files from a namespace/folder recursively
all files found by a search
all files matching a regular expression
a file list
Three new slk_helpers commands were introduced to mask and simplify the scripts’ usage for you:
slk_helpers init_watchers: generates various files required by the new scripts (wrapper to slk_helpers gfbt; details)
slk_helpers start_watchers: submits the recall and retrieve SLURM job scripts
slk_helpers stop_watchers: cancels the recall and retrieval SLURM jobs
Theory#
case: general workflow
Depending on how you wish to specify your source files (file list, search, …), only the slk_helpers init_watchers command changes. The different variants are given further below.
The next commands will create many new text files. Therefore, we create a new folder and change into it:
mkdir -p <tmp folder>
cd <tmp folder>
module load slk
slk_helpers init_watchers <provide source file(s)> <additional parameters> -d <local destinationPath> -ns
slk_helpers start_watchers <DKRZ project slurmJobAccount>
Check the retrieve.log and recall.log files to receive status information. If something seems to hang, please check the tapes_error.txt and files_error.txt files and report issues to support@dkrz.de.
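From within the temporary working folder, you can follow the progress and inspect the error files with standard tools, for example:

tail -f recall.log retrieve.log       # follow the watcher progress
cat tapes_error.txt files_error.txt   # inspect reported tape/file errors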
If you wish to cancel the recall and/or retrieval watchers, please use slk_helpers stop_watchers [--recall-watcher-only|--retrieve-watcher-only] while you are in the tmp folder created above.
When the recall and retrieve watchers stop, die or are aborted, you can resume the whole process by running slk_helpers start_watchers again. You can also start only the recall or the retrieval by setting the appropriate flags as documented. The init_watchers / gfbt command should not be run again under normal conditions. If you decide to run the gfbt command again, please clean up the working folder first, or create a new folder and run the whole command-script chain there.
case: retrieve content of namespace recursively
slk_helpers init_watchers <path to namespace> -R -d <local destinationPath> -ns
...
example: Retrieve files stored in specific path
case: retrieve search results
We recommend proceeding as described in Search and retrieve files instead of using the search id directly.
slk_helpers init_watchers --search-id <search id> -d <local destinationPath> -ns
...
example: Retrieve all files found by a search
case: retrieve files matching a Regular Expression
Only regular expressions in the filename are evaluated, not regular expressions in the folder names of the path.
slk_helpers init_watchers <path + filename including a reg. exp.> --regex -d <local destinationPath> -ns
...
example: Retrieve all files matching a Regular Expression
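As an illustration of the regular-expression case above, a call could look like the following; the path and pattern are hypothetical:

slk_helpers init_watchers '/arch/ab1234/data/file_00[0-9].nc' --regex -d /work/ab1234/data -ns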
case: retrieve list of files
Each file has to be specified by its full path.
slk_helpers init_watchers <list of files to be retrieved> -d <local destinationPath> -ns
...
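For the file-list case, a concrete call might look like the following; it reuses the pattern shown in Search and retrieve files further below and assumes that file_list.txt contains one full path per line:

slk_helpers init_watchers `cat file_list.txt | tr '\n' ' '` -d /work/ab1234/data -ns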
Example#
We want to get CCLM forcing made from ERA5 data for the years 1973, 1974 and 1975. We are in project ab1234 and want to retrieve the data to /work/ab1234/forcing. Please create a new directory and change into it; the next commands will create many new text files there.
Before anything else, load the appropriate slk module:
module load slk
First, we run the init_watchers command for the years 1974 and 1975 only, because we forgot that we also need 1973:
$ slk_helpers init_watchers -R /arch/pd1309/forcings/reanalyses/ERA5/year1974 /arch/pd1309/forcings/reanalyses/ERA5/year1975 -d /work/ab1234/forcing -ns
# command line output is given further below for the interested reader
The output shows a nice summary of how many files need to be recalled from which tapes. We realized that 1973 is missing and simply let gfbt append the information to the generated files (--append-output):
$ slk_helpers init_watchers -R /arch/pd1309/forcings/reanalyses/ERA5/year1973 -d /work/ab1234/forcing -ns --append-output
# command line output is given further below for the interested reader
Please notify us when you see tapes in ERRORSTATE.
Now, there should be multiple new files in the current directory. Please remain in this directory and proceed.
Next, we submit the watcher scripts:
$ slk_helpers start_watchers ab1234
successfully submitted recall watcher job with SLURM job id '1234567'
successfully submitted retrieve watcher job with SLURM job id '1234568'
Check the retrieve.log and recall.log files. Check the tapes_error.txt and files_error.txt files and report issues to support@dkrz.de.
That's it!
Command line output of the first init_watchers command:
progress: generating file grouping based on search id 826348 in preparation
progress: generating file grouping based on search id 826348 (for up to 190 files) started
collection storage information for search id 826348 started
Number of pages with up to 1000 resources per page to iterate: 1
collection storage information for search id 826348 finished
creating and returning object to host resource storage information
progress: generating file grouping based on search id 826348 (for up to 190 files) finished
progress: getting tape infos for 51 tapes started
progress: getting tape infos for 51 tapes finished
progress: extracting tape stati for 51 tapes started
progress: extracting tape stati for 51 tapes finished
------------------------------------------------------------------------------
progress: updating tape infos for 51 tapes started
progress: updating tape infos for 51 tapes finished
progress: extracting tape stati for 51 tapes started
progress: extracting tape stati for 51 tapes finished
------------------------------------------------------------------------------
cached (AVAILABLE ): 23
M24350M8 (BLOCKED ): 2
M24365M8 (AVAILABLE ): 3
M24366M8 (AVAILABLE ): 2
M21306M8 (AVAILABLE ): 2
M21307M8 (AVAILABLE ): 1
M21314M8 (ERRORSTATE ): 4
M21315M8 (AVAILABLE ): 1
M24390M8 (AVAILABLE ): 1
M24391M8 (AVAILABLE ): 1
M24280M8 (AVAILABLE ): 3
M21336M8 (AVAILABLE ): 2
M21341M8 (AVAILABLE ): 2
M21344M8 (AVAILABLE ): 1
M21345M8 (AVAILABLE ): 5
M21342M8 (BLOCKED ): 8
M22372M8 (BLOCKED ): 1
M21348M8 (BLOCKED ): 3
M21349M8 (BLOCKED ): 3
M21346M8 (AVAILABLE ): 3
M21347M8 (AVAILABLE ): 2
M21350M8 (AVAILABLE ): 1
M24294M8 (AVAILABLE ): 1
M24295M8 (AVAILABLE ): 1
M24173M8 (AVAILABLE ): 3
M22509M8 (AVAILABLE ): 1
M21360M8 (AVAILABLE ): 1
M21358M8 (AVAILABLE ): 1
M21362M8 (AVAILABLE ): 5
M21363M8 (AVAILABLE ): 3
M32623M8 (AVAILABLE ): 7
M21369M8 (AVAILABLE ): 1
M32621M8 (AVAILABLE ): 7
M32626M8 (AVAILABLE ): 11
M32627M8 (AVAILABLE ): 10
M22395M8 (AVAILABLE ): 4
M32630M8 (ERRORSTATE ): 3
M24320M8 (AVAILABLE ): 1
M24321M8 (AVAILABLE ): 3
M32631M8 (AVAILABLE ): 7
M22655M8 (AVAILABLE ): 3
M24324M8 (AVAILABLE ): 1
M32635M8 (AVAILABLE ): 10
M24325M8 (AVAILABLE ): 1
M32632M8 (AVAILABLE ): 8
M22659M8 (AVAILABLE ): 3
M32638M8 (AVAILABLE ): 8
M21385M8 (AVAILABLE ): 1
M32636M8 (AVAILABLE ): 4
M24202M8 (AVAILABLE ): 1
M32640M8 (ERRORSTATE ): 4
M32377M8 (AVAILABLE ): 2
------------------------------------------------------------------------------
Command line output of the second init_watchers command:
progress: generating file grouping based on search id 826349 in preparation
progress: generating file grouping based on search id 826349 (for up to 95 files) started
collection storage information for search id 826349 started
Number of pages with up to 1000 resources per page to iterate: 1
collection storage information for search id 826349 finished
creating and returning object to host resource storage information
progress: generating file grouping based on search id 826349 (for up to 95 files) finished
progress: getting tape infos for 43 tapes started
progress: getting tape infos for 43 tapes finished
progress: extracting tape stati for 43 tapes started
progress: extracting tape stati for 43 tapes finished
------------------------------------------------------------------------------
progress: updating tape infos for 43 tapes started
progress: updating tape infos for 43 tapes finished
progress: extracting tape stati for 43 tapes started
progress: extracting tape stati for 43 tapes finished
------------------------------------------------------------------------------
M24277M8 (AVAILABLE ): 2
M24339M8 (AVAILABLE ): 1
M24280M8 (AVAILABLE ): 3
M21336M8 (AVAILABLE ): 1
M24278M8 (AVAILABLE ): 5
M22422M8 (AVAILABLE ): 1
M24279M8 (AVAILABLE ): 1
M21340M8 (AVAILABLE ): 1
M24221M8 (AVAILABLE ): 1
M21345M8 (AVAILABLE ): 2
M24350M8 (BLOCKED ): 1
M21342M8 (BLOCKED ): 2
M24351M8 (AVAILABLE ): 1
M24223M8 (AVAILABLE ): 1
M22372M8 (BLOCKED ): 5
M21349M8 (AVAILABLE ): 4
M21346M8 (AVAILABLE ): 2
M21347M8 (AVAILABLE ): 2
M21350M8 (AVAILABLE ): 1
M24294M8 (AVAILABLE ): 1
M32016M8 (AVAILABLE ): 1
M24366M8 (AVAILABLE ): 1
M21363M8 (AVAILABLE ): 3
M32623M8 (AVAILABLE ): 5
M21305M8 (AVAILABLE ): 1
M32621M8 (AVAILABLE ): 3
M32626M8 (AVAILABLE ): 5
M32627M8 (AVAILABLE ): 5
M24379M8 (AVAILABLE ): 1
M22395M8 (AVAILABLE ): 2
M32630M8 (ERRORSTATE ): 1
M32631M8 (AVAILABLE ): 5
M22655M8 (AVAILABLE ): 3
M32635M8 (AVAILABLE ): 6
M21314M8 (ERRORSTATE ): 2
M32632M8 (AVAILABLE ): 1
M32638M8 (AVAILABLE ): 4
M24390M8 (AVAILABLE ): 1
M32636M8 (AVAILABLE ): 2
M24391M8 (AVAILABLE ): 1
M21322M8 (AVAILABLE ): 1
M32640M8 (ERRORSTATE ): 2
M32119M8 (AVAILABLE ): 1
------------------------------------------------------------------------------
Is a file stored in the HSM cache and/or on tape?#
The output of slk list indicates whether a file is stored in the HSM cache or not. If the 11th character of the permissions/mode string is a t, then the file is stored exclusively on tape. If it is a -, then the file is stored in the cache. In the latter case, the user does not know whether the file is additionally stored on tape or not, for example if the file was archived shortly before slk list was run and has not yet been transferred to tape. There are a few edge cases in which the storage information provided by slk list is not accurate. If you need 100% correct information on the caching status of files, please use slk_helpers iscached as described further below.
Example using slk list:
$ slk list /arch/ex/am/ple
-rw-r--r--- k204221 bm0146 11 B 02 Mar 2021 file_1.txt
-rw-r--r--t k204221 bm0146 16 B 02 Mar 2021 file_2.txt
-rw-r--r--t k204221 bm0146 15 B 02 Mar 2021 file_3.txt
Example explained: The file file_1.txt is stored in the cache and can be quickly retrieved. The files file_2.txt and file_3.txt are only stored on tape and their retrieval will take more time.
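If you want to list only the tape-only files of a namespace, you can filter the slk list output with standard tools; a minimal sketch, assuming the column layout shown above:

# print the names of all files whose mode string has a 't' as 11th character (tape only)
slk list /arch/ex/am/ple | awk 'substr($1, 11, 1) == "t" { print $9 }'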
The slk_helpers feature a command iscached, which prints information on the storage location of a file. Please note that the exit code of this command is 1 if the tested file is not cached (see How do I capture exit codes?). Example:
$ slk_helpers iscached /arch/ex/am/ple/file_2.txt
File is not cached
$ echo $?
1
$ slk_helpers iscached /arch/ex/am/ple/file_1.txt
File is cached
$ echo $?
0
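Because of this exit-code behaviour, iscached can be used directly in shell conditions, for example to decide whether a recall is needed before retrieving; a sketch with a hypothetical file path:

file=/arch/ex/am/ple/file_2.txt
if slk_helpers iscached "${file}"; then
    echo "${file} is cached -- retrieve directly"
else
    echo "${file} is on tape only -- submit a recall first"
fi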
Additionally, the command slk_helpers iscached accepts multiple file paths or multiple resource ids (with --resource-ids) as input. If no input is provided, it reads from stdin. This way, you can also pipe file lists into the command.
$ cat file_list.txt
/arch/ex/am/ple/file_1.txt
/arch/ex/am/ple/file_2.txt
/arch/ex/am/ple/file_3.txt
/arch/ex/am/ple/file_4.txt
$ cat file_list.txt | slk_helpers iscached -v
/arch/ex/am/ple/file_2.txt is not cached
Number of files stored in the cache: 3/4
$ echo $?
1
Waiting and processing time of retrievals#
Background#
The number of tape drives limits the number of tapes from which data can be read in parallel. All newly archived data are written onto tapes of the newest available type. All data that have been archived or retrieved since the StrongLink system went online in Oct/Nov 2021 are stored on this tape type. Currently, approximately 20 tape drives for this type of tape and approximately 50 drives for older tape types are available. When a lot of data is archived and has to be written to tape, more than half of these 20 tape drives may be allocated for writing. A new tape library with additional tape drives has been ordered and is planned to be commissioned in the first half of 2023. Until then, there is a bottleneck for retrieving data which has been archived or accessed in the past year, particularly when a lot of data is archived in parallel. There is no bottleneck with respect to tape drives when data which have not been touched since Nov 2021 are to be retrieved.
StrongLink considers each process which accesses a tape as a job. Each job has a unique ID. A job which reads from a tape is denoted as a recall job. If a new job comes in and a tape drive is free, StrongLink will start processing this job. New jobs will be queued if all tape drives are allocated to other jobs. This queue is independent of the SLURM queue on Levante. There is no prioritization of jobs. However, jobs are not always processed by the first-in-first-out principle. For example: a recall job A was submitted first, followed by recall job B, followed by recall job C. Jobs A and C need to recall a file from the same tape. When StrongLink reads this particular tape for job A, it will get the data for job C as well.
Each recall job can use a limited number of tape drives to read data from tape. Currently (Jan 2023), this value is set to 2. This might change without notification depending on the system load and will not be instantly updated here. Each tape drive of the newest available generation can reach a transfer rate of up to 300 MB/s. Thus, roughly 176 GB of data can be read per drive in 10 minutes when the conditions are optimal. When the data are not stored at the beginning of the tape but somewhere in the middle, the tape drive needs to spool the tape to the appropriate position, which takes time. Additionally, tapes have to be taken by a robot arm from the library slot to the tape drive in advance, which might take up to one minute.
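As a rough back-of-the-envelope estimate, assuming the two drives per job and 300 MB/s per drive mentioned above and ignoring spooling and mounting overhead:

data_gb=500          # hypothetical amount of data to be recalled
drives=2
rate_mb_per_s=300
echo "~$(( data_gb * 1000 / (drives * rate_mb_per_s) / 60 )) minutes of pure read time"
# -> ~13 minutes for 500 GB under optimal conditions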
Check job and queue status#
Check the StrongLink-internal queue via slk_helpers job_queue:
$ slk_helpers job_queue
total read jobs: 110
active read jobs: 12
queued read jobs: 98
If you run slk retrieve (and at least one file needs to be read from tape) or slk recall, the command will print the id of the corresponding recall job to the slk log (~/.slk/slk-cli.log; 84835 in the example below):
2023-01-12 09:45:10 xU22 2036 INFO Executing command: "recall 275189"
2023-01-12 09:45:11 xU22 2036 INFO Created copy job with id: '84835' for - "recall 275189"
The status of a job is printed via slk_helpers job_status JOB_ID:
$ slk_helpers job_status 84835
SUCCESSFUL
$ slk_helpers job_status 84980
QUEUED (12)
$ slk_helpers job_status 84981
QUEUED (13)
$ slk_helpers job_status 84966
PROCESSING
The status can be QUEUED ([PLACE_IN_THE_QUEUE]), PROCESSING, SUCCESSFUL, FAILED or ABORTED.
As for SLURM jobs, we cannot provide the average processing or waiting time of a retrieval/recall job. However, based on the information provided in the Background section above, you can estimate how long the pure retrieval might take.
Search and retrieve files#
We recommend that you no longer use the old workflow combining search ids and slk retrieve. Using too many search ids or running multiple complex search queries might slow down the StrongLink system. If your search has already finished, please write the results into a file and work with this file.
# write search results into a file
$ slk list <search id> | awk '{ print $9 }' | sed '/^$/d' > file_list.txt
# use file list with slk_helpers recall and retrieve
$ cat file_list.txt | slk_helpers recall
...
$ cat file_list.txt | slk_helpers retrieve -d . --slurm ab1234
...
# OR use file list with recall/retrieve watchers
$ slk_helpers init_watchers `cat file_list.txt | tr '\n' ' '` -d . -ns
...
Please read our current recommendations for retrievals at the top of this page before you proceed.
Group Files By Tape#
Warning
In the past, we recommended manually splitting large retrieval requests into multiple requests, each targeting one tape. Unless you are an expert user who sets up a very specific retrieval workflow, please use our recall/retrieval watchers, which automatically do what is described below. However, this command might still be useful if you only want to check the distribution of files on tapes.
The command group_files_by_tape receives one or more files and checks onto which tapes the files are written and whether they are currently in the HSM cache. Depending on the provided parameters, it just prints the number of tapes, a list of files per tape or even runs a search query per tape. Since group_files_by_tape is relatively cumbersome to type, the short form gfbt is available. gfbt accepts files, file lists, search ids and much more (see below).
If you want to count the number of tapes on which your files are stored, run gfbt with --count-tapes. Files in the cache are ignored.
$ slk_helpers gfbt /arch/bm0146/k204221/iow -R --count-tapes
10 tapes with single-tape files
0 tapes with multi-tape files
Some files are split into two parts which are stored on multiple tapes. gfbt treats these files differently from files which are stored as one part on one tape.
If you want to get an overview of the number of files stored per tape and/or of the tape stati, run gfbt with --details --count-files:
$ slk_helpers gfbt /arch/bm0146/k204221/iow/ -R --details --count-files
cached (AVAILABLE ): 1
C25543L6 (AVAILABLE ): 1
C25566L6 (AVAILABLE ): 2
M12208M8 (AVAILABLE ): 3
M20471M8 (AVAILABLE ): 1
M12211M8 (AVAILABLE ): 4
C25570L6 (AVAILABLE ): 1
M12215M8 (AVAILABLE ): 5
C25539L6 (ERRORSTATE ): 2
B09208L5 (BLOCKED ): 1
M12217M8 (AVAILABLE ): 2
The alphanumeric string in the first column is the tape barcode, which you can ignore in most cases. cached (first row) contains all files which are currently in the cache. The string in brackets is the tape status (see also Tape Stati):
AVAILABLE => tape available for retrieval/recall
BLOCKED => tape is blocked by a write job; please try later
ERRORSTATE => tape is in a bad state; please contact support@dkrz.de and the tape will be reset
gfbt accepts different types of input, as follows:
# one file as input
$ slk_helpers gfbt /arch/bm0146/k204221/iow/iow_data2_001.tar
# file list as input
$ slk_helpers gfbt /arch/bm0146/k204221/iow/iow_data2_001.tar /arch/bm0146/k204221/iow/iow_data2_002.tar
# directory/namespace as input (position of `-R` not relevant); also multiple namespaces possible
$ slk_helpers gfbt -R /arch/bm0146/k204221/iow
$ slk_helpers gfbt /arch/bm0146/k204221/iow -R
$ slk_helpers gfbt -R /arch/bm0146/k204221/iow /arch/bm0146/k204221/iow2
# path with a regular expression in the filename
$ slk_helpers gfbt /arch/bm0146/k204221/iow/iow_data2_00[0-9].tar
# search id (only one search id; not more than one)
$ slk_helpers gfbt --search-id 123456
# search query (only one search query; not more than one)
$ slk_helpers gfbt --search-query '{"path": {"$gte": "/arch/bm0146/k204221/iow"}}'
Retrieval wrapper for SLURM#
Please use the new recall/retrieve “watchers” instead.
Retrieval script templates#
We provided several script templates at this location until 2025. Since slk_helpers retrieve exists and allows generating and submitting SLURM job scripts automatically (--slurm <account>), and since we offer the recall/retrieve “watcher” scripts for larger requests, there is no longer any need for the original script templates. If your retrieval workflows are still based on the old templates, please consider switching to our new tools.