slk usage examples#

file version: 05 Feb 2025

current software versions: slk version 3.3.91; slk_helpers version 1.13.2; slk wrappers 2.0.1

Note

StrongLink uses the term “namespace” or “global namespace” (gns). A “namespace” is comparable to a “directory” or “path” on a common file system.

Obtain Access Token#

You have to manually log in to the StrongLink instance via slk login every 30 days:

$ slk login
Username: XYZ
Password:
Login Successful

A login token is created after successful login.

Note

slk stores the login token in the home directory of each user (~/.slk/config.json). By default, this file can only be accessed by the respective user (permissions: -rw-------/600). However, users should be careful when doing things like chmod 755 * in their home directory. If you suspect that your slk login token has been compromised, please contact support@dkrz.de .
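If the permissions of the token file have been loosened accidentally, they can be restored with chmod. A minimal sketch, assuming the default token location mentioned above (the mkdir/touch lines only ensure the demo runs even if no token exists yet):

```shell
# Restore the default permissions (600) on the slk token file.
token_file="$HOME/.slk/config.json"
mkdir -p "$(dirname "$token_file")"   # no-op if ~/.slk already exists
touch "$token_file"                   # no-op if the token file already exists
chmod 600 "$token_file"
stat -c '%a' "$token_file"            # prints the octal permissions
```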

Check if access token is still valid#

$ slk_helpers session
Your session token is valid until Jun 19, 2021, 09:02:27 AM

The date and time until which your login token is valid will be printed.
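If you want to use the expiry date in a script, you can cut it out of the session output. A sketch using the sample line from above instead of a live slk_helpers session call:

```shell
# Extract the expiry timestamp from the "slk_helpers session" output.
# In practice: slk_helpers session | sed 's/^Your session token is valid until //'
session_output='Your session token is valid until Jun 19, 2021, 09:02:27 AM'
echo "$session_output" | sed 's/^Your session token is valid until //'
```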

Archival#

Archive one file#

We have a file my_file.nc located in /work/bm0146/k204221/important_files and want to archive it onto tape to /ex/am/ple/bm0146/k204221/file_collection

$ slk archive /work/bm0146/k204221/important_files/my_file.nc /ex/am/ple/bm0146/k204221/file_collection

or

$ cd /work/bm0146/k204221/important_files
$ slk archive my_file.nc /ex/am/ple/bm0146/k204221/file_collection

Archive directory recursively#

We wish to archive the whole directory /work/bm0146/k204221/important_files onto tape to /ex/am/ple/bm0146/k204221

$ slk archive -R /work/bm0146/k204221/important_files /ex/am/ple/bm0146/k204221

Archive all files stored in a subset of available directories#

A model run was performed over 100 years from 1900 to 1999. The results of each model year are stored in a dedicated directory, so there are folders 1900 through 1999. We want to archive the 1980s only.

$ slk archive /work/bm0146/k204221/model_xyz/output/198? /ex/am/ple/model_xyz/output

Create directory#

slk has no mkdir command yet. However, slk archive automatically creates the target namespace (= directory) if it does not exist. If you wish to create a namespace or namespace hierarchy in advance (e.g. for a project), you have two options:

  • solution A: use slk_helpers mkdir or slk_helpers mkdir -R

  • solution B: create the folder structure and dummy files locally and archive them with slk archive

Solution A: use slk_helpers mkdir or mkdir -R#

$ slk_helpers mkdir /ex/am/ple/namespaceExists/newNamespace

or

$ slk_helpers mkdir -R /ex/am/ple/hierarchy/of/new/namespaces

Solution B: create dummy files and archive them with slk archive#

Example: we already have the folder /ex/am/ple/bm0146/k204221 and want to create the folders /ex/am/ple/bm0146/k204221/abc/d01/efg, /ex/am/ple/bm0146/k204221/abc/d02/efg and /ex/am/ple/bm0146/k204221/abc/d03/efg.

$ mkdir -p abc/d01/efg abc/d02/efg abc/d03/efg
$ echo "blub" > abc/d01/efg/dummy.txt
$ echo "blub" > abc/d02/efg/dummy.txt
$ echo "blub" > abc/d03/efg/dummy.txt
$ slk archive -R abc /ex/am/ple/bm0146/k204221
$ rm -rf abc

Note

Archiving empty directories is currently not supported, although it was originally planned as a feature.

Check checksum of archived file#

StrongLink calculates two checksums of each archived file and stores them in the metadata. It compares the stored checksums with the file’s actual checksums at certain stages of the archival and retrieval process. Commonly, users do not need to check the checksums manually, but you can do so if you wish. If a file has no checksum, it has not been fully archived yet (e.g. the copying is still in progress).

# archive the file
$ slk archive test.nc /ex/am/ple/bm0146/k204221/file_collection
[========================================/] 100% complete. Files archived: 1/1, [5B/5B].

# get the checksum from StrongLink
$ slk_helpers checksum -t sha512 /ex/am/ple/bm0146/k204221/file_collection/test.nc
c7bb8f1a8c4fbf5ff1d8990e0b0859bde7a320f337ca65ea1e79a36423b6d9909da793b26c1c69a711d27867b4f0eae1a4ef0db8483e29f9cda3719208618ffc

# calculate the checksum of your local file
$ sha512sum test.nc
c7bb8f1a8c4fbf5ff1d8990e0b0859bde7a320f337ca65ea1e79a36423b6d9909da793b26c1c69a711d27867b4f0eae1a4ef0db8483e29f9cda3719208618ffc  test.nc
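The comparison can also be scripted. The sketch below creates a throwaway demo file and uses its own checksum as a stand-in for the value returned by StrongLink; in practice, remote_sum would come from slk_helpers checksum -t sha512 and the file would be your local copy:

```shell
# Compare a stored checksum against a freshly computed one.
tmpfile=$(mktemp)
echo "blub" > "$tmpfile"
local_sum=$(sha512sum "$tmpfile" | awk '{print $1}')
remote_sum="$local_sum"   # stand-in for the slk_helpers checksum output
if [ "$remote_sum" = "$local_sum" ]; then
    echo "checksums match"
else
    echo "CHECKSUM MISMATCH"
fi
rm "$tmpfile"
```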

Failed archival: check if files are partially archived#

We run slk archive and it fails for some reason

$ slk archive *.nc /arch/ab1234/c567890/test -vv
file_001gb_a.nc SUCCESSFUL
# killed ...
/sw/spack-levante/slk-3.3.91-wuylnb/bin/slk: line 16: 104426 Killed                  LC_ALL=en_US.utf8 LANG=en_US.utf8 ${SLK_JAVA} -Xmx4g -jar $JAR_PATH "$@"

Now we need to check whether there are partially/incompletely archived files in /arch/ab1234/c567890/test. This is done by starting a verify job and collecting its results. For details please check the section Verify file size on page Archivals to tape.

# submit a verify job for the destination folder
$ slk_helpers submit_verify_job /arch/ab1234/c567890/test -R
Submitting up to 1 verify job(s) based on results of search id 576002:
search results: pages 1 to 1 of 1; visible search results: 10; submitted verify job: 176395
Number of submitted verify jobs: 1

# ... after some time ...
# check if the job finished => status "COMPLETED"
$ slk_helpers job_status 176395
COMPLETED

# collect the results
$ slk_helpers result_verify_job 176395
Errors:
Resource content size does not match record: /arch/ab1234/c567890/test/file_001gb_b.nc
Resource content size does not match record: /arch/ab1234/c567890/test/file_001gb_f.nc
Erroneous files: 2

The two files file_001gb_b.nc and file_001gb_f.nc are partial files. They should be re-archived (automatically overwritten) or deleted.

Please check the Archivals page for details on this.

Failed archival: check if files are flagged as partial#

Warning

Files which are flagged as partial are not necessarily partial/incomplete files. But, partially/incompletely archived files are commonly flagged as partial.

We run slk archive and it fails for some reason

$ slk archive *.nc /dkrz_test/netcdf/20230914c -vv
file_001gb_a.nc SUCCESSFUL
# killed ...
/sw/spack-levante/slk-3.3.91-wuylnb/bin/slk: line 16: 104426 Killed                  LC_ALL=en_US.utf8 LANG=en_US.utf8 ${SLK_JAVA} -Xmx4g -jar $JAR_PATH "$@"

Now we look into the target folder and see this

$ slk list /dkrz_test/netcdf/20230914c
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_a.nc
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_b.nc (Partial File)
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_c.nc (Partial File)
-rw-r--r--- k204221     bm0146        144.5M   16 Jul 2021 08:36 file_001gb_d.nc (Partial File)

We run slk archive again (with or without -vv) and get

$ slk archive *.nc /dkrz_test/netcdf/20230914c -vv
file_001gb_a.nc SKIPPED
file_001gb_b.nc SKIPPED
file_001gb_c.nc SKIPPED
file_001gb_d.nc SUCCESSFUL
Non-recursive Archive completed

Now, all files should be archived properly. We take a look into the target location:

$ slk list /dkrz_test/netcdf/20230914c
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_a.nc
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_b.nc (Partial File)
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_c.nc (Partial File)
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_d.nc

Two files are still flagged as partial although they were skipped by slk archive (i.e. the files have been archived completely). To be sure, we run slk archive again with -vv and get

$ slk archive *.nc /dkrz_test/netcdf/20230914c -vv
file_001gb_a.nc SKIPPED
file_001gb_b.nc SKIPPED
file_001gb_c.nc SKIPPED
file_001gb_d.nc SKIPPED
Non-recursive Archive completed

Be careful: when we apply a slk command like slk group on one of the flagged files, slk list does not print the flag anymore:

$ slk group ka1209 /dkrz_test/netcdf/20230914c/file_001gb_b.nc
[========================================|] 100% complete. Files changed: 1/1, [1.1G/1.1G].
$ slk list /dkrz_test/netcdf/20230914c
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_a.nc
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_b.nc
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_c.nc (Partial File)
-rw-r--r--- k204221     bm0146          1.1G   16 Jul 2021 08:36 file_001gb_d.nc

However, the file file_001gb_b.nc is still flagged, as slk_helpers has_no_flag_partial indicates:

$ slk_helpers has_no_flag_partial -R -v /dkrz_test/netcdf/20230914c
/dkrz_test/netcdf/20230914c/file_001gb_b.nc has partial flag
/dkrz_test/netcdf/20230914c/file_001gb_c.nc has partial flag
Number of files without partial flag: 2/4

Note

Please notify the DKRZ support (support@dkrz.de) when you own files which are flagged as partial. Please check in advance via slk archive -vv whether the files have actually been archived completely.

Search files#

The query language is a dialect of JSON. Examples and a reference table are given in the section Search files by metadata and on the page Reference: StrongLink query language, respectively. There is also a description in the StrongLink Command Line Interface Guide from page 6 onwards. In the beginning, it might take some time to formulate correct search queries. Therefore, we will provide tools to generate search queries for common use cases.

The output of a search request is NOT the listing of datasets matching the search request, but a SEARCH_ID. This SEARCH_ID can then be used by further slk commands (see below). SEARCH_IDs are assigned globally (e.g. SEARCH_ID 423 exists only once), and every user has access to every SEARCH_ID. Thus, a user can share their SEARCH_ID with colleagues. However, the output of slk list SEARCH_ID or a retrieval via slk retrieve SEARCH_ID ... depends on the read permissions of the executing user.

Warning

You can use ' or " to enclose the search query string when you call slk search. We strongly recommend using '. If you use ", please remember to escape all " and $ within your search query string with \. Section search files larger than 1 MB contains an example of both ways.

Note

If slk list seems to hang on a certain SEARCH_ID then there might be too many search results to collect. Alternatively to slk list, you can run slk_helpers list_search on the same SEARCH_ID which will continuously print collected search results.

search files by owner / user#

We would like to search for all files belonging to user k204221 (id: 25301).

Solution A: using easy-search options#

$ slk search -user k204221

Solution B: using RQL search query#

A description on how to write the search queries for slk search is provided at the page StrongLink query language and on pages 6 to 8 of the StrongLink Command Line Interface Guide.

$ slk search '{"resources.posix_uid":25301}'

Hint: your ID or that of another user on the DKRZ system can be obtained by using the id command in the Linux shell

$ id $DKRZ_USER

search files larger than 1 MB#

We would like to search for all files which are larger than 1 MB.

$ slk search '{"resources.size":{"$gt": 1048576}}'
#  OR
$ slk search "{\"resources.size\":{\"\$gt\": 1048576}}"
#  DO NOT FORGET TO ESCAPE THE $ AS WELL
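To convince yourself that both quoting styles yield exactly the same query string, you can compare them directly in the shell:

```shell
# The single-quoted and the escaped double-quoted variant are identical strings.
q_single='{"resources.size":{"$gt": 1048576}}'
q_double="{\"resources.size\":{\"\$gt\": 1048576}}"
[ "$q_single" = "$q_double" ] && echo "identical"
```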

search files based on optional metadata#

We would like to search for “Max” as value in the metadata field “Producer” of the schema “image”.

$ slk search '{"image.Producer":"Max"}'

search a file by name#

We would like to find the file search_me.jpg.

Solution A: using easy-search options#

$ slk search -name search_me.jpg

Solution B: using RQL search query#

$ slk search "{\"resources.name\": \"search_me.jpg\"}"
Search continuing. .....
Search ID: 23

search files by name using regular expressions#

We would like to find all files of the format file_[0-9].nc (like file_1.nc, file_2.nc, …):

$ slk search "{\"resources.name\": {\"\$regex\": \"file_[0-9].nc\"}}"
Search continuing. .....
Search ID: 380

$ slk list 380
-rw-r--r--t   k204221   bm0146  11 B    02 Mar 2021     file_2.nc
-rw-r--r--t   k204221   bm0146  16 B    02 Mar 2021     file_1.nc
-rw-r--r--t   k204221   bm0146  11 B    02 Mar 2021     file_1.nc
Files 1-3 of 3

Warning

The namespace / path must not contain regular expressions.

See also

There are two similar regular expression examples in Generate search queries for filenames.

search files by one of two owners – logical OR#

We would like to search for all files belonging to user k204216 (id: 24855) or k204221 (id: 25301).

$ slk search '{"$or": [{"resources.posix_uid":24855},{"resources.posix_uid":25301}]}'

Hint: your ID or that of another user on the DKRZ system can be obtained by using the id command in the Linux shell

$ id $DKRZ_USER

search files based on two metadata fields – logical AND#

We would like to search for the file surface_iow_day3d_temp_emep_2003.nc belonging to the user k204221

$ slk search '{"$and":[{"resources.name": "surface_iow_day3d_temp_emep_2003.nc"}, {"resources.posix_uid": 25301}]}'
Search continuing. .....
Search ID: 65

search files with specific metadata in a namespace recursively#

We wish to search recursively in /ex/am/ple/testing for files with Max Mustermann as value in the metadata field document.Author.

$ slk search '{"$and": [{"path": {"$gte": "/ex/am/ple/testing"}}, {"document.Author": "Max Mustermann"}]}'
Search continuing. .....
Search ID: 77

search all files that follow the CMIP Conventions#

We wish to search all files that have CMIP written in their global attribute Conventions:

$ slk search '{"netcdf.Conventions": {"$regex": "CMIP"}}'
Search continuing. .....
Search ID: 526

save search ID into shell variable#

slk search does not provide a feature out of the box to print only the SEARCH_ID. Currently (this might change in future versions), the SEARCH_ID is printed from column 12 onwards of the second row of the slk search text output. We can use tail and sed to get the second line and extract the number, use tail and cut to get the second line and drop the first 11 characters, or use awk pattern matching. Example:

# normal call of slk search
$ slk search '{"resources.posix_uid": 25301}'
Search continuing. .....
Search ID: 466

# get ID using sed:
$ search_id=`slk search '{"resources.posix_uid": 25301}' | tail -n 1 | sed 's/[^0-9]*//g'`
$ echo $search_id
470

# get ID by dropping first 11 characters of the second line
$ search_id=`slk search '{"resources.posix_uid": 25301}' | tail -n 1 | cut -c12-20`
$ echo $search_id
471

# use awk pattern matching to get the correct line and correct column
$ search_id=`slk search '{"resources.posix_uid": 25301}' | awk '/Search ID/ {print($3)}'`
$ echo $search_id
507

Note

This is an example for bash. When using csh, you need to prepend set to the assignments of the shell variables: set search_id=....

using shell variables in searches#

We would like to search for all files belonging to user k204221 (id: 25301).

Solution without shell variable:

$ id k204221 -u
25301
$ slk search "{\"resources.posix_uid\":25301}"
Search continuing. .....
Search ID: 474

Solution with shell variable:

$ export uid=`id k204221 -u`
$ slk search "{\"resources.posix_uid\":$uid}"
Search continuing. .....
Search ID: 475

Solution calling another shell program from within a search query:

$ slk search "{\"resources.posix_uid\":`id k204221 -u`}"
Search continuing. .....
Search ID: 475

Note

The example shell commands are meant for bash. If you are using csh or tcsh they do not work as printed here but have to be adapted.

Generate search queries#

Since version 1.2.2 the slk_helpers offer the command gen_file_query, and since version 1.9.2 the command gen_search_query. gen_file_query accepts one or more files/namespaces as input and generates a search query string. Additionally, the user can specify whether the files should be stored in the HSM cache or on a certain tape via --cached-only and --tape-barcode TAPEBARCODE, respectively. Technical background: slk_helpers gen_file_query describes how gen_file_query identifies files and namespaces and how it splits them up. gen_search_query accepts search constraints as key-value pairs in the form fieldname=value and generates a search query which connects all constraints via an and operator. In addition, the user can provide an existing search query via --search-query, which will also be connected to the newly generated query via an and operator.

slk_helpers gen_file_query and gen_search_query do not perform searches; they generate search query strings which can be used as input to slk search. Several applications of these commands are shown below.

Find a file anywhere#

$ slk_helpers gen_file_query output.nc
{"resources.name":{"$regex":"output.nc"}}

$ slk_helpers gen_search_query resources.name=output.nc
{"resources.name":"output.nc"}

Note on the output: the $regex operator is not strictly needed for this query to work ({"resources.name":"output.nc"} would be sufficient). However, gen_file_query cannot easily determine whether a regular expression was provided or not, and some program-internal workflows would have been more complicated without it.

Find all resources in a namespace (non-recursive)#

$ slk_helpers gen_file_query /arch/bm0146/k204221
{"path":{"$gte":"/arch/bm0146/k204221", "$max_depth": 1}}

$ slk_helpers gen_search_query path=/arch/bm0146/k204221
{"path":{"$gte":"/arch/bm0146/k204221", "$max_depth": 1}}

Note on the output: path works only in combination with the operator $gte (details: path metadata field). This is specific to path and does not work with any other metadata field / operator.

Find all resources in a namespace (recursively)#

$ slk_helpers gen_file_query -R /arch/bm0146/k204221
{"path":{"$gte":"/arch/bm0146/k204221"}}

$ slk_helpers gen_search_query -R path=/arch/bm0146/k204221
{"path":{"$gte":"/arch/bm0146/k204221"}}

Note on the output: path works only in combination with the operator $gte (details: path metadata field)

Find a file in a namespace recursively#

$ slk_helpers gen_file_query -R /arch/bm0146/k204221/output.nc
{"$and":[{"path":{"$gte":"/arch/bm0146/k204221"}},{"resources.name":{"$regex":"output.nc"}}]}

$ slk_helpers gen_search_query path=/arch/bm0146/k204221 resources.name=output.nc
{"$and":[{"path":{"$gte":"/arch/bm0146/k204221"}},{"resources.name":"output.nc"}]}

Find three files in two namespaces#

$ slk_helpers gen_file_query /arch/bm0146/k204221/output.nc /arch/bm0146/k204221/INDEX.txt /arch/bm0146/k204221/restart/rsrt.nc
{"$or":[{"$and":[{"path":{"$gte":"/arch/bm0146/k204221/restart", "$max_depth": 1}},{"resources.name":{"$regex":"rsrt.nc"}}]},{"$and":[{"path":{"$gte":"/arch/bm0146/k204221", "$max_depth": 1}},{"resources.name":{"$regex":"output.nc|INDEX.txt"}}]}]}

$ slk_helpers gen_file_query /arch/bm0146/k204221/output.nc /arch/bm0146/k204221/INDEX.txt /arch/bm0146/k204221/restart/rsrt.nc | jq
{
  "$or": [
    {
      "$and": [
        {
          "path": {
            "$gte": "/arch/bm0146/k204221/restart",
            "$max_depth": 1
          }
        },
        {
          "resources.name": {
            "$regex": "rsrt.nc"
          }
        }
      ]
    },
    {
      "$and": [
        {
          "path": {
            "$gte": "/arch/bm0146/k204221",
            "$max_depth": 1
          }
        },
        {
          "resources.name": {
            "$regex": "output.nc|INDEX.txt"
          }
        }
      ]
    }
  ]
}

Note on the output: The files which are located in one namespace are grouped automatically.

Find files with regular expressions 1#

Find files with the names output_00.nc to output_19.nc.

$ slk_helpers gen_file_query /arch/bm0146/k204221/output_[01][0-9].nc
{"$and":[{"path":{"$gte":"/arch/bm0146/k204221", "$max_depth": 1}},{"resources.name":{"$regex":"output_[01][0-9].nc"}}]}

Warning

The namespace / path must not contain regular expressions.

Find files with regular expressions 2#

Find files with the names output_tas_00.nc to output_tas_19.nc and output_psl_00.nc to output_psl_19.nc. When you use round brackets (parentheses) in the regular expression, you need to enclose the filename in single quotation marks.

$ slk_helpers gen_file_query '/arch/bm0146/k204221/output_(tas|psl)_[01][0-9].nc'
{"$and":[{"path":{"$gte":"/arch/bm0146/k204221","$max_depth":1}},{"resources.name":{"$regex":"output_(tas|psl)_[01][0-9].nc"}}]}

Find files which are group per year in sub-namespaces#

We have monthly output files for several years. The files are stored in one folder per year. An example is given below.


...
/arch/bm0146/k204221/output/year1999/output_1999_01.nc
                                     output_1999_02.nc
                                     output_1999_03.nc
                                     output_1999_04.nc
                                     output_1999_05.nc
                                     output_1999_06.nc
                                     output_1999_07.nc
                                     output_1999_08.nc
                                     output_1999_09.nc
                                     output_1999_10.nc
                                     output_1999_11.nc
                                     output_1999_12.nc
/arch/bm0146/k204221/output/year2000/output_2000_01.nc
                                     output_2000_02.nc
                                     output_2000_03.nc
                                     output_2000_04.nc
                                     output_2000_05.nc
                                     output_2000_06.nc
                                     output_2000_07.nc
                                     output_2000_08.nc
                                     output_2000_09.nc
                                     output_2000_10.nc
                                     output_2000_11.nc
                                     output_2000_12.nc
/arch/bm0146/k204221/output/year2001/output_2001_01.nc
                                     output_2001_02.nc
                                     output_2001_03.nc
                                     output_2001_04.nc
                                     output_2001_05.nc
                                     output_2001_06.nc
                                     output_2001_07.nc
                                     output_2001_08.nc
                                     output_2001_09.nc
                                     output_2001_10.nc
                                     output_2001_11.nc
                                     output_2001_12.nc
...

We would like to retrieve all files of the years 2000 and 2001. For this purpose, we do a recursive search and omit the namespace yearYYYY from the path.

$ slk_helpers gen_file_query -R /arch/bm0146/k204221/output/output_200[01]_[0-9][0-9].nc
{"$and":[{"path":{"$gte":"/arch/bm0146/k204221/output"}},{"resources.name":{"$regex":"output_200[01]_[0-9][0-9].nc"}}]}

$ slk search '{"$and":[{"path":{"$gte":"/arch/bm0146/k204221/output"}},{"resources.name":{"$regex":"output_200[01]_[0-9][0-9].nc"}}]}'
Search continuing. .
Search ID: 128349

$ slk_helpers list_search 128349
-rw-r--r--t     18380387228 /arch/bm0146/k204221/output/year2001/output_2001_02.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2001/output_2001_06.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_10.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2000/output_2000_04.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2001/output_2001_11.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_05.nc
-rw-r--r--t     19036829629 /arch/bm0146/k204221/output/year2000/output_2000_02.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_12.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_01.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_03.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_08.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_07.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2001/output_2001_09.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_01.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2000/output_2000_06.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_12.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_07.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_03.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_05.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_08.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2001/output_2001_04.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2000/output_2000_09.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2000/output_2000_11.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_10.nc

Note on the output: slk_helpers list_search prints the files in the order in which they were found by the search. They are not sorted alphabetically or in any other way.
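If you prefer sorted output, pipe the listing through sort. The sketch below embeds three sample lines from the output above instead of calling slk_helpers list_search live; in practice you would pipe the command output into the same sort:

```shell
# Sort list_search output by file path (third column).
list_search_output='-rw-r--r--t     18380387228 /arch/bm0146/k204221/output/year2001/output_2001_02.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_01.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2000/output_2000_04.nc'
echo "$list_search_output" | sort -k3
```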

Find files smaller than certain size#

Let’s assume that we want to search for all files smaller than 100 MB (104 857 600 byte). By default, the slk_helpers command gen_search_query interprets a key-value pair with an equals sign as an equality condition:

$ slk_helpers gen_search_query resources.size=104857600
{"resources.size":104857600}

To search for files smaller than this size, we need the operator for < instead, which is $lt. All operators are listed here. Please surround the condition with single quotation marks so that the < is not interpreted by the shell.

slk_helpers gen_search_query 'resources.size<104857600'
{"resources.size":{"$lt":104857600}}

We can also provide a lower size limit:

slk_helpers gen_search_query 'resources.size<104857600' 'resources.size>1024'
{"$and":[{"resources.size":{"$gt":1024}},{"resources.size":{"$lt":104857600}}]}

List files#

List files stored in specific namespace#

We would like to print all files stored on tape in /ex/am/ple/bm0146/k204221.

$ slk list /ex/am/ple/bm0146/k204221

Recursively list files stored in specific namespace#

We would like to print all files stored on tape in /ex/am/ple/bm0146/k204221 and in sub-namespaces.

$ slk list -R /ex/am/ple/bm0146/k204221

List search results#

Please see List all files of a specific user

List all files of a specific user#

We would like to print all files belonging to user k204221 (id: 25301).

First, get the user id (uid) of the user k204221:

$ id k204221
uid=25301(k204221) gid=1076(bm0146) groups=1076(bm0146),1544(dm),200524(ka1209),1603(bk1123)

Second, define a search query:

$ slk search '{"resources.posix_uid":25301}'
Search continuing. .....
Search Id: 9

Third, we print all found files:

$ slk list 9
...

List search results vs. list the content of a folder#

We perform a search with slk search to find the content of a namespace. slk list <search_id> prints only files (no namespaces) that the user is allowed to see/read. In contrast, slk list <namespace> lists files and sub-namespaces in a namespace. Alternatively, you might use slk_helpers list_search <search_id> which prints files and namespaces. If you wish slk_helpers list_search to print only files, please run it with -f / --only-files. Please be aware that we consider slk_helpers list_search as deprecated. The example below clarifies the situation. In the example, we assume that the sub-namespace test does not contain any files.

$ slk search '{"path": {"$gte": "/ex/am/ple/testing/testing/test03/test"}}'
Search continuing. .....
Search ID: 856

$ slk list 856 | cat
-rw-r--r--t  k204221        bm0146   16.1M  01 Apr 2021  /ex/am/ple/testing/testing/test03/test/some_file.nc
Files: 1

$ slk list /ex/am/ple/testing/testing/test03/test
drwxr-xr-xt  25301          900                  06 Apr 2021    test1
drwxr-xr-xt  25301          900                  06 Apr 2021    test2
-rw-r--r--t  k204221        bm0146   16.1M       01 Apr 2021    some_file.nc
Files: 3

$ slk_helpers list_search 856
drwxr-xr-xt         0  /ex/am/ple/testing/testing/test03/test/test2
-rw-r--r--t  16882074  /ex/am/ple/testing/testing/test03/test/some_file.nc
drwxr-xr-xt         0  /ex/am/ple/testing/testing/test03/test/test1
Resources: 3

$ slk_helpers list_search -f 856
-rw-r--r--t  16882074  /ex/am/ple/testing/testing/test03/test/some_file.nc
Resources: 1

Note

slk_helpers list_search omits ownership, group and modification date. However, it prints all sizes in byte, whereas slk list prints the file size in human-readable form.
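If you need human-readable sizes from the list_search output, numfmt (GNU coreutils) can convert the byte values. A sketch using the sample line from above:

```shell
# Convert the byte size printed by slk_helpers list_search (second column)
# into a human-readable IEC value.
line='-rw-r--r--t  16882074  /ex/am/ple/testing/testing/test03/test/some_file.nc'
size_bytes=$(echo "$line" | awk '{print $2}')
numfmt --to=iec "$size_bytes"
```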

Retrieve files#

StrongLink’s slk includes the command retrieve to transfer files from the tape archive to the user. However, the original slk retrieve does not work well in many situations. E.g. if slk retrieve fails, it is not clear whether the tape access failed (and might repeatedly fail!) or another issue occurred. Additionally, certain retrieval requests may slow down the StrongLink system considerably.

Therefore, we are working on improving the situation step by step. As of February 2025, we provide the two new commands slk_helpers recall and slk_helpers retrieve and a few new scripts for this purpose (recall / retrieve watchers). The new commands / scripts split the retrieval process into the transfer from tape to cache (recall) and the transfer from cache to the user (retrieve). This might seem more complicated for the user but actually it saves considerable debugging time when retrieval issues occur and makes the whole process more transparent.

Retrieve file stored in specific path#

We would like to retrieve a file located at /ex/am/ple/bm0146/k20422/dm/retrieve_us/test.nc to the current directory (.).

$ slk retrieve -R /ex/am/ple/bm0146/k20422/dm/retrieve_us/test.nc .

Alternatively, you can use our new tools which automatically submit a SLURM job. Thus, you do not need to keep the terminal session open.

$ module load slk/3.3.91_h1.13.3_w2.0.1
$ ## start recall job: copy from tape to HSM-cache
$ slk_helpers recall /ex/am/ple/bm0146/k20422/dm/retrieve_us/test.nc -d .
$ # job ID is returned

$ # start retrieval job: copy from HSM-cache to the local destination as soon as the files are back in the cache
$ slk_helpers retrieve /ex/am/ple/bm0146/k20422/dm/retrieve_us/test.nc -d . --run-as-slurm-job-with-account ${slurm_job_account}

$ # you can check the status of the recall job by
$ slk_helpers job_status <job_id>
$ # if the recall job failed, please run the same recall command again. The retrieve SLURM job will run repeatedly until the files are back in the cache.

When you want to retrieve multiple files, please follow the process described further below or on the retrieval page.

Retrieve files stored in specific path#

We would like to retrieve all files located in the folder /ex/am/ple/bm0146/k20422/dm/retrieve_us to /scratch/k/k204221/data.

$ # load new slk module explicitly (not the default slk module in Feb 2025)
$ module load slk/3.3.91_h1.13.3_w2.0.1
$ # We will create many text files to structure the retrieval.
$ # Therefore, please create a new folder for this purpose and change into it
$ mkdir tmp_folder_gfbt
$ cd tmp_folder_gfbt
$ # generate tape grouping
$ slk_helpers gfbt -R /ex/am/ple/bm0146/k20422/dm/retrieve_us -wf1 /scratch/k/k204221/data -v
$ # start recall and retrieve watcher scripts
$ start_recall_watcher.sh <LEVANTE COMPUTE TIME PROJECT>
$ start_retrieve_watcher.sh <LEVANTE COMPUTE TIME PROJECT>
$ # check the status in the recall.log and retrieve.log files

Details on this process are available here.

Note

You could also use slk retrieve -R ... directly. However, this might be slower and cause performance issues of StrongLink affecting all users.

Resume an interrupted retrieval#

Note

Files have a temporary name while they are being retrieved. Therefore, all files which already carry their final name have been completely retrieved. The temporary file name is of the format ~[FILENAME][RANDOM_NUMBER].slkretrieve. Thus, the file ngc2013_atm_ml_23h_inst_1_20211201T210000Z.nc might have the temporary file name ~ngc2013_atm_ml_23h_inst_1_20211201T210000Z.nc-9165206605614277442.slkretrieve.
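
Based on the naming scheme above, you can check whether any half-retrieved files remain in a destination directory by listing the temporary files. A minimal sketch; the destination path is just an example:

```shell
dest=/scratch/k/k204221/data   # adjust to your retrieval destination

# list the temporary files of incomplete retrievals below the destination
find "$dest" -name '~*.slkretrieve' 2>/dev/null

# count them; 0 means no half-retrieved files remain
find "$dest" -name '~*.slkretrieve' 2>/dev/null | wc -l
```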

Case: recall/retrieve watcher scripts were used

Change into the folder in which you originally started the recall/retrieve watcher scripts and start them again. Existing files are skipped. You do not need to re-run gfbt / group_files_by_tape. If you nevertheless wish to re-run it, please create a new temporary folder and run gfbt within it.

$ # load the new slk module explicitly (not the default slk module as of Feb 2025)
$ module load slk/3.3.91_h1.13.3_w2.0.1
$ # restart recall and retrieve watcher scripts
$ start_recall_watcher.sh <LEVANTE COMPUTE TIME PROJECT>
$ start_retrieve_watcher.sh <LEVANTE COMPUTE TIME PROJECT>
$ # check the status in the recall.log and retrieve.log files

Case: classical ``slk retrieve`` was used

We ran slk retrieve -R /ex/am/ple/bm0146/k20422/dm . in a batch job. Unfortunately, the batch job was killed due to a timeout and slk retrieve did not finish retrieving all files. We want to run it again and skip all files which have already been completely retrieved. To do this, we run slk retrieve a second time and add the parameter -s for skip:

slk retrieve -s -R /ex/am/ple/bm0146/k20422/dm .

This command will skip all existing files. If -s were not set, all files would be retrieved again and the previously retrieved files would be overwritten. If -d is set instead, all files would be retrieved again and ``DUPLICATE`` files would be created for each already existing file.

Retrieve all files matching a Regular Expression#

Assume we wish to retrieve all files matching this regular expression: /arch/bm0146/k204221/output_[01][0-9].nc. This works similarly to the example Retrieve files stored in specific path, but the slk_helpers gfbt call has to look like this:

$ # generate tape grouping
$ slk_helpers gfbt /arch/bm0146/k204221/output_[01][0-9].nc --regex -wf1 /scratch/k/k204221/data -v

Please do not forget the parameter --regex. Also consider quoting the regular expression: if files matching the pattern exist in your current working directory, your shell might expand it before slk_helpers sees it.

Retrieve a list of files#

Assume we wish to retrieve a long list of files. This works similarly to the example Retrieve files stored in specific path, but the slk_helpers gfbt call has to look like this:

$ # generate tape grouping
$ slk_helpers gfbt /arch/bm0146/k204221/file01.nc /arch/bm0146/k204221/another_file.nc /arch/bm0146/k204221/a_tar_ball.tar -wf1 /scratch/k/k204221/data -v

Retrieve all files of a specific user#

We would like to retrieve all files belonging to user k204221 (id: 25301) into /scratch/k/k204221/data. A description of how to write search queries for slk search is provided on the page StrongLink query language and on pages 6 to 8 of the StrongLink Command Line Interface Guide.

First, get the user id (uid) of the user k204221:

$ id k204221
uid=25301(k204221) gid=1076(bm0146) groups=1076(bm0146),1544(dm),200524(ka1209),1603(bk1123)

Second, define a search query:

$ slk search '{"resources.posix_uid":25301}'
Search continuing. .....
Search Id: 11
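
The first two steps can be combined so that the uid does not have to be copied by hand. A minimal sketch; the generated query string is identical to the one used above:

```shell
# look up the uid of a user and build the search query string in one go
user=k204221                 # adjust to the user in question
uid=$(getent passwd "$user" | awk -F: '{ print $3 }')
query="{\"resources.posix_uid\":${uid}}"
echo "$query"

# then run the search with the generated query:
# slk search "$query"
```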

Third, we retrieve the files into destination directory:

$ # load the new slk module explicitly (not the default slk module as of Feb 2025)
$ module load slk/3.3.91_h1.13.3_w2.0.1
$ # We will create many text files to structure the retrieval.
$ # Therefore, please create a new folder for this purpose and change into it
$ mkdir tmp_folder_gfbt
$ cd tmp_folder_gfbt
$ # generate tape grouping
$ slk_helpers gfbt --search-id 11 -wf1 /scratch/k/k204221/data -v
$ # start recall and retrieval
$ start_recall_watcher.sh <LEVANTE COMPUTE TIME PROJECT>
$ start_retrieve_watcher.sh <LEVANTE COMPUTE TIME PROJECT>
$ # check the status in the recall.log and retrieve.log files

Details on this process are available here.

Manually verify that retrieval was successful#

StrongLink calculates two checksums of each archived file and stores them in the metadata. It compares the stored checksums with the file’s actual checksums at certain stages of the archival and retrieval process. If you wish, you can check the checksum manually. We provide a batch script template for a file archival plus subsequent checksum check here. If a file has no checksum then it has not been fully archived yet (e.g. the copying is still in progress). You should not retrieve such a file.

# retrieve the file
$ slk retrieve /ex/am/ple/bm0146/k204221/file_collection/test.nc .
[========================================-] 100% complete 1/1 files [5B/5B]

# get the checksum of the archived file from StrongLink
$ slk_helpers checksum -t sha512 /ex/am/ple/bm0146/k204221/file_collection/test.nc
c7bb8f1a8c4fbf5ff1d8990e0b0859bde7a320f337ca65ea1e79a36423b6d9909da793b26c1c69a711d27867b4f0eae1a4ef0db8483e29f9cda3719208618ffc

# calculate the checksum of your retrieved file
$ sha512sum test.nc
c7bb8f1a8c4fbf5ff1d8990e0b0859bde7a320f337ca65ea1e79a36423b6d9909da793b26c1c69a711d27867b4f0eae1a4ef0db8483e29f9cda3719208618ffc  test.nc
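
The comparison above can be scripted so that a retrieval workflow fails loudly on a checksum mismatch. A minimal sketch; the expected checksum (first argument) would be the value reported by slk_helpers checksum as shown above:

```shell
# verify_checksum EXPECTED_SHA512 FILE -- compare an expected sha512 checksum
# (e.g. the one reported by "slk_helpers checksum -t sha512 ...") against the
# checksum of the local, retrieved file
verify_checksum() {
    expected=$1
    file=$2
    actual=$(sha512sum "$file" | awk '{ print $1 }')
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK: $file"
    else
        >&2 echo "checksum MISMATCH: $file"
        return 1
    fi
}

# usage:
# verify_checksum "$(slk_helpers checksum -t sha512 /ex/am/ple/bm0146/k204221/file_collection/test.nc)" test.nc
```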

Group files smartly to optimize retrieval speed#

Please use our new recall/retrieve watcher workflow briefly described in Retrieve files stored in specific path, Retrieve files stored in specific search and Retrieve all files of a specific user. Details on this process are available here.

Automatize retrieval of files grouped by tape#

Please use our new recall/retrieve watcher workflow briefly described in Retrieve files stored in specific path, Retrieve files stored in specific search and Retrieve all files of a specific user. Details on this process are available here. The recall/retrieve sbatch wrappers released in Summer 2024 are considered as deprecated since February 2025.

Check status of your data request from tape#

If you run slk retrieve (and at least one file needs to be read from tape) or slk recall, the command will print the id of the corresponding recall job to the slk log (~/.slk/slk-cli.log; 84835 in the example below):

2023-01-12 09:45:10 xU22 2036 INFO  Executing command: "recall 275189"
2023-01-12 09:45:11 xU22 2036 INFO  Created copy job with id: '84835' for - "recall 275189"

The command slk_helpers recall will print the job ID directly to the terminal's stdout.

The status of a job is printed via slk_helpers job_status JOB_ID:

$ slk_helpers job_status 84835
SUCCESSFUL

$ slk_helpers job_status 84980
QUEUED (12)

$ slk_helpers job_status 84981
QUEUED (13)

$ slk_helpers job_status 84966
PROCESSING

Details are described in Waiting and processing time of retrievals.
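
If a batch script should block until a recall job has finished, a small polling loop around slk_helpers job_status can be used. This is only a sketch: the final status strings (SUCCESSFUL, FAILED, ABORTED) and the polling interval are assumptions you may need to adapt:

```shell
# wait_for_slk_job JOB_ID -- poll "slk_helpers job_status" until the job
# reaches a final state; returns 0 on SUCCESSFUL, 1 otherwise
wait_for_slk_job() {
    job_id=$1
    while true; do
        status=$(slk_helpers job_status "$job_id")
        case "$status" in
            SUCCESSFUL*) return 0 ;;
            FAILED*|ABORTED*) return 1 ;;
        esac
        sleep 60   # QUEUED/PROCESSING: wait before polling again
    done
}

# usage (job id 84835 taken from the log example above):
# wait_for_slk_job 84835 && echo "recall finished"
```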

tag files (set metadata)#

Currently, it is not possible to set the metadata of a single file. Setting metadata is only possible for all files in a directory or for all files found by a search. However, this will be changed in future releases of slk.

set one metadata field of all files in one directory#

We have archived some very large text files into the namespace /ex/am/ple/bm0146/k204221/texts and, now, want to assign the author’s name (Daniel Neumann) via the metadata field document.Author.

Please see the page Reference: metadata schemata for a list of all metadata schemata and their fields.

$ slk tag /ex/am/ple/bm0146/k204221/texts document.Author="Daniel Neumann"
Searching for resources in GNS path: /ex/am/ple/bm0146/k204221/texts
Search continuing. .....
Search ID: 26
Add Metadata Job complete, applied to 12 of 12 resources.

set one metadata field of all files of one type belonging to one person#

We would like to assign the author’s name (Daniel Neumann) via the metadata field document.Author to all text files (mime type: text/plain) by the user k204221.

First, we need to search for the files

$ slk search '{"$and": [{"resources.mimetype":"text/plain"},{"resources.posix_uid":25301}]}'
Search continuing. .....
Search ID: 383

Then we apply slk tag on the search result:

$ slk tag 383 document.Author="Daniel Neumann"
Search continuing. .....
[========================================|] 100% complete Metadata applied to 359 of 359 resources. Finishing up......

Change permissions and group of files and directories#

Note

Changes of the ownership (slk owner) can only be performed by an admin user. Changes of the group can only be performed by the file’s owner or an admin. Users can only set groups in which they are members. The Linux terminal commands chown and chgrp behave the same.

Grant everyone / all users read access to a directory and its content#

We would like to grant all users read access to the namespace /ex/am/ple/bm0146/k20422/public_data recursively. “All users” in this context should mean “the file’s group, all users not in the group and myself”.

$ slk chmod -R a+r /ex/am/ple/bm0146/k20422/public_data

Revoke write access to directory and its content for users of the group#

We would like to revoke write access to /ex/am/ple/bm0146/k20422/top_secret_data and its content for all users in the directory’s/file’s group.

$ slk chmod -R g-w /ex/am/ple/bm0146/k20422/top_secret_data

Change the group of a directory and its content#

We would like to change the group of /ex/am/ple/bm0146/k20422/group_data and its content to bm0146. We need to be the owner of the namespace and its content, and we need to be a member of the group bm0146.

$ slk group -R bm0146 /ex/am/ple/bm0146/k20422/group_data

Get user/group IDs and names#

Get user id from user name#

# get your user id
$ id -u

# get the id of any user
$ id -u USER_NAME

# get the id of any user
$ getent passwd USER_NAME
#  OR
$ getent passwd USER_NAME | awk -F: '{ print $3 }'

Get user name from user id#

# get user name from user id
$ getent passwd USER_ID | awk -F: '{ print $1 }'

Get group id from group name#

# get the id of any group
$ getent group GROUP_NAME | awk -F: '{ print $3 }'

# get group names and their ids of all groups of which you are a member
$ id

Get group name from group id#

# get group name from group id
$ getent group GROUP_ID | awk -F: '{ print $1 }'

# get group names and their ids of all groups of which you are a member
$ id
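
The getent calls above can be wrapped into small helper functions for use in scripts. A minimal sketch; the function names are just suggestions:

```shell
# uid_of USER_NAME  -> numeric user id (third field of the passwd entry)
uid_of() { getent passwd "$1" | awk -F: '{ print $3 }'; }

# gid_of GROUP_NAME -> numeric group id (third field of the group entry)
gid_of() { getent group "$1" | awk -F: '{ print $3 }'; }

# examples (values from the "id k204221" output above):
# uid_of k204221   -> 25301
# gid_of bm0146    -> 1076
```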

slk in batch jobs on compute nodes#

Simple archival job script#

#!/bin/bash

## ~~~~~~~~~~~~~~~~~~~~~~~~~ start user input ~~~~~~~~~~~~~~~~~~~~~~~~~
# HINT:
#   * You can change the values right of the "=" as you wish.
#   * The "%j" in the log file names means that the job id will be inserted

#SBATCH --job-name=test_slk_job   # Specify job name
#SBATCH --output=test_job.o%j    # name for standard output log file
#SBATCH --error=test_job.e%j     # name for standard error output log file
#SBATCH --partition=shared      # Specify partition name
#SBATCH --ntasks=1             # Specify max. number of tasks to be invoked
#SBATCH --time=08:00:00        # Set a limit on the total run time
#SBATCH --account=ka1209       # Charge resources on this project account
## ~~~~~~~~~~~~~~~~~~~~~~~~~ end user input ~~~~~~~~~~~~~~~~~~~~~~~~~

source_folder=/work/ka1209/ex/am/ple
target_namespace=/arch/xz1234/$USER/data

# create namespace on StrongLink
# (optional; should be created by "slk archive" automatically)
slk_helpers mkdir -R ${target_namespace}

# do the archival
echo "doing 'slk archive -R ${source_folder} ${target_namespace}'"
slk archive -R ${source_folder} ${target_namespace}
# '$?' captures the exit code of the previous command
if [ $? -ne 0 ]; then
  >&2 echo "an error occurred in slk archive call"
else
  echo "archival successful"
fi

Extensive archival job script with some diagnostics#

#!/bin/bash

## ~~~~~~~~~~~~~~~~~~~~~~~~~ start user input ~~~~~~~~~~~~~~~~~~~~~~~~~
# HINT:
#   * You can change the values right of the "=" as you wish.
#   * The "%j" in the log file names means that the job id will be inserted

#SBATCH --job-name=test_slk_job   # Specify job name
#SBATCH --output=test_job.o%j    # name for standard output log file
#SBATCH --error=test_job.e%j     # name for standard error output log file
#SBATCH --partition=shared      # Specify partition name
#SBATCH --ntasks=1             # Specify max. number of tasks to be invoked
#SBATCH --time=08:00:00        # Set a limit on the total run time
#SBATCH --account=ka1209       # Charge resources on this project account
## ~~~~~~~~~~~~~~~~~~~~~~~~~ end user input ~~~~~~~~~~~~~~~~~~~~~~~~~


## ~~~~~~~~~~~~~~~~~~~~~~~~~ start user input ~~~~~~~~~~~~~~~~~~~~~~~~~
# source folder for archival
data_source=/work/ka1209/ex/am/ple
# target folder for archival
data_destination=/arch/xz1234/$USER/data
# file to write out run time and similar ...
statistics_file=statistics.csv
## ~~~~~~~~~~~~~~~~~~~~~~~~~ end user input ~~~~~~~~~~~~~~~~~~~~~~~~~


# time and date of the start of the job
date_start=`date +%Y-%m-%dT%H:%M:%S`
# create tmp dir
mkdir tmp

## user output
echo "data source directory:  $data_source"
echo "data target directory:  $data_destination"
echo "statistics output file: $statistics_file"
echo ""
echo "start date:             $date_start"
echo ""

## do the archival here
# create namespace on StrongLink
# (optional; should be created by "slk archive" automatically)
slk_helpers mkdir -R ${data_destination}

# this is for timing: /usr/bin/time -f "%E" -o tmp/time_job_$SLURM_JOB_ID.txt
# We write the run time of slk archive into a file from which we will read later on
echo "starting slk archive:   slk archive -R ${data_source} ${data_destination}"
/usr/bin/time -f "%E" -o tmp/time_job_$SLURM_JOB_ID.txt slk archive -R ${data_source} ${data_destination}
exit_code_archive=$?
run_time=`cat tmp/time_job_$SLURM_JOB_ID.txt`
echo "finished slk archive:   "
echo "         * exit code:   $exit_code_archive"
echo "         * run time:    $run_time"
echo ""

echo "write statistics file:  $statistics_file"
## write statistics
#     JOB ID,       Node Name, Src Dir,       Dst Dir,            slk Path,    Exit Code,           Run Time
echo "$SLURM_JOB_ID,`hostname`,${data_source},${data_destination},`which slk`,${exit_code_archive},${run_time}" >> ${statistics_file}
echo ""

echo ""
echo "finished"

Run archival in batch job and capture the job id#

solution (a): we have the archival command in a script, which is submitted as a SLURM job.

echo "submit slk archival job"
job_id_new=`sbatch ./archive_script.sh | awk ' { print $4 } '`
echo "job id of archival job: ${job_id_new}"
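
Once the job id has been captured, a script can wait until the SLURM job has left the queue before inspecting its log file. A minimal sketch using squeue; the polling interval is an assumption:

```shell
# wait_for_slurm_job JOB_ID -- block until the submitted SLURM job no
# longer appears in the queue (i.e. it has finished or was cancelled)
wait_for_slurm_job() {
    job_id=$1
    while squeue --noheader --jobs "$job_id" 2>/dev/null | grep -q .; do
        sleep 60
    done
}

# usage:
# wait_for_slurm_job "${job_id_new}"
# cat "test_job.o${job_id_new}"
```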

solution (b): we submit the slk archive call directly to SLURM

echo "submit slk archival job"
job_id_new=`sbatch --partition=shared --account=bm0146 --wrap="slk archive *.nc /arch/bm0146/k204221/test" | awk '{ print $4 }'`
echo "job id of archival job: ${job_id_new}"

We strongly advise against solution (b) because the exit code of slk archive cannot be captured this way. Unfortunately, slk archive does not print any command line output into the SLURM job log. Therefore, solution (a) is better: put slk archive into its own script in which the exit code is properly captured. A simple example script for solution (a) is given in the section Simple SLURM job scripts using slk.

Simple SLURM job scripts using slk#

The new slk_helpers retrieve command is able to generate its own SLURM job script for small retrievals (see here).

When you want to retrieve more than five to ten files, please use our recall/retrieve watcher scripts described here. These scripts automatically submit SLURM jobs.