slk usage examples

file version: 25 August 2022

current software versions: slk version 3.3.21; slk_helpers version 1.2.4

Note

StrongLink uses the term “namespace” or “global namespace” (gns). A “namespace” is comparable to a “directory” or “path” on a common file system.

Obtain Access Token

You have to manually log in to the StrongLink instance every 30 days via slk login:

$ slk login
Username: XYZ
Password:
Login Successful

A login token is created after successful login.

Note

slk stores the login token in the home directory of each user (~/.slk/config.json). By default, this file can only be accessed by the respective user (permissions: -rw-------/600). However, users should be careful when doing things like chmod 755 * in their home directory. If you assume that your slk login token has been compromised, please contact support@dkrz.de.
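
If you are unsure whether the permissions of the token file are still restrictive enough, you can check and, if necessary, restore them with standard Linux commands (the path below is the default location mentioned above):

# check the permissions of the token file (should be -rw------- / 600)
$ ls -l ~/.slk/config.json

# restore restrictive permissions if necessary
$ chmod 600 ~/.slk/config.json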

Check if access token is still valid

$ slk_helpers session
Your session token is valid until Jun 19, 2021, 09:02:27 AM

The date and time until which your login token is valid will be printed.
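
In scripts, it can be useful to verify the token before starting long transfers. A minimal bash sketch, assuming that slk_helpers session returns a non-zero exit code when the token is invalid (if in doubt, check the printed date instead):

$ slk_helpers session > /dev/null 2>&1 || echo "token expired or missing, please run 'slk login'"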

Archival

Archive one file

We have a file my_file.nc located in /work/bm0146/k204221/important_files and want to archive it onto tape to /ex/am/ple/bm0146/k204221/file_collection.

$ slk archive /work/bm0146/k204221/important_files/my_file.nc /ex/am/ple/bm0146/k204221/file_collection

or

$ cd /work/bm0146/k204221/important_files
$ slk archive my_file.nc /ex/am/ple/bm0146/k204221/file_collection

Archive directory recursively

We wish to archive the whole directory /work/bm0146/k204221/important_files onto tape to /ex/am/ple/bm0146/k204221.

$ slk archive -R /work/bm0146/k204221/important_files /ex/am/ple/bm0146/k204221

Archive all files stored in a subset of available directories

A model run was performed over 100 years from 1900 to 1999. The results of each model year are stored in a dedicated directory – thus, having folders 1900 until 1999. We want to archive the 1980s only.

$ slk archive /work/bm0146/k204221/model_xyz/output/198? /ex/am/ple/model_xyz/output

Create directory

slk has no mkdir command yet. slk archive automatically creates the target namespace (= directory) if it does not exist. If you wish to create a namespace or namespace hierarchy in advance (e.g. for a project), you have two options:

  • solution A: use slk_helpers mkdir or slk_helpers mkdir -R

  • solution B: create folder structure and dummy files locally and archive them with slk archive

Solution A: use slk_helpers mkdir or mkdir -R

$ slk_helpers mkdir /ex/am/ple/namespaceExists/newNamespace

or

$ slk_helpers mkdir -R /ex/am/ple/hierarchy/of/new/namespaces

Solution B: create dummy files and archive them with slk archive

Example: we already have the folder /ex/am/ple/bm0146/k204221 and want to create the folders /ex/am/ple/bm0146/k204221/abc/d01/efg, /ex/am/ple/bm0146/k204221/abc/d02/efg and /ex/am/ple/bm0146/k204221/abc/d03/efg.

$ mkdir -p abc/d01/efg abc/d02/efg abc/d03/efg
$ echo "blub" > abc/d01/efg/dummy.txt
$ echo "blub" > abc/d02/efg/dummy.txt
$ echo "blub" > abc/d03/efg/dummy.txt
$ slk archive -R abc /ex/am/ple/bm0146/k204221
$ rm -rf abc


Note

Archiving empty directories is currently not supported. This is intended behaviour.

Check checksum of archived file

StrongLink calculates two checksums of each archived file and stores them in the metadata. It compares the stored checksums with the file’s actual checksums at certain stages of the archival and retrieval process. Commonly, users do not need to check the checksums manually, but you can do so if you wish. If a file has no checksum, it has not been fully archived yet (e.g. the copying is still in progress).

# archive the file
$ slk archive test.nc /ex/am/ple/bm0146/k204221/file_collection
[========================================/] 100% complete. Files archived: 1/1, [5B/5B].

# get the checksum from StrongLink
$ slk_helpers checksum -t sha512 /ex/am/ple/bm0146/k204221/file_collection/test.nc
c7bb8f1a8c4fbf5ff1d8990e0b0859bde7a320f337ca65ea1e79a36423b6d9909da793b26c1c69a711d27867b4f0eae1a4ef0db8483e29f9cda3719208618ffc

# calculate the checksum of your local file
$ sha512sum test.nc
c7bb8f1a8c4fbf5ff1d8990e0b0859bde7a320f337ca65ea1e79a36423b6d9909da793b26c1c69a711d27867b4f0eae1a4ef0db8483e29f9cda3719208618ffc  test.nc

Search files

Note

Currently, slk search is not available due to faulty filtering of search results. Please use slk_helpers search_limited instead.

The slk search (currently deactivated; please use slk_helpers search_limited) command uses a query language that was designed by StrongBox Data Solutions. The query language is a dialect of JSON. Examples and a reference table are given in the section Search files by metadata and on the page Reference: StrongLink query language, respectively. There is also a description in the StrongLink Command Line Interface Guide from page 6 onwards. In the beginning, it might take some time to formulate correct search queries. Therefore, we will provide tools to generate search queries for common use cases.

The output of a search request is NOT the listing of datasets matching the search request, but a SEARCH_ID. This SEARCH_ID can then be used by further slk commands (see below). The SEARCH_ID is assigned globally – e.g. SEARCH_ID 423 exists only once. Each user has access to each SEARCH_ID. Thus, a user can share his/her SEARCH_ID with colleagues. However, the output of slk list SEARCH_ID or retrieval of slk retrieve SEARCH_ID ... depends on the read permissions of the executing user.
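
A sketch of the typical workflow (the SEARCH_ID 423 is just the placeholder used above; slk list SEARCH_ID and slk retrieve SEARCH_ID are demonstrated in detail further below):

# list the files found by search 423
$ slk list 423

# retrieve the files found by search 423 into a target directory
$ slk retrieve 423 /scratch/k/k204221/data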

Warning

You can use ' or " to enclose the search query string when you call slk search. We strongly recommend using '. If you use " please keep in mind to escape all " and $ within your search query string with \. Section search files larger than 1 MB contains an example of both ways.

search files by owner / user

We would like to search for all files belonging to user k204221 (id: 25301).

Solution A: using easy-search options

$ slk search -user k204221
# currently, slk search is deactivated and slk_helpers search_limited does not support this argument

Solution B: using RQL search query

A description on how to write the search queries for slk search is provided at the page StrongLink query language and on pages 6 to 8 of the StrongLink Command Line Interface Guide.

$ slk search '{"resources.posix_uid":25301}'
$ slk_helpers search_limited '{"resources.posix_uid":25301}'

Hint: your ID or that of another user on the DKRZ system can be obtained by using the id command in the Linux shell

$ id $DKRZ_USER

search files larger than 1 MB

We would like to search for all files which are larger than 1 MB.

$ slk search '{"resources.size":{"$gt": 1048576}}'
$ slk_helpers search_limited '{"resources.size":{"$gt": 1048576}}'
#  OR
$ slk search "{\"resources.size\":{\"\$gt\": 1048576}}"
#  DO NOT FORGET TO ESCAPE THE $ AS WELL

search files based on optional metadata

We would like to search for “Max” as value in the metadata field “Producer” of the schema “image”.

$ slk search '{"image.Producer":"Max"}'
$ slk_helpers search_limited '{"image.Producer":"Max"}'

search a file by name

We would like to find the file search_me.jpg.

Solution A: using easy-search options

$ slk search -name search_me.jpg
# currently, slk search is deactivated and slk_helpers search_limited does not support this argument

Solution B: using RQL search query

$ slk search "{\"resources.name\": \"search_me.jpg\"}"
Search continuing. .....
Search ID: 23

# or
$ slk_helpers search_limited "{\"resources.name\": \"search_me.jpg\"}"
...

search files by name using regular expressions

We would like to find all files of the format file_[0-9].nc (like file_1.nc, file_2.nc, …):

$ slk search "{\"resources.name\": {\"\$regex\": \"file_[0-9].nc\"}}"
Search continuing. .....
Search ID: 380

# or
$ slk_helpers search_limited "{\"resources.name\": {\"\$regex\": \"file_[0-9].nc\"}}"
...

$ slk list 380
-rw-r--r--t   k204221   bm0146  11 B    02 Mar 2021     file_2.txt
-rw-r--r--t   k204221   bm0146  16 B    02 Mar 2021     file_1.txt
-rw-r--r--t   k204221   bm0146  11 B    02 Mar 2021     file_1.txt
Files 1-3 of 3

Warning

The namespace / path must not contain regular expressions.

See also

There are two similar regular expression examples in Generate search queries for filenames.

search files by one of two owners – logical OR

We would like to search for all files belonging to user k204216 (id: 24855) or k204221 (id: 25301).

$ slk search '{"$or": [{"resources.posix_uid":24855},{"resources.posix_uid":25301}]}'

# or
$ slk_helpers search_limited '{"$or": [{"resources.posix_uid":24855},{"resources.posix_uid":25301}]}'
...

Hint: your ID or that of another user on the DKRZ system can be obtained by using the id command in the Linux shell

$ id $DKRZ_USER

search files based on two metadata fields – logical AND

We would like to search for the file surface_iow_day3d_temp_emep_2003.nc belonging to the user k204221.

$ slk search '{"$and":[{"resources.name": "surface_iow_day3d_temp_emep_2003.nc"}, {"resources.posix_uid": 25301}]}'
Search continuing. .....
Search ID: 65

# or
$ slk_helpers search_limited '{"$and":[{"resources.name": "surface_iow_day3d_temp_emep_2003.nc"}, {"resources.posix_uid": 25301}]}'
...

search files with specific metadata in a namespace recursively

We wish to search recursively in /ex/am/ple/testing for files with Max Mustermann as value in the metadata field document.Author.

$ slk search '{"$and": [{"path": {"$gte": "/ex/am/ple/testing"}}, {"document.Author": "Max Mustermann"}]}'
Search continuing. .....
Search ID: 77

# or
$ slk_helpers search_limited '{"$and": [{"path": {"$gte": "/ex/am/ple/testing"}}, {"document.Author": "Max Mustermann"}]}'
...

search all files that follow the CMIP Conventions

We wish to search all files that have CMIP written in their global attribute Conventions:

$ slk search '{"netcdf.Conventions": {"$regex": "CMIP"}}'
Search continuing. .....
Search ID: 526

# or
$ slk_helpers search_limited '{"netcdf.Conventions": {"$regex": "CMIP"}}'
...

save search ID into shell variable

slk search does not provide a feature out of the box to print only the SEARCH_ID. Currently (this might change in future versions), the SEARCH_ID is printed from column 12 onwards of the second line of the text output of slk search. We can use tail and sed to get this line and extract the number, use tail and cut to drop the first 11 characters, or use awk to match the line and print the third column. Examples:

# normal call of slk search
$ slk search '{"resources.posix_uid": 25301}'
Search continuing. .....
Search ID: 466

# get ID using sed:
$ search_id=`slk search '{"resources.posix_uid": 25301}' | tail -n 1 | sed 's/[^0-9]*//g'`
$ echo $search_id
470

# get ID by dropping first 11 characters of the second line
$ search_id=`slk search '{"resources.posix_uid": 25301}' | tail -n 1 | cut -c12-20`
$ echo $search_id
471

# use awk pattern matching to get the correct line and correct column
$ search_id=`slk search '{"resources.posix_uid": 25301}' | awk '/Search ID/ {print($3)}'`
$ echo $search_id
507

Note

This is an example for bash. When using csh, you need to prepend set to the assignments of the shell variables: set search_id=....
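
A csh/tcsh sketch of the awk variant from above:

$ set search_id=`slk search '{"resources.posix_uid": 25301}' | awk '/Search ID/ {print($3)}'`
$ echo $search_id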

using shell variables in searches

We would like to search for all files belonging to user k204221 (id: 25301).

Solution without shell variable:

$ id k204221 -u
25301
$ slk search "{\"resources.posix_uid\":25301}"
Search continuing. .....
Search ID: 474

Solution with shell variable:

$ export uid=`id k204221 -u`
$ slk search "{\"resources.posix_uid\":$uid}"
Search continuing. .....
Search ID: 475

Solution calling another shell program from within a search query:

$ slk search "{\"resources.posix_uid\":`id k204221 -u`}"
Search continuing. .....
Search ID: 475

Note

The example shell commands are meant for bash. If you are using csh or tcsh they do not work as printed here but have to be adapted.

Generate search queries

Since version 1.2.2 the slk_helpers offer the command gen_file_query. This command accepts one or more files/namespaces as input and generates a search query string. The page Technical background: slk_helpers gen_file_query describes how gen_file_query identifies files and namespaces and how it splits them up. slk_helpers gen_file_query does not perform a search itself, but the generated search query string can be used as input to slk search. Several applications of this command are shown below.
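
For example, the generated query string can be stored in a shell variable and passed directly to the search command (a minimal bash sketch using slk_helpers search_limited, since slk search is currently deactivated):

$ query=`slk_helpers gen_file_query -R /arch/bm0146/k204221/output.nc`
$ slk_helpers search_limited "$query"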

Find a file anywhere

$ slk_helpers gen_file_query output.nc
{"resources.name":{"$regex":"output.nc"}}

Note on the output: The $regex operator is not strictly needed for the query to work in this case ({"resources.name":"output.nc"} would be sufficient). However, gen_file_query cannot easily determine whether a regular expression was provided or not, and some other program-internal workflows would have been more complicated without it.

Find all resources in a namespace (non-recursive)

$ slk_helpers gen_file_query /arch/bm0146/k204221
{"path":{"$gte":"/arch/bm0146/k204221", "$max_depth": 1}}

Note on the output: path works only in combination with the operator $gte (details: path metadata field).

Find all resources in a namespace (recursively)

$ slk_helpers gen_file_query -R /arch/bm0146/k204221
{"path":{"$gte":"/arch/bm0146/k204221"}}

Note on the output: path works only in combination with the operator $gte (details: path metadata field)

Find a file in a namespace recursively

$ slk_helpers gen_file_query -R /arch/bm0146/k204221/output.nc
{"$and":[{"path":{"$gte":"/arch/bm0146/k204221"}},{"resources.name":{"$regex":"output.nc"}}]}

Find three files in two namespaces

$ slk_helpers gen_file_query /arch/bm0146/k204221/output.nc /arch/bm0146/k204221/INDEX.txt /arch/bm0146/k204221/restart/rsrt.nc
{"$or":[{"$and":[{"path":{"$gte":"/arch/bm0146/k204221/restart", "$max_depth": 1}},{"resources.name":{"$regex":"rsrt.nc"}}]},{"$and":[{"path":{"$gte":"/arch/bm0146/k204221", "$max_depth": 1}},{"resources.name":{"$regex":"output.nc|INDEX.txt"}}]}]}

$ slk_helpers gen_file_query /arch/bm0146/k204221/output.nc /arch/bm0146/k204221/INDEX.txt /arch/bm0146/k204221/restart/rsrt.nc | jq
{
  "$or": [
    {
      "$and": [
        {
          "path": {
            "$gte": "/arch/bm0146/k204221/restart",
            "$max_depth": 1
          }
        },
        {
          "resources.name": {
            "$regex": "rsrt.nc"
          }
        }
      ]
    },
    {
      "$and": [
        {
          "path": {
            "$gte": "/arch/bm0146/k204221",
            "$max_depth": 1
          }
        },
        {
          "resources.name": {
            "$regex": "output.nc|INDEX.txt"
          }
        }
      ]
    }
  ]
}

Note on the output: The files which are located in one namespace are grouped automatically.

Find files with regular expressions 1

Find files with the names output_00.nc to output_19.nc.

$ slk_helpers gen_file_query /arch/bm0146/k204221/output_[01][0-9].nc
{"$and":[{"path":{"$gte":"/arch/bm0146/k204221", "$max_depth": 1}},{"resources.name":{"$regex":"output_[01][0-9].nc"}}]}

Warning

The namespace / path must not contain regular expressions.

Find files with regular expressions 2

Find files with the names output_tas_00.nc to output_tas_19.nc and output_psl_00.nc to output_psl_19.nc. When you use round brackets (parentheses) in the regular expression, you need to enclose the path in single quotation marks.

$ slk_helpers gen_file_query '/arch/bm0146/k204221/output_(tas|psl)_[01][0-9].nc'
{"$and":[{"path":{"$gte":"/arch/bm0146/k204221","$max_depth":1}},{"resources.name":{"$regex":"output_(tas|psl)_[01][0-9].nc"}}]}

Find files which are grouped per year in sub-namespaces

We have monthly output files for several years. The files are stored in one folder per year. An example is given below.

...
/arch/bm0146/k204221/output/year1999/output_1999_01.nc
                                     output_1999_02.nc
                                     output_1999_03.nc
                                     output_1999_04.nc
                                     output_1999_05.nc
                                     output_1999_06.nc
                                     output_1999_07.nc
                                     output_1999_08.nc
                                     output_1999_09.nc
                                     output_1999_10.nc
                                     output_1999_11.nc
                                     output_1999_12.nc
/arch/bm0146/k204221/output/year2000/output_2000_01.nc
                                     output_2000_02.nc
                                     output_2000_03.nc
                                     output_2000_04.nc
                                     output_2000_05.nc
                                     output_2000_06.nc
                                     output_2000_07.nc
                                     output_2000_08.nc
                                     output_2000_09.nc
                                     output_2000_10.nc
                                     output_2000_11.nc
                                     output_2000_12.nc
/arch/bm0146/k204221/output/year2001/output_2001_01.nc
                                     output_2001_02.nc
                                     output_2001_03.nc
                                     output_2001_04.nc
                                     output_2001_05.nc
                                     output_2001_06.nc
                                     output_2001_07.nc
                                     output_2001_08.nc
                                     output_2001_09.nc
                                     output_2001_10.nc
                                     output_2001_11.nc
                                     output_2001_12.nc
...

We would like to retrieve all files of the years 2000 and 2001. For this purpose, we do a recursive search and omit the yearYYYY part of the namespace.

$ slk_helpers gen_file_query -R /arch/bm0146/k204221/output/output_200[01]_[0-9][0-9].nc
{"$and":[{"path":{"$gte":"/arch/bm0146/k204221/output"}},{"resources.name":{"$regex":"output_200[01]_[0-9][0-9].nc"}}]}

$ slk_helpers search_limited '{"$and":[{"path":{"$gte":"/arch/bm0146/k204221/output"}},{"resources.name":{"$regex":"output_200[01]_[0-9][0-9].nc"}}]}'
Search continuing. .
Search ID: 128349

$ slk_helpers list_search 128349
-rw-r--r--t     18380387228 /arch/bm0146/k204221/output/year2001/output_2001_02.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2001/output_2001_06.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_10.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2000/output_2000_04.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2001/output_2001_11.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_05.nc
-rw-r--r--t     19036829629 /arch/bm0146/k204221/output/year2000/output_2000_02.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_12.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_01.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_03.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_08.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_07.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2001/output_2001_09.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_01.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2000/output_2000_06.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_12.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_07.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_03.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2001/output_2001_05.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_08.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2001/output_2001_04.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2000/output_2000_09.nc
-rw-r--r--t     19693272030 /arch/bm0146/k204221/output/year2000/output_2000_11.nc
-rw-r--r--t     20349714432 /arch/bm0146/k204221/output/year2000/output_2000_10.nc

Note on the output: slk_helpers list_search prints files in the order in which the files were found by the search. They are not sorted alphabetically or similar.

List files

List files stored in specific namespace

We would like to print all files stored on tape in /ex/am/ple/bm0146/k204221.

$ slk list /ex/am/ple/bm0146/k204221

Recursively list files stored in specific namespace

We would like to print all files stored on tape in /ex/am/ple/bm0146/k204221 and in sub-namespaces.

$ slk list -R /ex/am/ple/bm0146/k204221

List search results

Please see List all files of a specific user

List all files of a specific user

We would like to print all files belonging to user k204221 (id: 25301).

First, get the user id (uid) of the user k204221:

$ id k204221
uid=25301(k204221) gid=1076(bm0146) groups=1076(bm0146),1544(dm),200524(ka1209),1603(bk1123)

Second, define a search query:

$ slk search '{"resources.posix_uid":25301}'
Search continuing. .....
Search Id: 9

Third, we print all found files:

$ slk list 9
...

list search results vs. list the content of a folder

We perform a search with slk search / slk_helpers search_limited to find the content of a namespace. slk list search_id prints only files (no namespaces) that the user is allowed to see/read. In contrast, slk list namespace and slk_helpers list_search search_id list files and sub-namespaces in a namespace. If you wish slk_helpers list_search to print only files, please run it with -f / --only-files. The example below clarifies the situation. In the example, we assume that the sub-namespace test does not contain any files.

$ slk_helpers search_limited '{"path": {"$gte": "/ex/am/ple/testing/testing/test03/test"}}'
Search continuing. .....
Search ID: 856

$ slk list 856 | cat
-rw-r--r--t  k204221        bm0146   16.1M  01 Apr 2021  some_file.nc
Files: 1

$ slk list /ex/am/ple/testing/testing/test03/test
drwxr-xr-xt  25301          900                  06 Apr 2021    test1
drwxr-xr-xt  25301          900                  06 Apr 2021    test2
-rw-r--r--t  k204221        bm0146   16.1M       01 Apr 2021    some_file.nc
Files: 3

$ slk_helpers list_search 856
drwxr-xr-xt         0  /ex/am/ple/testing/testing/test03/test/test2
-rw-r--r--t  16882074  /ex/am/ple/testing/testing/test03/test/some_file.nc
drwxr-xr-xt         0  /ex/am/ple/testing/testing/test03/test/test1
Resources: 3

$ slk_helpers list_search -f 856
-rw-r--r--t  16882074  /ex/am/ple/testing/testing/test03/test/some_file.nc
Resources: 1

Note

slk_helpers list_search omits ownership, group and modification date. However, it prints all sizes in bytes and reports 0 bytes as the size of namespaces. In contrast, slk list prints file sizes in human-readable form and does not print a size for namespaces at all.

Retrieve files

Retrieve files stored in specific path

We would like to retrieve all files located in the folder /ex/am/ple/bm0146/k20422/dm/retrieve_us to the current directory. The folder retrieve_us itself should not be created in the current directory (hence the trailing slash in the command below).

$ slk retrieve -R /ex/am/ple/bm0146/k20422/dm/retrieve_us/ .

Retrieve folder stored in specific path

We would like to retrieve the folder /ex/am/ple/bm0146/k20422/dm/retrieve_us with its content to the current directory. The folder retrieve_us will be created in the current directory (.).

$ slk retrieve -R /ex/am/ple/bm0146/k20422/dm/retrieve_us .

Retrieve all files of a specific user (search is deactivated)

We would like to retrieve all files belonging to user k204221 (id: 25301) into /scratch/k/k204221/data. A description on how to write the search queries for slk search is provided at the page StrongLink query language and on pages 6 to 8 of the StrongLink Command Line Interface Guide.

First, get the user id (uid) of the user k204221:

$ id k204221
uid=25301(k204221) gid=1076(bm0146) groups=1076(bm0146),1544(dm),200524(ka1209),1603(bk1123)

Second, define a search query:

$ slk search '{"resources.posix_uid":25301}'
Search continuing. .....
Search Id: 11

Third, we retrieve the files into destination directory:

$ slk retrieve 11 /scratch/k/k204221/data

Manually verify that retrieval was successful

StrongLink calculates two checksums of each archived file and stores them in the metadata. It compares the stored checksums with the file’s actual checksums at certain stages of the archival and retrieval process. If you wish, you can check the checksum manually. We provide a batch script template for a file archival plus subsequent checksum check here. If a file has no checksum then it has not been fully archived yet (e.g. the copying is still in progress). You should not retrieve such a file.

# retrieve the file
$ slk retrieve /ex/am/ple/bm0146/k204221/file_collection/test.nc .
[========================================-] 100% complete 1/1 files [5B/5B]

# get the checksum of the archived file from StrongLink
$ slk_helpers checksum -t sha512 /ex/am/ple/bm0146/k204221/file_collection/test.nc
c7bb8f1a8c4fbf5ff1d8990e0b0859bde7a320f337ca65ea1e79a36423b6d9909da793b26c1c69a711d27867b4f0eae1a4ef0db8483e29f9cda3719208618ffc

# calculate the checksum of your retrieved file
$ sha512sum test.nc
c7bb8f1a8c4fbf5ff1d8990e0b0859bde7a320f337ca65ea1e79a36423b6d9909da793b26c1c69a711d27867b4f0eae1a4ef0db8483e29f9cda3719208618ffc  test.nc
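
If you want to automate this comparison, both checksums can be compared in a small bash sketch (file and namespace taken from the example above):

$ remote_sum=`slk_helpers checksum -t sha512 /ex/am/ple/bm0146/k204221/file_collection/test.nc`
$ local_sum=`sha512sum test.nc | awk '{ print $1 }'`
$ if [ "$remote_sum" = "$local_sum" ]; then echo "checksums match"; else echo "checksum mismatch" >&2; fi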

tag files (set metadata)

Currently, it is not possible to set the metadata of a single file. Setting metadata is only possible for all files in a directory or for all files found by a search. However, this will change in future releases of slk.

set one metadata field of all files in one directory

We have archived some very large text files into the namespace /ex/am/ple/bm0146/k204221/text and now want to assign the author’s name (Daniel Neumann) via the metadata field document.Author.

Please see the page Reference: metadata schemata for a list of all metadata schemata and their fields.

$ slk tag /ex/am/ple/bm0146/k204221/text document.Author="Daniel Neumann"
Searching for resources in GNS path: /ex/am/ple/bm0146/k204221/text
Search continuing. .....
Search ID: 26
Add Metadata Job complete, applied to 12 of 12 resources.

set one metadata field of all files of one type belonging to one person (search is deactivated)

We would like to assign the author’s name (Daniel Neumann) via the metadata field document.Author to all text files (mime type: text/plain) by the user k204221.

First, we need to search for the files

$ slk search '{"$and": [{"resources.mimetype":"text/plain"},{"resources.posix_uid":25301}]}'
Search continuing. .....
Search ID: 383

Then we apply slk tag on the search result:

$ slk tag 383 document.Author="Daniel Neumann"
Search continuing. .....
[========================================|] 100% complete Metadata applied to 359 of 359 resources. Finishing up......

Change permissions and group of files and directories

Note

Changes of the ownership (slk owner) can only be performed by an admin user. Changes of the group can only be performed by the file’s owner or an admin. Users can only set groups in which they are members. The Linux terminal commands chown and chgrp behave the same.

Grant everyone / all users read access to a directory and its content

We would like to grant all users read access to the namespace /ex/am/ple/bm0146/k20422/public_data recursively. “All users” in this context should mean “the file’s group, all users not in the group and myself”.

$ slk chmod -R a+r /ex/am/ple/bm0146/k20422/public_data

Revoke write access to directory and its content for users of the group

We would like to revoke write access to /ex/am/ple/bm0146/k20422/top_secret_data and its content for all users in the directory’s/file’s group.

$ slk chmod -R g-w /ex/am/ple/bm0146/k20422/top_secret_data

Change the group of a directory and its content

We would like to change the group of /ex/am/ple/bm0146/k20422/group_data and its content to bm0146. We need to be the owner of the namespace and its content, and we need to be a member of the group bm0146.

$ slk group -R bm0146 /ex/am/ple/bm0146/k20422/group_data

Get user/group IDs and names

Get user id from user name

# get your user id
$ id -u

# get the id of any user
$ id USER_NAME -u

# get the id of any user
$ getent passwd USER_NAME
#  OR
$ getent passwd USER_NAME | awk -F: '{ print $3 }'

Get user name from user id

# get user name from user id
$ getent passwd USER_ID | awk -F: '{ print $1 }'

Get group id from group name

# get the id of any group
$ getent group GROUP_NAME | awk -F: '{ print $3 }'

# get group names and their ids of all groups of which you are a member
$ id

Get group name from group id

# get group name from group id
$ getent group GROUP_ID | awk -F: '{ print $1 }'

# get group names and their ids of all groups of which you are a member
$ id

slk in batch jobs on compute nodes

Simple archival job script:

#!/bin/bash

## ~~~~~~~~~~~~~~~~~~~~~~~~~ start user input ~~~~~~~~~~~~~~~~~~~~~~~~~
# HINT:
#   * You can change the values right of the "=" as you wish.
#   * The "%j" in the log file names means that the job id will be inserted

#SBATCH --job-name=test_slk_job   # Specify job name
#SBATCH --output=test_job.o%j    # name for standard output log file
#SBATCH --error=test_job.e%j     # name for standard error output log file
#SBATCH --partition=shared      # Specify partition name
#SBATCH --ntasks=1             # Specify max. number of tasks to be invoked
#SBATCH --time=08:00:00        # Set a limit on the total run time
#SBATCH --account=ka1209       # Charge resources on this project account
## ~~~~~~~~~~~~~~~~~~~~~~~~~ end user input ~~~~~~~~~~~~~~~~~~~~~~~~~

source_folder=/work/ka1209/ex/am/ple
target_namespace=/arch/xz1234/$USER/data

# create namespace on StrongLink
# (optional; should be created by "slk archive" automatically)
slk_helpers mkdir -R ${target_namespace}

# do the archival
echo "doing 'slk archive -R ${source_folder} ${target_namespace}'"
slk archive -R ${source_folder} ${target_namespace}
# '$?' captures the exit code of the previous command
if [ $? -ne 0 ]; then
  >&2 echo "an error occurred in slk archive call"
else
  echo "archival successful"
fi

Extensive archival job script with some diagnostics

#!/bin/bash

## ~~~~~~~~~~~~~~~~~~~~~~~~~ start user input ~~~~~~~~~~~~~~~~~~~~~~~~~
# HINT:
#   * You can change the values right of the "=" as you wish.
#   * The "%j" in the log file names means that the job id will be inserted

#SBATCH --job-name=test_slk_job   # Specify job name
#SBATCH --output=test_job.o%j    # name for standard output log file
#SBATCH --error=test_job.e%j     # name for standard error output log file
#SBATCH --partition=shared      # Specify partition name
#SBATCH --ntasks=1             # Specify max. number of tasks to be invoked
#SBATCH --time=08:00:00        # Set a limit on the total run time
#SBATCH --account=ka1209       # Charge resources on this project account
## ~~~~~~~~~~~~~~~~~~~~~~~~~ end user input ~~~~~~~~~~~~~~~~~~~~~~~~~


## ~~~~~~~~~~~~~~~~~~~~~~~~~ start user input ~~~~~~~~~~~~~~~~~~~~~~~~~
# source folder for archival
data_source=/work/ka1209/ex/am/ple
# target folder for archival
data_destination=/arch/xz1234/$USER/data
# file to write out run time and similar ...
statistics_file=statistics.csv
## ~~~~~~~~~~~~~~~~~~~~~~~~~ end user input ~~~~~~~~~~~~~~~~~~~~~~~~~


# time and date of the start of the job
date_start=`date +%Y-%m-%dT%H:%M:%S`
# create tmp dir
mkdir tmp

## user output
echo "data source directory:  $data_source"
echo "data target directory:  $data_destination"
echo "statistics output file: $statistics_file"
echo ""
echo "start date:             $date_start"
echo ""

## do the archival here
# create namespace on StrongLink
# (optional; should be created by "slk archive" automatically)
slk_helpers mkdir -R ${data_destination}

# this is for timing: /usr/bin/time -f "%E" -o tmp/time_job_$SLURM_JOB_ID.txt
# We write the run time of slk archive into a file from which we will read later on
echo "starting slk archive:   slk archive -R ${data_source} ${data_destination}"
/usr/bin/time -f "%E" -o tmp/time_job_$SLURM_JOB_ID.txt slk archive -R ${data_source} ${data_destination}
exit_code_archive=$?
run_time=`cat tmp/time_job_$SLURM_JOB_ID.txt`
echo "finished slk archive:   "
echo "         * exit code:   $exit_code_archive"
echo "         * run time:    $run_time"
echo ""

echo "write statisitics file: $statistics_file"
## write statistics
#     JOB ID,      Node Name,  Src Dir,        Dst Dir,             slk path (version), Exit Code,            Run Time
echo "$SLURM_JOB_ID,`hostname`,${data_source},${data_destination},`which slk`,${exit_code_archive},${run_time}" >> ${statistics_file}
echo ""

echo ""
echo "finished"

Run archival in batch job and capture the job id

solution a): we have the archival command in a script, which is submitted as a SLURM job.

echo "submit slk archival job"
job_id_new=`sbatch ./archive_script.sh | awk ' { print $4 } '`
echo "job id of archival job: ${job_id_new}"

solution b): we submit the slk archive call directly to SLURM

echo "submit slk archival job"
job_id_new=`sbatch --partition=shared --account=bm0146 slk archive *.nc /arch/bm0146/k204221/test | awk '{ print $4 }'`
echo "job id of archival job: ${job_id_new}"

We strongly advise against solution (b) because the exit code of slk archive cannot be captured this way. Unfortunately, slk archive does not print any command line output into the SLURM job log. Therefore, solution (a) is better: put slk archive into its own script in which the exit code is properly captured. A simple example script for solution (a) is given in the section Simple SLURM job scripts using slk.
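
Once the job id has been captured as in solution (a), the state and exit code of the job can be checked later with standard SLURM tools, for example:

$ sacct -j ${job_id_new} --format=JobID,JobName,State,ExitCode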

Simple SLURM job scripts using slk

The account has to be adapted to one of your compute time projects. We recommend using striping on Levante for the time being; some folders are striped already, so please check in advance. We strongly recommend capturing the exit code $? after each individual call of slk retrieve/archive in job scripts because these commands print no textual output.
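
Whether a folder already has a striping setting can be checked with the Lustre tools, for example (the path is the example target folder used in the script below):

$ lfs getstripe -d /work/xz1234/ex/am/ple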

Please find additional example scripts here and here.

#!/bin/bash

## ~~~~~~~~~~~~~~~~~~~~~~~~~ start user input ~~~~~~~~~~~~~~~~~~~~~~~~~
# HINT:
#   * You can change the values right of the "=" as you wish.
#   * The "%j" in the log file names means that the job id will be inserted

#SBATCH --job-name=test_slk_retr_job   # Specify job name
#SBATCH --output=test_job.o%j    # name for standard output log file
#SBATCH --error=test_job.e%j     # name for standard error output log file
#SBATCH --partition=shared      # Specify partition name
#SBATCH --ntasks=1               # Specify max. number of tasks to be invoked
#SBATCH --mem=6GB                # allocated memory for the script
#SBATCH --time=08:00:00          # Set a limit on the total run time
#SBATCH --account=xz1234         # Charge resources on this project account
## ~~~~~~~~~~~~~~~~~~~~~~~~~ end user input ~~~~~~~~~~~~~~~~~~~~~~~~~

source_resource=/arch/xz1234/$USER/data/large_test_file.nc
target_folder=/work/xz1234/ex/am/ple

# create folder to retrieve into (target folder)
mkdir -p ${target_folder}

# set striping for target folder
# see https://docs.dkrz.de/doc/hsm/striping.html
# ON LEVANTE
lfs setstripe -E 1G -c 1 -S 1M -E 4G -c 4 -S 1M -E -1 -c 8 -S 1M ${target_folder}
# ON MISTRAL
#lfs setstripe -S 4M -c 8 ${target_folder}

# do the retrieval
echo "doing 'slk retrieve ${source_resource} ${target_folder}'"
slk retrieve ${source_resource} ${target_folder}
# '$?' captures the exit code of the previous command
if [ $? -ne 0 ]; then
  >&2 echo "an error occurred in slk retrieve call"
else
  echo "retrieval successful"
fi