Compiling and Linking

This page briefly describes how to build software on mistral, i.e. how to generate executable files from source code (typically written in C/C++ or Fortran).

Compilers

As listed below, we provide a selection of high-quality compilers on mistral. Compilers are not loaded by default; you have to use the module environment to access them. We recommend specifying the module version number explicitly, otherwise the lexicographically highest version is loaded, which might not be the latest or the desired one.
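
To find out which compiler versions are installed, you can query the module system, for example:

$ module avail           # list all available modules
$ module avail intel     # list only the Intel compiler modules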

Intel Compilers

For most applications we recommend using the latest version of the Intel compilers, since they fully support the underlying CPU architecture. The compiler version can be selected by loading the corresponding module file, for example:

# Use the "latest" version of the Intel compiler
$ module load intel

# Use a specific version of the Intel compiler
$ module load intel/18.0.4

The specific compiler names are:

  • icc - for C source code

  • ifort - for Fortran source code

  • icpc - for C++ source code
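
For example, a simple serial program can be compiled and linked as follows (the file names are placeholders):

$ ifort -O2 -o hello_f hello.f90   # Fortran
$ icc -O2 -o hello_c hello.c       # C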

The table below lists some useful options that are commonly used with the Intel compilers. For further information, please refer to the man pages of the respective compiler

man ifort
man icc
man icpc

or the comprehensive documentation on the Intel website.

Option                    Description
-qopenmp                  Enables the parallelizer to generate multi-threaded code based on OpenMP directives
-g                        Creates debugging information in the object files; needed if you want to debug your program
-O[0-3]                   Sets the optimization level
-L<library path>          Specifies an additional path in which the linker searches for libraries
-D                        Defines a CPP macro
-U                        Undefines a CPP macro
-I<include directory>     Adds further directories to the include file search path
-sox                      Stores useful information such as compiler version and options used in the executable
-ipo                      Enables inter-procedural optimization
-xAVX or -xCORE-AVX2      Specifies the processor architecture for which code is generated
-help                     Prints a long list of the available options

Note

Using the compiler option -xCORE-AVX2 forces the Intel compiler to use full AVX2 support/vectorization (with FMA instructions), which might result in binaries whose results depend on the MPI decomposition. Switching to -xAVX should solve this issue but can increase the runtime by up to 15%.
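
Combining several of these options, a typical optimized build could look as follows (the program name and the include/library paths are placeholders):

$ ifort -O3 -xCORE-AVX2 -ipo -sox -g -I$HOME/include -L$HOME/lib -o prog prog.f90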

GNU Compiler Collection (GCC)

GCC is a suite of compilers for the C (gcc), C++ (g++), Fortran (gfortran), and D (gdc) programming languages. You need to load an environment module for gcc to access a recent version of the GNU compiler suite. Using the older system gcc located in /usr/bin, which is provided as part of the base Linux operating system, is generally inadvisable.
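
For example, to compile a Fortran program with a recent GCC version (the module version and file name are examples):

$ module load gcc/6.4.0
$ gfortran -O2 -march=native -o prog prog.f90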

NAG

The NAG compilers have proved to be very useful for debugging and for checking whether source code is standard conforming. They are not suited for creating model binaries for production runs.

PGI

The PGI compiler is mainly of interest for users porting their codes for execution on GPUs. The last installed version of the professional edition is 17.7; newer versions will be based on the community edition if users request them.

Compiling and Linking MPI programs

MPI Libraries

Several Message Passing Interface (MPI) library implementations are available on mistral:

  • OpenMPI: Starting with version 2.0.0, all optimizations by BULL/ATOS that were previously implemented in bullxMPI are included in OpenMPI. These versions are also built using the Mellanox HPC-X toolkit to directly benefit from the underlying InfiniBand architecture. The latest OpenMPI modules automatically load the appropriate hpcx modules. This is the recommended MPI library implementation on Mistral.

  • BullxMPI (note: out of support): Although the bullxMPI library was used throughout the benchmarks of the HLRE-3 procurement, we no longer recommend using bullxMPI with FCA. The old FCA/2.5 version depends on a central FCA manager that is no longer available. As an alternative, OpenMPI 2.0.0 or newer should be used in combination with HCOLL.

  • IntelMPI: We recommend using IntelMPI versions 2017 and newer, since prior versions might get stuck in MPI_Finalize and thus waste CPU time without doing any real computation.

No MPI library is loaded by default. As with the compilers, you have to explicitly load an environment module for the MPI implementation you want to use.

Note

Because Fortran module files are compiler specific, it is important to use a consistent combination of compiler and MPI library, i.e. to use an MPI installation that was built with the same compiler you selected to build your code (as indicated by suffixes such as intel14, gcc64, or nag62 in the MPI module names).
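
For example, to list the installed OpenMPI builds and load a combination that matches the Intel compiler (the module versions shown are examples):

$ module avail openmpi
$ module load intel/18.0.4 openmpi/2.0.2p2_hpcx-intel14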

MPI Compiler Wrappers

It is highly advisable to use MPI compiler wrappers to compile and link MPI-parallel programs. Such wrappers are provided with each MPI library implementation. They automatically set up the MPI environment (i.e. set the paths to MPI include files and libraries) to facilitate the compilation and linking steps. The following table lists the names of the Intel compilers as well as the corresponding IntelMPI and OpenMPI/BullxMPI compiler wrappers:

Language              Intel Compiler    IntelMPI wrapper    OpenMPI/BullxMPI wrapper
Fortran 90/95/2003    ifort             mpiifort            mpifort or mpif90
Fortran 77            ifort             mpiifort            mpif77
C++                   icpc              mpiicpc             mpic++
C                     icc               mpiicc              mpicc
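
The wrappers can also print the underlying compiler command line they invoke, which is useful for checking include and library paths, for example:

$ mpiifort -show       # IntelMPI wrapper
$ mpifort --showme     # OpenMPI wrapper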

Note

The computational performance and scalability of MPI applications on Mistral can be considerably improved by an optimal choice of the runtime parameters provided by the MPI libraries. The appropriate runtime settings strongly depend on the type of application and the MPI library used. For most MPI versions installed on Mistral, we provide recommendations for MPI environment settings that have proved to be beneficial for different model codes commonly used at DKRZ.
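
As a purely illustrative sketch, assuming an OpenMPI build with HCOLL support, such runtime settings are typically exported as environment variables before launching the application (the concrete recommended values depend on the MPI version and the application):

# Illustrative example only: enable OpenMPI's HCOLL collectives component via an MCA parameter
$ export OMPI_MCA_coll_hcoll_enable=1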

Examples

  • Compile a hybrid MPI/OpenMP program using the Intel Fortran compiler and OpenMPI with the Mellanox HPC-X toolkit:

$ module load intel/18.0.4 openmpi/2.0.2p2_hpcx-intel14
$ mpif90 -qopenmp -O3 -xCORE-AVX2 -fp-model source -o mpi_omp_prog program.f90
  • Compile an MPI program in Fortran using the Intel Fortran compiler and Intel MPI:

$ module load intel/18.0.4 intelmpi/2018.5.288
$ mpiifort -O3 -xCORE-AVX2 -fp-model source -o mpi_prog program.f90
  • Compile an MPI program in Fortran using the GCC Fortran compiler and OpenMPI:

$ module load gcc/6.4.0 openmpi/2.0.2p2_hpcx-gcc64
$ mpifort -O3 -march=haswell -o mpi_prog program.f90
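
To check which MPI library an executable was actually linked against, you can inspect its shared library dependencies (mpi_prog is the binary built above):

$ ldd mpi_prog | grep -i mpi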