This document is under active development and discussion!

If you find errors or omissions in this document, please don’t hesitate to submit an issue or open a pull request with a fix. We also encourage you to ask questions and discuss any aspects of the project on the {feelpp} Gitter forum. New contributors are always welcome!

Mesocenter of Strasbourg

The Mesocenter of Strasbourg is the HPC cluster of the University of Strasbourg.

For more information, see the cluster documentation.

1. Prerequisites

You must have an account on the machine. See the cluster documentation for your first steps on the cluster.
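You can then log in over SSH; a minimal sketch, assuming the login node is reachable as hpc-login (the host name used later in this document; the exact address is given in the cluster documentation):

ssh <username>@hpc-login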

2. Feel++ usage

Documentation deprecated
Work in progress

2.1. Latest working configuration on hpc-login with the develop version of feelpp/config

Use the configuration provided for the cluster in the feelpp/config repository; it is currently cloned in /usr/local/feelpp/config.

To use this configuration, add the following commands to your .bashrc file:

# The cd commands are important, because we rely on the current working directory to find
# the correct files to source
cd /usr/local/feelpp/config/etc
source feelpprc.sh
cd -

To ensure a working configuration, please use the profiles provided in the config repository. Current working profiles on the Strasbourg mesocenter:

  • compilers/gnu481.profile

  • compilers/clang35.profile
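After sourcing feelpprc.sh, you can check that these profiles are available and load one of them; a minimal sketch, assuming the profiles are exposed as environment modules (as in the submission script below):

# List the modules made available by the configuration
module avail

# Load the GCC 4.8.1 profile, for example
module load compilers/gnu481.profile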

Sample SLURM submission script (fill in your own values wherever <> appears):

#!/bin/bash
## Options needed to specify where to launch the jobs and the authorization (required)
#SBATCH -p <partition>
#SBATCH -A <AuthGroup>
## Number of cores required
#SBATCH -n <NBC>
## Time until the job is killed (optional)
#SBATCH -t 12:00:00
## Number of cores per node to use (optional)
#SBATCH --tasks-per-node <NBTPN>
## Impose a constraint to the type of node we want (optional)
#SBATCH --constraint <C>
## Required memory size (optional)
#SBATCH --mem=<MSIZE>
## Output some data about the processor (optional)
#SBATCH --cpu_bind=verbose
## Send a mail at the end of the execution (optional)
#SBATCH --mail-type=END
#SBATCH --mail-user=<email>

# Source the config
cd /usr/local/feelpp/config/etc
source feelpprc.sh
cd -

# Load a compiler profile
module load compilers/gnu481.profile

# Finally launch the job
# mpirun of openmpi is natively interfaced with Slurm
# No need to specify the number of processors to use
cd ${HOME}/git/feelpp/build_gcc/doc/manual/tutorial
mpirun <myapp>
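Since Open MPI reads the allocation directly from SLURM, the mpirun line above starts exactly the number of tasks requested with -n. If needed, you can inspect the granted resources from inside the job via the environment variables SLURM sets, for example:

# Print the task count and node list provided by SLURM
echo "tasks: $SLURM_NTASKS on nodes: $SLURM_JOB_NODELIST"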

Note that the fast disk is /workdir (17 TB free at the time of writing).

2.2. Job submissions

2.2.1. Interactive

To submit a job interactively, for example on 10 cores, type

salloc -n 10

You now have 10 cores allocated and are ready to launch your application interactively with mpirun:

mpirun --bind-to-core -x LD_LIBRARY_PATH  -x IMPORTANT_VAR /where/is/my/app --option1=arg1 --option2=arg2
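When you are finished, exit the shell spawned by salloc to release the cores, or cancel the allocation explicitly by its job id (as reported by squeue):

scancel <jobid>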

2.2.2. Batch

Save the following script as myBatch.slurm:

#!/bin/bash

#SBATCH -n 2048 # request 2048 cores (one thread per core)
source $HOME/.bash_profile  # or source ~/myEnv
export IMPORTANT_VAR=important_value
cd /workdir/math/`whoami`
mpirun --bind-to-core -x LD_LIBRARY_PATH -x IMPORTANT_VAR /where/is/my/app --option1=arg1 --option2=arg2

To submit the job, type:

sbatch myBatch.slurm
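You can then follow your job in the queue; by default SLURM writes the job output to a file named slurm-<jobid>.out in the submission directory:

# List your jobs and their states (PD = pending, R = running)
squeue -u $USER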

Note that in {feelpp}, SLURM scripts can be generated automatically by enabling FEELPP_ENABLE_SLURM.
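As a sketch, assuming FEELPP_ENABLE_SLURM is passed as a CMake option at configure time (the build directory matches the one used in the sample script above):

cd ${HOME}/git/feelpp/build_gcc
cmake -DFEELPP_ENABLE_SLURM=ON ..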

3. Visualisation process

You can either:

  • rsync the $workdir to your computer (see the sketch below)

  • use remote visualisation via VNC
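For the rsync option, a minimal sketch; the remote path follows the /workdir layout used in the batch example above, and the myresults directory is illustrative:

# Pull results from the cluster to the current local directory
rsync -avz hpc-login:/workdir/math/<username>/myresults .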

3.1. VNC

  • export the display using ssh -Y

  • use Chicken of the VNC (to be installed on your machine):

    1. log in to hpc-visu

    2. launch the server with /opt/TurboVNC/bin/vncserver and enter a password of your choice; the server prints the display it created, e.g. New 'X' desktop is hpc-visu-01:6

    3. with Chicken of the VNC, connect to hpc-visu on display 6 (in this example) and use your password to log in.

    4. open a terminal in the new display and launch vglrun xterm -ls -sb so that processes started from this xterm run on a GPU.

    5. launch paraview and enjoy.
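When you are done, stop the VNC server you started on hpc-visu; the display number is the one reported at startup (6 in the example above):

/opt/TurboVNC/bin/vncserver -kill :6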

4. ADMIN ONLY - Install new versions of software

New software is installed in /usr/local/feelpp (referred to below as FEELPP_IDIR). You need to be in the "math" group to have access to this directory.

4.1. Clang >= 3.5

Clang is built in the $FEELPP_IDIR/_clang directory and installed in $FEELPP_IDIR/clang-X.Y, where X.Y is the version being installed and FEELPP_IDIR is the installation directory for applications/libraries built for {feelpp} (i.e. /usr/local/feelpp).

Follow the clang [getting started](http://clang.llvm.org/get_started.html) tutorial to set up the directory structure for building clang. You can check out the tag of the clang version you want, or use the latest official [release](http://llvm.org/releases/) rather than the svn version.

Stop just before the ../llvm/configure step. On hpc-login the default gcc is version 4.4, while clang >= 3.5 requires at least gcc 4.7. To work around this, first load a gcc module for a compiler >= 4.7 (e.g. compilers/gnu47), then configure llvm as follows:

../llvm-X.Y.0.src/configure --prefix=/usr/local/feelpp/clang-X.Y --with-gcc-toolchain=$PATH_TO_GCC

where PATH_TO_GCC is the path where the gcc libraries, executables, and includes are stored. Then:

make -j $NJOBS
make install
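To check the installation, you can query the freshly built compiler (with X.Y as above):

$FEELPP_IDIR/clang-X.Y/bin/clang++ --version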

You should then be all set for clang. Don't forget to load the module for the gcc version you used before loading the clang module; otherwise clang won't find the correct stdlib version.
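For example, with the module names used elsewhere in this document:

# Load the gcc toolchain used for the build first, then clang
module load compilers/gnu47
module load compilers/clang35.profile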