Private: How to change my password on www.plafrim.fr?

  1. Go to http://www.plafrim.fr/wp-login.php
  2. Click "Lost your password?"
  3. Enter either your PlaFRIM id (the same as for ssh) or the email address registered in your PlaFRIM account, then click "Generate a new password"
  4. You should receive an email
  5. Click on the link (the longer one) given in this email
  6. Choose your own password
  7. You can test the connection at http://www.plafrim.fr/en/connection/


Private: How to cite PlaFRIM in your publications?

Don't forget to cite PlaFRIM in all publications presenting results or content obtained or derived from the use of PlaFRIM.

The official acknowledgment to use in your publication must be the following:

Acknowledgment: Experiments presented in this paper were carried out using the PlaFRIM experimental testbed, supported by Inria, CNRS (LABRI and IMB), Université de Bordeaux, Bordeaux INP and Conseil Régional d'Aquitaine (see https://www.plafrim.fr/).


Private: How to cite PlaFRIM in Hal (Open Archive)?

When you deposit a publication (article, conference paper, thesis, poster, …) in Hal (Open Archive), do not forget to add plafrim in the Project/Collaboration field of the metadata.

Usage / access

Private: What are the different storage spaces?

There are four storage spaces on the machine, each with a different purpose:

Space     Max size  Deletion                               RAID     Backup                Primary use  How to obtain
/home     20 GB     never                                  yes      regular + versioning  individual   automatic
/projets  200 GB    never                                  yes      regular + versioning  group        on demand (project name + members' logins + justification)
/lustre   1 TB      never                                  yes (!)  none (!)              individual   automatic
/tmp      variable  if needed & when machines restart (!)  no (!)   none (!)              individual   automatic

"home directory" base storage

Hosts the /home/user accounts. It is backed up (in the /home/.snapshot directory) and duplicated off-site for 4 weeks.

The size of a user account cannot exceed 20 GB.

Group project storage /projets


The /projets space is a storage space that can be shared between several users to store data, software, etc.

This space is backed up in a similar way to the user space (in a folder like /.projets_X/.snapshot/DATE/project_name; run ls -ald /.projets_*/.snapshot/*/project_name to see the available backups).

To obtain such a space, simply send an email to PlaFRIM Support, specifying the name of the project, the people involved in the project and the content of the project.

The project space is located in /projets/project_name.

The size of the project space is limited to 200 GB.


Lustre temporary storage space

The size of this temporary storage space is limited to 1TB.

Local temporary storage


Temporary storage is available on each node. This space is located on the local disks, in /tmp.

The size of this space depends on the node. For more information, use the df command.
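A quick sketch of the df check mentioned above, for the local /tmp of the node you are logged in to (the figures differ from node to node):

```shell
# Show size, usage and free space of the filesystem backing /tmp
df -h /tmp
```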


Private: How to access PlaFRIM?

Your ssh client must use a "ProxyCommand" to reach the target server.

Sample configuration of .ssh/config to reach plafrim on port 22:

(replace LOGIN_PLAFRIM with your actual login)

Host plafrim
ForwardAgent yes
ForwardX11 yes
ProxyCommand ssh -A -l LOGIN_PLAFRIM ssh.plafrim.fr -W plafrim:22

Check that your private key is loaded with ssh-add -l. If not, load it with ssh-add ~/.ssh/private_key

Then use ssh LOGIN_PLAFRIM@plafrim
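With OpenSSH 7.3 or later, the same proxying can also be expressed with the ProxyJump directive instead of ProxyCommand. A minimal sketch assuming the same ssh.plafrim.fr gateway and login as above (replace LOGIN_PLAFRIM with your actual login):

```shell
# ~/.ssh/config -- ProxyJump variant (requires OpenSSH >= 7.3)
Host plafrim
    User LOGIN_PLAFRIM
    ForwardAgent yes
    ForwardX11 yes
    ProxyJump LOGIN_PLAFRIM@ssh.plafrim.fr
```

With the User directive set in the configuration, ssh plafrim is then enough.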

Private: How can I get shell prompts in interactive mode?

srun --pty bash -i

This is equivalent to "srun -p defq -N1 -n1 --pty bash -i", where:


  • -N 1 (or --nodes=1): the node count; it defaults to 1.
  • -n 1 (or --ntasks=1): the number of tasks; it defaults to 1, and must be less than or equal to the number of cores of the node (i.e. at most one task per core).
  • -p defq (or --partition=defq): the partition used for the job. If no partition is mentioned, the system uses the default one, which is defq.


On the devel node:

$ hostname

To ask for a remote terminal:

$ srun --pty bash -i

Check the node on which you get the terminal:

$ hostname
miriel001

With the "--exclusive" option, the job is given exclusive use of the entire node (24 cores).


  • The "--pty" option also works when "--nodes=#N > 1". To ask for a remote terminal on more than one node, you can use screen.
  • By default, the "srun" command exports the whole user environment.

Private: How to access an external site from a node?

Users must explain to plafrim-support why they need to access the requested internet site and for how long. After checking that there are no technical issues (security, …), the PlaFRIM team announces the availability of the connection through the plafrim-users mailing list.

Job manager

Private: How to launch multi-prog jobs

To launch different instances of a job within one reservation, use the multi-prog feature.

Just create one multiprog.conf file describing which instances to launch and which task ranks they address, then run srun with the matching number of tasks.

For example, with a multiprog.conf file as follows:


2-4 hostname

1,5 echo task:%t

0 echo offset:%o

output of :

srun -n 6 --multi-prog multiprog.conf 

will look like the following (the actual order of lines and the hostnames depend on the allocated node; this output is illustrative):

offset:0
task:1
task:5
miriel001
miriel001
miriel001
Private: What are the submission queues and their limitations on the platform?

Hereinafter are all the queues and their limits defined on the PlaFRIM2 platform.

For Miriel nodes:

List of nodes: Miriel[001-077]

Queue    Memory  CPU Time  Walltime    Nodes  Cores
defq     -       -         < 02:00:00  < 4    < 4
court    -       -         < 04:00:00  < 42   < 1008
longq    -       -         < 72:00:00  < 42   < 1008
special  -       -         < 00:30:00  < 77   < 1848


Queue    Max User Running  Max User Queuable  Max Job Running  Max Job Queuable
defq     15                30                 -                30
court    2                 10                 20               64
longq    2                 10                 16               64
special  10                20                 40               50

For Mistral nodes:

List of nodes: Mistral[01-18]

Queues for nodes with MIC cards:

Queue          Memory  CPU Time  Walltime    Nodes  Cores
court_mistral  -       -         < 04:00:00  < 18   < 360
long_mistral   -       -         < 72:00:00  < 16   < 320


Queue          Max User Running  Max User Queuable  Max Job Queuable  Max Job Running
court_mistral  2                 10                 20                10
long_mistral   2                 5                  10                10

For sirocco nodes:

List of nodes: sirocco[01-05]

Queues for nodes with GPU cards:

Queue          Memory  CPU Time  Walltime    Nodes  Cores
court_sirocco  -       -         < 04:00:00  < 5    < 120
longq_sirocco  -       -         < 72:00:00  < 2    < 48


Queue          Max User Running  Max User Queuable  Max Job Queuable  Max Job Running
court_sirocco  2                 4                  10                5
long_sirocco   1                 2                  4                 2

Private: How to submit a job with SLURM?

There are two ways to submit a job with SLURM:

Using a batch script file:

$ cat script-slurm.sh
#!/usr/bin/env bash
## name of job
## Resources: (nodes, procs, tasks, walltime, ... etc)
#SBATCH -n 4
#SBATCH -t00:05:00
## standard output file
#SBATCH -o batch%j.out
## error output file
#SBATCH -e batch%j.err
module purge
module load slurm/14.03.0
## modules to load for the job
module load compiler/gcc/4.9.0
echo "===== my job information ====="
echo "Node List: " $SLURM_NODELIST
echo "my jobID: " $SLURM_JOB_ID
echo "Partition: " $SLURM_JOB_PARTITION
echo "submit directory:" $SLURM_SUBMIT_DIR
echo "submit host:" $SLURM_SUBMIT_HOST
echo "In the directory: `pwd`"
echo "As the user: `whoami`"
srun -n4 ./a.out

and submit the job with sbatch command:

$sbatch script-slurm.sh

Using an interactive session (salloc/srun):

For various reasons (compilation, debugging, short tests, etc.), one may need to run an interactive job from the command line; there are three ways to do this under SLURM:

  • running jobs with steps, which permits reusing all or part of the resources to run commands/applications, e.g.:

    salloc -N3 --ntasks-per-node=3  -t 01:00:00

  • running an application/command directly on all the resources needed at runtime, e.g.:

    srun -N3 --ntasks-per-node=3 ./a.out

  • running a shell on an allocated node, to get a shell prompt on the node, e.g.:

    srun -p <partition> --pty bash -i
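The first bullet deserves a small worked example: inside an salloc allocation, each srun launches a job step that can reuse all or part of the reserved resources. A sketch, assuming it is typed on a PlaFRIM login node and that ./a.out is a placeholder application:

```shell
# Reserve 3 nodes with 3 tasks each for one hour; salloc opens a subshell
salloc -N3 --ntasks-per-node=3 -t 01:00:00
# Step 1: run on the whole allocation (9 tasks)
srun ./a.out
# Step 2: reuse only one node / one task of the same allocation
srun -N1 -n1 ./a.out
# Leave the subshell to release the allocation
exit
```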

For more details, see the SLURM documentation.

Private: What resources are available for my job?

sinfo -l can give this information.

You can also specify which information you want to display:

sinfo -s -o "%.18P %.10g %.12t %.10L %.12l %.12c %.20N %.12s %.10G %.18f"

To display all available resources on a specific partition:

sinfo -p <partition> -t idle

The sview tool can also give some information about used and free resources on the platform. To use it, load the slurm module and launch the sview command.

Note: To see the state of the cluster, visit the state page.

Private: How to get node information and features on the platform?

The PlaFRIM platform contains several heterogeneous nodes (multi-core, hyper-threading, MIC/GPU cards, different processor generations, etc.).

To display the details of each node, you can use sinfo:

sinfo -lNe


  • N: (--Node) Print information in a node-oriented format.
  • l: (--long) Print more detailed information.
  • e: (--exact) If set, do not group node information on multiple nodes unless their configurations to be reported are identical.

E.g.:

$ sinfo -lNe

mirage[01-04,06-08] 7 court_mirage idle~ 12 2:6:1 36195 391749 1 Westmere none
mirage[01-04,06-08] 7 long_mirage idle~ 12 2:6:1 36195 391749 1 Westmere none
mirage05 1 long_mirage down* 12 2:6:1 32000 0 1 Westmere Not responding
mirage05 1 court_mirage down* 12 2:6:1 32000 0 1 Westmere Not responding
mirage09 1 long_mirage idle~ 12 2:6:1 36195 391749 1 Westmere none
mirage09 1 court_mirage idle~ 12 2:6:1 36195 391749 1 Westmere none
miriel[001-015,021-039,050-057] 42 special allocated 24 2:12:1 128832 297587 1 Miriel none
miriel[001-015,021-039,050-057] 42 multiPart allocated 24 2:12:1 128832 297587 1 Miriel none
miriel[001-015,021-039,050-057] 42 defq* allocated 24 2:12:1 128832 297587 1 Miriel none  


Private: How can I kill all my running jobs?

To kill all your running batch jobs, use the scancel command, either with the login name option or with a list of job ids (separated by spaces):

$scancel -u <user>


$scancel  jobid_1 ... jobid_N

E.g., for a user with login bee:

Get all job ids of running jobs:

$squeue -u bee
2545 longq test1 bee R 4:46:27 1 miriel007
2552 longq test2 bee R 4:46:47 1 miriel003
2553 longq test1 bee R 4:46:27 1 miriel004

To cancel all running jobs of user bee:

$scancel -u bee


To cancel only the jobs 2552 and 2553:

$scancel 2553 2552

In an interactive session you can also use the exit command or Ctrl+D.
For more information, use man scancel.

Note: You can only kill jobs owned by yourself.
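As a variant of the commands above, scancel can also filter by job state. A sketch (check man scancel on the platform for the options available in the installed SLURM version):

```shell
# Cancel only your *pending* jobs and leave the running ones alone
scancel -u $USER --state=PENDING
```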

Private: How to ask for nodes with GPU cards in my job?

To ask for servers with GPU cards in your job, in interactive or batch mode, choose one of the *_sirocco partitions (court or long) and the gres=gpu:<GPU_COUNT> option, as follows:

<srun/salloc/sbatch> -p court_sirocco --gres=gpu:<gpu_count> ...

Where <gpu_count> is the number of GPU cards per node; it must not exceed the maximum number of GPUs in a node, which is 4 for sirocco0[1-5] and 2 for sirocco06.


  • to use in interactive mode:

module load compiler/cuda/6.5/toolkit/6.5.14
srun  -p court_sirocco  --gres=gpu:4 nvidia-smi

  • to use in batch mode:

#SBATCH -p court_sirocco
#SBATCH -o cuda%j.out
#SBATCH -e cuda%j.err
#SBATCH --gres=gpu:2
#SBATCH --time=0-00:05:15
module purge
module load slurm/14.03.0
module load compiler/cuda/6.5/toolkit/6.5.14
echo "gpu visible devices are:"$CUDA_VISIBLE_DEVICES
srun hostname
srun nvidia-smi

All sirocco machines have the same GPU cards, whose features are:

  • GPU card name: Tesla K40-M
  • 2880 CUDA cores
  • 12 GB memory
  • peak DP: 1.43 Tflops
  • peak SP: 4.29 Tflops

Parallel programming

Private: How to run my application with OpenMPI?

To launch MPI applications on miriel, sirocco and devel nodes:

mpirun -np "nb_procs" --mca mtl psm  ./apps

If you need the OmniPath interconnect (for miriel nodes 01 to 44):

mpirun -np "nb_procs" --mca mtl psm2 ./apps

All these features are available for OpenMPI versions > 2.0.0.

Private: How to easily launch my application with Intel MPI?

With the mpi/intel-mpi/64/2017.1/132 module (or above).

To launch MPI applications on miriel, sirocco and devel nodes:

mpirun -np "nb_procs" -psm  ./apps

If you need the OmniPath interconnect (for miriel nodes 01 to 44):

mpirun -np "nb_procs" -psm2 ./apps

All these features are available for IntelMPI versions > 2017.

Private: How to run my application with Intel MPI?

Choose your build and execution environment using modules.


module load compiler/gcc/4.9.0
module add compiler/intel/64/2015.3.187
module add mpi/intel-mpi/64/5.0.3/048

Launch with the mpiexec.hydra command:

srun hostname -s | sort -u > mpd.hosts

Select the particular network fabric to be used with the I_MPI_FABRICS environment variable:

I_MPI_FABRICS=<fabric>|<intra-node fabric>:<inter-node fabric>

Where <fabric> := {shm, dapl, tcp, tmi, ofa}

For example, to select the shared memory fabric (shm) for intra-node MPI communication, and the Tag Matching Interface fabric (tmi) for inter-node MPI communication, use the following commands:

export I_MPI_FABRICS=shm:tmi
mpiexec.hydra -f mpd.hosts -n $SLURM_NPROCS ./a.out

Launch with srun command:

export I_MPI_PMI_LIBRARY=/cm/shared/apps/slurm/14.03.0/lib64/libpmi.so
export I_MPI_FABRICS=shm:tmi
srun -n $SLURM_NPROCS  ./a.out

The available fabrics on the platform are:

tmi TMI-capable network fabrics including Intel® True Scale Fabric, Myrinet*, (through Tag Matching Interface)
ofa OFA-capable network fabric including InfiniBand* (through OFED* verbs)
dapl DAPL-capable network fabrics, such as InfiniBand*, iWarp*, Dolphin*, and XPMEM* (through DAPL*)
tcp TCP/IP-capable network fabrics, such as Ethernet and InfiniBand* (through IPoIB*)

You can also specify a list of fabrics with the environment variable I_MPI_FABRICS_LIST (the default value is dapl,tcp). The first fabric detected will be used at runtime:

I_MPI_FABRICS_LIST=<fabrics list>

Where <fabrics list> := <fabric>,...,<fabric>

(for more details visit https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Communication_Fabrics_Control.htm )


  • miriel servers have an Intel True Scale InfiniBand card; for inter-node communications, tmi is preferred
  • mistral servers have a Mellanox InfiniBand card; dapl or ofa is preferred
  • previous servers (fourmi, mirabelle, mirage) also have Mellanox InfiniBand cards
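Putting the node notes above together, here is a small sketch that picks a fabric from the node's hostname. The hostname prefixes and the tcp fallback are assumptions; adjust them to your environment:

```shell
# Pick the Intel MPI fabric according to the node family:
#   miriel  -> Intel True Scale -> tmi
#   mistral -> Mellanox         -> dapl (ofa would also work)
# anything else falls back to tcp, which works everywhere
case "$(hostname)" in
    miriel*)  export I_MPI_FABRICS=shm:tmi ;;
    mistral*) export I_MPI_FABRICS=shm:dapl ;;
    *)        export I_MPI_FABRICS=shm:tcp ;;
esac
echo "I_MPI_FABRICS=$I_MPI_FABRICS"
```

This could be placed in a job script just before the mpiexec.hydra or srun launch line.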