New zonda and sirocco nodes available

New compute nodes are now available:

  • zonda[07-21]: 2 * AMD EPYC 7452 32-Core Processor and 256 GB RAM
  • sirocco21: 2 * AMD EPYC 7402 24-Core Processor, 2 Nvidia A100 (Ampere) GPUs and 512 GB RAM

To reserve them, use the constraints “zonda” or “a100”.
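For instance, a job could be submitted on one of these nodes with a batch script along the following lines (a minimal sketch: only the --constraint option comes from this announcement, the job name, task count and time limit are illustrative and should be adapted to your needs):

```
#!/bin/bash
#SBATCH --job-name=test-a100     # illustrative name
#SBATCH --constraint=a100        # request a node with A100 GPUs
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# Show which node was allocated and which GPUs are visible
hostname
nvidia-smi
```

Submit it with sbatch, or use the same constraint interactively, e.g. srun --constraint=zonda --pty bash.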

These machines have just been integrated; do not hesitate to contact us if you see improvements to be made or need further information.

New nodes available

New compute nodes are now available:

  • bora[041-044]: 2 * Intel(R) Xeon(R) Gold 6240 CPU and 192 GB RAM (identical to bora[001-040])
  • zonda[01-06]: 2 * AMD EPYC 7452 32-Core Processor and 256 GB RAM
  • sirocco[18-20]: 2 * Intel(R) Xeon(R) Gold 5218R Processor, 2 Nvidia Quadro RTX8000 GPUs and 192 GB RAM

To reserve them, use the constraints “bora”, “zonda” or “rtx8000”.

These machines have just been integrated; do not hesitate to contact us if you see improvements to be made or need further information.

Update for the squeue command


Users are now allowed to use the squeue command on the devel nodes
through sudo, to display information about all jobs.

sudo squeue

squeue is the only command for which information about all users' jobs is
made available to everyone, as this information does not breach the
GDPR-related constraints.


The PlaFRIM team

Changes for the information available through the squeue command


We are currently bringing the platform into compliance with the
constraints related to the GDPR; in particular, this limits access to the
Slurm database. The squeue tool is affected by this limitation: users now
only see information about their own jobs when making queries (this is
also the case for all other Slurm commands, such as scontrol and sacct).

The sinfo command can still be used to get an overview of the platform
usage. The state of the nodes and a filtered output of the squeue
command are available on the platform web page.

This page is currently being updated to report as much information as
possible to the users. We will keep working on it in the coming days and
weeks, to hopefully meet everyone's expectations.

Please note this page is only available once signed in on the web
site (you need to use the WP identifiers obtained at account creation).


The PlaFRIM team

PlaFRIM discussion team on mattermost

Dear all,

A PlaFRIM discussion team is now available on the Inria Mattermost
server. PlaFRIM users received an email with the link to join.

This team is intended for discussions on any subject related to the use
of the platform. Channels can be created for specific needs. In any
case, here are some rules to follow:

- DO NOT USE any channel to report tickets; sending an email to the
technical team (plafrim-support AT) is the only way to submit tickets.

- Refrain from NON-SERIOUS conversations or trolling.

If you do not have an Inria email address, an external account must be
created for you to access the Mattermost server. Please send an email
to nathalie.furmento AT if you need such an account.


The PlaFRIM technical team

Building A High-Performance Solver Stack on Top of a Runtime System

The teams HiePACS, Storm and Tadaam have been cooperating for more than a decade now, on developing the idea of building numerical solvers on top of parallel runtime systems.

From the precursory static/dynamic scheduling experiments explored in the PhD of Mathieu Faverge, defended in 2009, to the full-featured SolverStack suite of numerical solvers running on modern, task-based runtime systems such as StarPU and PaRSEC, this idea of delegating part of the optimization process from solvers to external systems has been successful. The communication library NewMadeleine is also part of this HPC software stack.

PlaFRIM has always been a key enabling component of these collaborations. Thanks to its heterogeneous computing units (standard nodes, GPUs, Intel KNL, NUMA nodes, …), the development and validation of our software stack have been made easier. Multiple collaborations with national and international universities and industrial partners have also been made possible thanks to our use of the platform.

Contact: Olivier Aumage oliver.aumage AT

Predictive Rendering

To generate photo-realistic images, one needs to simulate the light transport inside a chosen virtual scene observed from a virtual viewpoint (i.e., a virtual camera). A virtual scene is obtained by modelling (or measuring from the real world):

– the shapes of the objects and the light sources,

– the reflectance and transmittance of the materials,

– the spectral emittance of the light sources.

Simulating the light transport is done by solving the recursive Rendering Equation. This equation states that the equilibrium radiance (in W·m⁻²·sr⁻¹ per wavelength) leaving a point is the sum of the emitted and reflected radiance, under a geometric optics approximation.
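In a common textbook formulation (the notation below is ours, not from the text above), the Rendering Equation reads:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
```

where L_o, L_e and L_i are the outgoing, emitted and incoming radiance at point x, f_r is the material reflectance (BRDF), and Ω is the hemisphere around the surface normal n.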

The Rendering Equation is therefore directly related to the law of conservation of energy. It is solved with Monte-Carlo computations. In the context of Computer Graphics, a Monte-Carlo sample is a geometric ray carrying radiance along its path, which is constructed stochastically (e.g., using Russian roulette).
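The role of Russian roulette can be illustrated on a toy recursion of the same shape as the Rendering Equation, L = 1 + a·L (a hypothetical scalar stand-in for one reflection bounce with albedo a): the infinite recursion is terminated stochastically, and surviving paths are reweighted by 1/p so the estimator stays unbiased. A minimal sketch:

```python
import random

def roulette_sample(albedo, survive_p, rng):
    """One unbiased sample of L = 1 + albedo * L, i.e. the series
    sum_k albedo^k = 1 / (1 - albedo), terminated by Russian roulette."""
    value, weight = 0.0, 1.0
    while True:
        value += weight                   # contribution of the current "bounce"
        if rng.random() >= survive_p:
            return value                  # path terminated by the roulette
        weight *= albedo / survive_p      # reweight by 1/p to stay unbiased

def estimate(albedo=0.5, survive_p=0.8, n=200_000, seed=42):
    """Monte-Carlo mean over n independent paths."""
    rng = random.Random(seed)
    return sum(roulette_sample(albedo, survive_p, rng) for _ in range(n)) / n

if __name__ == "__main__":
    # True value: 1 / (1 - 0.5) = 2.0; the Monte-Carlo mean converges to it.
    print(estimate())
```

The same mechanism is what keeps path tracing unbiased while still giving paths a finite expected length.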

PlaFRIM permits researchers in Computer Graphics to simulate billions of light paths/rays to generate reference images for a given virtual scene.

These data can be used to validate:

– new models that predict how light is scattered by a material,

– new rendering algorithms that are more efficient in terms of variance but also in terms of parallelism.

Indeed, PlaFRIM offers a large palette of computing nodes (CPU-only, dual-GPU) that permits us to develop, test and validate the whole rendering pipeline.

Contact: Romain Pacanowski romain.pacanowski AT