Via videoconference: https://webconf.u-bordeaux.fr/b/nat-z6j-urw
Program:
– General presentation
– Technical presentation
– Getting started with SLURM and the modules
– Getting started with GUIX
New compute nodes are now available (see https://www.plafrim.fr/hardware-documentation/).
To reserve them, use the constraints “zonda” or “a100”, as shown in the example below.
These machines have just been integrated; do not hesitate to contact us if you see improvements to be made or need further information.
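For example, a minimal reservation sketch assuming standard Slurm options (the task count, time limit and job script name are placeholders):
srun --constraint=zonda --ntasks=1 --time=00:10:00 --pty bash -i
sbatch --constraint=a100 my_job.sh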
Just a message to inform you that, on the sirocco[03-04] nodes, only 3 K40M GPU cards are now available instead of the initial 4, because of a hardware problem on these nodes.
We are sorry for this inconvenience.
New compute nodes are now available (see https://www.plafrim.fr/hardware-documentation/).
To reserve them, use the constraints “bora”, “zonda” or “rtx8000”; a sample batch script follows below.
These machines have just been integrated; do not hesitate to contact us if you see improvements to be made or need further information.
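As a hedged example, a batch script requesting one of these nodes could look as follows (the job name, resources and command are placeholder assumptions; only the --constraint line is specific to these nodes):
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --constraint=rtx8000
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
srun hostname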
The FAQ has been updated to describe all storage spaces available on PlaFRIM and the quota command to display their usage and limits.
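For instance, assuming the command follows the usual Linux invocation described in the FAQ:
quota    # display usage and limits for your storage spaces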
All,
Users are now allowed to run the squeue command on the devel nodes
through sudo, to display information about all jobs:
sudo squeue
squeue is the only command for which information about all users will
be made available to everyone, as its output is not a breach of the
GDPR-related constraints.
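For instance, standard squeue options can be combined with sudo to filter the global view (the partition name is a placeholder):
sudo squeue --state=RUNNING
sudo squeue -p <partition> -o "%i %u %j %T %M"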
Cheers,
The PlaFRIM team
Standard nodes miriel[001-088] and K40M nodes sirocco[01-05] are now provided on a best-effort basis and without support; they will be removed from the platform when they fail to start.
All,

We are currently in a phase of compliance with the constraints related to the GDPR; this limits in particular the access to the Slurm database. The squeue tool is affected by this limitation: users only see information about their own jobs when making queries (this is also the case for all other Slurm commands: scontrol, sacct...). The sinfo command can still be used to get an overview of the platform usage (see the example after this message).

The state of the nodes and a filtered output of the squeue command are available on the platform web page https://www.plafrim.fr/jobs-monitoring/. This page is currently being updated to report as much information as possible to the users. We will keep working on it in the coming days and weeks, to hopefully meet everyone's expectations. Please note this address is only available once signed in on the web site (you need to use the WP identifiers obtained at account creation).

Cheers,
The PlaFRIM team
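For example, an overview of the platform and of your own jobs can still be obtained with standard Slurm commands (a minimal sketch; $USER expands to your login):
sinfo -s
squeue -u $USER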
Dear all,

A PlaFRIM discussion team is now available on the Inria Mattermost server. PlaFRIM users got an email with the link to join. This team is intended for discussions on any subject related to the use of the platform. Channels can be created for specific needs. In any case, here are some rules to follow:
– DO NOT USE any channel to report tickets; sending an email to the technical team plafrim-support AT inria.fr is the only way to submit tickets.
– Refrain from having non-serious conversations or trolling.

For those not having an Inria email address, an external account must be created for you to access the Mattermost server. Please send an email to nathalie.furmento AT labri.fr if you need such an account.

Cheers,
The PlaFRIM technical team
The teams HiePACS, Storm and Tadaam have been cooperating for more than a decade now on developing the idea of building numerical solvers on top of parallel runtime systems.
From the precursory static/dynamic scheduling experiments explored in the PhD of Mathieu Faverge, defended in 2009, to the full-featured SolverStack suite of numerical solvers running on modern, task-based runtime systems such as StarPU and PaRSEC, this idea of delegating part of the optimization process from solvers to an external system has been successful. The communication library NewMadeleine is also part of this HPC software stack.
PlaFRIM has always been a key enabling component of these collaborations. Thanks to its heterogeneous computing units (standard nodes, GPUs, Intel KNL, NUMA nodes, …), the development and validation of our software stack have been made easier. Multiple collaborations with national and international universities and industrial partners have also been made possible thanks to our use of the platform.
Contact: Olivier Aumage, olivier.aumage AT inria.fr