Hardware documentation

Technology

PlaFRIM aims to allow users to experiment with new hardware technologies and to develop new codes.

Access to the cluster state (sign-in required): Pistache

You will find below a list of all the nodes available on the platform, by category. For each category, there is a brief description; more information is shown when clicking on the associated button.

To allocate a specific category of node with SLURM, you need to specify the node features. To display the list of available features, run the command

$ sinfo -o "%60f %N"
AVAIL_FEATURES                                               NODELIST
miriel,intel,haswell,infinipath                              miriel[044-045,048,050-053,056-058,060-064,066-073,075-076,078-079,081,083-088]
visu                                                         visu01
bora,intel,cascadelake,omnipath                              bora[001-040]
sirocco,intel,haswell,mellanox,nvidia,tesla,k40m             sirocco[01-05]
sirocco,intel,skylake,omnipath,nvidia,tesla,v100,bigmem      sirocco17
souris,sgi,ivybridge,bigmem                                  souris
sirocco,intel,broadwell,omnipath,nvidia,tesla,p100           sirocco[07-13]
sirocco,intel,skylake,omnipath,nvidia,tesla,v100             sirocco[14-16]
arm,cavium,thunderx2                                         arm01
brise,intel,broadwell,bigmem                                 brise
amd,diablo                                                   diablo[01-05]
kona,intel,knightslanding,knl                                kona[01-04]
miriel,intel,haswell,omnipath,infinipath                     miriel[001-043]
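
A node list for a given feature can also be extracted from this output with standard text tools. A minimal runnable sketch, using a saved copy of a few lines of the table above (the same awk filter works unchanged on live `sinfo` output):

```shell
# A few lines copied from the sinfo output above.
features='sirocco,intel,skylake,omnipath,nvidia,tesla,v100,bigmem sirocco17
souris,sgi,ivybridge,bigmem souris
brise,intel,broadwell,bigmem brise
bora,intel,cascadelake,omnipath bora[001-040]'

# Print the node list of every line whose feature column mentions "bigmem".
echo "$features" | awk '$1 ~ /bigmem/ {print $2}'
# → sirocco17, souris, brise
```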

For example, to reserve a bora node, you need to call

$ salloc -C bora
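
Features can also be combined in a constraint expression: SLURM accepts `&` (and) and `|` (or), e.g. `salloc -C "intel&bigmem"`. From a batch job, the same constraint goes into an `#SBATCH` directive. A minimal job-script sketch (the node counts and walltime are illustrative, not platform policy):

```shell
#!/bin/bash
#SBATCH -C bora                  # any node carrying the "bora" feature
#SBATCH -N 2                     # two nodes
#SBATCH --ntasks-per-node=36     # one task per core on a 36-core bora node
#SBATCH --time=01:00:00

# Launch one task per allocated core; each task prints its host name.
srun hostname
```

Submit with `sbatch job.sh`; `squeue -u $USER` then shows the job state.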

Standard Nodes

Standard nodes bora[001-040]

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Bi-socket nodes which include 2 × 18-core Cascade Lake Intel® Xeon® Gold 6240 @ 2.6 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • Nodes have 192 GB of memory (5.3 GB/core) @ 2933 MHz.
  • All nodes also have a 10G interface to mount the BeeGFS parallel file system shared by all the platform nodes.
  • Nodes have a local disk (/tmp) of 1 TB – SATA Seagate ST1000NX0443 at 7,200 rpm (/sys/block/sda/device/model).

Networks

  • An Omni-Path 100 Gb/s network interconnects all bora nodes, as well as the front nodes devel01 and devel02.

Main view



Standard nodes miriel[001-088]

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Bi-socket nodes which include 2 × 12-core Haswell Intel® Xeon® E5-2680 v3 @ 2.5 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • Nodes have 128 GB of memory (5.3 GB/core) @ 2133 MHz.
  • All nodes also have a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • Nodes have a local disk (/tmp) of 300 GB – SATA Seagate ST9500620NS at 7,200 rpm (/sys/block/sda/device/model).

Networks

  • An Omni-Path 100 Gb/s network interconnects part of the miriel nodes (miriel001 to miriel043), as well as the front node devel03.
  • An InfiniPath 40 Gb/s network also interconnects all miriel nodes and devel03.

Main view



Standard nodes diablo[001-005]

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Bi-socket nodes which include 2 × 32-core AMD EPYC 7452 @ 1.5 GHz for diablo01 to diablo04.
  • Bi-socket node which includes 2 × 64-core AMD EPYC 7702 @ 1.5 GHz for diablo05.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • Nodes diablo01 to diablo04 have 256 GB of memory (4 GB/core) @ 2133 MHz; diablo05 has 1 TB of memory (8 GB/core) @ 2133 MHz.
  • All nodes also have a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • Nodes have a local disk (/tmp) of 1 TB – SATA Seagate ST1000NM0008-2F2 at 7,200 rpm (/sys/block/sda/device/model).

Networks

  • A Mellanox InfiniBand 100 Gb/s network interconnects all diablo nodes.

Main view



Standard node arm01

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Bi-socket node which includes 2 × 28-core Cavium ThunderX2® CN9975 v2.1 @ 2.0 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • This node has 256 GB of memory (4.6 GB/core) @ 2666 MHz.
  • This node also has a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • The node has a local disk (/tmp) of 128 GB – SATA Seagate ST1000NM0008-2F2 at 7,200 rpm (/sys/block/sda/device/model).

Main view



Accelerator Nodes

– K40M nodes sirocco[01-05] with 4 K40M GPU cards –

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Bi-socket nodes which include 2 × 12-core Haswell Intel® Xeon® E5-2680 v3 @ 2.5 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • Nodes have 128 GB of memory (5.3 GB/core) @ 2133 MHz.
  • All nodes also have a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • Nodes have a local disk (/tmp) of 1 TB – SATA Seagate ST91000640NS at 7,200 rpm (/sys/block/sda/device/model).

GPU Types

  • A full description of this architecture can be found on the Wikipedia website.
  • 4 K40m GPUs are available on each node.

Networks

  • A Mellanox InfiniBand 40 Gb/s QDR (Quad Data Rate) network interconnects all nodes from sirocco01 to sirocco06.

Main view



– K40M node sirocco06 with 2 K40M GPU cards –

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Bi-socket node which includes 2 × 10-core Ivy Bridge Intel® Xeon® E5-2670 v2 @ 2.5 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • The node has 128 GB of memory (6.4 GB/core) @ 1866 MHz.
  • The node also has a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • The node has a local disk (/tmp) of 1 TB – SATA Seagate ST1000NM0023 at 7,200 rpm (/sys/block/sda/device/model).

GPU Types

  • 2 K40m GPUs are available on this node.

Networks

  • A Mellanox InfiniBand 40 Gb/s QDR (Quad Data Rate) network interconnects all nodes from sirocco01 to sirocco06.


– P100 nodes sirocco[07-13] with 2 P100 GPU cards –

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Bi-socket nodes which include 2 × 16-core Broadwell Intel® Xeon® E5-2683 v4 @ 2.1 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • Nodes have 256 GB of memory (8 GB/core) @ 2133 MHz.
  • All nodes also have a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • Nodes have a local disk (/tmp) of 300 GB – SAS WD Ultrastar HUC156030CSS204 at 15,000 rpm (/sys/block/sda/device/model).

GPU Types

  • 2 P100 GPUs are available on each node.

Networks

  • An Omni-Path 100 Gb/s network interconnects all these nodes.

Main view



– V100 nodes sirocco[14-16] with 2 V100 GPU cards –

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Bi-socket nodes which include 2 × 16-core Skylake Intel® Xeon® Gold 6142 @ 2.6 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • Nodes have 384 GB of memory (12 GB/core) @ 2666 MHz.
  • All nodes also have a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • Nodes have a local disk (/scratch) of 750 GB – NVMe Samsung (/sys/block/sda/device/model).

GPU Types

  • A full description of this architecture can be found on the NVIDIA website.
  • 2 V100 GPUs are available on each node.

Networks

  • An Omni-Path 100 Gb/s network interconnects all these nodes.
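
Reserving a GPU node combines the feature constraint with a GPU request. A minimal job-script sketch; the `--gres=gpu:2` line assumes GPUs are declared as a `gpu` generic resource on the platform, which is an assumption here (the `-C` constraint alone already pins the node type):

```shell
#!/bin/bash
#SBATCH -C "sirocco&v100"   # a sirocco node carrying V100 cards
#SBATCH --gres=gpu:2        # both cards (assumes a "gpu" gres is configured)
#SBATCH --time=00:30:00

# List the allocated GPUs.
srun nvidia-smi
```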

Main view



– V100 node sirocco17 with 2 V100 GPU cards and 1 TB of memory –

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Bi-socket node which includes 2 × 20-core Skylake Intel® Xeon® Gold 6148 @ 2.4 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • The node has 1 TB of memory (25.6 GB/core) @ 1866 MHz.
  • The node also has a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • The node has a local disk (/tmp) of 1 TB – SAS Seagate ST300MP0026 at 15,000 rpm (/sys/block/sda/device/model).

GPU Types

  • 2 V100 GPUs are available on this node.

Networks

  • An Omni-Path 100 Gb/s network interconnects all nodes from sirocco14 to sirocco17.

Main view



– KNL nodes kona[01-04] –

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Mono-socket nodes which include one 64-core Intel® Xeon Phi™ 7230 (Knights Landing, Airmont-based cores) @ 1.3 GHz with 4 threads per core.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • Nodes have 96 GB of memory (1.5 GB/core) @ 2400 MHz, plus 16 GB of fully configurable MCDRAM.
  • All nodes also have a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • Nodes have a local disk (/scratch) of 800 GB – SSD Intel SSDSC2BX80 (/sys/block/sda/device/model).

Configurations

  • Each of the 4 nodes has its own configuration.
    • kona01
      • memory mode: flat
      • cluster mode: quadrant
    • kona02
      • memory mode: cache
      • cluster mode: quadrant
    • kona03
      • memory mode: flat
      • cluster mode: snc-4
    • kona04
      • memory mode: cache
      • cluster mode: snc-4
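
There is no per-configuration SLURM feature in the table above, so a specific memory/cluster mode is selected by naming the node. A job-script sketch for the flat/quadrant node kona01; `./my_app` is a hypothetical binary, and the MCDRAM NUMA node number (1 below) should be checked with `numactl -H`, since it depends on the cluster mode:

```shell
#!/bin/bash
#SBATCH -C knl
#SBATCH -w kona01        # flat / quadrant configuration (see the list above)
#SBATCH --time=00:30:00

# In flat mode the 16 GB of MCDRAM appears as a CPU-less NUMA node.
numactl -H                          # inspect the NUMA topology
srun numactl --membind=1 ./my_app   # hypothetical binary, memory placed in MCDRAM
```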


Big Memory Nodes

Two nodes are specifically considered as Big Memory nodes: brise and souris, which are described below.

Two other nodes could also be considered as Big Memory nodes: diablo05 and sirocco17, as they have 1 TB of memory, as described above.
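
Note that in the feature table above only brise, souris and sirocco17 carry the `bigmem` feature; diablo05 is not tagged with it, so a `bigmem` constraint will not match that node. To reserve a node advertising the feature:

```shell
# Any node with the bigmem feature (brise, souris, or sirocco17):
salloc -C bigmem
# Restrict further to an Intel bigmem node:
salloc -C "intel&bigmem"
```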

– Quadri-socket SMP brise node –

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Quadri-socket node which includes 4 × 24-core Intel® Xeon® E7-8890 v4 @ 2.2 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • The node has 1 TB of memory (10.7 GB/core) @ 1600 MHz.
  • The node also has a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • The node has a local disk (/tmp) of 280 GB – SAS disk at 15,000 rpm (/sys/block/sda/device/model).

Main view



– SGI (Altix UV2000) SMP node souris –

General characteristics

  • A full description of this architecture can be found on the WikiChip website.
  • Dodeca-socket node which includes 12 × 8-core Ivy Bridge Intel® Xeon® E5-4620 v2 @ 2.6 GHz.
  • By default, turbo-boost and Hyperthreading are disabled to ensure the reproducibility of the experiments carried out on the nodes.
  • The node has 3 TB of memory (32 GB/core) @ 1600 MHz.
  • The node also has a 10G interface to mount both BeeGFS and Lustre parallel file systems shared by all the platform nodes.
  • This node is diskless.

Main view