The computing system Monte Leone is accessible via ssh as leone.cscs.ch from the front end ela.cscs.ch.
The operating system on the login nodes is Red Hat Enterprise Linux Server release 6.7. Direct access to the compute nodes is allowed only with an active allocation, within the SLURM batch queuing system (see RUNNING JOBS).
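The access path above can be sketched as a two-step login (the username is a placeholder; the hostnames are those given in the text):

```shell
# First log in to the CSCS front end, then hop to Monte Leone
$ ssh username@ela.cscs.ch
$ ssh leone.cscs.ch
```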
| Model | HP DL 360 Gen 9 |
| --- | --- |
| 20 Compute Nodes | 2 x Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20 GHz (Haswell), 8 cores/socket |
| 4 Login Nodes | 2 x Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60 GHz (Haswell), 12 cores/socket |
| Theoretical Peak Performance | 16.4 TFlops |
| Memory Capacity per Node | 768 GB (DDR4-2133) |
| Total System Memory | 16 TB DDR4 |
| Interconnect Configuration | InfiniBand FDR high-speed interconnect; 10 Gb Ethernet |
| Scratch Capacity | /scratch/leone, 1.3 PB |
The software environment on Monte Leone is controlled using the modules framework, which gives an easy and flexible mechanism to access compilers, tools and applications.
Each programming environment loads the local MPI library. The available programming environments are GNU, Intel and PGI. You can get information on a specific module using the following commands (in the examples below, replace <module> with the module name):
$ module avail <module>
$ module show <module>
$ module help <module>
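A typical session loads one of the programming environments before compiling. The module and wrapper names below are illustrative assumptions; check `module avail` for the exact names on the system:

```shell
# Load a programming environment (name is an assumption; verify with "module avail")
$ module load PrgEnv-gnu
# The environment also loads the local MPI library, so the MPI compiler
# wrapper becomes available for building parallel codes
$ mpicc -O2 hello_mpi.c -o hello_mpi
```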
Please follow this link for more details on compiling and optimizing your code.
The system has two high-speed InfiniBand FDR interconnects: the first is dedicated to MPI traffic and the second to high-speed storage traffic (GPFS, file transfer, etc.).
The $SCRATCH space /scratch/leone/$USER is connected via the InfiniBand interconnect. The shared storage under /project and /store is available through the high-speed interconnect from the login nodes only.
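Since /project and /store are reachable only from the login nodes, data must be staged to scratch before a job runs. A minimal sketch, where the project path and directory names are illustrative placeholders:

```shell
# On a login node: stage input data from shared storage to fast scratch
# (/scratch/leone/$USER is the documented scratch layout; <group> and the
# directory names are hypothetical)
$ cp -r /project/<group>/input_data /scratch/leone/$USER/
# After the job completes, copy results back to shared storage
$ cp -r /scratch/leone/$USER/results /project/<group>/
```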
Please carefully read the general information on filesystems at CSCS.
For further information, please contact help(at)cscs.ch.