Monch

The computing cluster Monch is accessible via ssh from the frontend ela.cscs.ch as monch.cscs.ch.
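
For example, log in to the front end first and then hop to Monch (username is a placeholder for your CSCS account):

$ ssh username@ela.cscs.ch
$ ssh monch.cscs.ch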

The operating system on the login nodes is CentOS 6.6. Direct access to the compute nodes is not allowed: to run jobs, you need to use the SLURM batch queueing system (see RUNNING JOBS).
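
A minimal batch script might look like the following sketch (job name, resource requests and executable are placeholders; see RUNNING JOBS for the options valid on Monch):

$ cat job.sh
#!/bin/bash -l
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=20
#SBATCH --time=00:30:00
srun ./myprogram
$ sbatch job.sh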

Specifications

Model                           NEC cluster
Compute nodes                   376, each with 2 x Intel® Xeon® EP E5-2660 v2 (Ivy Bridge) @ 2.2 GHz (10 cores/socket)
Login nodes                     4, each with 2 x Intel® Xeon® EP E5-2660 v2 (Ivy Bridge) @ 2.2 GHz (10 cores/socket) and 32 GB RAM
Theoretical peak performance    132.4 TFlops
Memory capacity per node        32/64/256 GB (DDR3-1600)
Memory bandwidth per node       51.2/51.2/204.8 GB/s
Total system memory             18.2 TB DDR3
Interconnect configuration      InfiniBand FDR high-speed interconnects, full non-blocking fat-tree topology
Scratch capacity                350 TB (/mnt/lnec)

Programming Environment

The software environment on Monch is controlled using the modules framework, which provides an easy and flexible way to access compilers, tools and applications.

Each programming environment loads the local MPI library. The available programming environments are GNU, Intel and PGI. You can get information about a specific module with the following commands (replace <module> with the module name):

$ module avail <module>
$ module show <module>
$ module help <module>
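
For instance, to load a programming environment and compile an MPI code (the module name and compiler wrapper below are illustrative; use module avail to list the exact names on Monch):

$ module load gnu
$ module list
$ mpicc -O2 -o hello hello.c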

Please follow this link for more details on compiling and optimizing your code.

Network

The system has two high-speed InfiniBand FDR interconnects: the first is dedicated to MPI traffic, the second to high-speed storage traffic (GPFS, file transfer, etc.).

File Systems

The $SCRATCH space /scratch/monch/$USER is connected via the InfiniBand interconnect. The shared storage under /project and /store is available through the high-speed interconnect from the login nodes only.
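
For example, you can stage input data on the scratch file system before submitting a job (the data path and username are illustrative):

$ echo $SCRATCH
/scratch/monch/username
$ cp -r ~/input_data $SCRATCH/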

Please carefully read the general information on filesystems at CSCS.

For further information, please contact help(at)cscs.ch.