IBM iDataPlex System - Castor
This page describes Castor in the following sections:
Short Description of the System
The CASTOR facility is a tightly coupled computing cluster running the RHEL 6.3 Server operating system. It offers 32 IBM dx360 M3 nodes based on the dual-socket, six-core Intel Xeon 5650 processor running at 2.6 GHz, with 24 GB of main memory per node, for a total of 384 CPU cores and 768 GB of aggregate memory. The cluster nodes are all homogeneous in terms of hardware configuration and come equipped with 2 NVIDIA Tesla Fermi M2090 GPUs per node.
How to Access Castor
Only users with an approved HP2C project can have access to Castor, as it is a dedicated GPGPU computing cluster system.
Castor is accessible from the front-end machine ela (ela.cscs.ch) as castor.cscs.ch, using the same password as on ela.
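A minimal login sequence might look like the following (the username is a placeholder for your own CSCS account):

    # Log in to the front end first, then hop to Castor
    ssh username@ela.cscs.ch
    ssh castor.cscs.ch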
Programming Environment and Supported Software
The software environment on Castor is controlled using the modules framework, which provides an easy and flexible mechanism to access all of the CSCS-provided compilers, tools and applications.
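As an illustration, a typical modules session could look like the sketch below; the module names shown are placeholders, and the actual names provided on Castor may differ:

    module avail            # list all modules provided by CSCS
    module load cuda        # load a module (illustrative name)
    module list             # show the modules currently loaded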
Compilers for Castor are described in detail here. GNU, PGI, PathScale and Intel compilers are available, as well as the GPU development environments, including CUDA and the PGI accelerator compilers. Nodes run the Red Hat Enterprise Linux Server Release 6.3 operating system.
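For example, assuming the CUDA toolkit is available through its module, a GPU code could be compiled for the Fermi M2090 cards roughly as follows (file names are placeholders; Fermi corresponds to compute capability 2.0):

    # Build the device code with nvcc, targeting the Fermi architecture
    nvcc -arch=sm_20 -O2 -o my_gpu_app my_gpu_app.cu
    # Host-only code is built with the chosen compiler suite, e.g. GNU
    gcc -O2 -o my_cpu_app my_cpu_app.c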
Batch Jobs and Interactive Jobs
The SLURM batch queuing system is used for the submission of jobs on Castor. The batch system can also be used to gain access to an interactive batch job, where you are provided with a set of compute nodes at your disposal and an interactive shell prompt from which to use these nodes directly with the mpirun/mpiexec commands.
Interactive batch jobs should only be requested for short periods of time and on as small a number of processors as is required. Since an interactive batch job is allocated through the standard batch scheduling algorithms, you should only request this type of access if there are already sufficient free resources on the machine so that the interactive session can begin immediately.
Details of batch submission and how to set up an interactive batch job are available here. For a list of the most useful SLURM commands, please have a look at the corresponding FAQ section under the User Forum.
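As a rough sketch only (node counts, wall-clock limits and task layout are assumptions and may need adjusting on Castor), a batch job script could look like this:

    #!/bin/bash
    #SBATCH --job-name=my_job
    #SBATCH --nodes=2              # two dx360 M3 nodes
    #SBATCH --ntasks-per-node=12   # one task per core
    #SBATCH --time=00:30:00        # wall-clock limit
    mpirun ./my_gpu_app            # launch the executable on the allocated nodes

The script would be submitted with sbatch, while an interactive allocation of the same kind could be obtained with salloc (for example, salloc --nodes=1 --time=00:15:00) and then used directly with mpirun/mpiexec as described above.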
Castor has a scratch space (/scratch/castor/user_name) of about 30 TB, connected via a high-speed QDR InfiniBand 40 Gb/s interconnect. Note that this storage is not backed up and is cleared at regular intervals, so please ensure that you do not use it as long-term storage.
Access to the shared storage areas (/project and /store) is also available via a high-speed QDR InfiniBand 40 Gb/s interconnect.
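For instance, staging input data to the scratch area before a run might look like this ($USER stands for your own user name; the directory layout follows the naming shown above):

    # Copy input data to the fast, non-backed-up scratch area
    cp -r $HOME/my_input /scratch/castor/$USER/
    cd /scratch/castor/$USER/my_input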
Detailed Machine Description
The IBM iDataPlex Research & Development Cluster CASTOR is a new CSCS facility which extends the current hybrid GPGPU/CPU resource portfolio. During Q4 2011 it was fully integrated into the CSCS supercomputing ecosystem and opened exclusively to the HP2C Swiss scientific user community for hybrid multicore/multi-GPU computing, data analysis, and specific internal CSCS research activities.
The CASTOR facility is a tightly coupled computing cluster running the RHEL 6.3 Server operating system. It offers 32 IBM dx360 M3 nodes based on the dual-socket, six-core Intel Xeon 5650 processor running at 2.6 GHz, with 24 GB of main memory per node, for a total of 384 CPU cores and 768 GB of aggregate memory.
The cluster nodes are all homogeneous in terms of hardware configuration, and come equipped with 2 NVIDIA Tesla GPUs per node:
- NVIDIA Fermi M2090: www.nvidia.com/docs/IO/105880/DS-Tesla-M-Class-Aug11.pdf
Based on the CUDA architecture codenamed Fermi, the Tesla M-class GPU Computing Modules are today (Summer 2011) the world's fastest parallel computing processors for HPC.
The highest-performance Fermi-based GPGPU, the M2090, has the following features (a rough check of the peak figure follows the list):
- 665 GFlops Peak DP
- 6 GB memory size (ECC off)
- 177 GB/sec memory bandwidth (ECC off)
- 512 CUDA cores
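As a rough consistency check, assuming the standard M2090 CUDA-core clock of about 1.3 GHz and the Fermi double-precision rate of one fused multiply-add per core every two cycles, the peak figure above follows from:

    512 cores x 1.3 GHz x 2 flops (FMA) / 2 (DP half rate) ≈ 665 GFlops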
The SLURM Resource Manager V 2.5.4 is the main batch queuing system installed and supported on the cluster; it lets end users access any available GPGPU computing resource in shared or reserved mode.
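Assuming the GPUs are exposed through SLURM's generic resource (GRES) mechanism, which this SLURM version supports, a request for a node together with both of its GPUs might look like the following sketch (the GRES name and its availability on Castor are an assumption):

    # Request one node with its two M2090 GPUs for a short run
    srun --nodes=1 --gres=gpu:2 --time=00:10:00 ./my_gpu_app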
Several classes of nodes have been defined inside the cluster, covering special functionalities:
- Class 0: Administration Node (1x)
- Class 1: Login Node (2x)
- Class 2: GPU Computing Nodes (16x)
- Class 3: Storage Nodes (2x)
As a high-speed network interconnect, the CASTOR cluster relies on a dedicated internal InfiniBand QDR fabric infrastructure, supporting both parallel MPI and data traffic, integrated into the global fat-tree-based CSCS fabric and built on 2 x 36-port IB QDR Mellanox switches. A local GPFS /scratch cluster file system runs on top of this IB fabric in order to provide fast I/O performance to end-user jobs.
In addition, a commodity 1 GbE LAN ensures interactive login access and home, project and application GPFS file sharing among the cluster nodes, and a separate standard 1 GbE administration network is reserved for cluster management purposes.