VSC configuration levels out of operation

The following configuration levels of the Vienna Scientific Cluster (VSC) are no longer in operation. 

VSC-3

VSC-3 (Vienna Scientific Cluster) © VSC / Claudia Blaas-Schenner

The third configuration level of the Vienna Scientific Cluster – the VSC-3 – was in operation at the Arsenal in Vienna from 2014 to 2022. At its launch in November 2014, the VSC-3 ranked

  • 85th in the TOP500 list of the fastest supercomputers worldwide and
  • 86th in the Green500 list of the most energy-efficient supercomputers worldwide.

For the VSC-3, particular importance was attached to green IT. This was clearly reflected in the energy-efficient processors and the efficient cooling system, which mainly used a synthetic oil to remove excess heat.
 

Hardware and software

The VSC-3 consisted of the following components:

  • 2,020 dual socket compute nodes with 32,768 cores and 130 TB of RAM in total, connected via an InfiniBand network; each node was equipped with:

    • 2 × Intel Xeon E5-2650v2 CPUs (2.6 GHz, 8 cores, code name Ivy Bridge)
    • 64 GB – 256 GB of RAM (DDR3, ECC, 1,866 MHz)
    • 2 × Intel QLE7340 single-port QDR InfiniBand HCAs (QDR-80)

  • 5 access nodes, configured like the compute nodes, additionally with:

    • 2 × 10 GbE Ethernet ports
    • 2 × GbE Ethernet ports
    • redundant power supply

  • 2 × boot servers as an HA cluster, configured like the compute nodes but with 128 GB of RAM

    • 8 × 480 GB enterprise SSDs in a RAID array
    • High-performance IPoIB implementation (ConnectX3 adapter)

  • 2 × master nodes for cluster management and monitoring

    • The SLURM workload manager took care of the queuing system and cluster management (see the example batch script below).

  • Parallel file system, based on the BeeGFS file system

    • 500 TB net capacity
    • 9 storage servers and 8 storage extension units
    • At least 20 GB/s bandwidth

  • NFS server, connection via QDR InfiniBand network

    • 1 PB net capacity in total
    • 10 identical NFS servers
    • High-performance IPoIB implementation (ConnectX3 adapter)

  • QDR InfiniBand network, implemented as QDR-80

    • Island implementation with no more than 288 nodes per island
    • The blocking factor within one island was 1:2.

  • Cooling system: Liquid submersion cooling

    • 23 cooling containers for the compute nodes, InfiniBand leaf switches and spine switches
    • 6 redundant pump modules including heat exchanger

The system had a peak performance of 584 TFlops.
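
As mentioned in the component list above, jobs on the VSC-3 were handled by the SLURM workload manager. The following batch script is a minimal, illustrative sketch of how a job might have been submitted on such a system; the partition name mem_0064, the module name intel-mpi and the program name my_mpi_program are assumptions rather than details taken from the VSC-3 documentation.

    #!/bin/bash
    # Minimal example SLURM batch script (illustrative sketch; names are assumptions)
    # Request 2 nodes with 16 MPI tasks each (each VSC-3 node had 2 x 8-core CPUs),
    # a one-hour wall-time limit and a hypothetical partition for the 64 GB nodes.
    #SBATCH --job-name=example_job
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16
    #SBATCH --time=01:00:00
    #SBATCH --partition=mem_0064

    # Load an MPI environment (module name is an assumption)
    module load intel-mpi

    # Launch the MPI program on all allocated cores
    srun ./my_mpi_program

Such a script would be submitted with sbatch and the queue inspected with squeue, both standard SLURM commands.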
 

VSC-3 Plus

The VSC-3 was expanded to the VSC-3 Plus by adding 864 dual socket compute nodes with 17,280 cores and 55 TB of RAM in total:

  • 816 nodes with 64 GB of RAM
  • 48 nodes with 256 GB of RAM
  • 2 × Intel Xeon E5-2660 v2 CPUs (2.20 GHz, 10 cores) per node
  • Mellanox InfiniBand 56 Gbps (4X FDR) interconnect

VSC-2

VSC-2 (Vienna Scientific Cluster)

The second configuration level of the Vienna Scientific Cluster – the VSC-2 – was set up at the Arsenal in Vienna. After its launch, it ranked 56th on the TOP500 list of fastest supercomputers worldwide.
 

Hardware and software

  • The 1,314 compute nodes had 2 AMD Opteron processors each (Magny Cours 6132HE) with a 2.2 GHz clock rate and 8 cores. Every node had 32 GB of RAM and a 16 GB SSD.
  • The 12 storage servers and 2 NFS servers were operated with 2 Intel Xeon Westmere E5620 (quad-core) processors and 48 GB of RAM each. The available storage space comprised 24 TB SAS for user data (FHGFS) and 48 TB NFS.
  • The high-performance InfiniBand network comprised 80 Voltaire Grid Director 4036 switches.
  • The operating system used for the VSC-2 was Scientific Linux 6.0 with Intel MPI and Open MPI. The Oracle Grid Engine and the ClustWare-Appliance were available as management software. As compilers, the VSC-2 used the Intel Fortran and C compilers (Composer XE 2011) as well as the GNU Fortran compiler.
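
To illustrate how such a software stack was typically used, the sketch below submits an MPI program through a Grid Engine batch script. It is an illustrative example only; the parallel environment name mpich, the slot count and the program name hello_mpi are assumptions rather than details taken from the VSC-2 documentation.

    #!/bin/bash
    # job.sge – example Grid Engine batch script (illustrative sketch; names are assumptions)
    # Job name, current working directory, a hypothetical parallel environment
    # with 32 slots, and a one-hour wall-time limit.
    #$ -N hello_mpi
    #$ -cwd
    #$ -pe mpich 32
    #$ -l h_rt=01:00:00

    # Launch the MPI program, compiled beforehand, e.g. with the
    # Intel MPI compiler wrapper: mpiicc -O2 -o hello_mpi hello_mpi.c
    mpirun -np 32 ./hello_mpi

Such a script would be submitted with qsub and monitored with qstat, both standard Grid Engine commands.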

VSC-1

VSC-1 (Vienna Scientific Cluster)

The VSC-1 was installed at the Vienna University of Technology in 2009. It was ranked 156th in the TOP500 list of fastest supercomputers worldwide when it was launched (in November 2009). It was taken out of operation in 2015.
 

Hardware and software

  • The 436 compute nodes had two quad-core processors each (Intel X5550, Nehalem) with a clock rate of 2.66 GHz (3,488 CPU cores in total). Every compute node had 24 GB of memory (1,333 MHz DDR3; 11.2 TB in total) and one 500 GB SATA hard disk.
  • Performance (Rmax): 35.5 TFlops
  • All nodes were connected via InfiniBand. In the VSC-1 configuration, the bandwidth amounted to 40 Gbps. The latency for a connection between any two nodes was about 1.5 microseconds. All InfiniBand components were made by QLogic.
  • The operating system used was CentOS, a Linux distribution optimised for server operation and high-performance computing.
  • As a batch system, the VSC-1 used the Sun Grid Engine. MPI was used for parallelisation. Compilers (Fortran, C/C++) and development libraries were made by Intel.