Supercomputer – VSC

The Vienna Scientific Cluster (VSC) is a supercomputer for selected academic projects that have an extremely high demand for computing power. The current configuration level is the VSC-4.

The VSC is a collaboration of the Vienna University of Technology (TU Wien), the University of Vienna, the University of Natural Resources and Applied Life Sciences, Vienna, the Graz University of Technology and the University of Innsbruck.

Using the VSC

Access to the Vienna Scientific Cluster is granted for projects that

  • have successfully undergone a peer review procedure and
  • require high performance computing in addition to demonstrating academic excellence.

If a project has already been positively reviewed by the Austrian Science Fund (FWF), the EU or similar bodies, no additional review procedure is necessary. Projects and applications for resources can be submitted via the VSC form of TU Wien.

In addition, test accounts (temporary and with limited resources) can be assigned quickly and unbureaucratically.

Users of the University of Vienna

  • Access nodes are available via vsc2.vsc.ac.at and vsc3.vsc.ac.at; they are logically part of the University of Vienna's data network.
  • The physical connection between the University of Vienna and TU Wien has been upgraded to 10 Gbps, so large amounts of data can be transferred smoothly between the VSC and servers at departments of the University of Vienna (see the sketch below).
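
As an illustration of such a transfer, the following Python sketch pulls a results directory from one of the login nodes named above to a local machine by calling rsync over SSH. It is only a sketch: the user name and paths are placeholders, and it assumes that rsync and an SSH key configured for the VSC are available.

import subprocess

# Hypothetical example: copy results from a VSC login node to the local
# machine via rsync over SSH. User name and remote path are placeholders.
remote = "myuser@vsc3.vsc.ac.at:/home/myuser/results/"
local = "./results/"

# -a preserves permissions and timestamps, -v is verbose, -z compresses
# the data in transit; "-e ssh" tunnels the transfer through SSH.
subprocess.run(["rsync", "-avz", "-e", "ssh", remote, local], check=True)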

If you have any questions regarding the VSC, please contact service@vsc.ac.at.

VSC-4

VSC-4 (Vienna Scientific Cluster) © ZID / Katharina Lüthke

The next configuration level – the VSC-4 – has been installed at the Arsenal.

The VSC-4 ranked

  • 82nd in the TOP500 list of fastest supercomputers worldwide

Hardware and software

  • Based on a Lenovo DWC (direct water cooling) system
  • Direct water cooling with inlet temperatures of up to 50 degrees Celsius
  • 90 % of the waste heat is captured by the water cooling
  • 790 nodes in total, consisting of
    • 700 nodes, each with
      • 2 x Intel Xeon Platinum 8174 (24 cores, 3.1 GHz)
      • 96 GB of main memory
      • Omnipath Fabric
      • Omnipath (100 Gbit) adapter
    • 78 fat nodes, each with
      • 2 x Intel Xeon Platinum 8174 (24 cores, 3.1 GHz)
      • 384 GB of main memory
      • Omnipath Fabric
      • Omnipath (100 Gbit) adapter
    • 12 very fat nodes, each with
      • 2 x Intel Xeon Platinum 8174 (24 cores, 3.1 GHz)
      • 768 GB of main memory
      • Omnipath Fabric
      • Omnipath (100 Gbit) adapter
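
Taken together, these 790 nodes provide 790 × 2 × 24 = 37,920 CPU cores and roughly 106 TB of main memory (700 × 96 GB + 78 × 384 GB + 12 × 768 GB).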

Per node

  • 2 x Intel Xeon Platinum 8174 (24 cores, 3.1 GHz, 240 W)
  • 12 x 8/32/64 GB TruDDR4 RDIMMs, 2,666 MHz
  • 1 x Intel OPA100 HFA
  • 1 x Intel 480 GB SSD S4510 SATA
  • 1 x 1 GbE Intel X722

Login nodes

  • Lenovo ThinkSystem SR630 server (1 U)
  • 10 nodes of the type Lenovo SR630, 1 U (air-cooled)
  • 2 x Intel Xeon 4108 (8 cores, 1.8 GHz, 85 W)
  • 12 x 16 GB TruDDR4 RDIMMs, 2,666 MHz
  • 1 x Intel OPA100 HFA
  • 2 x 10 GbE
  • 1 x Intel 480 GB SSD S4510 SATA
  • 2 x 550 W Platinum hot-swap power supplies

Service nodes

  • 2 nodes of the type Lenovo SR630, 1 U (air-cooled)
  • 2 x Intel Xeon 4108 (8 cores, 1.8 GHz, 85 W)
  • 12 x 16 GB TruDDR4 RDIMMs, 2,400 MHz
  • 1 x Intel OPA100 HFA
  • 4 x 10 GbE
  • 1 x RAID 530-8i adapter
  • 6 x 600 GB SAS 10k HDDs
  • 2 x 550 W Platinum hot-swap power supplies

Omnipath (Infiniband) setup

  • 27 edge switches: Intel OPA 100 Series, 48-port TOR switch, dual power supply
  • 16 core switches: Intel OPA 100 Series, 48-port TOR switch, dual power supply

VSC-3

VSC-3 (Vienna Scientific Cluster) © VSC / Claudia Blaas-Schenner

The third configuration level of the Vienna Scientific Cluster – the VSC-3 – was installed at the Arsenal in Vienna in 2014. The VSC-3 ranked

  • 85th in the TOP500 list of fastest supercomputers worldwide and
  • 86th in the Green500 list of most energy-efficient supercomputers worldwide (in November 2014).

For the VSC-3, particular importance was attached to green IT. This is reflected in the energy-efficient processors and the cooling system, which mainly uses a synthetic oil to remove excess heat.

Hardware and software

The VSC-3 has a total of 2,020 compute nodes that consist of the following components:

  • 2,020 dual socket compute nodes (32,768 cores and 130 TB of RAM in total), connected by an InfiniBand network and each equipped with
    • 2 × Intel Xeon E5-2650v2 CPUs (2.6 GHz, 8 cores, code name Ivy Bridge)
    • 64 GB – 256 GB of RAM (DDR3, ECC, 1,866 MHz)
    • 2 × Intel QLE7340 single-port QDR InfiniBand HCAs (QDR-80)
  • 5 access nodes, equipped like the compute nodes, additionally with:
    • 2 × 10 GbE Ethernet ports
    • 2 × GbE Ethernet ports
    • redundant power supply
  • 2 × boot servers (HA cluster), equipped like the compute nodes but with 128 GB of RAM
    • 8 × 480 GB Enterprise SSD in the RAID
    • High-performance IPoIB implementation (ConnectX3 adapter)
  • 2 × master nodes for cluster management and monitoring
    • The workload manager SLURM handles the queuing system and cluster management (a minimal job submission sketch follows below).
  • Parallel file system, based on the BeeGFS file system
    • 500 TB net capacity
    • 9 storage servers and 8 storage extension units
    • At least 20 GB/s bandwidth
  • NFS servers, connected via the QDR InfiniBand network
    • 1 PB net capacity in total
    • 10 identical NFS servers
    • High-performance IPoIB implementation (ConnectX3 adapter)
  • QDR InfiniBand network, implemented as QDR-80
    • Island implementation with no more than 288 nodes per island
    • The blocking factor within one island is 1:2.
  • Cooling system: Liquid submersion cooling
    • 23 cooling containers for the computing nodes, InfiniBand leaf switches and spine switches
    • 6 redundant pump modules including heat exchanger

The system has a maximum performance capacity of 584 TFlops.
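
To illustrate how jobs reach the compute nodes through SLURM (see the list above), the following Python sketch writes a minimal batch script and submits it with sbatch. It is only a schematic example: project-specific options such as the partition, account or quality of service are omitted and depend on the individual VSC project (see the VSC Wiki).

import subprocess
import tempfile

# A minimal SLURM batch script: 16 tasks and 10 minutes of wall time.
# Project-specific options (partition, account, ...) are deliberately
# left out here because they depend on the individual VSC project.
job_script = """#!/bin/bash
#SBATCH --job-name=example
#SBATCH --ntasks=16
#SBATCH --time=00:10:00
srun hostname
"""

# Write the script to a temporary file and hand it to sbatch, which
# queues the job and prints the assigned job ID on success.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(job_script)
    script_path = f.name

subprocess.run(["sbatch", script_path], check=True)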

VSC-3 Plus

The VSC-3 was expanded to the VSC-3 Plus by adding 864 dual socket compute nodes with 17,280 cores and 55 TB of RAM in total:

  • 816 nodes with 64 GB of RAM
  • 48 nodes with 256 GB of RAM
  • 2 × Intel Xeon E5-2660 v2 CPUs (2.2 GHz, 10 cores) per node
  • Mellanox InfiniBand, 56 Gbps (4X FDR)

VSC-2 (out of operation)

VSC-2 (Vienna Scientific Cluster)

The second configuration level of the Vienna Scientific Cluster – the VSC-2 – was set up at the Arsenal in Vienna. After its launch, it ranked 56th on the TOP500 list of fastest supercomputers worldwide.

Hardware and software

  • The 1,314 compute nodes had 2 AMD Opteron processors (Magny Cours 6132HE) each, with a 2.2 GHz clock rate and 8 cores. Each node had 32 GB of RAM and a 16 GB SSD.
  • The 12 storage servers and 2 NFS servers were operated with 2 Intel Xeon Westmere E5620 (quad core) processors and 48 GB of RAM. 24 TB of SAS storage for user data (FhGFS) and 48 TB of NFS storage were available.
  • The high-performance InfiniBand network contained 80 Voltaire Grid Director 4036 switches.
  • The operating system used for the VSC-2 was Scientific Linux 6.0 with Intel MPI and Open MPI. The Oracle Grid Engine and the ClustWare appliance were available as management software. As compilers, the VSC-2 used the Intel Fortran and C compilers (Composer XE 2011) as well as the GNU Fortran compiler.

VSC-1 (out of operation)

VSC-1 (Vienna Scientific Cluster)

The VSC-1 was installed at the Vienna University of Technology in 2009. It was ranked 156th in the TOP500 list of fastest supercomputers worldwide when it was launched (in November 2009). It was taken out of operation in 2015.

Hardware and software

  • The 436 compute nodes had two quad core processors each (Intel X5550, Nehalem) with a clock rate of 2.66 GHz (in total: 3,488 CPU cores). Every compute node had 24 GB of memory (DDR3, 1,333 MHz; in total: 11.2 TB) and a 500 GB SATA hard disk.
  • Performance (Rmax): 35.5 TFlops
  • All nodes were connected via InfiniBand. In this configuration of the VSC, the bandwidth amounted to 40 Gbps. The latency for a connection between any two nodes was about 1.5 microseconds. All InfiniBand components were made by QLogic.
  • The operating system used was CentOS, a Linux version that is optimised for server operation and for high-performance computing.
  • As a batch system, the VSC-1 used the Sun Grid Engine. MPI was used for parallel applications (see the sketch below). The compilers (Fortran, C/C++) and development libraries were made by Intel.
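
As a minimal illustration of the MPI programming model mentioned above, the following Python sketch lets every process report its rank. It assumes the mpi4py package is available (on the clusters, C or Fortran with the Intel compilers would be the more typical choice); the script would be launched with mpirun or, under SLURM, with srun.

from mpi4py import MPI  # assumes the mpi4py package is installed

comm = MPI.COMM_WORLD    # communicator spanning all started processes
rank = comm.Get_rank()   # index of this process within the communicator
size = comm.Get_size()   # total number of processes

# Launched e.g. as "mpirun -n 4 python hello_mpi.py", four copies of this
# script run in parallel, possibly spread over several compute nodes.
print(f"Hello from rank {rank} of {size}")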

Special purpose hardware

Special purpose hardware complements the standard compute nodes for particular workloads: for example, users can access graphics accelerators (GPUs) and Many Integrated Core (MIC) architectures. For further information, please visit the VSC Wiki or send an e-mail to service@vsc.ac.at.

GPUs

  • 50 GPU nodes with NVIDIA GTX 1080 cards
  • 2 nodes with 8 × NVIDIA GTX 1080 cards
  • 2 nodes with 10 × NVIDIA GTX 1080 cards

AMD Epyc

  • epyc1: 2 × 7551 processors, 64 cores, 256 GB of memory
  • epyc2: 1 × 7351P processor, 16 cores, 128 GB of memory
  • epyc3: 1 × 7551P processor, 32 cores, 128 GB of memory

Many Integrated Cores (MIC)

  • 4 × Intel Xeon Phi (Knights Landing) CPUs

17 bioinformatics nodes and storage

  • 10 servers for $HOME
  • 8 servers for $GLOBAL
  • 16 servers for $BINFS / $BINFL
  • 3 servers for testing
  • ~ 800 HDDs
  • ~ 100 SSDs

Connection to the GPFS Storage of the EODC (Earth Observation Data Centre for Water Resources Monitoring).
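
As a hedged sketch of how these storage tiers are typically addressed from within a job, the following Python snippet looks up the corresponding environment variables; whether each variable is actually defined is an assumption that depends on the cluster generation and the project setup (see the VSC Wiki).

import os

# The storage locations listed above are usually exposed to jobs as
# environment variables; which of them are set in a given session is an
# assumption and depends on the cluster and project (see the VSC Wiki).
for var in ("HOME", "GLOBAL", "BINFS", "BINFL"):
    path = os.environ.get(var)
    print(f"${var} -> {path if path is not None else 'not set'}")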

VSC Research Center

The VSC Research Center was established as part of a project funded by structural funds for the higher education area (Hochschulraumstrukturmittelprojekt). It offers a training programme targeted at the use of supercomputers and supports users in optimising their programs. The courses of the VSC Research Center are open to members of the partner universities and – subject to availability – also to external persons.

This project also supports doctoral candidates and postdocs in optimising and documenting highly relevant programs and making them available as open source software.

Further information about courses and the VSC Research Center