Supercomputer – VSC

The Vienna Scientific Cluster (VSC) is a supercomputer for selected academic projects that have an extremely high demand for computing power. The current configuration level is the VSC-5.

The VSC is a collaboration of the Vienna University of Technology (TU Wien), the University of Vienna, the University of Natural Resources and Applied Life Sciences, Vienna, the Graz University of Technology and the University of Innsbruck.

Using the VSC

Access to the Vienna Scientific Cluster is granted for projects that

  • have successfully undergone a peer review procedure and
  • require high-performance computing in addition to academic excellence.

Projects that have already been positively reviewed by the Austrian Science Fund (FWF), the EU or comparable bodies do not require an additional review procedure. Projects and applications for resources can be submitted via the VSC form of TU Wien.

In addition, test accounts (temporary and with limited resources) can be assigned quickly and unbureaucratically.

You can find more information about using the VSC at the VSC Wiki.
 

Users of the University of Vienna

  • Access nodes are available via vsc4.vsc.ac.at and vsc5.vsc.ac.at. They are logically part of the University of Vienna's data network.
  • The connection between the University of Vienna and TU Wien was physically upgraded to 10 Gbps. Large amounts of data can therefore be transferred smoothly between the VSC and servers at departments of the University of Vienna (a minimal transfer sketch follows this list).
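
Large transfers to the access nodes can be made with any standard tool such as scp or rsync. The following minimal Python sketch simply wraps scp via the standard library; the user name and paths are purely hypothetical, and SSH key-based access to the access node is assumed:

    import subprocess

    # Hypothetical user name and paths; SSH key-based access to the
    # VSC access node (vsc5.vsc.ac.at) is assumed.
    user = "exampleuser"
    host = "vsc5.vsc.ac.at"
    local_file = "results/dataset.tar.gz"
    remote_dir = f"{user}@{host}:data/"  # hypothetical target directory in the user's home

    # Copy the file to the VSC access node via scp.
    subprocess.run(["scp", local_file, remote_dir], check=True)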

If you have any questions regarding the VSC, please contact service@vsc.ac.at.

MUSICA

MUSICA is Austria's next supercomputer with networked HPC systems at locations in Innsbruck, Linz and Vienna.

The installation of MUSICA in Vienna has already begun. The system is scheduled to go into test operation in late fall 2024. Regular operation should be possible from January 2025.

MUSICA will provide users with significantly more computing power: the fastest supercomputers in Austria to date, VSC-4 and VSC-5, have a combined performance of 5.01 petaflops (quadrillions of floating-point operations per second). The new HPC cluster will provide a computing power of around 40 petaflops, making it one of the most powerful systems in the world.
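
For scale, 40 petaflops corresponds to roughly an eight-fold increase over the combined 5.01 petaflops of VSC-4 and VSC-5 (40 / 5.01 ≈ 8).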

 

Hardware and software

Applies to all systems (Vienna, Linz and Innsbruck)

  • GPU nodes: 4 × Nvidia H100 94 GB, connected via NVLink; 2 × AMD EPYC 9654 (96 cores each) and 768 GB memory; 4 × NDR200 InfiniBand; power consumption: max. 4 kW (a sketch with derived figures follows this list)
  • CPU nodes: 2 × AMD EPYC 9654 (96 cores each) and 768 GB memory, 1 × NDR200 InfiniBand
  • WEKA storage servers from MEGWARE: 12 NVMe disks with 15.36 TB each, 2 × NDR400 InfiniBand high-speed network
  • WEKA® Data Platform: 4 PB all-flash storage solution with up to 1,800 GB/s read and 750 GB/s write performance
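
As an illustration, the per-node and per-server figures listed above can be combined into a few derived numbers. The following minimal Python sketch only restates the specifications from the list; the constants are not additional MUSICA data:

    # Derived figures, based solely on the specifications listed above.
    GPUS_PER_NODE = 4
    GPU_MEMORY_GB = 94             # Nvidia H100 with 94 GB each
    NVME_DISKS_PER_SERVER = 12
    NVME_DISK_TB = 15.36

    gpu_memory_per_node_gb = GPUS_PER_NODE * GPU_MEMORY_GB          # 376 GB of GPU memory per GPU node
    raw_nvme_per_server_tb = NVME_DISKS_PER_SERVER * NVME_DISK_TB   # about 184 TB of raw NVMe per storage server

    print(f"GPU memory per GPU node: {gpu_memory_per_node_gb} GB")
    print(f"Raw NVMe capacity per storage server: {raw_nvme_per_server_tb:.1f} TB")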

 

VSC-5

VSC-5 (Vienna Scientific Cluster) © VSC Team

The fifth configuration level of the Vienna Scientific Cluster – VSC-5 – is located at the Arsenal and has been in operation since 2022. The VSC-5 ranked

  • 301st in the TOP500 list of fastest supercomputers worldwide (as of June 2022).


Hardware and software

  • The system contains a total of 98,560 cores and 120 GPUs (see the tally after this list).
  • 770 computing nodes, each with 2 AMD EPYC 7713 processors with 64 cores each.
  • Each node has 512 GB of main memory and a 1.92 TB SSD in the basic configuration.
  • Of these nodes:

    • 6 reserve nodes
    • 120 nodes with extended main memory (1 TB per node)
    • 20 nodes with extended main memory (2 TB per node)
    • 60 GPU nodes with 2 NVIDIA A100 GPUs each

  • 10 login nodes, each with 1 AMD EPYC 7713 CPU and 128 GB of main memory.
  • 3 service nodes (equipped similarly to the login nodes).
  • All nodes (compute, login and service nodes) each have a 1.92 TB SSD attached via PCIe 3.0 ×4.
  • The service nodes have 4 such SSDs each and form a private Ceph storage.
  • The internal network is 200 Gbit InfiniBand with 4:1 oversubscription.
  • The login nodes each have an additional 25 Gbit Ethernet connection to the outside.
  • The connection to the storage is via 6 × 200 Gbit InfiniBand.
  • The storage is shared with VSC-4, which is connected with 12 × 100 Gbit Omnipath.
  • The system is largely water-cooled.
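
These headline figures follow directly from the node counts listed above:

    770 compute nodes × 2 CPUs × 64 cores = 98,560 cores
    60 GPU nodes × 2 GPUs = 120 GPUs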

VSC-4

VSC-4 (Vienna Scientific Cluster) © ZID / Katharina Lüthke

The fourth configuration level – the VSC-4 – has also been installed at the Arsenal.

The VSC-4 ranked

  • 82nd in the TOP500 list of fastest supercomputers worldwide

Hardware and software

  • Based on the Lenovo DWC (direct water cooling) system
  • Direct water cooling with supply temperatures of up to 50 degrees Celsius
  • 90 % of the waste heat is removed via the water cooling
  • a total of 790 nodes (see the tally after this list), consisting of

    • 700 nodes, each with

      • 2 × Intel Xeon Platinum 8174 (24 cores, 3.1 GHz)
      • 96 GB main memory
      • Omnipath Fabric
      • Omnipath (100 Gbit) adapter

    • 78 fat nodes, each with

      • 2 × Intel Xeon Platinum 8174 (24 cores, 3.1 GHz)
      • 384 GB main memory
      • Omnipath Fabric
      • Omnipath (100 Gbit) adapter

    • 12 very fat nodes, each with

      • 2 × Intel Xeon Platinum 8174 (24 cores, 3.1 GHz)
      • 768 GB main memory
      • Omnipath Fabric
      • Omnipath (100 Gbit) adapter
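
As indicated above, the node types add up as follows (each compute node has 2 CPUs with 24 cores each):

    700 + 78 + 12 = 790 nodes
    790 nodes × 48 cores = 37,920 cores in total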

Per node

  • 2 × Intel Xeon Platinum 8174 (24 cores, 3.1 GHz, 240 W)
  • 12 × 8/32/64 GB TruDDR4 RDIMMs, 2,666 MHz (see the note after this list)
  • 1 × Intel OPA100 HFA
  • 1 × Intel 480 GB SSD S4510 SATA
  • 1 × 1 GbE Intel X722
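
As noted above, the three memory variants of the compute nodes (96, 384 and 768 GB) correspond directly to the 12 DIMM slots per node: 12 × 8 GB = 96 GB, 12 × 32 GB = 384 GB and 12 × 64 GB = 768 GB.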

Login nodes

  • Lenovo ThinkSystem SR630 server (1U)
  • 10 nodes of the type Lenovo SR630 1U (air-cooled)
  • 2 × Intel Xeon Silver 4108 (8 cores, 1.8 GHz, 85 W)
  • 12 × 16 GB TruDDR4 RDIMMs, 2,666 MHz
  • 1 × Intel OPA100 HFA
  • 2 × 10 GbE
  • 1 × Intel 480 GB SSD S4510 SATA
  • 2 × 550 W Platinum hot-swap power supplies

Service nodes

  • 2 nodes of the type Lenovo SR630 1U (air-cooled)
  • 2 × Intel Xeon Silver 4108 (8 cores, 1.8 GHz, 85 W)
  • 12 × 16 GB TruDDR4 RDIMMs, 2,400 MHz
  • 1 × Intel OPA100 HFA
  • 4 × 10 GbE
  • 1 × RAID 530-8i adapter
  • 6 × 600 GB SAS 10k HDDs
  • 2 × 550 W Platinum hot-swap power supplies

Omnipath setup

  • 27 edge switches: Intel OPA 100 Series 48-port TOR switch, dual power supply
  • 16 core switches: Intel OPA 100 Series 48-port TOR switch, dual power supply


VSC-3 (out of operation)

VSC-3 (Vienna Scientific Cluster) © VSC / Claudia Blaas-Schenner

The third configuration level of the Vienna Scientific Cluster – the VSC-3 – was in operation at the Arsenal in Vienna from 2014 to 2022. The VSC-3 ranked

  • 85th in the TOP500 list of fastest supercomputers worldwide and
  • 86th in the Green500 list of most energy-efficient supercomputers worldwide (in November 2014).

For the VSC-3, particular importance was attached to green IT. This was reflected in energy-efficient processors and an efficient cooling system, which mainly used synthetic oil to remove excess heat.
 

Hardware and software

The VSC-3 had a total of 2,020 compute nodes and consisted of the following components:

  • 2,020 dual-socket compute nodes with 32,768 cores and 130 TB of RAM in total, connected by an InfiniBand network; each node consisted of

    • 2 × Intel Xeon E5-2650v2 CPUs (2.6 GHz, 8 cores, code name Ivy Bridge)
    • 64 GB – 256 GB of RAM (DDR3, ECC, 1,866 MHz)
    • 2 × Intel QLE7340 single-port QDR InfiniBand HCAs (QDR-80)

  • 5 access nodes (equipped like the compute nodes), additionally with:

    • 2 × 10 GbE Ethernet ports
    • 2 × GbE Ethernet ports
    • redundant power supply

  • 2 × boot servers as an HA cluster, equipped like the compute nodes but with 128 GB of RAM

    • 8 × 480 GB Enterprise SSD in the RAID
    • High-performance IPoIB implementation (ConnectX3 adapter)

  • 2 × master nodes for cluster management and monitoring

    • The workload manager SLURM handled the queuing system and cluster management.

  • Parallel file system, based on the BeeGFS file system

    • 500 TB net capacity
    • 9 storage servers and 8 storage extension units
    • At least 20 GB/s bandwidth

  • NFS servers, connected via the QDR InfiniBand network

    • 1 PB net capacity in total
    • 10 identical NFS servers
    • High-performance IPoIB implementation (ConnectX3 adapter)

  • QDR InfiniBand network, implemented as QDR-80

    • Island implementation with no more than 288 nodes per island
    • The blocking factor within one island was 1:2.

  • Cooling system: Liquid submersion cooling

    • 23 cooling containers for the computing nodes, InfiniBand leaf switches and spine switches
    • 6 redundant pump modules including heat exchanger

The system had a maximum performance of 584 TFlops.
 

VSC-3 Plus

The VSC-3 was expanded to the VSC-3 Plus by adding 864 dual-socket compute nodes with 17,280 cores and 55 TB of RAM in total:

  • 816 nodes with 64 GB of RAM
  • 48 nodes with 256 GB of RAM
  • 2 × Intel Xeon E5-2660 v2 CPUs (2.2 GHz, 10 cores) per node
  • Mellanox InfiniBand, 56 Gbps (4× FDR)

VSC-2 (out of operation)

VSC-2 (Vienna Scientific Cluster)

The second configuration level of the Vienna Scientific Cluster – the VSC-2 – was set up at the Arsenal in Vienna. After its launch, it ranked 56th on the TOP500 list of fastest supercomputers worldwide.

Hardware and software

  • The 1,314 compute nodes each had 2 AMD Opteron processors (Magny-Cours 6132 HE) with a clock rate of 2.2 GHz and 8 cores. Each node had 32 GB of RAM and a 16 GB SSD.
  • The 12 storage servers and 2 NFS servers were each operated with 2 Intel Xeon E5620 (Westmere, quad-core) processors and 48 GB of RAM. The available storage space comprised 24 TB SAS for user data (FhGFS) and 48 TB NFS.
  • The high-performance InfiniBand network contained 80 Voltaire Grid Director 4036 switches.
  • The operating system used for the VSC-2 was Scientific Linux 6.0 with Intel MPI and Open MPI. The Oracle Grid Engine and the ClustWare-Appliance were available as management software. As compilers, the VSC-2 used the Intel Fortran and C compilers (Composer XE 2011) as well as the GNU Fortran compiler.

VSC-1 (out of operation)

VSC-1 (Vienna Scientific Cluster)

The VSC-1 was installed at the Vienna University of Technology in 2009. It was ranked 156th in the TOP500 list of fastest supercomputers worldwide when it was launched (in November 2009). It was taken out of operation in 2015.

Hardware and software

  • The 436 compute nodes each had two quad-core processors (Intel Xeon X5550, Nehalem) with a clock rate of 2.66 GHz (in total: 3,488 CPU cores). Every compute node had 24 GB of memory (1,333 MHz DDR3; in total: 11.2 TB) and one 500 GB SATA hard disk.
  • Performance (Rmax): 35.5 TFlops
  • All nodes were connected via InfiniBand. In this configuration, the bandwidth amounted to 40 Gbps. The latency for a connection between any two nodes was about 1.5 microseconds. All InfiniBand components were made by QLogic.
  • The operating system used was CentOS, a Linux version that is optimised for server operation and for high-performance computing.
  • As a batch system, the VSC-1 used the Sun Grid Engine. MPI was used for parallel computations. The compilers (Fortran, C/C++) and development libraries were made by Intel.

VSC Research Center

The VSC Research Center was established as part of a project financed by structural funds for the higher education area (Hochschulraumstrukturmittel project). It offers a training programme focused on the use of supercomputers and supports users in optimising their programs. The courses of the VSC Research Center are open to members of the partner universities and – subject to availability – also to external persons.

This project also supports doctoral candidates and postdocs in optimising and documenting highly relevant programs and making them available as open source software.

Further information about courses and the VSC Research Center