Vienna Scientific Cluster (VSC)

The Vienna Scientific Cluster (VSC) is a pool of high-performance computing resources that serves the computing demands of five universities: the University of Vienna, the Vienna University of Technology, the University of Natural Resources and Applied Life Sciences, Graz University of Technology, and the University of Innsbruck.

The VSC-1 was set up in 2009 with an investment of almost 2 million euros. It is located in the "Freihaus" building (Wiedner Hauptstraße 8-10) of the Vienna University of Technology and was ranked #156 in the November 2009 edition of the TOP500 list.

The Vienna Scientific Cluster VSC-2 is an HPC system installed by MEGWARE Computer, Chemnitz, Germany. The hardware installation at the Arsenal building (object 214) of the Vienna University of Technology was completed in May 2011, and the system was ranked #56 in the June 2011 edition of the TOP500 list.

The third development stage, VSC-3, is also located at the Arsenal in Vienna. The VSC-3 was ranked 85th in the TOP500 list of the world's fastest high-performance computers and 86th in the Green500 list of the world's most energy-efficient supercomputers (as of November 2014). When designing the VSC-3, great emphasis was placed on green IT; this is reflected in energy-saving CPUs and an efficient cooling system that uses a synthetic oil to conduct the heat away.

Hardware & Software VSC-1 (decommissioned)

  • 436 Sun Fire X2270 compute nodes, each equipped with 2 quad-core Intel Xeon X5550 processors (2.66 GHz) and 24 GB of memory (DDR3, 1333 MHz)
  • 40 Supermicro Twin² compute nodes, each equipped with 2 six-core Intel Xeon X5650 ("Westmere") processors (2.66 GHz, 12 MB cache) and 24 GB of memory
  • Total number of available processor cores: 3968
  • Maximum available memory: 12.3 TB
  • R_max: 35.5 TFlops
  • Network: Infiniband QDR (40 Gbps)
  • Operating System: Linux (CentOS)
  • Applications: GAUSSIAN 09, R, MATLAB, Mathematica, HDF5, Global Arrays, LAMMPS
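
Most of the scientific applications listed above, LAMMPS and codes built on Global Arrays for example, are MPI-parallel and distribute their work across the nodes over the QDR InfiniBand fabric. The following minimal C/MPI sketch is purely illustrative and not part of the official VSC software stack; the file name hello_vsc.c is hypothetical. It shows the basic structure of such a distributed program:

  /* hello_vsc.c - minimal MPI example (illustrative only):
     each process reports its rank and the compute node it runs on. */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int rank, size, name_len;
      char node_name[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);                       /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);         /* rank of this process  */
      MPI_Comm_size(MPI_COMM_WORLD, &size);         /* total number of ranks */
      MPI_Get_processor_name(node_name, &name_len); /* hostname of the node  */

      printf("Rank %d of %d running on %s\n", rank, size, node_name);

      MPI_Finalize();
      return 0;
  }

Such a program would typically be compiled with the MPI wrapper compiler (e.g. mpicc hello_vsc.c -o hello_vsc) and started by the batch system with one MPI rank per core, i.e. 8 ranks per node on the X5550 nodes.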

Hardware & Software VSC-2

  • 1314 compute nodes
    • chassis: SlashFive (from Megware),
    • CPU: 2 x AMD Opteron Magny Cours 6132HE (8 Cores, 2.2 GHz)
    • RAM: 8 x 4 GB ECC DDR3
    • HD: 16 GB SSD
    • Network: 2 x Gigabit Ethernet LAN, 1 x Infiniband QDR
  • Total number of available processor cores: 21024
  • Maximum available memory: 42.0 TB
  • Power consumption (maximum load): 420 kW
  • R_max: 135.6 TFlops
  • R_peak: 185.0 TFlops
  • Three login servers, two master nodes
  • High-performance InfiniBand QDR network
  • Two Gigabit Ethernet networks
  • Operating System: Linux (Scientific Linux 6.0)
  • Management software: Oracle Grid Engine, ClustWare-Appliance (from Megware)
  • Cooling: Knürr CoolTherm® water-cooled server racks with 17 kW of cooling power each
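
As a consistency check (an illustration, not part of the original specification), the aggregate figures above follow directly from the per-node configuration, assuming 4 double-precision floating-point operations per core and cycle for the Opteron "Magny-Cours" architecture:

  • cores: 1314 nodes × 2 CPUs × 8 cores = 21024 cores
  • memory: 1314 nodes × 8 × 4 GB = 42048 GB ≈ 42.0 TB
  • R_peak: 21024 cores × 2.2 GHz × 4 FLOPs/cycle ≈ 185.0 TFlops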

Hardware VSC-3

The VSC-3 will have a total of 1756 compute nodes, made up of the following components:

  • 1756 dual-socket compute nodes, each consisting of
    • 2x Intel Xeon E5-2650v2 CPUs (2.6 GHz, 8 cores, codename "Ivy Bridge")
    • 64 GB main memory (DDR3, ECC, 1866 MHz)
    • 2x Intel QLE7340 single-port QDR InfiniBand HCAs (QDR-80) for the network
  • 5 login nodes, configured like the compute nodes, plus:
    • 2x 10 GbE Ethernet ports
    • 2x GbE Ethernet ports
    • redundant power supplies
  • 2x boot servers operated as an HA cluster, configured like the compute nodes but with 128 GB RAM
    • 8x 480 GB enterprise SSDs in a RAID array
    • high-performance IPoIB implementation (ConnectX-3 adapters)
  • 2x master nodes for cluster management and monitoring
    • queuing system and cluster management: the SLURM workload manager
  • Parallel file system, based on the Fraunhofer file system
    • 300 TB net capacity
    • 9 storage servers and 8 storage expansion units
    • at least 20 GB/s bandwidth
  • NFS servers, connected via the QDR InfiniBand network
    • 200 TB net capacity in total
    • 6 identical NFS servers
    • high-performance IPoIB implementation (ConnectX-3 adapters)
  • QDR InfiniBand network, implemented as QDR-80
    • island topology with a maximum of 288 to 300 nodes per island
    • blocking factor of 1:2 within an island
  • "Liquid submersion cooling" system
    • 21 cooling containers housing the compute nodes and the InfiniBand leaf and spine switches
    • 6 redundant pump modules including heat exchangers
  • 5 water-cooled heat pumps for cooling the storage components

The system will have a peak performance of 584 TFlops and a total of 113.66 TB of RAM.
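
The quoted peak performance can be reproduced from the node specification, assuming 8 double-precision floating-point operations per core and cycle (AVX) on the Ivy Bridge architecture: 1756 nodes × 16 cores × 2.6 GHz × 8 FLOPs/cycle ≈ 584 TFlops.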


Access to the VSC is granted on the basis of peer-reviewed projects which, in addition to scientific excellence, can demonstrate the need for extremely large computational resources. Projects can be requested via the project application assistant.
Please direct all questions related to the VSC to