HPC Systems


  • [LEO3] 08 August 2019, 17:40: Login.Leo3 is available again; maintenance successfully completed
    The update of the following components was successfully completed:
    • Operating systems of all compute nodes and the login node
    • Operating systems of all fileservers
    • /tmp: is now 50GB on the login node
    • /tmp: is now 150GB on all compute nodes
    • /tmp: usrquota and groupquota are enabled

    Please check the results of your finished jobs.
    Please do not hesitate to contact us if you notice any peculiarities (e-mail: zid-cluster-admin@uibk.ac.at, or open a ticket at zid-ts.uibk.ac.at).
    With kind regards, Martin Thaler
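    The new /tmp sizes and quotas mentioned above can be verified directly on a node. A minimal sketch using standard Linux tools (the exact quota output depends on how the cluster's filesystems are configured, and quota(1) may not be installed everywhere):

    ```shell
    # Show size and free space of the node-local /tmp
    # (50 GB on the login node, 150 GB on compute nodes after this update)
    df -h /tmp

    # With usrquota/groupquota enabled, show your personal limits on /tmp;
    # guarded in case the quota tool is not available on this node
    command -v quota >/dev/null && quota -s -f /tmp || true
    ```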


HPC systems operated by the ZID

  • LEO4:
    Distributed memory infiniband cluster of the ZID (IT-Center) (2018)
  • LEO3E:
    Distributed memory infiniband cluster of the Research Area Scientific Computing (2015)
  • LEO3:
    Distributed memory infiniband cluster of the Research Area Scientific Computing (2011)
  • LCC2:
    The Linux Compute Cluster of the ZID (IT-Center) - for teaching purposes (2017)

HPC systems jointly operated with Austrian universities

  • VSC: Vienna Scientific Cluster
    HPC cooperation of major Austrian universities
    Distributed memory infiniband cluster (VSC3: 2015)
  • MACH2: Altix UV 3000
    Shared-memory machine with 20 TB RAM and 1,728 cores
    operated by Johannes Kepler Universität (JKU) Linz - specialized for highly parallel jobs with large memory demands (2018 - description subject to change)

Supranational Computing Facilities

Older systems

These systems will go out of service soon. Please do not apply for new accounts, and plan your transition to our current systems.

HPC specific software documentation

  • CUDA
    The Compute Unified Device Architecture (CUDA) toolkit provides a C/C++ interface for programming NVIDIA's graphics processing units (GPUs). This document describes how to use CUDA on the University's HPC systems equipped with NVIDIA graphics cards.
  • Matlab
    MATLAB is a high-level language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation. This document describes methods and strategies for using MATLAB efficiently on the HPC systems of the University of Innsbruck.
  • Setting up Your Windows PC With PuTTY and Xming
    This document describes how to set up the software on a Windows desktop or notebook necessary for an efficient user experience with central Linux servers. Covered items: PuTTY terminal emulator, Xming X11 server, settings for the PuTTY and Xterm terminal emulators.
  • Singularity: User Defined Software Environments
    Singularity is an environment for running user-defined software stacks such as Docker containers on HPC clusters.
  • Totalview Debugger
    The TotalView Debugger is a graphical tool for debugging sequential and parallel (MPI, OpenMP, POSIX threads etc.) programs.
  • Using Software Installed by the Spack Management System
