HPC Systems

News

  • 10 January 2020, 18:40: Maintenance successfully completed for LCC2, LEO3, LEO3E and LEO4
    Please do not hesitate to contact us if you notice any peculiarities
    (e-mail: zid-cluster-admin@uibk.ac.at, or please open a ticket at zid-ts.uibk.ac.at).
    With kind regards, Martin Thaler

General Remarks, Partners

How To Obtain An Account

Local HPC resources operated by the ZID

  • LEO4:
    Distributed memory InfiniBand cluster of the ZID (IT-Center) (2018)
  • LEO3E:
    Distributed memory InfiniBand cluster of the Research Area Scientific Computing (2015)
  • LEO3:
    Distributed memory InfiniBand cluster of the Research Area Scientific Computing (2011)
  • LCC2:
    The Linux Compute Cluster of the ZID (IT-Center) - for teaching purposes (2017)

  • VISLAB 1669:
    Visual Interaction Lab 1669

HPC systems jointly operated with Austrian universities

  • VSC: Vienna Scientific Cluster
    HPC cooperation of major Austrian universities
    Distributed memory InfiniBand cluster (VSC3: 2015)
  • MACH2: Altix UV 3000
    Shared memory machine with 20 TB of memory and 1728 cores,
    operated by Johannes Kepler Universität (JKU) Linz - specialized for highly parallel jobs with large memory demands (2018 - description subject to change)

Supranational Computing Facilities

Older systems

These systems will go out of service soon. Please do not apply for new accounts, and plan your transition to our current systems.

HPC specific software documentation

  • General Purpose GPU Processing On The UIBK Leo Clusters
    Information on using GPU nodes in the UIBK HPC Leo clusters.
  • Matlab
    MATLAB is a high-level language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation. This document describes methods and strategies for using MATLAB efficiently on the HPC systems of the University of Innsbruck.
  • Monitoring Processes Using The Jobtop Utility
    Monitoring the processes belonging to a job is a key factor in optimizing your workloads for an HPC cluster. This document describes how to use the locally developed jobtop utility, which runs a specially configured top command on all cluster nodes that run processes of a given job.
  • Setting up Your Windows PC With PuTTY and Xming
    This document describes how to set up the software needed on a Windows desktop or notebook for an efficient user experience with central Linux servers. Covered items: PuTTY terminal emulator, Xming X11 server, settings for the PuTTY and Xterm terminal emulators.
  • Singularity: User Defined Software Environments
    Singularity is an environment for running user-defined software stacks such as Docker containers on HPC clusters.
  • Totalview Debugger
    The TotalView Debugger is a graphical tool for debugging sequential and parallel (MPI, OpenMP, POSIX threads etc.) programs.
  • Using Anaconda for Python and R
    Anaconda is a comprehensive, curated, high-quality, high-performance distribution of Python, R, and many associated packages for Linux, Windows, and macOS, intended for use by scientists.
  • Using Software Installed by the Spack Management System
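As a quick illustration of the Singularity workflow described in the list above, the following sketch pulls a Docker image and runs a command inside it. The image name is an arbitrary example, and cluster-specific details (loading Singularity itself, bind mounts, GPU access) are covered in the linked document:

```shell
# Hedged example: the image name is arbitrary; on the clusters,
# Singularity may first need to be loaded as a module (site-specific).

# Pull a Docker image and convert it to a Singularity image file (SIF):
singularity pull docker://python:3.9-slim

# Run a single command inside the resulting container image:
singularity exec python_3.9-slim.sif python3 --version

# Or start an interactive shell inside the container:
singularity shell python_3.9-slim.sif
```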
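The Anaconda entry above centers on isolated per-project environments; a minimal sketch of that workflow follows. Environment names and package versions are arbitrary examples, and the linked document covers the cluster-specific setup:

```shell
# Hedged example: environment name and versions are placeholders.

# Create an isolated environment with Python and common packages:
conda create --name myproject python=3.9 numpy pandas

# Activate it, work inside it, then deactivate:
conda activate myproject
python -c "import numpy; print(numpy.__version__)"
conda deactivate

# R can be set up in an environment of its own the same way:
conda create --name r-env -c conda-forge r-base
```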
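A minimal sketch of accessing Spack-installed software, assuming (as is common on HPC systems, and to be confirmed in the linked document) that Spack-built packages are exposed through environment modules; the package name is an arbitrary example:

```shell
# Hedged sketch, assuming an environment-modules setup;
# actual package names and versions are site-specific.
module avail          # list the software available on the cluster
module load gcc       # load a package into the current shell session
module list           # show the currently loaded modules
module unload gcc     # remove the package from the session again
```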
