Supercomputer LEO5 - Distributed Memory CPU and GPU Cluster of the ZID (IT-Services)


Leo5 is a high performance CPU and GPU compute cluster operated by University of Innsbruck IT services in cooperation with the Research Area Scientific Computing.

For details, see LEO5: Introduction and Overview

Applying for User Accounts

Note: all active LEO accounts are automatically activated for Leo5.

The application process is described in detail on the UIBK HPC main page, section 2.

Note in particular the description of Applying and Acknowledgments.

Using the Cluster

All HPC clusters at the University of Innsbruck hosted at the ZID are set up in a similar way. Please do take the time to read the guidelines carefully, in order to make optimal use of our HPC systems.

First Time Instructions

See this quick start tutorial to clear the first hurdles after your account has been activated:

  • Login to the cluster
  • Change your password
  • Copy files from and to the cluster
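On the command line, these first steps typically look like the following sketch. The host name and the user name cXXXyyyy are placeholders, not confirmed values; the quick start tutorial gives the actual login details.

```shell
# Log in to the cluster (host and user name are placeholders).
ssh cXXXyyyy@leo5.example.uibk.ac.at

# Change your password after the first login.
passwd

# Copy a file from your local machine to your cluster home directory ...
scp input.dat cXXXyyyy@leo5.example.uibk.ac.at:

# ... and copy a result file back to the current local directory.
scp cXXXyyyy@leo5.example.uibk.ac.at:results.dat .
```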

Setting up the Software (Modules) Environment

We offer many software packages, typically in an increasing number of versions over time, driven by user demand. In order to utilize these efficiently and to avoid inter-package conflicts, we employ the Environment Modules package on all of our cluster systems.

See the modules environment tutorial to learn how to customize your personal software configuration.
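A typical modules session might look like the sketch below. The module names and version numbers are illustrative assumptions, not a guaranteed part of the LEO5 software stack; use `module avail` to see what is actually installed.

```shell
# List all software packages available through the modules environment.
module avail

# Load a specific compiler and MPI library (names/versions are examples).
module load gcc/12.2.0
module load openmpi/4.1.4

# Show what is currently loaded, and unload everything when done.
module list
module purge
```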

For more information on the differences between the previous LEO clusters and LEO5, please see the LEO5 introduction and overview.

Submitting Jobs to the Cluster

Load management on Leo5 is handled by the Slurm workload manager.

See the SLURM usage tutorial to find out how to submit your jobs to the Slurm batch scheduler.
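As a minimal sketch, a Slurm batch script could look like this. The resource values, time limit, and module name are placeholders; consult the SLURM usage tutorial for settings appropriate to LEO5.

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # job name shown in the queue
#SBATCH --ntasks=4               # number of tasks (placeholder value)
#SBATCH --time=01:00:00          # wall-clock time limit (placeholder)

# Load the software the job needs, then run it.
module load openmpi/4.1.4        # example module name
mpirun ./my_program
```

Saved as e.g. `job.slurm`, the script would be submitted with `sbatch job.slurm`, and pending or running jobs can be inspected with `squeue`.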

Storing Your Data

Every time you log in to the cluster, your quota and usage of the available file systems are displayed automatically.

You may store your data in two areas:

  1. $HOME provides you with a small storage area, which is backed up every day. This is the place to store your important data, such as source code, valuable input files, etc. In addition to central backup, bi-hourly snapshots allow you to access previous versions of your data for up to eight weeks in the past (with increasing time granularity).
  2. $SCRATCH is a large, high-performance storage area for large data sets. Use this storage for input datasets, intermediate storage, and output data of your jobs. Please note that, while this file system is safeguarded against the loss of individual disks, users are advised to create regular backups of valuable data stored in $SCRATCH, since multiple disk failures leading to data loss cannot be completely excluded.
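A job might stage its working data along the lines of the sketch below. The directory layout is a hypothetical convention, not a site requirement, and `${SCRATCH:-/tmp}` falls back to /tmp only so the sketch also runs outside the cluster.

```shell
# Create a per-run working directory on the fast scratch file system.
WORKDIR="${SCRATCH:-/tmp}/my_project/run_001"
mkdir -p "$WORKDIR"

# Stage input data there; keep the master copies of important files
# (source code, valuable inputs) in the backed-up $HOME area.
echo "example input" > "$WORKDIR/input.dat"

# After the job finishes, copy valuable results back to $HOME, e.g.:
#   cp "$WORKDIR/results.dat" "$HOME/my_project/"
```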

Available Software Packages

On each of our clusters we provide a wide variety of software packages, such as compilers, parallel environments, numerical libraries, scientific applications, utility programs, etc.

List of currently available software on LEO5

Known Problems and Configuration Changes

For your reference, we keep a list of known problems and configuration changes for Leo5 hardware and software.


Statement of Service

Maintenance Status and Recommendations for ZID HPC Systems
