HPC Systems
Table of contents
1. General Remarks, Partners
- HPC Systems at UIBK - Description of Services
- HPC Systems of the ZID - Statement of Service Levels for Systems
- Research Area Scientific Computing
2. How To Obtain An Account, Getting Help
- Applying for HPC accounts, user obligations
- HPC Accounts for Non-Staff Members of the University
Regulations for persons who are members of the University but are not employed (e.g. students, cooperation partners) and need an HPC account.
- ZID Ticket System
If you need help with HPC services, please create a new ticket in the queue "HPC".
3. Local HPC Resources Operated by the ZID
- LEO5:
Distributed memory InfiniBand CPU and GPU cluster (2023)
- LEO4:
Distributed memory InfiniBand cluster of the ZID (IT Services - 2018)
- LEO3E:
Distributed memory InfiniBand cluster of the Research Area Scientific Computing (2015)
- LCC3:
The Linux Compute Cluster of the ZID (IT Center) - for teaching purposes (2023)
- VISLAB 1669:
Visual Interaction Lab 1669
4. HPC Systems Jointly Operated with Austrian Universities
- VSC: Vienna Scientific Cluster
HPC cooperation of major Austrian universities
Distributed memory InfiniBand cluster (VSC3: 2015)
5. Supranational Computing Facilities
- PRACE: Partnership for Advanced Computing in Europe
Top level of European HPC Infrastructure
Systems for very high computing demands
- AURELEO: Austrian users at the LEONARDO supercomputer
Austrian participation in the LEONARDO pre-exascale supercomputer
Access administered by the VSC consortium
6. Older Systems
LEO3 and MACH2 are out of service. Information is given here for historical reference.
- LEO3:
Distributed memory InfiniBand cluster of the Research Area Scientific Computing (established 2011, decommissioned May 2022)
- MACH2: Altix UV 3000
Shared memory machine with 20 TB of memory and 1728 cores,
operated by Johannes Kepler Universität (JKU) Linz - specialized for highly parallel jobs with large memory demands (2018 - 2023)
7. HPC Specific Software Documentation
- General Purpose GPU Processing On The UIBK Leo Clusters
Information on using GPU nodes in the UIBK HPC Leo clusters.
- Matlab
MATLAB is a high-level language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation. This document describes methods and strategies for using MATLAB efficiently on the HPC systems of the University of Innsbruck.
- Monitoring Processes Using The Jobtop Utility
Monitoring the processes belonging to a job is a key factor in optimizing your workloads for an HPC cluster. This document describes the locally developed jobtop facility, which lets you run a specially configured top command on all cluster nodes that run processes of a given job (see the sketch following this list).
- Qiskit
Qiskit is an open-source software development kit for quantum computing.
- Setting up Your Windows PC With Putty and Xming
This document describes how to set up the software needed on a Windows desktop or notebook for efficient use of the central Linux servers. Covered items: the PuTTY terminal emulator, the Xming X11 server, and settings for the PuTTY and Xterm terminal emulators.
- Singularity: User Defined Software Environments
Singularity is an environment for running user-defined software stacks, such as Docker containers, on HPC clusters (see the example following this list).
- Totalview Debugger
The TotalView Debugger is a graphical tool for debugging sequential and parallel (MPI, OpenMP, POSIX threads, etc.) programs.
- Using Anaconda for Python and R
Anaconda is a comprehensive, curated, high-quality, and high-performance distribution of Python, R, and many associated packages for Linux, Windows, and macOS, intended for use by scientists (see the example following this list).
- smbnetfs
Methods to access Windows file shares from HPC clusters (see the example following this list).
- Comparison of features of UIBK HPC systems
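A rough sketch of the jobtop workflow described above. The actual command syntax is defined by the linked UIBK documentation, not by this example; the job ID is a placeholder, and the use of Slurm here is an assumption.

```bash
# Hypothetical sketch only: jobtop is a locally developed UIBK tool and
# its real invocation is specified in the linked documentation.
squeue -u "$USER"    # list your running batch jobs (assumes Slurm)
jobtop 123456        # placeholder job ID: show top output for all
                     # cluster nodes running processes of this job
```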
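A minimal Singularity sketch for the Docker-container use case named above. The image is an arbitrary public example, and on the clusters a `module load singularity` step (exact module name varies) may be required first.

```bash
# Fetch a public Docker image and convert it to a local SIF file.
singularity pull docker://python:3.11-slim

# Execute a command inside the resulting container image;
# your home directory is bind-mounted by default.
singularity exec python_3.11-slim.sif python3 --version
```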
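A minimal conda sketch for the Anaconda entry above. The environment name and package list are arbitrary examples; on the clusters, Anaconda itself is typically provided through an environment module whose exact name is cluster specific.

```bash
# Create a private environment with a chosen Python version and packages.
conda create --name myproject python=3.11 numpy pandas

# Activate the environment and confirm which interpreter is in use.
conda activate myproject
python --version
```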
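A minimal smbnetfs sketch for the entry above, assuming FUSE is available on the node. The server name, share name, and mount point are placeholders; credentials are normally configured under ~/.smb/.

```bash
# Mount the smbnetfs virtual filesystem on an empty directory.
mkdir -p ~/smb
smbnetfs ~/smb

# Windows shares then appear as ordinary paths (server and share
# names below are placeholders).
ls ~/smb/fileserver.example.com/projects

# Unmount when finished.
fusermount -u ~/smb
```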
8. Course Material
- Introduction to UNIX 2017 for DK CIM
- Singularity Workshop 2018
- Using the UIBK HPC Infrastructure DK CIM 2020 (Slides, PDF Download)