Compute Services

HPC systems available to University of Innsbruck (UIBK) users (faculty, advanced students) include servers operated by the ZID in Innsbruck (Leo clusters), by the ZID in Linz (Mach) and by the VSC consortium in Vienna (VSC3).

These systems enable users to run programs that are larger and more resource-intensive than typical workgroup or departmental servers can handle:

  • Parallel programs using hundreds of CPUs
  • Parallel programs using dozens or hundreds of GB of main memory
  • Programs with intensive use of disk storage, possibly using terabytes of temporary disk space

Users may bring in their own (or open source) programs in source form (e.g. C, C++, Fortran) or use preinstalled software. Programs that use large amounts of computing resources (compute time and memory) should be parallelized, using tools such as MPI or OpenMP.
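As an illustration, the following minimal sketch shows a loop parallelized with OpenMP in C; the file name, loop, and output are chosen for illustration only and do not refer to any preinstalled example:

  /* sum_omp.c -- minimal OpenMP sketch (illustrative only).
     Compile e.g. with: gcc -fopenmp -O2 -o sum_omp sum_omp.c */
  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      const int n = 100000000;
      double sum = 0.0;

      /* Distribute the loop iterations over the available threads;
         the reduction clause combines the per-thread partial sums. */
      #pragma omp parallel for reduction(+:sum)
      for (int i = 1; i <= n; i++)
          sum += 1.0 / i;

      printf("threads: %d  result: %f\n", omp_get_max_threads(), sum);
      return 0;
  }

MPI follows the same idea at a larger scale: the work is distributed over processes that may run on many nodes and communicate via message passing.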

The hardware offered by the ZID and VSC includes distributed memory clusters with dozens to thousands of individual compute servers (nodes). Each node typically has 12 to 28 CPU cores and 24 to 512 GB of main memory. As of 2019, the UIBK Leo clusters offer a total of 255 compute nodes, 4188 CPU cores, and 12.5 TB of main memory.

A large shared-memory system with 1728 CPUs and 20 TB of main memory (Mach2) is available for programs that need very large amounts of memory and shared-memory parallelization.

Shared access to large, fast disk subsystems allows intensive use of temporary storage. The HPC servers are directly connected to the campus backbone and to the high-performance inter-university network (Aconet), which speeds up data transfers.

All machines run variants of the Linux operating system (CentOS, SLES, Scientific Linux) and are accessed via SSH. The primary user interface is the standard UNIX/Linux shell. To prevent resource contention, users start their programs via a batch queueing system (SGE, PBS, SLURM). For resource-intensive work, non-interactive use is preferred.
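To give an idea of non-interactive use, the sketch below shows what a simple SLURM batch script might look like; the job name, resource values, module, and program names are placeholders, and the exact options differ between the clusters and batch systems:

  #!/bin/bash
  # Illustrative SLURM batch script; all values are placeholders.
  #SBATCH --job-name=my_simulation
  #SBATCH --ntasks=64               # number of MPI processes
  #SBATCH --mem-per-cpu=2G          # main memory per CPU core
  #SBATCH --time=12:00:00           # maximum wall clock time

  module load openmpi               # assumed module name
  mpirun ./my_program input.dat     # placeholder program and input file

Such a script is submitted with sbatch and monitored with squeue; SGE and PBS use different but analogous commands (e.g. qsub, qstat).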

High-quality hardware ensures stable operation with very low failure rates. Thanks to automatic monitoring, automated rollout of new or repaired machines, and maintenance contracts, malfunctions can be discovered and corrected quickly.

Software

At all UIBK HPC installations, a broad range of software is installed and available to end users, including

  • Linux operating system with typical pre-installed utilities
  • Compilers (GNU, Intel, PGI) for C, C++ and Fortran (77, 95, 2003), including OpenMP parallelization (see the compile sketch after this list)
  • Communication and parallelization libraries (OpenMPI, MPT)
  • High performance computational and data libraries (Intel MKL implementation of LAPACK and FFT, HDF, and many others)
  • Development tools (parallel debugger, profiling)
  • Integrated numerical productivity tools (e.g. Matlab, Mathematica, NumPy, SciPy)
  • Application software (e.g. finite elements, fluid dynamics, computational chemistry, atmospheric sciences, statistics, visualisation, ...)
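As a rough illustration of how these compilers and libraries are used, the commands below compile the OpenMP sketch from above with the GNU compiler and an MPI program with the Open MPI compiler wrapper; the file names are placeholders and the exact flags depend on the compiler and its version:

  # OpenMP program, GNU compiler (-fopenmp enables OpenMP support)
  gcc -fopenmp -O2 -o sum_omp sum_omp.c

  # MPI program, built via the Open MPI wrapper around the selected compiler
  mpicc -O2 -o my_mpi_program my_mpi_program.c

Linking against libraries such as the Intel MKL usually requires additional compiler and linker flags that depend on the local installation.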

As staff resources permit, software of general interest can be installed centrally; users can also compile software for their own use.

Personal Support

A small but dedicated team ensures stable daily operation of the systems as well as controlled acquisition of new servers that meet the real demands of our user community.

In addition, we offer direct help to end users for problems arising in daily use of the systems:

  • Individual introductory briefings for new users. Users can discuss their needs with HPC experts and get hints on how to use the resources optimally for their work.
  • Problem support. If users run into problems they cannot solve on their own, we offer problem analysis and advice.
  • Support for porting and optimizing programs. Code developed for other machines may not run well on a new system, often for simple reasons. Experienced HPC experts can frequently help to resolve such problems quickly.

Keeping it Simple

Our range of services is typical for a classical small HPC team. We serve hundreds of scientists in a wide range of scientific fields, so we cannot become specialists in every application area. Consequently, our typical users have some degree of technical experience and know, or learn, how to put their programs to productive use.

In particular, we do not offer:

  • Standardized services at a commercial service level.
  • Integrated web application front ends or standardized workflows for non-technical users.
  • Application-science-level support (beyond general advice on selecting numerical methods and the like).

Although integrated workflows lower the entry threshold, they add to the technical complexity and maintenance effort of the systems. Knowledge about specific HPC applications is often shared in user communities within institutes or at large. The Research Area Scientific Computing also offers a platform for knowledge exchange. We encourage users to profit from this vast collective experience.
