HPC Systems of the ZID - Statement of Service Levels for Systems

To maximize the benefit from its investments in HPC systems, the ZID will try to continue operating existing systems for some time even after warranties or maintenance contracts have expired. If you use our systems, you should understand the implications and risks of this strategy so that you can plan your research projects accordingly.

In the following paragraphs, we inform you about the current and planned maintenance status of each of our systems, and what it means to use systems that are no longer under maintenance.

Risks of using systems not under maintenance

Modern hardware is relatively reliable. Even small individual failure probabilities, however, multiplied by the number of machines in a cluster, add up to a significant overall failure rate, at least for certain components.
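
For illustration only (the numbers are assumed, not measured failure rates): if each node independently fails on a given day with probability p = 0.001, a cluster of N = 200 nodes experiences at least one failure that day with probability 1 − (1 − p)^N = 1 − 0.999^200 ≈ 18%, i.e. roughly one node failure every five to six days.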

An individual failure may, depending on its nature, result in one or more of the following consequences:

  • Termination of individual jobs,
  • Reduction of processing capacity (loss of individual nodes),
  • Reduced communication bandwidth,
  • Temporary or permanent loss of access to data or system functionality,
  • Complete termination of system operation, possibly including the permanent loss of certain data.

Depending on the failed component and its replacement cost, the ZID may or may not decide to repair a system. If a repair is undertaken, the time to repair may be significantly longer than for a system under maintenance, resulting in outages that can last for days or even several weeks. We estimate this risk to be relatively low, but we cannot rule it out.

Recommendations and precautions

  • Regularly back up important data in SCRATCH areas. For the Leo systems, data in HOME directories are backed up by the ZID and thus are at significantly lower risk. As with all systems, you may still decide to keep a backup of your own.
  • For the systems operated by external partners (MACH2 and VSC), there is no backup of user data at all. Please safeguard your data by regularly copying important files to another location, e.g. using rsync (see the sketch after this list).
  • All machines are operated on a best-effort basis. Try not to depend on continuous system availability to meet important deadlines or research goals.
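
As a minimal sketch of such a routine copy, the following Python wrapper invokes rsync; the host name, user name, and paths are hypothetical placeholders and must be replaced with your own account details. Running it regularly, e.g. via cron, keeps the backup interval predictable.

    #!/usr/bin/env python3
    """Sketch: routine backup of a SCRATCH area via rsync.

    All paths and the host name below are hypothetical placeholders,
    not actual ZID endpoints; adapt them to your own account.
    Requires rsync to be installed and SSH access to the remote host.
    """
    import subprocess
    import sys

    # Hypothetical remote SCRATCH directory and local backup target.
    SRC = "cXXXyyyy@mach2.example.at:/scratch/cXXXyyyy/project/"
    DST = "/data/backup/project/"

    def backup() -> int:
        # -a preserves permissions, timestamps, and symlinks; -v reports progress.
        # --delete is deliberately omitted so the backup never removes files,
        # even if they disappear from the source.
        cmd = ["rsync", "-av", SRC, DST]
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        sys.exit(backup())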

Maintenance status of individual systems

LEO

Was turned off on 14 August 2015. HOME and SCRATCH data can still be accessed via Leo3 and Leo3e.

LEO2

Was turned off on 20 February 2015. HOME data can still be accessed via Leo3.

LEO3

Essential components are still covered by maintenance as of Q1 2018. Nodes that malfunction may be removed from the Leo3 cluster. End of life: TBD.

LEO3e

In production and covered by full maintenance since October 2015.

LEO4

Fully operational in "Friendly User" test mode as of August 2018. All Leo3 user accounts have been automatically activated for Leo4.

MACH

User operation terminated for University of Innsbruck as of 1 May 2018.

MACH2

Successor system to MACH. Covered by warranty until the end of 2020. NOTE: in contrast to the Leo systems, there is no backup of any user directories, including HOME. Users are responsible for safeguarding their own data.

VSC3

In regular operation and under maintenance since March 2015.