HPC Systems of the ZID - Statement of Service Levels for Systems

Stable IT services require adequate protection against outages and loss of data. To ensure continued operation and to minimize the risk of data loss, essential components of our HPC clusters are designed in a redundant manner, ensuring that certain defects (such as an individual disk failure) will not affect availability of data or services. For a period of several years after acquisition of each system, service contracts allow timely replacement of failed components, keeping the entire system in a trouble-free state of operation.

To maximize the benefit from its investments, the ZID will try to continue operation of existing systems for some time even after our maintenance contracts have expired. If you use our systems, you should understand the implications and risks of this strategy, so you will be able to plan your research projects accordingly.

In the following paragraphs, we inform you about the current and planned maintenance status of each of our systems, and what it means to use systems that are no longer under maintenance.

Risks of using systems not under maintenance

Modern hardware is relatively stable. When the failure probability of an individual component is multiplied by the number of machines in a cluster, however, the result is a significant overall failure rate, at least for certain components.

An individual failure may, depending on its nature, result in one or more of the following consequences:

  • Unplanned malfunction or termination of individual jobs,
  • Reduction of processing capacity (loss of individual nodes),
  • Downgraded communication bandwidth,
  • Temporary or permanent loss of access to data or system functionality,
  • Complete termination of system operation, possibly including the permanent loss of certain data.

Depending on the failed component and its replacement cost, the ZID may or may not decide to repair a system. If a repair is attempted, the time to repair may be significantly longer than on a system under maintenance, resulting in outages that may last for days or even several weeks. We estimate this risk to be relatively low, but we definitely cannot rule it out.

Recommendations and precautions

  • Regularly back up important data, particularly in temporary file systems such as SCRATCH. For the Leo systems only, data in HOME directories are backed up by the ZID and thus are at significantly lower risk. As with all systems, you may still decide to keep a backup of your own.
  • For the systems operated by external partners (MACH2 and VSC), there is no backup of user data at all. Please safeguard your data by regularly copying important files to your own storage media, e.g. using rsync.
  • All machines are operated on a best-effort basis. Try not to depend on continuous system availability to meet important deadlines or research goals.

Maintenance status of individual systems

Existing LEOx user accounts are valid on all LEO clusters.

LEO3

LEO3 is being run in Legacy Mode and may permanently go out of operation upon certain failures at any time. Please do not start any new projects on LEO3. End of life: second half of 2021; it will be replaced by the planned LEO5 system.

LEO3e

In production since October 2015. Full maintenance coverage runs until the end of Q1 2021 and will prospectively be extended through the end of 2021.

LEO4

In production since October 2018, with warranty maintenance until the end of 2021.

MACH2

Successor system to MACH. Warranty expired at the end of 2020. The system is operated on a best-effort basis and may become degraded or go out of operation at any time. No successor system is currently planned. NOTE: in contrast to the other systems, there is no data backup of any user directories, including HOME. Users are responsible for safeguarding their own data.

VSC3/VSC3+

In regular operation including maintenance since March 2015.

VSC4

In regular user operation since May 2020.
