Supercomputer LEO II of the Focal Point Scientific Computing

Applying for user accounts

As this system is being phased out in the near future, we recommend that you apply for accounts on one of our current systems.

Acknowledging cluster usage

Users are required to acknowledge their use of the LEO II compute cluster by assigning all resulting publications to the Focal Point Scientific Computing within the Forschungsleistungsdokumentation (FLD, http://www.uibk.ac.at/fakten/leitung/forschung/aufgabenbereiche/fld/) of the University of Innsbruck, and by adding the following statement to each publication's acknowledgments:

This work was supported by the Austrian Ministry of Science BMWF as part of the UniInfrastrukturprogramm of the Focal Point Scientific Computing at the University of Innsbruck.

Cluster usage

All HPC clusters of the University of Innsbruck hosted at the ZID are subject to a common set of usage regulations, which are summarized in the following subsections. Please take the time to read the guidelines carefully in order to make optimal use of our HPC systems.

First time instructions

See this quick start tutorial to clear the first hurdles after your account has been activated (a minimal command sketch follows the list):

  • Login to the cluster
  • Change your password
  • Copy files to and from the cluster
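The exact host names and recommended clients are given in the quick start tutorial. As a minimal sketch, assuming the standard OpenSSH command line tools and a placeholder host name, these steps look as follows:

    # Log in to the cluster (the host name below is a placeholder;
    # use the address given in the quick start tutorial)
    ssh username@leo2.uibk.ac.at

    # Change your password on the cluster after the first login
    passwd

    # Copy a file from your workstation to the cluster ...
    scp input.dat username@leo2.uibk.ac.at:

    # ... and copy results back to your workstation
    scp username@leo2.uibk.ac.at:results.dat .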

Setting up the software (modules) environment

There are a variety of application and software development packages available on our cluster systems. In order to utilize these efficiently and to avoid inter-package conflicts, we employ the Environment Modules package on all of our cluster systems.

See the modules environment tutorial to learn how to customize your personal software configuration.
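As a brief illustration of the Environment Modules commands (the package name and version below are hypothetical examples; use module avail to see what is actually installed):

    # List all software packages provided as modules
    module avail

    # Load a package into your environment (hypothetical name/version)
    module load intel/11.1

    # Show the currently loaded modules
    module list

    # Remove a package from your environment again
    module unload intel/11.1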

Submitting jobs to the cluster

On all of our cluster systems, the distribution of jobs is handled by the Sun Grid Engine (SGE) batch scheduler.

See the SGE usage tutorial to find out how to appropriately submit your jobs to the batch scheduler, i.e. the queuing system.
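For orientation, a minimal SGE job script might look like the following sketch; the parallel environment and module names are assumptions, so please check the SGE tutorial for the values configured on our systems:

    #!/bin/bash
    # minimal_job.sge - illustrative job script sketch
    # Job name
    #$ -N my_first_job
    # Run the job in the directory it was submitted from
    #$ -cwd
    # Requested wall clock time
    #$ -l h_rt=01:00:00
    # Parallel environment and number of slots ('openmpi' is an assumed PE name)
    #$ -pe openmpi 8

    # Load the required software (module name is an assumption)
    module load openmpi

    # $NSLOTS is set by SGE to the number of granted slots
    mpirun -np $NSLOTS ./my_program

The script is submitted with "qsub minimal_job.sge"; "qstat -u $USER" shows the state of your jobs.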

Status information and resource limitations

To ensure efficient cluster utilization, an optimized workload and, most importantly, a fair share of resources for all of our cluster users, several limitations are imposed on the queuing system; these need to be considered when submitting a job.

See the resource requirements and limitations document to learn how to handle these limitations efficiently.
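As an illustration, resource requests are passed to the scheduler either on the qsub command line or via #$ directives in the job script; h_rt and h_vmem are standard SGE resource attributes, while the concrete limits enforced on LEO II are listed in the linked document:

    # Request 4 hours of wall clock time and 2 GB of virtual memory per slot
    qsub -l h_rt=04:00:00 -l h_vmem=2G my_job.sge

    # Show the state of your own jobs
    qstat -u $USER

    # Show a summary of the cluster queues
    qstat -g c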

Checkpointing and restart techniques

As High Performance Computing (HPC) systems are, by design, not high-availability systems, it is highly recommended to integrate some sort of checkpointing facility into your application in order to avoid job failure and loss of results.

See the checkpointing and restart tutorial for guidance on how to integrate this checkpoint procedure with the SGE batch scheduler.
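As a rough sketch of application-level checkpointing in a job script (the --restart option belongs to a hypothetical user application; the integration with the SGE checkpointing interface itself is covered in the tutorial):

    #!/bin/bash
    # ckpt_job.sge - illustrative sketch of an application-level restart
    #$ -N ckpt_example
    #$ -cwd
    #$ -l h_rt=10:00:00

    # Restart from the latest checkpoint file if one exists,
    # otherwise start the calculation from scratch.
    if [ -f checkpoint.dat ]; then
        ./my_simulation --restart checkpoint.dat
    else
        ./my_simulation
    fi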

Storing your data

Every time you log in to the cluster, all storage areas available to you, i.e. the corresponding directories, together with their usage percentages, are listed before the current important messages. In general, the first two list items are the most important:

  1. Your home directory (also available via the environment variable $HOME) provides you with a small but highly secure storage area, which is backed up every day. This is the place to store your important data, such as source code, valuable input files, etc.
  2. The storage area listed second is the cluster's scratch space, also accessible via the environment variable $SCRATCH. This area provides you with enough space for large data sets and is designed for the cluster's high-speed I/O. Use this storage for writing the output of your calculations and for large input data.
    Please note that the scratch space is designed for size and speed and is therefore not high-availability storage. Make sure to back up important files regularly, as total data loss, though improbable, cannot be ruled out.

Further listed storage areas are mostly intended for data exchange. Please contact the ZID cluster administration if you feel unsure about storage usage.
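A typical pattern, sketched below with a hypothetical program name, is to run the I/O-intensive part of a job in the scratch space and to copy only the results worth keeping back to the home directory:

    # Work in the scratch space, which is designed for high-speed I/O
    cd $SCRATCH
    mkdir -p myrun
    cd myrun
    ./my_program > output.log

    # Copy the results worth keeping back to the backed-up home directory
    cp output.log $HOME/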

Available software packages

On each of our clusters we provide a broad variety of software packages, such as compilers, parallel environments, numerical libraries, scientific applications, etc.

Hardware components

This cluster consists of 126 compute nodes (with a total of 1008 cores), 2 redundant file servers and one login node. All nodes are connected via 4x DDR InfiniBand. The storage system offers 32 TB of SAS storage.
The cluster, an IBM iDataPlex solution, was purchased from and installed by EDV-Design.

[Photo of the leo2 cluster. Photo: Wolfgang Kapferer]

Get some more details about the hardware configuration.

Cluster status and history

13th May 2009 Start of the first production run: MPI job with 1000 parallel processes
11th May 2009 Inauguration of the new supercomputer Leo II
27th April 2009 Launch of test operation with dedicated users
17th March 2009 Delivery and setup of the cluster

Contact

Statement of Service

Maintenance Status and Recommendations for ZID HPC Systems