MACH: collaborative system of the Universities Innsbruck and Linz

MACH represents the first high performance computing system of the Austrian Center for Scientific Computing (ACSC) and is a collaborative effort of the Universities of Innsbruck and Linz. It comprises one large shared memory system with 2048 Intel Xeon (Westmere) cores and a total of 16 TB of main memory. The nodes are joined by SGI's NUMAlink 5 interconnect.

User account application

Please proceed with the following steps if you intend to get an account for this cluster:

  1. As the computer center operates several other high performance computing machines, it is recommended to consult the system administrators beforehand to evaluate whether this is the right system for your needs.
    Note: In case you need more scratch space, please contact the HPC staff directly.
  2. Please download and fill in this application form. You need a user account name corresponding to one of the University's institutes (no student and no external "x..." accounts). By default, all new accounts will be created as power-user accounts with the corresponding service parameters of the system. If you are unsure about how to fill in the form, the ZID HPC staff will gladly assist you.
    1. If there is a representative of the Research Area Scientific Computing within your field of research, ask this person to confirm the feasibility of your project by signature. If no appropriate representative is available, proceed directly to step 2.
    2. Each application needs to be confirmed by the signature of the Head of the Research Area Scientific Computing.
  3. Once the application form has been filled in and signed by the Head of the Research Area, please contact the ZID HPC staff to arrange an appointment for a usage briefing of about half an hour, in which we provide basic usage instructions and information about the HPC system.
    The application form will be sent to the ZID HPC Team (Technikerstrasse 23, A-6020 Innsbruck) by the Research Area. Alternatively, you may take it with you to the arranged appointment.
  4. After all the preceding steps have been performed, it usually takes one business day to set up your account with the ZID User Services (ZID Benutzerservice).

Acknowledging system usage

University of Innsbruck users are required to acknowledge their use of MACH by assigning all resulting publications to the Research Area Scientific Computing within the Forschungsleistungsdokumentation (FLD) of the University of Innsbruck, and by adding the following statement to the acknowledgments of each publication:

The computational results presented have been achieved (in part) using the HPC infrastructure of the University of Innsbruck.

System usage

The HPC systems of the University of Innsbruck, which are hosted at the ZID, comply with a common set of usage regulations, summarized in the following sub-sections. Please take the time to read the guidelines carefully in order to make optimal use of our HPC environment.

First time instructions

See this quick start tutorial to clear the first hurdles after your account has been activated:

  • Login to the system
  • Change your password
  • Copy files from and to the system
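The three steps above can be sketched as a short terminal session. Note that the login hostname and user name below are placeholders, not the actual values; use the address provided to you by the ZID HPC staff.

```shell
# Session sketch (shown as comments, since these commands talk to the
# remote system; hostname and user name are placeholder assumptions):
#
#   ssh myuser@<mach-login-host>      # log in to the system
#   passwd                            # change your password after first login
#
# Copy files with scp from your local workstation:
#
#   scp input.dat myuser@<mach-login-host>:      # local -> MACH
#   scp myuser@<mach-login-host>:results.dat .   # MACH -> local
```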

Setting up the software (modules) environment

There are a variety of application and software development packages available on our systems. In order to utilize these efficiently and to avoid inter-package conflicts, we employ the Environment Modules package on all of our HPC systems.

See the modules environment tutorial to learn how to customize your personal software configuration.
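As a quick orientation, the most common module subcommands are sketched below. The package name used here is an illustrative assumption; run `module avail` on the system to see what is actually installed.

```shell
# Typical Environment Modules commands (non-executable sketch;
# the package name "intel" is an illustrative assumption):
#
#   module avail            # list all available software packages
#   module load intel       # add a package to your environment
#   module list             # show currently loaded modules
#   module unload intel     # remove a package from your environment
#   module purge            # start over with a clean environment
```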

Submitting jobs to the system

On MACH, the distribution of jobs is handled by the PBS Professional (PBS) batch scheduler.

See the PBS usage tutorial to find out how to appropriately submit your jobs to the batch scheduler, i.e. the queuing system.
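As a minimal sketch, a PBS Professional job script and its submission might look as follows. The resource request (one chunk of 8 CPUs, one hour of walltime), the job name, and the program name are illustrative assumptions; consult the PBS usage tutorial for the site-specific queue names and limits.

```shell
# Write a minimal PBS Professional job script. The resource selection
# and walltime below are illustrative assumptions.
cat > myjob.pbs <<'EOF'
#!/bin/bash
#PBS -N example_job
#PBS -l select=1:ncpus=8
#PBS -l walltime=01:00:00
#PBS -j oe

# PBS starts the job in your home directory; change to the
# directory the job was submitted from.
cd "$PBS_O_WORKDIR"
./my_program
EOF

# Submit the script and monitor the job (shown commented, as these
# commands only work on the cluster itself):
#   qsub myjob.pbs
#   qstat -u "$USER"
```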

Storing your data

Every time you log in to the system, all storage areas available to you, i.e. the corresponding directories and their usage percentages, are listed right before the current important messages. In general, the first two list items are of major importance:

  1. Your home directory (also available via the environment variable $HOME) provides you with a small but highly secure storage area, which is backed up every day. This is the place to store your important data, such as source code, valuable input files, etc.
  2. The second listed storage area is the system's scratch space, also accessible via the environment variable $SCRATCH. This area provides you with enough space for large data sets and is designed for the system's high speed I/O. Use this storage for writing the output of your calculations and for large input data.
    Please note that the scratch space is designed for size and speed and is therefore not high-availability storage. Make sure to secure important files regularly, as total data loss, though improbable, cannot be excluded.

Further listed storage areas are mostly for data exchange purposes. Please contact the ZID cluster administration, if you feel unsure about storage usage.
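A quick way to check the two main storage areas from the shell is sketched below; $SCRATCH is set for you at login on MACH, so on other machines the second line may report that it is unset.

```shell
# Inspect the two most important storage areas via their
# environment variables.
echo "Home directory: $HOME"                    # small, backed up daily
echo "Scratch space:  ${SCRATCH:-<not set>}"    # large, fast, NOT backed up

# A simple safeguard: copy valuable results from scratch back home
# (file and directory names here are illustrative assumptions):
#   cp "$SCRATCH/results.dat" "$HOME/project/"
```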

Hardware components

The system is an SGI Altix UV 1000, set up as one maximum-sized single system image. MACH represents one large shared memory system with 2048 Intel x86_64 cores and a single (SLES 11) OS instance. Technically, it consists of 128 physical two-socket nodes. Each socket contains an 8-core Intel Xeon E7-8837 processor. The nodes are connected through SGI's NUMAlink 5 interconnect.

From a software perspective, each core corresponds to one CPU, and each socket corresponds to one so-called memory-node. So when you request 16 CPUs from the PBS batch system, your job will be assigned two physical sockets (= 2 memory nodes) with a total of 16 cores (= CPUs).
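The CPU-to-socket mapping described above amounts to rounding a CPU request up to whole sockets of 8 cores each; a minimal sketch of that arithmetic:

```shell
# With 8 cores per socket, PBS assigns whole sockets (memory nodes),
# so a CPU request is effectively rounded up to a socket multiple.
ncpus=16
cores_per_socket=8
sockets=$(( (ncpus + cores_per_socket - 1) / cores_per_socket ))
echo "Requesting $ncpus CPUs -> $sockets memory nodes"
```

For 16 requested CPUs this yields 2 memory nodes, matching the example in the text; a request for, say, 12 CPUs would likewise occupy 2 full sockets.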

The RAID storage system offers 58 TB of storage space in one big XFS file system.

Technical usage information for Mach

The document Mach - Special Considerations contains more detailed technical information relevant to practical usage of the system.

Topics covered:

  • Using CPU and memory: how to avoid overloading the system
  • How to use MPI with SGI's message passing implementation
  • How to pin processes to individual CPUs
  • Using Intel's Math Kernel Library
  • Monitoring your processes
  • Interactive usage
  • Correct parallel usage of Matlab and Mathematica


Statement of Service

Maintenance Status and Recommendations for ZID HPC Systems
