Mail Announcement to all users of UIBK HPC Systems

Dear users of the Leo2 cluster,

As announced previously (http://www.uibk.ac.at/zid/systeme/hpc-systeme/announcement-december-2014.html), the Leo2 cluster is approaching its end of life and will be decommissioned soon. Here are the details:

  1. We plan to take Leo2 offline on 20 Feb 2015 at 10:00 AM. All operations will be terminated, and all data stored on the Leo2 SCRATCH file system will be destroyed. The Leo2 HOME file system resides on an external file server and will not be immediately affected.

  2. Before 20 Feb 2015, the system will remain in normal operation, and you may use it for regular work. This should give you sufficient time to save all data that you might need after the system has been turned off. Please start saving your data as soon as possible to avoid congestion as the final date approaches.

  3. To help you transfer data that you need to Leo3 (subject to quota), the Leo2 SCRATCH and HOME file systems are accessible on the Leo3 login node under /mnt/leo2/scratch (until 20 Feb 2015) and /mnt/leo2/home (until end of May 2015).

  4. If you have many files and want to transfer archives of your data directly to your local Linux workstation (e.g. onto a removable disk drive), you may - preferably after cleaning up data you no longer need - use commands similar to

      ssh leo2 'tar czf - .' > leo2.cXXXyy.home.tar.gz
      ssh leo2 'cd /scratch/cXXXyy ; tar czf - .' > leo2.cXXXyy.scratch.tar.gz

    Make sure these archives can be read correctly before depending on them.
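Before deleting anything from Leo2, it is worth confirming that an archive can actually be read back. A minimal, self-contained sketch of such a check, using throwaway stand-in data rather than your real home directory:

```shell
# Sketch with stand-in data: "tar tzf" lists an archive's contents, which
# forces a full read and decompression; a zero exit status means the
# archive is readable end to end.
mkdir -p demo && echo "data" > demo/file.txt   # stand-in for your saved data
tar czf demo.tar.gz demo
if tar tzf demo.tar.gz > /dev/null; then
    echo "archive OK"
else
    echo "archive DAMAGED"
fi
```

Applied to the archives above, the check would be, for example, tar tzf leo2.cXXXyy.home.tar.gz > /dev/null (with cXXXyy replaced by your user name).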

Visit http://www.uibk.ac.at/zid/systeme/hpc-systeme/announce-leo2-jan-2014.html to read this message and any possible updates.

The Leo3 cluster will be expanded early in 2015. Due to delays in the delivery of the new components, we will have to operate at reduced computing capacity for some time. Please consider applying for VSC3 test access. See http://www.uibk.ac.at/zid/systeme/hpc-systeme/vsc/ for details.

Feel free to contact us if you have any questions or need assistance with saving your data.

With kind regards, your ZID HPC team.