Table of contents
1. What is Conda? Why Conda?
Python and R are known for large ecosystems of packages, often requiring specific combinations of versions of the base language (Python, R) and the packages needed for proper functionality. System-wide installations typically limit users to just one, often outdated, version of the base language, rendering mechanisms such as Python Virtual Environments (venvs) inadequate for many purposes.
Package management systems such as Conda remove these restrictions. Users can install arbitrary combinations of base language and package versions into cleanly separated Conda Environments. When a Conda Environment is installed, Conda downloads the desired version of the base language and all requested packages (and their dependencies) from one or more channels of the Anaconda.org repository, hosted on repo.anaconda.com. The resolver built into Conda ensures that selected package versions are compatible with each other. Before running a program, a user activates the respective Conda Environment, ensuring that all software components are invoked from this particular environment.
Go to the Conda Documentation to get an overview of Conda functionality and ecosystem.
With respect to licensing and software maintenance, we distinguish between two sets of channels, both available from Anaconda.org:
- The so-called Anaconda Main Channel (also called defaults channel). This channel is subject to the new Anaconda Licensing Terms. It has sub-channels (main and r) for Python (+utilities) and R. A particular highlight of this channel is the availability of the Anaconda Metapackage, which is a comprehensive and curated collection of Python, R, and more than 500 associated packages, intended for use by scientists. Anaconda is typically less up-to-date than the other channels, but the Anaconda team invests extra effort to ensure a high degree of reliability and compatibility of components.
Prior to the introduction of the new licensing terms in 2024, the Python and R installations on our Leo systems were based on the Main Channel. After the change, we stopped deploying from this channel. The University of Innsbruck has licensed Anaconda for academic use. The licensing terms clearly permit educational use but are somewhat unclear about use for academic research. So, before installing software from the Anaconda channels, please carefully review the Anaconda Licensing Terms. In case of doubt, use the free channels described below. If you are collaborating with any non-academic entities (e.g. corporations or even state-funded non-academic institutions), you DEFINITELY MUST set up a licensing agreement with Anaconda OR use the free channels.
Anaconda.com provides the Miniconda installer, which defaults to the defaults channel and allows creation of arbitrary Conda environments as well as installation of the complete Anaconda metapackage. Please note that Anaconda monitors these installations, so absolutely make sure that your usage of this software complies with the licensing terms.
- The free channels, including the conda-forge, Bioconda, and Nvidia channels. These provide (often more up-to-date) versions of innumerable packages from a variety of sources. Packages are maintained and curated by the community and are entered into the repository following a controlled build process.
The Miniforge installer is the free (BSD 3-clause license) version of the Conda installer and can be found on Github. For details, refer to the Conda Documentation page. The Miniforge installer defaults to the conda-forge channel but can install packages from arbitrary channels. It will install a Conda base environment (including the conda command), which can be enhanced by installing the mamba package.
The Micromamba installer downloads a standalone version of mamba, which needs no base environment; it can be found on the Mamba homepage.
- Additionally, both installers can download a customized version of the pip utility to enable installation of packages missing from the Conda channels from the Python Package Index (PyPI). The Conda version of pip will install packages into your active Conda environment, maintaining the integrity and separation of your installations. Note that, while PyPI offers far more packages than the Conda channels, PyPI relies on software authors for quality management, so you as a user are responsible for verifying safety and code quality before installation.
Note that beginning in 2021, conda-forge maintainers have ceased to maintain compatibility with Anaconda's default channel, and as of 2024, the default channels are incompatible with conda-forge. If you need packages that are not in the defaults channel, install all of your packages using conda-forge or the other free channels.
1.1 Pre-Installed Conda Modules and Environments
As of February 2026, after a hiatus of three years caused by the licensing uncertainties, we are again installing Conda environments for Python and R from the free channels, using the Micromamba installer.
WARNING: Do not use the Anaconda Main Channels (main and r) unless you are absolutely sure that your use is covered by the Academic License issued to the University of Innsbruck. Review the Anaconda Licensing Terms for details. If you are collaborating with any non-academic entities (e.g. corporations or even state funded non-academic institutions), you DEFINITELY MUST set up a licensing agreement with Anaconda OR use the free channels.
Discover all our pre-installed versions of Conda modules by issuing
$ module avail Micromamba
or enter
$ module avail Micromamba/2026.02
to get a list of the newest components. What follows is a quick overview of preinstalled environments.
micromamba-base-2026.02
Base environment - your starting point for installing your own environments. Contains no packages.
python-3.14.3-paramamba-mkl-2026.02
python-3.14.3-paramamba-mkl-mpi-2026.02
python-3.14.3-paramamba-openblas-2026.02
python-3.14.3-paramamba-openblas-mpi-2026.02
The UIBK Paramamba environments are intended to provide a user experience similar to the Anaconda metapackage, but from the conda-forge channel and thus without Anaconda's licensing restrictions. They replace the python-...-anaconda-... and python-...-numpy-... modules of our previous Anaconda-based installations. Each Paramamba environment contains approx. 500 packages, among them NumPy, SciPy, Pandas, and Matplotlib; frontends including IPython and JupyterLab; compilers/optimizers (e.g. Cython, Numba); and many more packages for science and data analysis. They differ only in the choice of MKL or OpenBLAS implementations of BLAS and in whether MPI support for distributed jobs using Slurm is included.
To use MPI on Leo5, you must use the srun --mpi=pmi2 command to start your parallel tasks across multiple nodes.
python-3.14.3-spyder-6.1.3-paramamba-mkl-2026.02
The Spyder IDE requires an older version of the IPython kernel and has therefore been installed into a separate environment.
python-3.14.3-pandas-2.3.3-paramamba-mkl-2026.02
The Paramamba environments above already contain the recently released Pandas version 3, which introduces breaking changes. This environment provides Pandas version 2.3, which is compatible with older scripts and issues helpful warnings to prepare for the upgrade to Pandas 3; it is otherwise equivalent to python-3.14.3-paramamba-mkl-2026.02.
python-3.11.6-pytorch-2.5.1-cuda-2026.02
python-3.14.3-pytorch-2.10.0-2026.02
A collection of packages including PyTorch. The pytorch-2.5.1-cuda version contains CUDA integration and can run on CPUs and GPUs; the more recent pytorch-2.10.0 version supports CPUs only.
r-4.3-conda-2026.02
r-4.4-conda-2026.02
The R statistics software with approx. 500 R libraries and the RStudio IDE for R.
...
More environments can easily be added to your own account (see 2.2 Extending Conda For Your Own Needs) or installed centrally upon request if of sufficient general interest.
The existing Anaconda3 modules were installed prior to Anaconda's license change and are still available to prevent disruptions of your research. The supplied miniconda-base-yyyy.mm environments default to the Anaconda Main Channels and use obsolete versions of the Conda installer. Do not use the old Anaconda modules to install new environments.
2. Using Conda On The Leo Systems
The preinstalled environments for Python and R are quite comprehensive and may satisfy the needs of many users interested in general purpose computing and data analysis. Learn how to use them in the following section 2.1 on Using Pre-Installed Environments.
If you need to install your own packages or want full control over the selection and versions of packages, proceed to section 2.2 Extending Conda For Your Own Needs.
2.1 Using Pre-Installed Environments
To discover which Conda environments are installed, issue the command
$ module avail Micromamba
For new projects, usually the most recent version is appropriate. The existing Anaconda3 modules (list with module avail Anaconda3) are being retained for compatibility and should only be used to continue with existing projects.
To use any of the Conda environments, upon login or at the beginning of a batch job, first issue
$ module load Micromamba/yyyy.mm/module-name-yyyy.mm
This will let you use the software provided with the respective module, and a restricted version of the conda command.
After loading your module, issue
$ conda list
to see which packages are available in the current environment.
To improve discoverability, these modules are also cross-linked from the Application-Software section under python and R. They can be identified by the suffix conda-yyyy.mm in their names. Example:
$ module load python/3.14.3-paramamba-mkl-conda-2026.02
is equivalent to
$ module load Micromamba/2026.02/python-3.14.3-paramamba-mkl-2026.02
2.2 Extending Conda For Your Own Needs: Create and Install Your Own Environments
If the pre-installed environments do not meet your needs, you should create one or more Conda environments and install your desired packages into these environments.
Conda environments are similar to Python virtual environments, but extend their functionality (including free choice of Python and R versions) and completely replace them. Mixing these two types of environment can lead to conflicts and inconsistencies. So if you use any Conda installation, it is best to stick consistently with Conda's package management (i.e. the conda command and, if needed, Conda's version of pip) to avoid these problems.
2.2.1 Migrating From Anaconda to Free Channels
$HOME/.conda symlink
If you have used our now-legacy Anaconda installation, your $HOME/.conda should be a symlink to your existing $SCRATCH/.conda directory to prevent overflow of your $HOME quota. Unless configured otherwise in your .condarc file (see below), the new Micromamba installation automatically installs all packages and environments to $SCRATCH/.micromamba, so this symlink is no longer necessary.
$HOME/.condarc configuration file
If you have installed Anaconda-based environments, your $HOME/.condarc configuration file will likely contain references to the Anaconda channels. Example:
OBSOLETE:
channels:
# - defaults \
# - https://repo.anaconda.com/pkgs/main > ELIMINATE THESE CHANNELS!
# - https://repo.anaconda.com/pkgs/r /
The channels above should be removed. To start cleanly, use the following template for $HOME/.condarc:
channels:
- conda-forge
- nodefaults
channel_priority: strict
If you routinely use other channels (such as bioconda), add them to the channels list.
Conda Shell Extensions
Our previous documentation included activation of the Conda shell extensions after loading the Anaconda module.
The previously recommended command
eval "$($UIBK_CONDA_DIR/bin/conda shell.bash hook)"
is now obsolete. Our Micromamba installation loads the shell extension automatically, so this command is no longer necessary and should be removed from your shell scripts and batch files.
2.2.2 At the Beginning of Each Shell Session or Batch Job
All of the following requires that you load the Micromamba/micromamba-base module. Note that other Micromamba modules supply the conda command with reduced functionality and cannot be used to build reliable environments, even if at first glance everything may seem to work.
$ module load Micromamba/yyyy.mm/micromamba-base-yyyy.mm
This loads and activates the Micromamba base environment and its shell integration. The
eval "$($UIBK_CONDA_DIR/bin/conda shell.bash hook)"
command is no longer necessary. After loading the Micromamba module, micromamba (aliases conda and mamba) is a shell function which allows manipulation of your session's environment variables, creation, maintenance, and activation of Conda environments.
Note:
- Never use conda init because this will modify your $HOME/.bashrc in an undesirable way that breaks the functionality of our supplied environments.
- To obtain a list of all environments (preinstalled and your own), issue the command
$ conda env list
- To obtain a list of installed packages in the currently active environment, issue
$ conda list
- To obtain a list of packages in any given environment, issue
$ conda list -n environment
For more information, consult the Conda and Mamba documentation pages.
2.2.3 Steps To Create And Install Your Own Environments
To keep installations apart and minimize possible version conflicts, we recommend creating separate environments for different projects requiring disparate packages.
Note:
- Never try to install packages into your base environment, always start by creating a new environment.
The steps described below will yield reliable results only if you load one of our provided micromamba-base environment modules.
As a basic rule, try to install as many packages as possible using conda to get optimized versions. The remaining packages can be installed into your environment using the Conda version of Python's pip command or R's install.packages() function (see below).
For all of the following, first, load and activate the Micromamba base environment as described under 2.2.2 At the Beginning of Each Shell Session or Batch Job.
2.2.4 Create Your New Conda Environment
You typically should have a good notion of the packages necessary to run your software, either from the prerequisites section of the software documentation, or from the import statements of your programs.
Given a tentative list of required packages, create your new environment:
$ conda create -n myenvironment package1 [package2 ...] [-c channel] ...
Substitute a suitable name for myenvironment and, if needed, specify the necessary channels with -c. Your environment will be created in $SCRATCH/.micromamba/envs/myenvironment, and all packages requested on the command line are installed into your new environment.
Then activate your environment
$ conda activate myenvironment
and try to run your software. You may also want to explicitly look for missing components by trying to invoke them:
For all needed shell commands name (e.g. ipython), issue
$ which name
and note which commands have not been found.
For Python packages, start python and issue
>>> import name
statements and note which packages cannot be found.
Likewise proceed for R packages with
> library('name')
statements.
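The manual checks above can also be scripted. The following sketch (the candidate module names are illustrative; substitute the imports your own programs actually need) uses Python's importlib to report which modules of a candidate list are missing from the active environment:

```python
import importlib.util

def missing_modules(names):
    """Return the subset of names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Illustrative candidate list -- replace with your own requirements.
if __name__ == "__main__":
    print("missing:", missing_modules(["numpy", "pandas", "scipy"]))
```

Run this inside the activated environment; any names it prints are candidates for the conda search step described in the next section.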
2.2.5 Identify Missing Packages Which Can Be Installed By Conda
For each required/missing package name, issue
$ conda search name
Take note of whether name was found. When your list is complete, add the new packages to your existing environment:
$ conda install -n myenvironment package1 [package2 ...] [-c channel] ...
Unfortunately, the channels used are not automatically stored with the environment, so any -c channel arguments must match exactly those you used when you created the environment. At the end, you may want to redo your installation with the complete set of packages to ensure reproducibility:
$ conda deactivate
$ conda env remove -n myenvironment
$ conda create -n myenvironment package1 [package2 ...] [-c channel] ...
$ conda activate myenvironment
While it is possible to incrementally install more packages after creating and activating an environment, installing all packages at creation does a better job at avoiding version conflicts.
Repeat this process until no new packages can be installed.
If any Python modules or R libraries are still missing after this process (i.e. they could not be installed using Conda), you will need to install them using the conda-specific version of pip (Python) or install.packages() (R) into your activated Conda environment.
2.2.6 Add Missing Components Using Pip
If, after installing required packages using Conda, any packages are still missing, these can be installed into your Conda environment using Conda's version of pip.
First, activate your environment if you have not done so
$ conda activate myenvironment
and use Conda to install pip and Conda's compiler environment:
$ conda install pip gcc_linux-64 gxx_linux-64 gfortran_linux-64
The compilers are necessary because the OS compiler versions may be inconsistent with Conda's expectations, possibly leading to compile or runtime errors.
Then, do not create a Python virtual environment (as you would normally do outside Conda), but simply use Conda's pip to install the remaining packages into your active Conda environment:
$ pip install package1 [package2 ...]
This may result in a large number of dependencies being installed automatically, many of which could have been installed by Conda instead. You can find out which by issuing conda search commands for each package added by pip. To proceed, add the list of packages that pip would install to your list of candidate Conda packages, then destroy your existing environment:
$ conda deactivate
$ conda env remove -n myenvironment
... and repeat the above Conda installation with your enhanced list. Often, this leaves very few packages to be installed with pip, making optimal use of Conda's performance optimizations.
Note
After using pip in a given Conda environment for the first time, you should no longer use Conda to install more packages into that environment. Should this become necessary, simply start over by creating a new Conda environment and proceeding as described above.
2.2.7 Final Cleanup
When you are done installing Conda packages, you may, with your environment still active, use
$ conda clean --all --yes
to remove unneeded installation material from your environments.
Note that you are responsible for backup of your own data in $SCRATCH. Since it is easy to recreate environments, it is usually sufficient to record the necessary steps for successfully creating your environment(s).
For exact reproducibility, use conda env export to save a description of your environment to a YAML file, e.g.
$ conda env export -n my-environment -f my-environment.yaml
and copy the output file to a safe location. When you later need to recreate your environment from the YAML file, use a conda env create command like
$ conda env create -n my-environment -f my-environment.yaml
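For reference, an exported environment file has roughly the following structure. The names, versions, and PyPI package below are made up for illustration; your exported file will list the exact packages and builds of your environment:

```yaml
name: my-environment
channels:
  - conda-forge
  - nodefaults
dependencies:
  - python=3.14.3
  - numpy=2.3.1
  - pip:
      - some-pypi-only-package==1.0
```

Packages installed via Conda's pip appear under the pip: key, so both kinds of installation are captured in one file.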
2.2.8 Create Your Own Paramamba Based Environments
If you would like to create environments similar to our preinstalled Paramamba environments, you will find the dependencies used to build our environments in this document.
2.3 Backups
3. Various Hints
3.1 Alternate Installers
Our current Conda installation uses the Micromamba installer, which is very fast and has matured enough to be used in production environments.
As described at the top of this page, there are other installers, in particular Miniconda and Miniforge. If you download or use these installers, make sure that you consistently use the same installer for each existing environment. Mixing installers for a given environment may easily clobber your environment, forcing you to start over.
3.2 Alternate Channels
The conda-forge repository, which is our default, offers a huge selection of packages.
If your software cannot be installed from the conda-forge repository, you may want to add more channels using the -c channel argument for conda create:
- bioconda contains many Bioinformatics packages
- nvidia and pytorch offer packages that can use GPUs
- The Anaconda default channels main and r contain curated collections of packages also found on conda-forge, typically less up-to-date, tested more thoroughly, and subject to Anaconda's licensing restrictions. We do not recommend using these channels unless you are certain that your use does not infringe the licensing terms (see above).
As a final resort, use pip to install packages not found on the Conda channels as described above.
3.3 Sample Job Fragment
If you have created your own environment(s), you may use the following commands as a template
module load Micromamba/yyyy.mm/micromamba-base-yyyy.mm
conda activate myenvironment
3.4 Using MPI With Python
Slurm clusters only. Conda's OpenMPI implementations do not integrate well with Slurm. We have installed and recommend the MPICH implementation instead.
3.4.1 Using MPI With Pre-Installed Environments
Several pre-installed environments contain a functional MPI integration. These have the string -mpi in their names. Please see 1.1 Pre-Installed Conda Modules and Environments above, then proceed to the section 3.4.3 Starting MPI Programs on Slurm Clusters below.
3.4.2 Using MPI With Your Own Environments
If you need to create your own environments, please include the following into your conda create command:
$ conda create -n myenv [your packages ...] mpi4py mpi=*=mpich
3.4.3 Starting MPI Programs on Slurm Clusters
In both cases, Conda's mpirun command is only able to start processes on your local node, but does not correctly manage CPU and memory allocation. To start MPI processes, use the following Slurm command in your batch job:
srun --mpi=pmi2 [more options] your-script.py [...]
Omitting the --mpi=pmi2 option will cause all MPI tasks to be incorrectly started with rank 0, so please do not forget to include this option in your batch scripts.
3.4.4 Using Mpi4Py Futures on UIBK Slurm Clusters
mpi4py.futures is an MPI-enabled implementation of Python's concurrent.futures module. It is capable of dynamic process management; however, due to Slurm's static resource allocation, this capability is of no practical use here. The following fragments show how to correctly start mpi4py.futures code using the resources allocated by Slurm.
Template Python program myprogram.py
#!/usr/bin/env python3
from mpi4py.futures import MPICommExecutor

def myworker(arg):
    # do work on the individual arg
    # note that this function must be defined in both __main__ and __worker__
    return result

if __name__ == '__worker__':
    # optionally put code here that is specific to the workers
    pass

if __name__ == '__main__':
    args = iterable  # input arguments for myworker
    with MPICommExecutor() as executor:
        for arg, result in zip(args, executor.map(myworker, args)):
            pass  # process result
Inside a Slurm allocation or batch job, start this code using the construct
srun --mpi=pmi2 python3 -m mpi4py.futures ./myprogram.py
Note that mpi4py.futures has to be specified both in the program and on the command line. Direct invocation of the script without python3 -m mpi4py.futures would cause all tasks to be incorrectly started in the __main__ namespace.
The worker function is run concurrently with as many processes as started by srun, and the executor loop body processes the results as they become available. Results are returned in-order unless the keyword argument unordered=True is passed to map.
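For testing your worker logic locally without MPI, the same map pattern can be sketched with Python's built-in concurrent.futures. The squaring worker below is an illustrative stand-in, and a thread pool is used so the sketch runs on any workstation, whereas mpi4py.futures distributes the work across MPI processes started by srun:

```python
from concurrent.futures import ThreadPoolExecutor

def myworker(arg):
    # stand-in for real per-argument work
    return arg * arg

def run(args):
    results = []
    with ThreadPoolExecutor(max_workers=2) as executor:
        # like mpi4py.futures' map (without unordered=True),
        # executor.map returns results in input order
        for arg, result in zip(args, executor.map(myworker, args)):
            results.append((arg, result))
    return results

if __name__ == "__main__":
    print(run(range(5)))  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
```

Once the worker behaves as expected locally, move it back into the MPICommExecutor template above and submit via srun.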
3.5 More On Environments
- You may also create a Conda environment in a non-standard location using conda create -p path/to/env. Such environments will not be listed by conda env list, so you need to remember their locations yourself.
- Environments do not nest. While conda deactivate takes you back to a previously activated environment, conda activate newenv will replace the currently active environment with newenv.
- For details, see the Conda Managing environments documentation.
3.6 Using Your PC To Display a Jupyter Notebook Running On A Server
You can start a JupyterLab process on a Leo login node or in a batch job and display its GUI in a browser window on your PC.
3.6.1 Interactive Session on a LEO Login Node
The following instructions work for both a local Windows and a local Linux workstation.
- On your PC start an SSH session to your selected Leo login node.
Linux: start a terminal window and enter
ssh [user@]leon
Windows: connect to a LEO login node using PuTTY.
- In the Leo SSH session:
- load Micromamba (module load Micromamba/xxxxxx) and activate your environment as needed.
- Start JupyterLab:
jupyter-lab --no-browser
- JupyterLab will display several URLs. Identify the following URL:
http://localhost:88xy/lab?token=zzzzzzzzzzzzzzzzzzzzz
- On your PC, start another terminal window (Linux), a WSL window (if WSL is installed), or a CMD window (Windows).
- In this window, set up an SSH tunnel to your JupyterLab session:
ssh -N -L localhost:9001:localhost:88xy user@leon
The port number 9001 is arbitrary. Pick any port which is not in use. If you want to have several JupyterLab sessions, we suggest using sequential numbers 9001, 9002, ....
- Start a new browser window on your PC.
- Copy and paste the above URL into your browser's address field, replacing the port number 88xy with your local port number (e.g. 9001). Verify that your URL looks like
http://localhost:9001/lab?token=zzzzzzzzzzzzzzzzzzzzz
and hit Enter. Your Jupyter session should be displayed.
3.7 Using Your PC To Display a Jupyter Notebook Running in a Slurm or SGE job
You can also start a Jupyter kernel in a Leo batch job and display its dialog in a browser window on your PC.
3.7.1 Sample Job SGE (LEO3e, LEO4)
#!/bin/bash
#$ -q std.q
#$ -N jupyter
#$ -pe openmp 2
#$ -l h_vmem=4G
#$ -j yes
#$ -l h_rt=4:00:00
cat $0
module purge
module load Micromamba/2026.02/python-3.14.3-paramamba-mkl-2026.02
echo "START: $(date)"
echo ${USER}@$(hostname)
jupyter-lab --no-browser
echo "END: $(date)"
3.7.2 Sample Job Slurm (LEO5, LCCn)
#!/bin/bash
#SBATCH -J jupyter
#SBATCH --export=NONE
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --threads-per-core=1
#SBATCH --hint=nomultithread
#SBATCH --mem=8G
#SBATCH --time=04:00:00
cat $0
module purge
module load Micromamba/2026.02/python-3.14.3-paramamba-mkl-mpi-2026.02
echo "START: $(date)"
echo ${USER}@$(hostname)
srun --export=ALL --cpus-per-task=$SLURM_CPUS_PER_TASK --unbuffered jupyter-lab --no-browser
echo "END: $(date)"
Adapt either of the sample job scripts according to your needs. Using srun to start JupyterLab ensures that CPU affinity is limited to the CPUs you actually requested; otherwise, your job may use multiple hardware threads even if you requested one thread per core. The --unbuffered option of the jupyter-lab command in the Slurm example is optional and speeds up the display of job output.
Then proceed as follows:
- On your PC start an SSH session to your selected Leo login node.
Linux: start a terminal window and enter
ssh [user@]leon
Windows: connect to a LEO login node using PuTTY.
- Submit your job script.
- Wait until your job has started, then look at its output. It should contain the user@host close to the top, and a URL of the form
http://localhost:88xy/lab?token=zzzzzzzzzzzzzzzzzzzzz
close to the bottom.
- On your PC, start another terminal window (Linux), a WSL window (if WSL is installed), or a CMD window (Windows).
- In this window, set up an SSH tunnel to your JupyterLab session on the worker node nxxx, using the login node leon as a jump host:
ssh -N -L localhost:9001:localhost:88xy -J user@leon user@nxxx
The port number 9001 is arbitrary. Pick any port which is not in use (if it is, you'll get an error message). The user@ part is optional if your local user name is the same as on the remote machine. If you want to have several JupyterLab sessions, we suggest using sequential numbers 9001, 9002, ....
- Start a new browser window on your PC.
- Copy and paste the above URL into your browser's address field, replacing the port number 88xy with your local port number (e.g. 9001). Verify that your URL looks like
http://localhost:9001/lab?token=zzzzzzzzzzzzzzzzzzzzz
and hit Enter. Your Jupyter session should be displayed.
3.8 Recommendations and Caveats
- Should I run my JupyterLab session on the login node or in a batch job?
This depends on what you are planning to do in your session: If you are doing development work with long idle periods and only brief calculations, use the login node. If you do production work with substantial CPU and memory usage, start JupyterLab in a batch job.
- Please do not forget to terminate your Jupyter session (Menu "File/Shutdown") and the SSH tunnel (CTRL-C) after use. Remember that an idling job blocks resources from productive use.
- GPUs are a particularly scarce resource and cannot be easily shared. If your code uses GPUs (e.g. tensorflow or pytorch), make sure that it does not grab both GPUs on the login node, and also make sure to terminate your JupyterLab session when you are done.
- TBD: ensure resource usage enforcement in batch job.
4. Documentation and Notes
4.1 License Information
- Use of the conda-forge channel is free (BSD 3 clause license).
- Use of the bioconda channel is free (MIT license).
- Use of Anaconda (channels main and r) is subject to the Anaconda Terms of Service. The University of Innsbruck has an Academic License. Before using these channels, please carefully review the license terms to ensure your compliance.
4.2 Anaconda And Conda Web Sites
4.3 Python 2 Legacy Code
We no longer support Python2 because it has been obsolete since January 2020. See the "Sunsetting Python 2" article for background information.
If you still have legacy code written in Python 2, it will likely be possible to automatically convert large portions of it using tools such as 2to3 (which has been deprecated as of Python 3.11). Since Python 2 and Python 3 have a few semantically undecidable incompatibilities (e.g. string handling, generator functions vs. functions returning lists), you may need to apply a few manual corrections after automatic conversion to get your code to run and perform well. In our experience with a few projects, the effort for a successful conversion is not very high.
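As a small illustration (our own example, not from any particular conversion project) of incompatibilities that automatic tools cannot always resolve, the following Python 3 snippet shows two classic cases: built-ins that returned lists in Python 2 now return lazy iterators, and the / operator performs true division on integers:

```python
# In Python 2, map() returned a list that could be indexed or reused.
# In Python 3, it is a one-shot iterator; materialize it explicitly:
squares = list(map(lambda x: x * x, range(3)))
print(squares)  # [0, 1, 4]

# Integer division: Python 2's 7 / 2 == 3; Python 3 yields a float.
print(7 / 2)    # 3.5
print(7 // 2)   # floor division restores the old behavior: 3
```

Whether a given / should become // depends on the intent of the original code, which is exactly the kind of decision that requires manual review after running 2to3.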
4.4 Links To Other Noteworthy Anaconda Installations
4.5 Notes
- This is a generic binary installation that should work on all of our microarchitectures. Should any of your Conda-based jobs or processes abort with an Illegal instruction error, please let us know, and we will try to fix this.
- If a conda clean command fails because it tries to remove material from our shared installation, please let us know so we can correct this situation.
- Starting some time between 2022 and 2023, Conda has silently stopped installing GPU-enabled versions of CUDA-based software such as TensorFlow and PyTorch when the installation runs on a machine not equipped with GPUs. Modules that do not have cuda in their names are CPU-only. If you wish to use GPU-enabled versions, please create your own environments for the time being. For details, please see the Conda-Forge document Installing CUDA-enabled packages like TensorFlow and PyTorch.