Research Group Artificial Intelligence

The website of the Artificial Intelligence Research Group, which was established in September 2025, is currently under construction.

Our research

Our research lab studies the learning algorithms and learning dynamics that enable artificial intelligence. We use probabilistic machine learning approaches to develop novel algorithms for unsupervised and semi-supervised learning and to decisively improve existing ones. We are especially interested in new principles of learning, new theoretical results and foundations, scalable and interpretable learning, learning based on strong models, and learning under difficult conditions (limited data, big data, strong noise, structured noise, missing data, etc.). Our approaches address aspects of intelligence where current AI systems struggle. Our research topics include disentanglement, data understanding, generalization, advanced learning from limited data, and efficient learning on big data.

Generative models and variational optimization are our main theoretical frameworks, and we have a particular interest in models with discrete latent variables. The systems we develop often enable novel applications, or applications under conditions that are too challenging for conventional approaches. We apply our methods to data from different domains, including general pattern recognition data, visual data, acoustic data, heterogeneous medical data, and medical imaging. Furthermore, we are interested in how our learning systems relate to biological learning and biological intelligence.
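As a generic sketch of the variational framework mentioned above (standard textbook notation, not a formulation taken from our papers): for a generative model p(x, s | Theta) with latent variables s and a variational distribution q(s), learning maximizes the evidence lower bound (ELBO), here written in LaTeX notation:

\mathcal{F}(q, \Theta) \;=\; \mathbb{E}_{q(s)}\big[\log p(x, s \mid \Theta)\big] \;-\; \mathbb{E}_{q(s)}\big[\log q(s)\big] \;\le\; \log p(x \mid \Theta).

Learning alternates between improving q(s) and updating Theta to raise this bound; for models with discrete latent variables, the expectations become sums over latent states.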

More content on our projects will follow soon.

Publications

A selection of papers from the last three years:

H. Mousavi, J. Lücke (2025).
Linear and Nonlinear Generative Models for 'Zero-Shot' Image Denoising in the Limit of Few Photons.
Journal of Mathematical Imaging and Vision 67(3): 1-17.

S. Salwig*, J. Drefs* and J. Lücke (2024).
Zero-shot denoising of microscopy images recorded at high-resolution limits.
PLOS Computational Biology 20(6): e1012192.
*joint first authorship.

D. Velychko, S. Damm, Z. Dai, A. Fischer and J. Lücke (2024).
Learning Sparse Codes with Entropy-Based ELBOs.
Int. Conf. on Artificial Intelligence and Statistics (AISTATS), 2089-2097.

H. Mousavi, J. Drefs, F. Hirschberger, J. Lücke (2023).
Generic Unsupervised Optimization for a Latent Variable Model with Exponential Family Observables.
Journal of Machine Learning Research 24(285):1-59.

S. Damm*, D. Forster, D. Velychko, Z. Dai, A. Fischer and J. Lücke* (2023).
The ELBO of Variational Autoencoders Converges to a Sum of Entropies.
Int. Conf. on Artificial Intelligence and Statistics (AISTATS), 3931-3960.
*joint main contributions

J. Drefs*, E. Guiraud*, F. Panagiotou, J. Lücke (2023).
Direct Evolutionary Optimization of Variational Autoencoders With Binary Latents.
European Conference on Machine Learning, 357-372.
*joint first authorship

F. Hirschberger*, D. Forster* and J. Lücke (2022).
A Variational EM Acceleration for Efficient Clustering at Very Large Scales.
IEEE Transactions on Pattern Analysis and Machine Intelligence 44(12):9787-9801.
*joint first authorship.

J. Drefs, E. Guiraud and J. Lücke (2022).
Evolutionary Variational Optimization of Generative Models.
Journal of Machine Learning Research 23(21):1-51.

Other selected papers

J. Lücke and D. Forster (2019).
k-means as a variational EM approximation of Gaussian mixture models.
Pattern Recognition Letters 125:349-356.

F. Hutter*, J. Lücke*, L. Schmidt-Thieme* (2015).
Beyond manual tuning of hyperparameters.
KI - Künstliche Intelligenz 29(4): 329-337.
*alphabetical order

Z. Dai and J. Lücke (2014).
Autonomous Document Cleaning – A Generative Approach to Reconstruct Strongly Corrupted Scanned Texts.
IEEE Transactions on Pattern Analysis and Machine Intelligence 36(10): 1950-1962.

A.-S. Sheikh, J. A. Shelton, J. Lücke (2014).
A Truncated EM Approach for Spike-and-Slab Sparse Coding.
Journal of Machine Learning Research 15:2653-2687.

M. Henniges, R.E. Turner, M. Sahani, J. Eggert, J. Lücke (2014).
Efficient occlusive components analysis.
Journal of Machine Learning Research 15(1): 2689-2722.

J. Lücke, J. Eggert (2010).
Expectation truncation and the benefits of preselection in training generative models.
Journal of Machine Learning Research 11:2855-2900.

J. Lücke, C. von der Malsburg (2004).
Rapid processing and unsupervised learning in a model of the cortical macrocolumn.
Neural Computation 16 (3), 501-533.
