Universität Innsbruck


Invited Talks

Rainer Blatt (IQOQI, University of Innsbruck)


Jan Broersen (Utrecht University): A logic analysis of Responsibility for Interventions and non-Interventions

We formally analyse the problem of backward-looking responsibility for outcomes in a non-deterministic setting with multiple agents making choices at different moments in the past. We distinguish and formalise three core modes of (causal) responsibility: (1) being the agent who initialised a course of events that led to the outcome, (2) being an agent that had the opportunity to intervene in a course of events that led to the outcome, but refrained from doing so, and (3) being the agent who enabled another agent to come into a position to initialise a course of events that led to the outcome (in sense (1)). Our analysis is directly applicable to modern questions about how to assign responsibilities in scenarios involving, for instance, self-driving cars and autonomous weapons. Conceptually and technically, the analysis brings together insights from stit theory (the achievement stit) and causal modelling (interventions).

Rafael Chaves (International Institute of Physics, UFRN): Machine learning non-local correlations  [Slides: pdf, 2.7MB]

Recent years have seen renewed interest in the intersection between causality, machine learning, and quantum physics. In this talk we will discuss, first from a broad perspective, the connections between these topics. Then we will focus on a specific problem: detecting and quantifying the non-locality of given correlations. Commonly, this is achieved via the violation of Bell inequalities. Unfortunately, their systematic derivation quickly becomes unfeasible as the scenario of interest grows in complexity. As we will see, a machine learning approach provides an alternative route to that aim, achieving high performance in a number of relevant scenarios and providing a proof of principle for the relevance of machine learning to understanding non-locality.
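As background for the abstract above, the canonical example of non-locality detection via a Bell inequality is the CHSH test: classical (local hidden-variable) correlations obey |S| ≤ 2, while the singlet state reaches 2√2. The following minimal numpy sketch (not from the talk; the angles are the standard choices) computes the quantum value directly from the state:

```python
import numpy as np

# Pauli matrices
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

def spin(theta):
    """Spin measurement along an axis at angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state |psi> = (|01> - |10>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=float) / np.sqrt(2)

def E(a, b):
    """Correlator <A(a) (x) B(b)> in the singlet state; equals -cos(a - b)."""
    return psi @ np.kron(spin(a), spin(b)) @ psi

# Standard CHSH measurement angles
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))  # ~2.828 = 2*sqrt(2), above the classical bound of 2
```

Systematically deriving all such inequalities for larger scenarios is exactly the step that becomes infeasible, which is where the machine-learning approach of the talk comes in.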

Giulio Chiribella (University of Oxford): Quantum speedup in testing causal hypotheses [Slides: pdf, 4.7MB] 

The study of physical processes often requires testing alternative hypotheses on the causal dependencies among a set of variables. When only a finite amount of data is available, the problem is to infer the correct hypothesis with the smallest probability of error. In this talk I will provide a general framework, which can be used to formulate causal hypotheses in a theory-independent way. In this framework, one can fix a set of hypotheses and determine how well they can be tested in different theories. As an example, I will consider the task of identifying the effect of a given variable. I will show that a quantum setup can identify the effect with exponentially smaller probability of error than the best setup for the classical version of the problem. The origin of the speedup is the availability of quantum strategies that run multiple tests in a superposition.

Wolfgang Lechner (University of Innsbruck): Programmable superpositions with Hebbian (un-)learning


Alexey Melnikov (University of Innsbruck): Machine learning for designing new quantum experiments [Slides: pdf, 24.2MB] 

Quantum experiments push the envelope of our understanding of fundamental concepts in quantum physics. Designing modern quantum experiments is difficult and often clashes with human intuition. In my talk, I will address the question of whether a machine can propose novel quantum experiments. In our work (PNAS 115, 1221) we answer this question in the affirmative in the context of photonic quantum experiments, although our technique is more generally applicable. I will talk about reinforcement learning and demonstrate how the projective simulation model can be used to design novel quantum experiments and discover experimental techniques. The observed features of learning highlight the possibility that machine learning could have a significantly more creative role in future research.
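The core of the projective simulation model mentioned above is a network of "clips" with edge weights (h-values) that set hopping probabilities and are reinforced by reward while slowly damping back toward their initial value. A minimal two-layer sketch on a toy task (the task, rates, and dimensions here are illustrative, not the photonic-experiment setup from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

n_percepts, n_actions = 2, 2
h = np.ones((n_percepts, n_actions))   # h-values (edge weights), initialised to 1
gamma = 0.01                           # damping (forgetting) rate

def act(percept):
    p = h[percept] / h[percept].sum()  # hopping probabilities proportional to h
    return rng.choice(n_actions, p=p)

# Toy task: the rewarded action equals the percept label
for _ in range(2000):
    s = int(rng.integers(n_percepts))
    a = act(s)
    reward = 1.0 if a == s else 0.0
    h += -gamma * (h - 1.0)            # all edges damp toward their initial value
    h[s, a] += reward                  # reinforce the traversed edge on success

# After learning, the rewarded edges dominate
print(h[0, 0] > h[0, 1], h[1, 1] > h[1, 0])
```

The same reinforce-and-damp dynamics, with clip networks representing optical elements and experimental configurations, underlies the experiment-design application discussed in the talk.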


Alejandro Perdomo-Ortiz (Rigetti Computing): Quantum-assisted machine learning in near-term quantum devices


Justus Piater (University of Innsbruck): Concept learning for robot intelligence [Slides: pdf, 2.9MB]

Current robot learning, and machine learning in general, requires carefully engineered setups (environments, objective functions, training data, etc.) for learning to succeed. Perception and action spaces are specially crafted to meet the requirements of the learning objective, which is specified in advance.
How can we construct robot learning systems that can learn in an open-ended fashion, acquire skills not foreseen by their designers, and scale up to virtually unlimited levels of complexity? I argue that a key to achieving this lies in the robot's ability to learn abstract concepts that can be reused as a basis for future learning, both in autonomous exploration and for teaching by humans.

Aske Plaat (Leiden University): Artificial Intelligence & Quantum Computing [Slides: pdf, 50.3MB]

My background is in AI. Only recently have I been exposed to that other buzzword, Quantum Computing. This talk will give an overview of recent advances in AI. I will discuss why there is so much interest in AI. I will attempt a definition of AI and of Machine Learning. I will discuss current research interests in AI, and I will discuss a (somewhat long) list of Quantum Machine Learning algorithms. This is not a technical talk, so I will not discuss these algorithms in depth, but I will discuss their limitations. I will conclude by proposing three possible areas of interest for Quantum Machine Learning research. I look forward to an interactive discussion.

Patrick Rebentrost (MIT): Quantum algorithms for the Hopfield network, gradient descent and Monte Carlo


Renato Renner (ETH Zurich): Discovering physical concepts with neural networks

Neural networks are often applied as a black box, i.e., we do not know how they solve a given problem. This limits their utility for scientific discoveries. In this talk, I will describe a network architecture (which we proposed recently in arXiv:1807.10300) that can overcome this limitation. Roughly, given experimental data about a physical system, the network compresses this data to a simple representation and answers questions about the system using the representation only. Physical concepts can then be extracted from the learned representation. We regard this as a first step towards answering the question of whether the traditional ways by which physicists model nature naturally arise from the experimental data without any mathematical or physical pre-knowledge, or whether there are alternative elegant formalisms.

(This is joint work with Raban Iten, Tony Metger, Henrik Wilming, and Lidia del Rio.)

Matthias Rupp (Fritz-Haber-Institut, Max-Planck-Gesellschaft): Accurate Energy Predictions via Machine Learning [Slides: pdf, 8.4MB]

Matthias Troyer (ETH Zurich/Microsoft): ML+Q


Ronald de Wolf (QuSoft, CWI, University of Amsterdam): Learning from quantum examples: Strengths and weaknesses [Slides: pdf, 1.2MB]

In classical supervised concept learning, one typically tries to learn a Boolean function f from examples of the form (x,f(x)), where x is distributed according to some distribution D that may or may not be known to the learner. In this talk we consider how well we can learn when an example is not random, but a quantum superposition over (x,f(x)) with amplitudes given by the square roots of the probabilities D(x). This model is a natural quantum generalization of Valiant's classical PAC learning model, and was introduced by Bshouty and Jackson in 1995. We describe some positive and negative results known in this context. On the positive side, if the distribution D is known to be uniform over all x, then quantum learners can improve over classical learners, both in terms of sample complexity (the number of examples needed to learn f) and in terms of time complexity. This happens for instance for learning DNF and for learning Fourier-sparse functions. On the negative side, we show that in the distribution-independent setting, where the learner does not know the distribution D in advance and has to succeed for every possible D, the quantum and classical sample complexities are equal up to a constant factor: both are determined by the same function of the VC dimension of the underlying concept class and of the error parameters.
Most of the work presented here is joint with Srinivasan Arunachalam.
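The uniform-distribution speedup mentioned above rests on Fourier sampling: a uniform quantum example can be converted into a phase state whose n-fold Hadamard transform concentrates on the large Fourier coefficients of f. The following numpy sketch simulates this classically for a parity function, where a single "measurement" recovers the hidden parity set exactly (the function and dimensions are illustrative choices, not taken from the talk):

```python
import numpy as np
from itertools import product

n = 3
S = (1, 0, 1)  # hidden parity set: f(x) = x0 XOR x2

def f(x):
    return sum(xi * si for xi, si in zip(x, S)) % 2

xs = list(product([0, 1], repeat=n))
# Phase-state amplitudes (-1)^f(x) / sqrt(2^n), obtainable from a uniform
# quantum example over (x, f(x))
amp = np.array([(-1) ** f(x) for x in xs]) / np.sqrt(2 ** n)

# n-fold Hadamard transform
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

# Measurement outcome probabilities are the squared Fourier coefficients;
# for a parity, all weight sits on the single character S
probs = (Hn @ amp) ** 2
best = xs[int(np.argmax(probs))]
print(best, probs.max())
```

For general Fourier-sparse functions the outcome distribution spreads over the sparse support, and repeating the procedure yields the coefficients that classical learners can only access with many more examples.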

Mário Ziman (Institute of Physics, Slovak Academy of Sciences): Limitations on learning of quantum processes [Slides: pdf, 3.4MB]
