Venue:
SR1
Lecturer:
Martin Butz - Universität Tübingen
Abstract:
Hierarchical, compositional models are crucial for interacting with our world in a versatile and adaptive manner. What is the structure of these models? How are they learned? Multidisciplinary evidence suggests that we segment our world into event-predictive conceptual structures and embed these events into contexts. I selectively introduce some of our recent neuro-cognitive models (Bayesian models and generative recurrent artificial neural networks) along these lines and identify critical inductive learning and processing biases. These models have the potential to progressively close the gap between current conceptual models of cognition and embodied sensorimotor experiences. Moreover, these developments chart a path towards fully grounded AI.