Hearings: Embodied Artificial Intelligence and Machine Learning

Monday, 15.10.2018 - Room 3W03

Karinne Ramirez-Amaro


Title: "A Semantic Reasoning Method for the Understanding of Human Activities"

Autonomous robots are expected to learn new skills and to re-use past experiences in different situations as efficiently, intuitively and reliably as possible. Robots need to adapt to different sources of information, for example videos, robot sensors, and virtual reality. Advancing research on the understanding of human activities in robotics therefore requires learning methods that adapt to different datasets. In this talk, I will introduce a novel learning method that generates compact and general semantic models to infer human activities. This method allows robots to obtain a high-level understanding of a demonstrator's behavior via semantic representations. First, low-level information is extracted from the sensory data; then a meaningful high-level semantic description is obtained by reasoning about the intended human behaviors. The introduced method has been assessed on different robots, e.g. the iCub, REEM-C, and TOMM, with different kinematic chains and dynamics. Furthermore, the robots use different perceptual modalities, under different constraints, in several scenarios ranging from making a sandwich to driving a car, and in both home-service and industrial domains. One important aspect of our approach is its scalability and adaptability toward new activities, which can be learned on demand. Overall, the presented compact and flexible solutions are suitable for tackling complex and challenging problems for autonomous robots.
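The two-stage pipeline described above (extract low-level predicates, then reason about them to infer the activity) can be illustrated with a deliberately minimal sketch. This is my own simplification, not the speaker's actual system: the predicate names and rules are hypothetical placeholders for the kind of symbolic reasoning the abstract describes.

```python
def infer_activity(hand_moving: bool, object_in_hand: bool,
                   object_acted_on: bool) -> str:
    """Map low-level perceptual predicates to a high-level activity label.

    The predicates would come from sensory data (video, robot sensors,
    virtual reality); the rules below are illustrative only.
    """
    if not hand_moving:
        # No motion: either the hand is idle or it is holding something.
        return "holding" if object_in_hand else "idle"
    if object_in_hand:
        # Moving with an object: distinguish transport from manipulation.
        return "acting on object" if object_acted_on else "moving object"
    # Moving with an empty hand: interpret as reaching toward something.
    return "reaching"
```

Because the rules operate on symbolic predicates rather than raw signals, the same reasoner can in principle be reused across robots and sensing modalities, which is the adaptability the abstract emphasizes.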

Matteo Saveriano


Title: "Learning Structured Robotic Tasks via Human Imitation"

Everyday human activities involve multiple actions executed on certain objects in a specific order. These activities represent the kinds of tasks that social robots are asked to execute. Hand-programming the multitude of tasks a robot has to execute is clearly infeasible. A possible way to reduce manual robot programming is imitation learning, probably the most popular approach for transferring new skills from a human teacher to a robot learner.
This talk discusses the problem of representing human activities as structured tasks and presents a learning and supervision framework that allows a robotic manipulator to intuitively and incrementally acquire novel tasks from human demonstrations. The framework relies on two hierarchical levels that tightly interact with each other by exchanging information on the task state. In particular, the low level enables safe human imitation and robust motion execution, while at the higher level a supervisory system exploits a symbolic representation of the task to flexibly schedule its execution. Experiments on a real robot show the effectiveness of the presented framework.
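The interaction between a symbolic task representation and a supervisory scheduler can be sketched as follows. This is a hypothetical simplification, not the presented framework: actions are symbols with precondition and effect sets, and the supervisor greedily dispatches any action whose preconditions hold in the current task state (low-level motion execution is stubbed out as a state update).

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A symbolic action in the task representation."""
    name: str
    preconditions: set
    effects: set


def supervise(actions, state):
    """Schedule symbolic actions whose preconditions hold in the task state.

    Returns the execution order and the final state. In a real system each
    dispatched action would trigger a low-level motion primitive; here its
    effects are simply merged into the state.
    """
    log = []
    pending = list(actions)
    while pending:
        runnable = next((a for a in pending if a.preconditions <= state), None)
        if runnable is None:
            break  # No executable action: the task stalls until retaught.
        state = state | runnable.effects  # "Execute" and update task state.
        log.append(runnable.name)
        pending.remove(runnable)
    return log, state
```

For a pick-and-place demonstration, `grasp` (needs `at_object`, yields `holding`), `move` (needs `holding`, yields `at_goal`) and `place` (needs both) are ordered correctly by the supervisor even if demonstrated out of order, because the ordering is recovered from preconditions rather than hand-coded.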

Alejandro Agostini


Title: "Bridging the Signal to Symbol Gap in Embodied Cognitive Systems"

The usual approach to implementing a robotic platform capable of executing human-like tasks is to combine methods at different levels of abstraction into a cognitive architecture, ranging from (high-level) artificial intelligence planning techniques to (low-level) robotic techniques for sensing and acting. Despite extensive efforts in this active research area, the results obtained so far are very limited. The lack of unification between artificial intelligence and robotic techniques makes integration and consistency checking very complicated and forces the adoption of ad hoc solutions for every specific scenario and application. This limits transferability and greatly deteriorates performance, preventing the successful application of robotic cognitive systems in everyday human environments. My research addresses these problems. I present, on the one hand, a new task planning method that blends symbolic descriptions with the continuous physical parameters used by robotic techniques in a unified planning approach. On the other hand, I describe an incremental learning mechanism that permits the robot to operate continuously in unstructured human environments by quickly adapting to unexpected situations under the natural guidance of a lay human. I conclude with some practical examples, applications beyond cognitive robotics, and future work.
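One way to picture the signal-to-symbol blending described above is a planning operator whose symbolic preconditions are checked together with a continuous geometric feasibility test. The sketch below is my own illustration under assumed names (`feasible`, `plan_step`, a one-dimensional reach limit), not the presented planning method.

```python
def feasible(target_x: float, reach: float = 0.8) -> bool:
    """Continuous grounding check: target must lie within the arm's reach."""
    return abs(target_x) <= reach


def plan_step(state, operators):
    """Select the first operator that is both symbolically and physically valid.

    Each operator is (name, preconditions, effects, target_x): the symbolic
    part must hold in the current state AND the continuous parameter must
    pass the geometric test, so planning and robotic feasibility are
    evaluated in one unified step.
    """
    for name, precond, effects, target_x in operators:
        if precond <= state and feasible(target_x):
            return name, state | effects
    return None, state
```

A purely symbolic planner would accept either `place` operator below; the unified check rejects the out-of-reach one, avoiding a plan that fails only at execution time.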
