
International Workshop: Trust in the Age of AI

Dekanatssitzungssaal (Theologie)

12.5.2026
8:45 – 18:00
Dekanatssitzungssaal (Karl-Rahner-Platz 1)

 

Artificial intelligence systems increasingly shape how we access information, make decisions, and interact with the world. From personalized assistants and automated decision systems to AI-generated media, these technologies raise urgent questions about trust. When should we trust AI systems? What makes them trustworthy? And how do users actually respond to them?

This conference brings together philosophers, computer scientists, and empirical researchers to explore the foundations and challenges of trust in AI. Contributions address topics including the relationship between expertise and understanding in AI systems, formal approaches to verifying ethical properties such as fairness and accountability, the impact of AI assistants on personal autonomy, and the ways transparency labels influence trust in AI-generated content.

By combining conceptual, technical, and empirical perspectives, the conference aims to deepen our understanding of trust in AI and to clarify how AI systems should be designed, evaluated, and integrated into human epistemic practices.

12.5.2026 | Dekanatssitzungssaal (Theologie | Karl-Rahner-Platz 1) 

8:45 – 9:00: Welcome & Introduction

9:00 – 10:00: Giuseppe Primiero (Milan): Formalizing and Verifying Trustworthiness in AI: A Practical Application to Fairness

10:00 – 11:00: Eleonora Catena (Erlangen): The rise of Personalized AI Assistants and their impact on personal autonomy

11:30 – 12:30: Matteo Baggio (Turin) & Federica I. Malfatti (Innsbruck): Can AI understand?

13:30 – 14:30: Christiane Elisabeth Ernst (Innsbruck) & Kathrin Figl (Innsbruck): Seeing Is Not Always Believing: How AI Labels Shape Trust in Deepfake Content About Health Claims

14:30 – 15:30: Katherine Dormandy (Innsbruck): TBA

16:00 – 17:00: Camilla Quaresmini (Milan): TBA

17:00 – 18:00: Ritsaart Reimann (Graz): TBA


Matteo Baggio is a postdoctoral researcher at the University of Turin, working on a project titled “Controlling and Utilizing Uncertainty in the Health Sciences.” He earned a PhD in Cognitive Science and Philosophy of Mind from the University School for Advanced Studies IUSS Pavia. In his dissertation, he examined the epistemic role of intuitions in the theory of logic. He completed a research stay at the University of Genoa and held research positions at the University of Bergen and the Complutense University of Madrid. His research focuses on classical and social epistemology as well as logic.


Eleonora Catena is a PhD student in Philosophy and Ethics of AI at the Friedrich-Alexander-Universität Erlangen-Nürnberg. She has a background in philosophy, political science and Artificial Intelligence.


Katherine Dormandy is Professor of Philosophy at the Department of Christian Philosophy at the University of Innsbruck. One major focus of her research is the norms of human thought in social settings. Central topics include epistemic authority, religious and worldview disagreement, the epistemological import of first-person narratives, and the role of interpersonal trust.


Christiane Ernst completed her Bachelor's degree in Computer Science with a focus on Cyber Security and Software Quality at the University of Innsbruck in 2021. She continued her studies and obtained her Master's degree in Business Information Systems from the same university in 2023. Additionally, from 2018 to 2020, she worked as a Technical Editor for the Journal of Statistical Software. Currently, she is working as a research assistant and doctoral candidate at the Department of Information Systems, Production and Logistics Management.


Kathrin Figl is a Professor of Human-centric Information Systems Design at the Department of Information Systems, Production and Logistics Management at the University of Innsbruck. Her research focuses on human-centric information system design, human-AI interaction, algorithmic management, digital nudging, online manipulation, eye-tracking in user research, cognitive biases and heuristics, and fake news.


Federica I. Malfatti is currently completing her habilitation at the Department of Christian Philosophy as part of the Erika Cremer Habilitation Programme at the University of Innsbruck. In her habilitation project, she is working on the topic of epistemic trust. The aim is to investigate the nature and normativity of epistemic trust and to shed light on the relationship between trust and autonomy.


Giuseppe Primiero is Professor of Logic with the Logic, Uncertainty, Computation and Information Lab in the Department of Philosophy at the University of Milan, Italy. He serves as Scientific Director of PHILTECH, the Research Center for the Philosophy of Technology, and as Programme Leader for the Master's Degree in Human-Centered AI. He is co-founder and Chief Research Officer of MIRAI. His work focuses on the formal modeling and verification of multi-agent systems to evaluate properties of trustworthiness, fairness, and responsibility in AI.


Camilla Quaresmini is a PhD student in Science, Technology and Policy for Sustainable Change at Politecnico di Milano, and a computational philosopher. Her research focuses on making social networks more equitable, particularly in how people interact and share information.


Ritsaart Reimann is a postdoctoral researcher working on the ethics of artificial intelligence, with a background in social epistemology. His current research centres on questions around responsibility and trust in human–AI interactions. He combines philosophical analysis with empirical work that explores how people think about AI systems and what these attitudes mean for the design and regulation of intelligent machines.

The advancements in LLMs have opened the door to Personalized AI Assistants, which are trained on personal data to model the characteristics of a specific person and assist them with performing tasks (writing, decision-making, and communication). This innovative AI-based personalized assistance brings about unprecedented implications for personal autonomy, warranting ethical analysis and regulation. Drawing on Catriona Mackenzie’s multidimensional and relational account of personal autonomy, this paper analyses the ways in which different Personalized AI Assistants can support or undermine three distinct but intertwined dimensions of personal autonomy (self-determination, self-governance, self-authorization). By spelling out various challenges and opportunities, this analysis provides insights for designing Personalized AI Assistants to support each dimension of personal autonomy. In so doing, this paper paves the way for anticipating potential impacts and guiding the development of Personalized AI Assistants in light of personal autonomy.

As AI systems increasingly mediate socially significant decisions, calls for trustworthy AI have become widespread. Yet trustworthiness requires clear mechanisms for determining whether an AI system actually satisfies ethical principles such as fairness, transparency, and accountability. Moreover, the complexity of contemporary AI models challenges our ability to systematically verify compliance with such normative requirements. This talk presents a logical approach to this problem, in which trustworthiness is treated as a post-hoc formally specifiable property of AI systems. By selecting ethically relevant predicates describing system behaviour and identifying admissible forms of their violation, it becomes possible to reason formally about whether a system satisfies trustworthiness conditions. The approach is illustrated through the case of algorithmic fairness, showing how formal specifications of fairness can guide the analysis of machine learning models. We also discuss how these ideas inform the development of practical tools within the MIRAI (Milano Responsible AI) platform, bridging philosophical analysis, formal methods, and empirical auditing of AI systems.
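To give a rough sense of what a formally checkable fairness predicate can look like, the following minimal Python sketch tests demographic parity over a set of binary decisions. It is an illustrative assumption, not the MIRAI tooling or the formalism presented in the talk; the function names, the toy data, and the tolerance threshold are all hypothetical.

```python
# Hypothetical sketch of a checkable fairness predicate (demographic parity).
# All names, data, and the tolerance threshold are illustrative assumptions.

def positive_rate(decisions, groups, value):
    """Fraction of positive (1) decisions among members of one group."""
    member_decisions = [d for d, g in zip(decisions, groups) if g == value]
    return sum(member_decisions) / len(member_decisions)

def satisfies_demographic_parity(decisions, groups, tolerance=0.05):
    """The predicate holds if positive-decision rates across all groups
    differ by at most `tolerance`."""
    rates = [positive_rate(decisions, groups, v) for v in set(groups)]
    return max(rates) - min(rates) <= tolerance

# Toy audit: binary decisions for applicants from groups "A" and "B".
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(satisfies_demographic_parity(decisions, groups))  # → False (0.75 vs 0.25)
```

In the spirit of the abstract, such a predicate describes system behaviour, and the tolerance encodes an admissible form of its violation; a verification framework would reason over specifications like this rather than over ad-hoc scripts.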

Our reliance on AI systems as sources of information is steadily increasing. LLMs answer questions, explain complex topics, offer advice, and solve problems across a wide range of domains. As a result, users sometimes trust their outputs and treat them as experts or epistemic authorities. But is such trust justified? Do AI systems genuinely deserve the status of experts? While reliability is an important condition for expertise, the literature on expertise suggests that it is not sufficient. Experts are not merely reliable performers; they possess understanding of their domain. This raises a crucial question: can AI systems genuinely be said to understand? In this paper, we address this question by distinguishing between two forms of understanding. We argue that it is one thing to understand an epistemic mediator—such as a theory, model, or representational system—and another to understand phenomena or reality through the lens of such mediators. We call the former symbolic understanding and the latter noetic understanding. While symbolic understanding primarily consists in the ability to competently deploy epistemic mediators for explanation, inference, and prediction, noetic understanding requires a further element: an appropriate noetic profile, including rational commitment to the mediator as a plausible representation of reality. We argue that contemporary AI systems may exhibit some degree of symbolic understanding but lack noetic understanding. This distinction has important implications for the status of AI expertise: whether AI systems qualify as experts depends on whether expertise requires noetic understanding or whether symbolic understanding suffices.

Do labels such as “AI-generated” influence how people interpret online content? As deepfake videos and other AI-generated media spread rapidly on social media, platforms and regulators increasingly rely on transparency labels to inform users about manipulated or synthetic content. Yet such signals may do more than merely provide information: they may shape how viewers interpret the claims presented in the content itself. This is especially critical in the context of health claims, where misleading information can affect beliefs, judgments, and potentially behavior. Combining an eye-tracking experiment with an online experiment using AI-generated TikTok-style videos featuring factual and myth-based health claims, we investigate how AI-generated labels and fact-check labels guide visual attention, influence credibility judgments, and shape viewers’ interpretation of the same message. We further explore spillover effects by examining whether labeling some videos changes how users evaluate unlabeled content. Our findings aim to advance research on digital transparency, misinformation, and AI-mediated information environments by showing how labeling mechanisms shape user cognition and trust in health-related deepfake content. For practice, the study provides guidance on how to design transparency signals that inform users without unintentionally creating false reassurance or blanket skepticism.

Federica I. Malfatti

Department of Christian Philosophy

Karl-Rahner-Platz 1, Innsbruck

christliche-philosophie@uibk.ac.at
