12.5.2026
8:45 - 18:00
Dekanatssitzungssaal (Karl-Rahner-Platz 1)
Organized by Federica I. Malfatti
Registration via christliche-philosophie@uibk.ac.at
Artificial intelligence systems increasingly shape how we access information, make decisions, and interact with the world. From personalized assistants and automated decision systems to AI-generated media, these technologies raise urgent questions about trust. When should we trust AI systems? What makes them trustworthy? And how do users actually respond to them?
This workshop brings together philosophers, computer scientists, and empirical researchers to explore the foundations and challenges of trust in AI. Contributions address topics including the relationship between expertise and understanding in AI systems, formal approaches to verifying ethical properties such as fairness and accountability, the impact of AI assistants on personal autonomy, and the ways transparency labels influence trust in AI-generated content.
By combining conceptual, technical, and empirical perspectives, the workshop aims to deepen our understanding of trust in AI and to clarify how AI systems should be designed, evaluated, and integrated into human epistemic practices.
12.5.2026 | Dekanatssitzungssaal (Theologie | Karl-Rahner-Platz 1)
8:45 – 9:00: Welcome & Introduction
Digital Whiplash: The Case of Trust
The Rise of Personalized AI Assistants and Their Impact on Personal Autonomy
Who Does AI Think You Are? The Limits of Gender Classification
As AI systems increasingly mediate socially significant decisions, calls for trustworthy AI have become widespread. Yet trustworthiness requires clear mechanisms for determining whether an AI system actually satisfies ethical principles such as fairness, transparency, and accountability. Moreover, the complexity of contemporary AI models challenges our ability to systematically verify compliance with such normative requirements. This talk presents a logical approach to this problem, in which trustworthiness is treated as a post-hoc formally specifiable property of AI systems. By selecting ethically relevant predicates describing system behaviour and identifying admissible forms of their violation, it becomes possible to reason formally about whether a system satisfies trustworthiness conditions. The approach is illustrated through the case of algorithmic fairness, showing how formal specifications of fairness can guide the analysis of machine learning models. We then discuss how these ideas inform the development of practical tools within the MIRAI (Milano Responsible AI) platform, bridging philosophical analysis, formal methods, and empirical auditing of AI systems.
The ubiquitous integration of AI-powered systems in morally consequential decision-making procedures raises a thorny question: when such systems generate harm, who should be held responsible? A prominent regulatory response proposes that suitably designed control architectures can ensure that blame is appropriately allocated. To live up to this promise, the proposed architectures must be both normatively adequate and regarded as such; otherwise they are at best practically useless and at worst morally problematic. In this talk, I will present some preliminary data on whether two of the most widely discussed control arrangements—loop-based oversight relations and Santoni de Sio and van den Hoven’s track and trace framework—succeed in directing laypeople’s responsibility attributions along the pathways they prescribe, as well as whether those pathways are in fact normatively apt.
Do labels such as “AI-generated” influence how people interpret online content? As deepfake videos and other AI-generated media spread rapidly on social media, platforms and regulators increasingly rely on transparency labels to inform users about manipulated or synthetic content. Yet such signals may do more than merely provide information: they may shape how viewers interpret the claims presented in the content itself. This is especially critical in the context of health claims, where misleading information can affect beliefs, judgments, and potentially behavior. Combining an eye-tracking experiment with an online experiment using AI-generated TikTok-style videos featuring factual and myth-based health claims, we investigate how AI-generated labels and fact-check labels guide visual attention, influence credibility judgments, and shape viewers’ interpretation of the same message. We further explore spillover effects by examining whether labeling some videos changes how users evaluate unlabeled content. Our findings aim to advance research on digital transparency, misinformation, and AI-mediated information environments by showing how labeling mechanisms shape user cognition and trust in health-related deepfake content. For practice, the study provides guidance on how to design transparency signals that inform users without unintentionally creating false reassurance or blanket skepticism.
Advances in LLMs have opened the door to Personalized AI Assistants, which are trained on personal data to model the characteristics of a specific person and assist them with tasks such as writing, decision-making, and communication. This new form of AI-based personalized assistance has unprecedented implications for personal autonomy, warranting ethical analysis and regulation. Drawing on Catriona Mackenzie’s multidimensional and relational account of personal autonomy, this paper analyses the ways in which different Personalized AI Assistants can support or undermine three distinct but intertwined dimensions of personal autonomy (self-determination, self-governance, and self-authorization). By spelling out various challenges and opportunities, this analysis provides insights for designing Personalized AI Assistants to support each dimension of personal autonomy. In so doing, this paper paves the way for anticipating potential impacts and guiding the development of Personalized AI Assistants in light of personal autonomy.
AI systems are often presented as neutral tools for classifying people. Yet Automatic Gender Recognition (AGR) technologies show how fragile this assumption can be. Far from merely making technical mistakes, these systems embed problematic assumptions, namely, that gender is binary, fixed, and inferable from physical traits. As a result, they systematically misclassify gender non-conforming individuals, producing forms of algorithmic misgendering that are not only unfair but also epistemically and ethically problematic. Building on recent work on data quality dimensions for fair AI and on a feedback-based rethinking of AGR, this talk argues that trust in AGR cannot be reduced to accuracy alone. Rather, it requires a richer understanding of fairness, one that includes other quality dimensions beyond accuracy, while also recognizing the limits of automated classification itself. I will discuss why AGR systems generate ontological and epistemological errors, and propose possible strategies to mitigate these problems. Rather than offering a definitive solution, the talk aims to open a broader discussion about whether, how, and in which contexts these technologies should be used at all. In this sense, AGR serves as a case study for reflecting on what it means to trust AI in contexts involving human identity and self-determination.
Our reliance on AI systems as sources of information is steadily increasing. LLMs answer questions, explain complex topics, offer advice, and solve problems across a wide range of domains. As a result, users sometimes trust their outputs and treat them as experts or epistemic authorities. But is such trust justified? Do AI systems genuinely deserve the status of experts? While reliability is an important condition for expertise, the literature on expertise suggests that it is not sufficient. Experts are not merely reliable performers; they possess understanding of their domain. This raises a crucial question: can AI systems genuinely be said to understand? In this paper, we address this question by distinguishing between two forms of understanding. We argue that it is one thing to understand an epistemic mediator—such as a theory, model, or representational system—and another to understand phenomena or reality through the lens of such mediators. We call the former symbolic understanding and the latter noetic understanding. While symbolic understanding primarily consists in the ability to competently deploy epistemic mediators for explanation, inference, and prediction, noetic understanding requires a further element: an appropriate noetic profile, including rational commitment to the mediator as a plausible representation of reality. We argue that contemporary AI systems may exhibit some degree of symbolic understanding but lack noetic understanding. This distinction has important implications for the status of AI expertise: whether AI systems qualify as experts depends on whether expertise requires noetic understanding or whether symbolic understanding suffices.
Matteo Baggio is a postdoctoral researcher at the University of Turin, working on a project titled “Controlling and Utilizing Uncertainty in the Health Sciences.” He earned a PhD in Cognitive Science and Philosophy of Mind from the University School for Advanced Studies IUSS Pavia; in his dissertation, he examined the epistemic role of intuitions in the theory of logic. He completed a research stay at the University of Genoa and held research positions at the University of Bergen and the Complutense University of Madrid. His research focuses on classical and social epistemology as well as logic.
Eleonora Catena is a PhD student in Philosophy and Ethics of AI at the Friedrich-Alexander-Universität Erlangen-Nürnberg. She has a background in philosophy, political science, and artificial intelligence.
Katherine Dormandy is Professor of Philosophy at the Department of Christian Philosophy at the University of Innsbruck. One major focus of her research is the norms of human thought in social settings. Central topics include epistemic authority, religious and worldview disagreement, the epistemological import of first-person narratives, and the role of interpersonal trust.
Christiane Ernst completed her Bachelor's degree in Computer Science with a focus on Cyber Security and Software Quality at the University of Innsbruck in 2021. She continued her studies and obtained her Master's degree in Business Information Systems from the same university in 2023. Additionally, from 2018 to 2020, she worked as a Technical Editor for the Journal of Statistical Software. Currently, she is working as a research assistant and doctoral candidate at the Department of Information Systems, Production and Logistics Management.
Kathrin Figl is a Professor for Human-centric Information Systems Design at the Department of Information Systems, Production and Logistics Management at the University of Innsbruck. Her research focuses on human-centric information system design, human-AI interaction, algorithmic management, digital nudging, online manipulation, eye-tracking in user research, cognitive biases and heuristics, and fake news.
Federica I. Malfatti is currently completing her habilitation at the Department of Christian Philosophy as part of the Erika Cremer Habilitation Programme at the University of Innsbruck. In her habilitation project, she is working on the topic of epistemic trust. The aim is to investigate the nature and normativity of epistemic trust and to shed light on the relationship between trust and autonomy.
Giuseppe Primiero is Professor of Logic with the Logic, Uncertainty, Computation and Information Lab in the Department of Philosophy at the University of Milan, Italy. He serves as Scientific Director of PHILTECH, the Research Center for the Philosophy of Technology, and as Programme Leader for the Master's Degree in Human-Centered AI. He is co-founder and Chief Research Officer of MIRAI. He works on the formal modeling and verification of multi-agent systems in order to evaluate properties of trustworthiness, fairness, and responsibility in AI.
Camilla Quaresmini is a PhD student in Science, Technology and Policy for Sustainable Change at Politecnico di Milano, and a computational philosopher. Her research focuses on making social networks more equitable, particularly in how people interact and share information.
Ritsaart Reimann is a postdoctoral researcher working on the ethics of artificial intelligence, with a background in social epistemology. His current research centres on questions around responsibility and trust in human–AI interactions. He combines philosophical analysis with empirical work that explores how people think about AI systems and what these attitudes mean for the design and regulation of intelligent machines.
Please register via email at christliche-philosophie@uibk.ac.at.