
International Workshop: Trust in the Age of AI

Dekanatssitzungssaal (Theologie)

12 May 2026
8:45 – 18:00
Dekanatssitzungssaal (Karl-Rahner-Platz 1)
Organized by Federica I. Malfatti
Registration via christliche-philosophie@uibk.ac.at


Artificial intelligence systems increasingly shape how we access information, make decisions, and interact with the world. From personalized assistants and automated decision systems to AI-generated media, these technologies raise urgent questions about trust. When should we trust AI systems? What makes them trustworthy? And how do users actually respond to them?

This workshop brings together philosophers, computer scientists, and empirical researchers to explore the foundations and challenges of trust in AI. Contributions address topics including the relationship between expertise and understanding in AI systems, formal approaches to verifying ethical properties such as fairness and accountability, the impact of AI assistants on personal autonomy, and the ways transparency labels influence trust in AI-generated content.

By combining conceptual, technical, and empirical perspectives, the conference aims to deepen our understanding of trust in AI and to clarify how AI systems should be designed, evaluated, and integrated into human epistemic practices.

12 May 2026 | Dekanatssitzungssaal (Theologie, Karl-Rahner-Platz 1)

8:45 – 9:00: Welcoming & Introduction

9:00 – 10:00: Giuseppe Primiero (Milan): 
Formalizing and Verifying Trustworthiness in AI: A Practical Application to Fairness

10:00 – 11:00: Katherine Dormandy (Innsbruck):
Digital Whiplash: The Case of Trust

11:30 – 12:30: Ritsaart Reimann (Graz):
In the Loop, out of Sync: Moral Cognition in Human-AI Interactions 

13:30 – 14:30: Christiane Elisabeth Ernst (Innsbruck) & Kathrin Figl (Innsbruck): 
Seeing Is Not Always Believing: How AI Labels Shape Trust in Deepfake Content About Health Claims

14:30 – 15:30: Eleonora Catena (Erlangen):
The Rise of Personalized AI Assistants and Their Impact on Personal Autonomy

16:00 – 17:00: Camilla Quaresmini (Milan):
Who Does AI Think You Are? The Limits of Gender Classification

17:00 – 18:00: Matteo Baggio (Turin) & Federica I. Malfatti (Innsbruck):
Can AI Understand?

As AI systems increasingly mediate socially significant decisions, calls for trustworthy AI have become widespread. Yet trustworthiness requires clear mechanisms for determining whether an AI system actually satisfies ethical principles such as fairness, transparency, and accountability. Moreover, the complexity of contemporary AI models challenges our ability to systematically verify compliance with such normative requirements. This talk presents a logical approach to this problem, in which trustworthiness is treated as a post-hoc, formally specifiable property of AI systems. By selecting ethically relevant predicates describing system behaviour and identifying admissible forms of their violation, it becomes possible to reason formally about whether a system satisfies trustworthiness conditions. The approach is illustrated through the case of algorithmic fairness, showing how formal specifications of fairness can guide the analysis of machine learning models. We then discuss how these ideas inform the development of practical tools within the MIRAI (Milano Responsible AI) platform, bridging philosophical analysis, formal methods, and empirical auditing of AI systems.
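To give a concrete flavour of treating fairness as a formally specifiable, checkable predicate over a model's behaviour, here is a minimal sketch. The function names, the statistical-parity criterion, and the tolerance threshold are illustrative assumptions, not part of the MIRAI platform or the speaker's formal framework.

```python
def positive_rate(predictions, group_mask):
    """Share of positive (1) predictions within one group."""
    selected = [p for p, in_group in zip(predictions, group_mask) if in_group]
    return sum(selected) / len(selected)

def satisfies_statistical_parity(predictions, group_a, group_b, tolerance=0.1):
    """Fairness predicate (hypothetical): the positive-prediction rates
    of two groups may differ by at most `tolerance`."""
    gap = abs(positive_rate(predictions, group_a)
              - positive_rate(predictions, group_b))
    return gap <= tolerance

# Toy outputs of a binary classifier over eight individuals,
# split into two demographic groups.
preds   = [1, 0, 1, 1, 0, 0, 1, 0]
group_a = [True] * 4 + [False] * 4
group_b = [False] * 4 + [True] * 4

# Group A receives positive predictions at rate 0.75, group B at 0.25,
# so the parity predicate fails for this model.
print(satisfies_statistical_parity(preds, group_a, group_b))
```

Phrasing fairness this way turns an ethical requirement into a property one can systematically audit: the predicate either holds of the system's observed behaviour or it does not.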

Digitalization is transforming human life, but our cognitive hardwiring is honed for an analog world. This hardwiring, though flexible, is ill-equipped to handle certain aspects of digital life. This problem, which I call the problem of digital whiplash, arises for online epistemic trust: the trust that a hearer invests in a speaker for knowledge, and that a speaker invests in a hearer for recognition as a knower. I argue that human cognition is a mismatch for online environments that call for wise epistemic trust.
Two features of cognition are suited to epistemic trust offline, but not online. First, cognition is embodied: we gather information through our senses and process it in our brain, using built-in heuristics as well as a combined physiology and psychology (e.g. expectations and moods). Second, cognition is contextual. Our sensory information, and the factors influencing how we process it, arise from our environment – our physical surroundings, social norms, and prior experience.
Offline, these cognitive features help us trust wisely. The smallest signals of untrustworthiness in offline contexts trigger physiological responses. By contrast, in online communication (i.e. the text-based communication of social media, commercial platforms, and discussion forums), we are poorly equipped to recognize or respond to epistemic untrustworthiness.
Three problems arise. First is the problem of contextual instability. Whereas offline contexts remain stable from one person and situation to another, online contexts are changeable in unpredictable ways. For example, we interact not with people directly but with online personas. On the one hand, this makes epistemic trust easier to betray (Simpson 2011); on the other hand, we tend to treat personas with the default trust usually extended to persons.
Second is the problem of signal sparsity. Online contexts contain few social signals (and no physiological ones), yielding little information to process in the usual embodied way. On the one hand, we are less apt to be trustworthy online, since our visceral response to others’ epistemic needs, due to the sparsity of embodied signals, is apt to be weaker. On the other hand, we are still inclined to trust epistemically online, since trust is arguably a cognitive default, extended in the absence of signals of untrustworthiness (Coady 1992).
Third is the distraction problem. Online contexts present various distractions that occupy cognitive and emotional bandwidth. These make us less apt to notice reasons to distrust, but even if we notice them, they will be intellectual rather than embodied and so will have less motivating force. Wise epistemic trust is much more difficult online than off.
One might object that, if we exercise unwise epistemic trust online, we have only ourselves to blame (McGeer 2004): trust online is no different from trust offline. But I argue that the problem runs deeper. Our cognitive hardwiring itself is a mismatch for the online environments in which we increasingly find ourselves. The problem of digital whiplash for online trust is real: cognizing in digital environments is a different kind of activity entirely, for which we are cognitively ill-equipped.

The ubiquitous integration of AI-powered systems in morally consequential decision-making procedures raises a thorny question: when such systems generate harm, who should be held responsible? A prominent regulatory response proposes that suitably designed control architectures can ensure that blame is appropriately allocated. To live up to this promise, the proposed architectures must be both normatively adequate and regarded as such; otherwise they are at best practically useless and at worst morally problematic. In this talk, I will present some preliminary data on whether two of the most widely discussed control arrangements—loop-based oversight relations and Santoni de Sio and Van den Hoven's track and trace framework—succeed in directing laypeople's responsibility attributions along the pathways they prescribe, as well as whether those pathways are in fact normatively apt.

Do labels such as “AI-generated” influence how people interpret online content? As deepfake videos and other AI-generated media spread rapidly on social media, platforms and regulators increasingly rely on transparency labels to inform users about manipulated or synthetic content. Yet such signals may do more than merely provide information: they may shape how viewers interpret the claims presented in the content itself. This is especially critical in the context of health claims, where misleading information can affect beliefs, judgments, and potentially behavior. Combining an eye-tracking experiment with an online experiment using AI-generated TikTok-style videos featuring factual and myth-based health claims, we investigate how AI-generated labels and fact-check labels guide visual attention, influence credibility judgments, and shape viewers’ interpretation of the same message. We further explore spillover effects by examining whether labeling some videos changes how users evaluate unlabeled content. Our findings aim to advance research on digital transparency, misinformation, and AI-mediated information environments by showing how labeling mechanisms shape user cognition and trust in health-related deepfake content. For practice, the study provides guidance on how to design transparency signals that inform users without unintentionally creating false reassurance or blanket skepticism.

The advancements in LLMs have opened the door to Personalized AI Assistants, which are trained on personal data to model the characteristics of a specific person and assist them with performing tasks (writing, decision-making, and communication). This innovative AI-based personalized assistance brings about unprecedented implications for personal autonomy, warranting ethical analysis and regulation. Drawing on Catriona Mackenzie’s multidimensional and relational account of personal autonomy, this paper analyses the ways in which different Personalized AI Assistants can support or undermine three distinct but intertwined dimensions of personal autonomy (self-determination, self-governance, self-authorization). By spelling out various challenges and opportunities, this analysis provides insights for designing Personalized AI Assistants to support each dimension of personal autonomy. In so doing, this paper paves the way for anticipating potential impacts and guiding the development of Personalized AI Assistants in light of personal autonomy.

AI systems are often presented as neutral tools for classifying people. Yet Automatic Gender Recognition (AGR) technologies show how fragile this assumption can be. Far from merely making technical mistakes, these systems embed problematic assumptions, namely, that gender is binary, fixed, and inferable from physical traits. As a result, they systematically misclassify gender non-conforming individuals, producing forms of algorithmic misgendering that are not only unfair but also epistemically and ethically problematic. Building on recent work on data quality dimensions for fair AI and on a feedback-based rethinking of AGR, this talk argues that trust in AGR cannot be reduced to accuracy alone. Rather, it requires a richer understanding of fairness, one that includes other quality dimensions beyond accuracy, while also recognizing the limits of automated classification itself. I will discuss why AGR systems generate ontological and epistemological errors, and propose possible strategies to mitigate these problems. Rather than offering a definitive solution, the talk aims to open a broader discussion about whether, how, and in which contexts these technologies should be used at all. In this sense, AGR serves as a case study for reflecting on what it means to trust AI in contexts involving human identity and self-determination.
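The claim that trust in AGR cannot be reduced to accuracy alone can be made concrete with a toy example (all data and names are hypothetical): a classifier whose overall accuracy looks tolerable can still concentrate its misclassifications on one group, such as gender non-conforming individuals.

```python
def error_rate(y_true, y_pred, group_mask):
    """Misclassification rate within one group."""
    pairs = [(t, p) for t, p, in_group
             in zip(y_true, y_pred, group_mask) if in_group]
    return sum(t != p for t, p in pairs) / len(pairs)

# Toy binary labels and a classifier's guesses for eight people;
# the second group is systematically misclassified.
y_true        = [0, 0, 1, 1, 0, 1, 1, 1]
y_pred        = [0, 0, 1, 1, 1, 0, 0, 1]
conforming    = [True] * 4 + [False] * 4
nonconforming = [False] * 4 + [True] * 4

overall_accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(overall_accuracy)                            # 0.625 overall
print(error_rate(y_true, y_pred, conforming))      # 0.0 for one group
print(error_rate(y_true, y_pred, nonconforming))   # 0.75 for the other
```

A single aggregate accuracy figure hides this disparity entirely, which is one reason richer quality dimensions than accuracy are needed to evaluate such systems.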

Our reliance on AI systems as sources of information is steadily increasing. LLMs answer questions, explain complex topics, offer advice, and solve problems across a wide range of domains. As a result, users sometimes trust their outputs and treat them as experts or epistemic authorities. But is such trust justified? Do AI systems genuinely deserve the status of experts? While reliability is an important condition for expertise, the literature on expertise suggests that it is not sufficient. Experts are not merely reliable performers; they possess understanding of their domain. This raises a crucial question: can AI systems genuinely be said to understand? In this paper, we address this question by distinguishing between two forms of understanding. We argue that it is one thing to understand an epistemic mediator—such as a theory, model, or representational system—and another to understand phenomena or reality through the lens of such mediators. We call the former symbolic understanding and the latter noetic understanding. While symbolic understanding primarily consists in the ability to competently deploy epistemic mediators for explanation, inference, and prediction, noetic understanding requires a further element: an appropriate noetic profile, including rational commitment to the mediator as a plausible representation of reality. We argue that contemporary AI systems may exhibit some degree of symbolic understanding but lack noetic understanding. This distinction has important implications for the status of AI expertise: whether AI systems qualify as experts depends on whether expertise requires noetic understanding or whether symbolic understanding suffices.


Matteo Baggio is a postdoctoral researcher at the University of Turin, working on a project titled “Controlling and Utilizing Uncertainty in the Health Sciences.” He earned a PhD in Cognitive Science and Philosophy of Mind from the University School for Advanced Studies IUSS Pavia. In his dissertation, he examines the epistemic role of intuitions in the theory of logic. He spent a study visit at the University of Genoa and held research positions at the University of Bergen and the Complutense University of Madrid. His research focuses on classical and social epistemology as well as logic.


Eleonora Catena is a PhD student in Philosophy and Ethics of AI at the Friedrich-Alexander-Universität Erlangen-Nürnberg. She has a background in philosophy, political science and Artificial Intelligence.


Katherine Dormandy is Professor of Philosophy at the Department of Christian Philosophy at the University of Innsbruck. One major focus of her research is the norms of human thought in social settings. Central topics include epistemic authority, religious and worldview disagreement, the epistemological import of first-person narratives, and the role of interpersonal trust.


Christiane Ernst completed her Bachelor's degree in Computer Science with a focus on Cyber Security and Software Quality at the University of Innsbruck in 2021. She continued her studies and obtained her Master's degree in Business Information Systems from the same university in 2023. Additionally, from 2018 to 2020, she worked as a Technical Editor for the Journal of Statistical Software. Currently, she is working as a research assistant and doctoral candidate at the Department of Information Systems, Production and Logistics Management.


Kathrin Figl is a Professor for Human-centric Information Systems Design at the Depart­ment of Infor­ma­tion Sys­tems, Pro­duc­tion and Logis­tics Man­age­ment at the University of Innsbruck. Her research focus lies on human-centric information system design, human-AI interaction, algorithmic management, digital nudging, online manipulation, eye-tracking in user research, cognitive biases and heuristics and fake news.


Federica I. Malfatti is currently completing her habilitation at the Department of Christian Philosophy as part of the Erika Cremer Habilitation Programme at the University of Innsbruck. In her habilitation project, she is working on the topic of epistemic trust. The aim is to investigate the nature and normativity of epistemic trust and to shed light on the relationship between trust and autonomy.


Giuseppe Primiero is Professor of Logic with the Logic, Uncertainty, Computation and Information Lab in the Department of Philosophy at the University of Milan, Italy. He acts as Scientific Director for PHILTECH, Research Center for The Philosophy of Technology and as Programme Leader for the Master's Degree in Human-Centered AI. He is co-founder and Chief Research Officer of MIRAI. Giuseppe works in the formal modeling and verification of multi-agent systems, to evaluate properties of trustworthiness, fairness and responsibility in AI.


Camilla Quaresmini is a PhD student in Science, Technology and Policy for Sustainable Change at Politecnico di Milano, and a computational philosopher. Her research focuses on making social networks more equitable, particularly in how people interact and share information.


Ritsaart Reimann is a postdoctoral researcher working on the ethics of artificial intelligence, with a background in social epistemology. His current research centres on questions around responsibility and trust in human–AI interactions. He combines philosophical analysis with empirical work that explores how people think about AI systems and what these attitudes mean for the design and regulation of intelligent machines.

Please register via email at christliche-philosophie@uibk.ac.at.

Department of Christian Philosophy
University of Innsbruck
Karl-Rahner-Platz 1, 1st floor
A-6020 Innsbruck