Organized by Federica I. Malfatti (Institut für Christliche Philosophie)
Artificial intelligence systems increasingly shape how we access information, make decisions, and interact with the world. From personalized assistants and automated decision systems to AI-generated media, these technologies raise urgent questions about trust. When should we trust AI systems? What makes them trustworthy? And how do users actually respond to them?
This workshop brings together philosophers, computer scientists, and empirical researchers to explore the foundations and challenges of trust in AI. Contributions address topics including the relationship between expertise and understanding in AI systems, formal approaches to verifying ethical properties such as fairness and accountability, the impact of AI assistants on personal autonomy, and the ways transparency labels influence trust in AI-generated content.
By combining conceptual, technical, and empirical perspectives, the conference aims to deepen our understanding of trust in AI and to clarify how AI systems should be designed, evaluated, and integrated into human epistemic practices.
Where?
Dekanatssitzungssaal (Karl-Rahner-Platz 1)
When?
12.05.2026, 8:45–18:00
