Description
Large Language Models (LLMs) are rapidly evolving and are increasingly integrated into academic, professional, and everyday settings. Despite their growing presence, individuals' ability to use these tools effectively and responsibly, especially in educational settings, still varies considerably. Developing and enhancing AI literacy and prompting competences is therefore essential if LLMs are to be used effectively to support learning. In higher education in particular, LLMs offer significant potential to support teaching and learning, especially when used in a structured, discipline-specific, and reflective manner.
The project Prompt Lab: Documenting Prompts and Critically Reflecting the Effects of LLMs for Learning, initiated by a cross-faculty team at the University of Innsbruck, is dedicated to supporting both teaching staff and students in the effective and responsible use of LLMs for discipline-specific learning.
At the core of the project is the development of two components:
- a Prompt Submission tool: a collaborative digital platform where users can submit high-quality prompts related to their specific area of study
- a Prompt Catalogue: a collection for finding, exploring, and engaging with these prompts.
By making successful prompts openly available, the platform enables educators and students to use or adapt them, while also improving their own prompting skills.
In addition to the digital platform, the project includes presentations, practical workshops, and the development of teaching and learning materials made available on the University's OLAT platform. These activities aim to further disseminate and enhance AI literacy and prompt-engineering competences.