Artificial Intelligence – Recommendations For Students
Recommendation for students of the Faculty of Education at the University of Innsbruck on the use of artificial intelligence (AI) in their studies.
Last update: December 2025
The rapid development of Artificial Intelligence (AI) and the increasing possibilities for its application open up risks, but also opportunities for many areas of society and thus also for Science and Higher Education. The University of Innsbruck and thus also the Faculty of Education see it as their duty to advise and support students in the use of this technology, both with regard to their studies and also for professional use. The faculty encourages students to continuously experiment with this technology, explore its possibilities and limitations, and share their experiences with others. This recommendation thus provides guidance on what AI can be used for and how it can be used in studies, but above all, it encourages a critical and reflective attitude toward the possibilities, limitations, and dangers of this technology.
It is important to note that the decision as to whether the use of generative language models and AI-driven software in general is permitted in courses or for the performance of certain tasks in oral or written examinations or in final and qualification tasks is primarily the responsibility of the respective course instructor or supervisor and must be agreed upon. It must also be agreed how the possible use of AI is to be documented in concrete terms (e.g., by adapting the affidavit, screenshots, etc.).
Under point 7, you will find a summary of the most important points for a critical and reflective approach to AI.
Project Management: Mag. Wolfgang Hagleitner, PhD; Mag.a Stefanie Jäger, BEd, BA, PhD. Project team: Birgit Bätz, BA, MA; Ass. Prof.in Dr.in Diana Lohwasser; Patricia Schwärzler
With special thanks to: Tabea Eichhorn, BA MSc, University of Innsbruck; Faculty and Student Representative for Educational Sciences, University of Innsbruck; Univ.-Prof. Mag. Dr. Matthias C. Kettemann (Institute for Theory and Future of Law, University of Innsbruck); Associate Prof. Ulrich Leitner (Dean of Studies, Faculty of Education, University of Innsbruck); Christoph Tauber BA (Office of the Dean, Faculty of Education); Robert Rebitsch (Office for Scientific Integrity, University of Innsbruck).
When computer systems are able to recognise patterns, make decisions, or translate language, this is generally referred to as AI. When these systems not only analyse data but also generate new content based on large amounts of data and learn from that data, this is referred to as generative AI. “Generative AI means that prompts based on input data are used to produce new content, such as text, images, audio, and video, whose automated origin may not be obvious to humans. Examples of generative AI include AI chatbots such as ChatGPT, Bard, and Bing. Other examples are DALL-E, Murf, Simplified, and Midjourney.” (University of Innsbruck 2024a)
A wide range of generative AI tools are available in the context of study, research, and teaching. Their possible applications are constantly expanding and range from use as an informed ‘conversation partner’ or source of inspiration or support, to the creation, editing, and processing of texts, the analysis of qualitative or quantitative data, literature research, or the summarisation of texts, to exam preparation or the creation of program code.
Food for thought:
Can I assess which AI programs are suitable for which applications?
Literature:
University of Innsbruck (2024a). FAQs on the topic of AI at the University of Innsbruck. Available at: https://www.uibk.ac.at/en/about-us/digitization/ai-and-university/faqs/ (accessed 22 October 2025).
The generative process begins with a request (prompt) made by the users. Based on these prompts and the training data, generative AI uses patterns, similarities, and probabilities to calculate, for example, the next word, the next phrase, or the next pixel. A lack of “knowledge” in the form of data on specific subject areas, the reliability of sources, errors or biases in the training data, outdated data, or faulty contextualisation can lead to results that sound plausible but are false, inaccurate, discriminatory, or stigmatising. The quality of the results depends largely on the quality of the training data, but also on the prompts. Users are therefore required to guide AI through meaningful prompts and to always question and verify (especially) text outputs in terms of their correctness and quality.
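The next-word calculation described above can be illustrated with a deliberately tiny toy sketch. This is not how any real language model is implemented (real models use neural networks over tens of thousands of tokens); the hand-written probability table below is purely a hypothetical example of the underlying principle of sampling the next word from learned probabilities.

```python
import random

# Toy "language model": for each current word, a list of possible next
# words with learned probabilities. Real models learn such patterns from
# vast training data rather than from a hand-written table.
learned_probs = {
    "the": [("cat", 0.5), ("dog", 0.3), ("idea", 0.2)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
}

def next_token(word, probs=learned_probs):
    """Sample the next word according to the learned probabilities."""
    candidates = probs.get(word, [("<end>", 1.0)])
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights)[0]

print(next_token("the"))  # e.g. "cat"; the output is probabilistic, not fixed
```

The sketch also shows why outputs can be plausible but wrong: the model only reproduces patterns in its (possibly flawed or outdated) training data, with no notion of truth.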
Users are also encouraged to consider aspects of sustainability and to use AI consciously and in a resource-efficient manner. For example, a conversation with ChatGPT consumes around half a litre of water, and the energy required to generate a single image corresponds to a full smartphone charge. The training of AI models also depends on meticulous human labour, which is often carried out in low-wage countries with low standards of worker protection (cf. Ahmad & Staiger 2024). For this reason, the use of AI should always be critically examined with regard to ecological and social sustainability.
Food for thought:
What do I know about the underlying training data and AI models? Can I assess whether and to what extent the results of the AI tool are appropriate and useful for my purposes?
Literature:
Ahmad, Zamina; Staiger, Teresa (2024): Das Ökosystem der KI-Basismodelle: Wie Daten, Energie und menschliche Arbeit KI-Basismodelle formen [The ecosystem of AI foundation models: how data, energy, and human labour shape AI foundation models]. Ed. by reframe (Tech). Available at: https://www.reframetech.de/wissensseite-basismodelle/ (accessed 27 January 2025).
Critical and reflective engagement with content was already central to academic work even before AI helped generate content. Therefore, the rules of good scientific practice also apply when working with (generative) AI.
The following briefly outlines the ‘Guidelines for Good Scientific Practice at the University of Innsbruck’ (University of Innsbruck 2023) that are relevant to studying and teaching:
Maintaining scientific integrity:
“All persons engaged in research are committed to scientific integrity. Honesty, sincerity, transparency, and the scientific method are indispensable prerequisites of scholarly work if it is to contribute to the acquisition of knowledge and be valued by society.” (University of Innsbruck 2023)
Scientific work lege artis in accordance with general, discipline-specific, and subject-specific rules and standards:
Documentation of the scientific working process so that each step can be traced.
Protection of intellectual property and prior contributions of all participants as well as third parties, and correct attribution of authorship: all external contributions must be disclosed and documented. This means that sources and their authorship must be referenced.
Violations of the rules of good scientific practice must be strictly avoided. These include:
Plagiarism
Idea theft
Falsification and appropriation of research results
Food for thought:
Am I familiar with the general and discipline-specific standards, and can I appropriately apply them in the context of AI use (e.g., research ethics, the use of sources)?
Literature:
University of Innsbruck (2023): Safeguarding good scientific practice. Available at: https://www.uibk.ac.at/en/research/quality-assurance/good-scientific-practice/ (accessed 22 October 2025).
The decision as to whether generative language models and AI-driven software in general are permitted in courses or for the completion of certain tasks in oral or written examinations or in final and qualification assignments lies primarily with the respective course instructor or supervisor and must be agreed upon in advance.
Grades, evaluations, and the awarding of academic titles are based on academic achievements that have been completed independently and without external assistance. Plagiarism occurs when “texts, content, or ideas are adopted and presented as one’s own.” (University Act 2002/version 2021; see also section 4 “Good Scientific Practice”). For example, if AI is used to improve one’s own text (spelling, grammar, expression), this does not constitute plagiarism. However, if someone else’s text is paraphrased with the help of AI without citing the source, this constitutes plagiarism. If text is generated with the help of AI and not appropriately identified as external work, this at least creates the false impression of an independent scientific achievement¹.
Food for thought:
Have I taken into account the requirements of the respective course instructor/supervisor and sufficiently clarified the (possible) use of AI, and am I aware of the consequences if I fail to do so?
Literature:
Universities Act 2002 (UG 2002). Consolidated version of 31 December 2021. Available at: https://www.ris.bka.gv.at/Dokumente/Erv/ERV_2002_1_120/ERV_2002_1_120.html (accessed 22 October 2025).
Footnotes
1 See in particular §11(2) of the Study Law Regulations of the University of Innsbruck; University of Innsbruck 2024b.
In general, when using AI or entering data into AI systems, the general Austrian and European data protection laws, personal rights, and copyright laws, as well as the study law regulations of the University of Innsbruck apply (cf. University of Innsbruck 2024b). Responsibility for complying with these laws and regulations lies with the users or students.
Personal data must not be entered into AI tools, as frequent input of sensitive data about a person may allow AI tools to learn from it, and it cannot be ruled out that providers may read, store, and use this data. When using AI to process scientific datasets (e.g., numerical data, interview data, transcripts, audio, or video recordings), it must be ensured that promised anonymity, scientific integrity, and the University of Innsbruck’s guidelines for handling research data are maintained.
Data subject to any form of confidentiality must not be entered into AI systems. Data or texts that are considered intellectual works or are subject to copyright (e.g., texts or materials from lecturers) may not be entered into AI programs without the explicit consent of the copyright holders.
AI-generated outputs are generally free from copyright claims by the system provider or the system itself, provided that the output does not violate trademark rights or represent a continuation of copyrighted works that were used by the AI as training data and reproduced verbatim in the output (cf. University of Innsbruck 2024a). Responsibility for the use of AI-generated data lies with the user.
Food for thought:
Am I sufficiently familiar with copyright and data protection issues to assess whether my use of AI complies with legal requirements?
Literature:
University of Innsbruck (2024a). FAQs about AI at the University of Innsbruck. Available at: https://www.uibk.ac.at/en/about-us/digitization/ai-and-university/faqs/ (accessed 22 October 2025).
University of Innsbruck (2024b). Statutes of the University of Innsbruck: Section “Study Law Regulations”. Available at: https://www.uibk.ac.at/zentraler-rechtsdienst/richtlinien-und-verordnungen-des-rektorats/satzungsteile/stsb_konsolidierte_fassung.html.en (accessed 22 October 2025).
Texts, tables, images, etc. generated by generative AI do not constitute a scientific source; therefore, outputs produced by generative AI should not be cited in the same way as those of authors (cf. University of Innsbruck 2024a). The following section uses the example of ChatGPT to illustrate how AI-generated data can be used and how such outputs can be cited.
Use of AI for academic work using ChatGPT as an example
What can ChatGPT generally be used for?
Use of statements from ChatGPT as a non-scientific primary source
Literature research
Summaries
Improving the understanding of complex texts
Outlining/structuring
As input
…
Within this recommendation, section 6.1 discusses point 1, “Use of statements from ChatGPT as a non-scientific primary source,” and section 6.2 discusses point 2, “Literature research.”
Food for thought:
Have I sufficiently reflected on which AI programs I can use appropriately for which purposes? (Can I assume that AI-generated statements reflect expertise?)
Literature:
University of Innsbruck (2024a). FAQs about AI at the University of Innsbruck. Available at: https://www.uibk.ac.at/en/about-us/digitization/ai-and-university/faqs/ (accessed 22 October 2025).
If direct or indirect quotations from ChatGPT are used in an essay, a seminar paper, a bachelor’s or master’s thesis, or in another context (e.g., presentations, posters, etc.), they must be consistently cited in accordance with the citation style specified by the respective instructors or supervisors.
Recommended steps to ensure the traceability of AI interactions and AI-generated data:
Query ChatGPT; save the prompt and response directly with OpenAI, take screenshots, and copy the permalink – important for documentation and citation.
Insert quotations (direct or indirect) into your own work and indicate the source within the text using a short citation. Depending on the citation style, remember to include abbreviations such as cf. when paraphrasing.
Depending on the chosen citation style, include the sources in footnotes and/or in the bibliography (as described above).
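As an illustration only, the documentation steps above could be kept in a simple structured log. The field names, the helper function, and the permalink shown here are our own hypothetical suggestions, not an official format prescribed by the University of Innsbruck or by any AI provider:

```python
import json
from datetime import date

def log_ai_use(prompt, response, permalink, tool="ChatGPT"):
    """Record one AI interaction so it can be documented and cited later."""
    return {
        "tool": tool,
        "date": date.today().isoformat(),
        "prompt": prompt,
        "response": response,
        "permalink": permalink,  # e.g. the shared-chat link saved with the provider
    }

# Hypothetical example entry (placeholder text and link):
entry = log_ai_use(
    prompt="Summarise the main arguments of ...",
    response="The text argues that ...",
    permalink="https://chat.example.com/share/abc123",
)
print(json.dumps(entry, indent=2, ensure_ascii=False))
```

Such a log, kept alongside screenshots and saved permalinks, makes it straightforward to insert the required short citations and bibliography entries later.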
The guidelines presented here for using statements from generative AI models as a primary source were developed taking into account current requirements of the respective citation conventions, were coordinated with the Office for Scientific Integrity at the University of Innsbruck, and were adapted according to the regulations of the University of Innsbruck.
The information for documenting the use of generative AI models can also be used to make the use of these tools for summaries, outlines, etc. transparent.
A reliable literature search cannot be conducted using AI (here using ChatGPT as an example), as search queries may be answered incompletely, inconsistently, illogically, and uncritically. However, AI can support the research process, for example by narrowing down topics, assisting with translations, identifying synonyms, or automating certain processes. Nevertheless, a critical review of the results is indispensable.
Users should generally be aware of how AI searches, where it searches, and what it searches for. To facilitate a better understanding, the team of the University and State Library (ULB) at the University of Innsbruck has created both video tutorials and written materials, which are available to all students and staff of UIBK on OLAT after logging in.
As already emphasised in section 5, the decision regarding whether and how AI may be used in the course of studies must always be discussed with the respective course instructor or the supervisor of the thesis. In addition, the rules of good scientific practice (see above) must always be observed. If the use of AI (e.g., ChatGPT) is permitted, a critical approach is essential. The following points should be considered in particular:
- Language models and the content they produce (e.g., ChatGPT outputs) are NOT scientific sources. Indicating them as a source according to the citation styles described above serves only to show that language models were used, where they were used, and to what extent. When citing, the citation standards and requirements of the respective institute, instructors, or supervisors apply.
- Language models must not be treated as authors in citations within the text, among other reasons due to copyright considerations.
- A separate entry for the AI tools used must be included in the bibliography.
- Limitations must always be taken into account:
- Language models are not designed to verify the truthfulness of their own statements.
- Language models are often not up to date.
- Summaries, research results, and other responses to prompts may contain incorrect or inaccurate information.
- There is a wide variety of generative AI models with different strengths and weaknesses. To use language models effectively, it is important to understand how they work and to inform yourself about their possibilities and limitations.
- Data protection compliance is often insufficient, data processing is unclear, and transparency is lacking. This must be considered when using (generative) AI models.
Independent thinking, reflection, and critical engagement are essential for academic work and for work in the field of educational sciences. These tasks cannot and should not be carried out by generative language models. AI cannot replace independent thinking!
- Future of Life Institute (FLI): The EU Artificial Intelligence Act. Current developments and analyses of the EU AI Act.
- AI Campus (KI-Campus): The learning platform for artificial intelligence.
- University of Innsbruck: OLAT course: Basic Training in Artificial Intelligence (access after OLAT login).
- University of Innsbruck: FAQs on artificial intelligence
- University of Innsbruck: AI Tools – Resources for teaching and research
- University of Innsbruck: Writing Center of the University of Innsbruck