Recommendation for good practice in the use of generative AI at the Vienna University of Technology.
Please note that the information below is updated regularly to reflect the current state of knowledge.

The progressive development of AI technologies opens up a variety of opportunities in teaching and learning, but also brings ethical, social and legal challenges. This page outlines good practice for dealing with generative AI at TU Wien in studying and teaching, from a university didactics perspective. Whether teachers use AI systems in their courses is up to them. If they do, clear guidelines must be defined and the focus must be on how the systems are handled.
Responsibility for the results of such generative systems always lies with the human; the results must therefore be critically questioned.
What to look for?

Interdisciplinary integration: Where possible, AI topics should be integrated into courses so that students learn how to work with them. AI permeates many disciplines and opens up a variety of opportunities, for example as a learning partner.
Practical relevance in education: The integration of generative AI should focus on project-based learning in which students actively use AI. This promotes practical knowledge and creative problem solving.
Ethics and society: When using AI in teaching, attention must also be paid to ethical and social aspects of AI. Raising students' awareness of responsibility and the impact of AI on society is a key issue.
Bias: Generative AI systems are never free of bias, because they reflect the biases of the material they were trained on; this bias can harm individuals, groups of people, or society as a whole. Appropriate caution is therefore required when dealing with synthetic media, and students must be encouraged to think critically in this context as well.
Collaborative learning: Collaborative learning and the exchange of ideas and experiences between students can be encouraged. AI systems such as ChatGPT can be useful as learning partners that promote critical thinking, precisely because their results are often flawed.
Privacy: In the context of AI systems (as with other digital systems), privacy can be an issue when personal data is used to train the AI or when the AI is used to analyze personal data. AI systems must also comply with data protection standards, such as purpose limitation for personal data or data transfer to a "third country" in line with data protection requirements. Students must be made aware that everything they enter may accordingly become publicly accessible.
Transparency and explainability: The AI models and algorithms used should be as transparent and explainable as possible. To this end, reference can also be made to current research results on different systems. This is important to create trust in the decisions of the AI systems and to prevent discrimination.
Responsible research: AI research should be conducted ethically and responsibly. Potential risks and impacts on society and the environment must be considered.
Respect intellectual property rights: Intellectual property rights must be respected in the development and use of AI systems. To this end, it may be helpful to clarify the rights to data, models, and software in advance.
Support low-barrier access to teaching and learning materials: Common generative AI applications can also be used to generate image or video descriptions or to enable automated captioning of audio and video materials. This can promote low-barrier access and support equity; a small captioning sketch follows below.
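The following Python sketch illustrates the kind of automated captioning meant above, using the openly available Whisper speech-recognition model. It is only an illustration under assumptions: the file names and the chosen model size are placeholders, and this is not a tool prescribed by TU Wien. The generated captions must still be checked and corrected by a human.

```python
# Minimal captioning sketch (illustrative only): transcribe a lecture
# recording with the open-source Whisper model and write SRT subtitles.
# File names and model size are placeholders.
import whisper


def to_srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    hours, rest = divmod(seconds, 3600)
    minutes, secs = divmod(rest, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}".replace(".", ",")


model = whisper.load_model("base")          # small model, runs on CPU
result = model.transcribe("lecture.mp4")    # returns full text plus timed segments

with open("lecture.srt", "w", encoding="utf-8") as srt:
    for index, segment in enumerate(result["segments"], start=1):
        srt.write(f"{index}\n")
        srt.write(f"{to_srt_time(segment['start'])} --> {to_srt_time(segment['end'])}\n")
        srt.write(segment["text"].strip() + "\n\n")
```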
Handling AI in written work: Since Large Language Models such as ChatGPT can produce entire written papers (e.g., bachelor's theses, seminar papers, etc.) as well as presentations, it is important to focus on the process of producing such papers. ChatGPT and similar systems must be documented as tools, and the passages where AI was used must be marked accordingly. Since the responses are often incorrect, it is important that students learn how to use these systems critically.
In particular, it is important to document what the generative AI systems were used for in creating the paper and how the correctness of the generated media was verified. It is also useful to document, for example in an appendix, what was learned in the process. For substantial contributions, indicate the generative AI system used as well as the prompts and the date of the query used to generate the media.
For more information about the rules on plagiarism, see the Leitfaden zum Umgang mit Plagiaten in studentischen Arbeiten an der TU Wien (guidelines on dealing with plagiarism in student work at TU Wien).
If generative AI texts are used in one's own work without substantial changes, they should be treated similarly to citations from other sources. This means that it must be clearly marked which text passages were created by a generative AI system.
In addition to identifying the synthetic material, a source citation is required that includes the "prompt" used to generate the media, a date, and the system used. ("Prompt example, 09/12/2023, ChatGPT, version").
Collect and share experiences: In order for TU Wien to continue on its learning path in a goal-oriented way, the experiences you have gathered are very important to us and will be read with interest. Experiences can be sent to anna.fuessl@tuwien.ac.at.
Academic integrity: Students must learn that any use of generative AI must be clearly and transparently labeled, that the prompts used must be disclosed, and that links to the results must be provided. It is important to understand that failing to disclose the use of generative AI in a student paper is tantamount to plagiarism.
Special risks in dealing with AI
"Cognitive Bias": The linguistic fluency of systems like ChatGPT comes across as Intelligent and Persuasive, thus influencing perception. This is true even if the written information is incorrect.
AI and the environment: In terms of sustainability, training AI systems is very energy-intensive and correspondingly burdensome for the environment. This applies both to established systems such as ChatGPT and to models that are trained in-house.
Discriminatory algorithms: Avoid using algorithms that could provide discriminatory results based on gender, skin color, or other protected characteristics.
Uncontrolled autonomy: Avoid using AI systems with high autonomy, especially when human supervision is required. The responsibility should always lie with the human.
Lack of data protection measures: Ensure that adequate measures are in place to protect personal data. Inadequate safeguards can lead to data breaches.
University didactic concept
Integrating (generative) AI as a support in your course at TU Wien can help to enrich the learning experience and provide students with a diverse approach to the topics. The points below influence each other and therefore should not be considered independently. Here are some ways you can use Large Language Models and image-generating AIs:
Supplementing course materials: Students can use Large Language Models like ChatGPT to get additional explanations, examples, and applications of course content. They can ask questions and use the answers provided to better understand complex concepts. Questioning these answers can further the understanding of the algorithms these systems use and thus stimulate critical thinking.
Adapted methodology: In written work, the focus should be on the process of creation and less on the end result.
Interactive discussions: One way AI can be used is through virtual discussion groups in which students interact with ChatGPT or similar systems to get different perspectives and views on a topic. This can promote critical thinking and problem-solving skills and provide alternative explanations.
Exercises and tasks: Interactive exercises and tasks in which students use AI to develop solutions are another option. This can be particularly useful for programming tasks or mathematical problems, for example. Integrating AI into simulation or modeling tasks can be an exciting way to explore alternative scenarios and outcomes and to deepen the understanding of complex systems; a small sketch of such a task follows below.
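As an illustration of the kind of modeling exercise meant here, the sketch below uses a simple logistic growth model; the model choice, parameter values, and function names are assumptions made for this example and are not part of this recommendation. Students could vary the parameters, ask a generative AI for plausible alternative scenarios, and critically compare the outcomes.

```python
# Illustrative modeling exercise (all names and numbers are invented for
# this example): students vary the parameters, ask a generative AI for
# alternative scenarios, and critically compare the predicted outcomes.

def logistic_growth(p0: float, r: float, capacity: float, steps: int) -> list[float]:
    """Discrete logistic growth: p_{t+1} = p_t + r * p_t * (1 - p_t / capacity)."""
    population = [p0]
    for _ in range(steps):
        current = population[-1]
        population.append(current + r * current * (1 - current / capacity))
    return population


# Baseline scenario versus an "AI-suggested" alternative that students must justify or refute.
baseline = logistic_growth(p0=10, r=0.3, capacity=1000, steps=30)
alternative = logistic_growth(p0=10, r=0.9, capacity=1000, steps=30)
print(f"Baseline after 30 steps:    {baseline[-1]:.0f}")
print(f"Alternative after 30 steps: {alternative[-1]:.0f}")
```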
Course preparation and learning outcomes: For didactic reduction and inspiration, generative AI can be useful, for example, to generate case studies or to formulate learning outcomes.
Critical stance: When working with generative AI, the answers must always be critically questioned, since the systems produce them from statistical models and they do not necessarily correspond to the facts. When students work with such answers themselves, they learn to deal with them critically and responsibly, since the answers are not always factually correct and, for example, citations are not always correctly labeled.
Support in deepening the learning material: Generative AI can answer basic questions, provide more detailed explanations or alternative solutions, assist in troubleshooting, and thus act as a "learning buddy."
Research: Students can be offered the opportunity to use AI systems to assist with research for their projects. This can make it easier for them to access different sources and information and teaches them to critically question the answers given.
Ethics and social impact: Generative AI can also be used to foster discussions about ethical and social implications of technology. Students can ask questions and learn about different ethical viewpoints on these issues.
Feedback and evaluation: In addition to feedback from teachers within a course, generative AI can be used to provide quick and constructive feedback on exercises or projects. This can support the learning process, motivate students, and reduce the teachers' workload; the time freed up can in turn be used for direct interaction with students.
Testing: Generative AI is well suited to answering questions aimed at pure knowledge reproduction. To prevent students from using the systems in this way, questions can be shifted more towards applied knowledge. An example from mathematics: a flawed solution path is presented as the problem statement and the students' task is to find the error (see the example below).
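As an illustration of such a "find the error" task, the short example below was constructed for this page and is not a prescribed exercise: a derivative is computed with an incomplete application of the product rule, and students are asked to locate and correct the mistake.

```latex
% Example task (constructed for illustration):
% "The following solution contains an error. Find and correct it."
%
% Claimed solution:
\[
  \frac{\mathrm{d}}{\mathrm{d}x}\left(x^{2}e^{x}\right) = 2x\,e^{x}
\]
% Expected finding: the product rule was applied incompletely; the correct
% derivative is
\[
  \frac{\mathrm{d}}{\mathrm{d}x}\left(x^{2}e^{x}\right) = 2x\,e^{x} + x^{2}e^{x}.
\]
```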
It is important that you view AI as a complementary tool and that it is integrated into your course in a meaningful way. Educate students on how to best use (generative) AI and what the limitations of the technology are. Encourage students to think critically and to question and examine the knowledge they receive from AI.
Dealing with artificial intelligence, especially generative AI, at TU Wien requires a comprehensive understanding of the technical as well as the legal and ethical aspects. Adherence to the points above and the implementation of the university didactic concept contribute to promoting responsible handling of AI and to positioning TU Wien as a pioneer in this field. The steady progress in AI research and teaching offers numerous opportunities that should be seized, while possible risks and challenges must be actively addressed at the same time.