
Artificial Intelligence – Staff Guidance

Introduction

This article provides staff with guidance on students’ use of Artificial Intelligence (AI) within assessments and is designed to accompany the student guidance, Academic Integrity and Artificial Intelligence: A Guide for Students. AI technology is evolving at a fast pace, and we therefore need to be agile in our response to the capabilities of AI tools. Please note that more general guidance on AI and how we can use it in teaching and learning, as well as on threats to academic integrity, will follow soon.

Our position

The University Group recognises the potential of AI tools in our teaching and learning and in the workplaces that our students will be entering. It is therefore important that we support their use in a considered and transparent way that recognises the associated risks. We need to be future-focused and take advantage of what AI has to offer. One of the most significant advantages of AI is its ability to provide students with a personalised learning experience: AI technologies enable students to access learning materials more quickly and efficiently than ever before, which will support them with assessments. However, while taking advantage of what AI has to offer our students, we need to educate and guide them towards appropriate use of AI. It remains too easy for students to misunderstand how they can use AI tools and potentially breach our academic integrity guidelines. The Quality Assurance Agency (QAA) has recently produced a briefing paper on the threat to academic integrity posed by AI.

It is neither viable nor forward-thinking to impose a blanket ban on AI use, nor is it appropriate to ignore the risks it poses to academic integrity. We encourage staff to consider how we can adapt our teaching, learning, and assessment to make the most of AI in an effective, ethical, and transparent way. At the same time, we must continue to maintain academic integrity, and AI misuse will only add to the fundamental challenges we face in this endeavour. While we should continue to screen for academic misconduct, we should also address its root causes by developing a culture of academic integrity through open discussions with our students about the appropriate and inappropriate uses of AI tools.

Appropriate use of AI

  • To help to debug programming code.
  • When AI is allowed as part of the resources drawn upon to create an assessment, and students clearly show what is their own work and the steps they took in producing it (e.g., drafts, notes).
  • To help students with their comprehension, spelling, language, and grammar*.
  • To help students with disabilities convert from one medium to another*.
  • To help find answers to coursework questions (not exams) based on information that could be found on the internet*.  
  • To help generate ideas about research or generate material to provide a starting point.
  • To help get over ‘writer’s block’*.
  • To help with explanations about ideas and concepts*.
  • To help with planning the structure of written materials*. 
  • To help with reviewing and analysing written materials. 
  • To help revise materials.
  • When the use of AI has been clearly acknowledged and is allowed as part of the resources drawn upon to create an assessment.

* There should not normally be a need for a student to acknowledge these types of routine uses of AI tools unless they result in substantial amounts of largely unchanged AI output being included in the student’s submission. However, if in doubt, it is safer to acknowledge.

Inappropriate use of AI

  • Writing an assignment in the student’s own language, and then using an AI tool to translate it into English before submission.
  • Submitting work for assessment that consists of substantially unchanged or unmodified output from an AI tool, without acknowledgement of its use.
  • Using AI to answer phased online test questions in real time.
  • Use of AI when the module leader has stated it is disallowed for specific reasons in relation to the assessment.

If there are reasons why AI should not be allowed in relation to an assessment, it is important to make the rationale for this clear to students.

Acknowledging, Describing, and Referencing AI Use

It is essential that students understand why they must acknowledge their use of AI in any assessment, as a fundamental part of academic integrity. Students should acknowledge, describe, and reference AI use.

Acknowledgement

Students should acknowledge whether and to what extent AI has been used in their work. Examples are:

  • No content generated by AI technology has been presented as my own work.
  • I acknowledge the use of <insert AI tool(s)/link/date of access> to generate materials used for background research and self-study in the drafting of this assessment. 
  • I acknowledge the use of <insert AI tool(s)/link/date of access> to generate materials that were included within my final submission. 

Description

Alongside acknowledgements, students should describe how the information or material was generated (including the prompts used), what the output was and how the output was changed by the student. Examples are:

  • The following prompts were input into <AI system>: <List prompt(s)> 
  • The output obtained was: <Paste the output generated by the AI system> 
  • The output was changed by me in the following ways: <explain the actions taken> 

Reference

Students should cite and reference the use of AI using the APA Guidelines: https://apastyle.apa.org/blog/how-to-cite-chatgpt.

Example citation:

When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).

Example reference:

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

Assessment Design

In the long term, we encourage staff to consider how module learning outcomes, assessments, and assessment criteria can benefit from redesign, to ensure that it is the student’s own learning that is being assessed and to maintain academic integrity. This can be achieved by creating assessments that focus on process rather than product, and on applying knowledge rather than simply recalling it. In the shorter term, we encourage staff to offer students opportunities to learn how AI tools work; to openly discuss and debate the ethics of AI and the current strengths and limitations of the technology; and to explore whether AI tools can be used appropriately, i.e., effectively, ethically, and transparently, in relation to their assessments.

Appropriate and inappropriate use of AI will vary across disciplines and across assessments; it is therefore important to consider this guidance in the context of your subject area, the learning outcomes of your modules, and how students are assessed.

Assessment design to counter academic misconduct

Guidance for Students

https://sway.office.com/KU3CWfZdHFENWDJg?ref=email

Further Guidance

Understanding Artificial Intelligence Tools

AI tools and the impact on higher education

AI & academic misconduct

Authors

This guidance was authored by:

  • Irene Glendinning
  • Emma Holdsworth
  • George Hulene
  • Martin Jenkins
  • Stephen Dawkins


Updated on October 10, 2023