
AI & Academic Misconduct

AI tools present many opportunities for application in higher education. However, they also present challenges: concerns about academic integrity, and the need to recognise the limitations and ethical implications of using these tools. This article explores these challenges and the actions that can be taken to address them. It will cover:

  • Assessment design to counter academic misconduct through the use of AI tools
  • Detecting the use of AI in student work
  • Developing our students’ AI literacy

Assessment design to counter academic misconduct

This article on the AEPD website highlights the three elements that need consideration in our assessment design to reduce academic misconduct:

  • Opportunity – making engaging in academic misconduct more difficult and making it easier to detect
  • Rationalisation – ensuring that students understand the importance of academic integrity
  • Pressure – students face many pressures (academic, cultural/social, personal) that can drive behaviours leading to academic misconduct

Countering academic misconduct requires a holistic approach. Taking a course-based approach to assessment, and giving consideration to the ‘time on task’ each assessment requires, will reduce the pressure that can drive students into poor academic practice.

Expectations for academic integrity must be clearly communicated to students. Given the rapid development of AI tools, what counts as appropriate and inappropriate use must be made explicit. This is framed by the Coventry University Group position on the use of AI tools, and needs to be supported by helping students develop their AI literacy (see below).

How assessment tasks are designed can help reduce the opportunities for academic misconduct using AI tools. Designing tasks that are authentic, personalised, or live, that encourage application or reflection, or that focus on the learning process rather than the product can all help reduce opportunity. In addition, it is important that assessments are regularly refreshed: where authentic assessments are used (via case studies, for example), the case studies should be updated for each iteration.

Authentic assessment

Authentic assessments require students to demonstrate professional skills and attitudes, not just knowledge. For example, enquiry-based activities in which students present proposed solutions, with justifications.

Assessments with a degree of specificity, such as those based on very topical or local contexts, will make it more difficult for AI tools to generate convincing responses.

Personalise the assessment

For example, ask students to provide a short reflection outlining their approach to the assessment and their reasoning.

Portfolio approaches, requiring students to select a range of evidence, can also create a personalised assessment. Consideration will need to be given to the manageability of such assessments based on cohort sizes.

Making drafts part of the assessment will add personalisation, provide formative opportunities for students, and give insight into students’ capabilities and progression.

Where appropriate, allow students to submit in a mode of their own choosing, for example a podcast or narrated presentation instead of a written submission.

Focus on the learning process

Focusing on the end product, such as an essay or presentation, may not easily evidence all the learning, particularly skills and capabilities. The assessment could instead require evidence of the learning process, asking for plans, storyboards, or reflections that emphasise the skills and capabilities developed through the assessment.

Making connections between assessments can also help reduce the opportunity for academic misconduct and highlight process. For example, asking students to explain how they have used feedback from previous assessments enhances the personalised nature of these tasks.

Live assessments

Live assessments, such as vivas and presentations, provide an opportunity to interact with students and test their understanding directly.

Detecting the use of AI in student work

Large Language Model (LLM) AI tools work by predicting the next word in a sentence. This can mean they ‘make things up’, and the language they produce tends to be very controlled. As a result, there are some clues and patterns to look for that can give an indication of whether writing is AI generated (a rough code sketch illustrating such checks follows the list). These include:

  • A lot of repetition and reframing, particularly of paraphrases
  • A lack of opinion or personal response
  • Writing that lacks depth – convincing on the surface but with little substance
  • Generic answers with little that is creative, innovative, or strange
  • Writing that reinforces dominant narratives, ideas, or systems, occasionally straying into bias
  • Repetition of common words (‘it’, ‘is’, etc.), a side effect of next-word prediction
  • An absence of the typos you would expect in human-written text
  • Use of American spellings
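
None of these clues is conclusive on its own, and automated detection is unreliable. Purely as an illustration of how such surface patterns could be checked, here is a minimal Python sketch; the word list and metrics are assumptions for demonstration, not a validated detector.

    import re
    from collections import Counter

    # Illustrative only: surface-pattern checks inspired by the clues above.
    # The word list and metrics are assumptions for demonstration; this is
    # NOT a reliable AI detector and should not be used to judge student work.

    AMERICAN_SPELLINGS = {"color", "organize", "analyze", "behavior", "center"}

    def surface_clues(text: str) -> dict:
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        total = max(len(words), 1)
        return {
            # Heavy repetition of common words ('it', 'is', ...)
            "most_common_word_share": (counts.most_common(1)[0][1] / total) if words else 0.0,
            # Low lexical variety can indicate generic, 'controlled' prose
            "type_token_ratio": len(counts) / total,
            # American spellings in otherwise UK-English writing
            "american_spellings": sorted(AMERICAN_SPELLINGS.intersection(counts)),
        }

    sample = ("It is important to organize ideas clearly. It is also important "
              "to analyze the evidence. It is clear that structure matters.")
    print(surface_clues(sample))

Even taken together, such signals only warrant a closer look and a conversation with the student; they are not evidence of misconduct.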

AI Literacy

Outputs from AI tools are machine generated, drawing on the data each tool has been trained on. As such, any errors or bias inherent in those sources can be replicated. It is also known that tools such as ChatGPT can ‘make things up’. It is therefore vital that students are supported in developing their AI literacy so that, where such tools are appropriately used, students use them critically, recognising their limitations as well as their potential.

These considerations include:

  • Ethical use of AI – the risk that AI generates false information; the risk that AI communicates existing bias; its potential for plagiarism
  • Understanding how AI tools work – the predictive model behind text generation, for example (see the toy sketch below)
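
To make the predictive model concrete, the toy Python sketch below builds the simplest possible next-word predictor (a bigram model) from a made-up sentence. Real LLMs use neural networks trained on vast amounts of text and much longer contexts, but the underlying mechanism of sampling a likely next word is the same.

    import random
    from collections import Counter, defaultdict

    # A toy bigram 'language model': given the current word, it samples a
    # likely next word. The training text is a made-up example; real LLMs
    # learn from vastly more data, but the core mechanism is the same.

    corpus = ("the student wrote the essay and the tutor read the essay "
              "and the tutor gave feedback on the essay").split()

    # Count which words follow each word in the corpus.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def generate(start: str, length: int = 8) -> str:
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:  # no observed continuation: stop
                break
            # Sample in proportion to how often each continuation was seen.
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the essay and the tutor read the essay"

Note how the generator can only recombine patterns it has already seen, which is one reason AI-generated prose tends towards the generic and can confidently assert things that are not true.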

For further considerations on AI literacy, see Maha Bali’s post: https://blog.mahabali.me/educational-technology-2/what-i-mean-when-i-say-critical-ai-literacy/

Article Authors

Martin Jenkins & David Ashworth

Updated on June 27, 2023