AI & AI

The availability of generative artificial intelligence tools such as ChatGPT, Google’s Bard, Bing AI and many others presents emerging challenges for academic integrity. However, our academic integrity principles have not changed – we will continue to expect staff and students to apply academic integrity values of honesty, respect, trust, responsibility and support, across learning, teaching, and research.  

We have already seen some instances of students using artificial intelligence to prepare their assignments – but is this a breach of the Academic Integrity Policy?

The answer, of course, is that it depends.

Each assessment task will have a different level of acceptable use of these tools. It is important to explain to students if and how they can use these tools in their assignments.

Although the Academic Integrity Policy is currently under review, it is still the case that if students directly copy and paste from any generative artificial intelligence service or software, it will be considered plagiarism.

What can academic staff do?

  • Be *crystal* clear with your students about what is acceptable, and what is not, for each assessment task. This advice should be provided to students when assessment instructions are given. Putting it in writing in the assessment instructions and the rubric, and supporting it with a verbal discussion in class, will help ensure students understand what is required. Provide details of how they may or may not use these types of tools in order to avoid unintended Academic Integrity breaches. To help you detail what is and is not permitted, you can use:
    • An assessment checklist of possible inclusions and exclusions that you can customise for your assessment tasks. The checklist also includes a supporting traffic light framework that you can show to students.
  • Have conversations, with colleagues and students, about what the use of artificial intelligence might look like within your discipline and for your unit assessment tasks. You may like to facilitate a class discussion using this resource.
  • Advise students on the most appropriate way to acknowledge the use of AI tools. The library has prepared some advice on citing and critiquing AI sources. We also have an extended discussion on advising students about using and evidencing their use of generative artificial intelligence for assessment.
  • Be on the lookout for evidence, and be sure to discuss this with your teaching and marking team too. Indicators to look out for might be one, or a combination, of these:
    • Fake references. Make sure you enable ‘include references’ in Turnitin. If the references don’t match a source, or a reference is a patchwork of matches, then investigate further.
    • Invalid use of citations — that is, the article cited doesn’t contain the quotes being used.
    • Factual errors. Especially those that go against what was taught in the unit.
    • An unexpected change in the student’s performance or quality of writing in a task compared with earlier submissions in the unit.
    • Unusual or odd words or phrases that seem out of place in student writing.
    • Tortured phrases may be the result of using a paraphrasing tool – for example: “Counterfeit consciousness” in place of “Artificial intelligence”.
    • Use of only generic, world-famous, non-localised, pre-2021* resources, sources or examples – but only where the task instructed that localised, recent, niche or specific information be included in the response. [* Update: some Generative AI tools can now integrate web search results in the output].

Be aware that the points above are not ‘proof’ that the work was written by AI; they are only possible indicators. They may instead point to poor scholarship, or to contract cheating.

Update: On 5 April 2023 Turnitin launched an ‘AI writing detection’ feature within the similarity report, available to staff only (not students). It too is not ‘proof’, so please read this advice about the features and limitations of Turnitin’s AI detection tool. Update 2: The AI detection feature is no longer enabled at MQ.

Note: Staff are strongly recommended to avoid using third party “AI detection” services to check student work. Aside from the privacy concerns, the resulting evidence is likely to be deeply questionable.

What to do if you find a suspected ‘AI-generated’ academic integrity breach

If you have any concerns about how a student has completed an assessment task, then we urge you to report it. However, suspicion or a hunch isn’t sufficient on its own. You should provide evidence and reasoning about why you think a student may have used artificial intelligence.

  • Before reporting it:
    • Gather points of evidence (as outlined above).
    • Consider having a conversation with the student. Some questions to ask include: What was their writing process? How did they go about completing the assessment? What key resources did they draw upon? Also ask questions to ascertain their level of understanding of the content of the assessment: What were the key points they learnt from completing the assignment? Can they justify the arguments, findings or position expressed in the assessment?
    • Write down the rationale for your suspicions. In addition to the above, consider the context of the student, the nature of the task, what students were told was permitted or not permitted for that task, the content itself, and the conventions of the discipline.

You can also talk to your faculty Academic Integrity Officer before making a formal report. 

  • Remember: Reporting a suspected breach of the Academic Integrity Policy doesn’t always result in an allegation letter being sent to the student, but it does help determine where further resources or education need to be provided to students.

Once you have considered the above, you are ready to make a formal report.


Academic Integrity Resources at MQ


Share your experience

We welcome your thoughts in the comments below regarding AI (academic integrity) and generative AI (artificial intelligence) at MQ. You can also contribute your ideas by emailing professional.learning@mq.edu.au.

See other posts in the Generative AI series.

Acknowledgements: Post edited and reviewed by Kylie Coaldrake, Mathew Hillier, Riley O’Keefe, Kane Murdoch and Daniel Anson. Banner image: Peshkova on Shutterstock

Posted by Teche Editor
