What are the top 5 assessment types used at MQ? How resilient are they to the use of Generative Artificial Intelligence? And what (if anything) can we do to enhance their resilience right now, without requiring changes to the MQ CMS?

We ran the numbers and… (drumroll),  

The top five* assessment types used across units in 2023 are: 

  • Quiz/test  
  • Participatory task (with great variation) 
  • Examination 
  • Essay 
  • Report 

*As measured by the number of tasks used across all MQ units at all unit levels in the CMS as of S1 2023. 

Let’s consider the resilience of these five types, now that AI tools are widely available.

Quiz / Test

High risk for online non-invigilated quizzes and tests.

Such quizzes are vulnerable where questions ask for factual recall of basic discipline knowledge. Generative AI tools (such as ChatGPT) can answer multiple choice questions, including providing explanations of why each option is correct or incorrect.

Mitigation strategies include:

  • Running quizzes during tutorial time (see this post for a detailed ‘how-to’).
  • Including diagrams or images in the question stem or in the options.
  • Writing questions that focus on higher-order application (analysis, synthesis or evaluation).
  • Rephrasing questions to ask students to pick the most likely, most effective or least effective option.
  • Including complex, multi-step calculations.

Participatory task 

High risk for text responses to a discussion forum.

Participatory tasks can manifest in many different ways across the university! Tasks such as responding to discussion forum questions in iLearn as well as generic written reflections can be vulnerable to generative AI. Creative tasks such as artworks may be vulnerable to visual generative AI tools.

Mitigation strategies include:

Running participatory tasks in class, asking students to produce accompanying images or diagrams, or using multiple connected reflective tasks related to other unit work. Requiring specific, recent, localised responses that are contextualised to events, ideas or activities undertaken in the unit can reduce the utility of generative AI tools.

Examination

Low to Medium risk for in-person invigilated exams.

Invigilated in-person examinations are reasonably secure, but they are not impervious to cheating: crib notes, looking up information while in the toilet, impersonation, fake IDs, using electronic devices to communicate with an assistant, gaining early access to questions, and changing results have all happened at Australian universities.

Pen-on-paper exams also suffer from being a relatively inauthentic test of knowledge application in many disciplines, particularly at higher levels. ‘Take-home’ examinations, however, are vulnerable in a similar way to online quizzes and essays.

Mitigation strategies include:

Writing questions that require application of theory to data or practices, or asking students to include a congruent diagram, flowchart or drawing in their response.

Essay

Very high risk for non-invigilated essays.

Non-invigilated essays on broad, general, and well-known concepts are particularly vulnerable to generative AI.  

Mitigation strategies include:  

  • Including localised, unit-specific, recent resources in the question or case materials. Similarly, adding multiple media elements as part of the task will make submitting the task to a generative AI tool more difficult.
  • Requiring students to use refereed journal sources (those behind a ‘pay wall’ that are still available to students for free via the MQ library), and checking that legitimate sources and correct citations appear in student responses.
  • Using new case studies or questions rather than re-using them (even if they are unique, localised, and recent).

Report

High risk for non-invigilated reports.

The risks around reports are similar to those of essays, and the risks increase if (for example) well-known organisations with a high online presence are used, or where common issues or problems are the subject (e.g. the marketing strategy of Coca-Cola or the design brief of the Sydney Opera House).

Mitigation strategies include:  

  • Using localised, niche, recent and specific task subjects, elements or resources drawn from the unit.
  • Requiring one or more congruent multimedia elements in student responses, e.g. graphs, diagrams, flow charts, infographics or videos that directly illustrate the argument, idea or data being presented.

Exploring risk factors

Besides the top five, we can also consider the design features of assessment that may increase the risk of generative AI tools impacting the integrity of assessment.

You may like to try this activity:
Assessment design reflection questions – an explorer of assessment risk due to AI tools!

Overall advice

Where possible, seek to design tasks and questions that are recent and authentic to the local context of students and the unit. You can also ask students to include a combination of local, recent, peer-reviewed or pay-walled resources (such as journal articles found via MQ library databases) in their response. Asking students to include live links (e.g. a DOI URL) in their reference list entries, or links to generative AI tool output, will also help markers check on integrity.
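If you do ask for live DOI links, markers can spot-check that they actually resolve. As a minimal sketch only (not an MQ tool or process), the Python script below queries the public doi.org resolver for a plain text list of DOIs; the file name dois.txt and the checking logic are illustrative assumptions, and some publisher sites block automated requests, so any failures should still be checked manually in a browser.

```python
# Rough sketch only: check whether DOIs listed in a text file resolve via the
# public doi.org resolver. The file name 'dois.txt' is an assumption.
import urllib.error
import urllib.request


def doi_resolves(doi: str) -> bool:
    """Return True if the DOI resolves through https://doi.org/."""
    request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status < 400
    except (urllib.error.HTTPError, urllib.error.URLError):
        # Some publisher sites block automated requests, so a failure here
        # still warrants a manual check in the browser.
        return False


if __name__ == "__main__":
    with open("dois.txt") as handle:  # one DOI per line, e.g. 10.1000/xyz123
        for line in handle:
            doi = line.strip()
            if doi:
                status = "resolves" if doi_resolves(doi) else "did not resolve"
                print(f"{doi}: {status}")
```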

Tasks that involve multiple elements or complexity, higher-order thinking, argumentation, evaluation and judgement are harder for generative AI tools to handle well, at least without more guidance and input from the user; by that point the user needs to understand the subject in some detail to be able to produce reasonable output.

These are broad categories and contain a great deal of variation. We encourage you to explore the intersection of generative AI tools and the assessment tasks you have designed by querying the tools with your task and questions, and to be guided in your approach by discussions in your discipline group.

Share your experience

We welcome your comments below about your proposed assessment changes or ways of working with students around the use of AI tools. You can also contribute your ideas by emailing professional.learning@mq.edu.au.

See other posts in the Generative AI series.

Found an ‘AI generated’ academic integrity breach? See this advice on how to gather evidence and report it to MQ for investigation.

Acknowledgements: Banner image: Stable Diffusion generated artwork “Robot Karate” (28 Feb 2023). M. Hillier. CC0 1.0 Universal Public Domain.

Posted by Mathew Hillier

Mathew has been engaged by Macquarie University as an e-Assessment Academic in residence and is available to answer questions from MQ staff. Mathew specialises in digital assessment (e-Assessment) in higher education. He has held positions as an advisor and academic developer at the University of New South Wales, University of Queensland, Monash University and University of Adelaide. He has also held academic teaching roles in areas such as business information systems, multimedia arts and engineering project management. Mathew recently led a half-million-dollar Federal government funded grant on e-Exams across ten university partners and is co-chair of the international 'Transforming Assessment' webinar series, the e-Assessment special interest group of the Australasian Society for Computers in Learning in Tertiary Education. He is also an honorary academic at the University of Canberra.
