Remember when search engines arrived and became tools that we all use? Generative Artificial Intelligence may well do the same. These tools do more than generate text – they also produce images (DALL-E 2, Stable Diffusion), designs (StockIMG.AI) and audio (Podcastle.AI), and can be used for video editing (RunwayML) or for reviewing scientific literature. The space is expanding at a rapid pace, with over 1,000 AI tools now available.

While the scope for change is limited for assessment tasks already in train for 2023, there are things we can do to support students to work with integrity that do not require changes to the MQCMS.

The following ideas are a collection of tips and insights from across Macquarie. They are intended as actionable first steps towards our collective AI literacy that we can take now.

Test drive and tweak  

Test-driving assessment task questions from your units through generative AI tools will give you a good sense of what to expect from them. Make any viable adjustments to the task, and take the AI output into consideration when marking student responses and when conducting moderation with marking teams.

You may also like to try this activity:
Assessment design reflection questions – an explorer of assessment risk posed by AI tools!

Some tweaks that are applicable to many assessments:

  • Make assessment more localised – Contextualise the task to refer to local events, places, organisations or people, and use recent (post-2021) sources and examples. [Update: Some generative AI tools can now integrate live web search results into their output].
  • Make assessment tasks unit-specific – Students are explicitly asked to integrate sources, references, models, theories or material used in the unit in their assessment response.
  • Include non-text elements – Include non-text resources within the assessment prompt or question. Images, infographics, diagrams, flowcharts, models, videos or other multi-media elements are more difficult to submit as part of a query to a generative AI tool. However, caution is needed because these will present accessibility barriers to some students.
  • Create new questions – Avoid reusing case studies, questions or prompts from previous sessions and similarly, avoid asking generic questions about well-known events or entities as part of an assessment task.
  • Connected assessment tasks – Multi-phase assessment tasks (e.g. drafts, outlines, design briefs) with feedback given between stages allow educators to observe changes in student work. Check for consistency between phases and between the justifications given and the product produced, and compare student performance on the task to other tasks they have completed in the unit.

Update: additional resource – Further advice on reframing assessments in light of generative AI from the University of Melbourne's Centre for the Study of Higher Education.

Adjust the requirements for task responses

In addition to designing tasks with the first three items above, you can also require that students include these elements in their responses. That is, ask students to reference sources that are local, recent and unit-specific (e.g. via Leganto), or paywalled, refereed journal articles available via the MQ Library.

Requiring the inclusion of matching multimedia elements (images, video, designs or diagrams) may reduce the risk that a single AI tool can be used to produce a useful response.

Asking students to include live links in their reference list where applicable (e.g. a DOI URL) will also help markers check sources more quickly.
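As a hypothetical illustration of how markers might speed up that check, a short script could normalise students' DOIs into resolvable doi.org URLs and test that each one answers. The function names and the use of the doi.org resolver with a plain HTTP HEAD request are assumptions for the sketch, not an MQ-provided tool:

```python
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def doi_to_url(doi: str) -> str:
    """Turn a DOI in any common form (bare, 'doi:' prefixed, or full URL)
    into a canonical https://doi.org/ link."""
    doi = doi.strip().removeprefix("https://doi.org/").removeprefix("doi:")
    return f"https://doi.org/{doi}"

def link_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the link answers an HTTP HEAD request without error."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError):
        return False

# Normalise the varied forms a reference list might contain:
refs = ["10.1000/182", "doi:10.1000/182", "https://doi.org/10.1000/182"]
urls = [doi_to_url(r) for r in refs]  # all three become the same URL
```

A marker could then loop `link_resolves` over the normalised list and flag any entries that fail, which is also a quick screen for fabricated references.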

See it in action – You may also like to check out:

  1. MQ Examples: Some real-life case studies of how others at MQ are tweaking their assessments for 2023.
  2. The Top 5: We also evaluated the top 5 assessment types used at MQ for their level of resilience to AI generation and provided some specific mitigation strategies.

Beyond the assessment task itself, there are a number of scaffolding actions that educators can take to support assessment with integrity.

Communicate with students

This is new territory for both educators and students, so don’t be afraid to propose to students that we should explore and learn together. It is suggested that educators:

Discuss academic integrity in the context of AI with students

Use of AI content generators can be discussed in class or via iLearn forums. This can be framed in the context of academic integrity, research ethics and critical thinking in the discipline.  

Co-create rules with students

Consider jointly creating guidelines with students for the use (or otherwise) of AI tools for the assessment tasks in the unit. You could use this lesson plan as a starting point.

Clarify the rules

Specify the conditions under which each assessment task is to be completed. Include whether generative AI tools are permitted in completing tasks and, if so, to what extent AI content generators (and other tools, resources and sources) can be used for each assessment task. The more explicit the guidance, the better, so be sure to also clarify what is not permitted.

Outline ethical use of AI tools

Use of AI tools in learning and assessment is guided by MQ academic integrity values. It is standard academic practice to properly cite content or ideas that are not your own; the same applies when using AI-generated material. It is recommended that students:

Acknowledge AI sourced content

Ask students to cite where generative AI tools are used to generate ideas, direct quotes, or paraphrases.

Show their working

Consider using approaches that have students surface their thinking and the processes they used to arrive at their response.

Explore effective ways of using AI tools

Like all tools, generative AI tools have differing capabilities and limitations. Exploring these with students will help them become more AI literate. Some activities you can do with students include:

Critique an exemplar

A response produced by a generative AI tool can be used to frame a class critique (either the educator can prepare this in advance, or students may submit questions to the AI tool during the class – but note any limitations on the availability of suitable AI tools to students at that time). The critique may include fact-checking (correctness, misconceptions), analysis of the writing style, the structure of the argument, the manner in which ideas or facts are supported (or not, including checking for fake references), and whether there are any contradictions within the output. Guide students in exploring the strengths and weaknesses of AI-generated responses and the implications these have for the quality of their work, their learning, and the development of their critical thinking and evaluative judgement.

Hunt for references

Perhaps continuing the previous activity, students (in small groups or individually) can be asked to find quality references to support or contest the generated content. Have students search MQ Library Leganto reading lists or Library online databases for relevant journals or books. Highlight the need to seek reliable sources to support an argument, position or statement made within their work. Remind students that they are ultimately responsible for the work they produce and submit.

Write a good prompt

Also termed "prompt engineering", this is key to the effective use of generative AI tools. Have students discuss the characteristics of effective prompts and experiment with ways to produce higher-quality output from AI tools that is relevant to the task. Consider how different tools have different optimal prompt designs. Consider co-creating a set of guidelines or advice on how effective research strategies and prompts can be used for various tasks. A free online course on prompting is available.
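One way to make the characteristics of an effective prompt concrete for students is to separate it into named parts (a role for the tool, the task, relevant context, and the desired output format) and assemble them programmatically. The structure and part names below are illustrative assumptions, not a standard tied to any particular AI tool:

```python
def build_prompt(role: str, task: str, context: str = "",
                 output_format: str = "") -> str:
    """Assemble a structured prompt from named parts.
    Empty optional parts are simply omitted."""
    parts = [f"You are {role}.", f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Respond as: {output_format}")
    return "\n".join(parts)

# Contrast a vague prompt with a structured one for the same task:
vague = "write about photosynthesis"
structured = build_prompt(
    role="a biology tutor for first-year undergraduates",
    task="explain photosynthesis",
    context="the class has just covered cellular respiration",
    output_format="three short paragraphs followed by two revision questions",
)
```

In class, students could fill in each part for their own task and compare the outputs the two versions produce, which makes the effect of role, context and format constraints directly observable.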

Discuss social impacts and potential uses

Discuss the social impacts and potential uses of AI more broadly, including on employability. Teaching staff can lead discussion on how AI content generators are being used in professional and personal life. Students can also be asked to source or suggest examples. Examples from ABC News (24 Jan 2023) and a 2022 McKinsey report include:

  • Art and design – architectural design ideation, creating images and detailed artwork, music, scripts and poetry.
  • Marketing – crafting targeted marketing, social media content such as blog posts and tweets, property listings and event descriptions for ticket sales sites, and technical sales content (including text, images and video); customer assistants (chatbots) trained for specific businesses, such as retail, insurance or banking.
  • Operations – generating task lists for efficient execution of a given activity, or suggesting fixes for production problems.
  • IT – rapidly prototyping software code, reviewing code and writing supporting documentation.
  • Legal – summarising legal information, drafting contracts, and drafting and reviewing annual reports.
  • R&D – summarising scientific literature, accelerating drug discovery through better understanding of diseases, and creating novel protein sequences.

Share your experience

We welcome further suggestions for assessment changes or ways of working with students on the use of AI tools in the comments below. You can also contribute your ideas by emailing

See other posts in the Generative AI series.

Found an ‘AI generated’ academic integrity breach? See this advice on how to gather evidence and report it to MQ for investigation.

Acknowledgements: Banner image: Stable Diffusion generated artwork “Robot repairing a book” (28 Feb 2023). M. Hillier. CC0 1.0 Universal Public Domain.

Posted by Mathew Hillier

Mathew has been engaged by Macquarie University as an e-Assessment Academic in Residence and is available to answer questions from MQ staff. Mathew specialises in digital assessment (e-assessment) in higher education. He has held positions as an advisor and academic developer at the University of New South Wales, the University of Queensland, Monash University and the University of Adelaide. He has also held academic teaching roles in areas such as business information systems, multimedia arts and engineering project management. Mathew recently led a half-million-dollar Federal government funded grant on e-exams across ten university partners and is co-chair of the international 'Transforming Assessment' webinar series, the e-Assessment special interest group of the Australasian Society for Computers in Learning in Tertiary Education. He is also an honorary academic at the University of Canberra.
