Do you have a collection of automatically graded quizzes on iLearn that ChatGPT can now complete effortlessly? Wondering about your next steps?
You have two options to consider:
Option 1: Run Quizzes in Tutorials
You can continue to run quizzes in your tutorials, as some of your colleagues did earlier in the year, to check student learning. However, there’s an alternative worth exploring (keep reading).
Option 2: Shift the Grading Focus
Instead of grading quizzes for accuracy, consider awarding students a participation mark for reflecting on them.
How does this work?
Ask students to identify the most challenging aspects of the quiz and explain how they plan to improve. Alternatively, they can explore why certain aspects were difficult for them.
These reflections can be shared in a Q&A forum, where students can see others’ submissions only after they’ve posted their own. Encourage students to comment on each other’s work.
What will students be graded on?
Simply completing a set number of these reflections, which iLearn can track automatically, can serve as a participation mark. While this may require some initial oversight to ensure quality (e.g., addressing low-quality contributions), the approach lets you keep using quizzes, build in a meaningful participation component, and, most importantly, promote student reflection and engagement in peer discussion.
Interested in what other teachers at MQ are doing with their assessment tasks? See this case study of tuning up an assessment in response to ChatGPT.
We’re looking for more examples!
Are you rethinking your assessments in response to new generative AI tools? If so, how? We’d love to share your thoughts or experiences with Teche readers. Just drop us an email at email@example.com or leave a comment under the post.
The more we share examples and ideas, the better! It’s a learning time for all of us!
Image credits: Image generated with MidJourney AI tool by the author (Dr Olga Kozar)