I was thrilled when Lauren Gellatly, a teaching academic from MQBS who has taught a diverse range of units — from Social Marketing, Marketing Management, and Brand Management to Human Resource Management and Social Entrepreneurship — agreed to share her experience of test-driving ChatGPT for teaching this semester. Let’s hear from Lauren! Enjoy!
I have to admit, I was concerned about ChatGPT when it first exploded in popularity early this year. I saw ChatGPT (the most widely discussed generative AI tool) as something with serious potential for misuse by students. For those of you who aren’t familiar, generative AI is basically a computer ‘program’ that has been trained on enormous amounts of data. It analyses that data and spots patterns to ‘learn’ how to create original content. This content might be words, music, video, images… you are limited almost only by your imagination. In short, it is capable of generating a response to almost any question we, or students, feed it.
Like it or not, AI is here, and students have access to it. With this in mind, I did what any responsible educator would do: I set out to see how much fun I could have with it. I asked ChatGPT to write birthday messages in the tone of celebrities who have particularly identifiable ways of speaking, and found it was capable of generating quite amusing and readily identifiable renditions of everyone from polarising US politicians to rappers. These were a hit with friends.
Having considered my concerns regarding students potentially misusing the tool and explored frivolous uses, I started to wonder how this new tool could be leveraged in the classroom and in class preparation. The first step was a deep dive into how to get the best results when using this new technology. Crafting these requests has come to be known as ‘prompt engineering’. Having seen prompt engineering positions advertised at over $300 000 p.a., I briefly considered the financial benefits of a career change before my love of teaching won out and I continued my self-education, attempting to unlock what makes a ‘good’ prompt. This could be a whole post of its own, even by someone with a relatively novice level of understanding.
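In very broad strokes, the prompts that worked best for me spelled out a role, a task, any constraints, and the output format I wanted. Something along these lines, purely as an illustrative sketch rather than a magic formula (and not the exact wording of any prompt I used):

```
You are a university lecturer preparing a first-year entrepreneurship tutorial.
Task: write three short discussion questions about pricing strategy.
Constraints: suitable for students with no finance background; avoid jargon.
Format: a numbered list, one sentence per question.
```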
Satisfied that I had the basics down, I set about assessing AI’s capabilities for making me more efficient and improving students’ learning experience.
Assessment 1: Generating statistics
The results of my first serious experiment were less than impressive. I was updating some lectures and asked ChatGPT to provide me with some up-to-date statistics on entrepreneurship in Australia. What followed was a comprehensive list. I then asked to have citations included, with URLs, and was provided with these. I was, momentarily, thrilled. The only problem? The statistics were completely fabricated, and none of the URLs were valid. An important lesson was learned, and shared with my students: AI-generated output can’t be assumed to be correct.
Grade: Fail
Although it was a Fail for generating statistics and sources this time, it provided a High Distinction-level example to warn my students of its shortcomings.
Surely, though, something with this much hype must be useful somehow. While I’m sure many have used it to generate fantastic results with reliable information, I went the opposite way: I asked ChatGPT to create things that were intentionally incorrect.
Assessment 2: Financials for entrepreneurs
One area my first-year entrepreneurship students seem to struggle with is grasping financial data. They are required to generate their own financial projections, but before doing so it helps them to look at examples with a critical eye.
To test ChatGPT’s capabilities and challenge my students’ analytical skills, I asked it to draft financial reports with intentional mistakes and areas for improvement. The resulting output included a balance sheet, cash flow statement and profit and loss statement for a fictitious small jewellery business.
Importantly, ChatGPT was also able to generate a list of the inaccuracies to which teaching staff could refer. I checked the validity of this list, performed my own examination of the financials generated, and was pleased with the accuracy (or inaccuracy!) of the output.
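For anyone wanting to try something similar, a prompt along these lines should do the job (again, an illustrative sketch rather than my exact wording):

```
Create a balance sheet, cash flow statement and profit and loss statement
for a fictitious small jewellery business.
Deliberately include five or six realistic errors that a first-year
student could reasonably be expected to find (for example, a balance
sheet that doesn't balance, or expenses that don't sum correctly).
Then, in a separate section, list every error and its correction so that
teaching staff have an answer key.
```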
Following a discussion about AI-generated material, its ethics and its applications, these reports were provided to students, who were asked to identify and correct the errors. This hands-on approach not only engaged students but also sharpened their financial acumen, making the learning experience more immersive.
Grade: Distinction
Assessment 3: Example reference lists
I knew ChatGPT had provided fictitious references when I asked for real ones, so I wondered what would happen if I asked it to intentionally generate fake references. Why? My reasoning was twofold:
- I wanted to see what this looked like so I could potentially identify it in student work. Well before ChatGPT, I had some unfortunate students cite ‘Fayk & Maydup’ and ‘Notreal’, so I am fully aware that it is a matter of ‘when’ I see AI-generated references, not ‘if’.
- I wanted to work on referencing and citation with students, but I didn’t want to use sources they could simply reference and cite by copying and pasting from the internet. Such copied citations, at times, contain inaccuracies, and I believe being able to accurately cite their sources is an important skill for students.
I prompted ChatGPT to create a fake list of citations for business students, including ‘humorous’ names and article titles published in sources such as the “Journal of Bogus Research”. I was particularly impressed with its initial response: it pushed back, cautioning me that fabricating references would be problematic academic practice.
Unfortunately, this objection was very easy to get around.
After requesting them in APA format, I was provided with a serviceable list, which I then refined by asking it to randomly allocate years, page numbers and other bibliographic information.
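The refined request looked something like this (once more, an illustrative sketch rather than my exact wording):

```
Generate a list of ten fake references for business students, with
humorous author names and article titles, published in outlets such as
the "Journal of Bogus Research".
Format the list in APA style, and randomly vary the publication years,
volume and issue numbers, and page ranges across the entries.
```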
At present, this has only been used in a YouTube short that I made, but I can see its application for my first-year students next year.
Grade: High Distinction
I love that it attempted to talk me out of engaging in what it thought were problematic academic practices, yet the list did end up being completed. I wasn’t thrilled that I could so easily get it to put ‘ethics’ aside, but at the end of the day I’m dealing with a machine and can’t expect too much.
At this point I remembered that ChatGPT is only one of many versions of generative AI, and decided I should explore other options.
Assessment 4: B-roll for lectures
After some intense Googling, I found my next candidate for assessment, invideo.
My goal was to create AI-generated B-roll/footage for use in my lectures. This was to be video-only, and purely for visual appeal rather than instructive purposes. I wanted to present a case study on Ford’s pricing strategies, and thought the students might appreciate a reprieve from staring at my face on video. I logged on to invideo.io and prompted the AI there to craft a video around a script I had written, reasoning that providing a script would give it more detail to work with regarding appropriate visuals, and would ensure the video was approximately the right length to match my voiceover. While inputting the prompt was simple and fast, generating the video was painfully slow. I ended up letting the process run overnight. Admittedly, I was using the free version, and invideo do advertise that paid users get faster processing times. This is more than fair!
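For context, my instruction to the tool was along these lines (a rough sketch, not the exact prompt):

```
Create a short, video-only sequence to match the following script about
Ford's pricing strategies. Choose stock footage of cars and car
manufacturing that fits the script, and keep the video roughly the same
length as the script read aloud.

[script pasted here]
```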
The output generated was reasonable: a string of stock footage clips related to cars. Some clips had watermarks on them (removable with a paid account), but these were easily replaced by simply prompting the software to remove all clips from X source and replace them with clips from elsewhere. It also included a voiceover of the script and some rather ominous background music.
Attempt two saw me make a few changes simply by typing feedback into a text box; the aforementioned ‘clip swap’ took place, and I asked for the background music to be less dramatic. Generating version 2.0 was significantly faster, and the requested changes were implemented. Doing this manually would have taken me half an hour or more.
For my third and final version I requested that the voiceover be removed so that I could narrate the video myself. The final product was easy to download and ready for use. This was significantly less labour-intensive than finding the clips myself and editing them together.
Grade: Strong Credit
But with additional experimentation and refinement of prompts, I can see this grade being higher. A paid version would also help here, but that is not a failure on the part of the AI!
Edit: I have since attempted another video, and processing time was much faster.
Updated grade: Strong Distinction
Overall results (so far)
I encourage my students to be creative, to experiment, and to be critical in their thinking. My limited forays into AI’s applications for academics have seen me do all three. It’s far from perfect, and I certainly won’t be relying on AI to write for me*, but for simple tasks like the ones mentioned above, my experimentation has uncovered some serious time-saving potential.
I’m looking forward to spending some time over the summer months further exploring the potential of various forms of generative AI for enhancing my students’ learning experience and increasing my productivity.
Overall grade: Solid Credit
*Silly birthday messages excluded!
Image credits: Header image generated with MidJourney AI tool by Olga Kozar