If you’ve spent the summer taking a break from news and social media, you may have missed it – but for the rest of us trying to dodge the drip-fed details from Prince Harry’s tell-all memoir, news channels have been buzzing with the launch of ChatGPT, a new online application released in November by OpenAI. It generates text that is almost indistinguishable from writing produced by humans, and it is already clear that this technology will disrupt education, as well as other aspects of our lives.
Already, there is evidence that industry is looking to ChatGPT for expert advice, perhaps alongside or as an adjunct to seeking out research experts, and the technology is being actively deployed in creative endeavours. All this interest raises the questions: is ChatGPT really a tool that might revolutionise the way we synthesise and use knowledge, or is it, as Australian singer-songwriter Nick Cave has stated, ‘a grotesque mockery of what it is to be human’? And should schools and universities be worried?
Those of us in the education sector should be concerned if we see ChatGPT simply as something to fear and suppress; if we think creatively, we might just be able to turn it to our advantage. Left unmanaged, however, ChatGPT presents real risks to academic integrity, student wellbeing, staff workloads and a school or university’s reputation.
ChatGPT is one of several writing applications and bots that claim to be able to create text – others include Quillbot, Wordtune, Outwrite and Essayailab – but it appears to be more sophisticated than most. Its capabilities range from writing drafts based on existing text, generating summaries of articles and YouTube videos, identifying coding errors and solving maths problems, to generating course syllabi, assessment topics and rubrics, creating fake references and potentially even grading assessments. The ChatGPT innovation roadmap also includes the development of a new application that will allow it to create audio recordings that mimic the user’s own voice.
It does have some limitations though. As a text-based AI model, by its own admission ChatGPT doesn’t have the capability to provide real-time data or statistics, and its knowledge base currently only goes up to 2021.
Nonetheless, education discussion forums and mainstream media alike are awash with posts expressing concerns about the potentially negative impact of ChatGPT on how we currently teach and assess students, and the implications for academic integrity. While these are real concerns, we also need to consider its potential as a ‘transformer technology’ that could multiply our current abilities – improving writing and communication, say, in much the same way the calculator facilitated more accurate calculation – with real possibilities for supporting second-language students and those with communication and learning difficulties.
In the current education environment, where students are often more focused on the collection of marks than on the process of learning itself, it is inevitable that those returning to class at the start of 2023 will be turning to ChatGPT. And if the almost 600 million views on TikTok are anything to go by, New Zealand students will be well across this too. At universities we are already seeing requests to return to pre-digital pen-and-paper assessments, and some calls for in-person, on-campus supervised examinations. There is a real risk here that well-intentioned knee-jerk reactions to ChatGPT will result in a move away from authentic assessments towards more conservative and potentially old-fashioned teaching approaches. Simple prevention and mitigation measures will further increase teaching staff workloads. This is particularly apposite for those education providers who see digital and blended delivery modes driving greater access to higher education. Put simply, we don’t want to take a step backwards.
Unfortunately, there are no quick and easy solutions. Leaders in the education sector need to ensure that our courses are designed and delivered in ways that prioritise engaging and meaningful teacher-student and peer-to-peer interactions. Just as essential is thinking about how we support the development of ‘AI literacies’ within our curriculum, in the same way we support the development of other academic and digital literacies. What is clear is that we ignore this at our peril.
The proactive management of academic integrity in the context of increasing infringements and a firm commitment to educative rather than punitive approaches has been a topic of discussion for several years, but we must now tackle this head on.
Reviewing how we conduct assessment, developing educative resources for staff and students, and prioritising student and staff education rather than implementing ‘blanket bans’ and adopting punitive approaches focused on ‘catching’ users of ChatGPT will be key here too. While there are apps already on the market which claim to be able to detect the use of AI tools, and Turnitin is working on a similar service, this approach is likely to become something akin to an arms race – one in which we will always be a step behind. Above all, we need to ensure that our curricula remain focused on good assessment design and on the critical thinking and academic skills that will make students less likely to resort to AI.
Professor Giselle Byrnes is Provost at Te Kunenga ki Pūrehuroa Massey University.