To say that generative AI tools have tilted our world on its axis is an understatement. This new and rapidly developing technology has the potential to revolutionise how we work, teach and research – and yet, many of us are still grappling with what questions to ask. In this respect, students are, I suspect, way ahead of us. However, as a university community, we are not alone. A survey by the Higher Education Policy Institute found that 53% of undergraduates in the United Kingdom were using AI to generate essays. Meanwhile, a recent study by the University of Melbourne found that while nearly half (48%) of students had never used generative AI tools, around 10% had used them to produce content they then submitted for assessment. New data reveals the University of Sydney recorded a 1000% increase in serious academic cheating cases referred to the Registrar in the last two years alone.
Universities are working at pace to support a massive post-COVID-19 shift in student expectations, which includes greater flexibility in how they access and consume their learning, assessment and information. A recently published report commissioned by Studiosity indicated that most (59%) New Zealand students expect their universities to adopt AI-based support, with only 52% agreeing that their university is moving fast enough to provide those tools. Students’ reasons for using AI include confidence, speed of return, and avoiding being a burden on others.
If this all feels a bit overwhelming, I want to home in here on the use of AI in assessment. Specifically, I want to ask: is the use of AI in student assessment really as ubiquitous as we think? I don’t think so. Here at Massey we had approximately 85,000 assessment items submitted to Turnitin for AI writing detection in Semester 1 this year. Of those, 11,000 submissions (13%) had AI detection percentages between 21% and 100%, and 1,700 (2%) showed as being between 90% and 100% AI-generated. Turnitin’s own guidance states that there is a higher incidence of false positives when the percentage is between 1% and 20%, so we should disregard any submission that scores below 20%.
Early last year, Massey approved guidelines on generative AI for students and staff. This document took the position that we need to strike a balance between embracing ethical and responsible use of technology in teaching and learning, while ensuring academic integrity is upheld.
In what follows, I synthesise guidance written by our Teaching and Learning Enhancement Services and provided to staff on marking assessments where generative AI may have been used.
Here are my ‘top five tips’:
Make it clear in the assignment instructions whether or not you allow generative AI
Massey’s guidelines take the view that, because AI tools are now commonplace both within and beyond the university, including in workplaces, staff are encouraged to integrate the use of AI into teaching and learning and to provide students with clear guidance on the ethical and appropriate use of artificial intelligence in their learning experiences. But if, for academic reasons, AI use is not appropriate, then make this clear.
Use human judgement
Drawing on your own judgement is critical in determining whether AI has been used in an assessment. Bear in mind that no single factor offers ‘proof’, but the following should raise ‘red flags’:
- error-free writing
- writing that lacks a ‘personal voice’ or is very different from other examples of that student’s work
- generic writing that does not use the language of the discipline
- factual inaccuracies or irrelevant references
- changes in writing style within the assessment
- no connection to the frameworks or theories discussed as part of the course
- repetitive and disconnected paragraphs
Use Turnitin as a last resort
The university has access to Turnitin’s AI writing detection tool, but this should only be used in conjunction with human judgement (see above) and with an awareness of its limitations. It should not be treated as a definitive, stand-alone measure of misconduct. It is critical to note that Turnitin is tuned to pick up any form of AI writing – not just ‘ChatGPT’ output. This potentially includes output from a vast array of tools that can be described as ‘writing improvers’ or ‘writing assistants’, such as Grammarly’s ‘Rewrite with Grammarly’ and ‘Improve this’ features, and Quillbot’s ‘Paraphrase’ feature. Do not use other ‘detection’ tools unless they have been vetted by the university.
Talk with your students
In all cases, the principles of natural justice should be followed, which includes providing students with all the evidence considered by the course coordinator and offering them a fair opportunity to respond to it. Where it is necessary to communicate with a student under these circumstances (whether by email, virtually or face-to-face), it is important to take an approach that is fair and reasonable and that does not impede the resolution process. In this context, the following suggestions might be helpful:
- Have a plan for the conversation with the student/s and collect all relevant documentation that indicates why the work may have been produced by using unauthorised assistance.
- Make sure you are clear on what is in scope for the assessment (what is allowed and what is not).
- Be empathetic - focus on the documentation and avoid assumptions and hypotheticals.
- Consider possible solutions and next steps for the situation and help the student reflect on their performance.
- Where needed, direct the student to support resources and services, then explain the outcome and further process/requirements.
Seek advice
If you need help, contact your College Academic Integrity Officer (AIO) to discuss the seriousness of the suspected policy breach. Following discussion, your AIO will either manage the suspected breach or delegate this to you. If the management of the investigation is delegated to you, there are email templates available to initiate communication with the student. The student may resubmit their work for a capped mark, show evidence that the work is their own, or request a meeting with an AIO. Natural justice and the university’s commitment to an educative approach must apply.
I hope this helps and, just in case you were wondering – this was not written by generative AI.
Professor Giselle Byrnes is Provost at Te Kunenga ki Pūrehuroa Massey University.