Navigating the rapidly changing world of generative artificial intelligence is like trying to escape quicksand; the more you resist, the more you find yourself immersed. And let’s face it, artificial intelligence tools are thoroughly embedded in our lives, from online shopping and customer service chatbots through to the algorithms driving social media feeds. Those of us in the higher education sector are now looking at how to responsibly leverage this technology in research and teaching, while supporting students to use these tools, where appropriate, in ways that uphold academic integrity.
Generative artificial intelligence refers to tools that generate content in response to human prompts. Typically, they use complex machine learning models trained on large data sets to produce responses tailored to new prompts and contexts. OpenAI’s GPT-4o, along with Meta AI and Google Gemini, signal a step change in sophistication, given their ability to interpret images and adapt their writing style to the task at hand, among other functions.
While we don’t yet have our own university sector-wide principles here in Aotearoa New Zealand, the general response of higher education providers has been to encourage teachers not to resist the technology, but to embrace transparency in discussions with students around the ethics of using such tools. In short, we’ve moved from fearing that generative artificial intelligence will encourage students to cheat – a risk best managed by rethinking the purpose and design of assessment – to exploring how these tools can be used to positively enhance learning, teaching and research.
Te Kunenga ki Pūrehuroa Massey University’s Academic Board recently approved new guidelines for staff on using generative artificial intelligence. Seeking to strike a balance between enabling staff to innovate with new technologies and ensuring their ethical and responsible use, these guidelines are underpinned by two key principles: first, the university’s commitment to ensuring quality and rigour in our teaching, research and related work; and second, upholding academic integrity, which means acting with honesty, responsibility and openness.
At Massey, we want to encourage the informed and considered use of appropriate generative artificial intelligence tools in our work. These tools have the potential to improve accessibility, further personalise learning and stimulate creativity – values that we hold dear. Generative artificial intelligence also has the potential to support research by fostering innovation and new ideas while accelerating the discovery process. It can also enhance our efficiency, improve workflows and simplify data management.
The guidelines also advise staff of the sensitivities and risks associated with using generative artificial intelligence tools. We recognise that alongside the opportunities these tools afford, there are real risks that need to be mitigated, including issues of ownership, authorship and reliability. Concerns around Māori data sovereignty are particularly pertinent for Massey, a university with aspirations to be Te Tiriti-led.
So, how might we use tools like ChatGPT in ways that are ethical and responsible?
In some instances, the answer is simple; in others, it requires deeper thinking. With the right prompts, generative artificial intelligence can today be used to write an entire scientific article or respond to an assessment task. Taking credit for this work is a clear breach of academic integrity.
Used ethically, however, generative artificial intelligence can advance research, teaching and the experience of studying. Fundamentally, it is a tool that can help us use our time, energy and intelligence more efficiently. Think about all the repetitive tasks that humans are not good at, such as referencing. Reference management tools such as Zotero and Mendeley have already enhanced our productivity when writing scientific outputs and assessments; generative artificial intelligence can accelerate this process and take it to the next level.
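To give a flavour of what this might look like in practice, here is a minimal sketch in Python. It is ours alone: the client library, the model name and the deliberately messy citation are all assumptions for illustration, not an endorsement of any particular product.

```python
# A minimal sketch: asking a generative model to tidy a rough citation
# into a consistent referencing style. Assumes the official OpenAI
# Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; any comparable tool would do.
from openai import OpenAI

client = OpenAI()

# A deliberately messy, made-up citation used purely for illustration.
raw_citation = (
    "smith j & lee k 2021 assessment design in the age of ai, "
    "higher ed review vol 12 pp 45-67"
)

response = client.chat.completions.create(
    model="gpt-4o",  # the model named in this article; substitute freely
    messages=[{
        "role": "user",
        "content": f"Reformat this citation in APA 7th edition style:\n{raw_citation}",
    }],
)

# As with any generated reference, check the output against the actual
# source before using it; these models are known to invent plausible details.
print(response.choices[0].message.content)
```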
Generative artificial intelligence can assist us with multiple tasks that do not require critical thinking. The tools can save us time fixing typos in our text and bugs in our code. They can be used, for example, to extract information from documents, such as the sample size of an experiment or statistical findings for a literature review. However, we still need human critical thinking to interpret the extracted data and explain what it tells us.
We do, of course, need to exercise caution when using generative artificial intelligence tools. How many times have you caught one lying, or providing a reference that looked legitimate but did not actually exist? These models are built to produce plausible-sounding answers based on the data on which they were trained. While we have no control over what these models were fed, we can verify the information they provide. For example, when asking a tool to extract data from a document, it is always worth asking it to quote the text from which the data was extracted.
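As a concrete illustration of that "extract, then quote your source" habit, here is another minimal sketch. Again, the client library, the model choice and the snippet of document text are all assumptions on our part; the point is simply that asking for the supporting quotation makes the output checkable by a human.

```python
# A minimal sketch of extraction with built-in verifiability: the prompt
# asks the model to quote, verbatim, the sentence each value came from.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY
# environment variable; a made-up document stands in for a real paper.
from openai import OpenAI

client = OpenAI()

document = (
    "Methods: the trial enrolled 248 participants across three sites. "
    "Participants were randomised to intervention or control conditions."
)

prompt = (
    "From the document below, extract the sample size of the experiment. "
    "For each value you extract, also quote verbatim the sentence it was "
    "taken from.\n\nDocument:\n" + document
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)

# The human step: a quoted sentence that does not appear verbatim in the
# source document is a red flag that the value may have been invented.
```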
So, we think that generative artificial intelligence is worth leveraging, provided we do so judiciously. And just in case you were wondering, no – no generative artificial intelligence tools were employed in writing this piece!
Professor Giselle Byrnes is Provost at Te Kunenga ki Pūrehuroa Massey University. Associate Professor Rino Lovreglio is a Royal Society Te Apārangi Rutherford Discovery Fellow and Associate Professor in Building Technology.