The Guardian - UK
Technology
Anna Fazackerley

AI makes plagiarism harder to detect, argue academics – in paper written by chatbot

Bristol University is among the institutions to have issued new guidance on how to detect the use of ChatGPT. Photograph: Adrian Sherratt/Alamy

An academic paper entitled Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT was published this month in an education journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”.

What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT.

“We wanted to show that ChatGPT is writing at a very high level,” said Prof Debby Cotton, director of academic practice at Plymouth Marjon University, who pretended to be the paper’s lead author. “This is an arms race,” she said. “The technology is improving very fast and it’s going to be difficult for universities to outrun it.”

Cotton, along with two colleagues from Plymouth University who also claimed to be co-authors, tipped off the editors of the journal Innovations in Education and Teaching International. But the four academics who peer-reviewed the paper assumed it had been written by the three scholars.

For years, universities have been trying to banish the plague of essay mills selling pre-written essays and other academic work to any students trying to cheat the system. But now academics suspect even the essay mills are using ChatGPT, and institutions admit they are racing to catch up with – and catch out – anyone passing off the popular chatbot’s work as their own.

The Observer has spoken to a number of universities that say they are planning to expel students who are caught using the software.

The peer-reviewed academic paper that was written by a chatbot appeared this month in the journal Innovations in Education and Teaching International. Photograph: Debby RE Cotton

Thomas Lancaster, a computer scientist and expert on contract cheating at Imperial College London, said many universities were “panicking”.

“If all we have in front of us is a written document, it is incredibly tough to prove it has been written by a machine, because the standard of writing is often good,” he said. “The use of English and quality of grammar is often better than from a student.”

Lancaster warned that GPT-4, the latest version of the AI model behind the chatbot, which was released last week, was said to be much better and capable of writing in a way that felt “more human”.

Nonetheless, he said academics could still look for clues that a student had used ChatGPT. Perhaps the biggest of these is that it does not properly understand academic referencing – a vital part of written university work – and often uses “suspect” references, or makes them up completely.

Cotton said that to ensure their academic paper hoodwinked the reviewers, the chatbot’s references had to be corrected and new ones added.

Lancaster thought that ChatGPT, which was created by the San Francisco-based tech company OpenAI, would “probably do a good job with earlier assignments” on a degree course, but warned it would let students down in the end. “As your course becomes more specialised, it will become much harder to outsource work to a machine,” he said. “I don’t think it could write your whole dissertation.”

Bristol University is one of a number of academic institutions to have issued new guidance for staff on how to detect that a student has used ChatGPT to cheat. This could lead to expulsion for repeat offenders.

Prof Kate Whittington, associate pro vice-chancellor at the university, said: “It’s not a case of one offence and you’re out. But we are very clear that we won’t accept cheating because we need to maintain standards.”

Prof Debby Cotton of Plymouth Marjon University highlighted the risks of AI chatbots helping students to cheat. Photograph: Karen Robinson/The Observer

She added: “If you cheat your way to a degree, you might get an initial job, but you won’t do well and your career won’t progress the way you want it to.”

Irene Glendinning, head of academic integrity at Coventry University, said: “We are redoubling our efforts to get the message out to students that if they use these tools to cheat, they can be withdrawn.”

Anyone caught would have to do training on appropriate use of AI. If they continued to cheat, the university would expel them. “My colleagues are already finding cases and dealing with them. We don’t know how many we are missing but we are picking up cases,” she said.

Glendinning urged academics to be alert to language that a student would not normally use. “If you can’t hear your student’s voice, that is a warning,” she said. Another is content with “lots of facts and little critique”.

She said that students who can’t spot the weaknesses in what the bot is producing may slip up. “In my subject of computer science, AI tools can generate code but it will often contain bugs,” she explained. “You can’t debug a computer program unless you understand the basics of programming.”
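Her point is easy to illustrate. The snippet below is a hypothetical sketch invented for illustration, not code from the article or from any chatbot: a function that looks plausible and runs without error, yet quietly returns the wrong answer. Spotting why requires exactly the basic grasp of programming Glendinning describes.

```python
# Hypothetical illustration (invented, not actual generated output):
# plausible-looking code with the kind of subtle bug AI tools often produce.

def average(scores):
    total = 0
    for i in range(len(scores) - 1):  # bug: the last score is never added
        total += scores[i]
    return total / len(scores)

print(average([70, 80, 90]))  # prints 50.0, not the expected 80.0

# The fix is trivial once spotted, but only if you understand iteration:
def average_fixed(scores):
    return sum(scores) / len(scores)

print(average_fixed([70, 80, 90]))  # prints 80.0
```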

With fees at £9,250 a year, students were only cheating themselves, said Glendinning. “They’re wasting their money and their time if they aren’t using university to learn.”
