Universities are being urged to guard against students using artificial intelligence to write essays after the emergence of a sophisticated chatbot capable of imitating academic work, sparking a debate about better ways to assess students in the future.
ChatGPT, a program developed by Microsoft-backed OpenAI that can form arguments and write convincing swaths of text, has sparked widespread concern that students will use it to cheat on written assignments.
Academics, higher education consultants, and cognitive scientists from around the world have proposed that universities develop new modes of assessment in response to the threat that AI poses to academic integrity. ChatGPT is a large language model that has been trained on millions of data points, including large sections of text and books. It generates convincing and coherent responses to questions by predicting the next plausible word in a sequence of words, but its answers are frequently incorrect and necessitate fact-checking.
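The next-word prediction described above can be illustrated with a toy bigram model. This is purely an illustrative sketch with a made-up miniature corpus; real systems such as ChatGPT are neural networks trained on vastly larger bodies of text, not simple word-frequency tables.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a real model is trained on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model:
# a drastically simplified stand-in for a large language model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the next word seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # prints "on": the most plausible continuation
```

Chaining such predictions word by word yields fluent-looking text, which also shows why the output can be confidently wrong: the model optimizes for plausibility given its training data, not for factual accuracy.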
Asked to produce a reading list on a specific topic, the program may invent bogus references. Jisc, a UK-based charity that advises higher education on technology, hosted a seminar this week for about 130 university representatives. A “war between plagiarism software and generative AI won’t help anyone,” they were told, and the technology could be used to improve writing and creativity. The widespread availability of this free tool has raised concerns about whether it renders essays obsolete or demands additional resources to mark submitted work.
Turnitin, plagiarism-detection software used by approximately 16,000 school systems worldwide, can detect some types of AI-assisted writing. According to Annie Chechitelli, Turnitin’s chief product officer, the company is developing a tool to assist educators in assessing work that has “traces” of it. Chechitelli also cautioned against an “arms race” in detecting cheaters and suggested that educators encourage human skills like critical thinking and editing.
Over-reliance on online tools may have an adverse effect on development or creativity. According to a 2020 study conducted by Rutgers University, students who Google answers to their homework receive lower grades on exams. “Students aren’t going to get automatic As by submitting AI-generated content; it’s more of a workhorse than Einstein,” said Kay Firth-Butterfield, head of artificial intelligence at the World Economic Forum in Davos, adding that the technology would improve rapidly.
Academics have expressed concern that education has been slow to adapt to these tools. “The education system as a whole is just waking up to this, [but it is] the same sort of issue as mobile phones in school. The reaction has been to ignore it, reject it, ban it, and then try to accommodate it,” said Mike Sharples, emeritus professor at the Open University and author of Story Machines: How Computers Have Become Creative Writers.
The Future of Education
According to Charles Knight, a higher education consultant, shifting to more interactive assessments or reflective work could be costly and difficult for an already cash-strapped sector. “Part of the reason the written essay is so successful is economic,” he adds. “If you conduct [other] assessments, the cost and time required increase.”

Universities UK, which represents the sector, said it was keeping a close eye on the situation but was not actively working on it, while TEQSA, Australia’s independent higher education regulator, said institutions needed to define their rules clearly and communicate them to students.

“In many cases, learning is a process; it isn’t about the end result, and an essay isn’t useful in many jobs,” said Rebecca Mace, digital philosopher and educational researcher at UCL’s Institute of Education.