Texas A&M University–Commerce professor Jared Mumm had a plan to catch students using ChatGPT for their term papers: ask the artificial intelligence system itself to rat out the students who had requested its services. He copied and pasted the submitted essays into the chat box, and the machine replied that more than half of the texts were its own work, so the professor did not hesitate to suspend the alleged cheaters. But the students had not cheated; they had been falsely accused.
As a person familiar with the matter explained to Rolling Stone, the students accused of cheating were temporarily denied their diplomas. After receiving the "tip" from ChatGPT, Mumm sent an email on Monday informing the group of students that he had changed the grades for the last three essays of the semester: several of them received an "X" in the course because, he argued, he had checked whether they had used the software to write the papers and the bot claimed to have authored each one.
[Photo caption: ChatGPT is unreliable for checking whether a text is AI-generated. Michael Dwyer / LaPresse]
ChatGPT can be a powerful tool in the hands of teachers, but only if it is used properly. "I will copy and paste your responses into it and it will tell me if it generated the content," Mumm warned the class, not realizing that ChatGPT does not actually work that way. What he apparently did not know is that the chatbot is not built to detect AI-composed material, not even text it produced itself, and it is known to sometimes confidently output misinformation.
To demonstrate how unreliable ChatGPT is at identifying whether an AI authored a piece of writing, Reddit user Delicious_Village112 ran an experiment: they asked the chatbot whether Professor Mumm's own doctoral dissertation was the product of artificial intelligence, and sure enough, ChatGPT asserted that "the text contains several characteristics that are consistent with AI-generated content."
Fortunately for the students, the university has taken up the matter and is "investigating the incident and developing policies to address the use or misuse of AI technology in the classroom." The students will now meet individually with Mumm to resolve the fallout from the AI blunder that nearly cost them their course.