“LLMs are continuously evolving, which could lead to an ‘arms race’ between chatbots that can write like humans and AI detection software that can catch them,” says Nigel.


In May this year, we covered a story in which ChatGPT invented a sexual harassment lawsuit, causing considerable trouble for the real person it named as the accused. It was a wake-up call for many, who began to realize that ChatGPT freely invents facts and sometimes even fabricates scientific papers and news articles to back up the bogus stories it creates out of thin air.

Heather Desaire, a chemistry professor at the University of Kansas, has likened AI writing to “the game of two truths and a lie,” while Princeton University computer science professor Arvind Narayanan has called ChatGPT a “bullshit generator.” This is a serious problem, especially for people who use ChatGPT for work.

ChatGPT’s inclination to tell tall tales can lead to wild goose chases that waste time and money, not to mention reputations, as in the case of Jonathan Turley, the victim of the fake sexual harassment lawsuit mentioned earlier. Another problem is that AI-fabricated scientific papers mimic human writing so well that it’s difficult to tell an AI-generated paper from one actually written by a person. This is quickly becoming a nightmare for the education sector: teachers and lecturers have begun complaining that students are using ChatGPT to write papers and assignments, which is not only plagiarism (since ChatGPT assembles text from many sources) but also cheating.

Image Credit: Shutterstock

Cheating with ChatGPT

One such incident occurred at Texas A&M University-Commerce, where an instructor put about half a class’s diplomas on hold pending further investigation after ChatGPT claimed it was the author of their work. That’s right: he used ChatGPT to check whether the essays were written by ChatGPT, which is ironic, given that ChatGPT is quickly becoming known for disseminating fictitious information.

That said, we can’t call ChatGPT a liar either, because that would imply dishonesty; it is doing exactly what it was designed to do, which is produce text that reads like a human wrote it. While at least one student has admitted to using ChatGPT to cheat, some have maintained that their work is their own, and others have opted to rewrite their essays.

University officials have stated that no one failed the class, though they are looking into ways to monitor and control the use of AI tools like ChatGPT in the classroom. As for asking ChatGPT whether it wrote something, Matt Novak, a senior contributor at Forbes, has written an interesting post on how he tried exactly that and why it doesn’t work.

In fact, ChatGPT ended up taking credit for a paper Novak had written himself. What’s funny is that, not long ago, everyone was complaining about how earlier large language models (LLMs) sounded robotic rather than human. Now we are slowly beginning to realize how important it is to be able to tell the difference.

Image Credit: Shutterstock

A Ray of Hope for the Humans

So if ChatGPT can’t tell you whether it wrote something without “lying,” is that it? Not really; human beings take plagiarism pretty seriously, and a collective effort is being made to find a way to distinguish between AI-generated and human-written text. In April this year, Turnitin, a plagiarism detection tool, demoed new machine learning software that it claims can detect computer-generated text. The catch, however, is that the tool only flags content as AI-generated if it is “98% confident.” Moreover, when the Washington Post put the tool to the test, they found not only that it could easily be fooled if the writing was part human and part ChatGPT, but also that it falsely flagged innocent students as cheaters.
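To make that confidence threshold concrete, here is a toy sketch of how threshold-based flagging works. The scores below are invented for illustration and have nothing to do with Turnitin’s actual model:

```python
# Hypothetical detector scores: the model's estimated probability that a
# piece of text is AI-generated. These numbers are invented for this example.
scores = {"essay_a": 0.99, "essay_b": 0.75, "essay_c": 0.983}

THRESHOLD = 0.98  # only flag text when the model is at least 98% confident

for essay, p in scores.items():
    verdict = "flagged as AI-generated" if p >= THRESHOLD else "not flagged"
    print(f"{essay}: score={p:.3f} -> {verdict}")
```

A high bar like this reduces false alarms, but as the Washington Post found, it neither eliminates them nor catches text that mixes human and AI writing.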

While even OpenAI, the creator of ChatGPT, has warned that its own tool, the AI Text Classifier, correctly identifies AI-generated text only about 26% of the time, there may still be a light at the end of the tunnel. A team of researchers at the University of Kansas claimed last month to have trained a machine learning algorithm that detects ChatGPT content with a 99% detection rate. The team, led by Heather Desaire, the chemistry professor quoted earlier, has provided a proof of concept that it is possible to use AI to detect AI. They attribute the algorithm’s success to its focus on stylistic differences between human and AI writing; humans, for example, tend to use more punctuation and more “equivocal” language.
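For the technically curious, here is a minimal sketch of what a stylistic detector along those lines might look like. The specific features (punctuation density, hedging words, sentence-length variance) and the tiny toy corpus are our own illustrative assumptions, not the Kansas team’s published method:

```python
# Minimal sketch of a stylistic AI-text detector. Features and training
# data are illustrative assumptions, not the Kansas team's actual method.
import re
from sklearn.linear_model import LogisticRegression

HEDGES = {"however", "although", "but", "perhaps", "arguably", "yet"}

def stylistic_features(text: str) -> list[float]:
    """Punctuation density, hedging-word rate, and sentence-length variance."""
    words = text.lower().split()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = max(len(words), 1)
    punct = sum(text.count(c) for c in ";:,()") / n
    hedging = sum(w.strip(".,;:") in HEDGES for w in words) / n
    lengths = [len(s.split()) for s in sentences] or [0]
    mean = sum(lengths) / len(lengths)
    variance = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return [punct, hedging, variance]

# Tiny toy corpus (far too small for real use; labels: 1 = human, 0 = AI).
human = [
    "Well, the results surprised us; although we expected drift, the model held up.",
    "But here's the thing: nobody checked, and, frankly, that was the mistake.",
]
ai = [
    "The model performs well on the benchmark. It achieves high accuracy. It is robust.",
    "AI detection is important. It helps educators. It supports academic integrity.",
]

X = [stylistic_features(t) for t in human + ai]
y = [1] * len(human) + [0] * len(ai)

clf = LogisticRegression().fit(X, y)
print(clf.predict([stylistic_features("However, the data was messy; we tried anyway.")]))
```

In a real system the feature set, training corpus, and classifier would all be far richer; the point is only that surface style carries a detectable signal.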

Image Credit: Shutterstock

AI vs AI

Though the initial dataset was fairly small (about 1,200 paragraphs for training and another 1,200 for testing), a 99% detection rate is promising. The problem, however, is that LLMs like ChatGPT are continuously learning, evolving, and improving, which could lead to an “arms race” between chatbots that can write like humans and AI detection software that can catch them: AI vs AI.


With a background in Linux system administration, Nigel Pereira began his career with Symantec Antivirus Tech Support. He has now been a technology journalist for over six years, and his interests lie in Cloud Computing, DevOps, AI, and enterprise technologies.
