On Thursday, OpenAI quietly pulled its AI Classifier, an experimental tool designed to detect AI-written text. The decommissioning, first noticed by Decrypt, occurred with no major fanfare and was announced through a small note added to OpenAI's official AI Classifier webpage:
As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.
Released on January 31 amid clamor from educators about students potentially using ChatGPT to write essays and schoolwork, OpenAI's AI Classifier always felt like a performative Band-Aid on a deep wound. From the beginning, OpenAI admitted that its AI Classifier was not "fully reliable," correctly identifying only 26 percent of AI-written text as "likely AI-written" and incorrectly labeling human-written text as AI-written 9 percent of the time.
As we've pointed out on Ars, AI writing detectors such as OpenAI's AI Classifier, Turnitin, and GPTZero simply don't work with enough accuracy to rely on them for trustworthy results. The methodology behind how they work is speculative and unproven, and the tools are currently routinely used to falsely accuse students of cheating.
Humans can write like AI models, and AI models can write like humans if properly prompted. Often, all it takes to evade AI detectors is to simply ask ChatGPT to write in the style of a known author. But this hasn't stopped a small industry of commercial AI detectors from sprouting up over the past six months.
"If OpenAI can't get its AI detection tool to work, nobody else can either," tweeted AI writer and futurist Daniel Jeffries. "I've said before that AI detection tools are snake oil sold to people and this is just further proof that they are. Don't trust them. They're nonsense."
These statements have so far been backed up by recent studies (Sadasivan et al., 2023) and testimonials from educators who often find that their own human-written work is flagged as AI-composed. Additionally, AI writing detectors have been found to unfairly punish non-native English writers and possibly neurodivergent writers.
Research is still underway to determine if AI-generated text can be watermarked (by purposely manipulating the frequency of words in an AI-generated output), but the study cited above shows that text watermarking can easily be defeated by AI models that paraphrase the output.
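To make the idea concrete, here is a minimal sketch of how a statistical text watermark of this kind can work, in the spirit of published "greenlist" schemes: generation is nudged toward a pseudorandom subset of the vocabulary, and a detector checks whether that subset shows up more often than chance. Everything here (the toy vocabulary, the parameters, the function names) is illustrative and hypothetical, not OpenAI's or any vendor's actual technique.

```python
import hashlib
import random

# Illustrative sketch of frequency-based text watermarking (hypothetical,
# not any production system). The generator biases word choice toward a
# "green" subset of the vocabulary that is recomputable from context; the
# detector measures how often green words appear.

VOCAB = ["the", "a", "quick", "lazy", "fox", "dog", "jumps", "sleeps",
         "over", "under", "brown", "gray", "river", "fence"]
GREEN_FRACTION = 0.5   # portion of vocabulary treated as "green" per step
BIAS = 4.0             # how strongly generation favors green words

def green_words(prev_word: str) -> set:
    """Deterministically pick a 'green' subset of the vocabulary, seeded by
    the previous word, so a detector can recompute it without the model."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))

def generate(length: int = 40) -> list:
    """Toy 'model': samples words at random but up-weights green words."""
    rng = random.Random(0)
    words = ["the"]
    for _ in range(length):
        greens = green_words(words[-1])
        weights = [BIAS if w in greens else 1.0 for w in VOCAB]
        words.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return words

def green_rate(words: list) -> float:
    """Detector: fraction of words falling in the green set for their context.
    Unwatermarked text hovers near GREEN_FRACTION; watermarked text runs higher."""
    hits = sum(1 for prev, w in zip(words, words[1:]) if w in green_words(prev))
    return hits / max(len(words) - 1, 1)

if __name__ == "__main__":
    marked = generate()
    unmarked = ["the"] + random.Random(1).choices(VOCAB, k=40)
    print(f"watermarked green rate:   {green_rate(marked):.2f}")
    print(f"unwatermarked green rate: {green_rate(unmarked):.2f}")
```

The weakness the cited study exploits is visible even in this toy: the signal lives entirely in which specific words follow which, so running the output through a paraphrasing model rewrites the word sequence and washes the green-word statistic back toward chance.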
For now, it seems that AI writing is here to stay. Going forward, AI-augmented text will likely flow among the great works of humankind undetectably if deployed with skill. It may be time to look beyond how text is composed and instead ensure that it properly represents what a particular human wants to say, which is the point of all effective communication.