OpenAI: AI detectors and ChatGPT itself are unable to distinguish neural network text from human text

OpenAI has acknowledged that existing tools for detecting AI-generated text are ineffective and often lead to false positives. In an updated Frequently Asked Questions (FAQ) section, the company emphasizes that even ChatGPT itself is unable to accurately determine whether text was created by humans or machines. Experts are calling for a move away from automated AI detection tools and towards trusting human experience and intuition.

Image source: mozarttt / Pixabay

In the FAQ section on its website, under the heading “How can teachers respond to students passing off AI-generated work as their own?”, the company addresses whether AI detectors work: “In short, no. Although several companies (including OpenAI) have released tools to detect AI-generated content, none of them have been able to reliably distinguish AI-generated content from human-generated content.”

AI detectors like GPTZero often give false positive results because they are based on untested detection methods. Ultimately, there is nothing special about AI-generated texts that always distinguishes them from texts written by humans. In addition, the detectors can be bypassed by rewording the text. In July, OpenAI stopped supporting its experimental AI Text Classifier tool, which had a dismal accuracy rate of 26%.

The company also debunks another myth: that ChatGPT itself can detect whether a text is machine-generated. “Furthermore, ChatGPT has no ‘understanding’ of what content could be created by AI. It sometimes makes up answers to questions like ‘did you write this [essay]?’ or ‘could this have been written by an AI?’ These answers are random and have no basis.”

OpenAI also addresses its AI models’ tendency to report false information: “Sometimes ChatGPT sounds convincing, but it can provide false or misleading information (often referred to as a ‘hallucination’). It may even include made-up quotes or references. So don’t use it as your only research source.” For example, in May a New York lawyer was threatened with losing his license after using ChatGPT, which supplied fabricated information in response to his queries.

Even if AI detectors don’t work, that doesn’t mean a human can never recognize AI-generated text. A teacher who is familiar with a student’s writing style may notice if the student’s style or level of knowledge suddenly changes.

Additionally, some sloppy attempts to pass off AI-generated text as one’s own reveal tell-tale signs, such as the phrase “as an AI language model”, indicating that the person simply copied and pasted ChatGPT’s answer without even reading it. Recently, the journal Nature published an article about readers encountering the phrase “Regenerate response”, a button label from the ChatGPT interface, in published text.
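To illustrate just how crude such checks are, here is a minimal sketch of a phrase-based scan in Python. The phrase list is purely illustrative (an assumption for this example, not an official or exhaustive set), and a check like this catches only the sloppiest copy-paste cases; it says nothing about carefully edited AI text.

```python
# Minimal sketch: scan a text for copy-paste tell-tale phrases.
# The phrase list below is illustrative, not an official or complete set.
TELLTALE_PHRASES = [
    "as an ai language model",
    "regenerate response",
    "i cannot fulfill that request",
]

def find_telltales(text: str) -> list[str]:
    """Return any tell-tale phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

if __name__ == "__main__":
    sample = "As an AI language model, I cannot browse the internet."
    print(find_telltales(sample))  # ['as an ai language model']
```

A simple substring match like this is trivially defeated by deleting or rewording the offending phrase, which mirrors the article’s broader point about automated detection.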

Given the current state of the technology, it is safest to avoid fully automated tools for identifying AI-generated text. This view is shared by Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania and an AI analyst, who stresses that today’s AI detectors produce false positives at such a high rate that they simply should not be used.

Thus, the question of how to tell machine text from human text remains open, and the answer may lie in each individual’s intuition and professional experience.

About the author

Robbie Elmers

Robbie Elmers is a staff writer for Tech News Space, covering software, applications and services.
