OpenAI, the company behind ChatGPT, has released an online tool intended to determine whether a piece of text was generated by AI or written by a human. The AI Text Classifier, itself powered by a language model, rates the likelihood that a text is AI-generated on a five-point scale running from “very unlikely” to “likely,” with intermediate verdicts such as “unclear.”
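To give a rough sense of how such a five-point verdict could be produced, the sketch below maps a probability score to labels of this kind. The threshold values and the function name are illustrative assumptions for demonstration only, not OpenAI’s documented cutoffs or API.

```python
# Illustrative sketch only: maps a hypothetical "probability the text is
# AI-generated" score to a five-point verdict like the classifier's.
# The thresholds below are assumptions, not OpenAI's published cutoffs.

def classify_likelihood(ai_probability: float) -> str:
    """Map a score in [0, 1] to one of five likelihood labels."""
    if ai_probability < 0.10:
        return "very unlikely to be AI-generated"
    if ai_probability < 0.45:
        return "unlikely to be AI-generated"
    if ai_probability < 0.90:
        return "unclear if it is AI-generated"
    if ai_probability < 0.98:
        return "possibly AI-generated"
    return "likely AI-generated"


if __name__ == "__main__":
    for score in (0.05, 0.30, 0.70, 0.95, 0.99):
        print(f"{score:.2f} -> {classify_likelihood(score)}")
```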
The tool arrives as schools and universities in France, India, and the US have already taken the precautionary step of banning students from accessing the ChatGPT chatbot or submitting AI-generated essays.
However, the classifier’s predictions can be incorrect, which raises questions about its reliability. OpenAI acknowledges these limitations, stating that the AI Text Classifier is meant to spark a conversation about distinguishing human-written from AI-generated content and should not be relied upon on its own. The tool was trained on a mix of human-written and AI-generated text from various sources, yet that mix may not represent all forms of human-written text.
Adding to the uncertainty, the AI Text Classifier may not prove useful for teachers seeking to verify whether AI generated a student’s assignment. The classifier is not sensitive enough to flag text that is only partly machine-written, so text generated by a model and then edited by a human can evade detection. Meanwhile, OpenAI continues to investigate alternative approaches, such as watermarking techniques, for detecting AI-generated text.