I really doubt this. OpenAI recently said that AI detectors are pretty much impossible. And the article literally uses the wrong name, referring to a different AI detector.
Especially since you can change ChatGPT’s style just by asking it to write more casually, “stylometrics” seems like an improbable method for detecting AI as well.
It’s in OpenAI’s best interest to say detectors are impossible. Regardless of whether that’s actually true, they’re the least trustworthy possible source to rely on when forming your understanding of this.
OpenAI had their own AI detector, so I don’t really think it’s in their best interest to say that their own product can’t work.
Willing to bet it also constantly catches non-AI text and calls it AI-generated.
The original paper does report figures for misclassified paragraphs of human-written text, which would be false positives. Those rates are higher than the rates for misclassified paragraphs of AI-written text.
The best part is that if AI does a good job of summarizing, then anyone who is good at summarizing will look like AI. If AI news articles read like a human wrote them, then human-written news articles will look like AI.