• CthulhuOnIce@sh.itjust.works · 1 year ago

    I really, really doubt this. OpenAI recently said that AI detectors are pretty much impossible, and the article literally uses the wrong name to refer to a different AI detector.

    Especially since you can change ChatGPT's style just by asking it to write more casually, "stylometrics" seems like an improbable detection method as well.
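
    For what it's worth, here's a toy sketch (Python, everything illustrative, not any real detector's code) of the kind of surface features stylometric detectors lean on, and why a simple "write casually" prompt shifts them:

    ```python
    # Toy stylometric features of the kind detectors often use
    # (illustrative only -- real detectors use many more signals).
    import re

    def stylometric_features(text: str) -> dict:
        """Compute a few simple style statistics for a text."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text.lower())
        return {
            "avg_sentence_len": len(words) / max(len(sentences), 1),
            "type_token_ratio": len(set(words)) / max(len(words), 1),
            "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        }

    # The point: ask the model to "write casually" and these numbers shift,
    # so a classifier trained on default ChatGPT style can be steered around.
    formal = "The results demonstrate a statistically significant improvement."
    casual = "Honestly? It just works way better. Like, a lot better."
    print(stylometric_features(formal))
    print(stylometric_features(casual))
    ```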

    • Fredthefishlord@lemmy.blahaj.zone · 1 year ago

      It’s in OpenAI’s best interest to say detectors are impossible. Regardless of whether that’s actually true, they’re the least trustworthy possible source to rely on when forming your understanding of this.

      • CthulhuOnIce@sh.itjust.works · 1 year ago

        OpenAI had their own AI detector, so I don’t really think it’s in their best interest to say that an effective version of their own product is impossible.

  • simple@lemm.ee · 1 year ago

    Willing to bet it also constantly catches non-AI text and calls it AI-generated.

    • floofloof@lemmy.ca (OP) · edited · 1 year ago

      The original paper does report figures for misclassified paragraphs of human-written text, which would be false positives. Those numbers are actually higher than for misclassified paragraphs of AI-written text.
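
      As a rough illustration of why even a modest false-positive rate swamps the results when most text is human-written (all numbers made up, not the paper's):

      ```python
      # Base-rate arithmetic for a detector's flagged essays
      # (made-up numbers, purely illustrative).
      total = 1000          # essays checked
      ai_share = 0.10       # assume 10% are actually AI-written
      tpr = 0.90            # detector catches 90% of AI text
      fpr = 0.08            # but also flags 8% of human text

      ai_texts = total * ai_share
      human_texts = total - ai_texts

      true_positives = ai_texts * tpr        # 90
      false_positives = human_texts * fpr    # 72

      precision = true_positives / (true_positives + false_positives)
      print(f"Flagged as AI: {true_positives + false_positives:.0f}")
      print(f"Chance a flagged essay is actually AI: {precision:.0%}")  # ~56%
      ```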

    • snooggums@kbin.social · 1 year ago

      The best part is that if AI does a good job of summarizing, then anyone who is good at summarizing will look like AI. If AI news articles read like a human wrote them, then human-written news articles will read like AI.