• Chewy@discuss.tchncs.de
        2 months ago

I’ve noticed these language models don’t work well on articles with dense information and complex sentence structure. Sometimes they miss the most important point.

They’re useful as a TLDR but shouldn’t be taken as fact, at least not yet, and probably not for the foreseeable future.

A bit off topic, but I read a comment in another community where someone asked ChatGPT something and confidently posted the answer. Problem: the answer was wrong. That’s why it’s so important to mark LLM-generated text (which the TLDR bots do).