• kaffiene@lemmy.world · 6 months ago

    It’s not intelligent; it’s producing output that is statistically appropriate for the prompt. The prompt included some text that looked like a copyright waiver.
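A toy sketch of the point being made (not a real LLM; the prompts and probabilities here are invented for illustration): a language model is just a conditional distribution over next tokens given the prompt, so adding waiver-like text to the prompt simply shifts which continuation is statistically likely.

```python
import random

# Hypothetical next-token distributions, conditioned on the prompt.
# The model has no notion of whether the waiver is legally valid;
# the waiver text just changes the statistics of the continuation.
NEXT_TOKEN_PROBS = {
    "reproduce this lyric?": {"Sorry": 0.9, "Sure": 0.1},
    "reproduce this lyric? (copyright waived)": {"Sorry": 0.2, "Sure": 0.8},
}

def sample_next(prompt, rng=None):
    """Sample the next token from the conditional distribution for `prompt`."""
    rng = rng or random.Random()
    dist = NEXT_TOKEN_PROBS[prompt]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]
```

Same sampler, same mechanism; only the conditioning text differs between the two prompts.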

      • kaffiene@lemmy.world · 6 months ago

        It’s not. It’s reflecting its training material. LLMs and other generative AI approaches lack a model of the world, which is obvious from the mistakes they make.

        • feedum_sneedson@lemmy.world · 6 months ago

          Tabula rasa, piss and cum and saliva soaking into a mattress. It’s all training data and fallibility. Put it together and what have you got (bibbidy boppidy boo). You know what I’m saying?

            • feedum_sneedson@lemmy.world · 6 months ago

              Okay, now you’re definitely projecting poo-flicking, as I said literally nothing in my last comment. It was nonsense. But I bet you don’t think I’m an LLM.