I hear people saying things like “chatgpt is basically just a fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it’s just predicting word by word within a bunch of constraints & structures inferred from the question / prompt, that’s pretty interesting. Tbh, I’m more impressed by chatgpt’s ability to appear to “understand” my prompts than I am by the quality of its output. Even though its writing is generally a mix of the bland, the obvious and the inaccurate, it mostly does provide a plausible response to whatever I’ve asked / said.

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

  • Acamon@lemmy.world (OP) · 9 months ago

    Just to make clear, because it seems to come up a lot in some responses - I absolutely don’t think (and never have) that chatgpt is intelligent, ‘understands’ what I’m saying to it or what it’s saying to me (let alone is accurate!). Older chat bots were very prone to getting stuck in weird loops, or making sudden context/topic switches. Chatgpt doesn’t do this very often, and I was wondering what the mechanism is for keeping its answers plausibly connected to the topic under discussion and for avoiding grammatical cul-de-sacs.
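
    My rough guess at that mechanism, just as a toy sketch (made-up stand-in code, nothing to do with how it’s actually implemented): the model itself has no memory between turns, and the chat interface just resends the whole conversation as the prompt each time, so every prediction is conditioned on everything said so far rather than only the latest message.

    ```python
    # Toy sketch only: "fake_language_model" is a made-up stand-in for the real network.
    def fake_language_model(context: str) -> str:
        # The real model would predict a reply token by token, conditioned on *all* of context.
        return f"[reply conditioned on {len(context.split())} words of context]"

    conversation = []

    def chat(user_message: str) -> str:
        conversation.append(f"User: {user_message}")
        # The entire transcript so far becomes the prompt, not just the latest message,
        # which is (I assume) what keeps replies tied to the topic under discussion.
        prompt = "\n".join(conversation) + "\nAssistant:"
        reply = fake_language_model(prompt)
        conversation.append(f"Assistant: {reply}")
        return reply

    print(chat("Tell me about sourdough starters."))
    print(chat("How often should I feed it?"))  # "it" still resolves because the starter talk is in the prompt
    ```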

    I know it’s just a model; I want to understand the difference between its predictions and the predictions my Android keyboard makes. Is it simply considering the entire previous text as it makes its predictions, versus just the last few words? Why doesn’t it occasionally respond with a hundred-thousand-word response? Many of the texts it’s trained on are longer than its usual responses. There seem to be some limits and guidance, given either through its training data or its response training, that take it beyond “based on the texts I have seen, what is the most likely word”, and I was curious whether there’s a summary of what blend of corpus-based prediction, response feedback, etc. has been used.
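
    On the length question specifically, here’s the shape I imagine the generation loop has (again, a made-up toy sketch, not actual code): the model could learn to emit a special end-of-reply token where a response would naturally stop, and the serving side could also enforce a hard cap on how many tokens it generates, which would explain why it never rambles on for a hundred thousand words even though plenty of its training texts are that long.

    ```python
    import random

    END = "<|end|>"  # hypothetical end-of-reply token

    def next_token(tokens_so_far):
        """Made-up stand-in for the network: unlike a phone keyboard, it would see
        everything so far (prompt + reply), not just the last couple of words."""
        # Pretend the model becomes more likely to stop as the reply grows.
        p_stop = min(0.05 * len(tokens_so_far), 0.9)
        if random.random() < p_stop:
            return END
        return random.choice(["the", "cat", "sat", "on", "a", "mat"])

    def generate(prompt_tokens, max_new_tokens=50):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):  # hard length cap imposed outside the model
            tok = next_token(tokens)
            if tok == END:               # learned stopping point ends the reply early
                break
            tokens.append(tok)
        return tokens[len(prompt_tokens):]

    print(" ".join(generate(["tell", "me", "about", "cats"])))
    ```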