Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • atrielienz@lemmy.world · 1 month ago

    I understand the gist, but I don’t mean that it’s actively looking up facts. I mean that it’s using bad information to give a result (as in, the information it was trained on says 1 + 1 = 5, so it gives that result because that’s what the training data contained). The hallucinations, as the people studying them call them, aren’t that. They happen when the training data doesn’t have an answer for 1 + 1, so the LLM can’t predict that the next likely word is 2. It has no result at all, but it’s programmed to give one anyway, so it gives nonsense.
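
    For what it’s worth, here’s a toy Python sketch of that point (the vocabulary, numbers, and `next_token` helper are made up for illustration, not any real model): whether the training data pushed the model toward a wrong answer or gave it nothing at all, the sampling step still returns a token, so you always get *an* answer.

    ```python
    import numpy as np

    # Made-up toy vocabulary for a hypothetical next-token predictor.
    VOCAB = ["2", "5", "banana", "7", "blue"]

    def next_token(logits: np.ndarray) -> str:
        """Pick the most likely next token, no matter how unsure the model is."""
        probs = np.exp(logits - logits.max())  # softmax, numerically stable
        probs /= probs.sum()
        return VOCAB[int(np.argmax(probs))]

    # Case 1: the training data contained "1 + 1 = 5", so "5" dominates.
    print(next_token(np.array([1.0, 6.0, 0.1, 0.5, 0.1])))       # -> "5" (confidently wrong)

    # Case 2: a question the training data never covered. The logits are
    # nearly flat -- the model has no real answer -- but argmax still
    # returns a token, so you get an answer anyway.
    print(next_token(np.array([0.51, 0.49, 0.50, 0.52, 0.50])))  # -> "7" (nonsense, but emitted)
    ```

    There’s no “I don’t know” row in the vocabulary here, which is the point: the output step always picks something.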