I've seen people complaining that companies have yet to find the next big thing with AI, but I'm already seeing countless products offering good solutions for almost every field imaginable. What is this thing the tech industry is waiting for, and what are all these current products if not what they had in mind?

I'm not great at understanding the business side of this situation, and I've been out of the loop on the news for a long time, so I would really appreciate it if someone could ELI5.

  • slazer2au@lemmy.world · 4 months ago

    Here’s a secret. It’s not true AI. All the hype is marketing shit.

    Large language models like GPT, Llama, and Gemini don’t create anything new. They just regurgitate existing data.

    You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.

    Until an LLM can understand why it is wrong, we won’t have true AI.

      • xep@fedia.io · 4 months ago

        Statistical methods have been a longstanding mainstay in the field of AI since its inception. I think the trouble is that the term AI has been co-opted for marketing.

    • BlameThePeacock@lemmy.ca · 4 months ago

      That’s not a secret. The industry constantly talks about the difference between LLMs and AGI.

      • slazer2au@lemmy.world · 4 months ago

        Until a product goes through marketing and they slap ‘Using AI’ into the blurb when it doesn’t actually use it.

        • agamemnonymous@sh.itjust.works · 4 months ago

          LLMs are AI. They are not AGI. AGI is a particular subset of AI; that does not preclude non-general AI from being AI.

          People keep talking about how it just regurgitates information, and says incorrect things sometimes, and hallucinates or misinterprets things, as if humans do not also do those things. Most people just regurgitate information they found online, true or false. People frequently hallucinate things they think are true and stubbornly refuse to change when called out. Many people cannot understand when and why they’re wrong.

    • Zos_Kia@lemmynsfw.com · 4 months ago

      Large language models like GPT, Llama, and Gemini don’t create anything new

      That’s because it is a stupid use case. Why should we expect AI models to be creative, when that is explicitly not what they are for?

    • Nikls94@lemmy.world · 4 months ago (edited)

      I have different weights for my two dumbbells, and I asked ChatGPT 4.0 how to divide the weights evenly across all four sides of the two dumbbells. It kept telling me to use four half-pound weights instead of my two-pound weights, and finally, after about 15 minutes, it admitted that, with my set of weights, it’s impossible to divide them evenly…
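      The impossibility claim above is just a small partition problem, and it can be checked mechanically rather than argued with a chatbot. A brute-force sketch (the plate weights below are hypothetical examples, since the actual set isn't given in the comment):

      ```python
      from itertools import product

      def splittable_into_four(plates):
          """True if the plates can be assigned to 4 dumbbell sides with equal total weight."""
          target = sum(plates) / 4
          # Brute force: try every assignment of each plate to one of 4 sides.
          # 4**len(plates) combinations, which is fine for a small home set.
          for assignment in product(range(4), repeat=len(plates)):
              sums = [0.0] * 4
              for plate, side in zip(plates, assignment):
                  sums[side] += plate
              if all(abs(s - target) < 1e-9 for s in sums):
                  return True
          return False
      ```

      For example, four half-pound plates split trivially (one per side), while a set like two 2-pound plates plus one half-pound plate cannot be divided evenly across four sides.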

      • FaceDeer@fedia.io · 4 months ago

        You used an LLM for one of the things it is specifically not good at. Dismissing its overall value on that basis is like complaining that your snowmobile is bad at making its way up and down your basement stairs, and so it is therefore useless.

        • Nikls94@lemmy.world · 4 months ago (edited)

          You are totally right! Sadly, people think that LLMs are able to do all of these things…

    • FaceDeer@fedia.io · 4 months ago (edited)

      It is true AI, it’s just not AGI. Artificial General Intelligence is the sort of thing you see on Star Trek. AI is a much broader term and it encompasses large language models, as well as even simpler things like pathfinding algorithms or OCR. The term “AI” has been in use for this kind of thing since 1956, it’s not some sudden new marketing buzzword that’s being misapplied. Indeed, it’s the people who are insisting that LLMs are not AI that are attempting to redefine a word that’s already been in use for a very long time.

      You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.

      Reminds me of the classic quote from Charles Babbage:

      “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

      How is the chatbot supposed to know that the information it’s been given is wrong?

      If you were talking with a human and they thought something was true that wasn’t actually true, do you not count them as an intelligence any more?