• Korne127@lemmy.world · 1 year ago

    The worst part about it is that there have already been two winters in AI development, in the early two thousands and sometime in the 70s/80s, I think? And it was because of exactly this: they always hyped up AI and said it would solve all the world’s problems in a short time, and when that obviously didn’t happen, people got disappointed in it and pulled funding…

    • R0cket_M00se@lemmy.world · 1 year ago

      Well, the models we have now are already useful for real tasks, so it’s unlikely it’ll all just disappear now.

      We didn’t have the computing power to make it happen back then; they just didn’t know that at the time.

      • Korne127@lemmy.world · edited · 1 year ago

        That’s not my point. We already had good AIs and a lot of development in that area of research 50 years ago, and chess computers started beating the best humans around the early 2000s. It’s not a particularly new field. But the development and research of artificial intelligence has already completely stopped twice, and each time it took over a decade for research in the field to really get going again.
        The reason this happened was overly big promises: even when they succeeded at some things, they promised way too much. If people keep promising way too much during the current AI hype as well, I can see the exact same thing happening again: people getting disappointed and the field being sidelined for another decade.
        I’m not saying the current successes will disappear, but future development might stall for a good while, just as it did back then.

        • R0cket_M00se@lemmy.world · 1 year ago

          None of the previous stabs at AI were more than a parlour trick. Modern AIs are not only capable of full, natural conversations but can also turn those conversations into completed tasks, depending on how well the human operator can describe the problem and explain the proposed solution.

          It’s not always perfect, but it gets close enough for a professional to make use of it, whether by cutting out the research phase of a project or by getting the bulk of the work done without the hours it would otherwise have taken. Refining the solution might take ten to fifteen minutes, but you don’t have to be a math genius to see the benefits. Plus, the models we have now are exploding into niche use cases: image generation, voice generation, code generation, all at near-human standards. I’ve had it walk me through deploying Python scripts via VSC, then walk me through setting up a Git repository, then take me through a DnD/choose-your-own-adventure scenario with specific choices having consequences down the line. It was a little basic, but I gave it a pre-established universe and the general premise, and it researched the rest on its own, filling in gaps in ways I hadn’t even suggested based on what it found about the universe.

          That last one isn’t a productive use case, sure. The point is that what we have now isn’t just some one-off system like a chess bot or a Smash Bros CPU set to its highest level; it’s a seed for every future machine-learning model that will be purpose-built for specific scenarios. It’s become ingrained in our society now, and it’s unlikely to just disappear like the earlier attempts you’re describing.