The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”

  • NotMyOldRedditName@lemmy.world

    My experience with an AI coding tool today.

    Me: Can you optimize this method?

    AI: Okay, here’s an optimized method.

    Me, seeing the AI completely removed a critical conditional check.

    Me: Hey, you completely removed this check with variable xyz

    AI: Oops, you’re right. Here you go, I fixed it.

    It did this 3 times on 3 different optimization requests.

    It was 0 for 3

    Although there were some good suggestions in its responses once you got past the blatant first error.

    • Zos_Kia@lemmynsfw.com

      Don’t mean to victim blame, but I don’t understand why you would use ChatGPT for hard problems like optimization. And I say this as a heavy ChatGPT/Copilot user.

      From my observation, LLMs engage with code on the linguistic/syntactic level, not on the level of its technical effects.

      • NotMyOldRedditName@lemmy.world

        Because I had some methods I thought were too complex and I wanted to see what it’d come up with?

        In one case part of the method was checking if a value was within one of 4 ranges and it just dropped 2 of the ranges in the output.

        I don’t think that’s asking too much of it.

        • Zos_Kia@lemmynsfw.com

          I don’t think that’s asking too much of it.

          Apparently it was :D I mean, the confines of the tool are very limited, despite what the Devin.ai cult would like to believe.

    • eupraxia@lemmy.blahaj.zone

      That’s been my experience with GPT - every answer is a hallucination to some extent, so nearly every answer I receive is inaccurate in some way. However, the same applies if I ask a human colleague unfamiliar with a particular system to help me debug something - their answers will be quite inaccurate too, but I’m not expecting them to be accurate, just to have helpful suggestions of things to try.

      I still prefer the human colleague in most situations, but if that’s not possible or convenient GPT sometimes at least gets me on the right path.

    • piecat@lemmy.world

      My favorite is when I ask for something and it gets stuck in a loop, pasting the same comment over and over

  • efstajas@lemmy.world

    Yeah it’s wrong a lot but as a developer, damn it’s useful. I use Gemini for asking questions and Copilot in my IDE personally, and it’s really good at doing mundane text editing bullshit quickly and writing boilerplate, which is a massive time saver. Gemini has at least pointed me in the right direction with quite obscure issues or helped pinpoint the cause of hidden bugs many times. I treat it like an intelligent rubber duck rather than expecting it to just solve everything for me outright.

    • person420@lemmynsfw.com

      I tend to agree, but I’ve found that most LLMs are worse than I am with regex, and that’s quite the achievement considering how bad I am with them.

      • efstajas@lemmy.world

        Hey, at least we can rest easy knowing that human devs will be needed to write regex for quite a while longer.

        … Wait, I’m horrible at Regex. Oh well.

    • Jimmyeatsausage@lemmy.world

      Same here. It’s good for writing your basic unit tests, and the explain feature is useful for getting your head wrapped around complex syntax, especially given how bad searching for useful documentation has gotten on Google and DDG.

    • InternetPerson@lemmings.world

      That’s a good way to use it. Like every technological evolution, it comes with risks and downsides. But if you are aware of that and know how to use it, it can be a useful tool.
      And as always, it only gets better over time. One day we will probably rely more heavily on such AI tools, so it’s a good idea to adapt quickly.

  • exanime@lemmy.today

    You have no idea how many times I’ve mentioned this observation from my own experience, only for people to attack me like I called their baby ugly.

    ChatGPT in its current form is good help, but nowhere near ready to actually replace anyone.

    • UnderpantsWeevil@lemmy.world

      A lot of firms are trying to outsource their dev work overseas to communities of non-English speakers, and then handing the result off to a tiny support team.

      ChatGPT lets the cheap low skill workers churn out miles of spaghetti code in short order, creating the illusion of efficiency for people who don’t know (or care) what they’re buying.

      • exanime@lemmy.today

        Yep… another brilliant short-term strategy to catch a few eager fools; it won’t last mid-term.

  • zelifcam@lemmy.world

    “Major new Technology still in Infancy Needs Improvements”

    – headline every fucking day

    • lauha@lemmy.one

      “Corporation using immature technology in production because it’s cool”

      More news at eleven

      • capital@lemmy.world

        This is scary because up to now, all software released worked exactly as intended so we need to be extra special careful here.

        • otp@sh.itjust.works

          Yes, and we never have and never will put lives in the hands of software developers before!

          /s…for this comment and the above one, for anyone who needs it

    • jmcs@discuss.tchncs.de

      Unready technology that spews dangerous misinformation in the most convincing way possible is being massively promoted.

      • AIhasUse@lemmy.world

        Yeah, because no human would convincingly lie on the internet. Right, Arthur?

        It’s literally built on what confidently incorrect people put on the internet. The only difference is that there are constant disclaimers on it saying it may give incorrect information.

        Anyone too stupid to understand how to use it is too stupid to use the internet safely anyways. Or even books for that matter.

        • jmcs@discuss.tchncs.de

          Holy mother of false equivalence. Google is not supposed to be a random dude on the Internet, it’s supposed to be a reference tool, and for the most part it was a good one before they started enshittifying it.

          • AIhasUse@lemmy.world

            Google is a search engine. It points you to web pages that are made by people. Many times, the people who make those websites have put things on them that are knowingly or unknowingly incorrect but said in an authoritative manner. That was all I was saying, nothing controversial; that’s been a known fact for a long time. You can’t just read something on a single site and then be sure that it has to be true. I get that there are people who strangely fall in love with specific websites and think they are absolute truth, but that’s always been a foolish way to use the internet.

            A great example of people believing blindly is all these horribly doctored Google AI images saying ridiculous things. There are so many idiots who think that every time they see a screenshot of Google AI saying something absurd, it has to be true. People have even gone so far as to use ridiculous fonts just to point out how easy it is to get people to trust anything. Now there’s a bunch of idiots who think all 20 or so Google AI mistakes they’ve seen are genuine, so much so that they think almost all Google AI responses are incorrect. Some people are very stupid. Sorry to break it to you, but LLMs are not the first thing to put incorrect information on the internet.

      • AIhasUse@lemmy.world

        I’m honestly a bit jealous of you. You are going to be so amazed when you realise this stuff is just barely getting started. It’s insane what people are already building with agents. Once this stuff gets mainstream, and specialized hardware hits the market, our current paradigm is going to seem like silent black and white films compared to what will be going on. By 2030 we will feel like 2020 was half a century ago at least.

          • AIhasUse@lemmy.world

            Ray Kurzweil has a phenomenal record of making predictions. He’s like 90% or something and has been saying AGI by 2029 for something like 30+ years. Last I heard, he is sticking with it, but he admits he may be a year or two off in either direction. AGI is a pretty broad term, but if you take it as “better than nearly every human in every field of expertise,” then I think 2029 is quite reasonable.

              • explodicle@sh.itjust.works

                Maybe only 51% of the code it writes needs to be good before it can self-improve. In which case, we’re nearly there!

                • AIhasUse@lemmy.world

                  We are already past that. The 48% is from a version of ChatGPT (3.5) that came out a year ago; there has been lots of progress since then.

    • Snot Flickerman@lemmy.blahaj.zone

      in Infancy Needs Improvements

      I’m just gonna go out on a limb and say that if we have to invest in new energy sources just to make these tools functionally usable… maybe we’re better off just paying people to do these jobs instead of burning the planet to a rocky dead husk to achieve AI?

      • Thekingoflorda@lemmy.world

        Just playing devil’s advocate here, but if we could get to a future with algorithms so good they are essentially a talking version of all human knowledge, this would be a great thing for humanity.

        • Snot Flickerman@lemmy.blahaj.zone

          this would be a great thing for humanity.

          That’s easy to say. Tell me how. Also tell me how to do it without it being biased about certain subjects over others. Captain Beatty would wildly disagree with this even being possible. His whole shtick in Fahrenheit 451 is that all the books disagreed with one another, so that’s why they started burning them.

  • BeatTakeshi@lemmy.world

    Who would have thought that an artificial intelligence trained on human intelligence would be just as dumb?

    • capital@lemmy.world

      Hm. This is what I got.

      I think about 90% of the screenshots we see of LLMs failing hilariously are doctored. Lemmy users really want to believe it’s that bad, though.

      • otp@sh.itjust.works

        I’ve had lots of great experiences with ChatGPT, and I’ve also had it hallucinate things.

        I saw someone post an image of a simplified riddle, where ChatGPT tried to solve it as if it were the entire riddle, but it added extra restrictions and gave a confusing response. I tried it for myself and got an even better answer.

        Prompt (no prior context except saying I have a riddle for it):

        A man and a goat are on one side of the river. They have a boat. How can they go across?

        Response:

        The man takes the goat across the river first, then he returns alone and takes the boat across again. Finally, he brings the goat’s friend, Mr. Cabbage, across the river.

        I wish I was witty enough to make this up.

        • capital@lemmy.world

          I reproduced that one and so I believe that one is true.

          I looked up the whole riddle and see how it got confused.

          It happened on 3.5 but not 4.

            • capital@lemmy.world

              Evidently I didn’t save the conversation but I went ahead and entered the exact prompt above into GPT-4. It responded with:

              The man can take the goat across the river in the boat. After reaching the other side, he can leave the goat and return alone to the starting side if needed. This solution assumes the boat is capable of carrying at least the man and the goat at the same time. If there are no further constraints like a need to transport additional items or animals, this straightforward approach should work just fine!

      • AIhasUse@lemmy.world

        Yesterday, someone posted a doctored one on here, showing that everyone eats it up even if you use a ridiculous font in your poorly doctored photo. People who want to believe are quite easy to fool.

  • Subverb@lemmy.world

    ChatGPT and GitHub Copilot are great tools, but they’re like a chainsaw: if you apply them incorrectly or become too casual and careless with them, they will kick back at you and fuck your day up.

  • dependencyinjection@discuss.tchncs.de

    Sure does, but even when it’s wrong it still gives a good start, meaning I write less syntax myself.

    Particularly for boring stuff.

    Example: My boss is a fan of useMemo in React and isn’t bothered about the overhead, so for repetitive stuff like sorting, it’s easier to just write a comment:

    // Sort members by last name ascending
    

    And then press return a few times. Plus, with its integration into Visual Studio Professional, it learns from your other files, so if you have coding standards it’s great for that.
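
    For illustration, a minimal TypeScript/React sketch of the kind of completion that comment tends to produce; the Member type and its lastName field are hypothetical, made up for the example:

    import { useMemo } from "react";

    interface Member {
      firstName: string;
      lastName: string;
    }

    function MemberList({ members }: { members: Member[] }) {
      // Sort members by last name ascending
      const sortedMembers = useMemo(
        () => [...members].sort((a, b) => a.lastName.localeCompare(b.lastName)),
        [members],
      );

      return (
        <ul>
          {sortedMembers.map((m) => (
            <li key={`${m.lastName}-${m.firstName}`}>
              {m.lastName}, {m.firstName}
            </li>
          ))}
        </ul>
      );
    }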

    Is it perfect? No. Does it save time and allow us to actually solve complex problems? Yes.

    • Zos_Kia@lemmynsfw.com

      Agreed, and I have the exact same approach. It’s like having a colleague next to you who’s not very good but who’s super patient and always willing to help. It’s like having a rubber duck on Adderall who has read all the documentation that exists.

      It seems people are in such a hurry to reject this technology that they fall into the age-old trap of forming completely unrealistic expectations and then being disappointed when they don’t pan out.

      • dependencyinjection@discuss.tchncs.de

        Exactly. I suspect many of the people that complain about its inadequacies don’t really work in an industry that can leverage the potential of this tool.

        You’re spot on about the documentation aspect. I can install a package and rely on the LLM to know the methods and such and if it doesn’t, then I can spend some time to read it.

        Also, I suck at regex, but writing a comment about what the regex should do will make the LLM write it for me. Then I’ll test it.
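
        A concrete sketch of that workflow, with a hypothetical pattern and tests: write the comment, let the LLM fill in the regex, then verify it.

        // Match an ISO 8601 calendar date (YYYY-MM-DD)
        const isoDate = /^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$/;

        // Always test it - LLM-generated regexes are often subtly wrong
        console.assert(isoDate.test("2024-05-17") === true);
        console.assert(isoDate.test("2024-13-01") === false);
        console.assert(isoDate.test("24-05-17") === false);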

        • Zos_Kia@lemmynsfw.com

          Honestly, I started at a new job 2 weeks ago and I’ve been breezing through subjects (notably thanks to ChatGPT) at an alarming rate. I’m happy, the boss is happy, OpenAI gets their 20 bucks a month. It’s fascinating to read all the posts from people who claim it cannot generate any good code - sounds like a skill issue to me.

    • interdimensionalmeme@lemmy.ml

      Yes, and even if it were only right 1% of the time, it would still be amazing.

      Also, hallucinations are not a universally bad thing.

    • agelord@lemmy.world

      In my experience, if you have the necessary skills to point it in the right direction, you don’t need to use it in the first place.

      • andallthat@lemmy.world

        It’s just a convenience, not a magic wand. Sure, relying on AI blindly and exclusively is a horrible idea (one that lots of people peddle and quite a few suckers buy), but there’s room for a supervised and careful use of AI, the same way we started using Google instead of manpages and (grudgingly, for the older of us) tolerated the addition of syntax highlighting and even some code completion to all but the most basic text editors.

      • Test_Tickles@lemmynsfw.com

        So we should all live alone in the woods in shacks we built for ourselves, wearing the pelts of random animals we caught and ate?
        Just because I have the skills to live like a savage doesn’t mean I want to. Hell, even the idea of glamping sounds awful to me.
        No thanks, I will use modern technology to ease my life just as much as I can.

    • aidan@lemmy.world

      It can, but it also sometimes can’t unless you ask it “could it be x answer”.

  • S13Ni@lemmy.studio

    It does, but when you input error logs it does a pretty good job of finding issues. I tried it out first by making a game of Snake that plays itself. It took some prompting to get all the features I wanted, but in the end it worked great in no time. After that I decided to try to make a distortion VST3 plugin similar to the ZVEX Fuzz Factory guitar pedal. It took lots of prompting to get something that actually builds without errors, but I was quickly able to fix those once I copied the error log into the prompt. After that I kept prompting it further, e.g. “great, now it works but the Gate knob doesn’t seem to do anything and the knobs are not centered”.

    In the end I got a perfectly functional distortion plugin. I haven’t compared it to an actual pedal version yet. Not that AI will just replace us all, but it can be truly powerful once you go beyond the initial answer.

  • reksas@sopuli.xyz

    I just use it to get ideas about how to do something, or ask it to write short functions for stuff I wouldn’t know that well. I tried using it to create a graphical UI for a script, but that was a constant struggle to keep it on track. It managed to create something that kind of worked, but it was like trying to hold 2 magnets of opposing polarity together, and I had to constantly reset the conversation after it got “corrupted”.

    It’s a useful tool if you don’t rely on it, use it correctly and don’t trust it too much.

    • tea@lemmy.today

      This has been true for code you pull from posts on Stack Overflow since forever. There are some good ideas, but (a) they aren’t exactly what you are trying to solve, and (b) some of the ideas are incomplete or just bad, and it is up to you to sort the wheat from the chaff.

  • corroded@lemmy.world

    I will resort to ChatGPT for coding help every so often. I’m a fairly experienced programmer, so my questions usually tend to be somewhat complex. I’ve found that it’s extremely useful for those problems that fall into the category of “I could solve this myself in 2 hours, or I could ask AI to solve it for me in seconds.” Usually, I’ll get a working solution, but almost every single time, it’s not a good solution. It provides a great starting-off point for writing my own code.

    Some of the issues I’ve found (speaking as a C++ developer) are: Variables not declared “const,” extremely inefficient use of data structures, ignoring modern language features, ignoring parallelism, using an improper data type, etc.
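
    That list is C++-specific, but the “works, yet not good” pattern shows up in any language. A hypothetical TypeScript illustration of the inefficient-data-structures point:

    // The kind of code an LLM often produces: correct, but O(n * m),
    // because Array.includes is a linear scan inside a loop
    function findCommonIds(a: number[], b: number[]): number[] {
      return a.filter((id) => b.includes(id));
    }

    // What an experienced developer would write instead: O(n + m) via a Set
    function findCommonIdsFast(a: number[], b: number[]): number[] {
      const bSet = new Set(b);
      return a.filter((id) => bSet.has(id));
    }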

    ChatGPT is great for generating ideas, but it’s going to be a while before it can actually replace a human developer. Producing code that works isn’t hard; producing code that’s good requires experience.

  • d0ntpan1c@lemmy.blahaj.zone

    What drives me crazy about its programming responses is how awful the HTML it suggests is. The vast majority of its answers are inaccessible. If anything, an LLM should be able to process and reconcile the correct choices for semantic HTML better than a human… but it doesn’t, because it’s not trained on WAI-ARIA; it’s trained on random Reddit and Stack Overflow results, and it packages those up in nice-sounding words. And it’s not entirely that the training data wants to be inaccessible: a lot of it is just example code without any intent to be accessible anyway. Which is the problem. LLMs don’t know what the context is for something presented as a minimal example vs something presented as an ideal solution, at least not without careful training. These generalized models don’t spend a lot of time on tuned training for a particular task, because that would counteract the “generalized” capabilities.
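
    A hypothetical sketch, in React/TSX, of the accessibility gap described above: the div-as-button markup LLMs frequently suggest, versus the semantic equivalent:

    const save = () => console.log("saved");

    // Commonly suggested: looks like a button, but is skipped by keyboard
    // navigation and announced as plain text by screen readers
    const Bad = () => (
      <div className="btn" onClick={save}>
        Save
      </div>
    );

    // Semantic HTML: focusable, announced as a button, and Enter/Space
    // activation comes for free
    const Good = () => (
      <button type="button" onClick={save}>
        Save
      </button>
    );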

    Sure, it’s annoying if it doesn’t give a fully formed solution in some Python or JS or whatever to perform a task. Sometimes it’ll go way overboard (it loves to tell you to extend JS object methods with slight tweaks rather than use built-in methods, for instance, which is a really bad practice but will get the job done).

    We already have a massive issue with inaccessible websites, and this tech is just pushing a bunch of people who may already be unaware of accessible HTML best practices to write even more inaccessible HTML, confidently.

    But hey, that’s what capitalism is good for, right? Making money on half-baked promises and screwing over the disabled. They aren’t profitable, anyway.

  • NounsAndWords@lemmy.world

    GPT-2 came out a little more than 5 years ago, it answered 0% of questions accurately and couldn’t string a sentence together.

    GPT-3 came out a little less than 4 years ago and was kind of a neat party trick, but I’m pretty sure answered ~0% of programming questions correctly.

    GPT-4 came out a little over a year ago and can answer 48% of programming questions accurately.

    I’m not talking about morality, or creativity, or good/bad for humanity, but if you don’t see a trajectory here, I don’t know what to tell you.

      • otp@sh.itjust.works

        I appreciate the XKCD comic, but I think you’re exaggerating that other commenter’s intent.

        The tech has been improving, and there’s no obvious reason to assume that we’ve reached the peak already. Nor is the other commenter saying we went from 0 to 1 and so now we’re going to see something 400x as good.

        • stufkes@lemmy.world

          I think the one argument for the assumption that we’re near the peak already is the entire issue of AI learning from AI input. I think Numberphile discussed a maths paper which said that to achieve the accuracy we want, there is simply not enough data to train on.

          That’s of course not to say that we can’t find alternative approaches.

        • 31337@sh.itjust.works

          We’re close to the peak using current NN architectures and methods. All this started with the discovery of the transformer architecture in 2017. Advances in architecture and methods have been fairly small and incremental since then. The advancements in performance have mostly come from throwing more data and compute at the models, and diminishing returns have been observed. GPT-3 cost something like $15 million to train. GPT-4 is a little better and cost something like $100 million to train. If the next model costs $1 billion to train, it will likely be a little better.

        • 14th_cylon@lemm.ee

          I appreciate the XKCD comic, but I think you’re exaggerating that other commenter’s intent.

          I don’t think so. The other commenter clearly rejects the criticism (1) and implies that the existence of an upward trajectory means it will one day overcome the problem (2).

          While (1) is a well-documented fact right now, (2) is just wishful thinking.

          Hence the comic, because “the trajectory” doesn’t really mean anything.

          • otp@sh.itjust.works

            In general, “The technology is young and will get better with time” is not just a reasonable argument, but almost a consistent pattern. Note that XKCD’s example is about events, not technology. The comic would be relevant if someone were talking about events happening, or something like sales, but not about technology.

            Here, I’m not saying that you’re necessarily right or they’re necessarily wrong, just that the comic you shared is not a good fit.

            • 14th_cylon@lemm.ee

              In general, “The technology is young and will get better with time” is not just a reasonable argument, but almost a consistent pattern. Note that XKCD’s example is about events, not technology.

              Yeah, no.

              Try comparing horse speed with the Ford Model T and blindly extrapolating that into the future. Look at Moore’s law. Technology does not just grow upwards if you give it enough time; most of it has some kind of limit.

              And it is not out of the realm of possibility that LLMs have already reached theirs, having already stolen all of human knowledge from the internet, found it was not enough, and spewed out bullshit as a result of that monumental theft.

              That may not be the case for every machine learning tool developed for some specific purpose, but the blind assumption that it will just grow indiscriminately because “there is a trend” is overly optimistic.

              • otp@sh.itjust.works

                I don’t think continuing further would be fruitful. I imagine your stance is heavily influenced by your opposition to, or dislike of, AI/LLMs.

                • 14th_cylon@lemm.ee

                  Oh, sure. When someone says “you can’t just blindly extrapolate a curve”, there must be some conspiracy behind it; it absolutely cannot be because you can’t just blindly extrapolate a curve 😂

      • NounsAndWords@lemmy.world

        Perhaps there is some line between assuming infinite growth and declaring that this technology that is not quite good enough right now will therefore never be good enough?

        Blindly assuming no further technological advancements seems equally as foolish to me as assuming perpetual exponential growth. Ironically, our ability to extrapolate from limited information is a huge part of human intelligence that AI hasn’t solved yet.

        • 14th_cylon@lemm.ee

          will therefore never be good enough?

          No one said that. But someone did try to reject the fact that it is demonstrably bad right now, because “there is a trajectory”.

    • egeres@lemmy.world

      Lemmy seems to be very near-sighted when it comes to the exponential curve of AI progress; I think this is because the community is very anti-corp.

      • NounsAndWords@lemmy.world

        No clue? Somewhere between a few years (assuming some unexpected breakthrough) and many decades? The consensus from experts (of which I am not one) seems to be somewhere in the 2030s/40s for AGI. I’m guessing accuracy will probably improve on a topic-by-topic basis; LLMs might never even get there, or only for things they’ve been heavily trained on. If predictive text doesn’t do it, then I would be betting on whatever Yann LeCun is working on.

    • Snot Flickerman@lemmy.blahaj.zone

      https://www.reuters.com/technology/openai-ceo-altman-says-davos-future-ai-depends-energy-breakthrough-2024-01-16/

      Speaking at a Bloomberg event on the sidelines of the World Economic Forum’s annual meeting in Davos, Altman said the silver lining is that more climate-friendly sources of energy, particularly nuclear fusion or cheaper solar power and storage, are the way forward for AI.

      “There’s no way to get there without a breakthrough,” he said. “It motivates us to go invest more in fusion.”

      It’s a good trajectory, but when you have people running these companies saying that we need “energy breakthroughs” to power something that gives more accurate answers in the face of a world that’s already experiencing serious issues arising from climate change…

      It just seems foolhardy if we have to burn the planet down to get to 80% accuracy.

      I’m glad Altman is at least promoting nuclear, but at the same time, he has his fingers deep in a nuclear energy company, so it’s not like this isn’t something he might be pushing because it benefits him directly. He’s not promoting nuclear because he cares about humanity; he’s promoting nuclear because he has a deep investment in nuclear energy. That seems like just one more capitalist trying to corner the market for himself.

      • AIhasUse@lemmy.world

        We are running these things on computers not designed for this. Right now, there are ASICs being built that are specifically designed for it, and traditionally, ASICs give about 5 orders of magnitude of efficiency gains.