• CeeBee@lemmy.world · 1 month ago

      Bad take. Is the first version of your code the one that you deliver or push upstream?

      LLMs can give great starting points. I use multiple LLMs, each for various reasons: usually to clean up something I wrote (too lazy or too busy/stressed to do it manually), find a problem with the logic, or maybe even brainstorm ideas.

      I rarely ever use it to generate blocks of code like asking it to generate “a method that takes X inputs and does Y operations, and returns Z value”. I find that those kinds of results are often vastly wrong or just done in a way that doesn’t fit with other things I’m doing.

      • brbposting@sh.itjust.works · 1 month ago

        LLMs can give great starting points. I use multiple LLMs, each for various reasons: usually to clean up something I wrote (too lazy or too busy/stressed to do it manually), find a problem with the logic, or maybe even brainstorm ideas.

        Impressed some folks think LLMs are useless. Not sure if their lives/workflows/brains are that different from ours, or if they just haven’t given it the old college try.

        I almost always have to use my head before a language model’s output is useful for a given purpose. The tool almost always saves me time, improves the end result, or both. Usually both, I would say.

        It’s a very dangerous technology that is known to output utter garbage and make enormous mistakes. Still, it routinely blows my mind.

    • restingboredface@sh.itjust.works · 1 month ago

      Yeah, but the non-tech-savvy business leaders see they can generate code with AI and think ‘why do I need a developer if I have this AI?’, with no idea whether the code it produces is right or not. This stat should be shared broadly so leaders don’t overestimate the capability and fire people they will desperately need.

      • piecat@lemmy.world · 1 month ago

      I say let it happen. If someone is dumb enough to fire all their workers… they deserve what happens next.

        • The Dark Lord ☑️@lemmy.ca · 1 month ago

          It won’t happen like that. Leadership will just under-hire and expect all their developers to be way more efficient. Work will be really stressful, with tighter deadlines and people questioning why you couldn’t meet them.

      • EatATaco@lemm.ee · 1 month ago

        And they’ll find out very soon that they need devs when they actually try to test something and nothing works.

      • Boozilla@lemmy.world · 1 month ago

        Programming jobs will be safe for a while. They’ve been trying to eliminate those positions since at least the 90s, because coders are expensive and often lack social skills.

        But I do think the clock is ticking. We will see more and more sophisticated AI tools that are relatively idiot-proof and can do things like modify Salesforce, or create complex new Tableau reports with a few mouse clicks, and stuff like that. Jobs will be chiseled away, like they were for our unfortunate friends in graphic design.

        • BlameThePeacock@lemmy.ca · 1 month ago

          You, along with most people, are still looking at automation wrong. It’s never been about removing people entirely, even with AI; it’s about doing the same work at lower cost.

          If you can eliminate one programmer from your four-person team by giving the other three AI to produce the same amount of work, congrats, you’ve just automated one programming job.

          Programming jobs aren’t going anywhere, but either the amount of code produced is about to skyrocket, or the number of employed programmers is going to drop (or most likely both of those things).

          • Tyrangle@lemmy.world · 1 month ago

            Right on. AI feels like a looming paradigm shift in our field that we can either scoff at for its flaws or start learning how to exploit for our benefit. As long as it ends up boosting productivity it’s probably something we’re going to have to learn to work with for job security.

            • BlameThePeacock@lemmy.ca · 1 month ago

              It’s already boosting productivity in many roles. That’s just going to accelerate as the models get better, the processing gets cheaper, and (as you said) people learn to use it better.

          • myliltoehurts@lemm.ee · 1 month ago

            I wonder if this will also have a reverse tail end effect.

            Company uses AI (with devs) to produce a large amount of code -> code is in prod for a few years with incremental changes -> dev roles rotate or get further reduced over time -> company now needs to modernize and change a very large legacy codebase that nobody really understands well enough to even feed it into the AI -> now hiring more devs than before to figure out how to manage a legacy codebase 5-10x the size of what the team could realistically handle.

            Writing greenfield code is relatively easy; maintaining it over years, keeping it up to date and well understood while twisting it for all new requirements - now that’s hard.

            • BlameThePeacock@lemmy.ca · 1 month ago

              AI will help with that too; it’s going to be able to process entire codebases at a time pretty soon.

              Given the visual capabilities now emerging, it can likely also do human-equivalent testing.

              One of the biggest AI tricks we haven’t seen much of yet in mainstream use is this kind of automated double-checking: it generates an answer, then validates whether that answer is sound before actually giving it to a human. Especially with codebases, there really isn’t anything stopping it from coming up with an answer, compiling it, hitting an error, regenerating, and repeating until the code passes all unit tests or even a visual inspection.

              The big limit on this right now is sheer processing cost and context lengths for the models. However, costs for this are dropping faster than any new tech we’ve seen, and it will likely be trivial in just a few years.
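
              A minimal sketch of that generate-and-check loop, assuming a placeholder ask_llm() call and a pytest suite as the validation step (both are stand-ins, not any particular vendor’s API):

              ```python
              import subprocess

              def ask_llm(prompt: str) -> str:
                  """Placeholder: send `prompt` to whichever model you use and return the code it writes."""
                  raise NotImplementedError

              def generate_until_tests_pass(task: str, max_attempts: int = 5) -> str | None:
                  prompt = task
                  for _ in range(max_attempts):
                      code = ask_llm(prompt)
                      with open("candidate.py", "w") as f:
                          f.write(code)
                      # Validate before a human ever sees it: run the project's unit tests.
                      result = subprocess.run(["python", "-m", "pytest", "tests/"],
                                              capture_output=True, text=True)
                      if result.returncode == 0:
                          return code  # Tests pass; hand the answer to the human.
                      # Otherwise feed the failure output back and regenerate.
                      prompt = f"{task}\n\nYour previous attempt failed these tests:\n{result.stdout[-2000:]}"
                  return None  # Still failing after max_attempts; escalate to a person.
              ```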

      • Scrubbles@poptalk.scrubbles.tech · 1 month ago

        Yeah, management are all for this. The first few years are going to be rough, with them immediately hitting the “fire the engineers, we have AI now” button. They won’t realize their fuckup until they’ve been promoted away from it.

      • NuXCOM_90Percent@lemmy.zip · 1 month ago

        Mentioned it before but:

        LLMs program at the level of a junior engineer or an intern. You already need code review and more senior engineers to fix that shit for them.

        What they do is shift that down a level. Now that junior engineer has an intern they are trying to work with. Or… companies realize they don’t benefit from training up those newbie (or stupid) engineers when they are likely to leave in a year or two anyway.

    • thehatfox@lemmy.world · 1 month ago

      Generally you want the reference material used to improve that first version to be correct, though. Otherwise it’s just swapping one problem for another.

      I wouldn’t use a textbook that was 52% incorrect, the same should apply to a chatbot.

  • Ech@lemm.ee · 1 month ago

    For the umpteenth time: an LLM just puts words together; it isn’t a magic answer machine.

  • Veraxus@lemmy.world · 1 month ago

    I’m surprised it scores that well.

    Well, ok… that seems about right for languages like JavaScript or Python, but try it on languages with a reputation for being widely used to write terrible code, like Java or PHP (hence having been trained on terrible code), and it’s actively detrimental to even experienced developers.

    • BlameThePeacock@lemmy.ca · 1 month ago

      No need to defend it.

      Either its value is sufficient that businesses can make money by implementing it and it gets used, or it isn’t.

      I’m personally already using it to make money, so I suspect it’s going to stick around.

  • Melkath@kbin.social · 1 month ago

    Developing with ChatGPT feels bizarrely like when Tony Stark invented a new element with Jarvis’s assistance.

    It’s a prolonged back and forth, and you need to point out the AI’s mistakes and work through a ton of iterations to get something close enough that you can tweak and use, but it’s SO much faster than trawling through Stack Overflow or hoping someone who knows more than you will answer a post for you.

    • elgordio@kbin.social · 1 month ago

      Yeah, if you treat it as a junior engineer with the ability to instantly research a topic, and are prepared to engage in a conversation to work toward a working answer, then it can work extremely well.

      Some of the best outcomes I’ve had have needed 20+ prompts, but I still arrived at a solution faster than any other method.

      • Melkath@kbin.social · 1 month ago

        In the end, there is this great fear that “the AI is going to fully replace us developers”, and the reality is that while that may be a possibility one day, it won’t be any day soon.

        You still need people with deep technical knowledge to pilot the AI and drive it to an implemented solution.

        AI isn’t the end of the industry; it has just greatly sped the industry up.

  • THCDenton@lemmy.world · 1 month ago

    It was pretty good for a while! They lowered the power of it, like Immortan Joe. Do not become addicted to AI.

  • Samueru@lemmy.ml · 1 month ago

    I find it funny, that thumbnail with a “fail” on it. I’m actually surprised that it got 48% right.

  • Crisps@lemmy.world · 1 month ago

    In the short term it really helps productivity, but in the end the reward for working faster is more work. Just doing the hard parts all day is going to burn developers out.

    • birbs@lemmy.world · 1 month ago

      I program for a living and I think of it more as doing the interesting tasks all day, rather than the mundane and repetitive. ChatGPT and GitHub Copilot are great for getting something roughly right that you can tweak to work the way you want.

  • gnuplusmatt@reddthat.com · 1 month ago

    I’ve used ChatGPT and Gemini to build some simple PowerShell scripts for use in Intune deployments. They’ve been fairly simple scripts. Very few of them have been workable solutions out of the box, and they’re often filled with hallucinated cmdlets that don’t exist or that belong to a third-party module it doesn’t tell me needs to be installed. It’s not useless tho, because I am a lousy programmer; it’s been good for giving me a skeleton I can build a working script from and debug myself.

    I reiterate that I am a lousy programmer, but it has sped up my deployments because I haven’t had to work from scratch. 5/10 it’s saved me a half hour here and there.

    • FaceDeer@fedia.io · 1 month ago

      I’m a good programmer and I still find LLMs to be great for banging out python scripts to handle one-off tasks. I usually use Copilot, it seems best for that sort of thing. Often the first version of the script will have a bug or misunderstanding in it, but all you need to do is tell the LLM what it did wrong or paste the text of the exception into the chat and it’ll usually fix its own mistakes quite well.

      I could write those scripts myself by hand if I wanted to, but they’d take a lot longer and I’d be spending my time on boring stuff. Why not let a machine do the boring stuff? That’s why we have technology.

  • haui@lemmy.giftedmc.com · 1 month ago

    The interesting bit for me is that if you ask a rando some programming questions, they will be 99% wrong on average, I think.

    Stack overflow still makes more sense though.

  • crossmr@kbin.social · 1 month ago

    The best method I’ve found is to use it for languages you may have lost familiarity with, and to walk it through what you need step by step. This lets you evaluate its reasoning. When it gets stuck in a loop:

    Try A!
    Actually A doesn’t work because that method doesn’t exist.
    Oh sorry Try B!
    Yeah B doesn’t work either.
    You’re right, so sorry about that, Try A!
    Yeah… we just did this.

    at that point it’s time to just close it down and try another AI.

  • Max-P@lemmy.max-p.me · 1 month ago

    I don’t even bother trying with AI, it’s not been helpful to me a single time despite multiple attempts. That’s a 0% success rate for me.

  • Boozilla@lemmy.world · 1 month ago

    It’s been a tremendous help to me as I relearn how to code on some personal projects. I have written 5 little apps that are very useful to me for my hobbies.

    It’s also been helpful at work with some random database type stuff.

    But it definitely gets stuff wrong. A lot of stuff.

    The funny thing is, if you point out its mistakes, it often does better on subsequent attempts. It’s more an iterative process of refinement than a single prompt giving you the final answer.

    • WalnutLum@lemmy.ml · 1 month ago

      This is because all LLMs function primarily based on the token context you feed them.

      The best way to use any LLM is to completely fill up its history with relevant context, then ask your question.
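
      For example, with a chat-style API (the OpenAI Python client is just one option here; the file names, model, and question are invented), that means loading the relevant material into the history ahead of the actual question:

      ```python
      from openai import OpenAI

      client = OpenAI()

      # Fill the history with the material the answer should be grounded in.
      context = "\n\n".join(open(path).read() for path in ["models.py", "views.py"])

      response = client.chat.completions.create(
          model="gpt-4o",
          messages=[
              {"role": "system", "content": "Answer using the project code provided."},
              {"role": "user", "content": context},  # relevant context first
              {"role": "user", "content": "Why does saving an order with an empty cart fail?"},
          ],
      )
      print(response.choices[0].message.content)
      ```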

      • Boozilla@lemmy.world · 1 month ago

        I worked on a creative writing thing with it, and the more I added, the better its responses got. And 4 is a noticeable improvement over 3.5.

    • Downcount@lemmy.world · 1 month ago

      The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.

      Or it gets stuck in an endless loop of two different but wrong solutions.

      Me: This is my system, version x. I want to achieve this.

      ChatGpt: Here’s the solution.

      Me: But this only works with Version y of given system, not x

      ChatGpt: <Apology> Try this.

      Me: This is using a method that never existed in the framework.

      ChatGpt: <Apology> <Gives first solution again>

      • UberMentch@lemmy.world · 1 month ago

        I used to have this issue more often as well. I’ve had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT’s response and saying “do not include y.”

        • brbposting@sh.itjust.works · 1 month ago

          Agreed, I send my first prompt, review the output, smack my head “obviously it couldn’t read my mind on that missing requirement”, and go back and edit the first prompt as if I really was a competent and clear communicator all along.

          It’s actually not a bad strategy, because it can make some adept assumptions about what might have seemed pertinent to include; so instead of typing out every requirement you can think of, you speech-to-text* a half-assed prompt and then know exactly what to fix a few seconds later.

          *[ad] free Ecco Dictate on iOS, TypingMind’s built-in dictation… anything using OpenAI Whisper, godly accuracy. btw TypingMind is great - stick in GPT-4o & Claude 3 Opus API keys and boom

      • BrianTheeBiscuiteer@lemmy.world · 1 month ago

        While it was explaining BTRFS, I’ve seen ChatGPT contradict itself in the middle of a paragraph. Then when I call it out, it apologizes and then contradicts itself again with slightly different verbiage.

      • mozz@mbin.grits.dev · 1 month ago

        1. “Oh, I see the problem. In order to correct (what went wrong with the last implementation), we can (complete code re-implementation which also doesn’t work)”
        2. Goto 1
    • tristan@aussie.zone · 1 month ago

      I was recently asked to make a small Android app using flutter, which I had never touched before

      I used ChatGPT at first and it was so painful to get correct answers, but then I made an agent or whatever it’s called, where I gave it instructions saying it was a Flutter dev and gave it a bunch of specifics about what I was working on.

      Suddenly it became really useful…I could throw it chunks of code and it would just straight away tell me where the error was and what I needed to change

      I could ask it to write me an example method for something that I could then easily adapt for my use

      One thing I would do is ask it to write a method to do X, while I was writing the part that would use that method.

      This wasn’t a big project and the whole thing took less than 40 hours, but for me to pick up a new language, set up the development environment, and make a working app for a specific task in 40 hours was a huge deal to me… I think without ChatGPT, just learning all the basics and debugging would have taken more than 40 hours alone.
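
      (For anyone on the API rather than the ChatGPT UI, the rough equivalent of that “agent” is a persistent system message carrying the role and project specifics. A hedged sketch with invented wording, using the OpenAI client as one example:)

      ```python
      from openai import OpenAI

      client = OpenAI()

      # Role and project specifics live in the system message, so every question
      # is answered in that context instead of from a blank slate.
      SYSTEM_PROMPT = (
          "You are an experienced Flutter developer. "
          "The project is a small Android-only app; point out the exact line to change."
      )

      def ask(question: str) -> str:
          response = client.chat.completions.create(
              model="gpt-4o",
              messages=[
                  {"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": question},
              ],
          )
          return response.choices[0].message.content

      print(ask("This widget throws on rebuild:\n<paste code here>\nWhere is the error?"))
      ```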

    • mozz@mbin.grits.dev · 1 month ago

      It’s incredibly useful for learning. ChatGPT was what taught me to unlearn, essentially, writing C in every language, and how to write idiomatic Python and JavaScript.
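
      (A tiny illustration of the kind of shift that means; the snippet is mine, not from the thread:)

      ```python
      names = ["ada", "grace", "linus"]

      # C-in-Python habit: manual index bookkeeping.
      upper = []
      for i in range(len(names)):
          upper.append(names[i].upper())

      # Idiomatic Python: a list comprehension says the same thing directly.
      upper = [name.upper() for name in names]
      ```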

      It is very good for boilerplate code or fleshing out a big module without you having to do the typing. My experience was just like yours; once you’re past a certain (not real high) level of complexity you’re looking at multiple rounds of improvement or else just doing it yourself.

      • Boozilla@lemmy.world · 1 month ago

        Exactly. And for me, being in middle age, it’s a big help with recalling syntax. I generally know how to do stuff, but need a little refresher on the spelling, parameters, etc.

      • CeeBee@lemmy.world · 1 month ago

        It is very good for boilerplate code

        Personally I find all LLMs in general not that great at writing larger blocks of code. They’re fine for smaller stuff, but the more you expect out of them, the more they’ll get wrong.

        I find they work best with existing stuff that you provide. Like “make this block of code more efficient” or “rewrite this function to do X”.

  • paddirn@lemmy.world · 1 month ago

    I wonder if the AI is using bad code pulled from threads where people are asking questions about why their code isn’t working, but ChatGPT can’t tell the difference and just assumes all code is good code.