At Open Source Summit Japan, Linux and Git creator Linus Torvalds talked about Rust in Linux, Linux maintainer fatigue, and AI’s future role in Linux and open-source development.

  • Spectacle8011@lemmy.comfysnug.space

    After he got a handle on it, Torvalds returned to the kernel. He’s been much more mild-tempered since then. As he mentioned in Tokyo, he won’t be “giving some company the finger. I learned my lesson.”

    This is probably a good thing.

    Looking ahead, Hohndel said, we must talk about “artificial intelligence large language models (LLM). I typically say artificial intelligence is autocorrect on steroids. Because all a large language model does is it predicts what’s the most likely next word that you’re going to use, and then it extrapolates from there, so not really very intelligent, but obviously, the impact that it has on our lives and the reality we live in is significant. Do you think we will see LLM written code that is submitted to you?”

    Torvalds replied, “I’m convinced it’s gonna happen. And it may well be happening already, maybe on a smaller scale where people use it more to help write code.” But, unlike many people, Torvalds isn’t too worried about AI. “It’s clearly something where automation has always helped people write code. This is not anything new at all.”

    Indeed, Torvalds hopes that AI might really help by being able “to find the obvious stupid bugs because a lot of the bugs I see are not subtle bugs. Many of them are just stupid bugs, and you don’t need any kind of higher intelligence to find them. But having tools that warn more subtle cases where, for example, it may just say ‘this pattern does not look like the regular pattern. Are you sure this is what you need?’ And the answer may be ‘No, that was not at all what I meant. You found an obvious bug. Thank you very much.’ We actually need autocorrects on steroids. I see AI as a tool that can help us be better at what we do.”

    But, “What about hallucinations?” asked Hohndel. Torvalds, who will never stop being a little snarky, said, “I see the bugs that happen without AI every day. So that’s why I’m not so worried. I think we’re doing just fine at making mistakes on our own.”

    There were no questions about whether maintainers would start using LLMs themselves; the questions focused on how maintainers would respond to LLM-generated (or -assisted) patches submitted to them. That framing seems perfectly reasonable to me, but asking whether maintainers would adopt LLMs in their own work might have drawn a more interesting answer from Torvalds.
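
    To make Torvalds’ “obvious stupid bugs” point concrete, here’s a hypothetical sketch (mine, not from the article or the kernel) of the kind of copy-paste mistake a pattern-aware tool could flag without understanding the code’s intent:

    ```python
    # Hypothetical example: the third check repeats the second instead of
    # following the pattern -- exactly the kind of "this does not look like
    # the regular pattern" warning Torvalds describes.
    def validate_ids(user_id, post_id, comment_id):
        if user_id < 0:
            raise ValueError("user_id must be non-negative")
        if post_id < 0:
            raise ValueError("post_id must be non-negative")
        if post_id < 0:  # copy-paste slip: should check comment_id
            raise ValueError("comment_id must be non-negative")
    ```

    A checker doesn’t need to understand what the function is for to notice that the third branch doesn’t follow the pattern of the first two.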

    • possibly a cat@lemmy.ml

      I agree it was a sober statement, but a narrow response. I like what Hohndel said:

      but obviously, the impact that it has on our lives and the reality we live in is significant.

      And I was curious whether Linus would comment at a more meta level as well.

      I just threw together a portfolio Flask app (infinitely simpler than doing kernel work, of course) that was 2000-3000 lines of connecting APIs and processing data. An AI wrote basically all of the code. 95% of it was scripts that I absolutely could have written myself with my usual references, and the other 5% I would have eventually found explained on StackExchange. (I still managed to learn from the code, thankfully, because I was still proofreading and continually debugging it.) I knew what I wanted the app to do and how I wanted it done, and the AI gave me more-or-less functional code for each mechanism. It saved me hours of CSS and other front-end tinkering that I loathe. It does take time to get the AI on the same page with your design - and to maintain its focus - but I can see myself becoming significantly more productive with these tools. I’m no expert, but neither is most of the workforce (although kernel work is, again, much more in the expert realm).
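
      For a sense of what I mean by glue code, here’s a hypothetical, simplified route in the same spirit (not my actual app; the endpoint and fields are made up):

      ```python
      # Hypothetical sketch: call an external API, massage the response,
      # and hand the result to a template -- the kind of code the AI wrote.
      import requests
      from flask import Flask, render_template

      app = Flask(__name__)

      @app.route("/weather/<city>")
      def weather(city):
          resp = requests.get(
              "https://api.example.com/weather",  # placeholder endpoint
              params={"q": city},
              timeout=10,
          )
          resp.raise_for_status()
          data = resp.json()
          summary = {
              "city": city,
              "temp_c": round(data.get("temp_k", 0) - 273.15, 1),
              "conditions": data.get("conditions", "unknown"),
          }
          return render_template("weather.html", **summary)
      ```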

      Afaik, predictive and prototyping features of this sort have led to notable productivity gains. If this is predictive text on steroids, which I do not inherently disagree with, then we’re still talking about some pretty crazy steroids. What happens when even kernel work gets done in 10% of the time it normally would have taken? Is a surplus of labor maintained, and if so, where does the newly-freed effort get utilized? We’re getting closer to the passing of the torch, and this technology could have profound organizational consequences. But maybe it is too early to speak confidently on these matters. The resource consumption of AI and its growth isn’t particularly sustainable, after all.

      • Spectacle8011@lemmy.comfysnug.space

        It was interesting to hear your perspective!

        I’m a newbie programmer (and have been for quite a few years), but I’ve recently started trying to build useful programs. They’re small ones (under 1000 lines of code), but they accomplish the general task well enough. I’m also really busy, so as much as I like learning this stuff, I don’t have a lot of time to dedicate to it. The first program, which was 300 lines of code, took me about a week to build. I did it all myself in Python. It was a really good learning experience. I learned everything from how to read technical specifications to how to package the program for others to easily install.

        The second program I built was about 500 lines of code, a little smaller in scope, and prototyped entirely in ChatGPT. I needed to get it done in a weekend, and I got it done in 6 hours. It used SQLite and a lot of database queries that I didn’t know much about before starting the project, which surely would have taken hours to research. I spent about 4 hours myself fixing the things ChatGPT screwed up. I think I still learned a lot from the project, though I obviously would have learned more if I had to do it myself. One thing I asked it to do was to generate a man page, because I don’t know Groff. I was able to improve it afterward by glancing at the Groff docs, and I’m pretty happy with it. I still have yet to write a man page for the first program, despite wanting to do it over a year ago.
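
        To give a rough idea, the SQLite parts looked something like this hypothetical, simplified sketch (not my actual code; the table and functions are made up):

        ```python
        # Hypothetical sketch of the kind of SQLite boilerplate ChatGPT gave me:
        # create a table if needed, insert a row, and read back recent entries.
        import sqlite3

        def get_connection(path="entries.db"):
            conn = sqlite3.connect(path)
            conn.execute(
                """CREATE TABLE IF NOT EXISTS entries (
                       id INTEGER PRIMARY KEY AUTOINCREMENT,
                       title TEXT NOT NULL,
                       created TEXT DEFAULT CURRENT_TIMESTAMP
                   )"""
            )
            return conn

        def add_entry(conn, title):
            with conn:  # commits on success, rolls back on error
                conn.execute("INSERT INTO entries (title) VALUES (?)", (title,))

        def recent_entries(conn, limit=10):
            cur = conn.execute(
                "SELECT id, title, created FROM entries ORDER BY created DESC LIMIT ?",
                (limit,),
            )
            return cur.fetchall()
        ```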

        I was not particularly concerned about my programs being used as training data because they used a free license anyway. LLMs seem great for doing the work you don’t want to do, or don’t want to do right now. In a completely unrelated example, I sometimes ask ChatGPT to generate names for countries/continents because I really don’t care that much about that stuff in my story. The ones it comes up with are a lot better than any half-assed stuff I could have thought of, which probably says more about me than anything else.

        On the other hand, I really don’t like how LLMs seem to be mainly controlled by large corporations. Most don’t even meet the open source definition, but even if they did, they’re not something a much smaller business can run. I almost want to reject LLMs for that reason on principle. I think we’re also likely to see a dramatic increase in pricing and enshittification in the next few years, once the excitement dies down. I want to avoid becoming dependent on this stuff, so I don’t use it much.

        I think LLMs would be great for automating a lot of the junk work away, as you say. The problem I see is they aren’t reliable, and reliability is a crucial aspect of automation. You never really know what you’re going to get out of an LLM. Despite that, they’ll probably save you time anyway.

        I’m no expert, but neither is most of the workforce (although kernel work is, again, much more in the expert realm).

        I think experts are the ones who would benefit from LLMs the most, even though LLMs consistently produce average work in my experience. They know enough to tell when it’s wrong, and they’re not so close to the code that they miss the obvious. For years, translators have been using machine translation tools to speed up their work, basically relegating them to being translation checkers. Of course, you’d probably see a lot of this with companies that contract translators at pitiful per-word rates, who need to work really hard to get decent pay. The company then expects everyone to perform at that level, which means everyone needs to use machine translation tools to keep up, which means efficiency is prioritized over quality.

        This is a very different scenario from kernel work. Translation has kind of been like that for a while from what I know, so LLMs are just the latest thing to exacerbate the issues.

        I’m still pretty undecided on where I fall on the issue of LLMs. Ugh, nothing in life can ever be simple. Sorry for jumping all over the place, lol. That’s why I would have been interested in Linus Torvalds’ opinion :)