• Scrubbles@poptalk.scrubbles.tech · 3 months ago

    The fun thing with AI that companies are starting to realize is that there’s no way to “program” AI, and I just love that. The only way to guide it is by retraining models (and LLMs will just always have stuff you don’t like in them), or using more AI to say “Was that response okay?” which is imperfect.
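
    That “more AI to check the AI” approach looks roughly like this (a sketch only; query_llm is a stand-in, not a real API):

    ```python
    def query_llm(prompt: str) -> str:
        # Stand-in for whatever model API you'd actually call.
        raise NotImplementedError("plug in a real model call here")

    def guarded_reply(user_message: str) -> str:
        draft = query_llm(user_message)
        # Ask a second model to grade the first one's answer...
        verdict = query_llm(f"Was that response okay? Answer YES or NO.\n\n{draft}")
        # ...but the judge is also an LLM, so this check can be tricked too.
        if verdict.strip().upper().startswith("YES"):
            return draft
        return "Sorry, I can't help with that."
    ```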

    And I am just loving the fallout.

  • swordsmanluke@programming.dev · 3 months ago

    What I think is amazing about LLMs is that they are smart enough to be tricked. You can’t talk your way around a password prompt. You either know the password or you don’t.

    But LLMs have enough of something intelligence-like that a moderately clever human can talk them into doing pretty much anything.
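
    Roughly, the contrast in code (query_llm here is a stand-in for any real model API):

    ```python
    import hmac

    def query_llm(prompt: str) -> str:
        # Stand-in for a real model call; imagine an LLM that's been
        # sweet-talked into agreeing.
        return "yes, that seems fine"

    def password_gate(attempt: str, secret: str) -> bool:
        # Deterministic: no amount of persuasion changes the comparison.
        return hmac.compare_digest(attempt.encode(), secret.encode())

    def llm_gate(request: str) -> bool:
        # The "rule" lives in soft weights and prompt text, so a clever
        # enough user can argue their way past it.
        answer = query_llm(f"Only allow authorized requests. Allow this? {request}")
        return answer.lower().startswith("yes")
    ```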

    That’s a wild advancement in artificial intelligence. Something that a human can trick, with nothing more than natural language!

    Now… Whether you ought to hand control of your platform over to a mathematical average of internet dialog… That’s another question.

    • bbuez@lemmy.world · 3 months ago

      I don’t want to spam this link, but seriously, watch this 3blue1brown video on how text transformers work. You’re right on that last part, but it’s a far cry from an intelligence. Just a very clever use of statistical methods. And it’s precisely for that reason that it can be “convinced”: any guardrails restraining its output have to be weighted into the model itself, so they’re just more statistics that can fail.
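
      The gist of the video fits in a few lines. A toy version of a single next-token step, with made-up numbers:

      ```python
      import math
      import random

      # One next-token step: the model emits a score (logit) per candidate
      # token, softmax turns the scores into probabilities, and we sample.
      # No goals, no understanding, just weighted dice. Numbers are made up.
      logits = {"the": 2.0, "cat": 1.5, "password": 0.1}
      total = sum(math.exp(v) for v in logits.values())
      probs = {tok: math.exp(v) / total for tok, v in logits.items()}
      next_token = random.choices(list(probs), weights=list(probs.values()))[0]
      print(probs, "->", next_token)
      ```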

      I’m not intending to downplay the significance of GPTs, but we need to bring the hype around them back down to a baseline before we can discuss where AI goes next and what it can mean for people. And certainly long before we use them for any secure services, because we’ve already seen what can happen.

      • swordsmanluke@programming.dev · 3 months ago

        Oh, for sure. I focused on ML in college. My first job was actually coding self-driving vehicles for open-pit copper mining operations! (I taught gigantic earth tillers to execute 3-point turns.)

        I’m not in that space anymore, but I do get how LLMs work. Philosophically, I’m inclined to believe that the statistical model encoded in an LLM does model a sort of intelligence. Certainly not consciousness - LLMs don’t have any mechanism I’d accept as agency or any sort of internal “mind” state. But I also think the common description of “supercharged autocorrect” is overly reductive: useful as a rhetorical counter to the hype cycle, but just as misleading in its own way.

        I’ve been playing with chatbots of varying complexity since the 1990s. LLMs are frankly a quantum leap forward. Even GPT-2 was pretty much useless compared to modern models.

        All that said… All these models are trained on the best - but mostly worst - data the world has to offer… And if you average a handful of textbooks with an internet-full of self-confident blowhards (like me) - it’s not too surprising that today’s LLMs are all… kinda mid compared to an actual human.

        But if you compare the performance of an LLM to the state of the art in natural language comprehension and response… It’s not even close. Going from a suite of single-focus programs, each using keyword recognition and word stem-based parsing to guess what the user wants (Try asking Alexa to “Play ‘Records’ by Weezer” sometime - it can’t because of the keyword collision), to a single program that can respond intelligibly to pretty much any statement, with a limited - but nonzero - chance of getting things right…
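
        That keyword collision is easy to reproduce with a toy parser (a hypothetical sketch, obviously not Alexa’s real code):

        ```python
        # Toy keyword/stem-based intent parser, like the old single-purpose
        # assistants. "Records" stems to the keyword "record", so the title
        # itself gets eaten by the keyword filter.
        MUSIC_KEYWORDS = {"play", "record", "song", "album"}

        def parse(utterance: str) -> str:
            words = utterance.lower().replace("'", "").split()
            stems = [w.rstrip("s") for w in words]  # crude stemming
            if "play" in stems:
                title = [w for w, s in zip(words, stems)
                         if s not in MUSIC_KEYWORDS and w != "by"]
                return "play: " + " ".join(title)
            return "unknown intent"

        print(parse("Play 'Records' by Weezer"))  # -> "play: weezer" (title lost)
        ```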

        This tech is raw and not really production ready, but I’m using a few LLMs in different contexts as assistants… And they work great.

        Even though LLMs are not a good replacement for actual human skill - they’re fucking awesome. 😅

    • Rozaŭtuno@lemmy.blahaj.zone · 3 months ago

      There’s a game called Suck Up that is basically that: you play as a vampire who needs to trick AI-powered NPCs into inviting you inside their houses.

        • Lmaydev@programming.dev · 3 months ago

          LLMs are in a position to make boring NPCs much better.

          Once they can be run locally at a good speed it’ll be a game changer.

          I reckon we’ll start getting AI cards for computers soon.
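
          A rough sketch of what an LLM-backed NPC could look like, assuming a llama.cpp-style local server with an OpenAI-compatible endpoint (the URL, model name, and persona are all placeholders):

          ```python
          import requests  # assumes a local server speaking the OpenAI chat API

          PERSONA = ("You are Brom, a grumpy blacksmith NPC. Stay in character, "
                     "keep replies under two sentences, never mention being an AI.")

          def npc_reply(player_line: str, history: list[dict]) -> str:
              history.append({"role": "user", "content": player_line})
              resp = requests.post(
                  "http://localhost:8080/v1/chat/completions",  # placeholder URL
                  json={"model": "local-model",  # placeholder name
                        "messages": [{"role": "system", "content": PERSONA}] + history,
                        "max_tokens": 60},
                  timeout=30,
              )
              reply = resp.json()["choices"][0]["message"]["content"]
              history.append({"role": "assistant", "content": reply})
              return reply

          history: list[dict] = []
          print(npc_reply("Nice weather today, blacksmith.", history))
          ```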

          • bbuez@lemmy.world · 3 months ago

            We already do! And on the cheap! I have a Coral TPU running presence detection on some security cameras. I’m pretty sure they can run LLMs too, but I haven’t looked around.

            GPT4All runs rather well on a 2060, and I would imagine a lot better on newer hardware.
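
            For anyone curious, the gpt4all Python bindings make trying it pretty painless (the model name is just an example, and the API may have shifted since I looked):

            ```python
            from gpt4all import GPT4All  # pip install gpt4all

            # Example model; any GGUF model the library lists should work.
            model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")
            with model.chat_session():
                print(model.generate("Why is the sky blue?", max_tokens=128))
            ```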

  • fidodo@lemmy.world · 3 months ago

    Damn it, all those stupid hacking scenes in CSI and stuff are going to be accurate soon.

    • General_Effort@lemmy.world · 3 months ago (edited)

      You’re allowed to use copyrighted works for lots of reasons, e.g. satire and parody, in which case you can legally publish it and make money.

      The problem is that this precise situation is not legally clear. Are you using the service to make the image or is the service making the image on your request?

      If the service is making the image and then sending it to you, then that may be a copyright violation.

      If the user is making the image while using the service as a tool, it may still be a problem. Whether this turns into a copyright violation depends a lot on what the user/creator does with the image. If they misuse it, the service might be sued for contributory infringement.

      Basically, they are playing it safe.

    • hatedbad@lemmy.sdf.org · 3 months ago

      Just a guess, but in order for an LLM to generate or draw anything, it needs source material in the form of training data. For copyrighted characters, this would mean OpenAI would be willingly feeding their model copyrighted images, which would likely open them up to legal action.

  • Rhaedas@fedia.io · 3 months ago

    LLMs are just very complex and intricate mirrors of ourselves, because they pull from our past ramblings to find the best response to a prompt. They only feel intelligent because we can’t see the inner workings. ELIZA was nothing but IF/THEN statements, and yet many people were still convinced it was really talking to them. Humans are wired to anthropomorphize, often to a fault.
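
    For anyone who hasn’t seen ELIZA’s guts, the whole trick fits in a few lines. Roughly this, minus the pronoun swapping:

    ```python
    import re

    # The heart of ELIZA: ordered IF/THEN pattern rules, nothing more.
    RULES = [
        (r"\bI need (.+)", "Why do you need {0}?"),
        (r"\bmy (mother|father)\b", "Tell me more about your {0}."),
        (r".*", "Please go on."),  # catch-all keeps the conversation moving
    ]

    def eliza(line: str) -> str:
        for pattern, template in RULES:
            m = re.search(pattern, line, re.IGNORECASE)
            if m:
                return template.format(*m.groups())
        return "Please go on."

    print(eliza("I need a vacation"))       # Why do you need a vacation?
    print(eliza("my mother hates winter"))  # Tell me more about your mother.
    ```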

    I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a kind of database to pull from. What concerns me is that even though LLMs are not “thinking” themselves, we’ve dived in head first, ignoring their many flaws and the dangers of misuse. That says a lot about how we’ll handle the real problems of AI development, like the misalignment problem, which has basically been shelved by AI companies in favor of profits and being first.

    HAL from 2001/2010 was a great lesson - it’s not the AI…the humans were the monsters all along.

  • halloween_spookster@lemmy.world · 3 months ago

    I once asked ChatGPT to generate some random numerical passwords, as I was curious about its ability to produce random data. It told me that it couldn’t. I asked why not (I knew why it was resisting, but I wanted to see its response), and it promptly gave me a bunch of random numerical passwords.

      • ForgotAboutDre@lemmy.world · 3 months ago

        It won’t generate truly random numbers. It’ll produce numbers that merely look random, pieced together from patterns in its training data.

        If it’s asked to generate passwords, I wouldn’t be surprised if it regurgitated lists of leaked passwords that are available online.
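
        If you actually need random passwords, use a real CSPRNG instead of a language model. Python ships one:

        ```python
        import secrets
        import string

        # Cryptographically secure randomness from the OS, not a model
        # sampling over whatever password dumps ended up in its training data.
        alphabet = string.ascii_letters + string.digits
        password = "".join(secrets.choice(alphabet) for _ in range(16))
        print(password)
        ```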

        These models are created from masses of data scraped from the internet, most of it unreviewed and unverified. The companies really don’t want to review and verify it, because doing so is expensive and much of their data is illegal anyway.