Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.

Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.

But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement.

  • Snot Flickerman@lemmy.blahaj.zone · 7 months ago

    This is obviously not intentional on the part of the LLM purveyors. They are probably kicking themselves over the nightmare PR storm of dealing with this, and over how it could effectively lead to more actual legislation controlling them. If there’s one thing congresscritters love, it’s writing bad fucking laws to “protect the children.”

    However, this paints a clear picture of how few guardrails are on these LLMs at all, and how they do need legislation to keep them from operating under this dumb “move fast, break shit” ethos, which almost always results in situations like this where they’re going “oopsies!” about the horrible shit they did to make it happen.

    It should be clear that the profit motive and the desire to “be the first” out the door with the technology have driven the people making these to take thoughtless short-term shortcuts, like using outright pirated media for their models, or not actually screening their data-sets and ending up with an image generator trained on child porn.
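
    For context on what “screening their data-sets” could even mean in practice, here is a minimal, hypothetical sketch of hash-based screening: compute a perceptual hash for each training image and compare it against a list of known-bad hashes. This is not any lab’s actual pipeline; real screening relies on restricted hash lists (e.g. PhotoDNA/NCMEC) available only to vetted organizations, and the file names and function names below are placeholders.

    ```python
    # Hypothetical sketch of perceptual-hash screening of a training image folder.
    # "blocklist.txt" and the directory layout are assumptions for illustration only.
    from pathlib import Path

    from PIL import Image
    import imagehash  # perceptual hashing: pip install imagehash


    def load_blocklist(path: str) -> set[str]:
        """Load known-bad perceptual hashes, one hex string per line."""
        return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}


    def screen_dataset(image_dir: str, blocklist: set[str]) -> list[str]:
        """Return paths of images whose perceptual hash matches the blocklist."""
        flagged = []
        for img_path in Path(image_dir).glob("**/*.jpg"):
            try:
                phash = str(imagehash.phash(Image.open(img_path)))
            except OSError:
                continue  # unreadable or corrupt file; skip rather than crash
            if phash in blocklist:
                flagged.append(str(img_path))
        return flagged


    if __name__ == "__main__":
        bad = screen_dataset("training_images/", load_blocklist("blocklist.txt"))
        print(f"{len(bad)} images matched the blocklist and should be pulled before training")
    ```

    Even a pass this crude has to be run before training, which is exactly the kind of due diligence the “move fast” approach skips.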

    So yeah, we need legislation to wrangle these fuckers into some semblance of doing due diligence in fucking anything they do, and it would be helpful if that legislation wasn’t spurred by dumb fucking “protect the children” horseshit that will result in a bad bill that hurts everybody, including children, like usual.

    Also, personal opinion, but as someone who has messed with AI image generation, just based on many weird prompt outputs I’ve gotten, I kind of always suspected they were being trained on abusive material. If you couldn’t see it, honestly, you weren’t paying attention, but once again that’s personal opinion.

    • FaceDeer@kbin.social · 7 months ago

      This isn’t about LLMs; it’s about image generation. It’s also important to note that these are still just accusations, and anything child-porn-related is going to draw a lot of angry people insisting there are no shades of grey while disagreeing on where that supposed hard line actually lies.

      Finally, the researcher behind this has stated that he opposes the existence of these AIs regardless of the child porn issue, so I’m a bit dubious there as well.

      But this is the internet, so let the angry arguments roll on I guess.