Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.

Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.

But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement.

  • the_q@lemmy.world · 7 months ago
    Y’all keep on using it though. You keep using Twitter. You willingly participate in the exploitation and suffering of others. You only care when it’s convenient or if it puts the spotlight on you.

    • Snot Flickerman@lemmy.blahaj.zone · 7 months ago
      I’m not even sure there’s a solution to this with international capitalism and nearly 10 billion humans.

      With that many people existing and having access to such services, there will always be enough people using them to justify their existence.

      I understand why you used the term “you,” but I think that misunderstands how many different people, each with their own reasons, continue to use such things.

      As much as it might be cathartic to call other people out for it, in the end, it’s a little myopic.