• Nollij@sopuli.xyz

    Of all the problems and challenges with this idea, this is probably the easiest to solve technologically. If we assume that AI-generated material is given the ok to be produced, the AI generators would need to (and easily can, and arguably already should) embed a watermark (visible or not) or digital signature. This would prevent actual photos from being presented as AI. It may be possible to remove these markers, but the reasons to do so are very limited in this scenario.
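
    A rough sketch of what that could look like, assuming the generator holds a signing key and verifiers know the matching public key (the function names and the “ai-signature” metadata field are made up for illustration, not any existing standard):

    ```python
    # Sketch: sign the pixel data of a generated image and store the signature
    # in the file's metadata, so anything lacking a valid signature can't be
    # passed off as output of this generator.
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from PIL import Image, PngImagePlugin

    signing_key = Ed25519PrivateKey.generate()  # in practice, a protected long-term key

    def save_signed(img: Image.Image, path: str) -> None:
        digest = hashlib.sha256(img.tobytes()).digest()  # hash of the raw pixel data
        meta = PngImagePlugin.PngInfo()
        meta.add_text("ai-signature", signing_key.sign(digest).hex())
        img.save(path, "PNG", pnginfo=meta)  # PNG is lossless, so the pixels round-trip

    def is_marked_as_ai(path: str) -> bool:
        img = Image.open(path)
        sig = img.info.get("ai-signature")
        if sig is None:
            return False  # no marker: treated as a real photo
        digest = hashlib.sha256(img.tobytes()).digest()
        try:
            signing_key.public_key().verify(bytes.fromhex(sig), digest)
            return True
        except InvalidSignature:
            return False
    ```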

    • raccoona_nongrata@beehaw.org

      This wouldn’t disrupt the pattern of pedophiles forming communities though, which is where a lot of the abuse begins to happen; as pedophiles begin to network with one another and affirm and normalize each other’s compulsion towards abuse, it emboldens them to act on those desires. It doesn’t matter if a site is full of AI imagery; it has the same effect of allowing these communities to form.

      There is no value in AI CSAM. And yes, AI content should be watermarked, but there’s no justifiable reason to allow the sexualization of children, whether through real photos or AI ones.

      • Nollij@sopuli.xyz

        I was actually specifically avoiding all of those concerns in my reply. They’re valid, and others are discussing them elsewhere in this thread; they’re just not what my reply was about.

        I was exclusively talking about how to identify if an image was generated by AI or was a real photo.

        • abhibeckert@beehaw.org

          > I was exclusively talking about how to identify if an image was generated by AI or was a real photo.

          These images are being created with open source / free models. Whatever watermark feature the open source code has will simply be removed by the criminal.

          Watermarking is like a lock on a door. Keeps honest people honest… which is useful, but it’s not going to stop any real criminals.

          • evranch@lemmy.ca

            In this specific scenario, you wouldn’t want to remove the watermark.

            The watermark would be the only thing that defines the content as “harmless” AI-generated content, which for the sake of discussion is being presented as legal. Remove the watermark, and as far as the law knows, you’re in possession of real CSAM and you’re on the way to prison.

            The real concern would be adding the watermark to the real thing, to let it slip through the cracks. However, not only would this be computationally expensive if it was properly implemented, but I would assume the goal in marketing the real thing could only be to sell it to the worst of the worst, people who get off on the fact that children were abused to create it. And in that case, if AI is indistinguishable from the real thing, how do you sell criminal content if everyone thinks it’s fake?

            Anyways, I agree with other commenters that this entire can of worms should be left tightly shut. We don’t need to encourage pedophilia in any way. “Regular” porn has experienced selection pressure to the point where taboo is now mainstream. We don’t need to create a new market for bored porn viewers looking for something shocking.

            • abhibeckert@beehaw.org

              > The real concern would be adding the watermark to the real thing, to let it slip through the cracks. However, not only would this be computationally expensive if it was properly implemented,

              It wouldn’t be expensive; you could do it on a laptop in a few seconds.

              Unless, of course, we decide only large corporations should be allowed to generate images and completely outlaw all of the open source / free image generation software, but that’s not going to happen.

              Most images are created with a “diffusion” model, where you take an image and run an algorithm that slightly modifies it, over and over, until you get what you want. You don’t have to (and, for the best results, commonly don’t) start with a blank image. And you can run just a single pass, with the output being almost indistinguishable from the input.

              This is a hard problem to solve, and I think catching abuse after it happens is only going to get more difficult. Better to focus on stopping the abuse from happening in the first place, e.g. by flagging and investigating questionable behaviour by kids in schools. That approach is proven and works well.

              • evranch@lemmy.ca

                The image generation can be cheap, but I was imagining this sort of watermark wouldn’t be so much a visible part of the image as an embedded signature that hashes the image.

                Require enough proof-of-work (PoW) to generate the signature, and this would at least cut down the volume of images created, and possibly limit them to groups or businesses with clusters that could be monitored, without clamping down on image generation in general.
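
                Very roughly, the PoW part could just mean a signature only counts if it comes with a nonce that pushes the hash of the image digest below a difficulty target, so finding it is expensive but checking it is cheap (the difficulty number below is arbitrary, purely for illustration):

                ```python
                # Sketch of the proof-of-work idea: sha256(image_digest + nonce)
                # must fall below a target before the signature is accepted.
                import hashlib
                from itertools import count

                DIFFICULTY_BITS = 20  # ~a million attempts on average; a real scheme would tune this

                def _pow_hash(image_digest: bytes, nonce: int) -> int:
                    h = hashlib.sha256(image_digest + nonce.to_bytes(8, "big")).digest()
                    return int.from_bytes(h, "big")

                def mine_nonce(image_digest: bytes) -> int:
                    target = 1 << (256 - DIFFICULTY_BITS)
                    for nonce in count():  # expensive: brute-force search
                        if _pow_hash(image_digest, nonce) < target:
                            return nonce

                def check_nonce(image_digest: bytes, nonce: int) -> bool:
                    return _pow_hash(image_digest, nonce) < (1 << (256 - DIFFICULTY_BITS))  # cheap
                ```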

                A modified version of what you mentioned could work too, where just these specific images have to be vetted and signed by a central authority using a private key. Image generation software wouldn’t be restricted for general purposes, but no signature on suspicious content and it’s off to jail.
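
                On the checking side, that could be as simple as verifying content against the authority’s published public key; the key file name and function below are placeholders, not any real system:

                ```python
                # Sketch of the verifier side of a central-authority scheme: the
                # authority signs vetted images with its private key, publishes only
                # the public key, and anything without a valid signature is suspect.
                import hashlib

                from cryptography.exceptions import InvalidSignature
                from cryptography.hazmat.primitives.serialization import load_pem_public_key

                with open("authority_pubkey.pem", "rb") as f:  # placeholder path
                    authority_key = load_pem_public_key(f.read())  # e.g. an Ed25519 public key

                def has_valid_authority_signature(pixel_bytes: bytes, signature: bytes) -> bool:
                    digest = hashlib.sha256(pixel_bytes).digest()
                    try:
                        authority_key.verify(signature, digest)  # raises if forged or altered
                        return True
                    except InvalidSignature:
                        return False
                ```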