• macniel@feddit.de · 2 months ago

    Mhm, I have mixed feelings about this. I know this entire thing is fucked up, but isn’t it better to have generated stuff than actual stuff that involved real children?

    • retrospectology@lemmy.world · 2 months ago

      The arrest is only a positive. Allowing pedophiles to create AI CP is not a victimless crime. As others point out it muddies the water for CP of real children, but it also potentially would allow pedophiles easier ways to network in the open (if the images are legal they can easily be platformed and advertised), and networking between abusers absolutely emboldens them and results in more abuse.

      As a society we should never allow the normalization of sexualizing children.

      • lily33@lemm.ee · 2 months ago

        Actually, that’s not quite as clear.

        The conventional wisdom used to be, (normal) porn makes people more likely to commit sexual abuse (in general). Then scientists decided to look into that. Slowly, over time, they’ve become more and more convinced that (regular) porn availability in fact reduces sexual assault.

        I don’t see an obvious reason why it should be different in case of CP, now that it can be generated.

        • Lowlee Kun@feddit.de · 2 months ago

          It should be different because people can not have it. It is disgusting, makes them feel icky, and that’s just why it has to be bad. Conventional wisdom sometimes really is just conventional idiocy.

    • ImminentOrbit@lemmy.world · 1 month ago

      It reminds me of the story of the young man who realized he had an attraction to underage children and didn’t want to act on it, yet there were no agencies or organizations to help him, and that it was only after crimes were committed that anyone could get help.

      I see this fake cp as only a positive for those people. That it might make it difficult to find real offenders is a terrible reason against.

    • PhlubbaDubba@lemm.ee · 2 months ago

      I think the point is that child attraction itself is a mental illness and people indulging it even without actual child contact need to be put into serious psychiatric evaluation and treatment.

    • Murvel@lemm.ee · 2 months ago

      It feeds and evolves a disorder which in turn increases risks of real life abuse.

      But if AI generated content is to be considered illegal, so should all fictional content.

      • SigHunter@lemmy.kde.social · 2 months ago

        Or, more likely, it feeds and satisfies a disorder which in turn decreases risk of real life abuse.

        Making it illegal so far helped nothing, just like with drugs

        • Murvel@lemm.ee · 2 months ago

          That’s not how these addictive disorders work… they’re never satisfied and always need more.

      • Norgur@kbin.social · 2 months ago

        Two things:

        1. Do we know that it fuels the urge to seek out real children? Or do we just assume that through repetition, like the myth of “gateway drugs”?
        2. Since no child was involved and harmed in the making of these images… On what grounds could it be forbidden to generate them?
        • HopeOfTheGunblade@kbin.social · 2 months ago

          I would love to see research data pointing either way on #1, although doing so ethically would be incredibly difficult, verging on impossible. On #2, people have extracted originals or near-originals of the inputs to the algorithms. AI-generated stuff (plagiarism-machine-generated stuff) runs the risk of effectively re-victimizing people who were already abused to produce said inputs.

          It’s an ugly situation all around, and unfortunately I don’t know that much can be done about it beyond not demonizing people who have such drives, who have not offended, so that seeking therapy for the condition doesn’t screw them over. Ensuring that people are damned if they do and damned if they don’t seems to pretty reliably produce worse outcomes.

        • Thorny_Insight@lemm.ee · 2 months ago

          An alternative perspective: does watching normal porn make heterosexual men more likely to rape women? If not, then why would it be different in this case?

          The vast majority of pedophiles never offend. Most people in jail for child abuse are just plain old rapists with no special interest in minors; children are simply an easy target. Pedophilia just describes what they’re attracted to. It’s not a synonym for child rapist. It usually needs to coincide with psychopathy to create the monster most people think of when hearing that word.

    • BrianTheeBiscuiteer@lemmy.world · 2 months ago

      I have trouble with this because it’s like 90% grey area. Is it a pic of a real child but inpainted to be nude? Was it a real pic but the face was altered as well? Was it completely generated but from a model trained on CSAM? Is the perceived age of the subject near to adulthood? What if the styling makes it only near realistic (like very high quality CG)?

      I agree with what the FBI did here mainly because there could be real pictures among the fake ones. However, I feel like the first successful prosecution of this kind of stuff will be a purely moral judgement of whether or not the material “feels” wrong, and that’s no way to handle criminal misdeeds.

      • Zorque@kbin.social · 2 months ago

        Everything is 99% grey area. If someone tells you something is completely black and white you should be suspicious of their motives.

    • Catoblepas@lemmy.blahaj.zone · 2 months ago

      Did we memory hole the whole ‘known CSAM in training data’ thing that happened a while back? When you’re vacuuming up the internet you’re going to wind up with the nasty stuff, too. Even if it’s not a pixel by pixel match of the photo it was trained on, there’s a non-zero chance that what it’s generating is based off actual CSAM. Which is really just laundering CSAM.

      • forensic_potato@lemmy.world · 2 months ago

        This mentality smells of “just say no” for drugs or “just don’t have sex” for abortions. This is not the ideal world and we have to find actual plans/solutions to deal with the situation. We can’t just cover our ears and hope people will stop

      • Zorque@kbin.social · 2 months ago

        Is everything completely black and white for you?

        The system isn’t perfect, especially where we prioritize punishing people over rehabilitation. Would you rather punish everyone equally, emphasizing that if people are going to risk the legal implications (which, based on legal systems the world over, people are going to do) they might as well just go for the real thing anyways?

        You don’t have to accept it as morally acceptable, but you don’t have to treat them as completely equivalent either.

        There’s gradations of questionable activity, especially when there are no real victims involved. Treating everything exactly the same is, frankly speaking, insane. It’s like having one punishment for all illegal behavior. Murder someone? Death penalty. Rob them? Straight to the electric chair. Jaywalking? Better believe you’re getting the needle.

        • Mastengwe@lemm.ee · 2 months ago

          Ironically, you ask if everything is completely black and white for someone while refusing to accept that there’s nuance to the very issue you’re calling out. And “everything” is a very black and white term; not very nuanced, is it?

          No, not EVERYTHING, but some things. And this is one of those things. Both forms should be illegal. Period. No nuance, no argument, NO grey area.

          This does not mean that nuance doesn’t exist. It just means that some believe that it SHOULDN’T exist within the paradigm of child porn.

  • Kedly@lemm.ee · 2 months ago

    Ah yes, more bait articles rising to the top of Lemmy. The guy was arrested for grooming; he was sending these images to a minor. Outside of Digg, does anyone have any suggestions for an alternative to Lemmy and Reddit? Lemmy’s moderation quality is shit, and I think I’m starting to figure out where I lean on the success of my experimental stay with Lemmy.

    • cum@lemmy.cafe · 2 months ago

      You can go to an instance that follows your views more closely and start blocking instances that post low quality content to you. Lemmy is a protocol, not a single community, so moderation and post quality are determined by the instance you’re on and the community you’re with.

      • Armok: God of Blood@lemmy.dbzer0.com · 2 months ago

        This is throwing a blanket over the problem. When the mods of a news community allow bait articles to stay up because they (presumably) further their views, it should be called out as a problem.

    • FiniteBanjo@lemmy.today · 2 months ago

      Lemmy as a whole does not have moderation. Moderators on Lemmy.today cannot moderate Lemmy.world or Lemmy.ml; they can only remove problematic posts as they come and as they see fit, or block entire instances, which is rare.

      If you want stricter content rules than any of the available federated instances then you’ll have to either:

      1. Use a centralized platform like Reddit but they’re going to sell you out for data profits and you’ll still have to occasionally deal with shit like “The Donald.”

      2. Start your own instance with a self hosted server and create your own code of conduct and hire moderators to enforce it.

      • Kedly@lemm.ee · 2 months ago

        Yeah, I know, that’s why I’m finding Lemmy not for me. This new rage bait every week is tiring and not adding anything to my life except stress, and once I started looking at who the moderators were when Lemmy’d find a new thing to rave about, I found that often there were only 1-3 actual moderators, which, fuck that. With Reddit, the shit subs were the exception; here it feels like they ALL (FEEL being a key word here) have a tendency to dive face first into rage bait.

        Edit: Most of the Reddit migration happened because Reddit fucked over their moderators. A lot of us were happy with well-moderated discussions, and if we didn’t care to have moderators, we could have just stayed with Reddit after the moderators were pushed away.

  • peanuts4life@lemmy.blahaj.zone · 2 months ago

    It’s worth mentioning that in this instance the guy did send porn to a minor. This isn’t exactly a cut and dry, “guy used stable diffusion wrong” case. He was distributing it and grooming a kid.

    The major concern to me is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.

    For example, websites like NovelAI make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to provide abstract, “artistic” styles, but they can generate semi-realistic images.

    Now, let’s say a criminal group uses NovelAI to produce CSAM of real people via the inpainting tools, and the FBI casts a wide net and begins surveillance of NovelAI’s userbase.

    Is every person who goes on there and types, “Loli” or “Anya from spy x family, realistic, NSFW” (that’s an underaged character) going to get a letter in the mail from the FBI? I feel like it’s within the realm of possibility. What about “teen girls gone wild, NSFW?” Or “young man, no facial body hair, naked, NSFW?”

    This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It’s a dangerous mix, and it throws the whole enterprise into question.

    • PirateJesus@lemmy.today · 2 months ago

      The major concern to me is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.

      The PROTECT Act of 2003 means that any artistic depiction of CSAM is illegal. The guidance is pretty clear: the FBI is gonna raid your house… eventually. We still haven’t properly funded the anti-CSAM departments.

  • helpImTrappedOnline@lemmy.world · 2 months ago

    The headline/title needs to be extended to include the rest of the sentence “and then sent them to a minor”

    Yes, this sicko needs to be punished. Any attempt to make him the victim of “the big bad government” is manipulative at best.

    • DarkThoughts@fedia.io · 2 months ago

      All LLM headlines are like this to fuel the ongoing hysteria about the tech. It’s really annoying.

      • helpImTrappedOnline@lemmy.world · 2 months ago

        Sure is. I report the ones I come across as clickbait or misleading titles, explaining the parts left out… such as this one, where those 7 words change the story completely.

        Whoever made that headline should feel ashamed for making a groomer look like the victim.

      • ameancow@lemmy.world · 2 months ago

        Based on the blacklists that one has to fire up before browsing just about any large anime/erotica site, I am guessing that these “laws” are not enforced, because they are flimsy laws to begin with. Reading the stipulations for what constitutes a crime is just a hotbed for getting an entire case tossed out of court. I doubt any prosecutors would lean hard on possession of art unless it was being used in another crime.

  • Darkard@lemmy.world · 2 months ago

    And the Stable Diffusion team gets no backlash from this for allowing it in the first place?

    Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?

    • DarkThoughts@fedia.io · 2 months ago

      Because what prompts people enter on their own computer isn’t their responsibility? Should pencil makers flag people writing bad words?

    • macniel@feddit.de · 2 months ago

      You can run the SD model offline, so on what service would that User be flagged?

    • yukijoou@lemmy.blahaj.zone · 2 months ago

      my main question is: how much csam was fed into the model for training so that it could recreate more

      i think it’d be worth investigating the training data used for the model

    • PirateJesus@lemmy.today · 2 months ago

      Stable Diffusion’s team has been distancing itself from this. The model that allows for this was leaked from a different company.

    • PirateJesus@lemmy.today · 2 months ago

      Asked whether more funding will be provided for the anti-paint enforcement divisions: it’s such a big backlog, we’d rather just wait for somebody to piss off a politician to focus our resources.

  • Ibaudia@lemmy.world · 2 months ago

    Isn’t there evidence that as artificial CSAM is made more available, the actual amount of abuse is reduced? I would research this but I’m at work.

  • StaySquared@lemmy.world · 2 months ago

    I wonder if cartoonized animals in a CSAM theme are also illegal… guess I can contact my local FBI office and provide them with the web addresses of such content. Let them decide what is best.

    • SeattleRain@lemmy.world · 2 months ago

      Well yeah. Just because something makes you really uncomfortable doesn’t make it a crime. A crime has a victim.

      Also, the vast majority of children are victimized because of the US’ culture of authoritarianism and religious fundamentalism. That’s why, far and away, children are victimized either by a relative or in a church. But y’all ain’t ready to have that conversation.

      • sugartits@lemmy.world · 2 months ago

        That thing over there being wrong doesn’t mean we can’t discuss this thing over here also being wrong.

        So perhaps pipe down with your dumb whataboutism.

        • SeattleRain@lemmy.world · 2 months ago

          It’s not whataboutism. He’s being prosecuted based on the idea that he’s hurting children, all the while law enforcement refuses to truly prosecute actual institutions victimizing children and is often colluding with traffickers. For instance, LE throughout the country were well aware of the scale of the Catholic church’s crimes for generations.

          How is this whataboutism?

          • sugartits@lemmy.world · 2 months ago

            Because it’s two different things.

            We should absolutely go after the Catholic church for the crimes committed.

            But here we are talking about the creation of child porn.

            If you cannot understand this very simple premise, then we have nothing else to discuss.

            • SeattleRain@lemmy.world · 2 months ago

              They’re not two different things. They’re both supposedly acts of pedophilia, except one would take actual courage to prosecute (churches) and the other, which doesn’t have any actual victims, is easy and is a PR get because certain people find it really icky.

          • DarkThoughts@fedia.io · 2 months ago

            Just to be clear here, he’s not actually being prosecuted for generating such imagery, like the headline implies.

    • Todd Bonzalez@lemm.ee · 2 months ago

      First of all, it’s absolutely crazy to link to a 6 month old thread just to complain that you got downvoted in it. You’re pretty clearly letting this site get under your skin if you’re still hanging onto these downvotes.

      Second, none of your 6 responses in that thread are logical, rational responses. You basically just assert that things that you find offensive enough should be illegal, and then just type in all caps at everyone who explains to you that this isn’t good logic.

      The only way we can consider child porn prohibition constitutional is to interpret it as a protection of victims. Since both the production and distribution of child porn hurt the children forced into it, we ban it outright, not because it is obscene, but because it does real damage. This fits the logic of many other forms of non-protected speech, such as the classic “shouting ‘fire’ in a crowded theatre” example, where those hurt in the inevitable panic are victims.

      Expanding the definition of child porn to include fully fictitious depictions, such as lolicon or AI porn, betrays this logic because there are no actual victims. This prohibition is rooted entirely in the perceived obscenity of the material, which is completely unconstitutional. We should never ban something because it is offensive, we should only ban it when it does real harm to actual victims.

      I would argue that rape and snuff films should be illegal for the same reason.

      The reason people disagree with you so strongly isn’t because they think AI generated pedo content is “art” in the sense that we appreciate it and defend it. We just strongly oppose your insistence that we should enforce obscenity laws. This logic is the same logic used as a cudgel against many other issues, including LGBTQ rights, as it basically argues that sexually disagreeable ideas should be treated as a criminal issue.

      I think we all agree that AI pedo content is gross, and the people who make it and consume it are sick. But nobody is with you on the idea that drawings and computer renderings should land anyone in prison.

  • Greg Clarke@lemmy.ca · 2 months ago

    This is tough, the goal should be to reduce child abuse. It’s unknown if AI generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children while for others it may satisfy their urges so they don’t abuse children. Like everything else AI, we won’t know the real impact for many years.

      • DarkThoughts@fedia.io · 2 months ago

        I suggest you actually download Stable Diffusion and try for yourself, because it’s clear that you don’t have any clue what you’re talking about. You can already make tiny people, shaved genitals, flat chests, child-like faces, etc. It’s all already there. Literally no need for any LoRAs or very specifically trained models.

        • LadyAutumn@lemmy.blahaj.zone · 2 months ago

          It should be illegal either way, to be clear. But you think they’re not training models on CSAM? You’re trusting in the morality/ethics of the people creating AI-generated child pornography?

          • Greg Clarke@lemmy.ca · 2 months ago

            The use of CSAM in training generative AI models is an issue no matter how these models are being used.

            • L_Acacia@lemmy.one · 2 months ago

              The training doesn’t use CSAM; there’s a 0% chance big tech would use that in their datasets. The models are somewhat able to link concepts like red and car, even if they have never seen a red car before.

              • AdrianTheFrog@lemmy.world · 2 months ago

                Well, with models like SD at least, the datasets are large enough and the employees are few enough that it is impossible to have a human filter every image. They scrape them from the web and try to filter with AI, but there is still a chance of bad images getting through. This is why most companies install filters after the model as well as in the training process.

                • DarkThoughts@fedia.io · 2 months ago

                  You make it sound like it is so easy to even find such content on the www. The point is, they do not need to be trained on such material. They are trained on regular kids, so they know their sizes, faces, etc. They’re trained on nude bodies, so they also know what hairless genitals or flat chests look like. You don’t need to specifically train a model on nude children to generate nude children.

  • PirateJesus@lemmy.today · 2 months ago

    OMG. Every other post is saying they’re disgusted by the images but that it’s a grey area, while he’s definitely in trouble for contacting a minor.

    Cartoon CSAM is illegal in the United States. AI images of CSAM fall into that category. It was illegal for him to make the images in the first place BEFORE he started sending them to a minor.

    https://www.thefederalcriminalattorneys.com/possession-of-lolicon

    https://en.wikipedia.org/wiki/PROTECT_Act_of_2003

    • gardylou@lemmy.world · 2 months ago

      Yikes at the responses ITT. This shit should definitely be illegal, and the people that want it probably want to abuse real children too. All of you parsing arguments to make goddamn representations of sexual child abuse legal should take a long hard look in the mirror and consider whether or not you yourself need therapy.

      • dev_null@lemmy.ml · 2 months ago

        The discussion will never be resolved in your favour if you shut down the discussion.

      • zbyte64@awful.systems · 2 months ago
        Big brain PDF tells the judge it is okay because the person in the picture is now an adult.

        • surewhynotlem@lemmy.world · 2 months ago

          That’s the issue though. As far as I know it hasn’t been tested in court and it’s quite possible the law is useless and has no teeth.

          With AI porn you can point to real victims whose unconsented pictures were used to train the models, and say that’s abuse. But when it’s just a drawing, who is the victim? Is it just a thought crime? Can we prosecute those?

        • arefx@lemmy.ml · 2 months ago

          You can say pedophile… that “pdf file” stuff is so corny and childish. Hey guys, let’s talk about a serious topic by calling it things like “pdf files” and “graping”. Jfc

  • Deceptichum@sh.itjust.works · 2 months ago

    What an oddly written article.

    “Additional evidence from the laptop indicates that he used extremely specific and explicit prompts to create these images. He likewise used specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.”

    They make it sound like the prompts are important and/or more important than the 13,000 images…

  • TheObviousSolution@lemm.ee · 2 months ago

    He then allegedly communicated with a 15-year-old boy, describing his process for creating the images, and sent him several of the AI generated images of minors through Instagram direct messages. In some of the messages, Anderegg told Instagram users that he uses Telegram to distribute AI-generated CSAM. “He actively cultivated an online community of like-minded offenders—through Instagram and Telegram—in which he could show off his obscene depictions of minors and discuss with these other offenders their shared sexual interest in children,” the court records allege. “Put differently, he used these GenAI images to attract other offenders who could normalize and validate his sexual interest in children while simultaneously fueling these offenders’ interest—and his own—in seeing minors being sexually abused.”

    I think the fact that he was promoting child sexual abuse and was communicating with children and creating communities with them to distribute the content is the most damning thing, regardless of people’s take on the matter.

    Umm … That AI generated hentai on the page of the same article, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.

  • UnpluggedFridge@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    These cases are interesting tests of our first amendment rights. “Real” CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.

    Currently, we do not outlaw written depictions nor drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But also I think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.

    So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?

    We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.

    A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.

    • yamanii@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      partly because they are obvious fictions

      That’s it, actually. All sites that allow it, like danbooru, gelbooru, pixiv, etc., have a clause against photorealistic content and will remove it.

    • Corkyskog@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      It comes back to distribution for me. If they are generating the stuff for themselves, it’s gross, but I don’t see how it can really be illegal. But if you’re distributing them, how do we know they’re not real? The amount of investigative resources that would need to be dumped into that, and the impact on those investigators’ mental health… I don’t know. I really don’t have an answer, and I don’t know how they make it illegal, but it really feels like distribution should be.