The Internet Watch Foundation has found a manual on the dark web encouraging criminals to use software tools that remove clothing from images. The manipulated image could then be used to blackmail the child into sending more graphic content, the IWF said.

  • AutoTL;DR@lemmings.world · 2 months ago

    This is the best summary I could come up with:


    Paedophiles are being urged to use artificial intelligence to create nude images of children to extort more extreme material from them, according to a child abuse charity.

    The Internet Watch Foundation (IWF) said a manual found on the dark web contained a section encouraging criminals to use “nudifying” tools to remove clothing from underwear shots sent by a child.

    Last month the Guardian revealed that the Labour party was considering a ban on nudification tools that allow users to create images of people without their clothes on.

    Hargreaves, of the IWF, added that the Online Safety Act, which became law last year and imposes a duty of care on social media companies to protect children, “needs to work”.

    According to research published last week by the communications regulator, Ofcom, a quarter of three- to four-year-olds own a mobile phone and half of under-13s are on social media.

    The government is preparing to launch a consultation in the coming weeks that will include proposals to ban the sale of smartphones to under-16s and raise the minimum age for social media sites from 13 to as high as 16.


    The original article contains 542 words, the summary contains 184 words. Saved 66%. I’m a bot and I’m open source!

    • norbert@kbin.social · 2 months ago

      Pretty sure I remember people saying this kind of stuff would happen as soon as deepfakes started being a thing accessible to the public.

      We really don’t have any solution to this yet. The tech is advancing so quickly at this point that every other month there’s a significant jump in capabilities. I feel like the cat is out of the bag: photorealistic AI images are available to the home user with virtually no technical restrictions or limits, and no amount of regulation is going to put the genie back in the bottle.

      • quindraco@lemm.ee · 2 months ago

        We really don’t have any solution to this yet.

        We do, and always have, but good luck with implementation. Humanity hates acting like an adult.

        1. Critical thinking: society knows to a certainty deepfakes exist and hence should be intrinsically skeptical of any image they see, demanding the image’s source establish some reason to trust the image. We could be less blindly trusting.
        2. Body acceptance: for 0 seconds of humanity’s history has it made credible sense to shame someone over having seen them naked. We could choose not to.
        3. Competence: Appeasing these people only encourages them. If people would just understand that giving your blackmailer what they want is always strictly worse than not doing so, it would remove the incentive to blackmail. Why would you trust your blackmailer to keep your secret? Makes no sense.
        • norbert@kbin.social · 2 months ago

          I agree with everything you said, but I don’t actually see a solution in what you’ve posted. Yes, we could grow up and have a more mature view of sex and the human body, but that doesn’t change the ease of accessing or manufacturing potentially illegal material right now.

          • quindraco@lemm.ee · 2 months ago

            If we could implement maturity and so on, ease of access to the tools would be a non-issue. And the tools would be as legal as crayons or pencils, and the “material” as legal as any cartoon. But I agree with you that this is a super real practical problem, because we can’t.

  • natural_motions@lemmynsfw.com · edited · 2 months ago

    Cue the AI apologists trying to explain how AI child porn is a safe, victim-free outlet for pedophiles to indulge their mental illness.

    • stevedidwhat_infosec@infosec.pub · 2 months ago

      Pedophile apologists*

      Nobody interested in the development of AI would be interested in defending pedos, unless they’re pedos. That’s reality.

      Why lump the two groups together?

      In fact, these orgs use AI so that workers don’t have to look at these images themselves; having to view that material is partly why burnout among mods, admins, and content moderators is so high.

      Every time some nasty shit (pedo shit, gore, etc.) is posted on Tumblr, Facebook, Instagram, etc., those reports go through real people (or did, prior to these AI models). Now imagine smaller, up-and-coming websites like Lemmy instances that might not have the funds, or don’t know of this AI solution.

      AI fixes problems too. The root of the problem is cyber extortion, whether the criminals are using Photoshop or AI. They’re targeting children, for Christ’s sake; besides that being fucked up all by itself, it’s not hard to fool a child, AI or not. How criminals are contacting and blackmailing YOUR CHILDREN is the problem, imo.

  • Jo Miran@lemmy.ml · 2 months ago

    These sickos could just wank off to the AI-gen stuff, but I suspect that the real thrill is in the abuse.

    Time to look at memes before I get more upset.

  • HubertManne@kbin.social · 2 months ago

    And that is a way to abuse this technology that never crossed my mind. Do I actually belong to this species?!

    • TimeSquirrel@kbin.social · 2 months ago

      Yes, you do. We all have the potential for the most horrific acts of evil you can imagine in the right circumstances. That’s why we usually have such a hard time when we give people power to rule over others and need a convoluted system of checks and balances to make sure nobody becomes a dick. Yet it still happens. Benevolence is not the default human state. You have to work to stay that way. That ability separates us from the brutal world of the wild.

      • misc@lemmy.sdf.org · 2 months ago

        Not everyone, though. I believe everyone is different; that’s why some good people will never get into power, will be killed after getting it, or will simply have it taken from them because they aren’t corrupt enough to keep it. It’s kind of like why there are no ethical multi-billionaires (at least none that I know of yet).

        • Llewellyn@lemm.ee · 2 months ago

          Everyone is not that different: after all, we are all the same biological species, Homo sapiens.

            • Syn_Attck@lemmy.today · edited · 2 months ago

              Every wild dog will chase and eat prey and their own poop, and attack any dog that challenges them or their pack.

              Every wild cat will chase and catch their prey, and then play with it while keeping it alive for a while until it ultimately dies.

              Humans are basically dogs or cats with thousands of societal incentives not to chase and play with prey, drilled into most of us from the beginning, but everyone still has that innate capacity, and it will come out under the right circumstances. If civilization ended today, who you think you are, and what you think you are not capable of, ends today.

  • BreakDecks@lemmy.ml · 2 months ago

    I want to be happy that the IWF exists and is collecting data about this kind of thing. This is extremely difficult and important work.

    But they are also lobbying to ban encryption, arguing that privacy only helps criminals.

    Sorry, but if Facebook is too dangerous for kids, instead of banning encryption so the authorities can more easily catch child abusers, let’s just ban children from using Facebook before they fall victim to abuse.