misk@sopuli.xyz to Technology@lemmy.world · English · 5 months ago
FBI Arrests Man For Generating AI Child Sexual Abuse Imagery (www.404media.co)
126 comments
Darkard@lemmy.world · 5 months ago
And the Stable Diffusion team gets no backlash from this for allowing it in the first place? Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?
macniel@feddit.de · 5 months ago
You can run the SD model offline, so on what service would that user be flagged?
🇦🇺𝕄𝕦𝕟𝕥𝕖𝕕𝕔𝕣𝕠𝕔𝕕𝕚𝕝𝕖@lemm.ee · 5 months ago
Not everything exists on the cloud (someone else’s computer).
yukijoou@lemmy.blahaj.zone · 5 months ago
My main question is: how much CSAM was fed into the model during training so that it could recreate more? I think it’d be worth investigating the training data used for the model.
DarkThoughts@fedia.io · 5 months ago
Because what prompts people enter on their own computer isn’t their responsibility? Should pencil makers flag people writing bad words?
PirateJesus@lemmy.today · 5 months ago
Stable Diffusion has been distancing themselves from this. The model that allows for this was leaked from a different company.