It’s steady pressure and it’s only in one direction. Some countries resist more than others. I’m guessing you are not in the EU, because if you were, you’d be aware of the “chat control” push.
Even so, it’s not the days of Napster anymore. Think about hardware DRM. It stops no one, yet you paid to have it developed and built into your devices. Think about Content ID. That’s not going away. It’s only going to be expanded. That frog will be boiled.
Recently, intellectual property has been reframed as a matter of “consensual use of data”. I think this is proving to be very effective. It’s no longer “piracy” or “theft”; it’s a violation of “consent”. The deepfake issue creates a direct link to sexual aggression. One US bill that ostensibly targets deepfakes would apply to any movie with a sex scene, making sharing it a federal felony.
Hey, I’m just saying how it’s going. Look at, say, threads here about deepfakes. See all the calls for laws and government action. How can that be enforced?
It would be, if internet regulation were practically enforceable for anyone other than commercial businesses operating out in the open.
Well, then I guess we just have to call for more government enforcement.
In the EU, there is certainly more government pressure, instead of just lawsuits between big (or small) players.
I just described what’s going on. The world outside of China or Russia is going slower but the direction is the same.
Borders in cyberspace is the future. There are increased efforts to regulate the internet everywhere. Think copyright, age verification, the GDPR, or even anti-CSAM laws. It’s all about making sure that information is only available to people who are permitted to access it. China is really leading the way here.
We do not agree with China’s regulations, but that only means that we need border controls. Data must be checked for regulatory compliance with local laws.
It always comes down to transubstantiation versus consubstantiation.
-Lisa Simpson
I don’t think that the whole transubstantiation issue is big for Catholics, in practice. But they are supposed to believe that during mass, bread and wine literally turn into the flesh and blood of Jesus Christ. Protestants have a slightly different take. Maybe it only becomes an issue in the context of the British domination of Ireland. I’m not sure, but at least in some Protestant/Anglican circles the Catholic belief was/is considered barbaric. https://en.wikipedia.org/wiki/Transubstantiation#Anglicanism
Maybe it’s derived from 19th century Anglicanism, when there were poor houses and Famine Roads?
Side note: As a neutral person (i.e. an atheist), I find the retelling of the “feeding of the multitude” rather dubious. The anti-welfare message isn’t in the original. It’s a common conservative talking point in the US that government welfare makes people dependent. The thing about eating Jesus is from elsewhere; it doesn’t belong in that story. The author adapted these pieces from the Bible and inserted their own teachings.
It’s funny how little connection there is between scripture and actual teachings. For abortion, they bothered to change the text.
The way it looks, Adobe has to do this to comply with EU law.
Interesting take. There’s the standard conservative anti-welfare message, but also very old-fashioned anti-catholicism. I guess this is from a conservative US version of Protestantism. But which denomination exactly? Or is that standard fare for evangelicals these days?
Why is she claiming that the bill is about liability?
I can relate to the sentiment, but that just makes it worse. How do you enforce ownership of data?
There’s only 1 thing for it: More internet surveillance.
But it’s also possible to do things like build a mass facial recognition database with image data.
Facebook built one years ago, but ended up destroying it. https://www.theverge.com/2021/11/2/22759613/meta-facebook-face-recognition-automatic-tagging-feature-shutdown
@Mistral@lemmings.world How are you feeling about yourself?
Was a reference to the thread next door that revealed - horror of horrors - that photos of children were part of the training data. Sure, you never know who is behind these hit pieces, but there doesn’t really need to be anyone behind it.
Oh no. That’s unethical!
/s
I’m sure it works fine in the lab. But it really only targets one specific AI model: one particular Stable Diffusion VAE. I know that there are variants of that VAE around, which may or may not be enough to make it moot. The “Glaze” on an image may not survive common transformations, such as rescaling the image. It certainly will not survive intentional efforts to remove it, such as appropriate smoothing.
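To illustrate the rescaling point, here’s a rough, idealized sketch. It is not Glaze’s actual perturbation (that one is learned, not a simple pattern); the checkerboard here is a stand-in for high-frequency pixel-level noise, which a naive 2×2 downscale averages away:

```python
# Idealized sketch: why rescaling can wash out a pixel-level "cloak".
# A perturbation riding on high-frequency detail (here: a +/- checkerboard,
# a made-up stand-in for a real adversarial pattern) cancels under a
# simple 2x2 average-pooling downscale.

def downscale_2x2(img):
    """Average non-overlapping 2x2 blocks of a 2D list of floats."""
    h, w = len(img), len(img[0])
    return [
        [(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

SIZE = 8
eps = 5.0
base = [[100.0] * SIZE for _ in range(SIZE)]  # flat gray "image"
# checkerboard perturbation: +eps / -eps on alternating pixels
cloaked = [[100.0 + (eps if (x + y) % 2 == 0 else -eps)
            for x in range(SIZE)] for y in range(SIZE)]

small_base = downscale_2x2(base)
small_cloaked = downscale_2x2(cloaked)

# Each 2x2 block contains two +eps and two -eps pixels, so the
# perturbation cancels exactly after averaging.
residual = max(abs(a - b)
               for ra, rb in zip(small_base, small_cloaked)
               for a, b in zip(ra, rb))
print(residual)  # 0.0
```

A real attack pattern won’t cancel this perfectly, but the point stands: anything living in fine pixel detail gets attenuated by ordinary resampling.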
In my opinion, there is no point in bothering in the first place. There are literally billions of images on the net. One locks up gems because they are rare. This is like locking up pebbles on the beach. It doesn’t matter if the lock is bad.
Saw a post on Bluesky from someone in tech saying that eventually, if it’s human-viewable, it’ll also be computer-viewable, and there’s simply no working around that. I wonder if you agree with that or not.
Sort of. The VAE (the compression) means that image generation takes less compute, i.e. cheaper hardware and less energy. You could have an image generator that works directly on the pixels visible to humans. Actually, that’s simpler and existed earlier.
By Moore’s law, it would be many years, even decades, before that efficiency gain is something we can do without. But I think, maybe, this becomes moot once special accelerator chips for neural nets are designed.
What makes it obsolete is the proliferation of open models. E.g., today Stable Diffusion 3 becomes available for download. This attack targets one specific model and may work on variants of it. But as more and more rather different models become available, the whole thing becomes increasingly pointless. Maybe you could target more than one, but it would be more and more effort for less and less effect.
You are apparently mistaking me for someone else.
Animals never could own property. PETA sued to get the monkey recognized as author and thus copyright-holder of the selfie. Or, more likely, to generate publicity as that was obviously never going to happen.
Another rubbish hit piece on open source.
It doesn’t work like that. The monkey selfie case did not set any kind of precedent. Animals cannot own property, including copyrights.
For a work to be under copyright in the US, it has to be an “original work of authorship” and contain “a modicum of creativity”. Some countries allow broader copyrights. Photographs that are accidentally triggered are public domain. CCTV footage is a gray area. Setting up a camera and luring animals into triggering it might produce copyrighted images. A court would have to decide if the individual circumstances constitute authorship and a modicum of creativity. An animal snagging a camera and triggering it certainly doesn’t. The monkey selfie case did nothing to advance the law.
A public domain image is just that. Attempting to assert ownership over one is either an error or fraud. I don’t know what the US rules are when a rights-owner can’t be found. I doubt that you can become the default owner of some property just by writing something on a website.
In a future where this is established, wouldn’t you expect non-compliant hardware to be treated just as drugs or machine guns are treated now?
I think that’s hardly an immediate worry, though. Various services already scan for illegal content or suspicious activity. It wouldn’t take much to get ISPs to snitch on their customers.