Not sure how to feel about this, but if they are honest about the labels and accurate 100% of the time with the labeling, it’s a nice feature for independent fact-checkers.
You may be able to prove that a photo with certain metadata was taken by a camera (my understanding is that that’s the method), but you can’t prove that a photo without it wasn’t, because older cameras won’t have the necessary support, and wiping metadata is trivial anyway. So is it better to have more false negatives than false positives? Maybe. My suspicion is that it won’t make much difference to most people.
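That asymmetry can be sketched in a few lines of Python. This is a toy stand-in for provenance schemes like C2PA, not the real protocol; the `sign`/`classify` functions and the key are made up for illustration:

```python
import hmac, hashlib

SECRET = b"camera-vendor-signing-key"  # hypothetical vendor key

def sign(pixels: bytes) -> str:
    """Simulate a camera attaching a provenance signature at capture time."""
    return hmac.new(SECRET, pixels, hashlib.sha256).hexdigest()

def classify(photo: dict) -> str:
    """A verifier can prove presence of valid metadata, never absence of a camera."""
    sig = photo.get("signature")
    if sig and hmac.compare_digest(sig, sign(photo["pixels"])):
        return "camera-verified"
    return "unknown"  # could be AI, an old camera, or just stripped metadata

real = {"pixels": b"\x01\x02\x03"}
real["signature"] = sign(real["pixels"])

stripped = {"pixels": real["pixels"]}  # a trivial metadata wipe

print(classify(real))      # camera-verified
print(classify(stripped))  # unknown -- a false negative, not proof of AI
```

The stripped copy is pixel-identical to the verified one, which is the whole problem: the label can only ever say "verified" or "don't know", never "AI".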
It’s of course troubling that AI images will slip through this service unidentified (I’m also not at all confident that Google can do this well or consistently).
However, I’m also worried about the opposite side of this problem: real images being mislabeled as AI. I can see a lot of bad actors using that to discredit legitimate news sources or stories that don’t fit their narrative.
Google is planning to roll out a technology that will identify whether a photo was taken with a camera, edited by software like Photoshop, or produced by generative AI models.
So they are going to use AI to detect AI. That should not present any problems.
They’re going to use AI to train AI*
So nothing new here
Use AI to train AI to detect AI, got it.
Yes, it’s called a GAN and has been a fundamental technique in ML for years.
Yeah but what if they added another GAN to check the existing GAN. It would fix everything.
My point is just that they’re effectively describing a discriminator. Yes, it entails a lot more tough problems than that sentence makes it seem, but it’s a known and very active area of ML. Sure, there may be other metadata and contextual features to discriminate on, but eventually those heuristics will inevitably be closed off and we’ll just end up with a giant distributed, quasi-federated GAN. Which, setting aside externalities that I doubt anyone with the power to address them also fully understands, is kind of neat in a vacuum.
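For what it’s worth, the generator/discriminator loop being described can be shown concretely. Here’s a toy 1-D GAN in plain Python with hand-derived gradients (a linear generator, a logistic discriminator, and made-up hyperparameters; nothing like a production image model):

```python
import random, math, statistics

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Real data: samples from N(4, 1). Generator: G(z) = a*z + b, z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), pushed toward 1 on real, 0 on fake.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.02, 16

for step in range(3000):
    real = [random.gauss(4, 1) for _ in range(batch)]
    fake = [a * z + b for z in (random.gauss(0, 1) for _ in range(batch))]

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    gw = gc = 0.0
    for x in real:
        e = 1.0 - sigmoid(w * x + c)   # d/du of log sigmoid(u)
        gw += e * x; gc += e
    for x in fake:
        e = -sigmoid(w * x + c)        # d/du of log(1 - sigmoid(u))
        gw += e * x; gc += e
    w += lr * gw / batch; c += lr * gc / batch

    # Generator: gradient ascent on log D(fake) (non-saturating objective)
    ga = gb = 0.0
    for z in (random.gauss(0, 1) for _ in range(batch)):
        x = a * z + b
        e = (1.0 - sigmoid(w * x + c)) * w
        ga += e * z; gb += e
    a += lr * ga / batch; b += lr * gb / batch

fakes = [a * random.gauss(0, 1) + b for _ in range(1000)]
print(round(statistics.mean(fakes), 2))  # drifts from 0 toward the real mean of 4
```

The punchline is the same as in the thread: the better the detector gets, the better the generator it’s trained against gets, which is exactly the arms race being joked about above.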
looks dubious
The problem here is that if this is unreliable (and I’m skeptical that Google can produce a system that works across the board), then you end up with synthesized images that Google attests are non-synthetic.
Fun fact about AI products (or any gold-rush economy): it doesn’t have to work. It just has to sell.
I mean, this is generally true of anything, but it’s particularly bad in these situations. P.T. Barnum had a few thoughts on this as well.
The problem here is that if this is unreliable…
And the problem if it is reliable is that everyone becomes dependent on Google to literally define reality.