The Biden administration doesn’t seem quite certain how to go about it – but it would clearly like to see AI watermarking implemented as soon as possible, despite the many misgivings surrounding the idea.
And that is even despite what some reports admit is a lack of consensus on what a digital watermark actually is. Standards and enforcement regulation are also missing. As has become customary, where the government is constrained or insufficiently competent, it effectively enlists private companies.
With the standards problem, these seem to be none other than tech dinosaur Adobe, and China’s TikTok.
It’s hardly a conspiracy theory to think the push mostly has to do with the US presidential election later this year, as watermarking of this kind can be “converted” from its original stated purpose – into a speech-suppression tool.
The publicly presented argument in favor is obviously not quite that, although one can read between the lines. Namely – AI watermarking is promoted as a “key component” in combating misinformation, deepfakes included.
And this is where perfectly legal and legitimate genres like parody and memes could suffer from AI watermarking-facilitated censorship.
Spearheading the drive, such as it is, is Biden’s National Artificial Intelligence Advisory Committee. Now one of its members, Carnegie Mellon University’s Ramayya Krishnan, admits there are “enforcement issues” – but remains enthusiastic about the possibility of using technology that “labels how content was made.”
From the Committee’s point of view, a companion AI tool would be a cherry on top.
However, there’s still no actual cake. Different companies are developing watermarking schemes that fall into three categories: visible, invisible (i.e., detectable only by algorithms), and those based on cryptographic metadata.
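To make the third category concrete, here is a minimal sketch of what a cryptographic-metadata watermark might look like: a record describing how the content was made, bound to the content’s hash and authenticated with a keyed tag. All names here (the key, the generator label, the functions) are hypothetical illustrations, not any vendor’s actual scheme:

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the content generator; real systems
# would use proper key management, not a hard-coded secret.
SECRET_KEY = b"demo-key-not-for-production"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a metadata record that 'labels how content was made'."""
    meta = {
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(meta, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"meta": meta, "tag": tag}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the record is genuine."""
    meta = record["meta"]
    if hashlib.sha256(content).hexdigest() != meta["sha256"]:
        return False  # content was altered after tagging
    payload = json.dumps(meta, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])
```

Note that the record lives alongside the content rather than inside it, which previews the weakness discussed next: whoever controls the file can simply strip the metadata off.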
And while supporters continue to tout watermarking as a great way to detect and remove “misinformation,” experts point out that “bad actors,” who are their own brand of experts, can easily remove watermarks – or, adding another layer to the complication of fighting “misinformation” windmills, create watermarks of their own.
At the same time, insisting that manipulated content is somehow a new phenomenon that needs to be tackled with special tools is a fallacy. Photoshopped images, visual effects, and parody, to name but a few, have been around for a long time.
Why do they want us to watermark everything on the internet? Why don’t they watermark their own material instead – say “a Biden video is not real unless it is signed by the Biden private key; here is our public key to use for verification” – and then require media organisations to verify any political videos before using them? Idk.
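The scheme that comment describes is ordinary public-key signing: the source signs its own content, and anyone can verify with the published public key. A toy sketch follows, using textbook RSA with deliberately tiny, insecure parameters purely to show the flow; a real deployment would use Ed25519 or RSA-PSS from a vetted cryptography library, and the message here is hypothetical:

```python
import hashlib

# Toy textbook-RSA keypair. These primes are laughably small and exist
# only to illustrate sign/verify mechanics; never use parameters like this.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent: (n, e) is the published key
d = pow(e, -1, phi)       # private exponent (Python 3.8+ modular inverse)

def sign(message: bytes) -> int:
    """The source (the 'Biden private key' in the comment) signs a hash."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) checks the signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h
```

The design point is the inversion the comment is after: instead of the whole internet labeling its output, the small set of official sources signs theirs, and an unsigned “Biden video” is simply treated as unverified.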