Big AI Won’t Stop Election Deepfakes With Watermarks

Jul 27, 2023 7:00 AM

Experts warn of a new age of AI-driven disinformation. A voluntary agreement brokered by the White House doesn’t go nearly far enough to address those risks.

Illustration of a 3D pixelated human face emerging from a pixelated background. Credit: themotioncloud/Getty Images

In May, a fake image of an explosion near the Pentagon went viral on Twitter. It was soon followed by images seeming to show explosions near the White House as well. Experts in mis- and disinformation quickly flagged that the images seemed to have been generated by artificial intelligence, but not before the stock market had started to dip.

It was only the latest example of how fake content can have troubling real-world effects. The boom in generative artificial intelligence has meant that tools to create fake images and videos, and pump out huge amounts of convincing text, are now freely available. Misinformation experts say we are entering a new age where distinguishing what is real from what isn’t will become increasingly difficult.

Last week the major AI companies, including OpenAI, Google, Microsoft, and Amazon, promised the US government that they would try to mitigate the harms their technologies could cause. But the agreement is unlikely to stem the coming tide of AI-generated content and the confusion it could bring.

The White House says the companies’ “voluntary commitment” includes “developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system,” as part of the effort to prevent AI from being used for “fraud and deception.”

But experts who spoke to WIRED say the commitments are half measures. “There's not going to be a really simple yes or no on whether something is AI-generated or not, even with watermarks,” says Sam Gregory, program director at the nonprofit Witness, which helps people use technology to promote human rights.

Watermarking is commonly used by picture agencies and newswires to prevent images from being used without permission—and payment.

But when it comes to the variety of content AI can generate, and the many models that already exist, things get more complicated. So far there is no standard for watermarking, meaning each company uses a different method. DALL-E, for instance, uses a visible watermark (and a quick Google search will turn up plenty of tutorials on how to remove it), whereas other services might default to metadata, or to pixel-level watermarks that are not visible to users. While some of these methods may be hard to undo, others, like visible watermarks, can become ineffective when an image is simply resized.
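To see how fragile the metadata route is, consider a minimal sketch using the Pillow imaging library; the key names ai_generated and generator are hypothetical labels for illustration, not part of any standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for a model's output image.
img = Image.new("RGB", (64, 64), color="gray")

# Tag it with provenance metadata (hypothetical key names, not a standard).
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")
img.save("tagged.png", pnginfo=meta)

# The tag survives as long as nobody re-encodes the file.
print(Image.open("tagged.png").text.get("ai_generated"))  # -> "true"

# One round trip through JPEG silently discards it.
Image.open("tagged.png").convert("RGB").save("stripped.jpg")
print(Image.open("stripped.jpg").info.get("ai_generated"))  # -> None
```

A screenshot, a crop, or a single format conversion is all it takes to shed the label, which is why metadata alone is considered a weak signal.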

“There's going to be ways in which you can corrupt the watermarks,” Gregory says.

The White House’s statement specifically mentions using watermarks for AI-generated audio and visual content, but not for text.

There are ways to watermark text generated by tools like OpenAI’s ChatGPT: by manipulating how words are distributed, a generator can make a certain word or set of words appear more frequently than chance would predict. The resulting pattern is detectable by a machine but not necessarily by a human reader.
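As a rough illustration of the statistical idea, here is a toy, word-level sketch in Python. Real proposals bias a language model's token probabilities during generation rather than operating on whole words, so the function names and the simple word split here are illustrative assumptions only.

```python
import hashlib
import random

def green_list(prev_word, vocab, fraction=0.5):
    """Deterministically split the vocabulary, seeded by the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    words = sorted(vocab)
    random.Random(seed).shuffle(words)
    return set(words[: int(len(words) * fraction)])

def green_fraction(text, vocab, fraction=0.5):
    """Fraction of words drawn from their predecessor's 'green' list.

    Unwatermarked text should score near `fraction`; text generated while
    favoring green words scores noticeably higher.
    """
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for prev, cur in pairs
               if cur in green_list(prev, vocab, fraction))
    return hits / len(pairs)

vocab = {"the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast"}
print(green_fraction("the cat sat on a mat", vocab))
```

The watermark is invisible to a reader because any individual word choice looks natural; only the aggregate statistic gives it away.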

That means that watermarks would need to be interpreted by a machine and then flagged to a viewer or reader. That’s made more complex by mixed media content—like the audio, image, video, and text elements that can appear in a single TikTok video. For instance, someone might put real audio over an image or video that's been manipulated. In this case, platforms would need to figure out how to label that a component—but not all—of the clip had been AI-generated.

And just labeling content as AI-generated doesn’t do much to help users figure out whether something is malicious, misleading, or meant for entertainment.

“Obviously, manipulated media is not fundamentally bad if you're making TikTok videos and they're meant to be fun and entertaining,” says Hany Farid, a professor at the UC Berkeley School of Information, who has worked with software company Adobe on its content authenticity initiative. “It's the context that is going to really matter here. That will continue to be exceedingly hard, but platforms have been struggling with these issues for the last 20 years.”

And the rising prominence of artificial intelligence in the public consciousness has enabled another form of media manipulation. Just as users might assume that AI-generated content is real, the very existence of synthetic content can sow doubt about the authenticity of any video, image, or piece of text, allowing bad actors to claim that even genuine content is fake, a phenomenon known as the “liar’s dividend.” Gregory says the majority of recent cases Witness has seen aren’t deepfakes being used to spread falsehoods; they’re people trying to pass off real media as AI-generated content.

In April a lawmaker in the southern Indian state of Tamil Nadu alleged that a leaked audio recording in which he accused his party of stealing more than $3 billion was “machine-generated.” (It wasn’t.) In 2021, in the weeks following the military coup in Myanmar, a video of a woman doing a dance exercise while a military convoy rolls in behind her went viral. Many online alleged that the video had been faked. (It hadn’t.)

Right now, there’s little to stop a malicious actor from putting watermarks on real content to make it appear fake. Farid says one of the best ways to guard against watermarks being falsified or stripped is through cryptographic signatures. “If you're OpenAI, you should have a cryptographic key. And the watermark will have information that can only have been known to the person holding the key,” he says. Other watermarks can sit at the pixel level or even in the training data that the AI learns from. Farid points to the Coalition for Content Provenance and Authenticity (C2PA), which he advises, as a standard that AI companies could adopt and adhere to.
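Here is a minimal sketch of the signing scheme Farid describes, using the widely available Python cryptography package; the payload fields are hypothetical, and real provenance standards like C2PA define far richer manifests.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The generator holds the private key; anyone with the public key can verify.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hypothetical provenance payload attached to a piece of content.
payload = json.dumps(
    {"generator": "example-model-v1", "ai_generated": True},
    sort_keys=True,
).encode()
signature = private_key.sign(payload)

# Verification fails loudly if either the payload or signature is altered.
try:
    public_key.verify(signature, payload)
    print("provenance intact")
except InvalidSignature:
    print("payload or signature was tampered with")

# Flipping even one byte of the payload breaks the check.
try:
    public_key.verify(signature, payload + b"!")
except InvalidSignature:
    print("tampering detected")
```

The point of the design is asymmetry: only the generator can produce a valid signature, but anyone can check one, so a watermark stamped onto real content by a bad actor will not verify.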

“We are quickly entering this time where it's getting harder and harder to believe anything we read, see, or hear online,” Farid says. “And that means not only are we going to be fooled by fake things, we're not going to believe real things. If the Trump Access Hollywood tape were released today, he would have plausible deniability.”

Vittoria Elliott is a reporter for WIRED, covering platforms and power. She was previously a reporter at Rest of World, where she covered disinformation and labor in markets outside the US and Western Europe. She has worked with The New Humanitarian, Al Jazeera, and ProPublica.

