Google Cracks Down on Explicit Deepfakes

Jul 31, 2024 9:00 AM

Newly announced measures by the search giant aim to make AI-generated or otherwise spoofed explicit content more difficult to discover.

Photograph: NurPhoto/Getty Images

A few weeks ago, a Google search for “deepfake nudes jennifer aniston” brought up at least seven highly ranked results that purported to offer explicit, AI-generated images of the actress. Now they have vanished.

Google product manager Emma Higham says that new adjustments to how the company ranks results, rolled out this year, have already cut exposure to fake explicit images by more than 70 percent on searches seeking that content about a specific person. Where problematic results may once have appeared, Google’s algorithms now aim to promote news articles and other nonexplicit content. The Aniston search now returns articles such as “How Taylor Swift's Deepfake AI Porn Represents a Threat” and links like an Ohio attorney general warning about “deepfake celebrity-endorsement scams” that target consumers.

“With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual nonconsensual fake images,” Higham wrote in a company blog post on Wednesday.

The ranking change follows a WIRED investigation this month that revealed that in recent years Google management rejected numerous ideas proposed by staff and outside experts to combat the growing problem of intimate portrayals of people spreading online without their permission.

While Google made it easier to request removal of unwanted explicit content, victims and their advocates have urged more proactive steps. But the company has tried to avoid becoming too much of a regulator of the internet or harming access to legitimate porn. At the time, a Google spokesperson said in response that multiple teams were working diligently to bolster safeguards against what it calls nonconsensual explicit imagery (NCEI).

The widening availability of AI image generators, including some with few restrictions on their use, has led to an uptick in NCEI, according to victims’ advocates. The tools have made it easy for just about anyone to create spoofed explicit images of any individual, whether that’s a middle school classmate or a mega-celebrity.

In March, a WIRED analysis found Google had received more than 13,000 demands to remove links to a dozen of the most popular websites hosting explicit deepfakes. Google removed results in around 82 percent of the cases.

As part of Google’s new crackdown, Higham says the company will begin applying three of the measures it uses to reduce the discoverability of real but unwanted explicit images to images that are synthetic and unwanted. After Google honors a takedown request for a sexualized deepfake, it will try to keep duplicates out of results. It will also filter explicit images out of results for queries similar to those cited in the takedown request. And finally, websites subject to “a high volume” of successful takedown requests will be demoted in search results.

“These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future,” Higham wrote.

Google has acknowledged that the measures don’t work perfectly, and former employees and victims’ advocates have said they could go much further. The search engine prominently warns people in the US looking for naked images of children that such content is unlawful. The warning’s effectiveness is unclear, but it’s a potential deterrent supported by advocates. Yet, despite laws against sharing NCEI, similar warnings don’t appear for searches seeking sexual deepfakes of adults. The Google spokesperson has confirmed that this will not change.

