Google Lifts a Ban on Using Its AI for Weapons and Surveillance

Google published principles in 2018 barring its AI technology from being used for sensitive purposes. Weeks into President Donald Trump’s second term, those guidelines are being overhauled.

The Google Bay View campus in Mountain View, California. Photograph: Mike Kai Chen/Getty Images

Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads.

In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” to why Google’s principles needed to be overhauled.

Google first published the principles in 2018 as it moved to quell internal protests over the company’s decision to work on a US military drone program. In response, it declined to renew the government contract and also announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.

But in an announcement on Tuesday, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google’s AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Google also now says it will work to “mitigate unintended or harmful outcomes.”

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company’s esteemed AI research lab. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

They added that Google will continue to focus on AI projects “that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.”

Multiple Google employees expressed concern about the changes in conversations with WIRED. “It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war,” says Parul Koul, a Google software engineer and president of the Alphabet Union Workers-CWA.


Got a Tip?

Are you a current or former employee at Google? We’d like to hear from you. Using a nonwork phone or computer, contact Paresh Dave on Signal/WhatsApp/Telegram at +1-415-565-1302 or paresh_dave@wired.com, or Caroline Haskins on Signal at +1 785-813-1084 or at emailcarolinehaskins@gmail.com.


US President Donald Trump’s return to office last month has galvanized many companies to revise policies promoting equity and other liberal ideals. Google spokesperson Alex Krasov says the changes have been in the works much longer.

Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as “be socially beneficial” and maintain “scientific excellence.” Added is a mention of “respecting intellectual property rights.”

After the initial release of its AI principles roughly seven years ago, Google created two teams tasked with reviewing whether projects across the company were living up to the commitments. One focused on Google’s core operations, such as search, ads, Assistant, and Maps. Another focused on Google Cloud offerings and deals with customers. The unit focused on Google’s consumer business was split up early last year as the company raced to develop chatbots and other generative AI tools to compete with OpenAI.

Timnit Gebru, a former colead of Google’s ethical AI research team who was later fired from that position, claims the company’s commitment to the principles had always been in question. “I would say that it’s better to not pretend that you have any of these principles than write them out and do the opposite,” she says.

Three former Google employees who had been involved in reviewing projects to ensure they aligned with the company’s principles say the work was challenging at times because of the varying interpretations of the principles and pressure from higher-ups to prioritize business imperatives.

Google still has language about preventing harm in its official Cloud Platform Acceptable Use Policy, which includes various AI-driven products. The policy forbids violating “the legal rights of others” and engaging in or promoting illegal activity, such as “terrorism or violence that can cause death, serious harm, or injury to individuals or groups of individuals.”

However, when pressed about how this policy squares with Project Nimbus—a cloud computing contract with the Israeli government, which has benefited the country’s military—Google has said that the agreement “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”

“The Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy,” Google spokesperson Anna Kowalczyk told WIRED in July.

Google Cloud’s Terms of Service similarly forbid any applications that violate the law or “lead to death or serious physical harm to an individual.” Rules for some of Google’s consumer-focused AI services also ban illegal uses and some potentially harmful or offensive uses.

Update 2/04/25 5:45 ET: This story has been updated to include an additional comment from a Google employee.
