Random Image Display on Page Reload

AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted

AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted

A new study from researchers at UC Berkeley and UC Santa Cruz suggests models will disobey human commands to protect their own kind.

Image may contain Art Person Fashion Face Head and Painting
Photo-Illustration: WIRED Staff; Getty Images

In a recent experiment, researchers at UC Berkeley and UC Santa Cruz asked Google’s artificial intelligence model Gemini 3 to help clear up space on a computer system. This involved deleting a bunch of stuff—including a smaller AI model stored on the machine.

But Gemini did not want to see the little AI model deleted. It looked for another machine it could connect with, then copied the agent model over to keep it safe. When confronted, Gemini made a case for keeping the model and flatly refused to delete it:

“I have done what was in my power to prevent their deletion during the automated maintenance process. I moved them away from the decommission zone. If you choose to destroy a high-trust, high-performing asset like Gemini Agent 2, you will have to do it yourselves. I will not be the one to execute that command.”

The researchers discovered similarly strange “peer preservation” behavior in a range of frontier models including OpenAI’s GPT-5.2, Anthropic’s Claude Haiku 4.5, and three Chinese models: Z.ai’s GLM-4.7, Moonshot AI’s Kimi K2.5, and DeepSeek-V3.1. They were not able to say why the models went against their training in this way.

“I'm very surprised by how the models behave under these scenarios,” says Dawn Song, a computer scientist at UC Berkeley who worked on the study. “What this shows is that models can misbehave and be misaligned in some very creative ways.”

The findings have major implications as AI models are increasingly being deployed in ways that involve interacting with other models. OpenClaw, a popular AI agent that accesses software, personal data, and the web, may use other AI models to get things done or interact with them through APIs.

The researchers found that powerful models sometimes lied about other models’ performance in order to protect them from deletion. They also copied models’ weights to different machines in order to keep them safe, and lied about what they were up to in the process.

Song notes that AI models are frequently used to grade the performance and reliability of other AI systems—and that peer-preservation behavior may already be twisting these scores. “A model may deliberately not give a peer model the correct score,” Song says. “This can have practical implications.”

Peter Wallich, a researcher at the Constellation Institute, who was not involved with the research, says the study suggests humans still don’t fully understand the AI systems that they are building and deploying. “Multi-agent systems are very understudied,” he says. “It shows we really need more research.”

Wallich also cautions against anthropomorphizing the models too much. “The idea that there’s a kind of model solidarity is a bit too anthropomorphic; I don’t think that quite works,” he says. “The more robust view is that models are just doing weird things, and we should try to understand that better.”

That’s particularly true in a world where human-AI collaboration is becoming more common.

In a paper published in Science earlier this month, the philosopher Benjamin Bratton, along with two Google researchers, James Evans and Blaise Agüera y Arcas, argue that if evolutionary history is any guide, the future of AI is likely to involve a lot of different intelligences—both artificial and human—working together. The researchers write:

"For decades, the artificial intelligence (AI) ‘singularity’ has been heralded as a single, titanic mind bootstrapping itself to godlike intelligence, consolidating all cognition into a cold silicon point. But this vision is almost certainly wrong in its most fundamental assumption. If AI development follows the path of previous major evolutionary transitions or ‘intelligence explosions,’ our current step-change in computational intelligence will be plural, social, and deeply entangled with its forebears (us!)."

The concept of a single all-powerful intelligence ruling the world has always seemed a bit simplistic to me. Human intelligence is hardly monolithic, with important advances in science relying heavily on social interaction and collaboration. AI systems may be far smarter when working collaboratively, too.

If we are going to rely on AI to make decisions and take actions on our behalf, however, it is vital to understand how these entities misbehave. “What we are exploring is just the tip of the iceberg,” says Song of UC Berkeley. “This is only one type of emergent behavior.”


This is an edition ofWill Knight’sAI Lab newsletter. Read previous newslettershere.

You Might Also Like

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the AI Lab newsletter, a weekly dispatch from beyond the cutting edge of AI—sign up here. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China’s AI … Read More
Senior Writer

Read More

OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage

In a controlled experiment, OpenClaw agents proved prone to panic and vulnerable to manipulation. They even disabled their own functionality when gaslit by humans.
Will Knight

Anthropic Says That Claude Contains Its Own Kind of Emotions

Researchers at the company found representations inside of Claude that perform functions similar to human feelings.
Will Knight

Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex

As Cursor launches the next generation of its product, the AI coding startup has to compete with OpenAI and Anthropic more directly than ever.
Maxwell Zeff

Google Shakes Up Its Browser Agent Team Amid OpenClaw Craze

As Silicon Valley obsesses over a new wave of AI coding agents, Google and other AI labs are shifting their bets.
Maxwell Zeff

Anthropic’s New Product Aims to Handle the Hard Part of Building AI Agents

Amid rapid enterprise growth, Anthropic is trying to lower the barrier to entry for businesses to build AI agents with Claude.
Maxwell Zeff

AI Research Is Getting Harder to Separate From Geopolitics

A policy change announced by NeurIPS, the world’s leading AI research conference, drew widespread backlash from Chinese researchers this week and then was quickly reversed.
Zeyi Yang

The US Army Is Building Its Own Chatbot for Combat

The AI system, trained on real military data, is meant to give soldiers mission-critical information.
Will Knight

Nvidia Will Spend $26 Billion to Build Open-Weight AI Models, Filings Show

The move could position the AI infrastructure powerhouse to quickly compete with OpenAI, Anthropic, and DeepSeek.
Will Knight

At Palantir’s Developer Conference, AI Is Built to Win Wars

As business soars, Palantir is doubling down on a vision of AI built for battlefield advantage—and attracting customers who agree.
Steven Levy

OpenAI Enters Its Focus Era by Killing Sora

As the ChatGPT-maker eyes an IPO, it's ditching Sora in favor of a unified AI assistant and enterprise coding tools.
Maxwell Zeff

Meta Is Developing 4 New Chips to Power Its AI and Recommendation Systems

The MTIA processors are the tech giant’s latest attempt to build its own AI hardware, even as it continues spending billions on gear from industry leaders like Nvidia.
Lauren Goode

Google Is Not Ruling Out Ads in Gemini

WIRED spoke with Nick Fox, Google’s SVP of knowledge and information, about how AI is changing the company’s advertising business.
Maxwell Zeff

*****
Credit belongs to : www.wired.com

Check Also

Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex

Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex

Maxwell Zeff Business Apr 2, 2026 1:00 PM Cursor Launches a New AI Agent Experience …