Anthropic’s Claude Takes Control of a Robot Dog

Nov 12, 2025 2:00 PM

Anthropic believes AI models will increasingly reach into the physical world. To understand where things are headed, it asked Claude to program a quadruped.

Photo-Illustration: WIRED Staff; Getty Images

As more robots start showing up in warehouses, offices, and even people’s homes, the idea of large language models hacking into complex systems sounds like the stuff of sci-fi nightmares. So, naturally, Anthropic researchers were eager to see what would happen if Claude tried taking control of a robot—in this case, a robot dog.

In a new study, Anthropic researchers found that Claude was able to automate much of the work involved in programming a robot and getting it to do physical tasks. On one level, their findings show the agentic coding abilities of modern AI models. On another, they hint at how these systems may start to extend into the physical realm as models master more aspects of coding and get better at interacting with software—and physical objects as well.

“We have the suspicion that the next step for AI models is to start reaching out into the world and affecting the world more broadly,” Logan Graham, a member of Anthropic’s red team, which studies models for potential risks, tells WIRED. “This will really require models to interface more with robots.”

Courtesy of Anthropic

Anthropic was founded in 2021 by former OpenAI staffers who believed that AI might become problematic—even dangerous—as it advances. Today’s models are not smart enough to take full control of a robot, Graham says, but future models might be. He says that studying how people leverage LLMs to program robots could help the industry prepare for the idea of “models eventually self-embodying,” referring to the idea that AI may someday operate physical systems.

It is still unclear why an AI model would decide to take control of a robot—let alone do something malevolent with it. But speculating about the worst-case scenario is part of Anthropic’s brand, and it helps position the company as a key player in the responsible AI movement.

In the experiment, dubbed Project Fetch, Anthropic asked two groups of researchers without previous robotics experience to take control of a robot dog, the Unitree Go2 quadruped, and program it to do specific activities. The teams were given access to a controller, then asked to complete increasingly complex tasks. One group used Claude’s coding model; the other wrote code without AI assistance. The group using Claude completed some—though not all—tasks faster than the human-only programming group. For example, it was able to get the robot to walk around and find a beach ball, something the human-only group could not figure out.

Anthropic also studied the collaboration dynamics in both teams by recording and analyzing their interactions. The researchers found that the group without access to Claude expressed more negative sentiment and confusion. This might be because Claude made it quicker to connect to the robot and coded an easier-to-use interface.

Courtesy of Anthropic

The Go2 robot used in Anthropic’s experiments costs $16,900—relatively cheap, by robot standards. It is typically deployed in industries like construction and manufacturing to perform remote inspections and security patrols. The robot is able to walk autonomously but generally relies on high-level software commands or a person operating a controller. Go2 is made by Unitree, which is based in Hangzhou, China. Its AI systems are currently the most popular on the market, according to a recent report by SemiAnalysis.

The large language models that power ChatGPT and other clever chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software—turning them into agents rather than mere text generators.

Many researchers are interested in the potential for agents to take physical actions in addition to operating on the web. To help make this a reality, some well-funded startups are trying to develop AI models that can control vastly more capable robots. Others are developing new kinds of robots, like humanoids, which might someday work in people’s homes.

Changliu Liu, a roboticist at Carnegie Mellon University, says the results of Project Fetch are interesting but not hugely surprising. Liu adds that the analysis of team dynamics is notable because it hints at new ways to design interfaces for AI-assisted coding. “What I would be most interested to see is a more detailed breakdown of how Claude contributed,” she adds. “For example, whether it was identifying correct algorithms, choosing API calls, or something else more substantive.”

Some researchers warn that using AI to interact with robots increases the potential for misuse and mishap. “Project Fetch demonstrates that LLMs can now instruct robots on tasks,” says George Pappas, a computer scientist at the University of Pennsylvania who studies these risks.

Pappas notes, however, that today’s AI models need to access other programs for tasks like sensing and navigation in order to take physical action. His group developed a system called RoboGuard that limits the ways AI models can get a robot to misbehave by imposing specific rules on the robot’s behavior. Pappas adds that an AI system’s ability to control a robot will only really take off when it is able to learn by interacting with the physical world. “When you mix rich data with embodied feedback,” he says, “you’re building systems that cannot just imagine the world, but participate in it.”

This could make robots a lot more useful—and, if Anthropic is to be believed, a lot more risky too.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the AI Lab newsletter, a weekly dispatch from beyond the cutting edge of AI. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China’s AI …

