Anthropic Denies It Could Sabotage AI Tools During War

The Department of Defense alleges the AI developer could manipulate models in the middle of war. Company executives argue that’s impossible.

Photo-Illustration: WIRED Staff; Getty Images

Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement came in response to accusations from the Trump administration that the company could tamper with its AI tools during a war.

“Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations,” Thiyagu Ramasamy, Anthropic’s head of public sector, wrote. “Anthropic does not have the access required to disable the technology or alter the model’s behavior before or during ongoing operations.”

The Pentagon has been sparring with the leading AI lab for months over how its technology can be used for national security—and what the limits on that usage should be. This month, Defense Secretary Pete Hegseth labeled Anthropic a supply-chain risk, a designation that will prevent the Department of Defense from using the company’s software, including through contractors, over the coming months. Other federal agencies are also abandoning Claude.

Anthropic filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to reverse it. However, customers have already begun canceling deals. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco. The judge could decide on a temporary reversal soon after.

In a filing earlier this week, government attorneys wrote that the Department of Defense “is not required to tolerate the risk that critical military systems will be jeopardized at pivotal moments for national defense and active military operations.”

The Pentagon has been using Claude to analyze data, write memos, and help generate battle plans, WIRED reported. The government’s argument is that Anthropic could disrupt active military operations by turning off access to Claude or pushing harmful updates if the company disapproves of certain uses.

Ramasamy rejected that possibility. “Anthropic does not maintain any back door or remote ‘kill switch,’” he wrote. “Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way.”

He went on to say that Anthropic would be able to provide updates only with the approval of the government and its cloud provider, in this case Amazon Web Services, though he didn’t specify it by name. Ramasamy added that Anthropic cannot access the prompts or other data military users enter into Claude.

Anthropic executives maintain in court filings that the company does not want veto power over military tactical decisions. Sarah Heck, head of policy, wrote in a court filing on Friday that Anthropic was willing to guarantee as much in a contract proposed March 4. “For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any right to control or veto lawful Department of War operational decision-making,” the proposal stated, according to the filing, which referred to the Pentagon by an alternative name.

The company was also ready to accept language that would address its concerns about Claude being used to help carry out deadly strikes without human supervision, Heck claimed. But negotiations ultimately broke down.

For the time being, the Defense Department has said in court filings that it “is taking additional measures to mitigate the supply chain risk” posed by the company by “working with third-party cloud service providers to ensure Anthropic leadership cannot make unilateral changes” to the Claude systems currently in place.

Paresh Dave is a senior writer for WIRED, covering the inner workings of Big Tech companies. He writes about how apps and gadgets are built and about their impacts while giving voice to the stories of the underappreciated and disadvantaged. He was previously a reporter for Reuters and the Los Angeles Times.
