AI Safety Meets the War Machine

Anthropic doesn’t want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract.

US Defense Secretary Pete Hegseth speaks during a news conference at the Pentagon in Arlington, Virginia.
Photo-Illustration: WIRED Staff; Andrew Harnik/Getty Images

When Anthropic last year became the first major AI company cleared by the US government for classified use—including military applications—the news didn’t make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a “supply chain risk,” a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China. That designation would mean the Pentagon would not do business with firms using Anthropic’s AI in their defense work.

In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic was in the hot seat. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” he said. This is a message to other companies as well: OpenAI, xAI, and Google, which currently have Department of Defense contracts for unclassified work, are jumping through the requisite hoops to get their own high clearances.

There’s plenty to unpack here. For one thing, there’s a question of whether Anthropic is being punished for complaining about the fact that its AI model Claude was used as part of the raid to remove Venezuela's president Nicolás Maduro (that’s what’s being reported; the company denies it). There’s also the fact that Anthropic publicly supports AI regulation—an outlier stance in the industry and one that runs counter to the administration’s policies. But there’s a bigger, more disturbing issue at play. Will government demands for military use make AI itself less safe?

Researchers and executives believe AI is the most powerful technology ever invented. Virtually all of the current AI companies were founded on the premise that it is possible to achieve AGI, or superintelligence, in a way that prevents widespread harm. Elon Musk, the founder of xAI, was once the biggest proponent of reining in AI—he cofounded OpenAI because he feared that the technology was too dangerous to be left in the hands of profit-seeking companies.

Anthropic has carved out a space as the most safety-conscious of all. The company’s mission is to have guardrails so deeply integrated into its models that bad actors cannot exploit AI’s darkest potential. Isaac Asimov said it first and best in his laws of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Even when AI becomes smarter than any human on Earth—an eventuality that AI leaders fervently believe in—those guardrails must hold.

So it seems contradictory that leading AI labs are scrambling to get their products into cutting-edge military and intelligence operations. As the first major lab with a classified contract, Anthropic provides the government with a “custom set of Claude Gov models built exclusively for U.S. national security customers.” Still, Anthropic said it did so without violating its own safety standards, including a prohibition on using Claude to produce or design weapons. Anthropic CEO Dario Amodei has specifically said he doesn’t want Claude involved in autonomous weapons or government surveillance. But that might not work with the current administration. Department of Defense CTO Emil Michael (formerly the chief business officer of Uber) told reporters this week that the government won’t tolerate an AI company limiting how the military uses AI in its weapons. “If there’s a drone swarm coming out of a military base, what are your options to take it down? If the human reaction time is not fast enough … how are you going to?” he asked rhetorically. So much for the first law of robotics.

There’s a good argument to be made that effective national security requires the best tech from the most innovative companies. Even a few years ago, some tech companies flinched at working with the Pentagon; in 2026 they are generally flag-waving would-be military contractors. I have yet to hear any AI executive speak about their models being associated with lethal force, but Palantir CEO Alex Karp isn’t shy about saying, with apparent pride, “Our product is used on occasion to kill people.”

The US might be able to flex its AI muscles with impunity when in combat with a country like Venezuela. But sophisticated opponents will have to aggressively implement their own versions of national security AI, with the result being a full-tilt arms race. The government will likely have little patience for AI companies that insist on carve-outs or lawyerly distinctions about what constitutes “legal use” when a lethal practice is in question. (Especially a government that feels free to redefine the law to justify what many consider to be war crimes.) That Pentagon statement says it explicitly: If AI companies want to partner with the Department of Defense, they must commit to doing whatever it takes to win.

That mindset may make sense in the Pentagon, but it pushes the effort to create safe AI in the wrong direction. If you are creating a form of AI that won’t harm people, it’s counterproductive to also work on versions that deliver lethal force. Only a few years ago, both governments and tech executives were talking seriously about international bodies that might help monitor and limit the harmful uses of AI. You don’t hear that talk much anymore. It’s a given now that the future of warfare is AI. Even more frightening, the future of AI itself might be more amenable to the kind of violence seen in warfare—if the companies that make it and the nations that wield it do not take care to contain the technology.

I have long believed that the major story of our times is the rise of digital technology. Politicians, regimes, and even countries may come and go—but tech’s remaking of humanity is irrevocable. When Donald Trump was first elected president in 2016, I spelled out this theory in a column called “The iPhone Is Bigger Than Donald Trump.” Upon his reelection in 2024, I wrote a sequel, arguing that AI was a bigger chaos agent than the president. In the long run, I argued, science trumps even Trump.

That theory now feels a little shakier. The future might hinge on who is in charge of advanced AI and how they shape and exploit it. While the lords of AI wrap themselves in patriotism and seek deals with the Pentagon, the fact is that they are supplying a fearsomely powerful and unpredictable technology to a government and a war department that rejects the idea of oversight. What would Asimov think?


This is an edition of Steven Levy’s Backchannel newsletter. Read previous newsletters here.

