Anthropic’s Daniela Amodei Believes the Market Will Reward Safe AI

The Trump administration might think regulation is killing the AI industry, but Anthropic president Daniela Amodei disagrees.

Daniela Amodei attends the WIRED Big Interview event.
Photograph: Annie Noelker

The Trump administration may think regulation is crippling the AI industry, but one of the industry’s biggest players doesn’t agree.

At WIRED’s Big Interview event on Thursday, Anthropic president and cofounder Daniela Amodei told WIRED editor at large Steven Levy that even though Trump’s AI and crypto czar, David Sacks, may have tweeted that her company is “running a sophisticated regulatory capture strategy based on fear-mongering,” she’s convinced her company’s commitment to calling out the potential dangers of AI is making the industry stronger.

“We were very vocal from day one that we felt there was this incredible potential” for AI, Amodei said. “We really want to be able to have the entire world realize the potential, the positive benefits, and the upside that can come from AI, and in order to do that, we have to get the tough things right. We have to make the risks manageable. And that's why we talk about it so much.”

More than 300,000 startups, developers, and companies use some version of Anthropic’s Claude model, and Amodei said that through the company’s dealings with those customers she’s learned that, while they want their AI to be able to do great things, they also want it to be reliable and safe.

“No one says, ‘We want a less safe product,’” Amodei said, likening Anthropic’s reporting of its models’ limits and jailbreaks to a car company releasing crash-test studies to show how it has addressed safety concerns. It might seem shocking to see a crash-test dummy flying through a car window in a video, but learning that an automaker updated its vehicles’ safety features as a result of that test could sell a buyer on a car. Amodei said the same goes for companies using Anthropic’s AI products, making for a market that is somewhat self-regulating.

“We’re setting what you can almost think of as minimum safety standards just by what we’re putting into the economy,” she said. Companies “are now building many workflows and day-to-day tooling tasks around AI, and they're like, ‘Well, we know that this product doesn't hallucinate as much, it doesn't produce harmful content, and it doesn't do all of these bad things.’ Why would you go with a competitor that is going to score lower on that?”


Amodei said Anthropic has become known for its commitment to what it calls “constitutional AI,” in which it trains its models on a baseline set of ethical principles and documents that articulate human values. Training a model on something like the United Nations’ Universal Declaration of Human Rights, Amodei said, can quickly teach an LLM to respond to queries based not on an empirical judgment of whether a query is right or wrong, good or bad, but on a broader ethical sense of right and wrong.

Anthropic’s commitment to creating a better, more ethical AI model has also helped it retain talent, Amodei said. “The story that we hear from people that come in the door [at Anthropic] is there's something about the mission and the values and this desire to be honest about both the good and the bad, and the desire to help to make the bad things better, that feels very genuine, like we mean it,” she explained.

Perhaps that’s why Anthropic has grown its staff by leaps and bounds over the past few years, from 200 employees to over 2,000. While those numbers could seem scary, especially given all the AI bubble talk flying around Wall Street and Silicon Valley, Amodei said she hasn’t seen any sign of her company or industry slowing down.

“Based on what we're seeing, the models are continuing to get smarter at the exact sort of curve that the scaling laws talk about, and the revenue is continuing on that same curve,” Amodei said. “As any of the scientists that work at Anthropic would tell you, everything continues going on the curve until it doesn't, and so we really try to be self-aware and humble about that.”

Marah Eakin is a freelance journalist based in Altadena, California. She's a frequent contributor to Vulture, Dwell, Current, The Los Angeles Times, and WIRED, among other outlets.