How the Loudest Voices in AI Went From ‘Regulate Us’ to ‘Unleash Us’

May 30, 2025 10:00 AM

Two years after Sam Altman pitched Congress on AI guardrails, he's back in Washington with a new message: To beat China, invest in OpenAI.

The White House, framed by trees under a cloudy sky, on March 9, 2025, in Washington, DC.
Photo-Illustration: WIRED Staff; Photograph: J. David Ake/Getty Images

On May 16, 2023, Sam Altman appeared before a subcommittee of the Senate Judiciary Committee. The title of the hearing was “Oversight of AI.” The session was a lovefest, with both Altman and the senators celebrating what Altman called AI’s “printing press moment”—and acknowledging that the US needed strong laws to avoid its pitfalls. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said. The legislators hung on Altman’s every word as he gushed about how smart laws could allow AI to flourish—but only within firm guidelines that both lawmakers and AI builders deemed vital at that moment. Altman was speaking for the industry, which widely shared his attitude. The battle cry was “Regulate Us!”

Two years later, on May 8 of this year, Altman was back in front of another group of senators. The senators and Altman were still singing the same tune, but one pulled from a different playlist. This hearing was called “Winning the AI Race.” In DC, the word “oversight” has fallen out of favor, and the AI discourse is no exception. Instead of advocating for outside bodies to examine AI models to assess risks, or for platforms to alert people when they are interacting with AI, committee chair Ted Cruz argued for a path where the government would not only fuel innovation but remove barriers like “overregulation.” Altman was on board with that. His message was no longer “regulate me” but “invest in me.” He said that overregulation, like the rules adopted by the European Union or one bill recently vetoed in California, would be “disastrous.” “We need the space to innovate and to move quickly,” he said. Safety guardrails might be necessary, he affirmed, but they needed to involve “sensible regulation that does not slow us down.”

What happened? For one thing, the panicky moment just after everyone got freaked out by ChatGPT passed, and it became clear that Congress wasn’t going to move quickly on AI. But the biggest development is that Donald Trump took back the White House and hit the brakes on the Biden administration’s nuanced, pro-regulation approach. The Trump doctrine of AI regulation seems suspiciously close to that of Trump supporter Marc Andreessen, who declared in his Techno-Optimist Manifesto that AI regulation was literally a form of murder because “any deceleration of AI will cost lives.” Vice President J.D. Vance made these priorities explicit at an international gathering in Paris this February. “I’m not here … to talk about AI safety, which was the title of the conference a couple of years ago,” he said. “We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off, and we’ll make every effort to encourage pro-growth AI policies.” The administration later unveiled an AI Action Plan “to enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation.”

Two foes have emerged in this movement. First is the European Union, which has adopted a regulatory regime that demands transparency and accountability from major AI companies. The White House despises this approach, as do those building AI businesses in the US.

But the biggest bogeyman is China. The prospect of the People’s Republic besting the US in the “AI Race” is so unthinkable that regulation must be put aside, or done with what both Altman and Cruz described as a “light touch.” Some of this reasoning comes from a theory known as “hard takeoff,” which posits that AI models can reach a tipping point where lightning-fast self-improvement launches a dizzying gyre of supercapability, also known as AGI. “If you get there first, you dastardly person, I will not be able to catch you,” says former Google CEO Eric Schmidt, with the “you” being a competitor. (Schmidt had been speaking about China’s status as a leader in open source.) Schmidt is one of the loudest voices warning about this possible future. But the White House is probably less interested in the Singularity than it is in classic economic competition.

The fear of China pulling ahead on AI is the key driver of current US policy, safety be damned. The party line even objects to individual states trying to fill the vacuum of inaction with laws of their own. The version of the tax-break-giving, Medicaid-cutting megabill just passed by the House included a moratorium on any state-level AI legislation for 10 years. That’s like an eternity in terms of AI progress. (Pundits are saying that this provision won’t survive opposition in the Senate, but it should be noted that almost every Republican in the House voted for it.)

It’s not surprising that Trumpworld would reject regulation and embrace a jingoistic stance on AI. But what happened to the seemingly genuine appetite in the industry for rules to ensure AI products don’t run amok? I contacted several of the top AI companies this week and was pointed to published blogs and transcripts from speeches and public testimony, but no executive would go on record on the topic. (To be fair, I didn’t give them much time.)

Typical of those materials was OpenAI’s policy blog. It asks for “freedom to innovate,” meaning, in all likelihood, no burdensome laws; strong export controls; and an opportunistic request for “freedom to learn.” This is a euphemistic request for Congress to redefine intellectual property as “fair use” so OpenAI and other companies can train their models with copyrighted materials—without compensating the creators. Microsoft is also asking for this bonanza. (Disclosure: I am on the council of the Authors Guild, which is suing OpenAI and Microsoft over the use of copyrighted books as training materials. Opinions expressed here are my own.)

The “light-touch” (or no-touch) regulatory camp does have an excellent point to make: No one is sure how to craft laws that prevent the worst dangers of AI without slowing the pace of innovation. But aside from avoiding catastrophic risk, there are plenty of other areas where AI regulation would not introduce speed bumps to research. These involve banning certain kinds of AI surveillance, deepfakes, and discrimination; clearly informing people when they are interacting with robots; and mandating higher standards to protect personal data in AI systems. (I admit I cheated in making that list—not by using ChatGPT, but by drawing on the kinds of AI harms that the House of Representatives would not allow states to regulate.)

Public pressure, or some spectacular example of misuse, may lead Congress to address those AI issues at some point. But what lingers for me is the about-face from two years ago, when serious worries about catastrophic risk dominated conversations in the AI world. The glaring exception to this is Anthropic, which still hasn’t budged from a late October blog post—just days before the presidential election—that not only urged effective regulation to “reduce catastrophic risks” but all but predicted the end times if we didn’t act soon. “Governments should urgently take action on AI policy in the next eighteen months,” it read, in boldface. “The window for proactive risk prevention is closing fast.”

In this environment, there is virtually no chance that Anthropic will get its wish. Maybe it won’t matter: It could be that fears of an AI apocalypse are way overblown. Take note, though, that the leaders of just about every single major AI company are predicting that in a few years, we will realize artificial general intelligence. When you press them, they will also admit that controlling AI, or even understanding how it works, is a work in progress. Nonetheless, the focus is now on hastening the push to more powerful AI—ostensibly to beat China.

Chinese people have made it clear they don't want to report to robot overlords any more than we do. America's top geopolitical rival has also demonstrated some interest in imposing strong safety standards. But if the United States insists on eschewing guardrails and going full-speed toward a future that it can’t contain, our biggest competitor will have no choice but to do the same. May the best hard takeoff win.


Steven Levy covers the gamut of tech subjects for WIRED, in print and online, and has been contributing to the magazine since its inception. His weekly column, Plaintext, is exclusive to subscribers online, but the newsletter version is open to all.
Editor at Large

