Senators Want ChatGPT-Level AI to Require a Government License

Sep 9, 2023 7:00 AM

A new US government body would force companies to seek a license before working on powerful AI models like OpenAI's GPT-4, under a bipartisan proposal by senators Richard Blumenthal and Josh Hawley.

Senator Josh Hawley (R-MO) and Senator Richard Blumenthal (D-CT) speak before a Senate Judiciary Subcommittee hearing on artificial intelligence at the US Capitol in Washington, DC, on Tuesday, July 25, 2023. Photograph: Graeme Sloan/Alamy

The US government should create a new body to regulate artificial intelligence—and restrict work on language models like OpenAI’s GPT-4 to companies granted licenses to do so. That’s the recommendation of a bipartisan duo of senators, Democrat Richard Blumenthal and Republican Josh Hawley, who launched a legislative framework yesterday to serve as a blueprint for future laws and influence other bills before Congress.

Under the proposal, developing face recognition and other “high risk” applications of AI would also require a government license. To obtain one, companies would have to test AI models for potential harm before deployment, disclose instances when things go wrong after launch, and allow audits of AI models by an independent third party.

The framework also proposes that companies should publicly disclose details of the training data used to create an AI model and that people harmed by AI get a right to bring the company that created it to court.

The senators’ suggestions could be influential in the days and weeks ahead as debates intensify in Washington over how to regulate AI. Early next week, Blumenthal and Hawley will oversee a Senate subcommittee hearing about how to meaningfully hold businesses and governments accountable when they deploy AI systems that cause people harm or violate their rights. Microsoft president Brad Smith and the chief scientist of chipmaker Nvidia, William Dally, are due to testify.

A day later, senator Chuck Schumer will host the first in a series of meetings to discuss how to regulate AI, a challenge Schumer has referred to as “one of the most difficult things we’ve ever undertaken.” Tech executives with an interest in AI, including Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia, make up about half the almost-two-dozen-strong guest list. Other attendees represent those likely to be subjected to AI algorithms, including trade union presidents from the Writers Guild and union federation AFL-CIO, and researchers who work on preventing AI from trampling human rights, among them UC Berkeley’s Deb Raji and Humane Intelligence CEO and former Twitter ethical AI lead Rumman Chowdhury.

Anna Lenhart, who previously led an AI ethics initiative at IBM and is now a PhD candidate at the University of Maryland, says the senators’ legislative framework is a welcome sight after years of AI experts appearing in Congress to explain how and why AI should be regulated.

“It's really refreshing to see them take this on and not wait for a series of insight forums or a commission that's going to spend two years and talk to a bunch of experts to essentially create this same list,” Lenhart says.

But she’s unsure how any new AI oversight body could house the broad range of technical and legal knowledge required to oversee technology used in areas as varied as self-driving cars, health care, and housing. “That’s where I get a bit stuck on the licensing regime idea,” Lenhart says.

The idea of using licenses to restrict who can develop powerful AI systems has gained traction in both industry and Congress. OpenAI CEO Sam Altman suggested licensing for AI developers during testimony before the Senate in May—a regulatory solution that might arguably help his company maintain its leading position. A bill proposed last month by senators Lindsey Graham and Elizabeth Warren would also require tech companies to secure a government AI license, but it only covers digital platforms above a certain size.

Lenhart is not the only AI or policy expert skeptical of government licensing for AI development. In May the idea drew criticism from both the libertarian-leaning political campaign group Americans for Prosperity, which fears it would stifle innovation, and the digital rights nonprofit Electronic Frontier Foundation, which warns of industry capture by companies with money or influential connections. Perhaps in response, the framework unveiled yesterday recommends strong conflict-of-interest rules for staff at the AI oversight body.

Blumenthal and Hawley’s new framework for future AI regulation leaves some questions unanswered. It’s not yet clear whether oversight of AI would come from a newly created federal agency or a group inside an existing one. Nor have the senators specified what criteria would determine whether a use case is high risk and so requires a license to develop.

Michael Khoo, climate disinformation program director at the environmental nonprofit Friends of the Earth, says the new proposal looks like a good first step but that more details are necessary to properly evaluate its ideas. His organization is part of a coalition of environmental and tech accountability groups that, via a letter to Schumer and a mobile billboard due to drive circles around Congress next week, is calling on lawmakers to prevent energy-intensive AI projects from making climate change worse.

Khoo agrees with the legislative framework’s call for documentation and public disclosure of adverse impacts, but says lawmakers shouldn’t let industry define what’s deemed harmful. He also wants members of Congress to demand businesses disclose how much energy it takes to train and deploy AI systems and consider the risk of accelerating the spread of misinformation when weighing the impact of AI models.

The legislative framework shows Congress considering a stricter approach to AI regulation than the federal government has taken so far, having launched a voluntary risk-management framework and a nonbinding AI bill of rights. The White House struck a voluntary agreement in July with major AI companies, including Google, Microsoft, and OpenAI, but also promised that firmer rules are coming. At a briefing on the AI company compact, White House special adviser for AI Ben Buchanan said keeping society safe from AI harms will require legislation.

Khari Johnson is a senior writer for WIRED covering artificial intelligence and the positive and negative ways AI shapes human lives. He was previously a senior writer at VentureBeat, where he wrote stories about power, policy, and novel or noteworthy uses of AI by businesses and governments.
