Six Months Ago Elon Musk Called for a Pause on AI. Instead Development Sped Up

Sep 28, 2023 12:01 PM

Earlier this year, prominent AI and tech experts signed a letter calling for a halt to advanced AI development. When WIRED checked back in, some signatories said they had never expected it to work.

Photo: Elon Musk. Photograph: Win McNamee/Getty Images

Six months ago this week, many prominent AI researchers, engineers, and entrepreneurs signed an open letter calling for a six-month pause on development of AI systems more capable than OpenAI’s latest GPT-4 language generator. It argued that AI is advancing so quickly and unpredictably that it could eliminate countless jobs, flood us with disinformation, and—as a wave of panicky headlines reported—destroy humanity. Whoops!

As you may have noticed, the letter did not result in a pause in AI development, or even a slowdown to a more measured pace. Companies have instead accelerated their efforts to build more advanced AI.

Elon Musk, one of the most prominent signatories, didn’t wait long to ignore his own call for a slowdown. In July he announced xAI, a new company he said would seek to go beyond existing AI and compete with OpenAI, Google, and Microsoft. And many Google employees who also signed the open letter have stuck with their company as it prepares to release an AI model called Gemini, which boasts broader capabilities than OpenAI’s GPT-4.

WIRED reached out to more than a dozen signatories of the letter to ask what effect they think it had and whether their alarm about AI has deepened or faded in the past six months. None who responded seemed to have expected AI research to really grind to a halt.

“I never thought that companies were voluntarily going to pause,” says Max Tegmark, an astrophysicist at MIT who leads the Future of Life Institute, the organization behind the letter. Tegmark says his main goal was not to pause AI but to legitimize conversation about the dangers of the technology, up to and including the fact that it might turn on humanity. The result “exceeded my expectations,” he says.

The responses to my follow-up also show the huge diversity of concerns experts have about AI—and that many signers aren’t actually obsessed with existential risk.

Lars Kotthoff, an associate professor at the University of Wyoming, says he wouldn’t sign the same letter today because many who called for a pause are still working to advance AI. “I’m open to signing letters that go in a similar direction, but not exactly like this one,” Kotthoff says. He adds that what concerns him most today is the prospect of a “societal backlash against AI developments, which might precipitate another AI winter” by quashing research funding and making people spurn AI products and tools.

Other signers told me they would gladly sign again, but their big worries seem to involve near-term problems, such as disinformation and job losses, rather than Terminator scenarios.

“In the age of the internet and Trump, I can more easily see how AI can lead to destruction of human civilization by distorting information and corrupting knowledge,” says Richard Kiehl, a professor working on microelectronics at Arizona State University.

“Are we going to get Skynet that’s going to hack into all these military servers and launch nukes all over the planet? I really don’t think so,” says Stephen Mander, a PhD student working on AI at Lancaster University in the UK. He does see widespread job displacement looming, however, and calls it an “existential risk” to social stability. But he also worries that the letter may have spurred more people to experiment with AI and acknowledges that he didn’t act on the letter’s call to slow down. “Having signed the letter, what have I done for the last year or so? I’ve been doing AI research,” he says.

Despite the letter’s failure to trigger a widespread pause, it did help propel the idea that AI could snuff out humanity into a mainstream topic of discussion. It was followed by a public statement signed by the leaders of OpenAI and Google’s DeepMind AI division that compared the existential risk posed by AI to that of nuclear weapons and pandemics. Next month, the British government will host an international “AI safety” conference, where leaders from numerous countries will discuss possible harms AI could cause, including existential threats.

Perhaps AI doomers hijacked the narrative with the pause letter, but the unease around the recent, rapid progress in AI is real enough—and understandable. A few weeks before the letter was written, OpenAI had released GPT-4, a large language model that gave ChatGPT new power to answer questions and caught AI researchers by surprise. As the potential of GPT-4 and other language models has become more apparent, surveys suggest that the public is becoming more worried than excited about AI technology. The obvious ways these tools could be misused are spurring regulators around the world into action.

The letter’s demand for a six-month moratorium on AI development may have created the impression that its signatories expected bad things to happen soon. But for many of them, a key theme seems to be uncertainty—around how capable AI actually is, how rapidly things may change, and how the technology is being developed.

“Many AI skeptics want to hear a concrete doom scenario, but to me, the fact that it is difficult to imagine detailed, concrete scenarios is kind of the point—it shows how hard it is for even world-class AI experts to predict the future of AI and how it will impact a complex world,” says Scott Niekum, a professor at the University of Massachusetts Amherst who works on AI risk and signed the letter. “And when you combine that prediction difficulty with lagging progress in safety, interpretability, and regulation, I think that should raise some alarms.”

Uncertainty is hardly proof that humanity is in danger. But the fact that so many people working in AI still seem unsettled may be reason enough for the companies developing AI to take a more thoughtful—or slower—approach.

“Many people who would be in a great position to take advantage of further progress would now instead prefer to see a pause,” says signatory Vincent Conitzer, a professor who works on AI at Carnegie Mellon University. “If nothing else, that should be a signal that something very unusual is up.”

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the Fast Forward newsletter, which explores how advances in AI and other emerging technology are set to change our lives. He was previously a senior editor at MIT Technology Review.

Credit: www.wired.com