An open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks it may pose can be properly studied.
It warns that language models like GPT-4 can already compete with humans at a growing range of tasks and could be used to automate jobs and spread misinformation. The letter also raises the distant prospect of AI systems that could replace humans and remake civilization.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5),” states the letter, whose signatories include Yoshua Bengio, a professor at the University of Montreal considered a pioneer of modern AI, historian Yuval Noah Harari, Skype cofounder Jaan Tallinn, and Twitter CEO Elon Musk.
The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not suggest how a halt on development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.
The signatories seemingly include people from numerous tech companies that are building advanced language models, including Microsoft and Google; neither company responded to requests for comment on the letter. Hannah Wong, a spokesperson for OpenAI, says the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.
The letter comes as AI systems make increasingly bold and impressive leaps. GPT-4 was only announced two weeks ago, but its capabilities have stirred up considerable enthusiasm and a fair amount of concern. The language model, which is available via ChatGPT, OpenAI’s popular chatbot, scores highly on many academic tests, and can correctly solve tricky questions that are generally thought to require more advanced intelligence than AI systems have previously demonstrated. Yet GPT-4 also makes plenty of trivial logical mistakes. And, like its predecessors, it sometimes “hallucinates” incorrect information, betrays ingrained societal biases, and can be prompted to say hateful or potentially harmful things.
Part of the concern expressed by the signatories of the letter is that OpenAI, Microsoft, and Google have begun a profit-driven race to develop and release new AI models as quickly as possible. At such a pace, the letter argues, developments are happening faster than society and regulators can come to terms with.
The pace of change—and scale of investment—is significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its search engine Bing as well as other applications. Although Google developed some of the AI needed to build GPT-4 and previously created powerful language models of its own, until this year it chose not to release them, citing ethical concerns.
But excitement around ChatGPT and Microsoft’s maneuvers in search appear to have pushed Google into rushing its own plans. The company recently debuted Bard, a competitor to ChatGPT, and it has made a language model called PaLM, which is similar to OpenAI’s offerings, available through an API. “It feels like we are moving too quickly,” says Peter Stone, a professor at the University of Texas at Austin, and the chair of the One Hundred Year Study on AI, a report aimed at understanding the long-term implications of AI.
Stone, a signatory of the letter, says he does not agree with everything in it, and is not personally concerned about existential dangers. But he says advances are happening so quickly that the AI community and the general public barely had time to explore the benefits and possible misuses of ChatGPT before it was upgraded with GPT-4. “I think it is worth getting a little bit of experience with how they can be used and misused before racing to build the next one,” he says. “This shouldn’t be a race to build the next model and get it out before others.”
To date, the race has been rapid. OpenAI announced its first large language model, GPT-2, in February 2019. Its successor, GPT-3, was unveiled in June 2020. ChatGPT, which introduced enhancements on top of GPT-3, was released in November 2022.
Some letter signatories are part of the current AI boom—reflecting concerns within the industry itself that the technology is moving at a potentially dangerous pace. “Those making these have themselves said they could be an existential threat to society and even humanity, with no plan to totally mitigate these risks,” says Emad Mostaque, founder and CEO of Stability AI, a company building generative AI tools, and a signatory of the letter. “It is time to put commercial priorities to the side and take a pause for the good of everyone to assess rather than race to an uncertain future,” he adds.
Recent leaps in AI’s capabilities coincide with a sense that more guardrails may be needed around its use. The EU is currently considering legislation that would limit the use of AI depending on the risks involved. The White House has proposed an AI Bill of Rights that spells out protections that citizens should expect from algorithmic discrimination, data privacy breaches, and other AI-related problems. But these regulations began taking shape before the recent boom in generative AI even began.
“We need to hit the pause button and consider the risks of rapid deployment of generative AI models,” says Marc Rotenberg, founder and director of the Center for AI and Digital Policy, who was also a signatory of the letter. His organization plans to file a complaint this week with the US Federal Trade Commission calling for it to investigate OpenAI and ChatGPT and ban upgrades to the technology until “appropriate safeguards” are in place, according to its website. Rotenberg says the open letter is “timely and important” and that he hopes it receives “widespread support.”
When ChatGPT was released late last year, its abilities quickly sparked discussion around the implications for education and employment. The markedly improved abilities of GPT-4 have triggered more consternation. Musk, who provided early funding for OpenAI, has recently taken to Twitter to warn about the risk of large tech companies driving advances in AI.
An engineer at one large tech company who signed the letter, and who asked not to be named because he was not authorized to speak to media, says he has been using GPT-4 since its release. The engineer considers the technology a major shift but also a major worry. “I don’t know if six months is enough by any stretch but we need that time to think about what policies we need to have in place,” he says.
Others working in tech also expressed misgivings about the letter's focus on long-term risks, given that systems available today, including ChatGPT, already pose threats. “I find recent developments very exciting,” says Ken Holstein, an assistant professor of human-computer interaction at Carnegie Mellon University, who asked that his name be removed from the letter a day after signing it, as debate emerged among scientists about the best demands to make at this moment.
“I worry that we are very much in a ‘move fast and break things’ phase,” says Holstein, adding that the pace might be too quick for regulators to meaningfully keep up. “I like to think that we, in 2023, collectively, know better than this.”
Updated 03/29/2023, 10:40 pm EST: This story has been updated to reflect the final version of the open letter, and that Ken Holstein asked to be removed as a signatory. An earlier draft of the letter contained an error. A comment from OpenAI has also been added.
Credit belongs to : www.wired.com