OpenAI Employees Warn of a Culture of Risk and Retaliation

Jun 4, 2024 11:13 AM

An open letter signed by former and current employees at OpenAI and other AI giants calls for whistleblower protections as artificial intelligence rapidly evolves.

OpenAI logo shown on a smartphone screen

Photograph: Dilara Irem Sancar/Getty Images

A group of current and former OpenAI employees has issued a public letter warning that the company and its rivals are building artificial intelligence with undue risk, without sufficient oversight, and while muzzling employees who might witness irresponsible activities.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” reads the letter published at righttowarn.ai. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable.”

The letter calls for not just OpenAI but all AI companies to commit to not punishing employees who speak out about their activities. It also calls for companies to establish “verifiable” ways for workers to provide anonymous feedback on their activities. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the letter reads. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.”

OpenAI came under criticism last month after a Vox article revealed that the company has threatened to claw back employees’ equity if they do not sign non-disparagement agreements that forbid them from criticizing the company or even mentioning the existence of such an agreement. OpenAI’s CEO, Sam Altman, said on X recently that he was unaware of such arrangements and the company had never clawed back anyone’s equity. Altman also said the clause would be removed, freeing employees to speak out.


Got a Tip?

Are you a current or former employee at OpenAI? We’d like to hear from you. Using a nonwork phone or computer, contact Will Knight at will_knight@wired.com or securely on Signal at wak.01.


OpenAI has also recently changed its approach to managing safety. Last month, an OpenAI research group responsible for assessing and countering the long-term risks posed by the company’s more powerful AI models was effectively dissolved after several prominent figures left and the remaining members of the team were absorbed into other groups. A few weeks later, the company announced that it had created a Safety and Security Committee, led by Altman and other board members.

Last November, Altman was fired by OpenAI’s board for allegedly failing to disclose information and deliberately misleading them. After a very public tussle, Altman returned to the company and most of the board was ousted.

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” said OpenAI spokesperson Liz Bourgeois in a statement. “We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world.”

The letter’s signatories include people who worked on safety and governance at OpenAI, current employees who signed anonymously, and researchers who currently work at rival AI companies. It was also endorsed by several big-name AI researchers including Geoffrey Hinton and Yoshua Bengio, who both won the Turing Award for pioneering AI research, and Stuart Russell, a leading expert on AI safety.

Former employees who signed the letter include William Saunders, Carroll Wainwright, and Daniel Ziegler, all of whom worked on AI safety at OpenAI.

“The public at large is currently underestimating the pace at which this technology is developing,” says Jacob Hilton, a researcher who previously worked on reinforcement learning at OpenAI and who left the company more than a year ago to pursue a new research opportunity. Hilton says that although companies like OpenAI commit to building AI safely, there is little oversight to ensure that is the case. “The protections that we’re asking for, they’re intended to apply to all frontier AI companies, not just OpenAI,” he says.

“I left because I lost confidence that OpenAI would behave responsibly,” says Daniel Kokotajlo, a researcher who previously worked on AI governance at OpenAI. “There are things that happened that I think should have been disclosed to the public,” he adds, declining to provide specifics.

Kokotajlo says the letter’s proposal would provide greater transparency, and he believes there’s a good chance that OpenAI and others will reform their policies given the negative reaction to news of the non-disparagement agreements. He also says that AI is advancing with worrying speed. “The stakes are going to get much, much, much higher in the next few years,” he says, “at least so I believe.”

Updated: 6/3/2024, 5:50 pm ET: This story has been updated with comment from OpenAI.

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the Fast Forward newsletter that explores how advances in AI and other emerging technology are set to change our lives. He was previously a senior editor at MIT Technology Review.
