Inside OpenAI’s Raid on Thinking Machines Lab

OpenAI is planning to bring over more researchers from Thinking Machines Lab after nabbing two cofounders, a source familiar with the situation says. Plus, the latest efforts to automate jobs with AI.

If someone ever makes an HBO Max series about the AI industry, the events of this week will make quite the episode.

On Wednesday, OpenAI’s CEO of applications, Fidji Simo, announced the company had rehired Barret Zoph and Luke Metz, cofounders of Mira Murati’s AI startup, Thinking Machines Lab. Zoph and Metz had left OpenAI in late 2024.

We reported last night on two narratives forming around what led to the departures, and have since learned new information.

A source with direct knowledge says that Thinking Machines leadership believed Zoph engaged in an incident of serious misconduct while at the company last year. That incident broke Murati’s trust, the source says, and disrupted the pair’s working relationship. The source also alleged Murati fired Zoph on Wednesday—before knowing he was going to OpenAI—due to what the company claimed were issues that arose after the alleged misconduct. Around the time the company learned that Zoph was returning to OpenAI, Thinking Machines raised concerns internally about whether he had shared confidential information with competitors. (Zoph has not responded to several requests for comment from WIRED.)

Meanwhile, in a Wednesday memo to employees, Simo claimed the hires had been in the works for weeks and that Zoph told Murati he was considering leaving Thinking Machines on Monday—prior to the date he was fired. Simo also told employees that OpenAI doesn’t share Thinking Machines' concerns about Zoph’s ethics.

Alongside Zoph and Metz, another former OpenAI researcher who was working at Thinking Machines, Sam Schoenholz, is rejoining the ChatGPT maker, per Simo’s announcement. At least two more Thinking Machines employees are expected to join OpenAI in the coming weeks, according to a source familiar with the matter. Technology reporter Alex Heath was first to report the additional hires.

A separate source familiar with the matter pushed back on the perception that the recent personnel changes were wholly related to Zoph. “This has been part of a long discussion at Thinking Machines. There were discussions and misalignment on what the company wanted to build—it was about the product, the technology, and the future.”

Thinking Machines Lab and OpenAI declined to comment.

In the aftermath of these events, we’ve been hearing from several researchers at leading AI labs who say they are exhausted by the constant drama in their industry. This specific incident is reminiscent of OpenAI’s brief ouster of Sam Altman in 2023, known inside OpenAI as “the blip.” Murati played a key role in that event as the company’s then chief technology officer, according to reporting from The Wall Street Journal.

In the years since Altman’s ouster, the drama in the AI industry has continued, with departures of cofounders at several major AI labs, including xAI’s Igor Babuschkin, Safe Superintelligence’s Daniel Gross, and Meta’s Yann LeCun (he did cofound Facebook’s longstanding AI lab, FAIR, after all).

Some might argue the drama is justified for a nascent industry whose expenditures are contributing to America’s GDP growth. Also, if you buy into the idea that one of these researchers might crack a few breakthroughs on the path to AGI, it’s probably worth tracking where they’re going.

That said, many researchers started working before ChatGPT’s breakout success and appear surprised that their industry is now the source of nearly constant scrutiny.

As long as researchers can keep raising billion-dollar seed rounds on a whim, we’re guessing the AI industry’s power shake-ups will continue apace. HBO Max writers, lock in.

Got a Tip?
Are you a current or former AI researcher who wants to talk about what's happening? We'd like to hear from you. Using a nonwork phone or computer, contact the reporter securely on Signal at mzeff.88.

How AI Labs Are Training Agents to Do Your Job

People in Silicon Valley have been musing about AI displacing jobs for decades. In the past few months, however, the efforts to actually get AI to do economically valuable work have become far more sophisticated.

AI labs are smartening up about the data they’re using to create AI agents. Last week, WIRED reported that OpenAI has been asking third-party contractors from the firm Handshake to upload examples of their real work from previous jobs to evaluate OpenAI’s agents. The companies ask contractors to scrub these documents of any confidential data and personally identifying information. While it’s possible some corporate secrets or names slip by, that’s likely not what OpenAI is after (though the company could get in serious trouble if that happens, experts say).

AI labs are more interested in getting realistic examples of work created by a McKinsey consultant, Goldman Sachs investment banker, or Harvard doctor. That’s why data suppliers such as Mercor specifically seek out, in their job postings, professionals who have worked at such firms.

Handshake, Mercor, Surge, and Turing are some of the major data suppliers that AI labs rely on to get this data. In the past year, data firms have started paying upwards of $100 an hour to contract top talent for AI labs.

One way they’re using this data is to create “environments,” which are essentially boring video games that teach AI agents how to use enterprise software applications. The idea is that AI agents can practice in these environments and learn how to use the real-world software that professionals rely on to do their jobs.
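The piece describes these environments only at a high level, so here is a minimal, hypothetical sketch of the reset/step loop such a setup might use. The task, class name, action strings, and reward values are all invented for illustration; real lab environments wrap actual enterprise software, not a toy state machine like this.

```python
# Hypothetical sketch of an agent "environment": a toy office task
# (filing an expense report) exposed through a reset/step interface,
# the shape most reinforcement-learning toolkits use.

class ExpenseReportEnv:
    """Toy environment: the agent must perform the required steps in order."""

    REQUIRED_STEPS = ["open_form", "enter_amount", "attach_receipt", "submit"]

    def reset(self):
        # Start a fresh episode and return the initial observation.
        self.done_steps = []
        return {"screen": "home", "next_hint": self.REQUIRED_STEPS[0]}

    def step(self, action):
        # Returns (observation, reward, done). Reward arrives only on
        # completing the whole task; wrong actions cost a small penalty.
        expected = self.REQUIRED_STEPS[len(self.done_steps)]
        if action == expected:
            self.done_steps.append(action)
            done = len(self.done_steps) == len(self.REQUIRED_STEPS)
            nxt = None if done else self.REQUIRED_STEPS[len(self.done_steps)]
            return {"screen": action, "next_hint": nxt}, (1.0 if done else 0.0), done
        return {"screen": "error", "next_hint": expected}, -0.1, False


# A trivial scripted "agent" that follows the hint, showing the loop shape
# a learning agent would be trained inside.
env = ExpenseReportEnv()
obs = env.reset()
total, done = 0.0, False
while not done:
    obs, reward, done = env.step(obs["next_hint"])
    total += reward
print(total)  # the scripted agent collects the completion reward: 1.0
```

In a real training setup, the scripted agent would be replaced by a model choosing actions from the observation, and the reward signal is what lets the lab fine-tune it toward reliably finishing the task.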

“Over the past year, labs have increasingly recognized that they need to train and fine-tune models for a whole bunch of areas of knowledge work, including legal, health care, consulting, and banking,” says Aaron Levie, the CEO of the enterprise company Box, which offers enterprise agents powered by models from OpenAI, Anthropic, and Google. “These firms have been hiring contractors to generate datasets and rubrics, which offer ways that they can train and evaluate the model so it can get better at particular skills.”

Whether this is enough to train AI agents to execute office tasks accurately and consistently remains to be seen. AI labs have significantly improved their agents in the past year, as shown by viral products like Claude Code, which people are increasingly using for tasks outside of coding. If that’s any indication of what’s to come for other industries, it’s worth watching these enterprise agents.


This is an edition of the Model Behavior newsletter. Read previous newsletters here.

Maxwell Zeff is a senior writer at WIRED covering the business of artificial intelligence. He was previously a senior reporter with TechCrunch, where he broke news on startups and leaders driving the AI boom. Before that, Zeff covered AI policy and content moderation for Gizmodo.

Zoë Schiffer oversees coverage of business and Silicon Valley at WIRED. She was previously managing editor of Platformer and a senior reporter at The Verge.
