OpenAI and the FDA Are Holding Talks About Using AI In Drug Evaluation

High-ranking OpenAI employees have met with the FDA multiple times in recent weeks to discuss AI and a project called cderGPT.

Photograph: Didem Mente/Getty Images

The Food and Drug Administration has been meeting with OpenAI to discuss the agency’s use of AI, according to sources with knowledge of the meetings. The meetings appear to be part of a broader effort at the FDA to use this technology to speed up the drug approval process.

“Why does it take over 10 years for a new drug to come to market?” wrote FDA commissioner Marty Makary on X on Wednesday. “Why are we not modernized with AI and other things? We’ve just completed our first AI-assisted scientific review for a product and that’s just the beginning.”

The remarks followed an annual meeting of the American Hospital Association earlier this week, where Makary spoke about AI’s potential to aid in the approval of new treatments for diabetes and certain types of cancer.

Makary did not specify that OpenAI was part of this initiative. But sources close to the project say a small team from OpenAI has met multiple times in recent weeks with the FDA and two associates of Elon Musk's so-called Department of Government Efficiency. The group has discussed a project called cderGPT, which likely stands for Center for Drug Evaluation and Research GPT; the center regulates over-the-counter and prescription drugs in the US. Jeremy Walsh, recently named the FDA’s first-ever AI officer, has led the discussions. So far, no contract has been signed.

OpenAI declined to comment.

Walsh has also met with Peter Bowman-Davis, an undergraduate on leave from Yale who currently serves as the acting chief AI officer at the Department of Health and Human Services, to discuss the FDA’s AI ambitions. Politico first reported the appointment of Bowman-Davis, who is part of Andreessen Horowitz’s American Dynamism team.

When reached via email on Wednesday, Robert Califf, who served as FDA commissioner from 2016 to 2017 and again from 2022 through January, says the agency’s review teams have been using AI for several years. “It will be interesting to hear the details of which parts of the review were ‘AI assisted’ and what that means,” he says. “There has always been a quest to shorten review times and a broad consensus that AI could help.”

Before Califf departed the agency, he said the FDA was considering the various ways AI could be used in internal operations. “Final reviews for approval are only one part of a much larger opportunity,” he says.

To be clear, using AI to assist in final drug reviews would represent a chance to compress just a small part of the notoriously long drug-development timeline. The vast majority of drugs fail before ever coming up for FDA review.

Rafael Rosengarten, CEO of the precision oncology company Genialis and a cofounder and board member of the Alliance for AI in Healthcare, says he favors automating certain tasks in the drug-review process but wants policy guidance on what kind of data is used to train AI models and what level of model performance is considered acceptable. “These machines are incredibly adept at learning information, but they have to be trained in a way so they're learning what we want them to learn,” he says.

He could see AI being used more immediately to address certain “low-hanging fruit,” such as checking for application completeness. “Something as trivial as that could expedite the return of feedback to the submitters based on things that need to be addressed to make the application complete,” he says. More sophisticated uses would need to be developed, tested, and proved out.
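As a thought experiment, here is a minimal sketch of the kind of completeness check Rosengarten describes, written against OpenAI’s public Python SDK. The section checklist, prompt, and file name are all hypothetical; nothing below reflects how cderGPT or any actual FDA tooling works.

```python
# Hypothetical sketch: flag missing sections in a drug application
# before human review. The checklist and prompt are illustrative,
# not the FDA's actual submission requirements.
from openai import OpenAI

# Hypothetical list of sections a reviewer might require.
REQUIRED_SECTIONS = [
    "Clinical Pharmacology",
    "Nonclinical Toxicology",
    "Statistical Analysis Plan",
    "Proposed Labeling",
]

def completeness_report(application_text: str) -> str:
    """Ask the model which required sections appear to be missing."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "You are screening a drug application for completeness.\n"
        f"Required sections: {', '.join(REQUIRED_SECTIONS)}\n"
        "List any required section that is missing or empty in the text "
        "below, quoting the heading when it is present.\n\n"
        + application_text
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # "application.txt" is a placeholder for a submitted application.
    with open("application.txt") as f:
        print(completeness_report(f.read()))
```

Even in a toy version like this, the model’s output would be a screening aid for a human reviewer, not a determination; given the fabrication risk flagged below, any flagged omission would still need manual verification.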

An ex-FDA employee who has tested ChatGPT as a clinical tool says the propensity of AI models to fabricate convincing information raises questions about how reliable such a chatbot might be. “Who knows how robust the platform will be for these reviewers’ tasks,” the ex-staffer says.

The FDA review process currently takes about a year, but the agency has several existing mechanisms to expedite that timeline for promising drugs. One of those is the fast track designation, which is for products designed to treat a serious condition and fill an unmet medical need. Another is the breakthrough therapy designation, created in 2012, which allows the FDA to grant priority review to drug candidates that may provide a substantial benefit to patients compared to current treatment options.

“Ensuring medicines can be reviewed for safety and effectiveness in a timely manner to address patient needs is critical,” says Andrew Powaleny, a spokesperson for the industry group PhRMA, via email. “While AI is still developing, harnessing it requires a thoughtful and risk-based approach with patients at the center.”

The FDA is already doing its own research on potential uses of AI. In December 2023, the agency advertised a fellowship for a researcher to develop large language models for internal use. “During participation in this program, the fellow will engage in various activities that include but are not limited to the applications of LLMs for precision medicine, drug development and regulatory science,” the fellowship description says.

In January, OpenAI announced ChatGPT Gov, a self-hosted version of its chatbot designed to comply with government regulations. The startup also said it was working toward getting FedRAMP Moderate and High accreditations for ChatGPT Enterprise, which would allow it to handle sensitive government data. FedRAMP is a compliance program used by the federal government to assess cloud products; unless authorized through this program, a service cannot hold federal data.

Additional reporting by Matt Giles.

Written by WIRED Staff