This Startup Wants to Spark a US DeepSeek Moment

Oct 8, 2025 2:30 PM

With the US falling behind on open source models, one startup has a bold idea for democratizing AI: Let anyone run reinforcement learning.

A photo illustration of a butterfly interwoven into a pixelated American flag.
Photo-Illustration: WIRED Staff; Getty Images

Ever since DeepSeek burst onto the scene in January, momentum has grown around open source Chinese artificial intelligence models. Some researchers are pushing for an even more open approach to building AI that allows model-making to be distributed across the globe.

Prime Intellect, a startup specializing in decentralized AI, is currently training a frontier large language model, called INTELLECT-3, using a new kind of distributed reinforcement learning for fine-tuning. The model will demonstrate a new way to build competitive open AI models using a mix of hardware spread across different locations, without relying on big tech companies, says Vincent Weisser, the company’s CEO.

Weisser says that the AI world is currently divided between those who rely on closed US models and those who use open Chinese offerings. The technology Prime Intellect is developing democratizes AI by letting more people build and modify advanced AI for themselves.

Improving AI models is no longer a matter of just ramping up training data and compute. Today’s frontier models use reinforcement learning to improve after the pre-training process is complete. Want your model to excel at math, answer legal questions, or play Sudoku? Have it improve itself by practicing in an environment where you can measure success and failure.
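The core idea above—propose, measure, keep what scores better—can be sketched in a few lines. This is a deliberately toy illustration (a number standing in for model weights, greedy search standing in for an RL algorithm), not anything from Prime Intellect's actual training stack:

```python
# Toy "verifiable reward" loop: propose candidates, score them in an
# environment where success is measurable, keep the best. All names here
# are illustrative; a real setup would score model completions instead.

def reward(answer: int, target: int) -> float:
    """1.0 for an exact answer, decaying with distance otherwise."""
    return 1.0 / (1.0 + abs(answer - target))

def train(target: int, steps: int = 50) -> int:
    """Greedy stand-in for an RL update: each step, try small tweaks to
    the current 'policy' and keep whichever scores highest."""
    policy = 0  # the "model's" current answer
    for _ in range(steps):
        candidates = [policy + d for d in (-3, -1, 1, 3)] + [policy]
        policy = max(candidates, key=lambda c: reward(c, target))
    return policy

print(train(target=42))  # → 42
```

The point is that nothing here requires access to the target directly—only to the reward signal, which is exactly what makes such environments a scalable way to improve a model at a task.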

“These reinforcement learning environments are now the bottleneck to really scaling capabilities,” Weisser tells me.

Prime Intellect has created a framework that lets anyone create a reinforcement learning environment customized for a particular task. The company is combining the best environments created by its own team and the community to tune INTELLECT-3.
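The general shape of such a framework is an environment that pairs a task generator with a verifier that scores completions. The sketch below is generic and hypothetical—Prime Intellect's framework defines its own interfaces, and the `Environment` class and `arithmetic_env` task here are invented for illustration:

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Environment:
    """A task-specific RL environment: generate a prompt, score a completion."""
    make_prompt: Callable[[random.Random], tuple[str, object]]  # -> (prompt, hidden answer)
    score: Callable[[str, object], float]                       # completion -> reward in [0, 1]

def arithmetic_env() -> Environment:
    """Example plug-in task: two-digit addition with an exact-match verifier."""
    def make_prompt(rng: random.Random) -> tuple[str, int]:
        a, b = rng.randint(10, 99), rng.randint(10, 99)
        return f"What is {a} + {b}?", a + b
    def score(completion: str, answer: int) -> float:
        return 1.0 if completion.strip() == str(answer) else 0.0
    return Environment(make_prompt, score)

env = arithmetic_env()
prompt, answer = env.make_prompt(random.Random(0))
print(prompt)  # e.g. "What is 59 + 63?"
```

Swapping in a different `make_prompt`/`score` pair—legal Q&A graded against reference answers, Sudoku checked for validity—is what lets a community contribute environments for wildly different tasks.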

I tried running an environment for solving Wordle puzzles, created by Prime Intellect researcher Will Brown, and watched as a small model worked through the game (more methodically than me, to be honest). If I were an AI researcher trying to improve a model, I would spin up a bunch of GPUs and have the model practice over and over while a reinforcement learning algorithm modified its weights, turning it into a Wordle master.
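What makes Wordle a good RL environment is that success is mechanically checkable. A minimal sketch of how such an environment might score guesses—assumed for illustration, not the actual code from Brown's environment—looks like this:

```python
from collections import Counter

def wordle_feedback(guess: str, answer: str) -> str:
    """Per-letter feedback: 'G' green (right spot), 'Y' yellow (wrong spot),
    '.' gray, with standard duplicate-letter handling."""
    fb = ["."] * len(guess)
    remaining = Counter()
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            fb[i] = "G"          # greens claim their letters first
        else:
            remaining[a] += 1    # unmatched answer letters available for yellows
    for i, g in enumerate(guess):
        if fb[i] == "." and remaining[g] > 0:
            fb[i] = "Y"
            remaining[g] -= 1
    return "".join(fb)

def reward(guess: str, answer: str) -> float:
    """Verifiable score an RL loop could optimize: 1.0 for a solve,
    partial credit for green letters."""
    fb = wordle_feedback(guess, answer)
    return 1.0 if guess == answer else fb.count("G") / len(answer)

print(wordle_feedback("crane", "crate"))  # → GGG.G
```

A reward like this is exactly the "measure success and failure" signal: the model never sees the answer, only the feedback string and the score.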

Although reinforcement learning is now incredibly important, it is mostly done behind closed doors by big AI companies. The process normally requires a lot of expertise, putting it out of the reach of most companies and developers. Weisser says that allowing startups to do their own reinforcement learning could produce valuable new software products including agents specialized for all sorts of tasks.

Some experts agree. Andrej Karpathy, the former head of Tesla’s AI team, described Prime Intellect’s reinforcement learning environments as “a great effort [and] idea,” shortly after they were announced. He encouraged open source researchers to take different environments and adapt them to new tasks to improve the skills of advanced models in new ways.

Prime Intellect has already shown that distributed methods—including dividing up calculations and then combining them to create a single, larger model—can challenge conventional ways of building AI. In late 2024 the company announced INTELLECT-1, a 10-billion-parameter model trained with distributed hardware. In March, it unveiled a larger, more capable model, INTELLECT-2, with reasoning capabilities enabled by distributed reinforcement learning.

The AI landscape has shifted dramatically over the past two years. Meta kicked off the open source AI era by releasing the first version of its Llama model in 2023, but the company’s latest offering, announced in April 2025, was a huge disappointment. Meanwhile DeepSeek, a little-known Chinese upstart, shocked the world by unveiling a capable, low-cost reasoning model in January of 2025. Several other Chinese AI labs have followed suit. OpenAI responded to DeepSeek’s success this August by launching its first open source model in several years, but Chinese models like Alibaba’s Qwen, Kimi from Moonshot, and DeepSeek’s R1 have proved more popular, perhaps because they are easy to modify and adapt.

“It’s almost like the US is out of options when it comes to open frontier models,” Weisser told me. “That's one of the things that we are trying to change.”

What do you think of the Prime Intellect approach? Send an email to ailab@wired.com to let me know.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the AI Lab newsletter, a weekly dispatch from beyond the cutting edge of AI.
