AI Models Get Brain Rot, Too

Oct 22, 2025 2:00 PM

A new study shows that feeding large language models low-quality, high-engagement content from social media lowers their cognitive abilities.


AI models may be a bit like humans, after all.

A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too long doomscrolling on X or TikTok.

"We live in an age where information grows faster than attention spans—and much of it is engineered to capture clicks, not convey truth or depth,” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. “We wondered: What happens when AIs are trained on the same stuff?”

Hong and his colleagues fed different kinds of text to two open source large language models in pretraining. They examined what happened when the models were fed a mix of highly “engaging,” or widely shared, social media posts and ones that contained sensational or hyped text like “wow,” “look,” or “today only.”

The researchers then used several benchmarks to gauge the impact of this “junk” social media diet on the two models: Meta’s Llama and Alibaba’s Qwen.
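The paper’s exact data pipeline isn’t reproduced here, but the selection idea is simple to sketch. Below is a minimal, hypothetical Python illustration of splitting posts into a “junk” set and a control set; the field names, share threshold, and phrase list are assumptions for illustration, not the study’s actual criteria.

# Illustrative sketch only, not the researchers' code: separate hypothetical
# posts into a "junk" set (widely shared or hype-laden) and a control set.
HYPE_PHRASES = ("wow", "look", "today only")

def is_junk(post, share_threshold=1000):
    """Flag a post as junk if it is widely shared or uses hyped phrasing."""
    text = post["text"].lower()
    return post["shares"] >= share_threshold or any(p in text for p in HYPE_PHRASES)

posts = [
    {"text": "WOW, you won't believe this. Today only!", "shares": 52000},
    {"text": "A long essay on the history of number theory.", "shares": 40},
]
junk = [p for p in posts if is_junk(p)]
control = [p for p in posts if not is_junk(p)]
print(f"{len(junk)} junk posts, {len(control)} control posts")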

The models fed junk text experienced a kind of AI brain rot, with cognitive decline that included weaker reasoning and degraded memory. They also became less ethically aligned and more psychopathic, according to two measures.

The results mirror research on human subjects, which shows that low-quality online content has a detrimental effect on people’s cognitive abilities. The phenomenon is so pervasive that “brain rot” was named Oxford University Press’s word of the year for 2024.

The results are important for the AI industry, Hong says, because model-builders might assume that social media posts are a good source of training data for their models. “Training on viral or attention-grabbing content may look like scaling up data,” he says. “But it can quietly corrode reasoning, ethics, and long-context attention.”

The fact that LLMs suffer from brain rot seems especially worrying given that AI itself now generates an increasing share of social media content, much of it seemingly optimized for engagement. The researchers also found that models impaired by low-quality content could not easily be improved through retraining.

The findings also suggest that AI systems built around social platforms, such as Grok, might suffer from quality control issues if user-generated posts are used in training without an eye toward the integrity of the posts.

“As more AI-generated slop spreads across social media, it contaminates the very data future models will learn from,” Hong says. “Our findings show that once this kind of ‘brain rot’ sets in, later clean training can’t fully undo it.”


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the AI Lab newsletter, a weekly dispatch from beyond the cutting edge of AI—sign up here. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China’s AI …

