Vibe Coding Is the New Open Source—in the Worst Way Possible

Oct 6, 2025 6:00 AM

As developers increasingly lean on AI-generated code to build out their software—as they have with open source in the past—they risk introducing critical security failures along the way.

Photo-illustration of warped binary code: WIRED Staff; Getty Images

Just like you probably don't grow and grind wheat to make flour for your bread, most software developers don't write every line of code in a new project from scratch. Doing so would be extremely slow and could create more security issues than it solves. So developers draw on existing libraries—often open source projects—to get various basic software components in place.

While this approach is efficient, it can create exposure and reduce visibility into software. Increasingly, AI-generated code from so-called vibe coding is being used in a similar way, allowing developers to quickly spin up code that they can simply adapt rather than writing from scratch. Security researchers warn, though, that this new genre of plug-and-play code is making software-supply-chain security even more complicated—and dangerous.

“We're hitting the point right now where AI is about to lose its grace period on security,” says Alex Zenla, chief technology officer of the cloud security firm Edera. “And AI is its own worst enemy in terms of generating code that’s insecure. If AI is being trained in part on old, vulnerable, or low-quality software that's available out there, then all the vulnerabilities that have existed can reoccur and be introduced again, not to mention new issues.”

In addition to sucking up potentially insecure training data, the reality of vibe coding is that it produces a rough draft of code that may not fully take into account all of the specific context and considerations around a given product or service. In other words, even if a company trains a local model on a project's source code and a natural language description of goals, the production process is still relying on human reviewers' ability to spot any and every possible flaw or incongruity in code originally generated by AI.

“Engineering groups need to think about the development lifecycle in the era of vibe coding,” says Eran Kinsbruner, a researcher at the application security firm Checkmarx. “If you ask the exact same LLM model to write for your specific source code, every single time it will have a slightly different output. One developer within the team will generate one output and the other developer is going to get a different output. So that introduces an additional complication beyond open source.”
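Kinsbruner's point about non-reproducible output comes down to how LLMs choose tokens: at any sampling temperature above zero, the same prompt can produce different completions on different runs. The following is a minimal, self-contained sketch of temperature-scaled sampling using toy token scores, not a real model; the logits and token counts are illustrative assumptions.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from temperature-scaled softmax probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# Toy "model": identical prompt (same logits), two independent runs.
logits = [2.0, 1.5, 0.5]  # scores for three candidate tokens
run_a = [sample_token(logits, temperature=1.0, rng=random.Random(1)) for _ in range(5)]
run_b = [sample_token(logits, temperature=1.0, rng=random.Random(2)) for _ in range(5)]
print(run_a, run_b)  # typically different sequences from identical inputs
```

At very low temperature the softmax concentrates on the highest-scoring token, so output becomes effectively deterministic; at the temperatures typical of code assistants, two developers asking the same question can get materially different code.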

In a Checkmarx survey of chief information security officers, application security managers, and heads of development, a third of respondents said that more than 60 percent of their organization’s code was generated by AI in 2024. But only 18 percent of respondents said that their organization has a list of approved tools for vibe coding. Checkmarx polled thousands of professionals and published the findings in August—emphasizing, too, that AI development is making it harder to trace “ownership” of code.

Open source projects can be inherently insecure, outdated, or at risk of malicious takeover. And they can be incorporated into codebases without adequate transparency or documentation. But researchers point out that some of the fundamental backstops and accountability mechanisms that have always existed in open source are missing from, or severely fragmented by, AI-driven development.

“AI code is not very transparent,” says Dan Fernandez, Edera's head of AI products. “In repositories like GitHub you can at least see things like pull requests and commit messages to understand who did what to the code, and there's a way to trace back who contributed. But with AI code, there isn't that same accountability of what went into it and whether it's been audited by a human. And lines of code coming from a human could be part of the problem as well.”

Edera’s Zenla also points out that while vibe coding may seem like a low-cost way to create bare-bones applications and tools that might not otherwise exist for low-resource groups like small businesses or vulnerable populations, the ease of use comes with the danger of creating security exposure in these most at-risk and sensitive situations.

“There's a whole lot of talk about using AI to help vulnerable populations, because it uses less effort to get to something usable,” Zenla says. “And I think these tools can help people in need, but I also think that the security implications of vibe coding will disproportionately impact people who can least afford it.”

Even in enterprise settings, where financial risk largely falls to the company, the personal fallout for developers of a widespread vulnerability introduced through vibe coding should weigh heavily.

“The fact is that AI-generated material is already starting to exist in code bases,” says Jake Williams, a former NSA hacker and current vice president of research and development at Hunter Strategy. “We can learn from advances in open source software-supply-chain security—or we just won't, and it will suck.”

Lily Hay Newman is a senior writer at WIRED focused on information security, digital privacy, and hacking. She previously worked as a technology reporter at Slate, and was the staff writer for Future Tense, a publication and partnership between Slate, the New America Foundation, and Arizona State University.

Credit: www.wired.com
