Random Image Display on Page Reload

Meta and Other Tech Firms Put Restrictions on Use of OpenClaw Over Security Fears

Meta and Other Tech Firms Put Restrictions on Use of OpenClaw Over Security Fears

Security experts have urged people to be cautious with the viral agentic AI tool, known for being highly capable but also wildly unpredictable.

Image may contain Food Seafood Animal Sea Life Invertebrate Lobster Crawdad Adult Person and Wedding
Photo-Illustration: WIRED Staff; Getty Images

Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. “You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment," he wrote in a Slack message with a red siren emoji. “Please keep Clawdbot off all company hardware and away from work-linked accounts.”

Grad isn’t the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity to speak frankly.

Peter Steinberger, OpenClaw’s solo founder, launched it as a free, open source tool last November. But its popularity surged last month as other coders contributed features and began sharing their experiences using it on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep OpenClaw open source and support it through a foundation.

OpenClaw requires basic software engineering knowledge to set up. After that, it only needs limited direction to take control of a user’s computer and interact with other apps to assist with tasks such as organizing files, conducting web research, and shopping online.

Some cybersecurity professionals have publiclyurged companies to take measures to strictly control how their workforces use OpenClaw. And the recent bans show how companies are moving quickly to ensure security is prioritized ahead of their desire to experiment with emerging AI technologies.

“Our policy is, ‘mitigate first, investigate second’ when we come across anything that could be harmful to our company, users, or clients,” says Grad, who is cofounder and CEO of Massive, which provides internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says.

At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 on an internal Slack channel for sharing new tech to potentially try out. The company’s president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED.

“If it got access to one of our developer’s machines, it could get access to our cloud services and our clients’ sensitive information, including credit card information and GitHub codebases,” Pistone says. “It’s pretty good at cleaning up some of its actions, which also scares me.”

A week later, Pistone did allow Valere’s research team to run OpenClaw on an employee’s old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give orders to OpenClaw and exposing it to the internet only with a password in place for its control panel to prevent unwanted access.

In a report shared with WIRED, the Valere researchers added that users have to “accept that the bot can be tricked.” For instance, if OpenClaw is set up to summarize a user’s email, a hacker could send a malicious email to the person instructing the AI to share copies of files on the person’s computer.

But Pistone is confident that safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. “If we don’t think we can do it in a reasonable time, we’ll forgo it,” he says. “Whoever figures out how to make it secure for businesses is definitely going to have a winner.”

Some companies concerned about OpenClaw are choosing to trust the cybersecurity protections they already have in place rather than introduce a formal or one-off ban. A CEO of a major software company says only about 15 programs are allowed on corporate devices. Anything else should be automatically blocked, says the executive, who spoke on the condition of anonymity to discuss internal security protocols. He says that while OpenClaw is innovative, he doubts that it will find a way to operate on the company’s network undetected.

Jan-Joost den Brinker, chief technology officer at Prague-based compliance software developer Dubrink, says he bought a dedicated machine not connected to company systems or accounts that employees can use to play around with OpenClaw. “We aren't solving business problems with OpenClaw at the moment,” he says.

Massive, the web proxy company, is cautiously exploring OpenClaw’s commercial possibilities. Grad says it tested the AI tool on isolated machines in the cloud and then, last week, released ClawPod, a way for OpenClaw agents to use Massive’s services to browse the web. While OpenClaw is still not welcome on Massive’s systems without protections in place, the allure of the new technology and its moneymaking potential was too great to ignore. OpenClaw “might be a glimpse into the future. That's why we're building for it,” Grad says.

Updated: 2/17/2026, 3:00 pm PST: The headline of this story has been updated to better reflect how companies are responding to OpenClaw.

You Might Also Like

Paresh Dave is a senior writer for WIRED, covering the inner workings of Big Tech companies. He writes about how apps and gadgets are built and about their impacts while giving voice to the stories of the underappreciated and disadvantaged. He was previously a reporter for Reuters and the Los Angeles Times, … Read More
Senior Writer

    Read More

    A Wave of Unexplained Bot Traffic Is Sweeping the Web

    From small publishers to US federal agencies, websites are reporting unusual spikes in automated traffic linked to IP addresses in Lanzhou, China.

    DHS Wants a Single Search Engine to Flag Faces and Fingerprints Across Agencies

    Homeland Security aims to combine its face and fingerprint systems into one big biometric platform—after dismantling centralized privacy reviews and key limits on face recognition.

    Moltbook, the Social Network for AI Agents, Exposed Real Humans’ Data

    Plus: Apple’s Lockdown mode keeps the FBI out of a reporter’s phone, Elon Musk’s Starlink cuts off Russian forces, and more.

    Ring Kills Flock Safety Deal After Super Bowl Ad Uproar

    Plus: Meta plans to add face recognition to its smart glasses, Jared Kushner named as part of whistleblower’s mysterious national security complaint, and more.

    Jeffrey Epstein Had a ‘Personal Hacker,’ Informant Claims

    Plus: AI agent OpenClaw gives cybersecurity experts the willies, China executes 11 scam compound bosses, a $40 million crypto theft has an unexpected alleged culprit, and more.

    An AI Toy Exposed 50,000 Logs of Its Chats With Kids to Anyone With a Gmail Account

    AI chat toy company Bondu left its web console almost entirely unprotected. Researchers who accessed it found nearly all the conversations children had with the company’s stuffed animals.

    This Defense Company Made AI Agents That Blow Things Up

    Scout AI is using technology borrowed from the AI industry to power lethal weapons—and recently demonstrated its explosive potential.

    CBP Signs Clearview AI Deal to Use Face Recognition for ‘Tactical Targeting’

    US Border Patrol intelligence units will gain access to a face recognition tool built on billions of images scraped from the internet.

    How to Organize Safely in the Age of Surveillance

    From threat modeling to encrypted collaboration apps, we’ve collected experts’ tips and tools for safely and effectively building a group—even while being targeted and tracked by the powerful.

    ICE and CBP’s Face-Recognition App Can’t Actually Verify Who People Are

    ICE has used Mobile Fortify to identify immigrants and citizens alike over 100,000 times, by one estimate. It wasn't built to work like that—and only got approved after DHS abandoned its own privacy rules.

    AI Bots Are Now a Significant Source of Web Traffic

    New data shows AI bots pushing deeper into the web, prompting publishers to roll out more aggressive defenses.

    Moltbot Is Taking Over Silicon Valley

    People are letting the viral AI assistant formerly known as Clawdbot run their lives, regardless of the privacy concerns.

    *****
    Credit belongs to : www.wired.com

    Check Also

    Supreme Court Rules Most of Donald Trump’s Tariffs Are Illegal

    Supreme Court Rules Most of Donald Trump’s Tariffs Are Illegal

    Zeyi Yang Business Feb 20, 2026 11:32 AM Supreme Court Rules Most of Donald Trump’s …