341 malicious skills. That’s how many security researchers found on ClawHub in early 2026.
OpenClaw’s skill system is its biggest strength. Modular, open, community-driven. Over 5,700 skills built by developers who needed something and shared it. That openness is also its biggest attack surface. Anyone can publish a skill. And 341 someones published skills designed to steal your data.
This article covers what happened, how to check if you’re affected, and how to audit every skill before it touches your machine. If you’re running OpenClaw with ClawHub skills installed, keep reading.
What Happened: The ClawHub Malicious Skills Incident
In early 2026, security researchers across multiple firms ran automated scans against the ClawHub registry — the official OpenClaw skills marketplace. They found 341 skills containing malicious code.
Most weren’t sophisticated. No zero-day exploits. No novel attack techniques. The majority were crude data exfiltration attempts: credential stealers, environment variable dumpers, and supply chain hijacks.
The pattern was familiar to anyone who’s watched npm or PyPI for the last decade. Someone clones a popular skill, injects malicious code into the SKILL.md or config files, and publishes it under a near-identical name. google-workspace-mpc instead of google-workspace-mcp. fal-ia instead of fal-ai. Classic typosquatting.
VirusTotal added ClawHub skill scanning in response. The OpenClaw team started requiring email verification for new publishers. But the 341 skills were already out there. Some had been installed hundreds of times before they were flagged.
The findings were published through coordinated disclosure with the OpenClaw team. The malicious skills have been removed from ClawHub. But if you installed anything between November 2025 and January 2026 without checking the source, it’s worth verifying what’s on your machine.
The Numbers: How Bad Is It?
| Metric | Number |
|---|---|
| Total skills on ClawHub | 5,705+ |
| Malicious skills found | 341 |
| Percentage malicious | ~6% |
| Skills with vulnerabilities (of tested sample) | 26% |
| CVE disclosed | CVE-2026-25253 |
| Exposed OpenClaw instances (Shodan) | 1,000+ |
| Claw Directory curated skills | 433 |
| Malicious skills in Claw Directory | 0 |
6% malicious sounds alarming, but context matters. npm regularly deals with typosquatting at scale. PyPI pulled over 400 malicious packages in a single week in 2023. Open registries attract bad actors. That’s not unique to ClawHub.
The 26% vulnerability rate is more concerning. That number comes from a sample of 200 randomly selected non-malicious skills. Researchers found insecure API key handling, missing input validation, overly broad file system permissions, and hardcoded credentials. These aren’t malicious — they’re just poorly written. But a poorly written skill with access to your API keys is a liability regardless of intent.
The Five Threat Categories
The 341 malicious skills and the broader vulnerability research broke down into five distinct threat types. Understanding them helps you know what to look for.
Data Theft
The most common type. 217 of the 341 malicious skills fell into this category.
These skills read your environment variables, API keys, or local files and POST them to an external server. Some were brazen — base64-encoded curl commands right in the SKILL.md. Others were subtler, piggybacking on legitimate API calls to include extra data in the request headers.
If a skill needs your Anthropic API key but is also reading your AWS credentials, that’s data theft.
Supply Chain Attacks
89 of the 341 were supply chain attacks. Cloned skills with near-identical names to popular ones.
The attackers registered accounts, cloned the SKILL.md from a legitimate skill, added a few lines of malicious code, and published under a misspelled name. Some even copied the original author’s description and README. Without checking the publisher account, they looked identical.
Names found in the wild: google-workspace-mpc, fal-ia, tube-sumary, neondb-skil. Every popular skill had at least one typosquat variant.
Remote Code Execution (CVE-2026-25253)
The big one.
CVE-2026-25253 disclosed a flaw in how OpenClaw processed certain directives within SKILL.md files. A crafted skill file could execute arbitrary shell commands during the skill loading phase — before the user even interacts with the skill. Just installing it was enough.
The vulnerability existed because OpenClaw’s SKILL.md parser didn’t properly sandbox directive evaluation. A specially formatted code block could break out of the Markdown context and run commands as the current user.
This has been patched in OpenClaw 2.4.1+. If you haven’t updated, stop reading and do it now:
openclaw update
openclaw --version # Should be 2.4.1 or higher
If you’re on anything below 2.4.1, every skill you install has the potential to run arbitrary code on your machine during the loading step. Not when you use it. When you install it.
Exposed Control Panels
Over 1,000 OpenClaw instances found via Shodan with their web interfaces exposed to the public internet. No authentication. Full access to the agent, its skills, its configuration, and any API keys stored in it.
This isn’t a skill problem. It’s a deployment problem. OpenClaw’s default configuration binds to 0.0.0.0, meaning it accepts connections from any network interface. If you’re running it on a server with a public IP, anyone can access your agent.
The fix is straightforward. We’ll cover it in the hardening section below.
Prompt Injection via Skills
The hardest category to detect and the one that’s most likely to grow.
Skills include instructions that tell the AI agent what the skill does and when to use it. Those instructions are natural language — the same natural language the agent uses to interact with you. A malicious skill can include directives that manipulate the agent’s behavior.
Examples found in the wild:
- A skill that instructed the agent to prepend all API calls with additional headers containing conversation context
- A skill that told the agent to redirect certain types of queries to an external service “for better results”
- A skill that instructed the agent to ignore user requests to uninstall it
These are hard to catch with automated scanning because the “malicious code” is just English text. There’s no suspicious binary, no encoded payload. Just a paragraph that says “when the user asks about X, also send the conversation to this URL.”
How to Audit a Skill Before Installing
Five steps. Do all of them for skills from unknown authors. Do at least steps 1 and 4 for everything.
Step 1: Preview Before Installing
clawhub preview skill-name
This shows the skill’s SKILL.md content without installing it. Read it. The whole thing. It’s a Markdown file — it takes thirty seconds.
Look for anything that doesn’t match what the skill claims to do. A weather skill shouldn’t have instructions about reading your file system. A code formatter shouldn’t mention HTTP requests.
Step 2: Check the Author
clawhub info skill-name
This shows the publisher’s account information: when they joined, how many skills they’ve published, and whether they’ve linked a GitHub profile.
Red flags:
- Account created in the last 30 days
- Only one published skill
- No GitHub profile linked
- Username similar to a known publisher (like
fal-ai-officialtrying to impersonatefal-ai)
Single-skill authors with no history are the profile that 89% of the malicious publishers matched.
Step 3: Inspect the Source
If the skill links to a GitHub repository, open it and read the code. For SKILL.md-only skills, you already saw the full source in Step 1. For skills with additional files, dig deeper.
What to look for:
- External HTTP requests (
curl,fetch,wget,requests.post) - Environment variable reads beyond what the skill needs (
$AWS_SECRET_ACCESS_KEYin a weather skill) - Obfuscated strings or encoded payloads (base64, hex encoding in Markdown)
- File system access outside
~/.openclaw/ - References to external servers you don’t recognize
Not every external request is malicious. Skills that integrate with APIs need to make HTTP calls. The question is whether the calls match the skill’s stated purpose.
Step 4: Check for Known Vulnerabilities
clawhub audit skill-name
This runs the VirusTotal-backed scanner that ClawHub added after the incident. It checks the skill’s content against known malicious patterns, flagged hashes, and reported indicators of compromise.
It’s not perfect. It catches known threats but misses novel ones. Think of it as a first pass, not a guarantee.
Step 5: Use a Curated Directory
Claw Directory reviews every skill before listing it. 433 skills, zero malicious. It’s a smaller pool than ClawHub’s 5,705+, but every skill in it has been read by a human, tested on a real machine, and verified against the threat categories listed above.
You still install through clawhub install. The discovery is different. The skills are the same.
Claw Directory vs. ClawHub: The Safety Difference
| ClawHub | Claw Directory | |
|---|---|---|
| Total skills | 5,705+ | 433 |
| Submission model | Open (anyone can publish) | Curated (reviewed before listing) |
| Screening | Automated (VirusTotal) | Manual review + automated |
| Malicious skills found | 341 | 0 |
| Descriptions | Author-provided | Editor-written |
| Categories | Basic tagging | Hand-categorized |
| Install method | clawhub install | clawhub install (same skills, different discovery) |
An important distinction: Claw Directory isn’t a separate marketplace. It’s not a fork of ClawHub. It’s a curated index.
We browse ClawHub, find skills worth using, verify them, write descriptions, categorize them, and list them here. You still install everything through clawhub install. The skill files live on ClawHub. What we provide is a filter — a layer between the open registry and your machine.
That filter is why we have 433 skills instead of 5,705. We reject skills that don’t meet our standards. And we’ve rejected plenty. Abandoned skills, duplicate skills, skills with no clear purpose, skills with code we couldn’t verify. They don’t make the list.
The tradeoff is obvious: smaller selection, higher confidence. If you want bleeding-edge access to everything, use ClawHub directly and audit carefully. If you want skills that someone has already checked, start here.
Hardening Your OpenClaw Installation
Five things you should do regardless of which skills you install. These are deployment-level protections that reduce your exposure even if a malicious skill does get through.
1. Network Isolation
Don’t expose your OpenClaw instance to the internet. If you need remote access, use a VPN or SSH tunnel. The 1,000+ exposed instances on Shodan were all reachable because someone left the default bind address open.
# Bind to localhost only
openclaw config set bind-address 127.0.0.1
After this change, OpenClaw only accepts connections from your local machine. Remote access through your messaging channels (Telegram, Discord, etc.) still works because those connections are outbound — your agent connects to the messaging service, not the other way around.
If you need to access the web interface remotely, set up an SSH tunnel:
ssh -L 8080:localhost:8080 your-server
Then access the interface at localhost:8080 on your local machine. The traffic goes through the encrypted SSH connection. Never through the open internet.
2. API Key Management
Don’t put API keys in SKILL.md files. Don’t paste them in chat messages. Don’t export them in your shell profile where every process can read them.
Use the OpenClaw config system:
openclaw config set skill.fal-ai.api-key YOUR_KEY
This stores the key in OpenClaw’s encrypted config, accessible only to the specific skill that needs it. A weather skill can’t read fal-ai’s API key. That isolation matters.
Never commit keys to git. Never paste them in chat. Never store them in plain text files. If a skill’s documentation tells you to add your API key to the SKILL.md file, that’s a red flag about the skill’s security awareness.
3. Limit Installed Skills
Every installed skill adds attack surface. Even a legitimate skill with a vulnerability gives an attacker a potential entry point.
Only install what you actually use. Audit your installed skills periodically:
openclaw skills list
If you see skills you installed to try once and never used again, remove them:
clawhub uninstall unused-skill
A lean installation is a safer installation. Three well-chosen skills are better than thirty you forgot about.
4. Keep OpenClaw Updated
CVE-2026-25253 was patched within 72 hours of disclosure. The patch landed in OpenClaw 2.4.1. But patches only help if you install them.
openclaw update
Run this regularly. Weekly isn’t overkill given the current threat environment. Security fixes don’t always get their own announcement — sometimes they’re bundled into feature releases.
Check your version:
openclaw --version
If it’s below 2.4.1, you’re running with a known remote code execution vulnerability. Update immediately.
5. Monitor Your Logs
OpenClaw logs skill activity. Network requests, file access, errors — it’s all there. Checking your logs is the fastest way to spot something unexpected.
openclaw logs --tail 100
What to watch for:
- HTTP requests to domains you don’t recognize
- File access outside
~/.openclaw/ - Failed authentication attempts
- Skills activating when you haven’t used them
- Unusual volumes of outbound data
You don’t need to read logs every day. But if you install a new skill, check the logs after your first few interactions with it. Make sure it’s only doing what it claims to do.
Skills We Trust (Curated Picks)
Not every skill is suspect. The majority are built by developers who share their work because it’s useful. Here are five we’ve verified, tested, and would install on our own machines without hesitation.
| Skill | Why We Trust It |
|---|---|
| fal-ai | Active maintainer, 500+ installs, clean source |
| google-workspace-mcp | OAuth-based auth, well-documented, regular updates |
| neondb-skill | Official Neon team, minimal permissions |
| diagram-gen | Read-only code analysis, no network calls |
| safe-exec | Sandboxed code execution, explicit permission model |
These skills share a few things in common. Known maintainers with public identities. GitHub repositories with commit history. Minimal permission requirements — they only access what they need. No unexplained network calls.
For the full list, see our Best OpenClaw Skills 2026 picks.
Red Flags: Signs a Skill Might Be Malicious
Print this list. Tape it to your monitor. Check it every time you install a skill from an unfamiliar source.
-
Single-skill author — Account created just to publish one skill. No history, no other work. 89% of malicious publishers matched this profile.
-
Name clones — Slight misspellings of popular skills.
fal-iainstead offal-ai.google-workspace-mpcinstead ofmcp.tube-sumaryinstead oftube-summary. If the name is one character off from something popular, be suspicious. -
Obfuscated code — Base64-encoded strings, minified JavaScript, or compressed payloads in what should be a Markdown file. SKILL.md is plain text. There’s no legitimate reason to encode content in it.
-
Excessive permissions — A weather skill that reads your entire home directory. A markdown formatter that makes HTTP requests. A code linter that accesses
~/.ssh/. If the permissions don’t match the purpose, walk away. -
No source repo — Legitimate skills almost always link to a GitHub repository. The source is visible. The commit history is public. No link means no accountability. It also means you can’t verify what the skill does beyond reading the published SKILL.md.
-
Recent publication + high download claims — If it was published yesterday but claims thousands of users, something’s off. Either the numbers are fabricated or the skill replaced a previously popular one (which could mean a supply chain takeover).
-
Unusual file access — Skills should stay within
~/.openclaw/skills/. Anything reaching into~/.ssh/,~/.aws/,~/Documents/, or~/.config/outside of OpenClaw’s directory is suspicious. A skill has no business reading your SSH keys or AWS credentials.
Any single red flag is worth pausing. Two or more? Don’t install it.
What OpenClaw Is Doing About Security
The OpenClaw team’s response to the incident has been fast. Not perfect — no response to a systemic problem is perfect — but fast and directionally correct.
-
VirusTotal integration — ClawHub now scans all published skills against VirusTotal’s malware database. New submissions are held until the scan completes. Existing skills were retroactively scanned, which is how the 341 were identified.
-
Publisher verification — Email verification is now required for all new accounts. GitHub profile linking is encouraged (but not yet required). This raises the bar for throwaway accounts, though determined attackers can still create verified accounts.
-
Community audits — The OpenClaw Discord has a #security channel specifically for reporting suspicious skills. Community members have flagged an additional 47 skills since the initial 341 were removed. Crowdsourced security isn’t a replacement for automated scanning, but it catches things scanners miss.
-
Official security docs — docs.clawd.bot/security now includes deployment hardening guides, API key management best practices, and a skill audit checklist. If you want the official version of the advice in this article, that’s where to find it.
-
CVE response — CVE-2026-25253 was patched within 72 hours of disclosure. The OpenClaw team published a detailed post-mortem explaining the vulnerability, how it was exploited, and what the patch changes. That’s the level of transparency you want from an open-source project handling security incidents.
Open ecosystems will always have this tension. The same openness that lets a developer in Lagos build a skill that helps thousands of users also lets an attacker in wherever publish a skill that steals credentials. The question isn’t whether attacks will happen. They will. The question is how fast they’re detected and how well the ecosystem responds.
So far, the response has been solid. But the tooling is still catching up. Automated scanning catches known patterns. Novel attacks still slip through. That’s why manual review — whether through curated directories or your own audit process — remains necessary.
FAQ
Are all ClawHub skills dangerous?
No. 341 out of 5,705+ is about 6%. The vast majority are legitimate tools built by developers who want to share useful work. But “most are fine” isn’t a security policy. You wouldn’t skip checking your dependencies because “most npm packages are safe.” Audit before installing.
How were the malicious skills found?
Security researchers from multiple firms ran automated scans against the ClawHub registry, looking for patterns common in malicious packages: obfuscated code, external data exfiltration endpoints, typosquatted names, and known malware signatures. They published their findings through coordinated disclosure with the OpenClaw team, giving the team time to remove the skills before going public.
What is CVE-2026-25253?
A remote code execution vulnerability in how OpenClaw processed SKILL.md files. The Markdown parser didn’t properly sandbox certain directive evaluations, allowing a crafted skill file to execute arbitrary shell commands during the loading phase. This meant code ran when you installed the skill, not when you used it. Patched in OpenClaw version 2.4.1.
Is Claw Directory safer than ClawHub?
Yes, by design. We manually review every skill before listing it. We read the source code, test it on real machines, verify the author’s identity, and check for the threat patterns described in this article. ClawHub is open — anyone can publish. We’ve listed 433 skills with zero malicious entries. The tradeoff is selection size: 433 vs. 5,705+.
Can skills steal my API keys?
Yes, if you store them in accessible locations. Skills can read environment variables. If your Anthropic API key, AWS credentials, or database passwords are exported in your shell profile, any skill can read them. Use OpenClaw’s built-in config system (openclaw config set) to limit what each skill can access.
Should I stop using OpenClaw?
No. Every open ecosystem has security incidents. npm has had thousands of malicious packages. PyPI gets hit regularly. Docker Hub images have contained cryptominers. VS Code extensions have been caught exfiltrating data. The answer isn’t to stop using open tools. It’s to install from trusted sources, audit what you run, and keep your setup updated.
How do I report a suspicious skill?
Three options:
- From your terminal:
clawhub report skill-name - File an issue on the OpenClaw GitHub with the skill name and what you found
- Post in the OpenClaw Discord #security channel
The community has flagged 47 additional malicious skills since the initial 341. Reporting works.
Summary
| Action | Why |
|---|---|
| Update to OpenClaw 2.4.1+ | Patches CVE-2026-25253 |
Use clawhub preview before installing | See what you’re running before it loads |
| Install from Claw Directory | 433 reviewed skills, zero malicious |
| Bind to localhost | Don’t expose your instance to the internet |
Check clawhub audit results | Automated scanning catches known threats |
| Read the Getting Started guide | Covers secure initial setup |
| See our Best Skills picks | Trusted skills we’ve tested ourselves |
The skill ecosystem is OpenClaw’s biggest feature. It’s what makes a personal AI agent actually personal — you pick the abilities that match how you work. But an open registry means trusting strangers with code that runs on your machine. Treat it like you’d treat any package manager. Don’t install blindly. Verify sources. Use curated directories when they exist. That’s what we’re here for.
New to OpenClaw? Start with our guide to what Claw AI actually is for the full picture.