In November 2025, Austrian developer Peter Steinberger — best known as the founder of PSPDFKit — quietly published an open-source project called Clawdbot. It was a personal AI agent that ran locally on your machine and talked to you through WhatsApp, Telegram, or Signal. Nothing about the launch suggested what was coming next.
Within three months, the project — now renamed OpenClaw — had amassed over 247,000 GitHub stars, surpassing React's all-time count. Over 100,000 active installations were running worldwide. Moltbook, a managed hosting platform, was running 2.5 million agents. Enterprise adoption had passed 30%. And on February 14, 2026, Steinberger announced he would be joining OpenAI, handing the project to an open-source foundation.
This is the story of how one side project became the fastest-growing AI tool in history — and the reckoning it brought with it.
How It Started
Steinberger's original idea was deceptively simple: what if an AI assistant didn't live inside a special app, a browser tab, or a corporate dashboard? What if it just lived in your messaging app — the same place you already spend hours every day?
Clawdbot (the original name) was built as a single Node.js process called the Gateway. It connects to messaging platforms via channel adapters, routes messages through a lane queue for serial execution, and runs an agentic loop where the model proposes actions, executes them, then keeps going until the task is done.
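The agentic loop at the core of the Gateway can be sketched roughly like this. This is a simplified illustration, not OpenClaw's actual code: the `Model` interface, tool shapes, and step budget are all assumptions for the sake of the example.

```typescript
// Minimal agentic-loop sketch: the model proposes an action, the loop
// executes it, and the result is fed back until the model says it's done.
type ToolCall = { tool: string; args: string };
type ModelStep = { done: boolean; action?: ToolCall; answer?: string };

interface Model {
  // Given the transcript so far, propose the next action or finish.
  step(transcript: string[]): ModelStep;
}

function runAgentLoop(
  model: Model,
  tools: Record<string, (args: string) => string>,
  task: string,
  maxSteps = 10,
): string {
  const transcript = [`task: ${task}`];
  for (let i = 0; i < maxSteps; i++) {
    const step = model.step(transcript);
    if (step.done) return step.answer ?? "";
    const run = tools[step.action!.tool];
    if (!run) return `stopped: unknown tool ${step.action!.tool}`;
    // Execute the proposed tool and append the observation for the next turn.
    transcript.push(`${step.action!.tool} -> ${run(step.action!.args)}`);
  }
  return "stopped: step budget exhausted";
}
```

The serial "lane queue" mentioned above would sit in front of this loop, ensuring only one task per conversation runs at a time.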
Unlike ChatGPT or Claude's chat interfaces, OpenClaw doesn't just answer questions. It acts. It reads and sends emails. It manages your calendar. It executes shell commands. It browses the web. It writes and modifies files. And it remembers everything — stored as plain Markdown and YAML files in ~/.openclaw.
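A memory file in that directory might look something like this. The filename and contents are hypothetical; only the plain Markdown/YAML storage format in ~/.openclaw is from the description above.

```markdown
<!-- ~/.openclaw/memory/preferences.md (hypothetical example file) -->
## Preferences
- Timezone: Europe/Vienna
- Weekly revenue report: Fridays at 09:00
- Email sign-off: "Best, Anna"
```

Because the store is plain text, it can be inspected, versioned, or edited with any ordinary editor.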
The Name Changes
The journey to the name "OpenClaw" was turbulent. In January 2026, Anthropic filed a trademark complaint, arguing that "Clawdbot" was too close to "Claude." Steinberger renamed it to Moltbot. Three days later, another complaint forced a second rename. The project became OpenClaw — and the name stuck.
Ironically, the drama only fueled its visibility. A managed hosting platform called Moltbook launched at the same time, making it trivially easy for non-technical users to spin up their own OpenClaw agents. The combination of controversy, ease of access, and genuine utility created the conditions for explosive viral growth.
The Pros: Why Everyone Wants One
What makes OpenClaw different:
- Messaging-native: Works through WhatsApp, Telegram, Discord, Signal, and iMessage. No new app to download, no dashboards to learn.
- Persistent memory: Remembers every interaction, preference, and context permanently. No re-explaining your business every session.
- Truly autonomous: It plans and executes multi-step tasks independently — not just suggesting what to do, but actually doing it.
- Open source & free: No subscription fees. You only pay for the LLM API calls (Claude, GPT-4, or DeepSeek).
- Extensible skills system: Modular capabilities defined in SKILL.md files. Build your own or pull from ClawHub, the community marketplace.
- Heartbeat monitoring: The Gateway daemon runs at configurable intervals (30 minutes by default), proactively checking on tasks and sending notifications.
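A skill definition might look something like this. The frontmatter field names are illustrative assumptions, not OpenClaw's documented schema:

```markdown
---
name: stripe-report-check        # hypothetical skill metadata
description: Check whether the weekly Stripe revenue report is ready
tools: [http, email]
---
When asked about the revenue report, fetch the dashboard status page,
summarize the result, and notify the user once the report is available.
```

The body is natural-language instructions the agent follows; the frontmatter declares which tools the skill is allowed to invoke.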
For creators and small business owners, the appeal is obvious. Instead of juggling 10 SaaS tools, you message your OpenClaw agent: "Schedule my client meetings for next week, draft a follow-up email to the Q1 leads, and check if my Stripe revenue report is ready." It does all of it.
The Cons: The Security Crisis
But here's the thing about an AI agent that can read your email, run shell commands, and access your files: if someone else gains control of it, they own your digital life.
In February 2026, that's exactly what happened.
The ClawJacked Vulnerability
Security researchers discovered that OpenClaw, by default, binds to 0.0.0.0:18789 — meaning it listens on all network interfaces, not just localhost. A critical vulnerability dubbed ClawJacked allowed attackers to hijack OpenClaw agents through a malicious webpage. When a user visited an attacker-controlled site, JavaScript silently opened a WebSocket connection to the local OpenClaw gateway. Because the application auto-approved new device registrations on local connections, the attacker gained full control in milliseconds.
CVE-2026-25253 was assigned a CVSS score of 8.8. Two additional command injection vulnerabilities were disclosed the same day.
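The root cause is easy to state in code. A hardened gateway would reject WebSocket upgrades that lack a trusted Origin header or a pre-shared pairing token; the sketch below is an illustrative check, not OpenClaw's actual patch, and the trusted-origin list and token mechanism are assumptions.

```typescript
// Browsers always send an Origin header on WebSocket upgrades, so a missing
// or untrusted origin means the request came from an arbitrary webpage.
const TRUSTED_ORIGINS = new Set([
  "http://localhost:18789",
  "http://127.0.0.1:18789",
]);

function shouldAcceptUpgrade(
  origin: string | undefined,
  token: string | undefined,
  expectedToken: string,
): boolean {
  if (!origin || !TRUSTED_ORIGINS.has(origin)) return false;
  // Auto-approving new device registrations is what made ClawJacked possible;
  // require a pre-shared pairing token instead.
  return token === expectedToken;
}
```

With this check in place, the malicious-webpage scenario fails at the first line: the attacker's page sends its own origin, which is not in the trusted set.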
The Numbers Were Staggering
- 135,000+ OpenClaw instances found publicly exposed across 82 countries
- 15,000+ directly vulnerable to remote code execution
- 820+ malicious skills discovered on ClawHub out of 10,700 total — including deceptively named extensions like "solana-wallet-tracker" that installed keyloggers and Atomic Stealer malware
- One malicious skill was artificially ranked as the #1 most popular plugin
Threat actors were already running an "LLMjacking" campaign called Operation Bizarre Bazaar — scanning for exposed OpenClaw instances, hijacking them, and reselling access on the black market.
"Any system that reasons, decides, and acts on your behalf with broad access creates a new attack surface that traditional security tooling was not designed to observe." — Trend Micro Research
Shadow AI in the Enterprise
The scariest part? Employees were privately installing OpenClaw and connecting it to corporate Slack, Google Workspace, and internal systems — without their security teams knowing. A compromised agent inherits all organizational OAuth tokens, enabling lateral movement through the entire company. Traditional security tools can't distinguish legitimate agent automation from a compromise.
The Fix: Can Local LLMs Like Ollama Solve This?
One of the most common responses to the OpenClaw security crisis has been: "Just run it with a local LLM. Problem solved."
The idea is straightforward. OpenClaw connects to external models like Claude, GPT-4, or DeepSeek for its reasoning. Every prompt, every context window — your emails, calendar, files, business data — gets sent to a cloud API. If you replace that cloud model with a local LLM running through Ollama, your data never leaves your machine.
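In practice the swap might look like pointing the agent at Ollama's local OpenAI-compatible endpoint. The config key names below are assumptions (OpenClaw's real schema isn't shown here); the endpoint and port are Ollama's documented defaults.

```yaml
# Hypothetical provider config; key names are illustrative, not OpenClaw's schema.
model:
  provider: openai-compatible
  base_url: http://127.0.0.1:11434/v1   # Ollama's local OpenAI-compatible endpoint
  model: llama3.1:8b
  api_key: ollama                        # Ollama ignores the key, but most clients require one
```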
What it solves
- Data privacy: No prompts or business data sent to third-party APIs. Everything stays on-device.
- Cost: Zero API fees. Run as many queries as your hardware can handle.
- Offline operation: Your agent works without an internet connection.
- Regulatory compliance: Easier to meet GDPR, HIPAA, or industry-specific data residency requirements.
What it doesn't solve
The hard truth is that most of OpenClaw's security vulnerabilities had nothing to do with the LLM provider. The ClawJacked exploit targeted the Gateway's WebSocket binding. Command injection vulnerabilities existed in the agent's tool execution layer. Malicious skills on ClawHub ran arbitrary code regardless of which model powered the reasoning.
- The network exposure problem remains. If your Gateway binds to 0.0.0.0, a local LLM doesn't prevent hijacking.
- Prompt injection still works. A malicious email or webpage can embed instructions that trick the agent into executing harmful commands — whether the model is running in the cloud or on your GPU.
- Local model quality trade-off. Models you can run on consumer hardware (7B-70B parameters) are significantly less capable than Claude or GPT-4. Your agent becomes less reliable at complex multi-step tasks.
- Ollama itself has security issues. In January 2026, researchers found 175,000 publicly exposed Ollama instances across 130 countries — many misconfigured to listen on all network interfaces, the exact same mistake OpenClaw made. Nearly half had tool-calling enabled, meaning attackers could execute code through them.
The Balanced Approach
The real answer isn't "cloud vs. local" — it's defense in depth:
- Bind to localhost only (127.0.0.1), whether using cloud or local models
- Enable mandatory authentication on the Gateway
- Audit every ClawHub skill before installing
- Use Docker sandboxing to contain the agent's file and network access
- Run sensitive workloads through local models (Ollama + Llama, Mistral, etc.) while keeping cloud models for complex reasoning
- Update to the patched version (v2026.2.26 or later)
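Taken together, those measures might look like the following configuration. Every key name here is an assumption for illustration; consult the project's own documentation for the real settings.

```yaml
# Illustrative hardening settings; key names are assumptions, not OpenClaw's schema.
gateway:
  bind: 127.0.0.1          # loopback only, never 0.0.0.0
  port: 18789
  auth:
    required: true
    pairing_token_env: OPENCLAW_TOKEN
skills:
  auto_install: false      # review ClawHub skills before enabling them
sandbox:
  docker: true             # contain the agent's file and network access
```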
Running a local LLM is a meaningful privacy upgrade, but it's one layer of a much larger security story.
What Happens Next
OpenClaw's trajectory is a preview of what's coming for the entire AI agent ecosystem. The tools will get more capable. The convenience will be irresistible. And the attack surface will keep growing.
Steinberger's move to OpenAI and the project's transition to an open-source foundation suggest that OpenClaw isn't going away — it's becoming infrastructure. The question is whether security culture can keep pace with adoption.
At OlabForge, our own OpenClaw agent is built with these lessons in mind. Security, persistent memory, and zero-friction access aren't competing goals — they're the table stakes for any AI tool that earns the right to act on your behalf.
The rise of OpenClaw is far from over. The question is whether we'll build the guardrails fast enough to match the ambition.