If you’ve been following AI developer communities over the past two weeks, you’ve witnessed one of the most contentious battles in recent tech history. Anthropic banned OpenClaw users in January. Google just followed with an even more aggressive crackdown this week. And the developer community is absolutely furious.
Here’s the situation: OpenClaw is an open-source AI agent framework that lets you automate tasks, write code, manage emails, and basically build your own personal AI assistant. It’s one of the most popular AI developer tools of 2026, with 157,000+ GitHub stars and millions of deployed instances worldwide.
And as of February 2026, both Anthropic (Claude) and Google (Antigravity/Gemini) have effectively killed the ability to use their AI models with it, at least if you’re trying to use consumer subscriptions instead of expensive API keys.
The result? Paying customers, some spending $200-$250/month on premium AI subscriptions, are getting permanently banned without warning. Developers are calling it “draconian.” OpenClaw’s creator called Google’s approach so harsh that he’s removing Antigravity support entirely and warned users to “be careful out there.”
Let me explain what actually happened, why these companies are taking such aggressive action, and what this tells us about the future of AI access.
What Is OpenClaw and Why Does Everyone Love It?
Before we get into the controversy, you need to understand what OpenClaw actually is and why developers are so passionate about it.
OpenClaw is an open-source AI agent framework that connects large language models (like Claude, GPT, Gemini) to tools and workflows, enabling autonomous operation. Think of it as an operating system for AI agents.
What can OpenClaw do?
- Automate your email inbox (read, summarize, draft responses, send)
- Manage your calendar and schedule meetings
- Write code, debug errors, run tests autonomously
- Control your computer (open apps, click buttons, fill forms)
- Integrate with Telegram, Slack, and other messaging platforms
- Run 24/7 as an always-on personal assistant
- Execute multi-step workflows without human intervention
The power comes from its modular “skill” system. The community builds plugins that give OpenClaw new capabilities: accessing APIs, interfacing with specific software, performing custom automations. It’s like an app store for AI agent abilities.
Peter Steinberger, OpenClaw’s creator (who was recently hired by OpenAI), built it as an open alternative to closed agent platforms. The philosophy: AI agents should be flexible, user-controlled, and not locked to any single AI provider.
Users loved OpenClaw because it gave them the “AI assistant” experience tech companies have been promising for years, except actually working, today, with the AI models they already pay for.
Until it all came crashing down.
The Anthropic Crackdown: January 9, 2026
Let’s start with Anthropic, because they moved first.
On January 9, 2026, without any public announcement or warning to users, Anthropic deployed server-side blocks that prevented Claude subscription OAuth tokens from working in third-party tools like OpenClaw.
Developers woke up to broken workflows. No error explanation. Just silent failures.
Here’s how it worked before the ban:
The OpenClaw-Claude Setup:
- User clicks “Login with Claude” in OpenClaw
- OpenClaw redirects to Claude.ai for authentication
- User authorizes with their Claude Pro ($20/month) or Max ($200/month) subscription
- OpenClaw captures the OAuth token
- OpenClaw routes all agent requests through that token to Anthropic’s servers
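The flow above can be sketched roughly as follows. Every URL, endpoint, and field name here is a hypothetical placeholder for illustration, not Anthropic’s actual OAuth API:

```python
import json
import urllib.request

# Illustrative sketch of the OAuth-token flow described above.
# The URLs and field names are hypothetical placeholders, not real endpoints.
AUTH_URL = "https://example-auth.invalid/oauth/authorize"  # hypothetical
API_URL = "https://example-api.invalid/v1/messages"        # hypothetical

def build_login_redirect(client_id: str, redirect_uri: str) -> str:
    """Steps 1-2: the tool sends the user to the provider's login page."""
    return f"{AUTH_URL}?client_id={client_id}&redirect_uri={redirect_uri}"

def route_request(oauth_token: str, prompt: str) -> urllib.request.Request:
    """Steps 4-5: the captured token is attached to every agent request,
    so all usage is billed against the flat-rate subscription."""
    body = json.dumps({"prompt": prompt}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Authorization": f"Bearer {oauth_token}"},
    )

req = route_request("token-from-step-4", "Summarize my inbox")
print(req.get_header("Authorization"))  # Bearer token-from-step-4
```

The key point is step 5: once the token is captured, the provider’s servers see requests that look like they come from the subscription, not from a metered API key.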
This gave users effectively unlimited (or very high) usage at the flat subscription price, without per-token API billing.
For power users running 24/7 autonomous agents, this was economically transformative. A $200/month Max subscription could replace $1,000+ in API costs.
The Economics That Broke the Model:
Here’s the math that made Anthropic pull the plug:
- Claude Opus 4.6 API pricing: $15 per million input tokens, $75 per million output tokens
- An active agentic coding session running Opus can burn through millions of tokens per day
- A Claude Max subscription costs $200/month flat rate
When third-party tools like OpenClaw removed the built-in rate limits that Claude Code enforces, users could consume API resources worth far more than their subscription fee.
The “Ralph Wiggum technique” made this worse. Developers were trapping Claude in autonomous self-healing loops that ran overnight, feeding test failures back into the context window until all tests passed. Anthropic even shipped an official Ralph Wiggum plugin for Claude Code (where they control rate limits), but third-party tools ran the same loops without guardrails.
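A loop of this kind can be sketched like so; `run_tests` and `apply_model_patch` are hypothetical stand-ins for the real tooling, and the iteration cap is exactly the guardrail many third-party setups omitted:

```python
# Sketch of a "Ralph Wiggum"-style self-healing loop as described above.
# run_tests and apply_model_patch are hypothetical stand-ins supplied by
# the caller; the loop itself is what burns tokens unattended overnight.
from typing import Callable, Tuple

def ralph_wiggum_loop(
    run_tests: Callable[[], Tuple[bool, str]],
    apply_model_patch: Callable[[str], None],
    max_iterations: int = 100,  # the guardrail third-party tools often lacked
) -> bool:
    """Feed test failures back to the model until all tests pass."""
    for _ in range(max_iterations):
        passed, failures = run_tests()
        if passed:
            return True
        apply_model_patch(failures)  # one model call (and its tokens) per lap
    return False
```

With no cap and a flaky test suite, each lap is another multi-thousand-token model call, all night long, all billed to a flat-rate subscription.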
The result: subscription arbitrage. A $200 flat-rate plan being turned into a backend for high-volume, multi-agent automation.
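To put rough numbers on that arbitrage, here is a back-of-envelope calculation using the Opus 4.6 prices quoted above; the daily token volumes are illustrative assumptions, not measured figures:

```python
# Back-of-envelope arbitrage math using the Opus 4.6 prices quoted above.
# The daily token volumes are invented for illustration (a heavy agent).
INPUT_PRICE = 15 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 75 / 1_000_000   # dollars per output token

daily_input_tokens = 5_000_000   # assumed
daily_output_tokens = 1_000_000  # assumed

daily_api_cost = (daily_input_tokens * INPUT_PRICE
                  + daily_output_tokens * OUTPUT_PRICE)
monthly_api_cost = daily_api_cost * 30
subscription = 200  # Claude Max flat rate

print(f"API-equivalent usage: ${monthly_api_cost:,.0f}/month")
print(f"Subscription price:   ${subscription}/month")
print(f"Gap:                  {monthly_api_cost / subscription:.1f}x")
```

Under these assumptions the workload is worth roughly $4,500/month at API rates, more than twenty times the subscription fee, which is why the flat-rate math collapses.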
The Official Policy Update:
Six weeks after the technical block, on February 17-18, 2026, Anthropic published formal documentation:
“Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service including the Agent SDK is not permitted and constitutes a violation of the Consumer Terms of Service.”
Read that carefully: even Anthropic’s own Agent SDK is off-limits with subscription tokens.
The documentation makes it explicit: OAuth authentication is “intended exclusively for Claude Code and Claude.ai.” Everything else requires API keys through the Claude Console, billed per token.
The Google Ban Wave: February 2026 Even More Aggressive
If you thought Anthropic was harsh, Google took it to another level.
Starting around February 12-14, 2026, Google began permanently disabling Antigravity access for users who connected via OpenClaw, often without any prior warning.
What users saw:
403 PERMISSION_DENIED
"This service has been disabled in this account for violation of Terms of Service."
Or:
"Gemini has been disabled in this account for violation of Terms of Service. If you believe this is an error, please contact Google Cloud Support."
No warning email. No grace period. No obvious appeal path.
The Collateral Damage: Paying Customers Getting Burned
Here’s where it gets truly controversial: many of the banned accounts belonged to Google AI Ultra subscribers paying $250/month for premium access.
These weren’t free-tier users gaming the system. These were paying customers at the highest subscription tier Google offers.
Reports flooded Google’s support forums and Reddit:
- “$250/mo Ultra Subscriber Banned Without Warning”
- “Account banned for using OpenClaw, zero communication from Google”
- “I pay for Google AI Ultra. Why am I being treated like a criminal?”
Even OpenClaw creator Peter Steinberger got hit. He posted on X (formerly Twitter):
“Pretty draconian from Google. Be careful out there if you use Antigravity. I guess I’ll remove support. Even Anthropic pings me and is nice about issues. Google just… bans?”
Google’s Defense: “Malicious Usage”
Varun Mohan, head of Google Antigravity and former Windsurf CEO, defended the action on X:
“We’ve been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended.”
He added that there’s “a path for unaware users to potentially regain access” but that path hasn’t been clearly communicated, and many banned users report being stonewalled by support.
The Three Reasons Google Cited:
1. OAuth Misuse: Antigravity OAuth tokens are intended for first-party Google applications. Using them in third-party tools violates Terms of Service.
2. Service Degradation: Automated loops and high-frequency calls generated by AI agents were degrading service quality for legitimate users.
3. Economic Arbitrage: Users were routing consumer-tier subscriptions into professional developer workflows, bypassing more expensive per-token API pricing.
The Security Angle: CVE-2026-25253
There’s a legitimate security concern that both companies reference but don’t emphasize publicly: OpenClaw has serious security vulnerabilities.
In January 2026, researchers disclosed CVE-2026-25253, a critical remote code execution vulnerability in OpenClaw with a CVSS score of 8.8 (High).
The vulnerability allowed attackers to execute arbitrary code on systems running OpenClaw through malicious “skills” (community-built plugins).
The Security Trifecta That Scares AI Companies:
- Broad Data Access: OpenClaw agents can access files, databases, emails, calendars: essentially everything the user can access
- Untrusted Community Skills: Anyone can publish skills, many aren’t vetted, some are malicious
- Outbound External Communications: Agents can send emails, post to social media, call APIs, all without user supervision
Put those together and you have a recipe for:
- Data exfiltration
- Account compromise
- Phishing campaigns using legitimate user credentials
- Supply chain attacks via malicious skills
Matthew Berman, a prominent AI YouTuber, called OpenClaw a security “dumpster fire” in a recent video examining the controversy.
AI companies aren’t just worried about economic arbitrage. They’re worried about their models being used in compromised systems to conduct attacks.
The OpenAI Twist: The Odd One Out
Here’s where the story gets fascinating: OpenAI is the exception.
While Anthropic and Google cracked down hard, OpenAI went the opposite direction:
January 2026: OpenAI acquired OpenClaw and hired creator Peter Steinberger to lead their personal agents division.
Official stance: OpenAI whitelisted OpenCode (a related project) and continues to allow third-party tools to use OpenAI subscription credentials.
Sam Altman personally announced Steinberger’s hiring: “Peter will drive the next generation of personal agents at OpenAI.”
Why the difference?
Theory 1: Competitive Positioning. OpenAI sees an opportunity to capture the developer community while competitors alienate them.
Theory 2: Different Economics. OpenAI’s pricing and margin structure might make subscription arbitrage less painful.
Theory 3: Strategic Vision. OpenAI believes the future of AI is open, agent-driven workflows, and they want to lead that transformation rather than fight it.
Whatever the reason, the result is clear: if you want to use OpenClaw with a commercial AI provider in 2026, OpenAI is your only option.
Yuchen Jin observed on X: “The coding LLM war escalated after OpenAI acquired the creator of OpenClaw. I noticed Anthropic and Google blocked OpenCode from using their Pro plan subscriptions… Only OpenAI seems generous here.”
The Developer Community Response: Fury and Workarounds
The reaction from developers has been overwhelmingly negative, and not just because their tools broke.
Primary Complaints:
1. Zero Communication: Both Anthropic and Google deployed technical blocks first, explained later (or not at all).
2. No Grace Period: Paying customers lost access instantly with no chance to transition.
3. Inconsistent Messaging: Anthropic said publicly “nothing changed” while privately banning accounts. Google provided minimal explanation.
4. Treating Customers Like Criminals: $200-250/month subscribers getting instant permanent bans without warning or appeal.
5. Hypocrisy: The same companies promoting “AI for everyone” are aggressively restricting how people can use AI they pay for.
The Workarounds That Emerged:
Developers being developers, workarounds appeared within days:
Option 1: Switch to Open-Source Models
- Qwen 3.5 (Alibaba): $0.40/$2.40 per million tokens, Apache 2.0 license
- DeepSeek V3.2: $0.28/$0.42 per million tokens, MIT license
- GLM-5 (Zhipu AI): $1.00/$3.20 per million tokens
- Kimi K2.5 (Moonshot AI): $0.45/$2.25 per million tokens
For many use cases, these models are competitive with Claude and Gemini at a fraction of the cost.
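As a rough illustration of “a fraction of the cost,” here is one assumed monthly workload priced at the per-million-token rates listed above; the token volumes are invented for illustration only:

```python
# One assumed workload priced at the per-million-token rates quoted in
# this article. Token volumes are illustrative assumptions, not data.
PRICES = {  # model: ($/M input tokens, $/M output tokens)
    "Claude Opus 4.6 (API)": (15.00, 75.00),
    "Qwen 3.5": (0.40, 2.40),
    "DeepSeek V3.2": (0.28, 0.42),
    "GLM-5": (1.00, 3.20),
    "Kimi K2.5": (0.45, 2.25),
}

monthly_input_m = 50.0   # assumed: 50M input tokens per month
monthly_output_m = 10.0  # assumed: 10M output tokens per month

costs = {
    model: monthly_input_m * p_in + monthly_output_m * p_out
    for model, (p_in, p_out) in PRICES.items()
}
for model, cost in costs.items():
    print(f"{model:<24} ${cost:>8,.2f}/month")
```

At these rates the same workload that costs $1,500/month on the Claude API comes in between roughly $18 and $82 on the open models, which is the whole appeal.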
Option 2: Use API Keys (But Pay More)
The official path: buy API keys through Claude Console or Google Cloud and pay per-token. This is what the companies want, but it can cost 5-10x more for heavy users.
Option 3: Run Everything Locally
Download open-weight models, run on your own hardware. Zero ongoing costs, complete control, but requires technical expertise and capable GPUs.
Option 4: Creative Account Strategies
Some users report running multiple accounts, staying under detection thresholds, or using VPNs to avoid bans. This is explicitly against ToS and could result in permanent account loss.
One Medium article detailed: “Anthropic Just Killed My $200/Month OpenClaw Setup. So I Rebuilt It for $15.” The solution: two $5/month VPS instances running Kimi K2.5 and MiniMax M2.5 as cheap alternatives.
The Bigger Picture: What This Controversy Actually Reveals
Step back from the specific OpenClaw drama and think about what’s really happening here.
1. The AI Access Model Is Broken
AI companies are trying to serve two incompatible business models simultaneously:
Consumer Model: Flat-rate subscriptions with generous (but vague) usage limits for “normal” human use.
Developer Model: Per-token API pricing designed for programmatic access and heavy workloads.
The problem: there’s a massive gray area between “normal consumer use” and “heavy developer workload” where AI agents live.
Is an AI agent checking your email every hour “consumer use”? What about writing code for 4 hours straight? Running overnight test loops? Managing your calendar automatically?
These are the use cases AI companies marketed to consumers. But when people actually built them using tools like OpenClaw, the companies freaked out about the cost and shut them down.
2. Terms of Service Are Vague Until Enforced
Anthropic’s ToS has prohibited automated access for over two years. But the rule went unenforced and was interpreted loosely until January 2026.
Google’s ToS technically prohibits third-party OAuth use, but the prohibition was never actively enforced until February 2026.
Developers built businesses, workflows, and dependencies on access that was technically against ToS but practically allowed. Then enforcement happened suddenly, with no transition period.
This creates a trust problem. How can developers build on platforms when the rules can change overnight with no warning?
3. Economic Incentives Don’t Align
Users want: Affordable, flexible AI agents that work how they want.
AI Companies want: Predictable revenue, controlled usage, margin protection.
These goals are fundamentally incompatible when the technology enables users to extract far more value than they pay for.
The subscription model worked when AI was primarily conversational: a few messages per day, bounded usage patterns. It breaks down when AI becomes agentic and runs autonomously for hours or days.
4. Open Source Is The Release Valve
When commercial providers crack down, developers don’t stop building agents. They switch to open-source alternatives.
Qwen 3.5, DeepSeek, GLM-5, and other open models are competitive enough for most use cases and don’t have usage restrictions (because you run them yourself).
The aggressive crackdowns by Anthropic and Google might accelerate the shift to open-source AI rather than protecting commercial moats.
5. OpenAI’s Strategy Looks Smarter
By acquiring OpenClaw and allowing third-party access, OpenAI positioned itself as the developer-friendly AI company while competitors alienated their most passionate users.
The developer community has long memories. The goodwill OpenAI just bought by welcoming OpenClaw while others banned it could pay dividends for years.
What Should You Actually Do About This?
If you’re affected by these bans or considering using OpenClaw, here’s practical guidance:
If You’re Currently Using OpenClaw with Claude or Gemini:
Stop immediately. Both companies have made it clear this violates ToS. Continuing risks permanent account loss, including potentially losing access to your email and other Google services.
The official OpenClaw documentation now warns users explicitly about this and recommends using API keys or alternative providers.
If You Want to Use AI Agents:
Option 1: Use Official Tools. Claude Code, GitHub Copilot, or Google Antigravity within their official apps. You’ll stay within ToS but lose flexibility.
Option 2: Switch to OpenAI. Currently the only major provider allowing third-party agent frameworks with subscription credentials.
Option 3: Use API Keys. Pay per token through official APIs. More expensive but fully supported.
Option 4: Go Open Source. Run Qwen 3.5, DeepSeek, or similar models locally or via affordable API providers. Requires more technical setup but gives complete control.
If You’re Building AI Products:
Don’t build dependencies on gray-area access methods. The OpenClaw situation demonstrates that access that’s technically against ToS but practically allowed can disappear instantly.
Design for API-based access from day one. Yes, it’s more expensive, but it’s the only sustainable path.
Consider open-source models seriously. They’re competitive, improving rapidly, and don’t have the rug-pull risk.
The Bottom Line: A Cautionary Tale About Platform Power
The OpenClaw controversy is ultimately a story about platform power and who controls AI access.
Anthropic and Google built amazing AI models. Developers built tools that made those models dramatically more useful. Users loved those tools and paid for subscriptions to use them.
Then the companies realized the economics didn’t work, changed the rules overnight, and banned paying customers without warning or recourse.
Is this defensible? From a business perspective, yes. Subscription arbitrage was costing them money. Security vulnerabilities in OpenClaw created real risks. They have the legal right to enforce ToS.
But from a developer relations perspective? It’s a disaster. The community feels betrayed. Trust is shattered. And the message is clear: build on platforms that you control, not platforms that control you.
The developers most hurt by this aren’t pirates or freeloaders. They’re paying customers, often at the highest tiers, who built sophisticated workflows around AI capabilities the companies themselves marketed.
When those workflows broke overnight with no warning, the immediate response was anger. The long-term response will be migration to alternatives: OpenAI for those who stay commercial, open-source for those who want sovereignty.
The irony: by cracking down aggressively to protect short-term margins, Anthropic and Google might have accelerated the shift to open-source AI they were trying to prevent.
OpenClaw isn’t going away. Peter Steinberger is now at OpenAI working on personal agents. The framework itself is open-source and forked hundreds of times. The demand for flexible, powerful AI agents hasn’t decreased.
But the trust that developers had in Anthropic and Google? That might be gone for good.
And in the long run, that might cost these companies far more than the subscription arbitrage ever did.
Update: As of February 23, 2026, OpenClaw has removed Google Antigravity from its list of supported providers and recommends users switch to OpenAI, API keys, or open-source alternatives. Anthropic’s ban remains in full effect with no indication of policy changes. The situation continues to evolve.

