“Yes, folks, it will be difficult to guarantee that OpenClaw will work reliably with Anthropic models in the future,” OpenClaw creator Peter Steinberger wrote on X early Friday morning, posting a photo of a message from Anthropic saying his account had been suspended for “suspicious” activity.
The ban did not last long. Hours after the post went viral, Steinberger said his account had been restored. Among the hundreds of comments, many of them veering into conspiracy theory given that Steinberger is currently employed by Anthropic's rival OpenAI, some came from Anthropic engineers. One engineer told the well-known developer that Anthropic has never banned anyone for using OpenClaw and offered to help.
It's unclear whether that intervention is what restored the account. (We've asked Anthropic about it.) But the whole thread was enlightening on several levels.
For quick background, the ban comes on the heels of last week's news that Anthropic's Claude subscription will no longer cover "third-party harnesses, including OpenClaw," as the AI model maker put it.
OpenClaw users must instead pay for their usage separately through Claude's API. Essentially, Anthropic, which offers its own agent, Cowork, is now charging a "claw tax." Steinberger said he was using the API in accordance with the new rules but was banned anyway.
Anthropic said it made the pricing change because the subscription was not built to accommodate the "usage patterns" of agents. Agents can be far more computationally intensive than chat prompts or simple scripts because they run continuous inference loops, may automatically repeat and retry tasks, and may invoke many third-party tools.
But Steinberger didn't accept that explanation. After Anthropic changed its pricing, he posted: "It's funny how the timing aligns. First they copy popular features into their own closed harness, then they shut out open source." Though he didn't say so explicitly, he may have been referring to features added to Anthropic's Cowork agent, such as Claude Dispatch, which lets users remotely control agents and assign them tasks. Dispatch rolled out a few weeks before Anthropic changed its pricing policy for OpenClaw.
Steinberger’s dissatisfaction with Anthropic surfaced again on Friday.
"You had a choice and you made the wrong choice," one person wrote, implying that Steinberger was partly to blame for taking a job at OpenAI instead of Anthropic. Steinberger responded: "One of them welcomed me and one of them sent me legal threats."
Ah.
When several people asked why he was using Claude instead of his employer's models, he explained that he uses it only for testing, to ensure that OpenClaw updates don't break functionality for Claude users.
As he put it: "You have to differentiate between two things: my work at the OpenClaw Foundation, which aims to make OpenClaw work best for *any* model provider, and my work at OpenAI, which supports future product strategy."
Several people also pointed out that Claude needs testing precisely because it is a more popular model choice among OpenClaw users than ChatGPT. When asked about that in light of Anthropic's pricing change, Steinberger said, "We're looking into it." (Which is itself a clue as to what his work at OpenAI involves.)
Steinberger did not respond to a request for comment.
