Recent reports indicate that Anthropic's AI assistant Claude has been at the center of a wave of account suspensions, bringing the phenomenon of AI deplatforming into focus as a new challenge of the digital era. Developer Peter Steinberger had his personal account banned for "suspicious signals" while testing the compatibility of a third-party tool, OpenClaw, despite being aware beforehand that the system's classifier was prone to false positives. Ordinary users have reported similar experiences: one self-described long-term paying subscriber said their account was suspended even though they had exceeded their usage limits and paid roughly 20 times the standard monthly fee.
On the r/Anthropic subreddit, posts with titles like "Claude Max subscription silently revoked" have surged. Users complain of accounts being permanently banned without explanation, some after being charged as much as $300. The episode suggests that AI service providers, in their efforts to combat abuse, may rely on automated systems prone to erroneous judgments, raising user concerns about transparency and fairness. It also coincides with a period of rapid growth in Claude's user base, underscoring the urgency of robust platform governance.


