Anthropic PBC, an AI company that markets itself on security, recently released the internal source code of its popular Claude AI coding assistant by accident. According to the company's statement, the incident occurred on April 1, 2026, due to "human error in the release packaging process" and was not the result of a security vulnerability. Anthropic emphasized that no sensitive customer data or credentials were exposed, but the mistake nonetheless raised concerns in the industry about the company's operational security and internal controls.
According to Bloomberg, the leaked material comprised approximately 1,900 files and 512,000 lines of code, primarily related to the Claude Code system. It was Anthropic's second security mishap in a single week, following a similar incident days earlier. The company apologized in an emailed statement and said it is investigating the cause. The developer community quickly began analyzing the leaked code for clues about Anthropic's future technical direction, underscoring public interest in the internal workings of major AI companies.