Prominent AI safety company Anthropic recently experienced a source code leak incident. The company accidentally made internal source code public due to a packaging issue during the release of its popular AI coding assistant, Claude Code. Anthropic officially attributed this to human error and emphasized it was not a security breach. However, the event has still sparked external scrutiny regarding the operational security of this AI developer, whose primary brand promise is safety.
The scale of the leaked code is substantial, involving approximately 1,900 files and 512,000 lines of code related to the Claude Code agent. Cybersecurity firm Straiker analyzed that attackers can now study the data flow of Claude Code's four-stage context management pipeline and potentially design malicious payloads capable of persisting within a session. This marks Anthropic's second security oversight within just a few days.

