
Anthropic, the AI research company behind the Claude language models, accidentally exposed a vast swath of its proprietary code on March 31, 2026, allowing anyone online to access and replicate one of the world's most advanced AI coding systems.
The leak, which involved more than 500,000 lines of code, spread rapidly across the internet despite the company's attempts to pull it back.
The news came after Anthropic released Claude Code version 2.1.88 to the npm package registry early Tuesday morning. The release mistakenly included a 59.8MB JavaScript source map, a file designed for internal debugging that can fully reconstruct the original code.
These files are normally kept private, but a minor error in the package's ignore settings allowed the map file to be published. Within hours, an intern and researcher named Chaofan Shou flagged the leak on X, where the download link attracted millions of views.
Claude code source code has been leaked via a map file in their npm registry!
— Chaofan Shou (@Fried_rice) March 31, 2026
Code: https://t.co/jBiMoOzt8G pic.twitter.com/rYo5hbvEj8
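The "ignore settings" error at the heart of the incident maps onto a well-known npm footgun: by default, `npm publish` includes files unless they are excluded via `.npmignore` or `.gitignore`, so one missed pattern ships everything. A safer convention (a general illustration, not Anthropic's actual configuration) is to whitelist publishable files explicitly in `package.json`:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "README.md"
  ]
}
```

With a `files` whitelist, a stray `dist/cli.js.map` is never packed; running `npm pack --dry-run` before publishing also lists exactly which files would go out, catching this class of mistake before release.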
An Anthropic spokesperson told Decrypt, 'Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again.'
Claude Code Leak Stirs Global Replication Effort
Once the code was public, attempts to remove it proved futile.
DMCA takedowns were issued against GitHub mirrors, but developers quickly began clean-room rewrites. One notable example came from Korean developer Sigrid Jin, who ported the code to Python using an AI orchestration tool and posted it under a new repository called Claw-Code. Within hours, the project had gained tens of thousands of stars, demonstrating the difficulty of containing digital information once it escapes into the wild.
Anthropic accidentally leaked their entire source code yesterday. What happened next is one of the most insane stories in tech history.
— Jeremy (@Jeremybtc) March 31, 2026
> Anthropic pushed a software update for Claude Code at 4AM.
> A debugging file was accidentally bundled inside it.
> That file contained… pic.twitter.com/4C4q3b6sOb
The leak revealed the inner workings of Claude Code, including its multi-agent coordination, permission logic, OAuth flows, and dozens of unreleased features. Among them were Kairos, a background daemon that consolidates memory overnight, and Buddy, a Tamagotchi-style AI assistant with stats tracking debugging, patience, and learning.
Buried in the code was also an 'Undercover Mode', designed to prevent the AI from exposing Anthropic's internal project names when contributing to open-source repositories.
Gergely Orosz, founder of The Pragmatic Engineer newsletter, noted, 'Anthropic accidentally leaked the TS source code of Claude Code. Repos sharing the source are taken down with DMCA, but a clean-room rewrite using Python violates no copyright and cannot be removed.'
This is either brilliant or scary:
— Gergely Orosz (@GergelyOrosz) March 31, 2026
Anthropic accidentally leaked the TS source code of Claude Code (which is closed source). Repos sharing the source are taken down with DMCA.
BUT this repo rewrote the code using Python, and so it violates no copyright & cannot be taken down! pic.twitter.com/uSrCDgGCAZ
Why This Leak Is Hard To Contain
Decentralisation makes the problem even harder. The original Claude Code has been copied to platforms like GitLawb, a decentralised git service, where it can't easily be taken down with traditional legal notices.
Other repositories have also collected Claude's internal system prompts, giving prompt engineers and AI researchers a clear look at how the model is trained and guided.
Legal questions make the situation even trickier. If parts of Claude Code were written by the AI itself, as Anthropic's CEO has suggested, it's unclear how much the company can claim as its own intellectual property. At the same time, decentralised storage and torrent sharing mean the leaked code can't realistically be erased.
For Anthropic and other AI firms, the leak raises urgent questions about how to protect their work while still contributing to a more open, collaborative research environment. To quote one commenter, 'Anthropic is now officially more open than OpenAI.'
Anthropic is now officially more open than OpenAI pic.twitter.com/T6Vgop0cSx
— Mete Polat (@metedata) March 31, 2026
As of April 1, 2026, the Claude Code source leak remains an active, unresolved situation. Developers continue to share and discuss versions of the leaked code, including rewrites, forks, and snippets, on forums and decentralised services, meaning parts of Claude Code remain publicly accessible despite Anthropic's containment efforts.