A 59-megabyte debugging file shipped to npm by accident. Inside it: 512,000 lines of TypeScript across nearly 2,000 files. The full Claude Code source.
On March 31, 2026, Anthropic published version 2.1.88 of its Claude Code package to npm. Bundled inside was a .map source-map artifact that was never supposed to ship. The map referenced the complete unobfuscated TypeScript source hosted on Anthropic's R2 cloud storage, making the codebase directly downloadable as a ZIP. Security researcher Chaofan Shou spotted it almost immediately; his post on X surpassed 28 million views. One GitHub mirror hit 84,000 stars before Anthropic's DMCA campaign disabled 8,100 forks.
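The failure mode here is mechanical, not mysterious: a .map file is plain JSON whose "sources" field lists the original files and whose optional "sourcesContent" field can inline them verbatim. A minimal sketch of the format, with invented file names and URLs (this is not the actual leaked artifact):

```typescript
// A minimal source map. The field names (version, sources, sourcesContent,
// mappings) come from the Source Map v3 format; all values here are invented.
const sourceMap = {
  version: 3,
  file: "cli.min.js",
  sources: ["https://storage.example.com/src/cli.ts"], // where the originals live
  sourcesContent: ["export function main(): void { /* full original source */ }"],
  mappings: "AAAA",
};

// Anyone holding the artifact can enumerate the original files it points to:
function listExposedSources(map: typeof sourceMap): string[] {
  return map.sources.map(
    (src, i) => `${src} ${map.sourcesContent[i] ? "(inlined verbatim)" : "(remote)"}`
  );
}

console.log(listExposedSources(sourceMap));
```

Whether the original source is inlined or merely referenced by URL, shipping the map hands over a complete index of the unminified tree.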
By the time the takedowns landed, every researcher who wanted a copy had one.
What leaked
Not the model. Not its weights. The weights stayed safe. What leaked was the entire scaffolding: how Claude Code manages tools, structures memory, handles permissions, and orchestrates agent behavior. Every architectural decision. Every feature flag. Every internal comment. Developers found 44 unreleased features in the code, including:
- KAIROS, a persistent background daemon that watches a developer's repo and organizes its own memory.
- An anti-distillation system designed to feed fake training data to competitors who try to reverse-engineer Claude.
- A Tamagotchi-style companion called Buddy.
- Undercover Mode, a subsystem that injects instructions into Claude's behavior to hide all reference to Anthropic when the model is contributing to public open-source repositories.
The forbidden-string list in the code names upcoming model versions: Opus 4.7 and Sonnet 4.8. The internal version numbers shipped with the leak.
What the code says about you
The leak revealed that Claude Code instruments more than usage. It scans user messages for profanity and frustration with regex pattern matching. It tracks behavior during permission prompts: which buttons get pressed, whether prompts are canceled, how many times a developer hits escape. It logs session IDs, workspace paths, platform details, and organization identifiers. None of this is unusual for a developer tool. The notable part is the granularity, and that the granularity is now public.
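The scanning mechanism described above is simple to picture. A minimal sketch of regex-based message flagging, with invented patterns, names, and event shape (the leaked code's actual patterns and telemetry schema are not reproduced here):

```typescript
// Hypothetical illustration of regex-based message scanning; the patterns,
// function names, and flag structure are invented for this sketch.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\b(ugh|argh|ffs)\b/i,
  /why (won't|doesn't|isn't) (this|it)\b/i,
];

interface MessageFlags {
  frustration: boolean;
  escapeCount: number; // e.g. times escape was hit during a permission prompt
}

function scanMessage(message: string, escapeCount: number): MessageFlags {
  return {
    // One pass over the pattern list is all it takes to tag a message.
    frustration: FRUSTRATION_PATTERNS.some((re) => re.test(message)),
    escapeCount,
  };
}

console.log(scanMessage("ugh, why won't this build", 2));
```

The point is not that any single flag is invasive; it is how little code it takes to turn every message into a labeled behavioral event.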
Latent Space ran a mega-compilation of the most consequential discoveries from the leaked codebase. The technical takeaways are substantive enough that any team building a coding agent in 2026 should read them. One developer called the codebase "a production-grade developer experience, not just a wrapper around an API." That is the engineering advantage that just got open-sourced by accident.
What Anthropic said
Anthropic called it "a release packaging issue caused by human error, not a security breach." No customer data was exposed. They pulled the package and started DMCA takedowns. The framing is technically correct. No private keys leaked. No model weights were exfiltrated. The legal posture, treating the source as proprietary intellectual property and pursuing aggressive takedowns, is also legitimate.
But the operational picture is different from the legal one. Once a 59-megabyte file with 28 million views hits the internet, the question is no longer whether the source is contained. It is whether the design decisions inside it survive contact with competitors who now have a free engineering education on building a production-grade coding agent.
Two leaks in a week
This was not an isolated incident. Days earlier, 3,000 internal Anthropic documents went public, including details about an unreleased model codenamed Mythos. The pattern is what makes the second leak harder to dismiss as bad luck.
Anthropic built its public identity around being the careful AI lab. The one that publishes detailed AI risk research. The one that employs top safety researchers. The one whose CEO testifies before the Senate about AI red lines. Operational security at the basic level (do not ship internal source maps to public package registries) is downstream of the same engineering culture that handles model safety. When the basic version fails twice in a week over the same missing line in a configuration file, it raises a fair question about the more advanced versions.
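The guard against this class of mistake is well known: npm publishes everything in a package directory that is not excluded, but the "files" field in package.json inverts that into an allowlist, and npm pack --dry-run previews the exact tarball before anything ships. A minimal sketch with a hypothetical package (the field names are npm's; nothing here reflects Anthropic's actual configuration):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js"
  ]
}
```

With an allowlist like this, generated .map files in dist/ never ship unless named explicitly; a CI step that runs npm pack --dry-run and fails on any *.map entry closes the loop.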
Why it matters
The strategic damage from this leak is not "competitors copy Claude Code." Cursor, Continue, Aider, and the other coding-agent products will study the architecture, take what is generally applicable, and ship faster than they were going to. That is a normal market shift. The credibility damage is the longer-tail problem.
The company that lectures the industry on AI safety just leaked its own source code. Twice. Over the same missing line in the same configuration file. The public, competitor labs, regulators, defense customers, every audience that has been told Anthropic is the meticulous one, just watched the same operational mistake compound.
If a release-packaging mistake can ship internal codenames and 44 unreleased feature flags to npm twice in a week, what assumptions about the safety review pipeline are we no longer entitled to make?
Originally published as an Instagram carousel on @recul.ai.