Claude Code Just Had Two Very Bad Days. Here’s What Actually Happened.

March 31, 2026 is not going to look great in the Claude Code changelog. Two separate disasters hit the same day, and the developer community has been having a field day with both of them.

First, the full source code leaked via a source map file published to the npm registry. It appeared on GitHub under a repo called `chatgptprojects/claude-code`, and HN commenters immediately started reverse-engineering it. Second, Anthropic publicly acknowledged what many users had been reporting for weeks: people are burning through their Claude Code subscription quotas far faster than expected. Then developers figured out why: two confirmed prompt cache bugs are inflating costs to 10 to 20 times normal. The current workaround is to downgrade to v2.1.34.

It’s been a rough 24 hours for Claude Code users. And the conversation on HN is only getting louder.

The Source Code Leak: What the Community Found

The source map file published to npm exposed a full Claude Code source repo to the public. Within hours, HN commenters were tearing through it and posting findings in real time. The standout discovery so far: Claude Code’s internal sentiment analysis is built on regex.

Yes, regex. The classic “find text matching this pattern” tool that has shipped with Unix since the 1970s is being used to measure user frustration. The community quickly dubbed it the “WTF/minute code quality measurement.” The takes have been relentless. One commenter compared it to “a truck company using horses to transport parts.” Another noted that Anthropic is shipping a coding agent with an architecture decision that makes experienced developers cringe.

To be fair, regex-based sentiment analysis is fast and lightweight. For a real-time monitoring system that runs constantly during agent sessions, the performance characteristics make a certain kind of sense. It’s not obviously wrong. It’s just that seeing a major AI company ship regex as their user frustration detector has a certain comedic energy that the internet has not let go of.
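For readers who haven’t seen this approach, here’s a toy sketch of what regex-based frustration detection might look like. The patterns and scoring function below are invented for illustration; they are not the patterns from the leaked Claude Code source.

```python
import re

# Hypothetical frustration patterns -- illustrative only, not from the leak.
FRUSTRATION_PATTERNS = [
    re.compile(r"\bwtf\b", re.IGNORECASE),
    re.compile(r"\b(this|that) (doesn'?t|does not) work\b", re.IGNORECASE),
    re.compile(r"(!{2,}|\?{2,})"),  # repeated punctuation, e.g. "!!" or "??"
]

def frustration_score(message: str) -> int:
    """Count how many frustration patterns match a user message."""
    return sum(1 for p in FRUSTRATION_PATTERNS if p.search(message))

print(frustration_score("WTF, this doesn't work again!!"))  # -> 3
```

The appeal is obvious: no model call, no latency, negligible compute. The trade-off is brittleness, since a regex list can only catch the phrasings someone thought to write down.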

The broader source leak also exposed how Claude Code handles prompts, caching decisions, and user feedback loops. For a product built by one of the most well-funded AI labs in the world, the implementation details are more mundane than most people expected. That’s been a recurring theme in software reverse-engineering. The magic is mostly in the model, not the code around it.

The Quota Crisis: Why Your Subscription Is Disappearing Fast

While the source code was circulating, Anthropic was responding to a growing chorus of user complaints about usage limits. On Reddit and Discord, users reported that Claude Pro subscriptions were being drained in days. Some Max plan users reported hitting their monthly quota within an hour of starting a coding session.

Anthropic’s official response acknowledged the problem directly: “people are hitting usage limits in Claude Code way faster than expected, we’re actively investigating, it’s the top priority for the team.” That’s not corporate boilerplate. That’s a company admitting something is broken.

The developer community didn’t wait for the official diagnosis. A developer dug into the shipped code, reverse-engineered its billing behavior, and found two independent prompt cache bugs inflating costs to 10 to 20 times normal. The bugs appear to reset, or fail to reuse, cached context in certain workflow patterns, effectively charging users for recomputation they should never have been billed for.
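To see why a cache-reuse failure inflates bills so dramatically, consider a toy model. The numbers below are illustrative, not Anthropic’s actual pricing or context sizes; the point is that a large shared context re-billed on every turn multiplies cost roughly by the number of turns.

```python
# Toy model of how a prompt-cache failure inflates billed input tokens.
# All numbers are made up for illustration.

def billed_input_tokens(context_tokens: int, turns: int, cache_works: bool) -> int:
    if cache_works:
        # Pay for the shared context once; later turns read it from cache
        # (cache reads are far cheaper -- treated as free in this toy model).
        return context_tokens
    # Bug: the cache resets, so every turn re-pays the whole context.
    return context_tokens * turns

context = 100_000  # large shared context: system prompt, repo files, history
turns = 15         # agent turns in one coding session

healthy = billed_input_tokens(context, turns, cache_works=True)
broken = billed_input_tokens(context, turns, cache_works=False)
print(broken // healthy)  # 15x inflation in this toy session
```

A 15-turn session with a 100k-token context lands squarely in the 10-to-20x range users reported, which is why long agentic sessions were hit hardest.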

The workaround is straightforward: downgrade to v2.1.34, the last stable release without the v2.2.x cache inflation bugs. Multiple users report that the downgrade “made a very noticeable difference” in quota consumption.
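If you installed Claude Code through npm, the downgrade is a one-liner. This sketch assumes the package is published as `@anthropic-ai/claude-code` and that your install method was npm; adjust if you installed another way. The actual commands are shown as comments so you can run them deliberately.

```shell
# Pin to the last release without the v2.2.x cache bugs.
PINNED_VERSION="2.1.34"

# Check what you're currently running:
#   claude --version
# Downgrade (assumes an npm global install of @anthropic-ai/claude-code):
#   npm install -g "@anthropic-ai/claude-code@${PINNED_VERSION}"
echo "Pinning Claude Code to ${PINNED_VERSION}"
```

If auto-update is enabled, it may bump you back to v2.2.x, so check the version again after a day or two.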

What This Means If You Use Claude Code Right Now

If you’re paying for Claude Code and watching your quota vanish faster than it should, the most immediate fix is to check your version. If you’re on v2.2.x, downgrade to v2.1.34. It’s not ideal. Nobody wants to run an older version while waiting for a patch. But watching your subscription burn up in 12 days instead of 30 days is a problem you can solve right now.

Beyond the version bump, the broader lesson is one that the AI tooling community keeps relearning. The gap between “this works in a demo” and “this works reliably at scale” is wider than the marketing suggests. Coding agents are genuinely useful. They’re also young products with rough edges that show up in expensive ways when thousands of people use them daily.

The source code leak is a separate problem. It’s also a useful one. Open codebases get audited. Audits produce fixes. The regex sentiment analysis might get replaced. The prompt cache bugs will almost certainly get fixed faster now that the community has independently identified them. There’s an uncomfortable transparency that comes with being open-sourced against your will.

The Takeaway for the AI Coding Tool Space

If you’re building coding agents, this week is a free case study. Shipping a product used by thousands of people is a different challenge than building something that works in a controlled test. The prompt cache bugs are a reminder that infrastructure details matter as much as model quality. And the source code reaction is a reminder that the developer community will scrutinize everything you ship, whether you intend them to or not.

Claude Code is still one of the best coding agents available. That’s not changing because of a bad week. But the expectations for AI tooling have shifted. Users are paying real money and relying on these tools for real work. When they break, the consequences are immediate and public.

Check your version. Check your quota. And watch this space for the official fix.

Sources:
Hacker News Discussion — Source Leak
The Register — Anthropic Confirms Usage Crisis
Leaked Source Repo (GitHub)
Anthropic Official Acknowledgment (Reddit)
