Anthropic Leak – Claude Code Source Code and Mythos Model. What does this mean for businesses?

Anthropic Leak in March 2026: Secret "Mythos" Model and 512,000 Lines of Claude Code on npm. What was Found and What are the Implications for Companies Using AI?

You are building on AI. Your team uses Claude, GPT, or Gemini in their daily work. The pipeline works, automations run, clients are happy. And then, in one week, a company that spends billions on AI safety and employs the best engineers in the world accidentally publishes 3,000 internal documents and half a million lines of source code. This is not a hypothetical scenario. This Anthropic leak happened in the last week of March 2026. And the lessons learned from it are more relevant to your company than you might think.


Anthropic Leak: Two Incidents, Five Days


Mythos: Secret AI Model (March 26)

Anthropic left around 3,000 unpublished resources publicly accessible on its website: draft blog posts, internal materials, images, and PDFs. The cause? A CMS configuration error that pushed draft documents into a publicly indexed data store. Fortune magazine was first to report it.

Among the documents was a description of a new AI model codenamed Mythos (internal project name: Capybara).

What do we know from this Anthropic leak about Mythos:

  • "This is a new class of model, larger and more intelligent than the Opus models."
  • Anthropic calls it a "step change" (a breakthrough) in AI capabilities.
  • Benchmark results in programming, reasoning, and cybersecurity are significantly better than Claude Opus 4.6's.
  • The internal assessment states plainly: Mythos is "far ahead of every other AI model in cyber capabilities."

The most serious finding? The documents show that Anthropic has been privately warning US government officials about a wave of large-scale cyberattacks expected in 2026. Euronews described the threats in more detail.

But that's not the end of this week.

Claude Code source code on npm (March 31)

4:23 AM Eastern Time. Chaofan Shou, an intern at Solayer Labs, downloads a routine Claude Code update from the npm registry. The package weighs 59.8 MB; normally it's a few megabytes. He opens it, looks inside, and finds half a million lines of Anthropic's complete source code: feature flags, system prompts, product plans. Everything. Before Anthropic could react, the code was already on GitHub. Deleting the package from npm didn't help.

How did this happen? Claude Code uses the Bun bundler, which generates source maps by default. A source map is a debugging file that maps compressed code back to readable source. All it took was a missing entry in .npmignore. A detailed analysis of this Anthropic leak was published on dev.to.
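To see why a shipped .map file amounts to a full source leak, here is a minimal sketch. The `SourceMap` shape follows the standard Source Map v3 format; the file names and contents below are hypothetical stand-ins, not Anthropic's actual files. Any map that embeds `sourcesContent` (which bundlers like Bun do by default) already contains the complete original code:

```typescript
// Minimal shape of a v3 source map, as emitted by common bundlers.
interface SourceMap {
  sources: string[];          // original file paths
  sourcesContent?: string[];  // the full original text, often embedded by default
}

// Reconstruct { path -> original source } from a parsed .map file.
function extractSources(map: SourceMap): Record<string, string> {
  const out: Record<string, string> = {};
  map.sources.forEach((path, i) => {
    const code = map.sourcesContent?.[i];
    if (code !== undefined) out[path] = code;
  });
  return out;
}

// Hypothetical stand-in for a leaked cli.js.map:
const leaked: SourceMap = {
  sources: ["src/flags/kairos.ts"],
  sourcesContent: ["const KAIROS = false; // not yet launched"],
};

// Everything needed to rebuild the codebase is already in the map.
console.log(extractSources(leaked));
```

No exploit, no decompilation: parsing one JSON file recovers every original path and its full source text.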

Scale? 1,900 files. 512,000+ lines of TypeScript code. Full, readable code for Anthropic's flagship AI programming tool.


What was found after the Anthropic leak in the code?

Code analysis revealed things that Anthropic would prefer to keep to themselves. And which say a lot about where the entire industry is heading.

44 feature flags

The analysis found 44 feature flags in the code: switches controlling access to built but inactive features. These are not prototypes. This is finished, compiled code waiting to be switched on.
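An illustrative sketch of the pattern (not Anthropic's actual code, and the flag names are invented): the feature's implementation ships in the artifact either way, and only a boolean decides whether it runs. That is exactly why flags in a leaked bundle read like a roadmap.

```typescript
// Hypothetical feature-flag table. The gated code ships in the
// artifact regardless of the flag's value.
const FLAGS: Record<string, boolean> = {
  kairosBackgroundAgent: false, // built, waiting to be turned on
  buddyCompanion: false,
  coordinatorMode: false,
};

function isEnabled(flag: string): boolean {
  return FLAGS[flag] ?? false; // unknown flags default to off
}

function startBackgroundAgent(): string {
  return "agent running"; // implementation is present, just dormant
}

console.log(isEnabled("kairosBackgroundAgent")
  ? startBackgroundAgent()
  : "feature hidden, code still shipped");
```

Anyone reading the bundle sees both the flag names and the dormant implementations behind them.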

KAIROS: Agent operating non-stop

The biggest discovery from the Anthropic leak. KAIROS (from the Greek for "the right moment") appears in the code over 150 times. It turns Claude Code from an on-demand tool into an autonomous background daemon: GitHub webhook sessions, cron job scheduling, automatic resume without user commands. The most interesting part? An autoDream process that performs "memory consolidation" while the user is inactive. AI that works while you sleep.
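Purely as illustration of the pattern the leaked strings suggest (the trigger names and actions below are our assumptions, not code from the leak): an event-driven agent reacts to webhooks, cron ticks, and idle timers instead of waiting for a command.

```typescript
// Hypothetical event sources for an always-on coding agent.
type Trigger = "github-webhook" | "cron" | "user-idle";

// Map each wake-up event to a background action, sketching the
// on-demand-tool-to-daemon shift described above.
function decideAction(trigger: Trigger): string {
  switch (trigger) {
    case "github-webhook": return "resume session for the pushed branch";
    case "cron":           return "run scheduled background task";
    case "user-idle":      return "autoDream: consolidate session memory";
  }
}

console.log(decideAction("user-idle"));
```

The design shift is the point: the user stops being the only trigger, so the agent keeps working between sessions.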

But KAIROS is not the only discovery. The next one is even stranger.

BUDDY: virtual pet

No one expected this. Claude Code has a built-in virtual companion system in the style of a Tamagotchi: species of varying rarity, shiny variants, procedurally generated stats. Buddy sits in a speech bubble next to the input field. Yes, seriously.

Coordinator Mode and others

Hidden in the Coordinator directory is a multi-agent orchestration system. It turns Claude Code into a coordinator managing multiple agents in parallel. On top of that: voice command mode, browser control via Playwright, persistent memory across sessions, internal employee-only tools, and built-in system prompts.
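A coordinator fan-out can be sketched in a few lines. This is only an illustration of the orchestration idea, with invented function names; the leak does not tell us how Anthropic's Coordinator actually dispatches work:

```typescript
// Stand-in for a real agent session handling one subtask.
async function runAgent(task: string): Promise<string> {
  return `done: ${task}`;
}

// The coordinator splits work into subtasks and runs the
// agents concurrently, collecting their results.
async function coordinate(tasks: string[]): Promise<string[]> {
  return Promise.all(tasks.map(runAgent));
}

coordinate(["fix failing tests", "update docs"]).then(console.log);
```

One orchestrator, many parallel workers: the same shape as any fan-out/fan-in pipeline, applied to coding agents.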


Anthropic Leak: Company Response

The company confirmed the existence of Mythos and described it as a "new class of model" representing a "step change." Due to cybersecurity concerns, it is limiting early access to cyber-defense organizations.

Regarding the Claude Code leak, a company spokesperson said:

"Today's Claude Code release contained some internal source code. No customer data or credentials were exposed. This was a release packaging issue caused by human error, not a security breach."

The package disappeared from npm. Too late. The code was already on GitHub.


Anthropic Leak and Your Company's Security

Open Door Syndrome

Most companies think a breach is a hacking problem, that someone has to break in. The truth is, the most dangerous breaches are misconfigurations: a missing entry in .npmignore, a public S3 bucket, a CMS draft without a "private" flag. Zero hacking, 100% human error.

Anthropic. Billions in budget. Top-tier engineers. The industry's best AI safety practices. And all it took was a missing entry in a single configuration file.

No one broke in. The door was just open. That's open door syndrome: companies invest in locks, alarms, and surveillance, then leave the keys under the doormat. We previously wrote about securing AI systems in a company, and those rules are now more important than ever.

Questions for your team

You might be thinking, „But we're not Anthropic. We don't publish AI models or npm packages.” Fine, but honestly ask yourself: do you know exactly what's in your Docker image? Has anyone checked what the CMS is publishing publicly? Are API keys or test data definitely not lurking in the repository?

Some specific questions for the next standup:

  • Does your CI/CD pipeline check what actually ends up in the production package?
  • Does anyone scan artifacts before publication?
  • Are you monitoring package sizes? A 59.8 MB CLI is a warning sign.
  • Are .map files, .env files, and test data excluded from production artifacts?
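The checks above can be wired into CI as a simple pre-publish gate. A hedged sketch, assuming you feed it the file list from `npm pack --dry-run --json`; the blocked patterns and size threshold are illustrative and should be tuned to your project:

```typescript
interface PackedFile { path: string; size: number }

// Patterns that should never ship in a production package (illustrative).
const BLOCKED = [/\.map$/, /\.env$/, /(^|\/)test-data\//];
// A few MB is normal for a CLI; pick a limit that fits your history.
const MAX_TOTAL_BYTES = 10 * 1024 * 1024;

// Return a list of problems; an empty list means the artifact passes.
function auditArtifact(files: PackedFile[]): string[] {
  const problems: string[] = [];
  for (const f of files) {
    if (BLOCKED.some((re) => re.test(f.path))) {
      problems.push(`forbidden file in package: ${f.path}`);
    }
  }
  const total = files.reduce((sum, f) => sum + f.size, 0);
  if (total > MAX_TOTAL_BYTES) {
    problems.push(`package is ${(total / 1e6).toFixed(1)} MB - investigate`);
  }
  return problems;
}

// A 59.8 MB package carrying a source map would fail both checks:
console.log(auditArtifact([
  { path: "dist/cli.js", size: 3_000_000 },
  { path: "dist/cli.js.map", size: 56_800_000 },
]));
```

Fail the pipeline whenever the returned list is non-empty, and a stray source map never reaches the registry.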

Feature flags reveal the strategy

The 44 flags amount to Anthropic's complete product roadmap: features the company has never publicly discussed. For OpenAI and Google, that is priceless knowledge.

If your company uses feature flags, make sure flags are removed from the code after a feature ships, that their names don't reveal business plans, and that the code behind them doesn't end up in public artifacts.

Mythos changes the rules of the game

The documents indicate that AI agents can run multiple hacking campaigns simultaneously, that employees using AI agents may unknowingly open the door to criminals, and that identity theft is becoming easier than ever. Anthropic is warning the US government that this is a real threat for 2026.


Summary: Lessons from the Anthropic Leak

What happened → Lesson

  • 3,000 internal documents made public → Regularly audit what is available on your infrastructure
  • Source map with full code on npm → Automate artifact checks in CI/CD
  • 44 hidden features revealing plans → Remove feature flags from production code
  • Client-side system prompts → Keep sensitive instructions on the server side
  • Model with cyber capabilities → Before you check the locks, check if the doors are closed

What to do after the Anthropic leak?

Good news? Each of these problems can be solved. None of the protections we describe here require months of work or a million-dollar budget. These are changes your team can implement this week.

  1. Review the CI/CD pipeline. Check what goes into production packages.
  2. Audit public resources: CMS, repositories, package registries, S3 buckets.
  3. Update the AI security policy to cover the risks of AI agents and new models.
  4. Does your team know what a source map is? Train them.
  5. Monitor artifact sizes. A 59 MB CLI package is a red flag.

The code for Claude Code is already on GitHub. Hackers are analyzing it, looking for attack vectors on companies that use this tool. Mythos, a model with "unprecedented cyber capabilities," could be available in weeks.

If your team uses AI tools in its daily work, now is the time to check whether your pipelines, artifacts, and configurations are secure. We perform code audits and performance optimization (Core Web Vitals): a 30-minute call and a concrete checklist to make your app more secure, or to improve your SEO results without slowing down your users. Schedule a free consultation.

More knowledge

If this post has shown you that problems in online sales are a matter of processes, not technology, it's time to make a decision. We implement digital sales transformation: from strategy through processes to technological solutions.

Every lead has an owner and a deadline

The system automatically assigns, reminds, escalates. Zero leads without an owner.

Manager sees forecast in real time

Not "by feel," but based on data from the system. Full process visibility.

The processes are stored in the system

A new salesperson knows what to do from day 1. You don't depend on one person.

Start your digital sales transformation

This is more than a free consultation. It's a concrete conversation about implementing transformation for companies ready to make decisions and take action. Fill out the form and we will prepare an initial analysis and action plan for you.

30-45 minutes. No obligation.
