Leaked Draft Reveals Anthropic's Most Powerful AI Model, Claude Mythos, Raising Cybersecurity Concerns
Anthropic has been quietly testing what it describes as the most powerful artificial intelligence model it has ever developed. The world learned of the system only because the company accidentally left a draft announcement in a publicly accessible data cache, where it was discovered and reviewed by Fortune. The model, named Claude Mythos, introduces an entirely new tier above the existing Claude lineup, and according to Anthropic's own draft, its cybersecurity capabilities are so exceptional that releasing it broadly at this time could cause more harm than good.
Confirmation and Details of the Leak
When Fortune reached out for comment, Anthropic confirmed the existence of the Claude Mythos model. A spokesperson stated, "We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity. Given the strength of its capabilities, we're being deliberate about how we release it." The details first emerged after Fortune reviewed draft blog posts that were left in a publicly accessible data cache before Anthropic secured it. This accidental exposure revealed a significant advancement in AI technology that the company had intended to keep under wraps until it could stage a more controlled rollout.
Introducing a New Tier: Capybara
Claude Mythos is not merely an upgraded version of an existing Claude model. According to the draft reviewed by Fortune, Anthropic is introducing it as the inaugural entry in a completely new tier called "Capybara," which sits above Opus. Until now, the company's hierarchy has been structured around Haiku, Sonnet, and Opus, with Opus as the most powerful line; this marks the first time Anthropic has added a new tier to that hierarchy. The name Mythos, according to the leaked post, is intended to evoke "the deep connective tissue that links together knowledge and ideas." On benchmarks, Mythos significantly outperforms Claude Opus 4.6 in critical areas such as software coding, academic reasoning, and cybersecurity tasks. Anthropic confirmed this to Fortune, describing the model as "a step change and the most capable we've built to date" and highlighting its "meaningful advances in reasoning, coding, and cybersecurity."
Cybersecurity Risks and Restricted Rollout
The draft reviewed by Fortune reveals a major concern: Claude Mythos is "currently far ahead of any other AI model in cyber capabilities." It is capable of discovering and exploiting vulnerabilities in ways that significantly outpace what human security teams can handle. This assessment comes directly from Anthropic's own evaluation of the model it built, not from third-party sources. The concern is so substantial that Anthropic has structured the entire early rollout around it. Instead of a standard launch, the company is providing select cybersecurity organizations with first access. The logic behind this approach is that defenders need a head start to strengthen their systems before the model becomes widely available. This strategy mirrors the position taken by OpenAI earlier this year when it flagged GPT-5.3-Codex as its first model classified as "high capability" for cybersecurity tasks under its Preparedness Framework.
Cost and Efficiency Challenges
The restricted rollout is not solely motivated by safety concerns. The leaked draft also indicates that Claude Mythos is a large, compute-intensive model, making it expensive for Anthropic to serve and, consequently, costly for customers to use. The company is actively working to improve its efficiency before considering any general release; even setting the cybersecurity concerns aside, a wide launch might not yet have been feasible. According to the draft reviewed by Fortune, which Anthropic has not formally confirmed in its entirety, the plan involves gradually expanding access through the Claude API over the coming weeks, with cybersecurity use cases reportedly receiving priority in the early access program.
This incident underscores the delicate balance between innovation and responsibility in the rapidly evolving field of artificial intelligence, as companies like Anthropic navigate the ethical and practical challenges of deploying cutting-edge technologies.