According to a Bloomberg report, a small group of unauthorized users has gained access to Anthropic's new Mythos AI model. The company announced the model earlier this month but stated in a blog post that it would not release it publicly, citing fears that it could destabilize the cybersecurity landscape. Instead, Anthropic limited the rollout of Mythos to a select group of approved partners because of its advanced capabilities.
How a Discord Group Accessed Anthropic's Mythos
The Bloomberg report reveals that the unauthorized access was carried out by a small group of users in a private Discord channel. The group used a combination of methods to gain entry, including access linked to a third-party contractor and online tools commonly used in cybersecurity research. The users were able to guess the model's likely online location based on patterns from earlier Anthropic systems, with some information possibly coming from publicly available data or previous breaches involving related platforms. The group focuses on tracking unreleased AI models and uses bots to scan sites like GitHub for clues. According to the report, the users have been interacting with the model but have not used it for cybersecurity-related tasks.
What Anthropic Said
The Bloomberg report quotes an Anthropic spokesperson who stated: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The spokesperson added that the company has "no evidence that the access reported by Bloomberg went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems."
What Makes Mythos Different from Other AI Models
Anthropic says that during testing, Mythos detected thousands of critical flaws, including zero-day vulnerabilities that typically take elite human teams months to uncover. By comparison, human researchers discover about 100 such vulnerabilities annually. Experts told Business Insider that Mythos compresses exploit development from weeks to hours, representing a leap in AI's ability to handle cybersecurity tasks. Because large language models excel at structured languages like code, Mythos can identify subtle logic-level bugs that humans and traditional tools often miss. Costs remain a concern, however: Anthropic said finding one decades-old vulnerability required thousands of runs and cost about $20,000.
Growing Demand for Mythos Access
Anthropic has already allowed several companies, including Amazon, Apple, and Cisco Systems, to test Mythos. The model is also being offered through Amazon's Bedrock platform to a limited set of organizations. The report added that financial institutions and government agencies are seeking early access to better understand and defend against potential risks linked to the technology. This incident underlines the broader challenge facing AI companies as they develop powerful systems while trying to control how they are accessed and used.