Anthropic, the US artificial intelligence giant that positions itself as a champion of ethics and responsibility in the AI industry, has restricted initial access to its powerful new AI agent, Mythos, to just 40 organizations. According to a company announcement dated April 30, 2026, all but one of these organizations are American; the sole exception is the British AI Security Institute.
Reason for Restricted Access
The reason for the limited release is the sheer power of Mythos: the model can detect previously unknown vulnerabilities in software systems. In the hands of malicious hackers, that capability could cause widespread damage, since attackers could exploit these weaknesses before they are patched. By carefully controlling who gets access to Mythos, Anthropic aims to prevent such misuse.
Implications for AI Safety
This move highlights the ongoing debate over balancing innovation with safety in the AI sector. While Mythos could be a valuable tool for improving cybersecurity, its potential for harm is significant. By limiting access to trusted entities, Anthropic hopes to set a precedent for responsible AI deployment. The decision also underscores the company's stated ethical commitments, prioritizing societal safety over rapid commercialization.
Industry reaction has been mixed. Some experts applaud Anthropic's cautious approach, while others argue that broader access could accelerate vulnerability detection and patching. Anthropic, however, remains firm in its stance, emphasizing that the risks of wider distribution currently outweigh the benefits.



