Anthropic’s Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a “small group of unauthorized users,” Bloomberg reports. An unnamed member of the group, identified only as “a third-party contractor for Anthropic,” told the publication that members of a private online forum got into Mythos through a mix of tactics, combining the contractor’s access with “commonly used internet sleuthing tools.”
The Claude Mythos Preview is a new general-purpose model capable of identifying and exploiting vulnerabilities “in every major operating system and every major web browser when directed by a user to do so,” according to Anthropic. Official access to the model is limited to a handful of companies in the Project Glasswing initiative, including Nvidia, Google, Amazon Web Services, Apple, and Microsoft. Governments are also eyeing the technology. Anthropic currently has no plans to release the model publicly due to concerns that it could be weaponized.
“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson said in a statement to Bloomberg. The company says it currently has no evidence that the unauthorized access has affected its own systems or extends beyond the vendor’s environment.
The model was reportedly accessed illicitly on April 7th, the same day Anthropic announced it was releasing Mythos to a limited number of companies for testing. The group that gained unauthorized access has not been publicly identified, though Bloomberg reports that its members belong to a Discord channel that seeks out information about unreleased AI models.
The group accessed Mythos by using knowledge of Anthropic’s other model formats, obtained in a recent Mercor data breach, to make “an educated guess” about where the model was hosted online. Members have been using Mythos regularly since gaining access, supplying Bloomberg with screenshots and a live demonstration of the model as evidence, though they have reportedly avoided using it for cybersecurity tasks in an attempt to escape detection by Anthropic. The group has also accessed other unreleased Anthropic AI models, according to Bloomberg.


