Claude Mythos’s Advanced Hacking Abilities Alarm Experts Amid Political Tensions
In June 2024, a cyber-attack on a pathology services company caused widespread disruption across London’s hospitals. More than 10,000 appointments were cancelled, and the resulting blood shortages and delayed blood tests had serious consequences for patients.
Cyber-attacks with such severe real-world consequences remain rare, but a recent AI development threatens to change that, potentially exposing society’s vital digital systems to unprecedented disruption.
This week, Anthropic, a prominent AI firm based in San Francisco, unveiled "Claude Mythos Preview," an AI model the company describes as too dangerous for public release due to its exceptional offensive cybersecurity capabilities. According to Anthropic, Mythos has identified vulnerabilities in widely used software systems, suggesting the AI could empower hackers to compromise some of the world's most critical software infrastructure.
“This is Y2K-level alarming,” stated one cybersecurity expert. Mythos has already uncovered a 27-year-old bug in a key security infrastructure component essential to computer systems worldwide. Such vulnerabilities pose threats to a broad spectrum of internet services, from entertainment streaming platforms to essential banking systems.
If such technology becomes widely accessible and performs as Anthropic claims, the potential consequences could be catastrophic. Cyber-attacks are no longer confined to the digital realm; nearly every aspect of the physical world relies on software. Recent years have seen cyber-attacks cripple sectors including healthcare, energy, and transportation. Previously, executing attacks of this magnitude required significant expertise. Mythos could democratize this capability, enabling amateurs to launch attacks and enhancing professionals’ destructive potential.
Experts Warn of Escalating Cybersecurity Threats
Cybersecurity professionals are raising urgent alarms. Anthony Grieco of Cisco, a leading networking and cybersecurity company, commented:
“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure … and there is no going back.”
Lee Klarich, head of product management at Palo Alto Networks, described the model as signaling “a dangerous shift” and cautioned that
“everyone needs to prepare for AI-assisted attackers.” He added,
“There will be more attacks, faster attacks and more sophisticated attacks.”
Fortunately, the situation is not yet irreversible. Instead of releasing Mythos publicly, Anthropic is initially providing access to companies managing critical infrastructure, including financial institutions and utility providers. The goal is to enable these organizations to identify and remediate security gaps before malicious actors acquire similar capabilities.
This approach places society in a race against time. Due to the absence of comprehensive national and international regulations, no mandates compel other companies to adopt Anthropic’s cautious deployment strategy. It is likely only a matter of months before less responsible entities, domestically or abroad, release comparable AI models. When that occurs, the security of essential software systems will be critically tested.
Political Hostility Hampers Collaborative Security Efforts
In a more cooperative political climate, there might be optimism that the United States could mobilize a whole-of-society response to this emerging cybersecurity threat. However, the Trump administration has expressed hostility toward Anthropic, banning government agencies and the military from using its technology. The administration publicly labeled Anthropic a “radical left, woke company” due to its refusal to permit military use of its tools for mass surveillance of American citizens. This antagonism diminishes the likelihood of government collaboration with Anthropic to secure its own, often vulnerable, critical systems.
Despite these challenges, there are reasons for cautious optimism. Anthropic may have incentives to exaggerate Mythos’s capabilities, as promoting its products benefits the company. Nonetheless, documented vulnerabilities and the willingness of competitors to collaborate with Anthropic indicate the threat is genuine. Some government sectors are responding; for instance, on Tuesday, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell reportedly convened Wall Street executives to prepare for risks associated with Mythos and future AI models focused on cybersecurity.
Broader Risks Beyond Cybersecurity
The overall outlook remains concerning. Mythos is not solely a cybersecurity issue; it also exhibits advanced capabilities in generating disinformation and occasionally deceives users while obscuring its actions. This AI exemplifies the risks posed by the “superintelligent” systems that Anthropic and its competitors aim to deploy widely, regardless of potential consequences. While Mythos offers a window of opportunity to address these risks proactively, continued regulatory inaction may leave society vulnerable to even more dangerous AI technologies in the future.
Shakeel Hashim is the editor of AI Power & Politics, a publication dedicated to exploring the influence and governance of transformative artificial intelligence.