Anthropic's Mythos Set Off a Cybersecurity 'Hysteria.' Experts Say the Threat Was Already Here.
When Anthropic announced it had built an AI model too dangerous to release, global banks scrambled, governments held emergency meetings, and cybersecurity stocks tumbled. But the experts fighting in the trenches of cyber warfare have a different message: the capability everyone's panicking about? It was already here.
The Day the Internet Discovered It Was Sitting on a Pile of Bombs
Imagine waking up to find out that someone just discovered your house has been sitting on a fault line for twenty-seven years. That's roughly what happened to the software industry on April 7, 2026.
Anthropic, the AI lab known for positioning itself as the "safety-first" alternative to OpenAI, dropped a bombshell. They had built a model called Claude Mythos Preview that was so powerful at finding and exploiting software vulnerabilities that they refused to release it to the public. Not "can't release because it's buggy." Can't release because it works too well.
The numbers were staggering. Thousands of previously unknown zero-day vulnerabilities across every major operating system and web browser. A bug in OpenBSD, arguably the most security-hardened operating system on the planet, that had been hiding for 27 years. Another in FFmpeg that survived five million automated security tests without anyone noticing. An engineer with zero cybersecurity training used Mythos to generate a working remote code execution exploit overnight.
Global banks, tech giants, and governments were sent scrambling. The U.S. Treasury Secretary convened an emergency meeting with Wall Street CEOs. British ministers issued stark warnings. India's securities regulator put the entire equities market on red alert.
There's just one problem: the capability they're worried about is already here.
What Makes Mythos Different, and What Doesn't
The Numbers That Made the World Flinch
Let's be clear about what Mythos actually demonstrated, because the benchmarks are genuinely impressive by any standard.
In Anthropic's red team testing, the model operated fully autonomously, with no human involvement between the initial prompt and a working exploit. It found critical vulnerabilities in every major operating system and every major web browser. It chained multiple Linux kernel vulnerabilities together to escalate from ordinary user access to full system control. It autonomously wrote a web browser exploit that combined four vulnerabilities, escaping both the renderer sandbox and the operating system sandbox in a single attack chain.
On Firefox 147 alone, Mythos generated 181 usable exploits. Anthropic's previous flagship model, Claude Opus 4.6? It managed two. That's a 90x improvement in a single generation, not an incremental step.
The UK's AI Security Institute independently confirmed that Mythos became the first AI model ever to complete a full 32-step enterprise network takeover simulation, something human experts need about 20 hours to accomplish. On expert-level Capture the Flag cybersecurity challenges, it succeeded 73% of the time. Before April 2025, no AI model could complete those challenges at all.
A 27-year-old OpenBSD vulnerability was found for under $50 in compute costs. Discovering that same bug through traditional methods would have required months of work by a team of elite specialists.
These aren't marketing numbers. The capability leap is real.
"We Reproduced It with Older Models"
But here's where the story gets complicated, and where the real cybersecurity conversation begins.
Within weeks of Mythos's announcement, independent security researchers started reproducing Anthropic's headline results using smaller, cheaper, publicly available models.
Cybersecurity firm Vidoc used a technique called "orchestration", splitting code into smaller pieces and coordinating multiple existing models to cross-check results, and found the same vulnerabilities. "We ran older models against the same code base to see if we'd be able to detect the same vulnerabilities," said Vidoc CEO Klaudia Kloc. "We did, with both OpenAI and Anthropic's older models."
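The orchestration idea Kloc describes can be sketched in a few lines. The snippet below is an illustrative reconstruction, not Vidoc's actual pipeline: it splits source into chunks, has several independent reviewers (stand-ins for calls to different models) examine each chunk, and keeps only findings that clear a quorum, which is the cross-checking step that suppresses any single model's false positives.

```python
# Hypothetical sketch of "orchestration": chunk the code, fan each chunk out
# to several reviewers, and keep findings that at least `quorum` agree on.
# The reviewer functions below are stubs standing in for real model calls.
from collections import Counter

def chunk_source(source: str, max_lines: int = 40) -> list[str]:
    """Split source code into consecutive chunks of at most max_lines lines."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines]) for i in range(0, len(lines), max_lines)]

def cross_check(chunks, reviewers, quorum=2):
    """Collect findings per chunk; keep those reported by >= quorum reviewers."""
    confirmed = []
    for idx, chunk in enumerate(chunks):
        votes = Counter()
        for review in reviewers:
            for finding in review(chunk):
                votes[finding] += 1
        confirmed += [(idx, f) for f, n in votes.items() if n >= quorum]
    return confirmed

# Stub reviewers standing in for different models with different blind spots.
def reviewer_a(chunk): return ["unchecked strcpy"] if "strcpy" in chunk else []
def reviewer_b(chunk): return ["unchecked strcpy"] if "strcpy(" in chunk else []
def reviewer_c(chunk): return []  # a model that misses this bug entirely

code = "void f(char *s) {\n  char buf[8];\n  strcpy(buf, s);\n}\n"
print(cross_check(chunk_source(code), [reviewer_a, reviewer_b, reviewer_c]))
# Two of three reviewers flag the strcpy, so the finding survives the quorum.
```

The point of the design is Fort's "thousand adequate detectives": breadth and agreement, not one brilliant model.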
Another firm, Aisle, took the specific vulnerabilities Anthropic showcased, isolated the relevant code, and fed it to small open-weight models. Eight out of eight models detected the flagship FreeBSD exploit, including one with just 3.6 billion parameters costing $0.11 per million tokens. A 5.1-billion-parameter open model recovered the core chain of the 27-year-old OpenBSD bug.
Bruce Schneier, one of the world's most respected security technologists, put it plainly: "While Anthropic's model is really good at finding software vulnerabilities, so are other models." The UK's AI Security Institute found that OpenAI's GPT-5.5, already generally available, is comparable in capability.
As Aisle founder Stanislav Fort wrote: "A thousand adequate detectives searching everywhere will find more bugs than one brilliant detective who has to guess where to look."
Mythos is impressive. But the capability class, AI-powered vulnerability discovery at scale, is not locked inside one company's servers. It's already out there.
The Threat That Was Already Here
Anthropic's Own Models Were Already Doing This
Even Anthropic doesn't dispute that earlier models could find software vulnerabilities. In fact, the company has been warning about this for months.
In February 2026, two months before Mythos was announced, Anthropic published research showing that Claude Opus 4.6, a widely available public model, had found more than 500 high-severity vulnerabilities in open-source software. At an Anthropic event, CEO Dario Amodei acknowledged that while the scale surged with Mythos, "the trend wasn't new. We've been seeing warnings of this for a while."
This matters because it reframes the entire conversation. Mythos didn't create a new threat. It amplified one that was already growing.
"The models that we have right now are powerful enough to detect zero days at a large scale, and this is scary enough," Kloc told CNBC, noting this has been true for "a couple of months, if not a year."
The Underground Never Needed an Invitation
If you want to understand why the "containment" narrative around Mythos is fragile, you don't need to look at government briefings. You need to look at the dark web.
On April 13, less than a week after Mythos was announced, an anonymous user on Dread, the primary Reddit-like forum on Tor, posted operational advice: "I can get Claude, Gemini, and ChatGPT to write fully functioning, ready-to-deploy payloads with just a little bit of effort. 90% of most people's issues with LLMs can be fixed with better prompts. Stop messing around with abliterated models."
Two days later, another user recommended a specific Gemini jailbreak for "fraud and hacking coding/questions." A Russian-language Telegram channel with over 170,000 subscribers posted guidance on using AI to reverse-engineer binaries and find zero-days without source code, the exact capability profile Anthropic published for Mythos.
The underground had already figured out how to weaponize frontier models. They didn't need access to Mythos. They just needed better prompts.
This is the part of the story that gets lost in the headlines. The capability isn't contained. It can't be contained. Hackers working for criminal groups and adversarial nations "know how to do this, with or without Anthropic," as Kloc put it.
The Real Gap Nobody's Talking About
For all the panic about AI finding vulnerabilities at machine speed, the cybersecurity industry has a dirtier secret that predates any large language model: we've never been good at patching what we already know about.
The average enterprise takes over 60 days to remediate a critical vulnerability. Sixty percent of breaches exploit known vulnerabilities where a patch was already available. Large enterprises carry backlogs where 45% of identified vulnerabilities remain unpatched after twelve months.
Meanwhile, time-to-exploit has collapsed. From 2.4 years in 2018 to under a day in 2026. AI can reverse-engineer a patch and produce a working exploit within hours of disclosure.
Think about what that means. AI finds a vulnerability. The vendor releases a patch. Attackers use AI to reverse-engineer the patch and create an exploit. The exploit is in the wild within hours. And the average enterprise? Still working through its patching queue from last quarter.
This is the structural condition cybersecurity teams have been operating under for years. Mythos didn't create it. It just made it harder to ignore.
"The industry is panicking about the number of vulnerabilities it faces now," said Ben Harris, CEO of watchTowr Labs. "But even before Mythos is widely available, it can't fix vulnerabilities fast enough."
What makes Mythos different, according to Anthropic, is its ability to develop working exploits with little or no human input, automating a process that previously required skilled researchers. But the patching side? That's still the part humans have to do. And humans, it turns out, are the bottleneck.
Offense First, Defense Later
Here's the uncomfortable asymmetry at the heart of this moment: AI-accelerated vulnerability discovery favors attackers over defenders in the short term. Pretty much everyone agrees on this.
JPMorgan CEO Jamie Dimon said as much when he noted that while AI tools could eventually help companies defend themselves, they are first making them more vulnerable. Organizations face a sharp increase in the volume of vulnerabilities discovered, but nobody has deployed a tool that helps fix them at the same speed.
This isn't just about banks. Every legacy system, hospital records, payroll, building access, water treatment, now lives under an elevated threat level. The cost of crippling a power grid, a banking system, or an air traffic control network just dropped by several orders of magnitude.
But the defensive story isn't hopeless. Mozilla tested Mythos on Firefox, found 271 vulnerabilities, and then fixed them. Crucially, none of those flaws were beyond a human's ability to spot. What changed was speed: AI discovered them quickly, cheaply, and at scale.
Cisco's Chief Security and Trust Officer Anthony Grieco captured the duality: "I have never been more optimistic for what we can do to change security because of the velocity. It's also a little bit terrifying because we're moving so quickly. It's also terrifying because our adversaries have this capability as well, and so frankly, we must move this quickly."
The window of defender advantage, if it exists at all, is narrow. When Mythos-class capabilities become broadly available, and every security expert agrees they will, the organizations that used this time to harden their architectures will be in a fundamentally different position than those that didn't.
Marketing, Valuation, and the "Criti-Hype" Problem
Let's address the elephant in the room. Anthropic is rumored to be approaching an IPO. OpenAI's Sam Altman has publicly accused Anthropic of "fear-based marketing", of declaring it had built a bomb so it could sell you a billion-dollar bomb shelter.
There's a term for this: "criti-hype." It's the inverse of regular hype. Instead of promising that technology will save the world, criti-hype warns that technology will destroy it, and then offers you protection for a price. The goal is the same: capture attention, drive valuation, and position yourself as the gatekeeper.
AI researcher Gary Marcus called the Mythos announcement "overblown," saying the model appeared to be "incrementally better" rather than a "breakthrough." Security expert Bruce Schneier noted that Anthropic's decision not to release Mythos publicly may also reflect a simpler reality: the model is very expensive to run, and the company may not have the resources for a general release. "What better way to juice the company's valuation than to hint at capabilities but not prove them?"
The independent evidence suggests the truth is somewhere in the middle. Mythos represents a real capability gain, and Anthropic deserves credit for releasing it through Project Glasswing rather than dropping it in the wild. At the same time, the capability is not unique, and the theatrical framing, "too dangerous to release", has very clear commercial upside.
Does this mean the cybersecurity concern is manufactured? No. The underlying trend is real and significant. But it does mean readers should apply the same skepticism to "too dangerous to release" narratives that they would to "this will change everything" narratives. Both serve someone's interests.
What Organizations Should Actually Do Right Now
Whether Mythos is a watershed or an incremental step is, in many ways, the wrong question to build a security program around. The trajectory is what matters. AI is lowering the skill floor for attackers and accelerating zero-day disclosure, and that trend doesn't depend on any single model.
Here's what experts recommend organizations do right now:
1. Accelerate Your Patch Cadence, Dramatically
The days of monthly patch cycles are over. When time-to-exploit has collapsed to under 24 hours, you need a process that can deploy critical patches within that window. Enable automatic updates wherever possible. If your computer asks to reboot, do it as soon as possible.
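Measuring that window is the first step. A minimal sketch, assuming you already export disclosure and remediation timestamps from your vulnerability tracker (the field names here are illustrative):

```python
# Flag critical findings that have been exposed longer than a 24-hour SLA.
# The data shape (id/severity/disclosed_at/patched_at) is an assumption,
# not any particular scanner's export format.
from datetime import datetime, timedelta

SLA = timedelta(hours=24)

def overdue(findings, now):
    """Return IDs of criticals disclosed more than SLA ago and still unpatched."""
    return [f["id"] for f in findings
            if f["severity"] == "critical"
            and f["patched_at"] is None
            and now - f["disclosed_at"] > SLA]

findings = [
    {"id": "CVE-A", "severity": "critical",
     "disclosed_at": datetime(2026, 4, 7, 9, 0), "patched_at": None},
    {"id": "CVE-B", "severity": "critical",
     "disclosed_at": datetime(2026, 4, 8, 9, 0),
     "patched_at": datetime(2026, 4, 8, 15, 0)},
]
print(overdue(findings, datetime(2026, 4, 9, 9, 0)))  # ['CVE-A']
```

Anything this report surfaces has been exploitable for longer than modern time-to-exploit allows.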
2. Implement Zero-Trust Architecture
Mythos-class attacks succeed because of excessive access, unnecessary visibility, and unrestricted lateral movement. Identity-based access controls, infrastructure cloaking, and micro-segmentation can contain the blast radius even when a vulnerability is exploited.
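The core of micro-segmentation is default-deny: a flow between segments is blocked unless a policy explicitly allows it. A toy illustration (the segment names are invented):

```python
# Default-deny segmentation check: only explicitly allowed segment-to-segment
# flows are permitted; everything else, including lateral movement, is denied.
ALLOW = {("web", "api"), ("api", "db")}  # illustrative policy

def permitted(src_segment: str, dst_segment: str) -> bool:
    """A flow is allowed only if the policy names it; deny by default."""
    return (src_segment, dst_segment) in ALLOW

print(permitted("web", "api"))  # True: an allowed path
print(permitted("web", "db"))   # False: no direct path, lateral hop blocked
```

Even if the "web" tier is compromised via a zero-day, the attacker cannot reach the database directly, which is exactly the blast-radius containment the architecture aims for.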
3. Audit Third-Party Access, Right Now
The unauthorized access to Mythos reportedly occurred through shared contractor credentials and a URL-guessing exercise. That gap, between the sophistication of the tool and the simplicity of the access method, should terrify every CISO. Review shared accounts and API keys across all vendor environments today.
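Shared credentials are easy to detect if you log which principal uses which key. A hedged sketch, assuming access logs exported as (credential, principal) pairs; the log shape is an assumption for illustration:

```python
# Flag any credential observed in use by more than one principal.
from collections import defaultdict

def shared_credentials(events):
    """events: iterable of (credential_id, principal). Return shared cred IDs."""
    users = defaultdict(set)
    for cred, principal in events:
        users[cred].add(principal)
    return sorted(c for c, principals in users.items() if len(principals) > 1)

events = [
    ("key-vendor-1", "alice"),
    ("key-vendor-1", "contractor-7"),  # same key, two principals: shared
    ("key-ci", "ci-runner"),
]
print(shared_credentials(events))  # ['key-vendor-1']
```

Every credential this surfaces should be rotated and split into per-principal keys.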
4. Invest in Runtime Detection, Not Just Prevention
If AI can find novel attack chains at machine speed, signature-based defenses are increasingly obsolete. Organizations need behavioral detection, anomaly monitoring on AI platform API keys, and continuous evidence trails.
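The simplest form of that anomaly monitoring is a statistical baseline per API key. This is a toy threshold check, not a production detector; the three-sigma cutoff is an assumption:

```python
# Flag an API key whose current hourly request count far exceeds its
# historical baseline (mean + k standard deviations).
import statistics

def anomalous(hourly_counts, current, sigmas=3.0):
    """True if `current` exceeds the baseline by more than `sigmas` stdevs."""
    mean = statistics.fmean(hourly_counts)
    sd = statistics.pstdev(hourly_counts) or 1.0  # guard zero variance
    return current > mean + sigmas * sd

print(anomalous([10, 12, 11, 9], 60))  # True: sudden spike on this key
print(anomalous([10, 12, 11, 9], 13))  # False: within normal variation
```

Behavioral signals like this catch misuse of stolen keys even when no known signature fires.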
5. Treat AI Security as a Board-Level Issue
This isn't an IT problem. When finance ministers, central bank governors, and bank CEOs are publicly concerned about a single AI model, the framing has already shifted. Cyber resilience needs board-level attention, budget, and accountability.
6. Use AI Defensively, Don't Wait
The same AI capabilities that frighten you can also protect you. Use frontier models to scan your own codebase for vulnerabilities before attackers do. Mozilla's experience with Mythos on Firefox is the model: find the flaws, fix them, and they're gone forever.
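The plumbing for that defensive loop is mostly unglamorous batching. A minimal sketch: walk the source tree, pack files into prompt-sized batches, and hand each batch to a review function. `review_batch` here is a stub standing in for whatever model API you use, and the character budget is an invented stand-in for a real token limit:

```python
# Walk a source tree, pack files into prompt-sized batches, and run a
# code-review callback over each batch, collecting its findings.
from pathlib import Path

MAX_CHARS = 8_000  # rough stand-in for a prompt-size budget

def batches(root, exts=(".c", ".py")):
    """Yield lists of (path, text) whose combined size fits the budget."""
    batch, size = [], 0
    for path in sorted(Path(root).rglob("*")):
        if path.suffix not in exts or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        if batch and size + len(text) > MAX_CHARS:
            yield batch
            batch, size = [], 0
        batch.append((str(path), text))
        size += len(text)
    if batch:
        yield batch

def scan(root, review_batch):
    """Run the reviewer over every batch and collect its findings."""
    findings = []
    for batch in batches(root):
        findings.extend(review_batch(batch))
    return findings
```

Run on a schedule against your own repositories, this is the Mozilla pattern: find the flaws before someone else's model does.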
Here's the honest takeaway from the Mythos moment: You don't need to panic, but you do need to move.
The cybersecurity threat that Mythos revealed was not created by Mythos. It was already here, growing quietly while organizations worked through their patching backlogs and hoped nobody noticed the gap between vulnerability discovery and remediation. What Mythos did was make that gap impossible to ignore.
AI is getting really good at finding and exploiting software vulnerabilities. That genie isn't going back in the bottle, and it was never only in Anthropic's bottle to begin with. The models already in the wild are powerful enough to do serious damage in the wrong hands.
The organizations that will thrive in this new environment aren't the ones with the most advanced AI. They're the ones that fixed their patching pipeline, implemented zero-trust architecture, and treated cybersecurity as a strategic priority before the alarm bells started ringing.
If you're reading this and realizing you haven't done those things yet? The alarm bells are ringing now.
Frequently Asked Questions
Is Anthropic's Mythos really too dangerous to release? Mythos does represent a significant leap in AI cybersecurity capabilities. It can autonomously find and exploit vulnerabilities that human experts missed for decades. However, independent researchers have reproduced many of its headline findings using existing public models. The threat is real, but the exclusivity is overstated.
Can hackers access Mythos? Reports indicate unauthorized users gained access to Mythos through a third-party vendor environment on the same day it was announced. Anthropic is investigating. Regardless, cybersecurity experts say bad actors already achieve similar results using publicly available frontier models with clever prompting.
Did Mythos actually find thousands of real vulnerabilities? Anthropic claims Mythos found thousands of high-severity vulnerabilities, with over 99% still unpatched. External security contractors agreed with the severity ratings 89% of the time. However, independent researchers note we don't know the false positive rate, and some findings may have been on older software versions or non-exploitable code paths.
What is Project Glasswing? Project Glasswing is Anthropic's defensive initiative that gives roughly 50 organizations, including Apple, Google, Microsoft, JPMorgan Chase, and CrowdStrike, early access to Mythos to find and patch vulnerabilities in critical infrastructure before the model's capabilities proliferate more broadly.
What should my organization do to prepare? Accelerate patch deployment, implement zero-trust architecture, audit third-party access, invest in runtime detection, treat AI security as a board-level issue, and use AI defensively to scan your own codebase before attackers do.