Company That Accidentally Published Its Own Source Code Unveils Plan to Secure World’s Software
Anthropic announces ‘Project Glasswing,’ a consortium of the world’s largest companies united by the shared goal of hoping for the best

SAN FRANCISCO — Anthropic, the AI safety company that in late March accidentally left internal documents on the public internet and then published half a million lines of its own source code to a public registry for three hours, has announced a “bold new initiative” to secure the world’s critical software infrastructure.
The initiative was prompted by what CEO Dario Amodei describes as a “happy accident”: while training its new Claude Mythos model to be good at writing code, the company discovered it had also created the most prolific vulnerability-finding machine in history. Mythos has since identified thousands of zero-day flaws in every major operating system and every major web browser. Over 99% remain unpatched, because the model finds them faster than humans can fix them.
“We haven’t trained it specifically to be good at cybersecurity,” Amodei explained. “We trained it to be good at code, but as a side effect of being good at code, it’s also good at cybersecurity.”
Here at Planet Parody, we find it deeply reassuring that the most powerful cybersecurity tool ever built was not designed, planned, or even intended.
Anthropic says it has no plans to release the model publicly, opting instead to give it to approximately 50 organizations, including Amazon, Apple, Google, Microsoft, and JPMorgan Chase, so they can find out what’s wrong with their own software before someone else does.
The response from world governments has been measured in the sense that no one has physically fled a building yet. US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell jointly summoned Wall Street CEOs to Washington. In London, the Bank of England governor said the bank is “having to look very carefully now” at what this means for cybercrime risk, deploying the kind of language central bankers reserve for moments when they would prefer to be discussing anything else.
Among the model’s discoveries: a 27-year-old bug in OpenBSD, a security-focused operating system specifically designed to not have bugs, and a 16-year-old flaw in FFmpeg, software used by virtually every device capable of playing video. Cybersecurity firm Darktrace noted that LLMs are very good at finding vulnerabilities but “pretty bad at reliably patching them,” which is a polite way of saying the model has created a to-do list that will outlive several fiscal quarters.
“Although the risks from AI-augmented cyberattacks are serious,” Anthropic wrote in its announcement, “there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws.”
This is technically true in the same way that a flamethrower is invaluable for both starting and fighting fires.