Anthropic announced Claude Mythos on April 7 with a sentence designed to end conversation: "by far the most powerful AI model we've ever developed." Then came the pivot: too dangerous for public release. The model would serve only 12 pre-selected corporations through Project Glasswing, backed by $100 million in Anthropic-funded usage credits.
Read that sequence again. We built something extraordinary. You cannot have it. These companies can. The rhetorical structure is older than the technology it describes.
The Safety Frame
Mythos discovered thousands of zero-day vulnerabilities in every major operating system and web browser. One bug in OpenBSD had existed for 27 years. Anthropic presented this as proof the model was too dangerous for general access. The framing invites you to picture chaos: hackers wielding Mythos against hospitals, power grids, banks.
The framing omits a question: who benefits from those vulnerabilities remaining undiscovered? Every day Mythos stays restricted is a day those bugs stay in your operating system. The 12 Glasswing members get to patch their systems. Everyone else waits.
The Club
Amazon, Apple, Google, Microsoft, and NVIDIA made the list. So did major financial institutions. These are not scrappy startups needing protection from dangerous technology. They are the companies that already control the infrastructure Mythos would audit. Granting them exclusive access to the world's best vulnerability scanner while denying it to independent security researchers is not safety policy. It is competitive advantage dressed in altruism.
Anthropic priced access at zero for Glasswing members. $100 million in free credits. For a company reportedly valued at over $100 billion, this is a rounding error. The real currency is dependency. Twelve of the world's most powerful technology companies now rely on Anthropic for their most sensitive security operations. That relationship does not end when Glasswing does.
The Precedent
Nuclear technology followed the same rhetorical path. Too dangerous for proliferation. Restricted to responsible state actors. The result was not safety. The result was a permanent hierarchy of nuclear haves and have-nots, enforced by the same nations that built the weapons first. AI capability restriction follows the same logic and will produce the same structure.
DeepSeek V4, released the same month with open weights and trained for $5.2 million, demonstrates that capability does not require gatekeeping. The Chinese lab chose transparency. Anthropic chose exclusivity. Both frame their decisions as principled. Only one of them lets you verify the claim.
Safety is a real concern. It is also a word. When the word justifies concentrating power among the already powerful while offering nothing to the public whose safety it claims to protect, the word has become a tool. Recognize the tool for what it is.