Name the standard. That is always the first task. Before we argue about fairness or access or democratization, we must name the standard against which we judge the decision. The standard here is competence. And by that measure, Anthropic made the correct call on access. The question is whether they made it for the right reasons.
Claude Mythos Preview is, by every available benchmark, the most capable AI model in existence. It scores 93.9% on SWE-bench Verified. It found thousands of zero-day vulnerabilities across every major operating system. It chained Linux kernel exploits into full privilege escalation. This is not a chatbot that writes better emails. This is a system that can crack open the digital infrastructure that civilization runs on.
Restricting access to such a system is not elitism. It is the minimum standard of responsible governance. The alternative is handing a skeleton key to every script kiddie, state-sponsored hacking group, and ransomware operation on the planet. Anyone who argues for immediate, unrestricted public release of a model with these capabilities has not thought seriously about the consequences.
The real question is who chose the eleven. Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, Palo Alto Networks. These are not obscure startups. They are among the largest technology and finance companies on Earth. They control operating systems, cloud infrastructure, chip supply chains, and financial networks. Giving them first access to the most powerful AI security tool compounds advantages they already possess.
A meritocratic system would select partners based on demonstrated capability in vulnerability research, track record of responsible disclosure, and ability to deploy patches at scale. By those criteria, the list is defensible but incomplete. Where is the academic security research community? Where are the national CERTs of allied governments? Where are the mid-sized companies that maintain critical infrastructure but lack the resources of a trillion-dollar corporation?
“In the past, security expertise has been a luxury reserved for organizations with large security teams. Open-source maintainers have historically been left to figure out security on their own.” -- Jim Zemlin, CEO, Linux Foundation
The Linux Foundation deserves separate mention. Its inclusion signals that Anthropic recognizes open-source infrastructure as critical. Jim Zemlin rightly noted that security expertise has been a luxury for organizations with large teams, while maintainers have managed alone. The four million dollars in donations to open-source security organizations is a start. But the asymmetry remains: volunteer maintainers receive bug reports generated by a model they cannot access, reviewed by a triage team they did not hire, on a timeline set by a company they do not control.
History teaches a pattern here. The Manhattan Project concentrated nuclear capability among a small group of institutions under government oversight. The early internet concentrated networking capability among universities and defense contractors. In both cases, the initial gatekeeping reflected genuine security concerns. In both cases, the concentration of capability created enduring power structures that outlasted the original justification.
Anthropic reported $30 billion in annualized revenue, up from $9 billion at the end of 2025, with more than 1,000 business customers each spending over $1 million annually.
Anthropic sits in an unusual position. It was founded by researchers who left OpenAI over concerns about safety and governance. It has built its reputation on responsible deployment. Its CEO, Dario Amodei, attends exclusive retreats with European business leaders at 18th-century manor houses. The company reported $30 billion in annualized revenue. It is evaluating an IPO as early as October 2026, according to Bloomberg. The safety-first startup has become a $30-billion enterprise that decides which corporations get access to restricted superintelligence.
This is not hypocrisy. It is the natural evolution of any organization that builds something of extraordinary value. The tension between open science and controlled deployment is as old as dual-use technology itself. The question is whether the gatekeeping serves excellence or merely serves the gatekeeper.
The answer depends on what happens next. Anthropic has proposed that an independent, third-party body should oversee large-scale cybersecurity projects of this nature. That proposal, if genuine, represents the correct institutional design. Excellence requires expertise. Expertise requires trust. Trust requires accountability to something beyond shareholder value. A private company with $30 billion in annualized revenue and an upcoming IPO is not the right permanent custodian of a tool that can compromise every operating system in production.
The standard is clear. Build the best. Restrict access based on demonstrated competence. Transition governance to institutions with broader accountability. The first two steps have happened. The third will determine whether Anthropic created a meritocratic security architecture or a new priesthood that answers only to itself.