Ask the only question that matters. Who benefits?
Anthropic built Claude Mythos Preview. Ten trillion parameters. The most powerful AI model ever constructed. Finds thousands of zero-day vulnerabilities. Cracks open every major operating system. And the company declared it too dangerous for public release. Too dangerous for you. Not too dangerous for Amazon, Apple, Google, Microsoft, Nvidia, and JPMorgan Chase. Just too dangerous for everyone else.
The stated motive is safety. The track record of stated motives versus actual motives in the technology industry is abysmal. So set aside what Anthropic says and look at what Anthropic does.
“We do not plan to make Claude Mythos Preview generally available due to its cybersecurity capabilities.” -- Newton Cheng, Anthropic
On the same day it launched Project Glasswing, Anthropic disclosed that its annualized revenue had hit 30 billion dollars, up from 9 billion at the end of 2025. Over 1,000 business customers now spend more than a million dollars annually. The company sealed a multi-gigawatt compute deal with Google and Broadcom. Bloomberg reported that Anthropic had poached a senior Microsoft executive to lead infrastructure expansion. The company is evaluating an IPO as early as October 2026.
Read that paragraph again. A company preparing for a public offering worth potentially hundreds of billions of dollars announced on the same day that it had built the most powerful AI in history AND that it would restrict access to a hand-picked list of the largest corporations on Earth AND that its revenue had more than tripled in fifteen months. The press release says safety. The financial calendar says market positioning.
The partner list tells the story. Twelve launch partners. Not twelve scrappy cybersecurity startups. Not twelve universities. Not twelve government agencies. Twelve of the wealthiest, most powerful technology and finance companies in existence. These are Anthropic customers. These are Anthropic investors. Amazon Web Services hosts Anthropic models on Bedrock. Google provides compute infrastructure. Microsoft distributes through Foundry. The companies getting exclusive access to Mythos are the companies already paying Anthropic billions of dollars.
“Security is central to how we build and ship. These two incidents were human errors in publishing tooling, not breaches of our security architecture.” -- Newton Cheng, Anthropic
The 100 million dollars in usage credits sounds generous until you measure it against 30 billion in revenue. That is 0.33%. A third of one percent. The 4 million in open-source donations is 0.013%. These are not investments in global cybersecurity. These are rounding errors dressed in philanthropic language.
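The arithmetic is easy to verify. A quick sketch using the figures reported in this piece (the helper function here is illustrative, not from any Anthropic disclosure):

```python
def share(amount: float, total: float) -> float:
    """Return `amount` as a percentage of `total`."""
    return amount / total * 100

revenue = 30e9     # $30 billion annualized revenue
credits = 100e6    # $100 million in usage credits
donations = 4e6    # $4 million in open-source donations

print(f"{share(credits, revenue):.2f}%")    # usage credits as a share of revenue -> 0.33%
print(f"{share(donations, revenue):.3f}%")  # donations as a share of revenue -> 0.013%
```

Both figures round exactly to the percentages quoted above: a third of one percent, and just over one-hundredth of one percent.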
Consider the competitive dynamics. Anthropic just gave its biggest customers a tool that no competitor can access. Those customers can now scan their codebases with a model that finds vulnerabilities their rivals cannot detect. The defenders who pay Anthropic get the shield. The defenders who do not pay Anthropic get a 45-day disclosure notice after the patch ships. The security gap between Anthropic customers and everyone else just widened at machine speed.
The security lapses are the most revealing detail. In March, a CMS misconfiguration exposed 3,000 internal assets, including the draft Mythos announcement. An npm packaging error leaked 512,000 lines of Claude Code source. A company that cannot secure its own blog and its own packaging pipeline asks the world to trust it as the sole custodian of a model that can autonomously compromise the Linux kernel. The defense from Anthropic: these were human errors in publishing tooling, not breaches of security architecture. The translation: trust our judgment about who should access a world-changing capability, but do not look too closely at our operational competence.
The origin story adds another layer. Anthropic was founded by researchers who left OpenAI because they believed OpenAI was moving too fast and caring too little about safety. The founding narrative was restraint. The founding identity was the responsible lab. Six years later, Anthropic earns 30 billion annually, hosts exclusive CEO retreats at English country estates, and decides which corporations get access to restricted superintelligence. The safety-first startup became the gatekeeper of the most powerful technology in history. That is not a betrayal of the original mission. That is the original mission working exactly as institutional self-interest predicts.
Anthropic proposes that an independent third-party body should eventually oversee this kind of work. Note the word "eventually." Not now, when they control the model. Not before the IPO. Eventually. In the meantime, Anthropic decides. The stated motive is safety. The structural outcome is that a single private company with an IPO on the calendar controls who gets to use the most powerful AI tool ever built. In this case, the stated motive appears genuine. It is the exception that proves the rule. But the incentive structure around it deserves every ounce of scrutiny the public can muster.