"New Sages Unrivalled"
On Mythos
Columbia! Columbia! to glory arise,
The queen of the world, and the child of the skies,
Thy genius commands thee, with raptures behold,
While ages on ages thy splendors unfold:
Thy reign is the last and the noblest of time,
Most fruitful thy soil, most inviting thy clime;
Let crimes of the east ne’er encrimson thy name,
Be freedom, and science, and virtue thy fame.
...
New bards and new sages unrivalled shall soar
To fame unextinguished, when time is no more.
-Timothy Dwight
I stumbled recently on a painting I once loved as a young boy but had long since forgotten. “The Girl I Left Behind Me,” painted by Eastman Johnson in 1872, depicts a young girl facing a storm. The wind blows her hair and dress straight back, yet she leans into it. She holds her little books to her chest for protection and plants her left foot defiantly down. She gazes across the landscape. Her face, seen only in profile, conveys both a sense of waiting in anticipation and of being sternly prepared for whatever may come.
The storm she faces is not on the verge of clearing. The dark atmosphere suggests that the storm will only worsen, and that it is coming right for the girl. Yet she stares that future down, with little more than her books to protect her.
—
For the last six weeks or so, at least one American company has possessed a tool that could damage the operations of critical infrastructure and government services in every country on Earth, including the United States. Within another six weeks or so, if not already, 2-3 American companies will possess this capability. Some time after that, perhaps not much time at all, adversaries of the United States—principally China—will possess tools of this magnitude.
The company I am referring to is Anthropic, and the tool they possess is called Claude Mythos. Researchers at the company have said that the new model stands to fundamentally upend cybersecurity. At least, for the time being. They postulate that after a transitional period, the world will end up in a steady state where advanced AI benefits defenders rather than cyberattackers. Yet the transitional period could be a long and brutal storm, and we do not know what will break as it hits.
“The threat is not hypothetical,” they conclude. “Advanced language models are here.”
What we do next, both collectively and as individuals, will determine if we can weather the storm.
—
What do the capabilities of Mythos mean, prosaically speaking? It’s hard to say, because I do not have access to it, and in all likelihood, neither do you. The model is not currently public, and may never be in its current form. But broadly speaking, if one takes Anthropic at their word, the model can conduct automated software vulnerability discovery with nearly superhuman performance in some domains.
The model can find security vulnerabilities in software, including software systems upon which modern civilization rests, that have eluded security researchers for years, and sometimes decades. The model has found thousands of vulnerabilities so far, most of which have not yet been fixed (for this reason, Anthropic has not publicized the exploits, but they have reported them to the developers of the software in question). An enormous range of consumer and commercial services, from banking to healthcare to education to AI itself, is plausibly implicated.
My model of modern software is that, if you look hard enough, you will find critical vulnerabilities. Looking hard, however, used to be expensive—only the best hackers in the world could do it, and their time was limited. With Mythos, the price of “looking hard” at software has plummeted, and it will get cheaper each month.
This is not wholly bad news; after all, “looking hard” at software is also how software gets improved. Mythos and similarly capable models from other companies that will soon follow, in that sense, are one of the greatest gifts to cybersecurity ever given.
Yet as things stand today, the world is deeply vulnerable. Every day, you rely on untold millions of lines of code maintained by a global population of millions of developers. It will not all be fixed tomorrow, or next month, or next year. The reality is that models of this capability level—and more capable—will almost certainly diffuse widely before all “critical” software is patched. How much damage will be done is anyone’s guess.
If you doubted whether AI systems might have object-level national security implications, we now have clear evidence. Some of the most capable and prized teams in the United States intelligence community do precisely the kind of work that Claude Mythos automates. The same is true of China. You may be inclined to believe this will all work out fine in the end, but it is simply no longer credible to contend that large language models have no implications for national security, and therefore for government as a whole.
—
This has been a frustrating issue to discuss candidly for the past two years. The reason is that, in the adolescent period of AI policy and discourse that is now—I hope—coming to a close, taking AI risks seriously was considered uncouth. Speaking about how near-future models might have straightforwardly dangerous capabilities was enough to provoke suspicion: were you a secret “doomer” or Effective Altruist? Were you part of a grand conspiracy to achieve “regulatory capture” for the frontier AI companies? Were you trying to “ban open source”? These sorts of questions constrained debate and put blinders on a large number of otherwise-sane policymakers and other influential people. And these constraints, in turn, meant that one had to tiptoe around reality.
But I am done with tiptoeing now, and so should everyone else be. It is a great relief, albeit also a bit uncomfortable, to feel the biting winds against one’s face.
In that spirit, here are some things I believe to be true:
Actors who are hostile to the U.S. will possess the capabilities of Mythos, if not better, within a year or two. We will not stop this through “nonproliferation” or some other clever regulatory scheme. We can only blunt the impact of this reality by strengthening our cyberdefenses rapidly.
Strengthening cyberdefense will require coordination among state and local government entities, private sector critical infrastructure operators, frontier labs, and the broader private sector, as well as the federal government. But even more importantly, it will require compute: data centers. In recent testimony to the Federal Energy Regulatory Commission, I wrote about the urgency of speeding transmission siting to facilitate the buildout of supercomputing infrastructure for national security. Running massive fleets of automated software vulnerability researchers was precisely one of the use cases I cited in that testimony. In addition to speeding up the FERC process through administrative actions, we urgently need permitting reform.
Speaking of national security: The U.S. Department of War, and the federal government more broadly, are engaged in a lawfare campaign against Anthropic, one whose underlying motivations are deeply unclear and which attacks core American values. The strategic wisdom of that campaign looks worse and worse by the week. We are fighting a war against Iran, a highly capable cyberoffensive actor. It is inconceivable that the government can have a healthy relationship with the frontier AI industry while attempting to destroy what is arguably the field’s leading company. Anthropic and the Department of War must come to a truce, if not a resolution, as soon as possible, for the good of America’s national defense.
In the context of national-security-relevant cybersecurity capabilities, the key and salient difference between the United States and China is not our “innovation ecosystem,” but instead the simple reality that our firms possess the computing power to train and operate models like Mythos today, and theirs do not. It is that simple. China is prioritizing its efforts to develop its own compute manufacturing capacity, and Mythos is likely to motivate those efforts even further. The best way to disrupt this is a serious increase in targeted export controls on semiconductor manufacturing equipment, too much of which flows freely today from the U.S. and its allies to China. It is long past time for a major effort here from Congress and the Trump Administration.
The utility of SB 53, which requires frontier AI companies to disclose their assessments of their own models’ cybersecurity risks, is hopefully more apparent now. Some criticisms of that legislative framework have asserted that it attempts to control frontier AI or micromanage companies. But in truth, the framework rests on the notion that AI will not be controllable (that stopping the diffusion of potentially dangerous capabilities is impossible) and that therefore today’s “frontier” capabilities will be broadly dispersed within a short while. This is exactly why we need transparency about what developers see at the frontier: so that a broad range of societal actors can prepare their defenses appropriately against the developments forming there.
Today, Mythos is accessible only within Anthropic and to Anthropic’s chosen partners. Limited releases of this kind will likely be a growing trend because of both compute constraints and safety concerns. Mythos appears to be about five times more expensive to run than Opus, which was already not cheap, but for Anthropic the issue is not so much cost as it is allocating sufficient compute to serve Mythos to the public. This means that the best AI models of the future may be disproportionately, if not exclusively, used within frontier labs for their own purposes, which at least at first will be automated AI R&D. These so-called “internal deployments” have motivated my own pursuit of transparency and private governance frameworks, the latter being private organizations that would audit the safety and security posture of frontier AI companies, including their internal deployments.
—
I wrote on X that Mythos means the training wheels are coming off on AI policy. Perhaps the Department of War’s effort to strangle Anthropic is, to use another metaphor, a sign that the gloves are off too. If the last month has made anything clear, it is that we are in a nastier, sharper, harsher, meaner era of AI discourse, policy, and—ultimately—of AI development and use.
I will be honest: I do not see how it is possible for Mythos-level capabilities to diffuse through the world without causing at least some significant security crises and economic disruption. And of course, this cycle of compute infrastructure buildout has only just begun; within a year or so, gigawatts of additional AI compute capacity will be online.
The pimply and ill-shapen adolescence of AI and AI policy has come to an end. The first maturity has now begun.
It is overwhelming, and it will only become more disorienting with time. As the events of the coming years unfold, I expect many people, including loved ones, will say to me and others involved in AI policy during the adolescent era, “couldn’t you have done something to stop this?” Maybe so. All I can say for myself is that I did everything I felt was prudent and possible.
There is, ultimately, no plan for how to contend with the era to come. There are no guardrails on the open plains. I am heartened by the knowledge that America has always winged it.
None of the young men who would become our founding fathers had much of an idea about what should be in our Constitution in the weeks leading up to the Constitutional Convention they had called. Young America faced seemingly irreconcilable structural tensions, and they had only the faintest idea of how they would solve them. They were armed merely with principles, knowledge, wisdom, and chutzpah.
Our country was born in improvisation, and Americans are often at our best when we are improvising with little more than principles, knowledge, wisdom, and chutzpah. America has always done well by leaning into the wind, even when it blew harshly in our face. When we are at our best, we stand defiantly against the storm. And our pursuit of greater knowledge, and of our founding ideals, is, in the final analysis, the only defense we have, our sole ballast against the gusts.
So be like the girl in the painting. Put your foot down, hold your wisdom to your chest, and stare down the storm.



As Arod suggests, it's time to think beyond individual countries. You said it yourself when describing one of the precepts of SB 53: "... the framework rests on the notion that AI will not be controllable--that stopping the diffusion of potentially dangerous capabilities is impossible--and that therefore today’s “frontier” capabilities will be broadly dispersed within a short while." Rather than suggest that China and the US (and everybody else) now have a mutual interest in coming up with an international architecture to manage ungoverned diffusion of AI, you double down on "preparing defenses." Can we get ahead of a catastrophe for once?
Just as a moderating comment - to detect many of these long-present vulnerabilities, Claude requires access to the source code, which in most cases is not public or available to hackers. It can, of course, also find vulnerabilities in running code, but not to the extent it can from the source. One would assume that all significant open-source code would be analyzed quite soon, and that subsequent releases would also be analyzed.
This is not to say that Claude doesn't represent a significant threat. But it can also be looked at as a resource that will, over time, reduce that threat.