Before Leviathan Wakes
Why I Believe What I Believe
My political philosophy, as with many reflective people on the right, is characterized by a fundamental and irreconcilable tension between libertarianism and conservatism.
I am, in most ways that matter for policy, a libertarian. The fancier phrase is classical liberal, but this is primarily a label that grown-up libertarians use to distinguish themselves from adolescent (or adolescent-brained) libertarians. More precisely, people use “classical liberal” to signal that they have libertarian impulses, but are also not unreflective about the role of the state in providing a stable business environment, investing in infrastructure, providing for the national defense, and the like.
Fundamentally, I view the state as a kind of tragic necessity, something we must merely tolerate, because without it, no civilization we can conceive of is possible. I think it is possible AI will change this—that “AI,” very broadly conceived, may one day be able to conduct enough of the core functions of the state that new architectures of civilization will become possible. As a “classical liberal,” I am intrigued by this.
And yet I am also a conservative, and as a conservative, this notion makes me fearful. The conservative in me is skeptical of top-down projects to remake the world, including projects with a libertarian bent to them. The conservative in me appreciates the world for what it is, revels in what Michael Oakeshott called “the warmth of untidiness.” It is with classical liberalism that I think. But it is with conservatism that I love.
Here is what this means in practice: I oppose almost all AI regulation. I do not think there should be new laws to regulate “algorithmic discrimination” or “algorithmic addiction” or “algorithmic pricing.” I despise the notion of regulating “algorithmic design,” and I especially despise the idea of judges and juries second-guessing the “algorithmic design” choices of others, as seems to be the current direction of U.S. tort law.
I am opposed to efforts by bureaucrats to inject “ethics” or rules against “misinformation” into information technologies. I reject most regulation of AI use by businesses and consumers, believing as I do that existing law—plus the private sector simply “figuring it out”—will resolve most mundane AI governance challenges.
I am also not a “doomer.” I do not believe AI is “going to kill everyone”—or at least, being unable to prove a negative, I am skeptical of “existential risk.” I am opposed to pauses and bans on AI development. I am uncertain about the labor-market impact of AI in the future, but I am skeptical of the notion that AI will destroy human work, and strongly opposed to regulations or taxes designed to remedy the “problem” of AI and labor, since today, there is no problem we can perceive.
In short: I believe that almost every idea in AI policy is a bad one, and I oppose the vast majority of AI policy proposals.
But. There is this one thing. I’ll get to that later.
My preference for light regulation extends far beyond AI. Indeed, the big problem with “applying existing law to AI” is that “existing law” is often terrible—confusing, overbroad, overeager, wrong-headed, or simply outdated. I am thus in favor of deregulating the physical world so that physical instantiations of AI—robots, self-driving cars, autonomous drones, and things for which we do not even have names yet—can transform our lives. My deregulatory inclinations here go far beyond “permitting reform.” There are entire domains of the law that, if I had my druthers, we would simply delete altogether, despite my conservative skepticism of quick, radical change.
I believe that regulation and institutionalization accumulate with time, like brush within a forest, and that every once in a while, a controlled burn is necessary. I believe that the American spirit is being strangled by the administrative state, and I believe the progressive project begun in this country by Woodrow Wilson, and continued by FDR, LBJ, and all the rest, was fundamentally a mistake. I reflect upon the long arc of administrative federal power in this country, compare this to the ideals of our founding, and I wonder, usually in silence but occasionally in public, whether America is, indeed, still a republic.
And yet I love my country. I want to save my country from being strangled, though the conservative in me fears that saving it may require radical change, fast. I want to preserve the country that I love, yet the classical liberal in me knows that preservationism usually means stagnation. This is the fundamental tension in my thought, and indeed in a great deal of right-of-center political thought.
There is no “solving” this tension, though—no way out of the paradox. The classical liberal in me is always driving to solve problems. The conservative in me knows that the most important problems in life have no solution. To discover paradox, to dive into it like an ocean, to swim around within it, is the human condition at its fullest. I swim around every day in the paradox at the center of conservatism and classical liberalism. I have grappled with this tension since my reflective political life began sometime in the early aughts, and I grapple with it still today.
And this brings me to the one niche of AI regulation that I do affirmatively support today: the management of potential catastrophic risks from AI by the state.
By “catastrophic risks,” I refer to extremely damaging events caused by human misuse or abuse of an advanced AI system. I do not refer to existential risks, or the notion that the AI itself will have a desire to kill or enslave humanity.
The potential for AI to pose catastrophic risks is not hypothetical. We have seen AI systems that might allow malicious actors to perpetrate devastating cyberattacks on critical systems—hospitals, banks, power plants, and the like. It seems probable that other domains of catastrophic risk, such as biological weapons development, will become live problems soon.
So I support regulations to try to mitigate the catastrophic risks of AI. There are four reasons for this. The first one is obvious: catastrophic risks are very bad! It would be better if there were less large-scale death and destruction in the world.
The second one is political-theoretic: national defense is a key pillar of why we have a state to begin with. Catastrophic risks therefore necessarily implicate Leviathan.
The third one is economic: in general, market actors do not have strong incentives to protect against catastrophic risks, which are massive negative externalities, often dwarfing the balance sheet of any individual firm. Say Anthropic releases a model that a malicious actor uses to conduct a cyberattack that does $5 trillion in damage. Anthropic is only worth $800 billion, so a $5 trillion lawsuit would put it well past the point of insolvency. A catastrophic harm may well already be “lights out” for Anthropic, or any other company, so there is little incentive to avoid such harms if doing so entails real costs in the present day.
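To put the same point in stylized terms, here is a textbook limited-liability sketch (my illustration only; the symbols are hypothetical, and only the $5 trillion and $800 billion figures come from the example above):

```latex
% Limited liability truncates the incentive to prevent catastrophe.
% c: precaution spending, p(c): probability of catastrophe (decreasing in c),
% D: social damage if the catastrophe occurs, V: firm value.
\[
\underbrace{c + p(c)\,D}_{\text{social expected cost}}
\qquad\text{vs.}\qquad
\underbrace{c + p(c)\,\min(D,\,V)}_{\text{firm's internalized cost}}
\]
% With D = \$5\text{T} and V = \$0.8\text{T}, the firm's marginal return to
% precaution is |p'(c)|\,V, roughly one-sixth of the social return |p'(c)|\,D.
% Everything past the point of insolvency is someone else's loss.
```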
The fourth reason—and this one may be the most important—is practical: once AI models have this potential, of course the state will get involved. Do you think the national-security apparatus of the United States will ignore models with the potential to be weaponized, both by America and against America? Obviously not.
Indeed, my fear—well exemplified by the recent Department of War/Anthropic fight—is that once the national-security apparatus realizes what they have on their hands with this thing we call “frontier AI,” they will go nuts. Seek to control the hell out of it. Keep it away from the public and all to themselves and whomever they deem worthy. This is the ultimate dystopia to avoid. Not only would state monopolization of advanced AI be an unprecedented instrument of government tyranny, it would also mean that all Americans, and really all humans, would be deprived of the personal enrichment, scientific insight, and economic growth that I believe frontier AI will deliver.
Many people share this fear. Those who are libertarians have mostly defaulted to denying that the catastrophic risks exist to begin with, and downplaying these risks in increasingly rococo ways as they have grown clearer. I have argued that this is a bad strategy that, at best, buys you a little time before Leviathan wakes.
Instead, I believe you must own the issue of the state’s role in catastrophic-risk mitigation. If you are a skeptic of state power, you will want to circumscribe, bound, narrow, and otherwise define the role of the state in dealing with AI’s catastrophic-risk potential. You need to own the issue, rhetorically and substantively, so that you have at least some fighting chance of preventing the state from monopolizing AI for itself.
As Tyler Cowen put it recently, “we thus want sustainable methods of perpetual interference that (a) are actually somewhat useful from a safety perspective, and (b) give governments some control, and the feeling of control, but not too much control.” I cannot say it any better than that.
There you have it. That is why the AI regulation I support takes the form it does: in brief, the creation of private institutions that sit between the state and the frontier labs, precisely so that they can mediate between the inevitable power-seeking impulses of the state and the private business of the frontier AI industry. “Sustainable methods of perpetual interference.”
No byzantine theories about how I am secretly pursuing “regulatory capture,” or how I am a “closet Democrat” or “closet EA,” are necessary to explain my beliefs. I am simply trying to manage two problems: (1) the actual catastrophic-risk potential of AI and (2) the drive of the national-security state to control AI, once they realize what it is. These problems strike me as unavoidable and plausibly dire, so I focus a significant amount of my attention on them. This is not complicated.
If you disagree with my policy proposals, then I encourage you to come up with your own. But when you do, remember that irreconcilable tension is not only unavoidable, but good. Don’t try to wish the tension away. It is tension, after all, that makes music.

