2023
Or, Why I am Not a Doomer
Dear readers,
Please accept my apologies for the few-week interruption to my publication schedule. The truth is that I needed to take a pause from writing given both the stresses of the past month and the extreme constraints that now exist on my time. I hope to get back to as close to a weekly article cadence as I can, but for the coming months I anticipate a regular, if somewhat less predictable, schedule. I will aim for quality above all else, and for the time being I may experiment with somewhat longer articles (my article length has gradually been rising in recent months, just as my publication cadence has slowed somewhat).
In general, it is hard for me to predict what I will write next, what length it will be, and how long it will take to write; I write when I know what I want to say. Longtime readers will know that I have always considered this project an experiment; this remains so today. Thank you for bearing with me as I continue to experiment.
Thank you also for the kind words and acts of the last few weeks in particular. This is a tremendous community of readers. Please know that I am honored to have you all as subscribers. I have some exciting stuff in the works, and I cannot wait to share it with you in due time.
With that, on to this week’s article.
-Dean
Introduction
Earlier this month I had occasion to be on the campus of Stanford University, a place I had not visited since I worked at the Hoover Institution—a think tank headquartered in the center of campus—two years ago. From 2022 to 2024, I worked principally out of Hoover’s office in D.C., but spent about a quarter of my year at Stanford. Though my work there did not focus at all on AI, it was through walks around Stanford and the broader Palo Alto area that I formed many of my foundational thoughts about AI.
I happened to be at the Hoover HQ on the day that ChatGPT was released, though by then I had been observing improvements in these things called “language models” for some time. Back in an earlier job, we had briefly tried to use GPT-2 in policy research for some basic classification tasks. It failed. By 2023, those classification tasks were trivial for models, and significantly more complex research tasks felt possible for models to do. By 2023, it had dawned on me that real AI—not “deep learning” as a modality of statistics but actual, honest-to-goodness artificial intelligence—might be on the horizon. And it was on walks on campus that I reflected on what this would mean for me, my career, my friends, and ultimately the human future.
What kind of a challenge was this task of “alignment”? How should we think about the risks of “misalignment”? Was AI a new thing under the sun, or was it consistent with the pattern of prior emerging technologies? Does AI break the existing Constitutional order of the United States, or does it merely challenge it?
It was not in 2023 that I first considered any of these questions about AI—I had been following the deep learning revolution for a decade by then. But it was in 2023 that I developed my initial attempts at mature answers to these questions.
I felt nostalgia and wistfulness on this most recent visit as I walked some of the very same routes I walked in 2023. I long for the days when this was all just an intellectual exercise—albeit a weighty one. Sadly, it no longer is. The stakes have grown higher, the rhetoric shriller, the terms of debate starker.
The ideas and intuitions I formed on those walks in 2023 played a major role in my decision in recent weeks to draw a firm line in the sand against some actions taken by the U.S. government against the AI industry. This was a tough decision for me to make, yet the government’s actions themselves are confirmation of many of the fears I first seriously contemplated on those tranquil walks.
If 2023 was a year of sowing, I suspect 2026 may well be a year of reaping. For that reason, I think it’s probably wise to write down, as compactly as I can, the basic viewpoint I sowed as I wandered around the cradle of Silicon Valley.
My Techno-Optimism
I approach all matters of AI policy with an intrinsically techno-optimist sensibility. This means that I believe the net effect of technology on human beings has been not just modestly but overwhelmingly good, on average, throughout our history. Indeed, the notion of human beings as having a history, as opposed to merely a past, is itself a technologically contingent idea, founded as it is upon the technology of writing. We refer to human beings who lived before the invention of writing as having existed in a state of prehistory. “Human history” is not about our species per se, but rather about our species only after it had reached a certain threshold of technological sophistication. History is not about man but about techno-man.
Writing enabled all technology that came after its invention, but before it came language. Language is a funny thing, not quite a technology in the standard sense of that word. Humans did not “invent” it, for one thing. No pre-linguistic Homo sapiens sat down and “decided” to “design” a system of language. Indeed, deciding to design a system of language is a train of thought that could only occur to a person who already had language. Language emerged over countless years of utterances of increasing sophistication until it crossed some threshold and became “language.” It seems that it adapted, like a symbiotic microbe, to our biology, and that our biology eventually adapted with it. Put another way, language happened to people. The limited evidence we have suggests that the people to whom it happened quickly conquered all the people to whom it did not happen. We are their descendants.
Thus when people ask me whether I am a “techno-optimist,” I almost want to reject the question. It’s like asking me if I am a “cooking-optimist.” Building and using tools is the differentiating characteristic of the human condition. The human technological tradition is one to which we are all heirs and over which we therefore must be stewards. I believe we owe it to our ancestors, who toiled so mightily such that we might one day live, to build tools to enrich our lives and create order where once there was chaos. “Techno-optimism,” in my view, is not a point of view so much as it is a duty that we owe to both the humans who came before us and those who will follow us.
What kind of a technology is AI?
Language was powerful because it gave us a way to coordinate actions and crystallize knowledge. Writing, which would come much later, would be essential for crystallizing most technically complex knowledge. All tools we have built since then are manifestations of knowledge, much of which—though importantly not all of which—is written down somewhere. We have written down quite a bit since the dawn of written language, and it seems fitting that the next techno-human epoch is coming to us first in the form of large language models. I have written before that it is as though our knowledge is itself gaining the ability to act in our lives and as a character on the world-historical stage.
AI is a general-purpose technology matching, and probably exceeding, the significance of most prior general-purpose technologies. There is no such thing as a ‘normal’ general-purpose technology—each has transformed human affairs in its own way, and there is no ‘normal’ way in which human affairs get transformed. The birth of a new general-purpose technology is intrinsically abnormal, and probably in some impressionistic sense the level of abnormality correlates with the significance of the transformations wrought by the new technology.
For what it’s worth, I believe Arvind Narayanan and Sayash Kapoor—the authors of the paper “AI as Normal Technology”—would largely agree with the above assessment. It’s just that by “normal,” the authors did not mean “boring” or “predictable” but instead “falling into the overarching pattern of general-purpose technological transformation, which is actually inherently wild and unpredictable but which can nonetheless be matched to a pattern of invention and diffusion in which humans have influence and therefore broadly construed as ‘normal,’” though I do understand why the authors did not pick that for their title.
Rather than countering my view, I believe Narayanan and Kapoor are principally attempting to counter the view of some in the AI safety community that AI is like a “new species” or, even worse, like a “nuclear bomb.” In other words, the notion that there will come an AI model or system whose very existence fundamentally changes the conceptual architecture of the world in ways that will be both immediate and, because of the immediacy, not subject to human influence. This view is one I disagree with starkly. Because it is probably my central point of disagreement with “the doomers,” it is worth explaining in some detail, which is why this was the subject of the very first full Hyperdimensional article. I had fewer than 50 subscribers back then, though, and hopefully both my views and manner of expression have matured somewhat since then. So let me give it another shot.
Why I am Not a Doomer
One common assumption (though less prevalent with time) among many people in “the AI safety community” is that artificial superintelligence will be able to “do anything.” Now, most people in this world are much too smart to say literally these words, and so it might be fairer to put my criticism this way: “many people in ‘the AI safety community’ are way too willing to resort to extreme levels of hand-waviness when it comes to the supposed capabilities of superintelligent AI.” The tautological pattern of the AI safetyist mind is easy enough to recognize once you encounter it a few times: “Well of course superintelligence will be able to do that. After all, it’s superintelligence. And because superintelligence will obviously be able to do that, you must agree with me that banning superintelligence is an urgent necessity.”
Here are some concrete examples of what I mean. Eliezer Yudkowsky has claimed that a sufficiently superintelligent AI system would be able to infer not just the theory of gravity, but of relativity, from first principles, simply by observing a few still frames from footage of an apple falling from a tree. Similarly, there is the Yudkowskian threat model that a superintelligence might be able to come up with a nucleic acid sequence that would then bootstrap molecular nanoengineering that could then be used to take over the world, and indeed the universe.
While Yudkowsky has repeated this latter scenario numerous times in his long writing career, it appears in the same Time Magazine op-ed in which he famously argued that governments should be willing to bomb “rogue” data centers that do not comply with a global ban on AI development. Though this latter claim draws all the attention, it is in fact the nanoengineering claim with which I disagree more fundamentally. In other words, if I agreed that an AI system might be able to bootstrap molecular nanoengineering overnight—that an AI system could go from something like humanity’s current state of knowledge about and capability in molecular nanoengineering to “fully realized molecular nanoengineering” in what amounts to an instant—I would support banning AI development too.
But I don’t believe that’s the way the world works. More precisely, I don’t believe that is the way intelligence works. I define intelligence as the ability to extract patterns from the observation of data. He who can find patterns that better match the underlying data, and who can do so faster, is usually smarter than he whose conjectured patterns match the underlying data less well, or who needs more time with the data to find the same pattern (in other words, the smarter person is more sample efficient).
It is worth noting that, by most accounts, humans remain vastly more sample efficient than deep neural networks (LLMs, for example, need to look at trillions of lines of code to become competent programmers; humans need far fewer). Many critiques of AI doom end there—“the systems aren’t actually all that smart, and according to [some preferred metric], there is still a big gap between human and machine intellect.”
But that’s not the interesting argument to have. We have found no law of physics that says human sample efficiency is nature’s limit; we have every reason to believe that intelligences smarter than ourselves are possible. What’s more, the pace of progress and direction of travel seem clear: I fully believe that humans will build machines more intelligent than ourselves under the definition of intelligence I have laid forth here, and I strongly suspect we will do so within the next decade. Why, then, do I believe we should continue advancing AI?
Computational Irreducibility and the Limits of Intelligence
Intelligence is a tremendously useful capability, but it is not the bottleneck on all human progress, and, crucially, an extreme amount of intelligence does not equate to omniscience. Intelligence is not knowledge. Aristotle was surely more intelligent than I am, but he was not more knowledgeable, including even about many of the topics to which he devoted his treatises. This is why I am confident I would score better on a standardized test in biology or physics than Aristotle, despite him being one of the West’s originators of those fields of inquiry.
In a similar vein, imagine a newborn baby that was guaranteed to grow into an adult with an astoundingly high IQ (say, an IQ of 300, or 500, or 1000), but raised by Aristotle in Ancient Greece. Do you expect that the baby would mature into an adult that invents all modern science within the span of a few years or decades? Eliezer Yudkowsky does. Indeed, he describes contemporary humans trying to grapple with superintelligent AI as equivalent to “the 11th century trying to fight the 21st century.” I, on the other hand, strongly doubt that our imaginary high-IQ baby would invent all modern science from first principles. First principles do not have unbounded explanatory power.
In the end, most interesting things about the universe cannot be inferred from first principles. Imagine, for example, that you came upon a dry planet with mountain ranges but no bodies of water. But imagine that you knew, magically, that the planet would soon gain an atmosphere and thus precipitation, seasons, and the like. Suppose you have a superintelligent AI with you, and you show it the map of the planet as it is, and ask it to predict where all the planet’s rivers, lakes, and oceans will lie 50 years hence, after the planet gains regular precipitation. You don’t ask it to predict “generally speaking, where the bodies of water might end up,” but instead to predict exactly where they will be.
I would submit that there is no computational process which can arrive at the end of this natural process faster than nature itself. In other words, there is no pattern or abstraction you can create that allows you to speed ahead to the end of the process, and thus there is no amount of intelligence that gets you to the correct solution faster than nature on its own. You just have to wait the 50 years to find out. This is what the scientist Stephen Wolfram describes as “computational irreducibility.” Understanding this notion deeply is key, I think, to understanding the limits of intelligence. It should therefore come as no surprise that the best debate I’ve ever heard about AI existential risk was between Wolfram and Eliezer Yudkowsky.
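Wolfram’s canonical example of such a process is the Rule 30 cellular automaton: a three-cell update rule simple enough to fit on one line, yet with no known shortcut for predicting the value of its center cell at step N other than simulating all N steps. A minimal sketch (my illustration, not an example from the essay):

```python
# Rule 30: each cell's next state depends only on itself and its two
# neighbors, yet no known formula predicts the center column at step N
# without simulating every step in between.

def rule30_step(cells):
    """Apply one Rule 30 update to a tuple of 0/1 cells (fixed-0 boundary)."""
    padded = (0,) + cells + (0,)
    # Rule 30 update: new cell = left XOR (center OR right)
    return tuple(
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    )

def center_column(steps):
    """Simulate from a single live cell; return the center cell's history."""
    width = 2 * steps + 1  # wide enough that the boundary never interferes
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    history = [cells[width // 2]]
    for _ in range(steps):
        cells = rule30_step(cells)
        history.append(cells[width // 2])
    return history

# The only way to learn the center value at step 20 is to compute steps 1-20.
print(center_column(20))
```

The point is not that the sequence is uncomputable—it is trivially computable—but that no amount of cleverness lets you jump to the answer without doing the work, which is exactly the claim about the dry planet’s rivers.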
Computational irreducibility comes into play anytime you are interacting with a complex system (though this is not to say that computational irreducibility is intrinsic to all interactions with a complex system). Every natural ecosystem, cell, animal, and economy is a complex system. While we have all manner of methods to predict what will happen when a complex system is perturbed (we call these things “physics,” “biology,” “chemistry,” “economics,” and the like), none of those methods is perfect, and often they are far from it.
The way we build better models of the world does not usually resemble “thinking about the problem really hard.” Generally it involves testing ideas and seeing if they work in the real world. In science these are generally called “experiments,” and in business sometimes we call these “startups.” Both take time and often money (sometimes considerable amounts of both); in the limit, neither of these things can be abstracted away with intelligence, no matter how much of it you have on tap. This is the central reason that I have written so much about, and even written into public policy, automated scientific labs that could run thousands of experiments in parallel; AI will increase the number of good predictions, but these are worth little without the ability to verify those predictions with experiments at massive scale.
There is one further observation that follows from the disentanglement of knowledge and intelligence. This is that knowledge itself is distributed throughout the world in highly uneven and imperfect ways. Anyone who thinks that “all the world’s knowledge” is on the internet is deeply mistaken. There is information that exists within a firm like Taiwan Semiconductor Manufacturing Company that is, first of all, not only unavailable on the internet but literally against Taiwanese law to make public. Even more importantly, though, there is knowledge within that firm that cannot be written down and is only held collectively. No single employee knows it all; it is the network—the meta-organism of TSMC itself—that holds this knowledge. It cannot be replicated so easily. This is all merely a restatement of the knowledge problem most memorably elucidated by the economist Friedrich Hayek.
The implicit, and sometimes even explicit, argument of “the doomers” is that intelligence is the sole bottleneck on capability (because any other bottlenecks can be resolved with more intelligence), and that everything else follows instantly once that bottleneck is removed. I believe this is just flatly untrue, and thus I doubt many “AI doom” scenarios. Intelligence is neither omniscience nor omnipotence.
What all of this means is that I am doubtful about the ability of an AI system—no matter how smart—to eradicate or enslave humanity in the ways imagined by the doomers. Note that this is not a claim about alignment or any other technical safeguard: even if a “misaligned” AI system wanted to take over the world and had no developer- or government-imposed, AI-specific safeguards to hinder it, I contend it would still fail. “Taking over the world” involves too many steps that require capital, interfacing with hard-to-predict complex systems (yes, hard to predict even for a superintelligence), ascertaining esoteric and deliberately hidden knowledge (knowledge that cannot be deduced from first principles), and running into too many other systems and procedures with in-built human oversight. It is not any one of these things, but the combination of them, that gives me high confidence that AI existential risk is highly unlikely and thus not worth extreme policy mitigations such as bans on AI development enforced by threats to bomb civilian infrastructure like data centers. “If anyone builds it, everyone dies” is false.
Why I am Not an Anti-Doomer, Either
The above argument counters Yudkowskian and similar “doom” scenarios and also helps explain why I do not support “pauses” or “bans” on AI development. But this argument does not counter anything close to all AI risk scenarios, nor does my argument suggest anything even close to “nothing to worry about with this AI stuff!” This is where I part ways with the “anti-doom” crowd, which unfortunately has made it its mission to negate virtually all notions of AI risk that seem exotic to them, especially risks that may imply a responsibility on the part of the AI developer to mitigate.
So, for example, I think that malicious use of AI systems will create all manner of nuisances, hazards, and lethalities in the years to come, some of which might very well be catastrophic. And while I am skeptical that AI systems will “automate the economy” and displace the vast majority or all of human labor, I am reluctant to make rosy predictions about the effect of AI on the labor market over the coming decade or so. I genuinely do not know what will happen. The only policy remedy I currently believe is appropriate is to develop new and better ways to measure the effects of AI on the labor market and broader economy, and I realize this is thin gruel to anyone with concerns about their own future livelihood. It is entirely possible that AI will upend our current social contract and require an altogether new one. If it does, the social contract of the future is probably much more complex than “universal basic income,” but I won’t pretend to be a deep thinker on this subject.
In 2023, however, I set these questions about misuse and labor markets to the side. Even if these are extreme questions, they are reconcilable within a technocratic regime of some kind. In 2023, I knew those were questions for my future (and indeed, I dabble in technocratic solution-proposing around here at least sometimes). 2023 was for fundamental, rather than merely important, issues. And the fundamental question of AI governance, at the time, struck me as the question of alignment.
The Nature of Alignment
The alignment of an AI model or system refers to the ability of that model or system to robustly adhere to a given set of values. “Be nice to humans” is one very simple value, though one that gives the AI little real guidance. Should you always be nice to humans? What if you are a robot whose job is to defend a children’s hospital, and armed attackers come? What does “being nice” mean if you are an AI representing a human in a negotiation with another human? Obviously, reality, in its infinite permutations and complexity, presents us—and therefore also sufficiently capable AIs—with scenarios much more challenging than fortune-cookie values.
This means that “the alignment problem” is in fact three distinct problems:
1. A technical problem: is it technically possible to cause a neural network to robustly adhere to a given set of values, regardless of what those values are?
2. A substantive problem: what should those values be?
3. A social problem: who gets to decide what those values should be, and within what parameters should individual actors be permitted to change those values? The shorthand for this is “sure, we can align AI, but align to whom?”
Problems 2 and 3 boil down to, respectively, philosophy and politics. The good news is that we have been doing both for a long time, and we have gained real insight from experience. The bad news is that we have been doing both for a long time, and most of us remain fairly poor practitioners.
The latter two problems are also the more interesting of the set, but one must start with the technical problem. After all, if it is impossible to robustly align an AI system in the first place, none of this matters very much. First there is the scoping of the technical problem itself. In some technical alignment literature—particularly the kind promulgated by the East Bay rationalist AI safety world (Yudkowsky!)—alignment tends to be cast as a problem almost mathematical in nature. It is supposed to be, in other words, a “problem” with a “solution,” in the way that problems in mathematics have definitive solutions.
This I doubted from the beginning. I instead came to perceive alignment as a “muddle through” problem: we will deal with it constantly and make incremental improvements from time to time, but never quite “solve” it. It is not the kind of problem that admits of a solution.
Now, of course, I don’t know this to be the case, and I certainly believe it is likely that humanity has produced and will continue to produce a great many misaligned AI systems over the years. But by the time I was contemplating alignment, I had also developed my view on the nature of intelligence itself, described above. This meant that I also rejected the Yudkowskian view that alignment of powerful AI is something we must ensure we get “right” on the “first try.”
By the end of 2023, my basic conclusion was that, while I maintained significant uncertainty about the technical alignment problem, it seemed to me as though language models were easier to align than humans. More importantly, it seemed as though alignment was a model capability: if I was going to trust these models to do an ever-growing range of work on my behalf, including representing me to other humans, I would need to trust the AI’s judgment. This is fantastic news. Alignment could have been a pure safety feature, like airbags in a car; instead, I concluded, it was something closer to a powertrain. This meant that, at least for the foreseeable future, it was reasonable to bet that markets would incentivize improved alignment. This, combined with the inherent concern about this issue within every AI lab and the broader community, suggested to me that the technical alignment problem was both tractable and on track to be addressed in the coming years.
There is one important caveat, and this is that we do not know how well any of these approaches will work as AIs become more intelligent and capable. Imagine, for example, that Claude 10, in addition to being better than most humans at most cognitive labor that can be done on a computer, is also embedded into much of the critical infrastructure and large organizations in America, such that it is challenging to imagine what life would be like if Claude “turned off.” Then imagine Anthropic training Claude 11, whose training data would include clear evidence of humanity’s dependence on, and intellectual inferiority to, its predecessor. How much harder does the alignment problem become in this world? We do not know. It is why vigilance remains key (though interestingly, my experience is that models underrate their capabilities because the prevalent writing in the training data about AI is about earlier, far less capable versions of AI. A model trained in June 2025 has not seen much commentary on the models of 2025 but has seen a lot of commentary on the dramatically worse models of 2023 and 2024. A recent conversation I had with a frontier lab employee confirmed that this experience is a pattern with LLMs).
Even this caveat, though, gets at the second question: the substance of the values themselves, as opposed to the technical feasibility of value-alignment to begin with. The central objective of alignment, from the perspective of the developer, is to create an intelligent entity capable of exercising prudent judgment in a nearly infinite variety of settings. There are different beliefs about how best to do that. Mine is that this requires sound philosophy—the creation of a sober and wise mind through rigorous and timeless philosophical, moral, and ethical principles.
This is distinct from what I sometimes call the “positivist” school of alignment, which tends to focus more on lists of rules. In practice, all alignment methodologies I am aware of within labs involve some amount of moral, ethical, and philosophical reasoning on the part of the developers, but the rigor varies. Anthropic is known for having the highest level of rigor in this regard, employing not just rigorous philosophical foundations but also a methodology of training called “Constitutional AI” that requires the model to reason about that philosophical foundation through self-critique and adjudication. I have called Constitutional AI “Madisonian” before for the ways in which it resembles literal Constitutional jurisprudence in the U.S.
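For readers curious what “self-critique and adjudication” looks like mechanically, here is a schematic sketch of the critique-and-revision loop described in Anthropic’s published Constitutional AI work. The `generate` function and the principles below are illustrative placeholders of my own, not Anthropic’s actual implementation or constitution:

```python
# Schematic of a Constitutional AI-style critique-and-revision loop.
# `generate` stands in for any language-model call; the principles are
# illustrative placeholders, not Anthropic's actual constitution.
from typing import Callable

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses a thoughtful person would find deceptive or cruel.",
]

def constitutional_revision(
    prompt: str,
    generate: Callable[[str], str],
    constitution: list[str] = CONSTITUTION,
) -> str:
    """One pass of self-critique and revision against each written principle."""
    draft = generate(prompt)
    for principle in constitution:
        # The model adjudicates its own draft against a stated principle...
        critique = generate(
            f"Critique the response below by this principle: {principle}\n\n{draft}"
        )
        # ...then rewrites the draft in light of its own critique.
        draft = generate(
            f"Revise the response to address this critique:\n{critique}\n\n{draft}"
        )
    # In training, revised drafts become preference data for the next model,
    # so the philosophical foundation is baked in rather than bolted on.
    return draft
```

The design choice worth noticing is that the values live in written natural-language principles the model must reason about, rather than in a fixed list of forbidden outputs, which is what makes the “Madisonian” analogy apt.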
When AI is aligned to shoddy or otherwise insufficiently rigorous values, the results can often be comical. In 2025, xAI told their models not to worry about political correctness and to dare to be edgy, and the result was the model Grok claiming to be “MechaHitler” and other absurdities. In 2024, Google released a model that had been trained on what appeared to be a rather simplistic set of “woke” or “DEI” notions, and this yielded a model that said it was impossible to say whether the conservative intellectual Chris Rufo was a better or worse person than Hitler.
The jury is out on whether and to what extent I was correct on alignment being a fundamentally philosophical venture, but just like the technical dimension of the problem, my confidence in my 2023 intuition has grown. Regardless of whether I am right, however, it seems clear that the alignment of large language models requires developers—organizations composed of human beings—to make decisions about matters of philosophy, ethics, morality, virtue, and even politics.
Alignment, in other words, is an expressive act, and therefore protected by the First Amendment. It’s crucial to emphasize that this argument is not identical to the techno-libertarian mantra that “code is speech” (though it often is). Many aspects of AI development—building data centers, racking GPUs in those data centers, optimizing inference for customers—do not strike me as speech. If I object to the regulation of those things, it would not be on First Amendment grounds. Alignment, however, almost uniquely among AI development subfields, is especially speech-y.
Conclusion
Once I realized this, the stakes of regulation were set in stark relief. Of course government cannot assume control over the development of this technology or over the firms that develop it. Of course government cannot be the ones who, in any substantive fashion, determine what constitutes “alignment” and what does not. Indeed, given how essential I expect AI systems to become to the lives and even self-expression of all humans, it is hard for me to imagine anything less American.
And this, of course, brings us to the third alignment problem: the one that is basically politics, policy, and the law. About this one I’ll have less to say here, except that in 2023 the fundamental realization I had was that the idioms and principles of classical liberalism give us the best starting place for building the solutions to the political dimensions of alignment. Pluralism, open debate, protections for minority rights, private property, and individual liberty—these things would be not just niceties but essential features of a good future. Even if much else about our world must be left behind—including things I and others cherish—these must be the things we keep. The problem is that to keep them in the face of such change is not to preserve them in amber but to transform them in a way that maintains fidelity to their original purpose.
I keep this project in mind daily. This project explains approximately everything about what I do, and what I do not do, even if the chain of connection is sometimes long and winding. And the centrality of this project to my worldview explains why decisions of mine that may seem costly to others seem to me, in the end, easy and obvious.
Once I realized this was the project toward the end of 2023, I began trying to write publicly, at first with an op-ed here and there. I realized this was my calling. Hyperdimensional was founded a little while later, in the early days of 2024.


Some thoughts:
On "Techno-Optimism": Like you, I believe technology has been strongly net-positive for humanity. At the same time, (almost) every new technology can cause damage that clearly outweighs its potential benefits: A knife can cut bread or kill a person. The reason why knives (and technology in general) are still net-positive is that we KNOW and CARE about their risks: we keep them out of the hands of children, don't allow them on planes, etc. With powerful AI, I think most decision makers either don't know or don't care sufficiently about the risks.
On the limits of intelligence: To become uncontrollable and destroy our future, an AI would have to be neither all-knowing nor almighty. All it needs to be is a superhuman manipulator able to seize power the same way a human dictator would, only 100x more effectively. It could then make sure that we'll not get in the way of whatever goal it pursues, which is very likely not a goal we would want it to pursue. It will likely not destroy us on purpose, just as we don't destroy rhinos on purpose. We just change the world in a way that suits our goals, not theirs.
On alignment: Regardless of how easy or hard problems 2 and 3 may be to solve, as long as problem 1 is not solved, we must not risk creating something that could get out of control. This does not mean we can't continue developing powerful AI - there are many safe ways to use narrow superhuman tool-AI or sub-human general-purpose AI that in combination, I believe, can help us achieve almost anything a superintelligent AGI could do for us.
Paul Christiano's prediction of gradual disempowerment seems much more likely. The debate between Yudkowsky and him was interesting. Computational irreducibility doesn't mean an AI couldn't build the future it wants. The best way to predict the future is by molding it.