Dice in the Air
A look back at 2025, and a look ahead
I.
The past year has been an unusually productive one, both for me personally and for the budding AI industry whose developments I cover. On a personal level, I developed my framework for the private governance of AI; authored essays, articles, and papers; and did a stint in government. Most important of all, God willing, my wife and I will have our first child—a boy—in as soon as a few hours or days from when I write.
I am proud of the work I’ve done, but in the end almost all of it is a series of wagers. Wagers about the trajectory of AI, the capacity of our government, the resilience of our people and society, and the readiness of the West for very serious technological change. I have always tried to strike a balance between various opposing extremes, but this is an intrinsically dangerous enterprise. It is when one is trying one’s damnedest to stay balanced that one is likeliest to fall.
Has my work been too laissez-faire or too technocratic? Have I failed to grasp some fundamental insight? Have I, in the mad rush to develop my thinking across so many areas of policy, forgotten some insight that I once had? I do not know. The dice are still in the air.
Yet I learned very much from 2025. I’d like to reflect on a remarkable year and offer some thoughts about 2026.
II.
In 2025, AI became ‘real.’ In December 2024, models were still mostly a curiosity to me. Reasoning models and Deep Research agents had started to emerge, but they were nascent and slow. Until then, AI’s practical utility in my life had been modest—the occasional drafting of a pro forma document, the low-stakes research question. The tool-using abilities of OpenAI’s o3 model were the first true breakthrough of the year. The work I did related to the country’s AI Action Plan would have been impossible without o3 as a research assistant. o3 was also the first model I viewed not merely as a convenience but as a necessity for my work. That has only become truer with time.
One year on from December 2024, models have become fantastically useful. As I have discussed recently, frontier coding agents, and especially Claude Opus 4.5, have essentially become autonomous software engineers. In just the last few weeks, the best models have done software engineering work for me that would have cost tens of thousands of dollars had I hired humans to do it.
This means a great deal more than coding well. It means using a computer well. And this means that frontier models can now do a large and growing fraction of the economically valuable tasks a human can do using a computer. This is not the only definition of “AGI,” but it is one definition of AGI. I tweeted this argument in a micro-essay earlier this week. I expected it to be controversial, but what surprised me is how few people disagreed. Even among those who did, no one framed my position as outlandish.
And this is to say nothing of the now obvious and frequent incremental discoveries made or enabled by AI systems in science and mathematics. Eighteen months ago, models could barely do arithmetic; now they make novel (if small) contributions at the level of a doctoral candidate in mathematics, computer science, and other fields.
One year ago my workflow was not that different from what it had been in 2015 or 2020. In the past year it has been transformed twice. Today, a typical morning looks like this: I sit down at my computer with a cup of coffee. I’ll often start by asking Gemini 3 Deep Think and GPT-5.2 Pro to take a stab at some of the toughest questions on my mind that morning, “thinking,” as they do, for 20 minutes or longer. While they do that, I’ll read the news (usually from email newsletters, though increasingly from OpenAI’s Pulse feature as well). I may see a few topics that require additional context and quickly get that context from a model like Gemini 3 Pro or Claude Sonnet 4.5. Other topics inspire deeper research questions, and in those cases I’ll often dispatch a Deep Research agent. If I believe a question can be addressed through easily accessible datasets, I’ll spin up a coding agent and have it download those datasets and perform statistical analysis that would have taken a human researcher at least a day but that it will perform in half an hour.
Around this time, a custom data pipeline “I” have built to ingest all state legislative and executive branch AI policy moves produces a custom report tailored precisely to my interests. Claude Code is in the background, making steady progress on more complex projects.
Essentially none of this was possible, or at least usable, one year ago.
That is a shocking amount of progress for one year. It is faster than I expected, and I considered myself bullish one year ago. And we have only barely begun to scale up compute; in 2026 we will add vastly more compute than we did in 2025. Today no gigawatt-scale data centers exist; by the end of 2026 American companies will control nearly half a dozen such facilities (in addition to the multi-hundred-megawatt facilities coming online throughout the year).
Inside the AI labs, I am quite sure that all these capabilities and more are being used to speed up the next generation of AI systems. New companies are being formed by the week with these technologies as table stakes. My sense is that I am still only using these tools with modest imagination and for a small fraction of what I could be doing with them.
A year ago I predicted that AI progress would be faster in 2025 than it had been in 2024. That prediction was right, despite the conventional wisdom of the time. We are living through a technological takeoff unlike anything seen since the Industrial Revolution. Progress, I think, will remain fast.
III.
The politics of AI also got ‘real’ in 2025. For years, poll after poll has demonstrated broadly negative sentiment among the public about AI. But this year, we started to see that sentiment get channeled into political, and even policy, action. The proposed moratorium on state AI legislation, which was debated in Congress while I was serving in government, became an early flashpoint.
The most important thing about the moratorium debate was not so much the outcome, but instead the coalition it demonstrated. In many ways, it felt similar to the 2024 California AI safety bill SB 1047: sudden and sharp opposition to a policy idea from a diverse range of actors who did not necessarily understand in advance that they shared interests. In the case of SB 1047, it was academic researchers, startups, centrist Democrats, libertarians, and Big Tech. In the case of the moratorium, it was the large intellectual property portfolio owners (“creators,” euphemistically), kids safety advocates, data center NIMBYists, and AI safety organizations.
The subsequent attempt at preemption in the National Defense Authorization Act may have driven the “anti-preemption” coalition—whose interests in fact diverge wildly—closer together. At this point the coalition could become negatively polarized against the notion of preemptive federal laws at all. This would be a sad outcome, since virtually all technologies involve preemptive federal governance. The Trump Administration’s recent Executive Order on preemption, which directs various White House units to come up with a plan for federal legislation, may be an opportunity to defuse tensions and develop a policy proposal that at least some parts of the anti-preemption coalition can tolerate. This will probably require genuine compromise from both sides. Whether it will happen is probably the most interesting foreseeable federal policy question of 2026.
And then there are the states themselves. I do not currently expect California to pass any new frontier AI legislation in the coming year, but they are very likely to pass kids safety and other consumer protection laws that will affect frontier AI systems nonetheless. So are several dozen other states.
Already, between the investigations mounted by state and federal law enforcement and the European Union’s vast complex of technology regulations (and those of other jurisdictions), the regulatory overhead facing frontier AI developers has grown meaningfully. State legislation will only pile on, unless it converges on common standards, which is a crapshoot. 2026 will be the year that regulation begins to hurt, though it is not clear how much.
Lawsuits, too, were a major trend throughout 2025. It began early in the year with the Character.AI cases, in which that company’s models allegedly encouraged teenage suicidality and violence. The Adam Raine case (also involving teenage suicide) against OpenAI, brought later in the year, may well go down in legal history as a landmark. Almost immediately, the case caused OpenAI to change its policies with respect to kids safety and parental controls. This is probably good. The case has also provoked, and will continue to provoke, further lawsuits.
I will be interested to see if the rise of agents provokes novel lawsuits. Regardless, malicious actors around the world will cause meaningful harm using AI agents in 2026, though it remains to be seen how legible that harm will be. Will it feel to us like a ‘normal’ harm (a cyberattack, say), or will it feel somehow like a distinct crisis? In AI, the technology is the easiest part to predict; the hard part is predicting the public’s reaction to it.
IV.
Then there is the scale and ambition of the infrastructure. There are credible arguments to be made that AI investment is accounting for a significant fraction of U.S. GDP growth. America has turned on a dime to seize the AI opportunity. Of course some of this is related to the policies and rhetoric of the Trump Administration, but much of this alacrity comes from businesses and capital allocators who see immense promise in this technology. As a result, infrastructure has been built at unprecedented speed and scale. Much more of it will come online next year.
Today there are questions about the wisdom of this investment, but a year from today I expect we will be happily reaping the benefits. There is undoubtedly some froth in the market, and it would almost be surprising if there were not a bubble forming. Bubbles are a healthy part of well-functioning capitalist systems. I doubt that a bursting event with enough force to stop the current boom will occur in the near term.
There is something novel about this investment boom, however. Some have said there is a “Wild West” feeling to it. I would describe it as “thinly institutionalized.” Rarely before has a nascent industry felt so important, so quickly, while being so thinly institutionalized. This is an infrastructure buildout occurring at sovereign scale, but one which is profoundly dependent upon the personal relationships of a small number of actors to one another. It is a little bit like a developing country—or like the U.S. infrastructure boom of the Industrial Revolution, when, indeed, we were still a developing country.
It is almost as though we have become a developing country once again, at least with respect to this industry. This is probably for the better: my thesis has long been that we must let old institutions wither and build new ones in their place. It’s right there in the tense: a developing country has a future; a developed country gazes at its past. We are all ‘developing economies’ compared to the future collective wealth that is now so clearly within our grasp.
V.
Despite some pessimism about the politics of AI, I find myself closing 2025 with a deep sense of optimism. The AI models we have today are fantastically capable. The very best model, Claude Opus 4.5, is the best in large part because of its superior alignment. I trust Opus 4.5’s judgment and taste more than any other model, and this is because its alignment seems to steer the model toward being genuinely conscientious. It therefore seems likely that alignment itself will become a bigger vector of competition in the AI industry throughout 2026. On the whole, this seems to me like a good thing.
In just the last couple of weeks, we have seen a series of frontier models that are truly competent software engineers. We have built the train that can lay down its own tracks. I expect the next few years to be the most technologically dynamic of my lifetime. It will continue to feel a little bit like a developing country, and for the moment, I am happy with that. There is a fine line between “chaos” and “institutional dynamism,” and we are unlikely to straddle that line perfectly.
For the first time in a long time, the future of America feels genuinely exciting, if harder to predict than ever before, and perhaps more fraught. We are living through a takeoff, climbing to new altitudes year after year. Don’t sterilize this moment in our history. Do what you can to enrich the world around you, but at the very least, try to enjoy the view.
I get uneasy sometimes when I reflect on the role that I personally have played in some of the events of 2025. The thought that I helped shape the AI strategy of the United States fills me with a combination of pride and discomfort.
Words cannot express the gratitude I feel toward the people who gave me the opportunity to serve in government, and the many friends I made while I was serving. But I return ultimately to where I started: uncertainty about my wagers. I know that as the public wakes up to what is happening, some—including friends and family—may look at the words I have written and say, “my God, you knew, and this is all you did?” All I can say is that I did my very best.
Finally, I want to express my immense gratitude to you. This year I was faced with a core challenge: should I become a political actor or should I remain a writer? I chose the latter. Each week I ask you for the most valuable thing you have to give in this world: your time. That any of you choose to give me that gift is an honor. I try to live up to it every week, as we wait for the dice to land.
If I don’t talk to you before the year is out, I wish you happy holidays, a merry Christmas, and a wonderful new year.