The Moving and the Still
Reflections on Delhi
Since I was young, I have enjoyed imagining what it must be like to walk around the inside of a cell from the perspective of something so small that the cell’s interior would feel like the streets of Manhattan. When I was younger, simpler, and more naïve, I imagined it like my textbooks told me it would be: orderly, logical, Mozartian. I supposed that a cellular pedestrian could look up at stonelike structures and enjoy the rhythm of cars stopping at red lights and gliding through green.
As I came to understand the world as it really is, it gradually dawned on me that the cell is nothing like this. Instead it is stormy, chaotic, and packed more densely than reason alone could comprehend. Or at least, that’s how it would seem to our on-foot tourist. Though a greater logic may permeate the system, from his vantage point every square picometer surrounding him is the subject of intense, ceaseless, and wholly unplanned negotiation. Everywhere there are problems being created and solutions being invented in real time. Very little is a guarantee because nothing is fixed. It is hard to keep anything in place when a trillion tug-of-wars are unfolding in parallel.
Once this realization had set in, I grasped just how artificial our manmade projections of order are. Our ninety-degree angles, our grids, our chiseled stone and sharpened metal. I developed no desire to reject that tradition—indeed, this realization made me cherish our human legacy of rational inquiry and tool building even more. Instead, I came to understand that legacy as a distinct mode of human existence, an attempt to tame nature by building models of it. Blocky models with straight lines, but models nonetheless. This, fundamentally, is why my political theory is rooted in deep skepticism of fixed rules and, more broadly, of over-elevating our human conjectures of order—of imposing our ninety-degree angles on a curvy world.
I saw that no matter how far down one peered, no matter the level of abstraction, nature always resembles something like my more mature picture of the cell. It is moving squiggles, rather than fixed lines, all the way down—but importantly, all the way up, too. We project our right angles and our fixed concepts onto the world with our reason, but in truth these are thin veneers over a fundamentally organic, unpredictable, endlessly moving reality that Michael Oakeshott called a ‘ceaseless improvisatory adventure’ and that, three thousand years before him, the poets of the Rig Veda had already suspected of our cosmos: “perhaps it formed itself, or perhaps it did not — the One who looks down on it, in the highest heaven, only He knows, or perhaps even He does not know.” Squiggles all the way up.
—
I had occasion to learn about the ancient religious traditions of India in preparation for a trip I took to Delhi for the AI Impact Summit, the “official” global AI gathering that emerged out of the AI Safety Summits in Seoul and Bletchley Park. After spending a few days in the city, I find myself unsurprised that the above-quoted Rig Veda hymn—the best nugget of ancient wisdom I have encountered in years—was composed by poets who roamed these very same Gangetic plains.
Delhi is exceptionally, outrageously alive. Walking its streets is like experiencing Phil Spector’s Wall of Sound, but for every sensory and mental faculty. There are individual blocks where the density and diversity of activity stretch the imagination. If you have only visited Western cities, you are probably overestimating the amount of human expression and activity that can be compressed into a dozen or two square feet. And I am sure my Indian friends, with pride and with a smile, would tell me that I haven’t seen anything. I do not doubt that they are right.
As I wandered about the streets of Delhi, I pondered whether this place, with its vivacity and its warmth and its tolerance for constant flux but also its clear weaknesses in institutional cohesion, would do better or worse over the coming decades than the West, with all its systematic but cold ninety-degree angles. Like all things, the societies that do best will probably incorporate characteristics of both—fluidity and fixity, dynamism and stability. Each mode of civilization has things to learn from the other, and it is best to do so with open eyes and outstretched hand.
I came into this Summit rooting for India—not just to have a successful event, but to have a successful century. I came away rooting even more loudly. But I came in worried, too. Worried that the emerging era of machine intelligence will not be so kind to the Indian people, and indeed not to many countries of the Global South. Worried that this may be one of the first technologies to punish those countries in states of economic transition rather than fuel them. Worried that what will feel like big waves to the insulated and wealthy Americans will feel like a roaring tsunami in places like Delhi, and worried that the people here do not see it coming.
I regret to inform you that I came away even more worried than I went in.
—
The types of concerns I’ve described were latent among the globally representative attendees in Delhi, but they were not explicit. In fact they were swept under the rug, not so much dismissed as denied.
The perils and hopes that we discuss here in this newsletter—the ones that come from transformative AI, powerful AI, AGI, superintelligence, or whatever other moniker you wish—were not really on display at the Summit, not so much because of any failing of the Indians but because these topics are not part of polite global conversation. This is a domestic failing, too: as I have frequently pointed out, the implications of powerful AI are only kind of a part of the conversation in America.
At some point in 2024, for reasons I still do not entirely understand, global elites simply decided: “no, we do not live in that world. We live in this other world, the nice one, where the challenges are all things we can understand and see today.” Those who think we might live in that world talk about what to do, but mostly in private these days. It is not considered polite—indeed it is considered a little discrediting in many circles—to talk about the issues of powerful AI.
Yet the people whose technical intuitions I respect the most are convinced we do live in that world, and so am I. In broad strokes, I believe the evidence that we are fast en route to building recursively self-improving, infinitely replicable, smarter-than-human machine intelligences in the near future has basically only grown since the release of ChatGPT in 2022. There are reasonable and important conversations about what exactly that means in terms of concrete effects, and here I am often more dubious of extreme claims than some of my fellow that-world believers. But the question is very much “what are autonomous swarms of superintelligent agents going to mean for our lives?” as opposed to “will we see autonomous swarms of superintelligent agents in the near future?”
Except that these questions aren’t asked by the civil societies or policymaking apparatuses of almost any country on Earth. Many such people are aware that various Americans and even a few Brits wonder about questions like this. The global AI policy world is not by and large ignorant about the existence of these strange questions. It instead actively chooses to deny their importance. Here are some paraphrased claims that seemed axiomatic in repeated conversations I witnessed and occasionally participated in:
“The winner of the AI race will be the people, organizations, and countries that diffuse small AI models and other sub-frontier AI capabilities the fastest.”
“Small models with low compute intensity are catching up rapidly to the largest frontier models.”
“Frontier AI advances are beginning to plateau.”
At this same Summit, OpenAI CEO Sam Altman remarked: “The inside view at the [frontier labs] of what’s going to happen... the world is not prepared. We’re going to have extremely capable models soon. It’s going to be a faster takeoff than I originally thought.”
One of these conversations suggests that the most important thing about AI is the use of sub-frontier models to augment routine business processes, often without need for a large data center. The other suggests that the frontier systems of today are on the cusp of being able to recursively self-improve (if not already doing it in some ways) and that the probable result of this will be the near-term dawning of machine superintelligence. American companies are spending the better part of one trillion dollars to actualize this vision this year alone in what is surely among the grandest projects in the history of capitalism. None of this is a joke, none of it is a dream.
Why, then, do so many of the thousands of attendees of the global AI Summit pretend only their version of the story exists? What explains this odd dissonance?
—
I came to Delhi with a report in hand, co-authored with my friend and colleague Anton Leicht. We both perceived this dissonance well before the Summit and wanted to offer some arguments for why the second view of AI—the “superintelligence soon” view—should be a bigger part of both global events like the Summit and the internal conversations of every country. The way our report does this is essentially to grab the non-U.S. reader by the shoulders and exclaim, “the U.S. is spending one trillion dollars this year alone to build superintelligence, and the odds are high that your country has no strategy for what this means!”
I am optimistic that we changed at least some minds, but I know any advances we made were small. The audiences we encountered are perfectly capable of understanding our message; they simply deny that it is worth hearing. I believe they deny it for two reasons. First, because if it is true, it might mean that their country, their plans for the future, and their present way of life will be profoundly upended, and denial is the first stage of grief. Relatedly, at the object level, rejecting notions of AGI, superintelligence, and the like shifts the conversation from U.S. strengths (hyperscale cloud computing, leading-edge semiconductor design, frontier AI talent, etc.) to areas where many more countries in the world feel comfortable and confident.
Second, because ‘AGI’ in particular and the pronouncements of American technologists in general are perceived by the elite classes of countries worldwide as imperialist constructs that must be rejected out of hand. This is a rhetorical rebellion of people who perceive themselves, rightly or wrongly, as among the would-be colonized, and perceive America and especially its technology firms as the would-be colonizers.
The denial is an effort to turn what people like Anthropic CEO Dario Amodei frame as a scientific and universal story—the coming of AGI, the criticality of alignment and catastrophic risk mitigation, the inevitability of it all—into just one of many narratives on the shelf. It is almost as if the message is:
“Sure, these crazy Americans might talk about all that AGI stuff, but over here we are talking about things like local AI, ‘AI for all,’ and only the risks we want to talk about, and thus only the policies we want to impose on you. And if you don’t like our policies? Tough. You can’t dangle technology access in front of us, because we’ve got you this time. We can use open-source, and edge compute, and small models that are good enough for what we need. You Americans can keep your fancy computers and your frontier models. We don’t need them, and we don’t need you.”
This is a message that one can hear everywhere from Western Europe to South Asia to Sub-Saharan Africa. I understand why it feels good to say, and in some sense, it might be the case that we Americans deserve to hear a message like that.
Yet the central flaw of all this postcolonial narrativizing is, and always has been, that it exists within the domain of pure concept, not the real world. It’s about the map, and how to draw different ones, not the territory and how to navigate it (indeed, postcolonial thought has a tendency to make the poststructuralist assumption that changing the map is as important as, or more important than, changing the territory). And in the end, satisfying though these “anti-AGI” narratives may be to tell, they are probably empirically wrong in ways that will harm the very people whose independence and humanity they are ostensibly intended to defend.
A country that fails to adopt frontier AI systems rapidly and develop a hard-nosed AI strategy is one that will fall perilously behind both the U.S. and others. It is a country that will refuse to see and make tradeoffs worth making to secure its stake in the future. It is setting itself up for failure and dependence (if not outright subjugation) rather than prosperity and strength.
In the end, then, the rejection of ‘American’ or ‘Anglo’ AI concepts is simply a coping mechanism for weakness that masquerades as a show of strength. Ultimately, though, it makes every country that embraces this manner of thinking weaker.
Getting out of this trap does not require one to “admit the American technologists were right.” It requires one to develop strategies that are robust to the increasingly likely scenarios where they are fundamentally correct. If the AI Summit is any indicator, most of the countries on Earth are developing AI strategies that bet against the central trends of deep learning, and thus the theses of the frontier labs. These strategies seem likely to fail if deep learning continues to work in anything like the way it has for the past fifteen years.
As the report I co-authored with Leicht argues, once countries have accepted the likely reality we occupy, a range of positive options become available. First and foremost, many Middle Powers and developing countries can bet that they have greater institutional flexibility than the more rigid U.S. Indeed, betting on continued American institutional sclerosis seems much safer than betting against deep learning. They can build new institutions, and reimagine existing ones, using the many new things frontier AI systems make possible (of which we have only scratched the surface). This is actually a twist on one of the common features of existing Middle Power strategy: the aforementioned focus on diffusion. Currently, that diffusion effort is usually centered on small and sub-frontier models, but there is no reason that frontier diffusion cannot be the focus instead. However, given that the countries in question often have limited resources, concentrated bets will be a necessity. This is yet another reason clarity about the direction of AI progress is so important; it enables strategic focus.
The slogan of the Summit was “AI for All, Welfare of All,” and while the sentiment is nice, the truth is that the U.S., and to a meaningfully lesser extent China, hovered silently over a great many of the conversations in Delhi this week. “For all” is not so much an invitation as a provocation to those Great Powers. It seems to say, “there are billions of us, and this is our A.I.”
But I choose to take the slogan seriously. The revolution unfolding is not a fight between the Middle and Great Powers, between the U.S. and the Global South, or between China and the U.S. The fight worth winning is humanity’s fight to navigate this transition as smoothly as we can manage.
Humans themselves, with our self-defeating tendencies, are probably the single biggest barrier to success. But the first step to achieving productive cooperation requires recognition of a common problem, and a common problem we surely face. I went to Delhi with an outstretched hand, and outstretched it will remain.
—
On my walks and rides through Delhi, I kept coming back to one thought: this is a people with an unusually high tolerance for complexity, ambiguity, and improvisation. Perhaps these skills are more essential than any others for success as the revolution in machine intelligence unfolds. Perhaps I should be more worried about my own society, with its stasis and intolerance for the new (and to be clear, I am). Complexity science has taught us that there are systems too ordered to survive in a dynamic environment. Delhi, ultimately, is the biological city; Paris, I am afraid, would quickly perish in any real body, though once that was not true.
Order and predictability are blessings, but they can also be seeds of destruction. This is one of the reasons I have argued that the U.S. should aim to think of itself more like a developing country during this era. If I had to choose between exquisite order and flexibility in the years to come, I’d err in the direction of flexibility—but I’d be nervous about erring either way. What you want, but can never achieve, is the perfect balance. To have even a chance at balance requires constant adjustment and wagering—something the people of Delhi understand as well as anybody on this planet, and probably something they understand much better than many of my fellow Americans.
This never-ending need to readjust and remeasure one’s surroundings is what makes life complicated, but it is also what makes life possible. And thus there is no easy resolution. Civilization requires the right angles and the squiggles alike, and none among us know in what ratio. So we walk down our paths, we see what we see, and we make our bargains and our wagers. The tension we feel never disappears, but only changes, creating itself in every moment.

