Quick Hits
Nvidia has announced its latest top-of-the-line, AI-focused chip, known as Blackwell (named for the noted mathematician David Blackwell). Just one standard data center rack of these machines exceeds, by some measurements, the computing power of Frontier, the building-sized exascale supercomputer the US federal government brought online in 2022. There will, of course, be data centers with many such racks. The chip will begin shipping in late 2024, and in volume by early 2025. This is a good moment to step back and appreciate the power of exponentials.
Inflection AI, a startup that raised $1.3 billion last summer to acquire 22,000 H100s (the top-of-the-line before the aforementioned B100), has announced a strategic pivot. Rather than making models itself, the company will now help other firms build and fine-tune models. Much of the top leadership and engineering talent, including Mustafa Suleyman (noted for his 2023 book The Coming Wave, which essentially argued that we require global authoritarianism in order to effectively manage AI), will be going to Microsoft, where Suleyman will lead the company's internal AI effort. Did Microsoft acquire Inflection in a way designed to avoid Lina Khan's ire (or at least her jurisdiction)? Ben Thompson thinks so (paywall).
Sakana AI, a Japanese AI firm, has released a paper documenting a novel approach to creating AI models. Rather than train models from scratch, Sakana proposes combining different models into one. This is something that has been tried before with some success, but it is a non-trivial process. Sakana has developed an “evolutionary optimization” algorithm that can discover new ways to combine different models automatically. This may be a new method for open-source AI developers to make significantly more capable models without the substantial capital investment usually associated with that endeavor. For instance, I could take a language model and a vision model (trained on images), and combine the two to create a new model greater than the sum of its parts—with orders of magnitude less computing power than one would otherwise need to create a model with such capabilities.
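To make the idea concrete, here is a minimal, hedged sketch of what "evolutionary optimization" over model merges can look like. This is not Sakana's actual algorithm (which operates on real neural network checkpoints and richer merge operations); it is a toy illustration in which "models" are lists of weight matrices, candidates are vectors of per-layer mixing coefficients, and a simple evolution strategy searches for coefficients that maximize a fitness function. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def merge(model_a, model_b, alphas):
    """Per-layer linear interpolation between two models' weight tensors."""
    return [a * wa + (1 - a) * wb for a, wa, wb in zip(alphas, model_a, model_b)]

def evolve_merge(model_a, model_b, fitness, pop_size=24, generations=40, sigma=0.15):
    """Toy evolution strategy over per-layer mixing coefficients.

    Each generation keeps the best quarter of the population (the elite)
    and refills the rest with mutated copies of elite members.
    """
    n_layers = len(model_a)
    population = [rng.uniform(0.0, 1.0, n_layers) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda al: fitness(merge(model_a, model_b, al)), reverse=True)
        elite = population[: pop_size // 4]
        children = [
            np.clip(elite[rng.integers(len(elite))] + rng.normal(0.0, sigma, n_layers), 0.0, 1.0)
            for _ in range(pop_size - len(elite))
        ]
        population = elite + children
    return max(population, key=lambda al: fitness(merge(model_a, model_b, al)))

# Demo: two random "models" and a fitness that rewards closeness to a hidden
# target that lies between them, layer by layer. In practice, fitness would be
# performance on a benchmark, which is what makes the search interesting.
model_a = [rng.normal(size=(4, 4)) for _ in range(3)]
model_b = [rng.normal(size=(4, 4)) for _ in range(3)]
target = merge(model_a, model_b, np.array([0.2, 0.9, 0.5]))

def fitness(model):
    return -sum(np.linalg.norm(w - t) for w, t in zip(model, target))

best = evolve_merge(model_a, model_b, fitness)
print(best)  # evolved per-layer mixing coefficients
```

The key point is that the search needs only forward evaluations of merged candidates, no gradient training from scratch, which is why this style of approach can be so much cheaper than pretraining.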
As I have said before, "compute thresholds" are of limited use for policymakers. By the way, Japan has thus far articulated a laissez-faire approach to AI regulation. Regardless of what American policymakers do, research elsewhere in the world will continue, and at least some of it will be published online, freely available. Is there a way to "control" that without draconian measures? Well-funded safety organizations will write 250-page reports with many complex frameworks and lots of flow charts, but I'll give you the summary: "no!" The paper linked above documents their approach; here are the merged models they created (open source, naturally).
A Classical Liberal Framework for Regulating AI
This week, I’m going to share a piece published today in National Affairs. This is the first bit of public writing I did on AI; it was largely completed in September 2023. In it, I articulate my broad perspective on AI regulation: Most human misconduct has been anticipated by our current legal and regulatory regimes, and we should rely on those existing systems rather than creating a new set of “AI laws” or an “AI agency” from scratch.
I had no public writing track record when I submitted this to Yuval Levin, the editor of National Affairs, and I thank him profusely for taking a chance on a new author.
Some excerpts. On tech diffusion:
The fact that a new tool or technology exists is merely the first step in the process of innovation. Indeed, much of the actual "innovation" in technology takes place not in the invention itself, but in its diffusion throughout society. The internet, to take one example, has led to the development of a host of new industries and other innovations that no one could have foreseen: Who in 1990 could have predicted that the World Wide Web would provide a launchpad for bloggers, gaming influencers, social-media gurus, and gig-working Uber drivers?
It is through diffusion that we discover the utility, or lack thereof, of new technologies. We need to go through such a process to mediate AI, to develop new informal and formal norms related to its use, to learn where the weak spots are that would benefit from new laws, and to discover the true function of the technology before we begin attempting to regulate it. With that concrete experience, we can react to misuse or maladaptations with the law as appropriate.
On technological destiny:
The transformative potential of AI has inspired crucial questions, such as how to ensure that the technology is not biased, that it does not spread misinformation, and that it is only used for "good" or "productive" purposes more generally. Exploring those questions quickly leads to the sort of epistemic and moral questions we have been discussing and debating for millennia: What is the truth, what is the good, and who gets to decide?
Such conversations are essential, but we will fundamentally constrain them if they take place primarily in the debate over what government should be doing about AI today. History makes clear that government is not always an ideal arbiter of what is productive or even good. Indeed, we have found, through millennia of conflict and hard work, that answering those questions in a centralized fashion tends to lead to corruption, oppression, and authoritarianism. We must grapple with that reality, not seek to wash it away; attempting to do so will drastically limit or perhaps even outlaw future progress in this profoundly promising field.
One of the tenets of the American experiment is that it is not the state's unmediated responsibility to answer fundamental epistemic and moral questions for us. Uncomfortable as the boundlessness of the future may be, we discover the truth and the good organically, and often with no specific plan to do so.
And in conclusion:
The British philosopher Michael Oakeshott once likened the human condition to sailing on a "boundless and bottomless sea." We can view the reality Oakeshott describes as a crisis, but it is surely more life affirming to accept and even embrace this fundamental truth. AI, like so many other challenges in human history, should be seen not as a tsunami to run away from, but as an invitation for humanity to be bold, to sail the boundless sea with courage. Each of us gets to choose whether to accept that invitation or to run from it. No matter which path we decide to pursue, we should bear in mind Oakeshott's wisdom: We have already set sail.
I recommend reading the whole thing.