Don't Overthink "The AI Stack"
Reflections on the Export Promotion Executive Order
Introduction
When President Trump introduced the AI Action Plan, he also signed three complementary executive orders. Of those three, by far the most complex is Executive Order 14320, “Promoting the Export of the American AI Technology Stack.” The order tasks the Commerce and State Departments, the Office of Science and Technology Policy, development finance agencies, and others with devising and running a program to export the “full stack” of American AI.
There has been a great deal of confusion about what “the stack” means and how to implement this E.O. more broadly, both within and outside government. The Commerce Department published a request for information late last month that asked the public (primarily relevant private sector firms), among other things, whether anything in the order’s definition of AI tech stack should be “clarified or expanded upon.” Understandably, the private sector was less than thrilled with being asked these questions, since they were looking to the Commerce Department to answer them in the first place.
I was the primary staff author of this executive order. Typically when there is widespread confusion about a written product, the principal author has missed the mark in at least some important ways. I therefore feel that the burden is on me to attempt to clarify matters. At the same time, I am now a private citizen, and my views are purely my own. I do not, in any way, speak for the U.S. government about this or any other issue. But I do have strong opinions, and a perspective that is, literally, unique.
This post is my attempt to clarify what E.O. 14320 was trying to do from my perspective and what the “tech stack” really means. I also reflect on a key mistake I believe we made in writing the E.O.—and an easy fix to make implementation simpler.
My key message, however, is simple: do not overthink “the stack.”
The Purpose of the Export Promotion E.O.
The primary point of the AI exports program is to facilitate the construction of AI-focused data centers in other countries. American companies, from hyperscalers like AWS, Microsoft, Oracle, and Google to AI-specific “neoclouds” such as CoreWeave, are already leading AI data center construction worldwide. So, you might reasonably ask, what is the utility of the U.S. government getting involved?
There are a few reasons, presented here in no particular order:
American companies currently are the global leaders in chip design, cloud computing, AI models, and AI applications. Through TSMC, SK Hynix, ASML, and many other U.S.- and non-U.S.-based firms, America and its allies dominate in semiconductor manufacturing. This is unlikely to be true forever. We should press that advantage to its fullest while we have it.
The global market share of U.S. AI services is going to be an important metric for the health of our AI industry overall (though far from the only one). Advanced AI systems are likely to be something like operating systems, with network effects and ecosystem advantages that compound nonlinearly. Many countries, however, are understandably concerned about “sovereign AI.” This term means different things to different people, but one commonality among foreign governments is that they do not want to rely on data centers outside their borders for AI workloads they deem critical (public services, national security, etc.). It’s unlikely the U.S. would tolerate this for its own public services, and we should not expect foreign governments to do so either. We should instead try to meet them halfway. But as a matter of pure economics, it does not make tremendous sense to build that many small data centers (you want economies of scale); the activation of development finance authorities (loans to hyperscalers) can help improve the economics.
There are countries of geopolitical significance where AI infrastructure might not get built through market processes alone—or at least, not within the time-limited window described above. In some of these countries, development finance subsidies are de facto table stakes for getting in the door at all. In others, development finance is not strictly necessary, but can accelerate the timeline to construction.
It is quite possible that we will be under-provisioned on advanced semiconductor production by the late 2020s. It seems wise to send a demand signal to TSMC (and one day, I hope, competitor leading-edge foundries) that U.S.-based chip production must continue growing.
As AI grows more powerful, there is plausible utility to data-center-based governance. Say, for example, that a Mexican cartel began using AI at scale for some nefarious or illicit activity. We might find it desirable—and really I mean almost everyone in the world, not just the U.S. government—to deny that cartel access to computing resources worldwide. Such a policy lever is more feasible to implement if a large fraction of AI data centers are either operated by American firms or by foreign firms with cybersecurity standards established by the U.S. government. Note, however, that the E.O. did not create this policy lever; it is merely a plausible benefit down the road. The wisdom of exercising this hypothetical governance mechanism would be highly fact-dependent, and ultimately a decision for future presidents and their advisors to make with considerable caution.
That is ample strategic motivation to write an E.O. aimed at building more data centers abroad.
But there is one problem: simply building data centers does not, on its own, satisfy all of the motivations I’ve described. We could end up constructing data centers abroad—and even using taxpayer dollars to subsidize that construction through development finance loans—only to find that the infrastructure is being used to run models from China or elsewhere. That outcome would mean higher sales of American compute, but would not be a significant strategic victory for the United States. If anything, it would be a strategic loss.
This is where the concept of “the stack” comes into play. Here is how the E.O. defines this idea:
(A) AI-optimized computer hardware (e.g., chips, servers, and accelerators), data center storage, cloud services, and networking, as well as a description of whether and to what extent such items are manufactured in the United States;
(B) data pipelines and labeling systems;
(C) AI models and systems;
(D) measures to ensure the security and cybersecurity of AI models and systems; and
(E) AI applications for specific use cases (e.g., software engineering, education, healthcare, agriculture, or transportation);
For some firms this is straightforward. Take OpenAI.
Earlier this year the company launched “Stargate,” a brand name for their AI infrastructure program. Their 1.2 gigawatt data center in Abilene, Texas, already partially online and set to be completed next summer, is being built by a company called Crusoe for the exclusive use of OpenAI. OpenAI already has fine-tuning and data labeling systems for both internal use and use with large customers (like governments). Of course, they make AI models and systems. And they have various infrastructure and procedures in place to ensure the cybersecurity of those models and systems. Finally, they make applications: Deep Research, Agent, Codex, and, I am sure, many more to come, are all examples of “applications” made by OpenAI itself. Other startups also build applications on top of OpenAI’s platform (one example is Harvey, a company that aims to provide AI services to white-shoe law firms).
OpenAI even anticipated our E.O. before anyone outside the White House knew it was in the works with their OpenAI for Countries initiative. This is close to exactly what I had in mind while my colleagues and I were formulating the early versions of the export promotion strategy. In fact, it is in some ways better than what we could have feasibly done within government: the initiative includes an effort to partner with host countries to develop a fund for local startups building on top of OpenAI’s models. This is precisely the sort of ecosystemic advantage for which we should aim.
The E.O. asks industry to propose “consortia” of firms that could, together, constitute one instance of a “full stack” AI offering, with the notion that multiple consortia would be accepted into the final program (meaning they’d be eligible for development finance subsidies).
OpenAI for Countries involves a kind of “consortium,” even if they do not call it that. In heavily stylized terms: Nvidia supplies the chips, Crusoe builds the data center, Oracle operates it, OpenAI uses it and supplies the software (the actual supply chain is of course vastly more complex).
Neither Google nor Anthropic has articulated a similar “for countries” initiative (to my knowledge), but both are well-positioned to furnish similar offerings for export (in Anthropic’s case, in partnership with Amazon Web Services).
But it is the “consortia” concept where I believe the drafters of this E.O. (ahem) went astray. The idea was meant to accomplish two things. First, the consortia were intended to enable the export program to present a simple “menu” of full-stack export packages for foreign governments to select from; for example, “the Google option,” “the OpenAI option,” “the model-vendor-neutral AWS option,” and the like. Second, the purpose was to make clear that we understood that no one company (save perhaps Google DeepMind) could independently offer a full-stack AI export package. Rather than clarifying, though, I think we ended up confusing.
Consider a company like Amazon. They are deeply partnered with Anthropic, to the point of co-designing their AI training and inference chips with the frontier lab. But they also offer a wide range of other models through their cloud computing platform, from their own to those of competing frontier labs. Viewed one way, Amazon/Anthropic is a prototypical consortium; viewed another, Amazon is equally viable as a model-vendor-neutral cloud provider.
Why make Amazon and Anthropic pick which one of these they want to be in the program? Why not let each company participate separately in the program? If a country is particularly enthused about Anthropic models (or if Anthropic is particularly enthused about serving a specific market), why not let them work that out with Amazon, the host country, and the relevant agencies in the U.S. government?
The fix, then, is simple: rather than relying on industry to form itself into “consortia,” switch the E.O.’s emphasis to individual firms. The request for proposals could easily be re-oriented in this way. Rather than asking consortia to submit proposals, you would simply ask individual firms to submit proposals that demonstrate a credible ability to satisfy all components of the full-stack definition laid out above. Selected companies would all be offered to foreign governments as part of the program, and all would be equally eligible for development finance loans, grants, and other perks.
The key benefit to this, beyond the obvious simplification for the U.S. government officials implementing the E.O., is that essentially every company that could plausibly qualify for this program already does this in their efforts to build data centers abroad. Thus, rather than forcing firms to collaborate in unfamiliar ways, this relatively simple tweak would allow the E.O. to piggyback off existing corporate dealmaking efforts.
Conclusion
The Export Promotion E.O. is one of the more challenging parts of the AI Action Plan to implement. It puts the U.S. government into the posture of the global salesman—a new position for many American policymakers. It is perfectly understandable to err a bit in architecting such a novel policy. And in this case, I take a healthy dose of personal responsibility for the error.
There is nothing wrong with erring, so long as you can recognize the mistake, understand it, and correct it with alacrity. This is what I believe the U.S. government officials now implementing this E.O. should do. If they can, there is ample light at the end of the tunnel. Implemented well, this E.O. can help secure an enduring strategic victory for the American people and the world alike: bringing the benefits of American AI to the rest of humanity.

