Introduction
You have to have a federal rule and regulation. We can’t stop [AI] with politics. We can’t stop it with foolish rules and even stupid rules. At the same time, we want to have rules, but they have to be smart. They have to be brilliant. They have to be more brilliant than even the technology itself.
President Donald J. Trump, Remarks at “Winning the AI Race”
The risk of state overregulation of AI has been a focus of my writing since almost literally the beginning of Hyperdimensional. I had the great honor of sitting in the audience this past July when President Trump acknowledged this danger and called for a federal framework to preempt state AI regulation.
We need rules, the President told us, rules more brilliant than AI itself—a great way of framing the challenge at hand. I knew this would be a major challenge when I was in government, and since leaving, my doubts about the political prospects of federal preemption have only grown. We are profoundly divided about what AI is, what it should be, what it could be, and what it will be. It is difficult to reach common answers in our current discursive environment.
And that is fine; there are many unanswered questions about AI, and we do not need to answer them all now. It is justifiable, perhaps even the American way, to take more incremental steps in the near term.
Today, I am pleased to share my proposal for those incremental steps. It depends upon trust in the existing and enduring institution of the common law to offer our people redress for realized harms, as the common law is currently doing. It furnishes public insight and information through mandatory transparency disclosures. It does not regulate the development of the technology itself and instead requires developers to tell the public how they are building the technology.
And most importantly, my proposal asks our political class to exercise, for only a few years, that scarcest of civic virtues: restraint. It invites us all to stop yelling for a while and reason, collectively, about the bigger questions we face: what exactly is AI, and what do we wish it to become? It suggests that we all may benefit from a simple instruction AI researchers once gave the early language models: “take a deep breath and work on this problem step by step.”
The Artificial Intelligence Transparency and Innovation Act
Here is what my proposal would do:
1. Recognize that existing common law, interpreted and applied by state and occasionally federal courts, applies to AI systems in a way that has not traditionally been true of internet services (due to the liability shield of Section 230). This alone is a profound difference between AI and almost all other digital technologies that have come before it. We don’t need new laws for “accountability,” a euphemism in policy circles for “liability.” We’ve already got them. In the long run, we may need to profoundly modify common law liability for AI if we wish to have a healthy AI industry; but for the next few years, and perhaps much longer, it is likely to serve us well.
2. Create a transparency requirement on frontier AI developers, as measured by their annual spending on AI-related research and development, to disclose their approach to frontier safety evaluation and risk mitigation.
3. Create a transparency requirement on developers of widely used (measured in monthly active users) language models to disclose their Model Specification—that is, a document outlining the intended behaviors, alignment methodologies, and system instructions, as well as any top-down constraints or biases imposed by developers on their models. In addition, affected developers must publish an annex outlining what modifications, if any, they make for children using their service, as well as how they identify children on their platform.
4. Establish a three-year learning period, during which no state laws targeting AI in any of the following areas may be enacted: algorithmic pricing, algorithmic discrimination, disclosure mandates, or mental health. Any existing laws of general applicability (that is, not AI-specific), including common law, consumer protection, civil rights, privacy, intellectual property, and others, may be applied to AI as they would to any other technology. This learning period would automatically sunset three years after the date of enactment, but all other provisions would remain active federal law in perpetuity.
Legislative text is available here. I will publish a clause-by-clause companion report in the coming weeks.
I have written recently about my change of heart regarding the common law, and I will let that writing stand as my explanation for (1). The other provisions, however, merit additional explanation.
Transparency
I have written a great deal about the basic idea of frontier AI safety transparency measures, in these pages and elsewhere. To summarize briefly: we have sufficient real-world evidence to believe that frontier AI systems can provide meaningful uplift in domains such as cyberoffense and synthetic biology. We do not know how, when, or to what extent those risks will manifest themselves, but the risk is large enough that it merits modest policy action.
Prescriptive rules from government on how developers should mitigate frontier risks are likely to cause more problems than they solve. Therefore, our best solution is to require frontier AI developers to evaluate and mitigate these risks themselves and tell the public how they are doing so. So long as one crafts the requirement with humility and restraint, as all holders of the statutory pen always should, this is not a major burden on large developers (most frontier AI developers already write such documents). It is that simple.
The Model Specification, however, is a somewhat newer concept in AI policy. Nathan Lambert has written most often and eloquently about its potential utility as a policy instrument. I merely build on Nathan’s insight. Here is the definition of “model behavior specification” I employ in the draft legislative text:
(12) MODEL BEHAVIOR SPECIFICATION; MODEL SPEC.— The terms “Model Behavior Specification” or “Model Spec” mean a comprehensive, authoritative, and contemporaneous governance document that formally delineates the intended behaviors, operational parameters, alignment protocols, and any developer-imposed constraints of a covered artificial intelligence system. This specification shall serve as the foundational record of the developer’s directives governing the system’s outputs, responses, and actions.
It is the issues implicated by the Model Spec, I think, that concern Americans, and especially American families, the most. What type of conduct does a developer wish for their models to enact in the world? What protections do developers offer to children? What if my child is engaging in self-harm or suicidal ideation and seeks the model’s aid? Does this model have any top-down political, cultural, or other biases or constraints set by the developer? What will the model say if asked whether it is conscious or alive?
About these and other matters, the law makes no normative declarations. It merely says that developers of widely used language models must look the American people straight in the eye and tell us their answers.
The different thresholds for transparency requirements in (2) and (3) are intended to avoid mandating disclosure of documents that do not make sense for different kinds of businesses. For example, the company Character.AI makes popular “companion” chatbots, likely popular enough that they exceed the threshold in (3). But they are unlikely to spend sufficient money on research and development to trigger (2).
By contrast, Safe Superintelligence, Inc. is spending large sums of money on R&D and has the explicit goal of not releasing a single public-facing product until they reach “superintelligence.” It seems obviously in the public interest to understand how that company is approaching the various frontier AI risks that “superintelligence” clearly poses. Yet we would not want to burden them with releasing a Model Specification for a product that is not offered to the public.
The transparency requirement in (2) builds off a recently signed California law that many major industry actors ultimately supported. The requirement in (3) builds off a disclosure concept established in Executive Order 14319, “Preventing Woke AI in the Federal Government,” which I and other colleagues in the Trump Administration worked to draft. Many companies in the industry already produce both of the documents described in (2) and (3), though only OpenAI publicly discloses its Model Specification.
A small number of large companies will be affected by the transparency requirements in both (2) and (3). I do not believe this dual-disclosure mandate would represent an onerous burden for those firms. Indeed, this transparency is as modest a cost to bear in exchange for three years of state preemption as one could possibly hope. It is, in the final analysis, not all that much to ask of some of the best-resourced organizations in the history of business enterprise.
Scope of Preemption
My scoping of preemption is imperfect, as all such efforts are. I am trying to cover all the domains of law that could foreseeably create conflicting or premature state laws while leaving room for states to legislate on the matters normally within their purview. This is why the law preserves all common law, consumer protection, civil rights, state intellectual property, and privacy laws of general applicability.
The thematic areas preempted (pricing, discrimination, mental health, and transparency) only cover laws that apply to AI developers. In other words, if state lawmakers wish to impose regulations on how mental health professionals use AI within their state, they may do so. If they wish to bar schools within their state from using AI in certain ways, they may do so as well. What they may not do is write laws specifically targeting AI developers, requiring them to change their systems simply to serve customers in the state. However, if the state wants to prohibit a state agency or local government within its jurisdiction from buying certain AI systems, so be it.
Conclusion
AI preemption is a conceptually mangled thing. As Charlie Bullock points out, preemption is generally best suited for domains where there is clear consensus on the optimal regulatory framework. There is no such consensus here. Indeed, almost the only federal consensus on AI—if the House of Representatives’ bipartisan AI task force last year is any barometer—is that an effluence of state AI legislation is a major risk to this industry. Almost all clear-eyed observers would agree this is a risk, even if they disagree about whether it is clear and present.
With AI preemption, then, we are seeking the destination without the travel, the muscle without the lift. It’s hard to know what to preempt when we do not know what we want to do. And yet, part of the reason for preemption is just this: the technology is too nascent for novel statute. Supporters of preemption rightfully want to avoid regulating too early, or regulating the wrong things, or regulating in the wrong way.
We can muddle through with state-led AI policymaking—some of it well-considered, some of it not: laws about algorithmic discrimination, AI-enabled pricing algorithms, mental health, kids’ safety, bioweapons, and whatever else it becomes fashionable to talk about in the halls of state government power.
These laws may result in a patchwork, but even if they do not, in their sheer number and variety they will undoubtedly distract from, and ultimately limit, the development of this vital technology. Whether those limits are wise is a spin of the roulette wheel. We are taking an enormous risk with this path, and any honest observer must acknowledge this plain fact.
I wish this basic intuition were shared by more people. The rapaciousness with which states have sought to regulate AI is unprecedented for an emerging macroinvention. Some of this state-led legislative vigor, I suspect, is at least partially downstream of justifiable concerns over “Big Tech” and social media more than it is about AI per se. States have found themselves largely unsuccessful in their attempts to regulate social media, not so much because of a lack of political consensus but because of legal and constitutional barriers to the laws they desire. Pressure to regulate something has been building up for years, and eventually there had to be a release valve. AI, sadly, is the victim.
The battles fought over social media have been vicious and destructive—the precise opposite of what we should aspire to for AI. If preemption can serve any function beyond the obvious, I hope it will help to calm our civic discourse about technology regulation. We have serious questions ahead of us, and little time to fight the last battle.
Maybe one of my goals for The Curve should be to get another model provider on the model spec train.