Introduction
This newsletter first started picking up readers because of my writing about AI regulations in state governments. I have a background in state and local policy, having spent the plurality of my career working at think tanks that specialized in such issues. In 2024, I was one of the early people to raise concerns about what I perceived to be a torrent of state-based AI laws, including, most notably, California’s now-failed SB 1047.
The torrent of state bills has only grown since then, with more than 1,000 having been introduced this year. Most of the lawmakers who draft such statutes are not acting with malice; instead, they feel motivated by an urge “to do something about AI.”
This may not feel like a great reason to pass “a law about AI” (generally, laws should solve specific problems, not scratch the itches of random state legislators), but states are sovereign entities. In the absence of a federal legislative framework that preempts this sovereign power, this torrent will continue.
This is the path America is currently on. Precisely how big a problem this path poses for the health of America’s AI sector, or for the broader economy, is unresolved. Unresolved, too, is how one would go about exercising federal preemptive power to avoid it. The “moratorium,” which was debated in Congress this summer and failed to achieve the 50 Senate votes necessary to pass, was the first attempt to grapple with this issue. I’ll have more to say soon about what I think should be done.
But for now, I want to give you a sense of what is actually happening in state AI regulation by examining two categories of recent laws (“AI in mental health” and “frontier AI transparency”) that either have passed or seem likely to pass. After all, if preemption is going to return as a major issue, it would be helpful to understand what exactly it is we are preempting. Let’s have a look.
Background
First, allow me to return to that figure of “1,000 state AI bills” I just quoted you; it deserves some nuance. While the claim is factually accurate, it must also be said that the vast majority of these 1,000 bills are not worthy of concern.
Some bills are primarily procedural in nature (“the legislature instructs executive branch agency X to go write a report on the impacts of AI on Y”). Others create new paths to civil and criminal liability for people who do things like knowingly distribute malicious deepfakes. These bills are often, though by no means always, redundant with existing state statutory or common law. I don’t love seeing even more redundant laws pass (America has thousands already), but one must pick one’s battles. Occasionally a deepfake law is written in a problematic way, but such laws are generally transparently unconstitutional efforts that can and will collapse in the face of legal challenges (though it is worth noting that at least one such law was struck down on Section 230 grounds rather than the First Amendment; I suspect it would not have withstood constitutional scrutiny either).
After setting aside the anodyne bills, only several dozen substantively regulatory ones remain. To be clear, that is still a lot of potential regulation, but it is also not “1,000 bills.”
The most notable trend since I last wrote about these issues is that states have decidedly stepped back from efforts to “comprehensively” regulate AI. In 2024, Colorado passed SB 205, a law focused on policing so-called “algorithmic discrimination.” By early this year, that basic legislative template had made its way into nearly 20 states, including red states like Texas. I wrote extensively about these bills, which were explicitly modeled on parts of the European Union’s AI Act and which were America’s only true candidates for “comprehensive” AI laws. It seemed quite possible that this deeply stupid framework could become America’s de facto nationwide AI legal regime.
Fortunately, that did not happen. The laws collapsed nearly everywhere, helped in no small part by the fact that Colorado’s implementation of SB 205 has been a nightmare. Indeed, Colorado Governor Jared Polis was one of the only governors from either party to publicly support the federal moratorium on state AI laws. The best testament to the fundamental flaws in this framework is that Governor Polis, a Democrat with national ambitions, chose to signal his public support for a Republican legislative initiative that would have taken power away from him. Unfortunately for the European Union, the law it elected to pass is an even worse version of the framework that Colorado will soon be desperately trying to undo, in a way that allows the framework’s politically powerful legislative sponsor (Sen. Robert Rodriguez) to save as much face as possible.
For now, then, comprehensive AI regulation has stalled. Colorado’s experience, and a more generalized turn against the “one big bill” approach for AI regulation, has caused states to focus on narrower issues. Two in particular have caught my eye.
Mental Health
Several states have banned (or, in the politer phraseology, “regulated” or “put guardrails on”) the use of AI for mental health services. Nevada, for example, passed a law (AB 406) that bans schools from “[using] artificial intelligence to perform the functions and duties of a school counselor, school psychologist, or school social worker,” though it indicates that such human employees are free to use AI in the performance of their work, provided they comply with school policies for the use of AI. Some school districts, no doubt, will end up making policies that effectively ban any AI use at all by those employees. If the law stopped here, I’d be fine with it; not supportive, not hopeful about the likely outcomes, but fine nonetheless.
But the Nevada law, and a similar law passed in Illinois, go further than that. They also impose regulations on AI developers, making it illegal for them to explicitly or implicitly claim of their models that (quoting from the Nevada law):
(a) The artificial intelligence system is capable of providing professional mental or behavioral health care;
(b) A user of the artificial intelligence system may interact with any feature of the artificial intelligence system which simulates human conversation in order to obtain professional mental or behavioral health care; or
(c) The artificial intelligence system, or any component, feature, avatar or embodiment of the artificial intelligence system is a provider of mental or behavioral health care, a therapist, a clinical therapist, a counselor, a psychiatrist, a doctor or any other term commonly used to refer to a provider of professional mental health or behavioral health care.
First, there is the fact that the law uses an extremely broad definition of AI that covers a huge swath of modern software. This means that it may become trickier to market older machine learning-based systems that have been used in the provision of mental healthcare, for instance in the detection of psychological stress, dementia, intoxication, epilepsy, intellectual disability, or substance abuse (all conditions explicitly included in Nevada’s statutory definition of mental health).
But there is something deeper here, too. Nevada AB 406, and its similar companion in Illinois, deal with AI in mental healthcare by simply pretending it does not exist. “Sure, AI may be a useful tool for organizing information,” these legislators seem to be saying, “but only a human could ever do mental healthcare.”
And then there are hundreds of thousands, if not millions, of Americans who use chatbots for something that resembles mental healthcare every day. Should those people be using language models in this way? If they cannot afford a therapist, is it better that they talk to a low-cost chatbot, or no one at all? Up to what point of mental distress? What should or could the developers of language models do to ensure that their products do the right thing in mental health-related contexts? What is the right thing to do?
The State of Nevada would prefer not to think about such issues. It would rather deny that these are issues in the first place and insist that school employees and occupationally licensed human professionals are the only parties capable of providing mental healthcare services (I wonder which interest groups drove the passage of this law).
Ironically, this sort of avoidance of the big issues is precisely the kind of thing most mental health professionals would advise you against doing. At the margin, this law probably lowers the incentive of AI companies to speak honestly with the public about how people are using LLMs for mental health. Laws of this kind are essentially a machine for converting honesty and candor from desirable social traits to legal liabilities. That seems like a poor policy outcome, fostering less transparency from AI companies about a thing most AI observers agree is an emerging issue. This law is unlikely to stop people from using chatbots for mental health, but it will drive their use into the dark.
And of course, there is the classic issue that the drafters of these statutes do not know what they are doing, commanding AI developers to do things that are not really possible to do. From Illinois HB 1806:
An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
“Therapy or psychotherapy services” here means “services designed to diagnose, treat, or improve an individual’s mental health or behavioral health.” Nevada has a substantively similar provision in its law.
How, exactly, would an AI company comply with this? To take the simplest possible example, imagine that a user says to an LLM, “I am feeling depressed and lonely today. Help me improve my mood.” The States of Illinois and Nevada have decided that the optimal experience for their residents is for an AI to refuse this basic request for help. It is entirely unclear to me how an AI company could comply with this in practice, even a closed-source model provider with the full ability to monitor user activity (a form of corporate surveillance the law, incidentally, encourages), let alone an open-source model developer.
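To make the compliance problem concrete, here is a minimal sketch of the kind of blunt keyword gate a closed-source provider might bolt onto its product in an attempt to avoid “providing therapy services.” Everything here is hypothetical: the function names, keyword list, and refusal message are my own inventions for illustration, not anything any provider actually does or anything the statutes actually prescribe.

```python
# Hypothetical sketch of a crude "therapy request" gate. All names and
# keyword lists are invented for illustration only.

MENTAL_HEALTH_TERMS = {
    "depressed", "depression", "anxious", "anxiety",
    "lonely", "therapy", "therapist", "panic attack",
}


def looks_like_therapy_request(message: str) -> bool:
    """Naive check: flag any message containing a mental-health keyword."""
    text = message.lower()
    return any(term in text for term in MENTAL_HEALTH_TERMS)


def model_reply(message: str) -> str:
    """Stand-in for the actual model call, which is not shown here."""
    return "(normal model response)"


def respond(message: str) -> str:
    if looks_like_therapy_request(message):
        # The "compliant" behavior the statute seems to demand:
        # refuse and redirect to a licensed professional.
        return ("I can't help with mental or behavioral health concerns. "
                "Please contact a licensed professional.")
    return model_reply(message)


print(respond("I am feeling depressed and lonely today. Help me improve my mood."))
```

Even this toy version illustrates the bind: it refuses benign requests like “help me improve my mood,” misses any phrasing not on the list, and depends on inspecting user messages in the first place, which is exactly the monitoring an open-source model developer cannot perform at all.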
But the point of these laws isn’t so much to be applied evenly; it is to be enforced, aggressively, by government bureaucrats against deep-pocketed companies, while protecting entrenched interest groups (licensed therapists and public school staff) from technological competition. In this sense these laws resemble little more than the protection schemes of mafiosi and other organized criminals.
I do not see how any of this helps anyone, other than the bureaucrats.
I would, however, be remiss if I did not mention Utah HB 452, from Rep. Jefferson Moss, a seemingly much more reasonable measure under consideration but not yet passed by the legislature. It defines “mental health chatbot” more narrowly, covering only systems specifically marketed as providing therapy, and then creates various requirements for the developers of such systems (prohibitions on data sharing, disclosures to consumers, and restrictions on advertising). I have not reviewed Utah HB 452 carefully, so I will stop short of further commentary, other than to say that I believe it is possible to write prudent legislation on this topic.
Frontier AI Safety
California’s SB 53, introduced by Senator Scott Wiener, and New York’s RAISE Act, from Assemblyman Alex Bores, are the only bills I am aware of in America focused on matters of frontier AI safety—catastrophic harms related to cyberattacks, biothreats, model autonomy, and the like. The RAISE Act has passed both houses of the legislature and is on New York Governor Kathy Hochul’s desk, where it can sit until as late as December. SB 53 will almost certainly pass both houses of the California legislature, and my current guess is that it will become law.
There is much to recommend these laws over the Nevada and Illinois bills I discussed above. Unlike those laws, SB 53 and RAISE are technically sophisticated, reflecting a clear understanding (for the most part) of what it is possible for AI developers to do. Here is what SB 53 does:
1. Requires developers of the largest AI models to publish a “safety and security protocol” describing the developer’s process for measuring, evaluating, and mitigating catastrophic risks (risks in which a single incident results in the death of more than 50 people or more than $1 billion in property damage) and dangerous capabilities (expert-level bioweapon or cyberattack advice or execution; engaging in murder, assault, extortion, theft, and the like; and evading developer control).
2. Requires developers to report “critical safety incidents” to the California Attorney General. These include theft of model weights (assuming a closed-source model), loss of control over a foundation model resulting in injury or death, any materialization of a catastrophic risk (as defined above), model deception of developers (when the developer is not conducting experiments to try to elicit model deception), or any time a model first crosses dangerous capability thresholds as defined by its developer.
3. Requires developers to submit to an annual third-party audit, verifying that they comply with their own safety and security protocols, starting after 2030.
4. Creates whistleblower protections for the employees of the large developers covered by the bill.
5. Creates a consortium charged with “developing a framework” for a public compute cluster (“CalCompute”) owned by the State of California, because for political reasons, Scott Wiener still must pretend that he believes California can afford a public compute cluster. This is unlikely to ever happen, but you can safely ignore this provision of the law; it does not do much or authorize much spending.
The RAISE Act lacks the audit provision described in item (3) above, as well as an analogous public compute section (though New York does have its own public compute program). Other than that, it mostly aligns with the sketch of SB 53 I have given.
Here are the things I believe are good about SB 53 and RAISE:
They are in line with many policies I myself have advocated for, including frontier transparency and entity-based regulation (regulation tied to characteristics of the corporations that develop models, rather than the models themselves). Entity-based thresholds capture the largest developers while ensuring minimal risk to startups and other small/medium-sized businesses.
They are not prescriptive about what developers should do at a technical level; they allow developers to experiment with different approaches to technical risk management. The safety and security protocols are not required to be reviewed or approved by anyone in government. Developers alone determine what their protocols should be, and then are expected by these laws to comply with their own regulation. In this sense, the laws are pseudoregulatory—public regulation of private regulation (i.e., the developer’s own safety and security protocols).
Because the laws are just regulatory enough, though, they give America sufficient basis for an alternative transparency standard to the European Union’s AI Act Code of Practice, a much more intensive “voluntary” regulation. Over time, America can use its industrial and diplomatic power to encourage other governments—including the EU’s—to harmonize with our lighter-touch approach.
Here are the things I think are suboptimal:
These laws create path dependency around a specific set of risks (CBRN risks, cyberattacks, etc.) that we do not empirically observe to be the major AI risks affecting the public today. There is a plausible case for taking these catastrophic risks seriously, but I am skeptical that the public will be convinced that “we regulated AI” because of the passage of these laws. I am quite sure both bills’ sponsors would heartily agree, and that they have many, many ideas for future AI laws they could pass; their enthusiasm for passing more laws also worries me.
Because these laws are pseudoregulatory, they create an easy hook for future sessions of the California and New York legislatures to dial up the pseudoregulation to the point that it becomes regulation. Already, the law blurs the lines, with requirements for “transparency” about things like “the procedures the large developer will use to monitor critical safety incidents and the steps that a large developer would take to respond to a critical safety incident, including, but not limited to, whether the large developer has the ability to promptly shut down copies of foundation models owned and controlled by the large developer, who the large developer will notify, and the timeline on which the large developer would take these steps.” This is rather prescriptive, even if it does not fill in every detail. It’s easy to imagine how the bill’s current fragile equilibrium between prescriptiveness and flexibility for companies could break.
My guess is that the definition of “critical safety incident” in SB 53 is somewhat overbroad and could result in a ton of mandatory reports filed by AI developers with the Attorney General. This could be fixed by striking Section 2(c)(4) or by appending to the end of that section a clause like “when such deceptive techniques pose a material risk” or similar.
I worry that audit requirements will end up making these safety and security protocols less substantive over time, given how auditing outside of hard-ground-truth fields like accounting tends to devolve into a simplistic box-checking exercise.
Finally, as with most laws about emerging technology, SB 53 and RAISE fix certain assumptions about the world into statutory concrete. For instance, today we assume that a “frontier AI developer” is someone who makes purely digital artificial intelligence agents. Perhaps the “frontier AI developer” of 2030 will be something more like a company that builds foundation models for robots, or that manufactures the robots themselves. Are we sure that we want to regulate robots the same way we regulate fully digital agents? Should robot regulations vary based on where the robots operate, e.g., in a school, a factory, or a household? SB 53 and RAISE are silent on such questions, which were not foremost on the minds of their drafters. Yet these laws could end up being our society’s initial answer to them, almost by mistake.
As with the Illinois and Nevada bills, I don’t entirely see whom these laws help, per se. Who is given additional certainty or clarity by the passage of these laws? Whose problems are solved, and how are they solved? What is the elevator pitch to the taxpayer? What is the compelling vision of the world these laws act in service of? What great things will they enable? For what grand human accomplishments will these laws serve as the foundation?
AI policy challenges us to contemplate questions like this, or at least it should. I don’t think SB 53 or RAISE deliver especially compelling answers. At the end of the day, however, these are laws about the management of tail risks—a task governments should take seriously—and I find the tail risks they focus on to be believable enough.
Conclusion
I find myself much more alarmed by bills like the Illinois and Nevada mental health laws than I am by SB 53 or RAISE. I also worry about the gaggle of bills that Senator Wiener’s colleagues in the California legislature have proposed, which together amount to a regulatory escapade far more adventurous than even Brussels has contemplated. In SB 53 and RAISE, the drafters have shown respect for technical reality, (mostly) reasonable intellectual humility appropriate to an emerging technology, and a measure of legislative restraint. Whether you agree with the substance or not, I believe all of this is worthy of applause.
Might it be possible to pass relatively non-controversial, yet substantive, frontier AI policy in the United States? Just maybe.