The AI Patchwork Emerges
An update on state AI law in 2026 (so far)
Dear readers,
I am pleased to announce a new paid tier of Hyperdimensional: "institutional." This subscription is for firms seeking private, one-on-one or small-group conversations with me about the various matters of AI and AI policy I cover in this newsletter. Subscribers at this tier will get one one-hour meeting of this kind per quarter. The price is $7,500 per year. Weekly Hyperdimensional articles will remain free of charge. I also expect to announce new benefits for subscribers at my pre-existing paid tier in the near future. Please do not hesitate to email me with any questions. Those interested may subscribe here or at the button above this text.
Onto this week’s essay.
Introduction
State legislative sessions are kicking into gear, and that means a flurry of AI bills is already under consideration across America. In prior years, the headline number of introduced state AI bills has been large: famously, 2025 saw over 1,000 state bills related to AI in some way. But as I pointed out, the vast majority of those bills were harmless: bills creating committees to study some aspect of AI and make policy recommendations, bills imposing liability on individuals who distribute AI-generated child pornography, and other largely non-problematic measures. The number of genuinely substantive bills—the kind that impose novel regulations on AI development or diffusion—was relatively small.
In 2026, this is no longer the case: there are now numerous substantive state AI bills floating around covering liability, algorithmic pricing, transparency, companion chatbots, child safety, occupational licensing, and more. In previous years, it was possible for me to independently cover most, if not all, of the interesting state AI bills at the level of rigor I expect of myself, and that my readers expect of me. This is no longer the case. There are simply too many of them.
It’s not just the topics that vary. It’s also the approaches different bills take to each topic. There is not one “algorithmic pricing” or “AI transparency” framework; there are several of each.
The political economy of state lawmaking (in general, not specific to AI) tends to produce three outcomes. First, states sometimes do converge on common legislative standards—there are entire bodies of state law that are largely identical across all, or nearly all, states. The second possibility is that states settle on a handful of legal frameworks, with the strictest of the frameworks generally becoming the nationwide standard (this is how data privacy law in the U.S. works). Third, states will occasionally produce legitimate patchworks: distinct regulatory regimes that are not easily groupable into neat taxonomies.
We are early in this legislative session, and more broadly in AI policymaking, so we cannot yet jump to conclusions. However, the early signs from this year’s legislative session suggest a true patchwork of state AI law is not only possible, but perhaps even likely.
Therefore, I am going to take a different approach this legislative session: I will cover bills by theme, selecting a few exemplary bills from each for deeper analysis. Please know that there are limitations to this approach: even in a domain of law I cover, I am only giving you a blurry impression, and there will be some areas of law I skip altogether.
“Transparency”
The first clear takeaway from this year's state legislative bills is that the concept of "transparency" as an instrument of AI governance has been stretched so far that it is no longer useful. Everyone wants to say their bill is a "transparency" mandate, because this sounds like a lighter touch than "regulation." The result is that numerous public policy objectives have been shoehorned into the category of "transparency." A few examples suffice.
Some bills require that employers who use AI disclose that use to employees, customers, the general public, the government, or other counterparties.
New York's AB 8962, from Democratic Assemblywoman Nily Rozic, requires news outlets (defined broadly; the New York Times, Hyperdimensional, and Dwarkesh Patel's podcast are all plausibly covered) to disclose to "news media workers" (this term is not defined, so it is unclear whether the author intends to include part-time employees and contractors, or just full-time staff) all use of generative AI in the production of content. It also mandates similar disclosures to consumers, and it gives individual employees of a news organization the right to opt out of any deals made by their publication to license their work as training data.
Rhode Island's SB 2010, from a suite of Democratic lawmakers, requires any insurer using AI of any kind in the administration of healthcare benefits to disclose details about its use of AI in its business processes and which systems it uses, and to track metrics such as the amount of time human employees who use AI spend reviewing cases (with the implication being that AI-assisted work requiring less human time—a phenomenon known in economics as "labor productivity"—is bad). Insurers must also "disclose" details of the model developer's training datasets and the developer's "data governance measures." This is therefore a regulation on developers as well as on healthcare insurers.
Missouri HB 1747, from Republican Scott Miller, requires that every single person who shares any image, video, or audio file that was “created or modified” using artificial intelligence disclose their use of AI in a “mark or statement.” What this means is left up to the Missouri Attorney General. The definition of AI is one of the typical broad ones, and could be construed by an aggressive enforcer to cover even basic machine-learning-based tools, such as Adobe Photoshop’s object and subject detection features; indeed, a contemporary camera’s autofocus feature is plausibly covered.
Other proposed bills require model developers to disclose various things to various parties. This is an enormous category, so I will only take a handful of examples here.
In New York, AB 8595, from Democrat Steve Otis, requires every developer of any generative AI service (including all open-weight and open-source models) to post to their website the URL of every source of "video, audio, text or data" they used to train their models (or that they contracted a third party to collect—as written, this appears to include commonly used datasets such as Common Crawl), as well as a "detailed description" of every piece of content obtained from a "covered publication" (journalistic sources, but again defined so broadly that this newsletter is plausibly included).
Also in New York, AB 1456, from Democrat Pamela Hunter, requires insurers that deploy any AI system to determine whether a specific medical service is medically necessary, and thus covered by their insurance policy, to "submit the artificial intelligence algorithms and training data sets that are being used or will be used." An insurer that uses GPT-5.2 would need to submit GPT-5.2's "algorithm" and its training data (as a side note: I do wish people would stop using the word "algorithm" to refer to the architecture of a language model. It is a kind of algorithm, yes, but "the GPT-5.2 algorithm" is really more of a mathematical architecture within which the model itself learns many algorithms from its training data, which are ultimately encoded in the model parameters).
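To make that aside concrete, here is a deliberately toy sketch, invented for illustration and nothing like GPT-5.2's actual design: the "algorithm" a bill like this might demand is just a generic forward pass, while essentially everything a regulator would care about lives in the learned parameters.

```python
# A toy illustration, not any real model's code: the "algorithm" is generic,
# and the interesting behavior lives entirely in the learned parameters.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny "architecture": embed tokens, average them, project to a vocabulary.
VOCAB, DIM = 1000, 64
W_embed = rng.normal(size=(VOCAB, DIM))  # learned parameters (random stand-ins here)
W_out = rng.normal(size=(DIM, VOCAB))    # learned parameters (random stand-ins here)

def next_token_logits(token_ids: list[int]) -> np.ndarray:
    """The 'algorithm': average the context embeddings and project to the vocabulary."""
    context = W_embed[token_ids].mean(axis=0)
    return context @ W_out

# Two models sharing this exact code but holding different parameters behave
# completely differently; disclosing the code reveals almost nothing about either.
print(int(next_token_logits([1, 2, 3]).argmax()))
```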
Then there is Missouri’s novel HB 2239, a data-center transparency bill introduced by Democrat Marty Murray, which requires owners of data centers larger than 100 megawatts to disclose a truly enormous amount of information about their environmental impact and operations.
I hope one thing is clear: “transparency” is not in any meaningful sense synonymous with a light regulatory touch.
Child Safety
Child safety has been a hot-button issue in AI in the past year, so it should come as no surprise that state laws attempt to tackle this issue from many different angles. Some examples:
Washington State's SB 5956, from Democrat T'wina Nobles, prohibits schools from using AI for a wide variety of tasks. For example, the bill prohibits all schools in the state from using "AI" (at this point I hope I don't have to say that the term is defined very broadly and includes a huge swath of modern software, not just generative AI) to create "any predictive classification" of a student's "likelihood of misconduct… criminal behavior," and similar. This is more a curiosity to me than anything else; I wonder why this seems necessary to the legislator. We are okay with humans doing this, right? This law also imports the European Union's prohibition on the use of AI in the classroom to "infer emotional states," though it broadens that prohibition by including things like "mental health conditions" and "sensitive personal characteristics." Again, I do not understand why; what is the problem with a diagnostic tool that, say, helps a school determine which students have dyslexia or other reading disabilities? There is a reasonable body of evidence suggesting that dyslexia and similar conditions are often missed by overworked teachers, and that early treatment can make a substantial difference. We also have some evidence that these conditions are disproportionately common in the incarcerated population. And finally, there is strong reason to believe that teachers' lack of knowledge in identifying signs of reading disabilities is the key reason these conditions are not more systematically diagnosed. If AI tools can help with this, how is that not a cause for celebration?
In Florida, SB 1344 from Republican Colleen Burton requires “companion AI chatbots” (defined more narrowly than many definitions of “AI” in state laws, but without any thresholds for the size of the developer, the popularity of the model, or any exemption for open-source and open-weight models) to impose age verification measures and mandates a popup every 60 minutes reminding users (apparently regardless of whether they are children or adults) that the AI system they are interacting with is not human. The age-verification provision seems fine, if currently overbroad, while the mandatory popups (which are, by the way, rampant in this year’s crop of AI regulations) seem straightforwardly stupid to me. It does not seem as though there is mass confusion about AI chatbots being human; there is very little evidence so far, for example, that the tragic cases of AI-involved teenage suicidality were caused by the child’s confusion over the AI’s humanity.
On a positive note, if folks are looking for a better version of what Senator Burton is trying to do with FL SB 1344, I would point you to Washington State HB 2225, from Democrat Lisa Callan. This still has the popup mandate, but is generally a better starting point for legislation of this kind (though it is far from perfect).
At the other extreme, Tennessee SB 1493, from Republican Becky Massey, makes it a Class A felony for a developer to train a model that can do things like "develop an emotional relationship with, or otherwise act as a companion," and provide information about mental health and general healthcare. This is a morally disgusting bill in my view, and it is also likely to be unconstitutional (as are several other bills I've mentioned here, though the cases are less open-and-shut than this one).
Then there is Missouri’s HB 1742, also from Republican Scott Miller, which bans minors from accessing language models for “recreational” and other purposes. So if my child lived in Missouri, they would be prohibited from making their own video games with, say, a coding agent, but would be allowed to engage with AI characters in other people’s video games. If you are a “conservative” and you think that the government has a role to play in the regulation of your child’s “recreational” use of software, I encourage you to consider switching parties, or moving to Europe.
A closing note on child safety law: my guess is that, under current Supreme Court precedent, most “chatbot” age verification laws are going to be deemed violations of the First Amendment, and probably should be unless they are carefully scoped. To make a long story short: courts have long held that minors have free speech rights, which includes the right to access speech, not just to communicate it themselves. These minor-held rights are abridged when compared to adults: a minor does not enjoy the same First Amendment right to pornography that an adult enjoys, for example. There is a case currently pending before the Supreme Court about social media age verification, which will be an interesting test case. But given the range of clearly educational and otherwise intellectually enriching uses of AI (in the general case, the best educational content on arbitrary topics you can find on the internet is now produced by frontier AI models, not humans), it is hard for me to imagine courts buying into fully general language model age-verification requirements.
Algorithmic Pricing
Numerous states are proposing bans or significant limitations on "algorithmic pricing" (which, to translate from overwrought policy lingo into natural English, means "using software to set prices") when the pricing algorithm is informed by customer data.
Say that you recently purchased newborn diapers on Amazon, and then a day or so later you are shopping for fixed-focal-length portrait camera lenses (I speak from experience). Perhaps Amazon's "algorithm" would identify that you are probably buying the lens to take pictures of your newborn, and, given the emotional valence of this purchase, perhaps it would infer that you have a higher-than-usual willingness to pay for this particular lens. That's the sort of thing that NY SB 8623 (introduced by Democrat Rachel May), TN HB 1468 (introduced by Democrat John Clemmons), and numerous others are trying to prevent.
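For concreteness, here is a stylized sketch of the practice these bills target. The data fields and the markup rule are invented for illustration; they are not drawn from any bill or any retailer's actual system.

```python
# A deliberately simplified illustration of "algorithmic pricing" informed by
# customer data. Every field and rule here is hypothetical.

def personalized_price(base_price: float, recent_purchases: list[str]) -> float:
    """Nudge the price up when purchase history suggests higher willingness to pay."""
    markup = 1.0
    if "newborn diapers" in recent_purchases:
        # Inferred life event -> inferred emotional stakes -> inferred willingness to pay.
        markup += 0.08
    return round(base_price * markup, 2)

# The camera-lens shopper from the example above pays 8% more than a stranger would.
print(personalized_price(899.00, ["newborn diapers", "baby monitor"]))  # 970.92
```

Roughly speaking, the bills target the step where the customer's purchase history feeds into the markup.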
Some of these bills contain the bare-minimum exemptions (the price of a DoorDash delivery intrinsically requires my "personal information," i.e. my home address, to set properly); many do not. I am opposed to regulating something as fundamental and abstract as "the setting of prices using a customer's information and software," and I think it would be a good thing for the world if all of these laws failed. They probably will not all fail, however.
Algorithmic Discrimination
Long-time readers will recall my multi-month series of diatribes about the "algorithmic discrimination" bills introduced during last year's state legislative session. These laws failed in every state in 2025, and the one state where this EU-inflected framework did pass (Colorado, in 2024) regrets it so heavily that the state's Democratic Governor supported the Republican moratorium on state AI legislation last summer.
Some states have not learned their lessons, though, so we have repeats of these laws introduced in Washington State (HB 2157, introduced by Democrat Cindy Ryu), New York (AB 8884, introduced by Democrat Michaelle Solages), and New Mexico (HB 28, introduced by Democrat Chris Chandler as "The Artificial Intelligence Transparency Act"—see what I mean about "transparency"?).
These are awful laws, but I have said all I have to say about them already.
The Florida “Bill of Rights”
Florida Republicans, led by Governor Ron DeSantis, seem determined to make themselves a nationwide exemplar for "red-state AI governance" (despite the fact that President Trump has expressed in no uncertain terms that he would prefer states not "lead the way" in AI regulation). The current crystallization of this effort is the "Florida AI Bill of Rights," introduced in the State Senate as SB 482 by Republican Tom Leek. I disfavor the term "bill of rights": we already have those at both the federal and state levels, so the term is largely meaningless. What does it say about the respect of so-called populist politicians for their voters when they call their proposed law a "bill of rights" and then include this provision (emphasis added) in said law?
(2) Floridians may exercise the rights described in this section in accordance with existing law. This section may not be construed as creating new or independent rights or entitlements.
Beyond the silly title, the law basically does the following:
Prohibits the Florida government from contracting with Chinese AI companies (though it does not stop the Florida government from contracting with a U.S. company that uses Chinese AI models, including their APIs; thus it does nothing substantive to mitigate the presumed risk of sensitive Floridian data being transmitted to Chinese AI companies);
Reiterates deepfake protections that already existed in Florida law (creating civil liability for persons who knowingly distribute malicious deepfakes);
Imposes age verification, parental control, and similar requirements for AI services used by minors;
Mandates the "I'm not a human" pop-up every 60 minutes for all users of language models (Why do politicians love attaching their laws to conspicuous and almost-always-obnoxious features of software? Why do they want their voters to think of them in this way? The lack of policy strategy exhibited by self-styled "policy wonk" would-be technology regulators continues to perplex me.);
Imposes data-protection requirements on AI developers that, near as I can tell, are largely redundant with Florida's existing data privacy laws; and
Creates name, image, and likeness protections that mirror those passed by Tennessee in its ELVIS Act about two years ago.
Compared to many of the other laws discussed here, this law is fine, though it is still probably harmful and mostly unnecessary.
Conclusion
As I write, we are just two weeks into 2026. And yet the volume and complexity of state AI laws are at an all-time high. Many, if not nearly all, of these laws have extraterritorial effect. Almost all of them have gaps in drafting so large as to make any sane reader question whether the drafters really understand what they are doing. And many states have not even begun their legislative sessions. We will see hundreds more bills introduced in the coming weeks. Meanwhile, the current politics of AI, in practice, brand anyone who believes all this state lawmaking is a bit excessive as an extremist "techno-libertarian."
If I sound frustrated, it is because I am. A patchwork of ill-considered state rules—rules clearly drafted on the back of an envelope, rules that are sometimes about topics where no new rules are needed—is indeed proliferating. The lawmakers in question have now had three years to educate themselves about generative AI, and none of them have bothered. Yet they seem supremely confident that they know what is good for generative AI. I do not want such people, and such decadence, governing this tool whose utility and importance grow for millions of Americans every week.
I wish the Trump Administration the best of luck in its efforts to stop at least some of this through its recent Executive Order, though any honest observer must acknowledge that there are firm limitations on what the executive branch can unilaterally do here. I have long been a supporter of a federal AI law with broad (though not universal) preemption of state regulations. Though I scoped my proposed preemption more narrowly than many other supporters of a federal law, it is worth noting that my proposal would have preempted almost literally every law I have described in this essay. That is because the specific set of laws we are now seeing is what I have been anticipating for months.
Regardless of whether a federal law looks anything like my proposal, here is the salient point: preemption really must be expansive, even if it stops short of an outright ban on state AI regulation. You cannot preempt all of these laws piece by piece; broad-based preemption of some kind is essential, and anyone who pretends otherwise is simply not engaging with the reality on the ground.
I look forward to seeing the White House’s federal AI legislation proposal, hopefully in the near future.

