Introduction
AI policy seems to be negatively polarizing along “accelerationist” versus “safetyist” lines. I have written before that this is a mistake; most recently, I suggested that this kind of crass negative polarization renders productive political compromise impossible.
But there is something more practical: negative polarization like this causes commentators to focus only on the subset of policy initiatives or actions associated with specific, salient groups. The safetyists obsess about the coming accelerationist super PACs, for instance, while the accelerationists fret about SB 53, the really-not-very-harmful-and-actually-in-many-ways-good frontier AI transparency bill recently signed by California Governor Gavin Newsom.
Meanwhile, the protectors of the status quo—almost always the real drivers of politics—grind on. As a result, those most interested in and knowledgeable about AI policy tend to miss the bigger picture of what is happening in our own field.
California is a case in point. It is undoubtedly the principal battleground in American AI policy today, attracting the attention of doyens from both the accelerationist and safetyist camps. If you listened to these spokesmen, you might assume that the only bill worth mentioning in California this year was SB 53.
(A note: it is true that I, too, have only written about SB 53 in recent months. This is principally because I was working for the federal government until mid-August, and thus could not comment on state legislative matters.)
And yet Governor Newsom signed eight AI-related bills this session. Arguably, SB 53 was among the lightest-touch of these. Some of the other bills in this year’s cohort are far more wide-reaching and dangerous. Indeed, some of the AI-related legislation Governor Newsom signed in the past few weeks is among the worst I have seen in my ten-year career as an observer of state policy.
To show you what I mean, let’s take a look at California’s lesser-discussed escapades into AI regulation.
AB 325 and the Regulation of Pricing
I will start with an area of AI law that has always worried me: the regulation of “algorithmic pricing.” Prices are a key way through which we convey information in our society; what the bloodstream is to the human body, prices are to the economy. Not all laws that implicate prices are unwise, but any proposed regulation of the price system should be examined with the utmost scrutiny.
That is why I was surprised that so few libertarians found it worth their time to discuss AB 325, introduced by Assembly Majority Leader Cecilia Aguiar-Curry. The bill focuses on the notion of a “common pricing algorithm,” which is defined as:
… any methodology, including a computer, software, or other technology, used by two or more persons, that uses competitor data to recommend, align, stabilize, set, or otherwise influence a price or commercial term.
Commercial term, in turn, is defined as:
any of the following:
(A) Level of service.
(B) Availability.
(C) Output, including quantities of products produced or distributed or the amount or level of service provided.
There is no carveout for publicly available data in AB 325, so if you use your competitor’s prices to help set your own prices, you are covered by these definitions, so long as you and at least one other person (a co-worker or business partner, say) used “a computer, software, or other technology” to do it. If you own a business with two or more employees and you write some of your competitors’ prices down in a spreadsheet, you are covered.
The operative clause of AB 325 is a little confusing, but bear with me:
It shall be unlawful for a person to use or distribute a common pricing algorithm if the person coerces another person to set or adopt a recommended price or commercial term recommended by the common pricing algorithm for the same or similar products or services in the jurisdiction of this state.
Say that you run an independent bed and breakfast in California, and that you use a low-cost algorithmic tool that incorporates the nightly rates of the nearby chain hotels to set your own room rates. Have you “used” a common pricing algorithm? Has it “coerced” you into “adopting” its recommendation? Because AB 325 defines neither “use” nor “coerce” nor “set” nor “adopt,” it is entirely unclear whether the bill accidentally regulates effectively all market transactions.
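To make concrete just how little it takes to fall within these definitions, here is a minimal sketch, in Python, of the kind of low-cost tool this example imagines. The hotel names, rates, and discount are hypothetical, and this is my own illustration rather than any real product; but on a plain reading of AB 325, a script this trivial, used by a business with two or more people involved, appears to qualify as a “common pricing algorithm”:

```python
# A deliberately trivial "pricing algorithm" of the sort the bed-and-
# breakfast example imagines: average nearby hotels' publicly posted
# nightly rates and undercut them slightly. All names and numbers are
# hypothetical, for illustration only.

COMPETITOR_RATES = {  # publicly available nightly rates, in USD
    "Chain Hotel A": 189.00,
    "Chain Hotel B": 205.00,
    "Chain Hotel C": 172.00,
}

def recommend_room_rate(rates: dict[str, float], discount: float = 0.10) -> float:
    """Recommend a rate slightly below the average competitor rate."""
    average = sum(rates.values()) / len(rates)
    return round(average * (1 - discount), 2)

if __name__ == "__main__":
    # Arguably, this one call already "uses competitor data to recommend
    # ... a price" within the meaning of AB 325's definition.
    print(f"Recommended nightly rate: ${recommend_room_rate(COMPETITOR_RATES)}")
```

Whether adopting the printed recommendation means the algorithm has “coerced” you into “setting” a price is exactly the question the statute leaves open.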
On top of this, AB 325 imposes potential criminal penalties (up to three years of imprisonment) in addition to substantial civil penalties (up to $6 million “per violation,” a sixfold increase over the penalties in the original California antitrust statute this law amends).
Do I expect this law to ever be enforced evenly? Of course not. It reminds me of the New York statute used to prosecute Donald Trump for several hundred million dollars, or the federal statute the Trump Administration is using today to go after New York Attorney General Letitia James (who prosecuted Trump) for “mortgage fraud.”
These overbroad statutes are ultimately just weapons, since everyone is violating them all the time. Still, rarely have I seen an American law more hostile to our country’s economy and way of life.
AB 853 and Synthetic Content Regulation
Last year I wrote about AB 3211, a bill intended to mandate content provenance and watermarking standards. Ultimately, it was not enacted. But this year, large portions of AB 3211—including provisions affecting open-source AI developers and hosting platforms like Hugging Face—have been signed into law by Governor Newsom in the form of AB 853. As is usual for California law, AB 853 will have extraterritorial effect, applying, in practice, throughout the United States.
What does it do?
First, the reasonable-enough parts. The law requires “large online platforms” (think social media, but also many other web services, like Airbnb, Uber, etc.) to:
Provide a user interface to disclose the availability of system provenance data that reliably indicates that the content was generated or substantially altered by a GenAI system or captured by a capture device.
Of course there are unintended consequences to this. Do I really need a user interface in the Airbnb app that allows me to screen reviews for synthetic content? Would such an interface work? And most importantly, do I really need a law that mandates this?
Furthermore, the law imposes a new regulation on AI model hosting platforms like Hugging Face, primarily deputizing such platforms to enforce yet another California law (the California AI Transparency Act, or SB 942, signed last year). Specifically, AB 853 states that model hosting platforms “shall not knowingly make available a GenAI system that does not place disclosures pursuant to [the California AI Transparency Act].”
Hugging Face and others, therefore, must now ensure that every model uploaded to their platform complies with the California AI Transparency Act’s extensive disclosure requirements, including (quoting from SB 942):
to make available an artificial intelligence (AI) detection tool at no cost to the user that meets certain criteria, including that the AI detection tool is publicly accessible … an option to include a manifest disclosure in image, video, or audio content, or content that is any combination thereof, created or altered by the covered provider’s generative artificial intelligence (GenAI) system that, among other things, identifies content as AI-generated content and is clear, conspicuous, appropriate for the medium of the content, and understandable to a reasonable person…

… a latent disclosure in AI-generated image, video, audio content, or content that is any combination thereof, created by the covered provider’s GenAI system that, among other things, to the extent that it is technically feasible and reasonable conveys certain information, either directly or through a link to a permanent internet website, regarding the provenance of the content.
This applies to any model that has more than 1 million “users,” though neither AB 853 nor SB 942 provides any definition of “user,” so in the case of an open-source or open-weight model, we have very little idea how the law will be enforced. Most likely, it will just be another always-enforceable-but-rarely-enforced statutory creature in American legal life, constituting yet another government-mediated sledgehammer that can be applied to a great many businesses at any time.
But there is more: you may have noticed the phrase “capture device” at the end of the quote. That is because AB 853 also regulates all devices sold in California that contain cameras and microphones—specifically, “capture device” means:
a device that can record photographs, audio, or video content, including, but not limited to, video and still photography cameras, mobile phones with built-in cameras or microphones, and voice recorders.
Starting in 2028, firms that sell “capture devices” are required to:
(1) Provide a user with the option to include a latent disclosure in content captured by the capture device that conveys all of the following information:
(A) The name of the capture device manufacturer.
(B) The name and version number of the capture device that created or altered the content.
(C) The time and date of the content’s creation or alteration.
(2) Embed latent disclosures in content captured by the device by default.
(b) A capture device manufacturer shall comply with this section only to the extent technically feasible and compliant with widely adopted specifications adopted by an established standards-setting body.
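Mechanically, a “latent disclosure” is just metadata embedded in the content itself. As a rough illustration (not a compliance recipe), here is a minimal Python sketch that writes the three enumerated fields into a PNG text chunk using the Pillow library. The field names and device details are my own inventions; a real manufacturer would presumably follow an established standard such as C2PA, as subsection (b) contemplates:

```python
# A rough sketch of embedding a "latent disclosure" as image metadata.
# Field names and device details are hypothetical; a real manufacturer
# would follow a widely adopted specification (e.g., C2PA), per
# subsection (b), rather than ad hoc PNG text chunks.
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_latent_disclosure(src_path: str, dst_path: str) -> None:
    """Copy an image, attaching the three fields AB 853 enumerates."""
    image = Image.open(src_path)
    disclosure = PngInfo()
    disclosure.add_text("CaptureDeviceManufacturer", "ExampleCam Inc.")       # (A)
    disclosure.add_text("CaptureDeviceModel", "ExampleCam X1, firmware 2.3")  # (B)
    disclosure.add_text("CaptureTimestamp",
                        datetime.now(timezone.utc).isoformat())               # (C)
    image.save(dst_path, pnginfo=disclosure)

if __name__ == "__main__":
    embed_latent_disclosure("photo.png", "photo_with_disclosure.png")
```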
As a general matter, I support giving users of physical recording devices (smartphones, standalone cameras, home security devices, etc.) the option to apply watermarks to the content created by those devices. Ultimately, I suspect, we will find it more productive to watermark and otherwise label human-created content rather than machine-created material; the human-created outputs will, after all, be the scarce ones in the long term.
The problem with AB 853 is that the definition of “capture device” is so broad that it mandates these watermarking standards even for devices where they make little sense, and in some cases where they may be actively detrimental to user health and safety.
But what troubles me the most about the law is that it adds a new regulation on device makers of all sizes at just the moment when AI is creating novel opportunities for hardware startups. From consumer devices like OpenAI’s rumored pin-like product to newly useful household and industrial robots to much else, I worry we are adding compliance burdens and uncertainty for young firms at precisely the wrong time.
Conclusion
Other laws California passed this session strike me as more productive but still contain problematic provisions. For example, SB 243 regulates chatbot companions, in particular requiring them to periodically remind users that they are not human. I am not sure anyone was confused about this, but the concern the law addresses is fair enough.
Yet, as written, the law would apply to characters in video games. Imagine, for instance, a game whose plot involves traveling around with a non-playable companion. In principle, SB 243 would require game developers to write in dialogue that periodically reminds the human player that the companion is not a human. This is, of course, pointless, and one of many examples of the unintended consequences that arise when lawmakers project legal authority over a landscape they do not wholly comprehend.
Many of these bills contain subtle and obvious Constitutional flaws, as well. But I have long since come to accept the reality that legislators do not view it as their job to write Constitutional statutes; they leave that issue to the judges, whom they then attack for doing their jobs. There is at least a bright side: this phenomenon of legislators ignoring the Constitution in their statutory drafting has a tendency to create ample case law favorable to proponents of the First Amendment and other Constitutional rights.
In general, between this and the dozens of bills passed elsewhere, it would be accurate to say that AI is the most heavily regulated nascent, general-purpose consumer technology in modern history. It is probably already the case that we have blocked new entrants from competing in all kinds of markets, and we have no doubt quashed at least some good ideas in their cradles. Whether these self-imposed limitations are worth it—whether they have really served to make you feel “safer”—is a question I will leave for you to decide.
My only closing observation is that these bills got very little airtime in California’s AI policy debates, despite many being considerably more problematic and burdensome than the bill backed by the AI safety community, SB 53. One of these bills arguably makes software-enabled market transactions arbitrarily unlawful, and yet where were the techno-libertarians? They were busy fighting a battle with their perceived enemies: the AI safety community.
In service of fighting that battle, they forgot that the stewards of the status quo—the voracious and often downright stupid machine of American policymaking—were grinding along. I hope that, in the future, negative polarization does not blind us so thoroughly.