22 Comments
Steven Adler:

Very useful review of those bills, thank you!

For anyone interested, I analyzed the 1,000+ bills claim around the time of the moratorium debate and likewise concluded there were a few dozen that matter: https://open.substack.com/pub/stevenadler/p/mythbusting-the-supposed-1000-ai?r=4qacg

Shannon C.:

Perhaps I am out of the loop here, mind you. What would you consider good policies at the state level that could be intertwined with federal-level policies? Ones aimed at safety and at supporting the population, not just companies' pockets. Ones that don't overreach... I mean, AI is moving so fast that we should be a guardrail without crashing into its growth.

Shon Pan:

Supporting humanity should be a basic expectation; sadly, many of the e/acc people do not seem to even pass that.

the long warred:

No guardrail please unless you’re an engineer.

Thank you.

Shannon C.:

Okay, present your argument then...Hot air means nothing to this.

the long warred:

Hot air our exhaust, Karen. You’re in the way, you present the argument.

Or get run over.

We don’t stop for human roadblocks these days, we run them over.

The idiocy of asking for “guardrails” and missing the point about no guardrails unless you’re an engineer… perhaps your roads have guardrails installed by paralegals or barista wannabes. However, most roads and most guardrails are designed by engineers.

Once, most laws weren’t automatic but were passed only as needed and resorted to only when necessary.

Get out of the way.

Shannon C.:

Okay Karen... here I go: You are missing the point when you argue for zero guardrails, because that means you don't have to take responsibility for what happens when things go wrong. Allowing AI to grow without any guiding principles is not just about moving fast; it is about opening the door to serious misuse, from spreading massive misinformation, to turning unforeseen systems into weapons, to deleting the entire education system with a button.

I am not saying we need to drown innovation in red tape, but we do need a mindset that puts the common good first. That means being transparent with high-impact systems, rewarding responsible and highly beneficial uses, and creating AI that does not deliberately cause harm.

Think about Asimov’s three laws of robotics. They were not about stopping progress; they were about setting a baseline for safety and trust. The same idea applies here: avoid harm, aim for the common good, and at the least stay accountable. That is not slowing things down; it is preventing the kind of catastrophe that entrenches the ultra-rich and ultimately pushes the rest of society toward a hot flash point of frustration.

the long warred:

Karen C just a guy 619;

>>Let us begin at the end of your world; it’s over.<<

The world where avoiding harm is paramount.

That’s not even possible: even to eat is to harm, at least a plant, or to slaughter an animal.

Great harm was done to many by “harm reduction” for too long.

We can take the Rust Belt for one example… or metaphors that become absurd law, only to be struck down as soon as they are written.

You don’t understand the new machines called AI, so you wish for laws to calm your unease and cover for your laziness?

Laziness in not learning what it is you’d regulate.

Because it could be used for harm? Everything can be used for harm.

And is and will be.

Always.

Or good.

Indeed harm is often done for good.

Now the smart lawyers are climbing down as was clear from this article. The Laws grossly overextended… and now fail. Discredited.

Law is the last resort before naked force.

… as the laws are now utterly discredited, we are building new forges and bellows, to forge weapons. The new weapons will be dangerous (that’s their point) and we need AI as forge and bellows.

We’re in a fight for our survival and need all the tools and resources we can get, no guardrails or roadblocks on this road.

Don’t play in traffic either… because we’re not stopping.

Shannon C.:

You’re calling me lazy for wanting some baselines, but that just shows you don’t get the risks. Neither of us really knows what’s going on inside these AI systems because they’re black boxes. That’s exactly why thinking about responsibility matters. Wanting some guiding principles isn’t fear; it’s common sense. If you think ignoring the dangers is bravery, you’re either blind to what could happen or just clueless.

Handle:

Great write-up, thanks!

Shreeharsh Kelkar:

Great post. Very informative and I learned a lot about the regulatory landscape.

But do we really need hyperbole like this? "In this sense these laws resemble little more than the protection schemes of mafiosi and other organized criminals." Hardly. This is just how democratic politics works. Interest groups maneuver to protect their interests. The AI companies are interest groups just as much as the teachers' unions and the therapists. Ultimately, we want to make the best value trade-off.

the long warred:

I realize it’s not obvious because it’s not happening yet, but we 🇺🇸 are in the fight of our lives, and we’re going to win or lose it now, in the preparation. We 🇺🇸 need AI to build and operate machines for the nation.

Example.

https://shield.ai/hivemind-solutions/

the long warred:

Good news for us 🇺🇸

Not so much for Europe

Keller Scholl:

Good to have you back!

"If they cannot afford a therapist, is it better that they talk to a low-cost chatbot, or no one at all?" I believe that state legislatures have conclusively answered this in the case of florists, undertakers, barbers, and many other industries they have converted into guilds. I fear that mental health is merely a particularly prominent case, largely because of incidents that have little to do with an intention to use a chatbot for mental health care (https://www.transformernews.ai/p/ai-psychosis-stories-roundup was a roundup I found decent), and we'll see a desperate attempt to protect every job with an effective enough lobbyist rather than sensible or thoughtful policymaking.

Nomads Vagabonds:

Thanks Dean. Glad to see you writing about State policy again.

Séb Krier:

He's back 😎

Thomas Woodside:

Glad to have you back to talking about the states!

Gaurav Yadav:

No mention of SB-813? I have assumed as of late that the bill got axed, but I have no way to back this up at the moment.

'I worry that audit requirements will end up making these safety and security protocols less substantive over time, given how auditing outside of hard-ground-truth fields like accounting tends to converge to a simplistic box-checking exercise.'

Have you read: https://arxiv.org/pdf/2505.01643? Seems relevant to your point.
