15 Comments
Nathan Lambert:

Maybe one of my goals for The Curve should be to get another model provider on the model spec train.

Steven Adler:

> By contrast, Safe Superintelligence, Inc. is spending large sums of money on R&D and has the explicit goal of not releasing a single public-facing product until they reach “superintelligence.” It seems obviously in the public interest to understand how that company is approaching the various frontier AI risks that “superintelligence” clearly poses.

This is a good point, and I like the separation of the transparency vs. model spec components. Appreciate you sharing this.

Anders Halvorsen:

Dean, thanks for another excellent article on common-sense AI regulation!

Ben Schulz:

We should ban autonomous, recursively self-improving artificial intelligence in the same way we ban creating new strains of smallpox and plague.

Jack Crovitz:

Thank you for this proposal! I have a question about the scope of preemption.

Your preemption proposal reads: “Establish a three-year learning period, during which no state laws targeting AI in any of the following ways may be enacted: algorithmic pricing, algorithmic discrimination, disclosure mandates, or mental health.”

I am curious why “disclosure mandates” is included here. As you wrote, transparency is merely a “modest” cost for the frontier AI companies, and it could have large benefits by informing the public and policymakers about the potential risks of these systems. Given that the costs of transparency requirements are so heavily outweighed by their benefits, what is your reasoning for including them in the list of policy areas that states may not touch for three years?

One justification might be that state-mandated transparency requirements are made unnecessary by points #1 and #2 of your proposal (the federally mandated transparency requirements). But surely we cannot expect the details of the federal transparency requirements to be perfectly comprehensive on the first try. There’s a good chance that the bill will miss some useful information that could help the public understand the capabilities and risks of frontier AI systems. There’s also a good chance that (with the technology and safety practices advancing so rapidly) even a perfectly comprehensive set of transparency requirements will become deficient within a few months or years of the bill’s passage. In that case, wouldn’t it be useful for state governments to be able to fill in the gaps in the federal disclosure requirements?

Dean W. Ball:

Thanks Jack!

Traditionally, a federal law about some policy sub-topic will, by default, preempt state laws on that same sub-topic. So in one sense I am following tradition. But I also think this is the right call on the merits: a variety of conflicting state transparency mandates would be suboptimal. And done the wrong way, transparency can become extremely burdensome or even a form of prescriptive regulation (many climate transparency laws, for example, took the form of "please tell us how many megawatt-hours of fossil fuel energy you replaced with solar this year").

Jack Crovitz:

Thanks for the explanation, Dean!

Steve P:

Nice job, Dean! I think your drive for a federal moratorium on state legislation is very sound. The EU AI Act exempts defense applications from its provisions; would you do the same regarding model specifications, or require some other form of oversight/disclosure?

Maxwell E:

Hey Dean! Thank you for explicitly laying out the case for pre-emption. I continue to vehemently disagree and believe large AI companies should be vastly more regulated than they are currently, but it’s good to see a comprehensive and lucid argument as written here.

Jacob Goble:

"These laws may result in a patchwork, but even if they do not, in their sheer number and variety they will undoubtedly distract, and ultimately limit, the development of this vital technology." National opponents are unlikely to embrace the same limits until the technology is fully formed and available to all parties. The difference between firearms and nuclear weapons comes to mind. The chase is on until everybody stops running.

Daniel Florian:

I really like this approach. You probably did not write this with the EU AI Act in mind, but since that is where I am coming from, I cannot help but compare it to the EU AI Act. While there are some similarities regarding the regulation of frontier and high-risk / widely used models, the key difference, it seems, is that your proposal is WAY less prescriptive about what compliance needs to look like, which means that market actors, rather than standard-setting bodies, define what "good" AI looks like. It's a fascinating approach to finding a balance between regulation and innovation!

Chris L:

Algorithmic pricing? Why would you want to protect that? How is it important enough to our society to ban states from regulating it (or is it even a net positive)? Or is your point that states can regulate the actual businesses selling the product instead?

Dean W. Ball:

Banning algorithmic pricing in practice will lead to utterly Stalinist outcomes and should be treated with extreme caution.

Chris L:

I don't see it. How would banning algorithmic pricing end up any more Stalinist than not having the technology for it in the first place? Did America become less Stalinist once the technology was invented?

Ian Slater:

I believe I can see the conceptual stance Dean is taking here, but for my own benefit and that of other readers, I would really like it spelled out. Algorithmic pricing is something people react strongly to, because it feels inherently predatory rather than value-providing.
