6 Comments
Steven Adler

> By contrast, Safe Superintelligence, Inc. is spending large sums of money on R&D and has the explicit goal of not releasing a single public-facing product until they reach “superintelligence.” It seems obviously in the public interest to understand how that company is approaching the various frontier AI risks that “superintelligence” clearly poses.

This is a good point, and I like the separation of the transparency vs. model spec components. Appreciate you sharing this.

Nathan Lambert

Maybe one of my goals for The Curve should be to get another model provider on the model spec train.

Ben Schulz

We should ban autonomous, recursively self-improving artificial intelligence the same way we ban creating new strains of smallpox and plague.

Jack Crovitz

Thank you for this proposal! I have a question about the scope of preemption.

Your preemption proposal reads: “Establish a three-year learning period, during which no state laws targeting AI in any of the following ways may be enacted: algorithmic pricing, algorithmic discrimination, disclosure mandates, or mental health.”

I am curious why “disclosure mandates” is included here. As you wrote, transparency is merely a “modest” cost for the frontier AI companies, and might have massive potential benefits by informing the public and policymakers about the potential risks of these systems. Given that the costs of transparency requirements are so massively outweighed by their potential benefits, what is your reasoning behind including them in the list of policy areas that states may not touch for three years?

One justification might be that state-mandated transparency requirements are unnecessary because of points #1 and #2 of your proposal (the federally mandated transparency requirements). But surely we cannot expect the details of the federal transparency requirements to be perfectly comprehensive on the first try. There's a good chance that the bill will miss some useful information that could help the public understand the capabilities and risks of frontier AI systems. There's also a good chance that (with the technology and safety practices advancing so rapidly) even a perfectly comprehensive set of transparency requirements will become deficient within a few months or years of the bill's passage. In that case, wouldn't it be useful for state governments to be able to fill in the gaps in the federal disclosure requirements?

Dean W. Ball

Thanks Jack!

Traditionally, a federal law about some policy sub-topic will, by default, preempt state laws on that same sub-topic. So in one sense I am following tradition. But I also think this is the right call on the merits: a variety of conflicting state transparency mandates would be suboptimal. And done the wrong way, transparency can become extremely burdensome or even a form of prescriptive regulation (many climate transparency laws, for example, took the form of "please tell us how many megawatts of fossil fuel energy you replaced with solar this year").

Steve P

Nice job, Dean; I think your push for a federal moratorium on state legislation is very sound! The EU AI Act exempts defense applications from its provisions. Would you do the same regarding model specifications, or would you require some other form of oversight/disclosure?
