Discussion about this post

Eric Dane Walker:

Two assumptions you're making:

(1) An AI-structured society is the future whether we want it to be or not.

(2) If our institutions are currently unhealthy, it's because they haven't been AI-ed yet, and AI-ing them will make them healthy.

1 is not obviously true. "X is inevitable" is, historically, something those who stand to profit from X often say in order to undemocratically make X an inextricable part of life, without any discussion or airing of views whose lives it will change. X is always subject to choice and deliberation. The problem is that only those who will profit from X are actually making the choices.

2 is not obviously true. There are good arguments out there that AI-governed institutions are just an intensified version of the opaque, algorithmic, metric-driven, and market-obsequious governance structures of pre-AI institutions (yes, including scientific ones). New boss, same as the old boss, but even less accountable and more embedded in the market.

None of this is to say your assumptions are NOT true. Just pointing out their non-obviousness.

Chris L:

This post updated me somewhat towards thinking that an arrangement might actually make sense.

*However*, the proposal you've floated doesn't seem anywhere close to balanced.

As Anton points out, what you've proposed essentially just offers the AIS movement SB53 at a federal level. That's not really worth much; the frontier labs aren't going to give up on California any time soon.

In exchange, accelerationists would receive protection from an oncoming tsunami of state legislation *and* forgiveness for years of unhinged and bad-faith attacks on the AIS movement (which seem likely to resume even if there were some kind of arrangement).

A more balanced deal would likely include much more substantial concessions, such as a properly funded US AI Security Institute (on the level of the UK one). Actually, it doesn't even make much sense to think of this as a substantial concession, given a) how acceleratory AIS work has unfortunately been overall, b) that it is in everyone's interest to have a better understanding of model risk, and c) that safe models are actually more useful models.
