Two assumptions you're making:
(1) An AI-structured society is the future whether we want it to be or not.
(2) If our institutions are currently unhealthy, it's because they haven't been AI-ed yet, and AI-ing them will make them healthy.
1 is not obviously true. "X is inevitable" is, historically, something those who stand to profit from X often say in order to undemocratically make X an inextricable part of life, without any discussion or airing of views whose lives it will change. X is always subject to choice and deliberation. The problem is that only those who will profit from X are actually making the choices.
2 is not obviously true. There are good arguments out there that AI-governed institutions are just an intensified version of the opaque, algorithmic, metric-driven, and market-obsequious governance structures of pre-AI institutions (yes, including scientific ones). New boss, same as the old boss, but even less accountable and more embedded in the market.
None of this is to say your assumptions are NOT true. Just pointing out their non-obviousness.
This post updated me somewhat towards thinking that an arrangement might actually make sense.
*However*, the proposal you've floated doesn't seem anywhere close to balanced.
As Anton points out, what you've proposed essentially just offers the AIS movement SB 53 at the federal level. That's not really worth much; the frontier labs aren't going to give up on California any time soon.
In exchange, accelerationists would receive protection from an oncoming tsunami of state legislation *and* forgiveness for years of unhinged and bad-faith attacks on the AIS movement (which seem likely to resume even if there were some kind of arrangement).
A more balanced deal would likely include much more substantial concessions, such as a properly funded US AI Security Institute (on the level of the UK one). Actually, it doesn't even really make much sense to think of this as a substantial concession given (a) how acceleratory AIS work has been overall (unfortunately), (b) that it is in everyone's interest to have a better understanding of model risk, and (c) that safe models are actually more useful models.
Would happily fund CAISI—but wouldn’t merge approps into my particular proposal. IMO basically nobody on the /acc side would object to funding CAISI if that were the issue it came down to.
Apt: “As you consider whether you want to join the march, look around you. Have the institutions of the present day served you well? Do they seem healthy? Do they seem repairable? Are you happy about the status quo?”
I think you put your finger on the central tension - institutions can’t be static and must be able to shift and change. Otherwise we need new institutions.
If the status quo (or more accurately, participants in the status quo) is set on ossifying itself atop the rest of society, the rest of society will want a word. There is much to be optimistic about if we can avoid the false dichotomy of safety and progress as two things in tension.
Much of getting past that dichotomy is having a clear-eyed view of the flaws and strengths of new technologies in relation to the things our society values - autonomy, liberty, self-determination.
Great piece!
Hm, mostly absent from the piece is consideration of the risk that institutional change or replacement is not guaranteed to be a net positive for the average person, especially if it is not managed well and thoughtfully, with substantial regulation. (I’m setting aside X-risks; I’m talking about more “normal” risks. I’m also setting aside the possibility that AI plateaus or disappoints in some long-term way.)
By analogy, the Industrial Revolution was a net benefit for the average person in the long run, but many have made the case that it was a net negative for the average person for many generations, before regulations and norms and unions tamed industrial capitalism. Consider the harms: skilled, independent cottage craft industries replaced with rote and dangerous factory work; unimaginable levels of pollution in major cities; abusive bosses; very long hours; little to no safety net; few or no benefits; few or no consumer protections; and violent suppression of unionization.
I fear we could see something similar happen with AI. Imagine a world where capital owners and a few elite AI and business workers become incredibly rich, while the rest of white-collar work is automated away and 90% of us toil in low-status, low-paying, boring service or manual-labor jobs serving this small elite, micromanaged and surveilled by AI agent “managers”. A world where people who would have been lab techs become waiters and professors become butlers. It’s grim stuff. We’re already seeing the very beginnings of this with the decline in jobs in the arts and writing.
An excellent piece. I'm envious of your travels.
Lovely read. I particularly like the description of science as an institution that requires change, and find the imagined transformation compelling. I do think that science is a bit different from other institutions, though, and it's hard to draw parallels to more social ones. When it comes to more social institutions and visions of society, people tend to have differing visions (sometimes within camps themselves), and each camp has an imagined perception of the others' vision, and sometimes that imagined vision is scary. Some people view the Accelerationists and the techno-optimists as supporters of a world without human emotion or without a place for humans. I don't think that's what they espouse, but I think that's what many people think of them. That's partially why I think many people are resistant to change, particularly with respect to a post-AI society.
At the macro level, I agree with your argument and direction, but I am a bit hesitant about stretching the big-tent approach too far, as I've seen what happens when groups focus on the 80% they agree on for too long. The 20% we disagree on matters greatly to how the vision is translated into reality, and I suspect that the 20% is what the two sides of the AI debate are likely to fight about. I'm not sure there is any other way, but I could be wrong.
Isn't making the deal explicit via a limited federal frontier AI safety bill hard and unprecedented? Would the preemption also affect laws already on the books, e.g. SB 53?
I think change is upon us, whether we befriend it or not. I'm concerned I might not be up for the challenge, but I'm joining you on the march because there's only one way to find out.