"I am skeptical of “existential risk.” I am opposed to pauses and bans on AI development. I am uncertain about the labor-market impact of AI in the future, but I am skeptical of the notion that AI will destroy human work . . ."
These are the two themes that arise most frequently in my conversations with people across the AI sophistication spectrum.
I would appreciate the reasons for your skepticism of existential risk, especially given the express safety concerns from Hassabis, Hinton, Suleyman, et al (not Musk!) over many years.
I do not want agentic AI to destroy human work -- or at least please create as many new jobs as it replaces -- but why are you confident that it will not?
On existential risk, there are a few reasons for my skepticism. The first is that we have made significant progress on alignment--more than many alignment and existential risk researchers would have supposed a decade ago. We have not solved alignment by any means (I doubt it is the kind of problem that gets "solved," per se), but the progress is reassuring.
Second, I believe that knowledge and information are distributed across the world in extremely uneven ways. Even today, the biggest blocker on my use of AI agents for work is not the model capabilities but the simple availability of data. Nothing about this will disappear in the face of AI; the world will remain illegible, and there will be extremely strong market incentives against creating all-knowing singleton AIs. If I run a semiconductor fab and use agents, I will want extremely firm guarantees from the agent developer that my data will not be shared with competing fabs. This means that agent contexts will be controlled and supervised by both developers and deployers, and this, in turn, means that "what the agent knows" can be bounded through deliberate human effort.
There is one area of plausible existential risk where I fully admit humans are vulnerable, and this is in biology. It is plausible to imagine an AI designing some sort of bioweapon that can wipe out humanity. But, happily, this dovetails nicely with strong biosecurity protections, which I support anyway due to my concerns with catastrophic risks.
Finally, I am simply unpersuaded that the returns to intelligence for solving many of the practical problems we have in the world are that high. Or rather, I think they are VERY high, but not infinitely high, as the "doomers" often assume.
On labor, I have much higher uncertainty. But I don't think it is prudent to design a solution to a problem we aren't sure exists, and a problem that, even if it does exist, we don't understand.
Strongly aligned with your framing here, and would push it one step further. Even below the catastrophic-risk threshold you delineate, the market itself imposes an organic discipline. Frontier labs that are both safe and credibly perceived as safe should command lower cost of capital and more favorable insurance terms; I have long argued that the resulting pricing pressure does work toward stronger governance and greater transparency that no statute could mandate as efficiently.
The unannounced WH EO on pre-release review worries me along exactly the lines you draw. A review regime before a catastrophic incident even occurs would risk being precisely the preemptive overreach you warn against; regulating in advance of harm it presumes, and inviting exactly the national-security capture you flag. The hope is that it will land as a light-touch check-in rather than something that can harden into full regulatory control under future administrations.
Lots to like here but I want to push back on this specific bit: "I am skeptical of the notion that AI will destroy human work, and strongly opposed to regulations or taxes designed to remedy the “problem” of AI and labor, since today, there is no problem we can perceive."
I also wouldn't endorse any specific remedy for labor market impact at this time, but it's not too early to plan ahead. If national security concerns are likely to generate overreaction and bad policy from the Right, labor market disruption is likely to generate overreaction and bad policy from the Left -- and if labor market disruption hits in the next 2-5 years, as many fear, there is a substantial chance that the Left will be in power. I don't expect AI to lead to significant job loss in the near term, but it's possible, and it's worth sketching out some of the scenarios and best policies in each one. The fog of war will be thick, as AI layoffs may be conflated with a weak labor market generally. Some firms will brag about AI layoffs in an attempt to goose their stock price; meanwhile the labs may deny that it's happening in order to protect their reputation with the public. So it makes sense to game out the policies now, and think about which ones work best in which states of the world, in case we ever do end up in one of those states.
This helped me sort through my thinking on AI regulation and I especially loved your overview of your political philosophy. I often say that libertarianism would be the best system if it weren't for all the humans involved. It's inevitable that government regulation will be pernicious just as it's inevitable that it will happen. Could you say the same thing about at least the New Deal? I found it interesting you included that and the Great Society in the piece.
I want to welcome you, Dean W. Ball, to the fray. I also congratulate you on your appointment in the Administration. It is becoming clear that AI and its fans must find a middle ground between AI evangelists and AI doomsayers. The Trump White House can be an effective proponent and still lead the country into control policies that work. I hope you and they are successful.
An excellent essay Dean. Unlike many observers, you have a special way of getting to the core of AI policy problems without the customary fluff. But I have to pause at the idea that any government (or quasi-government) body can effectively mitigate the catastrophic risks you describe. Given the speed and neural nature of AI technology, I can't imagine how governments can do anything to substantially mitigate its risk - at least for now. We are simply not good at policymaking for probabilistic future events like climate change, nuclear war or that odd asteroid heading our way. We are good at fighting past wars. AI risks fit into the punctuated equilibrium theory of policy making which suggests that it takes a focusing event – a catastrophe – to force governments into a policy response. Quasi-government organizations like you suggest, along the model of federal labs or science boards with carefully monitored revolving doors, might be helpful in developing expertise for surveilling risks, i.e., serving as “canaries in the coal mine,” but don’t expect much in terms of regulations or policies until disaster makes it an imperative.
"I am skeptical of “existential risk.” I am opposed to pauses and bans on AI development. I am uncertain about the labor-market impact of AI in the future, but I am skeptical of the notion that AI will destroy human work . . ."
These are the two themes that arise most frequently in my conversations with people across the AI sophistication spectrum.
I would appreciate the reasons for your skepticism of existential risk, especially given the safety concerns expressed by Hassabis, Hinton, Suleyman, et al. (not Musk!) over many years.
I do not want agentic AI to destroy human work -- or, at the very least, I hope it creates as many new jobs as it replaces -- but why are you confident that it will not?
Thanks!
Congressman Beyer,
Great questions!
On existential risk, there are a few reasons for my skepticism. The first is that we have made significant progress on alignment--more than many alignment and existential risk researchers would have supposed a decade ago. We have not solved alignment by any means (I doubt it is the kind of problem that gets "solved," per se), but the progress is reassuring.
Second, I believe that knowledge and information are distributed across the world in extremely uneven ways. Even today, the biggest blocker on my use of AI agents for work is not the model capabilities but the simple availability of data. Nothing about this will disappear in the face of AI; the world will remain illegible, and there will be extremely strong market incentives against creating all-knowing singleton AIs. If I run a semiconductor fab and use agents, I will want extremely firm guarantees from the agent developer that my data will not be shared with competing fabs. This means that agent contexts will be controlled and supervised by both developers and deployers, and this, in turn, means that "what the agent knows" can be bounded through deliberate human effort.
There is one area of plausible existential risk where I fully admit humans are vulnerable, and that is biology. It is plausible to imagine an AI designing some sort of bioweapon that can wipe out humanity. But, happily, this dovetails nicely with strong biosecurity protections, which I support anyway due to my concerns about catastrophic risks.
Finally, I am simply unpersuaded that the returns to intelligence for solving many of the practical problems we have in the world are that high. Or rather, I think they are VERY high, but not infinitely high, as the "doomers" often assume.
On labor, I have much higher uncertainty. But I don't think it is prudent to design a solution to a problem we aren't sure exists, and a problem that, even if it does exist, we don't understand.
Strongly aligned with your framing here, and I would push it one step further. Even below the catastrophic-risk threshold you delineate, the market itself imposes an organic discipline. Frontier labs that are both safe and credibly perceived as safe should command a lower cost of capital and more favorable insurance terms; I have long argued that the resulting pricing pressure drives stronger governance and greater transparency more efficiently than any statute could mandate.
The unannounced WH EO on pre-release review worries me along exactly the lines you draw. A review regime imposed before any catastrophic incident has occurred would risk being precisely the preemptive overreach you warn against: it regulates in advance of the harm it presumes, and it invites exactly the national-security capture you flag. The hope is that it lands as a light-touch check-in rather than something that hardens into full regulatory control under future administrations.
My head hurts from nodding vigorously throughout this post. Well done.
Lots to like here but I want to push back on this specific bit: "I am skeptical of the notion that AI will destroy human work, and strongly opposed to regulations or taxes designed to remedy the “problem” of AI and labor, since today, there is no problem we can perceive."
I also wouldn't endorse any specific remedy for labor market impact at this time, but it's not too early to plan ahead. If national security concerns are likely to generate overreaction and bad policy from the Right, labor market disruption is likely to generate overreaction and bad policy from the Left -- and if labor market disruption hits in the next 2-5 years, as many fear, there is a substantial chance that the Left will be in power. I don't expect AI to lead to significant job loss in the near term, but it's possible, and it's worth sketching out some of the scenarios and best policies in each one. The fog of war will be thick, as AI layoffs may be conflated with a weak labor market generally. Some firms will brag about AI layoffs in an attempt to goose their stock price; meanwhile the labs may deny that it's happening in order to protect their reputation with the public. So it makes sense to game out the policies now, and think about which ones work best in which states of the world, in case we ever do end up in one of those states.
This helped me sort through my thinking on AI regulation, and I especially loved your overview of your political philosophy. I often say that libertarianism would be the best system if it weren't for all the humans involved. It's inevitable that government regulation will be pernicious, just as it's inevitable that it will happen. Could you say the same thing about at least the New Deal? I found it interesting that you included it and the Great Society in the piece.
I want to welcome you, Dean W. Ball, to the fray. I also congratulate you on your appointment in the Administration. It is becoming clear that AI and its fans must find a middle ground between AI evangelists and AI doomsayers. The Trump White House can be an effective proponent of AI and still lead the country toward control policies that work. I hope you and they are successful.
An excellent essay, Dean. Unlike many observers, you have a special way of getting to the core of AI policy problems without the customary fluff. But I have to pause at the idea that any government (or quasi-government) body can effectively mitigate the catastrophic risks you describe. Given the speed and neural nature of AI technology, I can't imagine how governments can do anything to substantially mitigate its risks -- at least for now. We are simply not good at policymaking for probabilistic future events like climate change, nuclear war, or that odd asteroid heading our way. We are good at fighting past wars. AI risks fit the punctuated equilibrium theory of policymaking, which suggests that it takes a focusing event -- a catastrophe -- to force governments into a policy response. Quasi-government organizations like you suggest, along the model of federal labs or science boards with carefully monitored revolving doors, might be helpful in developing expertise for surveilling risks, i.e., serving as "canaries in the coal mine," but don't expect much in terms of regulations or policies until disaster makes it an imperative.