Discussion about this post

Don Beyer

"I am skeptical of “existential risk.” I am opposed to pauses and bans on AI development. I am uncertain about the labor-market impact of AI in the future, but I am skeptical of the notion that AI will destroy human work . . ."

These are the two themes that arise most frequently in my conversations with people across the AI sophistication spectrum.

I would appreciate hearing the reasons for your skepticism of existential risk, especially given the safety concerns expressed over many years by Hassabis, Hinton, Suleyman, et al. (not Musk!).

I do not want agentic AI to destroy human work -- or, failing that, I would at least like it to create as many new jobs as it replaces -- but why are you confident that it will not?

Thanks!

Mind the Gap

Strongly aligned with your framing here, and I would push it one step further. Even below the catastrophic-risk threshold you delineate, the market itself imposes an organic discipline. Frontier labs that are both safe and credibly perceived as safe should command a lower cost of capital and more favorable insurance terms; I have long argued that the resulting pricing pressure drives stronger governance and greater transparency more efficiently than any statute could mandate.

The unannounced WH EO on pre-release review worries me along exactly the lines you draw. Requiring review before any catastrophic incident has occurred risks being precisely the preemptive overreach you warn against: it regulates in advance of demonstrated harm and invites exactly the national-security capture you flag. The hope is that it lands as a light-touch check-in rather than something that hardens into full regulatory control under future administrations.

