Good post, I broadly agree. I want to clarify something about the intelligence explosion, for the benefit of readers (I think this won't be news to you).
You say:
"The most bullish case is that it will result in an intelligence explosion, with new research paradigms (such as the much-discussed “continual learning”) suddenly being solved, a rapid rise in reliability on long-horizon tasks, and a Cambrian explosion of model form factors, all scaling together rapidly to what we might credibly describe as “superintelligence” within a few months to at most a couple of years from when automated AI research begins happening in earnest."
It's important to emphasize 'when automated AI research begins happening in earnest'; otherwise people might think you mean 'now.' Speaking as a believer in superintelligence and the intelligence explosion, I am NOT claiming that it's going to happen now.
My view, and the view of my colleagues at the AI Futures Project, is that the overall R&D speedup from today's AI systems is modest (e.g. <50% overall) because we are still firmly in the 'centaur' era, where AIs can do some parts of the research but not all of it, not even close. We predict that, roughly around the time of *full* automation of AI R&D (which is probably still some years away, though admittedly could happen later this year for all we know), the pace of AI R&D will speed up dramatically and there will be an intelligence explosion. If you want to know more about our views, they can be found here: https://www.aifuturesmodel.com/
As someone working in AI and robotics, here's something I've been sitting with for a while: these models are starting to feel less like tools we build and more like powerful natural phenomena we live alongside. Think monsoons or hurricanes. We understand the mechanics, but we still can't perfectly predict their path or control their force. Yet we've gotten pretty good at harnessing what they offer and building infrastructure to survive what they destroy. Nobody asks "should we allow hurricanes?" We ask "how do we build better forecasting and stronger levees?"
If something like AGI or even ASI does show up, that might just be the realistic version of coexistence: not mastery, not submission, but the kind of adaptive relationship humans have always had with forces bigger than ourselves.
Great post. I will help amplify this. Very glad you are doing this work.
> AI policy has now firmly entered its ‘science fiction’ era, where I suspect it will remain for many years to come.
I suspect it will remain there permanently.
Really sharp framing across both parts.