Discussion about this post

Jason Crawford

Great post. I will help amplify this. Very glad you are doing this work.

> AI policy has now firmly entered its ‘science fiction’ era, where I suspect it will remain for many years to come.

I suspect it will remain there permanently.

Daniel Kokotajlo

Good post; I broadly agree. I want to clarify something about the intelligence explosion for the benefit of readers (I think this won't be news to you).

You say:

"The most bullish case is that it will result in an intelligence explosion, with new research paradigms (such as the much-discussed “continual learning”) suddenly being solved, a rapid rise in reliability on long-horizon tasks, and a Cambrian explosion of model form factors, all scaling together rapidly to what we might credibly describe as “superintelligence” within a few months to at most a couple of years from when automated AI research begins happening in earnest."

It's important to emphasize "when automated AI research begins happening in earnest"; otherwise people might think you mean "now." Speaking as a believer in superintelligence and the intelligence explosion, I am NOT claiming that it's going to happen now.

My view, and the view of my colleagues at the AI Futures Project, is that the overall R&D speedup from today's AI systems is modest (e.g. <50% overall), because we are still firmly in the "centaur" era, where AIs can do some parts of the research but not all of it, not even close. We predict that, roughly around the time of *full* automation of AI R&D (which is probably still some years away, though it could admittedly happen later this year for all we know), the pace of AI R&D will speed up dramatically and there will be an intelligence explosion. If you want to know more about our views, they can be found here: https://www.aifuturesmodel.com/
