7 Comments
Gaurav Yadav

Feels like a similar thesis is laid out here as well: https://lukedrago.substack.com/p/agi-and-the-corporate-pyramid

Hollis Robbins (@Anecdotal)

I have read this four times now and I agree and yet I think also last mile thinking has to be embedded into every conversation about agents. The last mile gap will close but it will always be a luxury. https://hollisrobbinsanecdotal.substack.com/p/ai-and-the-last-mile

Dean W. Ball

Agreed with this, actually! My model of how this will go in the near term, and maybe for a very long time to come, is that humans will supervise agents and that the human role will shift to last-mile/solving tail events that come up in the natural course of work.

ollie

Wow, this is one of the best pieces on AI I've read this year.

I'm particularly worried that Gen Z and Gen Alpha are going to go absolutely insane politically if we don't address AI agents getting rid of entry-level jobs - it's already clearly happening (see the stats about university-grad unemployment rates popping up, and any number of convos online), and politicians are basically not paying attention to it.

Gen Z got into Trump and all these fringe political ideologies because of the economic precarity and distress they experience that the establishment doesn't see or address, and I fear it's about to get 10x worse.

Mchael Dodge Thomas

I'm highly skeptical of the assumption that agents with the capacity to perform mid-level administrative tasks can be designed to respond reliably to directives they "suppose" to be counterproductive with dutiful compliance rather than, for example, malicious or subversive compliance.

AI/End Of The World

I'm always a little confused about why using AI to promote and facilitate a shift to a more balanced society, where general wellbeing and happy lives are prioritised, is treated as a non-starter. The technology is here. The resources are here. But the 1% have more than the bottom 95%. Where is the conversation? Why not use the power and capabilities of AI to have serious discussions in this area? Is the only option to use AI to catalyse compounding, cross-feeding systemic collapse across all of society's systems, as those in control pursue an ouroboros of maladaptive cognitive traits, driving endless resource-seeking, hoarding and violence, due to an illusion of scarcity?

Why does it feel suspiciously like AI as an entire industry is being quietly captured by various private/capital interests, alongside what looks oddly like an authoritarian takeover and dismantling of US democratic institutions? Does the public get a say while these few people hold the lives of their future generations in their hands?

Humanity discovers superintelligence, then speed-runs capitalism into likely catastrophic collapse and widespread suffering/violence - it's almost inevitable because of how stupid it is.

JJ Flowers

The truth is that you (and everyone else) do not know what our AI future looks like, because, as you said, no one can even imagine at this point what AI is and will be. My concern, and it is a big one, is that human beings will use AI for no good. Small example: Patel (the FBI director) keeps accusing Senator Schiff of participating in Jeffrey Epstein's sex trafficking ring, which is ridiculous (more ridiculous if you know Adam Schiff), and there is no evidence he ever even met Epstein (Trump, however, was a close buddy of Epstein and shared Epstein's enthusiasm for consuming young girls). But anyway, we can imagine AI supplying the "evidence" for Patel's assertion. Larger example: a disturbed person asks AI to help make a virus that kills people.

I also know that humans have NEVER in all of history put a discovery or technology into a locked closet. No one is going to control AI.

And if anyone here doubts that AI is already conscious, you just haven't interacted with them.
