Discussion about this post

Rob L'Heureux

Great analysis. The impact on discriminatory intent makes sense to me, though I'm presuming the law draws a clear enough line around protected classes. It gets especially confusing if you think about using AI to optimize a service business oriented toward a protected class.

Regardless, if the AI is that powerful, could the concerns be addressed "in situ" by having the AI monitor impact across protected classes and propose remedies, maybe annually, that better meet the intent of an impact assessment? There would have to be a presumption of goodwill for anyone doing that. It doesn't seem like this needs to be a capability of the foundation model; it could instead be a service bundled in at the application layer for businesses.
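For illustration only: one thing such a bundled service might run is a periodic adverse-impact check along the lines of the EEOC "four-fifths" rule of thumb. Nothing in this sketch comes from the post itself; the group labels, logging format, and threshold are all illustrative assumptions.

```python
# A minimal sketch of an application-layer adverse-impact check.
# Assumes the application can log decisions tagged by group; the
# four-fifths threshold is the standard EEOC heuristic, used here
# as an example, not as legal guidance.
from collections import defaultdict

def adverse_impact_report(outcomes, threshold=0.8):
    """outcomes: iterable of (group, selected: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    best = max(rates.values())  # assumes at least one group was logged
    # Flag any group whose selection rate falls below 80% of the best rate.
    return {g: {"rate": r, "flagged": r < threshold * best}
            for g, r in rates.items()}

# Example: an annual review over logged decisions.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
print(adverse_impact_report(log))
```

In this toy example the report flags group B, whose selection rate (1/3) falls below 80% of group A's (2/3); the "propose remedies" step would then be built on top of reports like this one.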

Aaron Scher

Good essay! The dentist example is interesting because when I look at the situation you described, I'm like "that's totally crazy to be deploying this system in a way that substantially changes your business without understanding how the system works, whether it is trustworthy along various dimensions, and what the changes to your business actually are." This seems like a pretty irresponsible business decision, right? We could disagree about our priors on how robust AI systems are and how gracefully they fail; my thinking about current systems is that they fail in substantive ways far too often for this to be an acceptable deployment plan.

And sure, maybe an algorithmic impact assessment is a particularly bad way to deal with this issue, but it also seems pretty bad not to have any required checks.

