Discussion about this post

Bill Taylor

I loved this article, even if liability is a hard thing to love.

Regarding your second proposed 'compromise' route, you said: "companies themselves—or perhaps a non-regulatory but still powerful public or private standards body—would establish industry best practices for security and risk mitigation. Meeting these best practices would grant a company effective safe harbor from liability."

A few comments and questions on this route:

In the automotive industry this type of activity and standardization is well under way, covering software in general, systems with software, and now AI-driven systems for autonomous and semi-autonomous driving. These are the subjects of major industry safety standards, including ISO 26262, ISO 21448, ISO 8800, and others. The basic idea is that software and AI must be developed "safely" according to these standards, including a somewhat independent assessment by third parties to confirm compliance. If such compliance is achieved, the manufacturer/developer can claim that the "state of the art" has been achieved. 'SOTA' seems to be a major achievement in a European liability context. But US manufacturers are in some sense trapped: they must comply to avoid obvious exposure and to export their products, yet the SOTA idea doesn't give them much protection in US courts. US manufacturers grumble that they do all this work for European compliance but then still get sued in the US. Is there a way to tweak this model to get better protection in the US? Or, what would be the enabler of truer US protections under this concept?

As a minor niggle and thought exercise: IMO it's business users, more than developers of software, who bear the burden of liability. A developer of software is often naturally shielded by the fact that they themselves don't use the software to do anything; it's the business users who bear the brunt. So a common defense is something like: "I just sold you the (gun / HR software); you were the one who (pulled the trigger / selected the hire)." I only raise this because I think truly modern, workable liability protections need to extend to users of products somehow. "How" is perhaps beyond my experience.

Cullen O'Keefe

Hey Dean. As always, I appreciate your thoughts on these topics.

I worry that these posts miss the central question of the liability debate. Most of your arguments seem to support the proposition that, as between a transacting AI developer and a consumer, liability should be derived mostly from contract rather than from tort law.

But it seems to me that the main question raised by 1047 and other liability proposals is what to do as between an AI developer who is at fault (e.g., negligent) and an injured third party, when there is no contract between them.

Contracts, of course, are voluntary. I am under no background obligation to contract with AI providers as to any injuries their products may cause me as a third-party bystander. So the terms I would be willing to agree to would of course depend on where tort liability would lie in the absence of a contract.

It seems to me that skeptics of 1047 and other liability proposals want the answer to be: if an AI developer fails to take reasonable care and thereby causes a third party harm (in the legally relevant sense),* the third party should simply bear the costs themself (even when there is no contract between them and the third party is not also blameworthy).† It seems very hard to me to justify this position. The loss must be allocated between the two parties; the decision to let the loss lie with the plaintiff is still a policy choice. Morally, it seems inappropriate to let losses lie with the less blameworthy party. But more importantly, it is economically inefficient from the perspective of incentivizing the proper amount of care: the developer could easily have invested additional resources in the safety of their products, but the third party could not have done so.

Maybe this argument is wrong in some way. But the arguments about the viability of contract simply have very little relevance. More generally, it would be good to identify where you agree with and diverge from mainstream tort law and theory.

The argument about litigation costs is more on-point. But note that this cuts both ways: it also makes it harder for the injured party to vindicate her rights. And indeed, given the nature of these things, her legal costs will probably be much more painful to her than to the developer. If litigation costs are the main problem, I don’t think the right answer is to simply erase tort liability for negligent developers: the more tailored answer is just to figure out how to reduce such costs. There are plenty of proposals on how to do this, and I think there is widespread consensus on the need for reform here. (E.g., https://judicature.duke.edu/articles/access-to-affordable-justice-a-challenge-to-the-bench-bar-and-academy/). (Also, I am hopeful that AI lawyers will dramatically decrease litigation costs, if we can keep bar associations from getting in the way!)

* Cf. https://www.law.cornell.edu/wex/cause#:~:text=In%20tort%20law%2C%20the%20plaintiff,proximate%20cause%20of%20the%20tort.

† In cases where the third-party could have prevented the harm with reasonable care, standard tort doctrine is to either absolve the developer of liability entirely, or partially offset the developer’s liability (https://www.law.cornell.edu/wex/comparative_negligence).
