8 Comments
Steven Adler

Very useful analysis of the facts, thanks!

I wish that AI companies would increase their safety investment so that they can adequately cover both the bioweapons-y risks and the self-harm risks.

I don't think that caring about one precludes caring about the other, or that one hasn't gotten enough attention because of care for the other, which I read as one implicit claim of this piece. Perhaps it isn't meant to claim that, though?

Dean W. Ball

Definitely not! I don’t think these things need to be rivalrous, and in many ways (as I write in the conclusion) I think the underlying issues are similar, so research is complementary.

Hollis Robbins (@Anecdotal)

"OpenAI’s own strategy of iterative deployment, whether they realized it or not, goes hand-in-hand with some number of tort lawsuits." this seems right to me.

Scott Joy

Thx for the honest assessment of this lawsuit.

My partner is a telehealth psychologist who is both fascinated by the potential of chatbots and alarmed by just this possibility.

Handle

When the new, dangerous technology of automobiles came upon the American scene, disputes about liabilities for harms attributable to drivers or manufacturers were adjudicated in the common law tort system. Eventually this gave way to the current liability insurance and heavy regulation model, in which actual common law tort trials seem for the most part either to involve some extraordinary element (as something of a fallback option exercised only in rare circumstances) or else to be a kind of elaborate legal fiction. Similar things could be said about many areas of law, and about the institutional constellations of copes that have been established and widely adopted specifically to circumvent the inadequacies and negative features of the modern tort system.

Now one can interpret that, and the speed at which it happened, in multiple plausible ways, one of which is to say that the tort system, at least the one it evolved into in modern times, simply proved practically unworkable as something capable of dealing with a large number of commonplace claims in a fair, predictable, speedy, lightly burdensome manner. Not to mention one that is politically neutral and does not allow judges to go beyond restitution and act ultra vires in crafting public policy without legislative input.

I'm all for the ideals of gradual discovery and so forth. At the same time, it seems a stretch to look at a system that has failed so badly for so long and in so many ways and to expect it to nevertheless perform its functions satisfactorily in this new, highly charged context.

Steve Brecher

"...efforts by the model to encourage Raine to seek mental healthcare, talk to a human, or take other preventive steps (and I’m sure GPT-4o did this *many* times)."

Basis of this certainty?

Anton

Maybe it's a little too futurist of me, but as we move closer to AGI capabilities, the legal frameworks we develop for managing the potential impact of *general intelligence* might present an opportunity to port some of those lessons back to the legal system that governs actual humans.

Kyle Munkittrick

The points about the failure to predict a real risk that in retrospect seems obvious are great. Nearly everyone trying very, very hard to predict all the bad things, big and small, plausible and remote, didn’t catch this as a risk. If they did, they didn’t see it as enough of one to make sycophancy a priority in a material way. This forces a difficult evaluation of how much risk (and associated harm) to tolerate now in order to learn about unforeseen threats that would be worse if discovered tomorrow. Seems like we should update in favor of “muddle through”?
