Discussion about this post

Andrew Bowlus:

Almost all major AI labs have called for NIST to develop a single set of standards for AI safety, specifically standards relating to encryption of user data and model weights. Google asked for international standards from the International Organization for Standardization / International Electrotechnical Commission in its OSTP AI plan proposal. All that to say, it seems as if labs want standard safety regulations to (1) cover their liability (tort liability requires establishing "duty," i.e. the defendant owing a legal duty to the plaintiff, and by following a recognized set of standards a lab can argue it was operating within the law if harm came from malicious use of its model) and (2) ensure that smaller AI startups do not compromise their progress by committing safety errors that result in harsher regulation.

MT:

Interesting post. Are there examples of (serious)* AI bills pushing for this type of broad liability? SB 1047 was a liability bill, but it only kicked in if (a) something truly catastrophic happened, **and** (b) the AI company did not follow its own safety procedures. I get that there are downsides to this narrower form of liability, but the weight of your argument rests on examples of pretty extreme liability.

* "serious" is doing a lot of heavy lifting here. I basically mean, I see you arguing against SB 1047-types, so "serious" means bills coming out of that vein.

