Discussion about this post

Andrew Bowlus:

Almost all major AI labs have called for NIST to develop a single set of AI safety standards, specifically standards for encrypting user data and model weights. In its OSTP AI plan proposal, Google asked for international standards from the International Organization for Standardization / International Electrotechnical Commission. All that to say, it seems the labs want standard safety regulations to (1) cover their liability (tort liability requires establishing "duty," meaning the defendant owed a legal duty to the plaintiff; by following an established set of standards, a lab can argue it met that duty even if harm came from a malicious use of its model) and (2) ensure that smaller AI startups do not compromise their progress by committing safety errors that result in harsher regulation.

ai-plans:

Have you read this? A team in the AI Liability Ideathon did useful work highlighting which stakeholders can be held liable, and on what grounds: https://docs.google.com/spreadsheets/d/1UvPVStwCZQeDcdlRw4x3Tg5KUWOhGsjTeLsS5HyT3cQ/edit?gid=0#gid=0
