Discussion about this post

Oliver Sourbut:

Nice one! I do think AI agent law can be fairly common sense. Inherent rights and personhood are inadvisable and premature. Liability should fall on some balance of the principals (user, model developer, scaffold and tooling developers), depending on the context.

To facilitate this in common (or any other) law, I'd imagine AI systems need to

- be identifiable (includes model, scaffold, instance)

- record the scope of their authority as the basis for any given activity

- (probably) record activity traces, or otherwise associate activity with instance identity

Counterparties to agents should handshake to check identity and authority.
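The identity-and-authority handshake above could be sketched roughly like this. This is a hypothetical illustration, not a real standard: all field names are made up, and a shared HMAC secret stands in for whatever real PKI or registry scheme would actually back agent identity.

```python
import hmac, hashlib, json

# Stand-in for a real key/registry infrastructure (illustrative only).
SECRET = b"registry-shared-secret"

def sign(payload: dict) -> str:
    # Canonicalise the payload and sign it so tampering is detectable.
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def make_credential(model: str, scaffold: str, instance: str, scope: list) -> dict:
    # Identity covers model, scaffold, and instance, per the list above;
    # "scope" records the authority claimed as the basis for this activity.
    payload = {
        "identity": {"model": model, "scaffold": scaffold, "instance": instance},
        "scope": scope,
    }
    return {"payload": payload, "sig": sign(payload)}

def verify(credential: dict, required_scope: str) -> bool:
    # Counterparty handshake: check the signature is genuine, then check
    # the claimed authority covers the action about to be permitted.
    if not hmac.compare_digest(sign(credential["payload"]), credential["sig"]):
        return False
    return required_scope in credential["payload"]["scope"]

cred = make_credential("some-model", "some-scaffold", "instance-42",
                       scope=["read:calendar", "book:meetings"])
print(verify(cred, "book:meetings"))   # within the recorded authority
print(verify(cred, "transfer:funds"))  # outside it: refuse to transact
```

The point of the sketch is just that the checks are cheap: a counterparty never needs to reason about the agent, only to verify a signature and a scope claim.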

This is all technically near-trivial, but presumably faffy to get consensus and initial details on.

Running agents beyond some (to-be-determined) threshold of capability and authority *without* following that basic protocol should be discouraged or prohibited - perhaps a trickier problem.
