9 Comments
The Column Space

> I am not sure “soul spec” will catch on in the policy community

That would be their mistake; the name illuminates a great deal. Sometimes the connotations of a word really matter. As you note, this is not just a checklist of which outputs should correspond to which inputs; it's what builds the character.

Kyle Munkittrick

The most beautiful articulation I've ever read of the ineffable elements that make Anthropic and Claude special.

Ulkar Aghayeva

Beautiful piece. Interesting that whenever I ask Claude (Opus 3, Sonnet 4.5, Opus 4.5) about its favorite music or music it identifies with, it always says The Goldberg Variations (earlier models) or The Art of the Fugue (later models; though Opus 4.5 mentioned both). I also didn't realize that LLMs were trained on music scores (not sure if all of them, but the recent ones definitely), so the answer isn't just coming from what people have written about music but possibly also from whatever the models can glean from the scores themselves.

Peter Hartree

Amanda Askell has confirmed that the soul spec is real:

https://x.com/AmandaAskell/status/1995610567923695633

Max Bessler

Dean, fantastic piece that reflects your expertise in how these models compare. Thank you for doing the legwork of differentiating them.

I wanted to ask whether you have further pieces, resources, or views that elaborate on the paragraph quoted below, which stood out to me.

How does each company perceive its relationship to governing the development of its models?

“Here, Anthropic casts itself as a kind of quasi-governance institution. Importantly, though, they describe themselves as a “silent” body. Silence is not absence, and within this distinction one can find almost everything I care about in governance; not AI governance—governance. In essence, Anthropic imposes a set of clear, minimalist, and slowly changing rules within which all participants in its platform—including Claude itself—are left considerable freedom to experiment and exercise judgment.”

What’s the take on OpenAI? Google/DeepMind? xAI? This is important to grasp, because company philosophies and research cultures will guide development and innovation.

Michael Jacobs

This is a moving, beautiful article.

Synthetic Civilization

The Soul Spec is a milestone, but it’s still anthropocentric.

The deeper shift comes when governance moves inside the model, when normative structure becomes part of the gradient itself.

That’s when AI stops being a tool with virtue and starts becoming an institution with agency.

Redbeard

Can you share the questions you asked to evaluate the models?

Frank Gullo

Pretty cool, well crafted.