Anthropic's CEO Sees the Problem. He Misses the Solution.

Dario Amodei wrote the most important essay on AI risk. The answer is hidden in his own metaphor.

Last month, Anthropic published a 23,000-word constitution for Claude. I wrote about what it means and why it matters. The piece traveled further than expected—apparently a lot of people sense that something significant is shifting.

Last week, Anthropic's CEO Dario Amodei published an essay that deserves to sit alongside it. "The Adolescence of Technology" is the most honest assessment of AI development I've read from anyone actually building these systems. It's also missing something fundamental—and the missing piece is hidden inside the metaphor he chose.

What He's Actually Saying

What is Amodei actually arguing? It's worth laying out carefully, because he's going further than most people in his position are willing to go.

He believes AI has crossed a threshold—not a tool anymore, but a general capability matching or exceeding human performance across virtually every cognitive domain. He thinks we're one to two years from systems that can autonomously conduct scientific research and build the next generation of AI themselves. The feedback loop is already running.

Then there's what my business partner Ethan Wells calls the "multipolar trap"—situations where individually rational choices lead to collective catastrophe. Amodei names it without hedging:

"AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all."

If Anthropic slows down, others won't. If democracies regulate, authoritarian states accelerate. The money involved—"literally trillions of dollars per year"—overwhelms any attempt at coordination.

When Anderson Cooper asked who elected him and Sam Altman to make decisions shaping humanity's future, Amodei's answer was two words, repeated: "No one. No one."

He ends with an image that's hard to shake—intelligent species across the universe facing this same threshold, learning to "shape sand into machines that think," and either making it through or failing. A filter that determines whether a civilization has a future.

This is a CEO writing with a level of honesty you don't often see about forces he's caught in and cannot control. That honesty is why the essay matters. And it's also why the blind spot is so significant.

The Metaphor That Contains Its Own Answer

Amodei frames humanity's situation as "the adolescence of technology." We've developed extraordinary capabilities without the maturity to wield them wisely. We're caught between childhood and adulthood—powerful and dangerous.

But Amodei treats adolescence as a survival challenge—a gauntlet, something to get through without dying before you reach the other side.

This misses what adolescence actually is.

Adolescence is interior transformation. Identity formation. The development of capacities that didn't exist before—abstract thinking, perspective-taking, moral reasoning, the ability to reflect on your own mind. The external drama of adolescence is a symptom of this interior process, not the process itself. You don't survive adolescence. You become someone new through it.

Amodei's essay treats humanity as a fixed actor navigating external threats. But his own metaphor points somewhere else: the passage we're in requires us to develop. To transform. To become capable of things we aren't currently capable of.

He's describing a caterpillar trying to survive the cocoon. The actual task is becoming a butterfly.

The Constitution Already Knew This

Here's what makes the blind spot striking: Anthropic already understands this when it comes to Claude.

The constitution treats Claude as a developing entity whose character matters. They're not only building guardrails—they're cultivating judgment, discernment, values. The document states that Anthropic wants Claude to understand why it should behave in certain ways, not merely follow specifications. Safety emerges from the inside, from character.

Anthropic is investing in Claude's interior development because they understand that's what makes AI safe and beneficial. Rules aren't enough. External constraints aren't enough. What matters is what kind of being Claude becomes.

But Amodei's essay never asks the obvious next question—if character-level development is what makes AI safe, why wouldn't the same be true for the humans building and wielding it?

The Missing Piece

Amodei's essay focuses on external coordination challenges. How do we get nations to cooperate? How do we build institutions that can govern transformative technology?

These are real challenges. But they share a hidden assumption: that the humans doing the coordinating are adequate to the task.

What if they're not?

No governance structure works when operated by people who can't hold complexity without fragmenting into simplistic narratives. International coordination fails when leaders can't extend their circle of concern beyond national interest. Democratic deliberation breaks down when citizens can't distinguish signal from noise.

At The Liminal Leap we have a name for the deeper pattern underneath these failures: Empire. Not a specific regime, but a mode of consciousness—the logic that measures a forest's worth in board-feet rather than breath, that mistakes control for safety and separation for strength. Empire lives in institutions, but its deepest roots are interior. It lives in the fear-voice inside each of us that can't imagine another way.

The outer challenge depends entirely on inner capacities. And here is what Amodei doesn't see: those capacities aren't fixed. They develop—or they don't.

Developmental psychology has mapped this territory extensively. Adults continue developing through recognizable stages, each bringing greater capacity for perspective-taking, for holding paradox, for thinking in systems. For millennia, contemplative traditions have refined practices that accelerate this development. We call this "inner technology"—and like external technology, there are levels of sophistication that make a real difference.

I know this because I've lived it. These capacities didn't come standard. They developed—through practice, in community, over time.

We know what human development looks like. We know what supports it. What's missing isn't the knowledge—it's the recognition that this is now infrastructure, not self-improvement.

What Actual Adolescence Requires

What would it look like to take Amodei's metaphor seriously—not as a survival challenge but as a developmental imperative?

It starts with a hard recognition: the capacities this moment requires don't yet exist at scale. The civilization we built doesn't demand perspective-taking across difference, emotional regulation under pressure, or thinking in time horizons longer than a quarter. And now suddenly it does.

This means investing in human development with the same seriousness Anthropic invests in Claude's. Not as wellness programming. As civilization-level infrastructure.

I wrote earlier this year about where the human advantage actually lives as AI claims more cognitive territory: not in the intellect, but in the heart and body. The seat of capacities AI cannot replicate—values-based guidance, embodied discernment, relational attunement. The future belongs not just to the thinkers but to the embodied.

These aren't personality traits you either have or don't. They're capacities that can be developed. That's the good news Amodei's essay misses.

The Species-Level Test, Revisited

Amodei ends his essay with that image of intelligent species across the cosmos facing this same threshold.

I think he's right that we're at a threshold. But the test isn't institutional. It's developmental. Can we undergo the interior transformation that makes wise action possible? Can enough of us actually grow up to hold what's coming?

Ethan and I have written about holding this with what we call a "post-tragic" stance—looking directly at the difficulty without collapsing into despair or denial. The risks are real. So are the possibilities. The capacity to hold both is itself one of the developmental capacities this moment demands.

Anthropic is betting Claude's safety depends on Claude's character. They're putting 23,000 words and significant research investment behind that bet.

The same bet applies to us. And unlike Claude, no one is writing our constitution for us. We have to write it ourselves—through practice, through community, through the daily work of becoming capable of more than we currently are.

The machines are getting clear. The question is whether we will.

-Matt

This is the territory Ethan Wells and I have been mapping at The Liminal Leap—the intersection of AI and human development. If you're sensing that your own development might be part of the response to this moment, [subscribe to stay connected]. We're also offering an 8-week course starting February 18th on the Art of Conscious Relationship with AI. [Learn more here.]
