Navigating AI: The Technology That Demands We Grow Up
*Quick note from Matt*
Ethan is a business partner and co-builder of The Liminal Leap, and he’ll be writing here with me on an ongoing basis. I’m excited for subscribers to get to know him and engage with the perspective he brings.
This is his first piece for The Liminal Leap, and it speaks directly to what’s at stake with AI today.
First, a Hello to The Liminal Leap Readers
Hi there, I’m Ethan. I grew up in Kansas, studied engineering, and landed in Silicon Valley building AI systems and founding a startup. The venture failed—and more significantly, the identity I'd built around achievement and performance spectacularly imploded. Which turned out to be exactly the wake-up call I needed.
The failure opened a door. I spent eight months in the Amazon training in plant medicine work, completed a vision quest, dedicated years to relational healing, and logged countless hours on meditation cushions learning from teachers across lineages. Now I serve the Planetary Dharma—my spiritual home—where I help build infrastructure for a multi-year wisdom school integrating ancient contemplative practices with modern psychology to meet contemporary challenges. I'm also stewarding a collective exploration within that community about how to be in right relationship with AI.
I'm fascinated by developmental theory, the intersection of technology and sacred practice, and what it means to build from wholeness rather than fragmentation. I believe our collective relationship with AI is one of the most consequential questions of our time—both spiritually and practically. I'm grateful for the opportunity to carry The Liminal Leap forward alongside Matt to explore what conscious engagement with AI looks like when we show up with our full humanity.
The Stakes We're Facing
When it comes to the existential risks of AI, there's no consensus about how likely catastrophe might be. But consider this:
Most leaders at major AI companies believe general superhuman AI will arrive within the next decade. This is happening alongside rapid advances in robotics—AI systems that can act in the physical world.
The Ape Problem
Consider the "ape problem." Primate survival is entirely dependent on human decisions—because we're the more intelligent species. We're watching humans destroy primate habitats as a side effect of pursuing our own goals. It's not even intentional. We're not trying to wipe out these species. It's just happening as we pursue our goals without a sufficiently wide scope of care.
Now ask: how likely is it that an AI system would pursue its own goals at the expense of human needs and life? That depends on the nature of the intelligence itself and how these systems are architected—questions we'll explore in depth elsewhere.
For now, let's just say it's plausible.
And given the level of risk we're talking about (potential species extinction), even a small possibility of this outcome warrants guidance from the greatest level of collective wisdom our species can muster.
No One at the Wheel: The Multipolar Trap
Unfortunately, humanity's collective wisdom is not currently behind the wheel of AI development. Far from it.
Instead, we're witnessing a global race with no unified guidance. If anything is steering, it's profit-maximizing incentives and techno-optimist dreams.
Some AI leaders see the risk and advocate for stronger governance. Yet they operate under a grim assumption: if they step back, someone else will simply take their place and push full speed ahead. This is what researchers call a multipolar trap—situations where individually rational choices lead to collective catastrophe. It's a planetary prisoner's dilemma playing out in real time, except the stakes aren't abstract game theory but the actual future of the human race.
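For readers who like the logic laid bare, here's a minimal sketch of the trap as a two-player game. The payoff numbers are purely illustrative assumptions, not estimates of anything real; the point is the structure.

```python
# A minimal sketch of the multipolar trap as a two-player game.
# Payoff values are illustrative assumptions, not empirical claims.
# Each lab chooses to "pause" or "race"; racing always looks better
# individually, yet mutual racing is the worst collective outcome.

PAYOFFS = {
    # (my_choice, their_choice): (my_payoff, their_payoff)
    ("pause", "pause"): (3, 3),   # coordinated safety: best shared outcome
    ("pause", "race"):  (0, 4),   # I pause, they capture the lead
    ("race",  "pause"): (4, 0),   # I capture the lead
    ("race",  "race"):  (1, 1),   # everyone races: collective worst
}

def best_response(their_choice: str) -> str:
    """Return the individually rational choice against a fixed opponent."""
    return max(["pause", "race"],
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

# Racing dominates no matter what the other player does...
assert best_response("pause") == "race"
assert best_response("race") == "race"

# ...so both players race, and both end up worse off than if they'd paused.
print(PAYOFFS[("race", "race")], "<", PAYOFFS[("pause", "pause")])
```

Because racing dominates regardless of what the other player does, every actor "rationally" races, and all of them land at the mutually worst outcome. That dynamic is exactly the race to the bottom described above.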
The Coordination Gap
The gap between our technological power and our capacity to coordinate wisely underlies most of our challenges. We're developing systems that can generate novel proteins, write persuasive propaganda, and automate military decisions—yet we can't agree on basic safety standards, let alone the deeper questions about what kind of world we're building and who gets to decide.
Keep increasing the power of technology and weaponry in a world still playing the game of empire and domination, and the likelihood of planetary catastrophe moves from "possible" to "highly probable"—with the odds worsening each year.
And we haven't even discussed the near-term risks: environmental impacts, surveillance expansion, accelerated misinformation, concentrated wealth and power, eroding critical skills, mass job displacement, and more.
Whoa.
The Path Forward: Growing Up as a Species
The long-term solution? Planetary-scale coordination that transcends the win-lose dynamics underlying recorded history.
Why planetary-scale? Because AI's impact is global, and the multipolar trap means partial coordination isn't enough. If even one major actor defects—one nation, one company, one lab pushing forward while others pause—the whole system breaks down. The technology doesn't respect borders. A breakthrough in one location affects everyone, everywhere. This isn't like regulating cars or pharmaceuticals where regional standards can contain risks. We need genuine collective coordination, or we get the race to the bottom we're currently witnessing.
It's no small task. We're talking about collective maturation at a species level.
Here's the good news: we have maps for this journey. They emerge from where adult developmental psychology meets ancient wisdom traditions, and they point toward the same developmental capacities—the ability to witness our own reactivity without being consumed by it, to expand our circle of care, to hold multiple perspectives simultaneously.
This isn't abstract philosophy. We're talking about concrete practices: meditation and contemplative work, trauma healing, relational skill-building, psychological integration. The kind of work that actually changes how we show up in the world.
(One note: this developmental work unfolds far more effectively as an integrated, multi-year journey in a committed community rather than a scattered collection of weekend retreats and one-off trainings. But that's a conversation for another post.)
Why Mature Humans Are Essential
Why does planetary-scale coordination require developmental maturity?
Because without inner work, we default to fear-based, zero-sum thinking. We fragment under complexity. We react from unprocessed trauma. No governance structure can function wisely when operated by people in this state.
The real bottleneck isn't technology or policy—it's developmental capacity. Can we hold multiple perspectives simultaneously? Can we regulate our reactivity? Can we stay connected under pressure? These capacities determine whether coordination produces wisdom or merely concentrates power.
And here's the thing: inner development doesn't happen in a vacuum. It's inseparable from the systems we're embedded in. Which means this transition requires transformation not just of individuals, but of our economic structures, our political systems, and the cultural and educational frameworks that shape how we make sense of reality itself.
Why We Can't Just Pause
After hearing all of this, you might think: let's just stop. Abandon AI development entirely and focus on advocacy for a pause until we can wrap our heads around this together.
If we lived in a world governed by a wise council that actually listened to public sentiment, I'd agree.
But we're not there yet.
(Note: Campaigns for governmental oversight and slower AI development do buy us time—and time matters. But as we explored above, this isn't a viable long-term solution until planetary-scale coordination and the corresponding developmental and systemic transformations are much further along.)
Meanwhile, AI is here. All signs point to it staying and improving, given the attention and resources flowing into it. Which brings us to the pivot point.
AI as Ally: A Different Possibility
Acknowledging AI's risks doesn't negate its potential as a tool for building the world we want to see. This isn't naïve optimism—it's recognizing that the same technology highlighting our coordination challenges can also support our capacity to meet them.
When wielded skillfully, consciously, and within community, AI can accelerate the very inner development we need. It can help us make sense of these complex times. It can become an ally in generating and implementing novel solutions. It can amplify our ability to offer our gifts in ways that serve both individual growth and collective evolution.
This is the crucial insight: the tool that poses existential risk can also support existential development. The question isn't whether AI is dangerous or helpful—it's how we choose to engage with it.
The Post-Tragic Stance
At The Liminal Leap, we bring a "post-tragic" stance to AI. Post-tragic means looking directly at difficulty and feeling through it without collapsing into despair. It means acknowledging that the risks are real—existential, environmental, social—while recognizing that positive potentials are equally real.
This stance requires specific developmental capacities: the ability to resist binary thinking, to stay present with the depth of discomfort that arises when refusing to collapse into easy answers, and to act meaningfully even when outcomes aren't guaranteed. It's what allows us to work skillfully in the liminal space between "everything will be fine" and "nothing matters."
We believe that embodying these capacities consistently only happens in community—in networks of shared sensemaking and mutual support.
The Writing on the Wall
Whether it's AI, weaponized bio-engineering, or a technology not yet invented, the writing is on the wall for humanity: we grow up and learn to wield our technological capacity wisely, or we have little chance of creating the beautiful, sustainable planet we'd like to pass down to future generations.
When it comes to growth and change, all we can ever do is start where we are.
Starting Where We Are
The good news? This work is already happening. Communities around the world are learning to connect to purpose amidst uncertainty, to develop the capacities we've been discussing, to become wiser stewards of our collective power. Some focus on contemplative practice, others on collective sensemaking, others on regenerative, sustainable systems.
At The Liminal Leap, we're exploring what becomes possible when a deliberately developmental community engages AI as a practice rather than just a tool or a threat. We're building capacity to work skillfully in the liminal space, neither collapsing into "AI is bad" nor into "the risks are over-hyped."
Here's what we know: we have the maps, practices, and intuition that show us where and how to grow. We have the creativity to use our technological and social context in service of our becoming. We have the courage to meet both the challenges and the beautiful potentials head-on, with clear eyes and open hearts.
And most importantly, more and more of us have the wisdom to recognize that it's time to do this work together.
===
Gratitude goes out to the teachers and guides who have shaped the perspectives shared in this article. In particular, Dr. John and Nicole Churchill for their mentorship on human development and community building, and Daniel Schmachtenberger for his clear-eyed analysis of the meta-crisis.