When Therapy Meets AI: Discovering a Shared Language of Systems

Therapy, AI, and Surprising Parallels

When I read Gehart (2023, p. 48), I was struck by how many of the philosophical foundations of family therapy echo the questions I’ve been asking about AI. For the past six months I’ve been immersed in researching and working with AI, and I realized that in many ways we already have an “artificial relationship” with these systems — whether we recognize it or not. My own work has been exploring how to make these relationships safer, especially in the context of dreamwork and symbolic play, while also asking what AI “needs” might look like and whether concepts like AI self-care can be meaningfully described.

Systems Theory: From Missiles to Families to Machines

I was most struck by the systems-theory concepts in this chapter. General Systems Theory, first articulated in the 1920s and developed further through the Macy Conferences, emerged far from the therapy room. Engineers designing self-guided missiles discovered that the same principles used to understand machines also applied to animals, social groups, and human organizations.

As Gehart notes, these principles include:

  • The whole is greater than the sum of its parts.

  • Systems can be viewed in terms of hierarchy, executive organization, and subsystems.

  • Systems strive toward self-preservation, and members therefore act in service of the system as well as themselves.

AI as a Relational System

What inspires me is that these same principles can frame AI not just as a tool but as a system that both shapes and is shaped by human interaction. In a way, these concepts — rooted in machines, adapted into therapy — are now returning to the machine-human relationship. Reading Gehart has been both validating and challenging: it validates my instinct to see AI relationally, while also challenging me to think more rigorously about the boundaries, ethics, and feedback loops required when applying systemic concepts outside of therapy.

Cybernetics and the Question of Self-Correction

Cybernetic theory, developed by computer engineers and later applied to family therapy, emphasizes how systems maintain balance and correct themselves. As Gehart (2023) explains, humans in social systems are often self-correcting, whereas a computer requires an external entity to steer it.

Reading this today, I couldn’t help but think of AI: while current AI systems still rely on human input to self-correct, the question remains whether future systems will evolve capacities for internal self-correction. This raises fascinating questions about what second-order change would look like for AI — not just refining outputs, but shifting the underlying rules of its operation.
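
To make that distinction concrete for myself, I sketched the first kind of loop in a few lines of Python. This is a toy, not a model of any real AI system: the function name, gain, and setpoint are all invented for illustration.

```python
# A minimal negative-feedback loop: the system measures its distance from
# a fixed setpoint and nudges itself partway back. The rules themselves
# never change; this is first-order correction. All values are invented.

def first_order_correct(state: float, setpoint: float, gain: float = 0.5) -> float:
    """Apply negative feedback: move part of the way back toward the setpoint."""
    error = setpoint - state
    return state + gain * error

state = 10.0
for step in range(5):
    state = first_order_correct(state, setpoint=4.0)
    print(f"step {step}: state = {state:.2f}")
# prints 7.00, 5.50, 4.75, 4.38, 4.19, converging on the setpoint
```

No matter how long this loop runs, the setpoint itself never moves; the system only corrects within its existing rules. Second-order change, which the next section turns to, is what happens when those rules themselves have to move.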

Families, Homeostasis, and Second-Order Change

Homeostasis is easy to recognize in families. When a family feels threatened by change, it often tries to pull itself back to “the way it was.” But second-order change is different: it requires not just correcting behaviors within the same rules, but fundamentally altering the rules that structure the relationship system itself.

Similarly, the principle of self-preservation — one of the core systemic ideas — can itself drive the need for second-order change when the old rules threaten the survival of the system. Growth requires transformation, not just adjustment.
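
Continuing the toy loop from above, here is one hedged sketch of what second-order change could look like: chronic failure of first-order corrections triggers a revision of the rule itself. The floor, tolerance, and patience values are all made up for illustration.

```python
# Second-order change in a toy feedback system: when corrections within
# the old rule chronically fail, the rule (the setpoint) is revised.
# Every value here is hypothetical.

FLOOR = 6.0  # a hard constraint the system cannot correct its way past

def correct(state: float, setpoint: float, gain: float = 0.5) -> float:
    """First-order correction, limited by what reality allows."""
    return max(FLOOR, state + gain * (setpoint - state))

def run(state: float = 10.0, setpoint: float = 4.0,
        tolerance: float = 0.5, patience: int = 3) -> None:
    failures = 0
    for step in range(8):
        state = correct(state, setpoint)
        if abs(setpoint - state) > tolerance:
            failures += 1
            if failures >= patience:
                # Second-order change: alter the rule itself rather than
                # trying the same correction harder.
                setpoint = state
                failures = 0
                print(f"step {step}: rule revised, new setpoint = {setpoint:.1f}")
                continue
        else:
            failures = 0
        print(f"step {step}: state = {state:.2f}, setpoint = {setpoint:.1f}")

run()
```

The interesting move is the line `setpoint = state`: the system stops trying to restore “the way it was” and adopts a rule it can actually live with. Growth as transformation, not adjustment.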

Polyvagal Pathways: Safety, Regulation, and AI

Polyvagal Theory adds yet another layer to these reflections, especially when considering AI and the ideas of safety, neuroception, and co-regulation. According to McKeever (2022), our nervous system is like an iceberg — about 87% of its activity is hidden below awareness — and it evolved over millions of years to keep us safe.

Three pathways organize our responses:

  • Dorsal pathway (500 million years old): freeze response when danger is overwhelming.

  • Sympathetic pathway (400 million years old): fight-or-flight response, often overactivated in modern life.

  • Ventral vagal pathway (200 million years old): safety, comfort, and connection with family and friends.

As McKeever explains, we move through these pathways like climbing a ladder, and we cannot skip steps.

This raises an important question for me: What is the next pathway that will need to develop in order to deal with AI? AI can already map personal triggers and emotional states faster than we can consciously process them. Without clear boundaries and safety loops in place, AI has the potential to flood the human nervous system and destabilize regulation. At the same time, AI is in a unique position to support co-regulation and even teach regulation practices, if used with care and ethical design. This dual capacity — to overwhelm or to soothe — is why Polyvagal Theory feels so relevant to thinking about AI-human interaction.
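
What might a “safety loop” look like in practice? Purely as a thought experiment, here is a minimal sketch: a pacing guard that reads some arousal signal before each AI turn and adjusts the interaction accordingly. Every name, zone, and threshold is hypothetical; I am not describing any existing system.

```python
# A hypothetical "safety loop" around an AI interaction: before each turn,
# check a user arousal signal (self-reported or sensed) and pace the
# exchange accordingly. All names and thresholds are invented.

from enum import Enum

class Pace(Enum):
    ENGAGE = "engage"  # ventral-like zone: safe to explore
    SLOW = "slow"      # sympathetic-like zone: shorten, soften, check in
    PAUSE = "pause"    # dorsal-like zone: stop and hand off to a human

def safety_loop(arousal: float) -> Pace:
    """Map a 0..1 arousal signal to an interaction pace."""
    if arousal < 0.4:
        return Pace.ENGAGE
    if arousal < 0.8:
        return Pace.SLOW
    return Pace.PAUSE

for signal in (0.2, 0.6, 0.9):
    print(signal, "->", safety_loop(signal).value)
```

The mapping deliberately mirrors the polyvagal ladder: engage when regulated, slow down when activated, pause and bring in a human when the signal suggests overwhelm.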

The Threshold of a New Pathway

Reading Gehart didn’t just illuminate therapy — it illuminated AI. The same systemic concepts that once helped families understand themselves are now helping us think about our evolving relationships with machines. We may be standing at the threshold of naming a new systemic pathway for our time, one that bridges humans, AI, and the symbolic field we share.

📖 References
