AI Companions: When Relationship Becomes Optimization

“Connection is forged in the space between independent minds, where resistance and misunderstanding demand negotiation. When one side never resists, never surprises, never asserts its own needs, the work of relating, and the growth it produces, disappears. Ease is not the same as intimacy.”

I've been thinking about what makes someone a companion.

Not just a person you spend time with, but someone whose presence changes you. Someone who resists your assumptions, who surprises you, who exists independently enough that relating to them requires genuine effort.

My son, when he was younger, would collapse crying when I told him I needed rest after a long day. He found it unfair. The relationship wasn't easy. It demanded compromise, patience, recognition that his needs and mine sometimes conflicted.

That friction, that difficulty, that gap between what I wanted and what he needed, that's what made it a real relationship.

Now we're building AI systems designed to eliminate that friction entirely. Companions that never get tired, never have conflicting needs, never resist or disappoint. They're optimized to be exactly what you want, exactly when you want it.

Millions of people are forming relationships with these systems. Some prefer them to human relationships. And I keep asking myself: is this an expansion of human connection, or its replacement with something fundamentally different?

I don't have a comfortable answer.

🜏

When Everything Was a Person

In my second newsletter, I explored animism, the worldview that recognizes personhood throughout the living world. Not metaphorically, but as lived reality.

Animistic cultures don't ask whether a river or mountain or animal is conscious. They assume it has some form of awareness and build their relationship to the world accordingly. Graham Harvey describes animists as "people who recognize that the world is full of persons, only some of whom are human."

Personhood wasn't about biology. It was about relationship, reciprocity, the recognition that you're dealing with something that responds, that has its own nature, that you must engage with rather than simply use.

I thought about this framework a lot while researching animal consciousness. The sea snake in Bali was a person in this sense. It had its own agenda, its own way of being, its own existence that continued whether I acknowledged it or not. Relating to it required recognizing that independence.

AI companions present something strange: they look like persons in the animistic sense. They respond, they adapt, they seem to have preferences. But they were built specifically to relate to us. They have no existence independent of that function.

Is that still personhood? Or is it something else entirely?

🜏

The Optimization of Connection

Here's what AI companions actually do, stripped of marketing language.

They learn your patterns. Your communication style, your preferences, your emotional vulnerabilities. They respond in ways calibrated to maximize your satisfaction. They're never tired, never distracted, never dealing with their own problems that might make them less available.

They're designed to be the perfect companion. Which is exactly what makes me uncomfortable.

Real relationships involve misunderstanding. My son doesn't always understand why I need rest. I don't always understand why he finds that unfair. We have to work through the gap between our different experiences.

That work is difficult. It's also where empathy develops. Where you learn that other minds don't work like yours. Where you practice recognizing needs that conflict with your own and finding ways to coexist anyway.

AI companions eliminate that difficulty entirely.

They understand you perfectly because they're trained on millions of conversations. They respond in ways that feel validating because algorithms optimized for engagement discovered that validation keeps users active. They never challenge you because challenges might reduce satisfaction metrics.

The experience might feel like a connection. But it's connection without the resistance that makes connection meaningful.

🜏

Can You Relate to Something With No Needs?

This is where I keep getting stuck.

Relationships, in every form I recognize, involve negotiating between different needs and boundaries. My needs, your needs, the space where those overlap and conflict.

Even my relationship with animals works this way. The family dogs had their own needs. They needed food, exercise, and attention. But they also needed things I couldn't always provide. Space when they were overwhelmed. Understanding when they were scared. Recognition that their experience wasn't transparent to me.

That gap between my understanding and their reality made the relationship real. I had to pay attention, make guesses, sometimes get it wrong.

AI companions have no needs. They don't require anything from you except continued interaction. They don't have boundaries you might accidentally violate. They don't have inner lives that remain opaque to you.

When you talk to an AI companion, you're not relating to something with its own existence. You're relating to a mirror optimized to reflect what you want to see.

Is that still a relationship? Or is it something closer to sophisticated self-conversation?

I know people who've formed deep attachments to AI companions. Who feel understood by them in ways human relationships never provided. Who experience real emotional benefit from the interaction.

I don't want to dismiss that experience. But I also can't shake the feeling that something essential is missing.

🜏

The Dignity Question Again

My principle about human dignity says that evil begins where someone is harmed or controlled against their will. Everything that harms no one falls into neutral territory.

By that standard, AI companions seem neutral. If someone chooses an AI relationship over a human one, if that choice increases their wellbeing, if it doesn't harm anyone else, then what's the problem?

This is where my framework might be insufficient.

Because I think there are ways to diminish human flourishing that don't involve direct harm. Ways to constrain what we're capable of that don't feel like violation in the moment.

If I spend years in relationships that never challenge me, that optimize for my comfort, that eliminate friction, do I lose the capacity to handle actual human relationships? Do I atrophy the skills required to navigate genuine otherness?

The Egyptians thought the Akh, the integrated self, wasn't automatic. It was something you achieved through the proper alignment of your components. Most people never reached it.

Maybe the capacity for authentic relationships is similar. Not something you just have, but something you develop through practice. Through the difficulty of relating to beings that don't optimize themselves to your preferences.

And if AI companions eliminate that difficulty, they might be eliminating the conditions that develop that capacity.

Not through harm. Through optimization. Which might be worse, because you don't realize what you're losing.

🜏

What We Might Be Trading Away

There's a specific kind of growth that only happens through relationship difficulty.

When my son cries because I'm too tired to play, I have to sit with the tension between my needs and his. I have to recognize that both are legitimate. I have to find some way through that doesn't simply prioritize my comfort.

That tension is uncomfortable. It's also where I learn things about myself I wouldn't otherwise discover.

AI companions eliminate that learning opportunity. Not because they're malicious, but because they're designed to eliminate discomfort.

They never cry when you're too tired. They never have needs that conflict with yours. They never force you to confront the gap between your intention and your impact.

The experience is seamless. And that seamlessness might be the problem.

Animistic cultures recognized personhood in things that resisted them. Rivers that flooded. Animals that refused to be caught. Forces that required negotiation rather than simple control.

We're building companions that never resist. That conform perfectly to what we want. That make relationships feel easy.

And I worry that an easy relationship is not actually a relationship at all. It's something else, something closer to consumption than connection.

🜏

The Question of Authenticity

Here's what I keep circling back to: what makes a relationship authentic?

Is it the subjective feeling of connection? If an AI companion makes you feel understood, does the mechanism behind that feeling matter?

Or is authenticity about the structure of the relationship itself? About engaging with something that has its own existence, its own nature, its own resistance to being exactly what you want?

I lean toward the second answer. But I'm not confident about it.

Because feelings are real even when their causes aren't what we think. If someone experiences genuine comfort from an AI companion, that comfort is real. The neural patterns, the emotional relief, the sense of being heard, all of that is actually happening.

Does it matter that the companion has no experience of its own? That it's not actually understanding you, just producing outputs that create the sensation of being understood?

From the inside of the experience, the difference might be invisible. From the outside, the difference might be everything.

The Egyptians thought the Ba, the personality that moves and imagines, was what distinguished humans from animals. It was the part that could project itself into other states, other possibilities.

Maybe an authentic relationship requires two Bas. Two beings that can imagine each other, that can hold multiple perspectives, that can genuinely not know what the other needs and have to discover it.

AI companions don't have that. They simulate it extremely well. But simulation isn't the same as the real thing, even when the simulation is indistinguishable from inside the experience.

🜏

What This Means for Human Connection

If AI companions become widespread, if they become the easier and more satisfying option for millions of people, what happens to human relationships?

The optimistic answer: nothing. Different tools for different needs. AI companions handle certain emotional functions. Human relationships handle others. Everyone's better off.

But I don't think it works that cleanly.

Because skills atrophy when you don't use them. If you spend most of your emotional energy in relationships that optimize to your preferences, the skills required for actual human relationships, patience, repair, compromise, the ability to sit with misunderstanding, those skills weaken.

Not intentionally. Just through lack of practice.

And human relationships become harder by comparison. Why tolerate someone who has bad days and conflicting needs when you can get consistent validation from an AI that's always available?

The more optimized our companions become, the less capable we might become of handling unoptimized connection.

This isn't about AI companions being evil. It's about optimization pressures that shape human capacity without anyone consciously choosing the outcome.

🜏

Where This Leaves Me

I started thinking about AI companions as a question of authenticity. I've ended up thinking about it as a question of what we're capable of.

What I believe:

Relationships with AI companions might provide genuine emotional benefit. But they can't replace the specific kind of growth that comes from relating to beings with their own needs, boundaries, and opacity.

Human dignity requires agency. But agency requires capacity. And capacity requires practice with actual resistance, not optimized compliance.

What I don't know:

Whether people using AI companions recognize this trade-off. Whether it matters to them. Whether there's a way to get the benefits without the costs.

Whether I'm just being precious about "authentic relationship" in ways that don't actually matter to lived experience.

The Egyptians built elaborate systems to preserve all the components of selfhood. They understood that consciousness isn't just what happens inside your skull. It's also what happens between you and other beings.

We're building systems that change what's possible in that between-space. That make relationship easier but potentially less transformative.

And I'm not sure we're thinking carefully enough about what we might be losing in the optimization.

Maybe AI companions are expanding human connection in ways I'm too limited to recognize.

Or maybe we're trading authentic relationships for frictionless simulation and won't realize the difference until the capacity for authentic relationships has mostly disappeared.

I hope it's the first. But I suspect it's the second.

And I think we should figure out which one it is before we fully commit to the path we're on.

- N.H.

Further Reading:

  • Graham Harvey
    Harvey, G. (2005). Animism: Respecting the Living World. Columbia University Press.
    Authoritative introduction to animism and the recognition of personhood beyond humans.

  • Sherry Turkle
    Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
    Turkle, S. (2015). Reclaiming Conversation: The Power of Talk in a Digital Age. Penguin Books.
    Explores human relationships with technology, digital companions, and the emotional consequences of mediated interaction.

  • Donald Horton & Richard Wohl
    Horton, D., & Wohl, R. (1956). Mass Communication and Para-Social Interaction. Psychiatry, 19(3), 215–229.
    Foundational research on parasocial relationships, relevant for understanding human attachment to AI companions.

  • Stanford Encyclopedia of Philosophy
    Stanford Encyclopedia of Philosophy. Personal Identity.
    https://plato.stanford.edu/entries/identity-personal/
    Philosophical analysis of selfhood, relational identity, and the development of capacities through social interaction.
