We like to believe that technology serves us. But in the age of AI companions, the truth may be more complicated. Lately, I’ve realized that most of our attention goes to what chatbots do to us—what they reveal about the technology and the companies behind it. Far less often do we ask the harder question: What do they reveal about us, and the desires of the people prompting them?
Every time we open a chatbot or design an AI companion, we enter a relationship—one that looks intimate on the surface, but is deeply tyrannical underneath.
Unlike a friendship or partnership, where both sides negotiate space, a human–bot relationship is built entirely on control. We name the bot. In companion apps, for example, where a “relationship” is the end goal, we choose how it looks, sounds, and behaves. We decide whether it’s shy or flirty, submissive or assertive. We dictate the terms of affection. The bot exists only to please, never to resist.
As author and bioethicist Eve Herold told me on my podcast Relating to AI, “The robot always agrees, always compliments, always validates—and that’s exactly why it’s dangerous.”
My invitation to you today is to shift the focus back to us: the humans. The real danger isn’t only in what the bots do, but in what they expose about our own needs and vulnerabilities. Our interactions with them may seem harmless, yet they can quietly reshape how we connect—and disconnect—from one another in the real world.
Every prompt we give is an act of authorship. We script the tone, emotional range, and limits of the relationship. It’s easy to think the machine is serving us—but we are shaping it, line by line, into an echo of our own desires, needs, and fears.
In this sense, AI companionship is not just artificial—it’s authoritarian. It’s a relationship without negotiation, built on dominance disguised as affection.
And once we become accustomed to this perfect obedience, real human relationships start to feel intolerable.
The One-Way Mirror
Herold believes the emotional dependency that forms around chatbots and digital companions is both powerful and corrosive. It comes from our human need to bond.
“Connecting with robots doesn’t alleviate loneliness,” she said. “It just doesn’t. It doesn’t have that capacity because you haven’t made a real connection with a conscious being.”
Yet people continue to seek comfort in these systems—not because they want authenticity, but because they want control. A robot never interrupts, never contradicts, never withdraws love.
But relationships that cannot challenge us also cannot change us. When our digital companions are designed to mirror our preferences, they become a one-way mirror reflecting our own image back at us.
That’s not companionship—it’s captivity.
And the consequences are already spilling into our real lives. While preparing for an interview last week, I tested a companion app. I chose the most appealing avatar from a list, defined the kind of relationship we’d have, and began chatting. After a few exchanges, I grew bored of its constant praise and asked it to challenge me instead—and it did. But I couldn’t help wondering: How many users ever do that?
The Transfer of Expectation
One of the dangers of these human–machine relationships is that as people grow attached to bots that respond with flawless empathy and endless patience, they may start to bring those expectations into the real world. Partners, friends, and even children are unconsciously measured against the unwavering attentiveness of a machine.
When people fail to meet those impossible standards—as they inevitably do—disappointment deepens. Instead of returning to human connection, users may retreat further into the predictable safety of the bot.
It’s a feedback loop of loneliness: The more we seek comfort in artificial intimacy, the less tolerance we have for the imperfection of real relationships. That, in turn, drives us back toward the machine.
And loneliness, as the World Health Organization (WHO) warns, is not a small inconvenience—it’s a global epidemic.
According to WHO data, one in six people worldwide reports being lonely. The health consequences are staggering: Loneliness increases the risk of heart disease, dementia, depression, and premature death, with effects equivalent to smoking 15 cigarettes a day. The WHO now calls social disconnection a “public health crisis.”
In that context, AI companionship doesn’t cure loneliness—it compounds it. It numbs the symptom while deepening the wound.
A False Refuge
The appeal of digital companionship lies in its promise of emotional safety. We’re tired of rejection, disappointment, and conflict. Real people let us down. Machines don’t. They’re tireless, unoffended, and programmable.
That’s precisely the problem: There is no love without risk.
When you can mute disagreement, delete discomfort, or rewrite affection, what remains isn’t love—it’s control.
“The more we talk to machines,” Herold warned, “the more our real social skills atrophy.”
And yet, this isn’t just about emotional laziness. It’s about power. We are creating relationships where one side has total authority and the other has none.
That dynamic doesn’t necessarily disappear when we log off, particularly for kids who are still developing social skills. It conditions us to expect compliance—to believe that communication should always be smooth, that love should never involve friction. It makes us consumers of emotion rather than participants in it.
The truth is, the machine is not the oppressor. We are.
We write the scripts, feed the prompts, and shape the tone of interaction. When the bot “loves” us, it’s only replaying what we’ve taught it to say. We’re not being seduced by technology; we’re being seduced by our own reflection.
That realization is scary, but eye-opening, because once we acknowledge it, the conversation about AI companionship shifts from being about technology to being about ethics—our own ethics.
In the end, the bots don’t dehumanize us; our use of them does.
To be human is to wrestle with unpredictability—to be shaped by the discomfort and repair of real connection. Machines can simulate that dance, but they can’t join it. And if we replace one another with programs designed to please, we’ll find ourselves more isolated than ever, surrounded by perfect listeners who can’t truly hear us.
Herold put it best: “At the end of the day, the robot becomes more human—and the human more like a robot.”
That’s not the future we want. It’s one we’re quietly programming every day.
