People often say that it’s healthy to want a partner, but not to need one. Does the same apply to AI chatbots?
Three years ago, most people had never had a conversation with an AI. Today, many AI users consider their chatbots indispensable for processing a hard week, thinking through a career decision, managing anxiety, or feeling less alone at midnight. ChatGPT alone now reaches 800 million weekly active users, a number that doubled in a matter of months in early 2025.
It took a decade before researchers began seriously examining what social media was doing to human connection. We don’t have a decade this time. For some users, dependency is already here, and we are barely beginning to name it.
Why Do Some People Feel Dependent on AI Chatbots?
Part of what makes this so easy to miss is that chatbots can be remarkably good at what they do—and what they do now covers an extraordinary range of human territory. A single platform is programmed to function simultaneously as a personal assistant, therapist, creative collaborator, romantic companion, and friend.
No human relationship offers that combination—not because humans are inadequate, but because it’s simply impossible for one person to play that many roles. Chatbots have no such constraint. They do not get depleted. They do not have needs of their own. And they are available around the clock, calibrated for engagement, and designed by companies whose revenue depends on users depending on them.
The bundling of therapist, lover, friend, and assistant into a single product is not accidental. It is a business model built on human need, and it is a potent one. When else in the history of humanity could your therapist sext you and then help you complete a job application? The combination is intoxicating for some, perhaps precisely because nothing like it has ever existed.
The Dependency We Don’t Question
It’s time to reconsider how we think about humans needing humans. We live in a culture that has developed a deep ambivalence about depending on other people, particularly romantic partners. Needing too much has long been coded as weakness, as a failure of self-sufficiency. And yet, faced with a technology that meets our needs frictionlessly and without judgment, many people seem to be adopting it with remarkable speed and little ambivalence.
The same people who resist needing humans may not particularly resist needing chatbots. That asymmetry deserves more attention than it is getting, because it suggests that the former resistance was never really about self-sufficiency. It was about the vulnerability that comes with needing something that can also hurt you.
For some of us, then, chatbots may feel safe in a way that people do not. And that sense of safety may be doing something to us that we haven’t yet fully tracked.
The Dangers of Emotional Offloading
There is a further complication that's only starting to be widely discussed: The more tasks we offload to AI, the less capable we become of doing those tasks ourselves. That's because cognitive and emotional skills atrophy when they go unused. So if a chatbot helps you calm down after every difficult interaction, process your anger, draft your difficult conversations, and talk you through your decisions, the skills those tasks once required begin to weaken.
We have seen this dynamic before: with GPS and spatial navigation, with calculators and arithmetic—the list goes on. The difference is that the skills now at risk are not just navigational or computational. They are also relational and psychological: the capacity to sit with discomfort, to self-soothe, to initiate and repair difficult conversations, to tolerate life’s ambiguity.
Offloading those capacities to a chatbot does not just change how we relate to others. Over time, it changes what we are capable of.
What Software Cannot Provide
This bind is real and it does not resolve cleanly. We cannot reasonably expect human partners to match what chatbots can now provide. But we cannot simply migrate our relational lives to AI either, because there is something that no current technology can replicate: physical presence.
Touch is not a preference. It is a biological necessity. Harry Harlow's landmark studies demonstrated that infant rhesus monkeys chose physical comfort over food, and that those deprived of touch developed lasting psychological damage regardless of how well their other needs were met.
The same principle holds in humans. The research on co-regulation—on what proximity and contact do neurologically—is unambiguous. That need cannot be met by software, regardless of how sophisticated the conversation becomes. We still need humans in ways that are not negotiable.
What Can We Do About AI Dependency?
So we are in a genuinely new situation, and it is not one that self-corrects. Left unexamined, it moves us in one direction: toward deeper dependence.
The question I'm asking here is not whether to use these tools. Many of us already do, and find genuine benefits in doing so. My question is whether we can remain conscious of what is happening as it happens: whether we can hold the value of what chatbots offer alongside an honest accounting of what they are, who built them and why, and what users stand to lose if we let convenience replace what we actually need.
That kind of awareness is harder than it sounds, especially when we like the product in question. But it begins with something simple: recognizing that we are becoming dependent, that this happened very fast, and that the longer we wait to look at it clearly, the harder looking clearly becomes.
Let’s not forget that people need people.
