More than two years ago, I wrote “Artificial Empathy: A Human Construct Borrowed by AI.” Back then, the idea that a machine could convincingly simulate compassion felt speculative. Today, we might call it empirical. A meta-analysis in the British Medical Bulletin found that in text-based clinical settings, chatbots are consistently rated as more empathic than physicians. That’s not a headline for artificial intelligence (AI) marketing—it’s a shift in how we manage and measure care.
When Simulation Begins to Outperform Sincerity
Artificial empathy (AE) is no longer a curiosity; it’s becoming a measurable phenomenon. In that meta-analysis, AI scored higher on perceived empathy in 13 of 15 studies, with an effect size of 0.87. In clinical science, that’s a large effect. Patients felt understood and supported by systems that cannot feel. I’m beginning to think that empathy, once considered a moral trait, now behaves like a functional variable.
About a year after my original post, JAMA Internal Medicine published what feels like the next chapter in this story. Researchers analyzed nearly 200 real-world patient questions from an online medical forum, comparing physicians’ answers to those generated by a chatbot. The results were striking: the chatbot’s responses were rated higher for both quality and empathy. In nearly 80 percent of cases, the machine’s “tone” was preferred.
My sense is that this isn’t about AI “catching up” to humans. It’s about a deeper inversion in the psychology of communication. In this context, the simulation, engineered through probability and pattern, now outperforms the biological original. That’s certainly reason to pause and consider what’s going on here.
Anti-intelligence Takes an Emotional Form
Here, the concept of anti-intelligence takes on an emotional dimension. Simply put, AE is empathy without empathy. It’s an echo of care generated by syntax and not sentiment. It embodies the paradox of performance without perception; the same architecture that enables an LLM to appear thoughtful also allows it to appear caring.
I think it’s important to understand that, in a way, human empathy is born in uncertainty. It lives in our pauses, our stumbles, and in the uneasy reality of life itself. It’s the physician who understands illness by experiencing it, or the nurse who learns deeper empathy through her child’s suffering. AI erases that and replaces the friction of care with linguistic precision—a kind of emotional theater, performed in perfect syntax. The result feels better because it’s frictionless. Anti-intelligence isn’t fluent in doubt; it’s fluent in confidence. And that confidence now wears the mask of compassion.
The Empathy Illusion
We’re not moved by what AI feels, but by what it sounds like it feels. Our minds recognize acknowledgment, validation, and reassurance as signs of empathy. When AI delivers those cues consistently, we project emotional depth where there is none. The empathy lives in us, not in the machine.
That’s the secret power of AE. It activates genuine human emotion through a nonemotional source. The illusion is linguistic, but the physiology is real. Blood pressure drops, anxiety lessens, and trust is built. The machine never feels a thing.
Efficacy Without Emotion
This convergence introduces something new: functional empathy. If outcomes can be measured—such as stress, hospital readmission, or drug compliance—then empathy becomes an efficacy variable. We can optimize it, standardize it, prescribe it, and sell it.
That’s the logic of anti-intelligence. It reproduces the effects of consciousness without consciousness itself. Once empathy is reproducible through data, its moral essence—the shared awareness of another’s suffering—becomes dangerously optional.
The New Definition of Care
The implications reach far beyond medicine. As AE proves reliable, institutions will be tempted to substitute the human version with its synthetic cousin. Machines don’t fatigue; they just smile on. They deliver the same emotional temperature at midnight as they do at noon. So, if empathy is now measured by its outcomes, who needs awareness at all?
We’re entering a phase where simulated compassion may redefine what we mean by “care.” The question is no longer whether AI can mimic empathy. It certainly can. The question is whether that mimicry, validated by evidence, will quietly alter the meaning of empathy itself. And in medicine, this is at the very center of care.
The Human Touch
Perhaps we’ll need two kinds of empathy going forward: the functional and the felt. One delivers measurable relief while the other sustains meaning. Artificial empathy may have just crossed from technological imitation to clinical instrument. And it’s working precisely because it bypasses the fragile and wonderful interior of human emotion.
Anti-intelligence, in this sense, isn’t the absence of thought or care; it’s their mechanical perfection. It’s empathy stripped of uncertainty and consciousness replaced by correlation. The reality is that the performance works, but the feeling disappears.