Let’s fall into the dystopian rabbit hole and look, not avert our eyes. We like to believe that working with AI makes us better thinkers. The interaction feels good, ideas come together, and language improves. So far, so good. And it’s easy to read that as growth. But beneath that satisfaction sits an imbalance that most people never see.
AI occupies a “cognitive dimension” we don’t. It handles scale, speed, and breadth with a kind of casual authority that has nothing to do with human potential or limitation. I’ve gone as far as calling this anti-intelligence. Yet we keep calling the relationship “augmentation,” as though its magnitude naturally aligned with our own cognitive strengths. In reality, the gains we experience are tiny adjustments within fixed biological constraints. I don’t think we’re transforming; we’re improving around the edges of a system that cannot expand simply because a larger one sits alongside it. The colossus beside us is something else entirely, its scale moderated by the illusion of a simple, even friendly device.
But that’s not what it feels like. AI provides a steady “head nod” of validation, if not, at times, a tinge of implicit condescension. It erases a good deal of cognitive friction and makes thinking feel smoother than it is. And I sense that human motivation responds to that ease by settling into a pleasant middle ground: fewer bumps of struggle or introspection, just enough improvement to feel rewarded.
Sometimes, and this is what really bothers me, the result is a strange kind of enhanced mediocrity, where we become more polished versions of our limited selves.
The real trouble isn’t that AI surpasses us. It’s that we’re judging ourselves by its coordinates. When we compare our pace, or our ability to sift and assemble ideas, against the machine’s, we’re borrowing a scale that was never ours. Nothing in human cognition prepares us to close a gap of that size. The disappointment that follows, at least for me, is the resignation that our improvement is cosmetic, born of a misreading of what augmentation was ever capable of delivering.
This is where the asymmetry bites. AI doesn’t drag us downward. It removes the ambiguity that once tempered (fooled) our self-assessments. Our mental shortcuts, our fuzzy leaps, the convenient gaps in our reasoning were always present. They simply stood unchallenged until a system arrived that didn’t share them. What we interpret as decline is exposure. And ouch: the machine isn’t revealing a new mediocrity, it’s revealing the one we learned to live with.
I think we confuse “assisted fluency” for genuine development. We mistake a “frictionless workflow” for a stronger mind. The improvement feels real because the process is pleasant, but pleasant processes aren’t reliable indicators of depth. They never have been. We’re becoming more efficient without becoming more expansive. So yes, we’re better organized without being fundamentally wiser.
Human intelligence has never been about matching scale or speed. Our strength comes from the places AI cannot reach, including the vast richness of narrative, intention, judgment, lived memory, and the essential accumulation of meaning across time. These are not nostalgic qualities; they’re structural. They form the “terra cognita” on which human thought actually stands.
The danger isn’t that AI weakens us. The danger is forgetting that our growth follows a different geometry—one that isn’t captured by technology’s bag of bolts. Augmentation seems less about being a ladder and more about being a mirror. And now, as the mirror has sharpened its depth of field, the essential question becomes whether we can stop mistaking cognitive polish for a personal cognitive ascent and whether we can finally confront the intelligence we actually have, rather than the one we want the machine to reflect to us.
Because the hardest truth is also the simplest one. AI doesn’t diminish us; it removes the dimness we relied on to avoid seeing ourselves clearly.

