Few forces in recent times have changed how people and organizations navigate the world or make decisions as profoundly as artificial intelligence. In just the past year, the use of generative tools has surged across industries, mainly because of the promise of new productivity gains. But in the race to keep up, many of us are giving up our agency.
What started as a partnership with technology meant to extend human potential is now turning into a gradual handover of the very judgment that got us to this point.
When change happens faster than we can keep up with or understand, what keeps us human is agency. I define agency as the capacity to explore different options and make deliberate choices under pressure, anchored by the belief that those choices matter. And it may be one of the most endangered capacities of the decade.
Intelligence is no longer a competitive advantage
For most of history, those who could understand, memorize, analyze, and connect information had an edge, but that advantage is now shrinking rapidly as cognitive power is commoditized by AI. Machines now produce, at a fraction of the time and cost, output that once required whole teams or years of training.
What AI can’t do, at least for now, is generate genuinely first-order insight. It can draft a compelling case for almost anything, but recognizing a real opportunity early, when the data is thin and the social risk is high, requires a willingness to choose, test, revise, and live with the consequences. In other words, what matters now is less what you know and more how you choose when the knowing is limited.
The illusion of progress
When intelligence becomes automated but agency doesn’t evolve with it, decisions get made faster while accountability fades into the background. The risk is premature convergence: Teams accept the first narrative as the default because it feels clean, then move on before anyone has had a chance to question it, because nobody wants to be the person who slows things down. A well-formed answer gets mistaken for a well-tested one.
There’s a growing unease among workers who no longer see where they fit inside systems that now move faster than they can think. A PwC report found that 70 percent of CEOs believe AI will significantly change how they create value within the next three years, but fewer than 1 in 5 feel their organizations are ready to manage that change. When people lose belief in their ability to influence outcomes, motivation quickly begins to collapse, and burnout accelerates.
Agency interrupts that downward spiral by giving people the ability to pause, notice, ask questions, and then act with intention, even within systems where their control is limited. When people still feel they have choices that matter, the strain of work feels less depleting.
Coming up for AIR
One of our clients, a board chair at a global company, recently shared that her meetings were becoming increasingly efficient with AI but also strangely hollow. Even though working through the agenda was getting easier and decisions were being made faster, she felt that few of them were truly deliberate. The materials were persuasive, but after years in the role, she sensed the absence of curiosity and, more importantly, dissent.
The decisions were getting made, but the board was losing authorship.
She decided to make a small change: Before any major decision, each board member had to argue the opposite case by naming one risk and one assumption that could be challenged. The first few meetings were uncomfortable, but by the fifth or sixth, as the conversation slowed and thinking deepened, people saw complexity again and invited nuance into the room. As their agency returned, the quality of decisions improved.
Agency depends more on habits and structure than on encouragement or willpower. In our work, we use a framework called AIR, which stands for Awareness, Inquiry, and Reframing, to help teams create conditions in which people can think clearly despite the noise:
- Awareness: Notice where decisions are being made without discussion because they seem polished and well-reasoned.
- Inquiry: Ask what’s known, what’s not known, what could be wrong, what could be tested, and what would change your mind.
- Reframing: Force a pause between the recommendation and the decision.
What comes next
Exercising agency offers both freedom and responsibility because it connects people to the effects of their own decisions. The computational power of AI will keep growing faster than we can fully comprehend, and that’s a race we’re unlikely to win. When the pause before choice disappears, we move faster, but we also get caught in the grip of urgency, and speed gets mistaken for progress. That’s how a good-sounding answer starts to feel like a conscious decision, even when nobody actually chose it.
What if the future doesn’t belong to the fastest minds, but to the most agentic?

