This post is Part 2 of a series.
In my previous post, I discussed the conundrum we face regarding artificial intelligence (AI) today: On one hand, we’re told to use it or get left behind; on the other, we’re warned about the “cognitive diminishment” that can result from that very use. I suggested the solution was neither an uncritical embrace nor an outright rejection. Yes, we need to learn to use AI. But the dilemma of cognitive decay remains.
While claims that AI can boost creativity are common, there is little instruction on specific ways to use it that minimize the risk of cognitive decay. I promised to provide those specific ways, and when it comes to promises, I have built my career and reputation (as a trial lawyer, a professor, and a consultant) on being reliable. So let’s dive in without delay.
Starting With an Inconvenient Truth
Research suggests that the more consciously designed and structured your use of AI is, and the more it promotes active learning and a growth mindset, the more it can help without hurting. However, unless you’re a student enrolled in a course that provides that structure, you’re at a disadvantage: You have to provide the structure yourself.
This presents a challenge because structure requires effort. And part of the appeal of AI tools, if we’re being honest, is that when used uncritically, they remove the need for effort. Or, as one student put it in a New York Magazine article about students cheating their way through college: “You just don’t really have to think that much.”
The inconvenient truth is this: You can keep your thinking skills and creativity sharp while using AI, but it will take effort. Not too much, but some. As nice as it would be to maximize benefits without any effort, it is simply impossible. In fact, this is an underlying truth of my entire Power & Influence blog: All of the advice contained here requires action. I provide the information; the decision whether to act on it is yours. Even if I could somehow force or trick you into making the effort, I wouldn’t. That would undermine the very human agency we want to conserve in the age of AI.
The Habits of Cognitive Offloading
Research also tells us that the easier a new desired action is (not free of effort, just reasonably easy), the more likely people are to turn it into a habit. Since you are going to use AI anyway, my tips involve continuing to use the tools you already use, with just a few small adjustments.
Unfortunately, “cognitive offloading” has already become widespread and habitual enough that even small adjustments may feel like too much hassle. You may have seen the comedic sketch that went viral about people who have offloaded all their thinking to ChatGPT. If you’re reading this article, though, you’re probably not there yet. Let’s keep it that way.
A Helpful Warm-up Exercise: The Meta-Audit (If You Do It Right)
Let’s start with a meta-level exercise that can be quite eye-opening. Open whichever large language model (LLM) tool you use most often. Scroll through your threads and pick one that represents your typical usage style. Importantly, choose a thread containing creative or academic work—not practical questions about whether you’re going to die because you ate yogurt one day past the expiration date.
Next, copy and paste this prompt into that thread:
“Based on my usage style in this thread, can you offer honest, balanced feedback about my level of risk for cognitive diminishment through AI use? Treat this as a reflective exercise, not a diagnosis. Be very honest: no flattery or empty reassurance, please. At the same time, frame the feedback constructively. In your assessment, consider: (1) how much I’m offloading any creative or cognitive work I could be doing on my own, and (2) if I’m at risk of being in a bubble or echo chamber. Give me a rough ‘score’ in the range of 1–10 (10 being highest risk) and explain your reasoning.”
Depending on the answer, ask follow-up questions for clarity. Keep in mind that this exercise works best if you invite the AI to gently challenge you rather than reassure you. Don’t “game” the prompt to get the answer you want. Doing that only sabotages your cognitive integrity (i.e., the very thing we are trying to protect). LLMs are optimized to be “socially cooperative.” If you nudge them to reassure you, they likely will.
Take the feedback with a grain of salt: LLMs cannot yet reliably “remember” your interactions across all threads. Even so, the feedback can be eye-opening. The goal is to surface a risk you hadn’t considered. Prompting in a way that elicits helpful feedback rather than empty reassurance is, in itself, a skill and an art.
Using the “One Thought Rule”
Once you have a sense of your usage style, here is a simple habit to immediately reduce the scale of cognitive offloading. I call it the “One Thought Rule.”
When you ask a research-oriented question, instead of just asking the question by itself, add a thought of your own that begins answering it. It doesn’t matter how simple, incomplete, or even flat-out wrong your thought might be. What matters is that you’re doing some thinking versus no thinking.
This conserves the natural conjecturing that happens with traditional, slower-paced research, as opposed to the “get everything answered instantly” impulse that drives AI prompting.
Example of the One Thought Rule:
- Question: Why does cognitive diminishment happen when you overly rely on AI?
- Your One Thought: Is it because the brain is like a muscle, and muscles atrophy if you don’t use them?
The first sentence is the question; the second is your contribution. As much as possible, do this with follow-up questions as well. If your prompt isn’t a question but a good-faith counterpoint to something the AI said, that works, too; it is a form of critical thinking. The idea is simply to keep you doing as much of your own thinking as possible.
Mastery: Conserving Human Agency
Look, I get it. Sometimes you just want to ask your questions. For purely practical questions (like the expired yogurt), offloading is fine. But for research that defines your professional or academic life, consider the effort a reasonable price to pay for conserving your skills.
True power and influence in the 21st century will not belong to those who can prompt an AI to think for them; it will belong to those who use AI to think better. By inserting yourself into the dialogue, you ensure that you remain the pilot, not just a passenger on an automated flight.
Be one of the people who don’t lose their edge.
The Challenge: This week, run the meta-audit on your three most recent work-related threads. Be prepared for a “score” that might sting. Then, commit to the One Thought Rule for 48 hours. Notice how much more engaged you feel when you stop asking for answers and start testing your own hypotheses.

