The research keeps piling up, and the direction is consistent. Students who lean on generative AI for cognitive tasks show reduced critical thinking, weaker metacognitive monitoring, and a tendency to accept AI output without questioning it.
Gerlich (2025) documented it with cognitive offloading. Fan et al. (2025) named it metacognitive laziness. Shaw and Nave (2026) called it cognitive surrender. Kosmyna et al. (2025) at MIT went further and measured it with brain imaging, finding measurably lower cognitive effort during AI-assisted writing compared to independent work. I’ve reviewed all of these studies on my blog at medkharbach.com, and the pattern is clear enough to take seriously.
But the response to this research can’t be to ban AI or pretend it’s going away. The real question is what we do about it in our classrooms right now, and that’s what this guide tries to answer.
Critical Thinking Activities for the Age of AI is a free PDF guide containing thirteen hands-on activities designed to turn AI interactions into critical thinking exercises. It grew out of a workshop I delivered at the NENA ICHRIE 2026 conference, and I built every activity around Robert Ennis’s (2015) framework of critical thinking skills: analyzing arguments, judging source credibility, handling ambiguity, evaluating assumptions, recognizing fallacies, and asking clarification questions.
The core idea is straightforward. The problem with how most students use AI right now is the cycle: prompt, receive, accept, move on. That’s where thinking goes quiet. These activities are designed to break that cycle by inserting friction: a moment where students have to stop, question what AI produced, evaluate it against their own knowledge, and respond with their own reasoning before they move forward.
Let me walk you through a few examples so you get a sense of what’s in here.
The Push-Back Protocol, adapted from Tawnya Means, gives students a simple structure for challenging AI output. They prompt an AI tool with a question, read the response, and then go through multiple rounds of structured questioning: demanding evidence, questioning assumptions, asking for alternative perspectives. What starts as a routine chatbot exchange becomes an active exercise in argument analysis.
The Hallucination Detective, originally designed by Michael Kingston for AI for Education, has students fact-checking AI-generated text seeded with embedded errors: fake citations, wrong dates, made-up statistics. I open it with the Steven Schwartz case, the New York lawyer sanctioned by a federal judge for submitting a ChatGPT-generated brief full of fabricated case citations. That story sets the stakes fast and makes the verification exercise feel urgent, not academic.
One of my favorites is the Human vs. AI Blind Comparison, which I created for a graduate course I taught at MSVU. You present two essays on the same topic, one human-written and one AI-generated, with no labels. Students read both and try to figure out which is which. The guessing part is fun, but the real learning happens when students start articulating what they noticed: the AI text sounds polished but stays vague, sources are missing, and reasoning rarely goes below the surface.
The Funhouse Mirror activity, based on work by Roe, Furze, and Perkins (2024), asks participants to prompt AI on a topic they genuinely know well and then identify where the output distorts, flattens, or misses nuances that any expert would catch. It raises a question that tends to land hard with students: if AI can mislead you on your own area of expertise, what happens when you encounter a topic you’re seeing for the first time?
All thirteen activities include timing estimates (most run between ten and twenty minutes), the specific critical thinking skills they target from Ennis’s framework, step-by-step instructions, and original sources. You can use them as standalone exercises, string several together into a workshop, or embed them into your existing course design. They work across disciplines and student levels.
I also included a critical thinking rubric based on Ennis’s six skills, a reflective memo template that I strongly recommend using after every activity, and a full introduction that contextualizes the cognitive research. I’m upfront in the guide about my two reservations about this body of research: the narrow populations in many studies, and the fact that generative AI is still too new for the longitudinal evidence we’ll eventually need. The patterns are suggestive and worth acting on, but they’re not the final word.
The guide is available as a free PDF. You can download it from the link below. If you’ve been looking for a way to keep AI in your classroom without losing the critical thinking that makes learning meaningful, these activities are a good place to start.
References
- Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530. https://doi.org/10.1111/bjet.13544
- Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), Article 6. https://doi.org/10.3390/soc15010006
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing tasks. MIT Media Lab. https://www.media.mit.edu/publications/your-brain-on-chatgpt/
- Shaw, S. D., & Nave, G. (2026). Thinking fast, slow, and artificial: How AI is reshaping human reasoning and the rise of cognitive surrender. Working paper, The Wharton School, University of Pennsylvania. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646