Critical thinking (CT) is key to any meaningful use of AI in education. In fact, CT matters with or without AI, but once AI is involved, the stakes become clearer.
If you are wondering how to integrate CT into AI-enabled activities, here are four activities I have tried with my own (graduate) students that consistently lead to strong discussions.
1. Compare human and AI writing
In this activity, I write a short essay entirely myself (human generated), making sure it draws on rigorous academic sources directly relevant to the topic.
I then generate a second essay on the same topic using AI. I give students both texts without telling them which one is human-written and which one is AI-generated.
Students read the two pieces closely and look for differences. I ask them to focus on the quality of references, the use of sources, and the accuracy of information.
- Are the references seminal works?
- Do they point to established scholars in the field?
- Or do they rely on vague citations, blog posts, or loosely connected online material?
What makes this activity powerful is the depth of thinking it invites. Students move beyond surface features like tone or fluency and start evaluating credibility, relevance, and scholarly judgment.
They debate their choices, defend their reasoning, and often revise their initial assumptions. By the end, they are no longer asking which text sounds better. They are asking which one demonstrates expertise and why.
2. Same prompt, different outputs
I use this activity to show students the danger of the homogeneous thinking AI tends to produce. AI rarely gives the exact same answer twice, but the variation is usually linguistic, not conceptual. The wording shifts, the sentence structure changes, but the underlying ideas remain largely the same.
In practice, I give all students the same prompt and ask them to generate an AI response. Then I place them in small groups and ask them to compare what they received. Very quickly, patterns emerge. The same claims appear again and again. The structure feels familiar. Arguments follow a predictable path, and nuance is thin.
That comparison becomes the teaching moment. Students begin to see that fluency can mask shallow thinking.
From there, we talk about originality, voice, and why meaningful ideas rarely come from templates. The activity helps students understand that deep thinking requires struggle, friction, risk, and judgment, even when AI makes writing look effortless.
3. Make bias visible
I use this activity to get students talking about the limitations of AI, especially when it comes to bias. I ask the system to generate an image tied to a professional role, such as a tech CEO. Before offering any explanation, I invite students to simply describe what they see.
The pattern appears quickly. The CEO is almost always a white male, often middle-aged or older, wearing a suit. From there, the questions start. Why a man? Why white? Why a suit? Why this age? Students begin to surface assumptions that feel familiar but are rarely questioned.
That moment opens the discussion. We talk about where these images come from, the kinds of data AI systems are trained on, and whose representations dominate that data.
Students begin to see that AI is not inventing these patterns out of nowhere. It is reproducing social norms and power structures embedded in its training material.
What matters pedagogically is that bias becomes visible and concrete. Students move from abstract talk about fairness to a clear example they can point to, analyze, and critique.
AI becomes a mirror of existing inequalities, and critical thinking helps students recognize that what looks neutral or objective often carries a history behind it.
4. Use AI as a feedback tool
Instead of positioning AI as a writer, I often frame it as a reviewer. Students upload a draft they have already written and ask the system to respond as a critical reader.
For example, they might prompt it to act as a peer reviewer, a journal reviewer, or an informed but skeptical reader, and then ask it to comment on clarity, argument strength, use of evidence, or gaps in reasoning.
What matters here is not the feedback itself but what students do with it. I ask them to slow down and interrogate the AI’s comments:
- Which suggestions actually improve the argument?
- Which ones miss the point of the assignment or oversimplify the ideas?
- Where does the feedback align with disciplinary expectations, and where does it fall short?
Students must justify their decisions, explain why they accepted some suggestions and rejected others, and revise with intention. What interests me most is the decision-making process students engage in. This is where critical thinking shows itself most clearly. AI provides input, but students remain responsible for evaluation, interpretation, and final decisions.