The conversation around AI in education keeps circling back to the same tired question: should we allow it or ban it? But that question misses the point entirely. The real question is whether students know how to think critically about what AI gives them.
Most of them don’t. And that’s on us.
Robert Ennis published a framework back in 2015 that outlined what critical thinkers actually do. They analyze arguments, judge sources, question assumptions, spot logical fallacies, and evaluate evidence. The framework was built for general critical thinking, but it maps beautifully onto the challenges students face when they interact with AI tools like ChatGPT, Claude, or Gemini.
AI Critical Thinking Skills
I took Ennis’s 15 skills and applied each one directly to AI use in the classroom. Here’s how they break down.

1. Have a Focus and Pursue It
Students love to open ChatGPT and type vague prompts like “tell me about climate change.” That’s the equivalent of walking into a library and saying “give me a book.” Teach students to frame precise questions with a clear purpose. What are they trying to learn? What do they plan to do with the response? The habit of intentional inquiry changes everything about how they interact with AI.
2. Analyze Arguments
Ask students to prompt AI to generate arguments on a topic, then pull those arguments apart. Where’s the evidence? Is the reasoning sound? Does the conclusion actually follow from the premises? AI tends to produce confident-sounding arguments that fall apart under scrutiny. That makes it a perfect sparring partner for building analytical skills.
3. Ask and Answer Clarification Questions
AI responses are often vague or overly general on the first try. Teach students to follow up. What do you mean by that? Can you be more specific? Give me an example. The skill of iterative prompting is really just the skill of asking better questions, and that’s a skill that transfers well beyond AI.
4. Understand and Use Graphs and Maths
AI tools can generate graphs, charts, and numerical summaries. But can they do it accurately? Have students use AI to interpret data, then check the output against the source material. Does the graph actually represent what the data says? Are the numbers right? AI is surprisingly error-prone with quantitative reasoning, which makes this a rich exercise.
5. Judge the Credibility of a Source
This is a big one. AI generates text that sounds authoritative, complete with citations that sometimes don’t exist. Challenge students to verify every claim. Did the AI cite a source? Can you find that source? Is it credible? Is it even real? The habit of verification is arguably the most important skill students can develop in an AI-saturated environment.
6. Observe, and Judge Observation Reports
Have students use AI to summarize an article or a research paper, then compare the summary with the original. What did the AI emphasize? What did it leave out? Did it distort anything? This exercise builds an eye for editorial bias and selective representation, skills that matter just as much when evaluating human-written summaries.
7. Use Background Knowledge
Students already know things. The question is whether they use that knowledge when they read an AI response. Train them to notice when AI contradicts what they’ve learned in class, when it misses context that matters, or when it applies a general answer to a situation that requires specificity. Background knowledge is a filter. Students need to keep it active.
8. Deduce, and Judge Deductions
Ask AI to walk through a logical argument step by step. Then have students evaluate each step. Does the conclusion follow from the premises? Are there leaps in reasoning? AI often produces chains of logic that look complete on the surface but skip crucial steps when you read carefully. Catching those gaps is a valuable thinking exercise.
9. Make, and Judge, Inductive Inferences and Arguments
AI loves to generalize. Give it three examples and it’ll draw sweeping conclusions. Have students test this tendency. Ask the AI to generalize from a set of cases, then evaluate whether the inference is well supported. How many examples would you need to draw that conclusion? What counterexamples exist? This builds the kind of statistical intuition that serves students across disciplines.
10. Make, and Judge, Value Judgments
Give AI an ethical dilemma or a social issue and ask it to weigh in. Then have students examine the values embedded in the response. AI tools are trained on massive datasets that carry cultural assumptions. A response about justice, fairness, or equality will reflect certain values and omit others. Students should learn to spot what’s baked into the answer.
11. Define Terms, and Judge Definitions
Ask AI to define a concept like “bias,” “intelligence,” or “freedom.” Then have students critique the definition. Is it clear? Is it complete? Does it carry hidden assumptions? AI definitions tend to sound clean and polished, which can mask real conceptual complexity. This exercise teaches students that definitions are arguments in disguise.
12. Handle Equivocation Appropriately
AI sometimes shifts the meaning of a word mid-response without flagging the change. “Intelligence” might start as a cognitive ability and end up meaning data or information. “Learning” might slide from deep understanding to surface-level pattern matching. Train students to catch these semantic shifts and understand why they matter. Sloppy language leads to sloppy thinking.
13. Attribute and Judge Unstated Assumptions
Every AI response rests on assumptions. The question is whether students can identify them. When AI recommends a study strategy, what assumptions does it make about the student’s learning style, prior knowledge, or access to resources? When it gives advice about a career decision, what values does it assume? The ability to surface hidden assumptions is one of the highest-order critical thinking skills there is.
14. Think Suppositionally
“What if” questions are powerful learning tools, and AI is a great place to explore them. What if this historical event had gone differently? What if we changed one variable in this experiment? What if the opposite argument were true? Students can use AI to explore hypothetical scenarios, test alternative perspectives, and reason through counterfactuals. The key is teaching them to treat these explorations as starting points for their own thinking.
15. Deal with Fallacy Labels
AI can generate flawed arguments on demand. Ask it to produce a strawman argument, a slippery slope, or an appeal to authority. Then have students identify the fallacy, explain why it’s flawed, and reconstruct a stronger version of the argument. This turns AI into a logic training tool, one that produces unlimited practice material at exactly the difficulty level your students need.
Putting It All Together
Ennis developed his framework long before generative AI entered the classroom. But the skills he identified are precisely the ones students need most right now. The fluency and confidence of AI-generated text make critical evaluation harder, because the output looks and sounds like it was written by someone who knows what they're talking about.
That’s where we come in. Every one of these 15 skills can be taught explicitly, practiced regularly, and reinforced across subjects. The goal is to raise a generation of students who use AI as a thinking tool, one that sharpens their reasoning and deepens their understanding, not one that replaces it.
I created a free visual guide (slide deck) with all 15 skills broken down for classroom use. You can grab it from the link below.
Check out the full PDF guide of AI Critical Thinking Skills
Reference
Ennis, R. H. (2015). Critical thinking: A streamlined conception. In M. Davies & R. Barnett (Eds.), The Palgrave handbook of critical thinking in higher education (pp. 31–47). Palgrave Macmillan.