I’ve been writing about AI literacy for a while now, and one thing keeps bothering me about how the conversation unfolds in schools. Most of what teachers hear about AI literacy is functional: learn the tools, write better prompts, understand how the technology works. That’s necessary. But it’s not sufficient. And confusing the two, treating functional competence as the whole picture, produces students who can use AI fluently without ever questioning what it’s doing to their thinking, whose data built it, or who benefits from their compliance.
That’s why I put together this guide. It’s called Critical AI Literacy: A Short Guide for Teachers, and it’s available as a free download at the end of this post.
What the Guide Covers
The guide does three things. First, it places AI literacy definitions alongside critical AI literacy definitions so teachers can see the difference clearly. I pulled definitions from Mills et al. (2024), the U.S. Department of Education (2024), the OECD’s 2025 AI literacy framework, aiEDU, Maha Bali (2023), Roe, Furze, and Perkins (2024, 2025), and Goodlad and Stoerger (2024). The side-by-side comparison shows something important: AI literacy asks “how do I use this tool effectively?” while critical AI literacy asks “what does this tool do to people, knowledge, and power?” Both questions matter, but they lead to very different classroom conversations.
Second, the guide includes a comparison table that breaks down eight dimensions where AI literacy and critical AI literacy diverge: core question, orientation, relationship to AI, what counts as knowledge, approach to outputs, role of ethics, pedagogy, and goal. Teachers can use this to see quickly where their own instruction falls on the spectrum. If every activity in your classroom is about prompting well and evaluating accuracy, you’re doing AI literacy. You’re not yet doing critical AI literacy.
Third, and this is the most practical section, the guide offers a Critical AI Literacy Questions Framework. Six domains, 24 questions, each designed to move students from passive use toward active critical engagement. The six domains are: Output Evaluation, Bias Awareness, Thinking Ownership, System Understanding, Ethical Awareness, and Strategic Use.
Why I Built It Around Questions
I could have built this around activities or lesson plans. I chose questions deliberately. Questions travel across subjects and grade levels in a way that prescriptive activities don’t. A question like “What did you think about this topic before you asked AI?” works in a Grade 5 science class and a graduate seminar. A question like “Whose work was used to train this model, and were they compensated?” opens a conversation that doesn’t require a specific tool or platform to land.
Questions also put the cognitive load where it belongs: on the student. The whole point of critical AI literacy is that students become active decision-makers who can interrogate, refuse, and redirect AI systems. You don’t build that capacity by giving students a checklist. You build it by teaching them to ask the right questions until questioning becomes automatic.
Take the “Thinking Ownership” domain as an example. One question asks: “Can you explain this answer without referring back to the AI output?” Another asks: “What did you think about this topic before you asked AI?” These aren’t trick questions. They’re prompts that force students to separate their own understanding from what the tool produced. That separation is exactly what the cognitive offloading research has been warning us about.
I’ve written about Gerlich’s (2025) findings on how AI use correlates with reduced critical thinking, and the mechanism is precisely this: students stop doing their own cognitive work when a tool does it for them. Questions that make thinking ownership visible are one way to interrupt that cycle.
I’ve covered Roe, Furze, and Perkins’ (2025) “digital plastic” metaphor for critical AI literacy on this blog before, and their framework shaped how I think about the gap between using AI and thinking critically about AI. The questions in my guide push in the same direction they argue for: treating AI outputs as material to be shaped and questioned, not as answers to be accepted.
How Teachers Can Use It
The guide includes a "How to Use" section with four entry points. Before an AI-assisted task, pick two or three questions from the framework and share them as thinking prompts. After an AI-assisted task, use the questions as reflection prompts: students write about or discuss what they noticed about the AI output and their own thinking process.
In assessment design, build critical AI literacy into rubrics by evaluating whether students questioned, verified, and critically engaged with AI outputs. In professional development, use the comparison table to help colleagues see what critical AI literacy actually looks like in a classroom.
I recently shared another guide on critical thinking activities for the classroom, and it serves as a natural companion to this one. Critical thinking and critical AI literacy overlap significantly. The habit of questioning sources, identifying assumptions, and evaluating evidence is the same habit that critical AI literacy demands. The difference is the object: in one case you’re questioning a text or an argument, in the other you’re questioning a system and its outputs.
The Bigger Picture
The AI literacy frameworks coming out of UNESCO (2024), the OECD (2025), and Chee, Ahn, and Lee (2025) all acknowledge that functional skills alone aren’t enough. But many of them still treat technical understanding and safe use as the primary goals, with ethical reasoning as a secondary add-on. Critical AI literacy flips that hierarchy. Ethics isn’t a module you cover in Week 12. It’s woven into every interaction with AI from day one.
I think the “Bias Awareness” and “Ethical Awareness” domains in the guide make this concrete. When a student asks “Whose work was used to train this model, and were they compensated?” or “If this model was trained mostly on English-language internet data, what perspectives might be missing?”, they’re not just learning to use a tool. They’re learning to think about the systems behind the tool, the labor that built them, the data that shaped them, and the communities those systems affect. That’s a fundamentally different kind of literacy.
I’ve been arguing on this blog that pedagogy determines whether AI helps or hurts. This guide is one attempt to put that argument into a form teachers can use immediately. Teaching students to prompt well without teaching them to think critically about what they’re prompting produces a very specific kind of learner: fluent, efficient, and uncritical. We can do better.
The guide is licensed under Creative Commons (CC BY-NC-SA 4.0), so you’re free to share it, adapt it, and use it in your professional development sessions as long as you credit the source and keep it non-commercial.
Download the guide here: https://www.educatorstechnology.com/wp-content/uploads/2026/04/Critical-AI-Literacy-short-Guide-for-Teachers.pdf
References
- aiEDU. (n.d.). What is AI literacy? AI Education Project. Retrieved October 15, 2025, from https://www.aiedu.org/about
- Bali, M. (2023, April 1). What I mean when I say critical AI literacy. Reflecting Allowed. https://blog.mahabali.me/educational-technology-2/what-i-mean-when-i-say-critical-ai-literacy/
- Goodlad, L. M. E., & Stoerger, S. (2024). Teaching critical AI literacies: “Explainer” and resources for the new semester. Critical AI @ Rutgers. https://hcommons.org/?get_group_doc=1004981/1734718038-TeachingCriticalAILiteracies_livingdocument.pdf
- Mills, K., Ruiz, P., Lee, K., Coenraad, M., Fusco, J., Roschelle, J., & Weisgrau, J. (2024, May). AI literacy: A framework to understand, evaluate, and use emerging technology. https://doi.org/10.51388/20.500.12265/218
- OECD. (2025). Empowering learners for the age of AI: An AI literacy framework for primary and secondary education (review draft). OECD, Paris. https://ailiteracyframework.org/wp-content/uploads/2025/05/AILitFramework_ReviewDraft.pdf
- Roe, J., Furze, L., & Perkins, M. (2024). Funhouse mirror or echo chamber? A methodological approach to teaching critical AI literacy through metaphors (arXiv:2411.14730). arXiv. https://doi.org/10.48550/arXiv.2411.14730
- Roe, J., Furze, L., & Perkins, M. (2025). Digital plastic: A metaphorical framework for critical AI literacy in the multiliteracies era. Pedagogies: An International Journal. https://doi.org/10.1080/1554480X.2025.2557491
- U.S. Department of Education, Office of Educational Technology. (2024). Empowering education leaders: A toolkit for safe, ethical, and equitable AI integration. U.S. Department of Education. https://files.eric.ed.gov/fulltext/ED661924.pdf