How should students use AI in the classroom?
This question keeps coming up in every conversation about AI in education. And most of the answers fall into two unhelpful camps: ban it completely or let students use it for everything.
In 2023, Ethan Mollick and Lilach Mollick from the Wharton School published a paper that moved the conversation forward. They identified seven distinct approaches to using AI for learning, each with its own pedagogical purpose and risks.
I find this categorization remarkably useful.
The framework shifts our thinking from “should we allow AI?” to “how should we design AI-supported learning experiences?” That’s a much more productive question.
Here are the seven approaches and what each one means for teaching with AI.
1. AI as Coach
In this role, AI prompts reflection and self-regulation through structured questions. The AI helps students think about how they learn, what went well, and what to adjust next time.
The main value here is metacognition. Students often struggle to step back and examine their own learning processes. An AI coach can ask the right questions at the right time to prompt that reflection.
A practical example: after completing a group project, students can work with an AI coach to analyze what worked, what didn’t, and what they would do differently. The AI guides the reflection without providing answers.
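To make this concrete, here is a minimal sketch of such a coach, assuming the OpenAI Python client; the prompt wording is my own illustration, not one of the prompts from the Mollicks' paper.

```python
# Minimal AI-coach sketch. Assumptions: openai>=1.0 Python client,
# OPENAI_API_KEY set in the environment, illustrative prompt and model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COACH_PROMPT = (
    "You are a reflection coach for a student who just finished a group "
    "project. Ask one open question at a time about what worked, what "
    "didn't, and what they would do differently next time. Do not "
    "evaluate the project or supply answers; draw the reflection out of "
    "the student."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[
        {"role": "system", "content": COACH_PROMPT},
        {"role": "user", "content": "We finished, but we missed our first deadline."},
    ],
)
print(response.choices[0].message.content)
```

The constraint that matters lives in the prompt: the coach asks, it doesn't answer.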
2. AI as Tutor
This is probably the role most people picture when they think of AI in education. The AI provides direct instruction, explains concepts, asks guiding questions, and adapts explanations to a student’s level.
AI tutoring can offer personalized instruction at scale. A student who needs more time with fractions can get it without holding back the rest of the class.
The risk here is passivity. Students must actively question and verify what they’re taught. An AI tutor that just delivers information creates dependent learners. The best implementations require students to engage critically with the explanations, not just consume them.
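One way to build that critical engagement into the design is to constrain the tutor itself. Here is a sketch, again assuming the OpenAI Python client; the Socratic prompt wording is my own illustration.

```python
# Socratic-tutor sketch: the prompt forbids handing over final answers,
# which pushes the student to stay active. Assumptions: openai>=1.0
# client, OPENAI_API_KEY in the environment, placeholder model name.
from openai import OpenAI

client = OpenAI()

TUTOR_PROMPT = (
    "You are a tutor helping a student with fractions. Explain one idea "
    "at a time, then ask a check question before moving on. Never give "
    "the final answer to a problem; guide the student toward it and ask "
    "them to restate each step in their own words."
)

def ask_tutor(history: list[dict]) -> str:
    """Send the running conversation to the model and return its reply."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": TUTOR_PROMPT}, *history],
    )
    return reply.choices[0].message.content

print(ask_tutor([{"role": "user", "content": "Why is 1/2 bigger than 1/3?"}]))
```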
3. AI as Mentor
In the mentor role, AI offers formative feedback on drafts, projects, or ideas. The key word is formative. Students get input during the process, not just at the end.
Frequent feedback supports improvement. A student working on an essay can get suggestions on structure and clarity before submitting the final version.
The danger appears when students treat AI feedback as authoritative. AI feedback works best as one perspective among many, not as a final verdict on quality. Teachers should frame it that way explicitly.
4. AI as Teammate
AI can support group work by offering alternative viewpoints, challenging assumptions, or helping teams organize roles. It can play devil’s advocate to reduce groupthink.
This role can strengthen collaboration, especially when teams get stuck or fall into echo chambers. The AI introduces friction in a productive way.
The risk is over-reliance. Teams must retain control and judgment. If the AI becomes the decision-maker, the collaborative learning breaks down.
5. AI as Tool
In this role, AI helps students complete tasks more efficiently. Outlining, summarizing, transforming content, generating first drafts. This is where productivity gains happen.
The benefit is extended capacity. Students can accomplish more in less time.
The core risk is outsourcing thinking. If students use AI to skip the hard cognitive work, they miss the learning. This approach requires the clearest boundaries and the most explicit guidance about what’s appropriate.
6. AI as Simulator
AI can create simulated scenarios for practice, role-play, or rehearsal. Medical students can rehearse patient conversations. Law students can practice negotiations. Business students can simulate difficult management conversations.
This supports application and transfer of knowledge in low-stakes settings. Students can make mistakes without real consequences.
The risk appears when simulations lack realism or accuracy. If the AI produces scenarios that don’t reflect real-world complexity, learning becomes shallow or misleading.
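A simple way to run such a rehearsal is a multi-turn loop in which a scenario prompt pins the AI to its role. Here is a minimal sketch for the patient-conversation case, assuming the OpenAI Python client and a scenario of my own devising.

```python
# Simulator sketch: a standardized-patient role-play loop.
# Assumptions: openai>=1.0 client, OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SCENARIO_PROMPT = (
    "You are role-playing a patient with persistent headaches who is "
    "anxious and vague about symptoms. Stay in character, answer only "
    "what the student asks, and never break role to give medical advice."
)

messages = [{"role": "system", "content": SCENARIO_PROMPT}]
while True:
    student = input("Student: ")
    if student.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": student})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep context
    print("Patient:", answer)
```

Carrying the full message history through the loop is the design choice that keeps the simulated patient consistent across turns.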
7. AI as Student
This approach flips the script. Students teach concepts to the AI and correct its misunderstandings.
Teaching remains one of the most effective ways to check understanding. When students explain ideas clearly enough for the AI to follow, gaps in knowledge surface quickly. The learning-by-teaching effect is well documented.
The main risk comes from confident but incorrect AI responses. The AI might push back on correct explanations or accept incorrect ones. Students need enough subject knowledge to stay in control of the exchange.
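Wiring up the flipped arrangement looks much the same; only the persona changes. A sketch with an illustrative novice prompt of my own (the deliberately seeded misunderstandings give the student something to catch and correct):

```python
# "AI as Student" sketch: the model plays the learner, the human teaches.
# Assumptions: openai>=1.0 client, OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

NOVICE_PROMPT = (
    "You are a curious novice. The user will teach you a concept. "
    "Restate what you understood in your own words, occasionally state "
    "a plausible misunderstanding, and ask follow-up questions so the "
    "user has to correct and refine your explanation."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": NOVICE_PROMPT},
        {"role": "user", "content": "Let me explain photosynthesis to you."},
    ],
)
print(response.choices[0].message.content)
```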
The Mollick framework does something important: it treats AI as a set of pedagogical roles with distinct purposes and risks. This moves educators past the binary “allow or ban” debate and into more useful territory.
Each approach requires different scaffolding, different student skills, and different teacher oversight. A first-year undergraduate using AI as a tool needs more guardrails than a graduate student using AI as a mentor.
One Gap Worth Noting
The framework doesn’t fully address when each approach is most appropriate for different learners. A high school student and a doctoral candidate might both benefit from AI as tutor, but the level of autonomy, the verification expectations, and the scaffolding should look very different.
Context matters. Teachers still need to make those judgment calls based on their students, their subject, and their learning goals. The framework offers a starting point, and that’s exactly how it should be used.

Reference
Mollick, E., & Mollick, L. (2023). Assigning AI: Seven approaches for students, with prompts. Wharton School of the University of Pennsylvania. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4475995