In 2023, Ethan Mollick and Lilach Mollick published a paper titled “Assigning AI: Seven Approaches for Students, with Prompts.” At the time, generative AI tools were far less capable than the systems now common in classrooms, universities, and everyday academic work. Still, the framework they proposed has aged well.
What makes this paper useful is not the specific tools it references, but the way it frames AI as a set of pedagogical roles. Instead of asking vague questions about whether AI belongs in education, the authors offer a concrete way to think about how AI shows up in learning contexts and what risks come with each use.
Below is a practical walkthrough of the seven approaches they outline, with brief reflections on why each still matters today.
1. AI as Tutor
In this role, AI provides direct instruction. It explains concepts, asks guiding questions, and adapts explanations to a learner’s level. When used carefully, this approach supports understanding and practice, especially outside class time. The risk is the AI’s tendency to sound confident even when it is wrong. Students must actively question explanations, check facts, and treat the AI as a support, not an authority.
2. AI as Coach
Here, AI supports reflection, planning, and self-regulation through structured questions. It helps learners think about how they learn, what worked, and what needs adjustment. The real value of this approach lies in metacognition, not answers. The AI does not decide; students do. Its advice may miss context or nuance, so judgment stays with the learner.
3. AI as Mentor
In the mentor role, AI offers formative feedback on drafts, projects, or ideas. Frequent feedback during the process can support improvement more effectively than comments at the end. The danger appears when feedback takes on too much weight. AI feedback works best as one perspective among many, not as a final verdict on quality.
4. AI as Teammate
AI can support group work by offering alternative viewpoints, questioning assumptions, or helping teams organize roles. It can even play devil’s advocate to reduce groupthink. This role can strengthen collaboration, but only when teams retain control. Over-reliance or uncritical acceptance of suggestions can weaken collective judgment.
5. AI as Tool
In this role, AI helps students complete tasks more efficiently. Examples include outlining, summarizing, or transforming content. The benefit here is clear: productivity and extended capacity. The risk is just as clear: outsourcing thinking. When AI replaces effort rather than supporting it, learning suffers.
6. AI as Simulator
AI can create simulated scenarios for practice, rehearsal, or role-play. This supports application and transfer of knowledge in low-stakes contexts. Simulations work best when they stay grounded in realistic constraints. Poor fidelity or inaccurate scenarios can lead to shallow or misleading learning experiences.
7. AI as Student
This final approach flips the script. Students teach concepts to the AI and correct its misunderstandings. Teaching remains one of the most effective ways to check understanding. When students explain ideas clearly enough for the AI to follow, gaps in knowledge surface quickly. The main risk comes from confident but incorrect AI responses that distract rather than clarify.

Reference
Mollick, E., & Mollick, L. (2023). Assigning AI: Seven approaches for students, with prompts. Wharton School of the University of Pennsylvania. https://ssrn.com/abstract=4475995