AI continues to astonish us with its rapid advancements. Just as we begin to grasp new updates, even newer ones emerge, driving unprecedented exponential growth. Unlike Moore’s Law, which predicts computing power doubling every two years, AI seems to be evolving at a pace that feels more like every month.

For us in education, the impact is visibly transformative. We are already talking about radical changes that will soon make traditional teaching methods, assessment practices, and learning models feel outdated.
AI Grading
Today, while scrolling through my LinkedIn feed, I came across a thought-provoking discussion on AI grading. I joined the conversation and shared my insights there, and I thought it would be valuable to bring some of those ideas here as well.
One of the instructional areas where AI is making a huge difference is assessment. Teachers can generate a wide variety of assessment types, customize their assessments to meet their learning goals, align existing assessments with curriculum standards and objectives, and much more. Beyond generating assessments, AI also helps with grading and with producing feedback.
However, some teachers (see the LinkedIn discussion) voice concerns that offloading grading and feedback generation to AI would undermine the entire grading process and render it a ‘mechanical’ activity.
This is a legitimate concern. Total cognitive offloading to AI risks creating a ‘technology-dependence’ syndrome that could diminish teachers’ engagement with students’ work, weaken their ability to provide nuanced feedback, and ultimately reduce the human element in assessment.
I also believe that over-reliance on AI can detach you, the teacher, from the deeper pedagogical insights that come from personally evaluating student progress. Using AI in this way is a real problem, regardless of the instructional area you work in.
Your approach to AI integration in education should be a collaborative one. You treat AI as a thinking partner, a form of co-intelligence (Mollick, 2024) that is there to enhance your thinking, not replace it. You engage with its output, ask for clarifications, refine and edit it, and never take it at face value.
The same applies to assessment! You use AI to facilitate grading by streamlining its repetitive tasks; you do not let it take over.
You get to guide and refine its output, and add your own interpretation of student work based on your knowledge of the student, their progress, and the context of their learning journey. AI can assist with efficiency, but the depth, nuance, and personalized feedback must come from you as an educator.
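To make the “streamline, but stay in the loop” idea concrete, here is a minimal sketch of what an AI-assisted grading workflow might look like. It is purely illustrative: the call_ai_model stub stands in for whatever AI tool you happen to use, and the rubric, field names, and helper functions are my own assumptions, not a description of any particular product.

```python
from dataclasses import dataclass


@dataclass
class FeedbackDraft:
    student: str
    submission: str
    ai_draft: str = ""
    teacher_comment: str = ""
    approved: bool = False  # nothing reaches the student until a teacher approves it


def call_ai_model(prompt: str) -> str:
    """Placeholder for whatever AI tool you use (chatbot, API, LMS plug-in).
    Wire this up to your own tool; here it simply returns a canned draft."""
    return "Draft feedback: clear thesis; consider adding evidence in paragraph 2."


def draft_feedback(student: str, submission: str, rubric: str) -> FeedbackDraft:
    """Ask the AI for a first pass against the rubric. This only produces a draft."""
    prompt = (
        f"Using this rubric:\n{rubric}\n\n"
        f"Draft constructive feedback on the following work:\n{submission}"
    )
    return FeedbackDraft(student=student, submission=submission,
                         ai_draft=call_ai_model(prompt))


def teacher_review(draft: FeedbackDraft, comment: str) -> FeedbackDraft:
    """The teacher edits the draft, adds context the AI cannot know, and approves it."""
    draft.teacher_comment = comment
    draft.approved = True
    return draft


if __name__ == "__main__":
    rubric = "Argument clarity; use of evidence; organization."
    d = draft_feedback("Student A", "Essay text goes here...", rubric)
    # The teacher reads the AI draft, then adds their own knowledge of the student.
    d = teacher_review(d, "Big improvement in structure since your last essay.")
    print(d.ai_draft)
    print(d.teacher_comment, "| approved:", d.approved)
```

The point of the sketch is simply the shape of the workflow: the AI produces a draft against your rubric, and nothing is final until you have reviewed, edited, and approved it.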
Grading, as we all know, is not just about assigning marks or placing checkmarks on student work. It’s a deeply nuanced process that involves understanding individual student progress, recognizing effort, and interpreting responses in context. Therefore, teachers MUST be in the loop!
Teachers bring their experience, professional judgment, and personal knowledge of their students into the grading process; these are factors that AI, for all its efficiency and utility, cannot replicate.
So I would join Priten Shah in advocating for a balanced approach to AI grading. Shah (2023) beautifully articulates this idea in the following lengthy quote:
> The current risks of AI-led grading and feedback are not insignificant. AI tools may misinterpret complex or nuanced student responses and thus provide inaccurate grading or feedback. They may also not be able to take into account the context that the teacher has from their experience with the student or even classroom discussions. Concerns about privacy, data security, and potential biases in the existing AI systems must be addressed before AI tools can be fully trusted to decide grades unanimously. For now, educators should adopt a balanced approach and use AI-generated feedback to supplement their expertise and observations. Thus, educators can ensure any grades or feedback aligns with their understanding of the student’s progress. Teachers can also combine AI-generated feedback with their own comments to prevent leaving out anything that would help students learn from their work. (p. 151)
So, relying entirely on AI for grading is simply not a viable solution. Beyond sidelining the human element (an essential part of the grading process), AI-generated assessment still grapples with issues of bias, misrepresentation, privacy, and data security, all of which complicate its widespread adoption.
A balanced approach, as Shah argues, is the key: educators should use AI as a tool to support and enhance grading, not as a wholesale replacement for human judgment and expertise.
References
- Mollick, E. (2024). Teaching with AI: The promise and practice of co-intelligence for education. MIT Press.
- Shah, P. (2023). AI in education: A practical guide for teachers. Routledge.