Over the past few days, I have been in full marathon reading mode while working on a chapter on AI and assessment. As part of my reference mining and snowballing, I came across a set of research papers that really stood out.
Most of these were published this year, with one exception, and all of them raise thoughtful questions about AI, writing, feedback, integrity, and assessment in higher education. I thought I would share a brief overview of each here in case some of you find them useful for your own research.
1. Corbin, T., Tai, J., & Flenady, G. (2025). Understanding the place and value of GenAI feedback: A recognition-based framework.
This paper argues that AI feedback is useful, but it is not the same as human feedback. Corbin, Tai, and Flenady show that effective feedback depends on recognition, trust, and shared vulnerability between teachers and students. Since AI cannot genuinely recognize learners, its feedback works best as a low-stakes space to practice and refine ideas, not as a substitute for relational, human feedback.
2. Costello, E. (2024). ChatGPT and the educational AI chatter: Full of bullshit or trying to tell us something?
Costello reads ChatGPT less as a thinking machine and more as a reflection of our own habits of language. Using the idea of philosophical bullshit and the Buddhist notion of papañca, he shows how fluent AI talk pulls us in through plausibility rather than meaning, inviting educators to slow down and question what language, human or AI, is actually doing.
3. Butson, R., & Spronken-Smith, R. (2024). AI and its implications for research in higher education: A critical dialogue.
This paper stages a dialogue between two academics to surface the real tensions around AI in research. Rather than promoting quick wins, Butson and Spronken-Smith focus on what AI changes at a deeper level: academic identity, judgement, authorship, and the ethics of inquiry. The takeaway is not acceptance or rejection of AI, but the need for deliberate, reflective use that keeps human thinking, responsibility, and scholarly values at the center.
4. Curtis, G. J. (2025). The two-lane road to hell is paved with good intentions: Why an all-or-none approach to generative AI, integrity, and assessment is insupportable.
Curtis challenges the popular two-lane approach to AI in assessment that either bans AI completely or allows it without limits. He argues this binary thinking weakens learning, assessment validity, and academic integrity. Instead, he makes a strong case for a middle ground, where AI use is allowed with clear constraints, taught incrementally, and enforced through layered integrity practices that support learning rather than bypass it.
5. Bao, T., Zhao, Y., Mao, J., & Zhang, C. (2025). Examining linguistic shifts in academic writing before and after the launch of ChatGPT: A study on preprint papers.
This large-scale study shows that academic writing has shifted since the release of ChatGPT. Analyzing over 800,000 arXiv abstracts, the authors find more AI-favored adjectives and adverbs, simpler sentence structures, and a broader vocabulary, but also weaker cohesion and lower readability. The changes are strongest in computer science and among non-native English scholars, suggesting AI is reshaping how research sounds, not just how it is written.
6. Bannister, P. (2025). ParadAIse L0st?
Bannister offers a sharp critique of how academia is responding to generative AI, arguing that the real danger is not the technology but our drift toward formulaic, fear-driven writing. He shows how AI panic, detection obsessions, and publish-or-perish pressures are pushing scholars toward self-censorship and stylistic conformity, especially harming multilingual researchers. The paper calls for human-led hybrid authorship, where AI can support thinking but responsibility, judgement, and meaning remain firmly human.

References
Bannister, P. (2025). ParadAIse L0st? Higher Education Research & Development. https://doi.org/10.1080/07294360.2025.2586653
Bao, T., Zhao, Y., Mao, J., & Zhang, C. (2025). Examining linguistic shifts in academic writing before and after the launch of ChatGPT: A study on preprint papers. Scientometrics, 130, 3597–3627. https://doi.org/10.1007/s11192-025-05341-y
Butson, R., & Spronken-Smith, R. (2024). AI and its implications for research in higher education: A critical dialogue. Higher Education Research & Development, 43(3), 563–577. https://doi.org/10.1080/07294360.2023.2280200
Corbin, T., Tai, J., & Flenady, G. (2025). Understanding the place and value of GenAI feedback: A recognition-based framework. Assessment & Evaluation in Higher Education, 50(5), 718–731. https://doi.org/10.1080/02602938.2025.2459641
Costello, E. (2024). ChatGPT and the educational AI chatter: Full of bullshit or trying to tell us something? Postdigital Science and Education, 6, 425–430. https://doi.org/10.1007/s42438-023-00398-5
Curtis, G. J. (2025). The two-lane road to hell is paved with good intentions: Why an all-or-none approach to generative AI, integrity, and assessment is insupportable. Higher Education Research & Development. https://doi.org/10.1080/07294360.2025.2476516




