The arrival of generative AI tools such as ChatGPT has disrupted how we think about assessment in higher education. As educators, we’re facing a critical question: What should we actually be assessing when students have access to these powerful tools?
Our recent study explored how 28 Canadian higher education educators are navigating this challenge. Through in-depth interviews, we discovered that educators are positioning themselves as “stewards of learning with integrity” – carefully drawing boundaries between acceptable and unacceptable uses of chatbots in student assessments.

Where Educators Found Common Ground
Across disciplines, participants agreed that prompting skills and critical thinking are appropriate to assess with chatbot integration. Effective prompting requires students to demonstrate foundational knowledge, communicate clearly, and apply ethical principles such as transparency and respect. Critical thinking assessments can leverage chatbots’ current limitations – their unreliable arguments, weak fact-checking, and inability to explain their reasoning – by positioning students as evaluators of AI-generated content.
The Nuanced Territory of Writing Assessment
Writing skills proved far more controversial. Educators accepted chatbot use for brainstorming (generating initial ideas) and editing (grammar checking after independent writing), but only under specific conditions: students must voice their own ideas, complete the core writing independently, and critically evaluate any AI suggestions.
Notably absent from discussions was the composition phase – the actual process of developing and organizing original arguments. This silence suggests educators view composition as distinctly human cognitive work that should remain student-generated, even as peripheral tasks might accommodate technological assistance.
Broader Concerns
Participants raised important challenges beyond specific skill assessments: language standardization that erases student voice, potential for overreliance on AI, blurred authorship boundaries, and untraceable forms of academic misconduct. Many emphasized that students training to become professional communicators shouldn’t rely on AI for core writing tasks.
Moving Forward
Our findings suggest that ethical AI integration in assessment requires more than policies; it demands ongoing conversations about what makes learning authentic in technology-mediated environments. Educators need support in identifying which “cognitive offloads” are appropriate, understanding how AI works, and building students’ evaluative judgment skills.
The key insight? Assessment in the AI era isn’t about banning technology, but about distinguishing between tasks where AI can enhance learning and those where independent human cognition remains essential. As one participant reflected, we must keep asking ourselves: “What should we be assessing exactly?”
The postplagiarism era requires us to protect academic standards while preparing students for technology-rich professional environments – a delicate balance that demands ongoing dialogue, flexibility, and our commitment to learning and student success.
Read the full article: https://doi.org/10.1080/02602938.2025.2587246
______________
Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.
