What Should We Be Assessing in a World with AI? Insights from Higher Education Educators

November 25, 2025

The arrival of generative AI tools such as ChatGPT has disrupted how we think about assessment in higher education. As educators, we’re facing a critical question: What should we actually be assessing when students have access to these powerful tools?

Our recent study explored how 28 Canadian higher education educators are navigating this challenge. Through in-depth interviews, we discovered that educators are positioning themselves as “stewards of learning with integrity” – carefully drawing boundaries between acceptable and unacceptable uses of chatbots in student assessments.

Screenshot of an academic journal article header from Assessment & Evaluation in Higher Education, published by Routledge. The article title reads: “What should we be assessing exactly? Higher education staff narratives on gen AI integration of assessment in a postplagiarism era.” Authors listed are Sarah Elaine Eaton, Beatriz Antonieta Moya Figueroa, Brenda McDermott, Rahul Kumar, Robert Brennan, and Jason Wiens, with institutional affiliations including University of Calgary, Pontificia Universidad Católica de Chile, Brock University, and others. The DOI link is visible at the top: https://doi.org/10.1080/02602938.2025.2587246.

Where Educators Found Common Ground

Across disciplines, participants agreed that prompting skills and critical thinking are appropriate to assess with chatbot integration. Prompting requires students to demonstrate foundational knowledge, clear communication skills, and ethical principles like transparency and respect. Critical thinking assessments can leverage chatbots’ current limitations – their unreliable arguments, weak fact-checking, and inability to explain reasoning – positioning students as evaluators of AI-generated content.

The Nuanced Territory of Writing Assessment

Writing skills proved far more controversial. Educators accepted chatbot use for brainstorming (generating initial ideas) and editing (grammar checking after independent writing), but only under specific conditions: students must voice their own ideas, complete the core writing independently, and critically evaluate any AI suggestions.

Notably absent from discussions was the composition phase – the actual process of developing and organizing original arguments. This silence suggests educators view composition as distinctly human cognitive work that should remain student-generated, even as peripheral tasks might accommodate technological assistance.

Broader Concerns

Participants raised important challenges beyond specific skill assessments: language standardization that erases student voice, potential for overreliance on AI, blurred authorship boundaries, and untraceable forms of academic misconduct. Many emphasized that students training to become professional communicators shouldn’t rely on AI for core writing tasks.

Moving Forward

Our findings suggest that ethical AI integration in assessment requires more than policies; it demands ongoing conversations about what makes learning authentic in technology-mediated environments. Educators need support in identifying which ‘cognitive offloads’ are appropriate, understanding how AI works, and building students’ evaluative judgment skills.

The key insight? Assessment in the AI era isn’t about banning technology, but about distinguishing between tasks where AI can enhance learning and those where independent human cognition remains essential. As one participant reflected: we must continue asking ourselves, “What should we be assessing exactly?”

The postplagiarism era requires us to protect academic standards while preparing students for technology-rich professional environments – a delicate balance that demands ongoing dialogue, flexibility, and our commitment to learning and student success.

Read the full article: https://doi.org/10.1080/02602938.2025.2587246

______________

Share this post: What Should We Be Assessing in a World with AI? Insights from Higher Education Educators – https://drsaraheaton.com/2025/11/25/what-should-we-be-assessing-in-a-world-with-ai-insights-from-higher-education-educators/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Breaking Barriers: Academic Integrity and Neurodiversity

November 20, 2025

When we talk about academic integrity in universities, we often focus on preventing plagiarism and cheating. But what if our very approach to enforcing these standards is unintentionally creating barriers for some of our most vulnerable students?

My recent research explores how current academic integrity policies and practices can negatively affect neurodivergent students—those with conditions like ADHD, dyslexia, Autism, and other learning differences. Our existing systems, structures, and policies can further marginalize students with cognitive differences.

The Problem with One-Size-Fits-All

Neurodivergent students face unique challenges that can be misunderstood or ignored. A dyslexic student who struggles with citation formatting isn’t necessarily being dishonest. They may be dealing with cognitive processing differences that make these tasks genuinely difficult. A student with ADHD who has trouble managing deadlines and tracking sources is not necessarily lazy or unethical. They may be navigating executive function challenges that affect time management and organization. Yet our policies frequently treat these struggles as potential misconduct rather than as differences that deserve support.

The Technology Paradox for Neurodivergent Students

Technology presents a particularly thorny paradox. On one hand, AI tools such as ChatGPT and text-to-speech software can be academic lifelines for neurodivergent students, helping them organize thoughts, overcome writer’s block, and express ideas more clearly. These tools can genuinely level the playing field.

On the other hand, the same technologies designed to catch cheating—especially AI detection software—appear to disproportionately flag neurodivergent students’ work. Autistic students or those with ADHD may be at higher risk of false positives from these detection tools, potentially facing misconduct accusations even when they have done their own work. This creates an impossible situation: the tools that help are the same ones that might get students in trouble.

Moving Toward Epistemic Plurality

So what’s the solution? Epistemic plurality, or recognizing that there are multiple valid ways of knowing and expressing knowledge. Rather than demanding everyone demonstrate learning in the exact same way, we should design assessments that allow for different cognitive styles and approaches.

This means:

  • Rethinking assessment design to offer multiple ways for students to demonstrate knowledge
  • Moving away from surveillance technologies like remote proctoring that create anxiety and accessibility barriers
  • Building trust rather than suspicion into our academic cultures
  • Recognizing accommodations as equity, not as “sanctioned cheating”
  • Designing universally, so accessibility is built in from the start rather than added as an afterthought

What This Means for the Future

In the postplagiarism era, where AI and technology are seamlessly integrated into education, we move beyond viewing academic integrity purely as rule-compliance. Instead, we focus on authentic and meaningful learning and ethical engagement with knowledge.

This does not mean abandoning standards. It means recognizing that diverse minds may meet those standards through different pathways. A student who uses AI to help structure an essay outline isn’t necessarily cheating. They may be using assistive technology in much the same way another student might use spell-check or a calculator.

Call to Action

My review of existing research showed something troubling: we have remarkably little data about how neurodivergent students experience academic integrity policies. The studies that exist are small, limited to English-speaking countries, and often overlook the voices of neurodivergent individuals themselves.

We need larger-scale research, global perspectives, and most importantly, we need neurodivergent students to be co-researchers and co-authors in work about them. “Nothing about us without us” is not just a slogan, but a call to action for creating inclusive academic environments.

Key Messages

Academic integrity should support learning, not create additional barriers for students who already face challenges. By reimagining our approaches through a lens of neurodiversity and inclusion, we can create educational environments where all students can thrive while maintaining academic standards.

Academic integrity includes and extends beyond student conduct; it means that everyone in the learning system acts with integrity to support student learning. Ultimately, there can be no integrity without equity.

Read the whole article here:
Eaton, S. E. (2025). Neurodiversity and academic integrity: Toward epistemic plurality in a postplagiarism era. Teaching in Higher Education. https://doi.org/10.1080/13562517.2025.2583456

______________

Share this post: Breaking Barriers: Academic Integrity and Neurodiversity – https://drsaraheaton.com/2025/11/20/breaking-barriers-academic-integrity-and-neurodiversity/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


AI Use and Ethics Among Jordanian University Students

November 19, 2025

In a new study, 885 university students in Jordan “generally viewed AI use for tasks such as translation, literature reviews, and exam preparation as ethically acceptable, whereas using AI to cheat or fully complete assignments was widely regarded as unacceptable.”

Check out the latest article in the International Journal for Educational Integrity by Marwa M. Alnsour, Hamzeh Almomani, Latifa Qouzah, Mohammad Q.M. Momani, Rasha A. Alamoush & Mahmoud K. AL-Omiri, “Artificial intelligence usage and ethical concerns among Jordanian University students: a cross-sectional study”.

Screenshot of the title page of a research article published in the International Journal for Educational Integrity. The article is titled “Artificial intelligence usage and ethical concerns among Jordanian University students: a cross-sectional study.” It is marked as “Research” and “Open Access” with a purple header. Authors listed are Marwa M. Alnsour, Hamzeh Almomani, Latifa Qouzah, Mohammad Q.M. Momani, Rasha A. Alamoush, and Mahmoud K. Al-Omiri. The DOI link and journal details appear at the top.

Synopsis

This cross-sectional study examined artificial intelligence usage patterns and ethical awareness among 885 higher education students across various disciplines. Findings showed how Jordanian university students engage with AI tools like ChatGPT in their academic work.

Key Findings

High AI Adoption: A substantial 78.1% of students reported using AI during their studies, with approximately half using it weekly or daily. ChatGPT emerged as the most popular tool (85.2%), primarily used for answering academic questions (53.9%) and completing assignments (46.4%).

Knowledge Gaps: Although 57.5% considered themselves moderately to very knowledgeable about AI, only 44% were familiar with ethical guidelines. Notably, 41.8% were completely unaware of principles guiding AI use, revealing a significant gap between usage and ethical understanding.

Disciplinary Differences: Science and engineering students demonstrated the highest usage rates and knowledge levels, while humanities students showed lower engagement but expressed the strongest interest in training. Health sciences students displayed greater ethical concerns, possibly reflecting the high-stakes nature of their field.

Ethical Perceptions: Students generally viewed AI use for translation, proofreading, literature reviews, and exam preparation as acceptable. However, 39.8% had witnessed unethical AI use, primarily involving cheating or total dependence on AI. Only 35% expressed concern about ethical implications, suggesting many may not fully recognize potential risks.

Demographic Patterns: Female students demonstrated higher ethical awareness than males. Older students and those in advanced programs (particularly PhD students) showed greater AI knowledge and ethical consciousness, with each additional year of age correlating with increased awareness scores.

Training Needs: More than three quarters (76.7%) of students expressed interest in professional training on ethical AI use, with 83.7% agreeing that guidance is necessary. However, 46.6% indicated their institutions had not provided adequate support (which should surprise exactly no one, since similar results have been reported in other studies).

Implications

The authors call for Jordanian universities to develop clear, discipline-specific ethical guidelines and structured training programs. The researchers recommend implementing mandatory online modules and discipline-tailored workshops, and establishing dedicated AI ethics bodies to promote responsible use. These findings underscore the broader challenge facing higher education globally: ensuring students can leverage AI’s benefits while maintaining academic integrity and developing critical thinking skills.

______________

Share this post: AI Use and Ethics Among Jordanian University Students – https://drsaraheaton.com/2025/11/19/ai-use-and-ethics-among-jordanian-university-students/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


A Brief History of Postplagiarism: Or, Why Fabrication is Not the New Flattery

October 13, 2025

Infographic titled "Postplagiarism: A Brief History" by Sarah Elaine Eaton, PhD, showing a timeline from 2021 to 2025 that highlights key milestones in the development of the concept of postplagiarism.
2021: Eaton introduces postplagiarism in her book Plagiarism in Higher Education, building on Rebecca Moore Howard’s work.
2023: Eaton explicitly defines postplagiarism in an article published in the International Journal for Educational Integrity.
2024: Eaton and Kumar launch www.postplagiarism.com, offering multilingual translations and open-access content.
2025: Rahul Kumar publishes the first empirical study on postplagiarism in the same journal, analyzing student reactions.

I am always excited to hear about new work that showcases postplagiarism. Imagine my dismay, then, when I read a new article, published in an (allegedly) peer-reviewed journal, that foregrounded the tenets of postplagiarism but was rife with fabricated sources, including references to work attributed to me that I never wrote.

I have opted not to ‘name and shame’ the authors. Anyone who is curious enough need only do an Internet search to find the offending article and those who wrote it.

Instead, I prefer to take a more productive approach. Here I provide a brief timeline of the development of postplagiarism as both a framework and a theory:

2021: Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity

The book begins with a history of plagiarism. Then, I discuss plagiarism in modern times. In the concluding chapter I contemplate the future of plagiarism. Building on the scholarship of Rebecca Moore Howard, I proposed that the age of generative artificial intelligence (Gen AI) could launch us into a post-plagiarism era in which human-AI hybrid writing becomes the norm.

2023: Expanding on the ideas first presented in the final chapter of my book, I wrote my first article dedicated to the topic: “Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology”, published in the International Journal for Educational Integrity.

2024: Dr. Rahul Kumar (Brock University, Canada) and I launch our website, http://www.postplagiarism.com. We provide open access resources free of charge. Thanks to the generosity of multilingual colleagues and friends, we offer translations of the postplagiarism infographic in multiple languages.

Also, in this year, Rahul Kumar begins a study to test the tenets of postplagiarism.

2025: Rahul Kumar publishes the results of the first empirical article on the tenets of postplagiarism. His article, “Understanding PSE students’ reactions to the postplagiarism concept: a quantitative analysis” is published in the International Journal for Educational Integrity.

If you see references to postplagiarism as we have conceptualized it that pre-date the milestones above, dig deeper to confirm the work is real. There are now fabricated sources published on the Internet that do not — and never did — exist.

Imitation is the sincerest form of flattery, as the saying goes. This quip has long been used to dismiss plagiarism concerns, on the grounds that students learn to imitate great writers by quoting them without attribution. The saying taps into cultural and historical understandings that are beyond the scope of a blog post. What I can say is that in the postplagiarism era, fabrication is not the new flattery.

One of the tenets of postplagiarism is that humans can relinquish control over what they write to an AI, but we do not relinquish responsibility. The irony of seeing fabricated references about postplagiarism is as absurd as it is puzzling. There is no need to fabricate references to postplagiarism, especially since we provide numerous free, open-access resources and research on the topic.

______________

Share this post: A Brief History of Postplagiarism: Or, Why Fabrication is Not the New Flattery – https://drsaraheaton.com/2025/10/13/a-brief-history-of-postplagiarism-or-why-fabrication-is-not-the-new-flattery/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


2025-2026 Postplagiarism Speaker Series: Navigating AI in Education

September 26, 2025

Join us for an innovative speaker series exploring how artificial intelligence is transforming education, academic integrity, and the future of learning. The 2025-2026 Postplagiarism Speaker Series brings together leading researchers and educators from around the world to examine how we can navigate the integration of AI tools in educational settings while maintaining ethical standards and fostering authentic learning. This series is hosted by the Centre for Artificial Intelligence Ethics, Literacy, and Integrity (CAIELI), University of Calgary.

What is Postplagiarism? Postplagiarism (Eaton, 2023) refers to our current era where artificial intelligence has become part of everyday life, fundamentally changing how we teach, learn, and create. Rather than viewing AI as a threat to academic integrity, the postplagiarism framework offers practical approaches for embracing AI as a collaborative tool while preserving the values of authentic learning and ethical scholarship.

Series Highlights: This multi-part series features international experts who will share research-based insights and practical strategies for educators, administrators, and policymakers. Topics include foundational concepts of postplagiarism, assessment redesign, policy development, and innovative teaching approaches that prepare students for an AI-integrated world.

Who Should Attend:

  • Faculty and instructors across all disciplines
  • Educational administrators and policymakers
  • Graduate students in education
  • Academic integrity professionals
  • Anyone interested in the future of education and AI

Format: Each session combines research presentations with practical applications, offering attendees actionable insights they can implement in their own educational contexts. Sessions are hybrid so participants can attend either in person or online. Sessions are open to the public and free for everyone to attend.

The series showcases the groundbreaking postplagiarism framework developed at the University of Calgary, which has gained international recognition and been translated into multiple languages. Participants will gain cutting-edge knowledge about navigating the challenges and opportunities of generative AI in education.

Time: All sessions are held from 12:00 p.m. (noon) to 1:00 p.m. Mountain Time. Please convert to your local time zone.

Session 1: Postplagiarism Fundamentals: Integrity and Ethics in the Age of GenAI

A promotional image for an AI Speaker Series event hosted by the University of Calgary. The background is orange with a geometric pattern. The text reads: 'AI Speaker Series, Sept 17, Dr. Sarah Eaton, University of Calgary, Postplagiarism Fundamentals: Integrity and Ethics in the Age of GenAI.' A blurred image of Dr. Sarah Eaton appears on the right. The CAIELI (Centre for Artificial Intelligence Ethics, Literacy and Integrity) logo is in the bottom left corner.

Date: September 17, 2025

Description: Join us to learn about the award-winning postplagiarism framework that has been translated into half a dozen languages and has received worldwide attention. Postplagiarism refers to an era in human society in which artificial intelligence is part of everyday life, including how we teach, learn, and interact daily (Eaton, 2023). Learn more about the six tenets of postplagiarism and how you can apply them to support students’ success.

Speaker: Dr. Sarah Elaine Eaton, University of Calgary

Bio: Sarah Elaine Eaton is a Werklund Research Professor at the University of Calgary. She researches academic integrity and ethics in educational contexts. Her work on postplagiarism marks her most important contribution to research, pedagogy and advocacy.

Check out the recording here.

Get a copy of the slides here.

Session 2: Smart or Shallow? Postplagiarism, Trust, and the Future of Learning with GenAI

A promotional image for the AI Speaker Series at the University of Calgary featuring Dr. Rahul Kumar from Brock University. The event is scheduled for October 1 and will discuss "Smart or Shallow? Postplagiarism, Trust and the Future of Learning with GenAI." The image includes a blurred-out photo of Dr. Rahul Kumar in a suit, standing outdoors with greenery in the background. The bottom section mentions CAIELI (Centre for Artificial Intelligence Ethics, Literacy and Integrity).

Date: October 1, 2025

Description: In the postplagiarism era, GenAI compels educators to confront a fundamental choice: should it be trusted as a complementary tool that enhances learning, or as a competitive tool that undermines it? This talk explores how cognitive and affective trust explain the differences among student, educator, and employer perspectives on AI’s role in education. Drawing on David C. Krakauer’s distinction between complementary and competitive cognitive artifacts, Kumar argues that academic integrity now requires more than attribution or authorship. It requires deliberate pedagogical practices that guide learners to use AI in ways that enhance, rather than diminish, human intelligence.

Speaker: Dr. Rahul Kumar, Brock University

Bio: Dr. Rahul Kumar is an Assistant Professor in the Department of Educational Studies at Brock University. His research focuses on the disruptive force of GenAI on education, its effect on academic integrity, and how to cope with it. Though most of his work has focused on higher education, he has also undertaken research projects on how secondary school teachers are dealing with GenAI in their classrooms and schools.

Register here: https://workrooms.ucalgary.ca/event/3939984

Session 3: Assessment in a Postplagiarism era: The AI Assessment Scale as a framework for academic integrity in an AI transformed world

Date: October 15, 2025

Description: Developments in Generative AI are leading us closer to the concept of ‘postplagiarism’, with traditional concepts of academic integrity being fundamentally challenged by these technologies. This lecture explores how the AI Assessment Scale (AIAS) offers a pragmatic response to this upcoming paradigm shift, moving beyond futile attempts at AI detection towards thoughtful assessment redesign. In a world where AI-generated content is becoming indistinguishable from human work, the AIAS (Perkins et al., 2024) provides a five-level framework that acknowledges this new reality whilst maintaining academic authenticity.

Rather than treating AI as a threat to be policed, the AIAS embraces it as a tool to be thoughtfully integrated where appropriate. From ‘No AI’ assessments that preserve foundational skill development, to ‘AI Exploration’ tasks that prepare students for an AI-saturated workplace, this framework offers educators practical strategies for the postplagiarism landscape. This talk will demonstrate how institutions can move from an adversarial ‘catch and punish’ mentality to a collaborative approach that recognises both learning integrity and technological advancement. The session will challenge traditional academic integrity paradigms and offer actionable insights for this new era of university assessment.

Speaker: Dr. Mike Perkins, British University Vietnam

Bio: Dr. Mike Perkins heads the Centre for Research & Innovation at British University Vietnam, Hanoi. He is an Associate Professor and leads GenAI policy integration and trains Vietnamese educators and policymakers on this topic. Mike is one of the authors of the AI Assessment Scale, which has been adopted across more than 250 schools and universities worldwide, and translated into 20+ languages. His research focuses on GenAI’s impact on education, and has explored various areas within this field. This has included AI text detectors, attitudes to AI technologies, and the ethical integration of AI in assessments through the AI Assessment Scale. His work bridges technology, education, and academic integrity.

Register here: https://workrooms.ucalgary.ca/event/3925369

Session 4: Designing for Integrity: Learning and Assessment in the Postplagiarism Era

Date: November 19, 2025

Description: In the postplagiarism era, where generative AI and related technologies are embedded in how ideas are produced and shared, academic integrity must be reimagined. Rather than treating plagiarism as a violation to be detected and punished, integrity becomes something to be intentionally cultivated through the design of both learning and assessment. This talk will explore how postplagiarism challenges traditional notions of authorship, originality, and attribution, inviting educators to move beyond rule enforcement toward fostering creativity, responsibility, and agency. I will discuss how aligning learning activities with authentic, meaningful assessment can reduce plagiarism incentives while preparing students for ethical participation in a world where human and AI contributions are intertwined. Participants will be encouraged to rethink not just how we assess, but why—and to envision integrity as a shared, evolving value in the age of AI.

Speaker: Dr. Soroush Sabbaghan, University of Calgary

Bio: Dr. Soroush Sabbaghan is an Associate Professor and the Educational Leader in Residence in Generative AI at the University of Calgary’s Taylor Institute for Teaching and Learning. His work centres on human-centred design and the creation of human–AI collaborative environments, exploring the ethical, theoretical, and pedagogical implications of generative AI across K–12 and higher education. Drawing on research, teaching, and international collaborations, he examines how AI is reshaping notions of authorship, originality, and scholarly practice. Soroush is the editor of Navigating Generative AI in Higher Education: Ethical, Theoretical and Practical Perspectives, a collection that invites educators to critically engage with AI while maintaining care, dignity, and agency as core values. In his work, he encourages institutions to move beyond compliance-based approaches toward fostering creativity, responsibility, and adaptability in a hybrid human–AI world—principles that are at the heart of the postplagiarism era.

Register here: https://workrooms.ucalgary.ca/event/3939986

Session 5: Teaching Postplagiarism Tenets Through AI-Enhanced Creative Problem-Solving Model

Date: January 14, 2026

Description: While the concept of postplagiarism has gained increasing attention in the past two years, much of the discussion remains focused on writing tasks, leaving a gap in understanding how this framework applies to broader creative processes. Even less focus has been placed on strategies for teaching its tenets. This presentation bridges that gap by applying the Creative Problem Solving (CPS) model, enhanced with narrow AI tools such as chatbots, to explore how postplagiarism can be taught and understood in diverse creative contexts. By mapping the stages of CPS to postplagiarism’s key tenets, the session reveals nuanced connections between the two frameworks and offers what may be one of the earliest structured models for explaining current human–AI co-creation practices.

Speaker: Fuat Ramazanov, Acsenda School of Management

Bio: Fuat Ramazanov is the Program Director at Acsenda School of Management and a doctoral student at the University of Calgary. His doctoral research examines undergraduate students’ perceptions of the interplay between human and AI creativity throughout the creative process. A strong advocate for teaching for creativity, Fuat promotes approaches that cultivate creative thinking skills in students. His interests include innovative approaches to teaching, pedagogy in the age of AI, and the theory and application of the postplagiarism framework.

Register here: https://workrooms.ucalgary.ca/event/3939987

Session 6: From Policy to Practice: A Postplagiarism Readiness Framework for AI Integration in Higher Education

Date: January 28, 2026

Description: This workshop introduces a readiness framework based on the six tenets of postplagiarism to critically assess institutional policies guiding faculty in using generative artificial intelligence. Participants will explore how the framework can be applied as a diagnostic tool to evaluate whether existing policies provide sufficient guidance, identify gaps, and support ethical, transparent, and future-ready integration of AI into teaching and learning. The session will combine conceptual grounding with practical analysis, offering participants strategies to strengthen policy and practice alignment in the age of AI.

Speaker: Dr. Beatriz Moya, Pontificia Universidad Católica de Chile

Bio: Dr. Beatriz Moya is an assistant professor at the Institute of Applied Ethics at the Pontificia Universidad Católica de Chile in Santiago, Chile. Her research focuses on the intersection of academic integrity, educational leadership, and the Scholarship of Teaching and Learning (SoTL).

Register here: https://workrooms.ucalgary.ca/event/3939988

Session 7: A Transformative Model for Learning Academic Integrity in the Postplagiarism Era

Date: February 11, 2026

Description: The Postplagiarism framework has gained considerable attention, reshaping the landscape of academic integrity and ethical decision-making in education and research. This paradigm shift encourages educators to embrace the potential of artificial intelligence integration in transforming the competencies students need for their careers and communities. In this presentation I focus on two of the postplagiarism tenets: enhanced human creativity and the disappearance of language barriers. I will showcase preliminary findings of my doctoral research on academic integrity. As we recognize students’ diversity, these two postplagiarism tenets provide a framework for fostering creative communication and accessible educational environments while removing potential obstacles within a transformative model for learning academic integrity.

Speaker: Bibek Dahal, MPhil, University of Calgary

Bio: Bibek Dahal, MPhil is a PhD Candidate in higher education leadership, policy, and governance at the Werklund School of Education, University of Calgary, Canada. Bridging Southern epistemologies and justice-centered transformative research, his scholarship focuses on academic integrity and ethics in global higher education. His doctoral study investigates a transformative model for learning academic integrity in international higher education.

Register here: https://workrooms.ucalgary.ca/event/3945787

Session 8: Designing Authentic Assessment in the PostPlagiarism GenAI Era: Making Judgement Visible

Date: February 25, 2026

Description: GenAI shifts academic integrity from detection to design by asking educators to assign work only students can do. Aimed at educators, this workshop presents the 3Cs framework (construct, collaborate, create), developed in secondary classrooms and adapted for teacher education. Sharma will introduce amplified intelligence as a lens and centre capability-agnostic design so tasks remain valid as GenAI tools evolve. Practice is anchored in six postplagiarism tenets: hybrid human and AI writing becomes normal, creativity is enhanced, language barriers diminish, control may be delegated but responsibility cannot, attribution remains important, and definitions of plagiarism are evolving. Participants examine how purpose sets permissions for GenAI use and translate that stance into prompts, checkpoints, and reflections that make process as visible as product. Classroom-tested examples provide assessment patterns and syllabus language for disclosure, verification, and boundaries. Outcomes: visible judgement, honoured student agency, reduced outsourcing.

Speaker: Dr. Sunaina Sharma, Assistant Professor, Brock University

Bio: Dr. Sunaina Sharma is an Assistant Professor in the Department of Educational Studies at Brock University, Ontario, Canada, specializing in secondary education and curriculum development. With 23 years of experience as a secondary teacher and 10 years as a program leader, she is deeply committed to creating a space for secondary students and educators to share their voices. Her recent research examines Ontario secondary school teachers’ responses to the proliferation of generative artificial intelligence (GenAI), focusing on their questions, concerns, and instructional needs. Dr. Sharma’s research on digital technology and student engagement underscores that engagement arises not from the digital tools themselves but from students’ ability to construct knowledge through their use. Her work contributes to ethical GenAI adoption and advances effective pedagogical practices in educational settings.

Register here: https://workrooms.ucalgary.ca/event/3945789

Session 9: The SETA Framework for Integrity Education in the Postplagiarism Era

Date: March 4, 2026

Description: Technological innovations are fuelling the call for changes in how students are taught and evaluated at all levels of the system. One visible change is a new focus on academic integrity, as the use of generative AI tools such as large language models (LLMs) has rendered the traditional discourse regarding plagiarism obsolete and inadequate for the new environment. This presentation focuses on the support, education for integrity, teaching and learning, and assessment (SETA) framework. It identifies the various elements that are necessary in educating students for academic integrity within the GenAI-enabled environment. It focuses on the role of policies and their importance in creating the context within which academic integrity education takes place, and includes the punitive element that is necessary in instances where students choose to act contrary to the requirements of the academy. This discussion includes data gathered from students and librarians from across the Caribbean who participated in a 20-hour training for the development of academic integrity.

Speaker: Dr. Ruth Baker-Gardner, University of the West Indies, Jamaica

Bio: Dr. Ruth Baker-Gardner is the foremost voice on academic integrity in the Caribbean. She is a lecturer in librarianship at the University of the West Indies, Mona Campus in Jamaica. Dr. Baker-Gardner is the author of Academic Integrity in the Caribbean, which was awarded the Principal’s Research Award for Outstanding Publication in the Book Category. She was also awarded the International Center for Academic Integrity Exemplar for Academic Integrity Award and the European Network for Academic Integrity Outstanding Researcher Award. Her latest publication, Academic Integrity Meets Artificial Intelligence: The Case of the Anglophone Caribbean, examines the region’s readiness for artificial intelligence use through its academic integrity structures and practices.

Register here: https://workrooms.ucalgary.ca/event/3945790

Session 10: Postplagiarism Perspectives: Comparative Insights from K-12 and Postsecondary Research

Date: March 18, 2026

Description: As generative AI technologies reshape educational landscapes, academic integrity must be reconceptualized across both K–12 and postsecondary contexts. Drawing from two doctoral research studies, this presentation explores the complex interplay between technological adoption, ethical formation, and institutional change.

The first study examines K–12 administrators navigating pedagogical and ethical uncertainties introduced by human-AI collaboration. Employing the Technology Acceptance Model, Innovation Diffusion Theory, and the 4M Framework, this research explores how administrators balance AI integration with pedagogical values during ‘AI arbitrage’, the liminal space where early-adopting students outpace institutional adaptation.

The second study explores how CPA-accredited accounting programs embed ethical competencies through assessment, particularly regarding AI-enabled misconduct. Employing Rest’s Four-Component Model of Morality, Biggs’ Constructive Alignment, and an Integrity–Assessment Alignment Matrix, this research examines professional ethics education amid technological disruption.

Together, anchored in Eaton’s postplagiarism concept, these complementary theoretical lenses provide comprehensive analytical approaches to understanding educational transformation across the learning continuum.

Speakers: Naomi Paisley & Myke Healy

Bio: Naomi Paisley is a Chartered Professional Accountant (CPA) with over 20 years of experience in accounting, audit, and taxation. She currently teaches at the Southern Alberta Institute of Technology (SAIT), where she develops and delivers curriculum in financial reporting, assurance, and Canadian tax. Naomi is also a co-author of nationally adopted Canadian auditing and accounting textbooks and collaborates on the integration of evolving standards, DEI, ESG, and Indigenous perspectives in accounting education. As a doctoral candidate in the EdD program at the University of Calgary’s Werklund School of Education, her research explores how CPA-accredited undergraduate accounting programs prepare students for ethical challenges in the profession, particularly considering AI-enabled misconduct. Her study uses Rest’s Four-Component Model of Morality and Biggs’ Constructive Alignment to analyze ethics education and assessment practices in accounting programs. Naomi’s work supports the alignment of academic integrity initiatives with the expectations of the accounting profession and CPA Canada.

Myke Healy is an educational leader with over 20 years of experience in K-12 teaching and administration. He currently serves as Assistant Head – Teaching & Learning at Trinity College School, where he leads academic strategy and faculty development. Myke holds an M.Ed. in assessment and evaluation from Queen’s University and annually facilitates AI-focused modules at the Canadian Accredited Independent Schools (CAIS) Leadership Institute. As a doctoral candidate in the EdD program at the University of Calgary’s Werklund School of Education, his research examines how K-12 administrators navigate generative AI and academic integrity challenges during technological adoption. His study uses the Technology Acceptance Model, Innovation Diffusion Theory, and the 4M Framework to analyze AI integration and postplagiarism concepts in secondary education. Myke presents nationally and internationally on AI in education and serves on the Ontario College of Teachers’ accreditation roster, the board of eLearning Consortium Canada, and instructs leadership and assessment courses at Queen’s University.

Register here: https://workrooms.ucalgary.ca/event/3945792

______________

Share this post: 2025 – 2026 Postplagiarism Speaker Series: Navigating AI in Education – https://drsaraheaton.com/2025/09/26/2025-2026-postplagiarism-speaker-series-navigating-ai-in-education/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.