Research Integrity Oversight in Canada: A Postplagiarism Perspective

April 11, 2026

The Canadian Panel on Responsible Conduct of Research (PRCR) is proposing substantive changes to Canada’s research integrity framework, and the public comment window closes April 17, 2026. If you care about research ethics in this country, you have days left to weigh in.

I want to flag a few things about these proposed changes and why they matter to those of us working in postplagiarism research.

The most consequential proposal is the removal of any statute of limitations on allegations of research misconduct. As attorney Minal Caron told Retraction Watch, the existing policy is silent on this question. The proposed language would require institutions to review allegations regardless of how much time has passed since the work was published, which would be a significant shift. It’s also a long-overdue one. Complainants often delay coming forward out of fear of retaliation, and a policy that turns away allegations on procedural grounds protects no one except those who benefit from institutional inaction.

The PRCR also proposes to require institutions to hold respondents accountable even after they have left, and to accept anonymous allegations and allegations already circulating in the public domain as grounds for review. These aren’t radical ideas. They’re basic conditions for a credible oversight system.

I’ve written and spoken at length about how postplagiarism requires us to rethink accountability in an age of AI. But accountability without enforcement infrastructure is a philosophical position, not a policy. These proposed changes represent a concrete attempt to build infrastructure. They will not resolve every tension in Canadian research oversight, and the critics quoted in the article are right to flag gaps, particularly around the vagueness of institutional RCR education requirements.

One of the scholars quoted in the Retraction Watch piece is Gengyan Tang, a PhD candidate and a member of our Postplagiarism Research Lab, who studies research integrity policy. His observation that the proposed language around RCR education is too ambiguous is precise and fair. Institutions can host an ‘Academic Integrity Week’ and check a compliance box without delivering anything substantive. Policies that do not specify how education is to be delivered or evaluated leave too much room for performative compliance.

The Pruitt case, cited in the article as a catalyst for some of this reform momentum, is worth naming directly. Jonathan Pruitt was found to have fabricated and falsified data. The case exposed how the 2011 framework’s absence of relevant procedures allowed institutions to deflect rather than investigate. Requiring institutions to act regardless of elapsed time or an individual’s current affiliation is a direct response to that failure.

Postplagiarism, as a framework, asks us to think past the categories we have inherited. The academic integrity arms race that I have discussed in my research applies just as much to research misconduct oversight as it does to student cheating. Detection tools, policies, and procedures are only as good as the institutional will to apply them rigorously. These proposed changes push toward compulsion rather than discretion, which warrants close attention.

The comment period is open until April 17, 2026. If you work in research integrity, this is your chance: read the proposed revisions and submit feedback.

__________

Reposted from: Research Integrity Oversight in Canada: A Postplagiarism Perspective – https://postplagiarism.com/2026/04/11/research-integrity-oversight-in-canada-a-postplagiarism-perspective/


Interfacing with the Future: Reflections on the National Day of Learning 2026

April 1, 2026

On March 28, 2026, I had the pleasure of joining educators from across Canada for the National Day of Learning, hosted by Let’s Talk Science. This one-day, nation-wide professional learning event brought together K–12 teachers, post-secondary educators, and policy leaders to explore some of the most pressing issues shaping education today, with artificial intelligence high on the agenda.

I was invited to deliver a session titled “Interfacing with the Future: Wearable AI and Academic Integrity for K–12 and Higher Ed.” What follows are a few reflections and key ideas from that conversation, hosted by Dr. Alec Couros.

Moving into the Postplagiarism Era

One of the central ideas framing my talk was postplagiarism. In this reality, artificial intelligence is no longer an external tool that students occasionally use; rather, it is embedded into everyday life and learning.

Students are already engaging with AI in ways that challenge traditional notions of authorship, originality, and academic work. The question is no longer if students will use AI, but how.

This shift requires a corresponding change in how we think about academic integrity. Detection and surveillance, long relied upon as primary strategies, are no longer sufficient. Instead, we must rethink how we design learning environments that foster integrity from the ground up.

From Tools to Wearables: How AI is Advancing

A key focus of my presentation was the rapid evolution from AI tools to AI wearables — particularly smart glasses and other forms of cosmetically invisible interfaces. The talk was based, in part, on our recent article in Canadian Perspectives on Academic Integrity.

Wearable technologies integrate AI directly into our physical experience of the world. Rather than pulling out a device, users can access real-time information, transcription, and prompts seamlessly through their field of vision.

This shift introduces both opportunities and tensions:

  • Cognitive offloading: Learners can reduce mental load by accessing information instantly. (Phill Dawson has done some great work on cognitive offloading that I recommend reading.)
  • Enhanced presence: Wearables allow users to maintain eye contact and engagement without device distraction.
  • Efficiency gains: Tasks such as note-taking or translation can be automated in real time.

At the same time, these benefits come with real challenges, including information overload, privacy concerns, and technical limitations. More importantly for educators, they fundamentally disrupt assumptions about what it means to “know” something independently.

New Technology ≠ Cheating

One of the most important messages I emphasized is this: new technology does not automatically equal academic misconduct.

If a tool is permitted, then its use is not cheating. The real issue lies in unauthorized use or misuse in ways that create unfair advantage. 

We must also remain attentive to equity and accessibility. Some wearable technologies may be used as accommodations, making it essential that our integrity policies are inclusive and nuanced rather than rigid and punitive.

Designing for Integrity (Not Surveillance)

Rather than doubling down on detection, I encourage educators to shift their focus toward designing for integrity.

This means:

  • Prioritizing assessment validity: If an AI system can complete a task without genuine understanding, then the task itself needs to be rethought.
  • Moving beyond “gotcha” approaches: Surveillance-based strategies erode trust and are increasingly ineffective.
  • Supporting diverse learners: Students bring different technological access, needs, and experiences. Our designs must reflect that.
  • Building a culture of integrity: Integrity is not enforced; it is cultivated through meaningful learning experiences.

Bridging K–12 and Post-Secondary Education

Another key theme was the gap between K–12 and post-secondary expectations.

In K–12 environments, students are often encouraged to explore technology as part of their learning. In contrast, post-secondary institutions frequently operate under the assumption that students already understand complex academic integrity rules.

As AI continues to evolve, this gap becomes more pronounced. We need stronger alignment across educational sectors to ensure that students are supported, rather than being set up for failure, as they transition between systems. (Myke Healy has a great paper on the topic of GenAI in the K–12 context that is worth reading.)

Looking Ahead

If there is one takeaway from this experience, it is this: wearable AI is not a future scenario. It is already here.

As educators, we are being called to respond not with fear, but with thoughtful, research-informed approaches. The challenge is not simply to manage technology, but to reimagine teaching, learning, and assessment in ways that remain meaningful in an AI-integrated world.

Events like the National Day of Learning remind me of the power of community. Bringing educators together to share ideas, ask difficult questions, and explore new possibilities is essential as we navigate this rapidly changing landscape.

Thank you to Let’s Talk Science and to Dr. Alec Couros for the opportunity to be part of this important conversation, and to all the educators who continue to lead with curiosity, courage, and care.

______________

Share this post: Interfacing with the Future: Reflections on the National Day of Learning 2026 –  https://drsaraheaton.com/2026/04/01/interfacing-with-the-future-reflections-on-the-national-day-of-learning-2026/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Call for Proposals: Special issue on Postplagiarism and Generativism: Human-AI Hybrid Approaches to Ethical Teaching, Learning, and Assessment

March 17, 2026

Special Issue Call for Papers

Postplagiarism and Generativism: Human-AI Hybrid Approaches to Ethical Teaching, Learning, and Assessment

For publication in the Journal of University Teaching and Learning Practice

Guest editors

Background

Every new technology brings with it societal and moral panic (Orben, 2020). When the Internet first became popular, concerns about plagiarism increased. Even though there is scant empirical evidence that the Internet was actually responsible for increases in rates of plagiarism, the perception that new technology resulted in more academic cheating persisted (Panning Davies & Howard, 2016).

Some plagiarism scholars have been emphatic that the majority of student plagiarism cases stem not from an intent to deceive, but from a lack of academic literacy and poor academic practice, and have even advocated for removing plagiarism from academic misconduct policies in favour of increased student support (Howard, 1992; Jamieson & Howard, 2021). The idea that plagiarism could be decoupled from academic misconduct seems somewhat unlikely, but by the 2020s it was obvious to some that generative artificial intelligence (GenAI) would have an impact on writing, and by extension, on plagiarism (Mindzak & Eaton, 2021).

In response to these technological shifts, various frameworks have emerged to conceptualize academic integrity in the GenAI era. The postplagiarism framework, first introduced by Eaton (2021, 2023) and since discussed by scholars worldwide (Bali, 2023; Bagenal, 2024; Kenny, 2024), offers one approach. Other perspectives, such as Generativism (Pratschke, 2023), AI Literacy frameworks (Ng et al., 2021; Pretorius & Cahusac de Caux, 2024), and UNESCO’s Guidance for Generative AI in Education (2023), provide complementary or alternative viewpoints on similar phenomena.

Postplagiarism is based on six tenets (Eaton, 2023): (1) human-AI hybrid writing will become the norm; (2) creativity can be enhanced by AI; (3) AI can help to overcome language barriers; (4) we can outsource control of our writing to AI, but we do not outsource responsibility for what is written; (5) attribution remains important; and (6) historical definitions of plagiarism may require rethinking.

Empirical testing of these and related frameworks has shown differing levels of acceptance and application across educational contexts (Kumar, 2025).

Equity, Diversity, Inclusion, and Accessibility in a Postplagiarism Age

As higher education institutions aim to promote social justice through equity, diversity, and inclusion (EDI), the potential for GenAI to either break down or reinforce barriers related to linguistic, cultural, socioeconomic, and ability differences requires critical examination.

Assessment practices should be designed proactively to enable all students to demonstrate their learning without being unfairly disadvantaged by their personal characteristics or circumstances (Tai et al., 2022). Similarly, McDermott (2024) highlights the importance of considering accessibility, equity, and inclusion in assessment and academic integrity.

GenAI offers opportunities to enhance equity by providing personalized support, overcoming language barriers, and assisting learners with diverse needs. However, without careful implementation, it may exacerbate existing inequities through unequal access to technology, algorithmic biases, or assessment designs that privilege certain ways of knowing and communicating.

In this special issue, we propose to examine the broader question: “How are pedagogies, learning, and teaching approaches evolving in response to GenAI, and what frameworks best support ethical academic practice in a postplagiarism landscape?”

We invite researchers and practitioners to submit their original research papers exploring the transformation of teaching, learning, and assessment in a GenAI age. We welcome both theoretical and empirical contributions, including positions that may present contrasting viewpoints. Potential topics of interest include, but are not limited to:

  • New developments in postplagiarism, generativism, and other emerging frameworks for understanding academic integrity in the GenAI era
  • Empirical studies testing these frameworks in different contexts and disciplines
  • The use of these frameworks to design or reform academic misconduct policies and procedures
  • The relationship between GenAI, academic literacies, and related competencies (e.g., digital literacy, information literacy)
  • Pedagogical approaches that embrace GenAI while maintaining academic integrity
  • Case studies of successful integration of GenAI into teaching, learning, and assessment
  • Critical perspectives on the limitations or challenges of current approaches to GenAI in education
  • Position papers presenting new or alternative frameworks for understanding GenAI in teaching and learning

We particularly encourage submissions that engage in dialogue with existing frameworks, offering either supportive evidence or critical alternatives. Our goal is to foster a robust debate about the future of teaching and learning in a GenAI (and even a post-GenAI) world.

We welcome submissions from both established researchers and early-career scholars from diverse academic and cultural backgrounds. All submissions will be peer-reviewed by an international panel of experts. Accepted papers will be published in a special issue of the Journal of University Teaching and Learning Practice.

Types of publications accepted into this Special Issue

The types of publications that are eligible for acceptance into this Special Issue include:

  • Research papers
  • Review articles (e.g., systematic review or meta-analysis)
  • Case studies and evidence-based good practice examples

Developing a high-quality proposal

We recommend the creation of a single document in Word (.doc or .docx) format that contains the following:

  • Proposed article title
  • Proposed authors’ names, affiliations, and ORCID iDs
  • A clear evidence-based rationale for the line of inquiry proposed
  • Research question(s)
  • Proposed method (for both theoretical and empirical manuscripts)
  • Practice-based implications of the proposed research

The word limit for the proposal is 250 words (not including references) and is designed to give the Editorial Team a sense of the rigour of the manuscript proposed and the possible implications of such research. The Editorial Team may return with an invitation to combine similar manuscripts. Acceptance of proposals does not guarantee acceptance of final manuscripts.

Timeline

  • Proposals due – April 30, 2026
  • Proposal acceptance notifications: May 14, 2026
  • Full articles due: August 31, 2026

Submit your abstract via this online form: https://forms.gle/6sKjc2jkKGWCtGgw7

For further information contact Professor Sarah Elaine Eaton, University of Calgary.

References

Bagenal, J. (2024). Generative artificial intelligence and scientific publishing: Urgent questions, difficult answers. The Lancet, 403(10432), 1118–1120. https://doi.org/10.1016/S0140-6736(24)00416-1

Bali, M. (2023, March 3). Are we approaching a postplagiarism era? https://blog.mahabali.me/educational-technology-2/are-we-approaching-a-postplagiarism-era/

Eaton, S. E. (2021). Plagiarism in higher education: Tackling tough topics in academic integrity. Bloomsbury.

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1–10. https://doi.org/10.1007/s40979-023-00144-1

Howard, R. M. (1992). A plagiarism pentimento. Journal of Teaching Writing, 11(2), 233–245.

Orben, A. (2020). The Sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143–1157. https://doi.org/10.1177/1745691620919372


Stop wasting my time! AI Agents Infiltrate Scholarly Publishing

February 6, 2026

As the Editor-in-Chief of the International Journal for Educational Integrity, I have witnessed (and become super frustrated with) threats to academic publishing and research integrity from GenAI. Don’t get me wrong: I am not opposed to AI, but I have been clear in my research and writing that technology can be used in good and helpful ways or in ways that are unethical and inappropriate. Recently, our editorial office received a manuscript with the file name ‘Blinded manuscript generated by artificial intelligence.’

My reaction was, “Are you kidding me?! Well, that’s bold!” Although the honesty of the file name may be a rarity, the submission itself is symptomatic of a burgeoning crisis in academic publishing: the rise of ‘AI slop.’ Since the proliferation of large language models (LLMs), we have seen a dramatic increase in submissions. Now, I’m pretty sure that a portion of the manuscripts we are receiving are written entirely by AI agents or bots, sending submissions on behalf of authors.

ChatGPT-generated image: A puppet seated at a desk in an office, holding a printed document titled “Blinded manuscript generated by artificial intelligence.” The desk is covered with papers, a pair of glasses, a pen, and a coffee mug, with bookshelves and a bulletin board visible in the background.

As a journal editor, let me be clear: the volume of manuscripts you send out does not equate to their value to the readership. It is not that I oppose the use of AI carte blanche, but I do object to manuscripts prepared and sent by bots, with no human interaction in the process. If a manuscript does not bring value to our readers, it gets an immediate desk rejection, and for good reason.

The Problem with AI Slop in Research

Academic journals exist to advance the frontiers of human knowledge. A manuscript is expected to contribute new and original findings to scholarship and science. AI-generated papers, by their very nature, struggle to meet this requirement.

  • Lack of Empirical Depth: AI excels at synthesizing existing information but cannot conduct original fieldwork, clinical trials, or archival research. It mimics the structure of a study without performing the substance of it.
  • Axiological Misalignment: There is a gap between the automated generation of text and the values-driven process of human inquiry. Research requires a commitment to truth, ethics, and accountability, qualities a machine cannot possess.
  • The Echo Chamber Effect: These submissions often present fabricated or corrupted citations, or circular logic that offers little to no utility to the reader. They clutter the ecosystem without moving the needle on critical conversations.

Upholding the Integrity of the Record

Our editorial board remains committed to a rigorous peer-review process, but let’s be clear: the ‘publish or perish’ culture, now supercharged by GenAI, is threatening to overwhelm the very systems meant to ensure quality.

If an academic paper submitted for publication does not offer an original contribution, or if it lacks the human oversight necessary to guarantee its validity, it has no place in a scholarly journal. We are in a postplagiarism era where the focus must shift from merely detecting copied text to evaluating the originality of thought and the integrity of the research process. Postplagiarism does not mean that we throw out academic and research integrity or that ‘anything goes’. We recognize that co-creation with GenAI may be normal for some writers today. But having an AI agent write and submit manuscripts on your behalf wastes everyone’s time.

To our contributors: scholarship is a human endeavor. We value your insights, your unique perspectives, and your rigorous labour. In the meantime, we will continue with our commitment to quality, and I expect that the journal’s rejection rate will continue to be high as we focus on papers that bring value to our readership.

______________

Share this post: Stop wasting my time! AI Agents Infiltrate Scholarly Publishing – https://drsaraheaton.com/2026/02/06/stop-wasting-my-time-ai-agents-infiltrate-scholarly-publishing/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


ChatGPT is in classrooms. What now?

February 2, 2026

“What should we be assessing exactly?” This was a question one of our research participants asked when we interviewed them as part of our project on artificial intelligence and academic integrity, sponsored by a University of Calgary Teaching Grant.

In an article published in The Conversation, we provide highlights of the results from our interviews with 28 educators across Canada, as well as our analysis of 15 years of research that looked at how AI affects education. (Spoiler alert: AI is a double-edged sword for educators and there are no easy answers.)

Screenshot from The Conversation: a blurred smartphone screen with the ChatGPT app icon, under the headline, “ChatGPT is in classrooms. How should educators now assess student learning?”

We emphasize that, “in a post-plagiarism context, we consider that humans and AI co-writing and co-creating does not automatically equate to plagiarism.” Check out the full article in The Conversation.

You can check out the scholarly paper that we published in Assessment and Evaluation in Higher Education that goes into more detail about the methods and findings of our interviews.

I’d like to give a shoutout to all the project team members who worked with us on various aspects of this research: Robert (Bob) Brennan (Schulich School of Engineering, University of Calgary), Jason Weins (Faculty of Arts, University of Calgary), Brenda McDermott (Student Accessibility Services, University of Calgary), Rahul Kumar (Faculty of Education, Brock University), Beatriz Moya (Instituto de Éticas Aplicadas, Pontificia Universidad Católica de Chile) and the student research assistants who helped along the way (who have now all successfully graduated and moved on to the next phase of their careers): Jonathan Lesage, Helen Pethrick, and Mawuli Tay.

Related posts:

What Should We Be Assessing in a World with AI? Insights from Higher Education Educators – https://drsaraheaton.com/2025/11/25/what-should-we-be-assessing-in-a-world-with-ai-insights-from-higher-education-educators/

______________

Share this post: ChatGPT is in classrooms. What now? https://drsaraheaton.com/2026/02/02/chatgpt-is-in-classrooms-what-now/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.