“What should we be assessing exactly?” This was a question one of our research participants asked when we interviewed them as part of our project on artificial intelligence and academic integrity, sponsored by a University of Calgary Teaching Grant.
In an article published in The Conversation, we provide highlights of the results of our interviews with 28 educators across Canada, as well as our analysis of 15 years of research on how AI affects education. (Spoiler alert: AI is a double-edged sword for educators and there are no easy answers.)
We emphasize that, “in a post-plagiarism context, we consider that humans and AI co-writing and co-creating does not automatically equate to plagiarism.” Check out the full article in The Conversation.
The scholarly paper we published in Assessment & Evaluation in Higher Education goes into more detail about the methods and findings of our interviews.
I’d like to give a shoutout to all the project team members who worked with us on various aspects of this research: Robert (Bob) Brennan (Schulich School of Engineering, University of Calgary), Jason Wiens (Faculty of Arts, University of Calgary), Brenda McDermott (Student Accessibility Services, University of Calgary), Rahul Kumar (Faculty of Education, Brock University), Beatriz Moya (Instituto de Éticas Aplicadas, Pontificia Universidad Católica de Chile) and the student research assistants who helped along the way (who have now all successfully graduated and moved on to the next phase of their careers): Jonathan Lesage, Helen Pethrick, and Mawuli Tay.
When we talk about academic integrity in universities, we often focus on preventing plagiarism and cheating. But what if our very approach to enforcing these standards is unintentionally creating barriers for some of our most vulnerable students?
My recent research explores how current academic integrity policies and practices can negatively affect neurodivergent students—those with conditions like ADHD, dyslexia, Autism, and other learning differences. Our existing systems, structures, and policies can further marginalize students with cognitive differences.
The Problem with One-Size-Fits-All
Neurodivergent students face unique challenges that can be misunderstood or ignored. A dyslexic student who struggles with citation formatting isn’t necessarily being dishonest. They may be dealing with cognitive processing differences that make these tasks genuinely difficult. A student with ADHD who has trouble managing deadlines and tracking sources is not necessarily lazy or unethical. They may be navigating executive function challenges that affect time management and organization. Yet our policies frequently treat these struggles as potential misconduct rather than as differences that deserve support.
The Technology Paradox for Neurodivergent Students
Technology presents a particularly thorny paradox. On one hand, AI tools such as ChatGPT and text-to-speech software can be academic lifelines for neurodivergent students, helping them organize thoughts, overcome writer’s block, and express ideas more clearly. These tools can genuinely level the playing field.
On the other hand, the same technologies designed to catch cheating—especially AI detection software—appear to disproportionately flag neurodivergent students’ work. Autistic students or those with ADHD may be at higher risk of false positives from these detection tools, potentially facing misconduct accusations even when they have done their own work. This creates an impossible situation: the tools that help are the same ones that might get students in trouble.
Moving Toward Epistemic Plurality
So what’s the solution? Epistemic plurality, or recognizing that there are multiple valid ways of knowing and expressing knowledge. Rather than demanding everyone demonstrate learning in the exact same way, we should design assessments that allow for different cognitive styles and approaches.
This means:
Rethinking assessment design to offer multiple ways for students to demonstrate knowledge
Moving away from surveillance technologies like remote proctoring that create anxiety and accessibility barriers
Building trust rather than suspicion into our academic cultures
Designing universally, so accessibility is built in from the start rather than added as an afterthought
What This Means for the Future
In the postplagiarism era, where AI and technology are seamlessly integrated into education, we move beyond viewing academic integrity purely as rule-compliance. Instead, we focus on authentic and meaningful learning and ethical engagement with knowledge.
This does not mean abandoning standards. It means recognizing that diverse minds may meet those standards through different pathways. A student who uses AI to help structure an essay outline isn’t necessarily cheating. They may be using assistive technology in much the same way another student might use spell-check or a calculator.
Call to Action
My review of existing research showed something troubling: we have remarkably little data about how neurodivergent students experience academic integrity policies. The studies that exist are small, limited to English-speaking countries, and often overlook the voices of neurodivergent individuals themselves.
We need larger-scale research, global perspectives, and most importantly, we need neurodivergent students to be co-researchers and co-authors in work about them. “Nothing about us without us” is not just a slogan, but a call to action for creating inclusive academic environments.
Key Messages
Academic integrity should support learning, not create additional barriers for students who already face challenges. By reimagining our approaches through a lens of neurodiversity and inclusion, we can create educational environments where all students can thrive while maintaining academic standards.
Academic integrity includes and extends beyond student conduct; it means that everyone in the learning system acts with integrity to support student learning. Ultimately, there can be no integrity without equity.
Read the whole article here: Eaton, S. E. (2025). Neurodiversity and academic integrity: Toward epistemic plurality in a postplagiarism era. Teaching in Higher Education. https://doi.org/10.1080/13562517.2025.2583456
885 university students in Jordan “generally viewed AI use for tasks such as translation, literature reviews, and exam preparation as ethically acceptable, whereas using AI to cheat or fully complete assignments was widely regarded as unacceptable.”
This cross-sectional study examined artificial intelligence usage patterns and ethical awareness among 885 higher education students across various disciplines. Findings showed how Jordanian university students engage with AI tools like ChatGPT in their academic work.
Key Findings
High AI Adoption: A substantial 78.1% of students reported using AI during their studies, with approximately half using it weekly or daily. ChatGPT emerged as the most popular tool (85.2%), primarily used for answering academic questions (53.9%) and completing assignments (46.4%).
Knowledge Gaps: Although 57.5% considered themselves moderately to very knowledgeable about AI, only 44% were familiar with ethical guidelines. Notably, 41.8% were completely unaware of principles guiding AI use, revealing a significant gap between usage and ethical understanding.
Disciplinary Differences: Science and engineering students demonstrated the highest usage rates and knowledge levels, while humanities students showed lower engagement but expressed the strongest interest in training. Health sciences students displayed greater ethical concerns, possibly reflecting the high-stakes nature of their field.
Ethical Perceptions: Students generally viewed AI use for translation, proofreading, literature reviews, and exam preparation as acceptable. However, 39.8% had witnessed unethical AI use, primarily involving cheating or total dependence on AI. Only 35% expressed concern about ethical implications, suggesting many may not fully recognize potential risks.
Demographic Patterns: Female students demonstrated higher ethical awareness than males. Older students and those in advanced programs (particularly PhD students) showed greater AI knowledge and ethical consciousness, with each additional year of age correlating with increased awareness scores.
Training Needs: More than three quarters (76.7%) of students expressed interest in professional training on ethical AI use, with 83.7% agreeing that guidance is necessary. However, 46.6% indicated their institutions had not provided adequate support (which should surprise exactly no one, since similar results have been reported in other studies).
Implications
The authors call for Jordanian universities to develop clear, discipline-specific ethical guidelines and structured training programs. They recommend implementing mandatory online modules, offering discipline-tailored workshops, and establishing dedicated AI ethics bodies to promote responsible use. These findings underscore a broader challenge facing higher education globally: ensuring students can leverage AI’s benefits while maintaining academic integrity and developing critical thinking skills.
I am always excited to hear about new work that showcases postplagiarism. Imagine my dismay, then, when I read a new article, published in an (allegedly) peer-reviewed journal, that foregrounded the tenets of postplagiarism but was rife with fabricated sources, including references attributed to me for work that I never wrote.
I have opted not to ‘name and shame’ the authors. Anyone who is curious enough need only do an Internet search to find the offending article and those who wrote it.
Instead, I prefer to take a more productive approach. Here I provide a brief timeline of the development of postplagiarism as both a framework and a theory:
2021: My book Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity begins with a history of plagiarism, moves to plagiarism in modern times, and concludes with a chapter contemplating the future of plagiarism. Building on the scholarship of Rebecca Moore Howard, I proposed that the age of generative artificial intelligence (Gen AI) could launch us into a post-plagiarism era in which human-AI hybrid writing becomes the norm.

2023: I formalized postplagiarism and its tenets in a peer-reviewed article in the International Journal for Educational Integrity (Eaton, 2023).
2024: Dr. Rahul Kumar (Brock University, Canada) and I launch our website, http://www.postplagiarism.com, where we provide open access resources free of charge. Thanks to the generosity of multilingual colleagues and friends, we offer translations of the postplagiarism infographic in multiple languages.
Also, in this year, Rahul Kumar begins a study to test the tenets of postplagiarism.
If you see references to postplagiarism as we have conceptualized it that pre-date the timeline above, dig deeper to verify that the work is real. There are now fabricated sources circulating on the Internet that do not exist and never did.
Imitation, as the saying goes, is the sincerest form of flattery. That quip has sometimes been used to dismiss plagiarism concerns, on the grounds that students learn by imitating great writers, even when they quote them without attribution. The saying taps into cultural and historical understandings that are beyond the scope of a blog post. What I can say is that in the postplagiarism era, fabrication is not the new flattery.
One of the tenets of postplagiarism is that humans can relinquish control over what they write to an AI, but we do not relinquish responsibility. The irony of seeing fabricated references about postplagiarism in a published article is as absurd as it is puzzling. There is no need to fabricate references to postplagiarism, especially since we provide numerous free, open-access resources and research on the topic.
One of the tenets of postplagiarism is that artificial intelligence technologies will help us overcome language barriers and understand each other in countless languages (Eaton, 2023).
We already have apps that translate text from photos taken on our phones. These apps help when travelling in countries where you don’t speak the language. Now we have applications extending this idea further into wearable technology.
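To make the idea concrete, here is a minimal sketch of the two-stage shape these apps share: optical character recognition (OCR) to pull the text out of the photo, followed by a machine translation model. This is an illustration only, not how any particular app is implemented; the library choices (pytesseract and Hugging Face transformers), the French-to-English model, and the menu_photo.jpg file are all assumptions of mine.

```python
# Minimal sketch of a photo-translation pipeline (illustration only).
# Assumes Tesseract OCR (via pytesseract) and Hugging Face transformers
# are installed; the model and the image file are hypothetical choices.
from PIL import Image
import pytesseract
from transformers import pipeline

def translate_photo(image_path: str) -> str:
    """Recognize French text in a photo and return an English translation."""
    # Step 1: OCR pulls the printed text out of the image.
    french_text = pytesseract.image_to_string(Image.open(image_path), lang="fra")
    # Step 2: a translation model converts the recognized text to English.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")
    return translator(french_text)[0]["translation_text"]

print(translate_photo("menu_photo.jpg"))  # e.g., a photo of a restaurant menu
```

Production apps typically run these steps on-device or in the cloud, but the recognize-then-translate shape is the same.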
Wearable technology has existed for years. We wear fitness gadgets on our wrists to track steps. AI technology will become more embedded into the software that drives these devices.
New wearable devices have emerged quickly, with varying levels of success. One example was introduced about a year after ChatGPT was released. The company was called Humane and the device was powered by OpenAI technology.
The Humane pin was wearable technology that included a square-shaped pin and a battery pack that attached magnetically to your shirt or jacket. It was marketed as enabling users to communicate in just about any language (Pierce, 2023). To Star Trek fans, the resemblance to a communicator badge was unmistakable.
The device retailed for $700 USD and required a $24 USD monthly software subscription, which provided cellular data coverage for real-time use; the device itself ran Humane’s proprietary software on a Snapdragon processor (Pierce, 2023). It worked only on the T-Mobile network in the United States. Since I live in Canada and T-Mobile isn’t available here, I never bought one.
Like others, I watched with enthusiasm, hoping the product would succeed so it could expand to other markets. Pre-order sales indicated huge potential for success. By late 2023, the Humane pin was heralded as “Silicon Valley’s ‘next big thing'” (Chokkattu, 2025a). (I can’t help but wonder if the resemblance to a Star Trek communicator badge was part of the allure.)
When tech enthusiasts received the product in 2024, the reviews were dismal. One reviewer gave it 4 out of 10 and called it a “party trick” (Chokkattu, 2024). (Ouch.) The Humane pin did not live up to its promises. Less than a year after its release, the device was dead. HP acquired the company and retired the product at the end of February 2025.
Tech writer Julian Chokkattu declared the device was e-waste and suggested it could be used as a paperweight or stored in a box in the attic. Chokkattu (2025b) says, “In 50 years, you’ll accidentally find it in the attic and then tell your grandkids how this little gadget was once—for a fleeting moment—supposed to be the next big thing.”
Learning from Failure: The Promise Remains
The failure of the Humane AI Pin does not invalidate the vision of AI-powered real-time translation. The device failed because of execution problems—poor battery life, overheating, an annoying projector interface, and limited functionality (Chokkattu, 2024). The core AI translation capabilities were among the features that actually worked.
Real-time translation represents one of the most compelling applications of generative AI. When the technology works seamlessly, it can transform human communication. The Humane pin showed us what not to do: create a standalone device with too many functions, none executed well.
The future of AI translation likely lies not in dedicated hardware but in integration with devices we already use. Our smartphones, earbuds, and smart glasses will become the vehicles for breaking down language barriers. The underlying AI models continue to improve rapidly, and the infrastructure for real-time translation grows more robust.
The Humane pin’s failure teaches us that good ideas require good execution. But we should not abandon the goal of using AI to help humans understand each other across languages. That goal remains as important as ever in our increasingly connected world. The technology will improve, the interfaces will become more intuitive, and the promise of the postplagiarism tenet—that language barriers will begin to disappear—will eventually be realized.
The Humane AI pin may be dead, but we should keep our hope alive that AI technology will help us overcome language barriers and provide new opportunities for communication.
Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1–10. https://doi.org/10.1007/s40979-023-00144-1
Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.