What Should We Be Assessing in a World with AI? Insights from Higher Education Educators

November 25, 2025

The arrival of generative AI tools such as ChatGPT has disrupted how we think about assessment in higher education. As educators, we’re facing a critical question: What should we actually be assessing when students have access to these powerful tools?

Our recent study explored how 28 Canadian higher education educators are navigating this challenge. Through in-depth interviews, we discovered that educators are positioning themselves as “stewards of learning with integrity” – carefully drawing boundaries between acceptable and unacceptable uses of chatbots in student assessments.

Screenshot of an academic journal article header from Assessment & Evaluation in Higher Education, published by Routledge. The article title reads: “What should we be assessing exactly? Higher education staff narratives on gen AI integration of assessment in a postplagiarism era.” Authors listed are Sarah Elaine Eaton, Beatriz Antonieta Moya Figueroa, Brenda McDermott, Rahul Kumar, Robert Brennan, and Jason Wiens, with institutional affiliations including University of Calgary, Pontificia Universidad Católica de Chile, Brock University, and others. The DOI link is visible at the top: https://doi.org/10.1080/02602938.2025.2587246.

Where Educators Found Common Ground

Across disciplines, participants agreed that prompting skills and critical thinking are appropriate to assess with chatbot integration. Prompting requires students to demonstrate foundational knowledge, clear communication skills, and ethical principles like transparency and respect. Critical thinking assessments can leverage chatbots’ current limitations – their unreliable arguments, weak fact-checking, and inability to explain reasoning – positioning students as evaluators of AI-generated content.

The Nuanced Territory of Writing Assessment

Writing skills proved far more controversial. Educators accepted chatbot use for brainstorming (generating initial ideas) and editing (grammar checking after independent writing), but only under specific conditions: students must voice their own ideas, complete the core writing independently, and critically evaluate any AI suggestions.

Notably absent from discussions was the composition phase – the actual process of developing and organizing original arguments. This silence suggests educators view composition as distinctly human cognitive work that should remain student-generated, even as peripheral tasks might accommodate technological assistance.

Broader Concerns

Participants raised important challenges beyond specific skill assessments: language standardization that erases student voice, potential for overreliance on AI, blurred authorship boundaries, and untraceable forms of academic misconduct. Many emphasized that students training to become professional communicators shouldn’t rely on AI for core writing tasks.

Moving Forward

Our findings suggest that ethical AI integration in assessment requires more than policies; it demands ongoing conversations about what makes learning authentic in technology-mediated environments. Educators need support in identifying which ‘cognitive offloads’ are appropriate, understanding how AI works, and building students’ evaluative judgment skills.

The key insight? Assessment in the AI era isn’t about banning technology, but about distinguishing between tasks where AI can enhance learning and those where independent human cognition remains essential. As one participant reflected: we must continue asking ourselves, “What should we be assessing exactly?”

The postplagiarism era requires us to protect academic standards while preparing students for technology-rich professional environments – a delicate balance that demands ongoing dialogue, flexibility, and our commitment to learning and student success.

Read the full article: https://doi.org/10.1080/02602938.2025.2587246

______________

Share this post: What Should We Be Assessing in a World with AI? Insights from Higher Education Educators – https://drsaraheaton.com/2025/11/25/what-should-we-be-assessing-in-a-world-with-ai-insights-from-higher-education-educators/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Breaking Barriers: Academic Integrity and Neurodiversity

November 20, 2025

When we talk about academic integrity in universities, we often focus on preventing plagiarism and cheating. But what if our very approach to enforcing these standards is unintentionally creating barriers for some of our most vulnerable students?

My recent research explores how current academic integrity policies and practices can negatively affect neurodivergent students—those with conditions like ADHD, dyslexia, Autism, and other learning differences. Our existing systems, structures, and policies can further marginalize students with cognitive differences.

The Problem with One-Size-Fits-All

Neurodivergent students face unique challenges that can be misunderstood or ignored. A dyslexic student who struggles with citation formatting isn’t necessarily being dishonest. They may be dealing with cognitive processing differences that make these tasks genuinely difficult. A student with ADHD who has trouble managing deadlines and tracking sources is not necessarily lazy or unethical. They may be navigating executive function challenges that affect time management and organization. Yet our policies frequently treat these struggles as potential misconduct rather than as differences that deserve support.

The Technology Paradox for Neurodivergent Students

Technology presents a particularly thorny paradox. On one hand, AI tools such as ChatGPT and text-to-speech software can be academic lifelines for neurodivergent students, helping them organize thoughts, overcome writer’s block, and express ideas more clearly. These tools can genuinely level the playing field.

On the other hand, the same technologies designed to catch cheating—especially AI detection software—appear to disproportionately flag neurodivergent students’ work. Autistic students or those with ADHD may be at higher risk of false positives from these detection tools, potentially facing misconduct accusations even when they have done their own work. This creates an impossible situation: the tools that help are the same ones that might get students in trouble.

Moving Toward Epistemic Plurality

So what’s the solution? Epistemic plurality, or recognizing that there are multiple valid ways of knowing and expressing knowledge. Rather than demanding everyone demonstrate learning in the exact same way, we should design assessments that allow for different cognitive styles and approaches.

This means:

  • Rethinking assessment design to offer multiple ways for students to demonstrate knowledge
  • Moving away from surveillance technologies like remote proctoring that create anxiety and accessibility barriers
  • Building trust rather than suspicion into our academic cultures
  • Recognizing accommodations as equity, not as “sanctioned cheating”
  • Designing universally, so accessibility is built in from the start rather than added as an afterthought

What This Means for the Future

In the postplagiarism era, where AI and technology are seamlessly integrated into education, we move beyond viewing academic integrity purely as rule-compliance. Instead, we focus on authentic and meaningful learning and ethical engagement with knowledge.

This does not mean abandoning standards. It means recognizing that diverse minds may meet those standards through different pathways. A student who uses AI to help structure an essay outline isn’t necessarily cheating. They may be using assistive technology in much the same way another student might use spell-check or a calculator.

Call to Action

My review of existing research showed something troubling: we have remarkably little data about how neurodivergent students experience academic integrity policies. The studies that exist are small, limited to English-speaking countries, and often overlook the voices of neurodivergent individuals themselves.

We need larger-scale research, global perspectives, and most importantly, we need neurodivergent students to be co-researchers and co-authors in work about them. “Nothing about us without us” is not just a slogan, but a call to action for creating inclusive academic environments.

Key Messages

Academic integrity should support learning, not create additional barriers for students who already face challenges. By reimagining our approaches through a lens of neurodiversity and inclusion, we can create educational environments where all students can thrive while maintaining academic standards.

Academic integrity includes and extends beyond student conduct; it means that everyone in the learning system acts with integrity to support student learning. Ultimately, there can be no integrity without equity.

Read the whole article here:
Eaton, S. E. (2025). Neurodiversity and academic integrity: Toward epistemic plurality in a postplagiarism era. Teaching in Higher Education. https://doi.org/10.1080/13562517.2025.2583456

______________

Share this post: Breaking Barriers: Academic Integrity and Neurodiversity – https://drsaraheaton.com/2025/11/20/breaking-barriers-academic-integrity-and-neurodiversity/



Latest IJEI article is out! “Exploring the nexus of academic integrity and artificial intelligence in higher education: a bibliometric analysis” 

August 29, 2025

One of the great joys of being a journal editor is getting to share good news when a new article is published. I am going to make more of an effort to do this on my blog because the International Journal for Educational Integrity is a high-quality (Q1) journal with lots to offer when it comes to academic integrity. We accept only about 10% of manuscripts submitted to the journal, so having an article published is a great achievement!

Check out the latest article, “Exploring the nexus of academic integrity and artificial intelligence in higher education: a bibliometric analysis” by Daniela Avello and Samuel Aranguren Zurita.

The image shows a webpage from the International Journal for Educational Integrity, part of Springer Nature. The header includes navigation links for Home, About, Articles, and Submission Guidelines, along with a "Submit manuscript" button. The featured article is titled "Exploring the nexus of academic integrity and artificial intelligence in higher education: a bibliometric analysis" by Daniela Avello and Samuel Aranguren Zurita. It is marked as open access, published on 29 August 2025, and appears in volume 21, article number 24. Citation options are available at the bottom.

Abstract

Background

Artificial intelligence has created new opportunities in higher education, enhancing teaching and learning methods for both students and educators. However, it has also posed challenges to academic integrity.

Objective

To describe the evolution of scientific production on academic integrity and artificial intelligence in higher education.

Methodology

A bibliometric analysis was carried out using VOSviewer software and the Bibliometrix package in R. A total of 467 documents published between 2017 and 2025, retrieved from the Web of Science database, were analyzed.

Results

The analysis reveals a rapid expansion of the field, with an annual growth rate of 71.97%, concentrated in journals specializing in education, academic ethics, and technology. The field has evolved from a focus on the use of artificial intelligence in dishonest practices to the study of its integration in higher education. Four main lines of research were identified: the impact and adoption of artificial intelligence, implications for students, academic dishonesty, and associated psychological factors.

Conclusions

The field is at an early stage of development but is expanding rapidly, albeit with fragmented evolution, limited collaboration between research teams, and high editorial dispersion. The analysis shows a predominance of descriptive approaches, leaving room for the development of theoretical frameworks.

Originality or value

This study provides an updated overview of the evolution of research on artificial intelligence and academic integrity, identifying trends, collaborations, and conceptual gaps. It highlights the need to promote theoretical reflection to guide future practice and research on the ethical use of artificial intelligence in higher education.
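The 71.97% figure reported in the abstract is an annual growth rate of publication counts, the standard metric that bibliometric tools report. For readers curious how such a rate is derived, here is a minimal sketch in Python (the yearly counts below are invented purely for illustration; the study itself used VOSviewer and the Bibliometrix package in R, and its actual data are not reproduced here):

```python
def annual_growth_rate(counts):
    """Compound annual growth rate (%) across a list of yearly publication counts."""
    first, last = counts[0], counts[-1]
    periods = len(counts) - 1  # number of year-over-year intervals
    return ((last / first) ** (1 / periods) - 1) * 100

# Invented yearly counts for 2017-2025 -- illustrative only, not the study's data
counts = [2, 3, 5, 8, 14, 30, 95, 160, 150]
print(f"Annual growth rate: {annual_growth_rate(counts):.2f}%")
```

The formula compounds geometrically rather than averaging simple year-over-year percentages, which is why a field can show a high annual growth rate even if output dips in the final year.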

Check out the full article here.

________________________

Share this post: Latest IJEI article is out! “Exploring the nexus of academic integrity and artificial intelligence in higher education: a bibliometric analysis” – https://drsaraheaton.com/2025/08/29/latest-ijei-article-is-out-exploring-the-nexus-of-academic-integrity-and-artificial-intelligence-in-higher-education-a-bibliometric-analysis/



When Good Ideas Meet Poor Execution: The Humane AI Pin and the Future of Language Translation

May 18, 2025

One of the tenets of postplagiarism is that artificial intelligence technologies will help us overcome language barriers and understand each other in countless languages (Eaton, 2023).

We already have apps that translate text from photos taken on our phones. These apps help when travelling in countries where you don’t speak the language. Now we have applications extending this idea further into wearable technology.

Wearable technology has existed for years. We wear fitness gadgets on our wrists to track steps. AI technology will become more embedded into the software that drives these devices.

New wearable devices have emerged quickly, with varying levels of success. One example was introduced about a year after ChatGPT was released. The company was called Humane and the device was powered by OpenAI technology.

The Humane pin was wearable technology that included a square-shaped pin and a battery pack that attached magnetically to your shirt or jacket. It was marketed as enabling users to communicate in just about any language (Pierce, 2023). To Star Trek fans, the resemblance to a communicator badge was unmistakable.

The device retailed for US$700 and required a software subscription of US$24 per month, which provided data coverage for real-time use; the device ran proprietary software on a Snapdragon processor (Pierce, 2023). The device only worked with the T-Mobile network in the United States. Since I live in Canada and T-Mobile isn’t available here, I never bought one.

Like others, I watched with enthusiasm, hoping the product would succeed so it could expand to other markets. Pre-order sales indicated huge potential for success. By late 2023, the Humane pin was heralded as “Silicon Valley’s ‘next big thing'” (Chokkattu, 2025a). (I can’t help but wonder if the resemblance to a Star Trek communicator badge was part of the allure.)

A person wearing a light blue dress shirt and a dark blue suit jacket. The shirt has a button labeled 'A7' on the collar. Attached to the collar is a small, square electronic device with a screen displaying an icon of a circular arrow, indicating a loading or refresh symbol. The background features an out-of-focus world map.

When tech enthusiasts received the product in 2024, the reviews were dismal. One reviewer gave it 4 out of 10 and called it a “party trick” (Chokkattu, 2024). (Ouch.) The Humane pin did not live up to its promises. Less than a year after its release, the device was dead. HP acquired the company and retired the product at the end of February 2025.

Tech writer Julian Chokkattu declared the device e-waste and suggested it could be used as a paperweight or stored in a box in the attic. Chokkattu (2025b) writes, “In 50 years, you’ll accidentally find it in the attic and then tell your grandkids how this little gadget was once—for a fleeting moment—supposed to be the next big thing.”

Learning from Failure: The Promise Remains

The failure of the Humane AI Pin does not invalidate the vision of AI-powered real-time translation. The device failed because of execution problems—poor battery life, overheating, an annoying projector interface, and limited functionality (Chokkattu, 2024). The core AI translation capabilities were among the features that actually worked.

Real-time translation represents one of the most compelling applications of generative AI. When the technology works seamlessly, it can transform human communication. The Humane pin showed us what not to do: create a standalone device with too many functions, none executed well.

The future of AI translation likely lies not in dedicated hardware but in integration with devices we already use. Our smartphones, earbuds, and smart glasses will become the vehicles for breaking down language barriers. The underlying AI models continue to improve rapidly, and the infrastructure for real-time translation grows more robust.

The Humane pin’s failure teaches us that good ideas require good execution. But we should not abandon the goal of using AI to help humans understand each other across languages. That goal remains as important as ever in our increasingly connected world. The technology will improve, the interfaces will become more intuitive, and the promise of the postplagiarism tenet—that language barriers will begin to disappear—will eventually be realized.

The Humane AI pin may be dead, but we should keep our hope alive that AI technology will help us overcome language barriers and provide new opportunities for communication.

Live long and prosper.

References

Chokkattu, J. (2024, April 11). Review: Humane Ai Pin. Wired. https://www.wired.com/review/humane-ai-pin/

Chokkattu, J. (2025a, February 22). The Humane Ai Pin Will Become E-Waste Next Week. Wired. https://www.wired.com/story/humane-ai-pin-will-become-e-waste-next-week/

Chokkattu, J. (2025b, February 28). What to Do With Your Defunct Humane Ai Pin. Wired. https://www.wired.com/story/what-to-do-with-your-humane-ai-pin/

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1–10. https://doi.org/10.1007/s40979-023-00144-1 

Pierce, D. (2023, November 9). Humane officially launches the AI Pin, its OpenAI-powered wearable. The Verge. https://www.theverge.com/2023/11/9/23953901/humane-ai-pin-launch-date-price-openai 

Note: This is a re-post of a piece originally posted on the Postplagiarism blog.

________________________

Share this post: When Good Ideas Meet Poor Execution: The Humane AI Pin and the Future of Language Translation – https://drsaraheaton.com/2025/05/18/when-good-ideas-meet-poor-execution-the-humane-ai-pin-and-the-future-of-language-translation/



Postplagiarism as a Blueprint for Academic Integrity in an AI Age

April 28, 2025

The landscape of academic integrity continues to evolve. Don’t get me wrong. There are timeless aspects to academic integrity that remain constant, like everyone in the educational ecosystem following established expectations that are clearly communicated and supported.

Having said that, our world has changed a lot since COVID-19. Digital learning is pretty much embedded into the educational systems of every high-income country and many others, too.

Our approach to plagiarism and academic misconduct must evolve with new developments in technology. The traditional model—focused on catching and punishing—has reached its limits. With a postplagiarism framework we can prepare students for their future while honouring their dignity.

Moving Beyond Detection and Punishment

The plagiarism detection industry grew from legitimate concerns about academic misconduct. However, this approach positions students as potential cheaters rather than emerging scholars. Detection software creates an atmosphere of suspicion rather than trust. Students submit work feeling anxious about false positives rather than proud of their learning.

Universities spend millions (billions?) on detection services annually. These resources could support student learning instead. What if we redirected these funds toward writing centers, tutoring programs, and faculty development?

Students as Partners in Academic Integrity

A postplagiarism approach positions students as partners. They help develop academic integrity policies. They contribute to classroom discussions about citation practices. They mentor peers in proper source use.

Student partnership requires trust. Faculty must believe students want to succeed honestly. Students must trust faculty to guide rather than police. This mutual trust creates space for authentic learning.

Students who participate in policy development understand expectations better. They develop ownership of academic integrity standards. These experiences prepare them for professional environments where ethical conduct matters.

Preserving Dignity in Digital Learning

Technology changes how we learn and create knowledge. AI writing tools now generate sophisticated text. Students need skills to use these tools ethically.

A postplagiarism approach acknowledges this reality. Rather than banning technology, we teach students to use it responsibly. We help them understand when AI assistance is appropriate and when independent work matters.

Preserving dignity means treating students as capable decision-makers. They need practice making ethical choices about technology use. Our guidance should focus on developing judgment rather than following rules.

Preparing Students for Tomorrow’s Challenges

Today’s students will work in environments transformed by automation and AI. Their value will come from distinctly human capabilities—critical thinking, creativity, collaboration, and ethical reasoning.

Mechanical citation skills matter less than genuine attribution. Students need to evaluate sources critically, synthesize diverse perspectives, and contribute original insights. A postplagiarism framework prioritizes these higher-order skills.

Assessment methods can evolve accordingly. Assignments that ask students to demonstrate their thinking process resist plagiarism naturally. Projects requiring personal reflection or real-world application showcase authentic learning.

A Blueprint for Change

Practical steps toward a postplagiarism future include:

  1. Redesign assessments to emphasize process over product
  2. Involve students in academic integrity policy development
  3. Teach technology literacy alongside information literacy
  4. Invest in support systems rather than detection systems
  5. Create classroom cultures that value original thinking

This blueprint requires institutional commitment. Faculty need professional development opportunities. Administrators need courage to question established practices. Students need meaningful involvement in governance.

Conclusion

A postplagiarism framework offers hope. It acknowledges technological reality while preserving educational values. It treats students as partners rather than suspects. It prepares graduates who understand integrity as professional responsibility rather than compliance obligation.

The future of education requires this shift. Our students deserve learning environments that honor their dignity, nurture their capabilities, and prepare them for tomorrow’s challenges. By moving beyond plagiarism detection toward partnership, we create educational experiences worthy of their potential.

________________________

Share this post: Postplagiarism as a Blueprint for Academic Integrity in an AI Age – https://drsaraheaton.com/2025/04/28/postplagiarism-as-a-blueprint-for-academic-integrity-in-an-ai-age/
