From Courtrooms to Classrooms: Smart Glasses and Integrity in a Postplagiarism Era

by Sarah Elaine Eaton – March 18, 2026

A London judge recently concluded that a witness was receiving coached answers through a pair of smart glasses connected to his mobile phone during cross-examination (Jacobs, 2026). The case involved a routine insolvency dispute, but the technology at the centre of the judge’s findings was anything but routine. The witness, who gave evidence through a Lithuanian interpreter, was found to have been receiving audio from an unidentified caller routed through smart glasses paired to his handset. Once the glasses were removed, his phone began broadcasting a voice from his jacket pocket. The judge rejected the witness’s testimony in full, describing it as unreliable and untruthful.

The incident is instructive for those of us working at the intersection of technology, integrity, and institutional policy. It demonstrates that smart glasses do not need advanced AI capabilities to compromise a formal proceeding. Simple Bluetooth audio connectivity was sufficient.

In our recent paper (Eaton et al., 2026), we examined the implications of AI-enabled smart glasses for teaching, learning, assessment, and academic integrity. One of our central arguments applies here: the reflexive instinct to treat wearable technology as a cheating device, while understandable, risks missing the structural challenge these technologies present to the systems designed to ensure honest participation.

Courts, like universities, depend on observable behaviours and verifiable evidence to assess credibility and ensure procedural fairness. As we noted, AI glasses can embed cognitive or communicative assistance into a user’s perceptual field in ways that leave no external trace (Eaton et al., 2026). The London case illustrates what happens when that assistance leaves a trace, but only because something went wrong: the interpreter heard voices, and the phone began playing audio at the wrong moment.

The question this case raises is not whether courts should ban smart glasses. A blanket prohibition would create its own problems, particularly for individuals who depend on wearable technology for vision correction or accessibility. We argued that institutional responses should focus on redesigning processes rather than policing devices (Eaton et al., 2026). For courts, this means developing protocols for the use of wearable technology during testimony, much as we recommended that educational institutions establish centralized accommodation protocols for AI-enabled devices.

The London ruling also reinforces our observation that enforcement models built around detection are fragile. The coaching was discovered through a combination of the interpreter’s alertness, call log records, and the witness’s inability to explain the contact saved as “abra kadabra” on his phone. These are investigative tools, not systemic safeguards. As smart glasses become more common and more discreet, relying on detection alone will prove insufficient in both courtrooms and classrooms.

What this case calls for is not alarm but preparation. Institutions responsible for the integrity of formal proceedings, whether legal or academic, need forward-looking frameworks that address the capabilities of wearable technology before the next incident occurs. The technology is not going away. Our systems must adapt.

References

Eaton, S. E., Kumar, R., Dahal, B., Tang, G., Ramazanov, F., & Moya Figueroa, B. A. (2026). AI smart glasses and the future of academic integrity in a postplagiarism era. Canadian Perspectives on Academic Integrity, 9(1), 1–5. https://doi.org/10.55016/ojs/cpai.v9i1/82885

Jacobs, S. (2026, March 17). A London judge says a witness was being coached in real time through smart glasses. TechSpot. https://www.techspot.com/news/111710-london-judge-witness-coached-real-time-through-smart.html

____________

Cross posted from:

From Courtrooms to Classrooms: Smart Glasses and Integrity in a Postplagiarism Era – https://postplagiarism.com/2026/03/18/from-courtrooms-to-classrooms-smart-glasses-and-integrity-in-a-postplagiarism-era/


Call for Proposals: Special issue on Postplagiarism and Generativism: Human-AI Hybrid Approaches to Ethical Teaching, Learning, and Assessment

March 17, 2026

For publication in the Journal of University Teaching and Learning Practice

Guest editors

Background

Every new technology brings with it societal and moral panic (Orben, 2020). When the Internet first became popular, concerns about plagiarism increased. Even though there is scant empirical evidence that the Internet was actually responsible for increases in rates of plagiarism, the perception that new technology resulted in more academic cheating persisted (Panning Davies & Howard, 2016).

Some plagiarism scholars have been emphatic that the majority of student plagiarism cases stem not from an intent to deceive, but from a lack of academic literacy and poor academic practice; some have even advocated for removing plagiarism from academic misconduct policies in favour of increased student support (Howard, 1992; Jamieson & Howard, 2021). The idea that plagiarism could be decoupled from academic misconduct may seem unlikely, but by the 2020s it was clear to some that generative artificial intelligence (GenAI) would have an impact on writing and, by extension, on plagiarism (Mindzak & Eaton, 2021).

In response to these technological shifts, various frameworks have emerged to conceptualize academic integrity in the GenAI era. The postplagiarism framework, first introduced by Eaton (2021, 2023) and since discussed by scholars worldwide (Bali, 2023; Bagenal, 2024; Kenny, 2024), offers one approach. Other perspectives, such as Generativism (Pratschke, 2023), AI Literacy frameworks (Ng et al., 2021; Pretorius & Cahusac de Caux, 2024), and UNESCO’s Guidance for Generative AI in Education (2023), provide complementary or alternative viewpoints on similar phenomena.

Postplagiarism is based on six tenets (Eaton, 2023): (1) human-AI hybrid writing will become the norm; (2) creativity can be enhanced by AI; (3) AI can help to overcome language barriers; (4) we can outsource control of our writing to AI, but we do not outsource responsibility for what is written; (5) attribution remains important; and (6) historical definitions of plagiarism may require rethinking.

Empirical testing of these and related frameworks has shown differing levels of acceptance and application across educational contexts (Kumar, 2025).

Equity, Diversity, Inclusion, and Accessibility in a Postplagiarism Age

As higher education institutions aim to promote social justice through equity, diversity, and inclusion (EDI), GenAI holds the potential to either break down or reinforce barriers related to linguistic, cultural, socioeconomic, and ability differences; this dual potential requires critical examination.

Assessment practices should be designed proactively to enable all students to demonstrate their learning without being unfairly disadvantaged by their personal characteristics or circumstances (Tai et al., 2022). Similarly, McDermott (2024) highlights the importance of considering accessibility, equity, and inclusion in assessment and academic integrity.

GenAI offers opportunities to enhance equity by providing personalized support, overcoming language barriers, and assisting learners with diverse needs. However, without careful implementation, it may exacerbate existing inequities through unequal access to technology, algorithmic biases, or assessment designs that privilege certain ways of knowing and communicating.

In this special issue, we propose to examine the broader question: “How are pedagogies, learning, and teaching approaches evolving in response to GenAI, and what frameworks best support ethical academic practice in a postplagiarism landscape?”

We invite researchers and practitioners to submit their original research papers exploring the transformation of teaching, learning, and assessment in a GenAI age. We welcome both theoretical and empirical contributions, including positions that may present contrasting viewpoints. Potential topics of interest include, but are not limited to:

  • New developments in postplagiarism, generativism, and other emerging frameworks for understanding academic integrity in the GenAI era
  • Empirical studies testing these frameworks in different contexts and disciplines
  • The use of these frameworks to design or reform academic misconduct policies and procedures
  • The relationship between GenAI, academic literacies, and related competencies (e.g., digital literacy, information literacy)
  • Pedagogical approaches that embrace GenAI while maintaining academic integrity
  • Case studies of successful integration of GenAI into teaching, learning, and assessment
  • Critical perspectives on the limitations or challenges of current approaches to GenAI in education
  • Position papers presenting new or alternative frameworks for understanding GenAI in teaching and learning

We particularly encourage submissions that engage in dialogue with existing frameworks, offering either supportive evidence or critical alternatives. Our goal is to foster a robust debate about the future of teaching and learning in a GenAI (and even a post-GenAI) world.

We welcome submissions from both established researchers and early-career scholars from diverse academic and cultural backgrounds. All submissions will be peer-reviewed by an international panel of experts. Accepted papers will be published in a special issue of the Journal of University Teaching and Learning Practice.

Types of publications accepted into this Special Issue

The types of publications that are eligible for acceptance into this Special Issue include:

  • Research papers
  • Review articles (e.g., systematic review or meta-analysis)
  • Case studies and evidence-based good practice examples

Developing a high-quality proposal

We recommend the creation of a single document in Word (.doc or .docx) format that contains the following:

  • Proposed article title
  • Proposed authors’ names, affiliations, and ORCID iDs
  • A clear evidence-based rationale for the line of inquiry proposed
  • Research question(s)
  • Proposed method (for both theoretical and empirical manuscripts)
  • Practice-based implications of the proposed research

The word limit for the proposal is 250 words (not including references) and is designed to give the Editorial Team a sense of the rigour of the manuscript proposed and the possible implications of such research. The Editorial Team may return with an invitation to combine similar manuscripts. Acceptance of proposals does not guarantee acceptance of final manuscripts.

Timeline

  • Proposals due – April 30, 2026
  • Proposal acceptance notifications: May 14, 2026
  • Full articles due: August 31, 2026

Submit your proposal via this online form: https://forms.gle/6sKjc2jkKGWCtGgw7

For further information contact Professor Sarah Elaine Eaton, University of Calgary.

References

Bagenal, J. (2024). Generative artificial intelligence and scientific publishing: Urgent questions, difficult answers. The Lancet, 403(10432), 1118–1120. https://doi.org/10.1016/S0140-6736(24)00416-1

Bali, M. (2023, March 3). Are we approaching a postplagiarism era? https://blog.mahabali.me/educational-technology-2/are-we-approaching-a-postplagiarism-era/

Eaton, S. E. (2021). Plagiarism in higher education: Tackling tough topics in academic integrity. Bloomsbury.

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1–10. https://doi.org/10.1007/s40979-023-00144-1

Howard, R. M. (1992). A plagiarism pentimento. Journal of Teaching Writing, 11(2), 233–245.

Orben, A. (2020). The Sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143–1157. https://doi.org/10.1177/1745691620919372