Interfacing with the Future: Reflections on the National Day of Learning 2026

April 1, 2026

On March 28, 2026, I had the pleasure of joining educators from across Canada for the National Day of Learning, hosted by Let’s Talk Science. This one-day, nationwide professional learning event brought together K–12 teachers, post-secondary educators, and policy leaders to explore some of the most pressing issues shaping education today, with artificial intelligence high on the agenda.

I was invited to deliver a session titled “Interfacing with the Future: Wearable AI and Academic Integrity for K–12 and Higher Ed.” What follows are a few reflections and key ideas from that conversation, which was hosted by Dr. Alec Couros.

Moving into the Postplagiarism Era

One of the central ideas framing my talk was postplagiarism: a reality in which artificial intelligence is no longer an external tool that students occasionally use, but is embedded in everyday life and learning.

Students are already engaging with AI in ways that challenge traditional notions of authorship, originality, and academic work. The question is no longer whether students will use AI, but how.

This shift requires a corresponding change in how we think about academic integrity. Detection and surveillance, long relied upon as primary strategies, are no longer sufficient. Instead, we must rethink how we design learning environments that foster integrity from the ground up.

From Tools to Wearables: How AI Is Advancing

A key focus of my presentation was the rapid evolution from AI tools to AI wearables — particularly smart glasses and other forms of cosmetically invisible interfaces. The talk was based, in part, on our recent article in Canadian Perspectives on Academic Integrity.

Wearable technologies integrate AI directly into our physical experience of the world. Rather than pulling out a device, users can access real-time information, transcription, and prompts seamlessly through their field of vision.

This shift introduces both opportunities and tensions:

  • Cognitive offloading: Learners can reduce mental load by accessing information instantly. (Phill Dawson has done some great work on cognitive offloading that I recommend reading.)
  • Enhanced presence: Wearables allow users to maintain eye contact and engagement without device distraction.
  • Efficiency gains: Tasks such as note-taking or translation can be automated in real time.

At the same time, these benefits come with real challenges, including information overload, privacy concerns, and technical limitations. More importantly for educators, they fundamentally disrupt assumptions about what it means to “know” something independently.

New Technology ≠ Cheating

One of the most important messages I emphasized is this: new technology does not automatically equal academic misconduct.

If a tool is permitted, then its use is not cheating. The real issue lies in unauthorized use or misuse in ways that create an unfair advantage.

We must also remain attentive to equity and accessibility. Some wearable technologies may be used as accommodations, making it essential that our integrity policies are inclusive and nuanced rather than rigid and punitive.

Designing for Integrity (Not Surveillance)

Rather than doubling down on detection, I encourage educators to shift their focus toward designing for integrity.

This means:

  • Prioritizing assessment validity: If an AI system can complete a task without genuine understanding, the task no longer tells us what a student actually knows, and it needs to be rethought.
  • Moving beyond “gotcha” approaches: Surveillance-based strategies erode trust and are increasingly ineffective.
  • Supporting diverse learners: Students bring different technological access, needs, and experiences. Our designs must reflect that.
  • Building a culture of integrity: Integrity is not enforced; it is cultivated through meaningful learning experiences.

Bridging K–12 and Post-Secondary Education

Another key theme was the gap between K–12 and post-secondary expectations.

In K–12 environments, students are often encouraged to explore technology as part of their learning. In contrast, post-secondary institutions frequently operate under the assumption that students already understand complex academic integrity rules.

As AI continues to evolve, this gap becomes more pronounced. We need stronger alignment across educational sectors to ensure that students are supported, rather than set up for failure, as they transition between systems. (Myke Healy has a great paper on the topic of GenAI in the K–12 context that is worth reading.)

Looking Ahead

If there is one takeaway from this experience, it is this: wearable AI is not a future scenario. It is already here.

As educators, we are being called to respond not with fear, but with thoughtful, research-informed approaches. The challenge is not simply to manage technology, but to reimagine teaching, learning, and assessment in ways that remain meaningful in an AI-integrated world.

Events like the National Day of Learning remind me of the power of community. Bringing educators together to share ideas, ask difficult questions, and explore new possibilities is essential as we navigate this rapidly changing landscape.

Thank you to Let’s Talk Science and to Dr. Alec Couros for the opportunity to be part of this important conversation, and to all the educators who continue to lead with curiosity, courage, and care.

______________


Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


From Courtrooms to Classrooms: Smart Glasses and Integrity in a Postplagiarism Era

March 18, 2026

by Sarah Elaine Eaton

A London judge recently concluded that a witness was receiving coached answers through a pair of smart glasses connected to his mobile phone during cross-examination (Jacobs, 2026). The case involved a routine insolvency dispute, but the technology at the centre of the judge’s findings was anything but routine. The witness, who gave evidence through a Lithuanian interpreter, was found to have been receiving audio from an unidentified caller routed through smart glasses paired to his handset. Once the glasses were removed, his phone began broadcasting a voice from his jacket pocket. The judge rejected the witness’s testimony in full, describing it as unreliable and untruthful.

The incident is instructive for those of us working at the intersection of technology, integrity, and institutional policy. It demonstrates that smart glasses do not need advanced AI capabilities to compromise a formal proceeding. Simple Bluetooth audio connectivity was sufficient.

In our recent paper (Eaton et al., 2026), we examined the implications of AI-enabled smart glasses for teaching, learning, assessment, and academic integrity. One of our central arguments applies here: the reflexive instinct to treat wearable technology as a cheating device, while understandable, risks missing the structural challenge these technologies present to the systems designed to ensure honest participation.

Courts, like universities, depend on observable behaviours and verifiable evidence to assess credibility and ensure procedural fairness. As we noted, AI glasses can embed cognitive or communicative assistance into a user’s perceptual field in ways that leave no external trace (Eaton et al., 2026). The London case illustrates what happens when that assistance leaves a trace, but only because something went wrong: the interpreter heard voices, and the phone began playing audio at the wrong moment.

The question this case raises is not whether courts should ban smart glasses. A blanket prohibition would create its own problems, particularly for individuals who depend on wearable technology for vision correction or accessibility. We argued that institutional responses should focus on redesigning processes rather than policing devices (Eaton et al., 2026). For courts, this means developing protocols for the use of wearable technology during testimony, much as we recommended that educational institutions establish centralized accommodation protocols for AI-enabled devices.

The London ruling also reinforces our observation that enforcement models built around detection are fragile. The coaching was discovered through a combination of the interpreter’s alertness, call log records, and the witness’s inability to explain the contact saved as “abra kadabra” on his phone. These are investigative tools, not systemic safeguards. As smart glasses become more common and more discreet, relying on detection alone will prove insufficient in both courtrooms and classrooms.

What this case calls for is not alarm but preparation. Institutions responsible for the integrity of formal proceedings, whether legal or academic, need forward-looking frameworks that address the capabilities of wearable technology before the next incident occurs. The technology is not going away. Our systems must adapt.

References

Eaton, S. E., Kumar, R., Dahal, B., Tang, G., Ramazanov, F., & Moya Figueroa, B. A. (2026). AI smart glasses and the future of academic integrity in a postplagiarism era. Canadian Perspectives on Academic Integrity, 9(1), 1–5. https://doi.org/10.55016/ojs/cpai.v9i1/82885

Jacobs, S. (2026, March 17). A London judge says a witness was being coached in real time through smart glasses. TechSpot. https://www.techspot.com/news/111710-london-judge-witness-coached-real-time-through-smart.html

____________

Cross-posted from:

From Courtrooms to Classrooms: Smart Glasses and Integrity in a Postplagiarism Era – https://postplagiarism.com/2026/03/18/from-courtrooms-to-classrooms-smart-glasses-and-integrity-in-a-postplagiarism-era/