Interfacing with the Future: Reflections on the National Day of Learning 2026

April 1, 2026

On March 28, 2026, I had the pleasure of joining educators from across Canada for the National Day of Learning, hosted by Let’s Talk Science. This one-day, nation-wide professional learning event brought together K–12 teachers, post-secondary educators, and policy leaders to explore some of the most pressing issues shaping education today, with artificial intelligence high on the agenda.

I was invited to deliver a session titled “Interfacing with the Future: Wearable AI and Academic Integrity for K–12 and Higher Ed.” What follows are a few reflections and key ideas from that conversation, hosted by Dr. Alec Couros.

Moving into the Postplagiarism Era

One of the central ideas framing my talk was postplagiarism. In a postplagiarism era, artificial intelligence is no longer an external tool that students occasionally use; rather, it is embedded into everyday life and learning.

Students are already engaging with AI in ways that challenge traditional notions of authorship, originality, and academic work. The question is no longer if students will use AI, but how.

This shift requires a corresponding change in how we think about academic integrity. Detection and surveillance, long relied upon as primary strategies, are no longer sufficient. Instead, we must rethink how we design learning environments that foster integrity from the ground up.

From Tools to Wearables: How AI is Advancing

A key focus of my presentation was the rapid evolution from AI tools to AI wearables — particularly smart glasses and other forms of cosmetically invisible interfaces. The talk was based, in part, on our recent article in Canadian Perspectives on Academic Integrity.

Wearable technologies integrate AI directly into our physical experience of the world. Rather than pulling out a device, users can access real-time information, transcription, and prompts seamlessly through their field of vision.

This shift introduces both opportunities and tensions:

  • Cognitive offloading: Learners can reduce mental load by accessing information instantly. (Phill Dawson has done some great work on cognitive offloading that I recommend reading.)
  • Enhanced presence: Wearables allow users to maintain eye contact and engagement without device distraction.
  • Efficiency gains: Tasks such as note-taking or translation can be automated in real time.

At the same time, these benefits come with real challenges, including information overload, privacy concerns, and technical limitations. More importantly for educators, they fundamentally disrupt assumptions about what it means to “know” something independently.

New Technology ≠ Cheating

One of the most important messages I emphasized is this: new technology does not automatically equal academic misconduct.

If a tool is permitted, then its use is not cheating. The real issue lies in unauthorized use or misuse in ways that create unfair advantage. 

We must also remain attentive to equity and accessibility. Some wearable technologies may be used as accommodations, making it essential that our integrity policies are inclusive and nuanced rather than rigid and punitive.

Designing for Integrity (Not Surveillance)

Rather than doubling down on detection, I encourage educators to shift their focus toward designing for integrity.

This means:

  • Prioritizing assessment validity: If an AI system can complete a task without genuine understanding, then the task itself needs to be rethought.
  • Moving beyond “gotcha” approaches: Surveillance-based strategies erode trust and are increasingly ineffective.
  • Supporting diverse learners: Students bring different technological access, needs, and experiences. Our designs must reflect that.
  • Building a culture of integrity: Integrity is not enforced; it is cultivated through meaningful learning experiences.

Bridging K–12 and Post-Secondary Education

Another key theme was the gap between K–12 and post-secondary expectations.

In K–12 environments, students are often encouraged to explore technology as part of their learning. In contrast, post-secondary institutions frequently operate under the assumption that students already understand complex academic integrity rules.

As AI continues to evolve, this gap becomes more pronounced. We need stronger alignment across educational sectors to ensure that students are supported, rather than set up for failure, as they transition between systems. (Myke Healy has a great paper on the topic of GenAI in the K–12 context that is worth reading.)

Looking Ahead

If there is one takeaway from this experience, it is this: wearable AI is not a future scenario. It is already here.

As educators, we are being called to respond not with fear, but with thoughtful, research-informed approaches. The challenge is not simply to manage technology, but to reimagine teaching, learning, and assessment in ways that remain meaningful in an AI-integrated world.

Events like the National Day of Learning remind me of the power of community. Bringing educators together to share ideas, ask difficult questions, and explore new possibilities is essential as we navigate this rapidly changing landscape.

Thank you to Let’s Talk Science and to Dr. Alec Couros for the opportunity to be part of this important conversation, and to all the educators who continue to lead with curiosity, courage, and care.

______________

Share this post: Interfacing with the Future: Reflections on the National Day of Learning 2026 –  https://drsaraheaton.com/2026/04/01/interfacing-with-the-future-reflections-on-the-national-day-of-learning-2026/

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Teaching Fact-Checking Through Deliberate Errors: An Essential AI Literacy Skill

April 23, 2025

Abstract

This teaching resource explores an innovative pedagogical approach for developing AI literacy in a postplagiarism era. The document outlines a method of teaching fact-checking skills by having students critically evaluate AI-generated content containing deliberate errors. It provides practical guidance for educators on creating content with strategic inaccuracies, structuring verification activities, teaching source evaluation through a 5-step process, understanding AI error patterns, and implementing these exercises throughout courses. By engaging students in systematic verification processes, this approach helps develop metacognitive awareness, evaluative judgment, and appropriate skepticism when consuming AI-generated information. The resource emphasizes assessing students on their verification process rather than solely on error detection, preparing them to navigate an information landscape where distinguishing fact from fiction is increasingly challenging yet essential.

Here is a downloadable .pdf of this teaching activity:

Introduction

In a postplagiarism era, one of the most valuable skills we can teach students is how to critically evaluate AI-generated content. Doing so can help them cultivate metacognition and evaluative judgement, which have been identified as important skills for feedback and evaluation (e.g., Bearman & Luckin, 2020; Tai et al., 2018). GenAI tools present information with confidence, regardless of accuracy. This characteristic creates an ideal opportunity to develop fact-checking competencies that serve students throughout their academic and professional lives.

Creating Content with Strategic Errors

Begin by generating content through an AI tool that contains factual inaccuracies. There are several approaches to ensure errors are present:

  • Ask the AI about obscure topics where it lacks sufficient training data
  • Request information about recent events beyond its knowledge cutoff
  • Pose questions about specialized fields with technical terminology
  • Combine legitimate questions with subtle misconceptions in your prompts

For example, ask a large language model (LLM), such as ChatGPT or any similar tool, to ‘Explain the impact of the Marshall-Weaver Theory on educational psychology’. There is no such theory, at least to the best of my knowledge; I have fabricated it for the purposes of illustration. The GenAI tool will likely fabricate details, citations, and research.

Structured Verification Activities

Provide students with the AI-generated content and clear verification objectives. Structure the fact-checking process as a systematic investigation.

First, have students highlight specific claims that require verification. This focuses their attention on identifying testable statements versus general information.

Next, assign verification responsibilities using different models:

  • Individual verification, where each student investigates all claims
  • Jigsaw approach, where students verify different sections and then share findings
  • Team-based verification, where groups compete to identify the most inaccuracies

Require students to document their verification methods for each claim. This documentation could include:

  • Sources consulted
  • Search terms used
  • Alternative perspectives considered
  • Confidence level in their verification conclusion

Requiring students to document how they verified each claim helps them develop metacognitive awareness of their own learning, shows them through experience why GenAI outputs should be treated with some skepticism, and gives them specific strategies to verify content for themselves.

Teaching Source Evaluation: A 5-Step Process

The fact-checking process creates a natural opportunity to reinforce source evaluation skills.

As teachers, we can guide students to follow a 5-step plan to learn how to evaluate the reliability, truthfulness, and credibility of sources.

  • Step 1: Distinguish between primary and secondary sources. (A conversation about how terms such as ‘primary source’ and ‘secondary source’ can mean different things in different academic disciplines could also be useful here.)
  • Step 2: Recognize the difference between peer-reviewed research and opinion pieces. For opinion pieces, editorials, position papers, and essays, it can be useful to talk about how these different genres are regarded in different academic subject areas. For example, in the humanities, an essay can be considered an elevated form of scholarship; in the social sciences, however, it may be considered less impressive than research that involves collecting empirical data from human research participants.
  • Step 3: Evaluate author credentials and institutional affiliations. Of course, we want to be careful to avoid bias when doing this. Just because an author has an affiliation with an Ivy League university, for example, that does not automatically make them a credible source. Evaluating credentials can — and should — include conversations about avoiding and mitigating bias.
  • Step 4: Identify publication date and relevance. Understanding the historical, social, and political context in which a piece was written can be helpful.
  • Step 5: Consider potential biases in information sources. Besides bias about an author’s place of employment, consider what motivations they may have. This can include a personal or political agenda, or any other kind of motive. Understanding a writer’s biases can help us evaluate the credibility of what they write.

Connect these skills to your subject area by discussing authoritative sources specific to your field. What makes a source trustworthy in history differs from chemistry or literature.

Understanding GenAI Error Patterns

One valuable aspect of this exercise goes beyond identifying individual errors to recognizing patterns in how AI systems fail. As educators, we can facilitate discussions about:

  • Pattern matching versus genuine understanding
  • How training data limitations affect AI outputs
  • The concept of AI ‘hallucination’ and why it occurs
  • Why AI presents speculative information as factual
  • How AI systems blend legitimate information with fabricated details

Practical Implementation

Integrate these fact-checking exercises throughout your course rather than as a one-time activity. Start with simple verification tasks and progress to more complex scenarios. Connect fact-checking to course content by using AI-generated material related to current topics.

Assessment should focus on the verification process rather than simply identifying errors. Evaluate students on their systematic approach, source quality, and reasoning—not just error detection.

As AI-generated content becomes increasingly prevalent, fact-checking skills are an important academic literacy skill. By teaching students to approach information with appropriate skepticism and verification methods, we prepare them to navigate a postplagiarism landscape where distinguishing fact from fiction becomes both more difficult and more essential.

References

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1

Edwards, B. (2023, April 6). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 76(3), 467-481. https://doi.org/10.1007/s10734-017-0220-3

Disclaimer: This content is crossposted from: https://postplagiarism.com/2025/04/23/teaching-fact-checking-through-deliberate-errors-an-essential-ai-literacy-skill/

________________________

Share this post: Teaching Fact-Checking Through Deliberate Errors: An Essential AI Literacy Skill – https://drsaraheaton.com/2025/04/23/teaching-fact-checking-through-deliberate-errors-an-essential-ai-literacy-skill/
