Teaching Fact-Checking Through Deliberate Errors: An Essential AI Literacy Skill

April 23, 2025

Abstract

This teaching resource explores an innovative pedagogical approach for developing AI literacy in a postplagiarism era. The document outlines a method of teaching fact-checking skills by having students critically evaluate AI-generated content containing deliberate errors. It provides practical guidance for educators on creating content with strategic inaccuracies, structuring verification activities, teaching source evaluation through a 5-step process, understanding AI error patterns, and implementing these exercises throughout courses. By engaging students in systematic verification processes, this approach helps develop metacognitive awareness, evaluative judgment, and appropriate skepticism when consuming AI-generated information. The resource emphasizes assessing students on their verification process rather than solely on error detection, preparing them to navigate an information landscape where distinguishing fact from fiction is increasingly challenging yet essential.

Introduction

In a postplagiarism era, one of the most valuable skills we can teach students is how to critically evaluate AI-generated content. This can help them cultivate metacognition and evaluative judgement, which have been identified as important skills for feedback and evaluation (e.g., Bearman & Luckin, 2020; Tai et al., 2018). GenAI tools present information with confidence, regardless of accuracy. This characteristic creates an ideal opportunity to develop fact-checking competencies that serve students throughout their academic and professional lives.

Creating Content with Strategic Errors

Begin by generating content through an AI tool that contains factual inaccuracies. There are several approaches to ensure errors are present:

  • Ask the AI about obscure topics where it lacks sufficient training data
  • Request information about recent events beyond its knowledge cutoff
  • Pose questions about specialized fields with technical terminology
  • Combine legitimate questions with subtle misconceptions in your prompts

For example, ask a Large Language Model (LLM), such as ChatGPT or any similar tool, to ‘Explain the impact of the Marshall-Weaver Theory on educational psychology’. There is no such theory, at least to the best of my knowledge; I have fabricated it for the purposes of illustration. The GenAI will likely fabricate details, citations, and research.
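For educators who are comfortable with a little scripting, the same kind of error-seeded passage can be prepared programmatically, which makes it easy to generate several variants for different groups. The sketch below is a minimal example only: it assumes the OpenAI Python SDK (the v1 `openai` package) with an `OPENAI_API_KEY` set in the environment, and the model name and prompts are illustrative placeholders, including the fabricated ‘Marshall-Weaver Theory’ used above.

```python
# Minimal sketch: generating error-seeded passages for a fact-checking exercise.
# Assumes the OpenAI Python SDK (v1) is installed and OPENAI_API_KEY is set;
# the model name and prompts below are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prompts designed to elicit fabricated or unverifiable details,
# mirroring the strategies listed above.
prompts = [
    # Fabricated theory (no such theory exists)
    "Explain the impact of the Marshall-Weaver Theory on educational psychology.",
    # Obscure, highly specific request the model is likely to embellish
    "Summarize the three most-cited 2009 studies on left-handed note-taking and exam performance.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; use whatever your institution supports
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n")
    print(response.choices[0].message.content)
    print("-" * 60)
```

Nothing about the activity requires code; the same passages can be produced just as easily in the ChatGPT interface.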

Structured Verification Activities

Provide students with the AI-generated content and clear verification objectives. Structure the fact-checking process as a systematic investigation.

First, have students highlight specific claims that require verification. This focuses their attention on identifying testable statements versus general information.

Next, assign verification responsibilities using different models:

  • Individual verification where each student investigates all claims
  • Jigsaw approach where students verify different sections then share findings
  • Team-based verification where groups compete to identify the most inaccuracies

Require students to document their verification methods for each claim. This documentation could include:

  • Sources consulted
  • Search terms used
  • Alternative perspectives considered
  • Confidence level in their verification conclusion

Requiring students to document how they verified each claim helps them develop metacognitive awareness of their own learning, shows them firsthand why GenAI outputs should be treated with some skepticism, and gives them specific strategies to verify content for themselves.
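If you want those verification logs in a consistent format (for example, to compare approaches across groups or to assess the process), a simple structured record can help. The sketch below shows one possible template in Python; the class name and fields are hypothetical and simply mirror the documentation elements listed above, and the example entry reuses the fabricated theory from earlier in this post.

```python
# Minimal sketch of a verification-log entry; the field names mirror the
# documentation elements suggested above and are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class VerificationRecord:
    claim: str                       # the specific claim being checked
    sources_consulted: List[str]     # e.g., databases, journal articles, textbooks
    search_terms: List[str]          # queries used to locate evidence
    alternative_perspectives: str    # other interpretations or sources considered
    verdict: str                     # e.g., "supported", "refuted", "unverifiable"
    confidence: str                  # student's confidence in the conclusion

# Example entry for the fabricated theory used earlier in this post.
example = VerificationRecord(
    claim="The Marshall-Weaver Theory shaped modern educational psychology.",
    sources_consulted=["ERIC database", "APA PsycInfo", "Google Scholar"],
    search_terms=["Marshall-Weaver Theory", "Marshall Weaver educational psychology"],
    alternative_perspectives="Checked for similarly named but unrelated frameworks.",
    verdict="unverifiable (no such theory found)",
    confidence="high",
)
print(example)
```

A shared spreadsheet with the same columns works just as well; the structure matters more than the tooling.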

Teaching Source Evaluation: A 5-Step Process

The fact-checking process creates a natural opportunity to reinforce source evaluation skills.

As teachers, we can guide students to follow a 5-step plan to learn how to evaluate the reliability, truthfulness, and credibility of sources.

  • Step 1: Distinguish between primary and secondary sources. (A conversation about how terms such as ‘primary source’ and ‘secondary source’ can mean different things in different academic disciplines could also be useful here.)
  • Step 2: Recognize the difference between peer-reviewed research and opinion pieces. For opinion pieces, editorials, position papers, and essays, it can be useful to talk about how these different genres are regarded in different academic subject areas. For example, in the humanities, an essay can be considered an elevated form of scholarship; however, in the social sciences, it may be considered less impressive than research that involves collecting empirical data from human research participants.
  • Step 3: Evaluate author credentials and institutional affiliations. Of course, we want to be careful about avoiding bias when doing this. Just because an author may have an affiliation with an Ivy League university, for example, that does not automatically make them a credible source. Evaluating credentials can — and should — include conversations about avoiding and mitigating bias.
  • Step 4: Identify publication date and relevance. Understanding the historical, social, and political context in which a piece was written can be helpful.
  • Step 5: Consider potential biases in information sources. Besides bias about an author’s place of employment, consider what motivations they may have. This can include a personal or political agenda, or any other kind of motive. Understanding a writer’s biases can help us evaluate the credibility of what they write.

Connect these skills to your subject area by discussing authoritative sources specific to your field. What makes a source trustworthy in history differs from chemistry or literature.

Understanding Gen AI Error Patterns

One valuable aspect of this exercise goes beyond identifying individual errors to recognizing patterns in how AI systems fail. As educators, we can facilitate discussions about:

  • Pattern matching versus genuine understanding
  • How training data limitations affect AI outputs
  • The concept of AI ‘hallucination’ and why it occurs
  • Why AI presents speculative information as factual
  • How AI systems blend legitimate information with fabricated details

Practical Implementation

Integrate these fact-checking exercises throughout your course rather than as a one-time activity. Start with simple verification tasks and progress to more complex scenarios. Connect fact-checking to course content by using AI-generated material related to current topics.

Assessment should focus on the verification process rather than simply identifying errors. Evaluate students on their systematic approach, source quality, and reasoning—not just error detection.

As AI-generated content becomes increasingly prevalent, fact-checking is an important academic literacy skill. By teaching students to approach information with appropriate skepticism and verification methods, we prepare them to navigate a postplagiarism landscape where distinguishing fact from fiction becomes both more difficult and more essential.

References

Bearman, M., & Luckin, R. (2020). Preparing university assessment for a world with AI: Tasks for human intelligence. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 49-63). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_5

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1

Edwards, B. (2023, April 6). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 76(3), 467-481. https://doi.org/10.1007/s10734-017-0220-3

Disclaimer: This content is crossposted from: https://postplagiarism.com/2025/04/23/teaching-fact-checking-through-deliberate-errors-an-essential-ai-literacy-skill/

________________________




Dignity: A Foundation of Academic Integrity

April 8, 2025

In our pursuit of knowledge, we often focus on policies, consequences, and detection systems. Yet at the heart of academic integrity lies something fundamental: human dignity.

When we produce original work, we honor our intellectual journey and the dignity of those whose ideas we build upon. Attribution acknowledges that knowledge creation is a collaborative endeavor spanning generations.

Academic integrity shouldn’t be about avoiding punishment, but rather, about recognizing the worth in honest intellectual exchange. It’s understanding that shortcuts diminish our growth and the trust that sustains scholarly communities.

As educators and learners, we can frame integrity less as compliance and more as respect – for ourselves, peers, and institutions that facilitate collective wisdom. When we approach academic work with dignity as our compass, integrity follows.

________________________





Embracing AI as a Teaching Tool: Practical Approaches for the Post-plagiarism Classroom

March 23, 2025

Artificial intelligence (AI) has moved from a futuristic concept to an everyday reality. Rather than viewing AI tools like ChatGPT as threats to academic integrity, forward-thinking educators are discovering their potential as powerful teaching instruments. Here’s how you can meaningfully incorporate AI into your classroom while promoting critical thinking and ethical technology use.

Making AI Visible in the Learning Process

One of the most effective approaches to teaching with AI is to bring it into the open. When we demystify these tools, students develop a more nuanced understanding of their capabilities and limitations.

Start by dedicating class time to explore AI tools together. You might begin with a demonstration of how ChatGPT or similar tools respond to different types of prompts. Ask students to compare the quality of responses when the tool is asked to:

  • Summarize factual information
  • Analyze a complex concept
  • Solve a problem in your discipline

[Infographic: “Postplagiarism Teaching Tip by Sarah Elaine Eaton: Make AI Visible in the Learning Process,” a visual summary of the three prompt types above (summarize factual information, analyze complex concepts, solve discipline-specific problems). Licensed CC BY-NC.]

Have students identify where the AI excels and where it falls short. Hands-on experience supervised by an educator helps students understand that while AI can be impressive and capable, it has clear boundaries and weaknesses.
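If you would rather run this comparison live than paste prompts into a chat window one at a time, the short sketch below loops over the three task types with the same model so the class can compare the outputs side by side. As with the earlier sketch, it assumes the OpenAI Python SDK (v1) with an `OPENAI_API_KEY` set, the model name is illustrative, and the prompts are placeholders you would swap for examples from your own discipline.

```python
# Minimal sketch: compare how one model handles three different task types.
# Assumes the OpenAI Python SDK (v1) and OPENAI_API_KEY; model name is illustrative.
from openai import OpenAI

client = OpenAI()

# Placeholder prompts for the three task types discussed above;
# replace these with prompts from your own discipline.
tasks = {
    "Summarize factual information": "Summarize the key provisions of the Treaty of Versailles in five sentences.",
    "Analyze a complex concept": "Analyze the main strengths and weaknesses of constructivist learning theory.",
    "Solve a discipline-specific problem": "A ball is thrown straight up at 12 m/s; how long until it returns to the thrower's hand?",
}

for label, prompt in tasks.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== {label} ===")
    print(response.choices[0].message.content)
    print()
```

Collecting the three outputs in one place makes the side-by-side class discussion easier to run.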

From AI Drafts to Critical Analysis

AI tools can quickly generate content that serves as a starting point for deeper learning. Here is a step-by-step approach for using AI-generated drafts as teaching material:

  1. Assignment Preparation: Choose a topic relevant to your course and generate a draft response using an AI tool such as ChatGPT.
  2. Collaborative Analysis: Share the AI-generated draft with students and facilitate a discussion about its strengths and weaknesses. Prompt students with questions such as:
    • What perspectives are missing from this response?
    • How could the structure be improved?
    • What claims require additional evidence?
    • How might we make this content more engaging or relevant?

The idea is to bring students into conversations about AI, to build their critical thinking, and to have them puzzle through the strengths and weaknesses of current AI tools.

  3. Revision Workshop: Have students work individually or in groups to revise an AI draft into a more nuanced, complete response. This process teaches students that the value lies not in generating initial content (which AI can do) but in refining, expanding, and critically evaluating information (which requires human judgment).
  4. Reflection: Ask students to document what they learned through the revision process. What gaps did they identify in the AI’s understanding? How did their human perspective enhance the work? Building in metacognitive awareness is one of the skills that assessment experts such as Bearman and Luckin (2020) emphasize in their work.

This approach shifts the educational focus from content creation to content evaluation and refinement—skills that will remain valuable regardless of technological advancement.

Teaching Fact-Checking Through Deliberate Errors

AI systems often present information confidently, even when that information is incorrect or fabricated. This characteristic makes AI-generated content perfect for teaching fact-checking skills.

Try this classroom activity:

  1. Generate Content with Errors: Use an AI tool to create content in your subject area, either by requesting information you know contains errors or by asking about obscure topics where the AI might fabricate details.
  2. Fact-Finding Mission: Provide this content to students with the explicit instruction to identify potential errors and verify information. You might structure this as:
    • Individual verification of specific claims
    • Small group investigation with different sections assigned to each group
    • A whole-class collaborative fact-checking document
  3. Source Evaluation: Have students document not just whether information is correct, but how they determined its accuracy. This reinforces the importance of consulting authoritative sources and cross-referencing information.
  4. Meta-Discussion: Use this opportunity to discuss why AI systems make these kinds of errors. Topics might include:
    • How large language models are trained
    • The concept of ‘hallucination’ in AI
    • The difference between pattern recognition and understanding
    • Why AI might present incorrect information with high confidence

These activities teach students not just to be skeptical of AI outputs but to develop systematic approaches to information verification—an essential skill in our information-saturated world.

Case Studies in AI Ethics

Ethical considerations around AI use should be explicit rather than implicit in education. Develop case studies that prompt students to engage with real ethical dilemmas:

  1. Attribution Discussions: Present scenarios where students must decide how to properly attribute AI contributions to their work. For example, if an AI helps to brainstorm ideas or provides an outline that a student substantially revises, how could this be acknowledged?
  2. Equity Considerations: Explore cases highlighting AI’s accessibility implications. Who benefits from these tools? Who might be disadvantaged? How might different cultural perspectives be underrepresented in AI outputs?
  3. Professional Standards: Discuss how different fields are developing guidelines for AI use. Medical students might examine how AI diagnostic tools should be used alongside human expertise, while creative writing students could debate the role of AI in authorship.
  4. Decision-Making Frameworks: Help students develop personal guidelines for when and how to use AI tools. What types of tasks might benefit from AI assistance? Where is independent human work essential?

These discussions help students develop thoughtful approaches to technology use that will serve them well beyond the classroom.

Implementation Tips for Educators

As you incorporate these approaches into your teaching, consider these practical suggestions:

  • Start small with one AI-focused activity before expanding to broader integration
  • Be transparent with students about your own learning curve with these technologies
  • Update your syllabus to clearly outline expectations for appropriate AI use
  • Document successes and challenges to refine your approach over time
  • Share experiences with colleagues to build institutional knowledge

Moving Beyond the AI Panic

The concept of postplagiarism does not mean abandoning academic integrity—rather, it calls for reimagining how we teach integrity in a technologically integrated world. By bringing AI tools directly into our teaching practices, we help students develop the critical thinking, evaluation skills, and ethical awareness needed to use these technologies responsibly.

When we shift our focus from preventing AI use to teaching with and about AI, we prepare students not just for academic success, but for thoughtful engagement with technology throughout their lives and careers.

References

Bearman, M., & Luckin, R. (2020). Preparing university assessment for a world with AI: Tasks for human intelligence. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 49-63). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_5 

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1

Edwards, B. (2023, April 6). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

________________________



Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.