Embracing AI as a Teaching Tool: Practical Approaches for the Postplagiarism Classroom

March 23, 2025

Artificial intelligence (AI) has moved from a futuristic concept to an everyday reality. Rather than viewing AI tools like ChatGPT as threats to academic integrity, forward-thinking educators are discovering their potential as powerful teaching instruments. Here’s how you can meaningfully incorporate AI into your classroom while promoting critical thinking and ethical technology use.

Making AI Visible in the Learning Process

One of the most effective approaches to teaching with AI is to bring it into the open. When we demystify these tools, students develop a more nuanced understanding of their capabilities and limitations.

Start by dedicating class time to explore AI tools together. You might begin with a demonstration of how ChatGPT or similar tools respond to different types of prompts. Ask students to compare the quality of responses when the tool is asked to:

  • Summarize factual information
  • Analyze a complex concept
  • Solve a problem in your discipline

[Infographic: “Postplagiarism Teaching Tip by Sarah Elaine Eaton: Make AI Visible in the Learning Process.” Three connected bubbles illustrate the prompt types above: summarize factual information (understanding basic facts and data handling), analyze complex concepts (critical thinking and deep analysis), and solve discipline-specific problems (problem-solving in specific subjects). Licensed CC BY-NC.]

Have students identify where the AI excels and where it falls short. Hands-on experience supervised by an educator helps students understand that while AI can be impressive and capable, it has clear boundaries and weaknesses.

From AI Drafts to Critical Analysis

AI tools can quickly generate content that serves as a starting point for deeper learning. Here is a step-by-step approach for using AI-generated drafts as teaching material:

  1. Assignment Preparation: Choose a topic relevant to your course and generate a draft response using an AI tool such as ChatGPT.
  2. Collaborative Analysis: Share the AI-generated draft with students and facilitate a discussion about its strengths and weaknesses. Prompt students with questions such as:
    • What perspectives are missing from this response?
    • How could the structure be improved?
    • What claims require additional evidence?
    • How might we make this content more engaging or relevant?

The idea is to bring students into conversations about AI, building their critical thinking as they puzzle through the strengths and weaknesses of current AI tools.

  3. Revision Workshop: Have students work individually or in groups to revise the AI draft into a more nuanced, complete response. This process teaches students that the value lies not in generating initial content (which AI can do) but in refining, expanding, and critically evaluating information (which requires human judgment).
  4. Reflection: Ask students to document what they learned through the revision process. What gaps did they identify in the AI’s understanding? How did their human perspective enhance the work? Building in metacognitive awareness is one of the skills that assessment experts such as Bearman and Luckin (2020) emphasize in their work.

This approach shifts the educational focus from content creation to content evaluation and refinement—skills that will remain valuable regardless of technological advancement.

Teaching Fact-Checking Through Deliberate Errors

AI systems often present information confidently, even when that information is incorrect or fabricated. This characteristic makes AI-generated content perfect for teaching fact-checking skills.

Try this classroom activity:

  1. Generate Content with Errors: Use an AI tool to create content in your subject area, either by requesting information you know contains errors or by asking about obscure topics where the AI might fabricate details.
  2. Fact-Finding Mission: Provide this content to students with the explicit instruction to identify potential errors and verify information. You might structure this as:
    • Individual verification of specific claims
    • Small group investigation with different sections assigned to each group
    • A whole-class collaborative fact-checking document
  3. Source Evaluation: Have students document not just whether information is correct, but how they determined its accuracy. This reinforces the importance of consulting authoritative sources and cross-referencing information.
  4. Meta-Discussion: Use this opportunity to discuss why AI systems make these kinds of errors. Topics might include:
    • How large language models are trained
    • The concept of ‘hallucination’ in AI
    • The difference between pattern recognition and understanding
    • Why AI might present incorrect information with high confidence

These activities teach students not just to be skeptical of AI outputs but to develop systematic approaches to information verification—an essential skill in our information-saturated world.

Case Studies in AI Ethics

Ethical considerations around AI use should be explicit rather than implicit in education. Develop case studies that prompt students to engage with real ethical dilemmas:

  1. Attribution Discussions: Present scenarios where students must decide how to properly attribute AI contributions to their work. For example, if an AI helps to brainstorm ideas or provides an outline that a student substantially revises, how could this be acknowledged?
  2. Equity Considerations: Explore cases highlighting AI’s accessibility implications. Who benefits from these tools? Who might be disadvantaged? How might different cultural perspectives be underrepresented in AI outputs?
  3. Professional Standards: Discuss how different fields are developing guidelines for AI use. Medical students might examine how AI diagnostic tools should be used alongside human expertise, while creative writing students could debate the role of AI in authorship.
  4. Decision-Making Frameworks: Help students develop personal guidelines for when and how to use AI tools. What types of tasks might benefit from AI assistance? Where is independent human work essential?

These discussions help students develop thoughtful approaches to technology use that will serve them well beyond the classroom.

Implementation Tips for Educators

As you incorporate these approaches into your teaching, consider these practical suggestions:

  • Start small with one AI-focused activity before expanding to broader integration
  • Be transparent with students about your own learning curve with these technologies
  • Update your syllabus to clearly outline expectations for appropriate AI use
  • Document successes and challenges to refine your approach over time
  • Share experiences with colleagues to build institutional knowledge

Moving Beyond the AI Panic

The concept of postplagiarism does not mean abandoning academic integrity—rather, it calls for reimagining how we teach integrity in a technologically integrated world. By bringing AI tools directly into our teaching practices, we help students develop the critical thinking, evaluation skills, and ethical awareness needed to use these technologies responsibly.

When we shift our focus from preventing AI use to teaching with and about AI, we prepare students not just for academic success, but for thoughtful engagement with technology throughout their lives and careers.

References

Bearman, M., & Luckin, R. (2020). Preparing university assessment for a world with AI: Tasks for human intelligence. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 49-63). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_5 

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1

Edwards, B. (2023, April 6). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

________________________

Share this post: Embracing AI as a Teaching Tool: Practical Approaches for the Postplagiarism Classroom – https://drsaraheaton.com/2025/03/23/embracing-ai-as-a-teaching-tool-practical-approaches-for-the-post-plagiarism-classroom/

This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please ‘Like’ it using the button below or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Re-Released as a Free Open Access Resource: 101 Ways to Market Your Language Program (2002)

December 6, 2024

In 2002, I published the first edition of 101 Ways to Market Your Language Program. I have just re-released the book as a free open access resource under a Creative Commons (CC BY-NC-ND 4.0) license. The full book is now available as a free download.

How to cite this work:
Eaton, S. E. (2002/2024). 101 Ways to Market Your Language Program: A Practical Guide for Language Schools and Programs (2024 OA ed.). Eaton International Consulting Inc. https://hdl.handle.net/1880/120145

Abstract

2024 Re-release of the 2002 first edition of this book. The author and copyright holder has released this work under a Creative Commons (CC BY-NC-ND 4.0) license.

This book provides 101 ideas and strategies to empower overall marketing efforts: (1) “Put On Your Thinking Cap” (e.g., define the problem before marketing it, set reasonable goals, and create a niche); (2) “Secrets to Boost Your Marketing Power” (e.g., emphasize the benefits, check out the competition, and sell oneself in as many languages as possible); (3) “Marketing Materials: Tools and Tips to Do the Job Better” (e.g., make a brochure, get mentioned in other brochures, and make it easy to phone for information); (4) “Going Beyond the Basics to Increase Enrollment” (e.g., offer volume discounts and guarantees and give away tuition); (5) “Specialty Tips for Programs at Large Institutions” (e.g., make sure the Web site is easy to find, partner with other educational programs, and get the program mentioned in the calendar); (6) “The Power of People: A Human Touch to Increase Enrollment and Polish Your Image” (e.g., build loyalty with host families, establish win-win relationships, and follow exceptional service standards); (7) “Continue Marketing While Your Students are Enrolled” (e.g., meet students at the airport, partner with local businesses, and create happy memories); and (8) “How to Keep Marketing Once Your Program is Finished” (e.g., create an alumni network, review successes and failures, and plan ahead for next year).

Why Am I Re-Releasing This Work?

I have been invited to deliver the Werklund School of Education 2024-2025 Distinguished Research Lecture.

As part of the lead-up to the lecture, I have decided to make as much of my work as I can available as free, open access resources. This recognition is a once-in-a-career kind of award, and I’m working hard to make sure I can deliver. The lecture is a hybrid public event, and you can attend in person or online on March 20, 2025. If you’re interested, you can register here.

Since I am both the author and the copyright holder for this book, I can share it however I want. I am more committed than ever to making as much of my work as possible freely available to others. Over the coming weeks, I’ll be sharing links to more freely downloadable resources.

I am super grateful to the University of Calgary digital resources team who are helping me to archive these works.

If you know of someone working in second languages who could use a resource on marketing and recruitment for their program, feel free to share this with them.

________________________

Share this post: Re-released as a Free Open Access Resource: 101 Ways to Market Your Language Program (2002) – https://drsaraheaton.com/2024/12/06/re-released-as-a-free-open-access-resource-101-ways-to-market-your-language-program-2002/

This blog has had over 3 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Event: The Intersection of Academic Integrity and Inclusion: A Fireside Chat

October 10, 2024

Next week is Academic Integrity Week at the University of Calgary. This year, I have the honour of moderating a fireside chat with one of my very own Werklund School of Education Doctor of Education (EdD) students, Colleen Fleming.

Join us for a thought-provoking discussion during Academic Integrity Week 2024!

[Event poster: University of Calgary Academic Integrity Week fireside chat, with the University of Calgary logo at the top and art featuring a woman wearing a headset.]

Discover the crucial link between academic integrity and inclusion in higher education with our distinguished speaker, Colleen Fleming, EdD student, Werklund School of Education.

Moderated by Dr. Sarah Elaine Eaton, this conversation will explore:

  • Defining academic integrity in an inclusive context
  • Challenges in maintaining integrity across diverse student populations
  • Practical strategies for educators to promote both integrity and inclusion

Don’t miss this opportunity to gain insights from Colleen’s extensive experience as a K-12 practitioner and her cutting-edge doctoral research. Engage in a live Q&A session and contribute to this important conversation.

A bit about Colleen…

[Photograph of Colleen Fleming: a woman with chin-length blonde hair, wearing a white top, against a blue background.]

Colleen Fleming (she/her/hers) is a K-12 practitioner at a designated special education school in Calgary. She has a keen interest in developing a culture of integrity among learners through the promotion of equity, diversity, and inclusion. As a Doctor of Education student at Werklund, her research involves proactively educating students about academic integrity in preparation for higher education.

Event details

Date: October 16, 2024

Time: 12:00 – 1:00 p.m.

Location: University of Calgary, Taylor Family Digital Library, Gallery Hall

https://events.ucalgary.ca/library/event/481166-academic-integrity-and-inclusion-with-colleen-fleming

This event is free and open to the public. Everyone is welcome!

________________________

Share this post: The Intersection of Academic Integrity and Inclusion: A Fireside Chat – https://drsaraheaton.com/2024/10/10/event-the-intersection-of-academic-integrity-and-inclusion-a-fireside-chat/

This blog has had over 3 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Ethical Reasons to Avoid Using AI Apps for Student Assessment

September 10, 2024

It’s the start of a new school year here in North America. We are into the second week of classes, and already I am hearing from administrators in both K-12 and higher education institutions who are frustrated with educators who have turned to ChatGPT and other publicly available GenAI apps to help them assess student learning.

Although customized AI apps designed specifically to assist with the assessment of student learning already exist, many educators do not yet have access to such tools. Instead, I am hearing about educators turning to large language models (LLMs) like ChatGPT to help them provide formative or summative assessment of students’ work. There are some good reasons to avoid using ChatGPT or other LLMs to assess student learning.

I expect that not everyone will agree with these points. Please take them in the spirit in which they are intended, which is to provide guidance on ethical ways to teach, learn, and assess students’ work.

8 Reasons Why Educators Should Avoid Using AI Apps to Help with Assessment of Student Learning

Intellectual Property

In Canada at least, a student’s work is their intellectual property. Unless you have permission to use it outside of class, avoid doing so. The bottom line here is that a student’s intellectual work is not yours to share with a large language model (LLM) or any other third-party application without their knowledge and consent.

Privacy

A student’s personal data, including their name, ID number and other details should never be uploaded to an external app without consent. One reason for this blog post is to respond to stories I am hearing about educators uploading entire student essays or assignments, including the cover page with all the identifying information, to a third-party GenAI app.

Data security

Content uploaded to an AI tool may be added to its database and used to train the tool. Uploading student assignments to GenAI apps for feedback poses several data security risks. These include potential breaches of data storage systems, privacy violations through sharing sensitive student information, and intellectual property concerns. Inadequate access controls or encryption could allow unauthorized access to student work. 

AI model vulnerabilities might enable data extraction, while unintended leakage could occur through the AI app’s responses. If the educator’s account is compromised, it could expose all of the uploaded assignments. The app’s policies may permit third-party data sharing, and long-term data persistence in backups or training sets could extend the risk timeline. Also, there may be legal and regulatory issues around sharing student data, especially for minors, without proper consent.

Bias

AI apps are known to be biased. Feedback generated by an AI app can be biased, unfair, and even racist. To learn more, check out this article published in Nature. AI models can perpetuate existing biases present in their training data, which may not represent diverse student populations adequately. Apps might favour certain writing styles (e.g., standard American English), cultural references, or modes of expression, disadvantaging students from different backgrounds.

Furthermore, the AI’s feedback could be inconsistent across similar submissions or fail to account for individual student progress and needs. Additionally, the app may not fully grasp nuanced or creative approaches, leading to standardized feedback that discourages unique thinking.

Lack of context

An AI app does not know your student like you do. Although GenAI tools can offer quick assessments and feedback, they often lack the nuanced understanding of a student’s unique context, learning style, and emotional or physical well-being. Overreliance on AI-generated feedback might lead to generic responses, diminishing the personal connection and meaningful interaction that educators provide, which are vital for effective learning.

Impersonal

AI apps can provide generic feedback, but as an educator, you can personalize feedback to help the student grow. Generic feedback may not help to scaffold a student’s learning. Personalized feedback is crucial, as it fosters individual student growth, enhances understanding, and encourages engagement with the material. Tailoring feedback to specific strengths and weaknesses helps students recognize their progress and areas needing improvement. In turn, this helps to build their confidence and motivation.

Academic Integrity

Educators model ethical behaviour, and this includes transparent and fair assessment. If you are using tech tools to assess student learning, it is important to be transparent about it. In this post, I write more about how and why deceptive and covert assessment tactics are unacceptable.

Your Employee Responsibilities

If your job description includes assessing student work, you may be violating your employment contract if you offload assessment to an AI app.

Concluding Thoughts

Unless your employer has explicitly given you permission to use AI apps for assessing student work, consider providing feedback and assessment, at least for now, in the ways expected by your employer. If we do not want students to use AI apps to take shortcuts, then it is up to us as educators to model the behaviour we expect from students.

I understand that educators have excessive and exhausting workloads. I appreciate that we have more items on our to-do lists than is reasonable. I totally get that we may look for shortcuts and ways to reduce our workload. The reality is that although GenAI may have the capability to help with certain tasks, not all employers have endorsed its use in the same way.

Not all institutions or schools have artificial intelligence policies or guidelines, so when in doubt, ask your supervisor about the expectations. Again, there is a parallel here with student conduct. If we expect students to avoid using AI apps unless we make it explicit that it is OK, then the same goes for educators. Avoid using unauthorized tech tools for assessment without the boss knowing about it.

I am not suggesting that GenAI apps don’t have the capability to assist with assessment, but I am suggesting that many educational institutions have not yet approved the use of such apps in the workplace. Trust me, when there are GenAI apps to help with the heaviest aspects of our workload as educators, I’ll be at the front of the line to use them. In the meantime, there’s a balance to be struck between what AI can do and what one’s employer may permit us to use AI for. It’s important to know the difference — and to protect your livelihood.

Related post:

The Use of AI-Detection Tools in the Assessment of Student Work https://drsaraheaton.wordpress.com/2023/05/06/the-use-of-ai-detection-tools-in-the-assessment-of-student-work/

____________________________

Share this post:
Ethical Reasons to Avoid Using AI Apps for Student Assessment – https://drsaraheaton.com/2024/09/10/ethical-reasons-to-avoid-using-ai-apps-for-student-assessment/

This blog has had over 3.6 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.

Sarah Elaine Eaton, PhD, Editor-in-Chief, International Journal for Educational Integrity


Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity

May 17, 2022
Title slide from CSSHE 2022 panel discussion: AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity (Kumar, Mindzak, Eaton & Morrison)

For more than a year there have been small teams of us across Canada studying the impact of artificial intelligence on academic integrity. Today I am pleased to be part of a panel discussion on this topic at the annual conference of the Canadian Society for the Study of Higher Education (CSSHE), which is part of Congress 2022.

Our panel is led by Rahul Kumar (Brock University, Canada), together with Michael Mindzak (Brock University, Canada) and Ryan Morrison (George Brown College, Canada).

Here is the information about our panel:

Session G3: Panel: AI & AI: Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity (Live, remote) 

Panel Chair: Rahul Kumar 

  • Rahul Kumar (Brock University): Ethical application with practical examples
  • Michael Mindzak (Brock University): Implications on labour 
  • Ryan Morrison (George Brown College): Large language models: An overview for educators 
  • Sarah Elaine Eaton (University of Calgary): Academic integrity and assessment 

We have developed a combined slide deck for our panel discussion today. You can download the entire slide deck from the link noted in the citation below:

Kumar, R., Mindzak, M., Morrison, R., & Eaton, S. E. (2022, May 17). AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity [online]. Paper presented at the Canadian Society for the Study of Higher Education (CSSHE). http://hdl.handle.net/1880/114647

Related posts:

New project: Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies – https://drsaraheaton.wordpress.com/2022/04/19/new-project-artificial-intelligence-and-academic-integrity-the-ethics-of-teaching-and-learning-with-algorithmic-writing-technologies/

Keywords: artificial intelligence, large language models, GPT-3, academic integrity, academic misconduct, plagiarism, higher education, teaching, learning, assessment

_________________________________

Share or Tweet this: Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity https://drsaraheaton.wordpress.com/2022/05/17/exploring-the-contemporary-intersections-of-artificial-intelligence-and-academic-integrity/

This blog has had over 3 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education, and the Educational Leader in Residence, Academic Integrity, University of Calgary, Canada. Opinions are my own and do not represent those of the University of Calgary.