Embracing AI as a Teaching Tool: Practical Approaches for the Postplagiarism Classroom

March 23, 2025

Artificial intelligence (AI) has moved from a futuristic concept to an everyday reality. Rather than viewing AI tools like ChatGPT as threats to academic integrity, forward-thinking educators are discovering their potential as powerful teaching instruments. Here’s how you can meaningfully incorporate AI into your classroom while promoting critical thinking and ethical technology use.

Making AI Visible in the Learning Process

One of the most effective approaches to teaching with AI is to bring it into the open. When we demystify these tools, students develop a more nuanced understanding of their capabilities and limitations.

Start by dedicating class time to explore AI tools together. You might begin with a demonstration of how ChatGPT or similar tools respond to different types of prompts. Ask students to compare the quality of responses when the tool is asked to:

  • Summarize factual information
  • Analyze a complex concept
  • Solve a problem in your discipline

An infographic titled “Postplagiarism Teaching Tip by Sarah Elaine Eaton: Make AI Visible in the Learning Process.” A central thinking-face emoji connects to three bubbles, one for each prompt type above: Summarize Factual Information (blue, document-with-magnifying-glass icon), Analyze Complex Concepts (green, puzzle-piece icon), and Solve Discipline-Specific Problems (orange, tools icon). A Creative Commons licence (CC BY-NC) icon appears in the bottom right corner.

Have students identify where the AI excels and where it falls short. Hands-on experience supervised by an educator helps students understand that while AI can be impressive and capable, it has clear boundaries and weaknesses.

From AI Drafts to Critical Analysis

AI tools can quickly generate content that serves as a starting point for deeper learning. Here is a step-by-step approach for using AI-generated drafts as teaching material:

  1. Assignment Preparation: Choose a topic relevant to your course and generate a draft response using an AI tool such as ChatGPT.
  2. Collaborative Analysis: Share the AI-generated draft with students and facilitate a discussion about its strengths and weaknesses. Prompt students with questions such as:
    • What perspectives are missing from this response?
    • How could the structure be improved?
    • What claims require additional evidence?
    • How might we make this content more engaging or relevant?

The idea is to bring students into conversations about AI, building their critical thinking as they puzzle through the strengths and weaknesses of current AI tools.

  3. Revision Workshop: Have students work individually or in groups to revise the AI draft into a more nuanced, complete response. This process teaches students that the value lies not in generating initial content (which AI can do) but in refining, expanding, and critically evaluating information (which requires human judgment).
  4. Reflection: Ask students to document what they learned through the revision process. What gaps did they identify in the AI’s understanding? How did their human perspective enhance the work? Building metacognitive awareness into assignments is a practice that assessment experts such as Bearman and Luckin (2020) emphasize in their work.

This approach shifts the educational focus from content creation to content evaluation and refinement—skills that will remain valuable regardless of technological advancement.

Teaching Fact-Checking Through Deliberate Errors

AI systems often present information confidently, even when that information is incorrect or fabricated. This characteristic makes AI-generated content perfect for teaching fact-checking skills.

Try this classroom activity:

  1. Generate Content with Errors: Use an AI tool to create content in your subject area, either by requesting information you know contains errors or by asking about obscure topics where the AI might fabricate details.
  2. Fact-Finding Mission: Provide this content to students with the explicit instruction to identify potential errors and verify information. You might structure this as:
    • Individual verification of specific claims
    • Small group investigation with different sections assigned to each group
    • A whole-class collaborative fact-checking document
  3. Source Evaluation: Have students document not just whether information is correct, but how they determined its accuracy. This reinforces the importance of consulting authoritative sources and cross-referencing information.
  4. Meta-Discussion: Use this opportunity to discuss why AI systems make these kinds of errors. Topics might include:
    • How large language models are trained
    • The concept of ‘hallucination’ in AI
    • The difference between pattern recognition and understanding
    • Why AI might present incorrect information with high confidence

These activities teach students not just to be skeptical of AI outputs but to develop systematic approaches to information verification—an essential skill in our information-saturated world.

Case Studies in AI Ethics

Ethical considerations around AI use should be explicit rather than implicit in education. Develop case studies that prompt students to engage with real ethical dilemmas:

  1. Attribution Discussions: Present scenarios where students must decide how to properly attribute AI contributions to their work. For example, if an AI helps to brainstorm ideas or provides an outline that a student substantially revises, how could this be acknowledged?
  2. Equity Considerations: Explore cases highlighting AI’s accessibility implications. Who benefits from these tools? Who might be disadvantaged? How might different cultural perspectives be underrepresented in AI outputs?
  3. Professional Standards: Discuss how different fields are developing guidelines for AI use. Medical students might examine how AI diagnostic tools should be used alongside human expertise, while creative writing students could debate the role of AI in authorship.
  4. Decision-Making Frameworks: Help students develop personal guidelines for when and how to use AI tools. What types of tasks might benefit from AI assistance? Where is independent human work essential?

These discussions help students develop thoughtful approaches to technology use that will serve them well beyond the classroom.

Implementation Tips for Educators

As you incorporate these approaches into your teaching, consider these practical suggestions:

  • Start small with one AI-focused activity before expanding to broader integration
  • Be transparent with students about your own learning curve with these technologies
  • Update your syllabus to clearly outline expectations for appropriate AI use
  • Document successes and challenges to refine your approach over time
  • Share experiences with colleagues to build institutional knowledge

Moving Beyond the AI Panic

The concept of postplagiarism does not mean abandoning academic integrity—rather, it calls for reimagining how we teach integrity in a technologically integrated world. By bringing AI tools directly into our teaching practices, we help students develop the critical thinking, evaluation skills, and ethical awareness needed to use these technologies responsibly.

When we shift our focus from preventing AI use to teaching with and about AI, we prepare students not just for academic success, but for thoughtful engagement with technology throughout their lives and careers.

References

Bearman, M., & Luckin, R. (2020). Preparing university assessment for a world with AI: Tasks for human intelligence. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 49-63). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_5 

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1

Edwards, B. (2023, April 6). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

________________________

Share this post: Embracing AI as a Teaching Tool: Practical Approaches for the Postplagiarism Classroom – https://drsaraheaton.com/2025/03/23/embracing-ai-as-a-teaching-tool-practical-approaches-for-the-post-plagiarism-classroom/

This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please ‘Like’ it using the button below or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


10 Recommendations for Academic Integrity Action: An AI-Assisted Experiment

January 27, 2025

I worked with a GenAI LLM (Claude) to create an infographic based on this open access article that I wrote:

Eaton, S. E. (2022). New priorities for academic integrity: equity, diversity, inclusion, decolonization and Indigenization. International Journal for Educational Integrity, 1-12. https://doi.org/10.1007/s40979-022-00105-0 

Here’s the result:

An AI-generated infographic with 10 circles, each of a different colour, with black text inside each of them.
AI-Generated infographic summarizing key ideas in a research article.

I wanted to see how #AI could be used to help with knowledge mobilization and communicating complex research ideas in simple and visually appealing ways. I am not 100% happy with this, but I am also not entirely unhappy. I am curious to see how researchers and scholars can use AI to help make our research more digestible to the general public.

At the end of the experiment, I asked Claude: Based on our interaction just now, what would you say are the benefits and drawbacks of working with AI to create infographics to share findings of research articles?

Claude’s reply:

“I can identify several key benefits and drawbacks of using AI to create infographics for research articles:

Benefits:

  1. Rapid Prototyping and Ideation
  • Quick generation of multiple concept ideas (as shown when I suggested 4 different infographic approaches)
  • Fast iteration based on feedback (like when we added your name and citation)
  • Ability to quickly visualize complex concepts from text
  2. Structured Organization
  • Systematic breakdown of complex research findings into visual elements
  • Logical arrangement of information hierarchies
  • Consistent application of design principles
  3. Customization and Flexibility
  • Ability to modify designs based on specific requests
  • Easy incorporation of branding elements or author attribution
  • Adaptable to different formats and styles
  4. Technical Precision
  • Exact color codes and measurements
  • Consistent spacing and alignment
  • Precise vector graphics that maintain quality at any size

Drawbacks:

  1. Format and Output Limitations
  • Cannot directly generate common file formats like JPG or PowerPoint
  • Unable to provide downloadable files
  • Limited to specific output formats (in our case, SVG)
  2. Technical Barriers
  • Users need to know how to convert or adapt the outputs
  • May require additional tools or software to get desired final format
  • Some technical knowledge needed to implement suggestions
  3. Creative Constraints
  • Limited to certain types of visual representations
  • May not capture nuanced design choices that a human designer would make
  • Restricted by pre-programmed design patterns
  4. Implementation Challenges
  • Gap between AI suggestions and practical implementation
  • Need for manual conversion or recreation in other software
  • Extra steps required to get to final usable format
  5. Quality Control Requirements
  • Need for human oversight and verification
  • May require professional designer input for polishing
  • Could need additional refinement for professional publication

For optimal results, it seems the best approach would be to use AI as a rapid prototyping and ideation tool, then work with a professional designer or use appropriate software to create the final, polished version in the desired format. This combines the efficiency of AI for initial concept development with the precision and format flexibility of professional design tools.”


This was an interesting experiment. I’ll be interested to see how AI apps can help us better communicate our research and scholarship to members of the general public.

________________________

Share this post: 10 Recommendations for Academic Integrity Action: An AI-Assisted Experiment – https://drsaraheaton.com/2025/01/27/10-recommendations-for-academic-integrity-action-an-ai-assisted-experiment/



In Memory of J. Tim Goddard: More Than a Supervisor

January 23, 2025
A photo of J. Tim Goddard sitting on the steps of his artist studio. His black dog is sitting next to him.

The academic world has lost a remarkable educator and mentor with the passing of J. Tim Goddard, a former University of Calgary professor and administrator who went on to serve as Dean of Education at the University of Prince Edward Island. Tim’s impact on higher education extended far beyond his administrative roles – he was a transformative figure in the lives of countless graduate students, myself included.

As my PhD supervisor from 2005 to 2009, Tim demonstrated an extraordinary ability to balance academic rigour with compassionate mentorship. He took a chance on my research into revenue-generating programs in higher education, recognizing the importance of examining how such initiatives could be administered ethically in an era of government funding cuts.

Tim’s dedication to his students was unwavering, even in the face of profound personal tragedy. In 2006, his daughter, Captain Nicola Goddard, became the first Canadian female soldier to be killed in combat, during her deployment in Afghanistan. Despite this devastating loss, Tim continued to guide and support his graduate students with remarkable strength and grace.

What set Tim apart was his practical wisdom, delivered through memorable maxims that still guide many of us today. “This isn’t your magnum opus!” and “A good thesis is a finished thesis!” were frequent reminders to stay focused on completion rather than perfection. He had an uncanny ability to guide students back to their research path whenever they strayed too far into tangential explorations.

His mentorship extended beyond the academic realm. When I traveled to Charlottetown to write my thesis, Tim and his wife Sally opened their home to me – a gesture typical of how Tim viewed his students as extended family. Yet this warmth never compromised his high standards; his insistence on impeccable writing, citations, and APA formatting shaped many of us into better scholars.

J. Tim Goddard and Sarah Elaine Eaton in PEI. There is a lighthouse in the left background.

After retiring in 2018, Tim embraced new creative pursuits in painting and novel writing, approaching these endeavours with the same passion he brought to academia. His legacy lives on through the countless educators and researchers he mentored, who continue to apply his teachings in their own work and supervision of the next generation.

Tim Goddard understood that a PhD journey was not just about producing research, but about preparing for a career. He showed us how to be not just scholars, but mentors, teachers, and advocates for our own students. His influence will continue to ripple through generations of educators, researchers, and leaders in higher education.

________________________

Share this post: In Memory of J. Tim Goddard: More Than a Supervisor – https://drsaraheaton.com/2025/01/23/in-memory-of-j-tim-goddard-more-than-a-supervisor/



Postplagiarism in THE

January 15, 2025

How lovely to wake up this morning and see an article about postplagiarism in Times Higher Education. Karen Kenny from the University of Exeter writes about the six tenets and then extends the discussion to assessment.

It seems that the idea of postplagiarism is catching on. Dr. Rahul Kumar and I have launched a postplagiarism online community, where we share blogs, news, articles, and translations of the work into other languages. You can check out all our resources over on our other site.

________________________

Share this post: Postplagiarism in THE – https://drsaraheaton.com/2025/01/15/postplagiarism-in-the/



Neuralink’s Clinical Trials in Canada

January 11, 2025

Last month CBC’s Geoff Leo published a great article called, “‘No consequences’ for violating human rights in privately funded research in Canada.” It was a bit of an eye-opener, even for me.

He writes that, “Roughly 85 per cent of clinical trials in Canada are privately funded,” and that such research undergoes very little scrutiny from anyone.

One of the cases Geoff wrote about involved a study that ran from 2014 to 2016, in which Indigenous children in Saskatchewan, aged 12 to 15, were research subjects whose brainwaves were monitored. Student participants were recruited with the help of a Canadian school board.

The study was led by James Hardt, who runs something called the Biocybernaut Institute, a privately run business. According to Leo, James Hardt claims that “brainwave training can make participants smarter, happier and enable them to overcome trauma. He said it can also allow them to levitate, walk on water and visit angels.”

Geoff Leo digs deep into some of the ethical issues and I recommend reading his article.

So, that was last month. This month, I happened to notice that, according to Elon Musk’s Neuralink website, the company has now received approval from Health Canada to recruit research participants. There’s a bright purple banner at the top of the Neuralink home page, with a small Canadian flag, that says, “We’ve received approval from Health Canada to begin recruitment for our first clinical trial in Canada”.

A screenshot of the Neuralink.com home page. On the bottom right is a blurred photo of a man wearing a ball cap, who appears to be in a wheelchair and using tubes as medical assistance. There is white text on the right-hand side. At the top is a purple banner with white text and a small Canadian flag.

When you click on the link, you get to another page that shows the flags for the US, Canada, and the UK, where clinical trials are either underway or planned, it seems.

A screenshot of a webpage from the Neuralink web site. It has a white background with black text. In the upper left-hand corner there are three small flags, one each for the USA, Canada, and the UK.

The Canadian version is called CAN-PRIME. There’s a promotional/recruitment YouTube video for patients interested in joining “this revolutionary journey”.

According to the website, “This study involves placing a small, cosmetically invisible implant in a part of the brain that plans movements. The device is designed to interpret a person’s neural activity, so they can operate a computer or smartphone by simply intending to move – no wires or physical movement are required.”

A screenshot from the Neuralink web page. The background is grey with black text.

So, just to connect the dots here… ten years ago in Canada there was a study involving neurotechnology that “exploited the hell out of” Indigenous kids, according to Janice Parente, who leads the Human Research Standards Organization.

Now we have Elon Musk’s company actively recruiting people from across Canada, the US, and the UK, for research that would involve implanting experimental technology into people’s brains without, it seems, much research ethics oversight at all.

What could possibly go wrong?

Reference

Leo, G. (2024, December 2). ‘No consequences’ for violating human rights in privately funded research in Canada, says ethics expert. CBC News. https://www.cbc.ca/news/canada/saskatchewan/ethics-research-canada-privately-funded-1.7393063

________________________

Share this post: Neuralink’s Clinical Trials in Canada – https://drsaraheaton.com/2025/01/11/neuralinks-clinical-trials-in-canada/
