Celebrating 5 Years of Integrity Hour in Canadian Higher Education

March 31, 2025

Five years ago we started Integrity Hour, an online community of practice by and for Canadian higher education #AcademicIntegrity enthusiasts, professionals, educators, researchers, and students.

Today we had our five-year celebration, which also served as a closure of sorts. After serving as a co-steward of the community almost since the beginning, Dr. Beatriz Moya has started the next chapter of her career. 

We are working with some of our long-standing partners to reconceptualize what the next iteration of Integrity Hour will look like. For now, we will take a little pause as we regroup.

At our anniversary celebration meeting today, Brooklin Schneider encouraged us to share this guide widely, so we are posting it here, as an open access resource: “Integrity Hour: A Guide to Developing and Facilitating an Online Community of Practice for Academic Integrity”.

Our collective outputs have been collaboratively conceptualized and co-developed. Here are a couple of other resources we have worked on over the years:

Reflections on the first year of Integrity Hour: An online community of practice for academic integrity

Academic Integrity Leadership and Community Building in Canadian Higher Education

In my remarks today I shared that being part of this weekly community of practice has influenced and informed my thinking, advocacy, and practice in ways I could never have imagined.

My gratitude to everyone who has been part of our community, sharing wisdom, knowledge, and resources. What an incredible half a decade it has been!

________________________

Share this post: Celebrating 5 Years of Integrity Hour in Canadian Higher Education – https://drsaraheaton.com/2025/03/31/celebrating-5-years-of-integrity-hour-in-canadian-higher-education/

This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please ‘Like’ it using the button below or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer. 


Embracing AI as a Teaching Tool: Practical Approaches for the Post-plagiarism Classroom

March 23, 2025

Artificial intelligence (AI) has moved from a futuristic concept to an everyday reality. Rather than viewing AI tools like ChatGPT as threats to academic integrity, forward-thinking educators are discovering their potential as powerful teaching instruments. Here’s how you can meaningfully incorporate AI into your classroom while promoting critical thinking and ethical technology use.

Making AI Visible in the Learning Process

One of the most effective approaches to teaching with AI is to bring it into the open. When we demystify these tools, students develop a more nuanced understanding of the tools’ capabilities and limitations.

Start by dedicating class time to explore AI tools together. You might begin with a demonstration of how ChatGPT or similar tools respond to different types of prompts. Ask students to compare the quality of responses when the tool is asked to:

  • Summarize factual information
  • Analyze a complex concept
  • Solve a problem in your discipline

A teaching tip infographic titled "Postplagiarism Teaching Tip by Sarah Elaine Eaton: Make AI Visible in the Learning Process." Three connected bubbles highlight the prompt types above: summarizing factual information (blue, document-and-magnifying-glass icon), analyzing complex concepts (green, puzzle-piece icon), and solving discipline-specific problems (orange, wrench-and-screwdriver icon). A Creative Commons license (CC BY-NC) icon appears in the bottom right corner.

Have students identify where the AI excels and where it falls short. Hands-on experience that is supervised by an educator helps students understand that while AI can be impressive and capable, it has clear boundaries and weaknesses.

From AI Drafts to Critical Analysis

AI tools can quickly generate content that serves as a starting point for deeper learning. Here is a step-by-step approach for using AI-generated drafts as teaching material:

  1. Assignment Preparation: Choose a topic relevant to your course and generate a draft response using an AI tool such as ChatGPT.
  2. Collaborative Analysis: Share the AI-generated draft with students and facilitate a discussion about its strengths and weaknesses. Prompt students with questions such as:
    • What perspectives are missing from this response?
    • How could the structure be improved?
    • What claims require additional evidence?
    • How might we make this content more engaging or relevant?

The idea is to bring students into conversations about AI, building their critical thinking as they puzzle through the strengths and weaknesses of current AI tools.

  3. Revision Workshop: Have students work individually or in groups to revise an AI draft into a more nuanced, complete response. This process teaches students that the value lies not in generating initial content (which AI can do) but in refining, expanding, and critically evaluating information (which requires human judgment).
  4. Reflection: Ask students to document what they learned through the revision process. What gaps did they identify in the AI’s understanding? How did their human perspective enhance the work? Building in metacognitive awareness is one of the skills that assessment experts such as Bearman and Luckin (2020) emphasize in their work.

This approach shifts the educational focus from content creation to content evaluation and refinement—skills that will remain valuable regardless of technological advancement.

Teaching Fact-Checking Through Deliberate Errors

AI systems often present information confidently, even when that information is incorrect or fabricated. This characteristic makes AI-generated content perfect for teaching fact-checking skills.

Try this classroom activity:

  1. Generate Content with Errors: Use an AI tool to create content in your subject area, either by requesting information you know contains errors or by asking about obscure topics where the AI might fabricate details.
  2. Fact-Finding Mission: Provide this content to students with the explicit instruction to identify potential errors and verify information. You might structure this as:
    • Individual verification of specific claims
    • Small group investigation with different sections assigned to each group
    • A whole-class collaborative fact-checking document
  3. Source Evaluation: Have students document not just whether information is correct, but how they determined its accuracy. This reinforces the importance of consulting authoritative sources and cross-referencing information.
  4. Meta-Discussion: Use this opportunity to discuss why AI systems make these kinds of errors. Topics might include:
    • How large language models are trained
    • The concept of ‘hallucination’ in AI
    • The difference between pattern recognition and understanding
    • Why AI might present incorrect information with high confidence

These activities teach students not just to be skeptical of AI outputs but to develop systematic approaches to information verification—an essential skill in our information-saturated world.

Case Studies in AI Ethics

Ethical considerations around AI use should be explicit rather than implicit in education. Develop case studies that prompt students to engage with real ethical dilemmas:

  1. Attribution Discussions: Present scenarios where students must decide how to properly attribute AI contributions to their work. For example, if an AI helps to brainstorm ideas or provides an outline that a student substantially revises, how could this be acknowledged?
  2. Equity Considerations: Explore cases highlighting AI’s accessibility implications. Who benefits from these tools? Who might be disadvantaged? How might different cultural perspectives be underrepresented in AI outputs?
  3. Professional Standards: Discuss how different fields are developing guidelines for AI use. Medical students might examine how AI diagnostic tools should be used alongside human expertise, while creative writing students could debate the role of AI in authorship.
  4. Decision-Making Frameworks: Help students develop personal guidelines for when and how to use AI tools. What types of tasks might benefit from AI assistance? Where is independent human work essential?

These discussions help students develop thoughtful approaches to technology use that will serve them well beyond the classroom.

Implementation Tips for Educators

As you incorporate these approaches into your teaching, consider these practical suggestions:

  • Start small with one AI-focused activity before expanding to broader integration
  • Be transparent with students about your own learning curve with these technologies
  • Update your syllabus to clearly outline expectations for appropriate AI use
  • Document successes and challenges to refine your approach over time
  • Share experiences with colleagues to build institutional knowledge

Moving Beyond the AI Panic

The concept of postplagiarism does not mean abandoning academic integrity—rather, it calls for reimagining how we teach integrity in a technologically integrated world. By bringing AI tools directly into our teaching practices, we help students develop the critical thinking, evaluation skills, and ethical awareness needed to use these technologies responsibly.

When we shift our focus from preventing AI use to teaching with and about AI, we prepare students not just for academic success, but for thoughtful engagement with technology throughout their lives and careers.

References

Bearman, M., & Luckin, R. (2020). Preparing university assessment for a world with AI: Tasks for human intelligence. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 49-63). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_5 

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1

Edwards, B. (2023, April 6). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

________________________

Share this post: Embracing AI as a Teaching Tool: Practical Approaches for the Postplagiarism Classroom – https://drsaraheaton.com/2025/03/23/embracing-ai-as-a-teaching-tool-practical-approaches-for-the-post-plagiarism-classroom/

This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please ‘Like’ it using the button below or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Neuralink’s Clinical Trials in Canada

January 11, 2025

Last month CBC’s Geoff Leo published a great article called “‘No consequences’ for violating human rights in privately funded research in Canada.” It was a bit of an eye-opener, even for me.

He writes that, “Roughly 85 per cent of clinical trials in Canada are privately funded” and that research undergoes very little scrutiny by anyone.

One of the cases Geoff wrote about involved a study that ran from 2014 to 2016 in which Indigenous children in Saskatchewan, aged 12 to 15, were research subjects whose brainwaves were monitored. Student participants were recruited with the help of a Canadian school board.

The study was led by James Hardt, who runs something called the Biocybernaut Institute, a privately run business. According to Leo, James Hardt claims that “brainwave training can make participants smarter, happier and enable them to overcome trauma. He said it can also allow them to levitate, walk on water and visit angels.”

Geoff Leo digs deep into some of the ethical issues and I recommend reading his article.

So, that was last month. This month, I happened to notice that according to Elon Musk’s Neuralink website, Musk’s product has now been approved by Health Canada to recruit research participants. There’s a bright purple banner at the top of the Neuralink home page showing a Canadian flag that says, “We’ve received approval from Health Canada to begin recruitment for our first clinical trial in Canada”.

A screenshot of the Neuralink.com home page. On the bottom right is a blurred photo of a man wearing a ball cap, who appears to be in a wheelchair and using tubes as medical assistance. There is white text on the right-hand side. At the top is a purple banner with white text and a small Canadian flag.

When you click on the link, you get to another page that shows the flags for the US, Canada, and the UK, where clinical trials are either underway or planned, it seems.

A screenshot of a webpage from the Neuralink web site. It has a white background with black text. In the upper left-hand corner there are three small flags, one each for the USA, Canada, and the UK.

The Canadian version is called CAN-PRIME. There’s a YouTube promo/recruitment video for patients interested in joining “this revolutionary journey”.

According to the website, “This study involves placing a small, cosmetically invisible implant in a part of the brain that plans movements. The device is designed to interpret a person’s neural activity, so they can operate a computer or smartphone by simply intending to move – no wires or physical movement are required.”

A screenshot from the Neuralink web page. The background is grey with black text.

So, just to connect the dots here… ten years ago in Canada there was a study involving neurotechnology that “exploited the hell out of” Indigenous kids, according to Janice Parente, who leads the Human Research Standards Organization.

Now we have Elon Musk’s company actively recruiting people from across Canada, the US, and the UK, for research that would involve implanting experimental technology into people’s brains without, it seems, much research ethics oversight at all.

What could possibly go wrong?

Reference

Leo, G. (2024, December 2). ‘No consequences’ for violating human rights in privately funded research in Canada, says ethics expert. CBC News. https://www.cbc.ca/news/canada/saskatchewan/ethics-research-canada-privately-funded-1.7393063

________________________

Share this post: Neuralink’s Clinical Trials in Canada – https://drsaraheaton.com/2025/01/11/neuralinks-clinical-trials-in-canada/

This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please ‘Like’ it using the button below or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer. 


The GenAI Gender Gap

January 10, 2025

There is a gender gap when it comes to GenAI.

Just 26.3% of the European Union’s artificial intelligence (AI) professionals are women, according to a report from LinkedIn.

In my work with the Women for Ethical AI (W4EAI) UNESCO platform, we had similar findings in our gender outlook study.

An AI-generated image of a group of women.

There are no easy solutions to this gap, but for those working in this area, here are some concrete things you can do to promote gender inclusion (and equity in general):

  • Invite women into leadership roles and strategic planning for artificial intelligence and advanced technology.
  • Ensure that policies explicitly include women, girls, and other equity-deserving groups.
  • Invite women (and in particular, early career women and those who are precariously employed) to share and showcase their expertise and knowledge (and compensate them for their contributions).
  • Create formal sponsorship programs for women and girls who want to develop their knowledge and competencies related to AI, with ongoing opportunities for learning and skill development.

An AI-generated image of a group of women.

There is a myriad of ethical complexities when it comes to artificial intelligence, and gender is only one of them. Acknowledging inequalities and then working to support equity, fairness, and justice will remain ongoing work in the years to come.

References

LinkedIn. (2024). AI in the EU: 2024 trends and insights from LinkedIn. https://economicgraph.linkedin.com/content/dam/me/economicgraph/en-us/PDF/AI-in-the-EU-Report.pdf

United Nations Educational Scientific and Cultural Organization (UNESCO). (2024). UNESCO Women for Ethical AI: Outlook study on artificial intelligence and gender. https://unesdoc.unesco.org/ark:/48223/pf0000391719

________________________

Share this post: The GenAI Gender Gap – https://drsaraheaton.com/2025/01/10/the-genai-gender-gap/

This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please ‘Like’ it using the button below or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer. 


Upcoming Talk: From Plagiarism to Postplagiarism: Navigating the GenAI Revolution in Higher Education

January 3, 2025

A promo announcement on a white background. There is a red stripe down the left-hand side. The University of Calgary logo appears on the top right. The following text is written in black, orange and red:
From Plagiarism to Postplagiarism: Navigating the GenAI Revolution in Higher Education
The first 2025 public presentation about #Postplagiarism
is now open for registration!

Free and open to the public.
Join us in person or via webinar.
January 29, 2025 | 12:00 – 13:00 Mountain time

https://workrooms.ucalgary.ca/event/3854045

Join us for our first presentation of 2025:

From Plagiarism to Postplagiarism: Navigating the GenAI Revolution in Higher Education

Format: Hybrid (in person or live stream)

I am delighted to kick off a speaker series on GenAI hosted by my colleague, Dr. Soroush Sabbaghan, through the Centre for Artificial Intelligence Ethics, Literacy, and Integrity (CAIELI) at the University of Calgary.

Description

Generative AI (GenAI) is transforming teaching, learning, and assessment in higher education.

Learn to integrate GenAI effectively while maintaining academic integrity and enhancing student agency.

Dr. Sarah Eaton shares innovative strategies that promote critical thinking and original scholarship. Explore how GenAI reshapes academic practices and discover proactive approaches to leverage its potential.

This session equips educators, administrators, and policymakers to lead purposefully in a dynamic academic landscape.

Speaker bio

Sarah Elaine Eaton is a Professor and Research Chair at the Werklund School of Education at the University of Calgary (Canada). She is an award-winning educator, researcher, and leader. She leads transdisciplinary research teams focused on the ethical implications of advanced technology use in educational contexts. Dr. Eaton also holds a concurrent appointment as an Honorary Associate Professor at Deakin University, Australia.

More Details

Date: January 29, 2025

Time: 12:00 – 13:00 Mountain time

This talk is free and open to the public, but there are only 20 seats available to join us in person! We can also accommodate folks online.

Get more details and register here.

________________________

Share this post: Upcoming talk: From Plagiarism to Postplagiarism: Navigating the GenAI Revolution in Higher Education – https://drsaraheaton.com/2025/01/03/upcoming-talk-from-plagiarism-to-postplagiarism-navigating-the-genai-revolution-in-higher-education/

This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.