Five years ago we started Integrity Hour, an online community of practice by and for Canadian higher education #AcademicIntegrity enthusiasts, professionals, educators, researchers, and students.
Today we had our five-year celebration, which also served as a closure of sorts. After serving as a co-steward of the community almost since the beginning, Dr. Beatriz Moya has started the next chapter of her career.
We are working with some of our long-standing partners to reconceptualize what the next iteration of Integrity Hour will look like. For now, we will take a little pause as we regroup.
Our collective outputs have been collaboratively conceptualized and co-developed. Here are a couple of other resources we have worked on over the years:
In my remarks today I shared that being part of this weekly community of practice has influenced and informed my thinking, advocacy, and practice in ways I could never have imagined.
My gratitude to everyone who has been part of our community, sharing wisdom, knowledge, and resources. What an incredible half a decade it has been!
This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please ‘Like’ it using the button below or share it on social media. Thanks!
Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.
Artificial intelligence (AI) has moved from a futuristic concept to an everyday reality. Rather than viewing AI tools like ChatGPT as threats to academic integrity, forward-thinking educators are discovering their potential as powerful teaching instruments. Here’s how you can meaningfully incorporate AI into your classroom while promoting critical thinking and ethical technology use.
Making AI Visible in the Learning Process
One of the most effective approaches to teaching with AI is to bring it into the open. When we demystify these tools, students develop a more nuanced understanding of their capabilities and limitations.
Start by dedicating class time to explore AI tools together. You might begin with a demonstration of how ChatGPT or similar tools respond to different types of prompts. Ask students to compare the quality of responses when the tool is asked to:
Summarize factual information
Analyze a complex concept
Solve a problem in your discipline
Have students identify where the AI excels and where it falls short. Hands-on experience that is supervised by an educator helps students understand that while AI can be impressive and capable, it has clear boundaries and weaknesses.
From AI Drafts to Critical Analysis
AI tools can quickly generate content that serves as a starting point for deeper learning. Here is a step-by-step approach for using AI-generated drafts as teaching material:
Assignment Preparation: Choose a topic relevant to your course and generate a draft response using an AI tool such as ChatGPT.
Collaborative Analysis: Share the AI-generated draft with students and facilitate a discussion about its strengths and weaknesses. Prompt students with questions such as:
What perspectives are missing from this response?
How could the structure be improved?
What claims require additional evidence?
How might we make this content more engaging or relevant?
The idea is to bring students into conversations about AI, to build their critical thinking, and to have them puzzle through the strengths and weaknesses of current AI tools.
Revision Workshop: Have students work individually or in groups to revise an AI draft into a more nuanced, complete response. This process teaches students that the value lies not in generating initial content (which AI can do) but in refining, expanding, and critically evaluating information (which requires human judgment).
Reflection: Ask students to document what they learned through the revision process. What gaps did they identify in the AI’s understanding? How did their human perspective enhance the work? Building in meta-cognitive awareness is one of the skills that assessment experts such as Bearman and Luckin (2020) emphasize in their work.
This approach shifts the educational focus from content creation to content evaluation and refinement—skills that will remain valuable regardless of technological advancement.
Teaching Fact-Checking Through Deliberate Errors
AI systems often present information confidently, even when that information is incorrect or fabricated. This characteristic makes AI-generated content perfect for teaching fact-checking skills.
Try this classroom activity:
Generate Content with Errors: Use an AI tool to create content in your subject area, either by requesting information you know contains errors or by asking about obscure topics where the AI might fabricate details.
Fact-Finding Mission: Provide this content to students with the explicit instruction to identify potential errors and verify information. You might structure this as:
Individual verification of specific claims
Small group investigation with different sections assigned to each group
A whole-class collaborative fact-checking document
Source Evaluation: Have students document not just whether information is correct, but how they determined its accuracy. This reinforces the importance of consulting authoritative sources and cross-referencing information.
Meta-Discussion: Use this opportunity to discuss why AI systems make these kinds of errors. Topics might include:
The difference between pattern recognition and understanding
Why AI might present incorrect information with high confidence
These activities teach students not just to be skeptical of AI outputs but to develop systematic approaches to information verification—an essential skill in our information-saturated world.
Case Studies in AI Ethics
Ethical considerations around AI use should be explicit rather than implicit in education. Develop case studies that prompt students to engage with real ethical dilemmas:
Attribution Discussions: Present scenarios where students must decide how to properly attribute AI contributions to their work. For example, if an AI helps to brainstorm ideas or provides an outline that a student substantially revises, how could this be acknowledged?
Equity Considerations: Explore cases highlighting AI’s accessibility implications. Who benefits from these tools? Who might be disadvantaged? How might different cultural perspectives be underrepresented in AI outputs?
Professional Standards: Discuss how different fields are developing guidelines for AI use. Medical students might examine how AI diagnostic tools should be used alongside human expertise, while creative writing students could debate the role of AI in authorship.
Decision-Making Frameworks: Help students develop personal guidelines for when and how to use AI tools. What types of tasks might benefit from AI assistance? Where is independent human work essential?
These discussions help students develop thoughtful approaches to technology use that will serve them well beyond the classroom.
Implementation Tips for Educators
As you incorporate these approaches into your teaching, consider these practical suggestions:
Start small with one AI-focused activity before expanding to broader integration
Be transparent with students about your own learning curve with these technologies
Update your syllabus to clearly outline expectations for appropriate AI use
Document successes and challenges to refine your approach over time
Share experiences with colleagues to build institutional knowledge
Moving Beyond the AI Panic
The concept of postplagiarism does not mean abandoning academic integrity—rather, it calls for reimagining how we teach integrity in a technologically integrated world. By bringing AI tools directly into our teaching practices, we help students develop the critical thinking, evaluation skills, and ethical awareness needed to use these technologies responsibly.
When we shift our focus from preventing AI use to teaching with and about AI, we prepare students not just for academic success, but for thoughtful engagement with technology throughout their lives and careers.
References
Bearman, M., & Luckin, R. (2020). Preparing university assessment for a world with AI: Tasks for human intelligence. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 49-63). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_5
Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1
He writes that "Roughly 85 per cent of clinical trials in Canada are privately funded" and that such research undergoes very little scrutiny from anyone.
One of the cases Geoff Leo wrote about involved a study that ran from 2014 to 2016 in which Indigenous children in Saskatchewan, aged 12-15, were research subjects whose brainwaves were monitored. Student participants were recruited with the help of a Canadian school board.
The study was led by James Hardt, who runs the Biocybernaut Institute, a privately run business. According to Leo, Hardt claims that "brainwave training can make participants smarter, happier and enable them to overcome trauma. He said it can also allow them to levitate, walk on water and visit angels."
Geoff Leo digs deep into some of the ethical issues and I recommend reading his article.
So, that was last month. This month, I happened to notice that according to Elon Musk’s Neuralink website, Musk’s product has now been approved by Health Canada to recruit research participants. There’s a bright purple banner at the top of the Neuralink home page showing a Canadian flag that says, “We’ve received approval from Health Canada to begin recruitment for our first clinical trial in Canada”.
When you click on the link, you get to another page that shows the flags for the US, Canada, and the UK, where clinical trials are either underway or planned, it seems.
The Canadian version is called CAN-PRIME. There's a YouTube promo/recruitment video for patients interested in joining "this revolutionary journey".
According to the website, “This study involves placing a small, cosmetically invisible implant in a part of the brain that plans movements. The device is designed to interpret a person’s neural activity, so they can operate a computer or smartphone by simply intending to move – no wires or physical movement are required.”
Now we have Elon Musk’s company actively recruiting people from across Canada, the US, and the UK, for research that would involve implanting experimental technology into people’s brains without, it seems, much research ethics oversight at all.
Just 26.3% of the European Union’s artificial intelligence (AI) professionals are women, according to a report from LinkedIn.
In my work with the Women for Ethical AI (W4EAI) UNESCO platform, we had similar findings in our gender outlook study.
There are no easy solutions to this gap, but for those working in this area, here are some concrete things you can do to promote gender inclusion (and equity in general):
Invite women into leadership roles and into strategic planning for artificial intelligence and advanced technology.
Ensure that policies explicitly include women, girls, and other equity-deserving groups.
Invite women (and in particular, early career women and those who are precariously employed) to share and showcase their expertise and knowledge (and compensate them for their contributions).
Create formal sponsorship programs for women and girls who want to develop their knowledge and competencies related to AI, with ongoing opportunities for learning and skill development.
There are myriad ethical complexities when it comes to artificial intelligence, and gender is only one of them. Acknowledging inequalities and then working to support equity, fairness, and justice will remain ongoing work in the years to come.
United Nations Educational Scientific and Cultural Organization (UNESCO). (2024). UNESCO Women for Ethical AI: Outlook study on artificial intelligence and gender. https://unesdoc.unesco.org/ark:/48223/pf0000391719
Generative AI (GenAI) is transforming teaching, learning, and assessment in higher education.
Learn to integrate GenAI effectively while maintaining academic integrity and enhancing student agency.
Dr. Sarah Eaton shares innovative strategies that promote critical thinking and original scholarship. Explore how GenAI reshapes academic practices and discover proactive approaches to leverage its potential.
This session equips educators, administrators, and policymakers to lead purposefully in a dynamic academic landscape.
Speaker bio
Sarah Elaine Eaton is a Professor and Research Chair at the Werklund School of Education at the University of Calgary (Canada). She is an award-winning educator, researcher, and leader. She leads transdisciplinary research teams focused on the ethical implications of advanced technology use in educational contexts. Dr. Eaton also holds a concurrent appointment as an Honorary Associate Professor at Deakin University, Australia.
More Details
Date: January 29, 2025
Time: 12:00 – 13:00 Mountain Time
This talk is free and open to the public, but there are only 20 seats available to join us in person! We can also accommodate folks online.