As a new academic year begins here in the northern hemisphere, I’m worried. I am worried that racialized and linguistic-minority students, disabled and neurodivergent students, and others from equity-deserving groups will fall through the cracks again this year.
Conversations about academic integrity often centre around detection and discipline.
How many students will be accused of — and investigated for — academic cheating this year when what they actually needed was learning support? Or language support? Or just a clearer understanding of what academic integrity is and how to uphold it?
It doesn’t have to be this way.
Academic integrity is also about creating a learning environment grounded in fairness and opportunity for every student. Social justice, equity, inclusion, diversity, and accessibility shape how students experience integrity in real ways:
Equity reminds us that students enter the classroom with different levels of preparation and support.
Inclusion ensures every student can participate in learning and assessment.
Accessibility removes barriers that make it harder for some students to meet expectations.
A social justice lens helps us see patterns in who is reported or penalized for breaches of integrity and why.
Here are some actions educators can take in the first month of classes to support student success:
Review course materials to ensure instructions and policies about integrity are written in plain, accessible language.
Dedicate class time to talking with students about what integrity looks like in your course and why it matters.
Share examples of proper citation and collaboration that are relevant to your discipline.
Make time for questions about assessments so students understand what is expected and where to find help.
Connect students early to campus supports such as writing centres, student services, and accessibility services.
This is just a start.
My point is this: Do not assume that students should just know what academic integrity means. Take the time to explain your expectations and policies. In order for students to follow the rules, they need to know what the rules are.
Academic integrity is not only about avoiding plagiarism or cheating. It is also about fostering trust and fairness so that all students have a fair chance to learn and succeed. The choices we make in the first few weeks of the term set the tone for the entire year.
What steps are you taking at the start of this new school year to build a more inclusive and equitable approach to academic integrity?
Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.
Artificial intelligence (AI) has moved from a futuristic concept to an everyday reality. Rather than viewing AI tools like ChatGPT as threats to academic integrity, forward-thinking educators are discovering their potential as powerful teaching instruments. Here’s how you can meaningfully incorporate AI into your classroom while promoting critical thinking and ethical technology use.
Making AI Visible in the Learning Process
One of the most effective approaches to teaching with AI is to bring it into the open. When we demystify these tools, students develop a more nuanced understanding of the tools’ capabilities and limitations.
Start by dedicating class time to explore AI tools together. You might begin with a demonstration of how ChatGPT or similar tools respond to different types of prompts. Ask students to compare the quality of responses when the tool is asked to:
Summarize factual information
Analyze a complex concept
Solve a problem in your discipline
Have students identify where the AI excels and where it falls short. Hands-on experience supervised by an educator helps students understand that while AI can be impressive and capable, it has clear boundaries and weaknesses.
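If you want to prepare such a demonstration ahead of class, the comparison can be scripted. Below is a minimal sketch, assuming the openai Python package, an API key in the OPENAI_API_KEY environment variable, and a placeholder model name (all assumptions rather than endorsements); the three prompts are hypothetical examples to swap for your own discipline.

```python
# Minimal sketch: generate AI responses to three different prompt types
# so students can compare them side by side in class.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical example prompts -- replace with discipline-specific ones.
prompts = {
    "Summarize factual information":
        "Summarize the main causes of the 1929 stock market crash in three sentences.",
    "Analyze a complex concept":
        "Analyze the concept of opportunity cost, including its limitations.",
    "Solve a problem in the discipline":
        "A train travels 240 km in 3 hours. What is its average speed in km/h?",
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever your institution permits
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content.strip())
    print()
```

Printing the three responses together makes it easy to project them side by side and ask students where the tool excelled and where it fell short.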
From AI Drafts to Critical Analysis
AI tools can quickly generate content that serves as a starting point for deeper learning. Here is a step-by-step approach for using AI-generated drafts as teaching material:
Assignment Preparation: Choose a topic relevant to your course and generate a draft response using an AI tool such as ChatGPT.
Collaborative Analysis: Share the AI-generated draft with students and facilitate a discussion about its strengths and weaknesses. Prompt students with questions such as:
What perspectives are missing from this response?
How could the structure be improved?
What claims require additional evidence?
How might we make this content more engaging or relevant?
The idea is to bring students into conversations about AI, building their critical thinking as they puzzle through the strengths and weaknesses of current AI tools.
Revision Workshop: Have students work individually or in groups to revise an AI draft into a more nuanced, complete response. This process teaches students that the value lies not in generating initial content (which AI can do) but in refining, expanding, and critically evaluating information (which requires human judgment).
Reflection: Ask students to document what they learned through the revision process. What gaps did they identify in the AI’s understanding? How did their human perspective enhance the work? Building in this kind of metacognitive awareness is something that assessment experts such as Bearman and Luckin (2020) emphasize in their work.
This approach shifts the educational focus from content creation to content evaluation and refinement—skills that will remain valuable regardless of technological advancement.
Teaching Fact-Checking Through Deliberate Errors
AI systems often present information confidently, even when that information is incorrect or fabricated. This characteristic makes AI-generated content perfect for teaching fact-checking skills.
Try this classroom activity:
Generate Content with Errors: Use an AI tool to create content in your subject area, either by requesting information you know contains errors or by asking about obscure topics where the AI might fabricate details. (A scripted version of this step is sketched at the end of this section.)
Fact-Finding Mission: Provide this content to students with the explicit instruction to identify potential errors and verify information. You might structure this as:
Individual verification of specific claims
Small group investigation with different sections assigned to each group
A whole-class collaborative fact-checking document
Source Evaluation: Have students document not just whether information is correct, but how they determined its accuracy. This reinforces the importance of consulting authoritative sources and cross-referencing information.
Meta-Discussion: Use this opportunity to discuss why AI systems make these kinds of errors. Topics might include:
The difference between pattern recognition and understanding
Why AI might present incorrect information with high confidence
These activities teach students not just to be skeptical of AI outputs but to develop systematic approaches to information verification—an essential skill in our information-saturated world.
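The “Generate Content with Errors” step above can also be scripted ahead of time. Here is a minimal sketch in the same spirit, again assuming the openai package and a placeholder model name; the topic and the three-error format are hypothetical choices to adapt to your course.

```python
# Minimal sketch: ask an LLM to produce a short passage seeded with
# deliberate factual errors, plus an answer key for the instructor.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

TOPIC = "the history of vaccination"  # hypothetical example topic

prompt = (
    f"Write a 200-word overview of {TOPIC} for undergraduate students. "
    "Deliberately include three plausible-sounding factual errors, such as "
    "wrong dates, names, or numbers. After the passage, under the heading "
    "'ANSWER KEY', list the three errors and the correct facts."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)

output = response.choices[0].message.content
# The output contains both the passage and the answer key; share only the
# passage with students and keep the key for the debrief.
passage, _, answer_key = output.partition("ANSWER KEY")
print(passage)
```

Because models do not always follow formatting instructions exactly, it is worth checking the output (and the answer key) yourself before bringing it to class.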
Case Studies in AI Ethics
Ethical considerations around AI use should be explicit rather than implicit in education. Develop case studies that prompt students to engage with real ethical dilemmas:
Attribution Discussions: Present scenarios where students must decide how to properly attribute AI contributions to their work. For example, if an AI helps to brainstorm ideas or provides an outline that a student substantially revises, how could this be acknowledged?
Equity Considerations: Explore cases highlighting AI’s accessibility implications. Who benefits from these tools? Who might be disadvantaged? How might different cultural perspectives be underrepresented in AI outputs?
Professional Standards: Discuss how different fields are developing guidelines for AI use. Medical students might examine how AI diagnostic tools should be used alongside human expertise, while creative writing students could debate the role of AI in authorship.
Decision-Making Frameworks: Help students develop personal guidelines for when and how to use AI tools. What types of tasks might benefit from AI assistance? Where is independent human work essential?
These discussions help students develop thoughtful approaches to technology use that will serve them well beyond the classroom.
Implementation Tips for Educators
As you incorporate these approaches into your teaching, consider these practical suggestions:
Start small with one AI-focused activity before expanding to broader integration
Be transparent with students about your own learning curve with these technologies
Update your syllabus to clearly outline expectations for appropriate AI use
Document successes and challenges to refine your approach over time
Share experiences with colleagues to build institutional knowledge
Moving Beyond the AI Panic
The concept of postplagiarism does not mean abandoning academic integrity—rather, it calls for reimagining how we teach integrity in a technologically integrated world. By bringing AI tools directly into our teaching practices, we help students develop the critical thinking, evaluation skills, and ethical awareness needed to use these technologies responsibly.
When we shift our focus from preventing AI use to teaching with and about AI, we prepare students not just for academic success, but for thoughtful engagement with technology throughout their lives and careers.
References
Bearman, M., & Luckin, R. (2020). Preparing university assessment for a world with AI: Tasks for human intelligence. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 49-63). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_5
Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1
Earlier this semester, I accepted a new leadership role in the Werklund School of Education as the Academic Coordinator of the Master of Education (MEd) graduate topic in Inclusive Education. (We are accepting applications for the 2025-2026 academic year, in case you’ve been thinking about doing an MEd. It is a fully online, four-course topic.)
This got me thinking about academic integrity through an inclusive lens. My interest in the connection between social justice, equity, inclusion, and accessibility goes back a few years. In 2022, I partnered with a Werklund graduate student in educational psychology, Rachel Pagaling, and Dr. Brenda McDermott, Senior Manager of Student Accessibility Services, to write a brief open-access report on Academic Integrity Considerations for Accessibility, Equity and Inclusion.
We know that academic integrity is a cornerstone of both K-12 and higher education. We want to ensure that learning, assessment, and credentials uphold the highest ethical standards. However, as educators, we can — and should — consider how the principles of inclusive education can strengthen and complement our approach to academic integrity.
Inclusive education means ensuring that all students, regardless of their background, abilities, or learning needs, have equitable access to educational opportunities and can meaningfully participate. Thomas and May (2010) sum it up nicely when they say that being inclusive means “proactively making higher education accessible, relevant and engaging to all students” (p. 5). Of course, the same thinking can be extended to K-12 education, too. Applying these inclusive principles to academic integrity means recognizing that diverse learners may express and demonstrate their knowledge in different ways.
Inclusion is not only about students with physical disabilities, developmental disabilities, or neurodivergence; it is about creating conditions where all students can thrive. Associate Professor Joanna Tai and colleagues have a great article on assessment for inclusion that helps us think about how to design assessment that is both equitable and rigorous.
The point here is that by fostering an inclusive academic culture, we empower all students to bring their best selves to school and learn with integrity.
Beyond accessibility and cultural responsiveness, inclusive academic integrity also means actively addressing systemic barriers and implicit biases. If certain groups of students consistently struggle with academic integrity issues, it may reveal deeper inequities that need to be examined and addressed. In other words, we can look at the barriers to success, rather than the limitations of our students, as being the problem. As Juuso Nieminen and I have pointed out, even accommodations policies have an underlying assumption that students who need accommodations are out to cheat the system.
If you’re interested in reading more about disability justice to inform your thinking, I highly recommend Doron Dorfman’s article on the fear of the disability con and Jay Dolmage’s work on academic ableism.
The benefits of this holistic, inclusive approach to academic integrity are numerous. When students feel respected, supported, and able to succeed, they are more engaged and motivated. This, in turn, leads to better learning outcomes. Moreover, graduates who have internalized inclusive academic integrity will be better equipped to uphold ethical standards in their future careers and communities.
As educators, we have a responsibility to nurture academic integrity in ways that are inclusive, accessible, culturally responsive, and empowering for diverse learners. By applying the principles of inclusive education, we can transform academic integrity from a rigid set of rules into a collaborative, values-driven endeavor that brings out the best in our students and ourselves.
References
Davis, M. (2022). Examining and improving inclusive practice in institutional academic integrity policies, procedures, teaching and support. International Journal for Educational Integrity, 18(1), 14. https://doi.org/10.1007/s40979-022-00108-x
Dolmage, J. T. (2017). Academic ableism: Disability and higher education. University of Michigan Press.
Dorfman, D. (2019). Fear of the disability con: Perceptions of fraud and special rights discourse. Law & Society Review, 53(4), 1051-1091. https://doi.org/10.1111/lasr.12437
Elkhoury, E. (2024). An equitable approach to academic integrity through alternative assessment. In S. E. Eaton (Ed.), Second handbook of academic integrity (pp. 1261-1272). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-54144-5_135
Nieminen, J. H., & Eaton, S. E. (2023). Are assessment accommodations cheating? A critical policy analysis. Assessment & Evaluation in Higher Education, 1-16. https://doi.org/10.1080/02602938.2023.2259632
Pagaling, R., Eaton, S. E., & McDermott, B. (2022, April 4). Academic integrity: Considerations for accessibility, equity, and inclusion. http://hdl.handle.net/1880/114519
Thomas, L., & May, H. (2010). Inclusive learning and teaching in higher education. Higher Education Academy.
Tai, J., Ajjawi, R., Bearman, M., Boud, D., Dawson, P., & Jorre de St Jorre, T. (2023). Assessment for inclusion: Rethinking contemporary strategies in assessment design. Higher Education Research & Development. https://doi.org/10.1080/07294360.2022.2057451
It’s the start of a new school year here in North America. We are into the second week of classes and already I am hearing from administrators in both K-12 and higher education institutions who are frustrated with educators who have turned to ChatGPT and other publicly available Gen AI apps to help them assess student learning.
Although customized AI apps designed specifically to assist with the assessment of student learning already exist, many educators do not yet have access to such tools. Instead, I am hearing about educators turning to large language models (LLMs) like ChatGPT to provide formative or summative assessment of students’ work. There are some good reasons to avoid using ChatGPT or other LLMs to assess student learning.
I expect that not everyone will agree with these points. Please take them in the spirit in which they are intended, which is to provide guidance on ethical ways to teach, learn, and assess students’ work.
8 Reasons Why Educators Should Avoid Using AI Apps to Help with Assessment of Student Learning
Intellectual Property
In Canada at least, a student’s work is their intellectual property. Unless you have permission to use it outside of class, avoid doing so. The bottom line here is that a student’s intellectual work is not yours to share with a large language model (LLM) or any other third-party application without their knowledge and consent.
Privacy
A student’s personal data, including their name, ID number, and other details, should never be uploaded to an external app without consent. One reason for this blog post is to respond to stories I am hearing about educators uploading entire student essays or assignments, including the cover page with all the identifying information, to a third-party GenAI app.
Data Security
Content uploaded to an AI tool may be added to its database and used to train the tool. Uploading student assignments to GenAI apps for feedback poses several data security risks. These include potential breaches of data storage systems, privacy violations through sharing sensitive student information, and intellectual property concerns. Inadequate access controls or encryption could allow unauthorized access to student work.
AI model vulnerabilities might enable data extraction, while unintended leakage could occur through the AI app’s responses. If the educator’s account is compromised, it could expose all of the uploaded assignments. The app’s policies may permit third-party data sharing, and long-term data persistence in backups or training sets could extend the risk timeline. Also, there may be legal and regulatory issues around sharing student data, especially for minors, without proper consent.
Bias
AI apps are known to be biased. Feedback generated by an AI app can be biased, unfair, and even racist. To learn more, check out this article published in Nature. AI models can perpetuate existing biases present in their training data, which may not represent diverse student populations adequately. Apps might favour certain writing styles (e.g., standard American English), cultural references, or modes of expression, disadvantaging students from different backgrounds.
Furthermore, the AI’s feedback could be inconsistent across similar submissions or fail to account for individual student progress and needs. Additionally, the app may not fully grasp nuanced or creative approaches, leading to standardized feedback that discourages unique thinking.
Lack of Context
An AI app does not know your student like you do. Although GenAI tools can offer quick assessments and feedback, they often lack the nuanced understanding of a student’s unique context, learning style, and emotional or physical well-being. Overreliance on AI-generated feedback might lead to generic responses, diminishing the personal connection and meaningful interaction that educators provide, which are vital for effective learning.
Impersonal
AI apps can provide generic feedback, but generic feedback does little to scaffold a student’s learning. As an educator, you can personalize feedback to help the student grow. Personalized feedback is crucial: it enhances understanding and encourages engagement with the material. Tailoring feedback to specific strengths and weaknesses helps students recognize their progress and the areas needing improvement, which in turn builds their confidence and motivation.
Academic Integrity
Educators model ethical behaviour, and this includes transparent and fair assessment. If you are using tech tools to assess student learning, it is important to be transparent about it. In this post, I write more about how and why deceptive and covert assessment tactics are unacceptable.
Your Employee Responsibilities
If your job description includes assessing student work, you may be violating your employment contract if you offload assessment to an AI app.
Concluding Thoughts
Unless your employer has explicitly given you permission to use AI apps for assessing student work, consider (at least for now) providing feedback and assessment in the ways expected by your employer. If we do not want students to use AI apps to take shortcuts, then it is up to us as educators to model the behaviour we expect from students.
I understand that educators have excessive and exhausting workloads. I appreciate that we have more items on our to-do lists than is reasonable. I totally get that we may look for shortcuts and ways to reduce our workload. The reality is that although Gen AI may have the capability to help with certain tasks, not all employers have endorsed its use in the same way.
Not all institutions or schools have artificial intelligence policies or guidelines, so when in doubt, ask your supervisor about the expectations. Again, there is a parallel here with student conduct. If we expect students to avoid using AI apps unless we make it explicit that doing so is OK, then the same goes for educators: avoid using unauthorized tech tools for assessment without your boss knowing about it.
I am not suggesting that Gen AI apps lack the capability to assist with assessment; I am suggesting that many educational institutions have not yet approved such apps for use in the workplace. Trust me, when there are Gen AI apps to help with the heaviest aspects of our workload as educators, I’ll be at the front of the line to use them. In the meantime, there’s a balance to be struck between what AI can do and what one’s employer permits us to use AI for. It’s important to know the difference, and to protect your livelihood.
Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.
Sarah Elaine Eaton, PhD, Editor-in-Chief, International Journal for Educational Integrity
People have been asking if they should be using detection tools to identify text written by ChatGPT or other artificial intelligence writing apps. Just this week I was a panelist in a session on “AI and You: Ethics, Equity, and Accessibility”, part of ETMOOC 2.0. Alec Couros asked what I was seeing across Canada in terms of universities using artificial intelligence detection in misconduct cases.
The first thing I shared was the University of British Columbia web page stating that the university was not enabling Turnitin’s AI-detection feature. UBC is one of the few universities in Canada that subscribes to Turnitin.
Key message: Tools to detect text written by artificial intelligence aren’t really reliable or effective. It would be wise to be skeptical of any marketing claims to the contrary.
There are news reports about students being falsely accused of misconduct when the results of AI writing detection tools were used as evidence. See news stories here and here, for example.
There have been few studies on the impact of a false accusation of student academic misconduct, but if we turn to the literature on false accusations of criminal offences, there is evidence that false accusations can result in reputational damage, self-stigma, depression, anxiety, PTSD, sleep problems, social isolation, and strained relationships, among other outcomes. A false accusation of academic misconduct can be devastating for a student; some students have even died by suicide as a result. You can read some stories about students dying by suicide after false allegations of academic cheating in the United States and in India. Of course, stories about student suicide are rarely discussed in the media, for a variety of reasons. The point here is that falsely accusing students of academic cheating can have a negative impact on their mental and physical health.
Key message: False accusations of academic misconduct can be devastating for students.
Although reporting allegations of misconduct remains a responsibility of educators, having fully developed (and mandatory) case management and investigation systems is imperative. Decisions about whether misconduct has occurred should be made carefully and thoughtfully, using due process that follows established policies.
It is worth noting that AI-generated text can be revised and edited such that the end product is neither fully written by AI nor fully written by a human. At our university, technology to detect possible misconduct may not be used deceptively or covertly. For example, we do not have an institutional license for any text-matching software. Individual professors can get a subscription if they wish, but the use of detection tools should be declared in the course syllabus. If detection tools are used post facto, it can be considered a deception on the part of the professor because the students were not made aware of the technology prior to handing in their assessment.
Key message: Students can appeal any misconduct case brought forward with the use of deceptive or undisclosed assessment tools or technology (and quite frankly, they would probably win the appeal).
If we expect students to be transparent about their use of tools, then it is up to educators and administrators also to be transparent about their use of technology prior to assessment and not afterwards. A technology arms race in the name of integrity is antithetical to teaching and learning ethically and can perpetuate antagonistic and adversarial relationships between educators and students.
Ethical Principles for Detecting AI-Generated Text in Student Work
Let me be perfectly clear: I am not at all a fan of using detection tools to identify possible cases of academic misconduct. But, if you insist on using detection tools, for heaven’s sake, be transparent and open about your use of them.
Here is an infographic you are welcome to use and share: Infographic: “Ethical Principles for Detecting AI-Generated Text in Student Work” (Creative Commons License: Attribution-NonCommercial-ShareAlike 4.0 International). The text inside the infographic is written out in full with some additional details below.
Here is some basic guidance:
Check Your Institutional Policies First
Before you use any detection tools on student work, ensure that the use of such tools is permitted according to your school’s academic integrity policy. If your school does not have such a policy or if the use of detection tools is not mentioned in the policy, that does not automatically mean that you have the right to use such tools covertly. Checking the institutional policies and regulations is a first step, but it is not the only step in applying the use of technology ethically in assessment of student work.
Check with Your Department Head
Whether the person’s title is department head, chair, headmaster/headmistress, principal, or something else, there is likely someone in your department, faculty, or school whose job it is to oversee the curriculum and/or matters relating to student conduct. Before you go rogue using detection tools to catch students cheating, ask the person to whom you report if they object to the use of such tools. If they object, then do not go behind their back and use detection tools anyway. Even if they agree, it is still important to use such tools in a transparent and open way, as outlined in the next two recommendations.
Include a Statement about the Use of Detection Tools in Your Course Syllabus
Include a clear written statement in your course syllabus that outlines in plain language exactly which tools will be used in the assessment of student work. A failure to inform students in writing about the use of detection tools before they are used could constitute unethical assessment or even entrapment. Detection tools should not be used covertly. Their use should be openly and transparently declared to students in writing before any assessment or grading begins.
Of course, having a written statement in a course syllabus does not absolve educators of their responsibility to have open and honest conversations with students, which is why the next point is included.
Talk to Students about the Tools or Apps You Will Use as Part of Your Assessment
Have open and honest conversations with students about how you plan to use detection tools. Point out that there is a written statement in the course outline and that you have the support of your department head and the institution to use these tools. Be upfront and clear with students.
It is also important to engage students in evidence-based conversations about the limitations of tools to detect artificial intelligence writing, including the current lack of empirical evidence about how well they work.
Conclusion
Again, I emphasize that I am not at all promoting the use of any AI detection technology whatsoever. In fact, I am opposed to the use of surveillance and detection technology that is used punitively against students, especially when it is done in the name of teaching and learning. However, if you are going to insist on using technology to detect possible breaches of academic integrity, then at least do so in an open and transparent way — and acknowledge that the tools themselves are imperfect.
Key message: Under no circumstances should the results from an AI-writing detection tool be used as the only evidence in a student academic misconduct allegation.
I am fully anticipating some backlash to this post. There will be some of you who will object to the use of detection tools on principle and counter that any blog post talking about how they can be used is in itself unethical. You might be right, but the reality remains that thousands of educators are currently using detection tools for the sole purpose of catching cheating students. As much as I rail against a “search and destroy” approach, there will be some people who insist on taking this position. This blog post offers some guidelines to avoid deceptive assessment and the covert use of technology in student assessment.
Key message: Deceptive assessment is a breach of academic integrity on the part of the educator. If we want students to act with integrity, then it is up to educators to model ethical behaviour themselves.
References
Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected? arXiv. https://doi.org/10.48550/arXiv.2303.11156
Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education, and the Educational Leader in Residence, Academic Integrity, University of Calgary, Canada. Opinions are my own and do not represent those of the University of Calgary.