The handbook is now in its final stages of production, and the standalone second edition will be released in hard copy in January 2024. To celebrate, Dr. Zeenath Reza Khan, one of the handbook’s section editors and contributors and a co-chair of the 1st Asia-Middle East-Africa Conference on Academic and Research Integrity (ACARI), held 17-19 December 2023, led the organization of a soft launch for the handbook during the conference.
The launch, held during the closing ceremony on the final day of the conference in an auditorium at the prestigious Middlesex University Dubai, was both festive and scholarly, bringing together educators, researchers, and advocates for academic integrity. In addition to conference delegates, a number of esteemed dignitaries attended, including His Excellency Jamal Hossain, Consul General of Bangladesh to the UAE; Dr. Mohammad Ali Reza Khan, award-winning wildlife specialist, Dubai Municipality; and Professor Cedwyn Fernandes, Pro Vice Chancellor of Middlesex University and Director of Middlesex University Dubai. Special thanks to Ms. Rania Sheir, Senior Specialist, Entrepreneurship and Innovation, Ministry of Education, UAE, who not only attended the launch but also posted about it on LinkedIn.
The Handbook, meticulously curated by leading experts in the field, is a compendium of insights, strategies, and best practices aimed at upholding ethical practices in academia and research. It covers a diverse range of topics, from plagiarism to artificial intelligence to the promotion of ethical behaviour in academic research, and much more. The multidimensional approach of the Handbook of Academic Integrity ensures that it caters to the needs of educators, administrators, and students alike.
A number of contributing authors were also in attendance.
Dr. Zeenath generously gifted two colleagues and me authentic saris, which we wore during the closing ceremony and the launch. As you can see from the photo below, I was given one in dark green and I just love it! I had an opportunity to say a few words about the book and its importance in the field, and to thank the organizers and authors. Each contributor was gifted a symbolic souvenir cut-out of the front cover of the handbook, and following the formalities, we signed the back of one another’s covers.
The book launch culminated in a celebratory atmosphere, with attendees leaving inspired. The Handbook of Academic Integrity (2nd ed.), now poised to be a cornerstone in the field, builds on the first edition and stands as a testament to the collective commitment to nurturing a culture of integrity throughout every level of education and research.
This blog has had over 3 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks!
Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.
Our team has been busy since we launched our research in April 2022. I haven’t done an update in a while, so I wanted to let you know what we’ve been up to. Check out our project website (https://osf.io/4cnvp/) for links to our peer-reviewed publications and other information about our work.
We are still collecting data for our survey and you’re welcome to participate! To take the survey, click here.
Expansion of our research
In April 2022, we received funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) in the form of a Connection Grant to host a public research forum. We included partners from the University of Saskatchewan, Brock University, Toronto Metropolitan University, and Deakin University (Australia). We hosted our research symposium on June 7-8, 2023 at the University of Calgary.
In case you missed this SSHRC-funded research symposium on the impact of artificial intelligence on higher education and academic integrity, you can catch up with the slides here:
Eaton, S. E., Dawson, P., McDermott, B., Brennan, R., Wiens, J., Moya, B., Dahal, B., Hamilton, M., Kumar, R., Mindzak, M., Miller, A., & Milne, N. (2023). Understanding the Impact of Artificial Intelligence on Higher Education. Calgary, Canada. https://hdl.handle.net/1880/116624
Also, you can catch Phill Dawson’s keynote from the event which is now archived on YouTube:
Phill Dawson’s keynote: Don’t Fear the Robot, University of Calgary, June 8, 2023
We are excited about next steps for this work and I’m happy to answer any questions you have about academic integrity and artificial intelligence.
Acknowledgements
We are grateful to the sponsors of this research:
Social Sciences and Humanities Research Council (SSHRC)
University of Calgary Teaching and Learning Grant
University of Calgary International Research Partnership Workshop Grant
Werklund School of Education, University of Calgary
Brock University
Deakin University
Toronto Metropolitan University
University of Saskatchewan
Related posts:
Invitation to Participate: Research Study on Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies – https://wp.me/pNAh3-2U3
People have been asking if they should be using detection tools to identify text written by ChatGPT or other artificial intelligence writing apps. Just this week I was a panelist in a session on “AI and You: Ethics, Equity, and Accessibility”, part of ETMOOC 2.0. Alec Couros asked what I was seeing across Canada in terms of universities using artificial intelligence detection in misconduct cases.
The first thing I shared was the University of British Columbia web page stating that the university was not enabling Turnitin’s AI-detection feature. UBC is one of the few universities in Canada that subscribes to Turnitin.
Key message: Tools to detect text written by artificial intelligence aren’t really reliable or effective. It would be wise to be skeptical of any marketing claims to the contrary.
There are news reports about students being falsely accused of misconduct when the results of AI writing detection tools were used as evidence. See news stories here and here, for example.
There have been few studies on the impact of a false accusation of student academic misconduct, but if we turn to the literature on false accusations of criminal offences, there is evidence that false accusations can result in reputational damage, self-stigma, depression, anxiety, PTSD, sleep problems, social isolation, and strained relationships, among other outcomes. Falsely accusing students of academic misconduct can be devastating; in some cases, students have died by suicide as a result. You can read some stories about students dying by suicide after false allegations of academic cheating in the United States and in India. Of course, stories about student suicide are rarely discussed in the media, for a variety of reasons. The point here is that false accusations of academic cheating can have a negative impact on students’ mental and physical health.
Key message: False accusations of academic misconduct can be devastating for students.
Although reporting allegations of misconduct remains a responsibility of educators, having fully developed (and mandatory) case management and investigation systems is imperative. Decisions about whether misconduct has occurred should be made carefully and thoughtfully, using due process that follows established policies.
It is worth noting that AI-generated text can be revised and edited such that the end product is neither fully written by AI nor fully written by a human. At our university, technology to detect possible misconduct may not be used deceptively or covertly. We do not have an institutional license for any text-matching software; individual professors can get a subscription if they wish, but the use of detection tools should be declared in the course syllabus. If detection tools are used post facto, it can be considered a deception on the part of the professor, because the students were not made aware of the technology prior to handing in their assessment.
Key message: Students can appeal any misconduct case brought forward with the use of deceptive or undisclosed assessment tools or technology (and quite frankly, they would probably win the appeal).
If we expect students to be transparent about their use of tools, then it is up to educators and administrators also to be transparent about their use of technology prior to assessment and not afterwards. A technology arms race in the name of integrity is antithetical to teaching and learning ethically and can perpetuate antagonistic and adversarial relationships between educators and students.
Ethical Principles for Detecting AI-Generated Text in Student Work
Let me be perfectly clear: I am not at all a fan of using detection tools to identify possible cases of academic misconduct. But, if you insist on using detection tools, for heaven’s sake, be transparent and open about your use of them.
Here is an infographic you are welcome to use and share: Infographic: “Ethical Principles for Detecting AI-Generated Text in Student Work” (Creative Commons License: Attribution-NonCommercial-ShareAlike 4.0 International). The text inside the infographic is written out in full with some additional details below.
Here is some basic guidance:
Check your Institutional Policies First
Before you use any detection tools on student work, ensure that the use of such tools is permitted according to your school’s academic integrity policy. If your school does not have such a policy or if the use of detection tools is not mentioned in the policy, that does not automatically mean that you have the right to use such tools covertly. Checking the institutional policies and regulations is a first step, but it is not the only step in applying the use of technology ethically in assessment of student work.
Check with Your Department Head
Whether the person’s title is department head, chair, headmaster/headmistress, principal, or something else, there is likely someone in your department, faculty or school whose job it is to oversee the curriculum and/or matters relating to student conduct. Before you go rogue using detection tools to catch students cheating, ask the person to whom you report if they object to the use of such tools. If they object, then do not go behind their back and use detection tools anyway. Even if they agree, then it is still important to use such tools in a transparent and open way, as outlined in the next two recommendations.
Include a Statement about the Use of Detection Tools in Your Course Syllabus
Include a clear written statement in your course syllabus that outlines in plain language exactly which tools will be used in the assessment of student work. A failure to inform students in writing about the use of detection tools before they are used could constitute unethical assessment or even entrapment. Detection tools should not be used covertly. Their use should be openly and transparently declared to students in writing before any assessment or grading begins.
Of course, having a written statement in a course syllabus does not absolve educators of their responsibility to have open and honest conversations with students, which is why the next point is included.
Talk to Students about Your Use of Tools or Apps You will Use as Part of Your Assessment
Have open and honest conversations with students about how you plan to use detection tools. Point out that there is a written statement in the course outline and that you have the support of your department head and the institution to use these tools. Be upfront and clear with students.
It is also important to engage students in evidence-based conversations about the limitations of tools to detect artificial intelligence writing, including the current lack of empirical evidence about how well they work.
Conclusion
Again, I emphasize that I am not at all promoting the use of any AI detection technology whatsoever. In fact, I am opposed to the use of surveillance and detection technology that is used punitively against students, especially when it is done in the name of teaching and learning. However, if you are going to insist on using technology to detect possible breaches of academic integrity, then at least do so in an open and transparent way — and acknowledge that the tools themselves are imperfect.
Key message: Under no circumstances should the results from an AI-writing detection tool be used as the only evidence in a student academic misconduct allegation.
I am fully anticipating some backlash to this post. There will be some of you who object to the use of detection tools on principle and counter that any blog post talking about how they can be used is in itself unethical. You might be right, but the reality remains that thousands of educators are currently using detection tools for the sole purpose of catching cheating students. As much as I rail against a “search and destroy” approach, there will be some people who insist on taking this position. This blog post offers some guidelines to avoid deceptive assessment and the covert use of technology in student assessment.
Key message: Deceptive assessment is a breach of academic integrity on the part of the educator. If we want students to act with integrity, then it is up to educators to model ethical behaviour themselves.
References
Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-Generated Text be Reliably Detected? ArXiv. https://doi.org/10.48550/arXiv.2303.11156
Edwards explains in clear language, with lots of details and examples, how and why large language models (LLMs) such as ChatGPT make up content. As I read this article, it occurred to me that it could serve as a really great way to have proactive and generative conversations with students about the impact of artificial intelligence on teaching, learning, assessment, and academic integrity. So, here is a quick lesson plan about how to use this article in class:
Education level
Secondary school and post-secondary (e.g., community college, polytechnic, undergraduate or graduate university courses)
Lesson Plan Title: Understanding ChatGPT: Benefits and Limitations
Learning Objectives
By the end of this lesson students will be able to:
Understand how and why AI-writing apps make up content
Explain the term “confabulation”
Discuss the implications of fabricated content on academic integrity
Generate ideas about how to fact-check AI-generated content to ensure its accuracy
Class discussion (large group format if the class is small or small group format with a large group debrief at the end):
Possible guiding questions:
What is your experience with ChatGPT and other AI writing apps?
What were the main points in this article? (Alternate phrasing: What were your key takeaways from this article?)
What are some of the risks when AI apps engage in confabulation (i.e., fabrication)?
Discuss this quotation from the article, “ChatGPT as it is currently designed, is not a reliable source of factual information and cannot be trusted as such.”
Fabrication and falsification are commonly included in academic misconduct policies. What do you think the implications are for students and researchers when they write with AI apps?
What are some strategies or tips we can use to fact-check text generated by AI apps?
What is the importance of prompt-writing when working with AI writing apps?
Duration
The time commitment for the pre-reading will vary from one student to the next. The duration of the learning activity can be adjusted to suit the needs of your class.
Students’ pre-reading of the article: 60 minutes or less
Learning activity: 45-60 minutes
Lesson closure
Thank students for engaging actively in the discussion and sharing their ideas.
Possible Follow-up Activities
Tips for fact-checking. Have students in the class generate their own list of tips to fact-check AI-generated content (e.g., in a shared Google doc or by sharing ideas orally in class that one person inputs into a document on behalf of the class.)
Prompt-writing activity. Have students use different prompts to generate content from AI writing apps. Ask them to document each prompt and write down their observations about what worked and what didn’t. Discuss the results as a class.
Academic Integrity Policy Treasure Hunt and Discussion. Have students locate the school’s academic misconduct / academic integrity policy. Compare the definitions and categories for academic misconduct in the school’s policies with concepts presented in this article such as confabulation. Have students generate their own ideas about how to uphold the school’s academic integrity policies when using AI apps.
Creative Commons License
This lesson plan is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). This license applies only to the lesson plan, not to the original article by Ben Edwards.
Additional Notes
This is a generic (and imperfect) lesson plan. It can (and probably should) be adapted or personalized depending on the needs of the learners.
Acknowledgements
Thanks to Dr. Rahul Kumar, Brock University for providing an open peer review of this lesson plan.
_________________________________
Share or Tweet this:
How to Talk to Your Students about ChatGPT: A Lesson Plan for High School and College Students – https://drsaraheaton.wordpress.com/2023/04/07/how-to-talk-to-your-students-about-chatgpt-a-lesson-plan-for-high-school-and-college-students
In the meantime, I wanted to share some high-level thoughts on the topic, since many of you have been asking. Even though I am on Research and Scholarship Leave (RSL, a.k.a. sabbatical) this year, I’ve got another big project on the go that is taking up a lot of my time and focus right now, in addition to the research project above. I am serving as the Editor-in-Chief of the Handbook of Academic Integrity (2nd ed.). The first edition of the Handbook was edited by Tracey Bretag, who passed away in 2020.
The second edition is well underway and I’ve been working with an amazing team of Section Editors (giving a wave of gratitude to the team: Brenda M. Stoesz, Silvia Rossi, Joseph F. Brown, Guy Curtis, Irene Glendinning, Ceceilia Parnther, Loreta Tauginienė, Zeenath Reza Khan, and Wendy Sutherland-Smith). We have more than 100 chapters in the second edition, including some from the first edition as well as lots of new chapters. (Giving a wave of gratitude to all the contributors! Thank you for your amazing contributions!) It is a massive project and it has been a major focus of my sabbatical.
Suffice it to say, I have not had a spare moment to put fingers to keyboard to write in depth about this topic on social media, but I wanted to share a few high-level ideas here. I will have to unpack them in a future blog post or maybe an editorial, but for now, let me just say that I think moral panic over the use of artificial intelligence is not the answer. So you know where I stand on the issue, here are some thoughts:
I am happy to chat more, but let me just say that if you are afraid of an explosion of cheating in your classes because of ChatGPT or any other new technological advance, you are not alone, but honestly, technology isn’t the problem.