The Use of AI-Detection Tools in the Assessment of Student Work

May 6, 2023

People have been asking if they should be using detection tools to identify text written by ChatGPT or other artificial intelligence writing apps. Just this week I was a panelist in a session on “AI and You: Ethics, Equity, and Accessibility”, part of ETMOOC 2.0. Alec Couros asked what I was seeing across Canada in terms of universities using artificial intelligence detection in misconduct cases.

The first thing I shared was the University of British Columbia web page stating that the university was not enabling Turnitin’s AI-detection feature. UBC is one of the few universities in Canada that subscribes to Turnitin.


Turnitin’s rollout of AI detection earlier this year was widely contested, and I won’t go into that here. What I will say is that whether AI detection is a new feature embedded into existing product lines or a standalone product, there is little scientific evidence to show that AI-generated text can be effectively detected (see Sadasivan et al., 2023). In a TechCrunch article, OpenAI, the company that developed ChatGPT, acknowledged that its own detection tool had a success rate of only around 26%.

Key message: Tools to detect text written by artificial intelligence aren’t really reliable or effective. It would be wise to be skeptical of any marketing claims to the contrary.
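For readers who want to see the arithmetic behind that skepticism, here is a short back-of-the-envelope calculation, written as a Python sketch, of why a detector’s headline numbers say little about how often a flagged student is actually at fault. The only figure taken from above is the roughly 26% success rate reported for OpenAI’s own tool; the assumed prevalence of AI-written submissions and the assumed false-positive rate are invented purely for illustration.

    # A minimal sketch using hypothetical numbers. Only the 26% detection
    # (true-positive) rate comes from the OpenAI figure cited above; the
    # assumed prevalence and false-positive rate are purely illustrative.

    def positive_predictive_value(prevalence: float,
                                  true_positive_rate: float,
                                  false_positive_rate: float) -> float:
        """Probability that a flagged submission really contains AI-generated text (Bayes' rule)."""
        flagged_ai = prevalence * true_positive_rate             # correctly flagged
        flagged_human = (1 - prevalence) * false_positive_rate   # wrongly flagged
        return flagged_ai / (flagged_ai + flagged_human)

    # Assume 10% of submissions contain AI-generated text, the detector catches
    # 26% of those, and it wrongly flags 5% of human-written work.
    ppv = positive_predictive_value(prevalence=0.10,
                                    true_positive_rate=0.26,
                                    false_positive_rate=0.05)
    print(f"Chance a flagged paper is actually AI-written: {ppv:.0%}")  # ~37%

Under these assumed numbers, roughly six out of ten flagged papers would be human-written, which is one more reason detector output should never stand on its own as evidence of misconduct.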

There are news reports about students being falsely accused of misconduct when the results of AI writing detection tools were used as evidence (see, for example, Fowler, 2023; Jimenez, 2023).

There have been few studies on the impact of a false accusation of student academic misconduct, but if we turn to the literature on false accusations of criminal offences, there is evidence that false accusations can result in reputational damage, self-stigma, depression, anxiety, PTSD, sleep problems, social isolation, and strained relationships, among other outcomes. Falsely accusing students of academic misconduct can be devastating; some students have died by suicide after false allegations of academic cheating, with reported cases in the United States and in India. Of course, stories about student suicide are rarely discussed in the media, for a variety of reasons. The point here is that falsely accusing students of academic cheating can have a negative impact on their mental and physical health.

Key message: False accusations of academic misconduct can be devastating for students.

Although reporting allegations of misconduct remains a responsibility of educators, having fully developed (and mandatory) case management and investigation systems is imperative. Decisions about whether misconduct has occurred should be made carefully and thoughtfully, using due process that follows established policies.

It is worth noting that AI-generated text can be revised and edited such that the end product is neither fully written by AI nor fully written by a human. At our university, technology to detect possible misconduct may not be used deceptively or covertly. For example, we do not have an institutional license for any text-matching software. Individual professors can get a subscription if they wish, but the use of detection tools should be declared in the course syllabus. If detection tools are used post facto, that can be considered deception on the part of the professor because students were not made aware of the technology before submitting their work for assessment.

Key message: Students can appeal any misconduct case brought forward with the use of deceptive or undisclosed assessment tools or technology (and quite frankly, they would probably win the appeal).

If we expect students to be transparent about their use of tools, then it is up to educators and administrators also to be transparent about their use of technology prior to assessment and not afterwards. A technology arms race in the name of integrity is antithetical to teaching and learning ethically and can perpetuate antagonistic and adversarial relationships between educators and students.

Ethical Principles for Detecting AI-Generated Text in Student Work

Let me be perfectly clear: I am not at all a fan of using detection tools to identify possible cases of academic misconduct. But, if you insist on using detection tools, for heaven’s sake, be transparent and open about your use of them.

Here is an infographic you are welcome to use and share: Infographic: “Ethical Principles for Detecting AI-Generated Text in Student Work” (Creative Commons License: Attribution-NonCommercial-ShareAlike 4.0 International). The text inside the infographic is written out in full with some additional details below.

Here is some basic guidance:

Check your Institutional Policies First

Before you use any detection tools on student work, ensure that the use of such tools is permitted under your school’s academic integrity policy. If your school does not have such a policy, or if the use of detection tools is not mentioned in the policy, that does not automatically mean you have the right to use such tools covertly. Checking institutional policies and regulations is a first step, but it is not the only step in using technology ethically in the assessment of student work.

Check with Your Department Head

Whether the person’s title is department head, chair, headmaster/headmistress, principal, or something else, there is likely someone in your department, faculty, or school whose job it is to oversee the curriculum and/or matters relating to student conduct. Before you go rogue and use detection tools to catch students cheating, ask the person to whom you report whether they object to the use of such tools. If they object, do not go behind their back and use detection tools anyway. Even if they agree, it is still important to use such tools in a transparent and open way, as outlined in the next two recommendations.

Include a Statement about the Use of Detection Tools in Your Course Syllabus

Include a clear written statement in your course syllabus that outlines in plain language exactly which tools will be used in the assessment of student work. A failure to inform students in writing about the use of detection tools before they are used could constitute unethical assessment or even entrapment. Detection tools should not be used covertly. Their use should be openly and transparently declared to students in writing before any assessment or grading begins.

Of course, having a written statement in a course syllabus does not absolve educators of their responsibility to have open and honest conversations with students, which is why the next point is included.

Talk to Students about the Tools or Apps You Will Use as Part of Your Assessment

Have open and honest conversations with students about how you plan to use detection tools. Point out that there is a written statement in the course outline and that you have the support of your department head and the institution to use these tools. Be upfront and clear with students.

It is also important to engage students in evidence-based conversations about the limitations of tools to detect artificial intelligence writing, including the current lack of empirical evidence about how well they work.

Conclusion

Again, I emphasize that I am not at all promoting the use of any AI detection technology whatsoever. In fact, I am opposed to the use of surveillance and detection technology that is used punitively against students, especially when it is done in the name of teaching and learning. However, if you are going to insist on using technology to detect possible breaches of academic integrity, then at least do so in an open and transparent way — and acknowledge that the tools themselves are imperfect.

Key message: Under no circumstances should the results from an AI-writing detection tool be used as the only evidence in a student academic misconduct allegation.

I am fully anticipating some backlash to this post. There will be some of you who object to the use of detection tools on principle and counter that any blog post talking about how they can be used is in itself unethical. You might be right, but the reality remains that thousands of educators are currently using detection tools for the sole purpose of catching cheating students. As much as I rally against a “search and destroy” approach, there will be some people who insist on taking this position. This blog post offers some guidelines to help avoid deceptive assessment and the covert use of technology in student assessment.

Key message: Deceptive assessment is a breach of academic integrity on the part of the educator. If we want students to act with integrity, then it is up to educators to model ethical behaviour themselves.

References

Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected? arXiv. https://doi.org/10.48550/arXiv.2303.11156

Fowler, G. A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Jimenez, K. (2023, April 13). Professors are using ChatGPT detector tools to accuse students of cheating. But what if the software is wrong? USA Today. https://www.usatoday.com/story/news/education/2023/04/12/how-ai-detection-tool-spawned-false-cheating-case-uc-davis/11600777002/

_________________________________


Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education, and the Educational Leader in Residence, Academic Integrity, University of Calgary, Canada. Opinions are my own and do not represent those of the University of Calgary.


Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity

May 17, 2022
Title slide from CSSHE 2022 panel discussion: AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity (Kumar, Mindzak, Eaton & Morrison)

For more than a year there have been small teams of us across Canada studying the impact of artificial intelligence on academic integrity. Today I am pleased to be part of a panel discussion on this topic at the annual conference of the Canadian Society for the Study of Higher Education (CSSHE), which is part of Congress 2022.

Our panel is led by Rahul Kumar (Brock University, Canada), together with Michael Mindzak (Brock University, Canada) and Ryan Morrison (George Brown College, Canada).

Here is the information about our panel:

Session G3: Panel: AI & AI: Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity (Live, remote) 

Panel Chair: Rahul Kumar 

  • Rahul Kumar (Brock University): Ethical application with practical examples
  • Michael Mindzak (Brock University): Implications on labour 
  • Ryan Morrison (George Brown College): Large language models: An overview for educators 
  • Sarah Elaine Eaton (University of Calgary): Academic integrity and assessment 

We have developed a combined slide deck for our panel discussion today. You can download the entire slide deck from the link noted in the citation below:

Kumar, R., Mindzak, M., Morrison, R., & Eaton, S. E. (2022, May 17). AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity [online]. Paper presented at the Canadian Society for the Study of Higher Education (CSSHE). http://hdl.handle.net/1880/114647

Related posts:

New project: Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies – https://drsaraheaton.wordpress.com/2022/04/19/new-project-artificial-intelligence-and-academic-integrity-the-ethics-of-teaching-and-learning-with-algorithmic-writing-technologies/

Keywords: artificial intelligence, large language models, GPT-3, academic integrity, academic misconduct, plagiarism, higher education, teaching, learning, assessment

_________________________________



New project: Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies

April 19, 2022

Today the University of Calgary announced the recipients of the 2022 Teaching and Learning Grants. I’m pleased to share that our project was among those awarded funding. Here are the details of our project:

Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies

Research Team (all from the University of Calgary)

  • Sarah Elaine Eaton, PhD, Werklund School of Education, Principal Investigator
  • Robert Brennan, PhD, Schulich School of Engineering, Co-Investigator
  • Jason Wiens, PhD, Department of English, Faculty of Arts, Co-Investigator
  • Brenda McDermott, PhD, Student Accessibility Services, Co-Investigator
  • Helen Pethrick, MA, Project Manager
  • Beatriz Moya, PhD student, Werklund School of Education, Research Assistant
  • Jonathan Lesage, MSc student, Schulich School of Engineering, Research Assistant

Focus area (as aligned with University of Calgary research priority areas): Innovation and entrepreneurial thinking 

Grant type: Scholarship of Teaching and Learning (SoTL) Grants

Project scope: This project will be conducted at the University of Calgary. Data will be collected from faculty and students, upon successful ethics approval of the project.

Funding amount: $40,000 CAD

Project duration: 2022-2025

Project status

This project has just received funding, and we have submitted documentation to have it set up in the university systems. We are waiting for that step to be approved. In the meantime, we are preparing our application to the Conjoint Faculties Research Ethics Board (CFREB) at the University of Calgary.

Please note: This is an internal University of Calgary grant. We are not able to include any external collaborators in this particular project.

_________________________________



Comparing E-Proctoring Software to Hydroxychloroquine: An Apt Analogy

November 4, 2020


To help educators and administrators understand why I urge caution, and even skepticism, about the use of e-proctoring software and other surveillance technologies, such as those that lock down students’ Internet browsers, here’s an analogy I have been using that seems to resonate:

In my opinion, e-proctoring software is to higher education what Hydroxychloroquine has been to the COVID-19 virus.

It’s not that e-proctoring software is bad; it’s that it was never designed to be used under the current conditions. Some colleagues would disagree with me and argue that this kind of software is bad in principle. I accept their position. Let’s look at this through the eyes of a scholar who is trained to reserve judgement on an issue until there is evidence to back it up. If we assume the software was designed for a specific purpose, namely to invigilate exams taken via a computer, then it fulfills that purpose. So, in that sense, it does what it is supposed to do. However, that is not the whole story.

We can turn to Hydroxychloroquine as an analogy to help us understand why we should be skeptical.

Hydroxychloroquine is an anti-malaria drug that is also used to treat arthritis. It was never designed to be used against the SARS-CoV-2 (COVID-19) virus. Hasty attempts to do research on the coronavirus, including studies of Hydroxychloroquine, have resulted in numerous papers being retracted from scientific journals. People ran to this drug as a possible antidote to the coronavirus, just as schools are running to e-proctoring software as an antidote to exam cheating. Neither e-proctoring software nor Hydroxychloroquine was designed to be used during the current pandemic. People flocked to both as if they were some kind of magic pill that would solve a massively complex problem, without sufficient evidence that either would actually do what people so desperately wanted it to do.

The reality is that there is scant scientific data to show that e-proctoring actually works in the way that people want it to, that is, to provide a way of addressing academic misconduct during the pandemic. By “scientific data” I do not mean sales pitches. I am talking about independent scholarly studies undertaken by qualified academic researchers employed at reputable universities. By “independent scholarly studies” I mean research that has not been funded in any way by the companies that produce the products. That kind of research is terrifyingly lacking.

We need to back up for a minute and look at why we invigilate exams in the first place. To invigilate means “to keep watch over”. Keeping watch over students while they write an exam is part of ensuring that testing conditions are fair and objective.

The point of a test, in scientific terms, is to control all variables except one. In traditional testing, all other factors are controlled, including the conditions under which the test is administered, such as an exam hall with desks separated, the same lighting and environment for all test-takers, the length of time permitted to take the test, how it is invigilated, and so on. All variables are presumably controlled except one: the student’s knowledge of the subject matter. That’s what’s being tested, the student’s knowledge.

Exams are administered in what could be termed academically sterile environments. In an ideal situation, academic hygiene is the starting point for administering a test. Invigilation is just one aspect of ensuring academic hygiene during testing; it is not the only factor that contributes to testing conditions that control for all possible variables except a student’s knowledge of the subject matter.

During the pandemic, with the shift to remote learning, we cannot control all the variables. We simply cannot assure an academically hygienic environment for testing. Students may have absolutely no control over who else is present in their living/studying quarters. They may have no control over a family member (including their own children) who might enter a room unannounced during a test. The conditions under which students are being tested during the pandemic are not academically hygienic. And that’s not their fault.

E-proctoring may address one aspect of exam administration: invigilation. It cannot, however, ensure that all variables are controlled.

As an academic integrity scholar, I am distressed by the lack of objective, peer-reviewed data about e-proctoring software. Schools have turned to e-proctoring software as if it were some kind of magic pill that will make academic cheating go away. We have insufficient evidence to substantiate that e-proctoring software, or any technology for that matter, can serve as a substitute for an in-person academically hygienic testing environment.

Schools that were using e-proctoring before the pandemic, such as Thompson Rivers University or Athabasca University in Canada, offered students a choice between taking their exams online, at home, using an e-proctoring service, or travelling to an in-person exam centre. During the pandemic, that choice has been taken away.

We all want an antidote to academic misconduct during remote learning, but I urge educators and administrators to think like scholars and scientists. In other words, approach this “solution” with caution, and even skepticism. At present, we lack sufficient evidence to make informed decisions. Educators need to be just as skeptical about this technology and how it works under pandemic conditions as physicians and the FDA have been about using Hydroxychloroquine as a treatment for the coronavirus. The claim that Hydroxychloroquine is effective against the coronavirus is a myth. The claim that e-proctoring software is an effective replacement for in-person exams is also a myth, one perpetuated by the companies that sell the product.

Forcing surveillance technology on students against their will during a pandemic is tantamount to forcing an untested treatment on a patient; it is unethical in the extreme.

______



Portfolios to Assess Literacy and Second Languages: An Annotated Bibliography

July 5, 2011

For a few years now I’ve been interested in the topic of using portfolios and asset-based (also known as strength-based) approaches to assessment. Significant theoretical research and applied classroom practice has been done in the field of alternative assessment, and specifically in the area of using portfolios and e-portfolios.

The practice of using portfolios in second and foreign language teaching has increased in popularity, alongside increased understanding and adoption of the Common European Framework of Reference for Languages. Almost simultaneously, there has been a rise in the use of similar frameworks in the field of literacy. However, there is little collaboration between those who work in literacy and those who teach second and modern languages.

This annotated bibliography is an attempt to collect, select and share resources that may be relevant, helpful and useful to professionals working in both the second language and literacy sectors. The deeper values that guide this work are predicated on the belief that researchers and practitioners working in both fields have much in common and would benefit greatly from increased dialogue and shared resources.

Download a copy here: http://hdl.handle.net/1880/51923

Check out these related posts:

Student portfolios for Language Learning: What They Are and How to Use Them

Using Portfolios for Effective Learning

_____________
