Exploring Postplagiarism with Google NotebookLM

September 23, 2024

This weekend I have been playing around with Google’s NotebookLM. This is a brand new, completely free service that can help users learn content in a variety of ways. It is powered by Gemini 1.5 Pro and it’s worth checking out. 

To test it out, I uploaded a copy of “Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology” in .pdf format.

Here is what NotebookLM came up with:

Audio Conversation

Perhaps the most impressive of the outputs is a podcast-style audio conversation that is completely AI-generated. You can listen to the Postplagiarism conversation here.

Postplagiarism: Navigating Ethics and Integrity in the Age of AI: Study Guide

Screenshot from Google’s NotebookLM, taken 22-SEP-2024

Summary

This article introduces the concept of “postplagiarism,” an era defined by the seamless integration of artificial intelligence (AI) and neurotechnology into everyday life, including education. Eaton explores the implications of this integration for academic integrity, arguing that traditional definitions of plagiarism may become obsolete as AI blurs the lines of authorship.

The author posits six key tenets of postplagiarism:

  1. Hybrid human-AI writing will become the norm: Co-created content will make distinguishing human and AI input increasingly difficult.
  2. Human creativity is enhanced: AI serves as a tool to augment human creativity, not replace it.
  3. Language barriers disappear: AI translation tools have the potential to facilitate cross-cultural communication.
  4. Humans can relinquish control, but not responsibility: While AI can assist in content creation, individuals remain accountable for accuracy and ethical use.
  5. Attribution remains important: Acknowledging sources and intellectual contributions remains crucial, with evolving considerations for Indigenous knowledge systems.
  6. Historical definitions of plagiarism no longer apply: Shifting technological landscapes necessitate new frameworks for ethical engagement with information and knowledge creation.

Eaton challenges Chomsky’s view of AI as “high-tech plagiarism,” arguing instead that it demands a reevaluation of existing definitions of plagiarism within the context of rapidly evolving technological capabilities. She advocates for pre-emptive research into the ethical implications of AI and neurotechnology, particularly commercially available brain-computer interfaces (BCIs), highlighting the need for transdisciplinary collaboration to address these complex issues.

The article concludes with a call to action, urging educators to prepare students for a future characterized by ubiquitous AI and neurotechnology, emphasizing ethical considerations and societal impact.

Quiz

Instructions: Answer the following questions in 2-3 sentences.

  1. What does Eaton mean by “postplagiarism”?
  2. Why does Eaton believe that AI will make traditional definitions of plagiarism obsolete?
  3. What is Eaton’s stance on Chomsky’s characterization of AI as “high-tech plagiarism”?
  4. Explain the concept of “hybrid human-AI writing” and its implications.
  5. According to Eaton, how does the emergence of AI impact human creativity?
  6. Why does Eaton believe that attribution remains important in the age of AI?
  7. What ethical concerns does Eaton raise regarding the use of neurotechnology in education?
  8. Why does Eaton believe that educators were caught off guard by both the COVID-19 pandemic and the emergence of AI like ChatGPT?
  9. What type of research does Eaton advocate for in relation to AI and neurotechnology in education?
  10. What is Eaton’s primary call to action for educators in light of the emerging postplagiarism era?

Quiz Answer Key

  1. “Postplagiarism” refers to an era where advanced technologies like AI and neurotechnology are fully integrated into everyday life, including education. This integration significantly impacts how we teach, learn, communicate, and engage with knowledge.
  2. Eaton argues that traditional definitions of plagiarism, often focused on verbatim copying, become inadequate in the face of AI. This is because AI tools enable the creation of sophisticated, original content, blurring the lines between human and machine authorship and challenging the notion of plagiarism as we know it.
  3. Eaton disagrees with Chomsky’s view, arguing that AI should not be simply dismissed as “high-tech plagiarism.” Instead, she calls for a nuanced understanding of AI’s capabilities and a re-evaluation of existing plagiarism definitions within the context of evolving technology.
  4. “Hybrid human-AI writing” refers to the collaborative process where humans and AI tools work together to create content. This collaboration makes it difficult to discern human input from AI-generated content, challenging traditional authorship and plagiarism detection methods.
  5. Eaton argues that AI enhances, rather than threatens, human creativity. She sees AI tools as resources that can augment human imagination and inspire new forms of creative expression, rather than replacing human ingenuity.
  6. Eaton stresses the continued importance of attribution in acknowledging intellectual debts and recognizing the contributions of others. In the age of AI, this includes responsibly crediting both human and AI sources, acknowledging the collaborative nature of knowledge creation.
  7. Eaton raises ethical concerns related to privacy, consent, and equitable access when it comes to neurotechnology in education. She particularly highlights concerns regarding commercially available brain-computer interfaces (BCIs) and their potential for misuse or exacerbating existing inequalities.
  8. Eaton suggests that educators were caught off guard by both the rapid shift to online learning during the COVID-19 pandemic and the sudden rise of AI tools like ChatGPT. She attributes this to a lack of preparedness for large-scale disruptions and the exponential pace of technological advancement.
  9. Eaton advocates for pre-emptive, transdisciplinary research on the ethical implications of AI and neurotechnology in education. This research would involve collaboration between academics, policymakers, and industry experts to address complex questions surrounding equitable access, privacy, and the impact on learning and teaching practices.
  10. Eaton calls on educators to proactively prepare students for a future deeply integrated with AI and neurotechnology. This preparation involves fostering critical thinking skills, ethical awareness regarding technology use, and adaptability in the face of evolving technological landscapes.

Essay Questions

  1. Analyse Eaton’s argument that traditional definitions of plagiarism are no longer applicable in the age of AI. How does AI challenge existing notions of authorship and originality?
  2. Discuss the ethical implications of commercially available neurotechnology in education. What are the potential benefits and risks associated with using such technologies in learning environments?
  3. How should educational institutions adapt their policies and practices to address the challenges and opportunities presented by postplagiarism?
  4. In what ways can AI be used to enhance, rather than replace, human creativity and critical thinking skills in educational settings? Provide specific examples to support your arguments.
  5. Discuss the potential impact of AI-driven translation tools on language learning and cross-cultural communication in education. How can we ensure equitable access to these tools and address concerns related to language diversity and cultural representation?

Glossary of Key Terms

  • Postplagiarism: An era characterized by the seamless integration of advanced technologies like AI and neurotechnology into everyday life, impacting how we learn, communicate, and create knowledge.
  • Artificial Intelligence (AI): The ability of a computer or a robot controlled by a computer to perform tasks that are usually done by humans because they require human intelligence and discernment.
  • Neurotechnology: Technology that interacts directly with the brain or nervous system, encompassing tools for monitoring, recording, or modulating neural activity.
  • Brain-Computer Interface (BCI): A device that establishes a direct communication pathway between the brain and an external device, often a computer, enabling the control of the device using brain signals.
  • Hybrid Human-AI Writing: Collaborative content creation involving both human input and AI-generated text, blurring the lines of authorship.
  • Large Language Model (LLM): A type of AI trained on a massive dataset of text and code to generate human-quality text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
  • Decolonization: The process of undoing the harmful effects of colonialism, particularly in educational contexts, challenging Eurocentric perspectives and promoting Indigenous knowledge systems.
  • Knowledge Caretaker: An individual or community responsible for preserving, transmitting, and respectfully engaging with knowledge, particularly within Indigenous cultures.

This study guide provides a comprehensive overview of the article, aiding in your understanding of the key concepts and arguments presented. Remember to utilize the quiz and essay questions to test your knowledge and delve deeper into the complexities of postplagiarism.

Postplagiarism FAQ

1. What is postplagiarism?

Postplagiarism refers to an era where advanced technologies, such as artificial intelligence (AI) and neurotechnology like brain-computer interfaces (BCIs), are deeply integrated into our daily lives, including education. This integration will significantly impact how we teach, learn, and interact.

2. How will AI impact academic writing?

AI writing tools are becoming increasingly sophisticated, making it difficult to distinguish between human and AI-generated text. This rise of hybrid human-AI writing presents challenges to traditional notions of plagiarism. While AI can enhance creativity by assisting with writing tasks, it also raises concerns about authenticity and the need to uphold ethical writing practices.

3. Will AI make learning languages irrelevant?

While AI translation tools are becoming more powerful, they won’t render language learning obsolete. Although AI can facilitate communication across language barriers, it cannot replace the cultural understanding and nuanced communication that comes with learning a language.

4. Can I use AI to complete my assignments?

Even with AI assistance, students are ultimately responsible for their academic work. Using AI to complete assignments without proper attribution or understanding can be considered a form of academic misconduct. Educators have a responsibility to adapt assessment methods to ensure students demonstrate genuine learning.

5. How does postplagiarism affect attribution practices?

While traditional citation methods remain important, postplagiarism challenges us to rethink how we acknowledge and value knowledge sources, particularly Indigenous knowledge systems often overlooked in standard academic practices. Respectful attribution in the postplagiarism era requires moving beyond technical citations to engage deeply with the works and ideas of others.

6. Are current academic integrity policies equipped to handle AI?

Existing definitions and policies related to plagiarism may need to be reevaluated in the age of postplagiarism. The lines are becoming blurred, demanding a more nuanced understanding of academic integrity that goes beyond simply detecting copied text.

7. What about neurotechnology? How will it impact education?

The emergence of neurotechnology, particularly BCIs, brings both opportunities and challenges. While it has the potential to revolutionize learning and assist individuals with disabilities, it also raises ethical concerns regarding privacy, autonomy, and potential misuse for academic cheating.

8. What can educators do to prepare for the postplagiarism era?

Educators should actively engage in discussions surrounding AI and neurotechnology in education. This includes researching ethical implications, promoting responsible AI use, adapting teaching practices, and fostering critical thinking skills in students to navigate this evolving landscape. Collaboration between educators, policymakers, and researchers is crucial to address these challenges proactively.

Sarah’s Reflections on NotebookLM

NotebookLM is a game-changer for teaching and learning. Students can upload any material to the app and generate content in plain language in both text and audio format. There are benefits and drawbacks to any technology and here are some that come to mind for NotebookLM:

Benefits

  • Personalized Learning: NotebookLM can tailor learning experiences to individual students’ needs, pace, and preferences. It can provide personalized explanations, examples, and practice test questions. I like this aspect of NotebookLM because it allows learners to personalize their own learning experience, rather than having a teacher do it for them.
  • Enhanced Engagement: By offering interactive and engaging content, NotebookLM can increase student interest and motivation by situating the locus of control for the learning with the student. I like this because the app can help to promote learner autonomy and agency. It can also facilitate collaborative learning through features like group discussions and shared notes.
  • Accessibility and UDL: The tool can make learning more accessible to students with disabilities, learning difficulties, or really any learner. It does this by providing the content in a variety of formats, such as text-based summaries or the audio podcast-style conversation.
  • 24/7 Support: NotebookLM can be available to students at any time, providing a resource for independent learning and practice. No matter when a student prefers to do their learning, “just-in-time” tools like this meet learners where they are at, on their timeline, not the teacher’s timeline.

Drawbacks

  • Lack of Human Interaction: Although NotebookLM can provide valuable support, it cannot fully replace the human connection and guidance that educators offer. The affective aspects of teaching and learning and the social connections, remain important.
  • Dependency on Technology: Overreliance on NotebookLM could lead to technological issues and disruptions in learning. For example, students who are overly dependent on technology may struggle to adapt to situations where the tool is not available or appropriate. Tools like this may — or may not — help students to develop metacognitive skills and evaluative judgement. (For more info on assessment in the age of generative AI, check out this article by Margaret Bearman and Rosemary Luckin.)
  • Perpetuation of Inequities: Students from disadvantaged backgrounds may have limited access to technology or to Internet connectivity, creating a digital divide and exacerbating educational inequalities. So, just as tools like this can enhance accessibility, they may simultaneously erode equity in different ways.
  • Data Privacy Concerns: The collection and use of student data raise privacy concerns and require careful consideration of data protection measures. There are also questions about copyright and what happens when students upload work to which others hold the copyright.
  • Potential for Misuse: NotebookLM could be misused by students to cheat or generate inaccurate content, requiring educators to implement appropriate safeguards. So, like any other technology, it can be used ethically, or unethically. Students may or may not know what is allowed or expected and so having conversations with students about expectations remains important.

Thank you to my friend and colleague, Dr. Soroush Sabbaghan, Associate Professor (Teaching) at the University of Calgary, for introducing me to NotebookLM a few days ago. I am keen to hear what learners and educators think of this tool.

References

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1 

_________________________________

Share this post: Exploring Postplagiarism with Google NotebookLM – https://drsaraheaton.com/2024/09/23/exploring-postplagiarism-with-google-notebooklm/

This blog has had over 3 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education, and the Educational Leader in Residence, Academic Integrity, University of Calgary, Canada. Opinions are my own and do not represent those of the University of Calgary.


The Use of AI-Detection Tools in the Assessment of Student Work

May 6, 2023

People have been asking if they should be using detection tools to identify text written by ChatGPT or other artificial intelligence writing apps. Just this week I was a panelist in a session on “AI and You: Ethics, Equity, and Accessibility”, part of ETMOOC 2.0. Alec Couros asked what I was seeing across Canada in terms of universities using artificial intelligence detection in misconduct cases.

The first thing I shared was the University of British Columbia web page stating that the university was not enabling Turnitin’s AI-detection feature. UBC is one of the few universities in Canada that subscribes to Turnitin.

The University of British Columbia declares that it is not enabling Turnitin’s AI-detection feature.

Turnitin’s rollout of AI detection earlier this year was widely contested and I won’t go into that here. What I will say is that whether AI detection is a new feature embedded into existing product lines or a standalone product, there is little actual scientific evidence to show that AI-generated text can be effectively detected (see Sadasivan et al., 2023). In a TechCrunch article, OpenAI, the company that developed ChatGPT, talked about its own detection tool, noting that its success rate was around 26%.

Key message: Tools to detect text written by artificial intelligence aren’t really reliable or effective. It would be wise to be skeptical of any marketing claims to the contrary.

There are news reports about students being falsely accused of misconduct when the results of AI writing detection tools were used as evidence. See news stories here and here, for example. 

There have been few studies done on the impact of a false accusation of student academic misconduct, but if we turn to the literature on false accusations in criminal offences, there is evidence showing that false accusations can result in reputation damage, self-stigma, depression, anxiety, PTSD, sleep problems, social isolation, and strained relationships, among other outcomes. Falsely accusing students of academic misconduct can be devastating; in some cases, students have died by suicide as a result. You can read some stories about students dying by suicide after false allegations of academic cheating in the United States and in India. Of course, stories about student suicide are rarely discussed in the media, for a variety of reasons. The point here is that falsely accusing students of academic cheating can have a negative impact on their mental and physical health.

Key message: False accusations of academic misconduct can be devastating for students.

Although reporting allegations of misconduct remains a responsibility of educators, having fully developed (and mandatory) case management and investigation systems is imperative. Decisions about whether misconduct has occurred should be made carefully and thoughtfully, using due process that follows established policies.

It is worth noting that AI-generated text can be revised and edited such that the end product is neither fully written by AI, nor fully written by a human. At our university, the use of technology to detect possible misconduct may not be used deceptively or covertly. For example, we do not have an institutional license to any text-matching software. Individual professors can get a subscription if they wish, but the use of detection tools should be declared in the course syllabus. If detection tools are used post facto, it can be considered a deception on the part of the professor because the students were not made aware of the technology prior to handing in their assessment. 

Key message: Students can appeal any misconduct case brought forward with the use of deceptive or undisclosed assessment tools or technology (and quite frankly, they would probably win the appeal).

If we expect students to be transparent about their use of tools, then it is up to educators and administrators also to be transparent about their use of technology prior to assessment and not afterwards. A technology arms race in the name of integrity is antithetical to teaching and learning ethically and can perpetuate antagonistic and adversarial relationships between educators and students.

Ethical Principles for Detecting AI-Generated Text in Student Work

Let me be perfectly clear: I am not at all a fan of using detection tools to identify possible cases of academic misconduct. But, if you insist on using detection tools, for heaven’s sake, be transparent and open about your use of them.

Here is an infographic you are welcome to use and share: Infographic: “Ethical Principles for Detecting AI-Generated Text in Student Work” (Creative Commons License: Attribution-NonCommercial-ShareAlike 4.0 International). The text inside the infographic is written out in full with some additional details below.

Here is some basic guidance:

Check your Institutional Policies First

Before you use any detection tools on student work, ensure that the use of such tools is permitted according to your school’s academic integrity policy. If your school does not have such a policy or if the use of detection tools is not mentioned in the policy, that does not automatically mean that you have the right to use such tools covertly. Checking the institutional policies and regulations is a first step, but it is not the only step in applying the use of technology ethically in assessment of student work.

Check with Your Department Head

Whether the person’s title is department head, chair, headmaster/headmistress, principal, or something else, there is likely someone in your department, faculty or school whose job it is to oversee the curriculum and/or matters relating to student conduct. Before you go rogue using detection tools to catch students cheating, ask the person to whom you report if they object to the use of such tools. If they object, then do not go behind their back and use detection tools anyway. Even if they agree, then it is still important to use such tools in a transparent and open way, as outlined in the next two recommendations.

Include a Statement about the Use of Detection Tools in Your Course Syllabus

Include a clear written statement in your course syllabus that outlines in plain language exactly which tools will be used in the assessment of student work. A failure to inform students in writing about the use of detection tools before they are used could constitute unethical assessment or even entrapment. Detection tools should not be used covertly. Their use should be openly and transparently declared to students in writing before any assessment or grading begins.

Of course, having a written statement in a course syllabus does not absolve educators of their responsibility to have open and honest conversations with students, which is why the next point is included.

Talk to Students about Your Use of Tools or Apps You will Use as Part of Your Assessment 

Have open and honest conversations with students about how you plan to use detection tools. Point out that there is a written statement in the course outline and that you have the support of your department head and the institution to use these tools. Be upfront and clear with students.

It is also important to engage students in evidence-based conversations about the limitations of tools to detect artificial intelligence writing, including the current lack of empirical evidence about how well they work.

Conclusion

Again, I emphasize that I am not at all promoting the use of any AI detection technology whatsoever. In fact, I am opposed to the use of surveillance and detection technology that is used punitively against students, especially when it is done in the name of teaching and learning. However, if you are going to insist on using technology to detect possible breaches of academic integrity, then at least do so in an open and transparent way — and acknowledge that the tools themselves are imperfect.

Key message: Under no circumstances should the results from an AI-writing detection tool be used as the only evidence in a student academic misconduct allegation.

I am fully anticipating some backlash to this post. There will be some of you who will object to the use of detection tools on principle and counter that any blog post talking about how they can be used is in itself unethical. You might be right, but the reality remains that thousands of educators are currently using detection tools for the sole purpose of catching cheating students. As much as I rail against a “search and destroy” approach, there will be some people who insist on taking this position. This blog post offers some guidelines to avoid deceptive assessment and covert use of technology in student assessment.

Key message: Deceptive assessment is a breach of academic integrity on the part of the educator. If we want students to act with integrity, then it is up to educators to model ethical behaviour themselves.

References

Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected? arXiv. https://doi.org/10.48550/arXiv.2303.11156

Fowler, G. A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Jimenez, K. (2023, April 13). Professors are using ChatGPT detector tools to accuse students of cheating. But what if the software is wrong? USA Today. https://www.usatoday.com/story/news/education/2023/04/12/how-ai-detection-tool-spawned-false-cheating-case-uc-davis/11600777002/

_________________________________

Share this post: The Use of AI-Detection Tools in the Assessment of Student Work https://drsaraheaton.wordpress.com/2023/05/06/the-use-of-ai-detection-tools-in-the-assessment-of-student-work/



Invitation to Participate: Research Study on Artificial Intelligence and Academic Integrity: 

April 19, 2023

The Ethics of Teaching and Learning with Algorithmic Writing Technologies 

On the right there is a black robotic hand and forearm. On the left there is a human hand and forearm. The forearm is tattooed. One finger from each hand is touching the other.
Photo by cottonbro studio on Pexels.com

Academic misconduct has taken various forms in present-day educational systems. One method that is on the rise is the submission of compositions generated by artificial intelligence software. The capabilities and sophistication of these new technologies are improving steadily. We are conducting a study to gauge the sophistication of text generated by current artificial intelligence (AI) software. To that end, we are recruiting participants to evaluate the writing level of short compositions (260 words in length at most).

Your participation in this study would involve evaluating two short pieces of text presented in a survey and, optionally, commenting on your observations. We appreciate your consideration in this matter. This research provides an opportunity for participants to contribute to understanding the state of AI software used for various educational purposes. Participation in this study is voluntary, and you are free to terminate the survey and withdraw at any time and for any reason without censure. There are no known physical, psychological, or social risks associated with participation in the study.

All demographic data collected will be kept strictly confidential. Only the researchers listed in this letter will have access to the raw data. The data (in electronic format) will be retained indefinitely. Participants will be asked for some basic demographic information and then presented with a 260-word composition. After reading, participants will be asked to evaluate the level, assign a mark to the composition, and note any pertinent observations. The second composition, also 260 words in length, will be followed by the same set of questions. The total anticipated time for completing the survey is about 9-12 minutes, but it can vary based on reading speed and the consideration afforded to the assigned grade.

If you have any questions or concerns about your participation in this study, you can contact the Principal Investigator, Dr. Sarah Elaine Eaton, seaton (at) ucalgary.ca

This study is funded by a University of Calgary Teaching and Learning Grant. This study has been approved by the Conjoint Faculties Research Ethics Board at the University of Calgary: REB22-0137.

To take the survey, click here.

_________________________________

Share this post: Invitation to Participate: Research Study on Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies – https://wp.me/pNAh3-2U3


 


How to Talk to Your Students about ChatGPT: A Lesson Plan for High School and College Students

April 7, 2023

Photo by cottonbro studio on Pexels.com

This article by Benj Edwards in Ars Technica (April 6, 2023) is worth a read: “Why ChatGPT and Bing Chat are so good at making things up”.

Edwards explains in clear language, with lots of details and examples, how and why large language models (LLMs) such as ChatGPT make up content. As I read this article, it occurred to me that it could serve as a really great way to have proactive and generative conversations with students about the impact of artificial intelligence on teaching, learning, assessment, and academic integrity. So, here is a quick lesson plan about how to use this article in class:

Education level

Secondary school and post-secondary (e.g., community college, polytechnic, undergraduate or graduate university courses)

Lesson Plan Title: Understanding ChatGPT: Benefits and Limitations

Learning Objectives

By the end of this lesson students will be able to:

  • Understand how and why AI-writing apps make up content.
  • Explain the term “confabulation”.
  • Discuss the implications of fabricated content for academic integrity.
  • Generate ideas about how to fact-check AI-generated content to ensure its accuracy.

Lesson Preparation

Prior to the class, students should read this article: “Why ChatGPT and Bing Chat are so good at making things up” by Ben Edwards, published in Ars Technica (April 6, 2023).

Come to class prepared to discuss the article.

Learning Activity

Class discussion (large group format if the class is small or small group format with a large group debrief at the end):

Possible guiding questions:

  • What is your experience with ChatGPT and other AI writing apps?
  • What were the main points in this article? (Alternate phrasing: What were your key takeaways from this article?)
  • What are some of the risks when AI apps engage in confabulation (i.e., fabrication)?
  • Discuss this quotation from the article, “ChatGPT as it is currently designed, is not a reliable source of factual information and cannot be trusted as such.”
  • Fabrication and falsification are commonly included in academic misconduct policies. What do you think the implications are for students and researchers when they write with AI apps?
  • What are some strategies or tips we can use to fact-check text generated by AI apps?
  • What is the importance of prompt-writing when working with AI writing apps?

Duration

The time commitment for the pre-reading will vary from one student to the next. The duration of the learning activity can be adjusted to suit the needs of your class.
  • Students’ pre-reading of the article: 60 minutes or less
  • Learning activity: 45-60 minutes

Lesson closure

Thank students for engaging actively in the discussion and sharing their ideas.

Possible Follow-up Activities

  • Tips for fact-checking. Have students in the class generate their own list of tips to fact-check AI-generated content (e.g., in a shared Google doc, or by sharing ideas orally in class while one person inputs them into a document on behalf of the class).
  • Prompt-writing activity. Have students use different prompts to generate content from AI writing apps. Ask them to document each prompt and write down their observations about what worked and what didn’t. Discuss the results as a class.
  • Academic Integrity Policy Treasure Hunt and Discussion. Have students locate the school’s academic misconduct / academic integrity policy. Compare the definitions and categories for academic misconduct in the school’s policies with concepts presented in this article such as confabulation. Have students generate their own ideas about how to uphold the school’s academic integrity policies when using AI apps.

Creative Commons License

This lesson plan is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). This license applies only to the lesson plan, not to the original article by Ben Edwards.

Additional Notes

This is a generic (and imperfect) lesson plan. It can (and probably should) be adapted or personalized depending on the needs of the learners.

Acknowledgements

Thanks to Dr. Rahul Kumar, Brock University for providing an open peer review of this lesson plan.

 _________________________________

Share or Tweet this: How to Talk to Your Students about ChatGPT: A Lesson Plan for High School and College Students – https://drsaraheaton.wordpress.com/2023/04/07/how-to-talk-to-your-students-about-chatgpt-a-lesson-plan-for-high-school-and-college-students

Sarah’s Thoughts: Artificial Intelligence and Academic Integrity

December 9, 2022

The release of ChatGPT has everyone abuzz about artificial intelligence. I’ve been getting lots of questions about our research project Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies. We are ready to start data collection in January so I do not yet have results to share. Our team has two preliminary papers under review, but I won’t say much about them until they are published.

In the meantime, I wanted to share some high-level thoughts on the topic, since many of you have been asking. Even though I am on Research and Scholarship Leave (RSL, a.k.a. sabbatical) this year, I’ve got another big project on the go that is taking up a lot of my time and focus right now, in addition to the research project above. I am serving as the Editor-in-Chief of the Handbook of Academic Integrity (2nd ed.). The first edition of the Handbook was edited by Tracey Bretag, who passed away in 2020.

The second edition is well underway and I’ve been working with an amazing team of Section Editors (giving a wave of gratitude to the team: Brenda M. Stoesz, Silvia Rossi, Joseph F. Brown, Guy Curtis, Irene Glendinning, Ceceilia Parnther, Loreta Tauginienė, Zeenath Reza Khan, and Wendy Sutherland-Smith). We have more than 100 chapters in the second edition, including some from the first edition as well as lots of new chapters. (Giving a wave of gratitude to all the contributors! Thank you for your amazing contributions!) It is a massive project and it has been a major focus of my sabbatical.

Suffice it to say, I have not had a spare moment to put fingers to keyboard and write in depth about this topic on social media, but I wanted to share a few high-level ideas here. I will have to unpack them in a future blog post or perhaps an editorial, but for now, let me just say that I think moral panic over the use of artificial intelligence is not the answer. So that you know where I stand on the issue, here are some thoughts:

I am happy to chat more, but if you are afraid of an explosion of cheating in your classes because of ChatGPT or any other new technological advance, you are not alone. Honestly, though, technology isn’t the problem.

Stay tuned for more…

Related posts:

Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies 

University of Calgary Graduate Assistant (Research) (GAR) – Job posting “Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing” https://drsaraheaton.wordpress.com/2022/11/30/university-of-calgary-research-assistant-job-posting-artificial-intelligence-and-academic-integrity-the-ethics-of-teaching-and-learning-with-algorithmic-writing/

_________________________________

Share or Tweet this: Sarah’s Thoughts: Artificial Intelligence and Academic Integrity https://drsaraheaton.wordpress.com/2022/12/09/sarahs-thoughts-artificial-intelligence-and-academic-integrity/
