When Good Ideas Meet Poor Execution: The Humane AI Pin and the Future of Language Translation

May 18, 2025

One of the tenets of postplagiarism is that artificial intelligence technologies will help us overcome language barriers and understand each other in countless languages (Eaton, 2023).

We already have apps that translate text from photos taken on our phones. These apps help when travelling in countries where you don’t speak the language. Now we have applications extending this idea further into wearable technology.

Wearable technology has existed for years. We wear fitness gadgets on our wrists to track steps. AI technology will become more embedded into the software that drives these devices.

New wearable devices have emerged quickly, with varying levels of success. One example was introduced about a year after ChatGPT was released. The company was called Humane and the device was powered by OpenAI technology.

The Humane pin was wearable technology that included a square-shaped pin and a battery pack that attached magnetically to your shirt or jacket. It was marketed as enabling users to communicate in just about any language (Pierce, 2023). To Star Trek fans, the resemblance to a communicator badge was unmistakable.

The device retailed for US$700 and required a US$24 monthly software subscription, which provided the data coverage needed for real-time use; the pin ran Humane’s proprietary software on a Snapdragon processor (Pierce, 2023). The device only worked with the T-Mobile network in the United States. Since I live in Canada and T-Mobile isn’t available here, I never bought one.

Like others, I watched with enthusiasm, hoping the product would succeed so it could expand to other markets. Pre-order sales indicated huge potential for success. By late 2023, the Humane pin was heralded as “Silicon Valley’s ‘next big thing'” (Chokkattu, 2025a). (I can’t help but wonder if the resemblance to a Star Trek communicator badge was part of the allure.)

[Image: a small, square electronic device with a circular loading icon on its screen, attached to the collar of a blue suit jacket; an out-of-focus world map fills the background.]

When tech enthusiasts received the product in 2024, the reviews were dismal. One reviewer gave it 4 out of 10 and called it a “party trick” (Chokkattu, 2024). (Ouch.) The Humane pin did not live up to its promises. Less than a year after its release, the device was dead. HP acquired the company and retired the product at the end of February 2025.

Tech writer Julian Chokkattu declared the device was e-waste and suggested it could be used as a paperweight or stored in a box in the attic. Chokkattu (2025b) says, “In 50 years, you’ll accidentally find it in the attic and then tell your grandkids how this little gadget was once—for a fleeting moment—supposed to be the next big thing.”

Learning from Failure: The Promise Remains

The failure of the Humane AI Pin does not invalidate the vision of AI-powered real-time translation. The device failed because of execution problems—poor battery life, overheating, an annoying projector interface, and limited functionality (Chokkattu, 2024). The core AI translation capabilities were among the features that actually worked.

Real-time translation represents one of the most compelling applications of generative AI. When the technology works seamlessly, it can transform human communication. The Humane pin showed us what not to do: create a standalone device with too many functions, none executed well.

The future of AI translation likely lies not in dedicated hardware but in integration with devices we already use. Our smartphones, earbuds, and smart glasses will become the vehicles for breaking down language barriers. The underlying AI models continue to improve rapidly, and the infrastructure for real-time translation grows more robust.

The Humane pin’s failure teaches us that good ideas require good execution. But we should not abandon the goal of using AI to help humans understand each other across languages. That goal remains as important as ever in our increasingly connected world. The technology will improve, the interfaces will become more intuitive, and the promise of the postplagiarism tenet—that language barriers will begin to disappear—will eventually be realized.

The Humane AI pin may be dead, but we should keep our hope alive that AI technology will help us overcome language barriers and provide new opportunities for communication.

Live long and prosper.

References

Chokkattu, J. (2024, April 11). Review: Humane Ai Pin. Wired. https://www.wired.com/review/humane-ai-pin/

Chokkattu, J. (2025a, February 22). The Humane Ai Pin Will Become E-Waste Next Week. Wired. https://www.wired.com/story/humane-ai-pin-will-become-e-waste-next-week/

Chokkattu, J. (2025b, February 28). What to Do With Your Defunct Humane Ai Pin. Wired. https://www.wired.com/story/what-to-do-with-your-humane-ai-pin/

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1–10. https://doi.org/10.1007/s40979-023-00144-1 

Pierce, D. (2023, November 9). Humane officially launches the AI Pin, its OpenAI-powered wearable. The Verge. https://www.theverge.com/2023/11/9/23953901/humane-ai-pin-launch-date-price-openai 

Note: This is a re-post of a piece originally posted on the Postplagiarism blog.

________________________

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Teaching Fact-Checking Through Deliberate Errors: An Essential AI Literacy Skill

April 23, 2025

Abstract

This teaching resource explores an innovative pedagogical approach for developing AI literacy in a postplagiarism era. The document outlines a method of teaching fact-checking skills by having students critically evaluate AI-generated content containing deliberate errors. It provides practical guidance for educators on creating content with strategic inaccuracies, structuring verification activities, teaching source evaluation through a 5-step process, understanding AI error patterns, and implementing these exercises throughout courses. By engaging students in systematic verification processes, this approach helps develop metacognitive awareness, evaluative judgment, and appropriate skepticism when consuming AI-generated information. The resource emphasizes assessing students on their verification process rather than solely on error detection, preparing them to navigate an information landscape where distinguishing fact from fiction is increasingly challenging yet essential.

Introduction

In a postplagiarism era, one of the most valuable skills we can teach students is how to critically evaluate AI-generated content. Doing so can help them cultivate metacognition and evaluative judgement, which have been identified as important skills for feedback and evaluation (e.g., Bearman & Luckin, 2020; Tai et al., 2018). GenAI tools present information with confidence, regardless of accuracy. This characteristic creates an ideal opportunity to develop fact-checking competencies that serve students throughout their academic and professional lives.

Creating Content with Strategic Errors

Begin by using an AI tool to generate content that contains factual inaccuracies. There are several approaches to ensure errors are present:

  • Ask the AI about obscure topics where it lacks sufficient training data
  • Request information about recent events beyond its knowledge cutoff
  • Pose questions about specialized fields with technical terminology
  • Combine legitimate questions with subtle misconceptions in your prompts

For example, ask a large language model (LLM), such as ChatGPT or any similar tool, to ‘Explain the impact of the Marshall-Weaver Theory on educational psychology’. There is no such theory, at least to the best of my knowledge; I have fabricated it for the purposes of illustration. The GenAI will likely fabricate details, citations, and research in response.
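
If you want to generate this kind of error-laden content in a repeatable way (for example, to give every group an identical starting text), the prompt can also be scripted. What follows is a minimal sketch using OpenAI’s Python client; the model name is only an example, the script assumes an API key is available, and simply pasting the prompt into ChatGPT’s web interface works just as well.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    # A prompt about a fabricated theory; the model will likely invent
    # plausible-sounding details, citations, and research in response.
    prompt = "Explain the impact of the Marshall-Weaver Theory on educational psychology."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

Saving the output into a handout or shared document makes it easy to give every student or group the same material to verify.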

Structured Verification Activities

Provide students with the AI-generated content and clear verification objectives. Structure the fact-checking process as a systematic investigation.

First, have students highlight specific claims that require verification. This focuses their attention on identifying testable statements versus general information.

Next, assign verification responsibilities using different models:

  • Individual verification where each student investigates all claims
  • Jigsaw approach where students verify different sections then share findings
  • Team-based verification where groups compete to identify the most inaccuracies

Require students to document their verification methods for each claim. This documentation could include:

  • Sources consulted
  • Search terms used
  • Alternative perspectives considered
  • Confidence level in their verification conclusion

Requiring students to document how they verified each claim helps them develop metacognitive awareness of their own learning, shows them first-hand why GenAI outputs should be treated with some skepticism, and gives them specific strategies for verifying content themselves.

Teaching Source Evaluation: A 5-Step Process

The fact-checking process creates a natural opportunity to reinforce source evaluation skills.

As teachers, we can guide students to follow a 5-step plan to learn how to evaluate the reliability, truthfulness, and credibility of sources.

  • Step 1: Distinguish between primary and secondary sources. (A conversation about how terms such as ‘primary source’ and ‘secondary source’ can mean different things in different academic disciplines could also be useful here.)
  • Step 2: Recognize the difference between peer-reviewed research and opinion pieces. For opinion pieces, editorials, position papers, and essays, it can be useful to talk about how these different genres are regarded in different academic subject areas. For example, in the humanities, an essay can be considered an elevated form of scholarship; however, in the social sciences, it may be considered less impressive than research that involves collecting empirical data from human research participants.
  • Step 3: Evaluate author credentials and institutional affiliations. Of course, we want to be careful about avoiding bias when doing this. Just because an author has an affiliation with an Ivy League university, for example, does not automatically make them a credible source. Evaluating credentials can — and should — include conversations about avoiding and mitigating bias.
  • Step 4: Identify publication date and relevance. Understanding the historical, social, and political context in which a piece was written can be helpful.
  • Step 5: Consider potential biases in information sources. Besides bias about an author’s place of employment, consider what motivations they may have. This can include a personal or political agenda, or any other kind of motive. Understanding a writer’s biases can help us evaluate the credibility of what they write.

Connect these skills to your subject area by discussing authoritative sources specific to your field. What makes a source trustworthy in history differs from chemistry or literature.

Understanding Gen AI Error Patterns

One valuable aspect of this exercise goes beyond identifying individual errors to recognizing patterns in how AI systems fail. As educators, we can facilitate discussions about:

  • Pattern matching versus genuine understanding
  • How training data limitations affect AI outputs
  • The concept of AI ‘hallucination’ and why it occurs
  • Why AI presents speculative information as factual
  • How AI systems blend legitimate information with fabricated details

Practical Implementation

Integrate these fact-checking exercises throughout your course rather than as a one-time activity. Start with simple verification tasks and progress to more complex scenarios. Connect fact-checking to course content by using AI-generated material related to current topics.

Assessment should focus on the verification process rather than simply identifying errors. Evaluate students on their systematic approach, source quality, and reasoning—not just error detection.

As AI-generated content becomes increasingly prevalent, fact-checking is an essential academic literacy skill. By teaching students to approach information with appropriate skepticism and verification methods, we prepare them to navigate a postplagiarism landscape where distinguishing fact from fiction becomes both more difficult and more essential.

References

Bearman, M., & Luckin, R. (2020). Preparing university assessment for a world with AI: Tasks for human intelligence. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 49–63). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_5

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1–10. https://doi.org/10.1007/s40979-023-00144-1

Edwards, B. (2023, April 6). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 76(3), 467–481. https://doi.org/10.1007/s10734-017-0220-3

Disclaimer: This content is crossposted from: https://postplagiarism.com/2025/04/23/teaching-fact-checking-through-deliberate-errors-an-essential-ai-literacy-skill/

________________________

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.


Gemini Live: Breaking Educational Barriers with AI

April 19, 2025

Gemini Live is Google’s new conversational AI assistant that responds to voice commands in real time. Unlike text-based interactions, Gemini Live allows for natural, flowing conversations. This voice-first approach opens new possibilities for accessibility in educational settings. It was released last month, and I just got around to trying it today. Here’s how it went:

I was impressed by the tool’s interactivity and speed. In this test I scanned a laptop sticker with the hashtag #UHaveIntegrity, which is from our academic integrity campaign at the University of Calgary. The app correctly identified it and gave me a brief description.

I did a few more tests with other items afterwards. The app was not always 100% accurate, but with additional prompting, it corrected errors and provided updated information.

I can think of a variety of uses for this kind of app for teaching and learning. In particular, I am excited about the possibilities to enhance accessibility, inclusion, and equity.

Breaking Down Barriers with Voice Interaction

The voice interface of Gemini Live can remove some barriers for students. Students with mobility limitations, visual impairments, or reading difficulties can participate in learning activities through speech. This creates a more level playing field in the classroom.

Imagine a scenario where a teacher uses Gemini Live to help a student with dyslexia engage with research projects. The student could ask questions verbally and receive information without struggling with text. This hypothetical case illustrates how voice interaction might lead to increased confidence and class participation.

Multilingual Support for Diverse Classrooms

Language barriers often create obstacles in education. Gemini Live supports multiple languages and can translate between them. This feature helps:

  • Non-native English speakers follow lessons in their first language
  • International students integrate into new learning environments
  • Teachers communicate with students from different linguistic backgrounds
  • Parents who speak other languages stay involved in their children’s education

Learning Accommodations Made Simple

Every student learns differently. Gemini Live can adapt content to different learning needs. Here are some examples:

  1. It can explain complex concepts in simpler terms for students who need additional support
  2. It provides alternative explanations when students don’t understand a topic the first time
  3. It offers audio descriptions of visual content for visually impaired students
  4. It can generate study materials in different formats to match learning preferences

Real-Time Assistance in the Classroom

Teachers often struggle to provide individual attention to every student in a classroom. Gemini Live can serve as an additional resource that students can turn to when they need help. This can reduce wait times and frustration.

As a hypothetical example, a high school math teacher could implement Gemini Live as a ‘homework helper’ station in the classroom. Students who get stuck on problems could ask Gemini Live for guidance without waiting for the teacher to become available. This approach would allow more students to receive timely support while waiting for personalized attention from their teacher.

Digital Equity Through Voice Access

Not all students have equal access to technology or equal ability to use traditional interfaces. Voice technology lowers the technical barriers to using digital tools. Students without keyboards, mice, or touchscreens can still access information and complete assignments through voice commands.

Practical Implementation Tips

In thinking about how we could use Gemini Live and similar tools for accessibility and inclusion, here are some ideas:

  • Create specific prompts that students can use to get help with different subjects
  • Set up dedicated stations where students can interact with Gemini Live
  • Teach students how to ask effective questions
  • Combine Gemini Live with other AI tools for a comprehensive accessibility solution

Challenges and Considerations

It is important for teachers to be aware that the tool is not perfect, at least as it currently stands. Although Gemini Live offers benefits, it currently has certain limitations:

  • Voice recognition may struggle with some speech patterns or accents
  • Private conversations require appropriate spaces to avoid classroom disruption
  • Students need guidance on when AI assistance is appropriate and when it isn’t
  • Technology should supplement, not replace, human teaching and interaction

Looking Forward

As AI assistants like Gemini Live continue to evolve, they will provide even more tools for inclusive education. The most successful classrooms will be those that thoughtfully blend technology with human instruction.

By incorporating Gemini Live into teaching practices, educators can create learning environments that accommodate more students. The goal isn’t just to make education accessible but to ensure every student feels valued and included in the learning process. When we remove barriers to education, we unlock potential — and that’s one of the most fun parts of being an educator.

________________________

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer. 


Celebrating 5 Years of Integrity Hour in Canadian Higher Education

March 31, 2025

Five years ago, we started Integrity Hour, an online community of practice by and for Canadian higher education #AcademicIntegrity enthusiasts, professionals, educators, researchers, and students.

Today we had our five-year celebration, which also served as a closure of sorts. After serving as a co-steward of the community almost since the beginning, Dr. Beatriz Moya has started the next chapter of her career. 

We are working with some of our long-standing partners to reconceptualize what the next iteration of Integrity Hour will look like. For now, we will take a little pause as we regroup.

At our anniversary celebration meeting today, Brooklin Schneider encouraged us to share this guide widely, so we are posting it here, as an open access resource: “Integrity Hour: A Guide to Developing and Facilitating an Online Community of Practice for Academic Integrity”.

Our collective outputs have been collaboratively conceptualized and co-developed. Here are a couple of other resources we have worked on over the years:

Reflections on the first year of Integrity Hour: An online community of practice for academic integrity

Academic Integrity Leadership and Community Building in Canadian Higher Education

In my remarks today, I shared that being part of this weekly community of practice has influenced and informed my thinking, advocacy, and practice in ways I could never have imagined.

My gratitude to everyone who has been part of our community, sharing wisdom, knowledge, and resources. What an incredible half a decade it has been!

________________________

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer. 


Embracing AI as a Teaching Tool: Practical Approaches for the Post-plagiarism Classroom

March 23, 2025

Artificial intelligence (AI) has moved from a futuristic concept to an everyday reality. Rather than viewing AI tools like ChatGPT as threats to academic integrity, forward-thinking educators are discovering their potential as powerful teaching instruments. Here’s how you can meaningfully incorporate AI into your classroom while promoting critical thinking and ethical technology use.

Making AI Visible in the Learning Process

One of the most effective approaches to teaching with AI is to bring it into the open. When we demystify these tools, students develop a more nuanced understanding of their capabilities and limitations.

Start by dedicating class time to explore AI tools together. You might begin with a demonstration of how ChatGPT or similar tools respond to different types of prompts. Ask students to compare the quality of responses when the tool is asked to:

  • Summarize factual information
  • Analyze a complex concept
  • Solve a problem in your discipline

[Infographic: “Postplagiarism Teaching Tip by Sarah Elaine Eaton: Make AI Visible in the Learning Process,” highlighting three uses of AI in learning: summarizing factual information, analyzing complex concepts, and solving discipline-specific problems. Licensed CC BY-NC.]

Have students identify where the AI excels and where it falls short. Hands-on experience supervised by an educator helps students understand that while AI can be impressive and capable, it has clear boundaries and weaknesses.
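
For instructors who would like to prepare these comparisons ahead of class, for example to print all three responses as a handout, the same exercise can be scripted. The sketch below uses OpenAI’s Python client; the model name and the sample prompts are placeholders to be replaced with topics from your own course, and running the prompts live in ChatGPT works just as well for an in-class demonstration.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    # One prompt per task type from the demonstration above; swap in
    # topics that suit your own discipline.
    tasks = {
        "Summarize factual information": "Summarize the main causes of urbanization in the twentieth century.",
        "Analyze a complex concept": "Analyze how confirmation bias can affect classroom assessment.",
        "Solve a discipline-specific problem": "A recipe for 4 servings uses 350 g of flour. How much flour is needed for 10 servings?",
    }

    for task, prompt in tasks.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name; use whichever model you have access to
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"=== {task} ===")
        print(response.choices[0].message.content)
        print()

Printing the three responses side by side gives students a concrete artifact to annotate during the discussion.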

From AI Drafts to Critical Analysis

AI tools can quickly generate content that serves as a starting point for deeper learning. Here is a step-by-step approach for using AI-generated drafts as teaching material:

  1. Assignment Preparation: Choose a topic relevant to your course and generate a draft response using an AI tool such as ChatGPT.
  2. Collaborative Analysis: Share the AI-generated draft with students and facilitate a discussion about its strengths and weaknesses. Prompt students with questions such as:
    • What perspectives are missing from this response?
    • How could the structure be improved?
    • What claims require additional evidence?
    • How might we make this content more engaging or relevant?

The idea is to bring students into conversations about AI, to build their critical thinking, and to have them puzzle through the strengths and weaknesses of current AI tools.

  3. Revision Workshop: Have students work individually or in groups to revise the AI draft into a more nuanced, complete response. This process teaches students that the value lies not in generating initial content (which AI can do) but in refining, expanding, and critically evaluating information (which requires human judgment).
  4. Reflection: Ask students to document what they learned through the revision process. What gaps did they identify in the AI’s understanding? How did their human perspective enhance the work? Building metacognitive awareness is one of the skills that assessment experts such as Bearman and Luckin (2020) emphasize in their work.

This approach shifts the educational focus from content creation to content evaluation and refinement—skills that will remain valuable regardless of technological advancement.

Teaching Fact-Checking Through Deliberate Errors

AI systems often present information confidently, even when that information is incorrect or fabricated. This characteristic makes AI-generated content perfect for teaching fact-checking skills.

Try this classroom activity:

  1. Generate Content with Errors: Use an AI tool to create content in your subject area, either by requesting information you know contains errors or by asking about obscure topics where the AI might fabricate details.
  2. Fact-Finding Mission: Provide this content to students with the explicit instruction to identify potential errors and verify information. You might structure this as:
    • Individual verification of specific claims
    • Small group investigation with different sections assigned to each group
    • A whole-class collaborative fact-checking document
  3. Source Evaluation: Have students document not just whether information is correct, but how they determined its accuracy. This reinforces the importance of consulting authoritative sources and cross-referencing information.
  4. Meta-Discussion: Use this opportunity to discuss why AI systems make these kinds of errors. Topics might include:
    • How large language models are trained
    • The concept of ‘hallucination’ in AI
    • The difference between pattern recognition and understanding
    • Why AI might present incorrect information with high confidence

These activities teach students not just to be skeptical of AI outputs but to develop systematic approaches to information verification—an essential skill in our information-saturated world.

Case Studies in AI Ethics

Ethical considerations around AI use should be explicit rather than implicit in education. Develop case studies that prompt students to engage with real ethical dilemmas:

  1. Attribution Discussions: Present scenarios where students must decide how to properly attribute AI contributions to their work. For example, if an AI helps to brainstorm ideas or provides an outline that a student substantially revises, how could this be acknowledged?
  2. Equity Considerations: Explore cases highlighting AI’s accessibility implications. Who benefits from these tools? Who might be disadvantaged? How might different cultural perspectives be underrepresented in AI outputs?
  3. Professional Standards: Discuss how different fields are developing guidelines for AI use. Medical students might examine how AI diagnostic tools should be used alongside human expertise, while creative writing students could debate the role of AI in authorship.
  4. Decision-Making Frameworks: Help students develop personal guidelines for when and how to use AI tools. What types of tasks might benefit from AI assistance? Where is independent human work essential?

These discussions help students develop thoughtful approaches to technology use that will serve them well beyond the classroom.

Implementation Tips for Educators

As you incorporate these approaches into your teaching, consider these practical suggestions:

  • Start small with one AI-focused activity before expanding to broader integration
  • Be transparent with students about your own learning curve with these technologies
  • Update your syllabus to clearly outline expectations for appropriate AI use
  • Document successes and challenges to refine your approach over time
  • Share experiences with colleagues to build institutional knowledge

Moving Beyond the AI Panic

The concept of postplagiarism does not mean abandoning academic integrity—rather, it calls for reimagining how we teach integrity in a technologically integrated world. By bringing AI tools directly into our teaching practices, we help students develop the critical thinking, evaluation skills, and ethical awareness needed to use these technologies responsibly.

When we shift our focus from preventing AI use to teaching with and about AI, we prepare students not just for academic success, but for thoughtful engagement with technology throughout their lives and careers.

References

Bearman, M., & Luckin, R. (2020). Preparing university assessment for a world with AI: Tasks for human intelligence. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 49-63). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_5 

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1), 1-10. https://doi.org/10.1007/s40979-023-00144-1

Edwards, B. (2023, April 6). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

________________________

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.