Neuralink’s Clinical Trials in Canada

January 11, 2025

Last month, CBC’s Geoff Leo published a great article, “‘No consequences’ for violating human rights in privately funded research in Canada.” It was a bit of an eye-opener, even for me.

He writes, “Roughly 85 per cent of clinical trials in Canada are privately funded,” and notes that this research undergoes very little scrutiny by anyone.

One of the cases Leo wrote about was a study that ran from 2014 to 2016 in Saskatchewan, in which Indigenous children aged 12-15 had their brainwaves monitored. Student participants were recruited with the help of a Canadian school board.

The study was led by James Hardt, who runs something called the Biocybernaut Institute, a privately run business. According to Leo, Hardt claims that “brainwave training can make participants smarter, happier and enable them to overcome trauma. He said it can also allow them to levitate, walk on water and visit angels.”

Geoff Leo digs deep into some of the ethical issues, and I recommend reading his article.

So, that was last month. This month, I happened to notice that, according to Elon Musk’s Neuralink website, the company has now been approved by Health Canada to recruit research participants. There’s a bright purple banner at the top of the Neuralink home page, with a Canadian flag, that says, “We’ve received approval from Health Canada to begin recruitment for our first clinical trial in Canada”.

A screenshot of the Neuralink.com home page. On the bottom right is a blurred photo of a man wearing a ball cap, who appears to be in a wheelchair and using tubes as medical assistance. There is white text on the right-hand side. At the top is a purple banner with white text and a small Canadian flag.

When you click on the link, you get to another page that shows the flags for the US, Canada, and the UK, where clinical trials are either underway or planned, it seems.

A screenshot of a webpage from the Neuralink web site. It has a white background with black text. In the upper left-hand corner there are three small flags, one each for the USA, Canada, and the UK.

The Canadian version is called CAN-PRIME. There’s a YouTube promo/recruitment video for patients interested in joining “this revolutionary journey”.

According to the website, “This study involves placing a small, cosmetically invisible implant in a part of the brain that plans movements. The device is designed to interpret a person’s neural activity, so they can operate a computer or smartphone by simply intending to move – no wires or physical movement are required.”

A screenshot from the Neuralink web page. The background is grey with black text.

So, just to connect the dots here… ten years ago in Canada, there was a study involving neurotechnology that “exploited the hell out of” Indigenous kids, according to Janice Parente, who leads the Human Research Standards Organization.

Now we have Elon Musk’s company actively recruiting people from across Canada, the US, and the UK, for research that would involve implanting experimental technology into people’s brains without, it seems, much research ethics oversight at all.

What could possibly go wrong?

Reference

Leo, G. (2024, December 2). ‘No consequences’ for violating human rights in privately funded research in Canada, says ethics expert. CBC News. https://www.cbc.ca/news/canada/saskatchewan/ethics-research-canada-privately-funded-1.7393063

________________________

Share this post: Neuralink’s Clinical Trials in Canada – https://drsaraheaton.com/2025/01/11/neuralinks-clinical-trials-in-canada/

This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please ‘Like’ it using the button below or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer. 


Academic integrity and artificial intelligence in higher education (HE) contexts: A rapid scoping review

September 4, 2024

In this post, I’d like to give a shoutout to Beatriz Moya, who led a rapid review on academic integrity and artificial intelligence.

A screenshot of a title page of an academic article. There is purple and black text on a white background.
Title page of “Academic integrity and artificial intelligence in higher education (HE) contexts: A rapid scoping review”.

Here is the reference:

Moya, B. A., Eaton, S. E., Pethrick, H., Hayden, A. K., Brennan, R., Wiens, J., & McDermott, B. (2024). Academic integrity and artificial intelligence in higher education (HE) contexts: A rapid scoping review. Canadian Perspectives on Academic Integrity, 7(3). https://doi.org/10.55016/ojs/cpai.v7i3

Abstract

Artificial intelligence (AI) developments challenge higher education institutions’ teaching, learning, assessment, and research practices. To contribute evidence-based recommendations for upholding academic integrity, we conducted a rapid scoping review focusing on what is known about academic integrity and AI in higher education before the emergence of ChatGPT. We followed the Updated Reviewer Manual for Scoping Reviews from the Joanna Briggs Institute (JBI) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) reporting standards. Five databases were searched, and the eligibility criteria included higher education stakeholders of any age and gender engaged with AI in the context of academic integrity from 2007 through November 2022 and available in English. The search retrieved 2,223 records, of which 14 publications with mixed methods, qualitative, quantitative, randomized controlled trials, and text and opinion studies met the inclusion criteria. The results showed bounded and unbounded ethical implications of AI. Perspectives included: AI for cheating; AI as legitimate support; an equity, diversity, and inclusion lens into AI; and emerging recommendations to tackle AI implications in higher education. The evidence from the sources provides guidance that can inform educational stakeholders in decision-making processes for AI integration, in the analysis of misconduct cases involving AI, and in the exploration of AI as legitimate assistance. Likewise, this rapid scoping review signals possibilities for future research, which we explore in our discussion.

Keywords

academic integrity, artificial intelligence, academic misconduct, higher education, rapid scoping review, large language models (LLM)

This is a fully open access article. You can download a copy of the full article here: https://doi.org/10.55016/ojs/cpai.v7i3

Related posts:

Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity https://drsaraheaton.wordpress.com/2022/05/17/exploring-the-contemporary-intersections-of-artificial-intelligence-and-academic-integrity/

New project: Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies https://drsaraheaton.wordpress.com/2022/04/19/new-project-artificial-intelligence-and-academic-integrity-the-ethics-of-teaching-and-learning-with-algorithmic-writing-technologies/

The Use of AI-Detection Tools in the Assessment of Student Work https://drsaraheaton.wordpress.com/2023/05/06/the-use-of-ai-detection-tools-in-the-assessment-of-student-work/

____________________________

Share this post: Academic integrity and artificial intelligence in higher education (HE) contexts: A rapid scoping review – https://drsaraheaton.com/2024/09/04/academic-integrity-and-artificial-intelligence-in-higher-education-he-contexts-a-rapid-scoping-review/

This blog has had over 3.6 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer.

Sarah Elaine Eaton, PhD, Editor-in-Chief, International Journal for Educational Integrity


Academic Integrity and Artificial Intelligence: Research Project Update

June 22, 2023
A red banner with white text.

Our team has been busy since we launched our research in April 2022. I haven’t done an update in a while, so I wanted to let you know what we’ve been up to. Check out our project website (https://osf.io/4cnvp/) for links to our peer-reviewed publications and other information about our work.

We are still collecting data for our survey and you’re welcome to participate! To take the survey, click here.

Expansion of our research

In April 2022, we received funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) in the form of a Connection Grant to host a public research forum. We included partners from the University of Saskatchewan, Brock University, Toronto Metropolitan University, and Deakin University (Australia). We hosted our research symposium on June 7-8, 2023, at the University of Calgary.

In case you missed this SSHRC-funded research symposium on the impact of artificial intelligence on higher education and academic integrity, you can catch up with the slides here:

Eaton, S. E., Dawson, P., McDermott, B., Brennan, R., Wiens, J., Moya, B., Dahal, B., Hamilton, M., Kumar, R., Mindzak, M., Miller, A., & Milne, N. (2023). Understanding the impact of artificial intelligence on higher education. Calgary, Canada. https://hdl.handle.net/1880/116624

Also, you can catch Phill Dawson’s keynote from the event, which is now archived on YouTube:

Phill Dawson’s keynote: Don’t Fear the Robot, University of Calgary, June 8, 2023

We are excited about next steps for this work and I’m happy to answer any questions you have about academic integrity and artificial intelligence.

Acknowledgements

We are grateful to the sponsors of this research:

  • Social Sciences and Humanities Research Council (SSHRC)
  • University of Calgary Teaching and Learning Grant
  • University of Calgary International Research Partnership Workshop Grant
  • Werklund School of Education, University of Calgary
  • Brock University
  • Deakin University
  • Toronto Metropolitan University
  • University of Saskatchewan

Related posts:

Invitation to Participate: Research Study on Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies https://wp.me/pNAh3-2U3

Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity https://drsaraheaton.wordpress.com/2022/05/17/exploring-the-contemporary-intersections-of-artificial-intelligence-and-academic-integrity/

New project: Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies https://drsaraheaton.wordpress.com/2022/04/19/new-project-artificial-intelligence-and-academic-integrity-the-ethics-of-teaching-and-learning-with-algorithmic-writing-technologies/

The Use of AI-Detection Tools in the Assessment of Student Work https://drsaraheaton.wordpress.com/2023/05/06/the-use-of-ai-detection-tools-in-the-assessment-of-student-work/

_________________________________

Share this post: Academic Integrity and Artificial Intelligence: Research Project Update – https://drsaraheaton.wordpress.com/2023/06/22/academic-integrity-and-artificial-intelligence-research-project-update/

This blog has had over 3 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks! Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education, and the Educational Leader in Residence, Academic Integrity, University of Calgary, Canada. Opinions are my own and do not represent those of the University of Calgary.


Invitation to Participate: Research Study on Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies

April 19, 2023

On the right there is a black robotic hand and forearm. On the left there is a human hand and forearm. The forearm is tattooed. One finger from each hand is touching the other.
Photo by cottonbro studio on Pexels.com

Academic misconduct has taken various forms in present-day educational systems. One method that is on the rise is the use of compositions generated by artificial intelligence (AI) software. The capabilities and sophistication of these new technologies are improving steadily. We are conducting a study to gauge the sophistication of current AI-generated text. To that end, we are recruiting participants to evaluate the writing level of short compositions (at most 260 words in length).

Your participation in this study would involve evaluating two short pieces of text presented in a survey and, optionally, commenting on your observations. We appreciate your consideration in this matter. This research provides an opportunity for participants to contribute to our understanding of the state of AI software used for various educational purposes. Participation in this study is voluntary, and you are free to end the survey and withdraw at any time, for any reason, without censure. There are no known physical, psychological, or social risks associated with participation in the study.

All demographic data collected will be kept strictly confidential. Only the researchers listed in this letter will have access to the raw data. The data (in electronic format) will be retained indefinitely. Participants will be asked for some basic demographic information and then presented with a composition of up to 260 words. After reading it, participants will be asked to evaluate its level, assign a mark, and note any pertinent observations. A second composition of the same length will be followed by the same set of questions. The total anticipated time for completing the survey is about 9-12 minutes, but it can vary based on reading speed and the consideration afforded to the assigned grade.

If you have any questions or concerns about your participation in this study, you can contact the Principal Investigator, Dr. Sarah Elaine Eaton, at seaton (at) ucalgary.ca.

This study is funded by a University of Calgary Teaching and Learning Grant. This study has been approved by the Conjoint Faculties Research Ethics Board at the University of Calgary: REB22-0137.

To take the survey, click here.

_________________________________

Share this post: Invitation to Participate: Research Study on Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies – https://wp.me/pNAh3-2U3

This blog has had over 3 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks! Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education, and the Educational Leader in Residence, Academic Integrity, University of Calgary, Canada. Opinions are my own and do not represent those of the University of Calgary.

 


Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity

May 17, 2022
Title slide from CSSHE 2022 panel discussion: AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity (Kumar, Mindzak, Eaton & Morrison)

For more than a year there have been small teams of us across Canada studying the impact of artificial intelligence on academic integrity. Today I am pleased to be part of a panel discussion on this topic at the annual conference of the Canadian Society for the Study of Higher Education (CSSHE), which is part of Congress 2022.

Our panel is led by Rahul Kumar (Brock University, Canada), together with Michael Mindzak (Brock University, Canada) and Ryan Morrison (George Brown College, Canada).

Here is the information about our panel:

Session G3: Panel: AI & AI: Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity (Live, remote) 

Panel Chair: Rahul Kumar 

  • Rahul Kumar (Brock University): Ethical application with practical examples
  • Michael Mindzak (Brock University): Implications on labour 
  • Ryan Morrison (George Brown College): Large language models: An overview for educators 
  • Sarah Elaine Eaton (University of Calgary): Academic integrity and assessment 

We have developed a combined slide deck for our panel discussion today. You can download the entire slide deck from the link noted in the citation below:

Kumar, R., Mindzak, M., Morrison, R., & Eaton, S. E. (2022, May 17). AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity [online]. Paper presented at the Canadian Society for the Study of Higher Education (CSSHE). http://hdl.handle.net/1880/114647

Related posts:

New project: Artificial Intelligence and Academic Integrity: The Ethics of Teaching and Learning with Algorithmic Writing Technologies – https://drsaraheaton.wordpress.com/2022/04/19/new-project-artificial-intelligence-and-academic-integrity-the-ethics-of-teaching-and-learning-with-algorithmic-writing-technologies/

Keywords: artificial intelligence, large language models, GPT-3, academic integrity, academic misconduct, plagiarism, higher education, teaching, learning, assessment

_________________________________

Share or Tweet this: Exploring the Contemporary Intersections of Artificial Intelligence and Academic Integrity https://drsaraheaton.wordpress.com/2022/05/17/exploring-the-contemporary-intersections-of-artificial-intelligence-and-academic-integrity/

This blog has had over 3 million views thanks to readers like you. If you enjoyed this post, please “like” it or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a faculty member in the Werklund School of Education, and the Educational Leader in Residence, Academic Integrity, University of Calgary, Canada. Opinions are my own and do not represent those of the University of Calgary.