Neuralink’s Clinical Trials in Canada

January 11, 2025

Last month, CBC’s Geoff Leo published a great article called, “‘No consequences’ for violating human rights in privately funded research in Canada”. This was a bit of an eye opener, even for me.

He writes that “[r]oughly 85 per cent of clinical trials in Canada are privately funded” and that this research undergoes very little scrutiny by anyone.

One of the cases Geoff wrote about was a study, run from 2014 to 2016, in which Indigenous children in Saskatchewan, aged 12-15, had their brainwaves monitored as research subjects. Student participants were recruited with the help of a Canadian school board.

The study was led by James Hardt, who runs something called the Biocybernaut Institute, a privately run business. According to Leo, James Hardt claims that “brainwave training can make participants smarter, happier and enable them to overcome trauma. He said it can also allow them to levitate, walk on water and visit angels.”

Geoff Leo digs deep into some of the ethical issues and I recommend reading his article.

So, that was last month. This month, I happened to notice on Elon Musk’s Neuralink website that the company has now been approved by Health Canada to recruit research participants. There’s a bright purple banner at the top of the Neuralink home page, showing a Canadian flag, that says, “We’ve received approval from Health Canada to begin recruitment for our first clinical trial in Canada”.

A screenshot of the Neuralink.com home page. On the bottom right is a blurred photo of a man wearing a ball cap, who appears to be in a wheelchair and using tubes as medical assistance. There is white text on the right-hand side. At the top is a purple banner with white text and a small Canadian flag.

When you click on the link, you get to another page showing the flags of the US, Canada, and the UK, where clinical trials are, it seems, either underway or planned.

A screenshot of a webpage from the Neuralink web site. It has a white background with black text. In the upper left-hand corner there are three small flags, one each for the USA, Canada, and the UK.

The Canadian version is called CAN-PRIME. There’s a YouTube promo/recruitment video for patients interested in joining “this revolutionary journey”.

According to the website, “This study involves placing a small, cosmetically invisible implant in a part of the brain that plans movements. The device is designed to interpret a person’s neural activity, so they can operate a computer or smartphone by simply intending to move – no wires or physical movement are required.”

A screenshot from the Neuralink web page. The background is grey with black text.

So, just to connect the dots here… ten years ago in Canada there was a study involving neurotechnology that “exploited the hell out of” Indigenous kids, according to Janice Parente, who leads the Human Research Standards Organization.

Now we have Elon Musk’s company actively recruiting people from across Canada, the US, and the UK, for research that would involve implanting experimental technology into people’s brains without, it seems, much research ethics oversight at all.

What could possibly go wrong?

Reference

Leo, G. (2024, December 2). ‘No consequences’ for violating human rights in privately funded research in Canada, says ethics expert. CBC News. https://www.cbc.ca/news/canada/saskatchewan/ethics-research-canada-privately-funded-1.7393063

________________________

Share this post: Neuralink’s Clinical Trials in Canada – https://drsaraheaton.com/2025/01/11/neuralinks-clinical-trials-in-canada/

This blog has had over 3.7 million views thanks to readers like you. If you enjoyed this post, please ‘Like’ it using the button below or share it on social media. Thanks!

Sarah Elaine Eaton, PhD, is a Professor and Research Chair in the Werklund School of Education at the University of Calgary, Canada. Opinions are my own and do not represent those of my employer. 


Future-proofing integrity in the age of artificial intelligence and neurotechnology: Prioritizing human rights, dignity, and equity

November 13, 2024

Once a year I write an editorial for the International Journal for Educational Integrity. I take a big idea, ground it in literature written by some of the best in the world, and then call for ways to improve our field even more. In 2023 I wrote about Postplagiarism, and in 2022 I focused on equity, diversity, inclusion, accessibility, and decolonization as new priorities for academic integrity. Here is this year’s editorial:

Future-proofing integrity in the age of artificial intelligence and neurotechnology: prioritizing human rights, dignity, and equity

A screenshot of an article title page. There is black text on a white background with a green banner at the top.
Here is a link to the original: https://edintegrity.biomedcentral.com/articles/10.1007/s40979-024-00175-2

Abstract

In this article I argue for the prioritisation of human rights when developing and implementing misconduct policies. Existing approaches may perpetuate inequities, particularly for individuals from marginalised groups. A human-rights-by-design approach centres human rights in policy development, revision, and implementation, ensuring that every individual is treated with dignity and respect.

Recommendations for implementing a human-rights approach to misconduct investigations and case management are offered, covering areas such as procedural fairness, privacy, equity, and the right to education. Additional topics covered are the need to limit surveillance technologies and the need to recognize that not all use of artificial intelligence tools automatically constitutes misconduct. I disentangle the differences between equity and equality and explain how both are important when considering ethics and integrity. A central argument of this paper is that a human-rights-by-design approach to integrity does not diminish standards but rather strengthens educational systems by cultivating ethical awareness and respect for personhood. I conclude with a call to action with a seven-point plan for institutions to adopt a human-rights-based approach to ethics and integrity. In the age of artificial intelligence and neurotechnology, insisting on human rights and dignity when we investigate and address misconduct allegations is an ethical imperative that has never been more important.

Keywords Academic misconduct, Academic dishonesty, Plagiarism, Policy, Human rights, Restorative justice, Artificial intelligence, Neurotechnology, Higher education, Education

Commentary

As I reflect on the current state of academic and research integrity, I am struck by a glaring omission in our discussions: the connection between misconduct and human rights. We often treat these as separate entities, failing to recognize the profound impact that misconduct investigations and policies can have on the fundamental rights of individuals. This oversight is particularly concerning in the age of artificial intelligence (AI) and neurotechnology, where the potential for harm is magnified.

Take, for example, the case of a professor in Canada who physically assaulted international students accused of plagiarism. This horrifying example demonstrates how the pursuit of academic integrity can be twisted into a justification for degrading and inhumane treatment, violating the very principles of dignity and respect that should guide our actions. While this is an extreme case, it highlights the need for a fundamental shift in our approach.

In this editorial, I offer a call to action to move beyond simply adhering to legal requirements and embrace a ‘human-rights-by-design’ approach that embeds human rights principles into our policies and practices. This means ensuring procedural fairness throughout investigations, safeguarding the privacy of individuals, and recognizing the right to be presumed innocent until there is proof to the contrary. It also requires us to acknowledge the diverse backgrounds and circumstances of our students and staff, striving for equitable treatment that addresses systemic inequalities and provides the support needed for everyone to succeed.

In the face of rapidly evolving technologies like AI, we must be especially vigilant in upholding human rights. The temptation to rely on unproven AI detection tools or to rush to judgement based on suspicion rather than evidence is strong, but it is a path that leads us away from justice and fairness. We cannot allow fear or expediency to erode our commitment to human dignity.

By centring human rights in our approach to integrity, we can create educational and research environments that are not only ethically sound but also truly just and equitable. This is not about lowering standards; it is about building a culture of integrity that upholds the inherent worth of every individual.

________________________

Share this post: Future-proofing integrity in the age of artificial intelligence and neurotechnology: Prioritizing human rights, dignity, and equity – https://drsaraheaton.com/2024/11/13/future-proofing-integrity-in-the-age-of-artificial-intelligence-and-neurotechnology-prioritizing-human-rights-dignity-and-equity/
