The complexity of the human brain is as fascinating as it is mystifying. Increasing our understanding of its ability to synthesise information, respond quickly to stimuli, and remodel itself could unlock many opportunities to benefit society.
It is no surprise, then, that a recent UNESCO report has shown research and patents relating to neurotechnology have increased twentyfold in two decades. When we combine this insight with our latest horizon scan, which highlights neurotechnologies as an area where there is potential for developments to spark ethical concern, it becomes pertinent for us to reflect upon what has changed since the publication of our novel neurotechnologies report in 2013.
At that time, we were particularly interested in how neurotechnologies were being used to intervene in the human brain, in both clinical practice and non-medical settings. We found the regulatory landscape to be inconsistent and fragmented, which we warned could present issues in assuring safe use.
Therapeutic vs. non-therapeutic
Ten years on, the adoption of neurotechnologies outside of traditional clinical environments has expanded, and the purposes for which they are being used arguably span both the therapeutic and the non-therapeutic. Indeed, changes in the way healthcare is delivered, and in what we understand 'health' to be, might merit revisiting whether a clear ethical distinction between therapeutic and non-therapeutic uses can still be drawn.
Should we understand ‘therapeutic use’ as being limited to the treatment of diagnosable illness and injury, or should it be expanded to uses to address inequities, for example? New or refined answers to old ethical questions may be needed for policymakers to effectively oversee the neurotechnological space.
A related question is whether therapeutic uses of neurotechnology should be limited to the treatment of neurological or psychiatric disorders. For example, if data collected by a brain-computer interface offers insight into the functioning of other bodily systems, should its use be expanded accordingly?
When novelty runs out
Another issue that has come into focus in recent years is the consequence of technology becoming obsolete. Problems have arisen, for example, with implantable medical devices that patients rely upon but that are no longer maintained by the manufacturer. When such a device stops functioning as it should, patients lose its benefit and may face safety risks.
This raises the question of how the possibility of obsolescence should shape the consent process, and whether it should be discussed with patients before they agree to treatment.
Tolerating inequity
In 2013, we identified safety, privacy, autonomy, equity and trust as interests that merit protection when researching, designing and using neurotechnologies.
Ensuring that interventions aimed at promoting equity do not unintentionally have the opposite effect can be challenging. For example, Dr Emma Meaburn, our visiting senior researcher, recently blogged about how polygenic scores can inform us of an individual’s genetic predisposition to traits and disorders relevant to education. This information could potentially be used to direct early educational resources and support to those who are more likely to struggle in school. However, polygenic scores come with a host of assumptions and biases, meaning there is a real risk that instead of mitigating social and educational inequities as intended, they might further exacerbate them.
Is the same true for neurotechnologies? A key ethical question is whether the perpetuation of inequity is a compromise we can, or should, tolerate in order to provide benefit.
So, while our 2013 report still provides a solid foundation for researchers and policymakers, these developments suggest a review is needed to ensure that ethical oversight is maintained and interests are protected in this dynamic and evolving space.
We look forward to exploring this further and are interested to hear your thoughts too.