March 28, 2024

Ethical standards urgently needed for neurotechnology, say researchers and ethicists

Otherwise we might end up as puppies for AI, as Elon Musk has warned

A group of researchers and ethicists delivered a warning in Nature in November about the dangers of neurotechnology and AI (sorry, guys, we missed this earlier). The Morningside Group, headed by Columbia University neuroscientist Rafael Yuste, claims that existing ethical standards have been outpaced by galloping technology:

we are on a path to a world in which it will be possible to decode people's mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals could communicate with others simply by thinking; and where powerful computational systems linked directly to people's brains aid their interactions with the world such that their mental and physical abilities are greatly enhanced.

Most research in the area currently focuses on medical applications, such as helping people with brain and spinal cord damage. But the technology will soon have commercial and military applications that raise important ethical issues:

the technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency and an understanding of individuals as entities bound by their bodies.

Hence the Morningside scholars propose four ethical principles to be incorporated into ethical codes and legislation.

Privacy and consent. Most people are already hooked up to the internet through their smartphones, which can collect huge amounts of revealing data. Google says that people touch their phones 1 million times a year. “We believe that citizens should have the ability — and right — to keep their neural data private,” assert the authors. Opting out of sharing this data should be the default choice on devices. The transfer of neural data should be regulated like organ donation to prevent a market from developing.

Agency and identity. “Neurotechnologies could clearly disrupt people's sense of identity and agency, and shake core assumptions about the nature of the self and personal responsibility — legal or moral.” Therefore “neurorights” should be protected by international treaties. Consent forms should warn patients about the risk of changes in mood, sense of self and personality.

Augmentation. Neurotechnology could allow people to radically increase their endurance or intelligence, which could fuel discrimination and shift social norms. The researchers urge that “guidelines [be] established at both international and national levels to set limits on the augmenting neurotechnologies that can be implemented, and to define the contexts in which they can be used — as is happening for gene editing in humans.”

Bias. Research has shown that bias can be incorporated into AI systems, and can be devilishly hard to eliminate. The Morningside Group recommends that “probable user groups (especially those who are already marginalized) have input into the design of algorithms and devices as another way to ensure that biases are addressed from the first stages of technology development.”

Within academia, these proposals may seem like Ethics 101. But history shows that once devices are commercialised, ethics are easily forgotten:

History indicates that profit hunting will often trump social responsibility in the corporate world. And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they aren't prepared. We think that mindsets could be altered and the producers of devices better equipped by embedding an ethical code of conduct into industry and academia.
