April 18, 2024

Great suffering software!

Oxford bioethicist Anders Sandberg asks whether software can suffer. If so, what are the ethics of creating, modifying and deleting it from our hard drives?

Most bioethical discourse deals with tangible, nitty-gritty issues like surrogacy, stem cells, abortion, assisted suicide, or palliative care.

But there is a theoretical avant garde in bioethics, too. Theoretical bioethics tries to anticipate ethical issues which could arise if advanced technology becomes available. There are always a lot of ifs – but these are what bring joy to an academic’s heart.

The other day an intriguing example appeared in the Journal of Experimental & Theoretical Artificial Intelligence: a paper by Oxford bioethicist Anders Sandberg asking whether software can suffer and, if so, what the ethics are of creating, modifying and deleting it from our hard drives.

We’re all familiar with software that makes us suffer because of corrupted files and crashes. But whimpering, yelping, moaning software?

This is a bit more plausible than it sounds at first. There are at least two massive “brain mapping” projects under way. The US$1.6 billion Human Brain Project, funded by the European Commission, is being compared to the Large Hadron Collider in its importance. The United States has launched its own US$100 million brain mapping initiative. The idea of both projects is to build a computer model of the brain, doing for our grey matter what the Human Genome Project did for genetics.

Theoretically, the knowledge gained from these projects could be used to emulate the brains of animals and humans on computers. No one knows whether this is possible, but it is tantalising for scientists who are seeking a low-cost way to conduct animal experiments.
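To make the idea of “emulation” a little more concrete: at the very smallest scale, computational neuroscientists already simulate single neurons with simple equations, such as the textbook leaky integrate-and-fire model. The Python sketch below shows the flavour of what running brain tissue “in software” means; every name and parameter value in it is an illustrative default invented for this sketch, not anything drawn from either brain-mapping project or from Sandberg’s paper.

```python
# A toy illustration of neural "emulation": a single leaky integrate-and-fire
# neuron. All parameter values are illustrative defaults for this sketch.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Step a leaky integrate-and-fire neuron through a series of input currents.

    Returns the membrane-voltage trace and the steps at which the neuron fired.
    """
    v = v_rest
    trace, spikes = [], []
    for step, current in enumerate(input_current):
        # The membrane voltage leaks back toward rest and is driven by the input.
        dv = (-(v - v_rest) + current) / tau
        v += dv * dt
        if v >= v_threshold:   # threshold crossed: the neuron "fires"
            spikes.append(step)
            v = v_reset        # and its voltage resets
        trace.append(v)
    return trace, spikes

# Drive the model neuron with a constant current strong enough to make it fire.
voltages, spike_times = simulate_lif([20.0] * 1000)
print(f"{len(spike_times)} spikes in 1000 simulated steps")
```

Scaling one such equation up to the tens of billions of interconnected neurons in a human brain is the engineering challenge those billion-dollar projects are circling; the ethical questions begin if it succeeds.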

A successful emulation would imply that a being (is it too much to call it a person?) is alive on the hard drive. And building on the ethics of animal experimentation, it could be argued that tweaking the software to emulate pain would be wrong.

How would we know whether the software is suffering? That is a philosophical conundrum. Sandberg believes that the best option is to “assume that any emulated system could have the same mental properties as the original system and treat it correspondingly”. In other words, software brains should be treated with the same respect as experimental animals; virtual mistreatment would be just as wrong as real mistreatment in a laboratory.

How about the most difficult of all bioethical issues, euthanasia? For animals, death is death. But if there are identical copies of the software, is the emulated being really dead? On the other hand, would we be respecting the software’s dignity if we kept deleting copies?

Even trickier problems crop up with emulations of the human brain. What if a virus turns software schizophrenic or anorexic? “If we are ethically forbidden from pulling the plug of a counterpart biological human,” writes Sandberg, “we are forbidden from doing the same to the emulation. This might lead to a situation where we have a large number of emulation ‘patients’ requiring significant resources, yet not contributing anything to refining the technology nor having any realistic chance of a ‘cure’.”

And what about software “rights”? Could the emulations demand a right to be run from time to time? How will their privacy rights be protected? What legal redress will they have if they are hacked?

Watch this space.
