Although bioethics deals with living beings and robot ethics deals with machines, their destinies are intertwined. As you can read in this week’s article about the development of Lethal Autonomous Robots (LARs), scientists, militaries and politicians need to create ethical “codes” for killing machines. And quickly.
At the moment, a human operator instructs a drone to release its missiles. But the time is not far off when drones will “decide” for themselves. And since their decision-making power needs to be programmed by someone, who better than bioethicists?
The great science fiction writer Isaac Asimov created his famous Three Laws of Robotics as long ago as 1942. These are useful, but they really don’t apply to what currently worries the United Nations about these new weapons. The First Law is that no robot may harm a human being – but several countries are designing robots whose main function is to kill.
The task of creating a robot ethics will be harder than it seems. The fractured history of bioethics offers a cautionary tale: are they to be utilitarian robots, or deontological robots, or principlist robots, or feminist robots, or what? Anyhow, I suspect that there will be job opportunities for unemployed bioethicists with programming skills in the near future…
The intertwined destiny of robot ethics and bioethics.