If we haven’t managed to sort out standards for bioethics yet, what chance is there for creating a widely-accepted non-controversial robot ethics?
Remember Isaac Asimov’s Three Laws of Robotics? Once they were science fiction, but today they are a starting point for controlling the lethal weaponry being used on the battlefield by a number of countries. The Economist’s cover story last week is “Morals and the machine: teaching robots right from wrong“.
Already problems loom. Robot weapons — drones — have allowed the US and its allies to conduct many targeted assassinations of al-Qaeda insurgents. While these have sometimes involved civilian casualties, US officials contend that they actually spare many civilian lives. They certainly keep American casualty statistics down.
At the moment, a human operator makes the final decision on whether to launch a missile. But sooner or later, the machines will be making the decisions themselves, based on ethics protocols designed by human ethicists. The Economist says that “the judgments they make need to be ones that seem right to most people”. This seems a bit naive and already contains an ethical judgement.
So will robots be utilitarians, or deontologists, or essentialists? Or what? There is a cheerful side to all this. If bioethicists ever run out of work, there could be plenty of new jobs programming moral robots. This video of a conversation between two journalists at The Economist gives a good idea of the issues at stake.