March 19, 2024

Can machines be moral?

One prominent bioethicist argues “no”.

One big question arising from the rapid development of AI technology in the 21st century is whether it is possible to create moral machines.

Many researchers in the field of artificial intelligence recognise that machines will find themselves in ethically challenging situations. One example is the so-called ‘trolley problems’ that self-driving cars are likely to encounter.

In light of this, researchers are attempting to program AI machines for ethical decision-making. MIT’s Moral Machine project, for example, has collected data from millions of people across the world about how they would respond to the ethical dilemmas of driving. The researchers plan to use this survey data to inform how self-driving cars are programmed.

But can we build ethics into AI machines? While some experts believe it is just a matter of surveying drivers, others argue that this approach yields merely a simulacrum of ethics, because real ethics has a personal and experiential character.

Australian ethicist Rob Sparrow recently tackled this issue in AI & Society. Sparrow offers a lucid critique of the superficial understanding of ethics often implicit in discussions of machine ethics. He draws heavily upon the ethical writings of Raymond Gaita, another Australian philosopher, whose deeply humanistic account of ethics focuses on human emotions, the expressive capacities of human beings, and the importance of life history for ethical authority.

Sparrow begins by drawing a distinction between moral and non-moral dilemmas, and suggests that ethics is much more than a matter of science or opinion. Specifically, he argues that there is an element of personal responsibility in ethical decision-making that isn’t found in decisions that are merely scientific or matters of opinion.

We might outsource difficult personal decisions about financial matters to a computer program that calculates the risks. We could even let a mobile phone app choose our ice cream.

But we cannot outsource ethical decision-making. Sparrow imagines a son who must choose whether to preserve the life of his ailing father or to let him die and allow his organs to be harvested to save the lives of three other people. Hypothetically, a mobile phone app might exist that offers ethical expertise in morally challenging situations such as this, and the son might turn to this app for moral guidance. Yet this would not take away the son’s personal responsibility for his decision.

As Sparrow writes, any attempt to outsource ethics is at best “a caricature of moral reasoning rather than an exemplar of what it is to choose wisely in the face of competing ethical considerations”. The son cannot escape personal ethical responsibility by acting on the advice of others, much less on the advice of a computer algorithm.

Sparrow goes on to argue that the way in which one handles a moral dilemma is to some extent an agent-relative matter, and that “the character, life history, of the individual facing the dilemma may enter into our account of their reasoning about the dilemma and thus, to a certain extent, into our account of the nature of the dilemma”.

Consider again the son and his father. It matters that the protagonist is a son and that the subject of his decision is his father; if they were strangers, the decision would be different. Life history has relevance to moral decision-making.

Machines lack this moral personality. A machine might mimic the behaviour of a human being in this situation, but it could never act from the perspective of a son.

After discussing the complex topic of moral authority and its relationship to human personality, Sparrow suggests that machines can never feel remorse in such a way that they might be said to have a human sense of moral responsibility. He writes:

“No matter how they are programmed or have learned to behave, machines will not be capable of being ethical — or acting ethically — because any decisions that they make will not be decisions for them… for the foreseeable future machines will lack sufficient moral personality to make it intelligible that they might feel remorse for what they have done”.

The topic of AI ethics will continue to generate scholarly discussion. But as Sparrow writes, “before we try to build ethics into machines, we should ensure that we understand ethics”.

Xavier Symons is a Postdoctoral Research Fellow at the Plunkett Centre for Ethics, Australian Catholic University, and 2020 Fulbright Future Postdoctoral Scholar.
