Can machines be moral?

A prominent bioethicist answered “no”.

A big question arising from the rapid development of AI technology in the 21st century is whether it is possible to create moral machines.

Many researchers in the field of artificial intelligence recognize that machines will find themselves in ethically difficult situations. An example is the so-called “trolley problem” that self-driving cars are likely to encounter.

In light of this, researchers are trying to program AI machines for ethical decision-making. MIT’s Moral Machine project, for example, collected data from millions of people around the world on how they would respond to the ethical dilemmas of driving. The researchers intend to use the survey data to inform the programming of self-driving cars.

But can we really integrate ethics into AI machines? While some experts think it is simply a matter of surveying drivers, others argue that this amounts to a travesty of ethics: true ethics is personal and experiential.

Australian ethicist Rob Sparrow recently addressed this issue in AI & Society. Sparrow offers a lucid critique of the superficial understanding of ethics often implicit in discussions of machine ethics. He draws heavily on the ethical writings of Raymond Gaita, another Australian philosopher, who offered a deeply humanistic account of ethics focused on human emotions, the expressive capacities of human beings, and the importance of one’s life history for ethical authority.

Sparrow begins by distinguishing between moral and non-moral dilemmas and suggests that ethics is much more than a matter of science or opinion. Specifically, he argues that there is an element of personal responsibility in ethical decision-making that is not found in more scientific or opinion-based decisions.

We may outsource difficult personal decisions about financial matters to a computer program that calculates risk. We could even use a mobile app to choose our ice cream.

But we cannot outsource ethical decision-making. Sparrow imagines a son who must choose between keeping his sick father alive and letting him die so that his organs can be harvested to save the lives of three other people. Hypothetically, there could be a mobile phone app offering ethical expertise in morally difficult situations like this, and the son could turn to that app for moral advice. Yet this would not remove the son’s personal responsibility for his decision.

As Sparrow writes, any attempt to externalize ethics is at best “a caricature of moral reasoning rather than an example of what it is to choose wisely in the face of competing ethical considerations.” The son cannot escape his personal ethical responsibility by acting on the advice of others, much less on the advice of a computer algorithm.

Sparrow goes on to say that how one deals with a moral dilemma is to some extent a matter of who the agent is, and that “the character, the life story, of the individual faced with the dilemma may come into our account of his reasoning about the dilemma and thus, to some extent, into our explanation of the nature of the dilemma”.

And the son and his father? It matters that the protagonist is a son and that the subject of his decision is his father; if they were strangers, the decision would be different. One’s life story is relevant to moral decision-making.

Machines lack this moral personality. A machine can mimic the behavior of a human being in this situation, but it would never be able to act from a son’s point of view.

After discussing the complex subject of moral authority and its relationship to human personality, Sparrow suggests that machines can never feel remorse to the point that they can be said to have a human sense of moral responsibility. He writes:

“No matter how they are programmed or how they learn to behave, machines will not be able to be ethical – or to act ethically – because not all the decisions they make will be decisions for them… for the foreseeable future, machines will not have enough moral personality for it to be plausible that they might feel remorse for what they have done”.

The topic of AI ethics will continue to spark scientific discussion. But as Sparrow writes, “before we try to build ethics into machines, we need to make sure we understand ethics.”

Xavier Symons is a Postdoctoral Fellow at the Plunkett Center for Ethics at Australian Catholic University and a 2020 Fulbright Future Postdoctoral Fellow.
