Tomorrow is Good: what about morality in machines?

What could machines capable of moral behavior actually mean in practice? The Swedish fiction series Real Humans presents an interesting twist on this question. 'Hubots' – a contraction of 'human' and 'robot' – live together with human beings. They eventually revolt and demand more rights: people should not only respect them, but also take their concerns seriously. This type of robot can also be held morally and legally responsible. And if it turns out that robots are morally better (less fallible) than we are, then, in principle, they could become our "moral mentors".

Real Humans is, of course, fiction: an interesting thought experiment, but no more than that. A program that works "well" simply means that it does what it is supposed to do. But what about morality in machines? Can we "program" that? The technical and functional aspects alone already pose quite a challenge. Beyond that, there is no "perfect moral blueprint" containing all the data we would need to easily train a machine. After all, people are morally fallible by nature.

No consensus

There is also no consensus as to which ethical theory should form the basis of ethical standards. Philosophers have been discussing this for centuries. Does the AI system have to respect moral rules? Or must it act in such a way as to increase the happiness of the greatest number?

In the former case (adhering to moral rules), the machine must be programmed with explicit rules that it has to follow in order to make a moral decision. Let's keep the rule simple and take the age-old golden rule: "Don't do to someone else what you wouldn't want done to yourself." The rule may sound simple, but it is extremely complex in its application. The computer must be able to determine for itself what it would and would not want, in various hypothetical contexts, and to evaluate for itself the consequences of the actions of others.

Even if the computer feels no real empathy, it must at least have a capacity for "empathy" in order to calculate the consequences of its own actions on others, and to estimate whether it would want to be treated in the same way itself. In doing so, the system must also take different individual views and preferences into account.
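To make the difficulty concrete, here is a minimal toy sketch of such a rule check. Everything in it is invented for illustration: the agents, the "aversion" scores and the action effects are hypothetical stand-ins for the kind of preference model a real system would need.

```python
# A toy "golden rule" check. All agents, effects and numbers below are
# invented for illustration; a real system would need a far richer model
# of contexts and preferences.
from dataclasses import dataclass
from typing import Dict


@dataclass
class Agent:
    name: str
    # How strongly the agent dislikes each kind of effect
    # (0 = indifferent, 1 = strongly unwanted).
    aversions: Dict[str, float]


def golden_rule_permits(action_effects: Dict[str, float], actor: Agent,
                        tolerance: float = 0.2) -> bool:
    """Naive golden rule: permit the action only if the actor would accept
    the same effects being imposed on itself."""
    harm_if_reversed = sum(strength * actor.aversions.get(effect, 0.0)
                           for effect, strength in action_effects.items())
    return harm_if_reversed <= tolerance


robot = Agent("robot", aversions={"being ignored": 0.1, "being deceived": 0.9})
human = Agent("human", aversions={"being ignored": 0.7, "being deceived": 0.9})

# The robot barely minds being ignored, so the naive rule permits ignoring
# the human (True), even though the human minds it a great deal (False if
# the roles were reversed). The rule only works once individual views and
# preferences are modelled as well.
print(golden_rule_permits({"being ignored": 1.0}, robot))  # True
print(golden_rule_permits({"being ignored": 1.0}, human))  # False
```

Even this caricature shows the problem: the rule is only as good as the machine's model of what it, and everyone affected, would actually want.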

Utilitarianism or consequentialism

Can this even be captured in mathematical terms? Perhaps it is "easier" to feed the system an ethical theory that focuses on increasing the happiness of the greatest number. To incorporate this ethical theory (utilitarianism or consequentialism) into a machine, the effect of every action on every member of the moral community would have to be given a numerical value.
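A minimal sketch of what that calculus might look like, with purely invented actions, names and utility values; it only illustrates the "add everything up and pick the maximum" idea, not a real implementation.

```python
# Toy utilitarian calculus: reduce every effect on every member of the
# moral community to a number and pick the action with the highest sum.
# The actions, names and values are invented for illustration.
from typing import Dict


def total_utility(effects_per_person: Dict[str, float]) -> float:
    """Sum the estimated utility change an action causes for each person."""
    return sum(effects_per_person.values())


def choose_action(actions: Dict[str, Dict[str, float]]) -> str:
    """Pick the action with the highest total utility."""
    return max(actions, key=lambda name: total_utility(actions[name]))


candidate_actions = {
    "tell the truth": {"alice": -2.0, "bob": +5.0, "carol": +1.0},
    "stay silent":    {"alice": +1.0, "bob": -1.0, "carol": 0.0},
}

print(choose_action(candidate_actions))  # "tell the truth" (+4 versus 0)
```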

Yet it is impossible to do this in real time for every action in the world, especially since the effects of each action lead to new consequences in turn. You could mitigate the computational problem by setting a threshold beyond which further estimation of consequences is no longer deemed necessary. Even that is incredibly complex. Moreover, an enormous amount of suffering and pain can be caused just beyond that limit, and we would inevitably consider that a morally wrong act.
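The cut-off idea can be sketched as well: the snippet below stops expanding the tree of consequences at a fixed depth. The "consequence graph" and utilities are invented; the point is only that whatever lies just beyond the threshold, however bad, simply drops out of the estimate.

```python
# Bounded consequence estimation: sum utilities down the consequence tree,
# but stop at a fixed depth threshold. Graph and utilities are invented.
from typing import Callable, Dict, List


def estimate_utility(action: str,
                     consequences_of: Callable[[str], List[str]],
                     utility_of: Callable[[str], float],
                     depth_limit: int) -> float:
    """Sum the utility of an action and its downstream consequences,
    ignoring everything deeper than `depth_limit` levels."""
    total = utility_of(action)
    if depth_limit <= 0:
        # Anything past the threshold is ignored, including any suffering
        # that happens to lie just beyond it.
        return total
    for follow_up in consequences_of(action):
        total += estimate_utility(follow_up, consequences_of, utility_of,
                                  depth_limit - 1)
    return total


graph: Dict[str, List[str]] = {
    "divert the trolley": ["minor injury"],
    "minor injury": ["long hospital stay"],
    "long hospital stay": [],
}
utilities = {"divert the trolley": 0.0, "minor injury": -1.0,
             "long hospital stay": -10.0}

shallow = estimate_utility("divert the trolley", lambda a: graph.get(a, []),
                           lambda a: utilities.get(a, 0.0), depth_limit=1)
deep = estimate_utility("divert the trolley", lambda a: graph.get(a, []),
                        lambda a: utilities.get(a, 0.0), depth_limit=3)
print(shallow, deep)  # -1.0 -11.0: the shallow estimate misses the worst outcome
```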

Just like Real Humans itself, ideas about "moral machines" make for some interesting thought experiments. But for now, not much more than that.

About this column:

In a weekly column, written alternately by Tessie Hartjes, Floris Beemster, Bert Overlack, Mary Fiers, Peter de Kock, Eveline van Zeeland, Lucien Engelen, Jan Wouters, Katleen Gabriel and Auke Hoekstra, Innovation Origins tries to figure out what the future will look like. These columnists, sometimes joined by guest bloggers, all work in their own way on solutions to the problems of our time. So that tomorrow is good. Here are all previous articles.

James G. Williams