Machines Behaving Badly – why AI can never be moral

Daniele Pucci of the Italian Institute of Technology takes a picture of the iCub 3 robot, designed to test embodied AI algorithms © Marco Bertorello/AFP/Getty Images

Ask who were the most influential figures of the 20th century and several names jump out: Albert Einstein, Mahatma Gandhi and Franklin D Roosevelt on the positive side of the ledger, and that trio of tyrants, Hitler, Stalin and Mao, who did inexplicable evil.

But in Machines Behaving Badly, Toby Walsh argues convincingly that 1,000 years in the future (assuming humanity survives that long), the answer will be crystal clear: Alan Turing. As the pioneer of computing and the founder of artificial intelligence, Turing will be seen as the driving intellectual force behind the “pervasive and critical technology” that will by then invisibly permeate every aspect of our lives. The mathematician IJ Good, another codebreaker at Bletchley Park during World War II, predicted that the invention of an “ultra-intelligent machine”, as envisioned by Turing, would lead to an “intelligence explosion”.

“Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” Good wrote in 1965. Good went on to advise Stanley Kubrick on the making of the film 2001: A Space Odyssey, which introduced viewers to the wonders and dangers of just such an ultra-intelligent machine, named HAL 9000.

While it’s fascinating to speculate how artificial intelligence will have changed the world by 3022, Walsh focuses most of his attention on the here and now. The computers we have today may not yet match the intelligence of a two-year-old, he argues, but AI is already being put to an impressive and ever-expanding range of uses: detecting malware, checking legal contracts for errors, identifying birdsong, discovering new materials and (controversially) predicting crime and planning police patrols. Walsh’s goal is to make us think about the unintended consequences of using such a powerful technology in all of these ways, and more.

As a professor of AI at the University of New South Wales, Walsh is excited about the power and promise of the technology. Computers can help automate the dirty, difficult, boring and dangerous jobs that are unsuitable for humans. Indian police have used facial recognition technology to identify 10,000 missing children. AI is also being used to tackle the climate emergency by optimizing electricity supply and demand, predicting weather patterns, and maximizing wind and solar power capacity. But Walsh insists we need to think very carefully before allowing such technology to creep into every corner of our lives. Big Tech companies deploying AI are driven by profit rather than the good of society.

The most interesting and original section of the book concerns the question of whether machines can act morally. One of the most important and fascinating experiments in this field is the Moral Machine project, run by the Massachusetts Institute of Technology’s Media Lab. This online platform crowdsourced the moral choices of 40mn users by asking them, for example, how self-driving cars should decide between harms in unavoidable accidents.

How do users respond to the moral dilemma known as the “trolley problem”, dreamed up by the English philosopher Philippa Foot in 1967? Would you divert a runaway trolley to stop it killing five people on the track, at the cost of killing one person on another spur? In polls, about 90% of people say they would save the five lives at the cost of one.

But, like many computer scientists, Walsh is skeptical about whether such moral choices are applicable and whether they could ever be written into a machine’s operating system. First, we often say one thing and do another. Second, we do some things we know we shouldn’t (ordering ice cream when we’re on a diet). Third, moral crowdsourcing depends on the choices of a self-selected group of internet users, who do not reflect the diversity of societies and cultures. Finally, the moral decisions made by machines cannot simply be the fuzzy average of what people tend to do. Morals change over time: democratic societies no longer deny women the right to vote or enslave people, as they once did.

“We cannot today build moral machines, machines that capture our human values and can be held accountable for their decisions. And there are many reasons why I suspect we will never be able to,” Walsh writes.

But that doesn’t mean companies deploying AI should be given free rein. Lawmakers have a responsibility to delineate where it is acceptable for algorithms to substitute for human decision-making and where it is not. Walsh himself is an active campaigner against killer robots, or lethal autonomous weapons systems. To date, 30 countries have asked the UN to ban these weapons, although none of the world’s major military powers are yet among them.

For a technologist, Walsh is refreshing in emphasizing the primacy of human decision-making, flawed though it so often is. “It might be better to have human empathy and human accountability, despite human fallibility, than the cold logic of unaccountable and somewhat less fallible machines,” he concludes.

Machines Behaving Badly: The Morality of AI by Toby Walsh, Flint £20, 288 pages

John Thornhill is the FT’s innovation editor

Join our online book group on Facebook at FT Books Café
