Machines Behaving Badly | The Saturday Paper

There is a tendency in discussions of artificial intelligence to oscillate between gushing visions of a magical future and grim predictions of dystopia. Toby Walsh’s latest offering, Machines Behaving Badly: The Morality of AI, navigates these twin tendencies constructively. At the outset, he notes that one cause for optimism in what might otherwise be a grim picture is that many people working in the AI industry are “waking up to their important ethical responsibilities”, and that “this book is a small part of my own awakening”.

It is heartening to see Walsh, a highly skilled and respected thinker in the field, taking responsibility for confronting ethical issues in engineering and computer science, disciplines in which such questions have too often been perceived – or dismissed – as outside their scope. This is an accessible read, with a nuanced mix of optimism and steady, practical caution.

Walsh observes that industry is leading the development of AI, but that the values of many companies doing this work may not be ethical or responsible. How do we solve this problem? This is undeniably a broad question that extends well beyond computing, provoking questions about law – including how it is made and by whom – the role and limits of cultural context, and thorny philosophical issues associated with the concept of fairness. While there is something deeply commendable about the task Walsh has set himself, it is fair to say that some of these questions are explored with more insight and depth than others.

At some level, there are questions that need to be explored before asking how to make AI ethical, including whether we need it at all. The case for AI is rarely interrogated and often simply assumed, leaving opportunity costs unexamined. It is perhaps understandable that Walsh is less inclined to explore these issues, assuming, for example, that self-driving cars are not only inevitable but desirable. This leaves unanswered the question of whether we should seize the opportunities offered by the digital revolution to discuss reducing our reliance on personal vehicles, which have reshaped our cityscapes at significant cost. It also raises broader social questions about the companies seeking to advance this technology and how they might seek to limit their own liability. If we want to make AI ethical, we need to ask questions not only about its applied functionality but also about who decides where and when it should be deployed. Perhaps Walsh’s ethical awakening doesn’t extend that far, but these are important and fundamental issues, and it would be helpful to hear his thoughts on them.

The risk of shifting emphasis in this way is, of course, that it becomes difficult to offer any meaningful analysis of the book’s ostensible subject matter. Kate Crawford’s Atlas of AI, for example, carries out a meticulous and careful examination of the material inputs that go into various AI systems. While Crawford’s work is invaluable, there is also something practical about Walsh’s determination to work through the realities of existing AI systems and the limits placed on them by laws and social conventions. As distinct and often complementary approaches to the same subject, the two are usefully read together.

A critical point is that AI does not sit in a vacuum. It is a human creation, and it is used and misused by humans – as Crawford memorably puts it, AI is “neither artificial nor intelligent”. Even the most autonomous weapons have been created through human labour and will be deployed by human decision-makers. Attempting to regulate the technology itself, independently of its political risks, misses this human context. It also means that the problems posed by AI are less novel than they often seem, and that traditional rules and laws are more relevant than we assume.

Walsh is at his best when discussing his own field and crafting bespoke, granular tools to map the ethical conundrums those working in it are likely to face. On several occasions he lists or proposes solutions to ethical problems, and it makes for easy, sensible reading without sacrificing seriousness. He is less persuasive when venturing into traditions of thought that are naturally less familiar to him – human rights, for example, are dispatched in just two pages.

That seems a shame, because so many of the challenges in this area – balancing rights and negotiating ideas of fairness – have occupied human rights thinkers for decades. But overall, there is much to admire in someone of Walsh’s deep expertise grappling with the trials and tribulations of regulating human behaviour.

This book is well suited to readers immersed in the world of machine learning who are looking for a functional and engaging introduction to the importance of ethical thinking. It is an important book because Walsh speaks from a position of authority on the benefits of caution and reflection, which serve as a critical counterbalance to the breathless utopianism that is not uncommon in the field.

It is not a definitive work, but perhaps that is the point: one hopes it will encourage more interdisciplinary conversations about the social and political contexts in which AI operates. The task could not be more urgent, given that technology will play a role in addressing some of the biggest problems facing humanity, including climate change, wealth inequality and precarious work. Walsh’s contribution brings us a step closer to ethical solutions.

La Trobe University Press, 288 pages, $32.99

La Trobe University Press is an imprint of Schwartz.

This article first appeared in the print edition of The Saturday Paper on June 4, 2022 under the headline “Machines Behaving Badly, Toby Walsh”.


James G. Williams