STORRS, CONNECTICUT — The prospect of artificial intelligence (AI) has long been a source of tricky ethical questions. But the focus has often been on how we, the creators, can and should use advanced bots. What is missing from the discussion is the need to develop a set of ethics for the machines themselves, along with a way for machines to resolve ethical dilemmas as they arise. Only then can intelligent machines operate autonomously, making ethical choices as they perform their tasks, without human intervention.
There are many activities that we would like to be able to entrust entirely to machines operating autonomously. Robots can perform tasks that are too dangerous or unpleasant for humans. They can fill gaps in the labor market. And they can handle highly repetitive or detail-oriented work, to which they are better suited than people.
But no one would be comfortable with machines acting independently, without an ethical framework to guide them. (Hollywood has done a pretty good job of highlighting these risks over the years.) That’s why we need to train robots to identify and weigh the ethically relevant characteristics of a given situation (for example, those that indicate potential benefits or harm to a person). And we must instill in them the duty to act appropriately (to maximize the benefits and minimize the harms).