‘Ethical Machines’ Breaks Down AI Ethical Risk Mitigation Planning

Maybe your organization has a team of developers building cutting-edge AI from the ground up. Maybe your company’s human resources department wants to adopt a recruiting engine to help with the hiring process. Whatever the AI’s task, the prospect of automating it opens the door to new efficiencies, saving your business valuable time and money.

It all sounds wonderful from a utopian worldview.

As Reid Blackman illustrates at length in his new book, “Ethical Machines,” the ethical risks of deploying AI are vast. In some use cases, they are unpredictable. In others, they emerge anyway, even when consciously anticipated.

Take Amazon, for example: a company that realized the ethical risks of AI development the hard (i.e. expensive and reputation-damaging) way.

A team of engineers at Amazon developed a resume-reading AI to help humans screen tens of thousands of resumes a day, Blackman details in his book. The team trained the AI on a decade of hiring data and told the machine to look for patterns in what made a candidate “interview-worthy.”

“Women are not interview-worthy” was, in effect, the pattern the model spat out.

The AI was biased. More specifically, it learned to be biased from the troves of labeled data it was fed. Amazon eventually ditched its recruiting engine, despite subsequent efforts to weed out the discriminatory inputs; there was no guarantee the AI wouldn’t discover other patterns in the data that could lead to discriminatory judgments.
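To make the failure mode concrete, here is a minimal, hypothetical sketch – synthetic data, not Amazon’s actual system – of how a classifier trained on historically biased interview labels learns to penalize a gendered proxy term. The toy resumes, the labels, and the use of scikit-learn are all illustrative assumptions.

    # Hypothetical sketch: a resume screener inherits bias from historical labels.
    # Synthetic data for illustration; this is not Amazon's actual system.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy "historical" resumes; labels record past (biased) interview decisions.
    resumes = [
        "software engineer java backend systems",
        "software engineer python women's chess club",
        "data scientist machine learning statistics",
        "data scientist python women's coding society",
        "backend engineer java distributed systems",
        "machine learning engineer women's hackathon python",
    ]
    interviewed = [1, 0, 1, 0, 1, 0]  # past reviewers passed over the "women's" resumes

    vectorizer = CountVectorizer()  # simple bag-of-words features
    X = vectorizer.fit_transform(resumes)
    model = LogisticRegression().fit(X, interviewed)

    # The model faithfully reproduces the historical pattern: the token "women"
    # receives a strongly negative weight, acting as a proxy for gender.
    weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
    print(sorted(weights.items(), key=lambda kv: kv[1])[:3])

No one coded discrimination in; the model simply learned from the examples it was given, which is why scrubbing the obvious terms offers no guarantee against subtler proxies.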

Machine learning (ML), a subset of AI, is “software that learns by example,” Blackman told Compliance Week in an interview. Importantly, ML is not an ethically neutral tool, he argued. It’s not like, say, a screwdriver.

“The people who commission, design, develop, deploy, and approve AI are quite different from the people who make screwdrivers,” Blackman wrote. “When you develop AI, you develop ethical or unethical machines.”

Thus, it is the people – collectively, the company – involved at each stage of an AI’s life cycle who are responsible for its learning, its decisions, and its impacts, whether intended or not. Blackman’s book makes the case for companies to adopt a comprehensive AI ethics program that infuses thoughtful decision-making into every stage of AI development, from design to deployment, and engages a cross-functional team of experts (not just technologists) to avoid costly dead ends like Amazon’s, or worse.

Bias, explainability, privacy – where compliance comes into play

With a dry humor that permeates the book, Blackman poked fun at the three most talked-about issues in AI ethics: bias, explainability, and privacy. He wasn’t joking about the problems themselves; those are real. In fact, he devoted entire chapters to demystifying them and outlining steps to deal with them. What he joked about was the clichéd way people talk about them.

Everyone knows these are critical challenges, but the so-called “subjectivity” of ethics muddies the conversation, Blackman observed, to the point that people shrug their shoulders and drop out of the debate, allowing the confusion to prevail. He made a compelling case that ethics is not subjective when it comes to deploying AI within an organization. Business leaders need to be clear about what they are willing to ethically defend and risk in the name of AI. Thus, different companies will have different appetites for ethical risk.

“One thing I try to emphasize is that this is about ethical risk mitigation, not ethical risk elimination. You have to weigh that risk against other types of risk, like risks to the bottom line and profit,” Blackman told CW.

“I’m not trying to get them to radically rethink their ethics priorities. I impress upon them that there are real risks here that need to be fed into the deliberative process, which surely should include compliance officers,” he added.

Here is a brief overview of the Big Three, according to “Ethical Machines”:

  • Bias: As seen in the Amazon example, bias occurs when an ML produces discriminatory outputs or automated decisions that range from ethically problematic to egregious, depending on the impact of those decisions on people. In addition to gender bias, consider racial profiling. (A minimal fairness check is sketched after this list.)
  • Explainability: The extent to which an ML’s algorithm (i.e., what happens between its inputs and its outputs) can be deciphered by humans. A “black box” ML is unexplainable; the pattern the machine has identified is too complex for humans to understand. The issue becomes important when people affected by an ML’s decisions are owed an explanation. For example, an applicant denied a mortgage or parole based on an AI’s decision might reasonably demand an intelligible one.
  • Privacy: Data is the material on which an ML is trained, and the more, the better. The privacy issue arises when data is collected about individuals without their knowledge or consent. Additionally, an ML trained on personal data may learn to make decisions that themselves threaten to invade people’s privacy. Consider facial recognition software.
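To give a flavor of what a compliance review of bias might involve in practice, here is a minimal sketch of a demographic-parity (disparate-impact) check. The data, the group labels, and the four-fifths threshold are illustrative assumptions, not a procedure prescribed by the book.

    # Hypothetical sketch: flag a model whose selection rates diverge across groups.
    def selection_rates(decisions, groups):
        """Return the fraction of positive decisions for each group."""
        rates = {}
        for g in set(groups):
            picks = [d for d, grp in zip(decisions, groups) if grp == g]
            rates[g] = sum(picks) / len(picks)
        return rates

    decisions = [1, 1, 0, 1, 0, 0, 1, 0]               # model outputs (1 = interview)
    groups = ["m", "m", "m", "m", "f", "f", "f", "f"]  # illustrative group labels

    rates = selection_rates(decisions, groups)
    ratio = min(rates.values()) / max(rates.values())
    print(rates, f"disparate-impact ratio = {ratio:.2f}")

    # The 80% ("four-fifths") threshold is a common rule of thumb, not a legal test.
    if ratio < 0.8:
        print("Flag for review: selection rates diverge markedly across groups.")

A check like this doesn’t settle the ethical question; it is the “smoke detector” that tells a cross-functional committee where to look.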

Compliance could and should play a watchdog role in these difficult areas, Blackman recommended. He advised any company considering investing in AI to form a cross-functional AI ethics committee and to include a compliance person on it (in addition to data scientists, subject matter experts, and people with business and legal expertise).

“It’s not that a compliance officer has to be involved in every project. It’s that they need to be involved whenever ethical smoke is detected,” Blackman explained. The formal involvement of compliance in the AI ethical risk due diligence process also helps to ensure that “there is a procedure to get the right people to review [the potential ethical risks] to see if there really is a fire there,” he said.


Clearly, compliance officers will be involved in ensuring companies meet upcoming AI regulatory requirements. The European Commission proposed a draft regulation on AI in April 2021, establishing a legal framework that would impose a wide range of requirements on the private and public sectors.

What kinds of requirements would the AI law entail?

“These three big issues that I raise – the challenges of bias, explainability, and privacy – will 100 percent show up in regulations in the near future,” Blackman said.

He thinks the European AI law will come into force in three years.

“Three years may seem like a lot, but when you’re talking about the level of organizational change required to be truly compliant, you’re not starting six months before the regulations roll out. We are talking about training tens of thousands of people. We’re talking about updating policies, [key performance indicators], infrastructure, and governance. It’s a big lift.”

“Ethical Machines” is a good starting point.

James G. Williams