How can human-centric AI combat biases in machines and people?

Companies invested around $50 billion in artificial intelligence systems last year. This figure is expected to more than double, to $110 billion, by 2024. Such a boom in investment raises many questions, but at the heart of them, for MIT Sloan lecturer Gosline, is how to recognize and counter the bias that exists in AI-based decision making.

“A lot of research has highlighted the problems of algorithmic bias and the threat it poses systemically,” Gosline says in a new MIT Sloan Experts Series conference, available below. “It’s a huge problem – one that I don’t think we can take seriously enough.”

In a discussion with data scientist Cathy O’Neil and Kathy Baxter, Salesforce’s Architect of Ethical AI Practice, Gosline explains human-centered AI: the practice of including the contributions of people from different backgrounds, experiences, and lifestyles in the design of AI systems. Mainstream wisdom assumes that the role of algorithms “is to correct the biases that humans have,” Gosline says. “It follows the assumption that algorithms can just come in and help us make better decisions – and I don’t think that’s an assumption we should be operating under.”

Instead, we need a thorough understanding of the biases that exist in both humans and algorithms, as well as of how different groups place their trust in AI. With this information, decision-making processes can be designed in which algorithms and humans work together to compensate for each other’s blind spots and arrive at clearer, less biased results.

Watch the full conference below to learn more about:

  • The value of greater transparency around the data and assumptions that feed AI models;
  • The need to address the vast information asymmetry between data scientists working in this field and all those whose lives are deeply affected by the work – those who “eat the output,” as Gosline puts it;
  • The ways Black people and people of color “end up disproportionately on the wrong side” of algorithmic bias;
  • The ability and responsibility of globally influential companies like Salesforce to use their position to advance more equitable approaches to AI.

“If we truly understand how these systems can both constrain and empower humans,” Gosline says, “we’ll do a better job of improving them and removing bias.”

James G. Williams