CMU AI researchers present a new study on achieving fairness and accuracy in machine learning systems for public policy

The rapid rise of machine learning applications in criminal justice, employment, health care, and social service interventions is having a huge impact on society. These widespread applications have heightened concerns among machine learning and artificial intelligence researchers about their potential to cause harm. New methods have been developed, and theoretical limits established, to improve the performance of ML systems. With such advances, it becomes necessary to understand how these methods and limits translate into policy decisions and affect society. Researchers continue to strive for unbiased, accurate models that can be used across these fields.

A deep-rooted assumption is that there is a trade-off between accuracy and fairness when using machine learning systems. Accuracy here refers to how well the model predicts the outcome of interest for the task at hand, not to the statistical metric of the same name. An ML predictor is considered unfair if it treats people differently based on sensitive or protected attributes (for example, membership in a racial minority or economic disadvantage). To mitigate this, practitioners make adjustments to the data, the labels, the model training, the scoring system, or other parts of the ML pipeline. The conventional wisdom is that such changes make the system less accurate.
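
To make the kind of adjustment described above concrete, here is a minimal, hypothetical sketch of one common post-processing approach: splitting a fixed intervention budget across groups in proportion to group size, so that each group is selected at the same rate. The synthetic data, group names, and proportional-split rule are illustrative assumptions for this article, not the method or data used in the CMU study.

```python
# Hypothetical illustration only: synthetic scores and groups with a
# simple proportional-split rule; not the CMU study's method or data.
import numpy as np

rng = np.random.default_rng(0)
n, budget = 1000, 100  # 1,000 people, 100 intervention slots

group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
# Scores are shifted upward for group A, so a purely score-based
# allocation will over-select that group.
scores = rng.uniform(size=n) + 0.1 * (group == "A")

# Naive allocation: take the top-`budget` scores regardless of group.
naive = np.argsort(scores)[::-1][:budget]

# Adjusted allocation: give each group a share of the budget
# proportional to its size, then take that group's highest scorers.
# By construction this roughly equalizes per-group selection rates
# (rounding may shift the total by a slot or two).
adjusted = []
for g in ["A", "B"]:
    idx = np.flatnonzero(group == g)
    share = round(budget * idx.size / n)
    adjusted.extend(idx[np.argsort(scores[idx])[::-1][:share]])

for name, sel in [("naive", naive), ("adjusted", adjusted)]:
    rate = {g: np.isin(np.flatnonzero(group == g), sel).mean()
            for g in ["A", "B"]}
    print(name, {g: f"{r:.0%}" for g, r in rate.items()})
```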

Carnegie Mellon University researchers now say this trade-off is negligible in practice across a range of policy domains, according to their study published in Nature Machine Intelligence. The study tests putative fairness-accuracy trade-offs in resource allocation problems.

The researchers focused on settings in which the resource in question is scarce and machine learning systems are used to allocate it. Emphasis was placed on the following four areas:

  • Prioritizing limited mental health care outreach based on a person’s risk of returning to prison, with the goal of reducing re-incarceration.
  • Predicting serious safety violations.
  • Modeling the risk of students not graduating from high school on time, to identify those who need support.
  • Helping teachers reach crowdfunding goals for classroom needs.

In each of these settings, models optimized for accuracy alone predicted outcomes effectively, but their intervention recommendations showed considerable disparities. Once the adjustments were applied, however, disparities based on race, age, or income were removed without loss of accuracy.
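
As a rough illustration of that finding, the sketch below extends the earlier example with ground-truth labels: it measures how often each group's truly needy members are reached (per-group recall), searches for a budget split that equalizes that recall, and reports overall precision before and after. Everything here, from the variable names to the synthetic score model, is an assumption made for illustration; it is not the study's data or exact procedure.

```python
# Hypothetical sketch: equalize per-group recall at a fixed budget and
# check what happens to overall precision. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n, budget = 2000, 200

# Two synthetic groups and a binary "truly needs the intervention" label.
group = rng.choice(["young", "old"], size=n)
y = rng.binomial(1, 0.2, size=n)

# Risk scores correlate with need but are shifted upward for one group,
# the kind of pattern that produces disparate recommendations.
scores = 0.5 * y + rng.normal(0.0, 0.25, size=n) + 0.15 * (group == "old")

def recall(sel, g):
    """Fraction of group g's truly needy members that get selected."""
    needy = np.flatnonzero((group == g) & (y == 1))
    return np.isin(needy, sel).mean()

def precision(sel):
    """Fraction of selected people who truly need the intervention."""
    return y[np.asarray(sel)].mean()

# Naive allocation: the top-`budget` scores overall.
naive = np.argsort(scores)[::-1][:budget]

# Adjusted allocation: try every split of the budget between the two
# groups and keep the split whose per-group recalls are closest.
best_sel, best_gap = None, np.inf
for k_young in range(budget + 1):
    sel = []
    for g, k in [("young", k_young), ("old", budget - k_young)]:
        idx = np.flatnonzero(group == g)
        sel.extend(idx[np.argsort(scores[idx])[::-1][:k]])
    gap = abs(recall(sel, "young") - recall(sel, "old"))
    if gap < best_gap:
        best_sel, best_gap = sel, gap

for name, sel in [("naive top-k", naive), ("recall-equalized", best_sel)]:
    print(f"{name}: precision={precision(sel):.2f}, "
          f"recall(young)={recall(sel, 'young'):.2f}, "
          f"recall(old)={recall(sel, 'old'):.2f}")
```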

Taken together, these results suggest that, contrary to what is commonly assumed, achieving fairness requires neither new and complex machine learning methods nor a heavy sacrifice of accuracy. Instead, setting equity goals up front and making design decisions based on need are the first steps toward achieving those goals.

The research aims to show fellow researchers and policymakers that the widespread belief in a fairness-accuracy trade-off does not necessarily hold when systems are deliberately designed to be fair and equitable.

The machine learning, artificial intelligence, and computing communities need to start designing systems that maximize accuracy and fairness together, so that machine learning can be embraced as a decision-making tool.

Paper: https://www.nature.com/articles/s42256-021-00396-x

Reference: https://techxplore.com/news/2021-10-machine-fair-accurate.html

James G. Williams