Artificial Intelligence Researchers Offer ‘Bias Bounties’ to Put Ethical Principles into Practice



Researchers from Google Brain, Intel, OpenAI, and top research labs in the United States and Europe joined forces this week to release what the group calls a toolkit for putting AI ethics principles into practice. The kit, aimed at organizations creating AI models, includes the idea of paying developers to find bias in AI, similar to the bug bounties offered for security software.

This recommendation and other ideas for ensuring that AI is developed with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bug bounty hunting community may be too small to create strong assurances, but developers could still uncover more bias than today’s metrics reveal, the authors say.

“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties,” the paper says. “We focus here on bounties for uncovering bias and safety issues in AI systems as a starting point for analysis and experimentation, but note that bounties for other properties (such as security, privacy, or interpretability) could also be explored.”
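The paper does not prescribe a submission format for such bounties, but a report would presumably demonstrate a measurable disparity that existing evaluations miss. As a minimal, hypothetical sketch (the model outputs, group labels, and the demographic parity metric below are illustrative assumptions, not taken from the paper), a bounty hunter might quantify a gap in positive-prediction rates between two groups:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary model predictions (0/1)
    group:  binary group-membership labels (0/1)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions from a loan-approval model
preds = np.array([1, 1, 0, 0, 1, 0, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(preds, groups))  # prints 0.25
```

A large gap measured on data the developer never evaluated is the kind of finding a bias bounty could reward, much as a proof-of-concept exploit backs up a security bug report.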

The authors of the paper published Wednesday, titled “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,” also recommend “red-teaming” to find flaws or vulnerabilities and connecting independent third-party auditing with government policy to create a regulatory marketplace, among other techniques.

The idea of bias bounties for AI was initially suggested in 2018 by co-author JB Rubinovitz. Google, meanwhile, has said it paid $21 million to security bug researchers, while bug bounty platforms like HackerOne and Bugcrowd have raised additional funding rounds in recent months.

Former DARPA director Regina Dugan has also advocated red-teaming exercises to address ethical challenges in AI systems. And a team led primarily by prominent Google AI ethics researchers released a framework for organizations to use internally to close what they see as an ethical accountability gap.

The document shared this week includes 10 recommendations for putting AI ethics principles into practice. In recent years, more than 80 organizations — including OpenAI, Google, and even the US military — have drafted AI ethics principles, but the authors of this paper argue that such principles are “just a first step for [ensuring] the beneficial societal outcomes of AI” and say that “existing regulations and standards in industry and academia are insufficient to ensure responsible development of AI.”

They also make a number of recommendations:

  • Share AI incidents as a community and optionally create centralized incident databases
  • Establish audit trails to capture information during the development and deployment process for security-critical applications of AI systems
  • Provide open source alternatives to commercial AI systems and increase scrutiny of business models
  • Increase government funding for academic researchers to verify hardware performance
  • Support privacy-centric machine learning techniques developed in recent years, such as federated learning, differential privacy, and encrypted computing (a minimal sketch of differential privacy follows this list)
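On that last point, differential privacy is the most concrete of the three techniques to illustrate: a query over a dataset is answered with calibrated noise so that any one individual’s presence barely changes the result. The sketch below is a generic illustration assuming the standard Laplace mechanism for a counting query; it is not code from, or endorsed by, the paper.

```python
import numpy as np

def private_count(data, predicate, epsilon):
    """Epsilon-differentially-private count query via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is enough to satisfy epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical records: ages of individuals in a sensitive dataset
ages = [23, 35, 41, 29, 62, 18, 50, 44]
print(private_count(ages, lambda age: age >= 40, epsilon=0.5))
```

Smaller values of epsilon add more noise and give stronger privacy, a quantitative guarantee of the kind that lends itself to the verifiable claims the paper calls for.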

The document is the culmination of ideas proposed at a workshop held in April 2019 in San Francisco that included approximately 35 representatives from universities, industry labs, and civil society organizations. The recommendations are meant to address what the authors call a gap in the effective evaluation of claims made by AI practitioners and to provide pathways to “verify AI developers’ commitments to responsible AI development.”

As AI continues to proliferate in business, government, and society, the authors say there has also been a rise in concern, research, and activism around AI, particularly regarding issues such as bias amplification, ethics washing, loss of privacy, digital addiction, facial recognition abuse, misinformation, and job loss.

AI systems have been found to reinforce existing racial and gender biases, leading to issues such as facial recognition bias in policing and inferior health care for millions of African Americans. In a recent example, the US Department of Justice came under fire for using the notoriously racially biased PATTERN risk assessment tool to decide which prisoners are sent home early to reduce prison populations amid COVID-19 concerns.

The authors argue that it is necessary to go beyond non-binding principles that fail to hold developers accountable. Google Brain cofounder Andrew Ng described this problem at NeurIPS last year. Speaking on a panel in December, he said he had read an OECD ethics principle to engineers he works with, who responded that the language would not affect how they do their jobs.

“With the rapid technical advancements in artificial intelligence (AI) and the spread of AI-based applications in recent years, there has been growing concern about how to ensure that the development and deployment of AI are beneficial – not detrimental – to humanity,” the paper reads. “Artificial intelligence has the potential to transform society in ways that are both beneficial and detrimental. Beneficial applications are more likely to be realized, and risks more likely to be avoided, if AI developers earn rather than assume the trust of society and of one another. This report has fleshed out one way to gain such trust, namely the formulation and evaluation of verifiable claims about the development of AI through a variety of mechanisms.”

In other recent AI ethics news, in February the IEEE Standards Association, part of one of the world’s largest organizations for engineers, released a white paper calling for a shift toward “Earth-friendly AI,” the protection of children online, and the exploration of new metrics for measuring societal well-being.


James G. Williams