Artificial intelligence researchers focus on explainability and learning

There is no doubt that AI is making its way into every facet of our lives, whether used by businesses to improve customer experience and efficiency, or by government agencies to save money and identify fraud.

Yet even proponents recognize that some major problems continue to plague AI, notably bias, which is exacerbated by poor explainability. It is for this and other reasons that ethics has increasingly become a major consideration, with governments and big tech companies beginning to roll out guidelines for ethical AI.

Additionally, ML models tend to be highly specialized and lack a general understanding of the world. This has led some experts to argue that we could be approaching the limits of AI, with diminishing returns and systems that lack true understanding.

Researchers around the world are working to address these concerns, and recent reports offer a ray of hope for continued progress.

Strengthening explainability in AI

As noted in a blog post on MIT News, “explanation methods” in ML models often describe how certain features of a model contribute to its final prediction. But existing techniques can be confusing and don’t fully address the concerns of different user groups, say the researchers behind a new paper on the subject.
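To make the idea of an “explanation method” concrete, here is a minimal sketch of feature attribution using permutation importance from scikit-learn. The toy model, synthetic data and feature names are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of a feature-attribution "explanation method": it scores
# how much each input feature contributes to a model's predictions.
# The data and feature names below are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # label driven mostly by the first feature

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much accuracy drops:
# the bigger the drop, the more that feature contributed to predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_a", "feature_b", "feature_c"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```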

“The term ‘interpretable feature’ is not specific or detailed enough to capture the full extent of the impact features have on the usefulness of ML explanations,” wrote the authors, who say that AI model builders should consider interpretable features early in the development process – and not bolt on explainability after the fact.

To allay the concerns of domain experts who often don’t trust AI models because they don’t understand the features that influence predictions, the researchers suggested paying more attention to features that are useful to domain experts who take action in the real world.

Based on the idea that one size does not fit all when it comes to interpretability, the researchers drew on years of fieldwork to develop a taxonomy to help developers build features that are easier for their target audience to understand. They did this by defining properties that make features interpretable for five distinct types of users, from AI experts to the people affected by a machine learning model’s predictions.
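As a rough illustration of what building interpretable features early might look like, the hypothetical snippet below converts raw model inputs (an opaque diagnosis code and a raw sensor reading) into human-readable features a domain expert could act on. The column names, codes and thresholds are made up and are not drawn from the paper’s taxonomy.

```python
# Hypothetical illustration: turning raw model inputs into features a
# domain expert can reason about, rather than explaining raw inputs later.
import pandas as pd

raw = pd.DataFrame({
    "blood_pressure_sys": [118, 152, 139],  # raw sensor reading
    "dx_code": ["I10", "E11", "I10"],       # opaque diagnosis codes
})

# Interpretable versions: readable categories instead of raw values.
interpretable = pd.DataFrame({
    "blood_pressure_band": pd.cut(
        raw["blood_pressure_sys"],
        bins=[0, 120, 140, 300],
        labels=["normal", "elevated", "high"],
    ),
    "diagnosis": raw["dx_code"].map({"I10": "hypertension", "E11": "type 2 diabetes"}),
})
print(interpretable)
```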

To be clear, there is a trade-off between providing interpretable features and model accuracy, although lead author Alexandra Zytek says it is very small in “a lot” of areas. As for risks, one possibility is that a malicious developer could bury a race feature inside a broad, abstract concept such as “socio-economic factors” to hide its effect.

You can read the full article titled “The Need for Interpretable Features: Motivation and Taxonomy” here (pdf).

Learning physics through videos

As reported by New Scientist, an algorithm created by Google DeepMind – which previously built the AI that beat the Go world champion – can now distinguish between videos in which objects obey the laws of physics and those in which they do not.

The DeepMind team was trying to train the AI in “intuitive physics”, the human ability to comprehend how the physical world behaves. As the researchers noted, existing AI systems pale in their understanding of intuitive physics, even compared with very young children.

To bridge this gap between humans and machines, the researchers turned to concepts from developmental psychology. They created an AI called Physics Learning through Auto-encoding and Tracking Objects (PLATO) and trained it to identify objects and their interactions using videos of simulated objects.

According to New Scientist, some of the videos “showed objects obeying the laws of physics, while others depicted absurd events, such as a ball rolling behind a pillar, not emerging from the other side, but then reappearing from behind another pillar farther along its path”.

When asked to predict what would happen next, PLATO was “generally” correct, suggesting that the AI had developed an intuitive grasp of physics.
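The evaluation rests on a “violation-of-expectation” idea: a model that predicts upcoming frames should be more surprised (show higher prediction error) at physically impossible clips than at possible ones. The sketch below captures that logic only in outline; `model.predict_next` and the video arrays are hypothetical stand-ins, not DeepMind’s actual code.

```python
# Hedged sketch of a violation-of-expectation probe: prediction error
# acts as a "surprise" signal, and impossible clips should be more
# surprising than possible ones. The model interface is hypothetical.
import numpy as np

def surprise(model, video: np.ndarray) -> float:
    """Mean prediction error between predicted and actual frames."""
    errors = []
    for t in range(len(video) - 1):
        predicted = model.predict_next(video[: t + 1])  # hypothetical interface
        errors.append(np.mean((predicted - video[t + 1]) ** 2))
    return float(np.mean(errors))

def notices_violation(model, possible: np.ndarray, impossible: np.ndarray) -> bool:
    # The model "passes" the probe if the impossible clip is more surprising.
    return surprise(model, impossible) > surprise(model, possible)
```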

Importantly, the code for PLATO is unpublished, with the researchers noting that their “implementation of PLATO is not externally viable.” However, they remain open to being contacted to clarify questions or implementation details.

The paper is available here.

Paul Mah is the editor of DSAITrends. A former system administrator, programmer and professor of computer science, he enjoys writing code and prose. You can reach him at [email protected].

Photo credit: iStockphoto/Black_Kira
