Researchers find that public trust in AI varies widely by application

Prompted by the growing importance of artificial intelligence (AI) in society, researchers from the University of Tokyo investigated public attitudes towards the ethics of AI. Their findings quantify how different demographic and ethical scenarios affect these attitudes. As part of this study, the team developed an octagonal visual metric, analogous to a rating system, which could be useful for AI researchers who want to know how their work may be perceived by the public.

Many people believe that the rapid development of technology often outpaces that of the social structures that implicitly guide and regulate it, such as law or ethics. AI is a particular example of this, because it has become so pervasive in the daily lives of so many people, seemingly overnight. This proliferation, coupled with the relative complexity of AI compared to more familiar technology, can breed fear and distrust of this key element of modern life. Who distrusts AI, and in what ways, are questions that would be useful for developers and regulators of AI technology to answer, but such questions are not easy to quantify.

Researchers from the University of Tokyo, led by Professor Hiromi Yokoyama of the Kavli Institute for the Physics and Mathematics of the Universe, set out to quantify public attitudes towards ethical issues related to AI. Through the analysis of surveys, the team sought to answer two questions in particular: how attitudes change depending on the scenario presented to a respondent, and how the demographics of the respondents themselves affect their attitudes.

Ethics cannot truly be quantified, so to measure attitudes toward the ethics of AI, the team used eight themes common to many AI applications that raise ethical questions: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. These, which the group called “octagon measurements,” were inspired by a 2020 paper by Harvard University researcher Jessica Fjeld and her team.

Survey respondents were given a series of four scenarios to judge against these eight criteria. Each scenario looked at a different application of AI: AI-generated art, customer service AI, autonomous weapons and crime prediction.
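As a rough illustration of how such octagon measurements might be visualized, the Python sketch below plots eight theme scores for a single scenario as a radar chart. The scores and the use of matplotlib are assumptions made for this example, not the study's actual data or tooling.

    # Minimal sketch: plot one scenario's "octagon measurement" as a radar chart.
    # The theme scores below are illustrative placeholders, not study data.
    import numpy as np
    import matplotlib.pyplot as plt

    themes = [
        "Privacy", "Accountability", "Safety and security",
        "Transparency and explainability", "Fairness and non-discrimination",
        "Human control of technology", "Professional responsibility",
        "Promotion of human values",
    ]
    scores = [0.7, 0.6, 0.8, 0.5, 0.6, 0.9, 0.5, 0.7]  # hypothetical mean ratings, 0-1 scale

    # One angle per theme; repeat the first point to close the octagon.
    angles = np.linspace(0, 2 * np.pi, len(themes), endpoint=False).tolist()
    angles += angles[:1]
    values = scores + scores[:1]

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    ax.plot(angles, values, linewidth=1.5)
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(themes, fontsize=7)
    ax.set_ylim(0, 1)
    ax.set_title("Octagon measurement (illustrative)")
    plt.show()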

Survey respondents also provided researchers with information about themselves such as age, gender, occupation and level of education, as well as a measure of their level of interest in science and technology through a series of supplementary questions. This information was essential for researchers to see which characteristics of people would correspond to certain attitudes.
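To sketch how demographic characteristics could be linked to attitudes, the example below averages hypothetical attitude scores per demographic slice with pandas. All column names and values are invented for illustration; the study's actual analysis and data are not shown here.

    # Minimal sketch: mean attitude score per demographic group and scenario.
    # The table is fabricated for illustration; it is not the survey data.
    import pandas as pd

    df = pd.DataFrame({
        "age_group": ["18-29", "18-29", "60+", "60+"],
        "gender":    ["F", "M", "F", "M"],
        "scenario":  ["autonomous_weapons"] * 4,
        "score":     [0.35, 0.45, 0.25, 0.40],  # hypothetical ethics ratings
    })

    # Mean attitude per demographic slice, per scenario.
    summary = (
        df.groupby(["scenario", "gender", "age_group"])["score"]
          .mean()
          .reset_index()
    )
    print(summary)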

“Previous studies have shown that risk is perceived more negatively by women, older people and those with more knowledge of the subject. I was expecting to see something different in this survey given how commonplace AI has become, but surprisingly we saw similar trends here,” said Yokoyama. “Something we saw that was expected, however, was how the different scenarios were perceived, with the idea of AI weapons being met with far more skepticism than the other three scenarios.”

The team hopes the findings could lead to the creation of some sort of universal scale for measuring and comparing ethical issues around AI. This survey was limited to Japan, but the team has already started collecting data in several other countries.

“With a universal scale, researchers, developers and regulators could better measure the acceptance of specific AI applications or their impacts and act accordingly,” said Assistant Professor Tilman Hartwig. “One thing I discovered while developing the scenarios and the questionnaire is that many topics within AI require significant explanation, more than we realized. This goes to show there is a huge gap between perception and reality when it comes to AI.”

Story source:

Materials provided by the University of Tokyo. Note: Content may be edited for style and length.
