Is your face gay? Conservative? Criminal? AI researchers are asking the wrong questions

An illustration of facial recognition. Credit: Gerd Altmann/Pixabay.

Artificial intelligence (AI) offers the promise of a world made more convenient by algorithms designed to mimic human cognitive abilities. In the story we tell ourselves about progress, AI systems will perform tasks that improve our lives and take over more of our work, so we can spend our time on more enjoyable or productive pursuits. The real story of AI, however, is told through the questions researchers use it to answer. And sadly, some researchers aren’t pursuing ideas that could advance a utopian vision of the future. Instead, they ask questions that risk re-creating misguided human thinking from the past.

Disturbingly, some modern AI applications are leaning into physiognomy, a set of pseudoscientific ideas that first emerged thousands of years ago with the ancient Greeks. In antiquity, the Greek mathematician Pythagoras based his decisions about whether to accept students on whether they “looked” gifted. For the philosopher Aristotle, bulbous noses denoted an insensitive person and round faces signaled courage. Some ancient Greeks, at least, believed that if someone’s facial features resembled a lion’s, that person would have lion traits such as courage, strength, or dignity.

During a peak period of popularity in the 19th century, the field of physiognomy became more sinister and racist than its eccentric manifestation in the time of the ancient Greeks. In his 1852 book Comparative Physiognomy, or Resemblances Between Men and Animals, the physician James W. Redfield made the case for the usefulness of physiognomy by devoting each chapter to likening an animal to a region or race of people. In one chapter, he compared the character of Middle Easterners to camels. In another, he compared the Chinese to pigs. Redfield and other authors of the time used supposed science to reinforce their biases, and for a time their claims were accepted as science.

The table of contents of James W. Redfield’s 1852 book on physiognomy.

There is such a thing as the wrong question. These days, AI researchers are asking troubling questions of the technology, trying to use it to tie facial attributes to fundamental character. Even if they aren’t saying “yippee, look what AI can do” (in fact, some say they are trying to highlight the risks), by publishing in scientific journals they risk lending credibility to the very idea of using AI in problematic, physiognomic ways. Recent research has attempted to show that political orientation, sexuality, and criminality can be inferred from images of faces.

Political orientation. In a 2021 article in Nature Scientific Reports, Stanford researcher Michal Kosinski found that, using open-source code and publicly available facial data and images, facial recognition technology could accurately judge a person’s political orientation 68% of the time, even when controlling for demographic factors. In this research, the core algorithm learned the average face of conservatives and the average face of liberals and then predicted the political leanings of unfamiliar faces by comparing them to those reference images. Kosinski wrote that his findings about AI capabilities have serious implications: “The privacy threats posed by facial recognition technology are, in many respects, unprecedented.”
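As a rough illustration of that kind of nearest-average approach, the sketch below classifies a face by comparing its embedding to each group’s average embedding. It is a minimal, hypothetical reconstruction, not the study’s actual code, and it assumes face embeddings have already been extracted by some pretrained face-recognition model; the variable names and usage are invented for illustration.

```python
# Minimal sketch of a nearest-class-mean classifier over face embeddings.
# Illustrative only; not the code used in the published study.
import numpy as np

def class_means(embeddings: np.ndarray, labels: np.ndarray) -> dict:
    """Average the embedding vectors for each label (e.g., 'liberal', 'conservative')."""
    return {label: embeddings[labels == label].mean(axis=0)
            for label in np.unique(labels)}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict(embedding: np.ndarray, means: dict) -> str:
    """Assign the label whose average embedding is most similar to the input face."""
    return max(means, key=lambda label: cosine(embedding, means[label]))

# Hypothetical usage: X is an (n_faces, d) array of embeddings from a pretrained
# face-recognition network; y holds the corresponding self-reported labels.
# means = class_means(X, y)
# print(predict(new_face_embedding, means))
```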

Although the questions underlying this line of inquiry may not immediately raise alarm bells, the premise fits squarely within physiognomy: predicting character traits from facial features.

Sexuality. In 2017, Kosinski published another study showing that a neural network trained on facial images could be used to distinguish between gay and straight people. Surprisingly, experiments with the system showed 81% accuracy for men and 74% for women. By those results, the machine learning model performed well, but what was the value of the question the study posed? Too often, inferring people’s sexual orientation serves purposes of discrimination or criminal prosecution. Indeed, more than 50 countries still have laws against same-sex sexual acts on the books, with punishments ranging in severity from imprisonment to death. In cases like these, improving the accuracy of AI tools only amplifies the damage that can result. That’s not even to mention cases where the tool’s predictions are incorrect. According to the BBC, organizations representing the LGBTQ community were strongly critical of Kosinski and his colleague’s research, calling it “junk science.” As with the study on political leanings, the authors argued that their work highlights the risk that facial recognition technology could be misused.

Crime. AI, like humans, attempts to predict future events based on historical data. And just like humans, if the historical data presented to an AI predictor is biased, the resulting predictions will carry artifacts of those biases. The starkest example is the attempt to infer criminality from facial images. In the United States, many studies have analyzed racial disparities in the justice system. These studies consistently find that racial minorities are far more likely to be arrested and incarcerated for crimes, even when white people commit those crimes at the same rate. If an AI were trained to predict crime from historical data collected in the United States, it would invariably find that being male and having darker skin is the best predictor of crime. It would learn the same biases that have historically plagued the justice system.
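A toy simulation, using numbers invented purely for illustration, shows how this happens: when two groups offend at identical rates but one group’s offenses are recorded far more often, a predictor trained on the resulting records simply learns the skewed recording rates and reports one group as higher “risk.”

```python
# Toy simulation: a model trained on biased arrest records reproduces the bias.
# All rates below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)        # 0 = majority group, 1 = minority group
offense = rng.random(n) < 0.05            # true offending rate is identical for both groups

# Biased data collection: identical offenses are far more likely to lead to an
# arrest record for the minority group than for the majority group.
arrest_prob = np.where(group == 1, 0.60, 0.15)
arrested = offense & (rng.random(n) < arrest_prob)

# A "predictor" whose only feature is group membership can do no better than
# learning each group's recorded arrest rate -- which is exactly the bias.
for g in (0, 1):
    print(f"group {g}: recorded arrest rate = {arrested[group == g].mean():.3%}")
# Despite equal offending rates, the learned "risk" for group 1 is roughly 4x higher.
```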

In 2020, as researchers from Harrisburg University were preparing to publish a paper on the use of facial images to predict crime, hundreds of academics signed a letter to Springer Publishing urging the company to rescind its offer to publish. Along with methodological concerns, their criticisms focused on the ethical dubiousness of the research and of the question it attempted to answer. In the end, the company announced that it would not go ahead with publishing the study. In this case, beyond the obvious dangers of trying to predict crime before it happens, a confounding factor undermines the study’s assumptions: any training data drawn from existing criminal records will produce an AI model that reflects the biases of current judicial systems.

The algorithmic tools behind these studies are of course more advanced than the methods of Aristotle, but the questions the studies pose are just as problematic as those of the ancient Greeks. By asking whether sexuality, criminality, or political views can be inferred from a person’s face, and by showing that AI systems can indeed produce predictions, practitioners lend credibility to the idea of using algorithms to perpetuate prejudice. If governments or others were to follow these lines of research and deploy algorithms as the researchers did in these studies, the likely result would be facial recognition used to discriminate against or oppress groups of people.

The road to the AI-enabled utopia that optimists have predicted looks increasingly rocky, dotted with avenues of research that scientists shouldn’t pursue but do, even when they claim they are only pointing out the risks of the technology. AI has the same fundamental limitation as any tool: flawed AI systems, like flawed humans, can be made to answer questions that an ethical society would avoid asking. It’s up to us to fight the misapplication of AI that serves to automate bias.

James G. Williams