Facebook AI researchers created a ‘fashion map’ with your social media photos

A person in a gray coat is pictured on the street during New York Fashion Week 2022

Jeremy Moeller/Getty Images

Artificial intelligence researchers, some of whom are affiliated with Facebook parent company Meta and Cornell University, used more than 7 million geotagged public photos from the social media sites Instagram and Flickr to construct what they call an “underground fashion map” spanning 37 cities. The map can reveal groups of people within a city, including the most “trendy” or “progressive” areas, and draws on an Amazon-funded AI tool called GeoStyle that predicts fashion trends, according to a press release about the research.
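The paper's code and data pipeline are not public, so the sketch below is only a rough illustration of the general technique described above, not the researchers' actual method: each photo is reduced to numeric “style” features, combined with its geotag, and clustered into candidate fashion neighborhoods. Every name, number, and weighting in it is a hypothetical stand-in.

```python
# A minimal, hypothetical sketch of the general approach described in the
# article, NOT the researchers' actual code: cluster geotagged photos by
# visual "style" features to surface neighborhood-level fashion groupings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Stand-in data: one (latitude, longitude) pair plus a style-embedding
# vector per photo, as a feature extractor like GeoStyle's might produce.
n_photos, embed_dim = 1_000, 16
coords = rng.uniform(low=[40.6, -74.1], high=[40.9, -73.8], size=(n_photos, 2))
style = rng.normal(size=(n_photos, embed_dim))

# Joint features: where a photo was taken and what style it shows.
# How to weight location against style is a real design choice; the
# weight of 1.0 used here is arbitrary.
features = np.hstack([coords * 1.0, style])

# Group the photos into candidate "fashion neighborhoods".
labels = KMeans(n_clusters=8, random_state=0).fit_predict(features)

for k in range(8):
    members = coords[labels == k]
    print(f"cluster {k}: {len(members)} photos, centroid {members.mean(axis=0).round(3)}")
```

In a real system, the style features would come from a trained vision model rather than random numbers, and the balance between location and style would be tuned rather than fixed.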

“A person unfamiliar with a city could find out which neighborhoods might suit them, for example, to satisfy their interests in outdoor activities, shopping, or tourist areas,” the researchers wrote in a recently published paper produced during an internship with Facebook AI Research. They also suggest that anthropologists could use the maps to infer trends in a city over time.

The project’s affiliation with Facebook and Amazon raises bigger questions about the unexpected ways tech companies use personal data, often without explicitly notifying users.

Tamara Berg, a co-author of the paper and a director at Meta AI, Facebook’s artificial intelligence research center, did not respond to Motherboard’s questions about Facebook’s potential use of the data, or about whether Instagram and Flickr users knew their photos were being used to build fashion maps.

When contacted, a spokesperson for Cornell University initially said that “the researchers are not affiliated with Facebook and do not work with it,” but later acknowledged that several people involved in the project were Facebook employees, including a researcher who was a Facebook intern at the start of the project.

Study co-author Kavita Bala, a professor and chair of computer science at Cornell University, has previously developed products used by Facebook. In 2020, Facebook acquired GrokStyle, an AI-based product-recognition startup she co-founded.

The obvious use case for AI-assisted social mapping would be for Facebook and other companies to further refine demographically and geographically targeted ads, said Jathan Sadowski, a researcher and professor who studies the social dimensions of artificial intelligence, in an email to Motherboard.

But the potential implications of social mapping research go beyond targeted advertising for new clothes, he said. “My immediate concern is with the use of such data analytics by the finance, insurance and real estate industries,” Sadowski told Motherboard. “These are industries that have a long history of using ‘alternative data’ to inform decision-making and justify discriminatory decisions, often as proxies for other protected categories like race, gender, and class.”

Facebook’s ad targeting program has been repeatedly criticized for discriminatory practices. For years, ads on Facebook’s platform allowed businesses to exclude certain gender and racial groups from learning about housing and employment opportunities, according to lawsuits filed by the ACLU, the National Fair Housing Alliance, and other groups.

“As more and more people turn to the internet to find jobs, apartments and loans, there is a real risk that ad targeting will reproduce and even exacerbate existing racial and gender biases in society,” the ACLU wrote in a blog post on the lawsuit settlement. “Whether you call it weblining, algorithmic discrimination, or automated inequality, it is now clear that the rise of big data and the highly personalized marketing it enables has led to these new forms of discrimination.”

While Facebook claims to have stopped explicitly allowing housing, jobs, and credit advertisers to target users based on racial categories, researchers at the Brookings Institution found that Facebook’s algorithms were still discriminating on the basis of race and ethnicity as recently as last year.

In September 2020, the Trump administration’s Department of Housing and Urban Development (HUD) released a final rule that relaxed the Fair Housing Act’s (FHA) discriminatory effects standard, a change critics say legally justified discriminatory algorithmic practices. In June 2021, HUD proposed restoring the standard. Insurance companies, however, have long practiced discrimination despite regulation, by hiding risk classifications behind statistics and mathematical models.

Sadowski said he could easily imagine fintech or insurance-tech companies using fashion map data to conclude that a particular fashion trend popular among young Black people correlates with financial risk or declining real estate values.

“But unlike redlining, the data source and the method of analysis have been whitewashed by AI, which adds a few layers of proxy discrimination,” he said. “That grants plausible deniability – or an easy excuse – to those who need it.”

James G. Williams