AI researchers target COVID-19 “infodemic”

As the COVID-19 pandemic grew, the World Health Organization and the United Nations released a stern warning: An “infodemic” of online rumors and fake news about COVID-19 was hampering public health efforts and causing unnecessary deaths. “Misinformation costs lives,” the organizations warned. “Without the proper trust and correct information… the virus will continue to thrive.”

In an effort to solve this problem, researchers at Stevens Institute of Technology are developing a scalable solution: an AI tool capable of detecting “fake news” relating to COVID-19 and automatically flagging misleading reports and posts on social networks. “During the pandemic, things have become incredibly polarized,” explained KP Subbalakshmi, AI expert at the Stevens Institute for Artificial Intelligence and professor of electrical and computer engineering. “We urgently need new tools to help people find information they can trust.”

To develop an algorithm capable of detecting COVID-19 misinformation, Dr. Subbalakshmi first worked with Stevens graduate students Mingxuan Chen and Xingqiao Chu to collect approximately 2,600 news articles about COVID-19 vaccines, drawn from 80 different publishers over the course of 15 months. The team then cross-checked the articles against reputable media-rating websites and labeled each one as either credible or untrustworthy.
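For illustration only, a labeling step of this kind might look like the following Python sketch; the file names, column names, and credibility threshold here are invented stand-ins, not the team's actual pipeline:

```python
# Hypothetical sketch: join collected articles against publisher
# credibility ratings and assign a binary label. File names, columns,
# and the threshold of 60 are illustrative assumptions.
import pandas as pd

articles = pd.read_csv("covid_vaccine_articles.csv")  # url, publisher, text
ratings = pd.read_csv("publisher_ratings.csv")        # publisher, credibility (0-100)

labeled = articles.merge(ratings, on="publisher", how="inner")
labeled["label"] = (labeled["credibility"] >= 60).map(
    {True: "credible", False: "untrustworthy"}
)
labeled.to_csv("labeled_articles.csv", index=False)
```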

Next, the team gathered more than 24,000 Twitter posts that mentioned the indexed stories and developed a “stance detection” algorithm capable of determining whether a tweet supported or rejected the article in question. “In the past, researchers assumed that if you tweeted about a news article, you agreed with its position. But that’s not necessarily the case – you might say ‘Can you believe this nonsense!?’” said Dr. Subbalakshmi. “Using stance detection gives us a much richer perspective and helps us detect fake news much more effectively.”
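As a rough illustration of the idea (not the team's actual model), stance can be approximated by scoring a tweet against an article's headline with an off-the-shelf natural-language-inference model:

```python
# Illustrative stance-detection sketch, NOT the Stevens model: score a
# (tweet, headline) pair with a public zero-shot NLI model and map the
# top label to supports/rejects. The model choice and label templates
# are assumptions made for this example.
from transformers import pipeline

nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def stance(tweet: str, headline: str) -> str:
    labels = [f"agrees with: {headline}", f"disagrees with: {headline}"]
    result = nli(tweet, candidate_labels=labels)
    return "supports" if result["labels"][0].startswith("agrees") else "rejects"

print(stance("Can you believe this nonsense!?",
             "New report questions vaccine safety"))
```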

Using their labeled datasets, the Stevens team trained and tested a new AI architecture designed to detect the subtle linguistic cues that distinguish real reports from fake news. It’s a powerful approach because it doesn’t require the AI system to check the factual content of a text or track the evolution of public health messages; instead, the algorithm detects stylistic fingerprints characteristic of trustworthy or untrustworthy texts.

“It’s possible to take any written sentence and turn it into a data point – a vector in N-dimensional space – that represents the author’s use of language,” Dr. Subbalakshmi explained. “Our algorithm looks at these data points to decide if an article is more or less likely to be fake news.”
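In code, that sentence-to-vector step might look like the following sketch, which uses the open-source sentence-transformers library purely as an illustration; the article does not say which embedding method the team actually used:

```python
# Minimal sketch of turning a sentence into a point in N-dimensional
# space. The library and model are illustrative choices, not the
# team's documented method.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # maps text to 384-dim vectors
vec = model.encode("The vaccine was developed in record time.")
print(vec.shape)  # (384,)
```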

More pompous or emotional language, for example, often correlates with false claims, Dr. Subbalakshmi explained. Other factors, such as the time of publication, the length of an article and even the number of authors, can also be used by an AI algorithm to assess an article’s reliability. These statistics are included in the team’s newly curated dataset. Their core architecture is able to detect fake news with around 88% accuracy, significantly better than most previous AI tools for detecting fake news.
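A toy sketch of the general style-based idea follows; it is emphatically not the Stevens architecture, and the training sentences are invented for illustration:

```python
# Toy illustration of style-based classification: learn surface cues
# (word choice, exclamation-heavy phrasing) from labeled text. A real
# system would also fold in metadata features like publication time,
# article length, and author count.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "BREAKING!!! Secret cure DESTROYS the virus overnight, doctors stunned!",
    "They don't want you to know the SHOCKING truth about vaccines!",
    "Share before it's deleted!!! The REAL numbers they are hiding!",
    "Regulators released interim trial data for the updated vaccine.",
    "Researchers report a modest decline in hospitalizations this week.",
    "The health ministry published updated dosing guidance for adults.",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = untrustworthy style, 1 = credible style

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["MIRACLE remedy CURES everything, media silent!!!"]))  # likely [0]
```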

It’s an impressive breakthrough, especially using data that was collected and analyzed in near real time, Dr. Subbalakshmi said. Yet there is still a long way to go to create tools powerful and rigorous enough to deploy in the real world. “We have created a very accurate algorithm to detect misinformation,” Dr. Subbalakshmi said. “But our real contribution in this work is the dataset itself. We hope other researchers will go ahead and use it to help them better understand fake news.”

A key area for further research: using the images and videos embedded in the indexed news articles and social media posts to improve fake-news detection. “So far we have focused on the text,” Dr. Subbalakshmi said. “But news and tweets contain all kinds of media, and we have to digest all of that to figure out what’s fake and what’s not.”

Working with short texts such as social media posts presents a challenge, but Dr. Subbalakshmi’s team has already developed AI tools that can identify misleading tweets and tweets that spread fake news and conspiracy theories. Combining bot-detection and linguistic-analysis algorithms could enable the creation of more powerful and scalable AI tools, Dr. Subbalakshmi said.

With the Surgeon General now calling for the development of AI tools to help combat COVID-19 misinformation, such solutions are urgently needed. Yet, warned Dr. Subbalakshmi, there is still a long way to go. Fake news is insidious, she explained, and the people and groups who spread false rumors online work hard to avoid detection and develop their own tools.

“Every time we take a step forward, bad actors are able to learn from our methods and build something even more sophisticated,” she said. “It’s a constant battle – the trick is just to stay a few steps ahead.”

