In an AI vs. AI battle, researchers brace for the next wave of deepfake propaganda

An investigative reporter receives a video from an anonymous whistleblower. It shows a presidential candidate admitting to illegal activities. But is the video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn the coming election upside down. But the journalist runs the video through a specialized tool, which tells her that the video is not what it seems. In fact, it’s a “deepfake,” a video made using artificial intelligence with deep learning.

Journalists around the world could soon use a tool like this. In a few years, everyone might even use such a tool to weed out fake content from their social media feeds.

As researchers who study deepfake detection and are developing a tool for journalists, we see a future for these tools. They will not solve all of our problems, however, and they will be only one part of the arsenal in the broader fight against misinformation.

The problem with deepfakes

Most people know that you can’t believe everything you see. Over the past two decades, savvy consumers have grown accustomed to seeing images manipulated with photo editing software. Videos, however, are another story. Hollywood directors can spend millions of dollars on special effects to create a realistic scene. But using deepfakes, hobbyists with a few thousand dollars’ worth of computer equipment and a few weeks to spare can make something almost as true to life.

Deepfakes can put people in movie scenes they were never in – think Tom Cruise playing Iron Man – which makes for entertaining videos. Unfortunately, the same technology is also used to create pornography without the consent of the people depicted. Those people, almost all of them women, have so far been the biggest victims when deepfake technology is misused.

Deepfakes can also be used to create videos of political leaders saying things they never said. The Belgian Socialist Party released a low-quality non-deepfake fake video of President Trump insulting Belgium, which got enough of a reaction to show the potential risks of higher-quality deepfakes.

Hany Farid of the University of California, Berkeley explains how deepfakes are made.

Perhaps scariest of all, deepfakes can be used to cast doubt on the content of real videos, by suggesting that they could be deepfakes.

Given these risks, it would be extremely valuable to be able to detect deepfakes and label them clearly. This would keep fake videos from deceiving the public and allow real videos to be accepted as authentic.

Spotting the fakes

Deepfake detection as a field of research began a little more than three years ago. Early work focused on detecting visible problems in the videos, such as deepfakes that didn’t blink. Over time, however, the fakes have gotten better at mimicking real videos and have become harder for both people and detection tools to spot.

There are two broad categories of deepfake detection research. The first involves watching the behavior of the people in the videos. Suppose you have many videos of someone famous, such as President Obama. Artificial intelligence can use this footage to learn his patterns, from his hand gestures to his pauses in speech. It can then watch a deepfake of him and notice where it does not match those patterns. This approach has the advantage of working even when the video quality itself is essentially perfect.
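
The article does not spell out a specific algorithm, but the general idea can be sketched as one-class anomaly detection: learn a person’s normal behavioral statistics from genuine footage, then flag segments of a suspect video that fall outside them. In the minimal Python sketch below, extract_behavior_features is a hypothetical placeholder (here it just returns random vectors so the script runs end to end); a real system would measure things like head pose, gesture timing and speech pauses.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def extract_behavior_features(video_path):
    # Hypothetical placeholder: a real extractor would return one vector per
    # video segment describing behavior (head pose, gesture rate, pause
    # lengths, ...). Random numbers are used here only so the sketch runs.
    rng = np.random.default_rng(abs(hash(video_path)) % 2**32)
    return rng.normal(size=(20, 8))  # 20 segments x 8 behavioral features

# 1. Learn the person's normal patterns from genuine videos of them.
genuine = np.vstack([extract_behavior_features(p)
                     for p in ["speech_01.mp4", "speech_02.mp4"]])
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(genuine)

# 2. Score a suspect video: segments that fall outside the learned
#    patterns are flagged as inconsistent with the real person's behavior.
suspect = extract_behavior_features("suspect_clip.mp4")
flags = model.predict(suspect)  # -1 = anomalous, +1 = consistent
print(f"{(flags == -1).mean():.0%} of segments look inconsistent")
```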

Aaron Lawson of SRI International describes an approach to detecting deepfakes.

Other researchers, including our team, have focused on differences that all deepfakes share compared with real videos. Deepfake videos are often created by merging individually generated frames into a video. With this in mind, our team’s methods extract essential data from the faces in individual frames of a video and then track them through sets of consecutive frames. This allows us to detect inconsistencies in the flow of information from one frame to the next. We use a similar approach for our fake audio detection system as well.
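
As a hedged illustration rather than the team’s actual pipeline, the sketch below reads a video frame by frame and looks for abrupt frame-to-frame jumps of the kind that splicing individually generated faces can leave behind. embed_face is a hypothetical stand-in: here it simply downscales and normalizes each frame, whereas a real detector would crop the face and apply a learned feature extractor.

```python
import cv2          # pip install opencv-python
import numpy as np

def embed_face(frame):
    # Hypothetical stand-in: downscale the frame and normalize it. A real
    # detector would locate the face and use a learned feature extractor.
    small = cv2.resize(frame, (16, 16)).astype(np.float32).ravel()
    return small / (np.linalg.norm(small) + 1e-8)

def frame_consistency_scores(video_path):
    cap = cv2.VideoCapture(video_path)
    embeddings = []
    ok, frame = cap.read()
    while ok:
        embeddings.append(embed_face(frame))
        ok, frame = cap.read()
    cap.release()
    emb = np.stack(embeddings)
    # Cosine distance between consecutive frames; unusual spikes suggest
    # inconsistencies in how information flows from frame to frame.
    return 1.0 - np.sum(emb[1:] * emb[:-1], axis=1)

scores = frame_consistency_scores("suspect_clip.mp4")
threshold = scores.mean() + 3 * scores.std()
print("Suspicious transitions before frames:",
      np.flatnonzero(scores > threshold) + 1)
```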

These subtle details are hard for people to see, but they show that deepfakes aren’t quite perfect yet. Detectors like these can work for anyone, not just a few world leaders. In the end, both types of deepfake detectors may be needed.

Recent detection systems work very well on videos specifically collected to evaluate the tools. Unfortunately, even the best models perform poorly on videos found online. Making these tools more robust and useful is the key next step.

Who should use deepfake detectors?

Ideally, a deepfake verification tool should be available to everyone. However, this technology is still in its infancy. Researchers need to improve the tools and protect them from hackers before releasing them widely.

At the same time, however, the tools for creating deepfakes are available to anyone who wants to fool the public. Sitting on the sidelines is not an option. For our team, the right balance was to work with journalists, because they are the first line of defense against the spread of misinformation.

Before publishing their stories, journalists have to verify information. They already have proven methods, like checking with sources and having more than one person verify key facts. So by putting the tool in their hands, we give them more information, knowing that they won’t rely on the technology alone, because it can make mistakes.

Can detectors win the arms race?

It is encouraging to see teams from Facebook and Microsoft investing in technology to understand and detect deepfakes. This field needs more research to keep pace with the speed of advances in deepfake technology.

Journalists and social media platforms also need to figure out how best to warn people about deepfakes when they are detected. Research has shown that people remember the lie, but not the fact that it was a lie. Will the same be true for fake videos? Simply putting “deepfake” in the headline might not be enough to counter some kinds of misinformation.

Deepfakes are here to stay. Managing misinformation and protecting the public will be more challenging than ever as artificial intelligence becomes more powerful. We are part of a growing research community taking on this threat, and detection is only the first step.

James G. Williams