In mid-March, as the Russian invasion of Ukraine entered its third week, an unusual video began making the rounds on social media, and even aired on the Ukraine 24 TV channel after hackers compromised the broadcaster.
The video appears to show Ukrainian President Volodymyr Zelenskyy, his head moving while his body remains largely still, calling on his country's citizens to stop fighting Russian soldiers and surrender their weapons. He had already fled Kyiv, the video claimed.
Except those weren't the words of the real Zelenskyy. The video was a “deepfake,” content created using artificial intelligence: people train computers to imitate real people in order to produce what appears to be authentic video. Shortly after the deepfake aired, it was debunked by Zelenskyy himself, removed from prominent platforms such as Facebook and YouTube, and ridiculed by Ukrainians for its poor quality, according to the Atlantic Council.
However, just because the video was quickly discredited doesn't mean it didn't cause harm. In an increasingly politically polarized world, in which media consumers may believe information that reinforces their biases, regardless of the apparent legitimacy of the content, deepfakes pose a significant threat, warns Northeastern University computer science and philosophy professor Don Fallis.
“It's interesting in that it wasn't a particularly high-quality deepfake. There were all kinds of indicators where the individual consumer of information might think, ‘This doesn't seem right,'” Fallis says of the Zelenskyy deepfake. “That said, with all of these sources of misinformation, no matter how reliable the information seems, if you're strongly inclined toward a certain point of view and you receive information that confirms that preexisting bias, the source of that information, and the veracity of that information, may not matter.”
In his research, Fallis, who studies epistemology, or the theory of knowledge, seeks to place contemporary issues such as deepfakes and fake news in the broader philosophical context of how people come to acquire true knowledge, and how they come to absorb disinformation.
In 2018, he and Northeastern philosophy professor Kay Mathiesen wrote an article titled “Fake news is counterfeit news,” which examined the threat that fake news poses to democracy and knowledge and attempted to define the concept. Two years later, he published “The Epistemic Threat of Deepfakes,” in which he concluded that deepfakes can lead people to form false beliefs, undermine the justification of true beliefs, and prevent people from acquiring true beliefs.
Fallis argues that both fake news and deepfakes have the negative effect of delegitimizing real news. He says they reduce the amount of real information available, reduce consumer trust in authentic media and put an additional burden on fact-checkers to authenticate the vast amount of content online.
“In the case of fake news, you create this online presence that's supposed to look like a legitimate news site,” Fallis says. “Similarly, in the case of deepfakes, you create video and audio that is supposed to look like legitimate media.”
Combined with tools that mass-harvest individual users' personal information, deepfakes can also be used maliciously to target large audiences and manipulate them by playing on their ingrained biases, Fallis says.
“It might not just be this killer technology,” he says. “It's not like deepfakes are going to be the only thing that will drive us off the cliff. It's a whole range of potentially problematic technology.”
Increased political divisiveness has a similar effect on how people interpret fake news, with users seeking out and accepting information that fits their preconceived notions, notes Northeastern political science and computer science professor David Lazer. It is less clear, however, to what extent people suspend their critical thinking when they encounter media that reinforces their worldview.
“Certainly, we've seen an increased polarization in public opinion, and that's clearly one of the factors that may be at play with the spread of misinformation,” Lazer says. “It is quite plausible that political polarization and the spread of misinformation go hand in hand, but that's an area where more research is needed.”
Lazer directs Northeastern's Lazer Lab, which conducts research on social influence and networks, and his studies focus primarily on the spread of misinformation on social media. In 2019, he co-authored a study on the prevalence of fake news on Twitter during the 2016 presidential election cycle.
Deepfake technology is also “quite relevant” to his studies, Lazer says, but more research is needed into the different types of disinformation, how it spreads and its psychological impact on media consumers. The rise of political polarization and its impact on media consumption is also a high-priority area of study, he adds.
“We can definitely say that over the last 40 years there has been increased polarization of many kinds, and that's worrying,” Lazer says.
Beyond the problem of users failing to question the deepfakes they encounter when the content confirms their existing worldview, the technology raises other serious concerns.
One of the most troubling uses of the technology is manipulating a person's likeness, usually a woman's, into a sex video, making it appear that the targeted person is engaging in sexual activity, says Marc Berkman, executive director of the Organization for Social Media Safety, a nonprofit dedicated to making social media safe through advocacy and education.
Moreover, as in the case of Zelenskyy's deepfake, the world is witnessing the political impact of the technology, says Berkman. Deepfakes can potentially influence democratic elections and be used as propaganda to sow division and doubt, he says.
Fallis and Berkman emphasize the importance of users cultivating critical thinking skills online. One way people can protect themselves from deepfakes is to practice safe social media habits: approach content, especially news, with a critical eye.
The Organization for Social Media Safety currently supports media literacy education in public schools, helping children understand news sources so they can evaluate the credibility of content in a nonpartisan way.
“It's incredibly important for our democracy to understand what's real and what's not,” says Berkman. “Limiting time on social media to healthy amounts is also important so people can avoid deepfakes being used for propaganda purposes.”
However, Fallis and Berkman note, individual efforts cannot substitute for structural change by businesses and governments aimed at combating the spread of this potentially dangerous technology.
Social media giants such as Facebook have adopted policies pledging to remove deepfakes from their platforms if they meet certain criteria, and some state governments, such as California's, have passed laws imposing civil liability on the creators of intentionally harmful deepfakes.
In California, Berkman says, his organization is working to pass a state law that would also criminalize the creation of malicious pornographic deepfakes, in the hope that such laws will spread to other states and that the federal government will adopt similar legislation.
For media inquiries, contact media@northeastern.edu.