ITEM: Perhaps inevitably, a deepfake video which claimed to show Ukrainian President Volodymyr Zelenskyy asking his soldiers to lay down their weapons surfaced online and went viral. The good news: it was a poor-quality deepfake and debunked rather quickly. The bad news: we’re going to see more of this, and it’s going to make our current digital disinformation problem worse.
Deepfake technology – which leverages AI to map someone’s face onto another person in a video – has been around for a few years now, and it’s getting increasingly sophisticated. Up to now, its uses have been mostly limited to memes, satire, pranks and (horrifyingly) revenge porn. But it was only a matter of time before it started being used in digital disinformation campaigns, and experts have been warning for some time that deepfake disinformation could have serious consequences.
In this case, the Zelenskyy deepfake didn’t work, partly because the Ukrainian government had been expecting deepfakes to be part of Russia’s digital disinformation campaign and warned citizens about the possibility a few weeks ago, and partly because the deepfake was pretty shoddy. It’s fairly easy to make a deepfake with little more than an iPhone and an app, but the quality isn’t good enough to fool most people. It takes sophisticated software to make something that’s harder to detect. But that software already exists, and even the smartphone version will improve over time.
Meanwhile, some experts say that the real danger of deepfakes isn’t making it look like someone said something they didn’t – it’s eroding trust in information generally.
Hany Farid, a professor at the University of California, Berkeley who is an expert in digital media forensics, tells NPR:
“It pollutes the information ecosystem, and it casts a shadow on all content, which is already dealing with the complex fog of war,” he said. “The next time the president goes on television, some people might think, ‘Wait a minute — is this real?’”
Think of it this way: the more we’re aware that bad guys have the technology to fake a video, the less likely we are to trust the authenticity of any video regardless of whether it’s real.
Technology Review’s Karen Hao wrote about this all the way back in 2019, citing a real-world example: Gabonese president Ali Bongo, after months out of the public eye, released a video assuring people he was okay, and many denounced it as a deepfake:
Subsequent forensic analysis never found anything altered or manipulated in the video. That didn’t matter. The mere idea of deepfakes had been enough to accelerate the unraveling of an already precarious situation.
Facebook and other companies are working on tools to spot and flag deepfakes – but that may not be enough. It will also require human moderators in media companies (including social media) to learn how to spot deepfakes (and other forms of digital disinformation) and how to verify if a given viral video is real or not.
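To make the automated side of this concrete, here’s a minimal sketch of how a flagging pipeline might aggregate per-frame classifier scores into a verdict. Everything here is illustrative: `fake_probability` is a stand-in stub (real systems use trained neural networks that look for facial artifacts), and the `flag_video` function, thresholds, and frame format are assumptions, not any company’s actual API.

```python
# Illustrative sketch of an automated deepfake-flagging pipeline.
# A real detector would score actual video frames with a trained model;
# here the "classifier" is a stub that reads a precomputed score.

from statistics import mean

def fake_probability(frame):
    """Stand-in for a trained per-frame deepfake classifier."""
    return frame["score"]

def flag_video(frames, threshold=0.7, min_suspect_ratio=0.5):
    """Flag a video when enough sampled frames look synthetic.

    threshold:          per-frame score above which a frame is "suspect"
    min_suspect_ratio:  fraction of suspect frames needed to flag the video
    (both cutoffs are arbitrary choices for this sketch)
    """
    scores = [fake_probability(f) for f in frames]
    suspect_ratio = sum(s >= threshold for s in scores) / len(scores)
    return {
        "mean_score": mean(scores),
        "suspect_ratio": suspect_ratio,
        "flagged": suspect_ratio >= min_suspect_ratio,
    }

# Example: a shoddy deepfake tends to score high on most frames.
frames = [{"score": s} for s in (0.9, 0.85, 0.4, 0.95, 0.8)]
result = flag_video(frames)
```

Aggregating over many frames rather than trusting any single one is the key design choice: it keeps one noisy frame from flagging a genuine video, which matters given how much damage a false positive does to trust.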
It also requires audiences to become more media-savvy about digital disinformation in general, although given the speed and scale at which the conspiracy theory about US bioweapons labs in Ukraine has gained traction, that will probably take generations to achieve.
Snopes offers more details on how to spot deepfakes here.