

source link: https://albertoromgar.medium.com/zelenskys-deepfake-is-the-peak-of-the-misinformation-war-36ba2fe6cd59

Zelensky’s Deepfake Is the Peak of the Misinformation War
How AI can influence a global conflict — and everything else.
A global conflict combined with the inability to tell real from fake is a cocktail for disaster.
Two weeks ago, Volodymyr Zelensky was making headlines everywhere. But this time, it wasn’t really him. It was a fake video in which Ukraine’s president supposedly appeared declaring he had decided to “return Donbas” and advising civilians to “lay down arms” before the Russian military and “return to [their] families,” as reported by Sky News.
The video was viewed hundreds of thousands of times on YouTube, Facebook, and Twitter before it was taken down under policies on manipulated media. The hackers (no one has claimed authorship) even managed to put it live on the TV channel Ukraine-24. Soon after, the channel posted a message on Facebook saying the video was fake — and a fairly easy one to spot, as the sizes of the head and body didn’t match and the audio was off. Zelensky himself issued a statement on Instagram debunking the deepfake.
This is the first known instance of deepfakes being used in the war between Russia and Ukraine — and could very well be the first in any armed conflict.
We’ve seen other forms of propaganda and false information during this war (e.g. spamming bot accounts or fake news articles from dubious sources). However, deepfakes — which have been at the center of worrying discussions about AI risks in the past — are at the technological forefront of false information, because they threaten one of the sources we’re inherently predisposed to believe: moving faces and talking voices.
Deepfakes: explain like I’m five
What is it?
For those of you who don’t know what a deepfake is, here’s the simplest explanation: a deepfake is a video (or image), built with AI models, that shows a person doing or saying something they never did or said.
A tech-savvy person can easily access deepfake software to modify a real video (e.g. swapping people’s faces) or create a fake one from scratch (e.g. simulating mouth movements that match a prerecorded audio file). Some parts are real, some aren’t. And when merged, they create a solid illusion that combines the face, voice, and mannerisms of the victim with the movements or words of the source footage.
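To make this concrete, here is a minimal sketch of the classic architecture behind early face-swap tools: a single shared encoder learns pose and expression, while one decoder is trained per identity. This is my own illustration in PyTorch, not code from the article or any particular deepfake app; the layer sizes, latent dimension, and 64×64 crop size are all illustrative assumptions.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea
# behind classic face-swap deepfakes. Illustrative only: sizes and
# names are assumptions, and a real pipeline adds face detection,
# alignment, and blending on top.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector that
    captures pose and expression rather than identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """One decoder per identity: it learns to paint that person's face
    back onto whatever pose/expression the latent vector encodes."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# Training reconstructs each person's own faces through the shared encoder.
# The swap is a cross-decoding step at inference time: encode a frame of
# person A, then decode it with person B's decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

frame_of_a = torch.rand(1, 3, 64, 64)    # stand-in for a real face crop
fake = decoder_b(encoder(frame_of_a))    # B's face, A's pose and expression
print(fake.shape)                        # torch.Size([1, 3, 64, 64])
```

The asymmetry is the whole trick: because both decoders only ever see latents from the same encoder, a latent extracted from person A’s frame is perfectly legible to person B’s decoder.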
Fake media have existed forever, but deep learning algorithms provide a qualitatively new degree of likeness between fake and real. Also, as Professor Sandra Wachter, an expert on the law and ethics of AI at the University of Oxford, told Sky News, now “we have the Internet where such information can spread more widely which makes it distinct from historical examples.”
How to spot one?
Zelensky’s deepfake was easy to spot because, technically speaking, it was a bad job.
Hany Farid, a professor at UC Berkeley and digital forensics expert, explained to CNN the “obvious signs” of the deepfake. First, it was a “low-quality, low-resolution recording,” intended to “hide the distortions.” Second, the arms and body don’t move (high-quality deepfakes can manage this convincingly, but it isn’t easy). And third, “there are little visual inconsistencies” that appear during the creation process.
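As a toy illustration of the first sign, here is a short sketch that measures how blurry each frame of a video is. This is my own example, not Farid’s toolchain: the variance of the Laplacian is a standard sharpness proxy, and the threshold of 50 and the file name are assumptions that would need tuning per source.

```python
# Toy heuristic for the "low quality hides the distortions" sign:
# count how many frames of a clip are suspiciously blurry.
# Requires opencv-python; the threshold is an illustrative assumption.
import cv2

def frame_sharpness(frame_bgr):
    # Variance of the Laplacian is a common sharpness proxy:
    # blurry frames have few strong edges, so the variance is low.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def blurry_fraction(video_path, threshold=50.0):
    # Returns the fraction of frames whose sharpness falls below threshold.
    cap = cv2.VideoCapture(video_path)
    total = blurry = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        if frame_sharpness(frame) < threshold:
            blurry += 1
    cap.release()
    return blurry / total if total else 0.0

# Hypothetical usage: a high fraction proves nothing by itself,
# but it flags footage that deserves a closer look.
# print(blurry_fraction("suspect_clip.mp4"))
```

A blurry clip is obviously not proof of a deepfake; the point is only that heavy compression and low resolution are cheap ways to mask generation artifacts, so they are worth flagging.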
There are other, non-technical heuristics we can apply to detect a deepfake: questioning the information, looking for trustworthy sources, or comparing the video against others found through an internet search.
Finally, the most powerful tool we have — but one we often take for granted — is common sense. Just one question can do the trick: does it make sense, given my beliefs and knowledge, that this is true? Of course, our beliefs may already be contaminated by falsehood, so it’s not infallible.
When we don’t know what to believe anymore
Deepfakes we can’t spot
But not even the most sophisticated tools will always work. Zelensky’s video was easily detected because it was cheaply made, but also because Zelensky is one of the highest-profile people in the world right now.
Barely anyone knew who Zelensky was before the Russian invasion started, but now he’s everywhere. People know how he talks and moves. We aren’t easily fooled when we have an internal representation of what we’re seeing. Now, imagine the same scenario with a lesser-known political leader who could also impact the lives of millions. The social and political effects could be devastating.
Wired senior writer Tom Simonite says that “as deepfake technology continues to get easier to access and more convincing, Zelensky is unlikely to be the last political leader targeted by fake video.” With more powerful computers and more capable algorithms, anyone could theoretically create a high-quality fake video that is hard to debunk even for attentive experts. And deepfakes evolve fast: as soon as a weakness is found, a correction is applied.
Luckily, the best resources for creating fake media are beyond the reach of most people — conventional computers lack the computing power and memory to achieve good results. However, technology always finds its way toward lower costs. It’s only a matter of time before the creation of indistinguishable deepfakes is a few clicks away, rendering video inherently unreliable as a basis for our beliefs.
A complex array of social consequences
The immediate consequence of deepfakes is that people may believe something false to be true. But that’s just the most obvious leg of a four-legged problem. The other three only become apparent after some reflection on the nature of the issue.
First, the other side of the coin: if people can ascribe truth to something false, they can also ascribe falsehood to something true. As deepfakes proliferate, people will grow more aware of them, and the rate of false negatives will increase: people will dismiss as fake videos and images that are, in fact, real.
Soon, a significant share of the beliefs we hold as true could be false, and vice versa. We may not know what to believe anymore.
But the problem branches further. People who don’t know better may share fake news they believe to be true, helping hoaxes proliferate. Some of these sharers are trusted sources for others, creating invisible barriers to well-intended scrutiny.
Lastly, the least common consequence, but potentially the most impactful: powerful people who have been recorded misbehaving can always resort to the deepfake excuse to avoid responsibility. This phenomenon is called the liar’s dividend. It’s misinformation about misinformation, providing politicians and other public figures with an effective shelter for their otherwise vulnerable reputations.
Final reflection
René Descartes spent his life searching for an indubitable truth: something he could believe to be true regardless of everything else. Doubt, he thought, is the beginning of knowledge and the end of false belief.
David Hume realized we inevitably rely on the testimony of others to form our beliefs. Most of what we qualify as true — the supposed facts to which we ascribe truth — are at best second-hand testimonies. How can we make sure that the reality we perceive matches the one beyond our reach? By delegating our beliefs to trusted sources, he thought.
But neither doubt nor trusted sources are enough in our increasingly complex world. Both Descartes and Hume would be struck by how deep the problem of misinformation runs today. Zelensky’s deepfake is just the tip of the iceberg, and not only in terms of what is to come, but of what may already be here without our knowing.
Doubt is a powerful tool, but we can only doubt what we can think. And our trusted sources fare no better. What we can’t think about — those realities we don’t know that we don’t know, the unknown unknowns — will remain in the realm of mysteries, forever beyond the limits that separate our few doubtless truths from our abundant illusory certainties.
If you’ve read this far, consider subscribing to my free biweekly newsletter Minds of Tomorrow! News, research, and insights on AI and Technology every two weeks!
You can also support my work directly and get unlimited access by becoming a Medium member using my referral link here! :)