Worse than “deep fakes” – new and more powerful applications of disinfo

On June 24, 2022, Berlin Mayor Franziska Giffey held what seemed like a completely normal video call with Kyiv Mayor Vitali Klitschko.
Or so she thought.
Volodymyr Zelensky tells Ukrainian troops to surrender – or not. Another “deep fake” (Photo: Twitter)
She became suspicious when the supposed mayor asked her to support a gay pride parade in the middle of war-torn Kyiv.
It turned out the caller was not Klitschko at all, but an impostor. Giffey’s office later said the person may have used deepfake technology to trick the Berlin mayor (although the underlying technology remains unclear).
A year or two ago, few people had heard of deepfakes; today, most people have. Their notoriety is largely due to popular applications such as face-swapping and AI-based lip-syncing filters on TikTok.
Once merely a means of entertainment, they have since been exploited by disinformation actors. This year alone, 2022 has seen several high-profile stunts, from the relatively innocuous, such as the JK Rowling hoax, to the potentially dangerous, such as a deepfake of Ukrainian President Volodymyr Zelensky ordering his citizens to lay down their arms.
But what is even scarier is that deepfakes are quickly becoming the “old-fashioned” way of creating fake video content.
This year’s new kid on the block is fully synthetic media. Unlike deepfakes, which are partially synthetic and graft an image of one person’s face onto another person’s body in an existing video, fully synthetic media can be created seemingly out of thin air.
This year saw the proliferation of text-to-image software that does exactly that.
It’s not real magic, but the technology behind the generators is hardly less mysterious. The models that power text-to-image software rely on machine learning and massive artificial neural networks, which loosely mimic the brain’s networks of neurons and their ability to learn and recognize patterns. The models are trained on millions of images paired with their textual descriptions, learning to associate words with visual features.
All the user has to do is enter a simple text prompt and – hey presto! – the picture comes out. The most popular programs are Stable Diffusion and DALL-E: DALL-E is now open to the public, and Stable Diffusion is free and open source.
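To make the point concrete, here is a minimal sketch of how little effort this now takes, using Hugging Face’s open-source diffusers library, which wraps Stable Diffusion. The model checkpoint, the prompt, and the GPU assumption are illustrative choices, not a recipe attributed to any particular actor.

```python
# A minimal sketch of text-to-image generation with the open-source
# "diffusers" library wrapping Stable Diffusion. The checkpoint name,
# prompt, and GPU assumption are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Download the pretrained model weights (several gigabytes on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

# One sentence of text in, one synthetic image out.
prompt = "press photo of a politician shaking hands at a summit"
image = pipe(prompt).images[0]
image.save("synthetic_photo.png")
```

That is the entire workflow: no artistic skill, no source footage, and nothing in the output that points back to a real event.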
This suggests a troubling potential: these tools are the dream of a disinformation actor who needs only to imagine the “evidence” needed to support his narrative and then create it.
These technologies are already beginning to permeate social media, and images are just the beginning.
In September, Meta unveiled “Make-A-Video,” which allows users to create “short, high-quality video clips” from text prompts. Experts warn that synthetic video is even more worrisome than synthetic images, given that social media now favors short, snappy video over text or still images.
Entertainment value aside, the intrusion of synthetic media into an app like TikTok is particularly worrying. TikTok is built on user-generated content and encourages people to take existing media, add their own edits, and re-upload it – an operating model not too dissimilar to deepfaking.
Recent research reported by the Associated Press suggests that as many as one in five videos surfaced by TikTok searches contains misinformation, and that young people increasingly use the app as a search engine for important issues such as Covid-19, climate change, or Russia’s invasion of Ukraine.
It is also significantly harder to moderate than platforms like Twitter.
In short, the TikTok app is the perfect incubator for such new tactics, which then often spread across the web through cross-platform sharing.
Most disinformation is still created with ordinary tools such as video- and audio-editing software. By splicing clips together, changing playback speed, altering a voice, or simply taking footage out of context, disinformation actors can easily stoke rumors.
Seeing is still believing
However, the danger posed by text-to-image technology is already clear and present. It doesn’t take much creative energy to imagine a not-too-distant future in which untraceable synthetic media pops up en masse on our phones and laptops. With trust in institutions and reliable media already weak, the potential effects on our democracies are sobering.
The sheer volume of today’s news aggravates the problem. We all have a finite capacity to consume news, let alone fact-check it. And we know that debunking fakes after the fact is a slow and ineffective remedy. For many of us, seeing is still believing.
We need a simple, widely available way for users to immediately recognize and understand fake images and videos. Any solution that does not let users – and journalists – identify fakes faster, more easily, and more independently will always be one step behind.
Currently, the most promising solutions focus on provenance: technology that embeds a signature or invisible watermark in media at the moment of creation, as proposed by the Adobe-led Content Authenticity Initiative. This approach is promising but complex, and it requires cooperation across several industries. Policymakers, especially in Europe, should pay more attention to it.
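To illustrate the principle (and only the principle: this is not the Content Authenticity Initiative’s actual format), here is a minimal sketch of provenance signing, in which a creation device signs a file’s bytes so that any later alteration becomes detectable. The key handling and file contents are hypothetical stand-ins.

```python
# A minimal sketch of the provenance idea: sign media at creation so
# tampering is detectable. This is NOT the Content Authenticity
# Initiative's real format, only an illustration of the principle.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice, the camera or creation tool would hold the private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hypothetical stand-in for the raw bytes of a photo or video file.
media_bytes = b"raw image bytes captured by the camera"
signature = private_key.sign(media_bytes)  # shipped alongside the file

# Anyone holding the public key can later check the file is unmodified.
try:
    public_key.verify(signature, media_bytes)
    print("Provenance intact: the file matches its original signature.")
except InvalidSignature:
    print("The file was altered after it was signed.")
```

The hard part is not the cryptography but the ecosystem: cameras, editing tools, and platforms would all have to carry the signature along, which is exactly why cross-industry cooperation matters.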
We live in a fast-paced world, and disinformation spreads faster than our current solutions. It’s time to catch up.