SPECIAL

Deepfakes: will anything be trusted?

By: Clara Franco Yáñez, MA
Master in International Affairs, Graduate Institute of International and Development Studies, Geneva, Switzerland
clara.franco@graduateinstitute.ch


“It’s not what it looks like!” … We hear this in movies and, most often, the punchline of the scene is that it’s exactly “what it looks like”. The combination of artificial intelligence learning at a terrifying pace and the increasingly ubiquitous presence of video and social media in our everyday lives means that “deepfakes” will be more convincing than ever: indistinguishable from a real video of a person. Defined as “synthetic media in which a person in an existing image or video is replaced with someone else’s likeness”, you’ve probably seen one, perhaps unknowingly. Let’s not forget that one of the most “popular” uses of deepfake videos has been to create fake pornographic videos featuring celebrities. People have created fake videos of Barack Obama, Vladimir Putin, Morgan Freeman… Some of these pieces were created precisely with the meta-objective of raising awareness about deepfake technology itself and its unsettling potential.

You may have seen one, and true, many of them are still far from perfect. Maybe only once you’ve been warned that it’s a fake does the “uncanny valley” feel to them suddenly materialize, and you realize, “well of course, it does look fake”. But the technology keeps improving. The blinking and the lip-sync get better. Coupled with convincing voice acting or voice “masking” (we’ve known for decades that it’s not impossible to learn to imitate voices perfectly), it is honestly unsettling and frightening to think of the possibilities. Scamming, fake messages that could even start international conflicts, or something much closer to home and much more terrifying for all of us: “what if someone creates a ‘deepfake’ featuring me in a porn video?!”… At the company where I work, we already had an incident of an attempted scam in which the CEO’s voice and mannerisms were perfectly imitated over the phone; it almost caused a significant financial loss.

Will it become harder and harder to tell what is real from what isn’t?... Or will the hype and moral panic about “deepfakes” eventually die down, as many others have? Every new disruptive technology tends to generate fears, some of them wildly exaggerated, such as the fear that self-driving cars would essentially be murder machines. Technology also tends to bring its own “antidotes” in various forms (think of the virus and the antivirus), or in this case, reliable ways to detect and flag even “perfect” deepfakes.

But moral panic or not, some argue that the mere fact that deepfakes exist (the very idea that they are possible) already contributes to dissolving trust in this increasingly polarized and mistrustful society; that their existence breaks down (yet another?) layer of trust in what our own senses perceive. Might it become the case that we are unable to truly trust anything coming from the media online?... Much has been written about the difficulties and the mental training required to think critically and to examine information with an analytical mind. That already assumes people even want, or care, to distinguish fact from fiction. It could be much worse if people don’t actually care in the first place, choosing instead to simply reinforce their own preconceptions ad infinitum.

Perhaps in time “deepfakes” will simply fade into irrelevance. But at this point it doesn’t seem so far-fetched to think that perhaps only in our small, local, physical community will we find the confidence that our senses aren’t deceiving us. Yes, variations of this idea have been voiced since the dawn of time but, in some ways, it does seem that the more technology advances, the more steps backwards we take. And yes, this has become another tired cliché, but it would seem that the more “connected” we are via technology, the more “disconnected” we risk becoming: from each other, from reliability and fact, or in this case, from the questionable reality of what our senses perceive. Soon, will only the people we see physically in front of us be truly real? And even then, what incredible technology might prove able to fool us in person tomorrow?