Future Society

From porn to politics: the inexorable rise of Deepfake AI

8 January 2019 | Written by La redazione

Artificial intelligence that simulates images of human beings is the new "spearhead" of disinformation

A new threat looms over the web, promising a level of disinformation never seen before: the Deepfake, a technique for simulating images of human beings using artificial intelligence systems. We have not yet learned to recognize common (and often crude) fake news, and already we must prepare to face fakes that are far harder to identify, with implications reaching into very different fields. The latest to denounce the danger of Deepfakes was Hollywood star Scarlett Johansson, who in an interview with the Washington Post expressed her frustration at the impossibility of countering the spread of pornographic films bearing her face, made with this very software.

How Deepfake AI works. The concept is simple: imagine being able to attach the face of one individual to the body of another in a video, creating a fake that is very difficult to recognize. The software (Fakeapp is among the best known), which is not particularly complex to use, processes images of a subject, along with his or her expressions and posture. Once it has "learned" these features, it can faithfully reproduce the facial movements of the individual in question. The program then inserts the new clip into a video and, finally, thanks to speech synthesis, can also reproduce the subject's voice, creating an (almost) perfect fake. The only fundamental prerequisite is a large number of images of the subject to be "falsified": for this reason, celebrities have so far been the favourite targets. However, given the vast amount of material that we all disseminate on the web every day, the Deepfake danger could soon affect everyone.
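The core idea behind face-swapping tools of this kind, a shared encoder paired with one decoder per subject, can be illustrated with a deliberately minimal sketch. Real systems train deep convolutional autoencoders on thousands of frames; the functions and numbers below are toy stand-ins, purely illustrative, not the actual Fakeapp implementation:

```python
# Toy sketch of the classic deepfake setup: one shared encoder learns
# features common to both subjects (expression, pose); each subject gets
# its own decoder. To swap, a frame of subject A is encoded, then decoded
# with subject B's decoder, yielding B's appearance with A's expression.

def encode(face):
    # Shared encoder: compress a "face" (a list of pixel-like values)
    # into a tiny code capturing expression and pose.
    return [sum(face) / len(face), max(face) - min(face)]

def make_decoder(identity_offset):
    # Each subject's decoder reconstructs a face from the shared code,
    # adding back that subject's identity-specific appearance.
    def decode(code):
        mean, spread = code
        return [mean + identity_offset, mean + spread + identity_offset]
    return decode

decoder_a = make_decoder(identity_offset=0.0)   # subject A's appearance
decoder_b = make_decoder(identity_offset=10.0)  # subject B's appearance

frame_of_a = [1.0, 2.0, 3.0]
code = encode(frame_of_a)       # A's expression and pose
swapped = decoder_b(code)       # rendered with B's identity → [12.0, 14.0]
```

The point of the design is that the encoder never learns *whose* face it is seeing, only how the face is arranged; identity lives entirely in the decoders, which is what makes the swap possible.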

From porn to politics. The industry that blazed the trail for this technology has been porn. Although the danger of Deepfake software emerged some time ago, adequate countermeasures have yet to be identified. This was precisely one of the complaints raised by Scarlett Johansson in the recent interview: "The fact," the actress said, "is that trying to protect yourself from the internet and its depravity is fundamentally and largely a lost cause".
The pitfalls, however, may very soon also reach politics, with potentially disastrous results: imagine the spread of a video in which a head of state makes embarrassing and reckless statements or, even worse, declares war on another country. The experiment conducted by Jordan Peele serves as a warning in this sense: the American director and screenwriter produced a video, which quickly went viral, in which he falsified a speech by former US President Barack Obama, making him say things he never actually said.

How to counter fake videos. From the development of AI systems able to identify fakes to solutions based on blockchain, many proposals have been advanced to combat the spread of videos created with Deepfake systems. To date, however, none seems particularly effective: as early as February last year, a number of online platforms, including Twitter and Pornhub, moved to curb the phenomenon, but so far the measures adopted do not appear to have delivered the expected results. In recent months the US Department of Defense has also entered the field, funding a project to detect falsified video and audio: some of the best digital forensics experts are working to generate convincing fakes with AI and, in parallel, to develop tools capable of recognizing such fakes automatically.
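One family of countermeasures mentioned above, blockchain-based provenance, essentially amounts to registering a cryptographic fingerprint of a video at publication time and re-checking it later. A minimal sketch of that idea, using Python's standard hashlib (the byte strings are placeholder data, and the "registered" hash stands in for an entry on a public ledger, which is assumed rather than implemented here):

```python
import hashlib

def fingerprint(video_bytes):
    # Content fingerprint: any alteration of the video, including a
    # deepfaked face swap, changes the SHA-256 hash.
    return hashlib.sha256(video_bytes).hexdigest()

def is_authentic(video_bytes, registered_hash):
    # A verifier recomputes the hash and compares it with the one
    # registered at publication time (e.g. on a blockchain).
    return fingerprint(video_bytes) == registered_hash

original = b"...original video data..."
registered = fingerprint(original)        # stored at publication time

tampered = b"...deepfaked video data..."
print(is_authentic(original, registered))  # True: untouched footage
print(is_authentic(tampered, registered))  # False: content was altered
```

Note the limitation that makes this only a partial answer: hashing proves a clip was *altered after registration*, but says nothing about a fake that was never registered in the first place, which is why detection-based approaches are being pursued in parallel.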

The war on fake news has only just begun, and disinformation will increasingly rely on a new and powerful ally: Deepfake AI.
