The audio used a deepfake tool to simulate the voice of anchor William Bonner – Projeto Comprova/Reproduction
In an election whose final stretch has been marked by exchanges of accusations, exploitation of opponents' political scandals, and montages aimed at the most unsuspecting Brazilians, two audio and video manipulation techniques have worried political campaigns and observers at the Superior Electoral Court (TSE): the deepfake and the shallowfake.
Although the names are similar and the goals – often ulterior – are the same, what distinguishes a deepfake from a shallowfake is the technique used to tamper with sound and images and deceive the recipient of the message. In the first case, artificial intelligence mechanisms are used; in the second, careful editing of speech and footage.
The most famous shallowfake of this election was one in which Jornal Nacional presenter Renata Vasconcellos appears to announce a false poll of voting intentions, allegedly conducted by IPEC, showing President Jair Bolsonaro (PL) in the lead in the reelection race – which never happened. The case is a shallowfake, according to technology specialist Bruno Sartori, because it reused old recordings of Renata pronouncing the numbers cited in the fake survey, such as the 45% attributed to Bolsonaro, without any use of artificial intelligence. “In the case of JN, we have an edit made with traditional editing software, so it’s not a deepfake. The author didn’t use artificial intelligence to create something that would pass for real. This is content that has been re-edited to take it out of its original context,” says Sartori.
Created with artificial intelligence techniques, the deepfake is more sophisticated: it uses computer programs, and even avatar creation, to fabricate a fake message from scratch. A study by the Dutch company Deeptrace, with data from 2019, found that 96% of the deepfakes created are used for pornography, with manipulations that simulate voices and even swap faces in videos.
In the political sphere, a recent deepfake involved an alleged surrender speech by Ukrainian President Volodymyr Zelensky. In it, Vladimir Putin’s adversary instructs Ukrainian troops to abandon their weapons and surrender to the Russian army. Unlike the video attributed to Renata Vasconcellos, Zelensky’s voice was in fact manufactured by computer programs, which also simulated the movement of the president’s lips.
“Bad intent is everywhere, and it all depends on how one uses artificial intelligence. A security-forces sniper and a bandit use the same tool. We cannot give space to those who distort the work,” Sartori says of the deepfake. Before the second round, Sartori intends to produce “deepfakes for good,” featuring popular figures such as Pope Francis delivering a warning in Portuguese about the appropriation of the technology to do harm.