Deepfake: The Information Boogeyman

The line between reality and simulation in the media is becoming increasingly blurred. Introducing fabricated audiovisual content at a time when we are still learning to deal with fake news is like adding a seemingly innocuous reagent to a potentially harmful reactive mixture. I see two hypothetical, polarised outcomes arising from such a reaction. Either we obtain products of interest, such as computer-generated characters for cinematic purposes or voice clones that restore the voices of people who have lost them to disease, or we trigger a catastrophic explosion by creating potent vectors of disinformation.

Deepfakes are the best-known example of “synthetic media”: pictures, sound, and video that appear to have been made by traditional means but were in fact created by sophisticated software (1). Although their most prevalent use so far has been to transplant the faces of celebrities onto the bodies of actors in pornographic videos, deepfakes can produce convincing footage of anybody doing anything, anywhere, which constitutes a major threat to society (1). Deepfake technology uses artificial intelligence (AI) to substitute one person’s likeness for another’s in recorded video by continually improving its ability to recognise the individual’s expressions and mannerisms (2).

How does this actually work? There are two main methods for creating deepfakes. The most common relies on deep neural networks, artificial neural networks with multiple layers between the input and output layers, to perform face-swapping. You need a target video to use as the deepfake’s foundation, e.g. a Hollywood movie, as well as a collection of clips of the person you want to place in the target, e.g. random TikTok videos (2). First, you run thousands of frames of the two people through a deep learning algorithm called an encoder. The encoder learns commonalities between the two faces, reducing them to their shared characteristics through a compression mechanism. The faces are then recovered from these compressed representations by a second AI system called a decoder. Because the faces are different, you train one decoder to recover the first person’s face and another to retrieve the second person’s. Finally, to achieve the face swap, you simply feed encoded images into the “wrong” decoder (2)(3).

The other method employs Generative Adversarial Networks (GANs), a feedback loop between two competing AI algorithms. The first algorithm, the generator, takes random noise and converts it into a picture. This synthetic picture is then mixed into a stream of real photographs fed to the second algorithm, the discriminator. At first the synthetic pictures will look nothing like faces; however, as the procedure is repeated many times with performance feedback, both the discriminator and the generator improve until the generator produces convincing results (3). This might seem complex and difficult to grasp, but let’s analyse the possible implications of such a technology.
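The shared-encoder, two-decoder idea behind the first method can be sketched in a few lines. The following is a minimal illustration only, not a working deepfake system: the “networks” are single random matrices standing in for trained models, and the image size, latent size, and all variable names are assumptions chosen for the example. What it shows is purely the data flow: one encoder compresses any face into a shared latent code, and the swap happens when that code is handed to the other person’s decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 64x64 greyscale face flattened to 4096 pixels,
# compressed to a 128-dimensional latent code (sizes are illustrative).
PIXELS, LATENT = 64 * 64, 128

# One shared encoder learns features common to both faces; each decoder
# learns to reconstruct one specific person. Real systems train these
# weights on thousands of frames; here they are random placeholders.
W_enc = rng.standard_normal((LATENT, PIXELS)) * 0.01
W_dec_a = rng.standard_normal((PIXELS, LATENT)) * 0.01  # renders person A
W_dec_b = rng.standard_normal((PIXELS, LATENT)) * 0.01  # renders person B

def encode(face):
    """Compress a face image into the shared latent representation."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a face from the latent code with a person-specific decoder."""
    return W_dec @ latent

face_a = rng.random(PIXELS)  # one frame of person A

# Normal reconstruction: encode A, decode with A's own decoder.
recon_a = decode(encode(face_a), W_dec_a)

# The face swap: encode A, but decode with B's decoder, so person B's
# features are rendered with person A's expression and pose.
swapped = decode(encode(face_a), W_dec_b)

print(recon_a.shape, swapped.shape)  # both (4096,)
```

Both outputs are full-size images because each decoder always paints its own person’s face; the expression and pose information survives only through the shared latent code, which is exactly why feeding it to the “wrong” decoder produces the swap.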

The use of machine learning algorithms to manipulate images and videos raises concerns about the spread of fraudulent material designed to sway public opinion during, for example, critical socio-political events. When Hong Kong pro-democracy activists received Telegram messages from Australia’s Finance Minister, Simon Birmingham, they were overjoyed. However, when “Birmingham” requested that they transfer money into a Hong Kong bank account, the activists quickly realised something was wrong. In truth, Birmingham’s contact book had been stolen by a computer hacker who had managed to verify a Telegram account with his phone number (4). Likewise, a video that Amnesty International says depicts Cameroonian soldiers murdering citizens was recently denounced as a product of deepfake technology by Cameroon’s minister of information (3). Given the danger that these forms of communication pose, the European Commission launched an action plan against disinformation in December 2018, more than doubling the budget of the European Union’s Strategic Communication Task Forces to combat disinformation and raise awareness of its impact (5). Similarly, in the United States, the phenomenon is being studied by the Pentagon through the Defense Advanced Research Projects Agency (DARPA). In cooperation with relevant national institutions, the agency is trying to develop mechanisms for the automatic detection of these manipulated videos (6).

However, one question remains unanswered: what is the legal status of this technology? Deepfakes are not illegal per se, but their makers and distributors can easily get into trouble. A deepfake may infringe copyright, violate data protection laws, or even be libellous if it exposes the victim to mockery. There is also the crime of sharing sexual and private images without consent, which is often associated with this technology (3). A major concern beyond the celebrity sphere is the use of deepfake technology against ordinary people, since the widespread availability of video material on social media might open up entirely new avenues for non-celebrity deepfakes (1). Celebrities and politicians may be shielded by their status, but a regular person may struggle to prove their innocence if their peers receive footage that appears to show them doing something reprehensible (1).

How, then, can we prepare for this new wave of information manipulation? The only way is to scrutinise the information we are fed and ask ourselves: does this make sense to us as human beings? Through critical thinking and a clear eye, we will be able to stand against this ever-growing threat. Education is the key. Educate yourselves.

Rafael Luis Pereira Santos


(1) Biggs T. and Moran R. (2021), “What is a deep fake?”, The Sydney Morning Herald.

(2) Johnson D. (2021), “What is a deepfake? Everything you need to know about the AI-powered fake media”, Insider.

(3) Sample I. (2020), “What are deepfakes – and how can you spot them?”, The Guardian.

(4) Galloway A. (2021), “The new world of ‘deep fake’: How cyber attackers impersonated senior ministers, diplomats”, The Sydney Morning Herald.

(5) High Representative of the Union for Foreign Affairs and Security Policy (2018), “Action Plan Against Disinformation”, European Commission.

(6) O’Sullivan D. et al. (2019), “When seeing is no longer believing”, CNN Business.
