The term deepfake commonly refers to a video in which a person’s face has been convincingly replaced by a computer-generated one. Deepfakes came to prominence in 2017, spreading across the internet through a Reddit community called “deepfakes”, whose members began sharing pornographic videos that appeared to feature famous female celebrities.
Deepfakes are the most important and best-known form of what is called “synthetic media”: content that appears to have been created by traditional means but was in fact built by complex software, which will be analyzed later in this article. The term deepfake comes from the fact that these videos are created using deep learning technology.
Today, thanks to simple software tools such as FakeApp and DeepFaceLab, it is possible to achieve a truly convincing effect. The technology offers exciting possibilities for various creative industries, from dubbing, video enhancement and restoration to resolving the uncanny valley effect in video games and sparing actors from having to repeat a take.
As already said, the term deepfake is now used generically in the media to refer to any video in which faces (or other parts of the body) have been digitally swapped or altered with the help of artificial intelligence. Let’s try to understand how the world of deepfakes works from a technical point of view. The idea of manipulating videos is not new: in the 1990s, some universities were already conducting major academic research in computer vision. Much of the effort during this period focused on using artificial intelligence (AI) and machine learning to edit existing footage of a speaking person and combine it with a different audio track.
A deepfake video leverages two machine learning (ML) models.
- One model creates the fakes from a sample video dataset, while the other tries to detect whether the video is indeed a fake.
- When the second model can no longer tell that the video is fake, the deepfake is probably quite believable even to a human viewer. This technique is called a generative adversarial network (GAN).
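The adversarial loop described above can be sketched in miniature. The following toy example trains a GAN not on video but on a 1-D Gaussian distribution, with a linear generator and a logistic-regression discriminator whose gradients are derived by hand; the distribution parameters, learning rate, and model forms are all illustrative assumptions, and real deepfake systems use deep convolutional networks instead.

```python
# Toy GAN sketch: generator and discriminator trained adversarially on 1-D data.
# All numbers (REAL_MEAN, lr, step count) are illustrative assumptions.
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 1.25  # the "real" data distribution (assumption)

# Generator: x = a*z + b, with noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), probability that x is real
w, c = 0.0, 0.0

def sigmoid(t):
    t = max(-30.0, min(30.0, t))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-t))

def discriminate(x):
    return sigmoid(w * x + c)

lr = 0.01
for step in range(2000):
    # --- discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    xr = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0, 1)
    xf = a * z + b
    dr, df = discriminate(xr), discriminate(xf)
    # gradient ascent on log D(xr) + log(1 - D(xf))
    w += lr * ((1 - dr) * xr - df * xf)
    c += lr * ((1 - dr) - df)
    # --- generator update: push D(fake) -> 1 ---
    z = random.gauss(0, 1)
    xf = a * z + b
    df = discriminate(xf)
    # gradient ascent on log D(G(z)); uses dD/dx = D*(1-D)*w
    a += lr * (1 - df) * w * z
    b += lr * (1 - df) * w

fakes = [a * random.gauss(0, 1) + b for _ in range(1000)]
fake_mean = sum(fakes) / len(fakes)
print(round(fake_mean, 2))  # should have drifted from 0 toward REAL_MEAN
```

The generator starts producing samples centered at 0, and the adversarial pressure from the discriminator drags its output toward the real distribution around 4; swapping the two linear models for deep networks and the 1-D samples for face images gives the deepfake setup in spirit.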
Deepfake examples are becoming more and more compelling. The spoofed videos originally caused a sense of disquiet and appeared to be used primarily in jokes, but we’ve since seen deepfakes enter mainstream media use in movies and even news broadcasts. You can now find examples of deepfakes on YouTube that are better than the CGI footage in the original movies the scenes were taken from. This has led to concerns about how the technology could be abused to create lifelike fake videos for malicious purposes. As we have already established, creating a deepfake video is not difficult, but creating good deepfake content requires knowledge accessible to few. Today, fake content can still be recognized. Here are some tips for recognizing deepfakes:
- Glitches: jerks in the image and inconsistencies in the continuity of the movements of the face and mouth that suggest an artificially constructed image. To spot these details, it is better to watch suspicious deepfake videos on a computer rather than on a smartphone.
- Eye movements: in fake videos, the subject’s eyes (in particular the iris and pupils) move in an unnatural way.
- Voice of the subject: an attentive ear will notice a distorted tone and accent in the voice.
- Face lighting: the lighting on the face almost never changes across the video and looks flat.
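The first tip above, spotting temporal glitches, can be illustrated with a small sketch. Real detectors use learned models over face landmarks or pixels; here we assume a hypothetical tracked face point per frame (synthetic data) and simply flag frames whose motion dwarfs the typical inter-frame motion.

```python
# Toy "temporal glitch" check: flag frames where a tracked face point jumps
# far more than the typical inter-frame motion. The track data is synthetic
# and the threshold factor is an illustrative assumption.
import statistics

def flag_glitches(positions, factor=4.0):
    """positions: per-frame (x, y) of a tracked face point; returns frame indices."""
    moves = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        moves.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    typical = statistics.median(moves)
    # frame i+1 is suspicious if the motion into it dwarfs the typical motion
    return [i + 1 for i, m in enumerate(moves) if m > factor * max(typical, 1e-9)]

# A smooth track with one sudden jump at frame 5 (synthetic example)
track = [(i * 1.0, 2.0) for i in range(5)] + [(50.0, 40.0)] \
        + [(5.0 + i, 2.0) for i in range(1, 5)]
print(flag_glitches(track))  # → [5, 6]: the jump into and out of the glitch frame
```

A real pipeline would feed landmark trajectories from a face tracker into a classifier rather than a fixed threshold, but the underlying cue, motion that breaks frame-to-frame continuity, is the same one a careful viewer looks for.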
There has been a lot of discussion about the political and legal implications deepfakes could have. While waiting for lawmakers to act, many websites have already taken a stand against deepfakes. Some online platforms, for example, are actively trying to ban all non-consensual deepfake videos. Others are concerned that, without proper regulation, many deceased actors and historical figures may be “digitally resurrected” without the explicit consent of their relatives. On the other hand, there is a fear that stricter rules could cast deepfakes in a bad light and slow research in many AI-related fields.
But with better and better technology, we are heading into a future where videos cannot be trusted so easily. The best way to protect ourselves from fake news and disinformation is to hone our critical thinking skills. Always be skeptical of the content you see online, especially when it is designed to be shocking, sensational or irritating. And before reposting something on social media, try double-checking its authenticity to reduce the spread of fake content, whether it’s a video or a news article.