Episode 14: This Person Does Not Exist

From 2015 to 2016, while lecturing at the University of Roehampton, London, on a module on Scientific Thinking, Dr Bahijja Raimi-Abraham happened upon The Reilly Top 10 List. Now known as the Tech Top 10 List, it was founded by Dr Jessica Baron and focuses on concerns and ethical dilemmas in science and technology. In this episode, hear Dr Bahijja explore a topic featured on the 2019 Tech Top 10 list: AI deepfake technology.

Photo by Magnus Engø.

What are deepfakes?

The term “deepfake” comes from a combination of the term “deep learning” (in reference to deep-learning AI) and the word “fake”, although it can also be referred to as “face swapping” in some contexts. Generally, it refers to generated or synthetic video, audio, or photographic content in which a person’s face or likeness has been replaced with someone else’s.

So, how are deepfakes actually created?

While many of us are familiar with the concept of self-learning artificial intelligence, the average person has likely never heard of the main method by which deepfakes are generated: the generative adversarial network (or GAN).

A GAN is a machine learning model consisting of two networks that play off each other to generate near-undetectable forgeries, in this case of faces. One network (the generative network) is responsible for generating content, usually based on information, such as photos, from a data pool. The other (the discriminative network) then repeatedly analyses the content from the generative network for flaws or any signs that it is fake. Once the discriminative network can no longer detect any signs of forgery, the generated content, in this case the deepfake, is complete.
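To make the two-network idea concrete, here is a minimal sketch of a GAN training step written in PyTorch. Everything in it is illustrative rather than taken from any real deepfake system: the network sizes, the learning rates, and the assumption that faces arrive as flattened 28 × 28 grayscale images are all simplifying assumptions for the sketch.

```python
# A minimal, illustrative GAN training step in PyTorch. Sizes and
# hyperparameters are assumptions for this sketch, not values from
# any real deepfake system.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # assume 28x28 grayscale faces, flattened

# The generative network: turns random noise into candidate fake samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# The discriminative network: scores a sample as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1. Train the discriminator to separate real photos from forgeries.
    noise = torch.randn(batch_size, latent_dim)
    fakes = generator(noise).detach()  # freeze the generator on this pass
    d_loss = (loss_fn(discriminator(real_batch), real_labels) +
              loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator: its loss falls
    #    as the discriminator mistakes its output for the real thing.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Training alternates these two steps over many batches of real photos. Once the discriminator can no longer reliably tell real from generated, feeding a fresh random noise vector through the generator produces a new, never-before-seen face each time, which is essentially what a site like thispersondoesnotexist.com does on every visit.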

While many deepfakes contain the likeness of real individuals, not all of them do. In 2019, speakers at The MedTech Conference drew attention to a website dubbed thispersondoesnotexist.com. Created by Philip Wang using a GAN, the This Person Does Not Exist model was “trained” on thousands of pre-existing photos to generate new and unique faces.

A generated face from thispersondoesnotexist.com.

Should we be worried about deepfakes?

Although much of the initial novelty surrounding deepfakes came from relatively harmless projects like thispersondoesnotexist.com, the technology has a huge potential for harm, whether through revenge or fake pornography, defamatory images, fake news, hoaxes, financial fraud, or identity theft.

One such example of the possible downsides of deepfakes comes from “Oliver Taylor”, a supposed University of Birmingham student and freelance writer who described himself as loving politics and coffee. In December 2019, the UK-based writer published inflammatory articles on several reputable news sites, such as The Times of Israel, The Algemeiner, Arutz Sheva (also known as “Israel National News”) and The Jerusalem Post. Following several complaints from people mentioned in the articles, editors, including at The Times of Israel, attempted to contact Taylor. While he initially responded, attempts to vet Taylor’s identity proved futile, and an investigation by Reuters eventually revealed that he did not exist and that his profile picture was a deepfake. Although most of the sites have since removed his articles, some have yet to do so.

A photo of “Oliver Taylor” beside a heat map by Cyabra identifying areas of suspected manipulation. Via Reuters.

As noted by The Times of Israel opinion editor Miriam Herschlag, these sorts of deepfakes are incredibly dangerous, as they “could distort the public discourse” and “[make] people in her position less willing to take chances on unknown writers”.

Not only are deepfakes an issue for public discourse; they have also become a problem in the world of online dating.

As noted by writer Ali Foster, as early as 2017 an organisation called Internet Removal was encountering deepfake scammers on Tinder. One such scam involved a Tinder user who would encourage matches to video chat with “her”. Upon video-chatting, the match would see a feed of a “woman” undressing, encouraging the person on the other end to do the same. Terrifyingly, the only sign that the video was fake was an occasional issue with the audio lining up with the movement of the “woman’s” mouth.

Are deepfakes legal?

Deepfakes, being a new and often poorly understood technology, unfortunately exist in a sort of legal grey zone. Some countries do apply existing law: in the United Kingdom, deepfakes can fall under harassment law, as in the case of a 2018 defendant who was jailed and fined after creating deepfaked pornographic content. Many other aspects, however, such as intellectual property, are regulated far less heavily, or not at all.

Although the area is largely unexplored, there has been some examination of the potential legal implications of deepfakes. In January 2020, the Journal of Intellectual Property Law & Practice published “Regulating deep fakes: legal and ethical considerations”, an article exploring many of the potential benefits, disadvantages, and unintended consequences of regulating this emerging technology.

Although more regulatory measures could help prevent the creation of socially harmful content, such as revenge pornography, regulators often lack the technological knowledge and resources needed to enforce them. As with any other issue, it is hard to solve a problem you do not fully understand.

Regardless, the fate of deepfakes will largely depend on how well governments apply regulatory measures, legal or otherwise.

Interested in hearing more? Please check out the full episode of the Monday Science podcast on Spotify, Apple Podcasts, Google Podcasts, and Stitcher.

If you have any questions you’d like answered by Dr Bahijja, or have any thoughts on deepfake technology, feel free to send them in via the website chat, or email MondayScience2020@gmail.com. You can also send us your questions as a voice message via https://anchor.fm/mondayscience/message. We love to hear your thoughts!

Were you confused about any of the terms used in this summary?

Deep-learning AI — A machine learning method used in artificial intelligence that mimics the way the human brain processes information, for example in pattern recognition.

Machine Learning Model — As noted by Expert System, a machine learning model is a method used in AI that allows a computer to learn and gain experience without human intervention. Sometimes referred to as self-learning AI.

GAN — A generative adversarial network. A type of machine learning model in which one network generates content while a second network actively looks for flaws in that content. By identifying flaws, the second network allows the first to correct its output until no identifiable flaws remain.

Image Credits:

Photo of motherboard by Magnus Engø.

Photo of generated deepfake by thispersondoesnotexist.com.

Photo of “Oliver Taylor” by Reuters and Cyabra.
