By now you’ve undoubtedly been exposed to Deepfake, a process by which artificial intelligence and machine learning can convincingly superimpose one person’s face onto another in any video. Whether you were aware of it or not, Deepfake videos have spread across the internet far and wide as a popular meme format and display of programming prowess.

You’ve probably seen Jennifer Lawrence being interviewed after the Golden Globes wearing the face of Steve Buscemi and discussing her favorite characters on Vanderpump Rules. Another popular Deepfake features Bill Hader on Conan doing an Arnold Schwarzenegger impression and looking just like The Terminator himself.

These kinds of Deepfake videos are spectacles, meant to create humor through juxtaposition and impress with unbelievable visual accuracy. They are not created to mislead; no one would be fooled into thinking Steve Buscemi had donned a red dress at an awards show to discuss his favorite reality TV stars. The reality of Deepfake, however, is not entirely innocent. Born of an ethically dubious purpose, the technology has enormous potential to affect politics, filmmaking, and the way society consumes and perceives information forever.

In early 2017, a Reddit user named u/deepfakes began sharing videos he made depicting Hollywood stars such as Gal Gadot and Scarlett Johansson in pornographic films. The videos were created using software built on open source machine learning libraries, such as TensorFlow, to produce fake but very convincing celebrity porn.

The way it works is that images and videos of a particular subject, such as Gal Gadot, are collected from Google and YouTube. These images are then fed through a process known as deep learning: an artificial neural network “learns” the features of the subject’s face and applies that knowledge to map Gal Gadot’s face onto the body of an actual adult actress in an actual adult scene.

It sounds rather complicated and technobabbley, but the central concept is this: u/deepfakes taught a computer how to replicate an actress’s face by supplying it with tons and tons of images and videos. Once the computer learned enough about what the actress looks like, it was able to autonomously change the face of someone completely different to look exactly like her. The process is simple, requires very little hardware, and can be done by anyone, without any sophisticated knowledge of programming, in a very short amount of time.
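For the curious, the core idea behind these face-swap tools can be sketched in a few lines of numpy. This is a hypothetical toy, not the actual Deepfake code: it uses simple linear maps in place of deep convolutional networks and random arrays in place of real face images, but it shows the trick the tools rely on — one shared encoder learns generic facial structure, two separate decoders learn to reconstruct face A and face B, and the swap happens by decoding face B’s images with face A’s decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64, 8          # flattened-image size and latent size (toy values)

# One SHARED encoder, one decoder per identity (all as simple linear maps).
encoder   = rng.normal(0, 0.1, (DIM, LATENT))
decoder_a = rng.normal(0, 0.1, (LATENT, DIM))
decoder_b = rng.normal(0, 0.1, (LATENT, DIM))

def train_step(images, decoder, lr=0.01):
    """One gradient step minimizing reconstruction error ||x·E·D - x||²."""
    global encoder
    latent = images @ encoder            # encode with the shared encoder
    recon  = latent @ decoder            # decode with this identity's decoder
    err    = recon - images              # reconstruction error
    grad_dec = latent.T @ err            # gradient w.r.t. the decoder
    grad_enc = images.T @ (err @ decoder.T)  # gradient w.r.t. the encoder
    decoder -= lr * grad_dec / len(images)
    encoder -= lr * grad_enc / len(images)
    return float((err ** 2).mean())

faces_a = rng.normal(size=(32, DIM))     # stand-ins for images of face A
faces_b = rng.normal(size=(32, DIM))     # stand-ins for images of face B

for _ in range(200):                     # train both autoencoders together,
    train_step(faces_a, decoder_a)       # so the encoder sees both faces
    train_step(faces_b, decoder_b)

# The swap: encode a face-B image, then decode it with face A's decoder.
swapped = (faces_b[:1] @ encoder) @ decoder_a
print(swapped.shape)                     # prints (1, 64)
```

Because both identities pass through the same encoder, the latent code captures pose and expression rather than identity; swapping decoders keeps the expression but repaints the face.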

It’s so simple, in fact, that within months of being exposed to the mainstream by a Vice article, an app called FakeApp was created, providing user-friendly tools for anyone to make their own Deepfake videos quickly and easily. The creator, another Reddit user named u/deepfakeapp, told Vice how he hoped to develop the app further:

“I think the current version of the app is a good start, but I hope to streamline it even more in the coming days and weeks,” he said. “Eventually, I want to improve it to the point where prospective users can simply select a video on their computer, download a neural network correlated to a certain face from a publicly available library, and swap the video with a different face with the press of one button.”

r/deepfakes, the hub for these videos on Reddit, was eventually shut down. Deepfake videos are now banned from Reddit and Twitter, as they violate the sites’ consent policies and are comparable in that sense to revenge porn. The existence of Deepfake pornography is disturbing for many, including Scarlett Johansson, who spoke to The Washington Post in 2018. In a prepared statement she expressed concern for vulnerable women who may become victims of Deepfake videos and are not protected by their fame, saying, “But nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired.”

These concerns about obscuring reality with fake videos are not limited to women and pornography, either. It didn’t take long for creators to clue in to Deepfake’s potential in politics. You may have seen a doctored video of Nancy Pelosi seemingly stammering drunkenly during a news conference. President Trump himself tweeted the video, giving it unearned validity.

Many who saw the video will never learn it was fake, and this poses significant concerns for the country as it relates to matters of national security. What if someone were to create a Deepfake video of the president saying he has launched a nuclear attack? These concerns have received significant media attention, including a PSA by Jordan Peele and BuzzFeed in which a Deepfake video of President Obama was created to remind us not to believe everything we see.

While the potential for harm to individual reputations and to democracy at large is a matter of grave importance, there’s another side to Deepfake, one that is not foreboding but inspiring. The potential Deepfake has to create visual accuracy in film is nothing short of astounding. What today takes visual artists thousands of man-hours and millions of dollars may one day be done completely autonomously using machine learning. In fact, we already have several incredible examples that highlight the enormous potential Deepfake holds for visual effects technology.

A creator known as Derpfake recreated the young Princess Leia scene from Rogue One using Deepfake. The recreation is practically indistinguishable from the original. The creator’s comments on the video hone in on exactly what this technology could mean for the industry:
“Top is original footage from Rogue One with a strange CGI Carrie Fisher. Movie budget: $200m. Bottom is a 20 minute fake that could have been done in essentially the same way with a visually similar actress. My budget: $0 and some Fleetwood Mac tunes.”

The best example of Deepfake’s potential as a tool for mainstream film is a recreation of a famous Terminator 2: Judgment Day scene in which the titular Terminator, played by Arnold Schwarzenegger, has been replaced by his ’80s rival Sylvester Stallone. The effect isn’t flawless, but it is breathtaking. At times it is absolutely impossible to see the seams; it is almost as if you are watching the film in an alternate timeline.

The opportunities for Deepfake are obvious. How much money would have been saved in the production of All the Money in the World had they been able to use Deepfake to replace Kevin Spacey with Christopher Plummer? How much more Paul Walker could we have seen in The Fast and the Furious franchise after his death? Not only can any actor be replaced in any past film, but future films can be planned and produced with the intention of replacing the actor later.

Imagine a new movie experience, one in which the film is shot and edited with an unknown actor, but you, the viewer, later choose who stars in it. Selecting from a roster of actors who have all had their likenesses tirelessly analyzed and recreated using machine learning, you could have a unique theater experience in which anyone at all is digitally placed into the film. Why not even watch yourself as the star of the movie?

Don’t forget, either, that this process is done automatically, without sophisticated hardware. As visual fidelity and definition increase, the required processing and graphics hardware will certainly grow more demanding. But the process can still be done in a fraction of the time and without the cost of modern CGI technology. Actors can be brought back from the dead to star in endless franchises. Others can be replaced entirely, and it can all be done without affecting the bottom line.

The future of Deepfake technology is as thrilling as it is intimidating. What we do with it will influence the way lawmakers choose to address it: if Deepfake continues to be used as a tool to obscure facts and push political agendas, then it will forever be remembered as a weapon. But if the technology continues to grow and support the visual art industry, then there is truly no limit to what it can do.


Article written by Eric Switzer. Eric Switzer is a filmmaker and writer living in Los Angeles. His work tends to focus on the lighter side of entropy, dystopic futures, and man’s innate struggle with his own mortality. He can be found on Twitter @epicswitzer or reached via email at ericswitzerfilm@gmail.com.

Author: Allan Torp Jensen

Allan has worked on visual effects for feature films and television for 20 years. He has experience of the full VFX pipeline but has focused on compositing for the past 15 years and has been a Lead Compositor and Compositing Supervisor on various shows. He has worked with the talented people at Cinesite, Bluebolt VFX, Automatik VFX in London, and Weta Digital in New Zealand. For the past five years, he has worked remotely at his own Torper Studio on various high-end TV and feature film projects.
