The fifth radio column in my new series on CBC Radio’s Ottawa Morning. This one is a discussion with Stu Mills on whether we can believe the videos we see online.
You Can’t Believe Your Eyes
Special effects have brought magic to the screen for decades now. No matter how often we see it, we know that aliens haven’t destroyed landmarks around the world, and sharks don’t rain down from the sky. That magic doesn’t actually exist. Yet despite knowing what’s possible in video, we still tend to believe what we see.
This belief persists even as trust in the written word and in photos declines.
Enter DeepFakes…
Origins
In the fall of 2017, a Reddit user going by the name “deepfakes” posted pornographic videos to the site. The clips appeared to show famous actors committing various sex acts. They were of course fake, but also disturbingly believable out of context.
The user also posted the rudimentary code behind them, and other users quickly improved it. The subreddit grew, was eventually banned, and communities outside Reddit began to take notice.
FakeApp was the result of a series of simplifications to that code, putting deep fakes within reach of non-technical users.
Malicious Use
Using this technology to create pornographic videos without consent is disturbing. Several prominent sites have added bans specific to these types of videos, and lawmakers have moved to ban this use of the technology via legislation.
The implications for individuals are nothing short of terrifying. Revenge porn is already a disturbing trend online, now made worse. With swift action, this use of deep fakes should fall under existing legislation.
What about the bigger picture, beyond the individual? This technology could create a video of a world leader making a false statement.
This is exactly what Jordan Peele and BuzzFeed News did…as a public service announcement.
This professionally edited video smoothed out the imperfections common to most deep fakes. Additionally, Peele does a decent Obama impression, so the voice work is passable. At first glance, you might believe that this is in fact a legitimate video.
Technical Hurdles
This technology isn’t perfect, but it is improving. At the moment, the requirements to build a deep fake are simple (a sketch of the underlying model follows the list):
- a large library of images of the person to superimpose on the video
- a similarity between the person to overlay and the one who is being superimposed
- a destination video clip that conveys the desired context
- dubbing audio to align with the superimposed visual
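Under the hood, the original approach pairs one shared encoder with a decoder per person. The sketch below shows the general shape of that model; the layer sizes, the 64×64 input, and the training setup are my own illustrative assumptions, not FakeApp’s actual architecture.

```python
# A minimal sketch of the shared-encoder, twin-decoder autoencoder behind the
# original deepfake approach. Layer sizes and the 64x64 input are illustrative
# assumptions, not the actual FakeApp architecture.
from tensorflow.keras import layers, Model

IMG_SHAPE = (64, 64, 3)  # assumed working resolution for aligned face crops

def build_encoder():
    inp = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(256, activation="relu")(x)  # shared face representation
    return Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    inp = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(inp)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")  # learns to reconstruct person A
decoder_b = build_decoder("decoder_person_b")  # learns to reconstruct person B

# One autoencoder per person, but both share the same encoder.
face_in = layers.Input(shape=IMG_SHAPE)
autoencoder_a = Model(face_in, decoder_a(encoder(face_in)))
autoencoder_b = Model(face_in, decoder_b(encoder(face_in)))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

# After training each autoencoder on its own person's face crops, the swap is:
# fake_b = decoder_b.predict(encoder.predict(frames_of_person_a))
```

The trick is the shared encoder: both decoders learn to rebuild a face from the same latent representation, so decoding person A’s pose and expression with person B’s decoder produces the swap.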
The first requirement is easily met for celebrities, politicians, and other public figures. For everyone else, active social media profiles make it easy to assemble a similar set of images.
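As a concrete illustration of that gathering step, here is a hedged sketch that harvests face crops from a video with OpenCV’s bundled Haar cascade detector. The file names and the every-tenth-frame sampling are placeholder choices, not a prescribed workflow.

```python
# A sketch of the data-gathering step: sample frames from a video and save
# face crops using OpenCV's bundled Haar cascade detector. File names and the
# sampling interval are illustrative.
import os
import cv2

os.makedirs("faces", exist_ok=True)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("interview_clip.mp4")  # hypothetical source footage
saved = frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    if frame_no % 10:  # sample every 10th frame to avoid near-duplicates
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        cv2.imwrite(f"faces/face_{saved:05d}.png", frame[y:y+h, x:x+w])
        saved += 1
cap.release()
```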
The second and third requirements take only a short time surfing YouTube or any other video site. The push to video content has made it trivial to find a destination clip that meets a specific need.
The fourth requirement was a stumbling block until recently. In the Obama PSA, Jordan Peele’s passable impression makes the clip credible. For most people, matching a voice will be a challenge.
At least it was until a recent advancement from Adobe. In an unrelated appearance featuring Peele, Adobe announced its VoCo technology, which can generate audio that sounds like a specific speaker. Its moniker, “Photoshop for Voices,” is spot on.
The creative uses are obvious: no need to reshoot or re-record when you can generate the required soundtrack. The downside? It removes the final hurdle in the deep fake process.
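Adobe hasn’t published how VoCo works, but it sits in the broader family of concatenative synthesis: stitching new phrases out of pieces of a speaker’s existing recordings. The toy sketch below works at the word level with hypothetical corpus files purely to show the idea; real systems operate on phoneme-sized units with smoothing and prosody modelling.

```python
# A toy illustration of concatenative synthesis, the general family that
# VoCo-style tools build on. The word-level corpus and file names are
# hypothetical; real systems use phoneme-sized units plus smoothing.
import numpy as np
from scipy.io import wavfile

RATE = 16000  # assumed sample rate shared by every snippet

# Hypothetical corpus: isolated words clipped from existing recordings.
WORDS = ("we", "cannot", "believe", "our", "eyes")
corpus = {w: wavfile.read(f"corpus/{w}.wav")[1] for w in WORDS}

def synthesize(words, gap_ms=60):
    """Concatenate recorded word snippets with short silent gaps."""
    gap = np.zeros(int(RATE * gap_ms / 1000), dtype=np.int16)
    pieces = []
    for w in words:
        pieces += [corpus[w], gap]
    return np.concatenate(pieces[:-1])  # drop the trailing gap

# A sentence the speaker never actually said:
wavfile.write("generated.wav", RATE, synthesize(["we", "believe", "our", "eyes"]))
```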
The technology is not yet generally available, but it’s only a matter of time.
Positive Use
Why does this technology exist beyond the malicious uses already outlined? Like facial recognition, deep fakes are a neutral technology; it’s the specific use cases that need further discussion.
The deep fake technique is useful in movie production. The Justice League movie famously underwent reshoots. At the time, Henry Cavill (Superman) was filming Mission Impossible: Fallout.
The contract for Mission Impossible required Cavill to keep his moustache. If you’ve ever seen Superman, you know that a moustache is very much out of character.
The studio reshot the required scenes and removed Cavill’s moustache in post-production. The results were subpar.
The community had its opinions, and so did Cavill. One enterprising fan “reshot” the reshoot using deep fake technology. The result was comparable to, if not better than, the studio version, and it was done on a $500 computer.
Princess Leia made an appearance in Rogue One thanks to this type of technology. Recent movies contain flashbacks of younger versions of actors, powered by similar technology.
The technology could also clean up video shot in poor conditions. We accept this for photos; why not for video?
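As a taste of what that cleanup could look like, here is a minimal per-frame pass with OpenCV’s non-local-means denoiser. The file names, the 30 fps assumption, and the filter strengths are all illustrative.

```python
# A minimal sketch of video cleanup: run OpenCV's non-local-means denoiser
# over every frame. File names, the 30 fps assumption, and filter strengths
# are illustrative.
import cv2

cap = cv2.VideoCapture("noisy_clip.mp4")  # hypothetical input
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("cleaned_clip.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (w, h))
    # h=10 sets the luminance filter strength, hColor=10 the colour channels;
    # 7 and 21 are the template and search window sizes.
    writer.write(cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21))
cap.release()
if writer is not None:
    writer.release()
```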
Limits Needed
The negative and malicious use of deep fakes is a very real concern. It’s one more nail in the coffin of trust and believability on the internet.
The malicious uses we’ve seen out of the gate are disgusting, and they can have a terrifying impact at the individual level. Take the BuzzFeed News PSA a step further, and the question must be asked: how will citizens trust their leaders’ statements?
Rumours and memes spread fast, and internet culture isn’t currently set up for deeper inspection and verification of content.
On some level, that must change. A larger discussion needs to take place on the use of this technology. What should we do in cases of misuse? How can we verify content?
We need answers to these questions and more.
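Verification tooling is one place that discussion could start. As a small, hedged illustration (the file name is a placeholder, and this is one possible building block, not a full answer), publishers could release a cryptographic fingerprint alongside each official video so anyone can check that the copy they’re watching is untampered:

```python
# A simple verification building block: compare a downloaded clip's SHA-256
# fingerprint against one the original publisher releases. Any re-encode or
# edit changes the hash. The file name is a placeholder.
import hashlib

def video_fingerprint(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so large videos stream through memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

print(video_fingerprint("downloaded_clip.mp4"))  # hypothetical file
```

A matching fingerprint only proves a clip is the published original; it can’t flag a fake on its own, which is why the larger discussion still matters.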