DEEPFAKE


None of these people exist. These images were generated using deepfake technology.

Source: THISPERSONDOESNOTEXIST.COM

"Only those who have been diligent students of the Scriptures and who have received the love of the truth will be shielded from the powerful delusion that takes the world captiveBy the Bible testimony these will detect the deceiver in his disguise. To all the testing time will come. By the sifting of temptation the genuine Christian will be revealed. Are the people of God now so firmly established upon His word that they would not yield to the evidence of their senses? Would they, in such a crisis, cling to the Bible and the Bible only? Satan will, if possible, prevent them from obtaining a preparation to stand in that day. He will so arrange affairs as to hedge up their way, entangle them with earthly treasures, cause them to carry a heavy, wearisome burden, that their hearts may be overcharged with the cares of this life and the day of trial may come upon them as a thief."-{GC 625.3}

Last month, during ESPN’s hit documentary series The Last Dance, State Farm debuted a TV commercial that has become one of the most widely discussed ads in recent memory. It appeared to show footage from 1998 of an ESPN analyst making shockingly accurate predictions about the year 2020.

As it turned out, the clip was not genuine: it was generated using cutting-edge AI. The commercial surprised, amused and delighted viewers.

What viewers should have felt, though, was deep concern.

The State Farm ad was a benign example of an important and dangerous new phenomenon in AI: deepfakes. Deepfake technology enables anyone with a computer and an Internet connection to create realistic-looking photos and videos of people saying and doing things that they did not actually say or do.

A combination of the phrases “deep learning” and “fake,” deepfakes first emerged on the Internet in late 2017, powered by a novel deep learning method known as generative adversarial networks (GANs).

Several deepfake videos have gone viral recently, giving millions around the world their first taste of this new technology: President Obama using an expletive to describe President Trump, Mark Zuckerberg admitting that Facebook's true goal is to manipulate and exploit its users, Bill Hader morphing into Al Pacino on a late-night talk show.

The amount of deepfake content online is growing at a rapid rate. At the beginning of 2019 there were 7,964 deepfake videos online, according to a report from startup Deeptrace; just nine months later, that figure had jumped to 14,678. It has no doubt continued to balloon since then.

While impressive, today's deepfake technology is still not quite to parity with authentic video footage—by looking closely, it is typically possible to tell that a video is a deepfake. But the technology is improving at a breathtaking pace. Experts predict that deepfakes will be indistinguishable from real images before long.

“In January 2019, deep fakes were buggy and flickery,” said Hany Farid, a UC Berkeley professor and deepfake expert. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”

Today we stand at an inflection point. In the months and years ahead, deepfakes threaten to grow from an Internet oddity to a widely destructive political and social force. Society needs to act now to prepare itself.


When Seeing Is Not Believing

The first use case to which deepfake technology has been widely applied—as is often the case with new technologies—is pornography. As of September 2019, 96% of deepfake videos online were pornographic, according to the Deeptrace report.

A handful of websites dedicated specifically to deepfake pornography have emerged, collectively garnering hundreds of millions of views over the past two years. Deepfake pornography is almost always non-consensual, involving the artificial synthesis of explicit videos that feature famous celebrities or personal contacts.

From these dark corners of the web, the use of deepfakes has begun to spread to the political sphere, where the potential for mayhem is even greater.

It does not require much imagination to grasp the harm that could be done if entire populations can be shown fabricated videos that they believe are real. Imagine deepfake footage of a politician engaging in bribery or sexual assault right before an election; or of U.S. soldiers committing atrocities against civilians overseas; or of President Trump declaring the launch of nuclear weapons against North Korea. In a world where even some uncertainty exists as to whether such clips are authentic, the consequences could be catastrophic.

Because of the technology’s widespread accessibility, such footage could be created by anyone: state-sponsored actors, political groups, lone individuals.

In a recent report, The Brookings Institution grimly summed up the range of political and social dangers that deepfakes pose: “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”

Given the stakes, U.S. lawmakers have begun to pay attention.

“In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles,” U.S. Senator Marco Rubio said recently. “Today ... all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply.”

Technologists agree. In the words of Hany Farid, one of the world's leading experts on deepfakes: “If we can't believe the videos, the audios, the images, the information that is gleaned from around the world, that is a serious national security risk.”

This risk is no longer just hypothetical: there are early examples of deepfakes influencing politics in the real world. Experts warn that these incidents are canaries in a coal mine.

Last month, a political group in Belgium released a deepfake video of the Belgian prime minister giving a speech that linked the COVID-19 outbreak to environmental damage and called for drastic action on climate change. At least some viewers believed the speech was real.

Even more insidiously, the mere possibility that a video could be a deepfake can stir confusion and facilitate political deception regardless of whether deepfake technology has actually been used. The most dramatic example of this comes from Gabon, a small country in central Africa. 

In late 2018, Gabon's president Ali Bongo had not been seen in public for months. Rumors were swirling that he was no longer healthy enough for office or even that he had died. In an attempt to allay these concerns and reassert Bongo’s leadership over the country, his administration announced that he would give a nationwide televised address on New Year's Day.

In the video address (which is worth examining firsthand), Bongo appears stiff and stilted, with unnatural speech and facial mannerisms. The video immediately inflamed suspicions that the government was concealing something from the public. Bongo’s political opponents declared that the footage was a deepfake and that the president was incapacitated or dead. Rumors of a deepfake conspiracy spread quickly on social media.

The political situation in Gabon rapidly destabilized. Within a week, the military had launched a coup—the first in the country since 1964—citing the New Year's video as proof that something was amiss with the president.

To this day experts cannot definitively say whether the New Year's video was authentic, though most believe that it was. (The coup proved unsuccessful; Bongo has since appeared in public and remains in office today.)

But whether the video was real is almost beside the point. The larger lesson is that the emergence of deepfakes will make it increasingly difficult for the public to distinguish between what is real and what is fake, a situation that political actors will inevitably exploit—with potentially devastating consequences.

“People are already using the fact that deepfakes exist to discredit genuine video evidence,” said USC professor Hao Li. “Even though there’s footage of you doing or saying something, you can say it was a deepfake and it's very hard to prove otherwise.”

In two recent incidents, politicians in Malaysia and in Brazil have sought to evade the consequences of compromising video footage by claiming that the videos were deepfakes. In both cases, no one has been able to definitively establish otherwise—and public opinion has remained divided.

Researcher Aviv Ovadya warns of what he terms “reality apathy”: “It’s too much effort to figure out what’s real and what’s not, so you’re more willing to just go with whatever your previous affiliations are.”

In a world in which seeing is no longer believing, the ability for a large community to agree on what is true—much less to engage in constructive dialogue about it—suddenly seems precarious.


A Game of Technological Cat-And-Mouse

The core technology that makes deepfakes possible is a branch of deep learning known as generative adversarial networks (GANs). GANs were invented by Ian Goodfellow in 2014 during his PhD studies at the University of Montreal, one of the world's top AI research institutes.

In 2016, AI great Yann LeCun called GANs “the most interesting idea in the last ten years in machine learning.”

Before the development of GANs, neural networks were adept at classifying existing content (for instance, understanding speech or recognizing faces) but not at creating new content. GANs gave neural networks the power not just to perceive, but to create.

Goodfellow’s conceptual breakthrough was to architect GANs using two separate neural networks—one known as the “generator”, the other known as the “discriminator”—and pit them against one another.

Starting with a given dataset (say, a collection of photos of human faces), the generator begins generating new images that, in terms of pixels, are mathematically similar to the existing images. Meanwhile, the discriminator is fed photos without being told whether they are from the original dataset or from the generator's output; its task is to identify which photos have been synthetically generated.

As the two networks iteratively work against one another—the generator trying to fool the discriminator, the discriminator trying to suss out the generator’s creations—they hone one another’s capabilities. Eventually the discriminator’s classification success rate falls to 50%, no better than random guessing, meaning that the synthetically generated photos have become indistinguishable from the originals.
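To make that adversarial loop concrete, here is a minimal, hypothetical sketch of GAN training in Python (using PyTorch, which the article does not specify). A toy generator and discriminator are trained against each other on a synthetic 2-D dataset standing in for the "collection of photos" described above; every architecture choice and hyperparameter is illustrative, not drawn from any actual deepfake system.

```python
# Toy GAN sketch: illustrative assumptions throughout, not any real tool's code.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a fake "sample".
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for "a collection of photos": points clustered near (2, 2).
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(2000):
    # Train the discriminator: real data labeled 1, generated data labeled 0.
    real = real_batch()
    fake = G(torch.randn(64, latent_dim)).detach()  # detach: don't update G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator: reward it when D mistakes its output for "real".
    fake = G(torch.randn(64, latent_dim))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# When training succeeds, D's accuracy drifts toward 50%: it can no longer
# distinguish G's samples from the real ones.
```

The essential design choice is the pair of opposed objectives: the discriminator is penalized for every misclassification, while the generator is rewarded precisely when the discriminator errs.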

One reason deepfakes have proliferated is the machine learning community’s open-source ethos: starting with Goodfellow’s original paper, whenever a research advance in generative modeling occurs, the technology is generally made available for free for anyone in the world to download and make use of.

Given that deepfakes are based on AI in the first place, some look to AI as a solution to harmful deepfake applications. For instance, researchers have built sophisticated deepfake detection systems that assess lighting, shadows, facial movements, and other features in order to flag images that are fabricated. Another innovative defensive approach is to add a filter to an image file that makes it impossible to use that image to generate a deepfake.
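As a rough illustration of the first approach, a learned detector is often framed as a binary classifier over face crops. The sketch below, which assumes PyTorch and a labeled real-versus-fake training set, shows that framing in miniature; the architecture, input size, and training step are assumptions for illustration, not the design of any vendor's actual system.

```python
# Hedged sketch of a learned deepfake detector: a small binary CNN classifier.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1),  # single logit: > 0 suggests "fabricated"
)

loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (N, 3, 128, 128) face crops; labels: 1.0 = fake, 0.0 = real."""
    logits = detector(images)
    loss = loss_fn(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Smoke test with random tensors standing in for a real labeled dataset:
imgs = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()
print(train_step(imgs, labels))
```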

A handful of startups have emerged that offer software to defend against deepfakes, including Truepic and Deeptrace.

Yet such technological solutions are not likely to stem the spread of deepfakes over the long term. At best they will lead to an endless cat-and-mouse dynamic, similar to what exists in cybersecurity today, in which breakthroughs on the deepfake detection side spur further innovation in deepfake generation. The open-source nature of AI research makes this all the more likely.

To give one example, in 2018 researchers at the University at Albany published analysis showing that blinking irregularities were often a telltale sign that a video was fake. It was a helpful breakthrough in the fight against deepfakes—until, within months, new deepfake videos began to emerge that corrected for this blinking imperfection.
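The blinking check can be approximated with a simple heuristic: track an "eye aspect ratio" (EAR) per video frame and count how often it dips. The sketch below assumes eye landmarks have already been extracted by some face-landmark library; the threshold and frame counts are illustrative rules of thumb, not the Albany researchers' published method.

```python
# Illustrative blink-counting heuristic; landmark extraction is assumed.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, ordered as in the
    common 68-point layout: corners at indices 0 and 3, lids at 1, 2, 4, 5."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)  # small ratio = eye nearly closed

def blink_count(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blinks as runs of consecutive frames where EAR < threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A subject who blinks far less often than the human norm of roughly 15 to 20 blinks per minute would have flagged many early deepfakes under a heuristic like this.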

“We are outgunned,” said Farid. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”


The Path Forward

Looking beyond purely technological remedies, what legislative, political, and social steps can we take to defend against deepfakes’ dangers?

One tempting, simple solution is to pass laws that make it illegal to create or spread deepfakes. The state of California has experimented with this approach, enacting a law last year that makes it illegal to create or distribute deepfakes of politicians within 60 days of an election. But a blanket deepfake ban faces both constitutional and practical challenges.

The First Amendment of the U.S. Constitution enshrines the freedom of expression. Any law proscribing online content, particularly political content, risks running afoul of these constitutional protections.

“Political speech enjoys the highest level of protection under U.S. law,” said law professor Jane Kirtley. “The desire to protect people from deceptive content in the run-up to an election is very strong and very understandable, but I am skeptical about whether they are going to be able to enforce [the California] law.”

Beyond constitutional concerns, deepfake bans will likely prove impracticable to enforce due to the anonymity and borderlessness of the Internet.

Other existing legal frameworks that might be deployed to combat deepfakes include copyright, defamation and the right of publicity. But given the broad applicability of the fair use doctrine, the usefulness of these legal avenues may be limited.

In the short term, the most effective solution may come from major tech platforms like Facebook, Google and Twitter voluntarily taking more rigorous action to limit the spread of harmful deepfakes.

Relying on private companies to solve broad political and societal problems understandably makes many deeply uncomfortable. Yet as legal scholars Bobby Chesney and Danielle Citron put it, these tech platforms’ terms-of-service agreements are “the single most important documents governing digital speech in today’s world.” As a result, these companies’ content policies may be “the most salient response mechanism of all” to deepfakes.

A related legislative option is to amend the controversial Section 230 of the Communications Decency Act. Written in the early days of the commercial Internet, Section 230 gives Internet companies almost complete civil immunity for any content posted on their platforms by third parties. Walking these protections back would make companies like Facebook legally responsible for limiting the spread of damaging content on their sites. But such an approach raises complex free speech and censorship concerns.

In the end, no single solution will suffice. An essential first step is simply to increase public awareness of the possibilities and dangers of deepfakes. An informed citizenry is a crucial defense against widespread misinformation.

The recent rise of fake news has led to fears that we are entering a “post-truth” world. Deepfakes threaten to intensify and accelerate this trajectory. The next major chapter in this drama is likely just around the corner: the 2020 elections. The stakes could hardly be higher.

“The man in front of the tank at Tiananmen Square moved the world,” said NYU professor Nasir Memon. “Nixon on the phone cost him his presidency. Images of horror from concentration camps finally moved us into action. If the notion of not believing what you see is under attack, that is a huge problem. One has to restore truth in seeing again.”