What is a deepfake, explained
“Star Wars” actor Peter Cushing died in 1994. To bring back his character Grand Moff Tarkin in 2016’s “Rogue One: A Star Wars Story,” filmmakers cast an actor with a similar build and had him wear motion-capture materials on his head. They then digitally replaced his face with Cushing’s.
For 1994’s “Forrest Gump,” filmmakers digitally inserted archival footage of JFK into this scene and manipulated his mouth movements.
Actor Paul Walker died while 2015’s “Furious 7” was in production. To create this scene, filmmakers shot the late actor’s brother in the car and digitally inserted a composite of Walker’s face.
Actor Oliver Reed died before his last two scenes had been shot for 2000’s “Gladiator.” To get those scenes done, filmmakers took shots of Reed’s head from scenes that had been filmed before his death, matched the lighting in a computer, then grafted them to a body double.
It’s been possible to alter video footage for decades, but doing it took time, highly skilled artists, and a lot of money. Deepfake technology could change the game. As it develops and proliferates, anyone could have the ability to make a convincing fake video, including some people who might seek to “weaponize” it for political or other malicious purposes.
See how deepfakes are different: computers, not humans, do the hard work
Now deepfake technology is on the US government's radar
The Pentagon, through the Defense Advanced Research Projects Agency (DARPA), is working with several of the country’s biggest research institutions to get ahead of deepfakes.
But to learn how to spot deepfakes, you first have to make them. That work takes place at the University of Colorado Denver, where researchers on DARPA’s program are trying to create convincing deepfake videos. Other researchers will later use those videos to develop technology that can distinguish what’s real from what’s fake.
How are they made?
Spotting a deepfake
A thousand miles west of Denver, a team at SRI International in Menlo Park, California, is developing the crucial second component of DARPA’s program: technology to spot a deepfake.
How are they detected?
By feeding computers examples of both real and deepfake videos, these researchers are training machines to tell the difference between the two.
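SRI’s actual detection systems aren’t public, but the underlying idea, a classifier trained on labeled examples of each class, can be sketched in a few lines. This toy version is an illustration, not SRI’s method: the feature vectors are synthetic stand-ins for per-video cues (blink rate, lighting consistency, and the like), and a simple logistic-regression model learns to separate them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-video feature vectors. Real and fake clips
# are drawn from slightly different distributions so a classifier can
# learn to separate them, mirroring how real detectors learn from
# labeled examples of each class.
n, d = 200, 4
real = rng.normal(0.0, 1.0, (n, d))
fake = rng.normal(0.7, 1.0, (n, d))
X = np.vstack([real, fake])
y = np.array([0] * n + [1] * n)  # 0 = real, 1 = deepfake

# Logistic regression trained by plain gradient descent.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(fake)
    grad_w = X.T @ (p - y) / len(y)         # gradient of log loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

Real detectors use deep networks over video frames rather than hand-built features, but the training loop follows the same pattern: show the model labeled real and fake examples, and adjust it to reduce its mistakes.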
What about fake audio?
Training computers to recognize visual inconsistencies is one way researchers at SRI are working to detect deepfakes. They’re also focusing on fake audio.
Researchers at Carnegie Mellon used artificial intelligence to transfer facial expressions and mannerisms from one video to another. In this example, they grafted some of Martin Luther King Jr.’s facial movements onto former President Barack Obama.
The researchers at Carnegie Mellon use a class of algorithms called generative adversarial networks, or GANs, to copy the way one video’s subject moves their mouth and face and duplicate those movements on the subject of another video.
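The adversarial idea behind GANs can be illustrated with a toy example. The sketch below is a deliberate simplification, not Carnegie Mellon’s model: a one-parameter generator and a logistic discriminator compete over one-dimensional numbers instead of video frames, but the training loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy GAN on 1-D data: real samples come from N(4, 1). The generator
# maps noise z ~ N(0, 1) to w*z + b and must learn to mimic the real
# distribution, while the discriminator D(x) = sigmoid(a*x + c) learns
# to tell real samples from generated ones.
w, b = 1.0, 0.0          # generator parameters
a, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for _ in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    ds_real = sigmoid(a * real + c) - 1.0   # gradient of -log D on real
    ds_fake = sigmoid(a * fake + c)         # gradient of -log(1-D) on fake
    a -= lr * np.mean(ds_real * real + ds_fake * fake)
    c -= lr * np.mean(ds_real + ds_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    ds_g = sigmoid(a * fake + c) - 1.0      # gradient of -log D on fake
    w -= lr * np.mean(ds_g * a * z)
    b -= lr * np.mean(ds_g * a)

# b is the mean of the generator's output distribution.
print(f"generator output mean: {b:.2f}")
```

The two models improve by competing: round after round, the generator’s output drifts toward the real data. At vastly larger scale, the same pressure is what produces convincing synthetic faces and facial movements.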
At Stanford University, researchers developed a technique to use any standard webcam to capture their own facial movements and put them onto a target video in real time.
Another example of the real-time webcam capture technique from researchers at Stanford University.
Who else is studying deepfake technology?
Researchers at academic institutions like Carnegie Mellon, the University of Washington, Stanford University, and the Max Planck Institute for Informatics are also experimenting with deepfake technology. While not part of DARPA’s program, their work, some of which is featured above, highlights different techniques by which artificial intelligence can be used to manipulate video. *Note: these clips do not have audio.
What happens if we can no longer trust our eyes or our ears?
For more than a century, audio and video have functioned as a bedrock of truth. Not only have sound and images recorded our history, they have also informed and shaped our perception of reality.
Some people already question the facts around events that unquestionably happened, like the Holocaust, the moon landing, and 9/11, despite video proof. If deepfakes make people believe they can’t trust video, the problems of misinformation and conspiracy theories could get worse. While experts told CNN that deepfake technology is not yet sophisticated enough to fake large-scale historical events or conflicts, they worry that the doubt sown by a single convincing deepfake could alter our trust in audio and video for good.
What if we can dismiss real events as fake?
Think about it: Would history be different if these recordings had been dismissed as fake?
In this excerpt from the now-infamous “smoking gun” White House tape, President Nixon is heard agreeing to have administration officials ask the director of the CIA to request that the FBI stop its investigation into the Watergate break-in. Once the “smoking gun” tape was made public, Nixon’s political support practically vanished, and he ultimately resigned.
In this clip released by Mother Jones in 2012, Mitt Romney was caught on camera at a campaign fundraiser saying that 47 percent of the country is dependent on the government. The video was a setback for Romney’s presidential ambitions.
What's next?
The emergence of deepfake technology has prompted members of the U.S. Congress to request a formal report from the Director of National Intelligence. Senator Marco Rubio worries about the global fallout if a convincing deepfake were to go viral before it could be detected.
A new kind of arms race?
- Reporter: Donie O'Sullivan
- Producers: Deborah Brunswick, Samantha Guff, Julian Quiñones
- Supervising Producer: Margaret Dawson
- Senior Creative Producer: Craig Waxman
- Senior Producer: Logan Whiteside
- Executive Producer: Wendy Brundige
- Managing Editor, CNN Business: Alex Koppelman
- Motion Graphics: Justin Weiss, Shane Csontos-Popko, Padraic Driscoll
- Design & Development: Sean O'Key