The Current

The fight against 'deepfake' videos includes former U.S. ambassador to Russia Michael McFaul

As technology continues to make it easier for people to create 'deepfake' videos, the threat to democracy has become more urgent. Former ambassador to Russia Michael McFaul shares how he was a target of this technology, which aimed to discredit him.

'A video circulated that suggested that I was a pedophile. What do you say to that?' says McFaul

Former U.S. ambassador to Russia, Michael McFaul says he was targeted in 'deepfake' videos trying to discredit him. (David Goldman/Associated Press)


Michael McFaul knows firsthand the negative impact of so-called "deepfakes" — digitally constructed videos that can make it appear that a person is saying or doing something they never did.

The former U.S. Ambassador to Russia — a vocal opponent of President Vladimir Putin — was a victim of this rapidly advancing technology.

McFaul was posted to Moscow during the Obama administration between 2012 and 2014. He says at the time Russia was starting to experiment with the video technology and created several fake photos and videos to discredit him.

"A video circulated that suggested that I was a pedophile. What do you say to that? You go on Twitter and argue you're not a pedophile? I mean, there's no excuse for that, no defence," McFaul told The Current.

"So it's effective. Disinformation is effective. Propaganda works."

He said such a narrative was difficult for a government to fight, but they pushed back with facts.

This image, made from a fake video featuring former President Barack Obama, shows elements of the facial mapping used in deepfake technology, which lets anyone make videos of real people appearing to say things they've never said. (Associated Press)

According to McFaul, deepfake videos may also allow public figures to retroactively avoid accountability for things they've said on tape in the past.

"When Donald Trump is recorded saying some really horrible things about how he treats women for instance — that happened in our presidential election in 2016 — it's going to be easier in the future for him, or other people like that, to say, 'well that's fake, that's not really me,'" McFaul explained.

"And how are we going to be able to know? It's really blurring what is fact and what is fiction and I think that's a pretty scary world."

'Incredibly significant' threat

The term "deepfake" was coined by a Reddit user, a blend of "deep learning" and "fake" video.

This technology uses facial mapping and artificial intelligence to produce videos that appear so genuine it's hard to spot the phonies.
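The face-swapping approach commonly used for deepfakes is often described as two autoencoders that share a single encoder: both faces are compressed into a common representation of pose and expression, and swapping in the other person's decoder renders one person's expression with the other's appearance. Below is a minimal numerical sketch of that data flow only — the dimensions, the random weights, and the "faces" are invented for illustration, and real systems use deep convolutional networks trained on many images per person.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a "face" is a flattened 8x8 grayscale patch,
# compressed to a 16-dimensional latent code.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder learns features common to both faces...
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) / np.sqrt(FACE_DIM)

# ...and each person gets their own decoder, which reconstructs
# that person's appearance from the shared latent code.
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def encode(face):
    # Shared encoder: face pixels -> pose/expression code.
    return W_enc @ face

def decode(latent, W_dec):
    # Person-specific decoder: code -> that person's face pixels.
    return W_dec @ latent

# The swap: take a frame of person A, encode it with the shared
# encoder, but decode it with person B's decoder. The output has
# A's pose and expression rendered with B's appearance.
frame_of_a = rng.normal(size=FACE_DIM)
latent = encode(frame_of_a)
fake_b = decode(latent, W_dec_b)

print(latent.shape, fake_b.shape)  # (16,) (64,)
```

The untrained random weights here only demonstrate the shapes and the routing; the swap becomes convincing only after both autoencoders are trained to reconstruct their respective faces through the shared encoder.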

What deepfakes are capable of

Developer and entrepreneur Gaurav Oberoi has experimented with deepfake technology and shares the results online.

In one example, he showcases how, after learning from about 300 input images and videos, the algorithm can make John Oliver appear to be hosting Jimmy Fallon's show.

The pace at which this technology has advanced in the last year has shocked researchers at the U.S. Defense Advanced Research Projects Agency (DARPA), who have been tasked with finding ways to detect fake content.

Researcher Hany Farid said that in the last two years, deepfakes have grown into a significant political and social concern.

"There is almost no doubt that within a year or so, well in time for the next national election at least here in the U.S., this is going to be a real threat," Farid told The Current's guest host Ioanna Roumeliotis. 

"There are physiological signals that are very innately human that tend to get disrupted when the computer synthesizes content," says DARPA researcher Hany Farid.
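One physiological signal researchers have used to catch synthesized video is blink rate: early deepfakes tended to show subjects blinking far less often than people naturally do. The toy detector below sketches that idea only — the per-frame data, the frame rate, and the blink-rate threshold are made up for illustration, and real detectors analyze many signals with trained models.

```python
def count_blinks(eye_closed_frames):
    """Count blinks in a per-frame True/False 'eye closed' series.

    Each unbroken run of closed-eye frames counts as one blink.
    """
    blinks, previously_closed = 0, False
    for closed in eye_closed_frames:
        if closed and not previously_closed:
            blinks += 1
        previously_closed = closed
    return blinks

def looks_synthetic(eye_closed_frames, fps=30, min_blinks_per_min=6):
    """Flag a clip whose blink rate falls below a (made-up) human
    baseline of roughly 6 blinks per minute."""
    minutes = len(eye_closed_frames) / fps / 60
    rate = count_blinks(eye_closed_frames) / minutes
    return rate < min_blinks_per_min

# A 60-second clip at 30 fps containing a single 5-frame blink:
# well below a natural blink rate, so the heuristic flags it.
clip = [False] * 1800
clip[500:505] = [True] * 5
print(looks_synthetic(clip))  # True
```

A heuristic this simple is easy to defeat once forgers know about it, which is part of why detection is described as an arms race.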

Deepfakes become more dangerous, and their impact more potent, when combined with the speed at which they can proliferate over social media.

"The fact that the social media companies are aggressively promoting this content because it engages users ... [means] that threat is incredibly significant," said Farid, who is also a computer science professor specializing in digital forensics at Dartmouth College.

He criticized companies like Facebook, Google and Twitter for not taking enough responsibility for how their technology and platforms can be misused in ways that potentially lead to harm.

"We have to acknowledge that technology has a dark side to it, and to pretend that technology is inherently good, is going to make everybody happy, is incredibly naive," he said.

"We have to do better than we have over the last few decades."



With files from the Associated Press. This segment was produced by The Current's Danielle Carr. 
