The health of the Internet in 2019: Deepfakes, biased AI and addiction by design
Mozilla releases annual report
The last year has not been a particularly good one for the internet.
In some ways, it's a space dominated by fake news, algorithms designed to keep us glued to our devices, privacy-invading bots, and worse.
You decided to focus on several issues in the health of the internet, issues that we've actually looked at on Spark as well. I want to talk about deepfakes. How are these so-called deepfakes affecting people's ability to use the internet effectively?
Those of us who are a little older were all surprised the first time we saw photoshopped images—that you could actually take a photo and make it look like something it wasn't. We're now moving to the point with AI where we can do that to moving images.
You also have genuine attempts to confuse people, manipulate people, and pull the wool over their eyes by making a video of somebody like Barack Obama—or whoever—that actually isn't that person. And to the naked eye you often can't tell. The technology is getting better, and the technology is going mainstream.
So it raises questions about authenticity: the senses we've relied on for the last century of electronic media now really just completely fail us.
Especially in the context of what we've already seen—even before deepfakes—where the spread of "fake news," and our tendency to share stuff without thinking very critically about it, has had a serious impact on our understanding of what truth is and what the news is.
Yeah. And you know, as we've seen, it shouldn't be a surprise: people with bad intent weaponize the internet in more powerful ways. There can be lots of fun, cute and interesting uses of that kind of deepfake technology. But we are already seeing—and we will see more—use of it by people who really want to confuse, distract and polarize.
The other thing that comes up in the report is the issue of bias in artificial intelligence and machine learning, which is something we've talked about on the show. So what's Mozilla's concern about bias?
You know, we see that in everything—[like] issues around predictive policing, where automated systems decide how policing resources are allocated, and the data that was fed into them was past police records. But it turns out that past police records have a lot of racial bias in them.
And we've even seen things you might find counterintuitive, like: why would a camera in a self-driving car recognize white faces better than Black faces? Well, it turns out the data sets had more white faces in them—which means those cars are actually going to be more lethal to people who aren't white. So it really goes back to the concern that people can get hurt, and people can get discriminated against.
At the end of the day, are there any bright spots or reasons for optimism?
I think it's a call to action for people: how do we design this stuff? We still have the rule of law. We still have governments, so we can actually decide where to take this. We're at that kind of a juncture right now—and we could also govern it stupidly. So now is the time for engaging in really thoughtful, hard-nosed investigation of what we want the digital society to look like.
This interview has been edited for length and clarity. Click the listen button above to hear the full conversation.