Controversial Clearview AI app could 'end privacy.' So, what now?

When it comes to the new facial recognition app, there's no putting the genie back in the bottle, say experts. But new laws could give users some recourse against the unfettered use of their data.

Company says it doesn't have plans to offer app to consumers

A new app called Clearview AI that uses facial recognition has generated concern about privacy violations. (Photo illustration/CBC)

A powerful and controversial new facial recognition app can identify a person's name, phone number and even their address by comparing their photo to a database of billions of images scraped from the internet. Now, a class-action lawsuit is taking on the startup, arguing that its app is a threat to civil liberties.

In a New York Times investigation, journalist Kashmir Hill revealed how a groundbreaking yet little-known facial recognition tool could "end privacy as we know it." 

The app in question, Clearview AI, has the capacity to turn up search results, including a person's name and other information such as their phone number, address or occupation, based on nothing more than a photo. 

Who's using it?

While it's not available for public use — you won't find it in the App Store — according to the company, it's already being used by more than 600 law enforcement agencies.

Even though Clearview AI says it doesn't have plans to make a consumer-facing version of the app, it's easy to imagine a copycat jumping on what it deems a lucrative market opportunity. We already outsource parts of our memories, turning to tech to help us remember things like phone numbers; an app that could help you recall people's names at conferences or reunions feels like a natural evolution of how we already use our smartphones.

Facial recognition technology has vastly improved, leading to privacy concerns. (Getty Images)

"More digital memories are going to be appearing," says Ann Cavoukian, the executive director of the Global Privacy and Security by Design Centre. "And if we don't address these issues in terms of preventing non-consenting access to this data, we're going to lose the game."

So now what? 

Now that a tool like this is on the market, is there any hope for putting the proverbial data genie back in its bottle, or is this in fact the end of anonymity?

"At this point with facial recognition, the cat is out of the bag. We've seen multiple implementations of it in the public and private sector. Even if this isn't used now, someone will use it," says Tiffany C. Li, an attorney and visiting scholar at the Boston University School of Law.

According to Li, the best option is to regulate both the creation and the use of the technology.

"It's easy to say you should regulate companies like Clearview AI, which create these services," she says. But, she says, the big picture is more complex.

"Who are they selling them to? And if they're working with third parties, how can we make sure that those companies don't misuse the technology?"

Li notes that in addition to laws by which companies would need to abide, there needs to be built-in recourse for individuals to protect their privacy and their rights.

Regulation seems the best bet for preserving privacy

Indeed, that could already be proving to be our best hope. In Illinois, a lawsuit seeking class-action status has been filed against Clearview AI, alleging the company broke state privacy law, namely the Biometric Information Privacy Act (BIPA). The law safeguards Illinois residents from having their biometric data used without consent, and the lawsuit argues that the app's use of artificial intelligence algorithms to scan the facial geometry of each individual depicted in the images violates multiple privacy laws.

The lawsuit, which is seeking, among other things, an injunction to stop Clearview from continuing its business, argues that the company "used the internet to covertly gather information on millions of American citizens, collecting approximately three billion pictures of them, without any reason to suspect any of them of having done anything wrong, ever."

Laws protecting people's biometric data could prove to be our best bet when it comes to preserving any semblance of privacy, says privacy law scholar Frank Pasquale, but regulatory safeguards like Illinois's BIPA are still few and far between.

Flipping the way we think about the use of big data

Because technology advances at light speed compared to the laws meant to keep those who use it safe, many privacy advocates are pushing for a moratorium on the use of facial recognition.

A temporary ban would give regulators a chance to catch up, lest the technology advance past a point of no return, Pasquale says.

Our current way of dealing with privacy is broken, says Pasquale, who argues that "we can't expect individual users to keep track of all of the data that is being gathered about them, and what is being done with that data."

People walk past a poster simulating facial recognition software at the Security China 2018 exhibition on public safety and security in Beijing, China, on Oct. 24, 2018. (Thomas Peter/Reuters)

Instead, he says, we must flip the way we think about how big data, and technologies like facial recognition, are used. 

"The current presumption is that any use of this data is fine, absent an explicit governmental regulation," says Pasquale, who argues that the opposite should be the case.

Pasquale says, given the dangers of facial recognition — such as its tendency to misidentify individuals and foster biases — we ought to require organizations to have approvals in place before operating with data. Private entities, he says, should have to obtain a licence from a governmental authority "specifying the nature of the approved use, vetting the validity of the underlying data and specifying modes of recourse for those adversely impacted."

As for whether the availability of a tool like Clearview AI means that privacy as we know it is over, "this is going to be a tough one, because the technology is out there," says Cavoukian. 

But, she says, laws that protect users and their data will go a long way toward preventing harm, adding that, in her mind, "there is no turning point that doesn't allow you to return to greater privacy."


Ramona Pringle

Technology Columnist

Ramona Pringle is an associate professor in the Faculty of Communication and Design and director of the Creative Innovation Studio at Ryerson University. She is a CBC contributor who writes and reports on the relationship between people and technology.

