Police use of facial recognition program breaks 'trust relationship' with public, privacy expert says
Clearview AI uses database of billions of images scraped from social media
A privacy expert says she would be "profoundly surprised" if a review of controversial facial recognition technology gives Ontario police forces a green light to use the software in criminal investigations.
Brenda McPhail of the Canadian Civil Liberties Association said that Clearview AI software, named for the U.S.-based startup that developed it, has the "potential to seriously wipe out privacy."
"It sounds hyperbolic, it sounds dystopian — but the reality is that we live a lot of our lives online," she told CBC Radio's Metro Morning on Friday. McPhail heads up the CCLA's privacy, technology and surveillance project.
Clearview AI uses a database of more than three billion images, scraped from dozens of social media sites, to turn up a wide range of information, including a person's name, phone number, address or occupation. It is already used as an investigative resource by more than 600 law enforcement agencies around the world, an investigation by the New York Times revealed last month.
On Thursday, Toronto Police Chief Mark Saunders revealed that several officers in the force had been using Clearview AI since last October without his knowledge. A spokesperson said that Saunders ordered them to stop, and he has asked the Information and Privacy Commissioner of Ontario and the Crown Attorney's Office to review whether police can legally use the technology in investigations.
"I would be profoundly surprised if the privacy commissioner felt that this was an appropriate technology for use in a Canadian context," said McPhail.
"I find it hard to imagine that this would pass a privacy impact assessment."
McPhail added that Clearview AI's database consists of images "that are arguably collected illegally."
The terms of service for most social media companies are crafted to allow them to use images posted on their platforms, but those same agreements are supposed to ensure the photos are not used by a third party.
"We have, essentially, contracts with these companies, who have a duty to protect this information. And then we have police who are legally bound to obey our laws and our charter of rights and freedoms," she told host Piya Chattopadhyay.
"Using those images for police to conduct investigations, and ultimately prosecutions, is a problem."
'There is a trust relationship'
Police forces have defended their use of Clearview AI, saying the technology has assisted investigators in identifying both alleged perpetrators and victims of crime.
On the company's website, a review attributed to a "detective constable in the sex crimes unit" of a Canadian law enforcement agency praises the software.
"Clearview is hands-down the best thing that has happened to victim identification in the last 10 years. Within a week and a half of using Clearview, [we] made eight identifications of either victims or offenders through the use of this new tool."
Metro Morning asked Toronto police if the review came from one of their officers, but has not received a response to the question.
Beyond the legality, McPhail said, there are broader questions about the relationship between police and citizens.
"We give our police officers special powers in our society that we believe they need to keep us safe. But there is a trust relationship that has to be there, it's the foundation of those powers."
With files from Metro Morning