Toronto police board draft policy on AI a step forward, but privacy experts cautious about future
Draft policy is a step in right direction but more details are needed, experts say
Privacy and surveillance experts say they're cautiously optimistic about a draft policy that outlines how Toronto police will be able to use artificial intelligence (AI) systems — but they add that the devil is in the details.
That draft policy, released by the Toronto Police Services Board (TPSB), is now open to public consultation until Dec. 15.
In a press release, the TPSB says the policy aims to ensure AI is used by the Toronto Police Service (TPS) in a way that is "fair, equitable, and does not breach the privacy or other rights of members of the public." The release also claims it's the first policy of its kind among Canadian police boards or commissions.
"We're seeing something that's trying to get to grips with a massive, growing area before it's very common, and that's exactly what we should be doing," said David Murakami Wood, director of the Surveillance Studies Centre at Queen's University.
While seeking public input is a welcome shift toward accountability and transparency, experts acknowledge that using AI could still be harmful, particularly for marginalized communities.
"Artificial intelligence-based systems can be used to analyze and draw 'inferences' from data," said Kristen Thomasen, acting assistant professor at the University of British Columbia's Peter A. Allard School of Law.
"But in that process, the system very much decontextualizes what it's doing."
She warns those systems could make decisions about individuals or communities without considering the same social contexts, like the frequency or location of crime, that a human would. That could, for example, result in over-policing areas that have been historically and disproportionately targeted by law enforcement.
"The problem is that the data that goes into these systems is already collected within contexts that are already biased. It's often already itself, even if it's subtly or less subtly, skewed in various ways," said Murakami Wood.
"It contains decades, in some cases, of previous forms of bias and the results of previous forms of bias, which aren't in themselves necessarily considered to be biased."
Draft policy includes multiple risk levels
The TPSB draft policy includes a five-level assessment to determine the risk certain AI systems pose to the public.
Facial recognition technology that uses illegally sourced data and enables mass surveillance of citizens would be considered an extreme risk, for example. According to the draft policy, technologies deemed extreme-risk would not be approved for use by TPS.
High-risk scenarios might include analytics technology that provides data on where police should be deployed in order to "maximize crime suppression." A traffic analysis system that makes recommendations for locations where police should be deployed might be considered medium risk. Both risk levels would be subject to evaluations and consultations, require a risk-mitigation plan and need approval by the police board.
Low- and minimal-risk scenarios might include software that transcribes audio from police-worn body cameras, or translation systems that provide website information in multiple languages, respectively. Low-risk technology will be reported to the board, and information about its application made publicly available.
The draft policy also includes a provision for reassessing risk on a schedule, a move that Murakami Wood applauds.
"[An AI application] might not actually be just moderate risk in practice," said Murakami Wood. "It may turn out to fit into the extreme-risk category, after all — and then there is a process to actually remove those things again."
When asked for a statement on the kinds of AI technology the service plans on using, TPS spokesperson Connie Osborne told CBC Radio's Day 6 by email the force "will await the input from the public consultation and feedback before any next steps."
Other police forces will be watching, says expert
There is currently no regulation of AI technologies at the provincial or federal level. Both Murakami Wood and Thomasen say the Toronto Police Services Board's move to develop an AI policy could be a reaction to the purchase of Clearview AI, a facial recognition system, by police services across Canada.
Earlier this year, Canadian privacy commissioners found the company behind the technology violated Canadian privacy law by conducting mass surveillance through the collection of Canadians' photos without their consent.
"What Clearview does is mass surveillance and it is illegal," federal privacy commissioner Daniel Therrien said in February.
Thomasen said she was encouraged that the draft policy recognizes the importance of transparency and careful scrutiny of any AI systems that would be used by a police force.
But she said more details on the policy are needed to understand how effective it will be.
"Who is actually going to be involved in these assessments? Because if it's just somebody in the office of the Toronto Police Services Board who hasn't got any particular expertise in this ... I wouldn't be so confident this would be a good process, as if they have an external advisory board especially specialised in AI ethics," said Murakami Wood.
Following the public consultation, the Toronto Police Services Board says a final policy will be presented for approval at its February 2022 meeting.
"I suspect that the other police boards will be watching this with some caution because I don't expect a lot of them want to do this, but if it goes well and it seems to be popular, they might well follow Toronto," said Murakami Wood.
Written by Jason Vermes. Interview with Kristen Thomasen produced by Sameer Chhabra.