Day 6

Toronto police board draft policy on AI a step forward, but privacy experts cautious about future

Privacy and surveillance experts say they're cautiously optimistic about a draft policy that outlines how Toronto police will be able to use artificial intelligence (AI) systems — but they add that the devil is in the details.

Draft policy is a step in right direction but more details are needed, experts say

The Toronto Police Services Board has opened a public consultation on draft policy that will guide how Toronto police use artificial intelligence technology. (Robert Krbavac/CBC)

That draft policy, released by the Toronto Police Services Board (TPSB), is now open to public consultation until Dec. 15.

In a press release, the TPSB says the policy aims to ensure AI is used by the Toronto Police Service (TPS) in a way that is "fair, equitable, and does not breach the privacy or other rights of members of the public." The release also claims it's the first policy of its kind among Canadian police boards or commissions.

"We're seeing something that's trying to get to grips with a massive, growing area before it's very common, and that's exactly what we should be doing," said David Murakami Wood, director of the Surveillance Studies Centre at Queen's University.

While seeking public input is a welcome shift toward accountability and transparency, experts acknowledge that using AI could still be harmful, particularly for marginalized communities.

"Artificial intelligence-based systems can be used to analyze and draw 'inferences' from data," said Kristen Thomasen, acting assistant professor at the University of British Columbia's Peter A. Allard School of Law.

"But in that process, the system very much decontextualizes what it's doing."

She warns those systems could make decisions about individuals or communities without considering the same social contexts, like the frequency or location of crime, that a human would. That could, for example, result in over-policing areas that have been historically and disproportionately targeted by law enforcement.

"The problem is that the data that goes into these systems is already collected within contexts that are already biased. It's often already itself, even if it's subtly or less subtly, skewed in various ways," said Murakami Wood.

"It contains decades, in some cases, of previous forms of bias and the results of previous forms of bias, which aren't in themselves necessarily considered to be biased."

WATCH | AI's inability to recognize racist language (2:27)

Artificial intelligence is used in translation apps and other software. The problem is the technology is unable to differentiate between legitimate terms and ones that might be biased or racist.

Draft policy includes multiple risk levels

The TPSB draft policy includes a five-level assessment to determine the risk of certain AI systems on the public.

Facial recognition technology that uses illegally sourced data and is capable of mass surveillance of citizens would be considered an extreme risk, for example. According to the draft policy, technologies deemed extreme-risk would not be approved for use by TPS.

High-risk scenarios might include analytics technology that provides data on where police should be deployed in order to "maximize crime suppression." A traffic analysis system that makes recommendations for locations where police should be deployed might be considered medium risk. Both risk levels would be subject to evaluations and consultations, require a risk-mitigation plan and need approval by the police board.

The TPSB draft policy would create five risk levels for the various uses of AI. An AI-based transcription service used to transcribe audio from police body-worn cameras would be considered low risk, for example. (Evan Mitsui/CBC)

Low- and minimal-risk scenarios might include software that transcribes audio from police body-worn cameras, or translation systems that provide website information in multiple languages, respectively. Low-risk technology would be reported to the board, and information about its application made publicly available.

The draft policy also includes a provision for reassessing risk on a schedule, a move that Murakami Wood applauds.

"[An AI application] might not actually be just moderate risk in practice," said Murakami Wood. "It may turn out to fit into the extreme-risk category, after all — and then there is a process to actually remove those things again."

When asked for a statement on the kinds of AI technology the service plans to use, TPS spokesperson Connie Osborne told CBC Radio's Day 6 by email that the force "will await the input from the public consultation and feedback before any next steps."

Other police forces will be watching, says expert

There is currently no regulation of AI technologies at the provincial or federal level. Both Murakami Wood and Thomasen say the Toronto Police Services Board's move to develop an AI policy could be a reaction to the purchase of Clearview AI, a facial recognition system, by police services across Canada.

Earlier this year, Canadian privacy commissioners found the company behind the technology violated Canadian privacy law by conducting mass surveillance through the collection of Canadians' photos without their consent.

"What Clearview does is mass surveillance and it is illegal," federal privacy commissioner Daniel Therrien said in February.

Canadian privacy commissioners found that American technology company Clearview AI violated Canadian law when it collected images of people without their knowledge or consent. (Ascannio/Shutterstock )

Thomasen said she was encouraged that the draft policy recognizes the importance of transparency and careful scrutiny of any AI systems that would be used by a police force.

But she says more details on the policy are needed to understand how effective it will be.

"Who is actually going to be involved in these assessments? Because if it's just somebody in the office of the Toronto Police Services Board who hasn't got any particular expertise in this ... I wouldn't be so confident this would be a good process, as if they have an external advisory board especially specialised in AI ethics," said Murakami Wood.

Following the public consultation, the Toronto Police Services Board says a final policy will be presented for approval at its February 2022 meeting.

"I suspect that the other police boards will be watching this with some caution because I don't expect a lot of them want to do this, but if it goes well and it seems to be popular, they might well follow Toronto," said Murakami Wood.


Written by Jason Vermes. Interview with Kristen Thomasen produced by Sameer Chhabra.
