How law enforcement is struggling to control hate in an online world
Police signal zero-tolerance, reminding social media users comments made online can have legal consequences
Since the fatal shooting last year of six men in a Quebec City mosque, police in the province have taken a more aggressive stand against online hate speech.
Within a week of the Jan. 29 shooting, three people were arrested — two in the Montreal area and one in Quebec City — and accused of making threatening comments on social media about Muslims.
Quebec City police made further arrests in October (a 47-year-old woman) and December (a 46-year-old man) on the same grounds.
None of the arrests were directly connected to the shooting at the mosque, though the man accused of the killings also expressed radical anti-immigrant views on social media.
But police have used the incidents to signal a zero-tolerance approach, reminding social media users that comments made online can have legal consequences.
"It's OK to exchange ideas on social media, but there really is a limit to respect so that it doesn't become criminal," Mélissa Cliche, a spokesperson for Quebec City police, said following the arrests.
This law-and-order strategy — a digital version of broken-windows policing — stands out amid the growing effort to limit the spread of extremist materials over the internet.
'Havens' of hate
Not only can posting hateful comments online itself amount to a crime, but experts also worry about how social media facilitates the spread of material that encourages radicalization, such as recruitment videos or bomb-making instructions.
Law enforcement agencies have been forced to acknowledge that given the size of networks such as Facebook (which has more than 18 million users in Canada), they cannot be policed without the help of the tech companies themselves.
Last year, leaders of the G7 countries pressured tech giants to take a more active role in preventing their websites from being used as platforms for violent ideologies.
Facebook, Google and Twitter agreed in the fall to remove terrorist content within two hours of it appearing on their sites. Public Safety Minister Ralph Goodale met recently with their Canadian representatives to ensure their collaboration with the initiative.
"They expressed great support for the process," Goodale told CBC's The House earlier this month.
"The companies do not want their enterprises to become known as havens for, or platforms, for terror or hate."
Problems of scale
Canadian law enforcement agencies, by and large, depend on public complaints to draw their attention to extremist comments online.
"We don't monitor online activity of individuals. That's not our job," said John Athanasiades, who heads operations for the RCMP's Integrated National Security Enforcement Team (INSET) in Montreal.
Once a complaint is flagged to the RCMP, Athanasiades' team will analyze open-source data to determine whether an investigation is warranted.
If there is a national security component to the investigation, the RCMP will remain involved. Otherwise, it hands the reins over to the appropriate local police force.
Police may also opt for "disruption tactics" if there is insufficient evidence of a crime, but enough to concern investigators.
"We will go out and meet this individual, if we've identified them, and ask them to explain themselves," Athanasiades said.
"If necessary, we will continue to monitor that individual, if we're not satisfied that this individual is not going to stop, or if we believe that individual is a threat."
This type of police intervention nevertheless faces a problem of scale.
Not only do millions of Canadians use Facebook, Twitter and other social media platforms, but the number of comments that potentially qualify as hate speech has also risen sharply.
Between November 2015 and November 2016, there was a 600 per cent increase in the amount of racist and intolerant language used by Canadians on social media, according to a study by Cision, a media marketing company.
Finding real threats amid the noise
In order to identify real threats amid the white noise of the internet, police forces need to make better use of artificial intelligence, said Paul Laurier, a former counter-terrorism investigator with the Sûreté du Québec.
Laurier, who now runs a firm that conducts digital forensic analysis, believes that data mining tools can help police isolate online profiles of individuals liable to commit mass killings.
As part of his ongoing doctoral work in computer engineering at the École de technologie supérieure, Laurier coded information from mass killings dating back to Columbine.
His research suggests mass killers share certain behavioural patterns: they are narcissistic, they display an admiration for weapons and will break away from their normal pattern of social interaction, often indicating their murderous intention to someone.
Laurier is developing software that will link existing artificial intelligence programs that can already analyze text for signs of narcissism, recognize certain images and map patterns in social media activity.
"The goal of our program — it's not finished yet — is to detect speech, detect images of guns or bombs and detect whether a person is going outside of their usual network. If so, we have a problem," Laurier said.
Once a problem is flagged, he added, it would then be up to an officer to carry out a more personalized investigation.
It is an approach that would allow police to allocate their resources more effectively, but Laurier said there is a reluctance among law enforcement agencies in Quebec to trust the technology.
"When you talk about computer science people are afraid. It costs a lot of money, and there have been some financial disasters," he said.
"But at the end you will manage more cases faster and humans will intervene in the right way."
This story is part of CBC's in-depth look at the aftermath of the shooting at the mosque in Quebec City one year ago. CBC will also have special coverage of the commemorative events on Monday, Jan. 29, including live radio, TV and online broadcasts.