Experts in both terrorism and digital media say British Prime Minister Theresa May's accusation that internet companies are providing a "safe space" for extremism isn't helpful in the fight against terrorism.
"This is what you do when you want to appear like you're really taking care of the problem but you're helpless," said Kamran Bokhari, a fellow at George Washington University's program on extremism, noting that May is not the first politician to view controlling content posted on the internet as a way to stop radicalization.
Putting pressure on tech companies to take more responsibility for content that users post, including a proposal to charge an industry-wide levy toward the cost of policing the internet, has been part of May's campaign platform as the British election approaches on Thursday.
"We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet — and the big companies that provide internet-based services — provide," May said as part of a televised address on Sunday, after terror attacks in London killed seven people — including a Canadian woman — and critically injured 21 more.
'Hostile environment for terrorists'
Facebook, Twitter and Google all issued statements in response to May's comments on Sunday, saying they are working to continually remove and prevent such content.
"We want Facebook to be a hostile environment for terrorists," said Simon Milner, director of policy at Facebook in an emailed statement to Reuters.
A statement from Nick Pickles, U.K. head of public policy at Twitter, said, "Terrorist content has no place on Twitter," adding that the company had suspended nearly 400,000 accounts in the second half of 2016.
Google's statement said the company's "thoughts are with the victims of this shocking attack, and with the families of those caught up in it.
"We are committed to working in partnership with the government and NGOs to tackle these challenging and complex problems, and share the government's commitment to ensuring terrorists do not have a voice online," the statement, attributed to a U.K.-based Google spokesperson, said.
Responsibility to minimize hate
Google also said it employed "thousands of people" and spent "hundreds of millions of pounds to fight abuse on our platforms."
Social media and internet services "do have a responsibility to minimize any sort of hate on their platforms," Fuyuki Kurasawa, research chair in global digital citizenship at York University, told CBC News.
But despite his belief that the companies need to invest more in hiring people to actively monitor content, rather than relying as heavily as they do on automated algorithms, Kurasawa criticized May's comments as a knee-jerk reaction to the complexities of terrorism and hate.
"It's a bit like a few years ago when there were mass shootings, people would blame video games for the shootings," he said. "It's a very easy target to blame ... rather than actually trying to understand the causes underlying this.
"The internet and social media are simply manifestations or symptoms of what's going on. They're not the cause."
Terrorist attacks are "politically damaging" events, Kurasawa said, and May's remarks, which echo an already announced campaign promise to crack down on internet providers, could also be viewed as "opportunistic" coming just days before the election.
Accusation 'goes overboard'
Blaming social media companies after terrorist attacks "often goes overboard," said Amarnath Amarasingam, a Canadian expert in radicalization and terrorism.
"Social media companies do have a role to play but it's completely false to say they somehow shoulder the blame for what is happening," Amarasingam told CBC News in an email.
"To think radicalization happens just online or just on social media is simply false," he said. "It's a multifaceted process. We had terrorism before social media."