Students create tool to filter homophobic, racist content
Web browser extension scrubs offensive content on sites like Facebook, Twitter
When it comes to reading news online and watching YouTube videos in 2018, "don't read the comments" is, in most cases, sage advice.
But a University of Ottawa student wants people to embrace the comment section with a bit of handy censorship.
Nikola Draca, a third-year statistics student, and his colleague Angus McLean, 23, an engineering student at McGill University, put their heads together to develop Soothe, an extension for the Google Chrome web browser. It blurs out homophobic, racist, sexist, transphobic and other hateful language as the user browses the web.
Draca, 22, got the idea after a friend who had been suffering from severe anxiety asked him whether there was an online tool that could filter out triggering content. He and McLean built a prototype of the extension during a hackathon at Carleton University last year and have been developing it ever since.
So far, Soothe has about 500 users and works on websites like Facebook, Twitter and YouTube, according to Draca. The Ottawa student told CBC Radio's All in a Day on Wednesday the extension uses "sentiment analysis" to target hateful words and phrases and blurs them in real time.
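Draca didn't describe Soothe's internals, but a browser extension like this typically pairs a wordlist or sentiment check with a content script that blurs flagged elements as they appear. The sketch below illustrates that general pattern only; the function name, the placeholder wordlist and the scoring rule are invented for the example and are not Soothe's actual code.

```javascript
// Illustrative sketch of the detect-and-blur pattern (not Soothe's code).
// The blocklist entries are harmless placeholders.
const BLOCKLIST = new Set(["nastyword", "meanword"]);

// Crude stand-in for sentiment analysis: flag any text that
// contains a blocklisted word after simple tokenization.
function shouldBlur(text, blocklist = BLOCKLIST) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  return words.some((w) => blocklist.has(w));
}

// In a Chrome content script, a MutationObserver would run this on
// newly added comments in real time, e.g.:
//   if (shouldBlur(node.textContent)) node.style.filter = "blur(4px)";
```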
The emergence of cyber-bullying was one of the main reasons behind the project, he said.
"It's becoming an extremely relevant issue. We personally thought that there wasn't enough attention being put on it despite it being talked about very often," Draca said.
"And we wanted to empower people suffering from anxiety, PTSD online and just do a small part of making their online experience better."
Draca said that during his research, YouTube stood out as one of the websites with the most homophobic content. Overall, the extension's current 500 users have spotted 35,000 instances of online harassment.
What's unique about the extension, he added, is that it ships with a database of offensive words already built in, whereas similar tools require the user to add them manually. Users can also tag words the development team might have missed.
"Which means that naturally as language is evolving as new slang is appearing we're going to stay relevant," he said.
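One simple way to implement the approach described above, shipping a default database while letting users tag new words, is to merge the two lists when the filter loads. This is a hedged sketch with invented names, not Soothe's implementation:

```javascript
// Illustrative only: combine a built-in wordlist with user-tagged words
// so new slang can be filtered without waiting for a software update.
const DEFAULT_WORDS = ["nastyword", "meanword"]; // placeholder entries

function buildBlocklist(userTagged = []) {
  // A Set de-duplicates entries and gives O(1) lookups while filtering.
  return new Set(
    [...DEFAULT_WORDS, ...userTagged].map((w) => w.toLowerCase())
  );
}
// In a real extension, userTagged would be loaded from persistent
// storage such as chrome.storage.sync.
```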
"What we've, from the very beginning, wanted to do is be careful not to censor ideas or important dialogue. I don't think the important thing to do is just to remove dialogue that could be important. Rather, just give people an additional layer where they can make the decision whether or not they want to read a comment that could potentially be offensive or harmful."