Twitter asks for help to fix the 'health' of its conversations

The social media giant turns to social science to solve its toxic culture
Twitter is looking for help to create a series of metrics to identify whether interactions on its platform are 'healthy'. (LOIC VENANCE/AFP/Getty Images)

After years of complaints that the company has been unable or unwilling to curb harassment and hate speech on their platform, Twitter is asking for help.



On their blog, Twitter announced that they would be taking submissions for ways to identify problem users. The winner would receive funding for their research to create a way to measure the 'health' of conversation on Twitter.


Jen Golbeck is a professor of computer science at the University of Maryland. While she thinks it may be useful to have metrics to help determine how healthy conversations are online, she said that Twitter has a decision to make before those metrics can help solve their problems. "They can't even agree on what their own policies are," said Golbeck. "They will ban people for things that we can all agree would be pretty innocuous, and then they'll allow really violent harassment to go past and say that it's not a violation of their terms."

"If you can't get the humans to agree on it, you definitely can't get some artificial intelligence or an automated system to agree on it."

Recently, Twitter moved to ban neo-Nazis and bots on the platform, and the response from those users and others on the far right was to accuse Twitter of censorship.

That line, between maintaining civil interactions and censorship, is something Twitter has struggled with. And while banning Nazis and harassers seems like a pretty clear decision, Golbeck says that finding where that line should be is one of the company's biggest challenges.


"I don't have a lot of sympathy for people who are threatening women online and then getting banned… But we have to be very careful that that doesn't slide into, 'these people are just sharing an unpopular opinion and also getting blocked or getting censored'."

While metrics like the ones Twitter is proposing now will help the company identify when a user has crossed that line, they won't necessarily help it decide where that line is.
