Twitter asks for help to fix the 'health' of its conversations
After years of complaints that the company has been unable or unwilling to curb harassment and hate speech on its platform, Twitter is asking for help.
We simply can’t and don’t want to do this alone. So we’re seeking help by opening up an RFP process to cast the widest net possible for great ideas and implementations. This will take time, and we’re committed to providing all the necessary resources. RFP: <a href="https://t.co/SFb3e8joLl">https://t.co/SFb3e8joLl</a>
—@jack
On its blog, Twitter announced that it would take submissions for ways to measure the 'health' of conversation on the platform and identify problem users. Winners would receive funding for their research.
Jen Golbeck is a professor of computer science at the University of Maryland. While she thinks it may be useful to have metrics to help determine how healthy conversations are online, she said that Twitter has a decision to make before those metrics can help solve their problems. "They can't even agree on what their own policies are," said Golbeck. "They will ban people for things that we can all agree would be pretty innocuous, and then they'll allow really violent harassment to go past and say that it's not a violation of their terms."
"If you can't get the humans to agree on it, you definitely can't get some artificial intelligence or an automated system to agree on it."
Recently, Twitter moved to ban neo-Nazis and bots on the platform, and the response from those users and others on the far right was to accuse Twitter of censorship.
That line, between maintaining civil interactions and censorship, is something Twitter has struggled with. And while banning Nazis and harassers seems like a clear decision, Golbeck says that finding where that line should be is one of the company's biggest challenges.
"I don't have a lot of sympathy for people who are threatening women online and then getting banned… But we have to be very careful that that doesn't slide into, 'these people are just sharing an unpopular opinion and also getting blocked or getting censored'."
While metrics like the ones Twitter is proposing may help it identify when a user has crossed that line, they won't necessarily help it decide where that line is.