Opinion

Will Twitter's attempt to 'mute' the haters really make social media more civil?

Twitter’s recent action is intended to crack down on online trolling, but not everyone is satisfied with its steps to curb harassment, writes Ramona Pringle


Twitter has recently suspended a number of alt-right and white nationalist accounts in a push to create a more civil online space. (Soeren Stach/AP/Canadian Press)

Twitter has finally taken a long-awaited stand against online trolling, announcing new abuse policies in a post published last week. But with so much damage already done — particularly during the U.S. election — is it too little, too late?

In a statement published on its website, Twitter admitted that "the amount of abuse, bullying, and harassment we've seen across the internet has risen sharply over the past few years." It also announced enhanced controls, reporting and enforcement for dealing with harassment and abuse.

The platform's users will already be familiar with the mute feature, which lets you hide accounts you don't want to see. New functionality expands that control, letting users silence specific words and phrases as well.

Twitter is allowing users to mute specific words and phrases they don't want to see. (Chesnot/Getty)

Twitter is also updating the way that hateful conduct is dealt with, allowing users to report it for others whenever they see it happening, instead of putting the onus entirely on the target of the abuse. The company's hope is that this approach will help "to strengthen a culture of collective support on Twitter."

On top of all of that, Twitter has recently suspended a number of alt-right and white nationalist accounts in a push to create a more civil online space. The company has long been criticized for taking too long to respond to reports of abuse, and for not taking strong enough action when it does. Since the summer, we've witnessed the harassment of female sports journalists and of prominent actresses targeted because of their race and gender; in a report last month, the Anti-Defamation League cited roughly 2.6 million anti-Semitic tweets in the past year, with over 10 billion impressions across the internet.

Twitter's recent action is intended to crack down on all of that, but not everyone is satisfied with its steps to curb harassment. They argue that while the expanded mute feature might make the platform more pleasant to use, it could leave some in very real danger.

In a post on Motherboard, game designer Brianna Wu, who has been a target of ongoing harassment, says: "Twitter's new solutions work by hiding content from you, rather than removing it. This is a solution that can leave you in more danger. If someone is threatening to kill you, or if private information about you is released, you're more likely to simply be unaware of it."

Abuse is common

While Twitter often bears the brunt of criticism about online toxicity, this kind of behaviour is all too common elsewhere as well — from YouTube comments to Reddit threads. Newspaper comments have gotten so bad that many media outlets have actually shut them down because fixing them seems like too daunting a task.

Wikipedia, meanwhile, has proven to be a slightly more positive space. While it is certainly not immune to sexism and flame wars, according to a recent paper, individuals who edit political articles on the platform seem to grow less biased.

By design, Wikipedia exposes users to alternate points of view. (Wikimedia/Wikipedia)

Because of the nature of the site, users have a tendency to edit pages with opposing political positions; a right-wing contributor is likely to edit a left-wing page and encounter different views, and vice versa — something that researchers suggest helps people break out of their filter bubbles. Indeed, of the 70,000 articles analyzed for the study, contributors who started out with extreme political stances developed more neutral language over time.

The lesson for other platforms? According to Barnabe Geis, the Manager of Impact & Accelerators at the Centre for Social Innovation in Toronto, there is something to be said for a design that promotes meaningful engagement with differing viewpoints, as opposed to the status quo, which often "perpetuates a culture of anger, confusion and fear, where people lash out online and at each other, where sides are black and white and cannot come together."

Time for soul-searching

Yet part of the challenge when it comes to fostering online civility is still acknowledging the importance of the undertaking. We've long diminished the impact of social media, shrugging it off as "just" Facebook, or "just a tweet." But the internet is a reflection of us, and we are good, bad and everything in between. Vint Cerf, an internet pioneer who is currently Google's Chief Internet Evangelist, says, "If you don't like what you see, don't break the mirror." In other words, instead of pointing fingers at which social media platform is to blame, it's time to do some soul-searching.

And sometimes, soul searching can just mean taking a break from social media in the interest of self-care. So while developers work on redesigning online communities in an effort to make them more civil, there's nothing wrong with pressing the ultimate "mute" button and taking a hiatus from the digital sphere for a little while. 

This column is an opinion. For more information about our commentary section, please read this editor's blog and our FAQ.

ABOUT THE AUTHOR

Ramona Pringle

Technology Columnist

Ramona Pringle is an associate professor in the Faculty of Communication and Design and director of the Creative Innovation Studio at Ryerson University. She is a CBC contributor who writes and reports on the relationship between people and technology.