
Social media sites censor a range of content, new report says

In light of criticism of 'fake news,' Facebook CEO Mark Zuckerberg has said the company 'must be extremely cautious about becoming arbiters of truth ourselves.' But a new report says Facebook already censors and removes a wide variety of content, as do other social media platforms.

Policies around what can and can't be posted online are often unclear, according to the report's co-author

Tina Spenst, a B.C. mom, complained in 2013 after she said a photo she posted of herself breastfeeding her daughter was removed from Facebook. Its policies now state such photos are permissible, but nudity remains one of the main reasons sites remove content, according to a new report. (Tina Spenst/Facebook)

Even as social media companies like Facebook struggle to filter out fake news, a new report says their rules on what can and can't be posted online are often unclear and lead to the controversial removal of some content.

Facebook CEO Mark Zuckerberg has announced steps to tackle the issue of misinformation, but has also said the company "must be extremely cautious about becoming arbiters of truth ourselves."

However, a new report says Facebook already censors and removes a wide variety of content — as do other social media platforms.

CBC Radio technology columnist Dan Misener has looked at what the report says about online censorship.

What does this new report say?

The report comes from the Electronic Frontier Foundation, a non-profit digital rights group based in California. It's part of a project called Onlinecensorship.org, which aims to better understand how social media companies decide what is and isn't allowed on their platforms.

Of course, all these sites have policies — but they're not always easy to understand, and they're not always consistently enforced. As we've seen with the ongoing conversation around fake news, these companies' approaches change over time.

So Onlinecensorship.org tries to better understand these policies by collecting reports from people whose content has been taken down, or censored. For instance, if a mother posts a photo of herself breastfeeding and that gets taken down, she can file a report with Onlinecensorship.org.

Onlinecensorship.org lets users file reports when their content has been censored by social media sites. (Onlinecensorship.org)

Right now, the project has about a year's worth of user-generated data, so researchers have been able to identify some patterns in what gets taken down and what results in accounts being suspended.

What types of content are being censored?

Jillian York is the director for international freedom of expression with the Electronic Frontier Foundation. She said a lot of the content takedowns they've seen involved nudity or sexual content, citing a couple of famous examples from earlier this year.

"A photo of the Little Mermaid statue in Copenhagen was censored from Facebook. It's a mermaid, so not really human nudity," she noted.

"But then later in the year, a more serious example of that is when Nick Ut's Pulitzer Prize-winning photo 'The Terror of War' was also censored by Facebook … a very famous image of a young nude girl fleeing a napalm attack."

Nick Ut's famous 1972 photo of Kim Phuc, centre, fleeing after a napalm attack was removed by Facebook earlier this year because it showed nudity. Facebook later reversed its decision. (Nick Ut/Associated Press)

Facebook later reversed its decision with regard to the Nick Ut photo. But nudity was one of the biggest reasons content was censored, according to the EFF's research.

Of course, social media sites also remove content like hate speech. But the precise definitions of hate speech, acceptable nudity or "fake news" aren't always clearly spelled out.

And then there are truly bizarre examples — like the Facebook user in India who was banned because he posted a picture of a cat in a business suit. It's unclear exactly what about that image brought it to the attention of Facebook's moderators.

Who actually moderates the content?

We don't know much about the people who do content moderation work. We do know they work in offices around the world, reflecting the global nature of social media. York said there's evidence some of these offices are in places where labour is relatively inexpensive, like the Philippines, Bangladesh and India. Many moderators don't work directly for the social media companies themselves; instead, they're employed by third parties, so the work is often outsourced.

And as an end user of Facebook, Twitter or YouTube, it's almost impossible to know who's enforcing those rules.

In a recent interview with TVO's The Agenda, Sarah Roberts, an assistant professor in the Faculty of Information and Media Studies at Western University, pointed out that content moderation is a "hidden practice" and the workers are "faceless and invisible."

What should companies do to improve?

The Electronic Frontier Foundation's Jillian York says social media companies need to do a better job of clarifying their rules on what can and can't be posted. (EFF.org/Matthew Stender)

York says policies on issues like fake news, nudity and hate speech can be opaque.

It's not always clear what the rules are, or how they're enforced. So she wants to see more transparency from social media companies.

"We want users to understand why their content's being taken down, how they violated the rules — not just that they violated the rules," she said.

"And then also maybe a little bit more clarity from the companies in terms of what violations look like more broadly."

She's also pushing for better transparency reporting, which would see companies share statistics on things like how much content is removed under their own guidelines, how many accounts are suspended and why they're suspended.

Why do social media bans matter?

For me, this is fundamentally about freedom of speech — what you're allowed to say and what you're not allowed to say.

We have existing laws around this stuff. But through their content moderation policies, social media companies are writing their own rules around what can or can't be said. 

It's worth remembering that, for a while, people thought of the internet as a kind of digital public square, with the emphasis on "public."

This report is a great reminder that when we spend time on social networks, we're not really in a public space. We're spending time on private servers, owned by private companies, who have the ability to make their own rules and regulations around speech.

We forget that at our peril.

ABOUT THE AUTHOR

Dan Misener

CBC Radio technology columnist

Dan Misener is a technology journalist for CBC Radio and CBCNews.ca. Find him on Twitter @misener.