After cracking down on neo-Nazis, tech companies wonder who should police online hate

With no simple solution, firms have been left to regulate themselves, with mixed results

'I woke up in a bad mood and decided someone shouldn't be allowed on the internet. No one should have that power,' said CloudFlare CEO Matthew Prince in an internal email obtained by Gizmodo. (Steve Jennings/Getty Images for TechCrunch)

For more than two decades, a question with no easy answer has consumed international lawmakers, tech companies and internet users: How should we handle those who spread hate, racism and abuse online?

This long-simmering debate came to a boil this week, after white supremacist website The Daily Stormer helped organize a rally in Charlottesville, Va., that left 32-year-old counter-protester Heather Heyer dead. Its administrators spent much of the week trying to find a home online after multiple service providers declined to do business with the site.

As early as 1994, the secretary-general of the United Nations noted that France's Minitel, a pre-internet online service, was being used to share anti-Semitic material. In the years that followed, the UN watched as far-right groups embraced electronic methods of communication, and one of the first white supremacist websites, Stormfront, went online.

"The internet has already captured the imagination of people with a message, including purveyors of hate, racists and anti-Semites," the UN's special rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance wrote in 1997. Later, international working groups tried and failed to forge a policy that would satisfy all.

In the absence of a simple solution — say, a single standard for internet governance embraced by Europe and the U.S. — tech companies have largely been left to regulate themselves, often with mixed results.

All the while, some technologists and civil liberties advocates have questioned whether technology companies should have this power at all.

Neutral parties?

For years, platforms such as Facebook, Twitter and YouTube have tried to paint themselves as neutral parties, uninterested in making judgment calls on what is acceptable speech. This approach has frequently angered users who feel that not enough has been done to combat abusive, hateful and racist language on their platforms.

But the violence in Charlottesville prompted a swift, sweeping reaction across the tech community, the likes of which hadn't been seen before.

The week started with GoDaddy and then Google cutting off The Daily Stormer from their domain registration services.

Other platforms took actions of their own. Airbnb said it banned people tied to white supremacist groups from booking places to stay ahead of the rally, while Facebook and Twitter doubled down on the removal of groups and accounts that violated their hate speech policies.

Since 2012, the German government has required Twitter to hide neo-Nazi accounts from users in that country. This year it passed a law that fines tech companies that fail to remove hate speech and fake news. (Jonathan Ernst/Reuters)

Meanwhile, Apple and PayPal, which provide services that enable merchants to accept payments online, disabled support for websites that sold clothing featuring Nazi and white supremacist slogans. Executives weren't shy about taking a stand.

"We're talking about actual Nazis here," wrote eBay founder Pierre Omidyar on Twitter, whose company owned PayPal from 2002 to 2014. "Let them send cash to each other in envelopes. No need to help them use our products."

'I woke up in a bad mood'

But the company that attracted the most attention was CloudFlare, an internet infrastructure provider that helps protect websites against distributed denial-of-service (DDoS) attacks.

It made an exception to its long-standing policies on free speech and content neutrality and denied The Daily Stormer access to its services, too.

"I woke up in a bad mood and decided someone shouldn't be allowed on the Internet," CloudFlare CEO Matthew Prince wrote in an internal email obtained by Gizmodo. "No one should have that power."

"My rationale for making this decision was simple: the people behind the Daily Stormer are assholes and I'd had enough," he continued. But in a subsequent blog post, Prince explained what worried him: that a small group of companies, like his, in control of the internet can have such an outsized influence over the type of content that people are able to see online.

On the one hand, under U.S. law, companies such as Twitter or Apple can freely decide what they will and won't accept on their platforms. And an increasingly vocal group of users and lawmakers has called on companies to take stronger measures against hate.

But others are concerned about what it means to have tech's biggest companies, in particular those that operate the internet services underpinning the web, making seemingly arbitrary judgments about what is acceptable behaviour on their platforms.

"All fair-minded people must stand against the hateful violence and aggression that seems to be growing across our country," wrote senior staff of digital rights group Electronic Frontier Foundation in a blog post.

"But we must also recognize that on the Internet, any tactic used now to silence neo-Nazis will soon be used against others, including people whose opinions we agree with."

Reining in racism

Each time a new wave of hateful rhetoric washes across the web, Twitter users point to Germany, where many of the neo-Nazis and white supremacists who appear in U.S. users' feeds are nowhere to be found.

It's not that Twitter can't filter out these racist, hateful accounts, users argue; it's that the company won't where the law doesn't require it to.

Since 2012, the German government has required Twitter to hide neo-Nazi accounts from users in the country, and earlier this year it passed legislation imposing fines on tech companies that fail to remove hate speech and fake news.

It's a notable example of a government stepping in where it believes tech companies aren't doing enough. But there have been criticisms of this approach, too.

"Rather than reining in social media behemoths, the law risks reinforcing their role as online gatekeepers," argued researchers with the Global Public Policy Institute in Berlin. For its part, Facebook told Bloomberg earlier this year it was worried the legislation "would force private companies instead of courts to decide which content is illegal."

But for many still reeling from a weekend of hate and violence in Charlottesville, discussions of policy, due process and free speech are distant concerns.

They just want to see the Nazis kicked off — and this time, tech companies seem more than happy to oblige.
