
Why the task being put forward at the White House's extremism discussion is no easy one

As White House officials meet with tech companies on Friday to discuss how to combat violent online extremism, experts say the biggest challenge is tracking anonymous users and filtering out hateful messaging in an ever-evolving landscape.

Friday's discussion with tech companies comes just days after deadly shootings in El Paso and Dayton

People attend a candlelight vigil at a makeshift memorial honouring victims of the mass shooting in El Paso, Texas, which left 22 people dead. The accused gunman posted an apparent anti-immigrant manifesto on the anonymous message board 8chan just moments before the attack in the southern border city. (Mario Tama/Getty Images)

The red flags were all there: Multiple Facebook posts filled with hateful white supremacist language, pictures of large caches of weapons and threats to shoot up schools and synagogues. 

When Oren Segal and other researchers at the Anti-Defamation League's Center for Extremism alerted the FBI to the posts last fall, the social media trail quickly led to Dakota Reed, a 20-year-old Washington state man who was arrested, convicted and sentenced to one year in prison for making threats. 

Reed's posts explicitly threatened to kill Jewish people and talked of emulating Dylann Roof, the white supremacist who killed nine people at a church in Charleston, S.C. Stopping him before those threats became a reality was a win in the ongoing effort to track and stop online extremism.

And while such arrests are more common than people realize, Segal said the bigger problem is tracking users who spew and consume violent and hateful views on anonymous forums.

"It's a challenge of volume and the ability for people to reach recruit and radicalize in ways that we haven't seen in human history," he said.

Dakota Reed, 20, was convicted and sentenced to one year in prison earlier this year for making threats after researchers found Facebook posts where he threatened to attack schools and synagogues. (Snohomish County Sheriff’s Office)

That's the crux of the problem that will be on the table Friday, when senior White House officials host representatives from a number of internet and technology companies to discuss violent online extremism — a meeting that comes just days after the deadly shooting attacks in El Paso, Texas, and Dayton, Ohio.

It's not clear which companies were invited or what exactly is on the agenda. U.S. President Donald Trump's schedule for Friday made no mention of the meeting. 

But Trump did mention the role of tech companies in remarks made Monday, in the aftermath of the twin shootings that left 31 people dead, saying he wants government agencies to work with social media firms to "detect mass shooters before they strike."

A manifesto filled with anti-immigrant language, allegedly written by the accused gunman in El Paso, was posted to the anonymous message board 8chan just moments before the attack.

Many white supremacist groups operate out in the open, Segal said, using websites, social media and podcasts to amplify their message. He said the challenge facing federal authorities now is tracking those being radicalized by that violent messaging.

"The real challenge is how some of these hateful messages impact those who are not out," Segal said. "It takes real hard investigative work that can take not just hours and hours but days, weeks and months."

Gaming the platforms

Pinpointing the source of the hate and those consuming it is no simple task, given that extremists have learned how to game the social media platforms, said Michael Hayden, a senior investigative reporter with the Southern Poverty Law Center.

By avoiding obvious slurs and using dog whistles and coded language, Hayden said white supremacists can avoid running afoul of the terms of service of social media platforms.

"It's not about bad words on the Internet; it's about using the algorithm and using the platform to radicalize people and push people further and further toward an agenda of extremism and violence," said Hayden.

Hayden noted that the accused shooters in El Paso, at a synagogue in Poway, Calif., in April, and at the Tree of Life Synagogue in Pittsburgh last October all shared similar ideology.

The United States flag flies at half-mast above the White House in response to the El Paso and Dayton mass shooting attacks. White House officials are hosting a number of tech companies in Washington Friday for a roundtable discussion on violent online extremism. (Erin Scott/Reuters)

"The dialogue between internet radicalization and real world violence is appears to be ramping up rather than cooling off," he said.

The ability to recognize what seems innocuous, but is really a call to violence, is part of a larger content moderation problem, according to data scientist Emily Gorcenski.

Terms like "degenerate," "globalist," and "invasion" have added meaning when used by far-right groups, she said.

Having the expertise and knowledge at a scale large enough to be effective for sites like Facebook or Twitter is extremely difficult, said Gorcenski, who tracks far-right extremists in the court system on her website, First Vigil.

"The challenge of the platform is that they need context, where it's not just what these people are saying — it is who is saying it and for what reason."

Does de-platforming work?

A key tool in the fight to limit extremist influence is de-platforming: banning and removing bad actors from a site, or denying certain sites themselves the capacity to operate.

8chan — also linked to the mosque shootings in Christchurch, New Zealand — recently lost some of the web services it needs to operate, leaving administrators scrambling to find a new home.

"Even if you de-platform somebody for a week, it can have a huge effect on the ability for that person to sow hate and sow discord," said Gorcenski.

She points to the removal of controversial right-wing personality Milo Yiannopoulos from Twitter and Facebook, saying it has reduced his influence to barely a whisper.

The removal of a number of Reddit communities under a then-new anti-harassment policy in 2015 also lessened hate speech on that site specifically, she said.

Conservative commentator Milo Yiannopoulos was banned from Facebook and Twitter for violating the companies' policies on harassment and hate speech. (Noah Berger/Reuters)

While de-platforming can be effective, Hayden said there are always new avenues available to disseminate hate.

He warns that Telegram, a cloud-based, encrypted messaging app, has become a new favourite for neo-Nazis and white nationalists, allowing users to chat in public and in private, hidden from scrutiny. The same app was popular with ISIS, whose members used it for recruitment and to spread Islamic extremist propaganda.

"What you have is [message boards like] 8chan, plus the ability for extremists to network, strengthen relationships, potentially plan violence," he said. "It is a very dangerous space right now."

White House credibility gap

Some have pointed out the challenge of having a discussion focused on extremism at a White House occupied by a president whose rhetoric sometimes reflects the language favoured by white nationalists.

"Anytime the government amplifies messages that are similar to those that are animating extremists, that's a problem," said Segal. "Whether it's a tweet that comes out that speaks about 'invasion,' these are all things that help normalize and mainstream messages that we know have deadly consequences."

Addressing the threats online will take more than just the government leaning on tech companies to take a tougher stance, Segal said. Groups like his, as well as the general public, also need to monitor what happens on these sites every day.

Protesters hold signs as the motorcade carrying Donald Trump passes through El Paso, Texas, on Wednesday during the U.S. president's visit, which came in the aftermath of a mass shooting that killed 22 people. Protesters accused Trump of inflaming tensions with anti-immigrant and racially charged rhetoric. (Saul Loeb/AFP/Getty Images)

"I don't think any one organization has the capability on their own to track all these threats — and that includes law enforcement," he said.

Care must also be taken not to assume that the threats will come only from disaffected loners and outcasts, said Hayden. He said he recently outed an employee of the U.S. State Department who had published white nationalist propaganda online.

"We're not talking about some person with no connection to society; this is a guy who was being groomed for a potential leadership position," he said.

ABOUT THE AUTHOR

Steven D'Souza

Co-host, The Fifth Estate

Steven D'Souza is a co-host with The Fifth Estate. Previously he was CBC's correspondent in New York, covering two U.S. presidential campaigns and travelling around the U.S. to report on everything from protests to natural disasters to mass shootings. He won a Canadian Screen Award for coverage of the protests around the death of George Floyd. He's reported internationally from Rome, Israel and Brazil.