Only Facebook knows how it spreads fake election news

Secret algorithms make it hard to judge how too-good-to-be-true stories influence voters

Image: Hillary Clinton campaign chairman John Podesta addresses a crowd of supporters on election night in New York. (Jim Bourg/Reuters)

If Facebook is to be believed, Hillary Clinton has deep ties to satanic rituals and the occult.
The post in question has nearly 3,000 shares, and links to a story on a conspiracy-laden political site. It is most definitely fake. But like many of the stories posted to Facebook during this U.S. election cycle, it was written with right-leaning partisan readers in mind. For this particular group of voters, it just begged to be shared.
Because the algorithms are a black box, there's no way to study them. — Frank Pasquale, law professor at the University of Maryland
And share they did. In an election dominated by the sexist, racist, and generally outrageous invective of America's president-elect Donald Trump, Facebook proved the perfect social platform for the sharing of fake, too-good-to-be-true style news.
At the end of August, The New York Times' John Herrman reported on a subtle shift in Facebook feeds across America, many of which were increasingly filled with questionable news sources and fake stories designed specifically to be shared. More recently, BuzzFeed's Craig Silverman took on the daunting task of debunking fake news stories in near-real time.
Democrats and Republicans alike clicked on and shared what they hoped was true, whether or not there was any underlying truth.
In both the run-up to the election and its immediate aftermath, there have been arguments that Facebook helped make a Trump presidency possible — that, by design, Facebook helps breed misinformation and encourage the spread of fake news, and that it can shape voter opinion based on the stories it chooses to show.
Whether or not this is true is practically impossible to say because of how little insight we have into how Facebook's myriad algorithms work.

Image: High school students in San Francisco protest on Thursday against the election of Donald Trump. (Jeff Chiu/Associated Press)

"I think that if we were to learn how, for example, networks of disinformation form, that would give people a lot more information of how to create networks of information," said Frank Pasquale, a law professor at the University of Maryland, and author of The Black Box Society, a book on algorithms. "But because the algorithms are a black box, there's no way to study them."
Facebook is notoriously tight-lipped about how its algorithms are designed and maintained, and has granted only a handful of carefully controlled interviews with journalists. We know that signals such as likes, comments, and shares all factor heavily into what Facebook shows its users, but not which signals contribute to a particular post's appearance in a user's feed, nor how those signals are weighted.
"Anything that gets clicks, anything that gets more engagement and more potential ad revenue is effectively accelerated by the platform, with very rare exceptions," Pasquale said.

Algorithmic transparency

Inevitably, posts that hewed to partisan beliefs proved especially popular, whether or not they were true. And how much of an impact these voices had on the voting public, only Facebook knows.
For us to have any insight would require algorithmic transparency, or algorithmic accountability, in systems that few understand, even as they increasingly shape the way we think.
"Election information is one of those domains where there's a pretty clear connection between information that people are being given access to and their ability to make a well informed decision," says Nicholas Diakopoulos, an assistant professor at the University of Maryland's journalism school.
He says algorithmic transparency is "one method to increase the level of accountability we have over these platforms."

Image: Facebook board member Peter Thiel, who donated $1.25 million to Donald Trump's campaign, speaks at the Republican National Convention in July. (Mark J. Terrill/Associated Press)

Both Diakopoulos and Pasquale believe that Facebook is actually a media company — despite its repeated claims otherwise — and as such needs to take more responsibility for the quality of news that appears on its site.
One concern is that Facebook has so much power and influence over the content its nearly 1.2 billion daily users see that it could conceivably influence the outcome of an election. In fact, Facebook actually did something to this effect in 2012, assisting academic researchers with a "randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 U.S. congressional elections."
The study's authors concluded that, both directly and indirectly, the Facebook messages increased voter turnout by 340,000 votes. Without more insight into how Facebook places news stories in its users' feeds, no one would ever know if a viral political hoax site was responsible for doing the same.

Trusted sources

There is little insight into how Facebook identifies trustworthy sources of information and penalizes those that are not. But in light of past censorship squabbles — such as Facebook's removal and subsequent reinstatement of the iconic Vietnam War photo of a young, napalm-burned Kim Phuc — the question is whether users will feel comfortable with Facebook taking on that role.
"Do we really want Facebook deciding what's misinformation or not?" asked Jonathan Koren, who previously worked at Facebook on the company's trending news algorithm, and is a software engineer at an artificial intelligence company called Ozlo.
"And that's why they don't want to do it, because they don't want to be responsible for it. But at the same time, there's nobody responsible."