Facebook has known since 2018 it was being used to incite hate
We weren't doing enough to prevent Facebook from being used to incite violence: Chan
Facebook knew as early as 2018 that its platform was being used to incite division and violence, a company official told Canadian MPs on Thursday.
Testifying at justice committee hearings into online hate, Kevin Chan, head of public policy for Facebook Canada, said the company realized long before this year's deadly attack in Sri Lanka that Facebook was being used to amplify ethnic and religious tensions, and that it took action.
"In 2018, we commissioned a human rights impact assessment on the role of our services, which found that we weren't doing enough to help prevent our platform from being used to foment division and incite offline violence," Chan told the committee in Ottawa.
A series of co-ordinated suicide bombings in Sri Lanka on Easter Sunday targeted three churches and three luxury hotels, killing more than 250 people and injuring hundreds.
"With regards to the tragedy in Sri Lanka, we know that the misuse and abuse of our platform may amplify underlying ethnic and religious tensions and contribute to offline harm in some parts of the world," Chan said. "This is especially true in countries like Sri Lanka, where many people are using the internet for the first time, and social media can be used to spread hate and fuel tension on the ground."
Facebook taking steps to ID 'content risks'
Chan said Facebook has set up a team to work on building "products, policies and programs" that take those kinds of situations into account.
He said Facebook has also learned from its experience in Myanmar, where the company acknowledged in November 2018 that its platform had been used to incite hatred against the Rohingya Muslim minority.
"We've also been building up our content review teams to ensure we have people with the right language skills and understanding of the cultural context," he said. "We've also been investing in technology and programs in places where we have identified heightened content risks and are taking steps to get ahead of them."
Chan's comments stand in contrast to the company's response last week to questions from the International Grand Committee in Ottawa about the role Facebook played in the events that led up to the attack in Sri Lanka.
Questioned about videos that appeared on Facebook in Sri Lanka six months before the attack and were flagged to the company — videos urging people to kill non-Muslims, including women and children — global policy director Neil Potts defended the company's actions.
"When we're made aware of that content, we do remove it," Potts told MPs from nearly a dozen countries. "If it is not reported or if we have not proactively identified it, then we would not remove it, because honestly we would not know that it exists."
'6 hate figures' removed, official says
Chan said Facebook has signed the Christchurch Call, a pledge to eliminate terrorist and violent extremist content online, and said people who violate its Dangerous Organizations and Individuals Policy will be restricted from livestreaming through Facebook Live.
Groups that seek to promote hate often shift their language or alter content slightly to circumvent Facebook's filters, he said.
Chan also said that, working with Canadian experts, the company removed "six hate figures and hate organizations, including Faith Goldy, Kevin Goudreau, the Canadian Nationalist Front, the Aryan Strikeforce, the Wolves of Odin and the Soldiers of Odin, from having any further presence on Facebook and Instagram."
The Toronto Star and BuzzFeed reported in April that, only days after Facebook's announcement, two of those banned were back on the platform, along with several pages bearing names similar to those of the groups Facebook said it had banned.
Chan faced pointed questions from committee members Thursday about how the company defines permissible speech online, and about what it is doing to protect political candidates who are women, LGBT or members of ethnic and religious minorities from people who want to use Facebook to attack them online.
Chan said Facebook has set up a line for politicians to report problems. As for what is permissible, Chan said the company would rather not be the one deciding what people should be allowed to say online.
"That is the challenge, that is the hard question," Chan said in response to a question from Conservative MP Lisa Raitt. "What we're saying is we don't think it is appropriate for us to make that call. To the extent that governments or legislatures want to put a framework around speech online, we obviously think it's best for parliaments to do that."
Thursday's appearance by Facebook wraps up the justice committee's hearings on the topic of online hate.
Committee Chair Anthony Housefather said the committee now has to prepare its report and recommendations, adding that what he heard during the hearings convinced him personally that action is needed.
"What I have learned is that we need to attack online hate from a multi-pronged approach," he said. "You need to define online hate, you need to be able to track online hate, you have to educate people about what online hate is about, and then you need to be able to work with the providers to cooperatively attack online hate. And that may include also government regulation.
"Where you can't get agreement, you may have to define things and actually give them instruction as to what they must or must not do ... that's a difficult line to find."
Elizabeth Thompson is part of a CBC team investigating online misinformation and attempts to disrupt the upcoming Canadian election. She can be reached at elizabeth.thompson@cbc.ca.