Facebook is failing to contain Brazil election disinformation, says digital human rights group
Global Witness senior advisor says findings are a 'stark reminder' of how easy it is to circumvent measures
As the run-up to Brazil's 2022 general election heats up, one digital human rights group has found that Facebook is failing to stop the spread of election disinformation in the country.
In findings released on Aug. 15, Global Witness revealed it tested Facebook's handling of ads containing election disinformation in Brazil by submitting its own fake ads. The group created the ads using false claims already circulating online, drawn from a mix of real-life posts and examples highlighted in 2021 by Brazil's Superior Electoral Court, including some ads with the wrong election date and others that undermined Brazil's electoral process.
It's similar to investigations the group conducted in Myanmar, Ethiopia and Kenya, where it submitted ads that promoted some of "the worst of the worst hate speech," said Jon Lloyd, a senior advisor at Global Witness.
Just like in those cases, all of the disinformation ads Global Witness submitted for the Brazil election were approved by Facebook.
In a statement to Day 6, Facebook parent company Meta said their efforts in Brazil's previous election "resulted in the removal of 140,000 posts from Facebook and Instagram for violating our election interference policies and 250,000 rejections of unauthorized political ads."
They also said they have "prepared extensively" for Brazil's next election, which takes place on Oct. 2, including launching "tools that promote reliable information and label election-related posts, [and] established a direct channel for the Superior Electoral Court to send us potentially-harmful content for review."
Following Global Witness' latest findings, Meta added to an existing news release that they "will prohibit ads calling into question the legitimacy of the upcoming election."
But Lloyd said this is just the same policy they've always had "dressed up in different clothing."
"I'm really skeptical of when Facebook comes out with new policy announcements because, to be honest, the problem isn't necessarily that the policies aren't there. It's that they just simply aren't being enforced," he told Day 6 guest host Saroja Coelho.
"Instead of touting their election integrity efforts, I think our findings are a stark reminder of how easy it is for bad actors to circumvent the measures." - Jon Lloyd, senior advisor at Global Witness
Lloyd and Coelho spoke about Global Witness' process and what impact ads promoting disinformation could have. Here's part of their conversation.
How did Global Witness test Facebook's handling of ads that contain election disinformation in Brazil?
So firstly, we sourced 10 examples of disinformation… and then we set up an account to post the ads.
It's important to note as well that Facebook has an ad authorization process in countries like Brazil, which they have deemed a kind of priority country, specifically because of this threat of disinformation. So you have to be authorized to post political, social issue or election-related content.
Really, we broke all the rules. We set up the account outside of Brazil. We used non-Brazilian payment methods. We posted the ads while I was in Nairobi, [Kenya], and then again back here in London. So there were lots of chances for Meta to detect that it wasn't an authentic account.
Then, what we did was we scheduled the ads to go live a couple of weeks after we were actually posting them, and the reason that we do that is so we can go through Facebook's content moderation process without anybody actually ever seeing the disinformation. So we know whether or not those ads are accepted or rejected based on that.
So you posted multiple ads containing this disinformation. Did all of them get through?
Yeah. Every single ad that we posted went through.
We had them in two overarching categories. So we had a set of ads that were just straight-up election disinformation. So it had things like the wrong election date, who's allowed to vote, the methods of voting — so voting by mail is not allowed in Brazil, so we were saying you can vote by mail now.
The other set was a bit more insidious, and that was ads designed to kind of discredit the electoral process and therefore undermine election integrity.
That really plays into … the mood in Brazil right now where, I guess, [President Jair] Bolsonaro is sowing doubt about the legitimacy of the election results. There are sort of fears of a "stop the steal" style coup attempt.
So what do you think it would take for Meta to change its ways?
What we're advocating for really falls into two main categories: resourcing and transparency. So we want Meta to properly ensure that its content moderation and ad account verification processes [are] up to scratch.
But crucially, we also need them to show their work. It's not enough to dazzle us with statistics that have no basis of reference. What we really need is for them to stop marking their own homework and allow some verified, independent third parties to come in and do some auditing.
That is necessary to hold them accountable for what they say that they're doing.
You say that this is Facebook's fourth failure at stopping or even flagging harmful disinformation, as they have promised that they would do. How did they respond to what you revealed in other tests of this nature?
They've responded very similarly to how they responded to us in this test, which is not necessarily acknowledging that there is an issue, and coming back to us with statistics, figures that don't really have a common denominator.
So they could say they've removed 10,000 pieces of hate speech or a million pieces of hate speech, but the fact of the matter is 100 per cent of the hate speech and disinformation that we've tested … all of those have got through.
So we don't know how much they're missing. And again, that's sort of why we need an independent third party coming in and actually looking at what's going on.
We'd even like them to be publishing their risk assessments. We would hope that they've done a ... pre-election risk assessment for Brazil. We hope that they've done one in Kenya as well for the election that's just happened there.
Instead of touting their election integrity efforts, I think our findings are a stark reminder of how easy it is for bad actors to circumvent the measures.
But why does it matter if a bad actor, as you've called it, decides to put disinformation about elections out there? What's the actual impact on the ground?
Well, we know that the choices of some of the world's major tech companies do have an impact online, before and after high-stakes elections around the world.
We saw that in 2020, when a lot of the "stop the steal" stuff was really promoted on Facebook. People were joining groups at the recommendation of Facebook to kind of get into those conspiracy theory groups.
So we know that the information that is online has offline consequences. So that's why they just have to get this right.
Produced by Pedro Sanchez. Q&A has been edited for length and clarity.