Social media algorithms to blame for antisemitic, Islamophobic content online, Waterloo expert says

There are reports of a significant rise in hate on some social media platforms since Oct. 7

Image | US TIKTOK BAN

Caption: An open letter has been penned by some celebrities who say that TikTok "is not safe for Jewish users" due to threats, harassment and antisemitism. (Gabby Jones/Bloomberg)

There's been a reported bump in the amount of antisemitic and Islamophobic content on social media platforms like TikTok and X (formerly Twitter) since the start of the Israel-Hamas war, which may be due to algorithms geared to boost engagement and make money. That's according to Jimmy Lin, a professor at the David R. Cheriton School of Computer Science at the University of Waterloo.

Image | Jimmy Lin

Caption: Jimmy Lin, a professor at the David R. Cheriton School of Computer Science at the University of Waterloo, said that engagement algorithms are to blame for the rise in antisemitic and Islamophobic content on social media, post Oct. 7. (cs.uwaterloo.ca)

"The more engagement that any video gets, the more it's going to show up in the user feed, and things that get engagement are things that have shock value on either side, either vehemently antisemitic or anti-Palestinian," said Lin, with the caveat that his view is speculation since he doesn't "work on the algorithms," explaining that most of them aren't publicly available.
"Basically what we're observing here is that we've lost the middle ground here in the sort of modern discourse. Everything is becoming increasingly polarized because polarized content [gets] engagement."
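The dynamic Lin describes can be illustrated with a minimal sketch: a feed that ranks posts purely by total interactions, where outraged comments count the same as approving likes, will surface polarizing content first. The field names and scoring are illustrative assumptions, not any platform's actual algorithm, which Lin notes is not publicly available.

```python
# Hypothetical sketch of engagement-driven feed ranking, as Lin describes it.
# Every interaction counts toward ranking regardless of sentiment, so content
# that provokes strong reactions on either side rises to the top of the feed.
# Field names and the scoring formula are illustrative assumptions only.

def engagement_score(post):
    # Likes, comments and shares all count the same, whether the
    # reaction is approval or outrage.
    return post["likes"] + post["comments"] + post["shares"]

def rank_feed(posts):
    # Sort descending: the most-engaged-with posts lead the feed.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "measured-analysis", "likes": 40, "comments": 5, "shares": 3},
    {"id": "inflammatory-clip", "likes": 30, "comments": 180, "shares": 75},
]
feed = rank_feed(posts)
```

Under this toy scoring, the inflammatory post outranks the measured one even with fewer likes, because angry comments and shares still count as engagement, which is the "benign neglect" Lin points to below.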
Antisemitism is up by 919 per cent on X since Oct. 7, according to the Anti-Defamation League. The Institute for Strategic Dialogue indicated that Islamophobia is up by 422 per cent on the platform too.
There is also an open letter penned by some celebrities who say that TikTok "is not safe for Jewish users" due to threats, harassment and antisemitism.
The effects of online hate have been felt nationally too.
"This certainly concerns us in the Jewish community," said Jaime Kirzner-Roberts, the Greater Toronto Area vice-president of The Centre for Israel and Jewish Affairs. "But I think it should concern all Canadians because we're letting algorithms create problems of extremism and polarization in our society."

Image | Jaime Kirzner-Roberts, the Greater Toronto Area vice-president of The Centre for Israel and Jewish Affairs

Caption: Jaime Kirzner-Roberts is the Greater Toronto Area vice-president of The Centre for Israel and Jewish Affairs. (cija.ca)

What can be done?

Lin doesn't believe that the algorithms are creating polarizing content on purpose. It's more of what he thinks of as "benign neglect, in the sense that engagement and eyeballs … drive traffic and traffic drives revenue."
"In the absence of countervailing forces this is just what's going to happen," Lin said.
"And to speed the conversation along, you're going to ask, 'What are the countervailing forces?' Well, for example, regulation is a countervailing force. Threat of lawsuit is a countervailing force, and so in the absence of these countervailing forces, the profit motive will dominate."
Lin doesn't believe that fixing these algorithms would be difficult for social media companies, but that doesn't guarantee it's going to happen.
"Technically it's not very challenging but whether or not there is the will, the corporate will, to make these changes, that's a totally separate matter," he said.
In a statement to CBC News, a TikTok spokesperson explained that they have taken proactive measures since the start of the Israel-Hamas war to curb hate online.
In response to the open letter directed at the tech company by the Jewish community, their spokesperson said, "We oppose antisemitism in all forms. Antisemitism is on the rise globally, and we're committed to doing our part to fight it. We've taken important steps to protect our community and prevent the spread of hate, and we appreciate ongoing, honest dialogue and feedback as we continually work to strengthen these protections."
CBC News reached out to X for comment but didn't receive a response by the time of publication.

Image | Fatema Abdalla

Caption: Fatema Abdalla is an advocacy officer with the National Council of Canadian Muslims. (National Council of Canadian Muslims)

The National Council of Canadian Muslims said it, too, has noticed an influx of hate online, but it believes the government should do more.
"At this rate the onus is on the government to regulate what is taking place online and what forms of hate are spewing because we see the drastic effects that it can have in the lives of Canadians including many Canadians who have lost their lives due to online forms of hate," said Fatema Abdalla, an advocacy officer with the organization.
Ian McLeod, a spokesperson for the Department of Justice Canada, said that curbing online hate is on the radar for the federal government.
"The Government of Canada is committed to putting in place a transparent and accountable regulatory framework for online safety in Canada," said McLeod. "Now, more than ever, online services must be held responsible for addressing harmful content on their platforms and creating a safe online space that protects all Canadians."
"[Minister of Justice and Attorney General of Canada, Arif Virani] has recently publicly reiterated the Government of Canada's commitment to introducing specific legislation to combat online hate in the near future. At the same time, the Government of Canada continues to take concrete steps to fight hate crime and hate speech, in all their forms."
McLeod said that the Department of Canadian Heritage is "also leading efforts to improve online safety."

What users can do

There are things users can do if hateful content persists and quitting the platforms altogether isn't a realistic option, said Aimée Morrison, an associate professor in the University of Waterloo's English department whose work focuses on social media. She said users can filter out unwanted content to get a feed that is less distressing.
Each platform has its own way for users to do this. TikTok, for example, has "content preferences" where keywords used in videos can be filtered.
"The more you do that the more you can clean up your own feed so that that traumatizing material isn't showing up," she said. "Or if it's ruining your vibe there, you're just there for recipe tips and make-up influencers and you don't want war content in there you can sort of train the algorithm."
However, she warned against engaging with such content.
"I would not click a reaction button. I would not leave a comment on that video," she said, explaining that doing these things would result in more people seeing the content since it would appear to be engaging.

The future

Lin worries that generative AI, which can create things like deepfakes, will make these issues much worse. He said misinformation used to be created by misappropriating existing media, but generative AI has opened a whole new sphere.
"Now with generative AI you're seeing outright fabrications," he said. "Things that were just made up wholesale that just aren't true … Once you throw that into the mix it becomes even more complicated to sort out truth from misinformation, intentional from people that were [duping] and amplifying false information."
When asked whether this could be stopped he said that "the genie is out of the bottle on this one," adding the only thing that might change it are government regulations or policies.
CBC News reached out to the federal government for comment but didn't receive a response by the time of publication.

Media Video | The National: Explosion of hate across social media platforms

Caption: Social media users from TikTok to X are being exposed to a deluge of different Islamophobic and antisemitic tropes — some of them perpetuated by people like Elon Musk, the owner of X.
