Explicit fake images of Taylor Swift prove laws haven't kept pace with tech, experts say

Experts say laws must target developers, social media companies and individual users

Image | Taylor Swift

Caption: Explicit digitally altered photos of Taylor Swift, shown here at the 2023 MTV Video Music Awards, have renewed calls for better laws involving deepfakes. (Noam Galai/Getty Images for MTV)

Explicit AI-generated photos of one of the world's most famous artists spread rapidly across social media this week, highlighting once again what experts describe as an urgent need to crack down on technology and platforms that make it possible for harmful images to be shared.
Fake photos of Taylor Swift that depicted the singer-songwriter in sexually suggestive positions were viewed tens of millions of times on X, previously known as Twitter, before being removed.
One photo, shared by a single user, was seen more than 45 million times before the account was suspended. But by then, the widely shared photo had been immortalized elsewhere on the internet.
The situation showcased how advanced — and easily accessible — AI has become, while reigniting calls in both Canada and the U.S. for better laws.
"If I can quote Taylor Swift, X marks the spot where we fell apart," said Kristen Thomasen, an assistant professor at the University of British Columbia.
"Where we ought to be focusing more attention in the law is also now on the designers that create the tools that make this so easy, and [on] the websites that make it so possible to have this image go up … and then be seen by millions of people," said Thomasen.

Image | SXSW DEEPFAKE 20190309

Caption: This image, made from a fake video featuring former U.S. president Barack Obama, shows elements of facial mapping technology that lets anyone make videos of real people appearing to say things they've never said. (The Associated Press)

After the pornographic photos depicting Swift began to appear, the artist's fans swamped the platform with "Protect Taylor Swift" posts, an effort to bury the images and make them harder to find through search.
In a post, X said its teams were "closely monitoring" the site to see whether photos would continue to appear.
"Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them," the post read.
Neither Swift nor her publicist has commented on the images.
U.S. White House spokesperson Karine Jean-Pierre said social media companies have an important role to play in preventing the spread of misinformation and nonconsensual intimate imagery, and said the companies' "lax enforcement" of their own rules disproportionately impacts women and girls.
"This is very alarming, so we're going to do what we can to deal with this issue," Jean-Pierre told reporters Friday.
WATCH | White House 'alarmed' by AI-generated explicit images of Taylor Swift:

Media Video | CBC News : White House 'alarmed' by AI-generated explicit images of Taylor Swift on social media

Caption: U.S. White House spokesperson Karine Jean-Pierre responded to a question from a reporter about fake, explicit images of Taylor Swift generated by artificial intelligence being spread on social media, saying social media companies have a clear role in enforcing policies to prevent that kind of material from being distributed across their platforms.

As the AI industry continues to grow, companies looking to share in the profits have designed tools giving users with little experience the ability to create images and videos using simple instructions. The tools have been popular and beneficial in some sectors, but also make it unnervingly easy to create what are known as deepfakes — images that show a person doing something they did not actually do.
The deepfake-detecting group Reality Defender said it tracked a deluge of non-consensual pornographic material depicting Swift, particularly on X. Some images also made their way to Meta-owned Facebook and other social media platforms.
"Unfortunately, they spread to millions and millions of users by the time that some of them were taken down," said Mason Allen, Reality Defender's head of growth.
The researchers found several dozen unique images that were generated by AI. The most widely shared were football-related, showing Swift painted or bloodied in ways that objectified her and, in some cases, suggested the infliction of violent harm.

Tools bring 'new era' of cybercrime

"One of the biggest problems is it's just an amazing tool … and now everyone can do it," said Steve DiPaola, an artificial intelligence professor at Simon Fraser University.
A 2019 study by DeepTrace Labs, an Amsterdam-based cybersecurity company, found that 96 per cent of deepfake video content online was non-consensual pornographic material. It also found that the top four websites dedicated to deepfake pornography received more than 134 million views on videos targeting hundreds of female celebrities around the world.
WATCH | Schools could be doing more to educate young people about online harm:

Media Video | The National : Youth need better education on the risks of online sexual harm, report says

Caption: Tech-facilitated sexual violence and harassment are on the rise in Canada, and a new report suggests that schools could be doing more to educate young people about the risks.

In Canada, police launched an investigation in December after fake nude photos of female students at a Grade 7-12 French immersion school in Winnipeg were shared online. Earlier that year, a Quebec man was sentenced to prison for using AI to create seven deepfake videos of child pornography – believed to be the first sentence of its kind in Canadian courts.
"The police have clearly entered a new era of cybercrime," Court of Quebec judge Benoit Gagnon wrote in his ruling.

Canadian judges working with outdated laws

After this week's targeting of Swift, U.S. politicians called for new laws to criminalize the creation of deepfake images.
Canada could also use that kind of legislation, said UBC's Thomasen.
There are some Canadian laws that deal with the broader issue of non-consensual distribution of intimate images, but most of those laws don't explicitly refer to deepfakes, because the technology wasn't yet an issue when they were written.

Image | cbckidsnews-taylor-swift-spotify-wrapped-2023

Caption: Deepfake images of Swift, shown here in October 2023, circulated to millions of social media users before being taken down. (Valerie Macon/AFP/Getty Images)

It means judges dealing with deepfakes have to decide how to apply old laws to new tech.
"This is such an overt violation of someone's dignity, control over their body, control over their information, that it's hard for me to imagine that that couldn't be interpreted in that way," Thomasen said. "But there's some legal disagreement on that and we're waiting for clarity from the courts."
The new Intimate Images Protection Act coming into effect in B.C. on Monday does include references to deepfakes and will give prosecutors more power to go after people who post intimate images of others online without consent — but it does not include reference to the people who create them or to social media companies.
WATCH | Not everyone has the resources to defend themselves when fake images are posted:

Media Video | The National : Deepfakes of Taylor Swift were taken offline. It’s not so easy for regular people

Caption: Fake, AI-generated sexually explicit images of Taylor Swift were feverishly shared on social media until X took them down after 17 hours. But many victims of the growing trend lack the means, clout and laws to accomplish the same thing.
