EU deal targets Big Tech over hate speech, disinformation
Digital Services Act also takes aim at material promoting terrorism, child sexual abuse and commercial scams
The European Union reached a landmark deal early Saturday to take aim at hate speech, disinformation and other harmful online content.
The law will force big tech companies to police themselves harder, make it easier for users to flag problems and empower regulators to punish noncompliance with billions in fines.
EU officials clinched the agreement in principle in the early hours of Saturday. The Digital Services Act will overhaul the digital rulebook for the bloc's 27 member countries and cement Europe's reputation as the global leader in reining in the power of social media companies and other digital platforms, such as Facebook, Google and Amazon.
"With the DSA, the time of big online platforms behaving like they are 'too big to care' is coming to an end," said EU Internal Market Commissioner Thierry Breton.
EU Commission Vice President Margrethe Vestager added that "with today's agreement we ensure that platforms are held accountable for the risks their services can pose to society and citizens."
The act is the EU's third significant law targeting the tech industry, a notable contrast with the U.S., where lobbyists representing Silicon Valley's interests have largely succeeded in keeping federal lawmakers at bay.
While the Justice Department and Federal Trade Commission have filed major antitrust actions against Google and Facebook, Congress remains politically divided on efforts to address competition, online privacy, disinformation and more.
Accountable for users' content
The EU's new rules, which are designed to protect internet users and their "fundamental rights online," should make tech companies more accountable for content created by users and amplified by their platforms' algorithms.
Breton said EU regulators will have plenty of stick to back up the law.
"It entrusts the Commission with supervising very large platforms, including the possibility to impose effective and dissuasive sanctions of up to 6 per cent of global turnover or even a ban on operating in the EU single market in case of repeated serious breaches," he said.
The tentative agreement was reached between the EU parliament and member states. It still needs to be officially rubber-stamped by those institutions but should pose no political problem.
"The DSA is nothing short of a paradigm shift in tech regulation. It's the first major attempt to set rules and standards for algorithmic systems in digital media markets," said Ben Scott, a former tech policy advisor to Hillary Clinton who's now executive director of advocacy group Reset.
Negotiators had been hoping to hammer out a deal before French elections Sunday. A new French government could stake out different positions on digital content.
The need to regulate Big Tech more effectively came into sharper focus after the 2016 U.S. presidential election, when Russia was found to have used social media platforms to try to influence the country's vote. Tech companies like Facebook and Twitter promised to crack down on disinformation, but the problems have only worsened. During the pandemic, health misinformation blossomed and again the companies were slow to act, cracking down after years of allowing anti-vaccine falsehoods to thrive on their platforms.
Tools to flag content
Under the EU law, governments would be able to request companies take down a wide range of content that would be deemed illegal, including material that promotes terrorism, child sexual abuse, hate speech and commercial scams. Social media platforms like Facebook and Twitter would have to give users tools to flag such content in an "easy and effective way" so that it can be swiftly removed. Online marketplaces like Amazon would have to do the same for dodgy products, such as counterfeit sneakers or unsafe toys.
These flagging systems will be standardized so that they work the same way on any online platform.
The tech giants have been lobbying furiously in Brussels to water down the EU rules.
Twitter said Saturday it would review the rules "in detail" and that it supports "smart, forward thinking regulation that balances the need to tackle online harm with protecting the Open Internet."
Google said in a statement on Friday that it looks forward to "working with policymakers to get the remaining technical details right to ensure the law works for everyone." Amazon referred to a blog post from last year that said it welcomed measures that enhance trust in online services. Facebook didn't respond to requests for comment.
The Digital Services Act would ban ads targeted at minors, as well as ads targeted at users based on their gender, ethnicity and sexual orientation. It would also ban deceptive techniques companies use to nudge people into doing things they didn't intend to, such as signing up for services that are easy to opt into but hard to cancel.
Annual risk assessments
To show they're making progress on limiting these practices, tech companies would have to carry out annual risk assessments of their platforms.
Until now, regulators have had no access to the inner workings of Google, Facebook and other popular services. But under the new law, the companies will have to be more transparent and provide information to regulators and independent researchers on content-moderation efforts. This could mean, for example, making YouTube turn over data on whether its recommendation algorithm has been directing users to more Russian propaganda than normal.
To enforce the new rules, the European Commission is expected to hire more than 200 new staffers. To pay for it, tech companies will be charged a "supervisory fee," which could be as much as 0.1 per cent of their annual global net income, depending on the outcome of negotiations.
The EU reached a separate agreement last month on its so-called Digital Markets Act, a law aimed at reining in the market power of tech giants and making them treat smaller rivals fairly.
And in 2018, the EU's General Data Protection Regulation set the global standard for data privacy protection, though it has faced criticism for not being effective at changing the behaviour of tech companies. Much of the problem centres on the fact that a company's lead privacy regulator is in the country where its European head office is located, which for most tech companies is Ireland.
Irish regulators have opened dozens of data-privacy investigations, but have issued judgments in only a handful of cases. Critics say the problem is understaffing, but the Irish regulator says the cases are complex and time-consuming.
EU officials say they have learned from that experience and will make the bloc's executive Commission the enforcer for the Digital Services Act and Digital Markets Act.