Meta’s overhaul of its content moderation and fact-checking policies in the US is bringing into focus a key geopolitical tension likely to grow under the incoming Trump administration: the regulation of speech online.
CEO Mark Zuckerberg made no secret of his attempt to align his interests with those of President-elect Donald Trump, saying he planned to work with Trump to “push back on governments around the world that are going after American companies and pushing to censor more” — naming Europe specifically. The US and the European Union have long taken different approaches to digital regulation, which has at times inflamed tensions, since many of the largest tech companies targeted by Europe’s rules are the US’s crown jewels. That dynamic is likely to be exacerbated under a second Trump administration, given the incoming president’s protectionist policies.
“The inflection point is Trump, and Facebook is just following along,” says Daphne Keller, director of the program on platform regulation at Stanford University’s Cyber Policy Center. Through the policy change, Meta is signaling to Trump that “we want to be part of a fight with Europe. We’re on your side. We’re pro-free speech,” she says.
“The inflection point is Trump, and Facebook is just following along”
Meta says the end of its third-party fact-checking program is a change it’s making “starting in the US.” The company is replacing third-party fact-checkers with a crowd-sourced Community Notes model, styled after X, and loosening restrictions on what negative things users can say — particularly about women and LGBTQ people — on its platforms. Zuckerberg says these and other content moderation policy changes will mean less content is inappropriately removed, a common complaint from the right for years, even if that means more unsavory (but legal) content stays up longer.
Under Europe’s Digital Services Act, large platforms like Meta can be held accountable for failing to promptly remove illegal content, or content that violates their own terms of service, once it’s reported, with fines as high as 6 percent of their annual global revenue. Meta says that under its changes, it will still take down illegal content but is loosening its approach to what’s sometimes referred to as “lawful but awful” content, such as likening women to “household objects.”
Even so, should Meta expand its new approach globally, it could run into trouble in Europe. Some digital law experts worry that the DSA’s risk assessment and risk mitigation provisions could be interpreted to compel platforms to remove speech, even though the law doesn’t directly require the removal of certain harmful content. Those parts of the law require platforms to assess risk and create plans to mitigate the potential negative impact of their services on “fundamental rights” — language that may be vague enough for some regulators to argue that content moderation and fact-checking decisions are included.
Others, like London School of Economics and Political Science associate law professor Martin Husovec, have said that fears the DSA would turn the EU into a “Ministry of Truth” are misplaced: even though there’s opportunity for abuse, the law is not “pre-programmed” to suppress lawful disinformation.
European Commission spokesperson Thomas Regnier declined to comment on Meta’s announcement but said in a statement that the commission will continue to monitor designated “very large online platforms” like Meta for compliance with the DSA. “Under the DSA, collaborating with independent fact-checkers can be an efficient way for platforms to mitigate systemic risks stemming from their services, while fully respecting the freedom of expression,” Regnier says. “This applies to risks such as the spread of disinformation, or negative effects to civic discourse and electoral integrity.”
Regnier also noted that Meta signed the voluntary Code of Practice on Disinformation, which includes certain commitments about working with fact-checkers. But Meta could follow in X’s footsteps and reverse that commitment.
During a press conference after Meta’s announcement, Regnier said that Europe isn’t asking any platforms to remove lawful content. “We just need to make the difference between illegal content and then content that is potentially harmful … There, we ask just platforms to take appropriate risk mitigation measures.”
Regardless, Meta will still likely need to remove more speech in Europe than it does in the US to comply with local laws. For example, Holocaust denial is illegal in countries like Germany, while the US has no such speech restrictions. Still, Keller points out that European leaders are less unified now than they were a couple years ago when it comes to dealing with issues like gender identity and immigration. “A bunch of right and far-right parties are coming to power in Europe. So there’s far less of a unified European political agenda around culture wars issues than there used to be,” she says.
Even so, Keller says she worries that Zuckerberg’s rhetoric toward Europe in his announcement could create a dynamic that emboldens European regulators who want to go after US platforms over speech concerns. “He will offend them, and they’ll get their backs up, and then they really will interpret it to give themselves broader powers and to be able to punish Meta more,” Keller says. “It’s almost like he’s going to drive them into becoming the censors that he claims they are now.”