Meta Ditches Fact-Checking Program, Wants You to Pitch In With Community Notes


Meta announced Tuesday it is shutting down its third-party fact-checking program on Instagram and Facebook, opting instead to rely on Community Notes, a user-driven moderation system similar to that used by X. It’s also removing restrictions on topics such as gender and immigration.

“It’s time to get back to our roots around free expression on Facebook and Instagram,” Meta CEO Mark Zuckerberg said in a video, referencing his 2019 Georgetown University speech advocating for freedom of expression.

Going back roughly a decade, including during the first Trump administration, Meta implemented complex content moderation systems in response to societal and political pressures. Zuckerberg said in the video that it didn’t always work out as desired: “The problem with complex systems is that they make mistakes. Even if they accidentally censor just 1% of posts, that’s millions of people. We’ve reached a point where there’s too many mistakes and too much censorship.”

Zuckerberg called the 2024 US elections, in which Donald Trump was elected to a second term as president, a “cultural tipping point,” and said the company will prioritize speech by simplifying its policies, reducing mistakes and restoring free expression.

These changes come two weeks ahead of Trump’s inauguration and as Meta faces ongoing criticism over its handling of misinformation, allegations of political bias and the broader societal impact of its platforms.

One of the biggest challenges for social media companies over the past decade has been deciding what content to allow on their platforms and what to remove, including political and medical misinformation and hate speech. Critics have long charged that what social networks, especially Facebook, Twitter (now X) and YouTube, have been doing amounts to censoring speech.

Introducing Community Notes

In the US, Meta will now implement Community Notes, where users write and rate notes that add context to potentially misleading posts. Joel Kaplan, Meta’s chief global affairs officer, highlighted the system’s safeguards, noting that it requires agreement from people with diverse perspectives to help prevent bias. Users can sign up as contributors starting today.

Meta also plans to adjust how it enforces its policies to reduce censorship errors. Severe violations, such as terrorism and child exploitation, will still be handled by automated systems, but less severe issues will require a human report before action is taken.

In a blog post, Kaplan echoed Zuckerberg, emphasizing a “more personalized approach to political content” that lets users control how much of it they see.

“Meta’s platforms are built to be places where people can express themselves freely,” he wrote. “That can be messy. … But that’s free expression.”

Additionally, Meta will personalize how users see political and civic content, reversing its 2021 decision to reduce the amount of such posts, an approach Kaplan called “blunt.” Content from followed Pages and people will now be ranked like any other post, based on signals such as likes and views.

Kaplan said that the fact-checking program, launched in 2016 to combat misinformation, evolved into a tool that sometimes stifled legitimate political debate. 

“Over time we ended up with too much content being fact checked that people would understand to be legitimate political speech and debate,” he wrote. “Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.”

Zuckerberg said that it will take time to get the new approach right, and that there is still a good deal of illegal material to figure out how best to remove. “These are complex systems; they are never going to be perfect,” he said. 




