Facebook owner Meta is killing off its current third-party fact-checking process in favor of a new system similar to the community notes used by X, formerly Twitter. Meta's version, also called Community Notes, will take a more hands-off approach, aiming to limit the number of posts taken down or restricted based on user complaints and other red flags.
Initially rolling out in the US over the coming months and eventually expanding to other countries, Meta’s Community Notes will replace the existing fact-checking method used across Facebook, Instagram, and Threads. In touting the new process, Meta CEO Mark Zuckerberg criticized the current system, calling it one that makes too many mistakes and censors too many posts.
“We built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes,” Zuckerberg said in a video posted on Facebook. “Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.”
Current process smacks of censorship
In a Meta page explaining how fact-checking works, the company said the current process uses independent fact-checkers who rate a post or ad for accuracy. Based on the review, Meta then decides whether the content should be taken down, labeled, or otherwise restricted. But that method has long triggered complaints from conservatives, who argue that it smacks of censorship. With the new political climate in the US, those voices are growing louder.
“The recent elections also feel like a cultural tipping point towards, once again, prioritizing speech,” Zuckerberg said in the video. “So we’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms.”
In a news post titled “More Speech and Fewer Mistakes,” Meta Chief Global Affairs Officer Joel Kaplan also criticized the current process, saying too much harmless content is censored, too many people wind up in “Facebook jail,” and Meta is too slow to correct such mistakes. Toward that end, the new Community Notes feature will work similarly to the one on X, where the broader user community decides which posts are misleading or inaccurate and which ones need more context.
How the new process will work
During the transition to the new process, Meta will remove the fact-checking controls and stop demoting fact-checked content, Kaplan said. Instead of overlaying warnings that users must click to see the post, the company will display a less obtrusive label to indicate that a note is available with additional context.
Kaplan also outlined four ways in which the new process would work.
- After the program has started, Meta won’t write the actual community notes or decide which ones will appear. Instead, those notes will be written and rated by contributing users.
- Like on X, Community Notes will require buy-in from a range of people with different opinions and perspectives to try to prevent bias.
- Meta will attempt to explain how different viewpoints come together to agree on which notes will appear, though Kaplan said that the company is trying to find the right way to share such details.
- Anyone who wants to contribute Community Notes on Facebook, Instagram, or Threads will be able to sign up for the program when it launches.
Even more changes are in store, according to Kaplan. Meta plans to remove restrictions on sensitive topics like immigration and gender identity that often trigger debate. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” he asserted.
Further, Meta plans to rely less on automated systems to scan for potential violations. Such systems will continue to look for illegal and high-severity infractions, such as terrorism, child sexual exploitation, drugs, fraud, and scams. But less severe violations will have to be reported by users before the company responds.
Additionally, Meta will tweak the way it handles political and social topics. Instead of simply showing less of this type of content based on user reactions, the company will treat these posts and ads like any other content in your feed. That means it will use signals such as whether you like or view a particular post to determine which ones should continue to appear in your feed.
Cultural tipping point
With Zuckerberg citing the recent elections as a cultural tipping point, Meta is clearly taking advantage of the new political climate to push back on its old ways of filtering content. The new approach also comes at a time when business leaders like Zuckerberg are cozying up to Trump in hopes of currying favor and avoiding conflicts with the new administration. Meta donated $1 million to Trump’s inaugural fund, as have Amazon, OpenAI CEO Sam Altman, and Apple CEO Tim Cook.
Most importantly, what will the new process mean for Facebook, Instagram, and Threads users? The answer depends partly on whether you prioritize free speech over facts.
Some may feel the new approach will lead to more toxic and uncivil discussions on social media, unchecked by any restrictions. That could turn Meta's platforms into something more akin to Elon Musk's X, which has been criticized for its Wild West approach to content.
Others might applaud the new tactics, arguing that even toxic speech should remain uncensored.
“The reality is that this is a trade-off,” Zuckerberg acknowledged. “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”