Meta dismantles misinformation industry

Mark Zuckerberg announced:

In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do. 

We want to fix that and return to that fundamental commitment to free expression. Today, we’re making some changes to stay true to that ideal.

Great to have Meta acknowledge this.

When we launched our independent fact checking program in 2016, we were very clear that we didn’t want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time, which was to hand that responsibility over to independent fact checking organizations. The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.

That’s not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how. Over time we ended up with too much content being fact checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.   

This is not the sort of revelation you expect to hear from the CEO – that their fact checking was biased and became a tool to censor. He is right, of course.

We will end the current third party fact checking program in the United States and instead begin moving to a Community Notes program. We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see. We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing – and one that’s less prone to bias.

I love Community Notes. For a note to appear, it has to be endorsed by a large number of users who are not homogeneous in their views. Generally only tweets that are clearly factually wrong get noted, and you also get some good humour out of some of them.
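Neither Meta nor X publishes the exact thresholds, but the core idea is "bridging": a note is only shown if raters who normally disagree with each other both find it helpful. Below is a minimal toy sketch of that idea; the cluster labels and thresholds are illustrative assumptions on my part, not the real system (X's published algorithm uses matrix factorisation over rating history rather than fixed clusters).

```python
# Toy illustration of the "diverse endorsement" idea behind Community Notes.
# NOT Meta's or X's actual algorithm; it only shows the principle that a note
# is published when raters from opposing viewpoint clusters both endorse it.

from collections import defaultdict

def note_is_published(ratings, min_per_cluster=2):
    """ratings: list of (rater_cluster, found_helpful) tuples.
    The note is shown only if raters from at least two different clusters
    rated it, and each cluster supplied at least `min_per_cluster`
    helpful ratings."""
    helpful = defaultdict(int)
    total = defaultdict(int)
    for cluster, found_helpful in ratings:
        total[cluster] += 1
        if found_helpful:
            helpful[cluster] += 1
    # Require support from more than one viewpoint cluster...
    if len(total) < 2:
        return False
    # ...and enough helpful ratings within every represented cluster.
    return all(helpful[c] >= min_per_cluster for c in total)

# Endorsed across both clusters -> published.
print(note_is_published([("left", True), ("left", True),
                         ("right", True), ("right", True)]))   # True
# Endorsed by only one cluster -> not published.
print(note_is_published([("left", True), ("left", True),
                         ("right", False), ("right", False)]))  # False
```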

For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies).

So if one to two out of every ten of those millions of daily removals were mistakes, that is hundreds of thousands of pieces of content removed every day despite not violating any policy. As an example of this, a friend yesterday posted to Facebook a summary of my blog post on the EU over-regulating things, and it got censored!

We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms. These policy changes may take a few weeks to be fully implemented. 

Also a huge change. No more being suspended for using the wrong pronoun.

This is a massive, massive change. Some of it is no doubt due to the results of the US elections, but hopefully it also reflects genuine self-reflection that they got it wrong.
