Community Notes for Everything


I was working on my dissertation when I read about the changes in Meta’s fact-checking policy last week. This is big news for the fight against misinformation, and it shifts who bears the responsibility for fact-checking. Platforms must now choose between two approaches: professional fact-checkers, or users’ opinions aggregated by an algorithm.

When I first read about Community Notes, I liked the idea because of its openness, and crowdsourcing seems more efficient at addressing the scale of misinformation. Still, I would have preferred to see both approaches running in parallel to compare their effectiveness. Unfortunately, Meta probably won’t share much information about the impact of the policy change. I think this decision was driven by profit maximization and a desire to shed the role of censor, and that Meta was waiting until after the elections to make the move. Leaving the decision to the public could have advantages in the scope of fact-checking, the latency in recognizing false claims, and efficiency. One disadvantage, however, is that the algorithm chosen to rate notes will shape how users behave and how effective the whole approach is.
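For context, here is a minimal sketch of the kind of “bridging” algorithm behind Community Notes: ratings are modeled with a matrix factorization, so that agreement driven by shared viewpoint is absorbed by latent factors, and a note is only surfaced when its viewpoint-independent intercept is high. The toy data, hyperparameters, and training loop below are my own illustrative choices, not X’s production pipeline.

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = notes.
# 1 = rated "helpful", 0 = rated "not helpful", NaN = never rated.
R = np.array([
    [1.0, 0.0, np.nan],
    [1.0, np.nan, 0.0],
    [np.nan, 1.0, 0.0],
    [1.0, 1.0, np.nan],
])

n_users, n_notes = R.shape
k = 1        # latent dimension: roughly, a user's "viewpoint"
lam = 0.03   # regularization strength (illustrative, not X's value)
lr = 0.05    # SGD learning rate

rng = np.random.default_rng(0)
mu = 0.0                                 # global intercept
b_u = np.zeros(n_users)                  # user intercepts (rating leniency)
b_n = np.zeros(n_notes)                  # note intercepts (viewpoint-free helpfulness)
f_u = rng.normal(0, 0.1, (n_users, k))   # user viewpoint factors
f_n = rng.normal(0, 0.1, (n_notes, k))   # note viewpoint factors

observed = [(u, n) for u in range(n_users) for n in range(n_notes)
            if not np.isnan(R[u, n])]

# Fit rating ~ mu + b_u + b_n + f_u . f_n by stochastic gradient descent.
for _ in range(2000):
    for u, n in observed:
        err = R[u, n] - (mu + b_u[u] + b_n[n] + f_u[u] @ f_n[n])
        mu += lr * err
        b_u[u] += lr * (err - lam * b_u[u])
        b_n[n] += lr * (err - lam * b_n[n])
        f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - lam * f_u[u]),
                          f_n[n] + lr * (err * f_u[u] - lam * f_n[n]))

# Because viewpoint-driven agreement is absorbed by f_u . f_n, a note's
# intercept b_n only stays high if users on both sides of the latent axis
# rated it helpful. A threshold on b_n decides what gets shown (0.4 echoes
# the published cutoff, but treat everything here as a sketch).
for n in range(n_notes):
    label = "show as helpful" if b_n[n] >= 0.4 else "do not show"
    print(f"note {n}: intercept = {b_n[n]:+.2f} -> {label}")
```

The interesting design choice is exactly the one I worry about above: the threshold and the factorization decide whose agreement counts, so the algorithm itself shapes which notes users bother to write and rate.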

In statistical terms, the change Zuckerberg described amounts to trading one error for the other: taking “the post is legitimate” as the null hypothesis, fewer legitimate posts will be mistakenly flagged as misinformation (type I errors), at the cost of more misinformation being left up (type II errors), creating more exposure to misinformation on their platforms. Whether this is the best option depends on social preferences, and it is nearly impossible to say which error type is more important to minimize (though there is certainly plenty of false information around). Criticism from every side after the 2016 election, along with government pressure, pushed Meta to raise the bar for considering content safe, but for the company this was an expense it wanted to get rid of.
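To make the trade-off concrete, here is a toy calculation with invented numbers (none of this comes from Meta): a moderation system flags a post when some “misinfo score” exceeds a threshold, and raising that threshold is what “raising the bar” means.

```python
# Invented numbers for illustration: 1,000 posts, of which 100 are actually
# misinformation. Flag a post when its score exceeds the threshold.
misinfo_scores = [0.8] * 80 + [0.4] * 20   # scores of the 100 bad posts
legit_scores = [0.6] * 45 + [0.2] * 855    # scores of the 900 good posts

def error_counts(threshold):
    type_1 = sum(s > threshold for s in legit_scores)     # legitimate posts wrongly flagged
    type_2 = sum(s <= threshold for s in misinfo_scores)  # misinformation left up
    return type_1, type_2

for t in (0.3, 0.7):
    t1, t2 = error_counts(t)
    print(f"threshold {t}: {t1} wrongful flags (type I), {t2} misses (type II)")

# threshold 0.3: 45 wrongful flags, 0 misses
# threshold 0.7: 0 wrongful flags, 20 misses -> more exposure to misinformation
```

The arithmetic is trivial, but it shows why the decision is a social one: no threshold drives both counts to zero, so someone has to decide which error is worse.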

Last summer I talked to Daniel Moreno, the director of Animal Político (a fact-checker in Mexico), and he mentioned that the main reason X uses Community Notes is that it is free. Fact-checkers have an incentive to promote professional fact-checking, since an important part of Animal Político’s income comes from verification services, as it does for other outlets like PolitiFact. Meta, however, was already looking to drop fact-checking before the election, and now it has the government on its side. Meta has faced government pressure to remove posts, especially from the last administration, which saw misinformation around COVID-19 as dangerous, and Meta has criticized that pressure. At the same time, Facebook has tried to avoid deciding what is fake and what is not, even when that seems unavoidable. I think true information is a public good on the internet, and for this reason fact-checking should be subsidized, through regulation or direct government grants.

Moving moderators from California to Texas looks like a response to long-standing criticism from conservatives that tech companies are biased in how they [fight misinformation](https://www.reuters.com/technology/zuckerberg-says-biden-administration-pressured-meta-censor-covid-19-content-2024-08-27/). The physical move also symbolizes the shift toward less fact-checking. I see the companies’ willingness to cooperate with the government as a possible way to get money and avoid regulation. I am surprised by how little discussion there is of fact-checkers’ efforts to be unbiased, and of the evidence that misinformation comes from all political extremes. Only time will tell, and hopefully this won’t backfire and increase government censorship. That last scenario is what worries me the most.

The consensus is that the problem is too complex, and companies don’t want to deal with it. Research has also shown that conservatives are more prone to believe and share misinformation, although other demographic factors can play a stronger role, and political extremists on both sides are prone to share it. The problem is therefore not confined to the extreme right, but any real effort to fight misinformation will mechanically appear biased if more of the people sharing misinformation sit on one side. Unfortunately, this politicization of fact-checking has created a hostile environment for verification efforts in the US. At least one fact-checker has already expressed concern that the government may censor their work.

What seems clear is that professional fact-checking will no longer come from the platforms themselves; the only in-app policy will be aggregating users’ opinions with an algorithm like the one behind Community Notes to flag posts containing misinformation. It remains to be seen what this means for the fight against misinformation.