Editorial: Social media is resistant to fact checking

FILE - In this April 14, 2020 file photo, the thumbs up Like logo is shown on a sign at Facebook headquarters in Menlo Park, Calif. Facebook's long-awaited oversight board that will act as a referee on whether specific content is allowed on the tech giant's platforms is set to launch in October, 2020. CEO Mark Zuckerberg announced two years ago that he was setting up the quasi-independent board in response to criticism that the company wasn't moving fast enough to remove misinformation, hate speech and malign influence campaigns. The board is intended to rule on thorny content issues, such as when Facebook or Instagram posts constitute hate speech.  (AP Photo/Jeff Chiu, File)

Published: 01-10-2025 10:01 PM

Modified: 01-13-2025 8:45 AM


Facts are the lifeblood of journalism, so any suggestion that they don’t matter is a grave affront to what is now referred to quaintly as the “legacy news media.” (Which is the only legacy the vast majority of journalists we have encountered over 45 years are likely to inherit.)

So news last week that Meta, the parent company of Facebook, Instagram and WhatsApp, is abandoning third-party fact-checking of its content is in one sense alarming. In another way, though, we have to wonder just how much difference it makes.

The company cast this decision as a return to CEO Mark Zuckerberg’s free-speech roots. Given that Zuckerberg is in our view both rootless and ruthless, it’s just as likely a craven attempt to curry favor with the incoming Trump administration and its conservative allies in Congress who have been screaming that fact-checking amounts to suppression of right-wing views.

So in its fact-free incarnation, Meta proposes to subject posts on its sites to a program called Community Notes, borrowed from the platform X, owned by Elon Musk. It depends on users to police false or misleading content. When enough users respond on X, a note appears below the contested material.

The New York Times, citing numerous studies, reports on how well that system has worked: “Antisemitic, racist and misogynistic posts (on X) rose sharply after Musk’s takeover, as did disinformation about climate change. Users spent more time liking and reposting items from authoritarian governments and terrorist groups, including the Islamic State and Hamas.” Welcome to the new town square.

Invoking the wisdom of the crowd is not in itself an outlandish idea. But an obvious flaw is that the process takes time and, as the Times notes, is subject to manipulation. Indeed, by the time community notes catch up to a falsehood, it may well already be ingrained in the consciousness of lots of people, where it will be extremely difficult to dislodge.

Any number of psychological studies have demonstrated why that’s so: Facts often don’t change our minds. As The New Yorker reported in a 2017 essay, “confirmation bias” is probably the best documented of the reasons for this. It refers to “the tendency people have to embrace information that supports their beliefs and reject information that contradicts them.”

Researchers point out that humans are quite adept at spotting weaknesses in arguments that they don’t believe in, but are almost invariably blind to the weaknesses in their own positions.


So is Meta’s decision to wave the white flag a game changer for the worse? Hard to tell. After all, millions of Americans went to the polls in November and voted for a president whose appeal depends in large part on the known web of falsehoods that he brazenly spins. If social media fact-checking were effective, would such a result have been possible? Isn’t it more likely that people simply enjoy hearing confirmation in entertaining form of what they already believe?

We don’t know. But it seems to us that the important point here is not that people will believe a specific falsehood presented on social media, but rather that an endless supply of falsehood destroys the whole notion of objective truth. If you can’t depend on anything to provide the straight story, then you might well begin to doubt whether there is any such thing as a straight story. This is, of course, a hallmark of authoritarian government and manifestly a part of the conservative project.

Thus we make the case for legacy media to do what they do best, which is to subject all sorts of ideas, issues and officials to scrutiny. Yes, inaccuracies sometimes creep into reporting — that being in the nature of human fallibility — and no doubt unconscious bias is sometimes embedded in it as well. But for all its failings, the best of the legacy media generally present the facts on both sides of an issue or controversy to the reader or viewer at the same time, so there is at least the possibility that they can make an informed and rational judgment. And that’s a rich legacy.