Since the rollout of the COVID-19 vaccine in 2021, fake news on social media has been widely blamed for low vaccine uptake in the United States – but research by MIT Sloan School of Management PhD candidate Jennifer Allen and Professor David Rand finds that the blame lies elsewhere.

In a new paper published in Science and co-authored by Duncan J. Watts of the University of Pennsylvania, the researchers introduce a new methodology for measuring social media content’s causal impact at scale. They show that misleading content from mainstream news sources – rather than outright misinformation or “fake news” – was the primary driver of vaccine hesitancy on Facebook.

A new approach to estimating impact

“Misinformation has been correlated with many societal challenges, but there’s not a lot of research showing that exposure to misinformation actually causes harm,” explained Allen.




During the COVID-19 pandemic, for example, the spread of misinformation related to the virus and vaccine received significant public attention. However, existing research has, for the most part, only established correlations between vaccine refusal and factors such as sharing misinformation online – and largely overlooked the role of “vaccine-skeptical” content, which was potentially misleading but not flagged as misinformation by Facebook fact-checkers.

To address that gap, the researchers first asked a key question: What would be necessary for misinformation or any other type of content to have far-reaching impacts? 

“To change behavior at scale, content has to not only be persuasive enough to convince people not to get the vaccine, but also widely seen,” Allen said. “Potential harm results from the combination of persuasion and exposure.”



To quantify content’s persuasive ability, the researchers conducted randomized experiments in which they showed thousands of survey participants the headlines from 130 vaccine-related stories – including both mainstream content and known misinformation – and tested how those headlines impacted their intentions to get vaccinated against COVID-19. The researchers also asked a separate group of respondents to rate the headlines across various attributes, including plausibility and political leaning. One factor reliably predicted impacts on vaccination intentions: the extent to which a headline suggested that the vaccine was harmful to a person’s health.

Using the “wisdom of crowds” and natural language processing AI tools, Allen and her co-authors extrapolated those survey results to predict the persuasive power of all 13,206 vaccine-related URLs that were widely viewed on Facebook in the first three months of the vaccine rollout. By combining these predictions with data from Facebook showing the number of users who viewed each URL, the researchers could predict each headline’s overall impact – the number of people it might have persuaded not to get the vaccine. The results were surprising.
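The estimate described above multiplies each headline’s per-exposure persuasive effect by its view count. A minimal sketch of that logic, with entirely hypothetical effect sizes and view counts (not figures from the paper):

```python
# Sketch of the persuasion-times-exposure impact estimate.
# All numbers below are hypothetical illustrations, not data from the study.

def estimated_impact(persuasive_effect: float, views: int) -> float:
    """Predicted number of people dissuaded from vaccinating:
    per-exposure change in vaccination intention times exposure count."""
    return persuasive_effect * views

# Hypothetical headlines: (per-exposure effect, Facebook views)
headlines = {
    "flagged_misinfo": (0.005, 1_000_000),     # more harmful per view, low reach
    "vaccine_skeptical": (0.001, 50_000_000),  # less harmful per view, high reach
}

impacts = {name: estimated_impact(effect, views)
           for name, (effect, views) in headlines.items()}
print(impacts)
```

Even with a fivefold smaller per-exposure effect, the high-reach “gray-area” headline dominates in this toy example, which is the core intuition behind the paper’s finding.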

The underestimated power of exposure

Contrary to popular perceptions, the researchers estimated that vaccine-skeptical content reduced vaccination intentions 46 times more than misinformation flagged by fact-checkers. 

The reason? Even though flagged misinformation was more harmful when seen, it had relatively low reach. In total, the vaccine-related headlines in the Facebook data set received 2.7 billion views – but content flagged as misinformation received just 0.3% of those views, and content from domains rated as low-credibility received 5.1%.
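The view shares above translate into absolute counts as follows (simple arithmetic on the numbers reported in this article):

```python
# Back-of-the-envelope arithmetic on the view counts reported above.
total_views = 2_700_000_000           # all vaccine-related headline views
flagged_views = total_views * 0.003   # 0.3% flagged as misinformation
low_cred_views = total_views * 0.051  # 5.1% from low-credibility domains

print(f"flagged: {flagged_views:,.0f}")          # about 8.1 million views
print(f"low-credibility: {low_cred_views:,.0f}") # about 137.7 million views

# The single most-viewed vaccine-skeptical article (54.9 million views,
# discussed below) alone exceeds six times all flagged views combined.
assert 54_900_000 > 6 * flagged_views
```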

“Even though the outright false content reduced vaccination intentions the most when viewed, comparatively few people saw it,” explained Rand. “Essentially, that means there’s this class of gray-area content that is less harmful per exposure but is seen far more often – and thus more impactful overall – that has been largely overlooked by both academics and social media companies.”

Notably, several of the most impactful URLs within the data set were articles from mainstream sources that cast doubt on the vaccine’s safety. For instance, the most-viewed was an article – from a well-regarded mainstream news source – suggesting that a medical doctor died two weeks after receiving the COVID-19 vaccine. This single headline received 54.9 million views – more than six times the combined views of all flagged misinformation.

While the body of this article did acknowledge the uncertainty around the doctor’s cause of death, its “clickbait” headline strongly implied that the vaccine was responsible. That’s significant, since the vast majority of viewers on social media likely never click through to read past the headline.

How journalists and social media platforms can help

According to Rand, one implication of this work is that media outlets need to take more care with their headlines, even if that means they aren’t as attention-grabbing.

“When you are writing a headline, you should not just be asking yourself if it’s false or not,” he said. “You should be asking yourself if the headline is likely to cause inaccurate perceptions.”

For platforms, added Allen, the research also points to the need for more nuanced moderation – across all subjects, not just public health.

“Content moderation focuses on identifying the most egregiously false information – but that may not be an effective way of identifying the most overall harmful content,” she said. “Platforms should also prioritize reviewing content from the people or organizations with the largest numbers of followers while balancing freedom of expression. We need to invest in more research and creative solutions in this space – for example, crowdsourced moderation tools like X’s Community Notes.”

“Content moderation decisions can be really difficult because of the inherent tension between wanting to mitigate harm and allowing people to express themselves,” Rand said. “Our paper introduces a framework to help balance that trade-off by allowing tech companies to actually quantify potential harm.”

And the trade-offs could be large. An exploratory analysis by the authors found that if Facebook users hadn’t been exposed to this vaccine-skeptical content, as many as 3 million more Americans could have been vaccinated.

“We can’t just ignore this gray-area content,” Allen concluded. “Lives could have been saved.”



