The Psychology of Misinformation: An Evidence-Based Guide to Belief, Susceptibility, and Resilience

The Psychology of Misinformation

The proliferation of false and misleading information represents one of the most significant societal challenges of the modern era, influencing public health, political stability, and social cohesion (see Fisher & Fisher, 2023). Understanding the psychology of misinformation (i.e., why people believe, share, and are persuaded by misinformation) is a central task of contemporary science. This knowledge hub provides a comprehensive, evidence-based overview of the psychological dynamics behind misinformation and conspiracy theories. It synthesizes key findings from peer-reviewed research published in advances.in/psychology to illuminate the cognitive traits, social factors, and psychological interventions that shape our relationship with information in a complex digital world. This resource is designed to be a definitive guide for students, researchers, journalists, policymakers, and any individual seeking to navigate the modern information ecosystem with greater clarity and resilience.

Key Takeaways

  • Belief in misinformation is complex: It is not simply a result of “lazy” or uncritical thinking. Research indicates that individuals who believe implausible claims often operate under a different epistemic framework, using distinct criteria to evaluate evidence and trusting personal experience over institutional expertise (Robson et al., 2024). Furthermore, social needs, such as the desire for community and support, can be powerful drivers of belief, particularly for those with a conspiratorial mindset (Jetten et al., 2023).   
  • Intellectual humility is a key protective factor: The psychological trait of intellectual humility—the recognition that one’s own beliefs and knowledge are fallible—is consistently linked to a greater ability to distinguish between true and false news headlines (Bowes & Fazio, 2024). This is because it enhances critical discernment and metacognitive insight, rather than promoting simple, indiscriminate skepticism (Prike et al., 2024).   
  • Psychological inoculation builds resilience: Interventions that “inoculate” individuals by pre-exposing them to weakened forms of manipulation techniques are effective at building psychological resistance to misinformation (Traberg et al., 2024; Ziemer et al., 2024) and extremist persuasion (Saleh et al., 2023). However, these interventions must be designed with care, as some may inadvertently foster blind skepticism, reducing a person’s ability to recognize plausible or true conspiracies. The most effective approaches may be those that teach true discernment (O’Mahony et al., 2024).   
  • A significant gap exists between lab findings and real-world impact: Many interventions that prove effective under controlled laboratory conditions fail to produce lasting effects in the real world. This “efficacy versus effectiveness” gap is a major challenge for the field, highlighting the need for more ecologically valid research that considers factors like user uptake, cultural context, and the rapid decay of intervention effects over time (Roozenbeek et al., 2024).

Foundational Concepts in Misinformation Research

To effectively address the challenge of misinformation, it is essential to begin with a clear and scientifically grounded understanding of the core concepts and the nature of the research itself. This section defines key terms, explains the complexities of the scientific process, and examines the nuanced role of source credibility in how people evaluate information.

What Are Misinformation, Disinformation, and Fake News?

In both academic and public discourse, several terms are used to describe false content, and their precise meanings are critical for clear analysis. While often used interchangeably, they denote important distinctions related to intent and format.

  • Misinformation is the broadest category, referring to any information that is false, inaccurate, or misleading. A crucial characteristic of misinformation is that it is spread regardless of the intent of the person sharing it. An individual might share a false health claim believing it to be true and helpful, thereby spreading misinformation without malicious intent.
  • Disinformation is a subset of misinformation. It refers to false information that is created and disseminated with the specific, deliberate intent to deceive, manipulate, or cause harm. Examples include state-sponsored propaganda campaigns designed to destabilize a foreign election or fraudulent marketing schemes that use false claims to sell a product. The defining element is the malicious intent behind the creation and spread of the content.
  • Fake News is a specific type of disinformation that is designed to mimic the format, style, and presentation of legitimate news media. It often employs professional-looking websites, credible-sounding headlines, and a journalistic tone to deceive audiences into believing it is a product of established news-gathering processes. This mimicry is a strategic choice to exploit the trust that audiences typically place in news organizations.

Understanding these distinctions is vital for designing appropriate counter-strategies. Combating unintentional misinformation may require educational approaches that improve media literacy, whereas fighting intentional disinformation might necessitate platform-level policy changes and regulatory action.

Why Does Scientific Research on Misinformation Seem Contradictory?

Individuals seeking to understand the science of misinformation may encounter studies with seemingly conflicting conclusions. For instance, one study might find that a personality trait like conscientiousness reduces fake news sharing, while another finds no such effect. This apparent lack of consensus is not a sign of a field in chaos but is characteristic of a complex and rapidly evolving area of scientific inquiry. Research demonstrates that these scholarly debates and conflicting findings often arise from the specific methodological choices made by researchers (Lawson & Kakkar, 2024).

A comprehensive re-analysis of 12 separate studies on the topic of fake news sharing revealed that divergent conclusions could often be reconciled by accounting for differences in methodology (Lawson & Kakkar, 2024). Three key factors were identified as significant drivers of these discrepancies:

  1. Ideology Measures: How researchers choose to measure political ideology (e.g., a simple liberal-conservative scale versus a multi-dimensional measure of social and economic beliefs) can fundamentally alter the observed relationship between ideology and misinformation susceptibility (see Table 1).
  2. Analytical Approach: The statistical models and techniques used to analyze the data can lead to different interpretations. For example, whether a study controls for other variables like age, education, or other personality traits can change the apparent significance of a single factor.
  3. Choice of News Stimuli: The specific fake and real news headlines shown to participants in an experiment can influence the results. Findings based on highly partisan or emotionally charged headlines may not generalize to more subtle forms of misinformation.

A clear example of this principle in action is the debate over the role of conscientiousness. By standardizing the measurement of political ideology and applying a consistent analytical approach across multiple datasets, the re-analysis confirmed that conscientiousness does indeed play a moderating role; it weakens the tendency of individuals with certain right-wing ideologies to share false content (Lawson & Kakkar, 2024). This demonstrates that progress in the field requires not only new experiments but also careful methodological synthesis to build a more robust and coherent body of knowledge. Rather than undermining the science, these debates and resolutions are the scientific process working as intended, refining our understanding through rigorous critique and replication.

Table 1

Relationship Between Measures of Ideology and the Likelihood of Sharing Fake News Across Studies in Lawson and Kakkar (2024)

Measure | Studies | r | p | 95% CI | Range | Heterogeneity | I²
Left-right conservatism | L&K | .177 | <.001 | [.132, .222] | [.078, .266] | Q(6) = 223, p < .001 | 96.6%
Binary Republican identification | L&K | .097 | <.001 | [.055, .139] | [.024, .201] | Q(6) = 138, p < .001 | 96.0%
Left-right conservatism | LRP | .306 | <.001 | [.283, .329]
Binary Republican identification | LRP | .072 | .022 | [.011, .133] | [-.004, .175] | Q(4) = 116, p < .001 | 96.5%
Continuous Republican identification | LRP | .069 | .149 | [-.025, .162] | [-.064, .228] | Q(4) = 270, p < .001 | 98.5%
Warmth to Republicans | LRP | .171 | <.001 | [.100, .242] | [.100, .291] | Q(4) = 155, p < .001 | 97.4%
Warmth to Democrats | LRP | -.016 | .761 | [-.122, .089] | [-.211, .113] | Q(4) = 346, p < .001 | 98.8%
Social conservatism | LRP | .108 | .058 | [-.004, .219] | [-.030, .301] | Q(4) = 384, p < .001 | 98.9%
Economic conservatism | LRP | .070 | .117 | [-.018, .158] | [-.048, .218] | Q(4) = 238, p < .001 | 98.3%
Belief in a God | LRP | .123 | <.001 | [.071, .176] | [.070, .222] | Q(4) = 84.9, p < .001 | 95.3%
Trump 2016 | LRP | .096 | .020 | [.015, .178] | [.012, .247] | Q(4) = 204, p < .001 | 98.0%
Trump 2020 | LRP | .074 | .161 | [-.030, .178] | [-.078, .246] | Q(4) = 333, p < .001 | 98.8%
Note. L&K refers to studies from the Lawson and Kakkar (2021) paper; LRP refers to studies from the Lin et al. (2023) paper. The items associated with these ideology measures are included in Appendix A. r refers to the meta-analytic zero-order correlation between the variable and the likelihood of sharing fake news. 95% CI refers to a 95% confidence interval for the value of the meta-analytic correlation coefficient. Range refers to the range of zero-order correlations observed across the individual studies. Q indicates the Q-Test for heterogeneity, with its degrees of freedom in brackets. The I² statistic is a measure used to quantify the percentage of total variation across studies that is due to heterogeneity rather than chance.
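For readers who want to see how the quantities in Table 1 are typically derived, the sketch below pools a set of study-level correlations via Fisher's r-to-z transform and computes the pooled r, its 95% CI, Cochran's Q, and I². The correlations and sample sizes are hypothetical placeholders, and the fixed-effect formulas shown are a common textbook approach, not Lawson and Kakkar's (2024) exact analysis pipeline.

```python
# Minimal fixed-effect meta-analysis sketch (hypothetical inputs, not study data).
import numpy as np
from scipy import stats

r = np.array([0.10, 0.15, 0.20, 0.25, 0.18, 0.22, 0.12])  # per-study correlations (hypothetical)
n = np.array([450, 600, 520, 700, 480, 650, 550])          # per-study sample sizes (hypothetical)

z = np.arctanh(r)        # Fisher r-to-z transform
v = 1.0 / (n - 3)        # sampling variance of each z
w = 1.0 / v              # inverse-variance weights

z_bar = np.sum(w * z) / np.sum(w)   # pooled effect on the z scale
r_bar = np.tanh(z_bar)              # back-transformed meta-analytic r

Q = np.sum(w * (z - z_bar) ** 2)    # Cochran's Q for heterogeneity
df = len(r) - 1
p_q = stats.chi2.sf(Q, df)
i2 = max(0.0, (Q - df) / Q) * 100   # % of between-study variation due to heterogeneity

se = np.sqrt(1.0 / np.sum(w))                          # standard error of the pooled z
ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])   # 95% CI back on the r scale

print(f"r = {r_bar:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}], "
      f"Q({df}) = {Q:.1f}, p = {p_q:.3f}, I² = {i2:.1f}%")
```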

How Much Does the Source of Information Really Matter?

The conventional wisdom of media literacy often begins with the advice to “consider the source.” While this is a valuable heuristic, scientific research reveals that the impact of source credibility on belief in misinformation is far more complex and inconsistent than in other domains like advertising or public health messaging (Mang et al., 2024). The potential causes of these inconsistencies are presented in Figure 1. A systematic review of studies in this area highlights a more nuanced reality, where the power of a source’s reputation can be easily overridden by other factors.

The effect of source credibility appears to be highly dependent on the format and length of the information. Source cues—such as the name of a well-known newspaper versus an unknown blog—have a more consistent and powerful effect on how people evaluate short-form content like headlines or social media posts (Mang et al., 2024). In these contexts, where engagement is fleeting, the source’s reputation can act as a quick mental shortcut for assessing credibility. However, when individuals engage with longer articles, the substantive content of the message itself often becomes more influential than the source it came from. Once a person has invested the cognitive effort to read a lengthy piece, their evaluation tends to focus more on the arguments presented, the evidence cited (however flawed), and how the narrative aligns with their pre-existing beliefs. In these cases, the initial gateway of source trust is bypassed by deeper engagement with the message’s content.

Furthermore, research shows that source credibility is more consistently linked to cognitive outcomes (e.g., whether a person rates a headline as accurate) than to behavioral outcomes (e.g., whether they would actually share it; Mang et al., 2024). A person might acknowledge that a story from an untrustworthy source is likely false, yet still express an intention to share it, perhaps because it aligns with their political identity or is emotionally satisfying. This gap between belief and behavior suggests that different psychological mechanisms drive accuracy judgments versus sharing intentions. This has critical implications for media literacy education. While teaching people to check for trusted sources is a necessary first step, it is insufficient. Effective interventions must also equip individuals with the skills to critically deconstruct the substance of an argument, identify logical fallacies, and recognize manipulative rhetorical techniques (Saleh et al., 2023; Traberg et al., 2024; Ziemer et al., 2024), especially within long-form content designed to overwhelm source-based heuristics.

Figure 1

Potential Causes of Inconsistencies in Source Credibility Effects based on the Review by Mang et al. (2024)

The Cognitive and Social Roots of Belief

Why do people fall for misinformation? The answer extends far beyond simple ignorance or a lack of intelligence. A growing body of research reveals a complex interplay of cognitive processes, personality traits, and deep-seated social needs that shape an individual’s susceptibility to false beliefs. This section explores the psychological architecture of belief, moving beyond simplistic explanations to uncover the nuanced reasons why misinformation resonates.

Do Believers in Misinformation Simply Think Less Critically?

A common and intuitive explanation for belief in misinformation is that some people are simply “lazy thinkers” who fail to engage in sufficient critical thought. This idea, often termed the “miserly thinking” hypothesis, suggests that people rely on gut feelings and intuition rather than engaging in the more effortful, analytical reasoning required to vet information. While analytical thinking certainly plays a role, research increasingly challenges the notion that believers are merely less effortful in their cognition. Instead, evidence suggests they may be operating under a completely different epistemic framework (Robson et al., 2024).

A quantitative content analysis of how believers and non-believers reason about evidence found that believers in implausible claims (e.g., fringe scientific or medical theories) were not necessarily less effortful in their responses. Instead, the primary difference lay in the type of justifications they provided (Robson et al., 2024). Compared to non-believers, believers offered significantly fewer normative justifications—that is, appeals to conventional indicators of evidence quality, such as the expertise of the source, the methodology of a study, or institutional consensus (see Figure 2). Conversely, there was some evidence they provided more self-generated justifications, drawing on their own personal experiences, opinions, and intuitions to support their conclusions.

This finding is profound because it reframes the problem of misinformation from being a deficit of effort to a difference in epistemology—the philosophical theory of knowledge, especially with regard to its methods, validity, and scope. It suggests that you cannot effectively correct someone’s belief with facts if that person does not agree on what constitutes a “fact” or how its validity is established. For an individual operating within this alternative framework, a peer-reviewed study from a government agency they distrust may hold less evidentiary weight than a compelling personal testimony from an anonymous online account. This explains why traditional debunking and fact-checking efforts often fail or even backfire. The intervention is fundamentally mismatched to the recipient’s rules of evidence. To be effective, communication strategies must not only present accurate information but also frame it in ways that resonate with the recipient’s epistemic framework, a point explicitly supported by the research (Robson et al., 2024).

Figure 2

Number of Justifications Provided From Each Category (Present-relevant, Present-irrelevant, and Self-generated) for Each Group and Condition in Study 1 (A) and in Study 2 (B) in Robson et al. (2024)

Which Personality Traits Are Linked to Sharing False Content?

While situational and social factors are immensely powerful, individual differences in personality also contribute to a person’s likelihood of engaging with and sharing false information. Research has explored various traits, but one that has received significant attention is conscientiousness, one of the “Big Five” personality dimensions characterized by traits like diligence, discipline, and a sense of duty.

Initial research on this topic produced mixed results, exemplifying the methodological challenges discussed earlier. However, a large-scale re-analysis of 12 different studies, encompassing nearly 7,000 participants, brought clarity to the role of this trait (Lawson & Kakkar, 2024). The meta-analysis confirmed a direct, negative association between conscientiousness and the sharing of fake news, with an average correlation coefficient of r = -.22. This indicates that, on the whole, individuals who are more conscientious are less likely to share false news content.

The relationship, however, is more nuanced. The study found that conscientiousness also acts as a moderator of political ideology (see Figure 3). Specifically, it weakens the tendency observed among some individuals with right-wing ideologies to share false content (Lawson & Kakkar, 2024). This suggests that the inherent disposition towards carefulness and responsibility that defines conscientiousness can serve as a psychological brake, overriding partisan motivations to share provocative but inaccurate information. Even when a piece of fake news aligns with their political worldview, a highly conscientious person’s inclination to be thorough and responsible may make them pause and reconsider before sharing. This finding underscores that personality is not a deterministic factor but rather one of several interacting elements—including political identity, social context, and the nature of the information itself—that collectively shape an individual’s online behavior.
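To make the idea of moderation concrete, the sketch below fits a regression with an ideology × conscientiousness interaction term to simulated data. The variable names, effect sizes, and data are invented for illustration; a negative interaction coefficient corresponds to the pattern described above (conscientiousness weakening the ideology-sharing link), not to the study's actual estimates.

```python
# Moderation (interaction) sketch with simulated data -- not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
ideology = rng.normal(size=n)            # standardized right-wing ideology (simulated)
conscientiousness = rng.normal(size=n)   # standardized conscientiousness (simulated)

# Sharing rises with ideology, but less so for highly conscientious people.
sharing = (0.25 * ideology - 0.20 * conscientiousness
           - 0.15 * ideology * conscientiousness + rng.normal(size=n))

df = pd.DataFrame({"sharing": sharing, "ideology": ideology,
                   "conscientiousness": conscientiousness})
model = smf.ols("sharing ~ ideology * conscientiousness", data=df).fit()

# A negative interaction coefficient indicates that conscientiousness dampens
# the ideology-sharing association (i.e., moderation).
print(model.params["ideology:conscientiousness"])
```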

Figure 3

Moderation by Conscientiousness Across Studies in Lawson and Kakkar (2024)

Can Intellectual Humility Serve as a Shield Against False Beliefs?

In an information environment defined by complexity and uncertainty, one of the most powerful psychological assets may not be raw intelligence, but rather intellectual humility. This trait involves recognizing the limits of one’s own knowledge and appreciating that one’s beliefs could be wrong. Based on a meta-analysis (see Figure 4), a robust body of research now demonstrates that this virtue is a significant protective factor against misinformation (Bowes & Fazio, 2024).

Figure 4

Meta-Analytic Relations Between IH and (a) Belief in Misinformation, (b) Behavioral Intentions, and (c) Behavior and Caterpillar Plots in Bowes and Fazio (2024)

Multiple studies have established a consistent association between higher levels of intellectual humility and lower receptivity to misinformation. Crucially, follow-up research has illuminated the precise mechanism behind this protective effect. It is not that intellectually humble individuals are simply more skeptical or cautious in general—a trait that could lead them to reject true information as well as false. Instead, intellectual humility is linked to greater misinformation discernment: a refined ability to accurately distinguish between true and false news headlines (Prike et al., 2024). They are better at accepting truth and rejecting falsehood, indicating a superior critical thinking process (see Figure 5).

Figure 5

Correlations Between Intellectual Humility and Misinformation Discernment (Type-1 d’) in Prike et al. (2024)

The psychological engine driving this enhanced discernment appears to be superior metacognitive insight. Metacognition is “thinking about thinking,” and metacognitive insight is the ability to accurately judge one’s own cognitive processes—in this case, to know when your assessment of a headline is likely correct and when it is likely incorrect. Intellectually humble people seem to be better attuned to their own internal signals of confidence and uncertainty (Prike et al., 2024). This allows them to apply more scrutiny when they feel uncertain and to place more trust in their judgments when they are on solid ground. This finding separates true critical thinking from simple cynicism.
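To make the Type-1 d′ measure in Figure 5 concrete, the sketch below applies the standard signal-detection formula to hypothetical headline judgments: a true headline rated as true counts as a hit, and a false headline rated as true counts as a false alarm. The counts are invented; this is the generic recipe rather than the authors' code.

```python
# Signal-detection sketch of misinformation discernment (hypothetical counts).
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Type-1 sensitivity (d') with a log-linear correction to avoid 0/1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Discerning reader: accepts most true headlines, rejects most false ones.
print(dprime(hits=18, misses=2, false_alarms=4, correct_rejections=16))
# Blanket sceptic: rejects nearly everything, true or false, so d' is lower.
print(dprime(hits=5, misses=15, false_alarms=1, correct_rejections=19))
```

The second case illustrates why indiscriminate skepticism is not the same as discernment: rejecting everything lowers false alarms but also misses most true headlines, dragging d′ down.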

The protective effect of intellectual humility also varies depending on the type of misinformation. The association is strongest for beliefs in pseudoscience, particularly those related to anti-vaccination attitudes and COVID-19 misinformation (Bowes & Fazio, 2024). The relationship is weaker, though still present, for general fake news or conspiracy theories. This suggests that intellectual humility is especially powerful in domains where individuals are asked to evaluate claims that contradict a strong scientific consensus. This has important implications for education and public outreach. Interventions aimed at countering misinformation could be more effective if they focus not just on teaching specific facts, but on fostering intellectual virtues and metacognitive skills. Encouraging people to adopt a mindset of humility and to ask themselves, “How confident am I in this belief, and what evidence is that confidence based on?” could be a more durable and broadly applicable strategy than simply debunking individual false claims.

How Do Conspiracy Beliefs Relate to Social Isolation?

Conspiracy theories are not just sets of alternative beliefs; they often function as comprehensive worldviews that fulfill deep-seated psychological and social needs. One of the most critical of these is the need for community and social support. Research reveals a striking link between a conspiratorial mindset and feelings of social isolation, suggesting that for many, online platforms are not just sources of information but vital social lifelines (Jetten et al., 2023).

A two-study investigation conducted in both China and Australia examined how individuals with varying levels of conspiracy mentality experienced a mandatory 24-hour period of “unplugging” from the internet and all digital media. The results were consistent across both cultures: individuals with a stronger belief in conspiracy theories reported experiencing significantly more negative emotions, such as anger and anxiety, during their time offline (Jetten et al., 2023).

Statistical analysis revealed that this heightened distress was explained by a greater sense of social isolation and a perceived lack of social support during the unplugging period (see Figure 6). The findings suggest that people who are highly invested in conspiracy theories may rely more heavily on online communities to find validation and support for their beliefs—support they may not receive in their offline lives. When this connection is severed, they experience a more acute sense of being “out of the loop” and a greater fear of missing out (FoMO) on important information and social interaction within their community. The aversive experience of being offline is therefore amplified for those who feel cut off from their primary source of social and epistemic validation.
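A simple way to picture the analysis summarized in Figure 6 is sketched below: simulated data in which conspiracy mentality predicts social isolation, which in turn predicts lower positive emotion, with the indirect effect estimated as the product of paths a and b and bootstrapped for a confidence interval. All variable names and effect sizes here are assumptions for illustration, not estimates from Jetten et al. (2023).

```python
# Simple mediation sketch (X -> M -> Y) with simulated data and a bootstrap CI.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
conspiracy = rng.normal(size=n)                      # X: conspiracy mentality (simulated)
isolation = 0.4 * conspiracy + rng.normal(size=n)    # M: felt social isolation (simulated)
positive_emotion = -0.5 * isolation - 0.1 * conspiracy + rng.normal(size=n)  # Y

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # path a: X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # path b: M -> Y given X
    return a * b

point = indirect_effect(conspiracy, isolation, positive_emotion)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)   # resample participants with replacement
    boot.append(indirect_effect(conspiracy[idx], isolation[idx], positive_emotion[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```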

Figure 6

Mediation model of the effect of conspiracy mentality on positive emotions while unplugged via social isolation and lack of social support, controlling for hours unplugged, Study 1 in Jetten et al. (2023).

Conspiracy mentality leads to less positive emotions during unplugging

This research fundamentally reframes the function of online engagement for this population. For those with a high conspiracy mentality, social media is not merely a passive source of content but an active, participatory environment that provides a community of like-minded individuals. This online ingroup offers the social reinforcement that underpins their identity and worldview. This has critical implications for how we approach counter-strategies. Efforts to de-platform conspiracy communities or isolate individuals from these online spaces, while sometimes necessary, may carry the unintended consequence of reinforcing their sense of persecution and alienation. This can strengthen their identity as a besieged group holding special knowledge, potentially driving them to more extreme, fringe platforms and making them even harder to reach with credible information. Any effective, long-term strategy must therefore address the underlying social needs that draw people to these communities in the first place.

What Role Do Identity and Media Consumption Play in Susceptibility?

An individual’s susceptibility to misinformation is not determined in a vacuum. It is profoundly shaped by their social identity and their media consumption habits, which often exist in a self-reinforcing loop. When a particular narrative aligns with a core aspect of a person’s identity—be it national, ethnic, religious, or political—they are more likely to accept it uncritically. This effect is magnified when their media diet consists primarily of sources that echo and reinforce that identity.

A compelling real-world demonstration of this dynamic was observed in a study examining susceptibility to pro-Kremlin disinformation among Germans of Russian descent (Ziemer et al., 2024). The research found a clear and positive correlation: the more strongly an individual identified as Russian, the more susceptible they were to pro-Kremlin narratives about the war in Ukraine. Similarly, greater exposure to Russian state-controlled media was also linked to higher susceptibility (see Figure 7).

When compared to a control group of Germans without a Russian migration background, participants with a Russian background were, on average, less skilled at correctly identifying pro-Russian disinformation. They also perceived these false narratives as more credible, attributed less responsibility to Russia for the war, and expressed less solidarity with Ukraine (Ziemer et al., 2024). This illustrates a powerful interplay between identity and information processing. The disinformation narratives likely resonated more strongly with this group because they offered a version of events that was more psychologically comfortable and consistent with a positive view of their heritage identity. This identity-protective cognition, fueled by a media ecosystem that continuously validates these narratives, creates a powerful barrier to accepting conflicting, fact-based information. This dynamic is not unique to any single group; it is a fundamental aspect of human psychology that applies across the political and cultural spectrum, highlighting the immense challenge of communicating across identity divides.

Figure 7

Influence of Condition (Inoculation vs. Control) and Identity (Russian Migration Background vs. no Russian Migration Background) on a) Disinformation Recognition, b) Disinformation Credibility, c) Russia’s Responsibility for the War, and d) Solidarity with Ukraine in Ziemer et al. (2024).

Bar chart illustrating the experimental effects of inoculation versus control conditions and identity (Russian migration background vs. no Russian migration background) on disinformation recognition, disinformation credibility, Russia’s responsibility for the war, and solidarity with Ukraine. The chart shows how inoculation improves disinformation recognition and reduces its credibility, increases perceptions of Russia’s responsibility, and strengthens solidarity with Ukraine across different identity groups.
Note. Numbers indicate the mean. Error bars indicate 95% Confidence Intervals around the mean.

Evidence-Based Interventions and Counter-Strategies

Understanding the psychology of misinformation is the first step; developing effective, evidence-based strategies to combat it is the second. Researchers have been rigorously testing a variety of interventions designed to build resilience, correct false beliefs, and improve the overall health of the information ecosystem. This section explores the science behind these counter-strategies, from “pre-bunking” techniques like psychological inoculation to novel approaches like crowdsourcing, and evaluates their strengths, weaknesses, and real-world applicability.

What Is Psychological Inoculation and Is It Effective?

One of the most promising and extensively studied interventions against misinformation is psychological inoculation. The theory, analogous to medical vaccination, posits that you can build people’s resistance to future persuasion attempts by pre-exposing them to a weakened dose of the manipulative arguments or techniques they are likely to encounter. Instead of debunking a specific false claim after it has spread (“post-bunking”), inoculation aims to “pre-bunk” the underlying techniques of manipulation, making people immune before they are exposed.

A growing body of evidence from numerous studies confirms that this approach is effective across various contexts and populations. Key findings on its effectiveness include:

  • Improving Disinformation Recognition and Shifting Attitudes: In the context of state-sponsored disinformation, an inoculation intervention that explained common propaganda techniques (see Figure 8) not only improved participants’ ability to correctly recognize false narratives but also shifted their underlying attitudes. After the intervention, participants perceived the disinformation as less credible, assigned greater responsibility to the aggressor nation for the conflict, and expressed stronger solidarity with the victims (Ziemer et al., 2024). This shows that inoculation can impact not just belief, but also related social and political attitudes (see Figure 7).
  • Targeting Emotional Manipulation: Misinformation often works by appealing to strong emotions like anger, fear, or outrage, bypassing rational thought. An emotion-fallacy inoculation, which specifically forewarns people about how their emotions can be manipulated to spread misleading news, has been shown to be effective. This type of intervention significantly reduces the perceived reliability of false news and improves participants’ overall ability to discern true from false content (Traberg et al., 2024).
  • Building Resilience in High-Risk Environments: The power of inoculation is not limited to online experiments. A field study conducted in post-conflict areas in Iraq tested a short, gamified inoculation intervention called Radicalise. The game puts players in the shoes of an extremist propagandist, exposing them to the techniques used for radicalization. The study found that playing the game was effective at building resilience against extremist persuasion techniques among a vulnerable population. Participants who played the game became significantly better at spotting manipulative messaging and reported a reduced willingness to support extremist organizations (Saleh et al., 2023).

Together, these studies demonstrate that psychological inoculation is a robust and versatile tool. Whether delivered through text, videos, or interactive games, it can effectively equip individuals with the cognitive skills needed to resist manipulation and navigate a complex information environment more critically.

Figure 8

Inoculation Sheet Used in Ziemer et al. (2024)

Inoculation sheet used in a study to combat pro-Russian disinformation about the Ukraine war. Highlights three propaganda strategies: denial of events, denial of the aggressor role, and blame shifting. Provides factual counterarguments to false claims about Bucha, NATO, and international condemnation. Aimed at enhancing recognition of disinformation and strengthening solidarity with Ukraine. Relevant for research on media effects and identity in the context of the Russia-Ukraine conflict.
Note. In the study, the inoculation sheet was presented in German. The German version can be found in the supplementary materials.

What Are the Limitations and Risks of Inoculation Strategies?

While psychological inoculation is a powerful tool, it is not a panacea. Like any intervention, it has limitations and potential unintended consequences that must be carefully considered. A critical challenge is ensuring that the intervention produces discerning critical thinkers rather than indiscriminate cynics. The ultimate goal of media literacy is not to make people disbelieve everything, but to equip them to separate credible information from falsehood.

Research has suggested a significant risk associated with some standard inoculation approaches: they can inadvertently induce blind skepticism. A study comparing different types of interventions found that while a standard inoculation successfully reduced participants’ belief in implausible conspiracy theories, it simultaneously made them worse at correctly identifying plausible ones (O’Mahony et al., 2024). In teaching people to recognize the “red flags” of conspiratorial thinking, the intervention made them overly suspicious, leading them to reject legitimate information simply because it challenged an official narrative (see Figure 9). This highlights a crucial distinction:

  • True Discernment: The ideal outcome. This is the ability to critically evaluate evidence on its own merits and accurately distinguish between well-founded claims (which may sometimes be conspiratorial in nature, such as the Watergate scandal) and baseless, implausible theories.
  • Blind Skepticism: A potential negative side effect. This is an indiscriminate tendency to reject any claim that sounds conspiratorial or challenges authority, regardless of the evidence supporting it.

This finding reveals a fundamental risk in many well-intentioned media literacy efforts. By focusing too heavily on the stylistic features of misinformation, we might be training people to dismiss genuine investigations into real-world wrongdoing. Recognizing this risk, researchers developed and tested a novel “Discernment” intervention. Unlike standard inoculation, this approach actively teaches people how to evaluate the quality of evidence for both plausible and implausible theories, discouraging blind rejection. The study found that this was the only intervention that successfully improved participants’ ability to critically appraise both types of claims, fostering true discernment without the negative side effect of blind skepticism (O’Mahony et al., 2024). The future of inoculation research and public education may therefore have to focus on developing and scaling these more nuanced interventions. The measure of success should not be a simple reduction in belief in a specific false claim, but rather a demonstrable improvement in an individual’s overall critical reasoning process.

Figure 9

Violin plots showing the kernel density distribution of A) conspiracy judgements of the likelihood of plausible and implausible conspiracy theories (JICT), B) discernment between plausible and implausible conspiracy theories (JICT d’), and C) general conspiracy ideation (GCBS) in O’Mahony et al. (2024).

Can We Fight Misinformation by Crowdsourcing Fact-Checks?

While individual-level interventions like inoculation are valuable, system-level solutions that leverage the “wisdom of the crowd” offer another promising avenue for improving the information ecosystem. Crowdsourcing interventions use the collective judgments of ordinary users to identify and label misleading content, providing a scalable alternative or supplement to professional fact-checking (Pretus et al., 2024). Platforms like X (formerly Twitter) have implemented this with their “Community Notes” feature.

Research suggests this approach can be surprisingly effective, even in reaching the most polarized audiences who are often resistant to corrections from mainstream sources (Pretus et al., 2024). The key is that collective accuracy judgments from peers are often perceived as more trustworthy and less biased than those from institutional fact-checkers. Extreme partisans who might dismiss a fact-check from a news organization are more willing to reduce their sharing of misinformation when they see that a large number of fellow users have flagged it as misleading.

However, the success of a crowdsourcing system depends on a delicate balance of three competing factors (Pretus et al., 2024; see Figure 10):

  1. Trust in the Source: Users must trust the people who are providing the fact-checks.
  2. Cognitive Dissonance: The fact-check must conflict with the user’s prior belief enough to create psychological discomfort (dissonance), motivating them to reconsider their position.
  3. Crowd Size: The crowd providing the fact-check must be sufficiently large to be persuasive.

These factors are often in tension. In a polarized environment, the most trusted sources (those within one’s own ideological “echo chamber”) are the least likely to provide dissonant information. Conversely, sources that provide dissonant information are often too ideologically distant to be trusted. To overcome this, researchers propose a “Two Steps Away” implementation strategy. This network-based approach would connect users with fact-checkers from ideologically adjacent communities—just outside their immediate bubble. These sources are close enough to be considered trustworthy but different enough to introduce novel information, creating the optimal conditions for belief revision (Pretus et al., 2024). While challenges related to speed, scale, and the potential for manipulation remain, crowdsourcing represents an innovative model for fostering a sense of shared responsibility for content moderation.
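As a toy illustration of the “Two Steps Away” idea, the sketch below represents ideological communities as nodes in a small graph (the community labels and topology are invented) and selects fact-checking sources at a network distance of exactly two from a user's own community. Pretus et al. (2024) describe the strategy conceptually; this is just one possible way to operationalize it.

```python
# Toy "Two Steps Away" sketch: pick fact-checkers from communities two hops away.
import networkx as nx

# Nodes are ideological communities; edges connect ideologically adjacent ones.
# The labels and structure are invented for illustration.
g = nx.Graph()
g.add_edges_from([
    ("far_left", "centre_left"), ("centre_left", "centre"),
    ("centre", "centre_right"), ("centre_right", "far_right"),
])

def two_steps_away(community):
    # Shortest-path distances from the user's own community, capped at 2 hops.
    dist = nx.single_source_shortest_path_length(g, community, cutoff=2)
    return [c for c, d in dist.items() if d == 2]

print(two_steps_away("far_right"))   # -> ['centre']: close enough to trust, distant enough to dissent
```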

Figure 10

Elements of Crowdsourcing Interventions that Influence Belief Update in Pretus et al. (2024)


A Comparative Overview of Misinformation Interventions

To provide a clear summary of the primary intervention strategies discussed in the scientific literature published here, Table 3 compares their mechanisms, strengths, and documented limitations. This allows for a quick, evidence-based assessment of the current landscape of counter-misinformation tools.

Table 3

Overview of Intervention Types

Standard Inoculation
Primary mechanism: Pre-bunking: exposing individuals to weakened manipulation techniques to build cognitive resistance against future persuasion attempts.
Key strengths: Broadly effective in lab settings (Traberg et al., 2024); can be successfully gamified for engagement (Saleh et al., 2023); effective across different identity groups (Ziemer et al., 2024).
Documented limitations / risks: May induce “blind skepticism,” harming the ability to recognize plausible conspiracies (O’Mahony et al., 2024); lab-based effects can decay rapidly and may not translate to real-world effectiveness (Roozenbeek et al., 2024).

Discernment Intervention
Primary mechanism: A form of inoculation that teaches critical evaluation of evidence for both plausible and implausible claims, actively discouraging blind skepticism.
Key strengths: In one comparative study, the only tested method that significantly improves true discernment (evaluating both claim types accurately) without inducing blind skepticism (O’Mahony et al., 2024).
Documented limitations / risks: A novel approach that is not yet widely deployed; requires further research on scalability and long-term effectiveness in real-world environments.

Crowdsourcing
Primary mechanism: Leveraging the collective intelligence of platform users to identify and label misleading content, creating social norms around accuracy.
Key strengths: Can be effective even with highly partisan users who distrust institutional sources (Pretus et al., 2024); highly scalable via existing platforms; reinforces a sense of shared responsibility.
Documented limitations / risks: Relies on a delicate balance of trust and cognitive dissonance; fact-checks can be delayed; vulnerable to coordinated manipulation or bot attacks (Pretus et al., 2024).

Advancing the Science: Challenges and Future Directions

The scientific study of misinformation is a dynamic and self-critical field. While significant progress has been made in understanding the psychology of misinformation belief and developing interventions, researchers are keenly aware of the formidable challenges that lie ahead. To create truly effective and lasting solutions, the field must confront the limitations of current research paradigms and embrace methodological innovation. This final section looks to the future, exploring the critical gap between lab results and real-world impact and highlighting the evolving methods scientists are using to better understand causality in a complex, digital world.

Why Do Interventions That Work in the Lab Often Fail in the Real World?

One of the most pressing issues confronting misinformation research is the significant gap between efficacy and effectiveness (Roozenbeek et al., 2024). Efficacy refers to how well an intervention works under the pristine, controlled conditions of a laboratory experiment. Effectiveness refers to how well it performs in the messy, chaotic, and unpredictable environment of the real world. The field has produced an abundance of studies demonstrating the efficacy of various interventions, but there is a troubling lack of evidence for their real-world effectiveness. This discrepancy stems from several key challenges (Roozenbeek et al., 2024; see Figure 11):

  • Over-reliance on Lab Studies: The vast majority of research is conducted in artificial lab settings. While essential for establishing causality (Tay et al., 2024), these findings may not generalize to how people consume information in their daily lives, where they are distracted, emotionally engaged, and influenced by their social networks.
  • Rapid Decay and Testing Effects: The protective effects of many interventions decay rapidly, sometimes vanishing within days or even hours. Furthermore, many lab studies may have artificially inflated the longevity of their interventions due to testing effects. The very act of giving participants an immediate post-test (e.g., asking them to rate headlines right after an intervention) acts as a rehearsal or “booster shot” that helps them remember the lesson. Since this immediate rehearsal rarely happens in the real world, the true longevity of these interventions is likely overestimated (Roozenbeek et al., 2024).
  • Modest Real-World Impact: Even when an intervention has a statistically significant effect, its practical impact may be modest. Interventions are likely to benefit only a small fraction of the population, and reaching the most vulnerable or prolific spreaders of misinformation remains a major hurdle (Roozenbeek et al., 2024).
  • The “One-Size-Fits-All” Fallacy: Interventions developed and tested primarily in Western, educated, industrialized, rich, and democratic (WEIRD) societies often fail to replicate or work less well in other cultural contexts, particularly in the Global South. To be effective, interventions must be tailored to the specific cultural, linguistic, and socioeconomic realities of the target audience, preferably by co-designing them with local partners (Roozenbeek et al., 2024).

To bridge the efficacy-effectiveness gap, the field must shift its priorities. It is time to move beyond simply measuring performance on item evaluation tasks in the lab and to elevate real-world outcomes—such as user uptake, long-term behavioral change, and community-level resilience—as equally important metrics for determining “what works.”

Figure 11

A chart showing which individuals are impacted by a hypothetical learning-based (“boosts”; Panel A) or sharing-based intervention (“nudges”; Panel B). The former intervention is aimed at boosting people’s ability to recognise unwanted content (such as misinformation), the latter at preventing unwanted content sharing. Concentric half circles indicate diminishing potential group size. “Successfully treated individuals” are those whom the intervention successfully taught or boosted the target skill or competence (Panel A), or who were prevented from sharing unwanted content as a result of the intervention (Panel B). We make no claims as to the relative size of each concentric half circle, only that each successive group of individuals must logically be smaller than the one above it; we know, for instance, that the percentage of social media users who share “fake news” is low (Allen et al., 2020; Guess et al., 2019), and so it may be the case that only a fraction of social media users actually share unwanted content, and therefore that the second largest concentric circle in Panel B should be much smaller than the largest one, had they been scaled to relative size. Figure from Roozenbeek et al. (2024).

How Is Research Methodology Evolving to Better Understand Causality?

The gold standard for establishing causality in science is the randomized controlled trial (RCT). However, in the study of misinformation, RCTs face significant limitations. It is often not feasible or ethical to randomly expose people to harmful misinformation to study its effects (Tay et al., 2024). For example, one could not ethically expose cancer patients to medical misinformation to measure its impact on their treatment decisions. This has led the field to rely heavily on observational studies, which can identify correlations but often struggle to make justified causal claims.

To overcome these limitations and better test causal hypotheses with real-world data, researchers are increasingly turning to a powerful set of quasi-experimental methods often referred to as “natural experiments” (Tay et al., 2024). These methods take advantage of naturally occurring events or existing rules that effectively sort people into treatment and control groups, allowing for rigorous causal inference without direct manipulation by the researcher. Adopting these advanced methods allows the field to move beyond the constraints of the lab and study the causal impact of real-world events and policies. Key underutilized methods include:

  • Regression Discontinuity Design (RDD): This method is used when an intervention or treatment is assigned based on a cutoff score on some continuous variable. For example, researchers used the publication date of the fraudulent 1998 Wakefield study on vaccines as a sharp cutoff point to study its causal effect on vaccination rates and disease outbreaks. By comparing outcomes for children born just before and just after this date, they could isolate the study’s causal impact, as these groups of children were otherwise statistically identical (Tay et al., 2024).
  • Difference-in-Differences (DiD): This technique compares the change in an outcome over time between a group that was exposed to an event or policy (the treatment group) and a group that was not (the control group). For instance, one could study the impact of a social media platform’s policy change in one country by comparing trends in that country to trends in a similar country where the policy was not implemented (Tay et al., 2024). A minimal code sketch of this design appears after this list.
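The sketch below shows, with simulated data and invented variable names, how a basic DiD estimate is commonly obtained: regress the outcome on treatment, time period, and their interaction, and read the interaction coefficient as the estimated policy effect (assuming parallel trends). It is a generic illustration of the method, not an analysis from Tay et al. (2024).

```python
# Difference-in-differences sketch with simulated data (assumed variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4000
treated = rng.integers(0, 2, n)   # 1 = country that adopted the platform policy
post = rng.integers(0, 2, n)      # 1 = observation after the policy change

# Simulated outcome: a common time trend plus an effect only for treated units post-change.
sharing = 1.0 - 0.2 * post - 0.3 * treated * post + rng.normal(0, 0.5, n)

df = pd.DataFrame({"sharing": sharing, "treated": treated, "post": post})
did = smf.ols("sharing ~ treated * post", data=df).fit()

# The interaction coefficient is the DiD estimate of the policy effect.
print(did.params["treated:post"])
```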

By embracing this expanded methodological toolkit (see Table 4), researchers can ask more ambitious and ecologically valid questions. Instead of only asking, “Can this intervention work under perfect conditions?” they can begin to answer, “Did this real-world event, platform policy, or educational program actually change people’s beliefs and behaviors?” This methodological evolution is crucial for building more complete theories of misinformation and for developing applied interventions and policies that are truly evidence-based.

Table 4

Selection of Empirical Strategies For Causal Assessment in Tay et al. (2024)

Randomized experiment
Brief explanation:
  • Randomization helps rule out alternative explanations and allows researchers to ascribe differences in outcomes to causal effects of the randomized manipulation.
  • Challenges arise when latent causes are hypothesized (e.g., psychological processes such as motivated reasoning; Tappin et al., 2020), and when ethical or feasibility considerations preclude certain manipulations (e.g., exposing cancer patients to cancer misinformation to test for misinformation effects).
Examples:
  • Effect of misinformation on vaccination intention (e.g., Loomba et al., 2021)
  • Effect of repetition on belief in true and false information (e.g., Pillai & Fazio, 2021)
  • Effect of source-credibility information and social norms on misinformation engagement (e.g., Prike et al., 2024)

Observational studies
Brief explanation:
  • Observational studies rely on naturally occurring data or self-reports from research participants, and often seek to establish causality, either explicitly or implicitly, by controlling for third variables.
  • The choice of control variables should be explicitly justified, such that the assumption of no unmeasured confounding is plausible. However, it is instead often based on disciplinary norms and data availability, which can lead to confused analysis goals or unjustified causal conclusions for readers and the public, even if researchers avoid using causal terms (Grosz et al., 2020).
Examples:
  • Relationship between perceived (self-reported) exposure to misinformation and trust in institutions (e.g., Boulianne & Humprecht, 2023)
  • Relationship between social-media use and belief in conspiracy theories (e.g., Enders et al., 2021)

Instrumental-variable analysis
Brief explanation:
  • Instrumental-variable analysis allows researchers to test for a causal effect between an explanatory variable of interest and an outcome, even in the presence of unmeasured confounding between the two.
  • The analysis relies on identifying instruments, which are variables that influence the outcome only through their effects on the explanatory variable and that must not share any unobserved common cause with the outcome. If these two assumptions are plausible, researchers can use this approach to isolate the unconfounded variation in the explanatory variable. However, the two assumptions cannot be empirically tested.
Examples:
  • Effect of media attention to terrorism on future terrorist attacks (e.g., Jetter, 2017)
  • Effect of watching Fox News during COVID-19 on social-distancing behaviors (e.g., Ash et al., 2024; Bursztyn et al., 2020)
  • Effect of Facebook and Instagram on political beliefs, attitudes, and behavior during the 2020 US election (e.g., Allcott et al., 2024)

Regression-discontinuity design
Brief explanation:
  • The defining feature of the regression-discontinuity approach is a running variable, in which there is a sharp change in the probability of the explanatory variable of interest being assigned or activated around a threshold value.
  • This approach has been described as a “locally randomized” experiment (Lee & Lemieux, 2014), because it compares observations around the threshold, motivated by the assumption that these observations are unlikely to systematically differ aside from their status as regards the explanatory variable of interest. Whether this assumption holds will depend on whether the observations can directly manipulate their values on the running variable.
Examples:
  • Effect of Wakefield et al. (1998) on vaccine skepticism (e.g., Motta & Stecula, 2021)
  • Effect of recession news on consumer behaviors (e.g., Eggers et al., 2021)
  • Effect of fact checks on Twitter (e.g., Aruguete et al., 2023)

Difference-in-differences and synthetic control
Brief explanation:
  • Difference-in-differences compares differences in outcomes over time between treatment and control groups. With non-random assignment into groups, the analysis relies on the parallel-trends assumption, where the treatment and control groups would have followed the same trend over time in the absence of treatment.
  • Synthetic controls may be used to better match pre-treatment characteristics, if the parallel-trends assumption is unlikely to hold and no control group is sufficiently similar to the treatment group to act as a counterfactual.
Examples:
  • Demand vs. supply of misinformation on Facebook (Motta et al., 2023)
  • Effect of misinformation on vaccination and voting behavior (Carrieri et al., 2019)
  • Effect of fake-news flagging on dissemination behaviours (Ng et al., 2021)
Note. For another introduction and additional references for instrumental variable analysis and regression discontinuity design, see Grosz et al. (2024), and for another introduction and additional references for difference-in-differences and synthetic control, see Rothbard et al. (2023) and Bonander et al. (2021).


This knowledge hub was compiled by the editorial team of advances.in/psychology, based on peer-reviewed research from our contributing authors. Our mission is to advance the scientific understanding of the human mind and behavior. For more information about our journal and editorial standards, please visit our About page.

References

The findings and concepts presented in this guide are based on the following peer-reviewed articles published in advances.in/psychology:

Bowes, S. M., & Fazio, L. K. (2024). Intellectual humility and misinformation receptivity: A meta-analytic review. advances.in/psychology, 2, e940422. https://doi.org/10.56296/aip00026

Fisher, J. D., & Fisher, W. A. (2023). An information-motivation-behavioral skills (IMB) model of pandemic risk and prevention. advances.in/psychology, 1(1), 1-26. https://doi.org/10.56296/aip00004

Jetten, J., Zhao, C., Álvarez, B., Kaempf, S., & Mols, F. (2023). Trying to unplug for 24 hours: Conspiracy mentality predicts social isolation and negative emotions when refraining from internet use. advances.in/psychology, 1(1), 1-19. https://doi.org/10.56296/aip00002

Lawson, M.A., & Kakkar, H. (2024). Resolving conflicting findings in misinformation research: A methodological perspective. advances.in/psychology, 2, e235462. https://doi.org/10.56296/aip00031

Mang, V., Fennis, B. M., & Epstude, K. (2024). Source credibility effects in misinformation research: A review and primer. advances.in/psychology, 2, e443610. https://doi.org/10.56296/aip00028

O’Mahony, C., Murphy, G., & Linehan, C. (2024). True discernment or blind scepticism? Comparing the effectiveness of four conspiracy belief interventions. advances.in/psychology, 2, e215691. https://doi.org/10.56296/aip00030

Pretus, C., Gil-Buitrago, H., Cisma, I., Hendricks, R. C., & Lizarazo-Villareal, D. (2024). Scaling crowdsourcing interventions to combat partisan misinformation. advances.in/psychology, 2, e85592. https://doi.org/10.56296/aip00018

Prike, T., Holloway, J., & Ecker, U.K.H. (2024). Intellectual humility is associated with greater misinformation discernment and metacognitive insight but not response bias. advances.in/psychology, 2, e020433. https://doi.org/10.56296/aip00025

Robson, S.G., Faasse, K., Gordon, E.-R., Jones, S. P., Drew, M., & Martire, K. A. (2024). Lazy or different? A quantitative content analysis of how believers and nonbelievers of misinformation reason. advances.in/psychology, 2, e003511. https://doi.org/10.56296/aip00027

Roozenbeek, J., Remshard, M., & Kyrychenko, Y. (2024). Beyond the headlines: On the efficacy and effectiveness of misinformation interventions. advances.in/psychology, 2, e24569. https://doi.org/10.56296/aip00019

Tay, L. Q., Hurlstone, M., Jiang, Y., Platow, M. J., Kurz, T., & Ecker, U. K. H. (2024). Causal inference in misinformation and conspiracy research interventions. advances.in/psychology, 2, e69941. https://doi.org/10.56296/aip00023

Traberg, C. S., Morton, T., & van der Linden, S.  (2024). Counteracting socially endorsed misinformation through an emotion-fallacy inoculation. advances.in/psychology, 2, e765332. https://doi.org/10.56296/aip00017

Ziemer, C.T., Schmid, P., Betsch, C., & Rothmund, T. (2024). Identity is key, but Inoculation helps – how to empower Germans of Russian descent against pro-Kremlin disinformation. advances.in/psychology, 2, e628359. https://doi.org/10.56296/aip00015