Intellectual humility and misinformation receptivity: A meta-analytic review

Shauna M. Bowes1* & Lisa K. Fazio1

Received: April 26, 2024. Accepted: September 26, 2024. Published: September 27, 2024. https://doi.org/10.56296/aip00026

Abstract
Given the pervasiveness and dangers of misinformation, there has been a surge of research dedicated to uncovering predictors of and interventions for misinformation receptivity. One promising individual differences variable is intellectual humility (IH), which reflects a willingness to acknowledge the limitations of one’s views. Research has found that IH is correlated with less belief in misinformation, greater intentions to engage in evidence-based behaviors (e.g., receive vaccinations), and more actual engagement in evidence-based behaviors (e.g., take COVID-19 precautions). We sought to synthesize this growing area of research in a multi-level meta-analytic review (k = 27, S = 54, ES = 469, N = 33,814) to provide an accurate estimate of the relations between IH and misinformation receptivity and clarify potential sources of heterogeneity. We found that IH was related to less misinformation receptivity for beliefs (r = -.15, 95% CI [-.19, -.12]) and greater intentions to move away from misinformation (r = .13, 95% CI [.06, .19]) and behaviors that move people away from misinformation (r = .30, 95% CI [.24, .36]). Effect sizes were generally small, and moderator analyses revealed that effects were stronger for comprehensive (as opposed to narrow) measures of IH. These findings suggest that IH is one path for understanding resilience against misinformation, and we leverage our results to highlight pressing areas for future research focused on boundary conditions, risk factors, and causal implications.

Keywords: intellectual humility, misinformation, beliefs, meta-analysis

  1. Department of Psychology and Human Development, Vanderbilt University.

*Please address correspondence to shauna.m.bowes@gmail.com, Department of Psychology and Human Development, Vanderbilt University, Psychology and Human Development, 1210 21st Ave S, Nashville, TN 37212, United States

Bowes, S. M., & Fazio, L. K. (2024). Intellectual humility and misinformation receptivity: A meta-analytic review. advances.in/psychology, 2, e940422. https://doi.org/10.56296/aip00026

The current article passed two rounds of double-blind peer review. The anonymous review report can be found here.

Introduction

On January 6th, 2021, a mob stormed the Capitol Building in Washington, D.C., ostensibly to keep then-President Donald Trump in power. In looking back at this event, misinformation was fuel for a growing fire. From misinformation about the electoral process to conspiracy theories about Democratic politicians, false, misleading, and otherwise inaccurate information abounded (Election Integrity Project, 2021). The January 6th insurrection represents an event in which misinformation led to action—many of the people in attendance were both dedicated to the cause and willing to participate in a violent act to uphold their beliefs.

That said, not everyone who attended the January 6th events ended up storming the Capitol. One such example is an individual named Justin, whose last name was withheld in press interviews to protect his privacy (Zadrozny, 2022). Justin was an ardent believer in QAnon, a far-right group espousing conspiracy theories about a Democratic-led “deep state” engaged in child sex-trafficking, election fraud, and more (see Enders et al., 2022). Although Justin attended the rally in Washington, D.C., he eventually left as the mob reached the bottom steps of the Capitol. He later said, “I felt like I was watching people get radicalized. It got me… I was supposed to be a part of a movement, but did I just get duped?” (Zadrozny, 2022).

The example of the Capitol insurrection in 2021 raises a natural question: What makes some people more receptive to misinformation and what makes others more resilient against it? To address this question, it may be especially fruitful to focus on individual differences characteristics. By adopting an individual differences approach in the context of misinformation receptivity, it will be possible to build an understanding of who is susceptible to misinformation and design targeted interventions for those most at risk.

The example of Justin points to an intriguing individual differences construct that may correlate with less receptivity to misinformation: intellectual humility (IH). IH refers to a willingness to acknowledge the limitations of one’s viewpoints and remain open to new evidence (see Porter et al., 2022). Based on this definition, IH should be related to less misinformation receptivity, as it may facilitate the ability or motivation to pause, reflect, and consider the possibility that one’s views are necessarily limited and sometimes even wrong. One of the reasons that IH may protect against the harmful effects of misinformation is that it represents a “virtuous mean” between reflexive open-mindedness (or gullibility) and dogmatism (or close-mindedness; Church & Barrett, 2016). In other words, IH reflects a willingness to change one’s mind but only if there is evidence to do so. Belief in misinformation is positively related to reflexive open-mindedness (i.e., being overly accepting of information to the point that one is gullible rather than discerning; Pennycook & Rand, 2019) and to dogmatism (e.g., Bronstein et al., 2019). IH may, thus, be a “sweet spot” insofar as it allows people to be both open-minded and discerning.

IH is related to less dogmatism (e.g., Leary et al., 2017), and it is additionally related to other constructs commonly used in misinformation interventions. For instance, IH is related to stronger critical-thinking abilities, including cognitive flexibility and intelligence (Zmigrod et al., 2019). Similarly, IH is related to more discernment on measures of overclaiming (i.e., tendencies to claim familiarity with topics that do not exist) and bullshit receptivity (i.e., tendencies to see semantically meaningless sentences as profound; Bowes et al., 2024). Altogether, research suggests that IH should be related to less misinformation receptivity.

The literature on IH and misinformation receptivity is rapidly growing, and the time is now ripe to synthesize results from the available literature and gain insights on the nature and strength of the relations between IH and misinformation receptivity. A meta-analysis is a powerful and effective way to both characterize the relations between IH and misinformation receptivity and illuminate sources of heterogeneity that warrant additional research scrutiny. That is, a meta-analysis provides a clearer path for funneling research resources toward innovative questions and for future studies to conduct riskier and more targeted research on the relations between IH and misinformation receptivity.

Beliefs, Intentions, and Behaviors

Misinformation is an umbrella term for “any information that is demonstrably false or otherwise misleading” (van der Linden et al., 2023, p. 7). As such, misinformation can take many forms, from false trivia facts (e.g., Newman et al., 2022) to conspiracy theories (i.e., claims that a small, powerful group is acting in secret to harm the common good and reap personal benefits; Uscinski & Enders, 2023). Misinformation receptivity, thus, refers to tendencies to believe in and act on misinformation. The lion’s share of research on IH and misinformation receptivity focuses on conspiracy theories, pseudoscientific claims (i.e., claims that are not supported by scientific evidence or are outside the domain of science but are presented as truths; Fasce & Picó, 2019), and fake news (i.e., information that takes on the appearance of real news but is fabricated or false; Pennycook & Rand, 2021).

Consistent with the relations between IH and critical thinking, IH is related to less conspiracy belief (e.g., Huynh & Bayles, 2022), less belief in pseudoscience (e.g., Preston & Khan, 2024), and less belief in fake news (e.g., Bowes & Tasimi, 2022). There is also additional support for the positive relations between IH and discernment, as IH is related to perceiving real news as more accurate than fake news (Bowes & Tasimi, 2022). Correlations tend to be small to medium per Gignac and Szodorai’s (2016) effect size guidelines for individual differences research (which we will use to interpret all effect sizes in the present investigation). These guidelines (a small effect is r = .10, a medium effect is r = .20, and a large effect is r = .30) were meta-analytically derived from individual differences research and are thus the most appropriate benchmarks for the present meta-analysis, which focuses squarely on individual differences.

These negative relations between IH and beliefs, however, do not necessarily give rise to behaviors. That is, belief in misinformation is not the same as acting on misinformation (see Ecker et al., 2022; van der Linden et al., 2023). For instance, the correlates of belief in fake news differ from the correlates of intentions to share fake news, as there are different motivational and attentional pulls for each (Pennycook & Rand, 2021). Although beliefs often do correspond to intentions and behaviors, relations tend to be small and effect sizes can vary considerably across studies depending on the context and how intentions and behaviors are measured (see van der Linden et al., 2023). Thus, the relations between IH and intentions and behaviors may be smaller and/or more heterogeneous compared with the relations between IH and beliefs.

A small body of research has examined the relations between IH and behavioral intentions, finding that, overall, IH is related to intentions to move away from misinformation. For instance, IH is positively related to intentions to investigate misinformation (i.e., spend time fact-checking false claims; Koetke et al., 2023), negatively related to intentions to share fake news (e.g., Bowes & Tasimi, 2022), and positively related to intentions to engage in evidence-based public health practices (e.g., receive a vaccine; Huynh & Senger, 2021).

Not only is IH related to these intentions, but it is also related to actual behaviors. Dovetailing with the findings on intentions to investigate misinformation, IH is related to actual engagement in investigative behaviors for fake news (Koetke et al., 2022) and engagement in counter-information searches (i.e., electing to read more about evidence running against one’s views; Gollwitzer et al., 2024). Also consistent with the relations between IH and intentions to engage in evidence-based public health practices, IH was related to actual engagement in COVID-19 precautions (e.g., Jongman-Sereno et al., 2023). Correlations for intentions and behaviors tended to be small to moderate. These correlations are generally consistent with the effect sizes for beliefs, raising the possibility that IH is equally related to less misinformation receptivity for beliefs, intentions, and behaviors.

What aspects of intellectual humility are related to misinformation receptivity?

Results from individual studies are promising—they largely indicate that IH is weakly to moderately related to less belief in misinformation, more intentions to engage in evidence-based behaviors and investigate misinformation, and actual engagement in these behaviors. That said, not all manifestations of IH are equally related to misinformation receptivity. Although there is a common metacognitive core in definitions of IH—a willingness to acknowledge the limits of one’s views (Porter et al., 2022)—IH measures differ in their coverage of relational and emotional features. Some measures assess only metacognitive features, such as an openness to revising one’s beliefs and a willingness to question one’s opinions (Leary et al., 2017), whereas other measures assess relational and emotional features in addition to metacognitive features. These comprehensive measures may include items pertaining to whether people publicly acknowledge the limits of their views, treat others with respect, feel threatened by challenges to their views, express overconfidence, and enjoy learning new things from others (Alfano et al., 2017; Haggard et al., 2018; Krumrei-Mancuso & Rouse, 2016; Porter & Schumann, 2018). Many of the measures that assess a blend of IH features are multidimensional (cf. Porter & Schumann, 2018), meaning there is a total IH score in addition to dimension scores.

The relations between IH and misinformation receptivity may vary based on how IH is measured. Previous research finds stronger relations between IH and misinformation receptivity with broad measures of IH (e.g., Bowes & Tasimi, 2022). These results suggest that the strength of the relations between IH and misinformation receptivity varies across comprehensive versus narrow operationalizations of IH.

What forms of misinformation receptivity are related to intellectual humility?

Just as relations may vary across IH measures, they may also vary across forms of misinformation. Different measures of misinformation tend to be strongly and positively interrelated (e.g., Anthony & Moulding, 2019; van der Linden et al., 2021), but they can have different patterns of relations with relevant external criteria. For instance, one paper found that IH was consistently related to less conspiracy belief and belief in fake news but was inconsistently related to less belief in pseudoscience (Bowes & Tasimi, 2022). Moreover, within a misinformation domain, such as pseudoscience belief, correlations vary across measures. For example, IH is strongly related to less belief in anti-vaccination claims (e.g., Huynh & Senger, 2021; Senger & Huynh, 2021) and less belief in the paranormal (Bowes & Tasimi, 2022), but it is weakly or not significantly related to other measures of pseudoscience belief, such as belief in complementary and alternative medicine or superstitious beliefs (Bowes & Tasimi, 2022). These findings suggest that the strength of the relations between IH and misinformation receptivity varies across types of misinformation.

Present Investigation

We sought to synthesize this emerging body of research on the relations between IH and misinformation receptivity in a multi-level meta-analytic review. This area of research is rapidly growing and is timely and of practical importance, so we aimed to characterize the relations between IH and misinformation receptivity, the extent to which they generalize across measures of IH and misinformation, and the magnitude of the heterogeneity in these relations. Our motivation for this meta-analysis was twofold.

First, a meta-analysis will clarify the robustness of the relations between IH and misinformation receptivity, especially given that many of the previously reported effect sizes are small. By aggregating results across reports, a meta-analysis promises to provide a more accurate and informative snapshot of the magnitude of the relations between IH and various measures of misinformation receptivity. Using meta-analytic tools will also allow us to account for variation in sample size across samples, placing heavier weight on effect sizes from larger samples. Moreover, this meta-analysis will uncover whether relations are consistent across measures of beliefs, intentions, and behaviors, as we use the same coding scheme and analytic approach for each broad domain.

Second, a meta-analysis is not only a powerful way to clarify the magnitude of relations but also is a powerful way to quantify heterogeneity in said relations. Through quantifying heterogeneity, it is possible to identify whether there are boundary conditions in the relations between IH and misinformation receptivity. For instance, if IH is only related to less conspiracy belief but not to less pseudoscience belief, sweeping claims about IH being related to less misinformation receptivity will be largely inaccurate or at the very least overstated. Such boundary conditions are also informative about the underlying psychological mechanisms connecting IH with misinformation receptivity. Understanding which measures are and are not correlated can help constrain psychological theory. As such, we aimed to systematically identify sources of heterogeneity in these relations, especially when considering that correlations seem to vary across IH and misinformation measures.

To do so, we aggregated results from the full body of currently available literature (including published and unpublished data), examined the potential for publication bias, and coded for moderators, including how both IH and misinformation receptivity were measured. Moderation analyses will shed light on whether third variables impose boundaries on the strength of the relations between IH and misinformation receptivity. We also coded for whether the misinformation measure assessed COVID-19 misinformation, as recent meta-analytic evidence indicates that the relations between motivational and personological variables and conspiracy belief varied based on whether COVID-19 conspiracy theories were assessed or not (Bowes et al., 2023). Belief in COVID-19 conspiracy theories may be separable from belief in other conspiracy theories in terms of their antecedents (e.g., populism; Stecula & Pickup, 2021), content (e.g., including a mix of conspiracy stereotypes and pseudoscientific claims; see Van Mulukom et al., 2022), and relevance to the current sociopolitical context (e.g., recency effects).

Method

Datasets, analytic code, output files, and the screened papers are available on the Open Science Framework (OSF) at the following link: https://osf.io/bmn5p/.

Literature Search and Inclusion Criteria

The literature search was conducted in March 2024 on Google Scholar and PsycInfo. We used the following search terms on both databases: “intellectual humility” AND (1) “misinformation”, (2) (“conspiracy belief” OR “conspiratorial ideation” OR “conspiracist belief” OR “conspiracist ideation”), (3) “fake news”, (4) “pseudoscience”, (5) “COVID”, and (6) (“vaccine” OR “vaccination”). We also searched through the reference sections of two recent relevant papers, one of which focused on IH and misinformation comprehensively (Bowes & Tasimi, 2022) and one of which was a scoping narrative review on IH (Porter et al., 2022). Given their focus, these papers provided a broad starting point to engage in a snowball search for relevant papers. In July 2024, we broadened our conspiracy belief search to also include “conspiracy mentality” and “conspiracy theory” in Google Scholar and PsycInfo. We also replicated all searches in PubMed at this time.

Papers were screened based on their abstract and method sections and then were subjected to a full review. Papers were eligible if they (1) included a self-report measure of IH, (2) included a measure of misinformation receptivity (belief, intention, or behavior), and (3) reported an effect size (either a zero-order correlation or an effect that could be converted to a zero-order correlation) for the relationship between (1) and (2). Published and unpublished papers were eligible for inclusion. Papers not written in English were excluded. We implemented a stopping rule in our search across all databases such that if 30 consecutive papers were either duplicates or ineligible for inclusion, we did not continue screening subsequent papers.

Data Coding

The search yielded 5,075 results. Upon removing duplicates and using our stopping rule, there were 312 results that were screened (Figure 1). After assessing records for eligibility, there were 27 included records (54 samples, 469 effect sizes). Here and throughout, we use k to denote records/papers (i.e., individual published/unpublished papers), S to denote samples (i.e., distinct groups of participants included in the research report), and ES to denote effect sizes (i.e., correlation between IH and misinformation receptivity).

Figure 1

Flowchart of Search and Screening Process.

Data coding was conducted by the first author (who has expertise in both IH and misinformation and has previously conducted a multi-level meta-analysis). As a reliability check, a research assistant was trained on the coding scheme using 10% of the studies (3 studies) and then double-coded an additional 15% of the studies (4 studies). The reliability between the two coders, as indexed by Fleiss’ kappa coefficient, was near perfect for all coded variables (e.g., study characteristics, effect sizes, misinformation domains; κs ranged from .97 to 1.00), indicating strong consistency across coders and a reliable coding scheme. Pearson’s zero-order r coefficients were extracted from each of the papers and represent the main effects of interest in the present investigation. If papers reported a longitudinal design (Coelho et al., 2022; Gollwitzer et al., 2024), only correlations within timepoints were included. Finally, if effect sizes were not included in a report but were relevant to the meta-analysis (k = 4), first authors were contacted a maximum of three times by email within a three-week period to request the data. Of the four authors contacted, three (75%) responded with the data.
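
For illustration, the double-coding reliability check described above can be sketched in R. This is not the authors’ code; the irr package is assumed to be available, and the columns coder1 and coder2 are hypothetical placeholders for the two coders’ categorical codes.

```r
# Minimal sketch of the inter-coder reliability check (hypothetical data)
library(irr)

codes <- data.frame(
  coder1 = c("conspiracy", "pseudoscience", "fake_news", "conspiracy"),
  coder2 = c("conspiracy", "pseudoscience", "fake_news", "conspiracy")
)

# kappam.fleiss() expects a subjects-by-raters matrix of ratings
kappam.fleiss(as.matrix(codes))
```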

Main Effects

We conducted separate meta-analyses for measures of beliefs, intentions, and behaviors. For belief in misinformation, our main analyses focus on the meta-analytic relation with IH across IH measures and different domains of misinformation. Specifically, we coded for belief in fake news, conspiracy belief, and pseudoscience belief. We also examined the meta-analytic relations between IH and belief in misinformation collapsed across beliefs. In total, there were four meta-analytic models for belief in misinformation.

For intentions, we separated intentions to (1) engage in investigative behaviors for misinformation, (2) engage in evidence-based COVID-19 precautions, (3) receive a vaccine, and (4) share fake news. We also examined the meta-analytic relations between IH and intentions when collapsing across intentions. For intentions, we examined a total of five meta-analytic models.

For behaviors, we coded engagement in (1) investigative behaviors for misinformation, (2) evidence-based COVID-19 precautions, (3) counter-information searches, and (4) general evidence-based health practices. However, engagement in evidence-based COVID-19 precautions was the only behavior assessed in two or more papers, so that was the only behavior estimated in its own meta-analytic model. We examined the relations between IH and the other coded behaviors in subgroup analyses when collapsing across behaviors. For behaviors, we examined a total of two meta-analytic models. In the collapsed models (collapsed beliefs, intentions, and behaviors), we examined whether effects differed across the misinformation domains. For instance, for beliefs, we compared effect sizes across conspiracy belief, pseudoscience belief, and belief in fake news.

Moderators

The number of effect sizes for each moderator is presented in Table 1[1].

Intellectual Humility Measure. As described earlier, IH measures differ in their coverage of metacognitive, relational, and emotional features (see Porter et al., 2022). As such, we coded for the total scores from each individual IH measure. We coded for the following measures: (1) Comprehensive Intellectual Humility Scale (CIHS; Krumrei-Mancuso & Rouse, 2016), (2) General Intellectual Humility Scale (GIHS; Leary et al., 2017), (3) Alfano Intellectual Humility Scale (AIHS; Alfano et al., 2017), (4) Limitations-Owning Intellectual Humility Scale (L-OIHS; Haggard et al., 2018), (5) Porter Intellectual Humility Scale (PIHS; Porter & Schumann, 2018), and (6) Situated Wise Reasoning Scale (SWRS; Brienza et al., 2018).

Just as total scores differ across measures, the IH dimensions within measures also differ in their coverage of IH features. Thus, we coded for the CIHS dimensions of (1) Lack of Intellectual Overconfidence, (2) Independence of Intellect and Ego, (3) Openness to Revising One’s Viewpoint, and (4) Respect for Others’ Viewpoints. The former two dimensions reflect a blend of IH features. Openness to Revising One’s Viewpoint exclusively assesses metacognitive features whereas Respect for Others’ Viewpoints exclusively assesses relational features. We additionally coded for the Openness dimension of the AIHS, as this was the only AIHS dimension reported. We ran two moderation models for the coded IH measures. First, we compared IH total scores to each other. Second, we compared IH dimensions to each other.

Misinformation Measures. We also coded the misinformation measures for beliefs and certain intentions. We did not code for the specific measures for most intentions or behaviors, given that many were assessed with single-item scales or in an inconsistent fashion across papers. For conspiracy belief, we coded the following: (1) general conspiracy belief (i.e., belief in abstract, decontextualized conspiracy theories; Brotherton et al. 2013), (2) specific conspiracy belief (i.e., belief in concrete, contextualized conspiracy theories; Swami et al., 2011), (3) measures that assessed both general and specific conspiracy belief (i.e., a mixture of conspiracy theories), (4) political conspiracy belief (Federico et al., 2018), and (5) vaccine conspiracy belief (Shapiro et al., 2016). For pseudoscience belief, we coded the following: (1) anti-vaccination beliefs (Martin & Petrie, 2017), (2) belief in complementary and alternative medicine (Lie & Boker, 2004), (3) less support for/belief in science (Farias et al., 2013), (4) paranormal belief (Tobacyk, 2004), (5) general belief in pseudoscience/anti-scientific claims (Fasce & Picó, 2019), (6) less trust in science (Plohl & Musil, 2023), and (7) superstitious beliefs (Wiseman & Watt, 2004). And, finally, for fake news, both belief in fake news and intentions to share fake news, we coded for the following: (1) Republican-consistent fake news, (2) Democratic-consistent fake news, (3) political fake news (i.e., contains both Republican-consistent and Democratic-consistent fake news), (4) neutral fake news (i.e., non-political fake news), (5) COVID-19 fake news, and (6) false information not presented in the form of a news headline (Newman et al., 2022; e.g., “All apples in the grocery stores are clones of each other, flavored and colored differently to increase sales”).

COVID-19 Misinformation. We assessed whether the relationships between IH and misinformation receptivity varied based on whether measures assessed COVID-19 misinformation/behaviors/intentions (“yes”) or not (“no”). We also coded for the type of COVID-19 misinformation/behaviors/intentions: (1) social distancing, (2) washing hands, (3) wearing a mask, (4) avoiding touching one’s face, (5) avoiding risky exposures, (6) staying at home/quarantining, and (7) support for COVID-19 public health policies.

Data Analytic Plan

Main Effects

We used Fisher’s r-to-z transformation to normalize the sampling distribution of the correlation coefficients (Silver & Dunlap, 1987), and we used standard inverse weighting for the correlations (Marín-Martínez & Sánchez-Meca, 2010). We used a three-level random-effects meta-analytic model with restricted maximum likelihood estimation (Assink & Wibbelink, 2016), allowing us to model the sampling variance for individual effect sizes (Level 1), variation within-samples and across outcomes (Level 2), and variation between-samples (Level 3). This design can handle and account for correlated sampling errors due to multiple effect sizes being included from the same sample and paper (Van den Noortgate et al., 2013). We also include 95% confidence intervals (i.e., the range wherein 95% of the average effects are expected to fall; Chiolero et al., 2012). All meta-analytic models were calculated using the metafor package (Viechtbauer, 2010) in Posit Cloud (R version 4.3.3; R Core Team, 2024)[2].
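
As a rough illustration of this pipeline, the following metafor sketch mirrors the steps described above (r-to-z transformation, inverse-variance weighting, and a three-level REML model). It is not the authors’ analysis code, and the column names (ri, ni, sample_id, es_id) are hypothetical placeholders.

```r
library(metafor)

# Fisher's r-to-z transformation; escalc() also returns the sampling
# variances (vi = 1 / (n - 3)) used for inverse-variance weighting
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)

# Three-level random-effects model (REML): sampling variance (Level 1),
# effect sizes within samples (Level 2), between-sample variation (Level 3)
res <- rma.mv(yi, vi,
              random = ~ 1 | sample_id / es_id,
              method = "REML",
              data = dat)
summary(res)

# Back-transform the pooled estimate and its 95% CI to the r metric
predict(res, transf = transf.ztor)
```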

Heterogeneity

We used several metrics to assess heterogeneity. We examined Cochran’s Q statistic—a significant Q statistic indicates the presence of between-study heterogeneity. The Q statistic, however, can be significant even in the absence of meaningful heterogeneity if the k is large, and it can have poor power to detect meaningful heterogeneity if the k is small (Huedo-Medina et al., 2006). Thus, we also examined the H2 and I2 statistics, which are not theoretically impacted by the k and allow for comparisons across meta-analytic models. The H2 statistic is the ratio of Q to its expected value when heterogeneity is absent, so larger H2 values indicate more heterogeneity (H2 > 1.5 suggests a heterogeneous population of studies; Higgins & Thompson, 2002). The I2 statistic (which is a transformation of H) represents the proportion of total variation in the meta-analytic effect that is a result of between-study heterogeneity in the “true” effect (Higgins & Thompson, 2002). We examined the I2 statistic at Level 2 (I2(2)) and Level 3 (I2(3)) of the meta-analytic models, allowing us to ascertain variation within- and between-samples, respectively, relative to the total variation.
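
A common way to obtain level-specific I2 values from a metafor three-level model is sketched below (following the approach described by Cheung, 2014, and Assink & Wibbelink, 2016). The helper function is our own illustrative addition, not the authors’ code; res and dat refer to the hypothetical model and data from the sketch above.

```r
# Split I2 into within-sample (Level 2) and between-sample (Level 3) parts
i2_multilevel <- function(res, vi) {
  k  <- length(vi)
  wi <- 1 / vi
  # "typical" sampling variance (Higgins & Thompson, 2002)
  samp_var <- (k - 1) * sum(wi) / (sum(wi)^2 - sum(wi^2))
  total    <- sum(res$sigma2) + samp_var
  c(I2_level2 = 100 * res$sigma2[2] / total,  # within-sample heterogeneity
    I2_level3 = 100 * res$sigma2[1] / total)  # between-sample heterogeneity
}

i2_multilevel(res, dat$vi)
```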

We additionally examined the variance components at Levels 2 and 3 (τ2) for each meta-analytic model to compute τ, which reflects the standard deviation of the true effect sizes. Finally, we include the 90% prediction intervals (i.e., the range wherein 90% of individual estimates of the true effect are expected to fall in new studies using the same study designs and methodological approaches; IntHout et al., 2016). Note, a prediction interval is almost always wider than a confidence interval, as it pertains to a single estimate rather than a mean estimate (e.g., Forthofer et al., 2007). If the 90% prediction interval contains zero, then there will be some instances or settings in which the relations between IH and misinformation receptivity are not significant.
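
Continuing the same assumed sketch, τ and the 90% prediction interval can be obtained from the fitted three-level model as follows.

```r
# Total SD of the true effects, combining the Level 2 and Level 3 variances
tau <- sqrt(sum(res$sigma2))

# 90% prediction interval, back-transformed to the r metric
# (pi.lb and pi.ub in the output give the interval bounds)
predict(res, level = 90, transf = transf.ztor)
```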

Moderation

For categorical moderators, we fit a single three-level random-effects model with the intercept removed from the model; this approach allowed us to simultaneously estimate the effect sizes for each level of the moderator rather than in reference to the intercept (Viechtbauer, 2010). If the omnibus F statistics were significant for the tested moderation models, we then conducted follow-up t-tests to compare each level of the moderator; this approach allowed us to ascertain whether the effect sizes significantly differed across levels of the moderator. Subgroups were only included in these moderation analyses if there were at least three effect sizes present for that subgroup.
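
A hedged sketch of such a moderator model is shown below, using a hypothetical factor ih_measure to code the IH scale behind each effect size; removing the intercept yields one pooled estimate per level, and anova() can then test contrasts between levels.

```r
# Categorical moderator model without an intercept (one estimate per level)
mod_res <- rma.mv(yi, vi,
                  mods   = ~ ih_measure - 1,
                  random = ~ 1 | sample_id / es_id,
                  method = "REML",
                  test   = "t",      # t-tests for coefficients, F for omnibus
                  data   = dat)
mod_res

# Follow-up contrast between two levels (e.g., first vs. second coded measure);
# the contrast vector length must match the number of levels (six assumed here)
anova(mod_res, X = c(1, -1, 0, 0, 0, 0))
```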

Publication Bias

We assessed potential publication bias (i.e., characteristics of published reports that may limit the representativeness of the reports or bias the meta-analytic estimates; see McShane et al., 2016) in two ways. First, we created a publication status (published or unpublished) categorical moderator variable. Second, we examined the standard error for each effect size as a predictor in each meta-analytic model, a technique which closely mirrors PET-PEESE analyses in two-level meta-analyses (Stanley & Doucouliagos, 2014). The PET test refers to a meta-regression in which weighted standard errors (according to their precision estimates) predict the effect sizes (see Carter et al., 2019). If the PET test is statistically significant, then it is recommended to conduct the follow-up PEESE test. The PEESE test refers to a meta-regression in which weighted squared standard errors (according to their precision estimates) predict the effect sizes (see Carter et al., 2019). The intercept of the PEESE meta-regression is the estimated total effect controlling for potential publication bias (Stanley & Doucouliagos, 2014). We adopted both of the aforementioned publication bias assessment methods because PET-PEESE meta-regressions can produce unstable estimates if there is high between-study heterogeneity and/or if the ks are low (Carter et al., 2019).
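
The two PET-PEESE-style meta-regressions can be sketched as follows, again under the assumptions of the earlier sketches: the standard error of each effect size (PET) or its square (PEESE) enters the three-level model as a continuous moderator.

```r
dat$sei <- sqrt(dat$vi)  # standard error of each effect size

# PET: effect sizes regressed on their standard errors
pet <- rma.mv(yi, vi, mods = ~ sei,
              random = ~ 1 | sample_id / es_id,
              method = "REML", data = dat)

# PEESE: effect sizes regressed on their squared standard errors; if the PET
# slope is significant, the PEESE intercept is taken as the bias-adjusted
# estimate of the pooled effect
peese <- rma.mv(yi, vi, mods = ~ I(sei^2),
                random = ~ 1 | sample_id / es_id,
                method = "REML", data = dat)
coef(summary(peese))["intrcpt", ]
```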

Results

Paper Characteristics

There were 27 papers (k), 54 samples (S), 469 effect sizes (ES), and 33,814 participants (N) included in the meta-analysis. Papers were published between 2020 and 2024 (M = 2022.02; SD = 1.01). Most papers were published (k = 22), and the samples predominantly comprised female (M = 58.11%, SD = 17.57) and White (M = 72.23%, SD = 7.83) participants. A plurality of participants was politically Democratic (M = 43.93%, SD = 5.08) and college-educated (M = 49.81%, SD = 19.86). The average age of participants across samples was 36.87 (SD = 6.92). Most papers assessed pseudoscience belief (k = 14), followed by conspiracy belief (k = 12), behavioral intentions (k = 8), behaviors (k = 7), and belief in fake news (k = 6). A minority of papers assessed COVID-19 beliefs/intentions/behaviors (k = 7).

Most samples included online community participants from platforms such as MTurk or Prolific (S = 32) and participants from the United States (S = 29). The CIHS, GIHS, and dimension of Openness from the AIHS were the most commonly used IH measures (Ss ranged from 14 [CIHS Total] to 23 [GIHS]). The AIHS total score, L-OIHS, PIHS, and SWRS were only used in a small number of samples (Ss were 1 [AIHS, SWRS] to 2 [L-OIHS and PIHS]).

Main Effects

The main effects, 95% confidence intervals, and heterogeneity statistics are presented in Table 2.

Beliefs

The meta-analytic relations between each type of misinformation belief and IH are presented in Figure 2, Panel A. IH was weakly and significantly related to less conspiracy belief (r = -.11) and fake news belief (r = -.12). IH was also moderately and significantly related to less pseudoscience belief (r = -.20). When collapsing across beliefs, IH was weakly and significantly related to less belief in misinformation (r = -.15). The relations between IH and pseudoscience belief (b = -.18, p < .001) were significantly larger than for conspiracy belief (b = -.13, p < .001; t(340) = 1.98, p = .048) and belief in fake news (b = -.12, p < .001; t(340) = 2.07, p = .039).

Effects were highly heterogeneous for all misinformation beliefs (H2 ranged from 10.03 [conspiracy belief] to 18.83 [pseudoscience belief]). Between-sample heterogeneity was smaller than within-sample heterogeneity (I2(3) ranged from 4.01 [fake news] to 26.33 [pseudoscience]) except for conspiracy belief (I2(3) = 51.67). The standard deviation in the true effect sizes tended to be similar in magnitude to the estimated effect sizes or even larger than the effect sizes (τ ranged from .10 [fake news] to .17 [pseudoscience]). Moreover, the 90% prediction intervals also suggested that the results were heterogeneous, as the intervals were wide and included zero for all belief measures.

Figure 2

Meta-Analytic Relations Between IH and (a) Belief in Misinformation, (b) Behavioral Intentions, and (c) Behavior and Caterpillar Plots.

Behavioral Intentions

The meta-analytic relations between behavioral intentions and IH are presented in Figure 2, Panel B. IH was strongly, significantly, and positively related to intentions to investigate misinformation (r = .29) and weakly, significantly, and positively related to intentions to be vaccinated (r = .11) and less willingness to share fake news (r = .06). When collapsing across intentions, the effect was weak, positive, and significant (r = .13). The relationship between IH and intentions to engage in investigative behaviors (b = .31, p < .001) was significantly larger than for intentions to be vaccinated (b = .11, p = .071; t(62) = 2.73, p = .009) and less willingness to share fake news (b = .05, p = .115; t(62) = 4.91, p < .001).

Effects were less heterogeneous for intentions compared with beliefs, but effects were still heterogeneous for all intentions (H2 values were 2.45 [misinformation investigation] and 3.82 [share fake news]), with the exception of intentions to receive vaccines (H2 = .79). Between-sample heterogeneity was larger than within-sample heterogeneity for intentions to receive vaccines (I2(3) = 37.76) but not for intentions to investigate misinformation (I2(3) = 39.20) and intentions to share fake news (I2(3) = 41.94). The standard deviation in the true effect sizes tended to be smaller in magnitude than the estimated effect sizes (τ values were .07 [vaccination] and .11 [misinformation investigation]), with the exception of intentions to share fake news (τ = .08). Consistent with these findings, the 90% prediction intervals tended to be wide and include zero. The only exception was for intentions to investigate misinformation, with the prediction interval not including zero.

Behaviors

The meta-analytic relations between behaviors and IH are presented in Figure 2, Panel C. IH was strongly, significantly, and positively related to engaging in COVID-19 precautions (r = .34). Moreover, the relationship between IH and engagement in social distancing (b = .39, p < .001) was significantly larger than for washing one’s hands (b = .30, p < .001), but all effects were positive, large, and significant (bs ranged from .29 to .39, ps < .001). When collapsing across behaviors, the effect was strong, positive, and significant (r = .30). The relationship between IH and engagement in COVID-19 precautions (b = .34, p < .001) was significantly stronger than for general health behaviors (b = .05, p = .672; t(55) = 2.38, p = .021) and for counter-information searches (b = .14, p = .101; t(55) = 2.15, p = .036). The effect for COVID-19 precautions was highly heterogeneous (H2 = 17.21). Between-sample heterogeneity was larger than within-sample heterogeneity (I2(3) = 51.41). The standard deviation in the true effect size was smaller in magnitude than the estimated effect size (τ = .14). The 90% prediction interval was wide but did not include zero.

Moderation Results

We only describe the moderation results with a significant omnibus F-statistic and at least one significant follow-up t-test. Given the number of contrasts, we focus on the broad pattern of results in the main text of the manuscript. We do not present moderation results for the models collapsing across beliefs, intentions, and behaviors. The full results and a more detailed description of the moderation results are available on the OSF repository.

IH Total Scores

For IH total scores, the following moderation models were significant: (1) conspiracy belief (F(2,39) = 28.92, p < .001), (2) pseudoscience belief (F(4,43) = 5.22, p = .002), (3) belief in fake news (F(2,18) = 27.61, p < .001), and (4) willingness to share fake news (F(2,15) = 25.12, p < .001). The relations are depicted in Figure 3. Results generally indicated that the Comprehensive Intellectual Humility Scale (CIHS) was the strongest correlate of less belief in misinformation (bs ranged from -.20 to -.18, ps < .001) and less willingness to share fake news (b = .14, p < .001) compared with other IH total scores. Thus, comprehensive IH measures tended to be stronger correlates of less misinformation receptivity than narrow IH measures. The one exception was for pseudoscience belief, as the CIHS and General Intellectual Humility Scale (GIHS, which measures only metacognitive IH features) were comparably strong correlates of less belief in pseudoscience (bs were -.18 and -.11, respectively, ps < .001).

Figure 3

Meta-Analytic Subgroup Relations Between IH Total Scores and Misinformation Receptivity.

IH Dimensions

Turning to IH dimensions, the following moderation models were significant: (1) conspiracy belief (F(5,60) = 19.38, p < .001), (2) pseudoscience belief (F(4,104) = 14.78, p < .001), (3) belief in fake news (F(4,32) = 19.64, p < .001), and (4) willingness to share fake news (F(4,28) = 9.27, p < .001). The relations are depicted in Figure 4. CIHS Lack of Intellectual Overconfidence (bs ranged from -.21 to -.28, ps < .001) and Independence of Intellect and Ego (bs ranged from -.17 to -.19, ps < .001) tended to be the strongest correlates of less misinformation receptivity compared with other IH dimensions. CIHS Lack of Intellectual Overconfidence was the strongest correlate of less conspiracy belief and pseudoscience belief even compared with Independence of Intellect and Ego. CIHS Respect for Others’ Viewpoints tended to be the weakest correlate of less misinformation receptivity, and relations were not consistently significant (bs ranged from -.04 to -.08). These results align with the IH total score moderation results—comprehensive dimensions of IH, chiefly CIHS Lack of Intellectual Overconfidence and Independence of Intellect and Ego, are stronger correlates of less misinformation receptivity than narrow dimensions of IH, chiefly CIHS Openness to Revising One’s Viewpoint (metacognitive) and Respect for Others’ Viewpoints (relational).

Figure 4

Meta-Analytic Subgroup Relations Between IH Dimensions and Misinformation Receptivity.

Misinformation Measures

We examined whether the relations between IH and misinformation receptivity varied across misinformation measures within misinformation domains (e.g., across pseudoscience belief measures). For misinformation measures, the following moderation models were significant: (1) conspiracy belief (F(1,107) = 5.07, p < .001), (2) pseudoscience belief (F(8,153) = 14.24, p < .001), (3) belief in fake news (F(5,51) = 12.37, p < .001), and (4) willingness to share fake news (F(5,44) = 6.56, p < .001). The relations are depicted in Figure 5. First, regarding conspiracy belief, the relations between IH and less conspiracy belief were significantly stronger for measures of specific conspiracy belief (b = -.13, p < .001) than for measures of general conspiracy belief (b = -.06, p = .131). For pseudoscience belief, the relations for IH were not significant for belief in complementary and alternative medicine, less belief in science, overall pseudoscience belief, and superstitious beliefs (bs ranged from -.14 to .20, ps > .05), but they were negative and significant for less mistrust of science, paranormal belief, anti-vaccination beliefs, and less support for COVID-19 precautions (bs ranged from -.19 to -.28, ps < .05). All pairwise contrasts involving less belief in science were significant. The relationship between IH and less support for COVID-19 precautions (b = -.29, p = .012) was stronger than for less belief in complementary and alternative medicine (b = -.10, p = .104).

For fake news belief, all of the relations with IH were negative and significant across measures (bs ranged from -.10 to -.14, ps < .05) except for belief in COVID-19 fake news (b = .01, p = .856). The relationship between IH and belief in COVID-19 fake news was significantly weaker than for all other measures of fake news belief with the exception of Democratic-consistent fake news. The relationship between IH and less willingness to share fake news was also significantly weaker for COVID-19 fake news (b = -.05, p = .280) compared to other fake news measures of sharing intentions. All other relations between IH and less willingness to share fake news were weak, positive, and significant (bs ranged from .09 to .14, ps < .05).

Figure 5

Meta-Analytic Subgroup Relations Between IH and Misinformation Receptivity by Misinformation Measure.

Note. Belief Comp. Alt. Med. = Belief in complementary and alternative medicine; Less COVID Prec. = Less COVID-19 precautions; Pseudo. = Pseudoscience; Mis. Measure = Misinformation measure. The 95% confidence intervals are depicted. The dashed vertical red line indicates an effect size of zero.

COVID-19 Misinformation

The COVID-19 variable significantly moderated the following: (1) pseudoscience belief (F(2,171) = 45.00, p < .001), (2) belief in fake news (F(2,56) = 33.12, p < .001), and (3) willingness to share fake news (F(2,48) = 20.48, p < .001). Relations are depicted in Figure 6. The relationship between IH and less pseudoscience belief was significantly stronger for COVID-19 pseudoscience measures (b = -.28, p < .001) than for other pseudoscience measures (b = -.16, p < .001). In contrast with pseudoscience belief, the relationship between IH and less belief in fake news was significantly stronger for fake news that was not about COVID-19 (b = -.13, p < .001) than about COVID-19 (b = .01, p = .856). Similar to belief in fake news, the relationship between IH and less willingness to share fake news was significantly stronger for fake news that was not about COVID-19 (b = .10, p < .001) than about COVID-19 (b = -.05, p = .270). These moderation results for fake news are consistent with the moderation results for misinformation measures, as both sets of analyses reveal that effects for fake news are weakest for COVID-19 fake news.

Figure 6

Meta-Analytic Subgroup Relations Between IH and Misinformation Beliefs by COVID Measure.

Publication Bias

The PET test for the conspiracy belief model was significant (b = 4.63, p = .007). The follow-up PEESE test was moderate, significant, and negative (intercept = -.21, p < .001). In addition, the PET test for the overall behavioral model was significant (b = -4.92, p = .034). The follow-up PEESE test was large, positive, and significant (intercept = .50, p < .001). Based on the results, these specific areas of investigation may suffer from publication bias, but results were still significant even when accounting for potential publication bias. All other PET-PEESE results were not statistically significant, suggesting little influence of publication bias in the results. Further, publication status did not significantly moderate the relations between IH and misinformation receptivity for which there were at least three effect sizes for unpublished papers.

Discussion

We conducted a multi-level meta-analytic review on the growing body of research examining the relations between intellectual humility (IH) and misinformation receptivity. Overall, meta-analytic relations were in line with results from individual studies, insofar as IH was a small to moderate correlate of less misinformation receptivity. Although beliefs, intentions, and behaviors can diverge in their relations with each other and relevant external criteria (see van der Linden et al., 2023), IH correlated with less belief in misinformation, greater intentions to move away from misinformation, and greater engagement in evidence-based behaviors. Effect sizes tended to be small, which is not surprising when considering that misinformation receptivity is a complex phenomenon. Small effect sizes are likely more precise, nuanced, and realistic estimates of the “true” effect than large effect sizes when examining a single predictor and a complex outcome variable (Funder & Ozer, 2019; Matz et al., 2017). Moreover, these small effects were highly heterogeneous, indicating that the strength of the relations varied across measures of IH and misinformation. Below, we summarize the overall pattern of results and highlight future directions that promise to advance research on IH and misinformation receptivity.

Intellectual Humility and Misinformation Beliefs

IH tended to be weakly related to less belief in misinformation. This overall effect, however, obscures important differences across IH conceptualizations and misinformation types. First, turning to IH measures, relations were consistently strongest for the Comprehensive Intellectual Humility Scale (CIHS) relative to other IH scales. These results suggest that a blend of IH features, including metacognitive, relational, and emotional IH features, is the strongest predictor of less misinformation receptivity relative to narrow IH features (i.e., metacognitive or relational features in isolation). These meta-analytic results are consistent with previous research finding that the CIHS was a stronger correlate of less conspiracy belief (e.g., Bowes et al., 2021; Bowes et al., 2023) and belief in fake news (Bowes & Tasimi, 2022) than the General Intellectual Humility Scale (GIHS), the latter of which assesses only metacognitive IH features.

A similar portrait emerged at the dimensional level of analysis, as IH dimensions assessing a blend of IH features (CIHS Lack of Intellectual Overconfidence and Independence of Intellect and Ego) more strongly correlated with less misinformation receptivity than IH dimensions assessing narrow IH features (CIHS Openness to Revising One’s Viewpoint and Respect for Others’ Viewpoints), a pattern consistent with previous research (Bowes & Tasimi, 2022; Huynh & Bayles, 2022; Huynh & Senger, 2021; Huynh et al., 2024; Plohl & Musil, 2023). Moreover, the present meta-analysis clarified the relative predictive strength of narrow IH features: Respect for Others’ Viewpoints tended to be one of the weakest correlates of less misinformation receptivity relative to other IH dimensions, including Openness to Revising One’s Viewpoint. These results suggest that metacognitive IH features in isolation may more strongly and consistently correlate with less misinformation receptivity than relational IH features in isolation. Hence, welcoming different points of view and maintaining respect for someone in the face of disagreement, while certainly worthwhile endeavors, are unlikely to, in isolation, buffer against misinformation receptivity.

Second, there were differences across misinformation types and measures. Although a previous paper found that the relations between IH and pseudoscience belief were typically the smallest compared with conspiracy belief and belief in fake news (Bowes & Tasimi, 2022), across the entire existing literature, the relations between IH and pseudoscience belief were the strongest relative to the other belief domains. The relationship between IH and pseudoscience belief was moderate whereas effects were small for conspiracy belief and belief in fake news. Certain measures of pseudoscience belief appeared to be driving this larger effect. Specifically, IH was moderately to strongly related to less misinformation receptivity for measures of anti-vaccination attitudes, support for COVID-19 public health policies, paranormal beliefs, and mistrust of science.

There was also evidence that the relations between IH and pseudoscience belief and fake news belief varied across COVID-19 and non-COVID-19 misinformation measures. The patterns for each domain, however, were different. For pseudoscience belief, the relations with IH were stronger for measures of COVID-19 beliefs than for non-COVID-19 beliefs. For fake news belief, the opposite pattern emerged, as effects were stronger for non-COVID-19 fake news than for COVID-19 fake news. These results raise the possibility that COVID-19 misinformation is not a unitary construct, as there may be important psychological differences between manifestations of COVID-19 misinformation. That said, before making such a conclusion, future research is needed to directly compare these two domains of fake news—there were approximately 14 times more effect sizes for non-COVID-19 fake news than COVID-19 fake news. As such, there may have been more heterogeneity, contributing to a smaller effect, for COVID-19 fake news compared with non-COVID-19 fake news.

Intellectual Humility and Behavioral Intentions

The overall relationship between IH and behavioral intentions was weak and positive, indicating that IH is generally related to greater intentions to engage in evidence-based practices and/or move away from misinformation. Dovetailing with the moderation results for beliefs, the CIHS total score was the strongest correlate of less willingness to share fake news compared with the GIHS. In addition, CIHS Lack of Intellectual Overconfidence was the strongest correlate of less willingness to share fake news compared with Openness to Revising One’s Viewpoint and Respect for Others’ Viewpoints. These relations again point to the possibility that focusing on comprehensive conceptualizations of IH in future research on misinformation receptivity may be especially fruitful, for both beliefs and intentions. It is important to note that we could only examine differences across IH total scores for willingness to share fake news, as only a single IH measure was used for investigative intentions (the GIHS) and for vaccine intentions (the CIHS).

Regarding misinformation measures, the relationship for intentions to investigate misinformation was significantly stronger compared with the relations for intentions to be vaccinated and share fake news. As with belief in fake news, the relationship between IH and less willingness to share fake news was stronger for non-COVID-19 fake news than for COVID-19 fake news. Thus, relations were especially strong for investigative intentions and non-COVID-19 misinformation. Given that the number of papers and samples available for intentions was small, additional research on these main effects and moderation effects is needed to strengthen (or challenge) these conclusions.

Intellectual Humility and Behaviors

The main effect for behaviors was strong, positive, and significant, illustrating that IH generally correlates with more engagement in evidence-based behaviors and/or movement away from misinformation. Consistent with the effect when collapsing across behaviors, IH was strongly, positively, and significantly related to engagement in COVID-19 precautions, especially for engagement in social distancing. Although we could not estimate main effects for other behaviors due to an insufficient number of papers, there were enough effect sizes to conduct subgroup analyses when collapsing across behaviors. The relationship between IH and COVID-19 precautions was significantly stronger than the relations for actual investigation of misinformation and general health behaviors.

The overall pattern of relations for behaviors is largely consistent with the overall pattern for intentions. That is, intentions seem to align with behaviors in the context of IH, as IH is related to both intentions and actual engagement in investigative behaviors for fake news and in evidence-based health practices. Moreover, behaviors and beliefs may also align in the context of IH, as IH is associated with more support for COVID-19 public health policies and more engagement in COVID-19 precautions.

Although we could examine differences across measures of behaviors, we could not examine differences across IH measures. Only one IH total score (the GIHS) and one IH subdimension (AIHS Open) were reported for COVID-19 precautions. Until additional research is published, it remains an open question as to whether the relations between IH and misinformation-related behaviors vary across comprehensive and narrow operationalizations of IH.

Limitations and Future Directions

The present investigation is characterized by several important limitations that should be considered when interpreting the results. First, the body of research examining IH and misinformation receptivity is nascent. The average year of publication was 2022, and the earliest published paper was made available in 2020. Hence, this research has only been published within the last four years. Because it is a new field, there were necessarily constraints on statistical power for certain variables and what could be estimated. The number of papers, samples, and effect sizes was often small, which could contribute to Type I error. As such, we encourage readers to attend to effect sizes rather than rely solely on statistical significance. Similarly, we sampled a limited number of databases, and we employed a 30-paper stopping rule in our investigation. It is possible we missed relevant papers by using these particular databases and this stopping rule, but, given that the literature is emerging (e.g., many search results only yielded 10 or fewer papers), it is unlikely that our approaches significantly biased the results in the present meta-analysis.

The specter of correlated error variance is also important to consider in the present investigation. Sources of error are likely not independent across studies. All measures were assessed via self-report, contributing to mono-method bias. Other sources of measurement error, such as response biases, may similarly contribute to over- or underestimation of the effect sizes (but see Bowes et al., 2021). To overcome these potential limitations, additional research using different methodological and measurement approaches (such as informant-reports; see Meagher, 2022) is needed.

Similarly, other sources of heterogeneity should be examined in future research. Heterogeneity, and even statistical uncertainty, in the relations between IH and misinformation receptivity was generally substantial, even for statistics that are not influenced by the number of reports. Most of the 90% prediction intervals included zero, indicating that some individual estimates in future studies may be null even when the same methodological designs are adopted. Moreover, the 90% prediction intervals for beliefs and intentions indicated that estimates may sometimes run in the opposite direction, with IH predicting more misinformation receptivity, albeit with weaker effects in the opposite direction than in the theorized direction. As such, we encourage researchers to work toward elucidating the conditions under which strong (versus weak) and theoretically consistent (versus opposite to prediction) relations between IH and misinformation receptivity emerge by including relevant moderators. For instance, because relations vary across IH and misinformation measures, researchers should carefully consider their measurement selection and, ideally, include multiple measures of IH and misinformation to allow for comparisons across measures. Beyond measurement-related considerations, IH may statistically interact with certain demographic characteristics (e.g., political ideology; see supplemental materials) or other psychological variables (e.g., critical thinking) in predicting misinformation receptivity. By systematically testing moderation effects as this literature continues to grow, it will be possible to identify not only what the relations are but also when they emerge.
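For researchers planning such analyses, the following minimal sketch illustrates how a three-level model, a 90% prediction interval, and a measure-based moderator test can be specified with the metafor package (Viechtbauer, 2010; see Assink & Wibbelink, 2016). The data frame and column names (dat, r, n, sample_id, es_id, ih_measure) are hypothetical placeholders rather than the variables from our dataset.

library(metafor)

# Fisher's z transformation of the correlations and their sampling variances
dat <- escalc(measure = "ZCOR", ri = r, ni = n, data = dat)

# Three-level model: effect sizes (es_id) nested within samples (sample_id)
overall <- rma.mv(yi, vi, random = ~ 1 | sample_id/es_id, data = dat, method = "REML")

# Pooled estimate with a 90% prediction interval, back-transformed to r
predict(overall, level = 90, transf = transf.ztor)

# Example moderator test: does the relation vary across IH measures?
moderated <- rma.mv(yi, vi, mods = ~ factor(ih_measure),
                    random = ~ 1 | sample_id/es_id, data = dat, method = "REML")
summary(moderated)

Here, predict() returns the prediction interval for the distribution of true effects alongside the confidence interval for the pooled estimate, which is what distinguishes the two types of uncertainty discussed above.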

Greater attention to sample characteristics is also warranted. Because the vast majority of samples comprised participants from the United States (Table S1), these results cannot be generalized to other cultural contexts. Given that cultural factors may promote or hinder IH (such as valuing collectivism versus independence; Porter et al., 2022), it will be important to investigate whether and to what extent these relations hold elsewhere. Belief in misinformation can also vary across cultures. For instance, conspiracy belief is often elevated in nations that are more collectivistic, are characterized by lower socioeconomic status, and have corrupt governments (see Hornsey & Pearson, 2022). However, there is also some evidence of cross-cultural consistency for belief in misinformation. For example, conspiracy belief is associated with extreme political beliefs across more than 25 countries (Imhoff et al., 2022), and efforts to inoculate people against misinformation were generally effective across four countries with different cultural characteristics (Roozenbeek et al., 2020). Additional research is needed to ascertain whether the relations between IH and misinformation receptivity vary across cultures and nations. Along these lines, most participants were female, White, and college-educated, and many were politically Democratic (Table S1). These sample characteristics may further constrain the generalizability of our findings and should be evaluated in future research.

Just as it is essential to consider the demographic features of the samples, it is also essential to consider the levels of belief commitment present in them. Preexisting levels of belief commitment were, for the most part, not taken into consideration in the examined studies. If most participants do not believe in misinformation, this consistent restriction of range across studies would attenuate the relations between IH and misinformation receptivity. A similar problem would arise if most participants do believe in misinformation. For instance, in one paper including participants with strong preexisting commitments to misinformation (anti-vaccination belief, political conspiracy belief), the relations between IH (across two measures) and conspiracy belief were positive rather than negative (Gollwitzer et al., 2024). This result could be due to restriction of range or could reflect that the relations between IH and misinformation receptivity change depending on preexisting commitments to misinformation. To establish the potential promise of IH for mitigating misinformation receptivity, research should strive to include those who are most at risk of believing misinformation or who already believe it (Brashier, 2024). Future research on IH and misinformation receptivity should recruit the individuals who would benefit from interventions the most, chiefly those who are committed to their views, may lack IH, and are likely to act on their beliefs in harmful ways.
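To make the restriction-of-range point concrete, a purely illustrative simulation (not an analysis from this meta-analysis, with an assumed population correlation of -.30) shows how recruiting mostly non-believers shrinks an observed correlation:

# Illustrative simulation: restriction of range attenuates a correlation
set.seed(2024)
n <- 100000
ih <- rnorm(n)
belief <- -0.30 * ih + rnorm(n, sd = sqrt(1 - 0.30^2))  # population r of about -.30

cor(ih, belief)                              # full range: roughly -.30
skeptics <- belief < quantile(belief, 0.25)  # sample composed mostly of non-believers
cor(ih[skeptics], belief[skeptics])          # restricted range: noticeably weaker

The same attenuation would arise in reverse for samples made up almost entirely of committed believers.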

One of the goals of this meta-analysis was to determine if a causal link between IH and misinformation receptivity was feasible. Obviously, the best test of this link would be experimental work that manipulates levels of IH and accounts for third variables; however, a reasonable first step is establishing correlational evidence of a link, which is one of the necessary steps for making causal conclusions. It is important to remember that results from the meta-analysis do not speak to other fundamental aspects of causality, including temporal precedence and the influence of potential third variables. Only by examining causality and conducting risky tests of these relations can we determine whether IH is powerful for understanding resilience against misinformation.

One such risky test of IH’s causal influence in this domain would be to examine IH as an intervention for misinformation receptivity. Although the correlational effects were small to medium, small correlations can still be worthwhile to pursue in an experimental context. After all, some of the most well-established experimental findings in psychology (e.g., that scarcity contributes to perceiving that something is more valuable) are characterized by small correlations (see Funder & Ozer, 2019). Some research has already shown that IH can be experimentally increased (e.g., by priming a growth mindset of intelligence; Porter & Schumann, 2018; see also Porter et al., 2020), making IH a promising target for applied research. Indeed, making people aware of the fallibility of their knowledge promotes state IH and, in turn, leads to increases in intentions to investigate fake news headlines (Koetke et al., 2023). IH may be an especially effective intervention for misinformation receptivity, as IH interventions may help people reconsider their preexisting views and address the emotional and relational aspects of misinformation receptivity (e.g., Bowes et al., 2023; Martel et al., 2020; McLoughlin & Brady, 2024). Our results provide key insights into ways to intervene on misinformation receptivity, broadly construed. If we imagine misinformation receptivity as a grid, comprehensive IH measures hit that grid in multiple quadrants (metacognitive, emotional, and relational), whereas narrow IH measures hit it in only one quadrant. Thus, interventions focused on increasing broad IH features, such as those that target overconfidence and emotional enmeshment with one’s beliefs, may be more effective in reducing misinformation receptivity. Such experimental work could also measure potentially relevant third variables (e.g., cognitive flexibility) to establish that IH uniquely causes reductions in misinformation receptivity.

In addition to homing in on broad rather than narrow definitions of IH in future applied research, targeted research is needed to clarify whether IH can reduce misinformation receptivity across disparate misinformation types, especially given that IH is a weak or nonsignificant correlate of certain manifestations of misinformation (both within and across misinformation types). It is possible that IH may reduce receptivity for some misinformation beliefs but not for others. Such a finding would not detract from the utility of IH for understanding and promoting resilience against misinformation, but it would illustrate that there are boundaries limiting the broad applicability of IH interventions. In this vein, IH should also be examined as an intervention for promoting evidence-based intentions and behaviors. Even if IH interventions do not strongly change people’s beliefs, our results suggest that these interventions may still hold promise for moving people away from acting on their beliefs in harmful ways.

Summary

Over the last few years, scholars have proposed that IH may help people orient toward accuracy and away from falsehoods (e.g., Church & Barrett, 2016). Cross-sectional research lends initial support to this supposition, as studies indicate that IH is related to less misinformation receptivity (e.g., Bowes & Tasimi, 2022). Here, we meta-analytically investigated the relations between IH and misinformation receptivity to provide a more accurate estimate of these relations, identify sources of heterogeneity, and provide a roadmap for future research aiming to elucidate the potential causal influence of IH on misinformation receptivity. Altogether, results indicate that IH is weakly to moderately related to less misinformation receptivity across beliefs, intentions, and behaviors, with effect sizes tending to be heterogeneous and varying across IH and misinformation measures. Future research should leverage these findings to identify the mechanisms underlying the relations between IH and misinformation receptivity, the generality of these relations across levels of belief commitment, and whether IH interventions may be a fruitful path for reducing misinformation receptivity.

Data Availability Statement

Datasets, analytic code, output files, and the screened studies are available on the Open Science Framework (OSF) at the following link: https://osf.io/bmn5p/.

Conflicts of Interest

The authors declare no competing interests.

Acknowledgements

This material is based upon work supported by the National Science Foundation SBE Postdoctoral Research Fellowship under Grant No. 2313708.

Author Contributions

S.B. and L.F. designed the study and research questions. S.B. screened and coded the studies, conducted the analyses, and drafted the initial manuscript. L.F. provided critical revisions to the manuscript.

Endnotes

[1]We also examined a range of sample characteristics (e.g., demographic variables, sample recruitment) as moderators (Table S1). These analyses were purely exploratory and served to clarify other potential sources of heterogeneity. The output for these analyses and a description of the results are available on the OSF repository.

[2]We also examined the main effects with outliers removed (i.e., effect sizes at the 95th and 99th percentiles of the distributions of the standardized residuals). Because none of the effects appreciably changed when outliers were removed (e.g., in statistical significance, direction, or magnitude), we focus on the results from the full dataset. Results with outliers removed are available on the OSF repository.
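As a rough illustration of this kind of sensitivity check (a minimal sketch that builds on the earlier three-level model sketch and its hypothetical objects overall and dat; it is not the authors’ exact procedure), outlying effect sizes can be flagged via standardized residuals in metafor and the model refit without them:

# Flag effect sizes with extreme standardized residuals and refit the model
zres <- rstandard(overall)$z
keep <- abs(zres) <= quantile(abs(zres), 0.95)   # retain all but the most extreme 5%
overall_trimmed <- rma.mv(yi, vi, random = ~ 1 | sample_id/es_id,
                          data = dat[keep, ], method = "REML")
predict(overall_trimmed, level = 90, transf = transf.ztor)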

References

Alfano, M., Iurino, K., Stey, P., Robinson, B., Christen, M., Yu, F., & Lapsley, D. (2017). Development and validation of a multidimensional measure of intellectual humility. PLoS ONE, 12(8), Article e0182950. https://doi.org/10.1371/journal.pone.0182950

Anthony, A., & Moulding, R. (2019). Breaking the news: Belief in fake news and conspiracist beliefs. Australian Journal of Psychology, 71(2), 154–162. https://doi.org/10.1111/ajpy.12233

Assink, M., & Wibbelink, C. J. M. (2016). Fitting three-level meta-analytic models in R: A step-by-step tutorial. Tutorials in Quantitative Methods for Psychology, 12(3), 154–174. https://doi.org/10.20982/tqmp.12.3.p154

*Bowes, S. M., Costello, T. H., Ma, W., & Lilienfeld, S. O. (2021). Looking under the tinfoil hat: Clarifying the personological and psychopathological correlates of conspiracy beliefs. Journal of Personality, 89(3), 422–436. https://doi.org/10.1111/jopy.12588

Bowes, S. M., Costello, T. H., & Tasimi, A. (2023). The conspiratorial mind: A meta-analytic review of motivational and personological correlates. Psychological Bulletin, 149(5-6), 259–293. https://doi.org/10.1037/bul0000392

Bowes, S. M., Ringwood, A., & Tasimi, A. (2024). Is intellectual humility related to more accuracy and less overconfidence?. The Journal of Positive Psychology, 19, 538-553. https://doi.org/10.1080/17439760.2023.2208100

*Bowes, S. M., & Tasimi, A. (2022). Clarifying the relations between intellectual humility and pseudoscience beliefs, conspiratorial ideation, and susceptibility to fake news. Journal of Research in Personality, 98, 104220. https://doi.org/10.1016/j.jrp.2022.104220

Brashier, N. M. (2024). Fighting misinformation among the most vulnerable users. Current Opinion in Psychology. Advance online publication. https://doi.org/10.1016/j.copsyc.2024.101813

Brotherton, R., French, C. C., & Pickering, A. D. (2013). Measuring belief in conspiracy theories: The Generic Conspiracist Beliefs Scale. Frontiers in Psychology, 4, Article 279. https://doi.org/10.3389/fpsyg.2013.00279

Brienza, J. P., Kung, F. Y. H., Santos, H. C., Bobocel, D. R., & Grossmann, I. (2018). Wisdom, bias, and balance: Toward a process-sensitive measurement of wisdom-related cognition. Journal of Personality and Social Psychology, 115(6), 1093–1126. https://doi.org/10.1037/pspp0000171

Bronstein, M. V., Pennycook, G., Bear, A., Rand, D. G., & Cannon, T. D. (2019). Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking. Journal of Applied Research in Memory and Cognition, 8(1), 108–117. https://doi.org/10.1016/j.jarmac.2018.09.005

Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta-analytic methods. Advances in Methods and Practices in Psychological Science, 2(2), 115–144. https://doi.org/10.1177/2515245919847196

Chiolero, A., Santschi, V., Burnand, B., Platt, R. W., & Paradis, G. (2012). Meta-analyses: With confidence or prediction intervals? European Journal of Epidemiology, 27, 823–825. https://doi.org/10.1007/s10654-012-9738-y

Church, I., & Barrett, J. (2016). Intellectual humility: An introduction to the philosophy and science. Bloomsbury.

*Coelho, P., Foster, K., Nedri, M., & Marques, M. D. (2022). Increased belief in vaccination conspiracy theories predicts increases in vaccination hesitancy and powerlessness: Results from a longitudinal study. Social Science & Medicine, 315, 115522. https://doi.org/10.1016/j.socscimed.2022.115522

Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y

Election Integrity Project (2021, July 1). The long fuse: Misinformation and the 2020 election. Election Integrity Project. https://www.eipartnership.net/report

Enders, A., Farhart, C., Miller, J., Uscinski, J., Saunders, K., & Drochon, H. (2022). Are Republicans and conservatives more likely to believe conspiracy theories? Political Behavior, 45(4), 2001–2024. https://doi.org/10.1007/s11109-022-09812-3

Farias, M., Newheiser, A. K., Kahane, G., & de Toledo, Z. (2013). Scientific faith: Belief in science increases in the face of stress and existential anxiety. Journal of Experimental Social Psychology, 49(6), 1210–1213. https://doi.org/10.1016/j.jesp.2013.05.008

Fasce, A., & Picó, A. (2019). Conceptual foundations and validation of the Pseudoscientific Belief Scale. Applied Cognitive Psychology, 33(4), 617–628. https://doi.org/10.1002/acp.3501

Federico, C. M., Williams, A. L., & Vitriol, J. A. (2018). The role of system identity threat in conspiracy theory endorsement. European Journal of Social Psychology, 48(7), 927–938. https://doi.org/10.1002/ejsp.2495

Forthofer, R. N., Lee, E. S., & Hernandez, M. (2007). Interval estimation. In R. N. Forthofer, E. S. Lee, & M. Hernandez (Eds.), Biostatistics (2nd ed., pp. 169–212). Academic Press. https://doi.org/10.1016/B978-0-12-369492-8.50012-1

Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168. https://doi.org/10.1177/2515245919847202

Gignac, G. E., & Szodorai, E. T. (2016). Effect size guidelines for individual differences researchers. Personality and Individual Differences, 102, 74–78. https://doi.org/10.1016/j.paid.2016.06.069

*Gollwitzer, A., Bao, E., & Oettingen, G. (2024). Intellectual humility as a tool to combat false beliefs: An individual-based approach to belief revision. British Journal of Social Psychology. Advance online publication. https://doi.org/10.1111/bjso.12732

Haggard, M., Rowatt, W. C., Leman, J. C., Meagher, B., Moore, C., Fergus, T., Whitcomb, D., Battaly, H., Baehr, J., & Howard-Snyder, D. (2018). Finding middle ground between intellectual arrogance and intellectual servility: Development and assessment of the limitations-owning intellectual humility scale. Personality and Individual Differences, 124, 184–193. https://doi.org/10.1016/j.paid.2017.12.014

Higgins, J. P., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21(11), 1539–1558. https://doi.org/10.1002/sim.1186

Hornsey, M. J., & Pearson, S. (2022). Cross-national differences in willingness to believe conspiracy theories. Current Opinion in Psychology, 47, 101391. https://doi.org/10.1016/j.copsyc.2022.101391

Huedo-Medina, T. B., Sánchez-Meca, J., Marín-Martínez, F., & Botella, J. (2006). Assessing heterogeneity in meta-analysis: Q statistic or I2 index? Psychological Methods, 11(2), 193–206. https://doi.org/10.1037/1082-989X.11.2.193

*Huynh, H. P., & Bayles, B. (2022). Secure yet flexible: Can intellectual humility protect against belief in conspiracy theories? North American Journal of Psychology, 24(4), 561–570.

*Huynh, H. P., Dicke-Bohmann, A., & Zsila, Á. (2024). Conservatism, anti-vaccination attitudes, and intellectual humility: examining their associations through a social judgment theory framework. Journal of Behavioral Medicine, 47(2), 184–196. https://doi.org/10.1007/s10865-023-00450-6

*Huynh, H. P., & Senger, A. R. (2021). A little shot of humility: Intellectual humility predicts vaccination attitudes and intention to vaccinate against COVID‐19. Journal of Applied Social Psychology, 51(4), 449–460. https://doi.org/10.1111/jasp.12747

Imhoff, R., Zimmer, F., Klein, O., António, J. H., Babinska, M., Bangerter, A., … & Van Prooijen, J. W. (2022). Conspiracy mentality and political orientation across 26 countries. Nature Human Behaviour, 6(3), 392–403. https://doi.org/10.1038/s41562-021-01258-7

IntHout, J., Ioannidis, J. P., Rovers, M. M., & Goeman, J. J. (2016). Plea for routinely presenting prediction intervals in meta-analysis. BMJ Open, 6(7), e010247. https://doi.org/10.1136/bmjopen-2015-010247

*Jongman-Sereno, K. P., Hoyle, R. H., Davisson, E. K., & Park, J. (2023). Intellectual humility and responsiveness to public health recommendations. Personality and Individual Differences, 211, 112243. https://doi.org/10.1016/j.paid.2023.112243

*Koetke, J., Schumann, K., & Porter, T. (2022). Intellectual humility predicts scrutiny of COVID-19 misinformation. Social Psychological & Personality Science, 13(1), 277–284. https://doi.org/10.1177/1948550620988242

*Koetke, J., Schumann, K., Porter, T., & Smilo-Morgan, I. (2023). Fallibility salience increases intellectual humility: Implications for people’s willingness to investigate political misinformation. Personality & Social Psychology Bulletin, 49(5), 806–820. https://doi.org/10.1177/01461672221080979

Krumrei-Mancuso, E. J., & Rouse, S. V. (2016). The development and validation of the Comprehensive Intellectual Humility Scale. Journal of Personality Assessment, 98(2), 209–221. https://doi.org/10.1080/00223891.2015.1068174

Leary, M. R., Diebels, K. J., Davisson, E. K., Jongman-Sereno, K. P., Isherwood, J. C., Raimi, K. T., Deffler, S. A., & Hoyle, R. H. (2017). Cognitive and interpersonal features of intellectual humility. Personality & Social Psychology Bulletin, 43(6), 793–813. https://doi.org/10.1177/0146167217697695

Lie, D., & Boker, J. (2004). Development and validation of the CAM Health Belief Questionnaire (CHBQ) and CAM use and attitudes amongst medical students. BMC Medical Education, 4, 2. https://doi.org/10.1186/1472-6920-4-2

Marín-Martínez, F., & Sánchez-Meca, J. (2010). Weighting by inverse variance or by sample size in random-effects meta-analysis. Educational and Psychological Measurement, 70(1), 56–73. https://doi.org/10.1177/0013164409344534

Martel, C., Pennycook, G., & Rand, D. G. (2020). Reliance on emotion promotes belief in fake news. Cognitive Research: Principles and Implications, 5, 1-20. https://doi.org/10.1186/s41235-020-00252-3

Martin, L. R., & Petrie, K. J. (2017). Understanding the dimensions of anti-vaccination attitudes: The Vaccination Attitudes Examination (VAX) Scale. Annals of Behavioral Medicine, 51(5), 652–660. https://doi.org/10.1007/s12160-017-9888-y

Matz, S. C., Gladstone, J. J., & Stillwell, D. (2017). In a world of big data, small effects can still matter: A reply to Boyce, Daly, Hounkpatin, and Wood (2017). Psychological Science, 28(4), 547–550. https://doi.org/10.1177/0956797617697445

McLoughlin, K. L., & Brady, W. J. (2024). Human-algorithm interactions help explain the spread of misinformation. Current Opinion in Psychology, 101770. https://doi.org/10.1016/j.copsyc.2023.101770

McShane, B. B., Böckenholt, U., & Hansen, K. T. (2016). Adjusting for publication bias in meta-analysis: An evaluation of selection methods and some cautionary notes. Perspectives on Psychological Science, 11(5), 730–749. https://doi.org/10.1177/1745691616662243

Meagher, B. R. (2022). An assessment of self and informant data for measuring intellectual humility. Personality and Individual Differences, 184, 111218. https://doi.org/10.1016/j.paid.2021.111218

*Newman, D., Lewandowsky, S., & Mayo, R. (2022). Believing in nothing and believing in everything: The underlying cognitive paradox of anti-COVID-19 vaccine attitudes. Personality and Individual Differences, 189, 111522. https://doi.org/10.1016/j.paid.2022.111522

Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. https://doi.org/10.1016/j.cognition.2018.06.011

Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences, 25(5), 388–402. https://doi.org/10.1016/j.tics.2021.02.007

*Plohl, N., & Musil, B. (2023). Assessing the incremental value of intellectual humility and cognitive reflection in predicting trust in science. Personality and Individual Differences, 214, 112340. https://doi.org/10.1016/j.paid.2023.112340

Porter, T., Elnakouri, A., Meyers, E. A., Shibayama, T., Jayawickreme, E., & Grossmann, I.  (2022). Predictors and consequences of intellectual humility. Nature Reviews Psychology, 1(9), 524–536. https://doi.org/10.1038/s44159-022-00081-9

Porter, T., & Schumann, K. (2018). Intellectual humility and openness to the opposing view. Self and Identity, 17(2), 139–162. https://doi.org/10.1080/15298868.2017.1361861

Porter, T., Schumann, K., Selmeczy, D., & Trzesniewski, K. (2020). Intellectual humility predicts mastery behaviors when learning. Learning and Individual Differences, 80, 101888. https://doi.org/10.1016/j.lindif.2020.101888

*Preston, J. L., & Khan, A. (2024). Comparing the influence of intellectual humility, religiosity, and political conservatism on vaccine attitudes in the United States, Canada, and the United Kingdom. Public Understanding of Science, 33(3), 343–352. https://doi.org/10.1177/09636625231191633

R Core Team. (2024). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. https://www.R-project.org/

Roozenbeek, J., Van Der Linden, S., & Nygren, T. (2020). Prebunking interventions based on “inoculation” theory can reduce susceptibility to misinformation across cultures. Harvard Kennedy School Misinformation Review. https://doi.org/10.17863/CAM.48846

*Senger, A. R., & Huynh, H. P. (2021). Intellectual humility’s association with vaccine attitudes and intentions. Psychology, Health & Medicine, 26(9), 1053–1062. https://doi.org/10.1080/13548506.2020.1778753

Shapiro, G. K., Holding, A., Perez, S., Amsel, R., & Rosberger, Z. (2016). Validation of the vaccine conspiracy beliefs scale. Papillomavirus Research, 2, 167–172. https://doi.org/10.1016/j.pvr.2016.09.001

Silver, N. C., & Dunlap, W. P. (1987). Averaging correlation coefficients: Should Fisher’s z transformation be used? Journal of Applied Psychology, 72(1), 146–148. https://doi.org/10.1037/0021-9010.72.1.146

Stanley, T. D., & Doucouliagos, H. (2014). Meta‐regression approximations to reduce publication selection bias. Research Synthesis Methods, 5(1), 60–78. https://doi.org/10.1002/jrsm.1095

Stecula, D. A., & Pickup, M. (2021). How populism and conservative media fuel conspiracy beliefs about COVID-19 and what it means for COVID-19 behaviors. Research & Politics, 8(1). https://doi.org/10.1177/2053168021993979

Swami, V., Coles, R., Stieger, S., Pietschnig, J., Furnham, A., Rehim, S., & Voracek, M. (2011). Conspiracist ideation in Britain and Austria: Evidence of a monological belief system and associations between individual psychological differences and real‐world and fictitious conspiracy theories. British Journal of Psychology, 102(3), 443–463. https://doi.org/10.1111/j.2044-8295.2010.02004.x

Tobacyk, J. J. (2004). A revised paranormal belief scale. International Journal of Transpersonal Studies, 23(1), 94–98. http://dx.doi.org/10.24972/ijts.2004.23.1.94

Uscinski, J. E., & Enders, A. M. (2023). What is a conspiracy theory and why does it matter? Critical Review: A Journal of Politics and Society, 35(1–2), 148–169. https://doi.org/10.1080/08913811.2022.2115668

Van den Noortgate, W., López-López, J. A., Marín-Martínez, F., & Sánchez-Meca, J. (2013). Three-level meta-analysis of dependent effect sizes. Behavior Research Methods, 45(2), 576–594. https://doi.org/10.3758/s13428-012-0261-6

van der Linden, S., Albarracín, D., Fazio, L., Freelon, D., Roozenbeek, J., Swire-Thompson, B., & Van Bavel, J. (2023). Using psychological science to understand and fight health misinformation: An APA consensus statement. American Psychological Association. https://www.apa.org/pubs/reports/misinformation-consensus-statement.pdf

van der Linden, S., Panagopoulos, C., Azevedo, F., & Jost, J. T. (2021). The paranoid style in American politics revisited: An ideological asymmetry in conspiratorial thinking. Political Psychology, 42(1), 23–51. https://doi.org/10.1111/pops.12681

Van Mulukom, V., Pummerer, L. J., Alper, S., Bai, H., Čavojová, V., Farias, J., … & Žeželj, I. (2022). Antecedents and consequences of COVID-19 conspiracy beliefs: A systematic review. Social Science & Medicine, 301, 114912. https://doi.org/10.1016/j.socscimed.2022.114912

Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://doi.org/10.18637/jss.v036.i03

Wiseman, R., & Watt, C. (2004). Measuring superstitious belief: Why lucky charms matter. Personality and Individual Differences, 37(8), 1533–1541. https://doi.org/10.1016/j.paid.2004.02.009

Zadrozny, B. (2022, January 18). Escape from QAnon: How Jan. 6 changed one person’s path. NBC News. https://www.nbcnews.com/tech/internet/qanon-jan-6-changed-one-persons-path-rcna11276

Zmigrod, L., Zmigrod, S., Rentfrow, P. J., & Robbins, T. W. (2019). The psychological roots of intellectual humility: The role of intelligence and cognitive flexibility. Personality and Individual Differences, 141, 200–208. https://doi.org/10.1016/j.paid.2019.01.016

Note: References with an asterisk were included in the meta-analysis. The full references for the studies included in the meta-analysis (but not cited in the main text of the manuscript) are available on the OSF repository.