Introduction
Fake news has the potential to disrupt society (Ecker et al., 2022) and presents a more serious risk than ever with the advent of artificial intelligence, such as large language models (Chen & Shu, 2023). Given the gravity of these risks, interest in the psychological foundations of misinformation has surged since the 2016 United States (U.S.) Presidential election. There is a wealth of research on the antecedents of sharing fake news. For example, much research has concerned itself with the role of cognitive factors in misinformation sharing, such as analytical thinking (Bago et al., 2020), confidence (Gawronski et al., 2023), and linguistic processing (Muda et al., 2023), as well as the social motivations for the spread of fake news (Lawson et al., 2023; Osmundsen et al., 2021; Pereira et al., 2023; Ren et al., 2023). This work complements other research that focuses more closely on news content, types of news outlets (e.g., Grinberg et al., 2019; Vosoughi et al., 2018), and social media platform properties (Allcott & Gentzkow, 2017; Bail, 2022). Overall, there is substantial evidence about the role of these factors in the spread of misinformation.
Yet research into misinformation is also littered with divergent conclusions. For example, it has been suggested that drawing attention to accuracy can reduce the spread of misinformation (Pennycook et al., 2021). However, this claim has sparked debate regarding its robustness across the political spectrum (Pennycook & Rand, 2022; Rathje et al., 2022; Roozenbeek et al., 2021). A recent adversarial collaboration (Martel et al., 2024) found that accuracy prompts were indeed effective for Republicans and those with conservative values, though there remained some indication that their effectiveness might be lower for these groups. These divergences may in part be driven by the fact that the misinformation landscape is constantly changing: For example, factors such as Republican party identification that have consistently predicted sharing fake news (Guess et al., 2019; Osmundsen et al., 2021) appear to have inconsistent effects in more recent samples (e.g., Lin et al., 2023). In addition, divergences can result from the choices made by researchers, such as how to operationalize ideology, what news stimuli to use, and other frequently unobserved factors. Notably, this heterogeneity in approaches emerges even in the presence of pre-registration, and it is possible to reach opposite conclusions from the same data based on these types of choices (Breznau et al., 2022; Silberzahn et al., 2018).
In another case of disagreement, there is debate regarding the role of personality factors such as conscientiousness in the spread of misinformation. Though Lawson and Kakkar (2021) documented a robust effect whereby conscientiousness weakened the positive effect of right-wing political ideology on the likelihood of sharing fake news, others have suggested that this effect is not robust and applies equally to real news stories, minimizing its relevance to the study of misinformation (Lin et al., 2023). We use this example as a case study to demonstrate the role of methodological choices in the study of misinformation and to clarify the role conscientiousness plays across datasets, both directly as a predictor and indirectly by moderating the relationship between measures of political ideology and fake news sharing.
Several features make the role of personality in misinformation research an attractive case study for this purpose. First, personality has a proven relationship with a broad range of behaviors (Paunonen & Ashton, 2001) and so could viably play a key role in the misinformation ecosystem (Lawson & Kakkar, 2021). Second, despite this, there is limited research examining the relationship between personality variables and sharing fake news (see Calvillo et al., 2024 for a review), so establishing further evidence would be particularly valuable to the field. There is related meta-analytic evidence suggesting that the Big Five are not statistically significantly associated with beliefs in conspiracy theories (e.g., Goreis & Voracek, 2019), though this evidence concerns beliefs rather than the behavior of sharing. Third, there is a specific disagreement concerning the role of conscientiousness in sharing fake news (Lawson & Kakkar, 2021; Lin et al., 2023) that it would be beneficial to resolve, which also connects to the uncertain role of ideology and partisanship, as in the case of accuracy prompts (Martel et al., 2024). Finally, both sets of researchers involved in this disagreement pre-registered their studies and made their data publicly available yet arrived at divergent conclusions. Hence, by re-analyzing these data, we aim to reconcile the sources of this divergence.
It is first necessary to clarify the hypothesis proposed by Lawson and Kakkar (2021) as a precursor to examining the support for it. In the original article, the authors proposed that higher levels of trait conscientiousness would weaken the positive association between right-wing political ideology and the likelihood of sharing fake news. Here, we refer to this as the moderation by conscientiousness hypothesis. The reasoning behind this proposed interaction is as follows: while tendencies such as distrust in mainstream media (van der Linden et al., 2020) and a heightened need for shared reality (Jost et al., 2018) may make political conservatives more inclined to share falsehoods, those who have higher levels of conscientiousness—characterized by superior orderliness and impulse control (Petrocelli et al., 2020; Roberts et al., 2009)—are expected to suppress these inclinations and reduce the likelihood of such behavior. Importantly, the proposed moderation effect is that conscientiousness will attenuate the positive effect of political ideology, not a cross-over interaction (i.e., there is no suggestion that at high levels of conscientiousness, conservatives would be less likely than liberals to share misinformation). The nature of the proposed moderation matters for the following reason: to observe such an attenuation moderation, it is necessary to observe a main effect of political ideology. In other words, the adopted measure of political ideology must be positively correlated with the likelihood of sharing fake news overall. As will be shown later, this was not true for the majority of the ideology measures utilized in the Lin et al. (2023) studies, which has substantial implications for the resulting conclusions.
To investigate our research questions concerning the role of personality and the sources of divergent conclusions, we first summarize extant evidence for the main effect of personality variables on sharing fake news before augmenting this evidence by re-analyzing 12 experiments with a total of 6,790 participants and 143,956 observations. We next consider the evidence that conscientiousness moderates the relationship between ideology and sharing misinformation and highlight methodological choices that lead to the divergences in conclusions observed in the literature. Finally, we describe the difference between the likelihood of sharing fake news and what has been called ‘discernment’ or ‘susceptibility to fake news sharing’ (Guay et al., 2023; Lin et al., 2023). We further discuss how to interpret these effects and offer a general framework for understanding them. By highlighting these factors, we hope to facilitate the synthesis of disparate results in the literature, allowing for a better understanding of factors that should be considered to combat the spread of misinformation.
Methods
As well as reviewing published research, we re-analyzed data from two sources: seven studies from Lawson and Kakkar (2021, L&K) with a total of 91,144 observations and five studies from Lin et al. (2023, LRP) with a total of 52,812 observations. All studies had very similar designs, so we summarize their samples and materials together (Table 1). For additional details, refer to the published manuscripts, and for the data refer to the OSF links in the authors’ note. Given the substantial similarities in design across each stream of studies, we analyzed each group together, primarily focusing on what led the authors of Lin et al. (2023) to reach divergent conclusions to Lawson and Kakkar (2021).
Table 1
Description of Participants and Materials Across Studies
Study | Participants | Materials
L&K Study 1 | 488, MTurk | 12 real and 12 fake Covid-19 stories¹
L&K Study 2 | 484, MTurk | 12 real and 12 fake political stories²
L&K Study 3 | 479, MTurk | 12 real and 12 fake Covid-19 stories¹
L&K Study 4 | 967, MTurk | 12 real and 12 fake Covid-19 stories¹
L&K Study 6 | 491, MTurk | 12 real and 12 fake Covid-19 stories¹
L&K Study S1 | 954, Prolific | 12 fake Covid-19 stories¹
L&K Study S2 | 494, MTurk | 10 fake Covid-19 and 10 non-Covid fake stories³
LRP Study 1 | 490, Lucid | 12 real and 12 fake political stories²
LRP Study 2 | 484, MTurk | 12 real and 12 fake political stories²
LRP Study 3 | 465, MTurk | 12 fake political stories²
LRP Study 4 | 495, MTurk | 12 real and 12 fake political stories⁴
LRP Study 5 | 499, MTurk | 12 real and 12 fake Covid-19 stories⁵
Total | 6,790 participants | 5 different sets of stimuli
Samples
All participants were recruited online, with 6/7 L&K studies and 4/5 LRP studies conducted on Amazon's Mechanical Turk (MTurk). In the L&K MTurk studies, participants were required to pass a comprehension check, to not be using mobile devices, to have U.S. IP addresses, to have an acceptance rate of 94% or above on MTurk, and to have completed between 50 and 50,000 HITs on MTurk. Lin et al. (2023) did not state whether they required respondents to not be using mobile devices, but additionally excluded respondents with suspicious IP addresses (Kennedy et al., 2020) and those who failed bot screeners at the beginning of each study. The recruitment procedures for the Prolific and Lucid studies were very similar, except for the described MTurk-specific conditions.
Procedure and Measures
At the core of each study design were participants' responses regarding whether they would consider sharing each story in a set of stories. Further specifics regarding the number, veracity, and content of these stories are available in Table 1. Participants saw the real and fake stories in a random order, indicating whether they would consider sharing them from the options 'No,' 'Maybe,' and 'Yes.' The latter two responses were coded as indicating an openness to sharing. In L&K Studies 1-6 and S1, participants were also asked to rate the accuracy of the stories before indicating their sharing preferences. After responding to the stories, participants' personalities were evaluated using the BFI-2 Big Five personality inventory (Soto & John, 2017), and they further answered measures of ideology and demographics. Some studies also included additional measures (e.g., thinking style), and some counterbalanced the order of presentation of the stories and other measures. In L&K Study 3, the stories had fact-check warnings associated with them. In each case, these data were then structured to produce a dataset with intention to share at a story level and personality and ideology at a participant level.
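As an illustration, reshaping the raw responses into this story-level structure might look as follows (a minimal sketch in Python with hypothetical file and column names, not the original analysis code):

```python
import pandas as pd

# Hypothetical wide file: one row per participant, one sharing response per story.
wide = pd.read_csv("study1.csv")

story_cols = [c for c in wide.columns if c.startswith("share_")]
long = wide.melt(
    id_vars=["participant_id", "conscientiousness", "ideology"],
    value_vars=story_cols,
    var_name="story",
    value_name="response",
)
# 'Maybe' and 'Yes' are both coded as indicating an openness to sharing.
long["shared"] = long["response"].isin(["Maybe", "Yes"]).astype(int)
# A stimulus key mapping each story to its veracity (fake = 1) would be merged here.
```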
Analytical Approach
In the original Lawson and Kakkar (2021) manuscript, as well as in Lin et al. (2023), the primary analytical approach involved predicting participants' binary sharing decisions using logistic regression for repeated measures, with standard errors clustered within participants. The core model featured in both manuscripts tested for a significant negative two-way interaction between political ideology and conscientiousness with this specification. In the present manuscript, we conduct a series of complementary analyses that clarify and extend the findings of the two manuscripts and reconcile their disparate conclusions. Note that we analyze each set of studies together—the L&K and the LRP studies, respectively—maintaining the repeated measures approach but testing the sensitivity of conclusions to different ways of analyzing the data.
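For concreteness, this core specification might be estimated as follows (a minimal sketch assuming the long-format data built above; the two-way clustering by participant and study used in several of our analyses would require a two-way cluster-robust variance estimator instead):

```python
import statsmodels.formula.api as smf

# The analyses of sharing fake news use the fake-story subset of the
# long-format data sketched above.
fake_long = long[long["fake"] == 1]

# Logistic regression predicting each binary sharing decision, with the focal
# ideology x conscientiousness interaction and standard errors clustered
# within participants to account for the repeated measures design.
fit = smf.logit(
    "shared ~ ideology * conscientiousness",
    data=fake_long,
).fit(cov_type="cluster", cov_kwds={"groups": fake_long["participant_id"]})

print(fit.summary())  # the key test is the sign and p-value of the interaction
```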
We start by testing the relationship between conscientiousness and the likelihood of sharing fake news, meta-analyzing conscientiousness' zero-order correlations with the likelihood of sharing fake news across each set of studies. We next perform the same process for each of the different measures of ideology. This clarifies for which measures of ideology it is possible to test the moderation by conscientiousness hypothesis, by showing which measures display robust positive relationships with sharing fake news. We then identify a wider range of possible ideology variables in the LRP studies—which featured many single-item measures of ideology—by considering the indices constructed by Lin et al. (2023) and conducting factor analysis on the full set of featured items. For all measures of ideology that showed significant meta-analytic correlations with sharing, we then test the moderation by conscientiousness hypothesis using specification curve analysis. This form of analysis examines the robustness of conclusions to different specifications and summarizes the overall nature of an effect across a range of possible approaches. Specifically, we test whether there is a significant interaction effect between conscientiousness and ideology in predicting the sharing of fake news across seven different ideology measures and five different analytical approaches, for a total of 35 estimates of the key effect. Our set of analytical approaches includes both the single specification adopted by Lin et al. (2023) post-hoc—i.e., single-level logistic regression with standard errors only clustered within individuals—as well as methods that account for the fact that each study varied on dimensions such as sample population, stimuli, and design. Building on this analysis, we further test for the presence of an interaction effect using non-linear models—i.e., Generalized Additive Models (GAMs).
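The meta-analytic machinery itself is standard; the following is a minimal sketch (not the authors' code) of a DerSimonian-Laird random-effects meta-analysis of per-study correlations, which produces the pooled r, 95% CI, Q test, and I² statistics reported throughout the Results:

```python
import numpy as np
from scipy import stats

def random_effects_meta(r, n):
    """Random-effects meta-analysis of correlations (DerSimonian-Laird)."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                      # Fisher r-to-z transform
    v = 1.0 / (n - 3.0)                    # approximate sampling variance of z
    w = 1.0 / v                            # fixed-effect weights
    z_fe = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fe) ** 2)        # Cochran's Q
    df = len(r) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_re = 1.0 / (v + tau2)                # random-effects weights
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    p = 2 * stats.norm.sf(abs(z_re) / se)
    ci = (np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return {"r": np.tanh(z_re), "ci": ci, "p": p, "Q": q, "I2": i2}

# Usage with illustrative (not actual) per-study values:
# random_effects_meta(r=[-.21, -.35, -.01], n=[488, 494, 954])
```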
After considering the factors that reconcile the disparate conclusions observed across Lawson and Kakkar (2021), Lin et al. (2023), and the present manuscript, we conduct further analyses concerning fake news sharing and fake news discernment. In particular, we show that even in the absence of a three-way interaction between ideology, conscientiousness, and news veracity, conscientiousness modulates key ecosystem-level outcomes, such as the proportion of shared stories that are fake. We complement this analysis with supplementary simulations and presentation of descriptive statistics to provide a more holistic understanding of fake news discernment and the role of veracity, before discussing the high-level insights offered by our analyses.
Results
Conscientiousness as a Direct Predictor of Sharing Fake News
In a recent review, Calvillo, León, and Rutchick (2024) summarized evidence for the relationship between personality factors and the likelihood of sharing misinformation. Notably, three papers other than Lawson and Kakkar (2021) included both conscientiousness and measures of sharing, all of which found significant negative associations (Ahmed & Rasul, 2022; Ahmed & Tan, 2022; Buchanan, 2023). Though this was the most robust personality relationship, there was also some evidence for positive effects of extraversion and neuroticism and a negative effect of agreeableness on sharing. We next complement these results with a re-analysis of both the L&K studies and the LRP studies.
Across the L&K studies, the meta-analytic correlation between conscientiousness and sharing fake news using a random effects model was r = -.22 (95% CI [-.304, -.130], p < .001). Yet there was variation across studies, with this correlation varying between r = -.01 (Study S1) and r = -.35 (Study S2); indeed, the test for heterogeneity was significant (Q(6) = 820, p < .001), with an I² statistic of 99.1%.
Moving beyond simple correlations, we wanted to examine whether personality predicted the sharing of fake news across the L&K studies when controlling for additional variables. To do so, we estimated a logistic regression model to predict the choice of whether a story is shared (1) or not (0) with standard errors clustered within both individuals (to account for the repeated measures design) and studies (to account for the fact that many factors, such as stimuli, samples, and designs, changed across studies). When controlling for the role of left-right political ideology as well as the other four Big Five factors, age, gender, and education, conscientiousness remained a significant negative predictor (b = -.475, 95% CI [-.692, -.258], p < .001). Notably, three of the other four Big Five factors were also significant predictors, with both openness (b = -.474, 95% CI [-.629, -.319], p < .001) and agreeableness (b = -.190, 95% CI [-.300, -.079], p < .001) negatively predicting sharing and extraversion (b = .438, 95% CI [.319, .556], p < .001) having a positive effect on sharing. Neuroticism had a non-significant positive coefficient (b = .057, 95% CI [-.021, .135], p = .151). Overall, it appears that personality was an important determinant of respondents' sharing behaviors.
The LRP studies replicated these results, with a random-effects meta-analysis finding a negative correlation between conscientiousness and the likelihood of sharing fake stories (r = -.079, 95% CI [-.135, -.024], p = .005), though it was substantially weaker. The correlation varied from r = -.01 to r = -.17 across studies, and the test for heterogeneity was again significant (Q(4) = 95.0, p < .001) with an I² statistic of 95.8%. We next conducted similar regression analyses to those described for the L&K studies. However, in the LRP studies the L&K measure of left-right ideology was not available across studies. As such, to control for the role of political ideology and partisanship, we included warmth to Republicans—which captures positive sentiment towards members of the Republican party—in our regression analyses. In these analyses, conscientiousness emerged as a significant negative predictor (b = -.238, 95% CI [-.368, -.107], p < .001). Of the other Big Five, only openness (b = -.329, 95% CI [-.449, -.209], p < .001) and extraversion (b = .323, 95% CI [.133, .512], p < .001) were significant predictors, with the effects of agreeableness (b = -.162, 95% CI [-.340, .015], p = .073) and neuroticism (b = .037, 95% CI [-.019, .093], p = .193) non-significant but directionally consistent with the findings of L&K.
In sum, across the 12 studies analyzed, there was substantial evidence for the role of personality in sharing fake news. In both the L&K and LRP studies, there was a robust negative effect of conscientiousness on the likelihood of sharing fake news. That said, in both sets of studies there was substantial heterogeneity (Higgins et al., 2003), which suggests the importance of personality may vary across different experimental designs, necessitating further investigation. Apart from the robust effect that more conscientious respondents shared less fake news, extraversion positively and openness negatively predicted sharing across both groups of studies. The extraversion results replicated other findings (Ahmed & Rasul, 2022) but the openness result appears to be novel, with Ahmed and Rasul (2022) finding that openness related positively to belief in misinformation but was not statistically significantly correlated with sharing.
Conscientiousness as a Moderator of the Positive Effect of Ideology on Fake News Sharing
At the core of their argument, Lawson and Kakkar (2021) proposed that higher levels of trait conscientiousness would weaken the positive relationship between right-wing political ideology and the likelihood of sharing fake news. This hypothesis was supported: the attenuation moderation effect across studies was highly significant. In contrast, Lin et al. (2023) reported that they failed to replicate this effect. However, many features varied across the two manuscripts' designs and approaches that could explain their divergent conclusions. Of note, Lin et al. (2023) i) used a different type of stimuli, primarily stories associated with political themes from 2017, in experiments conducted in December 2021, ii) included a range of different single-item ideology measures, which they used to create two indices post-hoc, and iii) adopted a different set of choices in the coding and analysis of the data. Though the stimuli utilized cannot be changed, we reconcile the two sets of findings by considering the latter two divergences, both by analyzing the full range of ideology measures and by adopting a broader analytical approach. To foreshadow our results, across analytical approaches the LRP studies show substantial support for the moderating role of conscientiousness for many, but not all, measures of ideology.
Table 2
Relationship Between Measures of Ideology and the Likelihood of Sharing Fake News Across Studies
Measure | Studies | r | p | 95% CI | Range | Heterogeneity | I² |
Left-right conservatism | L&K | .177 | <.001 | [.132, .222] | [.078, .266] | Q(6) = 223, p < .001 | 96.6% |
Binary Republican identification | L&K | .097 | <.001 | [.055, .139] | [.024, .201] | Q(6) = 138, p < .001 | 96.0% |
Left-right conservatism | LRP | .306 | <.001 | [.283, .329] | – | – | – |
Binary Republican identification | LRP | .072 | .022 | [.011, .133] | [-.004, .175] | Q(4) = 116, p < .001 | 96.5% |
Continuous Republican identification | LRP | .069 | .149 | [-.025, .162] | [-.064, .228] | Q(4) = 270, p < .001 | 98.5% |
Warmth to Republicans | LRP | .171 | <.001 | [.100, .242] | [.100, .291] | Q(4) = 155, p < .001 | 97.4% |
Warmth to Democrats | LRP | -.016 | .761 | [-.122, .089] | [-.211, .113] | Q(4) = 346, p < .001 | 98.8% |
Social conservatism | LRP | .108 | .058 | [-.004, .219] | [-.030, .301] | Q(4) = 384, p < .001 | 98.9% |
Economic conservatism | LRP | .070 | .117 | [-.018, .158] | [-.048, .218] | Q(4) = 238, p < .001 | 98.3% |
Belief in a God | LRP | .123 | <.001 | [.071, .176] | [.070, .222] | Q(4) = 84.9, p < .001 | 95.3% |
Trump 2016 | LRP | .096 | .020 | [.015, .178] | [.012, .247] | Q(4) = 204, p < .001 | 98.0% |
Trump 2020 | LRP | .074 | .161 | [-.030, .178] | [-.078, .246] | Q(4) = 333, p < .001 | 98.8% |
Ideology Measures and Their Relationship With Sharing
As highlighted in the introduction, for the hypothesized attenuation interaction to emerge, it is first necessary that there be a positive main effect of the ideology variable included in the interaction. As such, we first meta-analyzed the zero-order correlations between each of the possible ideology measures and the likelihood of sharing fake news in both the L&K and LRP studies using a random effects model. Among the other papers measuring conscientiousness summarized by Calvillo et al. (2024), there were no datasets that were both publicly available and contained ideology measures. We note that we use the term 'ideology' broadly here—encompassing aspects of political ideology, religious ideology, and political partisanship. The L&K studies contain two possible measures of ideology—left-right political ideology (Graham et al., 2009) and binary Republican identification. Both displayed significant meta-analytic correlations with sharing fake news (Table 2). The LRP studies contained eleven items capturing different variables across studies, as well as left-right political ideology in just one study, where it displayed a significant positive correlation. The wording of and possible responses to these eleven items are included in Appendix A. Of the eleven items, nine appeared to constitute ideology measures, with risk tolerance and trust also captured by a single item each. Of these nine ideology measures featured throughout the LRP studies, only four displayed significant meta-analytic correlations with the likelihood of sharing—binary Republican identification, warmth to Republicans, belief in a God, and support for Donald Trump in the 2016 election. Notably, risk tolerance also displayed a significant meta-analytic correlation with the likelihood of sharing fake news; refer to the Supplementary Information for further details.
The results from the LRP studies were somewhat surprising. For example, given the vast emphasis on partisanship in the misinformation literature (e.g., Osmundsen et al., 2021), it was surprising that continuous Republican identification and support for Donald Trump in 2020 did not display significant effects, whereas the effects appeared to be more consistent for aspects of ideology that are not frequently discussed in the literature, such as belief in a God. Another notable point emerges: For all measures in both sets of studies, there was significant heterogeneity across studies (Table 2). Further work is required to unpack why ideology had different relationships with sharing fake news across studies. If this was driven by aspects of the designs (e.g., changing stimuli, sample populations), then these factors must be subsumed into a theory of the overarching role of ideology.
In sum, one remarkable takeaway from Table 2 is the weakness of the correlations between most aspects of ideology and sharing fake news in the LRP studies. This could suggest that certain facets of ideology may have become less important for sharing fake news, which in turn has implications for other hypotheses, such as the moderation by conscientiousness hypothesis (Lawson & Kakkar, 2021) and whether accuracy prompts are effective across the range of political ideology (Martel et al., 2024). Alternatively, it is possible that these data exhibit non-standard characteristics, a point we address in the following section. Overall, from the nine ideology items included in the LRP studies, four emerge as candidates whose positive effects might be attenuated by conscientiousness. We next turn to considering the possibility of constructing further ideology measures from indices of these constituent items for the broadest possible analysis.
Testing More Reliable Measures of Ideology
Lin et al. (2023) analyzed three primary measures of ideology: continuous Republican identification and two indices that were constructed post-hoc. These indices were a general measure of conservatism, computed by averaging respondents' social and economic conservatism, and a general measure of warmth, computed by averaging respondents' warmth to Republicans and warmth to Democrats, with the latter item reverse coded (see Lin et al.'s (2023) online supplement). Across all observations combined, the former had an internal reliability of .86, which is acceptable, whereas the latter had a reliability of .52, falling beneath traditional standards (Cronbach, 1951; Nunnally & Bernstein, 1994). We next tested whether these variables showed significant zero-order meta-analytic correlations with sharing fake news using a random effects model. Though general warmth did (r = .117, 95% CI [.016, .217], p = .023), general conservatism did not (r = .095, 95% CI [-.011, .202], p = .080), nor did continuous Republican identification, as mentioned previously (Table 2). As such, none of the measures included by Lin et al. (2023) were both internally reliable and significantly related to the likelihood of sharing fake news. Despite this, we continue to consider general warmth as a measure of ideology, due to its demonstrated relationship with sharing behavior and its relevance to the analysis conducted by Lin et al. (2023), even though it exhibits inconsistent effects.
In addition to the indices computed in Lin et al. (2023), given the limited reliability of any individual item, a natural step was to conduct factor analysis on the eleven items included across the LRP studies to extract any ideology factors. We estimated an exploratory factor analysis model with a single factor (for further details, refer to the Supplementary Information). All individual measures loaded positively onto this factor except 'warmth to Democrats,' which had a negative weight, and 'trust' and 'risk,' which did not load onto the factor, supporting the idea that these latter two items should not be considered measures of ideology. The alpha of the remaining nine items was .91, indicating acceptable reliability. We then computed this factor in two ways: i) using the weights extracted from the factor analysis and ii) using equal weighting, as one might do with scale items after verifying their factor structure. Both the factor-weighted (r = .112, 95% CI [.041, .220], p = .041) and simple-weighted (r = .117, 95% CI [.009, .225], p = .034) general ideology measures displayed significant meta-analytic correlations with the likelihood of sharing using a random effects model.
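A minimal sketch of this step follows (with a placeholder item matrix and placeholder item positions; for simplicity, scores are computed as weighted sums rather than model-based factor scores):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# items: hypothetical (n_participants, 11) array of z-scored responses
# to the eleven raw items.
fa = FactorAnalysis(n_components=1).fit(items)
loadings = fa.components_.ravel()
# Inspect the loadings: 'warmth to Democrats' loads negatively, while
# 'trust' and 'risk' load near zero and are dropped.

keep_idx = [0, 1, 2, 3, 4, 5, 6, 7, 8]       # placeholder indices of the nine
ideology_items = items[:, keep_idx].copy()    # retained ideology items
ideology_items[:, 8] *= -1                    # reverse code 'warmth to Democrats'
                                              # (placeholder position)
ideology_loadings = np.abs(loadings[keep_idx])

factor_weighted = ideology_items @ ideology_loadings   # i) factor weighting
simple_weighted = ideology_items.mean(axis=1)          # ii) equal weighting

# Cronbach's alpha for the nine retained items
k = ideology_items.shape[1]
alpha = k / (k - 1) * (
    1 - ideology_items.var(axis=0, ddof=1).sum()
    / ideology_items.sum(axis=1).var(ddof=1)
)
```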
In sum, from the raw ideology items included in the LRP studies, the indices computed by Lin et al. (2023), and factor analysis of the nine constituent ideology items, seven possible ideology measures emerged that were appropriate for testing whether conscientiousness attenuated the positive effect of measures of ideology. By testing the hypothesis across all seven, this broader analysis responds to a shortcoming highlighted in the original Lawson and Kakkar (2021, pp. 20-21) manuscript, in that it presents a more comprehensive analysis of the role of conscientiousness in sharing fake news across measures of ideology.
Specification Curve Analysis of the Moderation by Conscientiousness Hypothesis
To present the most comprehensive view of the data, we conducted specification curve analysis using multiple analytical approaches as well as multiple measures of ideology. Specifically, we estimated the following models to predict whether respondents shared each fake news story: i) single-level logistic regression models that cluster standard errors only by participant, ii) single-level logistic regression models that cluster standard errors by both study and participant, iii) single-level probit regression models that cluster standard errors by both study and participant, iv) multilevel logistic regression models with random intercepts nested within study and participant, and v) multilevel logistic regression models with random intercepts and random slopes for conscientiousness, ideology, and their two-way interaction nested within study. In the next section we also test the moderation hypothesis using a non-linear model, but we separate this analysis from the specification curve because it yields different coefficients and statistics that are not directly comparable.
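Concretely, the estimation grid can be expressed as a loop over measures and model types; the sketch below (placeholder column names, with only model i spelled out) retains the interaction coefficient and p-value from every fit:

```python
import pandas as pd
import statsmodels.formula.api as smf

IDEOLOGY_MEASURES = [            # placeholder column names for the 7 measures
    "trump_2016", "warmth_republicans", "god_belief", "republican_binary",
    "general_warmth", "ideology_factor_weighted", "ideology_simple_weighted",
]

def fit_interaction(df, ideology, spec="logit_cluster_participant"):
    """Model i: single-level logit with SEs clustered by participant.
    Models ii-v (two-way clustering, probit, multilevel) would be added here."""
    if spec != "logit_cluster_participant":
        raise NotImplementedError(spec)
    return smf.logit(f"shared ~ {ideology} * conscientiousness", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["participant_id"]}, disp=False
    )

rows = []
for ideology in IDEOLOGY_MEASURES:
    fit = fit_interaction(fake_long, ideology)
    term = f"{ideology}:conscientiousness"
    rows.append({"ideology": ideology, "b": fit.params[term], "p": fit.pvalues[term]})

curve = pd.DataFrame(rows).sort_values("b")   # one row per specification
```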
We estimated these five possible models with each of the seven possible measures of ideology. Across the 35 estimates, 21 ideology-conscientiousness interactions achieved statistical significance at p < .05, 16 at p < .01, and 15 at p < .001 (Figure 1). For three of the ideology measures, the conclusions did not depend on the analytical approach: support for Trump in 2016 and warmth to Republicans consistently displayed significant negative interaction effects in support of the moderation by conscientiousness hypothesis, and belief in a God never showed an interaction effect. For the other four ideology variables (the two factor-based ideology variables, binary Republican identification, and general warmth), the effects varied by analytical approach. All four displayed significant interaction effects in models ii) and iii), which clustered standard errors within studies and participants; yet when not accounting for differences across studies (i.e., model i, the analytical approach adopted by Lin et al. (2023)), only the simple-weighted factor displayed a significant effect. In the multilevel model with random intercepts and slopes nested within study (model v), both factor-based ideology variables displayed significant interactions; in the random intercept model (model iv), there were no significant interaction effects for these four ideology measures. Finally, we note that the positive effect of risk tolerance was moderated by conscientiousness across all specified models, highlighting an interesting future direction for research into the role of conscientiousness in misinformation sharing (refer to the Supplementary Information for further details).
In sum, we found robust evidence for the moderation by conscientiousness hypothesis for two ideology measures—support for Trump in 2016 and warmth to Republicans—as well as evidence that conscientiousness moderated the relationship between a factor-based ideology measure and sharing in the majority of analytical approaches. There was no evidence that conscientiousness moderated the relationship between belief in a God and sharing, which presents an important boundary condition of the effect, in that it appears to apply to measures of partisanship but not religious ideology. We further found that the effect of risk tolerance on sharing falsehoods was moderated by conscientiousness, highlighting an area for future research in the space.
Figure 1
Results of Specification Curve Analysis.
Non-Linear Models of Interaction Effects
In our specification curve analysis, we tested the moderation by conscientiousness hypothesis across five different models. As is standard in psychology, the relationship between political ideology, conscientiousness, and the likelihood of sharing fake news was modeled linearly, as it was in both Lawson and Kakkar (2021) and Lin et al. (2023). This assumes that these variables—and their two-way interaction—are linearly related to the (log odds of the) likelihood of sharing. However, this assumption is often violated. In response to such violations, some have proposed using non-linear models (e.g., Generalized Additive Models, GAMs) to test and interpret interactions (Simonsohn, 2024). We respond to this call here and highlight the implications of using such models for our results.
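One way to implement this is sketched below using the Python pygam library (R's mgcv would be an equally valid choice; column names are placeholders): smooth main effects plus a tensor-product interaction surface, with a binomial (logistic) link.

```python
import numpy as np
from pygam import LogisticGAM, s, te

X = fake_long[["ideology_factor_weighted", "conscientiousness"]].to_numpy()
y = fake_long["shared"].to_numpy()

# Smooth main effects for each predictor plus a tensor-product interaction term.
gam = LogisticGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
gam.summary()  # effective degrees of freedom and approximate p-values per term

# Predicted sharing probabilities across ideology at low vs. high
# conscientiousness, for prediction plots in the style of Figure 2.
ideology_grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)
predictions = {
    q: gam.predict_proba(
        np.column_stack([ideology_grid, np.full(100, np.quantile(X[:, 1], q))])
    )
    for q in (0.1, 0.9)
}
```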
It was possible to estimate GAMs with a binomial link function for 4/7 of the ideology measures—the two binary measures (Republican identification and support for Donald Trump in 2016) and the belief in a God variable did not have enough unique combinations with conscientiousness to estimate the model. All four of these ideology measures displayed significant interaction effects between conscientiousness and the measure of ideology (ps < .001; refer to the Supplementary Information for further details). As a demonstrative example, we plot the predicted values across the range of the factor-weighted ideology measure at high and low levels of conscientiousness, first using a single-level logistic regression with standard errors clustered by respondent and study, and then using a GAM. With both the logistic regression (b = -.050, 95% CI [-.067, -.033], p < .001) and a GAM (edf = 12.0, χ² = 117.8, p < .001), we find evidence for a significant interaction term (see the prediction plot in Figure 2). Though using a GAM replicated the significant interaction, the interpretation of the overall pattern is somewhat different.
It appears here that the interaction is driven by the fact that conscientiousness plays no role at low levels of the ideology factor but has a large simple effect at moderate to high levels, without the high and low conscientiousness curves clearly diverging further from moderate to high levels. This highlights an additional nuance that is important to consider in misinformation research: If effects are non-linear, modeling them linearly could lead to mischaracterizations that may impede progress. In Figures S2-S5, we compare the nature of the interaction effects for the other four tested variables across linear models and GAMs. There are cases where the assumption of linearity appears to be reasonable, and other cases where there is a divergence across the two models that prompts the need for further investigation and demonstrates the utility of this analytical approach.
Figure 2
Estimating the Interaction Effect Using Logistic Regression Versus GAMs.
Reconciling Divergent Conclusions
Our analyses found substantial agreement in the results across the L&K and LRP studies, yet the published manuscripts made quite different claims. So, what explains these discrepant conclusions between Lawson and Kakkar (2021) and Lin et al. (2023)? In the next section, we consider three factors that differ between the two sets of studies—ideology measures, analytical approaches, and time and news stories—that may explain the opposing takeaways.
Ideology Measures
There are many different ways to operationalize ideology (Martel et al., 2024). As such, it could be the case that different tests of the moderation by conscientiousness hypothesis will produce different results as a consequence of differences in the behavior of ideology. Importantly, both the ideology measures and news stimuli used in a study affect the strength of the relationship between the two and potentially the mechanism for such a relationship. We examine two important dimensions of the choice of ideology measure below.
Critically, given that Lawson and Kakkar (2021) hypothesized an attenuation interaction, a significant positive main effect of ideology is required to test the hypothesis. In Lin et al.'s (2023) analyses, two of the three primary ideology measures did not display significant meta-analytic correlations with sharing. This contributes to the differing conclusions: By focusing on ideology measures with inconsistent relationships with sharing, Lin et al. (2023) reduced the likelihood of detecting attenuation moderation effects. In practice, this is not a binary distinction: The effects of the two ideology measures used by Lin et al. (2023) that were not significant were somewhat close to significance (p = .149, p = .080). The overall takeaway is that a significant moderation effect is more likely to emerge in cases where the main effect of the ideology measure is stronger. In the LRP studies, the only ideology measure that exhibited a similar strength of relationship with sharing as left-right conservatism in the L&K studies was warmth to Republicans, and the effect of this variable on sharing was consistently moderated by conscientiousness. Moreover, as seen in Table 2, all measures of ideology showed heterogeneous effects across studies, which suggests that additional features of experimental designs may influence the expected effects of such ideology measures, necessitating further research to understand the changing and varied role of ideology.
Second, Lawson and Kakkar (2021) theorized that conscientiousness would attenuate the relationship between left-right political ideology and sharing fake news due to the superior diligence and prudence of conscientious individuals, leading to greater responsibility in sharing decisions. Yet it could be the case that some measures of ideology influence sharing via different pathways, and thus, conscientiousness will not affect such relations. For example, we saw that conscientiousness did not statistically significantly moderate the effect of belief in a God on sharing, which could suggest that this facet of ideology influences sharing decisions differently compared to other measures of ideology. In cases where this is true, one would not expect the moderation by conscientiousness hypothesis to apply.
Finally, we note that in the supplementary materials, Lin et al. (2023) reported a broader analysis across most of the other ideology measures but still reached an overall conclusion of failing to find negative interactions across studies. We next describe the reasons that we believe drove this divergent conclusion.
Analytical Approach
As well as the choice of ideology variables, many analytical choices contributed to the discrepant conclusions across research teams. For example, the specification curve analysis of the LRP studies featured here reached a different conclusion from the supplemental analysis in Lin et al. (2023) because we used multiple analytical approaches rather than solely logistic regressions with standard errors clustered only at the participant level. The Lin et al. (2023) analytical approach led to significant interactions for 3/7 of the ideology measures, whereas clustering standard errors by both participant and study increased this to 6/7. Given that in the LRP studies the researchers "varied the study characteristics along several dimensions" (Lin et al., 2023, p. 3280), clustering at the study level appears to be necessary, as it accounts for unobserved variation across studies. If one does not use this level of clustering, large differences in behavior across studies (e.g., those associated with different rates of sharing of different sets of stories) will be absorbed into the error terms. This will reduce statistical power, meaning that equivalent-sized effects will be less likely to be statistically significant, leading to an increased rate of Type II errors.
That said, one could take many different approaches when analyzing the type of repeated measures data frequently seen in misinformation research—for example, one could use multilevel models with random effects rather than single-level models with clustered standard errors. Yet this invites still further degrees of freedom: When estimating multilevel logistic regression models with random intercepts nested within participants and studies, only 2/7 ideology measures displayed significant interaction patterns, compared to 4/7 when estimating models with random intercepts and slopes nested within studies. In sum, the choice between various models that at face value all seem reasonable for analyzing repeated measures data may lead to discrepant conclusions, though, as described, significant interaction effects emerged in all specifications for two of the ideology measures, showing the robustness of the moderation by conscientiousness hypothesis to these choices.
Finally, there are other choices that affect the likelihood of detecting effects. These include approaches to interpretation that tend to favor the null hypothesis (e.g., Bayes factors; see Simonsohn, 2019), which may discount effects that more traditional frequentist methods would detect. Estimating many higher-order terms at once (e.g., 10 three-way interactions, as in Lin et al. (2023)) can also dilute focal effect sizes, reduce degrees of freedom, and risk overfitting. And in samples that are very noisy due to inattention or low-quality data (e.g., in Lin et al.'s (2023) Study 1, only 35% of participants passed all four attention checks, compared to 87-91% in the other studies), larger samples will be required to detect equivalent-sized effects. Researchers need to consider all of these factors when formulating an optimal design.
Time and Stimuli
So far, we have identified how selecting particular ideology variables and analytical methods post-hoc contributed to Lin et al.’s (2023) discrepant conclusions that are not supported by the broader specification curve analysis in Figure 1. In addition to these points about how to treat the data, there are differences in how the data were produced which contribute to the different sets of conclusions.
One aspect that varied across the L&K and LRP studies was time. The L&K Studies 1-6 were conducted between March 2, 2020 and April 21, 2020 (Study S1 was added in September 2020 and Study S2 in April 2021), primarily using stimuli related to the Covid-19 pandemic, except for Study 2. Three of the five LRP studies used the same political stories as L&K Study 2 but were conducted during December 6-13, 2021. The cultural connotations of these stories had changed significantly in the meantime. For example, they featured a reference to Donald Trump being President of the U.S., though in the time between the studies there was a presidential election in which Joe Biden became President, as well as the January 6 Capitol riot. In a starker case, one fake conservative-leaning story in these stimuli reported that Ruth Bader Ginsburg (RBG) had been rushed to hospital. When the LRP studies were conducted, RBG had, in fact, died over a year earlier. Differences in the contextual relevance of stimuli could be one potential explanation for the weaker correlations between ideology measures and sharing in the LRP studies relative to the L&K studies, as it is unclear how ideology should determine one's response to stimuli that are clearly false ex-post. This could also explain why particular facets of ideology (e.g., belief in a God) appeared to be more relevant determinants of sharing falsehoods in the LRP data than more established facets (e.g., continuous Republican party identification). Finally, the correlation between conscientiousness and sharing was also weaker in the LRP studies, which could reflect the idiosyncrasies of using out-of-date stimuli. Weakening the main effects associated with an interaction will reduce the likelihood of detecting the interaction effect, all else equal.
More broadly, the changing political landscape poses a quandary for misinformation research as the field consolidates its findings over time, given that its stimuli and effects are more closely embedded in societal contexts than other, more politically neutral content (e.g., decision-making under uncertainty). Though many of the most well-known studies in the literature were conducted in the period after the 2016 U.S. presidential election (e.g., Allcott & Gentzkow, 2017), the cultural context has undoubtedly changed in many ways since then. So, it could well be that such effects would not replicate today. Nevertheless, this is clearly a different type of non-replication from one based on Type I errors. In any case, as shown above, it is necessary that replications be conceptual rather than literal and aim to use stimuli that have similar connotations in the contemporary cultural context. However, this introduces researcher degrees of freedom, and so we propose best practices for alleviating such concerns at the end of this manuscript.
Fake News Sharing and Fake News Discernment
So far, our analyses have concerned individuals’ propensity to share fake news, but what about real news? The original Lawson and Kakkar (2021) analyses included both real and fake stories, and some researchers have argued that the key effect to consider is fake news discernment (Guay et al., 2023; Lin et al., 2023). This is defined as a significant interaction between a focal variable and the veracity of the news, requiring analysis of the two types of news together. The basic logic goes that for a factor to be relevant specifically to fake news, it must have a stronger effect when news is fake than when it is real, as quantified by a significant interaction with news veracity. This seems reasonable, but further consideration is required before accepting this definition. We investigate the question of what effects are relevant to misinformation research next.
Is an Interaction With News Veracity Necessary to be Studying Fake News?
There is little evidence for a substantial three-way ideology-conscientiousness-veracity interaction (Lawson & Kakkar, 2021; Lin et al., 2023). What is the implication for the moderation by conscientiousness hypothesis? It is first important to note that the number of observations required to detect such an effect with a reasonable degree of power would be enormous, so it is possible that such an effect still exists. For example, detecting a two-way interaction that is half the size of its associated main effect requires 16 times as many observations as detecting the main effect itself (Gelman et al., 2021); we sketch this arithmetic after the list below. Thus, with the weakening main effects of ideology (e.g., Lin et al., 2023) and the effect of interest here being a three-way interaction, detecting it with adequate power would require an astronomical number of observations. Yet, critically, rather than focusing on a three-way interaction as others have proposed (Guay et al., 2023), we consider what dynamics can emerge in the absence of one. Specifically, we consider two key questions in misinformation research:
- Who shares fake news?
- What determines measures of discernment (e.g., the proportion of shared stories that are fake)?
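As a sketch of the arithmetic behind this 16-fold figure (following Gelman et al., 2021, and assuming a balanced design with binary predictors coded ±1/2): the standard error of an interaction coefficient is twice that of a main effect estimated from the same data, and the sample size required for a given level of power scales with the square of the noise-to-signal ratio, SE/β, evaluated at a common sample size. If the interaction is also assumed to be half the size of the main effect, then

$$
\frac{n_{\text{int}}}{n_{\text{main}}}
= \left( \frac{\mathrm{SE}_{\text{int}}/\beta_{\text{int}}}{\mathrm{SE}_{\text{main}}/\beta_{\text{main}}} \right)^{2}
= \left( \frac{2\,\mathrm{SE}_{\text{main}}}{\beta_{\text{main}}/2} \cdot \frac{\beta_{\text{main}}}{\mathrm{SE}_{\text{main}}} \right)^{2}
= (2 \times 2)^{2} = 16.
$$

A three-way interaction compounds this scaling further, which is why detecting one with adequate power requires such large samples.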
Let us consider the first question of who shares fake news. People who actually share fake news are quite rare—1% of individuals account for 80% of fake news exposures (Grinberg et al., 2019), and the majority of social media users have never shared fake news. As such, it is practically important to understand the psychological profiles of those who are at risk of sharing fake news to inform the question of where interventions should be targeted.
In the data from the L&K studies, the three-way interaction between political ideology, conscientiousness, and news veracity was not close to being significant (Lawson & Kakkar, 2021). Yet low conscientiousness conservatives (LCCs) were responsible for 34.9% of the instances in which fake news was shared, despite being only 16.8% of respondents.[1] Every other group (e.g., high conscientiousness conservatives, HCCs) was underrepresented in terms of fake news shared relative to their proportion of the population. In the LRP studies, 34.5% of the instances in which fake news was shared involved LCCs, despite LCCs constituting only 23.4% of respondents. In other words, when fake news was shared, it tended to be by LCC respondents. This is not coincidental to these specific samples: A broader simulation analysis further corroborated that the number of LCCs in a sample was a key determinant of how much fake news was shared (refer to the Supplementary Information for further details). In sum, despite the absence of a three-way interaction with veracity, the combination of ideology and conscientiousness was a key element of the psychological profile of those sharing fake news.
Despite the clear relevance of conscientiousness and political ideology in determining who shares fake news, one possible critique is the following: If this psychological profile also applies to those who share real news, reducing this group’s sharing could have a negative impact on the overall quality of the media ecosystem. There are at least two pushbacks to this critique: i) having identified who shares fake news, presumably interventions should then aim to reduce their sharing of specifically fake news rather than all news, and ii) when considering the entire news ecosystem, the effect of one variable (e.g., conscientiousness) is also determined by the effects of other variables that it interacts with (in this case, ideology and news veracity). We expand on what we mean by this next.
We will use an example to demonstrate why a three-way interaction with news veracity is not required for a focal variable to be important specifically for misinformation research. Suppose that we knew the data generating process that determined the likelihood of sharing real and fake news, and that it could be modeled by a logistic regression model involving effects of political ideology (as captured by the ideology factor from the LRP studies), conscientiousness, and news veracity, and a two-way ideology-conscientiousness interaction, but no three-way ideology-conscientiousness-veracity interaction. We plot the resulting predicted likelihoods of sharing in log odds space in Figure 3, Panel A. These are the actual empirical relationships from the LRP studies, but as we are assuming that the data generating process is known, they do not have error bars.
As can be seen in Figure 3, Panel A, higher ideology scores are associated with a greater likelihood of sharing in all cases, but especially so for low conscientiousness respondents. This is true for both real and fake news. The absence of a three-way interaction with news veracity translates to the difference between the slopes of the high and low conscientiousness predictions across the range of ideology being the same in log odds space for both real and fake news. The logit link function then translates these log odds non-linearly onto a probability scale, though if effects are modestly sized the resulting function is typically close to linear. The critical point to note is that, in the data, fake stories are shared at a lower rate overall. This means that an equivalently large effect for fake and real stories—when fake stories are shared more rarely—translates to a proportionally much more important effect on fake news sharing. For a simple example, suppose people shared 10% of fake stories and 30% of real stories. Reducing sharing of both types of news by 5 percentage points would change the proportion of shared stories that are fake from 10/40 = .25 to 5/30 = .17—a 33% relative reduction.
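The same arithmetic in code form (assuming equal numbers of real and fake stories encountered):

```python
# An equal 5-point drop in both sharing rates shrinks the fake share of all
# shared stories from 25% to about 17%.
fake_rate, real_rate = 0.10, 0.30
before = fake_rate / (fake_rate + real_rate)                            # 0.25
after = (fake_rate - 0.05) / ((fake_rate - 0.05) + (real_rate - 0.05))  # ~0.167
relative_reduction = 1 - after / before                                 # ~0.33
```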
We demonstrate this in Figure 3, Panels B and C: conscientiousness and ideology clearly interactively affect key ecosystem-level outcomes—such as the overall proportion of shared stories that are fake or the difference in the likelihood of sharing real and fake stories—despite the absence of any three-way interaction with news veracity. In short, no interaction with veracity is required for an effect to be vital to misinformation research or the media ecosystem (refer to the Supplementary Information for the results of a further simulation). Given that fake news is in fact shared much less commonly than real news, equivalent-sized absolute effects for real and fake news will be relatively more important for fake news in the field. In the next section, we discuss this result and its implications for misinformation research.
Figure 3
Demonstrating the Effect of Ideology and Conscientiousness on the Media Ecosystem in the Absence of an Interaction with Veracity.
Holistic Understanding of News Veracity Interaction Effects in Misinformation Research
Recently, Guay et al. (2023) argued that interventions should only be considered effective if they decrease belief in or sharing of false news more than they decrease belief in or sharing of true news. We argue that whether this is true or not depends on the outcome of interest. As was shown in the previous section, when this condition is not met, variables such as conscientiousness can still play key roles in determining important ecosystem level outcomes such as the proportion of shared stories that are fake.
In general, we propose that the impact of an effect cannot be quantified solely by a single coefficient (e.g., the interaction between a focal variable and news veracity). Rather, it is necessary to consider the full pattern of effects to understand the implications of a variable. This is even more relevant when considering higher-order effects (e.g., the ideology-conscientiousness interaction pattern documented by Lawson and Kakkar, 2021). As such, research that focuses on a single coefficient as indicative of an effect may reach erroneous conclusions regarding the relevance of variables to misinformation specifically.
To alleviate this concern, in the previous section we demonstrated one possible approach, which we summarize here in the following steps: i) estimate the appropriate statistical model, noting which effects are statistically significant and which are not, ii) reduce the model to the statistically significant relationships and assume that this is the data generating process, iii) simulate regression predictions over the range of values of the independent variables observed in the data, and iv) use these predictions to visualize key ecosystem-level outcomes, such as the proportion of shared stories that are fake. In Figure 3, we provide an example of how to use this approach to understand how the ideology-conscientiousness interaction impacts outcomes of interest, but the approach can be applied more broadly. We argue that simulations and visualizations are a key complement to statistical analyses in representing effects with fidelity. The focus on single coefficients could be one reason why meta-analyses have struggled to find agreement on important topics, such as the relationship between ideology and accuracy prompts (Pennycook & Rand, 2022; Rathje et al., 2022), as other relevant effects (e.g., the effect of ideology, the effect of news veracity on the likelihood of sharing, the efficacy of accuracy prompts in a particular design) may be playing roles that are difficult to comprehend without attempts at simulation and visualization. We conclude by discussing some general best practices to avoid potential confusions in misinformation research.
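Before turning to the discussion, the following is a minimal sketch of steps i)-iv) in code (placeholder names; as in Figure 3, the reduced model omits the non-significant three-way interaction and is then treated as the data generating process):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

# i)-ii) Reduced model: main effects, the two-way interaction, and a veracity
#        main effect, but no three-way interaction with veracity. 'long'
#        contains both real and fake stories, as in the Methods sketch.
fit = smf.logit(
    "shared ~ ideology * conscientiousness + fake", data=long
).fit(disp=False)

# iii) Simulated predictions over the observed range of the predictors.
grid = pd.DataFrame(
    [
        {"ideology": i, "conscientiousness": c, "fake": f}
        for i in np.linspace(long["ideology"].min(), long["ideology"].max(), 50)
        for c in long["conscientiousness"].quantile([0.1, 0.9])
        for f in (0, 1)
    ]
)
grid["p_share"] = fit.predict(grid)

# iv) Ecosystem-level outcome: the proportion of shared stories that are fake,
#     assuming equal exposure to real and fake stories.
by_cell = grid.pivot_table(index=["ideology", "conscientiousness"],
                           columns="fake", values="p_share").reset_index()
by_cell["prop_fake"] = by_cell[1] / (by_cell[0] + by_cell[1])

for c, sub in by_cell.groupby("conscientiousness"):
    plt.plot(sub["ideology"], sub["prop_fake"],
             label=f"conscientiousness = {c:.2f}")
plt.xlabel("Ideology")
plt.ylabel("Proportion of shared stories that are fake")
plt.legend()
plt.show()
```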
General Discussion
In this article, we sought to clarify the role of conscientiousness in the spread of misinformation while highlighting key elements of research design and analysis that can help reconcile divergent conclusions in misinformation research. Across studies, more conscientious respondents were less likely to share fake news, and conscientiousness weakened the relationship between some, but not all, measures of ideology and sharing falsehoods. This latter point still requires further investigation: For what types of stimuli and measures of ideology is conscientiousness more or less important? What are the specific mechanisms by which it moderates these relationships? In the present research, we uncovered one further nuance—conscientiousness’ moderating effect appears to be limited to political ideology and partisanship and does not extend to religious beliefs—but understanding each type of belief’s impact across situations remains a question for further research. Notably, risk tolerance also emerged as a significant positive predictor of sharing fake news, and this effect, too, was moderated by conscientiousness. Further, we replicated the positive effect of extraversion on sharing fake news (Ahmed & Rasul, 2022) and observed a robust negative effect of openness on misinformation sharing.
In the process of uncovering these results, which advance our understanding of the role of personality in misinformation research, several important factors emerged that explain the ongoing debate regarding the moderating role of conscientiousness (Lawson & Kakkar, 2021; Lin et al., 2023). Most notably, we highlight the significance of: i) selecting a measure of ideology that positively predicts sharing, as required to test the moderation-by-conscientiousness hypothesis, ii) the implications of different analytical choices and the need to justify them, iii) the role of stimuli and the need to ensure that news content is relevant in the contemporary socio-political context, and iv) the need to appropriately define the outcomes of interest. The first three points are directly relevant to discerning the strength of evidence for the moderation-by-conscientiousness hypothesis, whereas the fourth has broader implications, on which we comment further below.
How strong is the support for the hypothesis that conscientiousness moderates the positive effect of ideology on the likelihood of sharing fake news? Lawson and Kakkar (2021) found consistent evidence that conscientiousness moderated the effect of left-right political ideology on sharing; these studies used a single primary measure of ideology and a single analytical approach. The data from Lin et al. (2023)—which cover a much broader array of facets of ideology and preferences—show support for this effect across all five analytical approaches for two of the seven viable ideology measures, with significant moderation effects also emerging for risk tolerance and for measures of political partisanship. As such, this evidence substantially strengthens the overall support for the presence of an effect. That said, some boundary conditions also emerge. In particular, the hypothesis did not apply to the facet of religious ideology, and for some measures of ideology—such as a factor ideology variable extracted from the individual items in the LRP studies—the conclusions depended on the analytical approach. Specifically, modeling responses using single-level logistic regressions that did not cluster standard errors within study, or multilevel logistic regressions with random intercepts nested in participants and studies, tended to yield lower support for the hypothesis, with some marginally significant or non-significant effects (see the sketch below for an illustration of these modeling choices). In addition to these design choices, the differences in overall support between the L&K and LRP studies may in part be explained by the weaker predictive power of most measures of ideology in the LRP studies: robust interaction effects emerged for the only LRP ideology measure—warmth to Republicans—whose effect was of similar strength to left-right ideology in the L&K studies. Regardless, further work is required to fully understand the interactive roles of stimuli, ideology, and personality in determining respondents’ willingness to share falsehoods.
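To illustrate how such modeling choices can be implemented, here is a schematic sketch on synthetic data. All column names and effect sizes are hypothetical, and we use a population-averaged GEE with within-participant clustering as a tractable stand-in for a participant-level random-intercept logistic model, which in practice is often fit in other software.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per participant x headline
# decision (hypothetical columns, not the re-analyzed study data).
rng = np.random.default_rng(1)
n_pid, n_items = 300, 20
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_pid), n_items),
    "study": np.repeat(rng.integers(0, 5, n_pid), n_items),
    "fake": np.tile(np.arange(n_items) % 2, n_pid),
    "ideology": np.repeat(rng.normal(size=n_pid), n_items),
    "consc": np.repeat(rng.normal(size=n_pid), n_items),
})
lin = -1 - 1.5 * df.fake + 0.5 * df.ideology - 0.4 * df.ideology * df.consc
df["shared"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Choice A: single-level logistic regression, SEs clustered by study.
m_clust = smf.logit("shared ~ fake * ideology * consc", data=df).fit(
    disp=False, cov_type="cluster", cov_kwds={"groups": df["study"]})

# Choice B: population-averaged GEE with exchangeable correlation
# within participants, standing in for a random-intercept model.
m_gee = smf.gee("shared ~ fake * ideology * consc", groups="pid",
                data=df, family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable()).fit()

# The focal three-way term can differ in significance across choices.
for name, m in [("clustered", m_clust), ("GEE", m_gee)]:
    print(name, round(m.params["fake:ideology:consc"], 3),
          round(m.pvalues["fake:ideology:consc"], 3))
```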
In this research, we emphasize how seemingly arbitrary and sometimes unreported choices in experimental design and analysis can contribute to discrepant conclusions in the study of misinformation. Given the vital importance of reaching consensus—not only in this debate but also in others—we outline a framework containing guidance on four broad areas of research design and analysis in Table 3. We provide this guidance to improve the replicability and cumulative value of misinformation research, focusing on how to sample stimuli, ensure construct validity, justify and vary analytical approaches, and clearly define effects of interest. By moving towards principled common practices on these four issues, we hope that disagreements can be resolved more efficiently and unequivocally, allowing the field to provide practical support to policymakers and society as quickly as possible.
Table 3
Best Practices for Misinformation Research.
Topic | Explanation
1. Systematically sample stimuli | Stimuli in misinformation research—news articles with near-infinite dimensions—are far more heterogeneous than in many other research areas. Though pre-testing stimuli for certain features is common, many other possible confounds remain, particularly concerning the role of ideology, which may interact with specific content features. Recent research (Simonsohn et al., 2024) has proposed a systematic process for generating and studying individual stimuli that can identify and test moderators. The widespread adoption of such frameworks could add transparency to misinformation research and provide a greater understanding of the complex relationships at play. More broadly, applying principles from psychometrics is key to ensuring that misinformation research adopts valid measures capable of producing cumulative knowledge (e.g., see Maertens et al., 2024).
2. Ensure construct validity | Many complicated hypotheses are tested in misinformation research, yet comparatively little emphasis is placed on the construct validity of independent and dependent variables. We recommend using multiple items and factor analysis to ensure that independent variables coherently represent latent constructs, and clearer definitions of stimuli so that effects are represented accurately (see the first sketch following this table for a simple reliability check). For example, when testing whether respondents share Republican and Democratic content, is a difference across the two expected? Why, or why not? If a research team cannot answer such a question, it is unclear what is being tested when models predicting sharing decisions across different types of content are estimated. Moreover, complete patterns of relationships must be examined to understand effects. This could involve running pilot studies to test whether variables have the expected zero-order relationship with outcome variables before further developing a research design.
3. Justify and vary analytical approach | Simple analytical choices, such as whether to use clustered standard errors or hierarchical models and how to define the groups for each approach, can lead to large divergences in researchers’ conclusions. These choices should therefore have substantial theoretical support and should not be justified solely by their pre-registration. Where conclusions diverge across teams or clear theoretical rationales are absent, multiverse analyses (Steegen et al., 2016) can represent effects more broadly, though not all approaches may be equally appropriate for testing a given hypothesis. We propose replicating key models across multiple specifications and summarizing the entire set of outcomes (see the second sketch following this table).
4. Clearly define effect of interest | Testing a hypothesis requires a precise definition of the estimand (Lundberg et al., 2021): What quantity are you trying to estimate? In misinformation research, this might require reference to ecosystem-level outcomes, such as the proportion of shared stories that are fake. The effects of interest must be clearly articulated, and their implications simulated and visualized rather than merely referenced via individual regression coefficients. This can reduce researcher degrees of freedom in the statement of hypotheses and the characterization of effects, and ensure that data are represented and communicated with high fidelity.
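In support of point 2, a simple first step is to verify that a multi-item measure coheres before using it as a predictor. Below is a minimal sketch computing Cronbach’s alpha (Cronbach, 1951) on simulated ratings; the scale length, sample size, and loadings are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical 4-item ideology scale for 200 simulated respondents:
# each item = shared latent trait + item-specific noise.
rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=0.8, size=(200, 4))
print(f"alpha = {cronbach_alpha(items):.2f}")  # ~0.85 with these loadings
```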
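In support of point 3, the following sketch runs a toy multiverse over three analytical decisions—ideology measure, clustering unit, and an exclusion rule—and reports the focal interaction coefficient from every specification. All column names and the data-generating process are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data with two hypothetical ideology measures and an
# attention-check flag (illustration only, not the study data).
rng = np.random.default_rng(2)
n_pid, n_items = 400, 10
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_pid), n_items),
    "study": np.repeat(rng.integers(0, 6, n_pid), n_items),
    "fake": np.tile(np.arange(n_items) % 2, n_pid),
    "ideology_lr": np.repeat(rng.normal(size=n_pid), n_items),
    "warmth_rep": np.repeat(rng.normal(size=n_pid), n_items),
    "consc": np.repeat(rng.normal(size=n_pid), n_items),
    "passed_check": np.repeat(rng.random(n_pid) > 0.1, n_items),
})
lin = -1 - 1.5 * df.fake + 0.5 * df.warmth_rep - 0.4 * df.warmth_rep * df.consc
df["shared"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

def run_spec(ideology_var, cluster_by, drop_failed):
    """Fit one specification; return the focal coefficient and p-value."""
    data = df[df.passed_check] if drop_failed else df
    fit = smf.logit(f"shared ~ fake * {ideology_var} * consc",
                    data=data).fit(disp=False, cov_type="cluster",
                                   cov_kwds={"groups": data[cluster_by]})
    term = f"{ideology_var}:consc"
    return fit.params[term], fit.pvalues[term]

specs = [(iv, cl, dr) for iv in ("ideology_lr", "warmth_rep")
         for cl in ("study", "pid") for dr in (False, True)]
multiverse = pd.DataFrame([run_spec(*s) for s in specs],
                          columns=["coef", "p"],
                          index=pd.MultiIndex.from_tuples(
                              specs, names=["ideology", "cluster", "excl"]))
print(multiverse)  # report the full set, not a single specification
```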
Notably, our guidelines primarily apply to experimental studies of misinformation. However, many stakeholders are interested in understanding the spread of misinformation in the real world and in effectively deploying interventions that curtail it. The critical question is whether experimental results should be interpreted as applicable to the actual sharing of fake news. The answer is complicated. If one is interested in the psychological processes underlying fake news sharing on social media, experimental results offer valuable insight into social media users’ psychology. Yet in laboratory studies, the outcome of interest is typically sharing intentions rather than actual sharing. Given how skewed the actual sharing of fake news is (Grinberg et al., 2019), one should expect that most experimental participants have never actually shared fake news. For some hypotheses this will not matter, as there is a clear mapping between sharing intentions and sharing: those with the very highest sharing intentions will be the ones who actually share falsehoods.
In other designs, the implications are less clear. For example, Ghezae et al. (2024) asked experimental participants whether they expected to receive greater reputational rewards from sharing fake news relative to real news, finding that they did not. If the majority of these participants had never actually shared fake news, this result is unsurprising—those who do not share fake news are unlikely to perceive high social returns to sharing—yet the results in a sample of people who actually share fake news might look quite different. One potential solution is to require that misinformation papers include a paragraph explicitly stating the hypothesized relationship between their experimental results and behavior in the field, along with evidence for this connection. Regardless, we propose caution in generalizing experimental effects to real-world settings, though this caveat is not unique to misinformation research.
Finally, in addition to the empirical verification of the role of conscientiousness in sharing fake news and these broader methodological contributions, we offer a theoretical contribution to the study of media ecosystems. Recent research has highlighted the importance of ecosystem-level outcomes, such as the difference between the likelihood of sharing real and fake stories and the percentage of fake stories in the overall ecosystem (Guay et al., 2023). However, these outcomes have not always been successfully mapped onto analyses. We demonstrate that understanding the impact of a variable on these critical outcomes requires examining comprehensive patterns of effects rather than isolated coefficients. Using this approach, we highlight the clear relevance of conscientiousness to misinformation research. We hope that this clarification can provoke a re-interpretation of extant effects and a better synthesis of misinformation research to date. This insight is particularly important in evaluating the impact of misinformation interventions—a keenly debated topic (Gawronski et al., 2024; Guay et al., 2023)—with our central proposition being that simulations and visualizations are a critical complement to statistical models to ensure that effects are well understood.
We conclude by noting that misinformation research has, so far, a clear paradigm for how experiments are administered: Participants are asked to indicate whether or not they would share a series of stories. The field does not yet have a correspondingly consistent analytical paradigm. For example, some have found that re-analyzing data using signal detection theory (Batailler et al., 2021; Gawronski, 2021; Gawronski et al., 2023, 2024) completely changed the conclusions reached in particular studies, though this methodology relies on news being definable as real or false, which is not always possible. Establishing a unified framework for valid inference is essential to resolving debates such as how ideology interacts with accuracy prompts (Martel et al., 2024) and the relationship between conscientiousness and fake news sharing (Lawson & Kakkar, 2021; Lin et al., 2023). Without such a framework, unobserved heterogeneity in research approaches may prevent evidence from converging, hindering the field’s potential to enhance scientific understanding and societal well-being. By highlighting some of the factors and choices that drive such divergences, we hope to contribute to this push towards reconciliation.
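For readers unfamiliar with the signal detection decomposition referenced above, the sketch below computes two standard indices from a respondent’s sharing decisions under one common parameterization; the correction constant and example counts are illustrative assumptions.

```python
from statistics import NormalDist

def sdt_indices(shared_real, n_real, shared_fake, n_fake):
    """One common SDT parameterization of sharing decisions: sharing
    a real story counts as a hit, sharing a fake story as a false
    alarm; d' captures truth discernment and c the overall
    willingness to share regardless of veracity."""
    z = NormalDist().inv_cdf
    # Log-linear correction keeps z finite at proportions of 0 or 1.
    hit = (shared_real + 0.5) / (n_real + 1)
    fa = (shared_fake + 0.5) / (n_fake + 1)
    d_prime = z(hit) - z(fa)
    c = -(z(hit) + z(fa)) / 2
    return d_prime, c

# Hypothetical respondent: shared 12 of 20 real and 4 of 20 fake stories.
d_prime, c = sdt_indices(12, 20, 4, 20)
print(f"d' = {d_prime:.2f}, c = {c:.2f}")
```

Because two respondents with the same raw difference in sharing rates can differ in both indices, such decompositions can support different conclusions than analyses of raw difference scores, which is precisely the sensitivity to analytical paradigm noted above.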
Endnotes
[1] LCCs were defined as those with below-median conscientiousness and above-median left-right political ideology in the L&K studies; for the LRP studies, we median split on the ideology factor. We use median splits for simplicity of exposition; the full statistical analyses of conscientiousness’ role are presented in the previous sections.
Conflicts of Interest
The authors declare no competing interests.
Author Contributions
M.A.L. analyzed the data and wrote the first draft. M.A.L. and H.K. provided critical revisions.
Reproducibility Statement
Two sets of published studies are re-analyzed: those from Lawson and Kakkar (L&K, 2021) and those from Lin et al. (LRP, 2023). These studies received IRB approval. The data and materials from the L&K studies are available at https://osf.io/ahdsf/, and the data and materials from the LRP studies are available at https://osf.io/972jm/. The code for the featured re-analyses is available at https://osf.io/4mnhs/.
References
Ahmed, S., & Rasul, M. E. (2022). Social Media News Use and COVID-19 Misinformation Engagement: Survey Study. Journal of Medical Internet Research, 24(9), e38944. https://doi.org/10.2196/38944
Ahmed, S., & Tan, H. W. (2022). Personality and perspicacity: Role of personality traits and cognitive ability in political misinformation discernment and sharing behavior. Personality and Individual Differences, 196, 111747. https://doi.org/10.1016/j.paid.2022.111747
Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211
Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General. https://doi.org/10.1037/xge0000729
Bail, C. (2022). Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing. Princeton University Press. https://doi.org/10.1515/9780691246499
Batailler, C., Brannon, S. M., Teas, P. E., & Gawronski, B. (2021). A Signal Detection Approach to Understanding the Identification of Fake News. Perspectives on Psychological Science. Advance online publication. https://doi.org/10.1177/1745691620986135
Breznau, N., Rinke, E. M., Wuttke, A., Nguyen, H. H. V., Adem, M., Adriaans, J., Alvarez-Benjumea, A., Andersen, H. K., Auer, D., Azevedo, F., Bahnsen, O., Balzer, D., Bauer, G., Bauer, P. C., Baumann, M., Baute, S., Benoit, V., Bernauer, J., Berning, C., … Żółtak, T. (2022). Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty. Proceedings of the National Academy of Sciences, 119(44), e2203150119. https://doi.org/10.1073/pnas.2203150119
Buchanan, T. (2023). Trust, personality, and belief as determinants of the organic reach of political disinformation on social media. The Social Science Journal. Advance online publication. https://doi.org/10.1080/03623319.2021.1975085
Calvillo, D. P., León, A., & Rutchick, A. M. (2024). Personality and misinformation. Current Opinion in Psychology, 55, 101752. https://doi.org/10.1016/j.copsyc.2023.101752
Chen, C., & Shu, K. (2023). Combating Misinformation in the Age of LLMs: Opportunities and Challenges (arXiv:2311.05656). arXiv. https://doi.org/10.48550/arXiv.2311.05656
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334. https://doi.org/10.1007/BF02310555
Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), Article 1. https://doi.org/10.1038/s44159-021-00006-y
Gawronski, B. (2021). Partisan bias in the identification of fake news. Trends in Cognitive Sciences, 25(9), 723–724. https://doi.org/10.1016/j.tics.2021.05.001
Gawronski, B., Nahon, L. S., & Ng, N. L. (2024). A signal-detection framework for misinformation interventions. Nature Human Behaviour. Advance online publication. https://doi.org/10.1038/s41562-024-02021-4
Gawronski, B., Ng, N. L., & Luke, D. M. (2023). Truth sensitivity and partisan bias in responses to misinformation. Journal of Experimental Psychology: General, 152(8), 2205–2236. https://doi.org/10.1037/xge0001381
Gelman, A., Hill, J., & Vehtari, A. (2021). Regression and Other Stories. Cambridge University Press.
Ghezae, I., Jordan, J. J., Gainsburg, I. B., Mosleh, M., Pennycook, G., Willer, R., & Rand, D. G. (2024). Partisans neither expect nor receive reputational rewards for sharing falsehoods over truth online. PNAS Nexus, 3(8), pgae287. https://doi.org/10.1093/pnasnexus/pgae287
Goreis, A., & Voracek, M. (2019). A Systematic Review and Meta-Analysis of Psychological Research on Conspiracy Beliefs: Field Characteristics, Measurement Instruments, and Associations With Personality Traits. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.00205
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046. https://doi.org/10.1037/a0015141
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science, 363(6425), 374–378. https://doi.org/10.1126/science.aau2706
Guay, B., Berinsky, A. J., Pennycook, G., & Rand, D. (2023). How to think about whether misinformation interventions work. Nature Human Behaviour, 7(8), 1231–1233. https://doi.org/10.1038/s41562-023-01667-w
Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1), eaau4586. https://doi.org/10.1126/sciadv.aau4586
Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327(7414), 557–560. https://doi.org/10.1136/bmj.327.7414.557
Jost, J. T., van der Linden, S., Panagopoulos, C., & Hardin, C. D. (2018). Ideological asymmetries in conformity, desire for shared reality, and the spread of misinformation. Current Opinion in Psychology, 23, 77–83. https://doi.org/10.1016/j.copsyc.2018.01.003
Kennedy, R., Clifford, S., Burleigh, T., Waggoner, P. D., Jewell, R., & Winter, N. J. G. (2020). The shape of and solutions to the MTurk quality crisis. Political Science Research and Methods, 8(4), 614–629. https://doi.org/10.1017/psrm.2020.6
Lawson, M. A., Anand, S., & Kakkar, H. (2023). Tribalism and tribulations: The social costs of not sharing fake news. Journal of Experimental Psychology: General, 152(3), 611–631. https://doi.org/10.1037/xge0001374
Lawson, M. A., & Kakkar, H. (2021). Of pandemics, politics, and personality: The role of conscientiousness and political ideology in the sharing of fake news. Journal of Experimental Psychology: General. Advance online publication.
Lin, H., Rand, D. G., & Pennycook, G. (2023). Conscientiousness does not moderate the association between political ideology and susceptibility to fake news sharing. Journal of Experimental Psychology: General, 152(11), 3277–3284. https://doi.org/10.1037/xge0001467
Lundberg, I., Johnson, R., & Stewart, B. M. (2021). What Is Your Estimand? Defining the Target Quantity Connects Statistical Evidence to Theory. American Sociological Review, 86(3), 532–565. https://doi.org/10.1177/00031224211004187
Maertens, R., Götz, F. M., Golino, H. F., Roozenbeek, J., Schneider, C. R., Kyrychenko, Y., Kerr, J. R., Stieger, S., McClanahan, W. P., Drabot, K., He, J., & van der Linden, S. (2024). The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment. Behavior Research Methods, 56(3), 1863–1899. https://doi.org/10.3758/s13428-023-02124-2
Martel, C., Rathje, S., Clark, C. J., Pennycook, G., Van Bavel, J. J., Rand, D. G., & van der Linden, S. (2024). On the Efficacy of Accuracy Prompts Across Partisan Lines: An Adversarial Collaboration. Psychological Science, 35(4), 435–450. https://doi.org/10.1177/09567976241232905
Muda, R., Pennycook, G., Hamerski, D., & Białek, M. (2023). People are worse at detecting fake news in their foreign language. Journal of Experimental Psychology: Applied, 29(4), 712–724. https://doi.org/10.1037/xap0000475
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric Theory. McGraw-Hill.
Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., & Petersen, M. B. (2021). Partisan Polarization Is the Primary Psychological Motivation behind Political Fake News Sharing on Twitter. American Political Science Review, 115(3), 999–1015. https://doi.org/10.1017/S0003055421000290
Paunonen, S. V., & Ashton, M. C. (2001). Big Five factors and facets and the prediction of behavior. Journal of Personality and Social Psychology, 81(3), 524–539. https://doi.org/10.1037/0022-3514.81.3.524
Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A. A., Eckles, D., & Rand, D. G. (2021). Shifting attention to accuracy can reduce misinformation online. Nature, 592(7855), 590–595. https://doi.org/10.1038/s41586-021-03344-2
Pennycook, G., & Rand, D. G. (2022). Accuracy prompts are a replicable and generalizable approach for reducing the spread of misinformation. Nature Communications, 13(1), Article 1. https://doi.org/10.1038/s41467-022-30073-5
Pereira, A., Harris, E., & Van Bavel, J. J. (2023). Identity concerns drive belief: The impact of partisan identity on the belief and dissemination of true and false news. Group Processes & Intergroup Relations, 26(1), 24–47. https://doi.org/10.1177/13684302211030004
Petrocelli, J. V., Watson, H. F., & Hirt, E. R. (2020). Self-Regulatory Aspects of Bullshitting and Bullshit Detection. Social Psychology, 51(4), 239–253. https://doi.org/10.1027/1864-9335/a000412
Rathje, S., Roozenbeek, J., Traberg, C., Van Bavel, J., & van der Linden, S. (2022). Letter to the Editors of Psychological Science: Meta-Analysis Reveals that Accuracy Nudges Have Little to No Effect for U.S. Conservatives: Regarding Pennycook et al. (2020). Psychological Science. https://doi.org/10.25384/SAGE.12594110.v2
Ren, Z. (Bella), Dimant, E., & Schweitzer, M. (2023). Beyond belief: How social engagement motives influence the spread of conspiracy theories. Journal of Experimental Social Psychology, 104, 104421. https://doi.org/10.1016/j.jesp.2022.104421
Roberts, B. W., Jackson, J. J., Fayard, J. V., Edmonds, G., & Meints, J. (2009). Conscientiousness. In Handbook of individual differences in social behavior (pp. 369–381). The Guilford Press.
Roozenbeek, J., Freeman, A. L. J., & van der Linden, S. (2021). How Accurate Are Accuracy-Nudge Interventions? A Preregistered Direct Replication of Pennycook et al. (2020). Psychological Science, 32(7), 1169–1178. https://doi.org/10.1177/09567976211024535
Silberzahn, R., Uhlmann, E. L., Martin, D. P., Anselmi, P., Aust, F., Awtrey, E., Bahník, Š., Bai, F., Bannard, C., Bonnier, E., Carlsson, R., Cheung, F., Christensen, G., Clay, R., Craig, M. A., Dalla Rosa, A., Dam, L., Evans, M. H., Flores Cervantes, I., … Nosek, B. A. (2018). Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results. Advances in Methods and Practices in Psychological Science, 1(3), 337–356. https://doi.org/10.1177/2515245917747646
Simonsohn, U. (2019, September 6). [78a] If you think p-values are problematic, wait until you understand Bayes Factors. Data Colada. http://datacolada.org/78a
Simonsohn, U. (2024). Interacting With Curves: How to Validly Test and Probe Interactions in the Real (Nonlinear) World. Advances in Methods and Practices in Psychological Science. https://doi.org/10.1177/25152459231207787
Simonsohn, U., Montealegre, A., & Evangelidis, I. (2024). Stimulus Sampling Reimagined: Designing Experiments with Mix-and-Match, Analyzing Results with Stimulus Plots (SSRN Scholarly Paper 4716832). https://doi.org/10.2139/ssrn.4716832
Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing Transparency Through a Multiverse Analysis. Perspectives on Psychological Science, 11(5), 702–712. https://doi.org/10.1177/1745691616658637
van der Linden, S., Panagopoulos, C., & Roozenbeek, J. (2020). You are fake news: Political bias in perceptions of fake news. Media, Culture & Society, 42(3), 460–470. https://doi.org/10.1177/0163443720906992
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559