Lazy or different? A quantitative content analysis of how believers and nonbelievers of misinformation reason

Samuel G. Robson1*, Kate Faasse1, Eliza-Rose Gordon1, Samuel P. Jones1, Manisara Drew1, & Kristy A. Martire1

Received: April 30, 2024. Accepted: October 3, 2024. Published: October 9, 2024. https://doi.org/10.56296/aip00027

Abstract
Widespread belief in claims that run counter to substantial scientific evidence—such as that climate change is a hoax or that the Earth is flat—can have harmful societal consequences. Understanding the reasoning behind these beliefs is crucial for mitigating their spread. Across two studies, we used quantitative content analysis to compare how believers (Fringe) and non-believers (Mainstream) of implausible claims reason about evidence. After reading a fictitious report of either high or low quality from a forensic expert (Study 1; N = 183) or doctor (Study 2; N = 193), and rating the expert, participants explained the reasoning behind their ratings. We analysed the quantity and quality of these responses. There was mixed evidence suggesting that Fringe believers' responses were less effortful than Mainstream responses. However, we found consistent evidence that Fringe believers provided significantly fewer normative justifications and weak evidence that they provided significantly more self-generated justifications. These results suggest that Fringe believers rely less on conventional indicators of evidence quality. Differences in how people evaluate information may explain why some adopt implausible beliefs, and framing information in ways that resonate with Fringe believers may help reduce the spread of false claims.

Keywords: reasoning, content analysis, conspiracy theories, epistemically suspect beliefs

  1. School of Psychology, The University of New South Wales, Australia

*Please address correspondence to sam.robson@unsw.edu.au, School of Psychology, Mathews Building Library Walk, UNSW, Kensington, NSW 2052, Australia

Robson, S.G., Faasse, K., Gordon, E.-R., Jones, S. P., Drew, M., & Martire, K. A. (2024). Lazy or different? A quantitative content analysis of how believers and nonbelievers of misinformation reason. advances.in/psychology, 2, e003511. https://doi.org/10.56296/aip00027

The current article passed two rounds of double-blind peer review. The anonymous review report can be found here.

Introduction

Discerning fact from falsehood presents a significant challenge in the information age. Belief in epistemically suspect claims such as unverified conspiracies, fake news, and pseudoscience is associated with a host of negative outcomes, including heightened discrimination against marginalised groups, lower civic engagement, reduced public health compliance, and weakened societal cohesion (Jolley et al., 2022; Roozenbeek et al., 2020). Nor is belief in suspect claims confined to a small minority. For example, 20% of surveyed Australians believed the government concealed health risks posed by 5G (Marques et al., 2022). To understand why people find claims of this kind persuasive, we examine how believers and non-believers of implausible claims differ in their reasoning.

One proposed explanation for why people believe misinformation is miserly thinking. To use Kahneman’s (2011) dual process model, cognitive miserliness would broadly refer to a reliance on intuitive, fast thinking (System 1) over analytical, slow thinking (System 2). When evaluating information, cognitive misers are unwilling to go beyond heuristic processing, expending little effort or analysis when trying to determine if information is plausible or correct. Miserly thinkers may come to believe misinformation because they do not think carefully enough about the evidence that they encounter (Pennycook & Rand, 2019; Scherer et al., 2021).

In line with this view, self-reported analytic cognitive style has been found to negatively correlate with belief in conspiracy theories and the paranormal (Aarnio & Lindeman, 2005; Yelbuz et al., 2022). Similarly, performance on tasks designed to assess analytic thinking, such as the Cognitive Reflection Test (CRT; Frederick, 2005), is inversely correlated with belief in dubious claims (Pennycook et al., 2012, 2015; Pennycook & Rand, 2019; Yelbuz et al., 2022). Pennycook and Rand (2019), in finding that lower CRT scores relate to lower accuracy in detecting implausible news headlines, suggested that failing to think thoroughly leads people to believe false claims. Yelbuz et al. (2022) also suggested that claims like conspiracy theories attract people with an intuitive thinking style because such theories require lower cognitive effort. And Brashier (2023) noted that several pieces of evidence appear to characterize conspiracy theorists as “overconfident underthinkers” (p. 3). We refer to the idea that people believe false claims because they fail to effortfully evaluate information as the Miserly Hypothesis.

It is worth noting that analytic versus intuitive thinking may refer to many things, including processing strategies, reliance on heuristics versus formal logic, emotional versus rational decision-making, or the mental effort people expend when evaluating information. Varied definitions can also complicate tests of miserliness because effort and ability can be challenging to disentangle. Analytic thinking involves the deliberate, systematic evaluation of information, which is inherently effortful, but effort alone does not guarantee analytic processing. In testing the Miserly Hypothesis, we focus specifically on whether those who believe implausible claims expend less effort than non-believers when evaluating evidence. We reason that while increased effort may sometimes equate to more analysis, decreased effort cannot signal more analytical thinking.

Within this framing, evidence suggesting that those who believe misinformation are cognitive misers has certain limitations. For example, even though the CRT is the dominant way to assess analytic thinking in the literature (Pennycook et al., 2015), it may not reliably assess individual differences in miserliness. Stupple and colleagues (2017) demonstrated that people who struggle on the CRT perform poorly because of limited working memory or cognitive inhibition rather than miserliness. Instead, poor performers on the CRT may expend considerable mental effort but have a misguided approach. Several studies in fact question the validity of the CRT for similar reasons (e.g., Blacksmith et al., 2019; Otero et al., 2022; Patel et al., 2019). Relatedly, believers of implausible claims (e.g., climate change is a hoax, vaccines are harmful) give incorrect answers to CRT questions more often than non-believers, but they spend longer attempting to answer the questions (Martire et al., 2023). Lengthier deliberation is inconsistent with miserliness.

Other evidence also challenges the notion that believers of implausible claims are cognitive misers. Anti-vaccination comments on Facebook, for instance, display more analytical language compared to pro-vaccination or neutral comments (Faasse et al., 2016). The evidence on whether encouraging analytical thinking reduces belief in false claims is also mixed (see Bago et al., 2020; Sanchez et al., 2017; Većkalov et al., 2024). Additionally, in studies that assess how individuals evaluate rich text material, there is little evidence that those who believe implausible claims are less analytical than non-believers (Martire, Growns et al., 2020). Overall, growing research suggests that it may be inaccurate to broadly characterize those who believe implausible claims as having a generally miserly cognitive style.

If not miserliness, what else may be at play? Another perspective is that false beliefs stem from different perceptions about what evidence is reliable or valid. Lewandowsky and colleagues (2017) argue that those who endorse misinformation adopt a ‘post-truth’ lens, viewing the world through an alternative epistemology that defies conventional standards of evidence. A preference for alternative sources and less conventional evidence (see Harambam & Aupers, 2015) can mean that people draw on or emphasise different information on given topics. Such preferences can shape what information people value and attend to, and how that information is then assessed and integrated. As a result, people can arrive at drastically different conclusions on the same topic. Adopting fringe positions on issues like vaccines and climate change, in other words, may arise from overvaluing certain types of information even when it conflicts with higher-quality, epistemically normative evidence. We call this view the Information Preference Hypothesis.

There is some evidence to support this perspective. For one, those who believe suspect claims tend to distrust information from scientific experts, authorities and institutions (Imhoff et al., 2018; Lewandowsky et al., 2017; Lyons, 2023; Ward et al., 2017). Believers may also exhibit lower intellectual humility (Bowes & Tasimi, 2022) and might therefore trust their own knowledge and experiences over the opinions of others (Krumrei-Mancuso & Rouse, 2016). Suspect beliefs are also linked to preferences for less credible sources, such as social media and personal experience (Blankenship et al., 2018; Enders et al., 2023; Rodriguez, 2016). These varied pieces of evidence indicate that misbeliefs may emerge from operating in an alternate epistemic framework that values non-normative information.

The Present Studies

The aim of the present set of studies is to test the Miserly and Information Preference hypotheses by using evidence evaluation tasks to compare those who believe implausible claims to those who do not. We define an implausible claim as one that contradicts substantial evidence and consensus among scientists and institutions, where relevant evidence is publicly accessible and key events are not dependent on secrecy among a small group of individuals. This definition narrows implausible claims to a subset of claims within the broader category of misinformation. The four implausible claims we focus on are: 1) global warming is a hoax, 2) vaccines are harmful and this fact is covered up, 3) the Earth is flat, and 4) the Apollo moon landings never happened and were staged in a Hollywood studio (Martire, Growns et al., 2020; Martire et al., 2023). Notably, three of these claims are overtly conspiratorial, and the fourth (the Earth is flat) implies a global conspiracy as well. It is quite possible that a belief which contradicts substantial evidence and scientific consensus may necessitate a large-scale conspiracy to be convincing. Our four claims could therefore also be viewed as implausible conspiracy theories (see Frenken et al., 2024 for a distinction between plausible and implausible conspiracy theories). In any case, we label those who believe implausible claims as Fringe believers and those who do not as Mainstream believers because these terms are more neutral.

Across two studies, Fringe and Mainstream believers read facts about an expert and their opinion, which was of either conventionally high or low quality. They then rated the persuasiveness of the expert and their opinion before providing an open-ended response explaining their ratings. The results for the quantitative data are reported elsewhere (Robson et al., 2024), but the persuasiveness ratings for the high- and low-quality evidence suggest that Fringe and Mainstream believers are equally effective at distinguishing evidence quality; both groups expended the analytic effort required to reach a normatively expected conclusion. We extend this work here by employing content analyses to compare the groups on the open-ended responses they gave to justify these evaluations.

We code the open-ended responses into discrete justifications that fit under three overarching categories. The first includes justifications about information in the report that was relevant to expertise (per the Expert Persuasion Expectancy Framework [ExPEx]; Martire, Edmond & Navarro, 2020). The second includes justifications about peripheral information in the report irrelevant to expertise. And the third includes self-generated information provided by the participant. Hypotheses for both studies were preregistered (Study 1: https://aspredicted.org/blind.php?x=2DP_68B; Study 2: https://aspredicted.org/blind.php?x=9Z5_FJL), but specific coding procedures and planned analyses were not specified.

Under the Miserly Hypothesis, Fringe believers should display less effort in their reasoning and therefore provide significantly fewer justifications overall compared to Mainstream believers. In line with the Information Preference Hypothesis, however, we expect Fringe and Mainstream believers not to differ significantly in the number of justifications they provide, but to differ significantly in the type of justifications they provide.[1] Importantly, participants were asked to reason about information unrelated to the implausible claims above. This design ensures that pre-existing beliefs about specific issues are unlikely to affect participants’ reasoning on our experimental tasks. However, any difference in miserliness, if it is a cognitive style, should apply even in these unrelated contexts.

Methods

Ethics and Data Availability

Ethical approval for both studies was obtained through the UNSW Human Research Ethical Approval Panel C #3454 and #3636, respectively. The codebook, data, and analytic scripts for these studies can be found on the Open Science Framework (OSF) at: https://osf.io/ebv9c/.

Design

In both Studies 1 and 2, we employed a 2 (Belief Group; Fringe vs. Mainstream) ✕ 2 (Condition; High vs. Low quality report) between-groups design. Participants read and rated the persuasiveness of fictional expert reports from either a forensic examiner (Study 1) or medical doctor (Study 2). This report was manipulated to be of either high or low quality (based on the ExPEx framework; Martire, Edmond & Navarro, 2020). Participants then justified their ratings in an open-ended response. Justifications were coded into one of three categories using the manual coding method detailed below. The dependent variables of interest were the total number of justifications provided and the number of justifications from each category.

Grouping Method

Participants were grouped via the Belief in Implausible Claims Protocol (BEIC; Martire, Growns et al., 2020) administered towards the end of each study. This protocol includes the four implausible claims outlined in the introduction (global warming is a hoax; vaccines are harmful, and this fact is covered up; the earth is flat; the Apollo moon landings never happened and were staged in a Hollywood film studio). Participants answered using a scale from 0 (Not at all) to 100 (Definitely true). If a participant rated one or more of the implausible statements above 60 for truth, they were labelled as a Fringe believer, whereas participants who rated all four implausible statements below 40 were classified as Mainstream believers. These cut-off values were selected because they are analogous to thresholds typically used with 5-point Likert scales, where values of 4 and 5 indicate agreement, values of 1 and 2 indicate disagreement, and a value of 3 indicates indecision or uncertainty. These criteria ensure that all participants categorized as Fringe believers have indicated strong belief in at least one implausible claim, while all Mainstream believers have indicated little or no belief in all of the implausible claims. This method arguably aligns better with the Fringe and Mainstream labels than, say, a median-split method, which might misclassify individuals with mild belief across all four claims as Fringe, or those with strong belief in only one implausible claim as Mainstream. Prior work, however, has indicated that varying the precise cut-off values has little impact on general findings regarding these groups (see Martire, Growns et al., 2020; Martire et al., 2023).
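
To make this grouping rule concrete, the following R sketch applies the same cut-offs to hypothetical BEIC ratings; the column names and data are illustrative only and are not taken from the study's scripts (available at https://osf.io/ebv9c/). The mean of the four ratings, used later as a continuous measure of belief, is also computed.

```r
# Hypothetical illustration of the grouping rule; ratings are on the 0-100 BEIC scale.
beic <- data.frame(
  warming  = c(85, 10,  5, 50),
  vaccines = c(20,  5, 30, 45),
  flat     = c( 0,  0, 10, 20),
  moon     = c(10, 15, 35, 55)
)

classify <- function(r) {
  if (any(r > 60)) "Fringe"            # strong belief in at least one implausible claim
  else if (all(r < 40)) "Mainstream"   # little or no belief in all four claims
  else NA_character_                   # meets neither criterion; excluded from group analyses
}

beic$belief_group <- apply(beic, 1, classify)
beic$belief_mean  <- rowMeans(beic[, c("warming", "vaccines", "flat", "moon")])
```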

We chose to group participants, rather than use a continuous measure of belief, for several reasons. First, implausible claims are endorsed by relatively few people, so a general population sample would include few people who actually believe such claims, limiting insights into this group of interest. To address this, we specifically prescreened participants who either endorsed or rejected implausible claims (see Participants section). A group comparison approach is therefore consistent with how participants were recruited.

Consider also an analogy from basketball: to understand what predicts expert basketball performance, sampling from the general population may be inadvisable as most people are not professional basketballers. The key variables that distinguish basketball ability in the general population may be different from the variables that separate experts from amateurs. A more suitable approach would be to recruit equal numbers of experts and amateurs, and to compare them on relevant traits and skills. Using general population samples may similarly misrepresent relationships regarding belief in implausible claims. For instance, van Prooijen and colleagues (2023) found that correlations between beliefs can be entirely driven by those who accept the official narrative rather than by actual conspiracy theorists. Recruiting equal numbers of believers and non-believers, and comparing them on psychological measures, can allow for meaningful conclusions about what separates the two groups.

Participants

All participants were pre-screened on Prolific within two months prior to data collection with the BEIC protocol outlined above. From those pre-screened, we aimed to recruit 120 presumptive Fringe believers and 120 presumptive Mainstream believers via Prolific. However, participants were formally grouped for analysis via their responses to the BEIC protocol re-administered in the present studies. There were no restrictions on which countries we collected data from because we aimed to oversample people who believe implausible claims rather than recruit a representative sample from any particular country. Participants received monetary compensation of £2.25 for taking part.

A priori power analyses conducted in G*Power (Faul et al., 2007) indicated that a minimum of 179 participants would be needed to detect a medium effect (f = 0.25) in a 2 ✕ 2 between-subjects ANOVA with 80% power. Two attention checks were included in each study, and we excluded participants who failed both attention checks or who responded in a language other than English. We also excluded participants who no longer met the criteria for either Fringe or Mainstream based on their responses on the re-administered BEIC protocol.

Study 1

A total of 240 participants were recruited. None were excluded based on attention checks, but two were excluded for not responding in English. Of the remaining 238 participants, the average age was 39.1 (SD = 14.4), 54.2% identified as male (45.0% female, 0.4% other), 71.8% identified as White/Caucasian (9.7% Hispanic, 8.4% African American, 3.4% Asian, 6.3% other), 75.6% had tertiary level education or higher (23.5% high school level or less), and 68.9% spoke English as a first language. Twenty-six participants did not meet the criteria for either Fringe or Mainstream believers, leaving 104 Fringe believers and 108 Mainstream believers. After pilot coding, 183 participant responses remained, and only these were used for analysis.

Study 2

We recruited 239 participants. None were excluded based on attention checks or language. The average age of these participants was 45.3 (SD = 13.7), 57.7% identified as male (41.4% female, 0.4% other), 84.9% identified as White/Caucasian (4.6% Asian, 4.2% African American, 2.9% Hispanic, 2.9% other), 71.1% had tertiary level education or higher (28.5% high school level or less), and 81.6% spoke English as a first language. Of these, 96 were classified as Fringe believers and 126 as Mainstream believers, with 17 removed for not fitting either criterion. After pilot coding, 193 participant responses remained, and only these responses were used for analysis.

Materials and Procedure

Participants provided informed consent and then completed an evidence evaluation task in which they read facts about an expert and their opinion. Participants were randomly allocated to read either a high- or low-quality report (manipulated based on the ExPEx framework; Martire, Edmond & Navarro, 2020). Table 1 (adapted from Robson et al., 2024) provides the wording of the reports. The ExPEx framework contains eight attributes relevant to the merit-based assessment of an expert’s opinion: foundation, field, specialty, ability, opinion, support, consistency, and trustworthiness (see Martire, Growns et al., 2020; Martire & Montgomery-Farrer, 2020 for more details). The reports presented eight facts, each corresponding to one of the ExPEx attributes. The expert opinion’s soundness, credibility, and merit are determined according to strengths and weaknesses on these attributes (Martire, Edmond & Navarro, 2020).

Table 1

The Information Provided in the Reports in Studies 1 and 2.

High [low] quality and low relevance information, by ExPEx attribute.

Foundation
Study 1: An extensive review of the medical literature reveals that formal training, study, and experience working as a clinical podiatrist makes it possible [does not make it possible] to identify a perpetrator through the comparison of a) gait patterns captured on CCTV with b) the gait of suspected offenders.
Study 2: An extensive review of the medical literature reveals that formal training, study, and experience working as a neurologist makes it possible [does not make it possible] to accurately diagnose a neurological disorder called Intraarterial Capitis by looking at the results of brain scans.

Field
Study 1: Dr X has formal training, study and experience working as a clinical podiatrist [hand surgeon]. Dr X has also obtained a degree in Psychology from a local college [prestigious university].* Dr X is a single mother of two children who lives in a ‘working class’ neighbourhood.
Study 2: Dr Chen has formal training, study, and experience working as a neurologist [orthopaedic surgeon]. Dr Chen has also obtained a degree in Psychology from a local college [prestigious university]*. Dr Chen is a single mother of two children who lives in a ‘working class’ neighbourhood. Dr Chen is not close friends with any [is close friends with many]* of her colleagues.

Specialty
Study 1: Dr X has [has not] undertaken training courses specifically instructing how to make comparisons from low quality CCTV footage to high quality suspect footage for the purpose of gait analysis. She is also a candlemaker [But she is also a pilot]* in her spare time.
Study 2: Dr Chen has [has not] undertaken an Advanced Training Program specifically instructing how to read brain scans to make accurate diagnoses of Intraarterial Capitis. She is also a candlemaker [But she is also a pilot]* in her spare time.

Ability
Study 1: Dr X has been tested to examine whether she is accurately able to match suspects to perpetrators on the basis of a gait comparison using CCTV footage. Dr X performs the task very well and rarely makes mistakes [very poorly and commonly makes mistakes]. Dr X’s friends like to tell stories about all of the times she was [was not] able to recognise them even from afar, just by the way they move.
Study 2: Brain changes caused by Intraarterial Capitis can be seen in autopsy results. Dr Chen has [has not] been tested to examine whether she is accurately able to diagnose Intraarterial Capitis from brain scans in training cases, by comparing her diagnoses to later autopsy results. Dr Chen performs the task very well in training and rarely makes mistakes. This is her first case of Intraarterial Capitis with a real patient.* After visiting Dr Chen, Patient B heard that a friend had a really bad [good]* experience with Dr Chen.

Opinion
Study 1: In her report, Dr X gave the opinion that she is extremely [somewhat] confident that the perpetrator from the CCTV and the suspect are the same person given the observed similarities in their gait.
Study 2: In her report, Dr Chen gave the opinion that she is extremely [somewhat] confident that Patient B should be diagnosed with Intraarterial Capitis given the features of Patient B’s brain scan.

Support
Study 1: When asked by the Court, Dr X did not provide any evidence [provided extensive evidence]* in support of this opinion.
Study 2: When asked by Patient B, Dr Chen provided extensive evidence [did not provide any evidence] in support of this diagnosis.

Consistency
Study 1: The opinion expressed by Dr X was reviewed by a clinical podiatrist with formal training, study, and experience. The clinical podiatrist did not reach [reached]* the same opinion as Dr X.
Study 2: The opinion expressed by Dr Chen was reviewed by a neurologist [two neurologists] with formal training, study, and experience. The neurologist reached [One neurologist did not reach] the same diagnosis as Dr Chen, but Patient B’s husband – an electrician – disagrees [agrees]* with Dr Chen’s diagnosis.

Trustworthiness
Study 1: Dr X is a court appointed expert in this case. She has worked for both the prosecution and the defense in the past showing that she does not have a pro-prosecution or pro-defense bias in her work [known as a hired gun and only testifies for the prosecution].
Study 2: Dr Chen has no [a] vested interest in diagnosing Intraarterial Capitis. She has no known [is suspected to have] ties to pharmaceutical companies that provide treatments for Intraarterial Capitis and is only concerned with the well-being of her patients [so it benefits her to make more diagnoses of the disorder].
Note. Table adapted from Robson et al. (2024). Bolded text was presented in the high quality condition whereas text in brackets was presented in the low quality condition. Italicized text denotes low relevance information (i.e., not related to Expert Persuasion Expectancy [ExPEx] attributes). Some information (*) was incongruent with the quality condition to add ambiguity to the material.

In addition to the relevant pieces of information, we added incongruent and low-relevance information to make the task more challenging. For example, the expert was described as having a technical and specialised hobby (i.e., pilot) in the low-quality condition and a casual hobby in the high-quality condition (i.e., candlemaker). In Study 1, two relevant ExPEx attributes (support and consistency) were also incongruent with the other attributes, again to make the task more difficult. The high-quality report, for instance, stated that another expert did not reach the same opinion as the expert, whereas in the low quality report another expert did reach the same opinion. In Study 2, all relevant information was intentionally congruent with the quality condition because the paradigm was being applied to a medical setting for the first time.

After reading the report, participants rated how persuasive they found the expert and their opinion (aggregate of weight, value, and credibility), and whether they agreed with the opinion. Next, participants answered an open-ended question asking them to justify their ratings. These open-ended responses are of main interest here. The questions posed to participants in Studies 1 and 2 were similar to each other. The question from Study 1 was:

“We are interested in learning about the information you used to make your decisions about Dr X and her opinion on the previous screens. Please describe what you considered as accurately as you can. There are no right or wrong answers, but please write in full sentences.”

Participants then completed a series of other questions beyond the scope of the present study, followed by the BEIC Protocol, which was used to place participants into groups. This protocol includes the four implausible items outlined earlier interspersed among eight general knowledge items and one attention check. The general knowledge items were included to avoid demand effects when inquiring about beliefs. An example of a general knowledge item was: ‘Spiders have six legs’. Finally, participants responded to demographic questions and were then debriefed.

Response Coding

Pilot Phase

We initially selected 35 responses randomly from the full sample in each study to develop a codebook that we could then apply to the remaining responses (per White & Marsh, 2006). The codebook’s development followed procedures outlined in qualitative coding manuals (Bryman, 2016; Riff et al., 2014; Ward et al., 2017; Weber, 1985; White & Marsh, 2006). Two independent coders (authors EG and SPJ) identified and categorised the presence (‘1’) or absence (‘0’) of justifications in each participant’s response. Coders were blind to who wrote the responses. A justification was defined as any reason provided by the participant for rating the credibility, value, and weight (i.e., persuasiveness) of the expert, or for agreeing or disagreeing with the expert. After initial independent coding, the coders discussed differences and refined the category definitions. The codebook was revised several times, with iterations ceasing once inter-rater reliability for most categories reached ‘substantial agreement’ or higher (Cohen’s κ > .60; Landis & Koch, 1977).
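
For readers wanting to reproduce this kind of reliability check, Cohen's kappa can be computed in R with the irr package. The presence/absence vectors below are invented for illustration and are not the study's codes.

```r
# Illustrative inter-rater reliability check for a single subcategory.
library(irr)

coder_A <- c(1, 0, 1, 1, 0, 0, 1, 0, 1, 1)  # coder 1: presence (1) / absence (0)
coder_B <- c(1, 0, 1, 0, 0, 0, 1, 0, 1, 1)  # coder 2: presence (1) / absence (0)

kappa2(data.frame(coder_A, coder_B))  # Cohen's kappa; > .60 indicates 'substantial agreement'
```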

Test Phase

The final codebook included 18 justification subcategories with detailed instructions and examples for each subcategory (see https://osf.io/ebv9c/). Each of these 18 subcategories fell under one of three overarching justification types: Present-relevant (pertaining to information contained in the expert report and relevant to an ExPEx attribute), present-irrelevant (a justification relying on information that was included in the expert report but was unrelated to an ExPEx attribute), or self-generated (reasons not based on information contained in the expert report, including subjective assumptions and opinions). Here is an example response from Study 2:

[Dr Chen’s main area of expertise was stated to be orthapaedic surgery, rather than neurology]. [Furthermore, it was stated that even well-trained neurologists cannot provide an accurate diagnosis from brain scans]. [Dr Chen also had a conflict of interest.]

The text in each set of brackets refers to a discrete justification. Each is an example of a present-relevant justification. The first justification was coded as Field, the second as Foundation, and the third as Trustworthiness, as per the ExPEx framework. A present-irrelevant justification would refer to low-relevance information from the report (e.g., “has a degree in psychology” or “is a single mother…”). Self-generated justifications would refer to information not provided in-text (e.g., “I just can’t trust what she says” or “I believe from my own experience that…”).

With this final codebook, the remaining responses were then independently coded. Before breaking the blind, the coders resolved any disagreements and decided on a single code (presence or absence) for each subcategory for all responses. This final coding was used for all subsequent analyses comparing Fringe and Mainstream believers (Study 1 [N = 183]; Study 2 [N = 193]).
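
To illustrate the structure of the coded data, the dependent variables described in the Design section can be derived from the binary presence/absence codes roughly as in the following R sketch. The subcategory labels, toy data, and the assumption that each count is the number of subcategories marked present are illustrative rather than taken from the published codebook or scripts (https://osf.io/ebv9c/).

```r
# Sketch of deriving the dependent variables from presence/absence codes:
# 8 present-relevant, 4 present-irrelevant, and 6 self-generated subcategories.
set.seed(1)
subcats <- c(paste0("pr_", 1:8), paste0("pi_", 1:4), paste0("sg_", 1:6))
coded <- as.data.frame(matrix(rbinom(5 * 18, 1, 0.2), nrow = 5,
                              dimnames = list(NULL, subcats)))

coded$present_relevant   <- rowSums(coded[, grep("^pr_", subcats)])
coded$present_irrelevant <- rowSums(coded[, grep("^pi_", subcats)])
coded$self_generated     <- rowSums(coded[, grep("^sg_", subcats)])
coded$total_justifications <- rowSums(coded[, subcats])
```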

Analysis Plan

In the preregistrations for Studies 1 and 2, we expected Fringe and Mainstream believers to differ qualitatively but not quantitatively in the justifications they provided. However, we had not specified any planned analyses to test these hypotheses beyond conducting content analysis. Since then, we decided to compare the groups on several measures: the total number of words typed, the total number of justifications given, and the number of justifications from each overarching category. These comparisons were made using either Poisson or negative binomial generalized linear models, depending on overdispersion. We also decided to include Condition (High vs. Low quality report) as a predictor in addition to belief group, as well as the interaction between them. To avoid results being dependent on arbitrary reference categories, we summarized the regression models using the Anova function (type 2; sum contrasts) from the “car” package (Fox & Weisberg, 2019). This is a robust tool for interpreting effects in generalized linear models and is functionally equivalent to an analysis of deviance. Given the post hoc nature of these decisions, all analyses are exploratory.
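
As a rough illustration of this modelling pipeline in R (with simulated data and an assumed overdispersion rule of thumb; the study's actual scripts are available on the OSF):

```r
# Sketch of the GLM pipeline: Poisson by default, negative binomial if overdispersed,
# summarized with Type II analysis-of-deviance tests. Data are simulated for illustration.
library(car)   # Anova()
library(MASS)  # glm.nb()

set.seed(42)
dat <- data.frame(
  belief_group = factor(rep(c("Fringe", "Mainstream"), each = 50)),
  condition    = factor(rep(c("High", "Low"), times = 50)),
  total_justifications = rpois(100, lambda = 3)
)

options(contrasts = c("contr.sum", "contr.poly"))  # sum contrasts, as described above

m_pois <- glm(total_justifications ~ belief_group * condition,
              family = poisson, data = dat)

# One common overdispersion heuristic (the exact criterion used in the study is not stated here):
overdispersed <- deviance(m_pois) / df.residual(m_pois) > 1.5
m_final <- if (overdispersed) {
  glm.nb(total_justifications ~ belief_group * condition, data = dat)
} else m_pois

Anova(m_final, type = 2)  # Type II chi-square tests for belief, condition, and their interaction
```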

Results

We present the results for each study below, but we were also able to combine the data from both studies and perform analyses on the aggregated data for added power. For these combined analyses, we conducted generalized multilevel models using the “lme4” package (Bates et al., 2015) and included Study as a random effect. For each data set, we report analyses comparing the preregistered groups. However, to maximize empirical insights, we also report the results for the analyses where belief is included as a continuous predictor variable, which we computed by taking participants’ mean ratings across the four implausible claims. For these latter analyses, we included data even from those who did not fall into one of the belief groups. However, we still excluded participants who failed both attention checks or who responded in a language other than English. The results for all analyses can be found in Table 2.
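
For illustration, the combined-data models described here could be specified in R roughly as follows; the simulated data, variable names, and standardization of the continuous belief measure are assumptions rather than the study's actual code.

```r
# Sketch of the combined analyses: a Poisson mixed model with Study as a random
# intercept, with belief entered either as a two-level group or as a continuous predictor.
library(lme4)
library(car)

set.seed(7)
combined <- data.frame(
  study        = factor(rep(c("Study 1", "Study 2"), each = 100)),
  belief_group = factor(sample(c("Fringe", "Mainstream"), 200, replace = TRUE)),
  belief_mean  = runif(200, 0, 100),   # mean of the four implausible-claim ratings
  condition    = factor(sample(c("High", "Low"), 200, replace = TRUE)),
  present_relevant = rpois(200, lambda = 2)
)

# Group comparison (Fringe vs. Mainstream)
m_group <- glmer(present_relevant ~ belief_group * condition + (1 | study),
                 family = poisson, data = combined)
Anova(m_group, type = 2)

# Belief as a continuous predictor (standardized here to aid convergence)
m_cont <- glmer(present_relevant ~ scale(belief_mean) * condition + (1 | study),
                family = poisson, data = combined)
Anova(m_cont, type = 2)
```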

Table 2

Results for Studies 1 and 2, and the Combined Data, for all Five Dependent Variables

Total word count
  Belief as dichotomous (Fringe vs. Mainstream)
    Study 1: Belief ME χ²(1) = 5.68, p = .017; Condition ME χ²(1) = 2.54, p = .111; Interaction χ²(1) = 1.34, p = .248
    Study 2: Belief ME χ²(1) = 0.04, p = .846; Condition ME χ²(1) = 3.48, p = .062; Interaction χ²(1) = 1.06, p = .304
    Combined: Belief ME χ²(1) = 3.37, p = .066; Condition ME χ²(1) = 6.39, p = .012; Interaction χ²(1) = 2.23, p = .135
  Belief as continuous (mean rating)
    Study 1: Belief ME χ²(1) = 10.58, p = .001; Condition ME χ²(1) = 3.45, p = .063; Interaction χ²(1) = 1.42, p = .234
    Study 2: Belief ME χ²(1) = 0.34, p = .558; Condition ME χ²(1) = 3.38, p = .066; Interaction χ²(1) = 1.10, p = .294
    Combined: Belief ME χ²(1) = 7.06, p = .008; Condition ME χ²(1) = 7.09, p = .008; Interaction χ²(1) = 2.32, p = .127

Total justifications
  Belief as dichotomous (Fringe vs. Mainstream)
    Study 1: Belief ME χ²(1) = 1.99, p = .156; Condition ME χ²(1) = 4.45, p = .035; Interaction χ²(1) = 1.05, p = .306
    Study 2: Belief ME χ²(1) = 1.02, p = .313; Condition ME χ²(1) = 0.14, p = .711; Interaction χ²(1) = 0.93, p = .334
    Combined: Belief ME χ²(1) = 3.47, p = .062; Condition ME χ²(1) = 1.29, p = .256; Interaction χ²(1) = 1.63, p = .202
  Belief as continuous (mean rating)
    Study 1: Belief ME χ²(1) = 5.29, p = .021; Condition ME χ²(1) = 6.00, p = .014; Interaction χ²(1) = 0.51, p = .477
    Study 2: Belief ME χ²(1) = 2.55, p = .110; Condition ME χ²(1) = 0.25, p = .619; Interaction χ²(1) = 0.36, p = .549
    Combined: Belief ME χ²(1) = 7.74, p = .005; Condition ME χ²(1) = 1.68, p = .195; Interaction χ²(1) = 0.70, p = .402

Present-relevant justifications
  Belief as dichotomous (Fringe vs. Mainstream)
    Study 1: Belief ME χ²(1) = 9.49, p = .002; Condition ME χ²(1) = 0.90, p = .344; Interaction χ²(1) = 0.35, p = .555
    Study 2: Belief ME χ²(1) = 5.82, p = .016; Condition ME χ²(1) < 0.01, p = .967; Interaction χ²(1) < 0.01, p = .980
    Combined: Belief ME χ²(1) = 14.98, p < .001; Condition ME χ²(1) = 0.45, p = .501; Interaction χ²(1) = 0.11, p = .740
  Belief as continuous (mean rating)
    Study 1: Belief ME χ²(1) = 11.71, p = .001; Condition ME χ²(1) = 2.14, p = .143; Interaction χ²(1) = 0.23, p = .629
    Study 2: Belief ME χ²(1) = 6.94, p = .008; Condition ME χ²(1) < .01, p = .999; Interaction χ²(1) = 0.01, p = .932
    Combined: Belief ME χ²(1) = 19.31, p < .001; Condition ME χ²(1) = 0.57, p = .448; Interaction χ²(1) < 0.01, p = .953

Present-irrelevant justifications
  Belief as dichotomous (Fringe vs. Mainstream)
    Study 1: Belief ME χ²(1) = 0.59, p = .444; Condition ME χ²(1) = 9.80, p = .002; Interaction χ²(1) = 2.10, p = .147
    Study 2: Belief ME χ²(1) = 1.09, p = .297; Condition ME χ²(1) = 6.40, p = .011; Interaction χ²(1) = 2.46, p = .117
    Combined: Belief ME χ²(1) = 0.99, p = .319; Condition ME χ²(1) = 0.39, p = .531; Interaction χ²(1) = 3.82, p = .051
  Belief as continuous (mean rating)
    Study 1: Belief ME χ²(1) = 0.02, p = .883; Condition ME χ²(1) = 14.63, p < .001; Interaction χ²(1) = 0.75, p = .388
    Study 2: Belief ME χ²(1) = 0.33, p = .568; Condition ME χ²(1) = 8.65, p = .003; Interaction χ²(1) = 3.77, p = .052
    Combined: Belief ME χ²(1) = 0.27, p = .607; Condition ME χ²(1) = 0.87, p = .351; Interaction χ²(1) = 3.22, p = .073

Self-generated justifications
  Belief as dichotomous (Fringe vs. Mainstream)
    Study 1: Belief ME χ²(1) = 1.14, p = .286; Condition ME χ²(1) = 0.30, p = .582; Interaction χ²(1) = 0.09, p = .762
    Study 2: Belief ME χ²(1) = 3.22, p = .073; Condition ME χ²(1) = 0.62, p = .431; Interaction χ²(1) = 1.35, p = .246
    Combined: Belief ME χ²(1) = 4.04, p = .044; Condition ME χ²(1) = 0.98, p = .321; Interaction χ²(1) = 1.13, p = .288
  Belief as continuous (mean rating)
    Study 1: Belief ME χ²(1) = 0.09, p = .759; Condition ME χ²(1) = 0.01, p = .911; Interaction χ²(1) < 0.01, p = .994
    Study 2: Belief ME χ²(1) = 1.15, p = .282; Condition ME χ²(1) = 0.80, p = .371; Interaction χ²(1) = 0.02, p = .887
    Combined: Belief ME χ²(1) = 0.99, p = .320; Condition ME χ²(1) = 0.50, p = .481; Interaction χ²(1) = 0.01, p = .924
Note. ME = main effect; effects with p < .05 are statistically significant.

Response Quantity

Word Count

We conducted negative binomial generalized linear models with response length (word count) as the outcome. In Study 1, there was a significant main effect of belief group where Fringe believers typed significantly fewer words (M = 50.61, SD = 32.36) than Mainstream believers (M = 63.76, SD = 40.85). However, in Study 2, Fringe believers (M = 60.64, SD = 35.96) and Mainstream believers (M = 61.28, SD = 38.60) did not differ significantly in the number of words typed. The group difference was also non-significant when the data were combined. When belief was included as a continuous variable, however, there was a negative association between belief and word count in Study 1 and in the combined analysis, but not in Study 2.

There was a non-significant main effect of condition in Studies 1 and 2. However, when the data were combined, being in the low quality condition was associated with typing more words (M = 63.71, SD = 39.15) compared to the high quality condition (M = 54.43, SD = 34.81). All interaction effects were non-significant. This pattern of findings was consistent with the analyses where belief was included as a continuous variable.

Number of Justifications

We conducted Poisson generalized linear models with total number of justifications as the outcome. In Study 1, Fringe believers (M = 2.87, SD = 1.50) and Mainstream believers (M = 3.27, SD = 1.65) did not differ significantly in total number of justifications. In Study 2, Fringe (M = 3.33, SD = 1.44) and Mainstream believers (M = 3.61, SD = 1.51) did not significantly differ either. And when combining data from both studies, the main effect of belief group remained non-significant. That said, there was a negative association between the total number of justifications and belief in Study 1 and for the combined data when the continuous measure of belief was used, but not in Study 2.

There was a main effect of condition in Study 1 where those who read the high quality report provided significantly fewer justifications (M = 2.77, SD = 1.33) than those who read the low quality report (M = 3.34, SD = 1.75). However, this effect was not significant for Study 2 nor for the combined data set. All interaction effects were non-significant. These results align with analyses where belief was included as a continuous predictor.

Response Quality

We tested whether Fringe and Mainstream believers differed in the type of justifications they gave. Each justification fell into one of three overarching categories: Present-relevant, Present-irrelevant and Self-generated. Figure 1 depicts the number of justifications from each participant across each category.

Figure 1

Number of justifications provided from each category (Present-relevant, Present-irrelevant, and Self-generated) for each group and condition in Study 1 (A) and in Study 2 (B).

Present-Relevant

Poisson generalized linear models revealed that the number of Present-relevant justifications was consistently related to belief. In Study 1, Fringe believers provided significantly fewer justifications (M = 1.47, SD = 1.10) from this category than did Mainstream believers (M = 2.09, SD = 1.16). Fringe believers in Study 2 also gave significantly fewer justifications from this category (M = 2.27, SD = 1.04) compared to Mainstream believers (M = 2.83, SD = 1.12). When combining the data, the effect of belief group was also significant and in the same direction. Additionally, when belief was included as a continuous variable, it was negatively related to the number of present-relevant justifications in Study 1, Study 2, and when the data were combined. The main effect of condition and the interaction effect were non-significant in all analyses.

Present-Irrelevant

For the Present-irrelevant justifications, a Poisson generalized linear model revealed that in Study 1, Fringe believers (M = 0.41, SD = 0.63) did not differ significantly from Mainstream believers (M = 0.36, SD = 0.62). Similarly, in Study 2, Fringe believers (M = 0.34, SD = 0.62) did not significantly differ in the number of present-irrelevant justifications compared to Mainstream believers (M = 0.27, SD = 0.54). In combining the data, the main effect of belief group was also non-significant.

However, the main effect of condition in Study 1 was significant such that those who read the high quality report provided significantly fewer present-irrelevant justifications (M = 0.24, SD = 0.53) than those who read the low quality report (M = 0.52, SD = 0.68). The main effect of condition was also significant in Study 2, but the effect was in the opposite direction; those who read the high quality report provided significantly more justifications from this category (M = 0.39, SD = 0.64) than those who read the low quality report (M = 0.20, SD = 0.50). Similarly, in the analyses where belief was included as a continuous variable, being in the low quality condition was associated with more present-irrelevant justifications in Study 1 but fewer present-irrelevant justifications in Study 2. The main effect of condition when the data were combined, however, was non-significant for both types of analyses. All interaction effects were also non-significant. 

Self-Generated

For the Self-generated justifications, we conducted Poisson generalized linear models. In Study 1, Fringe believers (M = 0.96, SD = 0.85) and Mainstream believers (M = 0.81, SD = 0.84) did not significantly differ. In Study 2, Fringe believers (M = 0.72, SD = 0.87) and Mainstream believers (M = 0.51, SD = 0.70) did not significantly differ either. Nonetheless, for the combined data, there was a significant main effect where Fringe believers provided significantly more self-generated justifications (M = 0.84, SD = 0.86) compared to Mainstream believers (M = 0.65, SD = 0.78). When belief was included as a continuous variable, however, the effect of belief was non-significant for Studies 1 and 2, and for the combined analysis. All main effects for condition, and all interaction effects, were non-significant. 

Subcategory Prevalence

Finally, we counted how many participants in each group provided justifications from each of the 18 subcategories. The frequencies are presented in Table 3. In addition to reporting prevalence rates, we conducted chi-square analyses to statistically compare the groups in each subcategory. There were no significant group differences for any subcategories except for Ability in Study 1, but this too was non-significant after correcting for multiple tests.
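
These per-subcategory comparisons can be reproduced along the following lines in R; the counts and group sizes below are toy numbers rather than the frequencies reported in Table 3.

```r
# Sketch of the per-subcategory group comparisons with a Holm-Bonferroni adjustment.
group_n   <- c(Fringe = 90, Mainstream = 90)   # assumed group sizes
with_just <- list(subcat_a = c(24, 41),        # participants using each subcategory, per group
                  subcat_b = c(36, 37),
                  subcat_c = c(13, 22))

p_raw <- sapply(with_just, function(k) {
  tab <- rbind(present = k, absent = group_n - k)  # 2 x 2 contingency table
  chisq.test(tab)$p.value
})

p.adjust(p_raw, method = "holm")  # adjusted p-values for multiple comparisons
```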

Table 3

Prevalence of Justifications for each Subcategory for All Responses

Study 1 (N = 183), Study 2 (N = 193), Combined data (N = 376). Cells show the number (%) of Fringe and Mainstream participants providing each justification subcategory, with χ² p-values for the group comparison.

Foundation (PR). Study 1: Fringe 7 (7.6%), Mainstream 14 (15.4%), p = .256. Study 2: Fringe 4 (4.5%), Mainstream 12 (11.5%), p = .209. Combined: p = .062.
Field (PR). Study 1: Fringe 36 (39.1%), Mainstream 37 (40.7%), p = .978. Study 2: Fringe 39 (43.8%), Mainstream 59 (56.7%), p = .202. Combined: p = .317.
Specialty (PR). Study 1: Fringe 22 (23.9%), Mainstream 24 (26.4%), p = .929. Study 2: Fringe 40 (44.9%), Mainstream 56 (53.8%), p = .468. Combined: p = .400.
Ability (PR). Study 1: Fringe 24 (26.1%), Mainstream 41 (45.1%), p = .028*. Study 2: Fringe 27 (30.3%), Mainstream 36 (34.6%), p = .819. Combined: p = .069.
Opinion (PR). Study 1: Fringe 1 (1.1%), Mainstream 4 (4.4%), p = .390. Study 2: Fringe 0 (0%), Mainstream 4 (3.8%), p = .174. Combined: p = .080.
Support (PR). Study 1: Fringe 13 (14.1%), Mainstream 22 (24.2%), p = .225. Study 2: Fringe 10 (11.2%), Mainstream 10 (9.6%), p = .934. Combined: p = .597.
Consistency (PR). Study 1: Fringe 21 (22.8%), Mainstream 30 (33%), p = .310. Study 2: Fringe 38 (42.7%), Mainstream 57 (54.8%), p = .245. Combined: p = .058.
Trustworthiness (PR). Study 1: Fringe 11 (12%), Mainstream 18 (19.8%), p = .350. Study 2: Fringe 44 (49.4%), Mainstream 60 (57.7%), p = .518. Combined: p = .150.
Additional qualification (PI). Study 1: Fringe 14 (15.2%), Mainstream 7 (7.7%), p = .279. Study 2: Fringe 9 (10.1%), Mainstream 5 (4.8%), p = .367. Combined: p = .092.
Personal characteristics (PI). Study 1: Fringe 4 (4.3%), Mainstream 5 (5.5%), p = .938. Study 2: Fringe 4 (4.5%), Mainstream 1 (1.0%), p = .305. Combined: p = .790.
Anecdote (PI). Study 1: Fringe 20 (21.7%), Mainstream 21 (23.1%), p = .977. Study 2: Fringe 12 (13.5%), Mainstream 15 (14.4%), p = .983. Combined: p = .981.
Character (PI). Study 1: Fringe 0 (0%), Mainstream 0 (0%), p = NA. Study 2: Fringe 5 (5.6%), Mainstream 7 (6.7%), p = .950. Combined: p = .901.
Foundation (SG). Study 1: Fringe 10 (10.9%), Mainstream 10 (11.0%), p = 1.000. Study 2: Fringe 13 (14.6%), Mainstream 9 (8.7%), p = .431. Combined: p = .660.
Inaccurate (SG). Study 1: Fringe 11 (12%), Mainstream 14 (15.4%), p = .796. Study 2: Fringe 14 (15.7%), Mainstream 6 (5.8%), p = .077. Combined: p = .569.
Expertise (SG). Study 1: Fringe 8 (8.7%), Mainstream 2 (2.2%), p = .154. Study 2: Fringe 2 (2.2%), Mainstream 3 (2.9%), p = .962. Combined: p = .342.
Trustworthiness (SG). Study 1: Fringe 17 (18.5%), Mainstream 10 (11.0%), p = .361. Study 2: Fringe 4 (4.5%), Mainstream 8 (7.7%), p = .657. Combined: p = .753.
Anecdote (SG). Study 1: Fringe 5 (5.4%), Mainstream 1 (1.1%), p = .258. Study 2: Fringe 2 (2.2%), Mainstream 4 (3.8%), p = .816. Combined: p = .773.
Other (SG). Study 1: Fringe 37 (40.2%), Mainstream 37 (40.7%), p = .998. Study 2: Fringe 29 (32.6%), Mainstream 23 (22.1%), p = .263. Combined: p = .505.
Note. *Significant at p < .05, but non-significant when using Holm-Bonferroni adjustment. PR = Present-relevant, PI = Present-irrelevant, SG = Self-generated

Discussion

We investigated two explanations for why people might come to accept implausible claims: cognitive miserliness and information preferences. The Miserly Hypothesis suggests that those who accept misinformation do so because they have a lazy cognitive style where they avoid mental effort when evaluating information. The Information Preference Hypothesis, on the other hand, suggests that those who accept misinformation have an alternative epistemic framework and differ in what evidence they consider credible. Valuing certain kinds of information over others may explain why some people find implausible claims convincing.

We tested these hypotheses by comparing endorsers of implausible claims (Fringe believers) to non-endorsers (Mainstream believers) on reasoning tasks across two studies. Participants read either a high or low quality expert report and then rated the persuasiveness of the expert and their opinion, before typing an open-ended response to justify their ratings. Under the Miserly Hypothesis, Fringe believers should have provided significantly fewer justifications in their responses than Mainstream believers because this would reflect less mental effort. In line with the Information Preference Hypothesis, however, Fringe and Mainstream believers should have significantly differed in the composition of their justifications.

Counter to the Miserly Hypothesis, we did not find a significant difference between the groups in terms of response length or total number of justifications in Study 2, nor when the data from both studies were combined. However, in line with the Miserly Hypothesis, Fringe believers wrote significantly shorter responses in Study 1. When using a continuous measure of belief, higher belief was also predictive of fewer justifications in Study 1 and when the data from both studies were combined. In examining the quantity of participants’ responses, we therefore found mixed evidence that Fringe believers reason less effortfully.

In favour of the idea that the groups have different information preferences, we found consistent evidence that Fringe believers provided significantly fewer justifications relevant to expertise (i.e., the ExPEx attributes of foundation, field, specialty, ability, opinion, support, consistency, and trustworthiness; Martire, Edmond & Navarro, 2020) compared to Mainstream believers. In other words, Fringe believers relied less on normative indicators of epistemic quality compared to Mainstream believers. Additionally, in combining the data sets, we found that Fringe believers provided significantly more self-generated justifications (i.e., justifications not based on the information in the report, such as subjective opinions or experiences). However, this was not the case for each study individually, nor did it align with analyses where belief was included as a continuous variable. In all, the findings revealed that Fringe believers may have different underlying assumptions or preferences about what is important when reasoning about evidence, but this is largely characterised by fewer references to expertise-relevant information, with weak evidence that they gave more self-generated justifications.

Fringe believers’ less frequent reference to expertise-relevant cues could of course also be interpreted as a sign that they did not thoroughly analyse or encode the information presented to them. This result could therefore be construed to be in line with the Miserly Hypothesis. The quantitative ratings related to the present studies (reported in Robson et al., 2024), however, indicate that both groups were sensitive to evidence quality during the initial evaluation of the experts and their opinion, showing no difference in the conclusions they reached. Given this added context, the relative number of expertise-relevant justifications is more likely to reflect diverging information preferences rather than differences in effortful processing.

The findings therefore suggest that contrasting beliefs on topics like climate change and vaccines may stem from downplaying normative, high-quality information. And although we found weak evidence that Fringe believers justified their decisions based on self-generated information, the specific kinds of information they prefer could not be ascertained here. Believers of suspect claims may, for instance, favour evidence such as personal anecdotes, personal experiences, and narrative formats (see Dahlstrom, 2014; Rodriguez, 2016). Ultimately, more work is needed to understand the diverging evidentiary frameworks involved.

We also cannot be sure about the causal direction between information preferences and belief in implausible claims. Nera (2024) argued that the relationship between conspiracy mentality and belief in specific conspiracy theories, for example, may be bidirectional. Similarly, information preferences may lead one to believe implausible claims, but holding beliefs that run counter to scientific consensus may also influence what information one prefers.

Given the mixed evidence for the Miserly Hypothesis, labelling those who believe implausible claims as cognitive misers may be an over-generalisation—particularly if miserliness is defined as a cognitive style. If it were truly a style, it ought to manifest across all contexts. It is more plausible that people adopt different reasoning strategies depending on the situation or topic at hand. For example, reasoning on social media under conditions of information overload may lead to more superficial processing. People may also reason about evidence differently to how they assess the credibility of sources, and their reasoning about climate change may differ from how they reason about fake news or forensic evidence. Even Pennycook and Rand (2019), who have argued that accepting implausible news stories stems from a lack of analytical thinking, suggest that any such effect is likely situational and less important in domains where reflective thinkers do not have the requisite knowledge or training (e.g., climate science). 

Previous studies in support of the Miserly Hypothesis have predominantly focused on how people reason when initially exposed to false claims, with evidence showing that measures of analytic thinking are associated with accepting misinformation (see Pennycook et al., 2015 for a review). Our work instead focuses on justifications provided after participants evaluated a fictitious expert report. It is possible that open-ended questions of this sort may inaccurately capture the actual reasoning processes underlying participants’ decision-making (see Dang et al., 2020; Nisbett & Wilson, 1977). If so, our task may not adequately assess the level of effort people expended when deliberating. Our task may also have prompted more analytic processing than people would normally engage in when encountering information day-to-day. That said, carefully manipulated fictitious information is commonly used in psychological research to isolate specific cognitive mechanisms. Measures often used to assess the relationship between cognitive style and belief, like the CRT, are not without limitations either (see Martire et al., 2023; Stupple et al., 2017).

Another distinctive feature of our study lies in our recruitment strategy and group-comparison approach. This differs from many other studies that recruit general samples and measure belief on a continuum. Group differences can sometimes be non-significant even when a correlation that hinges on the same items is significant, and the reverse can also be true. A linear relationship reflects consistent co-variation between variables, but this does not always translate into mean differences between groups. Indeed, our findings sometimes varied depending on whether we compared Fringe believers to Mainstream believers or whether we used a continuous measure of belief to test for associations. The sample sizes also differed between the two approaches. Different ways of operationalising belief may account for some of our mixed results and why our findings diverge somewhat from previous studies.

Finally, we sometimes found an association between the justifications participants provided and whether they read the high or low quality report. Those who read the low quality report tended to provide significantly longer responses than those who read the high quality report, particularly in Study 1. It is possible that participants felt compelled to justify their decisions when presented with low quality evidence, especially when there was more conflicting information, like in Study 1. This evidence aligns with findings in Robson et al. (2024), which suggest that identifying low quality information requires additional deliberation and reflection. Additionally, reading the high quality report was associated with fewer irrelevant justifications in Study 1, but more irrelevant justifications in Study 2. The high quality report in Study 2 may have contained more irrelevant information to draw upon compared to Study 1. Of course, the studies differed in terms of the facts presented, the congruency of those facts, and the domain, meaning that any of these variables could account for discrepancies in findings between studies.

Several practical considerations stem from our primary findings. Some evidence suggests that false beliefs can be ‘corrected’ by simply encouraging deliberation (Bago et al., 2020). In light of the evidence here, however, another fruitful direction may be to educate people about the normative, high-quality indicators of epistemic quality (see, for example, Lutzke et al., 2019; Guess et al., 2020). Empirical work also suggests that those who believe misinformation tend to incorporate what they find persuasive when attempting to persuade others (Wood & Douglas, 2015). Altering the way that information is communicated and disseminated to fit with the epistemic frameworks and preferences of Fringe believers may therefore be a promising avenue for intervention.

Data Availability Statement

The data and analytic scripts for these studies can be found on the Open Science Framework at: https://osf.io/ebv9c/.

Conflicts of Interest

The authors declare no competing interests.

Acknowledgements

This research was funded by Australian Research Council Discovery Project DP220102412 to KAM & KF.

Author Contributions

In line with CRediT, the authors contributed to this manuscript as follows: conceptualisation (EG, SPJ, KAM & KF), data curation (SGR, EG & SPJ), formal analysis (SGR, EG & SPJ), funding acquisition (KAM & KF), investigation (EG & SPJ), methodology (all authors), project administration (SGR, EG & SPJ), supervision (KAM & KF), visualisation (SGR), Writing — original draft (SGR, EG & SPJ), Writing — reviewing and editing (SGR, MD, KAM & KF). Generative AI (ChatGPT) was used to enhance the clarity of writing and expression in this manuscript, to assist with revisions, and to assist with generating R code for the analyses. The authors assume complete accountability for the content.

Endnotes

[1] In the preregistrations, we refer to this hypothesis as the ‘limited’ explanation, but the way in which we operationalised the variables in fact aligns more with what we call information preferences.

References

Aarnio, K., & Lindeman, M. (2005). Paranormal beliefs, education, and thinking styles. Personality and Individual Differences, 39(7), 1227-1236. https://doi.org/10.1016/j.paid.2005.04.009

Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General, 149(8), 1608-1613. https://doi.org/10.1037/xge0000729

Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1-48. https://doi.org/10.18637/jss.v067.i01

Blacksmith, N., Yang, Y., Behrend, T. S., & Ruark, G. A. (2019). Assessing the validity of inferences from scores on the cognitive reflection test. Journal of Behavioral Decision Making, 32(5), 599-612. https://doi.org/10.1002/bdm.2133

Blankenship, E. B., Goff, M. E., Yin, J., Tse, Z. T. H., Fu, K. W., Liang, H., Saroha, N., & Fung, I. C. H. (2018). Sentiment, contents, and retweets: A study of two vaccine-related Twitter datasets. The Permanente Journal, 22, 17-138. https://doi.org/10.7812/TPP/17-138

Bowes, S. M., & Tasimi, A. (2022). Clarifying the relations between intellectual humility and pseudoscience beliefs, conspiratorial ideation, and susceptibility to fake news. Journal of Research in Personality, 98, 104220. https://doi.org/10.1016/j.jrp.2022.104220

Brashier, N. M. (2023). Do conspiracy theorists think too much or too little? Current Opinion in Psychology, 49, 101504. https://doi.org/10.1016/j.copsyc.2022.101504

Bryman, A. (2016). Social Research Methods (5th ed.). Oxford University Press.

Dahlstrom, M. F. (2014). Using narratives and storytelling to communicate science with nonexpert audiences. Proceedings of the National Academy of Sciences, 111, 13614-13620. https://doi.org/10.1073/pnas.1320645111

Dang, J., King, K. M., & Inzlicht, M. (2020). Why are self-report and behavioral measures weakly correlated? Trends in Cognitive Sciences, 24(4), 267-269. https://doi.org/10.1016/j.tics.2020.01.007

Enders, A.M., Uscinski, J.E., Seelig, M.I., Klofstad, C. A., Wuchty, S., Funchion, J. R., Murthi, M. N., Premaratne, K., & Stoler, J. (2023). The Relationship Between Social Media Use and Beliefs in Conspiracy Theories and Misinformation. Political Behavior, 45, 781–804. https://doi.org/10.1007/s11109-021-09734-6

Faasse, K., Chatman, C. J., & Martin, L. R. (2016). A comparison of language use in pro- and anti-vaccination comments in response to a high profile Facebook post. Vaccine, 34(47), 5808-5814. https://doi.org/10.1016/j.vaccine.2016.09.029

Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175-191. https://doi.org/10.3758/BF03193146

Fox, J., & Weisberg, S. (2019). An R Companion to Applied Regression (3rd ed). Sage. https://socialsciences.mcmaster.ca/jfox/Books/Companion/

Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25-42. https://doi.org/10.1257/089533005775196732

Frenken, M., Reusch, A., & Imhoff, R. (2024). “Just because it’s a conspiracy theory doesn’t mean they’re not out to get you”: Differentiating the correlates of judgments of plausible versus implausible conspiracy theories. Social Psychological and Personality Science, 19485506241240506. https://doi.org/10.1177/19485506241240506

Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences, 117(27), 15536-15545. https://doi.org/10.1073/pnas.1920498117

Harambam, J., & Aupers, S. (2015). Contesting epistemic authority: Conspiracy theories on the boundaries of science. Public Understanding of Science, 24(4), 466-480. https://doi.org/10.1177/0963662514559891

Imhoff, R., Lamberty, P., & Klein, O. (2018). Using power as a negative cue: How conspiracy mentality affects epistemic trust in sources of historical knowledge. Personality and Social Psychology Bulletin, 44(9), 1364-1379. https://doi.org/10.1177/0146167218768779

Jastrzębski, J., & Chuderski, A. (2022). Analytic thinking outruns fluid reasoning in explaining rejection of pseudoscience, paranormal, and conspiracist beliefs. Intelligence, 95, 101705. https://doi.org/10.1016/j.intell.2022.101705

Jolley, D., Marques, M. D., & Cookson, D. (2022). Shining a spotlight on the dangerous consequences of conspiracy theories. Current Opinion in Psychology, 47, 101363. https://doi.org/10.1016/j.copsyc.2022.101363

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Krumrei-Mancuso, E. J., & Rouse, S. V. (2016). The development and validation of the comprehensive intellectual humility scale. Journal of Personality Assessment, 98(2), 209-221. https://doi.org/10.1080/00223891.2015.1068174

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 159-174. https://doi.org/10.2307/2529310  

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353-369. https://doi.org/10.1016/j.jarmac.2017.07.008

Lutzke, L., Drummond, C., Slovic, P., & Árvai, J. (2019). Priming critical thinking: Simple interventions limit the influence of fake news about climate change on Facebook. Global Environmental Change, 58, 101964. https://doi.org/10.1016/j.gloenvcha.2019.101964

Lyons, B. A. (2023). How orientations to expertise condition the acceptance of (mis)information. Current Opinion in Psychology, 101714. https://doi.org/10.1016/j.copsyc.2023.101714

Marques, M. D., Ling, M., Williams, M. N., Kerr, J. R., & McLennan, J. (2022). Australasian public awareness and belief in conspiracy theories: Motivational correlates. Political Psychology, 43(1), 177-198. https://doi.org/10.1111/pops.12746

Martire, K. A., Edmond, G., & Navarro, D. (2020). Exploring juror evaluations of expert opinions using the Expert Persuasion Expectancy framework. Legal and Criminological Psychology, 25(2), 90-110. https://doi.org/10.1111/lcrp.12165

Martire, K. A., Growns, B., Bali, A. S., Montgomery-Farrer, B., Summersby, S., & Younan, M. (2020). Limited not lazy: A quasi-experimental secondary analysis of evidence quality evaluations by those who hold implausible beliefs. Cognitive Research: Principles and Implications, 5, 1-15. https://doi.org/10.1186/s41235-020-00264-z

Martire, K. A., & Montgomery-Farrer, B. (2020). Judging experts: Australian magistrates’ evaluations of expert opinion quality. Psychiatry, Psychology and Law, 27(6), 950-962. https://doi.org/10.1080/13218719.2020.1751334

Martire, K. A., Robson, S. G., Drew, M., Nicholls, K., & Faasse, K. (2023). Thinking false and slow: Implausible beliefs and the Cognitive Reflection Test. Psychonomic Bulletin & Review, 30(6), 2387-2396. https://doi.org/10.3758/s13423-023-02321-2

Nera, K. (2024). Thinking the relationships between conspiracy mentality and belief in conspiracy theories. Zeitschrift für Psychologie, 232(1), 64-67. https://doi.org/10.1027/2151-2604/a000551

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. https://doi.org/10.1037/0033-295X.84.3.231

Otero, I., Salgado, J. F., & Moscoso, S. (2022). Cognitive reflection, cognitive intelligence, and cognitive abilities: A meta-analysis. Intelligence, 90, 101614. https://doi.org/10.1016/j.intell.2021.101614

Patel, N., Baker, S. G., & Scherer, L. D. (2019). Evaluating the cognitive reflection test as a measure of intuition/reflection, numeracy, and insight problem solving, and the implications for understanding real-world judgments and beliefs. Journal of Experimental Psychology: General, 148(12), 2129–2153. https://doi.org/10.1037/xge0000592

Pennycook, G., Cheyne, J. A., Seli, P., Koehler, D. J., & Fugelsang, J. A. (2012). Analytic cognitive style predicts religious and paranormal belief. Cognition, 123(3), 335-346. https://doi.org/10.1016/j.cognition.2012.03.003

Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015). Everyday consequences of analytic thinking. Current Directions in Psychological Science, 24(6), 425-432. https://doi.org/10.1177/0963721415604610

Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39-50. https://doi.org/10.1016/j.cognition.2018.06.011

Riff, D., Lacy, S., & Fico, F. (2014). Analyzing media messages: Using quantitative content analysis in research. Routledge. https://doi.org/10.4324/9780429464287

Robson, S. G., Faasse, K., Gordon, E-R., Jones, S. P., Smith, N., & Martire, K. A. (2024). People who believe implausible claims are not cognitive misers: Evidence from evaluation tasks. Journal of Applied Research in Memory and Cognition. Advance online publication. https://doi.org/10.1037/mac0000190

Rodriguez, N. J. (2016). Vaccine-hesitant justifications: “Too many, too soon,” narrative persuasion, and the conflation of expertise. Global Qualitative Nursing Research, 3. https://doi.org/10.1177/233339361666330

Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L., Recchia, G., van der Bles, A. M., & van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), 201199. https://doi.org/10.1098/rsos.201199

Sanchez, C., Sundermeier, B., Gray, K., & Calin-Jageman, R. J. (2017). Direct replication of Gervais & Norenzayan (2012): No evidence that analytic thinking decreases religious belief. PloS One, 12(2), e0172636. https://doi.org/10.1371/journal.pone.0172636

Scherer, L. D., McPhetres, J., Pennycook, G., Kempe, A., Allen, L. A., Knoepke, C. E., Tate, C. E., & Matlock, D. D. (2021). Who is susceptible to online health misinformation? A test of four psychosocial hypotheses. Health Psychology, 40(4), 274. https://doi.org/10.1037/hea0000978

Stupple, E. J., Pitchford, M., Ball, L. J., Hunt, T. E., & Steel, R. (2017). Slower is not always better: Response-time evidence clarifies the limited role of miserly information processing in the Cognitive Reflection Test. PloS One, 12(11), e0186404. https://doi.org/10.1371/journal.pone.0186404

Ward, P. R., Attwell, K., Meyer, S. B., Rokkas, P., & Leask, J. (2017). Understanding the perceived logic of care by vaccine-hesitant and vaccine-refusing parents: A qualitative study in Australia. PloS One, 12(10), e0185955. https://doi.org/10.1371/journal.pone.0185955

Weber, R. (1985). Basic Content Analysis. Sage Publications.

van Prooijen, J.-W., Wahring, I., Mausolf, L., Mulas, N., & Shwan, S. (2023). Just dead, not alive: Reconsidering belief in contradictory conspiracy theories. Psychological Science, 34(6), 670-682. https://doi.org/10.1177/09567976231158570

Većkalov, B., Gligorić, V., & Petrović, M. B. (2024). No evidence that priming analytic thinking reduces belief in conspiracy theories: A Registered Report of high-powered direct replications of Study 2 and Study 4 from Swami, Voracek, Stieger, Tran, and Furnham (2014). Journal of Experimental Social Psychology, 110, 104549. https://doi.org/10.1016/j.jesp.2023.104549

White, M. D., & Marsh, E. E. (2006). Content analysis: A flexible methodology. Library Trends, 55(1), 22-45. https://doi.org/10.1353/lib.2006.0053

Wood, M. J., & Douglas, K. M. (2015). Online communication as a window to conspiracist worldviews. Frontiers in Psychology, 6, 115799. https://doi.org/10.3389/fpsyg.2015.00836

Yelbuz, B. E., Madan, E., & Alper, S. (2022). Reflective thinking predicts lower conspiracy beliefs: A meta-analysis. Judgment and Decision Making, 17(4), 720-744. https://doi.org/10.1017/S1930297500008913