Durably reducing conspiracy beliefs through dialogues with AI
Published in Science, 13 September 2024.
Cite as: Thomas H. Costello et al., Science (2024).
Milestone results pinpoint urgent need to explain mechanisms that refute conspiracy beliefs
Costello et al.’s use of a large language model (LLM, here GPT-4 Turbo) to reduce beliefs in conspiracy theories is innovative and inspiring (1). That this intervention worked so well—reducing conspiracy beliefs by 20% with the effect persisting for at least 2 months and extending to unrelated conspiracy theories—was no small feat. When faced with such impressive results from an intervention, it is critical to understand why it worked.
According to the authors, the “causal mechanisms underpinning our results remain unformalized” and “the specific cognitive or psychological processes through which this change [in conspiracy beliefs] occurs are unusually difficult to confirm” (1, p. 8). They suggest that “fact-based argumentation was the focal point of each interaction” and call for future research to examine this possibility. However, decades of research suggest that the mechanism underlying this intervention must involve gist. According to fuzzy-trace theory, people represent information in two main ways: gist, which captures the essence or underlying meaning, and verbatim, which is a literal representation of the facts (2). Gist-based explanations allow people to understand why their conspiracy theories are implausible or fail to make sense, whereas a verbatim approach might knock down individual supporting facts as false without explaining why.
Two key results show that gist is what powered this intervention: (a) conspiracy beliefs “not targeted by the conversation with the AI model” also decreased, and (b) the positive effect persisted after a 2-month delay (1). Such far transfer and long-term retention are hallmarks of gist thinking (3); therefore, the effects achieved here cannot be due to verbatim representation of specific facts alone. Future research should test such gist-based explanations alongside other candidate mechanisms. Identifying causal mechanisms is not merely academic: the democratic process relies on a scientifically literate populace that is not susceptible to illusions on the left or the right.
References