Editor’s summary

Belief in conspiracy theories claiming that a US election was stolen incited an attempted insurrection on 6 January 2021. Another conspiracy theory, alleging that Germany’s COVID-19 restrictions were motivated by nefarious intentions, sparked violent protests at Berlin’s Reichstag parliament building in August 2020. Amid growing threats to democracy, Costello et al. investigated whether dialogues with a generative artificial intelligence (AI) interface could convince people to abandon their conspiratorial beliefs (see the Perspective by Bago and Bonnefon). Human participants described a conspiracy theory that they subscribed to, and the AI then engaged them in persuasive arguments that refuted their beliefs with evidence. The AI chatbot’s ability to sustain tailored counterarguments and personalized in-depth conversations reduced participants’ beliefs in conspiracies for months, challenging research suggesting that such beliefs are impervious to change. This intervention illustrates how deploying AI may mitigate conflicts and serve society. —Ekeoma Uzogara

Structured Abstract

INTRODUCTION

Widespread belief in unsubstantiated conspiracy theories is a major source of public concern and a focus of scholarly research. Despite often being quite implausible, many such conspiracies are widely believed. Prominent psychological theories propose that many people want to adopt conspiracy theories (to satisfy underlying psychic “needs” or motivations), and thus, believers cannot be convinced to abandon these unfounded and implausible beliefs using facts and counterevidence. Here, we question this conventional wisdom and ask whether it may be possible to talk people out of the conspiratorial “rabbit hole” with sufficiently compelling evidence.

RATIONALE

We hypothesized that interventions based on factual, corrective information may seem ineffective simply because they lack sufficient depth and personalization. To test this hypothesis, we leveraged advancements in large language models (LLMs), a form of artificial intelligence (AI) that has access to vast amounts of information and the ability to generate bespoke arguments. LLMs can thereby directly refute particular evidence each individual cites as supporting their conspiratorial beliefs.
To do so, we developed a pipeline for conducting behavioral science research using real-time, personalized interactions between research subjects and AI. Across two experiments, 2190 Americans articulated—in their own words—a conspiracy theory in which they believe, along with the evidence they think supports this theory. They then engaged in a three-round conversation with the LLM GPT-4 Turbo, which we prompted to respond to this specific evidence while trying to reduce participants’ belief in the conspiracy theory (or, as a control condition, to converse with the AI about an unrelated topic).
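A minimal sketch of this kind of dialogue loop appears below. It is an illustration rather than the authors’ released code: the prompt wording, the model identifier, and the get_reply callback (standing in for the survey interface that collected participants’ replies) are assumptions.

```python
# Illustrative sketch of a three-round debunking dialogue; not the study's
# actual pipeline. Assumes the openai Python package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_debunking_dialogue(conspiracy: str, evidence: str, get_reply) -> list[dict]:
    """Run three rounds of tailored counterargument against one stated conspiracy."""
    messages = [{
        "role": "system",
        "content": (
            "A participant believes this conspiracy theory: " + conspiracy
            + "\nTheir supporting evidence: " + evidence
            + "\nRespond to their specific evidence with accurate counterarguments "
            "and politely try to reduce their belief in the theory."
        ),
    }]
    user_turn = evidence  # open with the participant's own rationale
    for round_idx in range(3):  # the paper describes a three-round conversation
        messages.append({"role": "user", "content": user_turn})
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # the paper used GPT-4 Turbo
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if round_idx < 2:
            user_turn = get_reply(reply)  # hypothetical callback: participant's next message
    return messages
```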

RESULTS

The treatment reduced participants’ belief in their chosen conspiracy theory by 20% on average. This effect persisted undiminished for at least 2 months; was consistently observed across a wide range of conspiracy theories, from classic conspiracies involving the assassination of John F. Kennedy, aliens, and the Illuminati, to those pertaining to topical events such as COVID-19 and the 2020 US presidential election; and occurred even for participants whose conspiracy beliefs were deeply entrenched and important to their identities. Notably, the AI did not reduce belief in true conspiracies. Furthermore, when a professional fact-checker evaluated a sample of 128 claims made by the AI, 99.2% were true, 0.8% were misleading, and none were false. The debunking also spilled over to reduce beliefs in unrelated conspiracies, indicating a general decrease in conspiratorial worldview, and increased intentions to rebut other conspiracy believers.
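To make the headline figure concrete, the toy computation below uses hypothetical belief ratings on the 0-to-100 scale the study employed. The numbers are invented, and reading the ~20% as a mean per-participant relative change is an assumption made for illustration.

```python
# Hypothetical ratings only; not the study data.
import statistics

pre  = [80, 90, 70, 100, 85]  # belief before the dialogue (0-100 scale)
post = [60, 75, 55,  80, 70]  # belief after the dialogue

relative_change = [(b - a) / a for a, b in zip(pre, post)]
print(f"mean relative change: {statistics.mean(relative_change):.1%}")  # about -20%
```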

CONCLUSION

Many people who strongly believe in seemingly fact-resistant conspiratorial beliefs can change their minds when presented with compelling evidence. From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit. Psychological needs and motivations do not inherently blind conspiracists to evidence—it simply takes the right evidence to reach them. Practically, by demonstrating the persuasive power of LLMs, our findings emphasize both the potential positive impacts of generative AI when deployed responsibly and the pressing importance of minimizing opportunities for this technology to be used irresponsibly.
Dialogues with AI durably reduce conspiracy beliefs even among strong believers.
(Left) Average belief in participant’s chosen conspiracy theory by condition (treatment, in which the AI attempted to refute the conspiracy theory, in red; control, in which the AI discussed an irrelevant topic, in blue) and time point for study 1. (Right) Change in belief in chosen conspiracy from before to after AI conversation, by condition and participant’s pretreatment belief in the conspiracy.

Abstract

Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.


Supplementary Materials

This PDF file includes:

Supplementary Text
Figs. S1 to S15
Tables S1 to S21
References (77–89)

References and Notes

1
S. M. Bowes, T. H. Costello, A. Tasimi, The conspiratorial mind: A meta-analytic review of motivational and personological correlates. Psychol. Bull. 149, 259–293 (2023).
2
M. Butter, P. Knight, Eds., Routledge Handbook of Conspiracy Theories (Routledge, 2020).
3
K. M. Douglas, R. M. Sutton, What are conspiracy theories? A definitional approach to their correlates, consequences, and communication. Annu. Rev. Psychol. 74, 271–298 (2023).
4
J. E. Oliver, T. J. Wood, Conspiracy theories and the paranoid style(s) of mass opinion. Am. J. Pol. Sci. 58, 952–966 (2014).
5
H. G. West, T. Sanders, Eds., Transparency and Conspiracy: Ethnographies of Suspicion in the New World Order (Duke Univ. Press, 2003).
6
J. E. Uscinski, J. M. Parent, American Conspiracy Theories (Oxford Univ. Press, 2014).
7
J.-W. van Prooijen, K. M. Douglas, Belief in conspiracy theories: Basic principles of an emerging research domain. Eur. J. Soc. Psychol. 48, 897–908 (2018).
8
S. Lewandowsky, G. E. Gignac, K. Oberauer, The role of conspiracist ideation and worldviews in predicting rejection of science. PLOS ONE 8, e75637 (2013).
9
M. G. Napolitano, “Conspiracy theories and resistance to evidence,” thesis, University of California, Irvine (2022).
10
C. R. Sunstein, A. Vermeule, Conspiracy theories: Causes and cures. J. Polit. Philos. 17, 202–227 (2009).
11
C. O’Mahony, M. Brassil, G. Murphy, C. Linehan, The efficacy of interventions in reducing belief in conspiracy theories: A systematic review. PLOS ONE 18, e0280902 (2023).
12
L. Stasielowicz, The effectiveness of interventions addressing conspiracy beliefs: A meta-analysis. PsyArXiv [Preprint] (2024); https://doi.org/10.31234/osf.io/6vs5u.
13
J. K. Madsen, L. de-Wit, P. Ayton, C. Brick, L. de-Moliere, C. J. Groom, Behavioral science should start by assuming people are reasonable. Trends Cogn. Sci. 28, 583–585 (2024).
14
G. Pennycook, Chapter Three - A framework for understanding reasoning errors: From fake news to climate change and beyond. Adv. Exp. Soc. Psychol. 67, 131–208 (2023).
15
K. M. Douglas, R. M. Sutton, A. Cichocka, The psychology of conspiracy theories. Curr. Dir. Psychol. Sci. 26, 538–542 (2017).
16
J. T. Jost, A. Ledgerwood, C. D. Hardin, Shared reality, system justification, and the relational basis of ideological beliefs. Soc. Personal. Psychol. Compass 2, 171–186 (2008).
17
R. Hofstadter, The Paranoid Style in American Politics (Knopf Doubleday Publishing Group, 1964).
18
J. A. Whitson, A. D. Galinsky, Lacking control increases illusory pattern perception. Science 322, 115–117 (2008).
19
S. Lewandowsky, J. Cook, K. Oberauer, S. Brophy, E. A. Lloyd, M. Marriott, Recurrent fury: Conspiratorial discourse in the blogosphere triggered by research on the role of conspiracist ideation in climate denial. J. Soc. Polit. Psych. 3, 142–178 (2015).
20
J.-W. van Prooijen, N. B. Jostmann, Belief in conspiracy theories: The influence of uncertainty and perceived morality. Eur. J. Soc. Psychol. 43, 109–115 (2013).
21
J.-W. van Prooijen, An existential threat model of conspiracy theories. Eur. Psychol. 25, 16–25 (2020).
22
A. Lantian, D. Muller, C. Nurra, K. M. Douglas, I know things they don’t know! Soc. Psychol. (Gott.) 48, 160–173 (2017).
23
M. Biddlestone, R. Green, K. Douglas, F. Azevedo, R. M. Sutton, A. Cichocka, Reasons to believe: A systematic review and meta-analytic synthesis of the motives associated with conspiracy beliefs. PsyArXiv [Preprint] (2022); https://doi.org/10.31234/osf.io/rxjqc.
24
M. Biddlestone, R. Green, A. Cichocka, R. Sutton, K. Douglas, Conspiracy beliefs and the individual, relational, and collective selves. Soc. Personal. Psychol. Compass 15, e12639 (2021).
25
A. Cichocka, M. Marchlewska, A. Golec de Zavala, M. Olechowski, ‘They will not control us’: Ingroup positivity and belief in intergroup conspiracies. Br. J. Psychol. 107, 556–576 (2016).
26
A. Sternisko, A. Cichocka, A. Cislak, J. J. Van Bavel, National narcissism predicts the belief in and the dissemination of conspiracy theories during the COVID-19 pandemic: Evidence from 56 countries. Pers. Soc. Psychol. Bull. 49, 48–65 (2023).
27
R. Brotherton, Suspicious Minds: Why We Believe Conspiracy Theories (Bloomsbury Publishing, 2015).
28
R. K. Garrett, B. E. Weeks, Epistemic beliefs’ role in promoting misperceptions and conspiracist ideation. PLOS ONE 12, e0184733 (2017).
29
N. Dagnall, K. Drinkwater, A. Parker, A. Denovan, M. Parton, Conspiracy theory and cognitive style: A worldview. Front. Psychol. 6, 206 (2015).
30
S. Novella, The Skeptics’ Guide to the Universe: How to Know What’s Really Real in a World Increasingly Full of Fake (Hachette UK, 2018).
31
P. M. Fernbach, J. E. Bogard, Conspiracy theory as individual and group behavior: Observations from the Flat Earth International Conference. Top. Cogn. Sci. 16, 187–205 (2024).
32
H. Wang, J. Li, H. Wu, E. Hovy, Y. Sun, Pre-trained language models and their applications. Engineering (Beijing) 25, 51–65 (2023).
33
OpenAI, GPT-4 technical report. arXiv:2303.08774 [cs.CL] (2024).
34
K. Arceneaux, B. N. Bakker, N. Fasching, Y. Lelkes, A critical evaluation and research agenda for the study of psychological dispositions and political attitudes. Polit. Psychol. 10.1111/pops.12958 (2024).
35
W. Yaqub, O. Kakhidze, M. L. Brockman, N. Memon, S. Patil, “Effects of credibility indicators on social media news sharing intent” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (ACM, 2020), pp. 1–14.
36
R. Böhm, M. Jörling, L. Reiter, C. Fuchs, People devalue generative AI’s competence but not its advice in addressing societal and personal challenges. Commun. Psychol. 1, 32 (2023).
37
Y. Zhang, R. Gosline, Human favoritism, not AI aversion: People’s perceptions (and bias) toward generative AI, human experts, and human–GAI collaboration in persuasive content generation. Judgm. Decis. Mak. 18, e41 (2023).
38
V. Veselovsky, M. H. Ribeiro, P. Cozzolino, A. Gordon, D. Rothschild, R. West, Prevalence and prevention of large language model use in crowd work. arXiv:2310.15683 [cs.CL] (2023).
39
M. N. Stagnaro, D. G. Rand, “The coevolution of religious belief and intuitive cognitive style via individual-level selection” in The Oxford Handbook of Evolutionary Psychology and Religion, J. R. Liddle, T. K. Shackelford, Eds. (Oxford Univ. Press, 2016), pp. 153–173.
40
S. Athey, J. Tibshirani, S. Wager, Generalized random forests. Ann. Stat. 47, 1148–1178 (2019).
41
S. Hughes, M. Bae, M. Li, Vectara Hallucination Leaderboard [Data set], GitHub (2023); https://github.com/vectara/hallucination-leaderboard.
42
M. J. Hornsey, K. Bierwiaczonek, K. Sassenberg, K. M. Douglas, Individual, intergroup and nation-level influences on belief in conspiracy theories. Nat. Rev. Psychol. 2, 85–97 (2023).
43
G. Pennycook, J. A. Cheyne, P. Seli, D. J. Koehler, J. A. Fugelsang, Analytic cognitive style predicts religious and paranormal belief. Cognition 123, 335–346 (2012).
44
G. Pennycook, D. G. Rand, Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188, 39–50 (2019).
45
G. Pennycook, J. A. Cheyne, N. Barr, D. J. Koehler, J. A. Fugelsang, On the reception and detection of pseudo-profound bullshit. Judgm. Decis. Mak. 10, 549–563 (2015).
46
B. M. Tappin, A. J. Berinsky, D. G. Rand, Partisans’ receptivity to persuasive messaging is undiminished by countervailing party leader cues. Nat. Hum. Behav. 7, 568–582 (2023).
47
R. E. Petty, J. T. Cacioppo, “The elaboration likelihood model of persuasion” in Communication and Persuasion: Central and Peripheral Routes to Attitude Change (Springer, 1986), pp. 1–24.
48
E. Porter, Y. Velez, T. J. Wood, Factual corrections eliminate false beliefs about COVID-19 vaccines. Public Opin. Q. 86, 762–773 (2022).
49
G. Orosz, P. Krekó, B. Paskuj, I. Tóth-Király, B. Bőthe, C. Roland-Lévy, Changing conspiracy beliefs through rationality and ridiculing. Front. Psychol. 7, 1525 (2016).
50
J. A. Banas, G. Miller, Inducing resistance to conspiracy theory propaganda: Testing inoculation and metainoculation strategies. Hum. Commun. Res. 39, 184–207 (2013).
51
E. Bonetto, J. Troïan, F. Varet, G. Lo Monaco, F. Girandola, Priming resistance to persuasion decreases adherence to conspiracy theories. Soc. Influence 13, 125–136 (2018).
52
V. Swami, J. Pietschnig, U. S. Tran, I. W. Nader, S. Stieger, M. Voracek, Lunar lies: The impact of informational framing and individual differences in shaping conspiracist beliefs about the moon landings. Appl. Cogn. Psychol. 27, 71–80 (2013).
53
D. Jolley, K. M. Douglas, Prevention is better than cure: Addressing anti-vaccine conspiracy theories. J. Appl. Soc. Psychol. 47, 459–469 (2017).
54
S. Altay, A.-S. Hacquin, C. Chevallier, H. Mercier, Information delivered by a chatbot has a positive impact on COVID-19 vaccines attitudes and intentions. J. Exp. Psychol. Appl. 29, 52–62 (2023).
55
S. Altay, M. Schwartz, A.-S. Hacquin, A. Allard, S. Blancke, H. Mercier, Scaling up interactive argumentation by providing counterarguments with a chatbot. Nat. Hum. Behav. 6, 579–592 (2022).
56
E. Klein, “This changes everything,” The New York Times, 12 March 2023; https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html.
57
D. Allen, E. G. Weyl, The real dangers of generative AI. J. Democracy 35, 147–162 (2024).
58
M. Phuong, M. Aitchison, E. Catt, S. Cogan, A. Kaskasoli, V. Krakovna, D. Lindner, M. Rahtz, Y. Assael, S. Hodkinson, H. Howard, T. Lieberum, R. Kumar, M. A. Raad, A. Webson, L. Ho, S. Lin, S. Farquhar, M. Hutter, G. Deletang, A. Ruoss, S. El-Sayed, S. Brown, A. Dragan, R. Shah, A. Dafoe, T. Shevlane, Evaluating frontier models for dangerous capabilities. arXiv:2403.13793 [cs.LG] (2024).
59
M. Burtell, T. Woodside, Artificial influence: An analysis of AI-driven persuasion. arXiv:2303.08721 [cs.CY] (2023).
60
Y. Velez, Crowdsourced adaptive surveys. arXiv:2401.12986 [cs.CL] (2024).
61
Y. R. Velez, P. Liu, Confronting core issues: A critical assessment of attitude polarization. Am. Polit. Sci. Rev. 10.1017/S0003055424000819 (2024).
62
H. Bai, J. Voelkel, J. Eichstaedt, R. Willer, Artificial intelligence can persuade humans on political issues. OSF Preprints (2023); https://doi.org/10.31219/osf.io/stakv.
63
E. Karinshak, S. X. Liu, J. S. Park, J. T. Hancock, Working with AI to persuade: Examining a large language model’s ability to generate pro-vaccination messages. Proc. ACM Hum. Comput. Interact. 7, 116 (2023).
64
K. Hackenburg, H. Margetts, Evaluating the persuasive influence of political microtargeting with large language models. Proc. Natl. Acad. Sci. U.S.A. 121, e2403116121 (2024).
65
S. C. Matz, J. D. Teeny, S. S. Vaid, H. Peters, G. M. Harari, M. Cerf, The potential of generative AI for personalized persuasion at scale. Sci. Rep. 14, 4692 (2024).
66
E. Durmus, L. Lovitt, A. Tamkin, S. Ritchie, J. Clark, D. Ganguli, “Measuring the persuasiveness of language models,” Anthropic, 9 April 2024; https://www.anthropic.com/news/measuring-model-persuasiveness.
67
M. N. Williams, M. Ling, J. R. Kerr, S. R. Hill, M. D. Marques, H. Mawson, E. J. R. Clarke, People do change their beliefs about conspiracy theories-but not often. Sci. Rep. 14, 3836 (2024).
68
C. Olah, A. Jermyn, “Reflections on qualitative research,” Transformer Circuits Thread (2024); https://transformer-circuits.pub/2024/qualitative-essay/index.html.
69
Z. Hussain, M. Binz, R. Mata, D. U. Wulff, A tutorial on open-source large language models for behavioral science. PsyArXiv [Preprint] (2023); https://doi.org/10.31234/osf.io/f7stn.
70
A. Spirling, Why open-source generative AI models are an ethical way forward for science. Nature 616, 413 (2023).
71
I. Grossmann, M. Feinberg, D. C. Parker, N. A. Christakis, P. E. Tetlock, W. A. Cunningham, AI and the transformation of social science research. Science 380, 1108–1109 (2023).
72
K. Hackenburg, B. M. Tappin, P. Röttger, S. Hale, J. Bright, H. Margetts, Evidence of a log scaling law for political persuasion with large language models. arXiv:2406.14508 [cs.CL] (2024).
73
V. Swami, T. Chamorro-Premuzic, A. Furnham, Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs. Appl. Cogn. Psychol. 24, 749–761 (2010).
74
M. Faverio, A. Tyson, “What the data says about Americans’ views of artificial intelligence,” Pew Research Center, 21 November 2023; https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/.
75
OECD, OECD Guidelines on Measuring Trust (Organisation for Economic Cooperation and Development, 2017); https://www.oecd-ilibrary.org/governance/oecd-guidelines-on-measuring-trust_9789264278219-en.
76
T. Costello, G. Pennycook, D. Rand, Durably reducing conspiracy beliefs through dialogues with AI [Dataset], Dryad (2024); https://doi.org/10.5061/dryad.v6wwpzh4h.
77
W. Lin, Agnostic notes on regression adjustments to experimental data: Reexamining Freedman’s critique. Ann. Appl. Stat. 7, 295–318 (2013).
78
K. D. Carlson, F. L. Schmidt, Impact of experimental design on effect size: Findings from the research literature on training. J. Appl. Psychol. 84, 851–862 (1999).
79
M. C. Fox, K. A. Ericsson, R. Best, Do procedures for verbal reporting of thinking have to be reactive? A meta-analysis and recommendations for best reporting methods. Psychol. Bull. 137, 316–344 (2011).
80
C. Roberts, E. Gilbert, N. Allum, L. Eisner, Research synthesis: Satisficing in surveys: A systematic review of the literature. Public Opin. Q. 83, 598–626 (2019).
81
A. Neelakantan, T. Xu, R. Puri, A. Radford, J. M. Han, J. Tworek, Q. Yuan, N. Tezak, J. W. Kim, C. Hallacy, J. Heidecke, P. Shyam, B. Power, T. E. Nekoul, G. Sastry, G. Krueger, D. Schnurr, F. P. Such, K. Hsu, M. Thompson, T. Khan, T. Sherbakov, J. Jang, P. Welinder, L. Weng, Text and code embeddings by contrastive pre-training. arXiv:2201.10005 [cs.CL] (2022).
82
M. Ester, H.-P. Kriegel, J. Sander, X. Xu, “A density-based algorithm for discovering clusters in large spatial databases with noise,” in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, OR, 2 to 4 August 1996.
83
M. Hahsler, M. Piekenbrock, D. Doran, dbscan: Fast density-based clustering with R. J. Stat. Softw. 91, 1–30 (2019).
84
F. Gilardi, M. Alizadeh, M. Kubli, ChatGPT outperforms crowd workers for text-annotation tasks. Proc. Natl. Acad. Sci. U.S.A. 120, e2305016120 (2023).
85
C. Ziems, W. Held, O. Shaikh, J. Chen, Z. Zhang, D. Yang, Can large language models transform computational social science? arXiv:2305.03514 [cs.CL] (2024).
86
S. Rathje, D.-M. Mirea, I. Sucholutsky, R. Marjieh, C. Robertson, J. J. Van Bavel, GPT is an effective tool for multilingual psychological text analysis. PsyArXiv [Preprint] (2024); https://doi.org/10.31234/osf.io/sekf5.
87
P. Y. Wu, J. Nagler, J. A. Tucker, S. Messing, Large language models can be used to estimate the latent positions of politicians. arXiv:2303.12057 [cs.CY] (2023).
88
B. Min, H. Ross, E. Sulem, A. P. B. Veyseh, T. H. Nguyen, O. Sainz, E. Agirre, I. Heintz, D. Roth, Recent advances in natural language processing via large pre-trained language models: A survey. ACM Comput. Surv. 56, 30 (2023).
89
A. Stavropoulos, D. L. Crone, I. Grossmann, Shadows of wisdom: Classifying meta-cognitive and morally grounded narrative content via large language models. Behav. Res. Methods 10.3758/s13428-024-02441-0 (2024).

Information & Authors

Information

Published In

Science
Volume 385 | Issue 6714
13 September 2024

Submission history

Received: 1 May 2024
Accepted: 18 July 2024
Published in print: 13 September 2024

Acknowledgments

Funding: MIT Generative AI Initiative (D.G.R.) and John Templeton Foundation Grant 61779 (G.P.).
Author contributions: Conceptualization: T.H.C., G.P., and D.R. Methodology: T.H.C., G.P., and D.R. Investigation: T.H.C., G.P., and D.R. Visualization: T.H.C., G.P., and D.R. Funding acquisition: G.P. and D.R. Project administration: T.H.C. and D.R. Supervision: G.P. and D.R. Writing – original draft: T.H.C., G.P., and D.R. Writing – review & editing: T.H.C., G.P., and D.R.
Competing interests: The authors declare that they have no competing interests.
Data and materials availability: Relevant data, analytic code, study materials, and preregistration documents are accessible in Dryad (76).
License information: Copyright © 2024 the authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original US government works. https://www.science.org/about/science-licenses-journal-article-reuse

Authors

Thomas H. Costello*
Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Psychology, American University, Washington, DC, USA.
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, and Writing – review & editing.

Gordon Pennycook
Department of Psychology, Cornell University, Ithaca, NY, USA.
Roles: Conceptualization, Funding acquisition, Methodology, Project administration, Resources, Supervision, Visualization, and Writing – review & editing.

David G. Rand
Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA.
Roles: Conceptualization, Funding acquisition, Methodology, Project administration, Supervision, Validation, and Writing – review & editing.

Notes

*Corresponding author. Email: [email protected]

