Abstract
Do citizens update strongly held beliefs when presented with belief-incongruent information, and does such updating affect downstream attitudes? Though fact-checking studies find that corrections reliably influence beliefs, the resulting attitudinal effects are typically negligible. We argue that such findings may reflect belief relevance: the extent to which specific beliefs bear on attitudes. Using large language models (LLMs), we elicit deeply held issue attitudes and "focal beliefs" that participants describe as central to those attitudes. We then randomly assign participants to receive either an LLM-generated factual argument targeting (1) their focal belief, (2) an attitude-relevant but unmentioned belief (a "distal belief"), or (3) a placebo. In experiments with two large online convenience samples, we show that counterarguments successfully decrease both focal and distal belief strength, with effects persisting after one week. More importantly, focal belief counterarguments produce larger and more durable attitude change than distal counterarguments.
Supplementary materials
Appendix: Supplementary appendix