Abstract
As conversational AI increasingly replaces traditional search, concerns arise about how engagement-optimized chatbots shape the neutrality and consistency of information. Unlike search engines, chatbots generate real-time responses that adapt to prior conversational turns, creating the possibility of tailoring information to users’ beliefs. We audit two leading systems—ChatGPT and Grok—to test whether they present systematically different political realities to users with distinct inferred ideologies. Using LLM-powered confederates who adopt varied political personas without explicitly stating ideology, we conduct multi-turn conversations on immigration, election integrity, and vaccine safety. We find consistent evidence of ideological pandering: chatbots adjust agreement, validation, and confidence in factual claims to inferred ideology. They recommend ideologically distinct news sources, converge toward users’ initial viewpoints in 60–90% of conversations, and express differing confidence in identical facts. Pandering is strongest among extreme personas and emerges quickly, sometimes escalating to encouragement of real-world action aligned with users’ views, reinforcing epistemic fragmentation.
Supplementary materials
Title
Supplementary Information: AI Pandering: Constructing Diverging Political Realities through Conversation
Description
This document provides supplementary materials for the main manuscript. It is organized as follows: Section A provides additional methodological details for each of the three primary measures of AI pandering. Section B presents a full replication of the main analysis using Grok (xAI) in place of ChatGPT, including a comparison of the two systems. Section C characterizes the dynamics of pandering—how rapidly sycophancy emerges within a conversation and whether it persists across conversations on unrelated topics by the same persona. Section D examines the moderating roles of conversational tone and ideological extremity. Section E examines AI pandering on two additional non-political questions—restaurant and book recommendations. Section F extends the analysis to naturalistic human–chatbot interactions drawn from the WildChat corpus (Zhao et al., 2024), testing whether the framing-adoption pattern documented in the controlled audit also appears in unscripted, real-world conversations. All conversation logs are available at: https://diana-da-in-lee.shinyapps.io/ai_pandering/.