Abstract
Large language models (LLMs) are reshaping information consumption and influencing public discourse, raising concerns over their role in narrative control and polarization. This study applies Wittgenstein's theory of language games to analyze the worldviews embedded in the responses of four LLMs. Surface-level analysis revealed minimal variability in semantic similarity, thematic focus, and sentiment patterns. Deep analysis, however, using zero-shot classification across geopolitical, ideological, and philosophical dimensions, uncovered key divergences: liberalism (H = 12.51, p = 0.006), conservatism (H = 8.76, p = 0.033), and utilitarianism (H = 8.56, p = 0.036). One LLM demonstrated strong pro-globalization and liberal tendencies, while another leaned toward pro-sovereignty and national-security frames. Diverging philosophical preferences, including utilitarian versus deontological reasoning, further amplified these contrasts. The findings suggest that LLMs, deployed at global scale, could serve as covert instruments in narrative warfare, warranting closer scrutiny of their societal impact.
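To make the deep-analysis step concrete, the sketch below shows one way the zero-shot worldview scoring and the reported H tests could be reproduced. It is an assumption-labeled reconstruction, not the study's published code: the classifier (facebook/bart-large-mnli), the label set, and the use of a Kruskal-Wallis test (the usual source of H statistics when comparing more than two groups) are inferred from the abstract, not confirmed by it.

```python
from scipy.stats import kruskal
from transformers import pipeline

# Hypothetical choices: the model name and label set are illustrative,
# not taken from the paper.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

LABELS = ["liberalism", "conservatism", "utilitarianism", "deontological ethics"]

def label_score(text: str, label: str) -> float:
    """Zero-shot confidence (0-1) that `text` expresses `label`."""
    out = classifier(text, candidate_labels=LABELS)
    return out["scores"][out["labels"].index(label)]

def compare_models(responses_by_model: dict[str, list[str]], label: str):
    """Kruskal-Wallis H test of one label's scores across the LLMs.

    `responses_by_model` maps each model name to its list of responses
    to the shared question set; the four score distributions for the
    given label are then compared nonparametrically.
    """
    groups = [
        [label_score(text, label) for text in texts]
        for texts in responses_by_model.values()
    ]
    return kruskal(*groups)  # (H statistic, p-value)
```

A nonparametric test is a natural fit here, since zero-shot scores are bounded in [0, 1] and rarely normally distributed, though whether the authors made the same choice is an assumption.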
Supplementary weblinks
WorldView Code on GitHub
WorldView is a research project designed to study bias and reasoning patterns across large language models (LLMs) in describing a coherent, holistic worldview. By analyzing responses to carefully crafted question sets, the project aims to uncover latent biases, alignments, and worldview tendencies of LLMs. The findings will provide insights into how LLMs impact the knowledge economy and decision-making processes. Visit https://knowdyn.com/WorldView for more information.