Conversational AI and Democratic Renewal: Uncertainty, Attention, and the Future of Sense-Making
Large language models (LLMs) are increasingly experienced as conversational partners. This development is significant not because of any putative ‘intelligence’ these systems might possess, but because conversation is the primary medium through which humans develop and refine the bundle of intuitions, habits of perception, and evaluative dispositions that condition our capacity for transformative agency. That LLMs now mediate a growing share of these conversational exchanges represents both a systemic risk and an opportunity.
The risk is twofold. First, LLMs may standardise moral discourse by privileging established patterns of articulation, gradually eroding the ‘productive uncertainty’ upon which genuine sense-making depends. Second, this standardisation threat is compounded by the broader environment in which LLMs are deployed: current democratic cultures owe part of their present difficulties to social media platforms that systematically reward certainty and punish nuance, creating what Goodin might recognise as a ‘democracy of sound bites’.
The nature and severity of these risks are, however, modulated by how uncertainty is expressed within conversational exchanges. When a human interlocutor expresses herself with unwavering certainty, the conversational space for genuine sense-making collapses: the kind of exploratory exchange through which we hone the perceptual capacities underpinning transformative agency becomes unavailable. The same dynamic applies to LLMs. Yet there is an asymmetry: unlike human interlocutors, LLMs can be shaped in how they express uncertainty. The design choices governing whether an LLM maintains space for productive exploration are not fixed features but parameters open to iterative refinement.
Seizing this opportunity requires more than better engineering. It demands participatory interfaces that incentivise communities of users to refine how LLMs express uncertainty over time. This exercise of iterative refinement serves more than instrumental aims: the collective work of articulating the values that structure how we navigate uncertainty constitutes a form of democratic practice in its own right, one that democratic life demands but that current communicative infrastructures systematically discourage.
Event series
DEMOCRACY & AI is a series of online talks exploring the political thought, theory, and philosophy of artificial intelligence. Envisioned as an international platform for the politics-and-AI research community, the series aims to bring voices together to examine how AI is shaping democracy, both positively and negatively.