Welcome to CogSci Unpacked, an exciting blog series dedicated to summarizing academic papers from Cognitive Science, a CSS journal. Our goal is to bridge the gap between academia and the broader public, fostering a better understanding of cognitive science and making it accessible and relatable to all. If you’re curious to dive even deeper, we invite you to explore the full academic paper.
Why do humans gesture when they speak? It sounds like a simple question, yet cognitive science has never quite settled on an answer. Are gestures deliberate communicative tools shaped by an audience? Or are they largely unconscious by-products of thinking and speaking?
Part of the difficulty lies in what researchers have traditionally counted as a gesture. Most studies focus on representational movements – iconic gestures that visually depict meaning, like tracing a spiral in the air to describe a winding road, or shaping the hands to show the size of an object. But everyday conversation contains another, less discussed class of movements. Imagine someone saying, “You know what I mean?” while extending an open palm toward their interlocutor. The gesture does not depict an object or action. Instead, it manages the interaction itself – inviting agreement, marking shared understanding, or softening a claim.
These are interactive gestures, and they were the unexpected starting point of our recent study.
We recorded participants engaged in extended, face-to-face conversations under varying visual and conversational conditions. Initially, gestures were annotated using standard categories common in gesture research: iconic, metaphoric, deictic (pointing), emblematic (conventional signs like thumbs-up), pantomimic, and beat gestures. Surprisingly, nearly a quarter of all gestures resisted classification. Rather than treating these movements as noise, a second analysis asked a simple question: were they interactive?
They overwhelmingly were. Almost 90% of the gestures that did not fall into traditional categories were found to play an interactive role, accounting for over a quarter of the entire dataset. In frequency terms, interactive gestures were not marginal phenomena but central components of conversational behaviour – at least as prevalent as many gesture types that dominate the literature.
This finding alone invites a shift in perspective. If cognitive science aims to understand how communication works in real time, movements that regulate shared understanding may be as theoretically important as those that visually depict meaning.
The study’s central manipulation examined gesture visibility. In one phase, speakers and listeners could see one another normally. In another, a screen occluded visibility of the torso and hands while preserving access to the face. The logic was straightforward: if gestures primarily serve communicative functions, blocking visibility should suppress gesture production.
Interactive gestures did decrease when visibility was blocked, but only in simple conversations.
When discussions involved more complex topics (those touching on emotionally and socially challenging concepts), participants’ interactive gesture rates remained stable even under visual occlusion. In other words, speakers continued to produce discourse-managing, audience-directed gestures when those gestures could not be seen, but only when the conversations were complex.
This selective dissociation poses a challenge for familiar theoretical positions. A strictly pragmatic account predicts broad suppression under occlusion: why deploy invisible communicative signals? A strictly unconscious account predicts minimal sensitivity to visibility: why should occlusion matter at all?
Instead, gesture behaviour appears shaped by both communicative context and conversational demands.
One interpretation is that gesture operates at a level best described as subconscious. On this view, gesturing reflects intrinsic social pressures within the cognitive system – pressures tied to politeness and intersubjective acknowledgement – that are neither fully deliberate nor purely automatic. Visual feedback modulates these pressures but does not fully determine them. Under sufficient task difficulty, such as the social-emotional complexity of a conversation, the drive to maintain interactional coherence may override visibility constraints entirely.
One of the most consequential implications of this work is methodological. Gesture research has historically relied on constrained tasks and narrative paradigms, which yield detailed analyses only of representational gestures. Observing behaviour in sustained, semi-naturalistic conversation reveals a different distribution of gesture types, highlighting movements that regulate interaction rather than depict content.
Rather than viewing gestures as optional accompaniments to language, the present findings point toward something more fundamental. Gesturing may be tied to a subconscious drive to express oneself. This drive only becomes experimentally observable in interaction, and it appears shaped by the emotions arising in conversation and modulated by interlocutor visibility, yet fully dependent on neither. The persistence of invisible gestures highlights a cognitive system organised around expression as much as communication. For cognitive science, this reframes gesture as evidence of how thought, action, and social engagement remain deeply intertwined.



