Welcome to CogSci Unpacked, an exciting blog series dedicated to summarizing academic papers from Cognitive Science, a CSS journal. Our goal is to bridge the gap between academia and the broader public, fostering a better understanding of cognitive science and making it accessible and relatable to all. If you’re curious to dive even deeper, we invite you to explore the full academic paper.
Color metaphors are pervasive in everyday language — we speak of “feeling blue,” seeing “red flags,” or being “green with envy.” But how do we make sense of these associations? Can they be learned through language alone, or are they necessarily grounded in perceptual, embodied experiences?
Our new study published in Cognitive Science asks whether large language models (LLMs) trained only on text develop the same intuitive grasp of color metaphors as humans. Furthermore, we studied whether humans who have different amounts of embodied experience with color, including colorblind adults and artists, interpret color metaphors differently.
We conducted large-scale online surveys comparing four groups: (1) color-seeing adults, (2) colorblind adults, (3) painters who regularly work with color pigments, and (4) the popular LLM ChatGPT. Each group was asked to assign colors to abstract words and to decipher both familiar and unfamiliar color metaphors (e.g., “They were on red alert” vs. “It was a very pink party”).
Color-seeing and colorblind adults reported strikingly similar and replicable color associations, suggesting that the visual experience of color isn’t strictly necessary for understanding color-related language. However, when explaining their metaphorical reasoning, humans frequently drew on references to the embodied experience of color, and painters consistently provided the most perceptually rich interpretations.
By contrast, while LLMs generated repeatable color associations, their responses often broke down when asked to explain their reasoning, interpret novel metaphors, or invert their own color associations. In short, LLMs reproduced human-like semantic patterns but didn’t “think” about them in the same embodied way humans do.
So, can color metaphors be learned through statistical patterns among words in language alone? Our results suggest that language associations may do a lot of the work, but also reveal a spectrum of meaning-making strategies. While both LLMs and humans can form repeatable color associations even for experientially unfamiliar color metaphors, hands-on experience with color significantly enriches its semantics.
Ethan Nadler is an Assistant Professor of Astronomy & Astrophysics at UC San Diego. His interdisciplinary research draws on techniques from physics and statistics to address problems at the interface of cognition, complex systems, and data science.
Douglas Guilbeault is an Assistant Professor of Organizational Behavior at Stanford’s Graduate School of Business. As co-director of the Computational Culture Lab, he harnesses and builds computationally intensive network- and language-based methods to study collective cognition and behavior, as well as how organizational cultures emerge and evolve.
Sofronia M. Ringold is a PhD candidate in the Chan Division of Occupational Science and Occupational Therapy at USC and a graduate research assistant in the Center for the Neuroscience of Embodied Cognition. Her research focuses on sensory processing and the brain-gut-microbiome system in autism spectrum disorder.
T. R. Williamson (Tom) is a PhD student in the Brain, Language, and Behaviour Laboratory at the University of the West of England, Bristol, and Southmead Hospital, North Bristol NHS Trust. He currently holds a research position in the Centre for the Neuroscience of Embodied Cognition at USC and, until recently, held one in the Faculty of Linguistics, Philology, and Phonetics at Oxford. His work combines the neuroscience of language and cognition with clinical applications to neuro-oncology practice in awake craniotomy patients.
Antoine Bellemare is a multidisciplinary artist and postdoctoral fellow at Bard College and Université de Montréal. He builds interactive installations and brain-computer interfaces that fuse plant, brain, and heart signals, exploring the edges of creativity and perception through neuroscience, digital arts, and AI.
Iulia M. Comșa is a Research Scientist at Google DeepMind in Zürich, Switzerland. Her recent work focuses on measuring and enhancing the cognitive capabilities of large language-only and multimodal AI models, with an emphasis on their human-like qualities. Her previous research includes spiking networks and the development of neural measures for quantifying transitions of consciousness in humans.
Karim Jerbi is a professor in the Department of Psychology at the University of Montreal and an associate professor at Mila (Quebec AI Institute). He is the director of the Quebec Neuro-AI research center and a member of the Royal Society of Canada’s College of New Scholars, Artists and Scientists. His research lies at the crossroads of natural and artificial intelligence, with a focus on cognitive and computational neuroscience. He has a keen interest in the convergence of brain science, AI, creativity, and art.
Srini Narayanan is a Distinguished Scientist and a Senior Director at Google DeepMind in Zurich, Switzerland. He leads a worldwide team working on multimodal models in Gemini and on improving the reasoning and inference capabilities of AI systems and agents.
Lisa Aziz-Zadeh is a Professor at the Brain & Creativity Institute and Division of Occupational Science and Occupational Therapy at USC. Her work focuses on understanding the neural basis of social cognition, including language processing and embodied cognition.