What a cognitive linguist means by meaning and why it could impact research in Natural Language Processing

Meaning has been the focus of Cognitive Linguistics since the field’s early days in the 1970s. Foundational research in the then newly established field shed light on how meaning construction processes are key to understanding the mechanisms of language. The same research claimed that language should not be regarded as some species-specific modular mechanism whose main function is to generate sequences of linguistic form. The idea was then – and still is now – that language relies on abilities shared with other cognitive systems.

In the context of its foundation, Cognitive Linguistics embraced the analysis of language phenomena that had been swept under the rug as “non-core” grammar. That may be the reason why, for some time, theories proposed in the field were perceived as addressing only quirky aspects of languages. In fact, those theories did address phenomena like metaphor, lexical valency constraints, idiomatic expressions and differences in meaning promoted by various construal operations, such as grinding, for instance. To take an example, cognitive linguists have looked closely into issues like the ones described by Pelletier and Schubert in “Mass expressions” and very didactically captured in this classic example sentence:

“There was mosquito all over the windshield.”

The first thing that is curious about this sentence is the use of a count noun – mosquito – as a mass noun. But there is more to it than that. This shift in the count/mass distinction triggers a reinterpretation of mosquito not as one animal, but as a multitude of mosquitoes being smashed against the windshield as the car moves forward. And before you think this is precisely the kind of quirky phenomenon mentioned above, and, therefore, that construal happens only in some “special cases”, please return to the title of this blog post and read it again. Pay special attention to the indefinite article in the noun phrase “a cognitive linguist”. You see, if this weren’t meant to be an unpretentious reading list, one could interpret that noun phrase as having a generic referent, equivalent to “any cognitive linguist”. However, since I’m willing to accept that you’ve never heard of me before, that you’ll most likely not remember my name when talking about this list with your friends, and, most importantly, that my opinion does not represent all cognitive linguists – see?… unpretentious – the intended reading of the noun phrase in the title is that of a specific but unidentified referent, somewhat equivalent to “some cognitive linguist you don’t know”.

That’s the reason why one of the most important claims in Cognitive Linguistics theories is that “quirky” phenomena can be accounted for within the same theoretical and methodological framework used for “core” grammar, that is, basic morphology and lexical properties together with the phrase and sentence organization patterns of a language. And that is why this cognitive linguist thinks that the way the authors of the selected texts in this reading list conceived of meaning could have important implications for Natural Language Processing. Now, to the list.

Framing meaning

In the linguistic theory at hand, meaning is closely related to knowledge. Therefore, and also because this list is focused on the impact Cognitive Linguistics research may have on Natural Language Processing, the first recommendation is a classic reading in Artificial Intelligence:

A framework for representing knowledge
Marvin Minsky (1974)

In this research report, one of the founders of AI argues against the idea of defining knowledge as a list of statements. In the very first paragraph of the very first section, Minsky sets the tone of the paper by stating that:

… the ingredients of most theories both in Artificial Intelligence and in Psychology have been on the whole too minute, local, and unstructured to account – either practically or phenomenologically – for the effectiveness of common-sense thought. The “chunks” of reasoning, language, memory, and “perception” ought to be larger and more structured; their factual and procedural contents must be more intimately connected in order to explain the apparent power and speed of mental activities.

Minsky then defines frames as the kind of data structure capable of representing the stereotyped situations upon which human common-sense knowledge is built. He uses visual perception as the application field for his proposal and discusses how scenes should be processed within this framework. Although the report does not focus on language understanding and linguistic meaning, the transposition of the notion of frame to Linguistics – both Cognitive and Interactional – was just around the corner, as the next reading recommendation will show.
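
If you like thinking in code, here is a minimal sketch – mine, not Minsky’s, with purely illustrative names – of the core intuition: a frame is a structured chunk whose slots come with default fillers that observed information can override.

```python
from dataclasses import dataclass, field

# A toy, frame-like data structure in the spirit of Minsky's proposal:
# a stereotyped situation with slots ("terminals"), each carrying a
# default filler that observed data can override.
@dataclass
class Frame:
    name: str
    observed: dict = field(default_factory=dict)  # slot -> observed filler
    defaults: dict = field(default_factory=dict)  # slot -> stereotyped filler

    def filler(self, slot: str):
        # Observed information wins; otherwise fall back on the stereotype.
        return self.observed.get(slot, self.defaults.get(slot))

# A stereotyped birthday-party scene: slots we never fill in explicitly
# are still "known" through common-sense defaults.
party = Frame(
    name="birthday_party",
    defaults={"food": "cake", "activity": "gift-giving"},
)
party.observed["location"] = "backyard"

print(party.filler("location"))  # backyard (observed)
print(party.filler("food"))      # cake (inherited from the stereotype)
```

The power of the proposal lies exactly in those defaults: most of what we “know” about a scene was never stated, but comes along with the frame.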

An alternative to checklist theories of meaning
Charles J. Fillmore (1975)

In this paper, published in the proceedings of the first meeting of the Berkeley Linguistics Society, Fillmore, who is the father of Frame Semantics, explores how the notion of frames could be used in analyzing meaning, stating that “people associate certain scenes with certain linguistic frames”. By scenes, he meant anything from actual visual scenes to cultural, institutional and bodily experiences. By frames, he meant any system of linguistic choices. If you’re familiar with Fillmore’s work – or if you keep reading this blog post – you’ll know that the definition of frame changes over the course of his career. However, at this point, the take-home is this: scenes and frames activate each other in the process of meaning making. In the remainder of the paper, Fillmore provides insightful examples of why this is so, demonstrating why checklist theories of meaning – those that define meaning in terms of features – fail to provide a proper account of everyday language use.

The most famous example is the word bachelor. In a checklist theory of meaning, bachelor would be defined by a feature set similar to [+ man, – married, + adult, + successful]. However, Fillmore questions how old a man would need to be before he could be called a bachelor, and also whether we could call the Pope a bachelor, given that he is an unmarried adult man who is pretty successful in his career.
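
To see the problem in toy-example form, here is a small sketch (mine, not Fillmore’s) of a checklist definition at work:

```python
# A toy "checklist" definition of bachelor: meaning as a set of
# necessary and sufficient features.
BACHELOR_CHECKLIST = {"man": True, "adult": True, "married": False}

def is_bachelor(entity: dict) -> bool:
    # Checklist semantics: the word applies iff every feature matches.
    return all(entity.get(feat) == val for feat, val in BACHELOR_CHECKLIST.items())

# The Pope ticks every box on the list...
pope = {"man": True, "adult": True, "married": False}
print(is_bachelor(pope))  # True
```

The checklist happily returns True, yet calling the Pope a bachelor is decidedly odd: the word is defined against a background frame of expected marriageability that no feature list can see. That background is precisely what Frame Semantics brings into the picture.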

The kinds of reflections in this paper were further expanded in another BLS paper four years later. And that’s the next item on our list.

Innocence: a second idealization for Linguistics
Charles J. Fillmore (1979)

This paper is fun to read. Fillmore constructs an allegorical character, the Innocent Speaker/Hearer – any resemblance to the Chomskyan Ideal Speaker/Hearer is not a mere coincidence – to demonstrate how the idea of strict meaning compositionality cannot be sustained if one really wants to address meaning making processes in natural language.

The paper provides two lists of abilities: one with the things the Innocent Speaker/Hearer – let’s call them ISH – would be capable of doing, and another with the things they would not. ISH would know the identifiable parts of their language, such as words and morphemes, what they mean, and the order in which they appear. Knowing the semantic import of each of those parts, ISH would calculate the meaning of each sentence by computing the meanings of its parts.

The list of the things ISH wouldn’t be able to do, on the other hand, is more interesting. Because they cannot interpret beyond the limits of strict compositionality, ISH would not understand, well, most of everyday language. They would think “prisoner” and “jailer” are perfect synonyms, since prison and jail both indicate a place where people are held. They would not get it when people use the verb “get” to indicate anything other than picking up an object. And ISH cannot understand even those apparently simple things, let alone utterances expressing indirect intentions, such as saying “isn’t it cold in here?” as a means of asking someone to turn up the heat. And since we’re at it… ISH wouldn’t have a clue what “let alone” means in the previous sentence.
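
Here is what ISH might look like as a program – a toy of my own making, using the stock idiom kick the bucket (not one of Fillmore’s examples here) because it makes the failure easy to see:

```python
# A toy, strictly compositional interpreter in the spirit of ISH:
# sentence meaning = the ordered combination of word meanings, with
# no idioms, no context, and no indirect speech acts.
LEXICON = {
    "kick": "STRIKE_WITH_FOOT",
    "the": "DEFINITE",
    "bucket": "CONTAINER",
}

def ish_interpret(sentence: str) -> list[str]:
    # Compose the whole strictly from the meanings of its parts.
    return [LEXICON.get(word, f"UNKNOWN({word})") for word in sentence.lower().split()]

print(ish_interpret("Kick the bucket"))
# ['STRIKE_WITH_FOOT', 'DEFINITE', 'CONTAINER']
```

ISH dutifully computes “strike a definite container with the foot” and never arrives at the idiomatic reading “die”, because that meaning is not stored in any of the parts – it belongs to the whole construction.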

Now, some folks reading this may be asking (I truly hope not, but they may): those are interesting examples, but why do they matter for Natural Language Processing systems? Well, imagine that you build the ultimate conversational agent covering all the principles of strict meaning composition. It would still fail. Miserably.

In a nutshell, this paper demonstrates very clearly why a strictly compositional approach to meaning is unrealistic and should be replaced by one grounded in context and in how human linguistic cognition works. And yes, I know some of you may still be thinking that, at least for some basic “who did what to whom” kind of meaning, one could still go fully compositional, right? Sorry, but the answer is still no.

Constructions: a construction grammar approach to argument structure
Adele E. Goldberg (1995)

This book takes the mission of applying the theoretical and analytical principles of Cognitive Linguistics to “core” grammar very seriously. Goldberg presents the model currently known as Cognitive Construction Grammar by applying it to the analysis of the ditransitive construction in English. She demonstrates how a non-transformational approach can capture more relevant and adequate generalizations about argument structure than a transformational one. Moreover, she shows how such an approach makes the design of the lexicon more economical.

To better understand such claims, consider these examples of the ditransitive construction and of the construction in which the recipient argument is a prepositional phrase headed by to:

They sent me the documents.
* They sent the trash the documents.

They sent the documents to me.
They sent the documents to the trash.

By adopting a non-transformational approach, Goldberg shows that the ditransitive sentences are not to be seen as derived from the to-dative sentences, since the former require that the recipient argument be perceived as human, while the latter do not. Therefore, the noun phrase the trash cannot serve as a recipient argument in the ditransitive.

Also, Goldberg argues that the ditransitive argument structure has its own meaning, independent of that indicated by the verb in it. Take the following sentence, for example:

They baked me a cake.

Note that, although bake is not a transfer verb, there is a semantics of transfer in the sentence, contributed by the ditransitive construction. Such an analysis avoids the need to posit two different senses for bake, one with and one without a recipient argument.
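
A toy sketch – mine, not Goldberg’s formalism – can make both points concrete: the construction carries a transfer meaning of its own and imposes its own constraint on the recipient, regardless of the verb that instantiates it.

```python
# Toy sketch of two constructionist ideas: (1) the ditransitive pattern
# [Subj V Obj1 Obj2] contributes a transfer meaning of its own, and
# (2) it constrains its recipient argument, independently of the verb.
VERB_MEANINGS = {
    "sent": "CAUSE_TO_GO",
    "baked": "CREATE_BY_HEATING",  # note: no transfer component in the verb itself
}

def ditransitive(verb: str, agent: str, recipient: dict, theme: str) -> dict:
    # The construction, not the verb, imposes the recipient constraint
    # (glossed above as "perceived as human").
    if not recipient["human"]:
        raise ValueError(f"*{agent} {verb} {recipient['name']} {theme}")
    return {
        "constructional_meaning": f"{agent} CAUSES {recipient['name']} to RECEIVE {theme}",
        "verbal_meaning": VERB_MEANINGS[verb],
    }

me = {"name": "me", "human": True}
trash = {"name": "the trash", "human": False}

# "They baked me a cake": the transfer reading comes from the construction.
print(ditransitive("baked", "they", me, "a cake"))

# *"They sent the trash the documents": blocked by the recipient constraint.
try:
    ditransitive("sent", "they", trash, "the documents")
except ValueError as flagged:
    print("ill-formed:", flagged)
```

Because the transfer meaning lives in the construction, the lexical entry for bake stays lean – exactly the economy Goldberg argues for.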

The model proposed by Goldberg, on top of being more adequate in terms of language description, is also more adequate for language explanation, as her other two books demonstrate with a plethora of psycholinguistic evidence.

Meaning and Cognition

Before we proceed, a second disclaimer is necessary: this section of the reading list is the main reason behind the presence of the word “unfinished” in the title. As I’ve just pointed out above, meaning is grounded in context, and you are reading this list as a post on the Cognitive Science Society Blog. So, of course, given this context, the following three recommendations are just the tiny tip of an iceberg of possible readings.

The Way We Think: conceptual blending and the mind’s hidden complexities
Gilles Fauconnier & Mark Turner (2002)

Speaking of tips and icebergs, one of the chapters in this book takes its title from a famous quote from Fauconnier’s 1997 Mappings in Thought and Language:

visible language is only the tip of the iceberg of invisible meaning construction.

The book as a whole is a masterclass explaining, with very rich examples, how the three I’s of cognition – identity, integration and imagination – set the principles guiding conceptual blending processes that can be indicated in language, but are unpacked in our cognitive system. The first I, identity, focuses on the apparently simple task of defining whether X and Y are the same or different. Just by replacing X and Y with dalmatian and yorkshire terrier, we get an idea of how complex this process is. Put in other words: how can a three-year-old look at exemplars of these two types of dogs and be okay with the fact that they are both called dogs? Integration, the second I, builds the connections between cognitive entities, allowing us to compress them and make them understandable at the human scale. Finally, imagination is responsible for our ability to extrapolate reality and engage in complex thinking in the absence of any immediate stimulus. It allows us to simulate reality in thought. The three I’s team up to explain how language and other cognitive systems work together in constructing meaning.

Yes, I know, there’s no I in TEAM, but there are certainly three of them in LINGUISTICS.

Louder than Words: the new science of how the mind makes meaning
Benjamin K. Bergen (2012)

This book presents the field of Simulation Semantics, whose key hypothesis can be summarized as follows: when we engage in meaning comprehension and construction activities, we simulate the things we are talking about in our brains, which makes this process embodied. This is to say that when we describe, for instance, an amazing move in a soccer match, we simulate such a move in our brains as we describe it. Bergen reports on a carefully curated set of experiments that build the foundation for his claims. This is a must-read for those NLP folks interested in thinking deeply about the issues involved in the idea of grounded agents.

The origins of human communication
Michael Tomasello (2008)

The last recommendation in this section is a masterpiece by Michael Tomasello (recipient of the 2021 David E. Rumelhart Prize), summarizing a long history of research on primate (human and non-human) communication. The key insight from this book revolves around the importance of the shared attention scene in human communication. Before they master any linguistic system, human infants develop foundational abilities sustaining the uniqueness of human communication, the most important being that of sharing, with the people they interact with, focused attention on an external scene.

Recognizing that humans engage in cooperative efforts with a second party, jointly attending to some third element, helps reframe some of the challenging problems in NLP. Language is allowed to be vague, messy and variable because humans cooperate while using it.

Implications for NLP

Last but not least, I’ll wrap up this long post with some recommendations of papers in NLP that engage, more or less explicitly, with some of the central aspects of Cognitive Linguistics mentioned above. All three were featured at the 58th Annual Meeting of the Association for Computational Linguistics, in a theme session entitled “Taking Stock of Where We’ve Been and Where We’re Going”.

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data
Emily M. Bender & Alexander Koller (2020)

In this award-winning paper, Bender and Koller propose the octopus test, a thought experiment in which a hyper-intelligent octopus, after monitoring messages exchanged via a cable system between two people stranded on desert islands, intercepts the communication system and starts acting as one of the stranded humans. The interesting turn of events is that the octopus stands in for a large language model: because it only has access to the linguistic forms in the data it was exposed to, it fails to actually understand language.

The paper draws very interesting connections between NLP and Semantics, including work by some of the authors recommended above.

Language (Re)modelling: Towards Embodied Language Understanding
Ronen Tamari, Chen Shani, Tom Hope, Miriam R. L. Petruck, Omri Abend, Dafna Shahaf (2020)

In this brilliant position paper, the authors claim that systems targeting natural language understanding (NLU) tasks differ fundamentally from how humans construct meaning in three central aspects: efficiency, interpretability and generalization. They embrace embodiment and simulation semantics as the way towards better performance in NLU and provide a template architecture to navigate such a path.

Finally, we’ve reached the advertising part. Not that I’m about to recommend readings on advertising, no. I’m about to engage in some self-advertising.

(Re)construing Meaning in NLP
Sean Trott, Tiago Timponi Torrent, Nancy Chang, Nathan Schneider (2020)

Last year I had the great opportunity of working on this paper with Sean, Nancy and Nathan, brilliant colleagues and friends with whom I’ve shared productive time discussing how the kinds of insights from Cognitive Linguistics presented in the first and second sections of this post could be brought into NLP. We decided to focus on construal, since, thanks to Sean, we had a more stable – although not definitive – set of dimensions to work on. Those dimensions, sketched in code right after this list, include:

  • Prominence: which aspects of meaning speakers choose to highlight, e.g. we can say that we are meeting before lunch, or that we are having lunch after the meeting.
  • Resolution: how fine-grained the presentation of entities and events is, e.g. if we’re really hungry, we can ask whether there would be another slot for us to meet soon, and what we mean by soon in this case is grounded in our shared experience with meetings, lunches and calendars.
  • Metaphor: how one domain can be used to talk about another domain, e.g. we can talk about moving meetings up or down in our calendars.
  • Perspective: the vantage point we adopt when we speak, e.g. we can schedule an international conference call in our morning or in someone else’s afternoon.
  • Configuration: how the structure of entities or events is presented, e.g. we can talk about blocking out a week or five days of our calendar due to a big NLP conference we’re about to attend.
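
As promised, here is a tiny, entirely hypothetical sketch of what annotating those dimensions could look like; the field names are mine, not the paper’s formalism:

```python
from dataclasses import dataclass
from typing import Optional

# A hypothetical annotation record for the five construal dimensions
# discussed in the paper; names and glosses are illustrative only.
@dataclass
class ConstrualAnnotation:
    utterance: str
    prominence: Optional[str] = None     # which aspects of meaning are highlighted
    resolution: Optional[str] = None     # granularity of entities and events
    metaphor: Optional[str] = None       # source domain used to talk about the target
    perspective: Optional[str] = None    # vantage point adopted by the speaker
    configuration: Optional[str] = None  # how entities and events are structured

# Two utterances can share truth conditions while construing the same
# calendar scene differently.
before_lunch = ConstrualAnnotation(
    utterance="We are meeting before lunch",
    prominence="the meeting is profiled against lunch",
)
move_up = ConstrualAnnotation(
    utterance="Let's move the meeting up",
    metaphor="time construed as space along a vertical axis",
)
print(before_lunch)
print(move_up)
```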

In the paper, we present a review of the Cognitive Linguistics literature on those construal dimensions, as well as psycholinguistic evidence for their existence. We also indicate related work in NLP. Our main claim, though, as you may have inferred from the series of examples above, is that construal is present in everyday communication and, therefore, that addressing it is key if we want the NLP systems we build to perform better than Fillmore’s Innocent Speaker/Hearer.

I’m aware of the fact that some of the connections made with NLP throughout this list are far from being testable or implementable. Some may never get to that point. However, my main purpose with this selection of great reads was not to provide you, my already dearest reader, with experimental set-ups, but with difficult, almost philosophical questions. This is, after all, as brilliantly pointed out by Susan Etlinger in the TED Talk “What do we do with all this big data?”, the role of linguists and other fellows in the Humanities in NLP and other tech fields: asking questions.

Tiago Torrent is a Cognitive Linguist building multilingual multimodal computational implementations of Frame Semantics and Construction Grammar at the FrameNet Brasil Lab. He is also one of the founding partners of Global FrameNet, a multinational collaborative for the development of frame-based language resources and applications. He’s a professor at the Federal University of Juiz de Fora, and currently serves as the Dean of the Graduate Program in Linguistics.
