Modeling word interpretation with deep language models: The interaction between expectations and lexical information

Abstract

How a word is interpreted depends on the context it appears in. We study word interpretation leveraging deep language models, tracing the contribution and interaction of two sources of information that have been shown to be central to it: context-invariant lexical knowledge, represented by the word embeddings of a model, and a listener's contextual expectations, represented by its predictions. We define operations to combine these components to obtain representations of word interpretations. We instantiate our framework using two English language models, and evaluate the resulting representations by the extent to which they reflect contextual word substitutes provided by human subjects. Our results suggest that both lexical information and expectations encode information that is pivotal to word interpretation; however, combining the two yields better representations than either on its own. Moreover, the division of labor between expectations and the lexicon appears to change across contexts.
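As a rough illustration of the kind of combination the abstract describes, the sketch below uses an off-the-shelf masked language model to approximate the two components: the target word's context-invariant embedding as "lexical information", and the probability-weighted average of embeddings predicted at the target position as the "expectation". The model name (`bert-base-uncased`), the helper `interpret`, and the weighted-average combination are illustrative assumptions, not the exact operations defined in the paper.

```python
# Minimal sketch (not the paper's exact method), assuming a BERT-style masked LM.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # assumption: any English masked LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def interpret(context: str, target: str, alpha: float = 0.5) -> torch.Tensor:
    """Return a vector for `target` in `context` mixing lexicon and expectations.

    `context` must contain one [MASK] marking the target position, e.g.
    "She drew money from the [MASK]." with target "bank".
    """
    inputs = tokenizer(context, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()

    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # vocabulary logits at the masked slot
    probs = torch.softmax(logits, dim=-1)

    emb_matrix = model.get_input_embeddings().weight  # context-invariant word embeddings
    expectation = probs @ emb_matrix                  # probability-weighted "expected" embedding

    target_id = tokenizer.convert_tokens_to_ids(target)
    lexical = emb_matrix[target_id]                   # the target word's own embedding

    # One possible combination operation: a weighted average of the two components.
    return alpha * lexical + (1 - alpha) * expectation

vec = interpret("She drew money from the [MASK].", "bank")
print(vec.shape)  # torch.Size([768]) for bert-base-uncased
```

A vector like `vec` could then be compared (e.g., by cosine similarity) against embeddings of human-provided substitutes, in the spirit of the evaluation the abstract mentions.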

