Simulating Early Word Learning in Situated Connectionist Agents
- Felix Hill, DeepMind, London, United Kingdom
- Stephen Clark, DeepMind, London, United Kingdom
- Phil Blunsom, DeepMind, London, United Kingdom
- Karl Moritz Hermann, DeepMind, London, United Kingdom
Abstract

Recent advances in Deep Learning (DL) and Reinforcement Learning (RL) make it possible to train neural network agents with raw, first-person visual perception to execute language-like instructions in 3D simulated worlds. Here, we investigate the application of such deep RL agents as cognitive models, specifically as models of infant word learning. We first develop a simple neural network-based language learning agent, trained via policy-gradient methods, which can interpret single-word instructions in a simulated 3D world. Taking inspiration from experimental paradigms in developmental psychology, we run various controlled simulations with the artificial agent, exploring the conditions in which established human biases and learning effects emerge, and propose a novel method for visualising and interpreting semantic representations in the agent. The results highlight the potential utility, and some limitations, of applying state-of-the-art learning agents and simulated environments to model human cognition.
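The abstract describes an agent trained via policy-gradient methods to follow single-word instructions. As a rough illustration of that training principle only (not the paper's agent, which uses a deep network with first-person vision in a 3D world), the toy sketch below applies REINFORCE to a tabular softmax policy that learns to map a one-word instruction, encoded as an integer id, to the matching action; all names and the reward scheme here are hypothetical.

```python
import numpy as np

# Hypothetical toy task: each "word" (integer id) names one of n_actions
# actions; reward is 1 when the chosen action matches the instruction.
rng = np.random.default_rng(0)
n_words, n_actions = 3, 3
logits = np.zeros((n_words, n_actions))  # policy parameters, one row per word
lr = 0.5

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    word = rng.integers(n_words)              # environment samples an instruction
    probs = softmax(logits[word])
    action = rng.choice(n_actions, p=probs)   # sample action from the policy
    reward = 1.0 if action == word else 0.0
    # REINFORCE update: grad of log pi(action|word) is onehot(action) - probs
    grad = -probs
    grad[action] += 1.0
    logits[word] += lr * reward * grad

# After training, the greedy policy should follow each instruction.
greedy = logits.argmax(axis=1)
print(greedy)
```

Because the reward is sparse and binary, the update only reinforces the sampled action when it happened to match the instruction, which is enough for this tiny tabular case; the paper's agent instead backpropagates such policy gradients through a deep visual-linguistic network.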