Systematicity in a Recurrent Neural Network by Factorizing Syntax and Semantics
- Jacob Russin, Computational Cognitive Neuroscience Lab, UC Davis, Davis, California, United States
- Jason Jo, MILA, University of Montreal, Montreal, Quebec, Canada
- Randall O'Reilly, Computational Cognitive Neuroscience Lab, Center for Neuroscience, University of California Davis, Davis, California, United States
- Yoshua Bengio, MILA, University of Montreal, Montreal, Quebec, Canada
Abstract

Standard methods in deep learning fail to capture compositional or systematic structure in their training data, as shown by their inability to generalize outside of the training distribution. However, human learners readily generalize in this way, e.g., by applying known grammatical rules to novel words. The inductive biases that might underlie this powerful cognitive capacity remain unclear. Inspired by work in cognitive science suggesting a functional distinction between systems for syntactic and semantic processing, we implement a modification to an existing deep learning architecture, imposing an analogous separation. The resulting architecture substantially outperforms standard recurrent networks on the SCAN dataset, a compositional generalization task, without any additional supervision. Our work suggests that separating syntactic from semantic learning may be a useful heuristic for capturing compositional structure, and highlights the potential of using cognitive principles to inform inductive biases in deep learning.
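To make the abstract's core idea concrete, the sketch below shows one plausible way to factorize syntax and semantics in an attention step: alignment (who attends to whom) is computed only from contextual, order-sensitive encodings, while the retrieved content comes only from context-free word embeddings. This is a minimal illustration of the separation principle, not necessarily the authors' exact implementation; all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (illustrative only).
seq_len, d_embed, d_ctx = 5, 8, 8

# "Semantic" stream: raw word embeddings, one per input token,
# deliberately kept free of contextual mixing.
word_embeddings = rng.normal(size=(seq_len, d_embed))

# "Syntactic" stream: contextual encodings of the same tokens
# (a stand-in for a recurrent encoder's hidden states).
contextual_states = rng.normal(size=(seq_len, d_ctx))


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def factorized_attention_step(decoder_state):
    """One decoding step under the factorization: attention weights are
    computed purely from the syntactic (contextual) stream, while the
    retrieved content comes purely from the semantic (embedding) stream."""
    scores = contextual_states @ decoder_state   # alignment driven by syntax
    weights = softmax(scores)                    # attention distribution
    context = weights @ word_embeddings          # content drawn from semantics
    return weights, context


decoder_state = rng.normal(size=(d_ctx,))
weights, context = factorized_attention_step(decoder_state)
print(weights.round(3), context.shape)
```

In a standard attention mechanism, both the alignment scores and the retrieved values would be computed from the same contextual states; the separation sketched here is what keeps structural (where to attend) and lexical (what is retrieved) information in distinct pathways.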