Context variability promotes generalization in reading aloud: Insight from a neural network simulation
- Ian Miller, Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Nicolas Dumay, Department of Psychology, University of Exeter, Exeter, United Kingdom
- Mark Pitt, Department of Psychology, Ohio State University, Columbus, Ohio, United States
- Brian Lam, Division of Engineering Science, University of Toronto, Toronto, Ontario, Canada
- Blair Armstrong, Department of Psychology, University of Toronto, Toronto, Ontario, Canada
Abstract

How do neural network models of quasiregular domains learn to represent knowledge that varies in its consistency with the domain, and how do they generalize this knowledge appropriately? Recent work focusing on spelling-to-sound correspondences in English proposes that a graded “warping” mechanism determines the extent to which the pronunciation of a newly learned word should generalize to its orthographic neighbors. We explored the micro-structure of this proposal by training a network to pronounce new made-up words that were consistent with the dominant pronunciation (regulars), contained a completely unfamiliar pronunciation (exceptions), or were consistent with a subordinate pronunciation in English (ambiguous). Crucially, by training the same spelling-to-sound mapping with either one item or multiple items, we tested whether variation in the adjacent, within-item context made a given pronunciation more likely to generalize. This is exactly what we found. Context variability therefore appears to act as a modulator of warping in quasiregular domains.
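To make the context-variability manipulation concrete, the sketch below is a minimal, hypothetical illustration, not the simulation reported in this paper: a small backpropagation network learns a toy spelling-to-sound mapping in which the same novel vowel correspondence is trained in either a single onset/coda frame or several varied frames, and generalization is then probed on an unseen neighbor. The slot-based encoding, vocabulary sizes, network size, and training parameters are all illustrative assumptions.

```python
# Hypothetical sketch of the one-context vs. varied-context manipulation
# (assumed slot-based encoding; not the authors' reading model).
import numpy as np

rng = np.random.default_rng(0)

ONSETS, VOWELS, CODAS = 5, 3, 5
IN_DIM = OUT_DIM = ONSETS + VOWELS + CODAS

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def encode(onset, vowel, coda):
    # Concatenate onset, vowel, and coda slots into one vector.
    return np.concatenate([one_hot(onset, ONSETS),
                           one_hot(vowel, VOWELS),
                           one_hot(coda, CODAS)])

def make_item(onset, vowel_in, coda, vowel_out):
    # Spelling and pronunciation differ only in the vowel slot.
    return encode(onset, vowel_in, coda), encode(onset, vowel_out, coda)

# Single-context condition: one training word carries the new vowel mapping.
single_context = [make_item(0, 2, 0, 1)]
# Varied-context condition: the same vowel mapping appears with different
# onsets and codas.
multi_context = [make_item(o, 2, c, 1) for o, c in [(0, 0), (1, 1), (2, 2), (3, 3)]]
# Generalization probe: an unseen onset/coda frame containing the trained vowel.
probe_x, probe_y = make_item(4, 2, 4, 1)

def train_and_probe(items, hidden=20, epochs=2000, lr=0.5):
    W1 = rng.normal(0, 0.1, (IN_DIM, hidden))
    W2 = rng.normal(0, 0.1, (hidden, OUT_DIM))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for x, y in items:
            h = sig(x @ W1)
            o = sig(h @ W2)
            d_o = (o - y) * o * (1 - o)         # output delta (squared error)
            d_h = (d_o @ W2.T) * h * (1 - h)    # backpropagated hidden delta
            W2 -= lr * np.outer(h, d_o)
            W1 -= lr * np.outer(x, d_h)
    # Probe the novel word and return the output activations in the vowel slot.
    o = sig(sig(probe_x @ W1) @ W2)
    return o[ONSETS:ONSETS + VOWELS]

print("single context:", np.round(train_and_probe(single_context), 2))
print("varied contexts:", np.round(train_and_probe(multi_context), 2))
```

Comparing the vowel-slot activations for the probe word across the two conditions gives a toy analogue of the question posed above: whether training the same mapping in varied contexts makes it more available to orthographic neighbors.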