Evaluating computational models of infant phonetic learning across languages
- Yevgen Matusevych, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Thomas Schatz, Department of Linguistics & UMIACS, University of Maryland, College Park, Maryland, United States
- Herman Kamper, Department of Electrical and Electronic Engineering, Stellenbosch University, Stellenbosch, South Africa
- Naomi Feldman, Department of Linguistics & UMIACS, University of Maryland, College Park, Maryland, United States
- Sharon Goldwater, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
Abstract

In the first year of life, infants' speech perception becomes attuned to the sounds of their native language. Many accounts of this early phonetic learning exist, but computational models predicting the attunement patterns observed in infants from the speech input they hear have been lacking. A recent study presented the first such model, drawing on algorithms proposed for unsupervised learning from naturalistic speech, and tested it on a single phone contrast. Here we study five such algorithms, selected for their potential cognitive relevance. We simulate phonetic learning with each algorithm and perform tests on three phone contrasts from different languages, comparing the results to infants' discrimination patterns. The five models display varying degrees of agreement with empirical observations, showing that our approach can help decide between candidate mechanisms for early phonetic learning, and providing insight into which aspects of the models are critical for capturing infants' perceptual development.