Artificial Language Learning: Combining Syntax and Semantics

Abstract

Artificial Grammar Learning (AGL) paradigms are a powerful method for studying language learning and processing. However, unlike natural languages, these tasks rely on grammars that specify relationships between meaningless stimuli with no real-world referents. Learning is therefore typically assessed with grammaticality or familiarity judgements, which measure how 'well-formed' a sequence is. We combined a meaningful vocabulary, in which nonsense words refer to properties of visual stimuli (colored shapes), with different grammatical structures (adjacent, center-embedded, or crossed dependencies). Using an incremental, starting-small paradigm, participants were asked to interpret increasingly complex sequences of nonsense words and to select the set of visual stimuli they described. High levels of learning were observed for all grammars, including those that have previously proved difficult to learn in traditional AGL paradigms. The addition of semantics not only permits closer comparison with natural language but also aids learning, making this a valuable approach to studying language learning.
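To make the three dependency structures named above concrete, here is a minimal illustrative sketch in Python. It is not the authors' stimulus generator; the nonsense words and pairings are hypothetical placeholders, and only the standard formal patterns (adjacent: A1 B1 A2 B2; center-embedded/nested: A1 A2 B2 B1; crossed: A1 A2 B1 B2) are assumed.

```python
def adjacent(pairs):
    """Adjacent dependencies: each A word is immediately followed by
    its dependent B word, e.g. A1 B1 A2 B2 A3 B3."""
    return [word for a, b in pairs for word in (a, b)]

def center_embedded(pairs):
    """Center-embedded (nested) dependencies: B words appear in the
    reverse order of their A partners, e.g. A1 A2 A3 B3 B2 B1."""
    a_words = [a for a, _ in pairs]
    b_words = [b for _, b in reversed(pairs)]
    return a_words + b_words

def crossed(pairs):
    """Crossed dependencies: B words appear in the same order as their
    A partners, e.g. A1 A2 A3 B1 B2 B3."""
    return [a for a, _ in pairs] + [b for _, b in pairs]

if __name__ == "__main__":
    # Hypothetical nonsense-word pairs; the shared index marks which
    # words depend on each other.
    pairs = [("jux1", "dep1"), ("kem2", "dep2"), ("vot3", "dep3")]
    print("adjacent:       ", " ".join(adjacent(pairs)))
    print("center-embedded:", " ".join(center_embedded(pairs)))
    print("crossed:        ", " ".join(crossed(pairs)))
```

Running the sketch prints one sequence per structure, which shows why the latter two are harder: center-embedded and crossed dependencies both require tracking multiple open dependencies at once rather than resolving each pair locally.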
