Decoding Eye Movements in Cross-Situational Word Learning via Tensor Component Analysis

Abstract

Statistical learning is an active process in which information is selectively sampled from the learning environment. As incoming information is integrated with existing knowledge, it shapes attention in subsequent learning, biasing which new information will be sampled. One recently studied statistical learning task is cross-situational word learning (CSL), in which learners acquire the correct mappings between novel visual objects and spoken labels after watching sequences in which the two are paired in referentially ambiguous contexts. In the present paper, we use a computational method called Tensor Component Analysis (TCA) to analyze real-time gaze data collected from a set of CSL studies. We applied TCA to learners' gaze data to derive latent variables related to real-time statistical learning and to examine how selective attention is organized in time. Our method allows us to address two specific questions: (a) how similar attention behavior is across strong versus weak learners, and across learned versus not-learned items; and (b) how the structure of attention relates to word learning. We measured learners' knowledge of label-object pairs at the end of a training session and show that their real-time gaze data can be used to predict item-level learning outcomes as well as to decode pretrained item knowledge.
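To make the method concrete: TCA is a form of the canonical polyadic (CP/PARAFAC) decomposition, which factorizes a third-order data tensor into a sum of rank-one components, each contributing one loading vector per mode. Below is a minimal sketch of CP fitting via alternating least squares, assuming gaze data arranged as a trials × areas-of-interest × time tensor; the axis names, function names, and the ALS routine are illustrative assumptions, not the authors' actual implementation or preprocessing pipeline.

```python
import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product: (J, R) x (K, R) -> (J*K, R)
    R = U.shape[1]
    return np.einsum('jr,kr->jkr', U, V).reshape(-1, R)

def cp_als(X, rank, n_iter=300, seed=0):
    """Minimal rank-R CP (PARAFAC) decomposition by alternating least squares.

    X is a 3-way array, e.g. trials x areas-of-interest x time bins
    (an illustrative layout for gaze data, not the paper's exact one).
    Returns factor matrices A (trials x R), B (AOIs x R), C (time x R).
    """
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Update each factor in turn, holding the other two fixed.
        # Each update solves a linear least-squares problem against the
        # mode-n unfolding of X and the Khatri-Rao product of the others.
        A = X.reshape(I, -1) @ khatri_rao(B, C) \
            @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(X, 1, 0).reshape(J, -1) @ khatri_rao(A, C) \
            @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(X, 2, 0).reshape(K, -1) @ khatri_rao(A, B) \
            @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

In this layout, the columns of C would describe recurring temporal profiles of attention, while the rows of A indicate how strongly each trial expresses each profile; those trial loadings are the kind of latent variables one could feed into an item-level classifier of learning outcomes.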

