Auditory, Visual, and Speech Category Learning in the Same Individuals

Abstract

Category learning is a fundamental process in human cognition. Recent efforts have attempted to adapt theories developed in vision to the auditory domain. However, no study has directly compared auditory and visual category learning in the same individuals. Using a fully within-subjects approach, we trained participants on non-speech auditory, visual, and non-native speech categories in a single day. By comparing category learning behavior and generalization to novel category exemplars, and by leveraging decision bound computational models, we found that while individuals demonstrated similar learning across the auditory and visual modalities, distinct perceptual biases influenced learning of non-speech auditory categories. Further, there were substantial individual differences in performance across the three tasks. This study presents a novel comparison of category learning across modalities in the same individuals and demonstrates that although commonalities exist, there is some domain-specificity to category learning.
