Modeling Visuospatial Reasoning Across 17 Different Tests on the Leiter Scale of Nonverbal Intelligence
- James Ainooson, Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, United States
- Joel Michelson, Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, United States
- Deepayan Sanyal, Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, United States
- Joshua Palmer, Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, United States
- Maithilee Kunda, Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, United States
Abstract

Understanding the computational mechanisms that enable visuospatial reasoning is important both for studying human intelligence and for exploring how human-like reasoning might be introduced into artificial intelligence systems. In our work, we investigate how a collection of primitive image processing operations can be combined into different coherent strategies for solving a range of visuospatial reasoning tasks. We evaluate our approach on 20 subtests from the Leiter International Performance Scale-Revised (Leiter-R). Through our computational experiments, we show that with only four primitive operations (similarity, containment, rotation, and scaling), we can form strategies that solve, with varying degrees of success, at least portions of 17 of the 20 subtests. These results lay the groundwork for our future work on how intelligent agents can learn and generalize strategies from simple task definitions in order to perform complex visuospatial reasoning tasks.
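To make the core idea concrete, the sketch below shows one hypothetical way the four named primitives could be implemented and composed into a strategy. This is not the authors' implementation: the function names, the choice of OpenCV/NumPy, the specific similarity metric, and the `matching_strategy` search loop are all illustrative assumptions.

```python
# A minimal sketch (an assumption, not the paper's actual code) of the four
# primitive operations named in the abstract -- similarity, containment,
# rotation, and scaling -- and one way of composing them into a strategy.
import cv2
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two grayscale images are (1.0 = identical)."""
    # Resize b to a's dimensions so a pixelwise comparison is well defined.
    b = cv2.resize(b, (a.shape[1], a.shape[0]))
    diff = np.abs(a.astype(np.float32) - b.astype(np.float32))
    return 1.0 - float(diff.mean() / 255.0)

def containment(part: np.ndarray, whole: np.ndarray) -> float:
    """Score how well `part` appears somewhere inside `whole`."""
    if part.shape[0] > whole.shape[0] or part.shape[1] > whole.shape[1]:
        return 0.0
    scores = cv2.matchTemplate(whole, part, cv2.TM_CCOEFF_NORMED)
    return float(scores.max())

def rotation(img: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate an image about its center."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), degrees, 1.0)
    return cv2.warpAffine(img, m, (w, h))

def scaling(img: np.ndarray, factor: float) -> np.ndarray:
    """Uniformly rescale an image."""
    return cv2.resize(img, None, fx=factor, fy=factor)

def matching_strategy(target: np.ndarray, candidates: list) -> int:
    """One example strategy: pick the candidate most similar to the target
    under a small search over rotations and scales."""
    best_idx, best_score = -1, -1.0
    for i, cand in enumerate(candidates):
        for deg in (0, 90, 180, 270):
            for f in (0.5, 1.0, 2.0):
                score = similarity(target, rotation(scaling(cand, f), deg))
                if score > best_score:
                    best_idx, best_score = i, score
    return best_idx
```

Under this reading, a "strategy" is simply a particular composition and search order over the primitives, so different Leiter-R subtests would call for different compositions of the same four operations.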