Attentional Allocation as Optimal Compression in Visual Search

Abstract

There is broad agreement among vision scientists that biological perception is capacity-limited and that attentional mechanisms control how that capacity is allocated. Yet many researchers model perceptual attention as the result of optimal Bayesian inference, even though Bayesian models generally include no capacity limits. This inconsistency persists because vision science has lacked a feasible and principled computational framework for characterizing optimal attentional allocation under capacity constraints. Here, we introduce such a framework based on rate-distortion theory (RDT), a theory of optimal lossy compression developed in the engineering literature. Our approach defines Bayes-optimal performance when an upper limit on information-processing rate is imposed. We compare Bayesian and RDT accounts in a visual search task, highlighting a typical shortcoming of unlimited-capacity Bayesian models that RDT models do not share: they often overestimate task performance when information-processing demands increase. In this study, we asked human subjects to find either one or two targets in a collection of distractors in a single-fixation search task, and we predicted relative performance between the one- and two-target conditions from both RDT and Bayesian models. Performance differed between conditions in a way that was well accounted for by the capacity-limited RDT model but not by the capacity-unlimited Bayesian model.
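The core idea of rate-distortion theory mentioned above can be illustrated with a minimal sketch, under assumptions not taken from this paper: a toy source of four equiprobable stimuli, a 0/1 distortion (error rate), and the generic Blahut-Arimoto algorithm for tracing the rate-distortion trade-off. This is only a generic illustration of capacity-limited optimal coding, not the authors' actual model of visual search.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=200):
    """Blahut-Arimoto iteration for a point on the rate-distortion curve.

    p_x  : source distribution over stimuli, shape (n,)
    dist : distortion matrix d(x, xhat), shape (n, m)
    beta : Lagrange multiplier trading rate against distortion
    Returns (rate in bits, expected distortion).
    """
    n, m = dist.shape
    q = np.full(m, 1.0 / m)                 # marginal over reconstructions
    for _ in range(n_iter):
        # Optimal channel Q(xhat | x) given the current marginal q(xhat)
        Q = q[None, :] * np.exp(-beta * dist)
        Q /= Q.sum(axis=1, keepdims=True)
        q = p_x @ Q                          # update the marginal
    rate = np.sum(p_x[:, None] * Q * np.log2(Q / q[None, :]))
    distortion = np.sum(p_x[:, None] * Q * dist)
    return rate, distortion

# Toy example (an assumption, not the paper's task): 4 equiprobable
# stimuli, distortion = probability of misidentifying the stimulus.
p = np.ones(4) / 4
d = 1.0 - np.eye(4)
for beta in (0.5, 2.0, 8.0):
    R, D = blahut_arimoto(p, d, beta)
    print(f"beta={beta}: rate={R:.2f} bits, error={D:.2f}")
```

Raising `beta` buys lower error at the cost of a higher information rate; capping the rate (here, implicitly via `beta`) forces a nonzero error floor, which is the qualitative behavior the capacity-limited RDT account predicts and the unlimited-capacity Bayesian account lacks.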
