Encoder-Decoder Neural Architectures for Fast Amortized Inference of Cognitive Process Models

Abstract

Computational cognitive modeling offers a principled interpretation of the functional demands of cognitive systems and affords quantitative fits to behavioral and brain data. Typically, cognitive modelers are interested in the fit of a model whose parameters are estimated using maximum likelihood or Bayesian methods. However, the set of models with known likelihoods is dramatically smaller than the set of plausible generative models. For all but a few standard models (e.g., the drift-diffusion model), the lack of a closed-form likelihood prevents the use of traditional Bayesian inference methods. Likelihood-free methods offer a workaround in practice, but their computational cost is a bottleneck: they require many fresh simulations for every parameter set proposed in a posterior sampling scheme. Here, we propose a method that learns an approximate likelihood over the parameter space of interest by encapsulating it in a convolutional neural network, affording fast parallel posterior sampling downstream once a one-off simulation cost has been incurred for training.
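The core idea, that a one-off batch of simulations trains a network whose forward pass then replaces per-proposal simulation, can be sketched in plain numpy. This is a toy illustration under assumptions not taken from the abstract: a tiny feed-forward network stands in for the convolutional architecture, a one-parameter Gaussian simulator stands in for a cognitive process model, and the data are binned so the network can output a discretized likelihood via softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator standing in for a cognitive model without a closed-form
# likelihood (here Gaussian with unknown mean theta, so we can sanity-check).
def simulate(theta, n):
    return rng.normal(theta, 1.0, size=n)

# Discretize the data axis; the network maps theta -> bin probabilities.
bins = np.linspace(-5.0, 5.0, 41)  # 40 bins
def to_bin(x):
    return np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 2)

# One-hidden-layer MLP trained with softmax cross-entropy (plain numpy SGD).
H, B = 32, len(bins) - 1
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, B)); b2 = np.zeros(B)

def forward(theta):
    h = np.tanh(theta[:, None] @ W1 + b1)        # (n, H)
    logits = h @ W2 + b2                         # (n, B)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
    return h, p

# One-off simulation cost: draw (theta, x) pairs and fit the network.
lr, batch = 0.05, 64
for step in range(3000):
    theta = rng.uniform(-3, 3, size=batch)
    x = np.array([simulate(t, 1)[0] for t in theta])
    y = to_bin(x)
    h, p = forward(theta)
    g = p.copy(); g[np.arange(batch), y] -= 1.0  # dL/dlogits for cross-entropy
    g /= batch
    dW2 = h.T @ g; db2 = g.sum(0)
    dh = (g @ W2.T) * (1 - h**2)                 # backprop through tanh
    dW1 = theta[:, None].T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

# Amortized approximate log-likelihood: one cheap forward pass per theta,
# reusable for every proposal in a downstream sampler (no new simulations).
def approx_loglik(theta, data):
    _, p = forward(np.full(len(data), theta))
    return np.log(p[np.arange(len(data)), to_bin(data)] + 1e-12).sum()

data = simulate(1.0, 200)
grid = np.linspace(-3, 3, 61)
ll = np.array([approx_loglik(t, data) for t in grid])
print("approximate MLE:", grid[ll.argmax()])  # should land near the true mean 1.0
```

The expensive simulation loop runs once, up front; afterwards, evaluating the approximate likelihood at any parameter value is a single matrix multiply, which is what makes fast parallel posterior sampling feasible downstream.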
