“Conscious” Multi-Modal Perceptual Learning for Grounded Simulation-Based Cognition

Abstract

Barsalou (1999) presented a simulation-based theory of grounded cognition called Perceptual Symbol Systems. According to this theory, a fully functional conceptual system can be implemented using only modal representations (also known as perceptual symbols) and simulations. While the theory has gained considerable neuroscientific and experimental support, there is an urgent need for computational accounts that flesh out the theory. The current paper explores one approach to implementing these computational foundations. We present an implementation of perceptual symbols, simulators, simulation-based perception, and “conscious” multi-modal perceptual learning based on state-of-the-art generative neural networks, called β-variational autoencoders, combined with LIDA, a biologically inspired cognitive architecture. We show that our implementation satisfies many of the properties attributed to perceptual symbol systems and provides a solid foundation for future computational work in perception, categorization, and simulation-based cognition.
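For readers unfamiliar with the generative model named above, the β-variational autoencoder (β-VAE; Higgins et al., 2017) trains an encoder q_φ(z|x) and a decoder p_θ(x|z) by maximizing a modified evidence lower bound in which a weight β > 1 on the KL term encourages disentangled latent representations. The following is the standard β-VAE objective from that literature, not a formula taken from the present paper:

\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)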

