Adding biological constraints to deep neural networks reduces their capacity to learn unstructured data

Abstract

Deep neural networks (DNNs) are increasingly popular as models of the human visual system. However, they exhibit behaviours that are uncharacteristic of humans, including the ability to learn arbitrary data, such as images whose pixel values are drawn randomly from a Gaussian distribution. We investigated whether this behaviour arises because the learning and memory capacity of DNNs is too high for the training task. To reduce capacity, we incorporated biologically motivated constraints -- an information bottleneck, internal noise and sigmoid activations -- with the aim of diminishing the learning of arbitrary data without significantly degrading performance on natural images. Internal noise reliably produced the desired behaviour, whereas a bottleneck had limited impact. Combining all three constraints yielded an even greater reduction in learning capacity. We further tested whether these constraints help a network generalize by encouraging it to develop more robust internal representations; however, none of the methods consistently improved generalization.
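To make the three constraints concrete, here is a minimal sketch (not the authors' code) of how they might be combined in a toy PyTorch classifier. The layer widths, the noise level (noise_std), and the placement of the noise after each activation are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


class InternalNoise(nn.Module):
    """Adds zero-mean Gaussian noise to activations during training only."""

    def __init__(self, std: float = 0.1):
        super().__init__()
        self.std = std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and self.std > 0:
            return x + torch.randn_like(x) * self.std
        return x


class ConstrainedNet(nn.Module):
    """Toy classifier combining a bottleneck, internal noise, and sigmoids."""

    def __init__(self, in_dim=784, hidden=512, bottleneck=32, n_classes=10,
                 noise_std=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.Sigmoid(),                   # sigmoid in place of the usual ReLU
            InternalNoise(noise_std),       # internal noise on hidden activations
            nn.Linear(hidden, bottleneck),  # narrow layer as an information bottleneck
            nn.Sigmoid(),
            InternalNoise(noise_std),
            nn.Linear(bottleneck, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x.flatten(1))


# Usage: forward a batch of 28x28 images (flattened internally).
model = ConstrainedNet()
logits = model(torch.randn(8, 1, 28, 28))
print(logits.shape)  # torch.Size([8, 10])
```

Each constraint limits capacity in a different way: the bottleneck caps how much information can pass through the network, the noise forces representations to be robust to perturbation, and the saturating sigmoids bound activation magnitudes.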

