Exploring Exploration: Comparing Children with Agents in Unified Exploration Environments

Abstract

Research in developmental psychology consistently shows that children explore the world thoroughly and efficiently, and that this exploration allows them to learn. While much work has gone into developing exploration methods in machine learning, artificial agents have not yet reached the standard set by their human counterparts. In this work we propose DeepMind Lab as a platform for directly comparing child and agent behavior and for developing new exploration techniques. We tested 60 children aged 4-6 under two conditions that emulate how current reinforcement learning algorithms learn, with dense and sparse rewards, asking the children to find a goal in various mazes. These tasks yield data that can be compared directly to algorithms: we evaluate the children's turn-by-turn moves against those of the Intrinsic Curiosity Module and a depth-first-search algorithm in the exact same mazes, and we show specifically where and when the children's behavior diverges from the algorithms'.
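The abstract names depth-first search as one of the algorithmic baselines compared against children's turn-by-turn moves. As an illustration only (not the paper's implementation), the sketch below runs DFS over a small grid maze; the maze encoding (0 = open, 1 = wall), the start/goal coordinates, and the function name are all assumptions made for this example.

```python
# Illustrative sketch, not the paper's code: depth-first search over a
# grid maze, the kind of baseline the abstract compares children against.
# Assumed encoding: 0 = open cell, 1 = wall; positions are (row, col).

def dfs_path(maze, start, goal):
    """Return a start-to-goal path found by depth-first search, or None."""
    rows, cols = len(maze), len(maze[0])
    stack = [(start, [start])]   # each entry: (position, path taken so far)
    visited = {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        # Expand the four neighboring cells (down, up, right, left).
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

maze = [
    [0, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
]
print(dfs_path(maze, (0, 0), (2, 2)))
# → [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]
```

Logging the sequence of cells each DFS expansion visits gives a move-by-move trace in the same format as a child's recorded path, which is what makes a turn-by-turn comparison possible.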
