Integration of visual and spoken cues in a virtual reality navigation task

Abstract

When integrating information in real time from multiple modalities or sources, such as when navigating with the help of GPS, a decision-maker faces a difficult cue integration problem: the two sources, visual and spoken, potentially differ in their presumed reliability. In a series of three studies, we asked participants to navigate a set of virtual reality mazes using a head-mounted VR display. Each maze consisted of a series of T-intersections, at each of which the subject was presented with a visual cue and a spoken cue, each separately indicating which direction to continue. However, the two cues did not always agree, and each type of cue had its own level of reliability, independent of the other. We found that subjects generally trusted spoken cues more than visual ones. Finally, we show how subjects' tendency to favor the spoken cue can be modeled as a Bayesian prior.
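
One way to make the Bayesian-prior account concrete is sketched below. This is a minimal illustration, not the paper's fitted model: it assumes a binary left/right decision at each T-intersection, conditionally independent cues, and illustrative symbols r_s and r_v (the reliabilities of the spoken and visual cues) and \pi_s (a prior probability that the spoken source is the trustworthy one).

% Posterior odds that the spoken cue s, rather than the visual cue v,
% indicates the correct direction d when the two cues conflict.
% r_s, r_v: cue reliabilities; \pi_s: prior favoring the spoken source
% (all illustrative notation, not taken from the paper).
\[
\frac{P(d = s \mid v, s)}{P(d = v \mid v, s)}
  = \underbrace{\frac{r_s\,(1 - r_v)}{r_v\,(1 - r_s)}}_{\text{likelihood ratio}}
    \;\times\;
    \underbrace{\frac{\pi_s}{1 - \pi_s}}_{\text{prior odds}}
\]

With \pi_s = 1/2 this reduces to standard reliability-weighted cue combination; a fitted \pi_s > 1/2 lets a subject rationally follow the spoken cue even when r_v > r_s, which is one way the observed spoken-cue bias could be expressed as a prior.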

