Navigation requires the ability to update and track one's location and course from available multisensory information. Multisensory input comes in two prominent forms: body-based idiothetic cues and allothetic cues, usually from visual landmarks. Yet how these two streams of information are integrated remains unresolved. In this study, we used a highly controlled straight-line distance estimation task in immersive virtual reality to investigate how idiothetic and allothetic spatial cues are integrated. Participants reproduced a walked distance in the dark (path integration), with some trials involving misleading visual feedback showing a virtual room translated by up to 1.5 m from the true distance. We used computational modeling to determine the effect of visual feedback offset on the distance participants walked. We modeled participants' performance with three distinct models: pure path integration, pure landmark navigation, and integration of landmark feedback weighted by the participant's uncertainty. The model results showed that the behavior of most participants (n = 24) was best predicted by a Bayesian cue combination model that averaged the two spatial cues according to their perceived level of uncertainty. Our data showed considerable individual differences in participants' uncertainty estimates, which spanned almost uniformly from pure path integration (ignoring the visual cue) to pure landmark navigation (ignoring the path integration estimate). Taken together, these findings provide evidence for a Bayesian cue combination strategy in distance reproduction, with individual differences in navigation behavior dictated by perceived level of uncertainty.
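The Bayesian cue combination described above can be illustrated with a minimal sketch of precision-weighted averaging of two Gaussian estimates. This is a generic textbook formulation, not the authors' fitted model; the function name and example values are hypothetical.

```python
import math

def combine_cues(mu_pi, sigma_pi, mu_lm, sigma_lm):
    """Precision-weighted (Bayesian) combination of two Gaussian cue estimates.

    mu_pi, sigma_pi: mean and s.d. of the path-integration distance estimate (m)
    mu_lm, sigma_lm: mean and s.d. of the landmark-based distance estimate (m)
    Returns the combined mean and s.d. of the optimal integrated estimate.
    """
    # Each cue is weighted by its relative precision (inverse variance).
    w_pi = sigma_lm**2 / (sigma_pi**2 + sigma_lm**2)
    mu = w_pi * mu_pi + (1 - w_pi) * mu_lm
    # The combined variance is always smaller than either cue's variance.
    sigma = math.sqrt((sigma_pi**2 * sigma_lm**2) / (sigma_pi**2 + sigma_lm**2))
    return mu, sigma

# Equally reliable cues: the combined estimate is their simple average.
mu, sigma = combine_cues(4.0, 0.5, 5.0, 0.5)
# mu → 4.5
```

As the landmark uncertainty grows (sigma_lm → ∞), the weight on the visual cue vanishes and the model reduces to pure path integration; as the path-integration uncertainty grows, it reduces to pure landmark navigation. This captures the continuum of individual strategies reported in the abstract.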
