{"title":"动作过程中视觉与本体感觉的注意调节","authors":"Jakub Limanowski, Karl J. Friston","doi":"10.1093/cercor/bhz192","DOIUrl":null,"url":null,"abstract":"Abstract To control our actions efficiently, our brain represents our body based on a combination of visual and proprioceptive cues, weighted according to how (un)reliable—how precise—each respective modality is in a given context. However, perceptual experiments in other modalities suggest that the weights assigned to sensory cues are also modulated “top-down” by attention. Here, we asked whether during action, attention can likewise modulate the weights (i.e., precision) assigned to visual versus proprioceptive information about body position. Participants controlled a virtual hand (VH) via a data glove, matching either the VH or their (unseen) real hand (RH) movements to a target, and thus adopting a ``visual'' or ``proprioceptive'' attentional set, under varying levels of visuo-proprioceptive congruence and visibility. Functional magnetic resonance imaging (fMRI) revealed increased activation of the multisensory superior parietal lobe (SPL) during the VH task and increased activation of the secondary somatosensory cortex (S2) during the RH task. Dynamic causal modeling (DCM) showed that these activity changes were the result of selective, diametrical gain modulations in the primary visual cortex (V1) and the S2. 
These results suggest that endogenous attention can balance the gain of visual versus proprioceptive brain areas, thus contextualizing their influence on multisensory areas representing the body for action.","PeriodicalId":9825,"journal":{"name":"Cerebral Cortex (New York, NY)","volume":"2015 1","pages":"1637 - 1648"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"37","resultStr":"{\"title\":\"Attentional Modulation of Vision Versus Proprioception During Action\",\"authors\":\"Jakub Limanowski, Karl J. Friston\",\"doi\":\"10.1093/cercor/bhz192\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract To control our actions efficiently, our brain represents our body based on a combination of visual and proprioceptive cues, weighted according to how (un)reliable—how precise—each respective modality is in a given context. However, perceptual experiments in other modalities suggest that the weights assigned to sensory cues are also modulated “top-down” by attention. Here, we asked whether during action, attention can likewise modulate the weights (i.e., precision) assigned to visual versus proprioceptive information about body position. Participants controlled a virtual hand (VH) via a data glove, matching either the VH or their (unseen) real hand (RH) movements to a target, and thus adopting a ``visual'' or ``proprioceptive'' attentional set, under varying levels of visuo-proprioceptive congruence and visibility. Functional magnetic resonance imaging (fMRI) revealed increased activation of the multisensory superior parietal lobe (SPL) during the VH task and increased activation of the secondary somatosensory cortex (S2) during the RH task. Dynamic causal modeling (DCM) showed that these activity changes were the result of selective, diametrical gain modulations in the primary visual cortex (V1) and the S2. 
These results suggest that endogenous attention can balance the gain of visual versus proprioceptive brain areas, thus contextualizing their influence on multisensory areas representing the body for action.\",\"PeriodicalId\":9825,\"journal\":{\"name\":\"Cerebral Cortex (New York, NY)\",\"volume\":\"2015 1\",\"pages\":\"1637 - 1648\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"37\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cerebral Cortex (New York, NY)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/cercor/bhz192\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cerebral Cortex (New York, NY)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/cercor/bhz192","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Attentional Modulation of Vision Versus Proprioception During Action
Abstract To control our actions efficiently, our brain represents our body based on a combination of visual and proprioceptive cues, weighted according to how reliable (i.e., how precise) each modality is in a given context. However, perceptual experiments in other modalities suggest that the weights assigned to sensory cues are also modulated "top-down" by attention. Here, we asked whether, during action, attention can likewise modulate the weights (i.e., the precision) assigned to visual versus proprioceptive information about body position. Participants controlled a virtual hand (VH) via a data glove, matching either the VH or their (unseen) real hand (RH) movements to a target, thus adopting a "visual" or "proprioceptive" attentional set, under varying levels of visuo-proprioceptive congruence and visibility. Functional magnetic resonance imaging (fMRI) revealed increased activation of the multisensory superior parietal lobe (SPL) during the VH task and increased activation of the secondary somatosensory cortex (S2) during the RH task. Dynamic causal modeling (DCM) showed that these activity changes resulted from selective, diametrical gain modulations in the primary visual cortex (V1) and S2. These results suggest that endogenous attention can balance the gain of visual versus proprioceptive brain areas, thus contextualizing their influence on the multisensory areas that represent the body for action.
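The weighting scheme the abstract describes (each cue weighted by its precision, with attention able to boost one modality's gain) can be illustrated with a minimal sketch. The numbers and function names below are hypothetical illustrations, not the authors' model or data; this is only the standard reliability-weighted cue-combination rule, with an attentional gain factor applied to one cue's precision:

```python
def fuse(x_vis, x_prop, prec_vis, prec_prop):
    """Precision-weighted fusion of two position estimates.

    Each cue's weight is its precision (inverse variance) divided by the
    total precision, so the more reliable modality dominates the estimate.
    """
    total = prec_vis + prec_prop
    return (prec_vis / total) * x_vis + (prec_prop / total) * x_prop

# Hypothetical hand-position cues (e.g., cm along one axis), with vision
# initially four times as precise as proprioception.
x_vis, x_prop = 10.0, 12.0
prec_vis, prec_prop = 4.0, 1.0

baseline = fuse(x_vis, x_prop, prec_vis, prec_prop)
# Attending proprioceptively is modeled as a gain on proprioceptive precision,
# pulling the fused estimate toward the proprioceptive cue.
attn_gain = 8.0
attended = fuse(x_vis, x_prop, prec_vis, prec_prop * attn_gain)
```

With these illustrative numbers, `baseline` lies close to the visual cue, while `attended` shifts toward the proprioceptive cue, which is the qualitative effect the paper attributes to attentional gain modulation of V1 versus S2.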