Early auditory sensory processing is facilitated by visual mechanisms
Pub Date: 2012-01-01 · DOI: 10.1163/187847612X648143 · Seeing and Perceiving 25(1), 184–185
Sonja Schall, S. Kiebel, B. Maess, K. Kriegstein
There is compelling evidence that low-level sensory areas are sensitive to more than one modality. For example, auditory cortices respond to visual-only stimuli (Calvert et al., 1997; Meyer et al., 2010; Pekkola et al., 2005) and, conversely, visual sensory areas respond to sound sources even in auditory-only conditions (Poirier et al., 2005; von Kriegstein et al., 2008; von Kriegstein and Giraud, 2006). Currently, it is unknown what makes the brain activate modality-specific sensory areas solely in response to input of a different modality. One reason may be that such activations are instrumental for early sensory processing of the input modality — a hypothesis that is contrary to current textbook knowledge. Here we test this hypothesis by harnessing a method with high temporal resolution, magnetoencephalography (MEG), to identify the temporal response profile of visual regions during auditory-only voice recognition. Participants (n = 19) briefly learned a set of voices audio–visually, i.e., together with a talking face, in an ecologically valid situation, as in daily life. Once subjects were able to recognize these now-familiar voices, we measured their brain responses to the voices using MEG. The results revealed two key mechanisms that characterize the sensory processing of familiar speakers’ voices: (i) activation of the visual, face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset, and (ii) a temporal facilitation of auditory processing (M200) that was directly associated with improved recognition performance. These findings suggest that visual areas are instrumental already during very early auditory-only processing stages and indicate that the brain uses visual mechanisms to optimize sensory processing and recognition of auditory stimuli.
{"title":"Early auditory sensory processing is facilitated by visual mechanisms","authors":"Sonja Schall, S. Kiebel, B. Maess, K. Kriegstein","doi":"10.1163/187847612X648143","DOIUrl":"https://doi.org/10.1163/187847612X648143","url":null,"abstract":"There is compelling evidence that low-level sensory areas are sensitive to more than one modality. For example, auditory cortices respond to visual-only stimuli (Calvert et al., 1997; Meyer et al., 2010; Pekkola et al., 2005) and conversely, visual sensory areas respond to sound sources even in auditory-only conditions (Poirier et al., 2005; von Kriegstein et al., 2008; von Kriegstein and Giraud, 2006). Currently, it is unknown what makes the brain activate modality-specific, sensory areas solely in response to input of a different modality. One reason may be that such activations are instrumental for early sensory processing of the input modality — a hypothesis that is contrary to current text book knowledge. Here we test this hypothesis by harnessing a temporally highly resolved method, i.e., magnetoencephalography (MEG), to identify the temporal response profile of visual regions in response to auditory-only voice recognition. Participants ( n = 19 ) briefly learned a set of voices audio–visually, i.e., together with a talking face in an ecologically valid situation, as in daily life. Once subjects were able to recognize these now familiar voices, we measured their brain responses using MEG. The results revealed two key mechanisms that characterize the sensory processing of familiar speakers’ voices: (i) activation in the visual face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset and (ii) a temporal facilitation of auditory processing (M200) that was directly associated with improved recognition performance. These findings suggest that visual areas are instrumental already during very early auditory-only processing stages and indicate that the brain uses visual mechanisms to optimize sensory processing and recognition of auditory stimuli.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"184-185"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648143","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effect of video game training on the vision of adults with bilateral deprivation amblyopia
Pub Date: 2012-01-01 · DOI: 10.1163/18784763-00002391 · Seeing and Perceiving 25(5), 493–520
Seong Taek Jeon, Daphne Maurer, Terri L Lewis
Amblyopia is a condition involving reduced acuity caused by abnormal visual input during a critical period beginning shortly after birth. It is typically considered to be irreversible in adulthood. Here we provide the first demonstration that video game training can improve at least some aspects of the vision of adults with bilateral deprivation amblyopia caused by a history of bilateral congenital cataracts. Specifically, after 40 h of training over one month with an action video game, most patients showed improvement in one or both eyes on a wide variety of tasks, including acuity, spatial contrast sensitivity, and sensitivity to global motion. There was also evidence of improvement in at least some patients for temporal contrast sensitivity, single-letter acuity, crowding, and feature spacing in faces, but not for useful field of view. The results indicate that, long after the end of the critical period for damage, there is enough residual plasticity in the adult visual system to effect improvements, even in cases of deep amblyopia caused by early bilateral deprivation.
{"title":"The effect of video game training on the vision of adults with bilateral deprivation amblyopia.","authors":"Seong Taek Jeon, Daphne Maurer, Terri L Lewis","doi":"10.1163/18784763-00002391","DOIUrl":"https://doi.org/10.1163/18784763-00002391","url":null,"abstract":"<p><p>Amblyopia is a condition involving reduced acuity caused by abnormal visual input during a critical period beginning shortly after birth. Amblyopia is typically considered to be irreversible during adulthood. Here we provide the first demonstration that video game training can improve at least some aspects of the vision of adults with bilateral deprivation amblyopia caused by a history of bilateral congenital cataracts. Specifically, after 40 h of training over one month with an action video game, most patients showed improvement in one or both eyes on a wide variety of tasks including acuity, spatial contrast sensitivity, and sensitivity to global motion. As well, there was evidence of improvement in at least some patients for temporal contrast sensitivity, single letter acuity, crowding, and feature spacing in faces, but not for useful field of view. The results indicate that, long after the end of the critical period for damage, there is enough residual plasticity in the adult visual system to effect improvements, even in cases of deep amblyopia caused by early bilateral deprivation.</p>","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 5","pages":"493-520"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/18784763-00002391","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"31084659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Single-object consistency facilitates multisensory pair learning: Evidence for unitization
Pub Date: 2012-01-01 · DOI: 10.1163/187847612X646343 · Seeing and Perceiving, 11
Elan Barenholtz, D. Lewkowicz, Lauren Kogelschatz
Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent — and thus were consistent with belonging to a single personal identity — compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony — which provides a highly reliable alternative cue that properties derive from a single object — improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.
{"title":"Single-object consistency facilitates multisensory pair learning: Evidence for unitization","authors":"Elan Barenholtz, D. Lewkowicz, Lauren Kogelschatz","doi":"10.1163/187847612X646343","DOIUrl":"https://doi.org/10.1163/187847612X646343","url":null,"abstract":"Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent — and thus were consistent with belonging to a single personal identity — compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony — which provides a highly reliable alternative cue that properties derive from a single object — improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"87 1","pages":"11-11"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646343","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64426262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sounds prevent selective monitoring of high spatial frequency channels in vision
Pub Date: 2012-01-01 · DOI: 10.1163/187847612X646622 · Seeing and Perceiving 25(1), 40
Alexis Pérez-Bellido, Joan López-Moliner, S. Soto-Faraco
Prior knowledge about the spatial frequency (SF) of upcoming visual targets (Gabor patches) speeds up average reaction times and decreases their standard deviation. This has often been regarded as evidence for multichannel processing of SF in vision. Multisensory research, on the other hand, has often reported sensory interactions between auditory and visual signals. These interactions result in enhancements of visual processing, leading to lower sensory thresholds and/or more precise visual estimates. However, little is known about how multisensory interactions may affect the uncertainty regarding visual SF. We conducted a reaction time study in which we manipulated the uncertainty about the SF of visual targets (SF was blocked or interleaved across trials) and compared visual-only versus audio–visual presentations. Surprisingly, the analysis of the reaction times and their standard deviation revealed an impairment of the selective monitoring of the SF channel in the presence of a concurrent sound. Moreover, this impairment was especially pronounced when the relevant channels were high SFs at high visual contrasts. We propose that an accessory sound automatically favours visual processing of low SFs through the magnocellular channels, thereby detracting from the potential benefits of tuning into high-SF psychophysical channels.
{"title":"Sounds prevent selective monitoring of high spatial frequency channels in vision","authors":"Alexis Pérez-Bellido, Joan López-Moliner, S. Soto-Faraco","doi":"10.1163/187847612X646622","DOIUrl":"https://doi.org/10.1163/187847612X646622","url":null,"abstract":"Prior knowledge about the spatial frequency (SF) of upcoming visual targets (Gabor patches) speeds up average reaction times and decreases standard deviation. This has often been regarded as evidence for a multichannel processing of SF in vision. Multisensory research, on the other hand, has often reported the existence of sensory interactions between auditory and visual signals. These interactions result in enhancements in visual processing, leading to lower sensory thresholds and/or more precise visual estimates. However, little is known about how multisensory interactions may affect the uncertainty regarding visual SF. We conducted a reaction time study in which we manipulated the uncertanty about SF (SF was blocked or interleaved across trials) of visual targets, and compared visual only versus audio–visual presentations. Surprisingly, the analysis of the reaction times and their standard deviation revealed an impairment of the selective monitoring of the SF channel by the presence of a concurrent sound. Moreover, this impairment was especially pronounced when the relevant channels were high SFs at high visual contrasts. We propose that an accessory sound automatically favours visual processing of low SFs through the magnocellular channels, thereby detracting from the potential benefits from tuning into high SF psychophysical-channels.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"40-40"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646622","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64426679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is maintaining balance during standing associated with inefficient audio–visual integration in older adults?
Pub Date: 2012-01-01 · DOI: 10.1163/187847612X646712 · Seeing and Perceiving, 50
J. Stapleton, E. Doheny, A. Setti, C. Cunningham, L. Crosby, R. Kenny, F. Newell
It has previously been shown that older adults may be less efficient than younger adults at processing multisensory information, and that older adults with a history of falling may be less efficient than a healthy cohort when processing audio–visual stimuli (Setti et al., 2011). We investigated whether body stance affects older adults’ ability to process multisensory information efficiently, and whether being presented with multisensory stimuli while standing may affect an individual’s balance. The experiment was performed by 44 participants, including both fall-prone older adults and a healthy control cohort. We tested their susceptibility to a sound-induced flash illusion (Shams et al., 2002) in both sitting and standing positions while measuring balance parameters using body-worn sensors. The results suggest that balance control in fall-prone adults was compromised relative to adults with no falls history, and this was particularly evident while they were presented with the illusory but not the non-illusory condition. Also, when the stimulus onset asynchrony was narrow (70 ms), fall-prone adults were more susceptible to the illusion while standing than while seated, whereas the performance of older adults with no history of falling was unaffected by a change in position. These results suggest a link between efficient multisensory integration and balance control, and have implications for interventions when fall-prone adults encounter complex multisensory information in their environment.
{"title":"Is maintaining balance during standing associated with inefficient audio–visual integration in older adults?","authors":"J. Stapleton, E. Doheny, A. Setti, C. Cunningham, L. Crosby, R. Kenny, F. Newell","doi":"10.1163/187847612X646712","DOIUrl":"https://doi.org/10.1163/187847612X646712","url":null,"abstract":"It has previously been shown that older adults may be less efficient than younger adults at processing multisensory information, and that older adults with a history of falling may be less efficient than a healthy cohort when processing audio–visual stimuli (Setti et al., 2011). We investigated whether body stance has an effect on older adults’ ability to efficiently process multisensory information and also whether being presented with multisensory stimuli while standing may affect an individual’s balance. This experiment was performed by 44 participants, including both fall-prone older adults and a healthy control cohort. We tested their susceptibility to a sound-induced flash illusion (i.e., Shams et al., 2002), during both sitting and standing positions while measuring balance parameters using body-worn sensors. The results suggest that balance control in fall prone-adults was compromised relative to adults with no falls history, and this was particularly evident whilst they were presented with the auditory-flash illusion but not the non-illusory condition. Also, when the temporal window of the stimulus onset asynchrony was narrow (70 ms) fall-prone adults were more susceptible to the illusion during the standing position compared with their performance while seated, while the performance of older adults with no history of falling was unaffected by a change in position. These results suggest a link between efficient multisensory integration and balance control and have implications for interventions when fall-prone adults encounter complex multisensory information in their environment.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"1 1","pages":"50-50"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646712","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychedelic synaesthesia: Evidence for a serotonergic role in synaesthesia
Pub Date: 2012-01-01 · DOI: 10.1163/187847612X646956 · Seeing and Perceiving 25(1), 74
D. Luke, D. Terhune, Ross Friday
The neurobiology of synaesthesia is receiving growing attention in the search for insights into consciousness, such as the binding problem. One way of decoding the neurocognitive mechanisms underlying this phenomenon is to investigate the induction of synaesthesia via neurochemical agents, as commonly occurs with psychedelic substances. How synaesthesia is affected by drugs can also help inform us of the neural mechanisms underlying this condition. To address these questions we surveyed a sample of recreational drug users regarding the prevalence, type and frequency of synaesthesia under the influence of psychedelics and other psychoactive substances. The results indicate that synaesthesia is frequently experienced following the consumption of serotonergic agonists such as LSD and psilocybin and that these same drugs appear to augment synaesthesia in congenital synaesthetes. These results implicate the serotonergic system in the experience of synaesthesia.
{"title":"Psychedelic synaesthesia: Evidence for a serotonergic role in synaesthesia","authors":"D. Luke, D. Terhune, Ross Friday","doi":"10.1163/187847612X646956","DOIUrl":"https://doi.org/10.1163/187847612X646956","url":null,"abstract":"The neurobiology of synaesthesia is receiving growing attention in the search for insights into consciousness, such as the binding problem. One way of decoding the neurocognitive mechanisms underlying this phenomenon is to investigate the induction of synaesthesia via neurochemical agents, as commonly occurs with psychedelic substances. How synaesthesia is affected by drugs can also help inform us of the neural mechanisms underlying this condition. To address these questions we surveyed a sample of recreational drug users regarding the prevalence, type and frequency of synaesthesia under the influence of psychedelics and other psychoactive substances. The results indicate that synaesthesia is frequently experienced following the consumption of serotonergic agonists such as LSD and psilocybin and that these same drugs appear to augment synaesthesia in congenital synaesthetes. These results implicate the serotonergic system in the experience of synaesthesia.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"74-74"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646956","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The hands have it: Hand specific vision of touch enhances touch perception and somatosensory evoked potential
Pub Date: 2012-01-01 · DOI: 10.1163/187847612X646659 · Seeing and Perceiving, 43
Brenda R. Malcolm, K. Reilly, J. Mattout, R. Salemme, O. Bertrand, M. Beauchamp, T. Ro, A. Farnè
Our ability to accurately discriminate information from one sensory modality is often influenced by information from the other senses. Previous research indicates that tactile perception on the hand may be enhanced if participants look at a hand (compared to a neutral object) and if visual information about the origin of touch conveys temporal and/or spatial congruency. The current experiment further assessed the effects of non-informative vision on tactile perception. Participants made speeded discrimination responses (digit 2 or digit 5 of their right hand) to supra-threshold electro-cutaneous stimulation while viewing a video of a pointer, either static or moving (dynamic) towards the same or a different digit of a hand, or towards the corresponding spatial location on a non-corporeal object (an engine). Thus, besides manipulating whether the visual contact was spatially congruent with the simultaneously felt touch, we also manipulated the nature of the recipient object (hand vs. engine). Behaviourally, the temporal cues provided by dynamic visual information about an upcoming touch decreased reaction times. Additionally, tactile discrimination was enhanced more when participants viewed a spatially congruent contact than a spatially incongruent one. Most importantly, this visually driven improvement was greater in the view-hand condition than in the view-object condition. Spatially congruent, hand-specific visual events also produced the greatest amplitude of the P50 somatosensory evoked potential (SEP). We conclude that tactile perception is enhanced when vision provides non-predictive spatio-temporal cues and that these effects are specifically enhanced when viewing a hand.
{"title":"The hands have it: Hand specific vision of touch enhances touch perception and somatosensory evoked potential","authors":"Brenda R. Malcolm, K. Reilly, J. Mattout, R. Salemme, O. Bertrand, M. Beauchamp, T. Ro, A. Farnè","doi":"10.1163/187847612X646659","DOIUrl":"https://doi.org/10.1163/187847612X646659","url":null,"abstract":"Our ability to accurately discriminate information from one sensory modality is often influenced by information from the other senses. Previous research indicates that tactile perception on the hand may be enhanced if participants look at a hand (compared to a neutral object) and if visual information about the origin of touch conveys temporal and/or spatial congruency. The current experiment further assessed the effects of non-informative vision on tactile perception. Participants made speeded discrimination responses (digit 2 or digit 5 of their right hand) to supra-threshold electro-cutaneous stimulation while viewing a video showing a pointer, in a static position or moving (dynamic), towards the same or different digit of a hand or to the corresponding spatial location on a non-corporeal object (engine). Therefore, besides manipulating whether a visual contact was spatially congruent to the simultaneously felt touch, we also manipulated the nature of the recipient object (hand vs. engine). Behaviourally, the temporal cues provided by the dynamic visual information about an upcoming touch decreased reaction times. Additionally, a greater enhancement in tactile discrimination was present when participants viewed a spatially congruent contact compared to a spatially incongruent contact. Most importantly, this visually driven improvement was greater for the view-hand condition compared to the view-object condition. Spatially-congruent, hand-specific visual events also produced the greatest amplitude in the P50 somatosensory evoked potential (SEP). We conclude that tactile perception is enhanced when vision provides non-predictive spatio-temporal cues and that these effects are specifically enhanced when viewing a hand.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"6 1","pages":"43-43"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646659","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audiovisual crossmodal correspondences in Autism Spectrum Disorders (ASDs)
Pub Date: 2012-01-01 · DOI: 10.1163/187847612X646668 · Seeing and Perceiving 25(1), 44
Valeria Occelli, G. Esposito, P. Venuti, P. Walker, M. Zampini
The label ‘crossmodal correspondences’ has been used to define the nonarbitrary associations that appear to exist between basic physical stimulus attributes in different sensory modalities. For instance, it has been consistently shown in the neurotypical population that higher-pitched sounds are more frequently matched with visual patterns that are brighter, smaller, and sharper than those associated with lower-pitched sounds. Some evidence suggests that people with ASDs tend not to show this crossmodal preferential association pattern (e.g., curvilinear shapes and labial/lingual consonants vs. rectilinear shapes and plosive consonants). In the present study, we compared the performance of children with ASDs (6–15 years) and matched neurotypical controls in a non-verbal crossmodal correspondence task. Participants were asked to indicate which of two bouncing visual patterns was making a centrally located sound. In intermixed trials, the visual patterns varied in size, surface brightness, or shape, whereas the sound varied in pitch. The results showed that, whereas the neurotypical controls reliably matched the higher-pitched sound to a smaller and brighter visual pattern, the performance of participants with ASDs was at chance level. In the condition where the visual patterns differed in shape, no inter-group difference was observed. The children’s matching performance cannot be attributed to intensity matching or to difficulties in understanding the instructions, both of which were controlled. These data suggest that the tendency to associate congruent visual and auditory features varies as a function of the presence of ASDs, possibly pointing to poorer capabilities for integrating auditory and visual inputs in this population.
{"title":"Audiovisual crossmodal correspondences in Autism Spectrum Disorders (ASDs)","authors":"Valeria Occelli, G. Esposito, P. Venuti, P. Walker, M. Zampini","doi":"10.1163/187847612X646668","DOIUrl":"https://doi.org/10.1163/187847612X646668","url":null,"abstract":"The label ‘crossmodal correspondences’ has been used to define the nonarbitrary associations that appear to exist between different basic physical stimulus attributes in different sensory modalities. For instance, it has been consistently shown in the neurotypical population that higher pitched sounds are more frequently matched with visual patterns which are brighter, smaller, and sharper than those associated to lower pitched sounds. Some evidence suggests that patients with ASDs tend not to show this crossmodal preferential association pattern (e.g., curvilinear shapes and labial/lingual consonants vs. rectilinear shapes and plosive consonants). In the present study, we compared the performance of children with ASDs (6–15 years) and matched neurotypical controls in a non-verbal crossmodal correspondence task. The participants were asked to indicate which of two bouncing visual patterns was making a centrally located sound. In intermixed trials, the visual patterns varied in either size, surface brightness, or shape, whereas the sound varied in pitch. The results showed that, whereas the neurotypical controls reliably matched the higher pitched sound to a smaller and brighter visual pattern, the performance of participants with ASDs was at chance level. In the condition where the visual patterns differed in shape, no inter-group difference was observed. Children’s matching performance cannot be attributed to intensity matching or difficulties in understanding the instructions, which were controlled. These data suggest that the tendency to associate congruent visual and auditory features vary as a function of the presence of ASDs, possibly pointing to poorer capabilities to integrate auditory and visual inputs in this population.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"44-44"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646668","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of looming and static sounds on somatosensory processing: A MEG study
Pub Date: 2012-01-01 · DOI: 10.1163/187847612X647270 · Seeing and Perceiving 25(1), 94
E. Leonardelli, Valeria Occelli, G. Demarchi, M. Grassi, C. Braun, M. Zampini
The present study aimed to assess the mechanisms involved in the processing of potentially threatening stimuli presented within the peri-head space of humans. Magnetic fields evoked by air puffs presented to the peri-oral area of fifteen participants were recorded using magnetoencephalography (MEG). Crucially, each air puff was preceded by a sound that was either looming, stationary and close to the body (i.e., within the peri-head space), or stationary and far from the body (i.e., in extrapersonal space). The comparison of the time courses of the global field power (GFP) indicated a significant difference between the conditions in the time window ranging from 70 to 170 ms. When the air puff was preceded by a stationary sound located far from the head, stronger somatosensory activity was evoked than in the conditions where the sounds were located close to the head. No difference was found between the looming and the stationary prime stimulus close to the head. Source localization was performed assuming a pair of symmetric dipoles in a spherical head model fitted to the MRI images of the individual participants. Results showed sources in primary and secondary somatosensory cortex. Source activities in secondary somatosensory cortex differed between the three conditions, with the largest effects evoked by the looming sounds, the smallest by the far stationary sounds, and intermediate effects by the close stationary sounds. Overall, these findings suggest the existence of a system in humans involved in detecting approaching objects and protecting the body from collisions.
{"title":"Effects of looming and static sounds on somatosensory processing: A MEG study","authors":"E. Leonardelli, Valeria Occelli, G. Demarchi, M. Grassi, C. Braun, M. Zampini","doi":"10.1163/187847612X647270","DOIUrl":"https://doi.org/10.1163/187847612X647270","url":null,"abstract":"The present study aims to assess the mechanisms involved in the processing of potentially threatening stimuli presented within the peri-head space of humans. Magnetic fields evoked by air-puffs presented at the peri-oral area of fifteen participants were recorded by using magnetoencephalography (MEG). Crucially, each air puff was preceded by a sound, which could be either perceived as looming, stationary and close to the body (i.e., within the peri-head space) or stationary and far from the body (i.e., extrapersonal space). The comparison of the time courses of the global field power (GFP) indicated a significant difference in the time window ranging from 70 to 170 ms between the conditions. When the air puff was preceded by a stationary sound located far from the head stronger somatosensory activity was evoked as compared to the conditions where the sounds were located close to the head. No difference could be shown for the looming and the stationary prime stimulus close to the head. Source localization was performed assuming a pair of symmetric dipoles in a spherical head model that was fitted to the MRI images of the individual participants. Results showed sources in primary and secondary somatosensory cortex. Source activities in secondary somatosensory cortex differed between the three conditions, with larger effects evoked by the looming sounds and smaller effects evoked by the far stationary sounds, and the close stationary sounds evoking intermediate effects. Overall, these findings suggest the existence of a system involved in the detection of approaching objects and protecting the body from collisions in humans.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"94-94"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647270","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ERP investigations into the effects of gaze and spatial attention on the processing of tactile events
Pub Date: 2012-01-01 · DOI: 10.1163/187847612X647784 · Seeing and Perceiving 25(1), 146
Elena Gherri, Bettina Forster
Previous research has demonstrated that directing one’s gaze at a body part reduces detection speed (e.g., Tipper et al., 1998) and enhances the processing (Forster and Eimer, 2005) of tactile stimuli presented at the gazed-at location. Interestingly, gaze-dependent modulations of somatosensory evoked potentials (SEPs) are very similar to those observed in previous studies of tactile spatial attention. This might indicate that manipulating gaze direction activates the same mechanisms that are responsible for the covert orienting of spatial attention in touch. To investigate this possibility, gaze direction and sustained tactile attention were orthogonally manipulated in the present study. In different blocks of trials, participants focused their attention on the left or right hand while gazing at either the attended or the unattended hand, and responded to infrequent tactile targets presented to the attended hand. Analyses of the SEPs elicited by tactile non-target stimuli demonstrate that gaze and attention influence different stages of tactile processing. While gaze modulates tactile processing as early as 50 ms after stimulus onset, attentional SEP modulations are only observed beyond 110 ms post-stimulus. This dissociation in the timing, and therefore in the associated locus, of the effects of gaze and attention on somatosensory processing reveals that the effect of gaze on tactile processing is independent of tactile attention.
{"title":"ERP investigations into the effects of gaze and spatial attention on the processing of tactile events","authors":"Elena Gherri, Bettina Forster","doi":"10.1163/187847612X647784","DOIUrl":"https://doi.org/10.1163/187847612X647784","url":null,"abstract":"Previous research demonstrated that directing one’s gaze at a body part reduces detection speed (e.g., Tipper et al., 1998) and enhances the processing (Forster and Eimer, 2005) of tactile stimuli presented at the gazed location. Interestingly, gaze-dependent modulation of somatosensory evoked potentials (SEPs), are very similar to those observed in previous studies of tactile spatial attention. This might indicate that manipulating gaze direction activates the same mechanisms that are responsible for the covert orienting of spatial attention in touch. To investigate this possibility, gaze direction and sustained tactile attention were orthogonally manipulated in the present study. In different blocks of trials, participants focused their attention on the left or right hand while gazing to the attended or to the unattended hand while they had to respond to infrequent tactile targets presented to the attended hand. Analyses of the SEPs elicited by tactile non-target stimuli demonstrate that gaze and attention influence different stages of tactile processing. While gaze is able to modulate tactile processing already 50 ms after stimulus onset, attentional SEP modulations are only observed beyond 110 ms post-stimulus. This dissociation in the timing and therefore the associated locus of the effects of gaze and attention on somatosensory processing reveals that the effect of gaze on tactile processing is independent of tactile attention.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"146-146"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647784","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}