The spatial distribution of auditory attention in early blindness
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X646767 | Seeing and Perceiving, p. 55
Elodie Lerens, L. Renier, A. Volder
Early blind people compensate for their lack of vision by developing superior abilities in the remaining senses, such as audition (Collignon et al., 2006; Gougoux et al., 2004; Wan et al., 2010). Previous studies reported supra-normal abilities in auditory spatial attention, particularly for the localization of peripheral stimuli in comparison with frontal stimuli (Lessard et al., 1998; Roder et al., 1999). However, it is unknown whether this specific supra-normal ability extends to the non-spatial attention domain. Here we compared the performance of early blind subjects and blindfolded sighted controls during an auditory non-spatial attention task: target detection among distractors according to tone frequency. We paid special attention to the potential effect of sound source location, comparing the accuracy and speed of target detection in peripheral and frontal space. Blind subjects displayed shorter reaction times than sighted controls for both peripheral and frontal stimuli. Moreover, in both groups of subjects, we observed an interaction between target location and distractor location: the target was detected faster when its location differed from that of the distractors. However, this effect was attenuated in early blind subjects and even abolished in the condition with frontal targets and peripheral distractors. We conclude that early blind people compensate for the lack of vision not only by enhancing their ability to process auditory information but also by changing the spatial distribution of their auditory attention resources.
{"title":"The spatial distribution of auditory attention in early blindness","authors":"Elodie Lerens, L. Renier, A. Volder","doi":"10.1163/187847612X646767","DOIUrl":"https://doi.org/10.1163/187847612X646767","url":null,"abstract":"Early blind people compensate for their lack of vision by developing superior abilities in the remaining senses such as audition (Collignon et al., 2006; Gougoux et al., 2004; Wan et al., 2010). Previous studies reported supra-normal abilities in auditory spatial attention, particularly for the localization of peripheral stimuli in comparison with frontal stimuli (Lessard et al., 1998; Roder et al., 1999). However, it is unknown whether this specific supra-normal ability extends to the non-spatial attention domain. Here we compared the performance of early blind subjects and sighted controls, who were blindfolded, during an auditory non-spatial attention task: target detection among distractors according to tone frequency. We paid a special attention to the potential effect of the sound source location, comparing the accuracy and speed in target detection in the peripheral and frontal space. Blind subjects displayed shorter reaction times than sighted controls for both peripheral and frontal stimuli. Moreover, in the two groups of subjects, we observed an interaction effect between the target location and the distractors location: the target was detected faster when its location was different from the location of the distractors. However, this effect was attenuated in early blind subjects and even cancelled in the condition with frontal targets and peripheral distractors. We conclude that early blind people compensate for the lack of vision by enhancing their ability to process auditory information but also by changing the spatial distribution of their auditory attention resources.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"41 1","pages":"55-55"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646767","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recovery periods of event-related potentials indicating crossmodal interactions between the visual, auditory and tactile system
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647478 | Seeing and Perceiving, p. 114
Marlene Hense, Boukje Habets, B. Roeder
In sequential unimodal stimulus designs, the time it takes for an event-related potential (ERP) amplitude to recover is often interpreted as a transient decrement in the responsiveness of the generating cortical circuits. This effect has been called neural refractoriness; it is larger the more similar the repeated stimuli are, and it thus indicates the degree of overlap between the neural generator systems activated by two sequential stimuli. We hypothesized that crossmodal refractoriness effects in a crossmodal sequential design might be a good parameter for assessing the ‘modality overlap’ of the involved neural generators and the degree of crossmodal interaction. To investigate crossmodal ERP refractory period effects, we presented visual and auditory stimuli (Experiment 1) and visual and tactile stimuli (Experiment 2) with inter-stimulus intervals (ISIs) of 1 and 2 s to adult participants. Participants had to detect rare auditory and visual stimuli. Both intra- and crossmodal ISI effects were found in all modalities for the three investigated ERP deflections (P1, N1, P2). The topography of the crossmodal refractory period effect, for the N1 and P2 deflections in Experiment 1 and for the P1 and N1 in Experiment 2, was similar to the corresponding intramodal refractory effect yet more confined, and crossmodal effects were generally weaker. The crossmodal refractory effect for the visual P1, however, had a distinct, less circumscribed topography relative to the intramodal effect. These results suggest that ERP refractory effects might be a promising indicator of the neural correlates of crossmodal interactions.
{"title":"Recovery periods of event-related potentials indicating crossmodal interactions between the visual, auditory and tactile system","authors":"Marlene Hense, Boukje Habets, B. Roeder","doi":"10.1163/187847612X647478","DOIUrl":"https://doi.org/10.1163/187847612X647478","url":null,"abstract":"In sequential unimodal stimulus designs the time it takes for an event-related potential (ERP)-amplitude to recover is often interpreted as a transient decrement in responsiveness of the generating cortical circuits. This effect has been called neural refractoriness, which is the larger the more similar the repeated stimuli are and thus indicates the degree of overlap between the neural generator systems activated by two sequential stimuli. We hypothesize that crossmodal refractoriness-effects in a crossmodal sequential design might be a good parameter to assess the ‘modality overlap’ in the involved neural generators and the degree of crossmodal interaction. In order to investigate crossmodal ERP refractory period effects we presented visual and auditory (Experiment 1) and visual and tactile stimuli (Experiment 2) with inter stimulus intervals of 1 and 2 s to adult participants. Participants had to detect rare auditory and visual stimuli. Both, intra- and crossmodal ISI effects for all modalities were found for three investigated ERP-deflections (P1, N1, P2). The topography of the crossmodal refractory period effect of the N1- and P2-deflections in Experiment 1 and of P1 and N1 in Experiment 2 of both modalities was similar to the corresponding intramodal refractory effect, yet more confined and crossmodal effects were generally weaker. The crossmodal refractory effect for the visual P1, however, had a distinct, less circumscribed topography with respect to the intramodal effect. These results suggest that ERP refractory effects might be a promising indicator of the neural correlates of crossmodal interactions.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"9 1","pages":"114-114"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647478","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heterogeneous auditory–visual integration: Effects of pitch, band-width and visual eccentricity
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647081 | Seeing and Perceiving, p. 89
A. Thelen, M. Murray
The identification of monosynaptic connections between primary cortices in non-human primates has recently been complemented by observations of early-latency, low-level non-linear interactions in human brain responses, as well as by facilitative effects of multisensory stimuli on behavior and performance in both humans and monkeys. While there is some evidence in favor of causal links between early-latency interactions within low-level cortices and behavioral facilitation, it remains unknown whether such effects are subserved by direct anatomical connections between primary cortices. In non-human primates, the above monosynaptic projections from primary auditory cortex terminate within peripheral visual field representations of primary visual cortex, suggesting a potential bias toward integrating eccentric visual stimuli with pure-tone (vs. broadband) sounds. To date, behavioral effects in humans (and monkeys) have been observed after presenting (para)foveal stimuli with a range of auditory stimuli, from pure tones to noise bursts. The present study aimed to identify any heterogeneity in the integration of auditory–visual stimuli. To this end, we employed a 3 × 3 within-subject design that varied the visual eccentricity of an annulus (2.5°, 5.7°, 8.9°) and the auditory pitch (250, 1000, 4000 Hz) of multisensory stimuli while subjects completed a simple detection task. We also varied the auditory bandwidth (pure tone vs. pink noise) across blocks of trials. To ensure attention to both modalities, multisensory stimuli were equiprobable with unisensory visual and unisensory auditory trials that themselves varied along the abovementioned dimensions. We measured median reaction times for each stimulus condition as well as the percentage gain/loss of each multisensory condition relative to the best constituent unisensory condition. The preliminary results reveal that multisensory interactions (as measured from simple reaction times) are indeed heterogeneous across the tested dimensions and may provide a means for delimiting the anatomo-functional substrates of behaviorally relevant early-latency neural response interactions. Interestingly, the preliminary results suggest selective interactions for visual stimuli when paired with broadband sounds but not with pure tones. More precisely, centrally presented visual stimuli show the greatest multisensory facilitation when coupled with a high-pitched tone embedded in pink noise, while visual stimuli presented at approximately 5.7° of visual angle show the greatest slowing of reaction times.
{"title":"Heterogeneous auditory–visual integration: Effects of pitch, band-width and visual eccentricity","authors":"A. Thelen, M. Murray","doi":"10.1163/187847612X647081","DOIUrl":"https://doi.org/10.1163/187847612X647081","url":null,"abstract":"The identification of monosynaptic connections between primary cortices in non-human primates has recently been complemented by observations of early-latency and low-level non-linear interactions in brain responses in humans as well as observations of facilitative effects of multisensory stimuli on behavior/performance in both humans and monkeys. While there is some evidence in favor of causal links between early–latency interactions within low-level cortices and behavioral facilitation, it remains unknown if such effects are subserved by direct anatomical connections between primary cortices. In non-human primates, the above monosynaptic projections from primary auditory cortex terminate within peripheral visual field representations within primary visual cortex, suggestive of there being a potential bias for the integration of eccentric visual stimuli and pure tone (vs. broad-band) sounds. To date, behavioral effects in humans (and monkeys) have been observed after presenting (para)foveal stimuli with any of a range of auditory stimuli from pure tones to noise bursts. The present study aimed to identify any heterogeneity in the integration of auditory–visual stimuli. To this end, we employed a 3 × 3 within subject design that varied the visual eccentricity of an annulus (2.5°, 5.7°, 8.9°) and auditory pitch (250, 1000, 4000 Hz) of multisensory stimuli while subjects completed a simple detection task. We also varied the auditory bandwidth (pure tone vs. pink noise) across blocks of trials that a subject completed. To ensure attention to both modalities, multisensory stimuli were equi-probable with both unisensory visual and unisensory auditory trials that themselves varied along the abovementioned dimensions. Median reaction times for each stimulus condition as well as the percentage gain/loss of each multisensory condition vs. the best constituent unisensory condition were measured. The preliminary results reveal that multisensory interactions (as measured from simple reaction times) are indeed heterogeneous across the tested dimensions and may provide a means for delimiting the anatomo-functional substrates of behaviorally-relevant early–latency neural response interactions. Interestingly, preliminary results suggest selective interactions for visual stimuli when presented with broadband stimuli but not when presented with pure tones. More precisely, centrally presented visual stimuli show the greatest index of multisensory facilitation when coupled to a high pitch tone embedded in pink noise, while visual stimuli presented at approximately 5.7° of visual angle show the greatest slowing of reaction times.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"89-89"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647081","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multisensory processes in the synaesthetic brain — An event-related potential study in multisensory competition situations
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647333 | Seeing and Perceiving, p. 101
J. Neufeld, C. Sinke, Daniel Wiswede, H. Emrich, S. Bleich, G. Szycik
In synaesthesia, certain external stimuli (e.g., music) automatically trigger internally generated sensations (e.g., colour). Results of behavioural investigations indicate that multisensory processing works differently in synaesthetes. However, the reasons for these differences and the underlying neural correlates remain unclear. The aim of the current study was to investigate whether synaesthetes show differences in electrophysiological components of multimodal processing. Further, we wanted to test synaesthetes for an enhanced distractor-filtering ability in multimodal situations. Line drawings of animals and objects were presented to participants either with congruent auditory stimulation (the typical sound for the presented picture, e.g., a picture of a bird together with a chirp), with incongruent stimulation (a picture of a bird together with a gunshot), or without simultaneous auditory stimulation. Fourteen synaesthetes (auditory–visual and grapheme-colour synaesthetes) and 13 controls participated in the study. We found differences in the event-related potentials between synaesthetes and controls, indicating altered multisensory processing of bimodal stimuli in synaesthetes in competition situations. These differences were found especially over frontal brain sites. An interaction effect between group (synaesthetes vs. controls) and stimulation (unimodal visual vs. congruent multimodal) could not be detected. We therefore conclude that multisensory processing works, in general, similarly in synaesthetes and controls, and that only integration processes in multisensory competition situations are specifically altered in synaesthetes.
{"title":"Multisensory processes in the synaesthetic brain — An event-related potential study in multisensory competition situations","authors":"J. Neufeld, C. Sinke, Daniel Wiswede, H. Emrich, S. Bleich, G. Szycik","doi":"10.1163/187847612X647333","DOIUrl":"https://doi.org/10.1163/187847612X647333","url":null,"abstract":"In synaesthesia certain external stimuli (e.g., music) trigger automatically internally generated sensations (e.g., colour). Results of behavioural investigations indicate that multisensory processing works differently in synaesthetes. However, the reasons for these differences and the underlying neural correlates remain unclear. The aim of the current study was to investigate if synaesthetes show differences in electrophysiological components of multimodal processing. Further we wanted to test synaesthetes for an enhanced distractor filtering ability in multimodal situations. Therefore, line drawings of animals and objects were presented to participants, either with congruent (typical sound for presented picture, e.g., picture of bird together with chirp), incongruent (picture of bird together with gun shot) or without simultaneous auditory stimulation. 14 synaesthetes (auditory–visual and grapheme-colour synaesthetes) and 13 controls participated in the study. We found differences in the event-related potentials between synaesthetes and controls, indicating an altered multisensory processing of bimodal stimuli in synaesthetes in competition situations. These differences were especially found over frontal brain sites. An interaction effect between group (synaesthetes vs. controls) and stimulation (unimodal visual vs. congruent multimodal) could not be detected. Therefore we conclude that multisensory processing works in general similar in synaesthetes and controls and that only specifically integration processes in multisensory competition situations are altered in synaesthetes.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"101-101"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647333","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64427987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Age-related changes in temporal processing of vestibular stimuli
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647847 | Seeing and Perceiving, p. 153
Alex K. Malone, N. N. Chang, T. Hullar
Falls are one of the leading causes of disability in the elderly. Previous research has shown that falls may be related to changes in the temporal integration of multisensory stimuli. This study compared the temporal integration and processing of vestibular and auditory stimuli in younger and older subjects. The vestibular stimulus consisted of a continuous sinusoidal rotational velocity delivered using a rotational chair, and the auditory stimulus consisted of 5 ms of white noise presented dichotically through headphones (both at 0.5 Hz). Simultaneity was defined as perceiving the chair at its furthest rightward or leftward excursion at the same moment as the auditory stimulus was perceived in the contralateral ear. The temporal offset of the auditory stimulus was adjusted using a method of constant stimuli, so that the auditory stimulus either led or lagged true simultaneity. Fifteen younger (ages 21–27) and 12 older (ages 63–89) healthy subjects were tested using a two-alternative forced choice task to determine at what offsets they perceived the two stimuli as simultaneous. Younger subjects had a mean temporal binding window (TBW) of 334 ± 37 ms (mean ± SEM) and a mean point of subjective simultaneity (PSS) of 83 ± 15 ms. Older subjects had a mean TBW of 556 ± 36 ms and a mean PSS of 158 ± 27 ms. Both differences were significant, indicating that older subjects integrate vestibular and auditory stimuli over a wider temporal range than younger subjects. These findings were consistent upon retesting and were not due to differences in vestibular perception thresholds.
{"title":"Age-related changes in temporal processing of vestibular stimuli","authors":"Alex K. Malone, N. N. Chang, T. Hullar","doi":"10.1163/187847612X647847","DOIUrl":"https://doi.org/10.1163/187847612X647847","url":null,"abstract":"Falls are one of the leading causes of disability in the elderly. Previous research has shown that falls may be related to changes in the temporal integration of multisensory stimuli. This study compared the temporal integration and processing of a vestibular and auditory stimulus in younger and older subjects. The vestibular stimulus consisted of a continuous sinusoidal rotational velocity delivered using a rotational chair and the auditory stimulus consisted of 5 ms of white noise presented dichotically through headphones (both at 0.5 Hz). Simultaneity was defined as perceiving the chair being at its furthest rightward or leftward trajectory at the same moment as the auditory stimulus was perceived in the contralateral ear. The temporal offset of the auditory stimulus was adjusted using a method of constant stimuli so that the auditory stimulus either led or lagged true simultaneity. 15 younger (ages 21–27) and 12 older (ages 63–89) healthy subjects were tested using a two alternative forced choice task to determine at what times they perceived the two stimuli as simultaneous. Younger subjects had a mean temporal binding window of 334 ± 37 ms (mean ± SEM) and a mean point of subjective simultaneity of 83 ± 15 ms. Older subjects had a mean TBW of 556 ± 36 ms and a mean point of subjective simultaneity of 158 ± 27. Both differences were significant indicating that older subjects have a wider temporal range over which they integrate vestibular and auditory stimuli than younger subjects. These findings were consistent upon retesting and were not due to differences in vestibular perception thresholds.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"153-153"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647847","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluative similarity hypothesis of crossmodal correspondences: A developmental view
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647603 | Seeing and Perceiving, p. 127
D. Janković
Crossmodal correspondences have been widely demonstrated, although the mechanisms behind the phenomenon have not yet been fully established. According to the evaluative similarity hypothesis, crossmodal correspondences are influenced by the evaluative (affective) similarity of stimuli from different sensory modalities (Jankovic, 2010, Journal of Vision 10(7), 859). On this view, detection of similar evaluative information in stimulation from different sensory modalities facilitates crossmodal correspondences and multisensory integration. The aim of this study was to explore the evaluative similarity hypothesis of crossmodal correspondences in children. In Experiment 1, two groups of participants (nine- and thirteen-year-olds) were asked to make explicit matches between presented auditory stimuli (1 s long sound clips) and abstract visual patterns. In Experiment 2, the same participants judged the abstract visual patterns and auditory stimuli on a set of evaluative attributes measuring affective valence and arousal. The results showed that crossmodal correspondences were mostly influenced by the evaluative similarity of the visual and auditory stimuli in both age groups. The most frequently matched were visual and auditory stimuli congruent in both valence and arousal, followed by stimuli congruent in valence, and finally stimuli congruent in arousal. Evaluatively incongruent stimuli showed weak crossmodal associations, especially in the older group.
{"title":"Evaluative similarity hypothesis of crossmodal correspondences: A developmental view","authors":"D. Janković","doi":"10.1163/187847612X647603","DOIUrl":"https://doi.org/10.1163/187847612X647603","url":null,"abstract":"Crossmodal correspondences have been widely demonstrated, although mechanisms that stand behind the phenomenon have not been fully established yet. According to the Evaluative similarity hypothesis crossmodal correspondences are influenced by evaluative (affective) similarity of stimuli from different sensory modalities (Jankovic, 2010, Journal of Vision 10(7), 859). From this view, detection of similar evaluative information in stimulation from different sensory modalities facilitates crossmodal correspondences and multisensory integration. The aim of this study was to explore the evaluative similarity hypothesis of crossmodal correspondences in children. In Experiment 1 two groups of participants (nine- and thirteen-year-olds) were asked to make explicit matches between presented auditory stimuli (1 s long sound clips) and abstract visual patterns. In Experiment 2 the same participants judged abstract visual patterns and auditory stimuli on the set of evaluative attributes measuring affective valence and arousal. The results showed that crossmodal correspondences are mostly influenced by evaluative similarity of visual and auditory stimuli in both age groups. The most frequently matched were visual and auditory stimuli congruent in both valence and arousal, followed by stimuli congruent in valence, and finally stimuli congruent in arousal. Evaluatively incongruent stimuli demonstrated low crossmodal associations especially in older group.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"13 1","pages":"127-127"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647603","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An invisible speaker can facilitate auditory speech perception
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647801 | Seeing and Perceiving, p. 148
M. Grabowecky, Emmanuel Guzman-Martinez, L. Ortega, Satoru Suzuki
Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether or not each word named a tool. While participants listened to the words, they watched a visual display that presented a video clip of the speaker synchronously speaking the auditorily presented words, or of the same speaker articulating different words. Critically, the speaker’s face was either visible (aware trials) or suppressed from awareness using continuous flash suppression (suppressed trials). Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials, responses to the tool targets were no faster with synchronous than with asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. On the suppressed trials, however, responses to the tool targets were significantly faster with synchronous than with asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.
{"title":"An invisible speaker can facilitate auditory speech perception","authors":"M. Grabowecky, Emmanuel Guzman-Martinez, L. Ortega, Satoru Suzuki","doi":"10.1163/187847612X647801","DOIUrl":"https://doi.org/10.1163/187847612X647801","url":null,"abstract":"Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether or not each word named a tool. While participants listened to the words they watched a visual display that presented a video clip of the speaker synchronously speaking the auditorily presented words, or the same speaker articulating different words. Critically, the speaker’s face was either visible (the aware trials), or suppressed from awareness using continuous flash suppression. Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials responses to the tool targets were no faster with the synchronous than asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. However, on the suppressed trials responses to the tool targets were significantly faster with the synchronous than asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"148-148"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647801","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Updating expectancies about audiovisual associations in speech
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647946 | Seeing and Perceiving, p. 164
Tim Paris, Jeesun Kim, C. Davis
The processing of multisensory information depends on learned associations between sensory cues. In the case of speech, there is a well-learned association between the movements of the lips and the subsequent sound: particular lip and mouth movements reliably lead to a specific sound. EEG and MEG studies that have investigated the differences between this ‘congruent’ AV association and other ‘incongruent’ associations have commonly reported ERP differences from 350 ms after sound onset. Using a 256 active-electrode EEG system, we tested whether this ‘congruency effect’ would be reduced in a context where most of the trials had an altered audiovisual association (auditory speech paired with mismatched visual lip movements). Participants were presented with stimuli in two sessions: in one session, only 15% of trials were incongruent; in the other, 85% were incongruent. We found a congruency effect: ERPs to congruent and incongruent speech differed between 350 and 500 ms. Importantly, this effect was reduced within the context of mostly incongruent trials. This reduction in the congruency effect indicates that the way in which AV speech is processed depends on the context in which it is viewed. Furthermore, this result suggests that exposure to novel sensory relationships leads to updated expectations regarding the relationship between auditory and visual speech cues.
{"title":"Updating expectencies about audiovisual associations in speech","authors":"Tim Paris, Jeesun Kim, C. Davis","doi":"10.1163/187847612X647946","DOIUrl":"https://doi.org/10.1163/187847612X647946","url":null,"abstract":"The processing of multisensory information depends on the learned association between sensory cues. In the case of speech there is a well-learned association between the movements of the lips and the subsequent sound. That is, particular lip and mouth movements reliably lead to a specific sound. EEG and MEG studies that have investigated the differences between this ‘congruent’ AV association and other ‘incongruent’ associations have commonly reported ERP differences from 350 ms after sound onset. Using a 256 active electrode EEG system, we tested whether this ‘congruency effect’ would be reduced in the context where most of the trials had an altered audiovisual association (auditory speech paired with mismatched visual lip movements). Participants were presented stimuli over 2 sessions: in one session only 15% were incongruent trials; in the other session, 85% were incongruent trials. We found a congruency effect, showing differences in ERP between congruent and incongruent speech between 350 and 500 ms. Importantly, this effect was reduced within the context of mostly incongruent trials. This reduction in the congruency effect indicates that the way in which AV speech is processed depends on the context it is viewed in. Furthermore, this result suggests that exposure to novel sensory relationships leads to updated expectations regarding the relationship between auditory and visual speech cues.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"120 1","pages":"164-164"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X647946","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electrophysiological correlates of tactile and visual perception during goal-directed movement
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X648008 | Seeing and Perceiving, p. 170
G. Juravle, T. Heed, C. Spence, B. Roeder
Tactile information arriving at our sensory receptors is differentially processed over the various temporal phases of goal-directed movements. Using event-related potentials (ERPs), we investigated the neuronal correlates of tactile information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimuli were presented in separate trials during the different phases of the movement (i.e., preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or the resting hand. In a control condition, the participants only performed the movement, while omission (movement-only) ERPs were recorded. Participants were told to ignore the presence or absence of any sensory events and to concentrate solely on the execution of the movement. The results revealed enhanced ERPs between 80 and 200 ms after tactile stimulation and between 100 and 250 ms after visual stimulation. These modulations were greatest during the execution phase of the goal-directed movement; they were effector-based (i.e., significantly more negative for stimuli presented at the moving hand) and modality-independent (i.e., similar ERP enhancements were observed for both tactile and visual stimuli). The enhanced processing of sensory information during the execution phase suggests that incoming sensory information may be used to adjust the current motor plan. Moreover, these results indicate a tight interaction between attentional mechanisms and the sensorimotor system.
{"title":"Electrophysiological correlates of tactile and visual perception during goal-directed movement","authors":"G. Juravle, T. Heed, C. Spence, B. Roeder","doi":"10.1163/187847612X648008","DOIUrl":"https://doi.org/10.1163/187847612X648008","url":null,"abstract":"Tactile information arriving at our sensory receptors is differentially processed over the various temporal phases of goal-directed movements. By using event-related potentials (ERPs), we investigated the neuronal correlates of tactile information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimuli were presented in separate trials during the different phases of the movement (i.e., preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or the resting hand. In a control condition, the participants only performed the movement, while omission (movement-only) ERPs were recorded. Participants were told to ignore the presence or absence of any sensory events and solely concentrate on the execution of the movement. The results highlighted enhanced ERPs between 80 and 200 ms after tactile stimulation, and between 100 and 250 ms after visual stimulation. These modulations were greatest over the execution phase of the goal-directed movement, they were effector-based (i.e., significantly more negative for stimuli presented at the moving hand), and modality-independent (i.e., similar ERP enhancements were observed for both tactile and visual stimuli). The enhanced processing of sensory information over the execution phase of the movement suggests that incoming sensory information may be used for a potential adjustment of the current motor plan. Moreover, these results indicate a tight interaction between attentional mechanisms and the sensorimotor system.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"170-170"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648008","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal disparity effects on audiovisual integration in low vision individuals
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X648044 | Seeing and Perceiving, p. 175
Stefano Targher, Valeria Occelli, M. Zampini
Our recent findings have shown that sounds improve visual detection in low vision individuals when the audiovisual pairs are presented simultaneously. The present study investigated possible temporal aspects of this audiovisual enhancement effect. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either in isolation or together with an auditory stimulus at different SOAs. In the first experiment, in which the sound always led the visual stimulus, there was a significant visual detection enhancement even when the visual stimulus was delayed by 400 ms. However, the visual detection improvement was reduced in the second experiment, in which the sound could randomly lead or lag the visual stimulus; there, a significant enhancement was found only when the audiovisual stimuli were synchronized. Taken together, these results suggest that high-level associations between modalities might modulate audiovisual interactions in low vision individuals.
{"title":"Temporal disparity effects on audiovisual integration in low vision individuals","authors":"Stefano Targher, Valeria Occelli, M. Zampini","doi":"10.1163/187847612X648044","DOIUrl":"https://doi.org/10.1163/187847612X648044","url":null,"abstract":"Our recent findings have shown that sounds improve visual detection in low vision individuals when the audiovisual pairs are presented simultaneously. The present study purports to investigate possible temporal aspects of the audiovisual enhancement effect that we have previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) either presented in isolation or together with an auditory stimulus at different SOAs. In the first experiment, when the sound was always leading the visual stimuli, there was a significant visual detection enhancement even when the visual stimulus was temporally delayed by 400 ms. However, the visual detection improvement was reduced in the second experiment when the sound could randomly lead or lag the visual stimulus. A significant enhancement was found only when the audiovisual stimuli were synchronized. Taken together, the results of the present study seem to suggest that high-level associations between modalities might modulate audiovisual interactions in low vision individuals.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"175-175"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648044","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}