
Latest publications in Seeing and Perceiving

Investigating task and modality switching costs using bimodal stimuli
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646451
Rajwant Sandhu, B. Dyson
Concurrent task and modality switching effects have to date been studied only under conditions of uni-modal stimulus presentation. As such, it is difficult to directly compare the resultant task and modality switching effects, as the stimuli afford both tasks on each trial but only one modality. The current study investigated task and modality switching using bi-modal stimulus presentation under various cue conditions: task and modality (double cue), either task or modality (single cue), or no cue. Participants responded to either the identity or the position of an audio–visual stimulus. Switching effects were defined as staying within a modality/task (repetition) or switching into a modality/task (change) from trial n − 1 to trial n, with analysis performed on trial n data. While task and modality switching costs were sub-additive across all conditions, replicating previous data, modality switching effects depended on the modality being attended, and task switching effects depended on the task being performed. Specifically, visual responding and position responding revealed significant costs associated with modality and task switching, while auditory responding and identity responding revealed significant gains. These effects interacted further, revealing that the costs and gains associated with task and modality switching varied with the specific combination of modality and task type. The current study reconciles previous data by suggesting that efficiently processed modality/task information benefits from repetition, while less efficiently processed information benefits from change due to less interference from preferred processing across consecutive trials.
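The repetition/change coding from trial n − 1 to trial n described in this abstract can be sketched in a few lines. The trial records, field names, and RT values below are invented for illustration and are not the authors' data or analysis pipeline:

```python
# Hypothetical sketch of the trial-n analysis: classify each trial as a
# repetition or a switch of task/modality relative to trial n-1, then
# compute the switch cost as mean RT(switch) - mean RT(repetition).
from statistics import mean

trials = [
    {"modality": "visual",   "task": "position", "rt": 512},
    {"modality": "visual",   "task": "identity", "rt": 598},  # task switch
    {"modality": "auditory", "task": "identity", "rt": 571},  # modality switch
    {"modality": "auditory", "task": "identity", "rt": 540},  # full repetition
]

def switch_cost(trials, key):
    """Mean RT difference between switch and repetition trials on `key`."""
    repeat, switch = [], []
    for prev, cur in zip(trials, trials[1:]):  # pair trial n-1 with trial n
        (repeat if cur[key] == prev[key] else switch).append(cur["rt"])
    return mean(switch) - mean(repeat)

print("task switch cost (ms):", switch_cost(trials, "task"))
print("modality switch cost (ms):", switch_cost(trials, "modality"))
```

A positive cost means switching slowed responses; a negative value would correspond to the switch "gains" reported for auditory and identity responding.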
Citations: 0
4 year olds localize tactile stimuli using an external frame of reference
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646631
Jannath Begum, A. Bremner, Dorothy Cowie
Adults show a deficit in their ability to localize tactile stimuli to their hands when their arms are in the less familiar, crossed posture (e.g., Overvliet et al., 2011; Shore et al., 2002). It is thought that this ‘crossed-hands effect’ arises due to conflict (when the hands are crossed) between the anatomical and external frames of reference within which touches can be perceived. Pagel et al. (2009) studied this effect in young children and observed that the crossed-hands effect first emerges after 5.5 years. In their task, children were asked to judge the temporal order of stimuli presented across their hands in quick succession. Here, we present the findings of a simpler task in which children were asked to localize a single vibrotactile stimulus presented to either hand. We also compared the effect of posture under conditions in which children either did, or did not, have visual information about current hand posture. With this method, we observed a crossed-hands effect in the youngest testable age group: 4-year-olds. We conclude that young children localize tactile stimuli with respect to an external frame of reference from early in childhood or before (cf. Bremner et al., 2008). Additionally, when visual information about posture was made available, 4- to 5-year-olds’ tactile localization accuracy in the uncrossed-hands posture deteriorated and the crossed-hands effect disappeared. We discuss these findings with respect to the visual–tactile–proprioceptive integration abilities of young children and examine potential sources of the discrepancies between our findings and those of Pagel et al. (2009).
Citations: 0
The spatial distribution of auditory attention in early blindness
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646767
Elodie Lerens, L. Renier, A. Volder
Early blind people compensate for their lack of vision by developing superior abilities in the remaining senses such as audition (Collignon et al., 2006; Gougoux et al., 2004; Wan et al., 2010). Previous studies reported supra-normal abilities in auditory spatial attention, particularly for the localization of peripheral stimuli in comparison with frontal stimuli (Lessard et al., 1998; Roder et al., 1999). However, it is unknown whether this specific supra-normal ability extends to the non-spatial attention domain. Here we compared the performance of early blind subjects and sighted controls, who were blindfolded, during an auditory non-spatial attention task: target detection among distractors according to tone frequency. We paid special attention to the potential effect of the sound source location, comparing the accuracy and speed of target detection in peripheral and frontal space. Blind subjects displayed shorter reaction times than sighted controls for both peripheral and frontal stimuli. Moreover, in both groups of subjects, we observed an interaction effect between the target location and the distractor location: the target was detected faster when its location differed from that of the distractors. However, this effect was attenuated in early blind subjects and even cancelled in the condition with frontal targets and peripheral distractors. We conclude that early blind people compensate for the lack of vision by enhancing their ability to process auditory information but also by changing the spatial distribution of their auditory attention resources.
Citations: 0
Heterogeneous auditory–visual integration: Effects of pitch, band-width and visual eccentricity
Pub Date: 2012-01-01 DOI: 10.1163/187847612X647081
A. Thelen, M. Murray
The identification of monosynaptic connections between primary cortices in non-human primates has recently been complemented by observations of early-latency and low-level non-linear interactions in brain responses in humans, as well as observations of facilitative effects of multisensory stimuli on behavior/performance in both humans and monkeys. While there is some evidence in favor of causal links between early-latency interactions within low-level cortices and behavioral facilitation, it remains unknown whether such effects are subserved by direct anatomical connections between primary cortices. In non-human primates, the above monosynaptic projections from primary auditory cortex terminate within peripheral visual field representations within primary visual cortex, suggesting a potential bias toward the integration of eccentric visual stimuli and pure tone (vs. broad-band) sounds. To date, behavioral effects in humans (and monkeys) have been observed after presenting (para)foveal stimuli with any of a range of auditory stimuli from pure tones to noise bursts. The present study aimed to identify any heterogeneity in the integration of auditory–visual stimuli. To this end, we employed a 3 × 3 within-subject design that varied the visual eccentricity of an annulus (2.5°, 5.7°, 8.9°) and the auditory pitch (250, 1000, 4000 Hz) of multisensory stimuli while subjects completed a simple detection task. We also varied the auditory bandwidth (pure tone vs. pink noise) across the blocks of trials that a subject completed. To ensure attention to both modalities, multisensory stimuli were equi-probable with both unisensory visual and unisensory auditory trials that themselves varied along the abovementioned dimensions. Median reaction times for each stimulus condition, as well as the percentage gain/loss of each multisensory condition vs. the best constituent unisensory condition, were measured. The preliminary results reveal that multisensory interactions (as measured from simple reaction times) are indeed heterogeneous across the tested dimensions and may provide a means for delimiting the anatomo-functional substrates of behaviorally relevant early-latency neural response interactions. Interestingly, preliminary results suggest selective interactions for visual stimuli when presented with broadband stimuli but not when presented with pure tones. More precisely, centrally presented visual stimuli show the greatest index of multisensory facilitation when coupled to a high-pitch tone embedded in pink noise, while visual stimuli presented at approximately 5.7° of visual angle show the greatest slowing of reaction times.
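The gain/loss measure mentioned in this abstract (multisensory median RT relative to the best constituent unisensory condition) reduces to simple arithmetic. The sketch below uses invented RT values and is not the authors' analysis code:

```python
# Illustrative computation of a multisensory facilitation index: percentage
# gain of the multisensory median RT relative to the best (fastest)
# unisensory condition. All RT values are made up for the example.
from statistics import median

rts = {
    "audiovisual": [305, 298, 312, 290],
    "visual":      [340, 355, 332, 348],
    "auditory":    [330, 326, 341, 335],
}

best_unisensory = min(median(rts["visual"]), median(rts["auditory"]))
multi = median(rts["audiovisual"])
gain_pct = 100 * (best_unisensory - multi) / best_unisensory  # >0: facilitation
print(f"multisensory gain: {gain_pct:.1f}%")
```

A negative `gain_pct` would correspond to the "loss" (slowing) reported for some eccentricity/pitch combinations.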
Citations: 0
Evaluative similarity hypothesis of crossmodal correspondences: A developmental view
Pub Date: 2012-01-01 DOI: 10.1163/187847612X647603
D. Janković
Crossmodal correspondences have been widely demonstrated, although the mechanisms behind the phenomenon have not yet been fully established. According to the evaluative similarity hypothesis, crossmodal correspondences are influenced by the evaluative (affective) similarity of stimuli from different sensory modalities (Jankovic, 2010, Journal of Vision 10(7), 859). On this view, detection of similar evaluative information in stimulation from different sensory modalities facilitates crossmodal correspondences and multisensory integration. The aim of this study was to explore the evaluative similarity hypothesis of crossmodal correspondences in children. In Experiment 1, two groups of participants (nine- and thirteen-year-olds) were asked to make explicit matches between presented auditory stimuli (1 s long sound clips) and abstract visual patterns. In Experiment 2, the same participants judged the abstract visual patterns and auditory stimuli on a set of evaluative attributes measuring affective valence and arousal. The results showed that crossmodal correspondences were mostly influenced by the evaluative similarity of visual and auditory stimuli in both age groups. Most frequently matched were visual and auditory stimuli congruent in both valence and arousal, followed by stimuli congruent in valence, and finally stimuli congruent in arousal. Evaluatively incongruent stimuli demonstrated low crossmodal associations, especially in the older group.
Citations: 1
Spatial codes for movement coordination do not depend on developmental vision
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646721
T. Heed, B. Roeder
When people make oscillating right–left movements with their two index fingers while holding their hands palms down, they find it easier to move the fingers symmetrically (i.e., both fingers towards the middle, then both fingers to the outside) than parallel (i.e., both fingers towards the left, then both fingers towards the right). It was originally proposed that this effect is due to concurrent activation of homologous muscles in the two hands. However, symmetric movements are also easier when one of the hands is turned palm up, thus requiring concurrent use of opposing rather than homologous muscles. This was interpreted to indicate that movement coordination relies on perceptual rather than muscle-based information (Mechsner et al., 2001). The current experiment tested whether the spatial code used in this task depends on vision. Participants made either symmetrical or parallel right–left movements with their two index fingers while their palms were either both facing down, both facing up, or one facing up and one down. Neither in sighted nor in congenitally blind participants did movement execution depend on hand posture. Rather, both groups were always more efficient when making symmetrical rather than parallel movements with respect to external space. We conclude that the spatial code used for movement coordination does not crucially depend on vision. Furthermore, whereas congenitally blind people predominately use body-based (somatotopic) spatial coding in perceptual tasks (Roder et al., 2007), they use external spatial codes in movement tasks, with performance indistinguishable from the sighted.
Citations: 0
Age-related changes in temporal processing of vestibular stimuli
Pub Date: 2012-01-01 DOI: 10.1163/187847612X647847
Alex K. Malone, N. N. Chang, T. Hullar
Falls are one of the leading causes of disability in the elderly. Previous research has shown that falls may be related to changes in the temporal integration of multisensory stimuli. This study compared the temporal integration and processing of a vestibular and an auditory stimulus in younger and older subjects. The vestibular stimulus consisted of a continuous sinusoidal rotational velocity delivered using a rotational chair, and the auditory stimulus consisted of 5 ms of white noise presented dichotically through headphones (both at 0.5 Hz). Simultaneity was defined as perceiving the chair at the furthest rightward or leftward point of its trajectory at the same moment as the auditory stimulus was perceived in the contralateral ear. The temporal offset of the auditory stimulus was adjusted using a method of constant stimuli so that the auditory stimulus either led or lagged true simultaneity. 15 younger (ages 21–27) and 12 older (ages 63–89) healthy subjects were tested using a two-alternative forced-choice task to determine at what times they perceived the two stimuli as simultaneous. Younger subjects had a mean temporal binding window (TBW) of 334 ± 37 ms (mean ± SEM) and a mean point of subjective simultaneity of 83 ± 15 ms. Older subjects had a mean TBW of 556 ± 36 ms and a mean point of subjective simultaneity of 158 ± 27 ms. Both differences were significant, indicating that older subjects integrate vestibular and auditory stimuli over a wider temporal range than younger subjects. These findings were consistent upon retesting and were not due to differences in vestibular perception thresholds.
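A temporal binding window and point of subjective simultaneity of the kind reported here can be estimated from method-of-constant-stimuli data. The sketch below uses a simple 50%-crossing linear interpolation on invented response proportions; the abstract does not specify the authors' actual fitting procedure:

```python
# Minimal sketch: estimate a temporal binding window (TBW) and point of
# subjective simultaneity (PSS) from the proportion of "simultaneous"
# responses at each audio offset (SOA). The TBW is taken as the width of
# the interval where that proportion exceeds 0.5, found by linear
# interpolation. SOAs and proportions are invented for illustration.

soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]   # ms, audio lead/lag
p_simultaneous = [0.05, 0.15, 0.45, 0.80, 0.95, 0.90, 0.60, 0.25, 0.10]

def crossings(xs, ys, level=0.5):
    """x positions where the piecewise-linear curve through (xs, ys) crosses `level`."""
    out = []
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if (y0 - level) * (y1 - level) < 0:
            out.append(x0 + (level - y0) * (x1 - x0) / (y1 - y0))
    return out

lo, hi = crossings(soas, p_simultaneous)
print("TBW width (ms):", hi - lo)
print("PSS estimate (ms):", (lo + hi) / 2)
```

A wider `hi - lo` interval for a given subject would correspond to the broader integration window reported for the older group; fitting a Gaussian to the full psychometric curve is a common, more robust alternative.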
跌倒是老年人致残的主要原因之一。先前的研究表明,跌倒可能与多感官刺激的时间整合变化有关。本研究比较了年轻人和老年人前庭刺激和听觉刺激的时间整合和处理。前庭刺激包括使用旋转椅传递的连续正弦旋转速度,听觉刺激包括通过耳机呈现的5 ms白噪声(均为0.5 Hz)。同时性被定义为在对侧耳朵感知到听觉刺激的同时,感知到椅子在其最右或最左的轨迹上。用恒刺激法调整听觉刺激的时间偏移,使听觉刺激引导或滞后真同时性。15名年龄较小的(21-27岁)和12名年龄较大的(63-89岁)健康受试者使用两种选择任务进行测试,以确定他们在什么时候同时感受到两种刺激。年轻受试者的平均时间结合窗为334±37 ms(平均±SEM),主观同时性平均点为83±15 ms。老年受试者的平均TBW为556±36 ms,主观同时性平均点为158±27。这两种差异都很显著,表明老年受试者比年轻受试者有更大的时间范围来整合前庭和听觉刺激。这些结果在重新测试时是一致的,而不是由于前庭感知阈值的差异。
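The method-of-constant-stimuli procedure described above yields, for each temporal offset, a proportion of "simultaneous" responses; the point of subjective simultaneity (PSS) and temporal binding window (TBW) are then read off a fitted psychometric curve. A minimal sketch in Python (not the authors' analysis code; the function names, the Gaussian shape of the curve, and the full-width-at-half-maximum criterion for the TBW are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, pss, sigma):
    # Proportion of "simultaneous" responses as a function of the
    # audio-vestibular offset (SOA, in ms); pss is the curve's center.
    return amp * np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

def fit_simultaneity_curve(soas, p_simultaneous):
    """Fit a Gaussian psychometric curve and derive PSS and TBW.

    TBW is taken here as the full width at half maximum (FWHM) of the
    fitted curve -- one common operationalisation; the study may have
    used a different criterion.
    """
    p0 = [max(p_simultaneous), soas[np.argmax(p_simultaneous)], 100.0]
    (amp, pss, sigma), _ = curve_fit(gaussian, soas, p_simultaneous, p0=p0)
    tbw = 2.355 * abs(sigma)  # FWHM = 2 * sqrt(2 * ln 2) * sigma
    return pss, tbw

# Synthetic observer: simultaneity judged most often when audition lags ~80 ms
soas = np.arange(-500, 501, 50).astype(float)
true_curve = gaussian(soas, 0.95, 80.0, 140.0)
rng = np.random.default_rng(0)
data = np.clip(true_curve + rng.normal(0.0, 0.02, soas.size), 0.0, 1.0)
pss, tbw = fit_simultaneity_curve(soas, data)
```

With the synthetic data above, the fit recovers a PSS near 80 ms and a TBW near 330 ms, i.e., values on the order of those reported for the younger group.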
Seeing and Perceiving, 25(1), 153. doi:10.1163/187847612X647847
引用次数: 0
Recovery periods of event-related potentials indicating crossmodal interactions between the visual, auditory and tactile system 事件相关电位的恢复期,表明视觉、听觉和触觉系统之间的跨模态相互作用
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647478
Marlene Hense, Boukje Habets, B. Roeder
In sequential unimodal stimulus designs, the time it takes for an event-related potential (ERP) amplitude to recover is often interpreted as a transient decrement in the responsiveness of the generating cortical circuits. This effect has been called neural refractoriness; it is larger the more similar the repeated stimuli are, and it thus indicates the degree of overlap between the neural generator systems activated by two sequential stimuli. We hypothesize that crossmodal refractoriness effects in a crossmodal sequential design might be a good parameter for assessing the ‘modality overlap’ of the involved neural generators and the degree of crossmodal interaction. To investigate crossmodal ERP refractory-period effects, we presented visual and auditory stimuli (Experiment 1) and visual and tactile stimuli (Experiment 2) with inter-stimulus intervals (ISIs) of 1 and 2 s to adult participants. Participants had to detect rare auditory and visual stimuli. Both intra- and crossmodal ISI effects were found for all modalities in the three investigated ERP deflections (P1, N1, P2). The topography of the crossmodal refractory-period effect was similar to the corresponding intramodal effect for the N1 and P2 deflections in Experiment 1 and for the P1 and N1 deflections in Experiment 2, yet more confined, and crossmodal effects were generally weaker. The crossmodal refractory effect for the visual P1, however, had a distinct, less circumscribed topography relative to the intramodal effect. These results suggest that ERP refractory effects might be a promising indicator of the neural correlates of crossmodal interactions.
在顺序单峰刺激设计中,事件相关电位(ERP)振幅恢复所需的时间通常被解释为产生皮层回路反应性的短暂衰减。这种效应被称为神经耐火度,重复刺激越相似,神经耐火度越大,从而表明被两个连续刺激激活的神经产生系统之间的重叠程度。我们假设,在一个交叉模态序列设计中,交叉模态折射效应可能是一个很好的参数来评估所涉及的神经发生器的“模态重叠”和交叉模态相互作用的程度。为了研究跨模态ERP不应期效应,我们分别以1秒和2秒为刺激间隔,对成年被试进行了视觉和听觉刺激(实验1)和视觉和触觉刺激(实验2)。参与者必须检测罕见的听觉和视觉刺激。在三个被调查的erp偏转(P1, N1, P2)中,发现了所有模式的内和跨模ISI效应。实验1中N1-和p2偏转以及实验2中P1和N1偏转的跨模不应期效应的地形与相应的模内不应期效应相似,但越受限,跨模不应期效应一般越弱。然而,相对于模态内效应,视觉P1的跨模态难阻效应具有明显的、较少限制的地形。这些结果表明,ERP难解效应可能是跨模态相互作用的神经相关指标。
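The refractory-period analysis described above amounts to comparing mean component amplitudes across ISI conditions: the smaller the amplitude at the short ISI relative to the long ISI, the stronger the refractoriness. A minimal illustration in Python (not the authors' pipeline; the array layout, latency window, and synthetic data are assumptions for demonstration):

```python
import numpy as np

def mean_amplitude(epochs, times, window):
    """Mean voltage in a latency window, per epoch.

    epochs: (n_epochs, n_times) array of single-trial EEG; times in seconds.
    """
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean(axis=1)

def refractory_recovery(epochs, times, isi_s, window):
    """Difference in mean component amplitude between the longest and the
    shortest ISI condition: a simple index of how much the response has
    recovered at the longer interval."""
    amps = mean_amplitude(epochs, times, window)
    isi_s = np.asarray(isi_s)
    short = amps[isi_s == isi_s.min()].mean()
    long_ = amps[isi_s == isi_s.max()].mean()
    return long_ - short

# Synthetic demo: 10 short-ISI epochs with an attenuated deflection in the
# component window, 10 long-ISI epochs with a fully recovered one.
times = np.linspace(0.0, 0.4, 401)
epochs = np.zeros((20, times.size))
comp = (times >= 0.08) & (times <= 0.12)
epochs[:10, comp] = 1.0   # short ISI: attenuated
epochs[10:, comp] = 2.0   # long ISI: recovered
isi = np.array([1.0] * 10 + [2.0] * 10)
recovery = refractory_recovery(epochs, times, isi, window=(0.08, 0.12))
```

The same index can be computed per modality pairing (intra- vs. crossmodal sequences) and compared topographically across electrodes.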
Seeing and Perceiving, 9(1), 114. doi:10.1163/187847612X647478
引用次数: 0
Multisensory processes in the synaesthetic brain — An event-related potential study in multisensory competition situations 联觉脑中的多感觉过程——多感觉竞争情境下的事件相关电位研究
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647333
J. Neufeld, C. Sinke, Daniel Wiswede, H. Emrich, S. Bleich, G. Szycik
In synaesthesia, certain external stimuli (e.g., music) automatically trigger internally generated sensations (e.g., colour). Results of behavioural investigations indicate that multisensory processing works differently in synaesthetes. However, the reasons for these differences and the underlying neural correlates remain unclear. The aim of the current study was to investigate whether synaesthetes show differences in the electrophysiological components of multimodal processing. Further, we wanted to test whether synaesthetes have an enhanced ability to filter distractors in multimodal situations. Line drawings of animals and objects were therefore presented to participants either with a congruent sound (the typical sound for the presented picture, e.g., a picture of a bird together with a chirp), with an incongruent sound (a picture of a bird together with a gun shot), or without simultaneous auditory stimulation. Fourteen synaesthetes (auditory–visual and grapheme-colour synaesthetes) and 13 controls participated in the study. We found differences in the event-related potentials between synaesthetes and controls, indicating altered multisensory processing of bimodal stimuli in synaesthetes in competition situations. These differences were found especially over frontal brain sites. An interaction effect between group (synaesthetes vs. controls) and stimulation (unimodal visual vs. congruent multimodal) could not be detected. We therefore conclude that multisensory processing works in a generally similar way in synaesthetes and controls, and that only integration processes in multisensory competition situations are specifically altered in synaesthetes.
在联觉中,某些外部刺激(如音乐)会自动触发内部产生的感觉(如颜色)。行为研究结果表明,联觉者的多感觉处理工作方式不同。然而,造成这些差异的原因和潜在的神经关联尚不清楚。当前研究的目的是调查联觉者是否在多模态处理的电生理成分上表现出差异。进一步,我们想测试联觉者在多模态情况下对干扰物过滤能力的增强。因此,向参与者展示动物和物体的线条图,要么是一致的(所呈现的图片的典型声音,例如鸟的照片和啾啾声),要么是不一致的(鸟的照片和枪声),要么是没有同时的听觉刺激。14名联觉者(听觉-视觉联觉者和文字-颜色联觉者)和13名对照者参加了这项研究。我们发现联觉者和对照组在事件相关电位上存在差异,这表明在竞争情境下,联觉者对双峰刺激的多感觉处理发生了改变。这些差异在大脑额叶部位尤为明显。组(联觉者vs.对照组)和刺激(单峰视觉vs.同峰多峰)之间的相互作用效应无法检测到。因此,我们得出结论,在联觉者和控制者中,多感觉加工的工作原理是相似的,只有在多感觉竞争情况下的整合过程在联觉者中才会发生改变。
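A trial list for this kind of congruent / incongruent / visual-only design can be generated straightforwardly. A hypothetical sketch in Python (the function name, file names, dictionary layout, and trial counts are illustrative, not taken from the study):

```python
import random

def make_bimodal_trials(pictures, sounds, n_per_condition=30, seed=0):
    """Build a randomized trial list for three stimulation conditions:
    congruent sound, incongruent sound, or no sound (visual only).

    pictures: dict mapping object name -> image file.
    sounds:   dict mapping object name -> its typical (congruent) sound file.
    """
    rng = random.Random(seed)
    names = list(pictures)
    trials = []
    for _ in range(n_per_condition):
        for condition in ("congruent", "incongruent", "visual_only"):
            name = rng.choice(names)
            if condition == "congruent":
                sound = sounds[name]              # e.g., bird picture + chirp
            elif condition == "incongruent":
                other = rng.choice([n for n in names if n != name])
                sound = sounds[other]             # e.g., bird picture + gun shot
            else:
                sound = None                      # unimodal visual trial
            trials.append({"name": name, "condition": condition,
                           "picture": pictures[name], "sound": sound})
    rng.shuffle(trials)
    return trials

# Illustrative stimulus set (hypothetical file names)
pictures = {"bird": "bird.png", "dog": "dog.png"}
sounds = {"bird": "chirp.wav", "dog": "bark.wav"}
trials = make_bimodal_trials(pictures, sounds, n_per_condition=10)
```

Shuffling across conditions (rather than blocking) keeps the multisensory competition unpredictable from trial to trial.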
Seeing and Perceiving, 25(1), 101. doi:10.1163/187847612X647333
引用次数: 0
An invisible speaker can facilitate auditory speech perception 隐形说话者可以促进听觉言语感知
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647801
M. Grabowecky, Emmanuel Guzman-Martinez, L. Ortega, Satoru Suzuki
Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word-categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether or not each word named a tool. While participants listened to the words, they watched a visual display presenting a video clip of the speaker synchronously speaking the auditorily presented words, or of the same speaker articulating different words. Critically, the speaker’s face was either visible (the aware trials) or suppressed from awareness using continuous flash suppression. Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials, responses to the tool targets were no faster with synchronous than with asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. On the suppressed trials, however, responses to the tool targets were significantly faster with synchronous than with asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.
当口腔受到照顾时,观察嘴唇的运动有助于听觉语言感知。然而,最近的证据表明,视觉注意和意识是由不同的机制介导的。我们研究了被视觉意识抑制的嘴唇运动是否能促进语言感知。我们使用了一个单词分类任务,在这个任务中,参与者听着口语单词,并尽可能快速准确地确定每个单词是否代表一种工具。当参与者听单词时,他们观看了一个视觉显示,显示了说话者同步说出所听单词的视频剪辑,或者同一说话者发音不同的单词。关键的是,说话者的脸要么是可见的(有意识的试验),要么是通过持续的闪光抑制来抑制意识。有意识和抑制试验随机混合。第二个探针探测任务确保参与者关注嘴巴区域,而不管脸是可见的还是被抑制的。在有意识的实验中,同步嘴唇运动对工具目标的反应并不比非同步嘴唇运动更快,这可能是因为50%的实验中视觉信息与听觉信息不一致。然而,在抑制试验中,同步唇运动对工具目标的反应明显快于非同步唇运动。这些结果表明,即使随机动态面具使人脸不可见,嘴唇运动也会被视觉系统以足够高的时间分辨率处理,以促进语音感知。
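The key comparison in this design is the synchrony benefit (asynchronous minus synchronous reaction time) computed separately for aware and suppressed trials. A minimal sketch in Python (hypothetical variable names and synthetic RTs; not the authors' analysis code):

```python
import numpy as np

def synchrony_benefit(rt, aware, synchronous):
    """RT facilitation (asynchronous minus synchronous mean RT, in ms),
    computed separately for aware and suppressed trials.

    rt: array of reaction times; aware, synchronous: 0/1 condition arrays.
    A positive value means synchronous lip movements sped up responses.
    """
    rt, aware, synchronous = map(np.asarray, (rt, aware, synchronous))
    benefit = {}
    for a, label in ((1, "aware"), (0, "suppressed")):
        async_mean = rt[(aware == a) & (synchronous == 0)].mean()
        sync_mean = rt[(aware == a) & (synchronous == 1)].mean()
        benefit[label] = async_mean - sync_mean
    return benefit

# Synthetic data mimicking the reported pattern: no benefit when the face
# is visible, a 40 ms benefit when it is suppressed from awareness.
rt = np.array([620.0] * 50 + [620.0] * 50 + [640.0] * 50 + [600.0] * 50)
aware = np.array([1] * 100 + [0] * 100)
synchronous = np.array([1] * 50 + [0] * 50 + [0] * 50 + [1] * 50)
benefit = synchrony_benefit(rt, aware, synchronous)
```

In the real analysis, each cell mean would of course be computed per participant before group-level statistics.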
Seeing and Perceiving, 25(1), 148. doi:10.1163/187847612X647801
引用次数: 0