Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647748
M. Auvray, Ophelia Deroy
Sensory substitution devices (SSDs) aim to replace or assist one or several functions of a deficient sensory modality by means of another sensory modality. Despite the numerous studies and research programs devoted to their development and integration, SSDs have failed to live up to their goal of allowing one to ‘see with the skin’ (White et al., 1970) or to ‘see with the brain’ (Bach-y-Rita et al., 2003). These somewhat peremptory claims, as well as the research conducted so far, are based on an implicit perceptual paradigm. This perceptual assumption treats using an SSD as equivalent to perceiving through a particular sensory modality. Our aim is to provide an alternative model, which defines the integration of SSDs as being closer to culturally implemented cognitive extensions of existing perceptual skills, such as reading. In this talk, we will show why the analogy with reading provides a better explanation of the actual findings, that is, both the positive results achieved and the limitations observed across the field of research on SSDs. The parallel with the most recent two-route and interactive models of reading (e.g., Dehaene et al., 2005) generates a radically new way of approaching these results by stressing the dependence of integration on the existing perceptual-semantic route. In addition, it enables us to generate innovative research questions and specific predictions which set the stage for future work.
Title: Interpreting sensory substitution beyond the perceptual assumption: An analogy with reading
Seeing and Perceiving, 25(1), p. 142.
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647667
T. Makin, J. Scholz, N. Filippini, D. H. Slater, I. Tracey, H. Johansen-Berg
Phantom pain has become an influential example of maladaptive cortical plasticity. According to this model, sensory deprivation following limb amputation allows for intra-regional invasion of neighbouring cortical representations into the former hand area of the primary sensorimotor cortex, which gives rise to pain sensations. Over the years, this model was extended to explain other disorders of pain, motor control and tinnitus, and has inspired rehabilitation strategies. Yet, other research, demonstrating that phantom hand representation is maintained in the sensorimotor system, and that phantom pain can be triggered by bottom-up aberrant inputs, may call this model into question. Using fMRI, we identified the cortical area representing the missing hand in a group of 18 arm amputees. This allowed us to directly study changes in the ‘phantom’ cortex associated with chronic phantom pain, using functional connectivity and voxel-based morphometry. We show that, while loss of sensory input is generally characterized by structural degeneration of the deprived sensorimotor cortex, the experience of persistent pain was associated with preserved intra-regional structure and functional organization. Furthermore, consistent with the dissociative nature of phantom sensations from other sensory experiences, phantom pain is also associated with reduced long-range inter-regional functional connectivity. We propose that this disrupted inter-regional connectivity may be a consequence, rather than a cause, of the retained yet isolated local representation of phantom pain. We therefore propose that, contrary to the maladaptive model, cortical plasticity occurs when powerful and long-lasting subjective sensory experience, most likely due to peripheral inputs, is decoupled from the external sensory environment.
Title: Can maladaptive cortical plasticity form new sensory experiences? Revisiting phantom pain
Seeing and Perceiving, 25(1), p. 134.
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647810
S. Convento, Chiara Galantini, N. Bolognini, G. Vallar
Crossmodal interactions occur not only within brain regions deemed to be heteromodal, but also within primary sensory areas, traditionally considered modality-specific. So far, the mechanisms of crossmodal interaction in primary visual areas remain largely unknown. In the present study, we explored the effect of crossmodal stimuli on phosphene perception induced by single-pulse transcranial magnetic stimulation (sTMS) delivered to the occipital visual cortex. In three experiments, we showed that redundant auditory and/or tactile information facilitated the detection of phosphenes induced by occipital sTMS applied at sub-threshold intensity, and also increased their perceived brightness, with the maximal enhancement occurring for trimodal stimulus combinations. Such crossmodal enhancement can be further boosted by brain polarization of heteromodal areas mediating crossmodal links in spatial attention. Specifically, anodal transcranial direct current stimulation (tDCS) of both the occipital and the parietal cortices facilitated phosphene detection under unimodal conditions, whereas anodal tDCS of the parietal and temporal cortices enhanced phosphene detection selectively under crossmodal conditions, when auditory or tactile stimuli were combined with occipital sTMS. Overall, crossmodal interactions can enhance neural excitability within low-level visual areas, and tDCS can be used to boost such crossmodal influences on visual responses, likely affecting mechanisms of crossmodal spatial attention involving feedback modulation from heteromodal areas onto sensory-specific cortices. tDCS can thus effectively facilitate the integration of multisensory signals originating from the external world, hence improving visual perception.
Title: Neuromodulation of crossmodal influences on visual cortex excitability
Seeing and Perceiving, 25(1), p. 149.
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647937
Luigi Tamè, T. Johnstone, N. Holmes
Many studies have investigated interactions in the processing of tactile stimuli across different fingers. However, the precise time-scale of these interactions when stimuli arrive on opposite sides of the body remains uncertain. Specifically, it is not clear how tactile stimulation of different fingers of the same and different hands can interact. The aim of the present study was to address this issue using a novel approach combining the QUEST threshold estimation method with single-pulse TMS (spTMS). First, QUEST was used in a two-interval forced-choice design to establish the threshold for detecting a 200 ms, 100 Hz sinusoidal vibration applied to the index fingertip (target finger threshold). This was done either when the target was presented in isolation, or concurrently with a distractor stimulus on another finger of the same or a different hand. Second, the same participants underwent a series of MRI scans (localisers) to produce somatotopic maps of SI and SII cortices. These maps were used to stimulate over SI with spTMS during a subsequent behavioural task, with the aim of modulating the behavioural interactions between the different fingers. The results showed that the threshold for detecting the target was lower when it was presented in isolation, compared to when a concurrent distractor was present. Moreover, detection thresholds varied as a function of the distractor finger stimulated. The differential effect of the distractor finger on target detection thresholds is consistent with the segregation of different fingers in early somatosensory processing, from the periphery to SI.
Title: Inter-hemispheric interaction of touches at the fingers: A combined psychophysics and TMS approach
Seeing and Perceiving, 25(1), p. 163.
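QUEST, used here to estimate the vibration detection threshold, maintains a running posterior over candidate thresholds and places each trial at the current best estimate. The sketch below is a posterior-mean variant with a simulated two-interval forced-choice observer; the Weibull parameters, candidate grid, and trial count are illustrative, not the study's values.

```python
import math
import random

def p_correct(x, threshold, slope=3.5, guess=0.5, lapse=0.02):
    """Weibull psychometric function for a 2IFC task (chance = 0.5)."""
    return guess + (1 - guess - lapse) * (1 - math.exp(-((x / threshold) ** slope)))

def quest(true_threshold, n_trials=200, seed=1):
    rng = random.Random(seed)
    grid = [0.05 * i for i in range(1, 81)]   # candidate thresholds (flat prior)
    log_post = [0.0] * len(grid)
    x = 2.0                                   # starting stimulus intensity
    for _ in range(n_trials):
        # Simulated observer's response at the current intensity.
        correct = rng.random() < p_correct(x, true_threshold)
        for i, t in enumerate(grid):
            p = p_correct(x, t)
            log_post[i] += math.log(p if correct else 1 - p)
        # Place the next trial at the posterior-mean threshold estimate.
        m = max(log_post)
        w = [math.exp(lp - m) for lp in log_post]
        x = sum(t * wi for t, wi in zip(grid, w)) / sum(w)
    return x

print(round(quest(1.0), 2))
```

After a few hundred trials the estimate settles near the simulated observer's true threshold, which is what makes the method efficient for measuring per-condition thresholds as in this study.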
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X647973
A. Nesti, M. Barnett-Cowan, H. Bülthoff, P. Pretto
The restricted operational space of dynamic driving simulators requires motion cueing algorithms that tilt the simulator cabin to reproduce sustained accelerations. In order to avoid conflicting inertial cues, the tilt rate is kept below drivers’ perceptual thresholds, which are typically derived from classical vestibular research in which additional sensory cues to self-motion are removed. These limits might be too conservative for an ecological driving simulation, which provides a variety of complex visual and vestibular cues as well as attentional demands that vary with task difficulty. We measured roll rate detection thresholds in active driving simulation, where visual and vestibular stimuli are provided and cognitive load is increased by the driving task. Thresholds measured during active driving are compared with tilt rate detection thresholds reported in the literature (passive thresholds) to assess the effect of the driving task. In a second experiment, these thresholds (active versus passive) are related to driving preferences in a slalom course, in order to determine which roll rate values are most appropriate for driving simulators to present the most realistic driving experience. The results show that the detection threshold for roll in an active driving task is significantly higher than the limits currently used in motion cueing algorithms, suggesting that higher tilt limits can be implemented to better exploit the simulator operational space. Supra-threshold roll rates in the slalom task are also rated as more realistic. Overall, our findings indicate that increasing task complexity in driving simulation can decrease motion sensitivity, allowing further expansion of the virtual workspace environment.
Title: Roll rate thresholds in driving simulation
Seeing and Perceiving, 25(1), p. 167.
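Detection thresholds like the roll rate threshold measured here are often estimated with a simple adaptive staircase. A two-down/one-up rule converges on the ~70.7%-correct point of the psychometric function; the observer model, step size, and units below are illustrative assumptions, not the study's procedure.

```python
import math
import random

def two_down_one_up(true_threshold=1.0, step=0.1, n_reversals=14, seed=7):
    """Estimate a detection threshold (e.g., roll rate in deg/s) by staircase."""
    rng = random.Random(seed)
    level, streak, last_move = 2.0, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        # Simulated observer: detection probability rises with stimulus level.
        p_detect = 1.0 / (1.0 + math.exp(-8.0 * (level - true_threshold)))
        if rng.random() < p_detect:
            streak += 1
            if streak < 2:
                continue              # require two detections before stepping down
            streak, move = 0, -step
        else:
            streak, move = 0, +step
        if last_move is not None and move != last_move:
            reversals.append(level)   # direction change = reversal
        last_move = move
        level = max(step, level + move)
    tail = reversals[4:]              # discard the initial descent
    return sum(tail) / len(tail)

print(round(two_down_one_up(), 2))
```

Comparing such an estimate against the tilt-rate limit hard-coded in a motion cueing algorithm is exactly the kind of check the study's conclusion calls for.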
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X648035
Olivera Ilić, V. Ković, D. Janković
Since Köhler described them more than half a century ago, phonetic–iconic correspondences have been demonstrated in a series of studies showing remarkable consistency in matches between pseudowords containing specific types of phonemes (e.g., Maluma or Takete) and rounded or angular shapes. If the effect found in these experiments reveals something about the processes involved in natural language interpretation, we should expect similar associations between the phonological properties of objects’ labels and their perceptual properties to exist in natural language as well. However, the results of studies testing this effect in natural language are rather inconsistent and sometimes even contradictory. The aim of the present study was to test whether the distribution of phonemes and consonant-vowel patterns previously found in pseudowords that participants produced for abstract visual patterns (Jankovic and Markovic, 2000, Perception 29 ECVP Abstract Supplement) could also be found in the words of a natural language. The distributions of phonemes and consonant-vowel patterns were analyzed for 1066 nouns denoting round and angular shapes extracted from the Corpus of Serbian Language. Results showed that Serbian words denoting sharp and rounded objects exhibit phoneme and consonant-vowel distributions similar to those found in pseudowords produced for sharp and rounded visual stimuli, and therefore provide further evidence for crossmodal correspondences in natural language. These findings are discussed in light of the role crossmodal correspondences may play in natural language acquisition.
Title: Crossmodal correspondences in natural language: Distribution of phonemes and consonant-vowel patterns in Serbian words denoting round and angular objects
Seeing and Perceiving, 25(1), p. 174.
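The core of such an analysis is reducing each word to its consonant-vowel skeleton and comparing pattern frequencies across the two word classes. A minimal sketch: the vowel set is the five Serbian Latin-script vowels (syllabic 'r' is ignored for simplicity), and the example words are hypothetical stand-ins for the corpus nouns.

```python
from collections import Counter

VOWELS = set("aeiou")  # Serbian Latin-script vowels; syllabic 'r' ignored here

def cv_pattern(word):
    """Map a word to its consonant-vowel skeleton, e.g., 'lopta' -> 'CVCCV'."""
    return "".join("V" if ch in VOWELS else "C" for ch in word.lower() if ch.isalpha())

def pattern_distribution(words):
    """Relative frequency of each CV pattern in a word list."""
    counts = Counter(cv_pattern(w) for w in words)
    total = sum(counts.values())
    return {p: n / total for p, n in counts.items()}

round_words = ["lopta", "kugla"]     # hypothetical 'round' nouns
angular_words = ["kocka", "strela"]  # hypothetical 'angular' nouns
print(pattern_distribution(round_words))
print(pattern_distribution(angular_words))
```

Comparing the two resulting distributions (e.g., by chi-square) is then a direct test of whether round- and angular-denoting words differ in CV structure.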
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X648152
Daniel E. Drebing, Jared Medina, H. Coslett, Jeffrey T. Shenton, R. Hamilton
Integrating sensory information across modalities is necessary for a cohesive experience of the world; disrupting the ability to bind the multisensory stimuli arising from an event leads to a disjointed and confusing percept. We previously reported (Hamilton et al., 2006) a patient, AWF, who suffered an acute neural incident after which he displayed a distinct inability to integrate auditory and visual speech information. While our prior experiments involving AWF suggested that he had a deficit of audiovisual speech processing, they did not test the hypothesis that his deficits in audiovisual integration are restricted to speech. To test this notion, we conducted a series of experiments examining AWF’s ability to integrate crossmodal information from both speech and non-speech events. AWF made temporal order judgments (TOJs) for videos of object noises (such as hands clapping) or speech, in which the onsets of the auditory and visual information were manipulated. The results show that while AWF performed worse than controls in judging even the most salient onset differences for speech videos, he did not differ significantly from controls in his ability to make TOJs for the object videos. These results indicate that intermodal binding can be disrupted for audiovisual speech events while binding for real-world, non-speech events is spared.
Title: An acquired deficit of intermodal temporal processing for audiovisual speech: A case study
Seeing and Perceiving, 25(1), p. 186.
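TOJ performance is conventionally summarized by fitting a cumulative Gaussian to the proportion of one response type across stimulus-onset asynchronies, yielding a point of subjective simultaneity (PSS) and a just-noticeable difference (JND). The grid-search maximum-likelihood sketch below runs on synthetic data; the SOAs, trial counts, and parameter grids are illustrative, not the study's.

```python
import math

def cum_gauss(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_toj(soas, k_visual_first, n_trials):
    """Grid-search MLE for PSS (mu) and slope (sigma); JND = sigma * z(0.75)."""
    best = (-math.inf, 0, 1)
    for mu in range(-200, 201, 5):        # candidate PSS values (ms)
        for sigma in range(5, 301, 5):    # candidate slopes (ms)
            ll = 0.0
            for soa, k, n in zip(soas, k_visual_first, n_trials):
                p = min(max(cum_gauss(soa, mu, sigma), 1e-9), 1 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if ll > best[0]:
                best = (ll, mu, sigma)
    _, pss, sigma = best
    return pss, 0.6745 * sigma            # (PSS, JND) in ms

# Synthetic observer with PSS = 30 ms, sigma = 60 ms, 40 trials per SOA.
soas = list(range(-200, 201, 50))
n = [40] * len(soas)
k = [round(40 * cum_gauss(s, 30, 60)) for s in soas]
pss, jnd = fit_toj(soas, k, n)
print(pss, round(jnd, 1))
```

A patient-versus-controls comparison like AWF's would then contrast the fitted JNDs (and accuracy at the largest SOAs) between speech and non-speech videos.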
Pub Date: 2012-01-01 | DOI: 10.1163/187847612X648288
V. Harrar, C. Spence
When deciding on a product’s quality, we often pick it up to gauge its weight: if it is heavy enough, we tend to think that it is of good quality. We have recently shown that the weight of a dish can affect the perceived taste and quality of the food it contains. Here, we varied the weight of spoons in order to determine whether the weight or size of the cutlery might influence taste perception. Teaspoons and tablespoons were tested, with one spoon of each size artificially weighted with lead hidden in the handle (teaspoons: 2.35 and 5.67 g; tablespoons: 3.73 and 10.84 g). Participants tasted yoghurt from each spoon and rated its perceived density, price, sweetness, and pleasantness. Four within-participant ANOVAs were used to test the effects of spoon size and spoon weight on each attribute. The perceived density of the yoghurt was affected by the spoon’s weight, with yoghurt from light spoons being perceived as thicker than yoghurt sampled from heavy spoons. The perceived price of the yoghurt also varied with spoon weight, such that lighter spoons made the yoghurt taste more expensive. The most reliable effect was an interaction between spoon weight and spoon size on sweetness perception: heavy teaspoons and light tablespoons made the yoghurt appear sweeter. These data support the growing body of research demonstrating that tableware (and silverware) can affect the consumer’s judgements without their being aware.
{"title":"A weighty matter: The effect of spoon size and weight on food perception","authors":"V. Harrar, C. Spence","doi":"10.1163/187847612X648288","DOIUrl":"https://doi.org/10.1163/187847612X648288","url":null,"abstract":"When deciding on a product’s quality, we often pick it up to gauge its weight. If it’s heavy enough, we tend to think that it is good quality. We have recently shown that the weight of a dish can affect the taste and quality perception of the food it contains. Here, we varied the weight of spoons in order to determine whether the weight or size of the cutlery might influence taste perception. Teaspoons and tablespoons were tested, with one of each spoon size artificially weighted with lead hidden in the handle (teaspoons: 2.35 and 5.67 g; tablespoons: 3.73 and 10.84 g). Participants tasted yoghurt from each spoon and rated the yoghurt’s perceived density, price, sweetness, and pleasantness. Four within-participant ANOVAs were used to test the effects of spoon size and spoon weight on each attribute. The perceived density of the yoghurt was affected by the spoon’s weight, with yoghurt from light spoons being perceived as thicker than yoghurt sampled from a heavy spoon. The perceived price of the yoghurt also varied with spoon weight, such that lighter spoons made the yoghurt taste more expensive. The most reliable effect was an interaction between spoon weight and spoon size on sweetness perception: heavy teaspoons and light tablespoons made the yoghurt appear sweeter. 
These data support the growing body of research demonstrating that tableware (and silverware) can affect the consumer’s judgements without their being aware.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"31 1","pages":"199-199"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648288","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2012-01-01DOI: 10.1163/187847612X648332
A. Slater, Dina Lew, G. Bremner, P. Walker
One of the most important crossmodal associations is that between vision and sound, and we know that such bimodal information is of great importance in perceptual learning. Many crossmodal relationships are non-arbitrary or ‘natural’, and a particularly important case is object naming. While many object-name relationships are arbitrary, others are not. The clearest examples are known as onomatopoeia — the cuckoo and the kittiwake are named after the sounds they make. A striking demonstration that such effects extend beyond onomatopoeic naming of familiar objects concerns shapes. When adults are shown two shapes, one angular and one with rounded contours, and given the words ‘Takete’ and ‘Maluma’, they will invariably associate ‘Takete’ with the angular shape and ‘Maluma’ with the rounded shape. This effect was first described by Köhler in 1947, and there have been recent demonstrations of the effect with adults and young (3-year-old) children. Several researchers have suggested that these non-arbitrary associations may be of great importance in that they may influence and ‘bootstrap’ the infant’s early language development, particularly the learning of words for objects. If this is so, such associations should be present prior to language acquisition, and we describe three experiments which demonstrate such relationships in preverbal, 3–5-month-old infants, using random shapes, such as those in the figure, and angular and rounded face-like stimuli.
{"title":"Preverbal infants experience sound-shape correspondences","authors":"A. Slater, Dina Lew, G. Bremner, P. Walker","doi":"10.1163/187847612X648332","DOIUrl":"https://doi.org/10.1163/187847612X648332","url":null,"abstract":"One of the most important crossmodal associations is between vision and sound, and we know that such bimodal information is of great importance in perceptual learning. Many crossmodal relationships are non-arbitrary or ‘natural’, and a particularly important case is object naming. While many object-name relationships are arbitrary, others are not. The clearest examples are known as onomatopoeia — the cuckoo and the kittiwake are named after the sounds they make. And a striking demonstration that such effects extend beyond onomatopoeic naming of familiar objects concerns shapes. When adults are shown two shapes, one angular and one with rounded contours, and given the words ‘Takete’ and ‘Maluma’ they will invariably associate ‘Takete’ with the angular shape, and ‘Maluma’ with the rounded shape. This effect was first described by Kohler in 1947, and there have been recent demonstrations of the effect with adults and young (3-year-old) children. Several researchers have suggested that these non-arbitrary associations may be of great importance in that they may influence and ‘bootstrap’ the infant’s early language development, particularly the learning of words for objects. 
If this is so, such associations should be present prior to language acquisition, and we describe three experiments which demonstrate such relationships in preverbal, 3–5-month-old infants, using random shapes, such as those in the figure, and angular and rounded face-like stimuli.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"204-204"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648332","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2012-01-01DOI: 10.1163/187847612X648440
R. Rouw
A synesthete might inform you that McDonald’s is ‘all wrong’, as obviously their large letter M has the completely wrong color. While synesthesia is now well accepted as a ‘real’ phenomenon, what underlies the highly specific and consistent additional sensations remains a topic of debate. What sets synesthetic mechanisms apart from those involved in ‘normal’ associations? In this presentation, we first discuss the possible neurobiological underpinnings. A review study has identified six brain regions related to synesthesia. Furthermore, results from structural as well as functional connectivity studies show hyperconnectivity in the synesthete’s brain. Second, the behavioral characteristics that set synesthetes apart from non-synesthetes are discussed. One problem in obtaining a clear model of synesthesia is that, currently, most studies are performed on particular types of synesthesia (in particular, colored letters/numbers). We present rare cases of synesthesia (taste/smell with sounds) and examine how well their characteristics fit with the traditionally presented model of synesthesia.
{"title":"Ordinary associations or ‘special cases’; defining mechanisms in synesthesia","authors":"R. Rouw","doi":"10.1163/187847612X648440","DOIUrl":"https://doi.org/10.1163/187847612X648440","url":null,"abstract":"A synesthete might inform you that McDonald’s is ‘all wrong’, as obviously their large letter M has the completely wrong color. While synesthesia is now well accepted as a ‘real’ phenomenon, what underlies the highly specific and consistent additional sensations remains a topic of debate. What sets synesthetic mechanisms apart from those involved in ‘normal’ associations? In this presentation, we first discuss the possible neurobiological underpinnings. A review study has identified six brain regions related to synesthesia. Furthermore, results from structural as well as functional connectivity studies show hyperconnectivity in the synesthete’s brain. Second, the behavioral characteristics that set synesthetes apart from non-synesthetes are discussed. One problem in obtaining a clear model of synesthesia is that, currently, most studies are performed on particular types of synesthesia (in particular, colored letters/numbers). We present rare cases of synesthesia (taste/smell with sounds) and examine how well their characteristics fit with the traditionally presented model of synesthesia.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"25 1","pages":"218-218"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X648440","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64428944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}