Pub Date: 2021-05-28 | DOI: 10.1163/22134808-bja10052
Rita Mendonça, Margarida V Garrido, Gün R Semin
The experiment reported here used a variation of the spatial cueing task to examine the effects of unimodal and bimodal attention-orienting primes on target identification latencies and eye gaze movements. The primes were a nonspatial auditory tone and words known to drive attention consistent with the dominant writing and reading direction, as well as introducing a semantic, temporal bias (past-future) on the horizontal dimension. As expected, past-related (visual) word primes gave rise to shorter response latencies in the left hemifield and future-related words in the right. This congruency effect was qualified by asymmetric performance in the right hemispace following future words, driven by the left-to-right trajectory of scanning habits, which facilitated search times and eye gaze movements to lateralized targets. The auditory tone prime alone acted as an alarm signal, boosting visual search and reducing response latencies. Bimodal priming, i.e., temporal visual words paired with the auditory tone, impaired performance by delaying visual attention and response times relative to the unimodal visual word condition. We conclude that bimodal primes were no more effective in capturing participants' spatial attention than the unimodal auditory and visual primes. The contribution of these findings to the literature on multisensory integration is discussed.
Title: The Effect of Simultaneously Presented Words and Auditory Tones on Visuomotor Performance. Multisensory Research, pp. 1-28.
Pub Date: 2021-05-12 | DOI: 10.1163/22134808-bja10051
Giovanni Anobile, Maria C Morrone, Daniela Ricci, Francesca Gallini, Ilaria Merusi, Francesca Tinelli
Premature birth is associated with a high risk of damage in the parietal cortex, a key area for numerical and non-numerical magnitude perception and mathematical reasoning. Children born preterm have higher rates of learning difficulties for school mathematics. In this study, we investigated how preterm newborns (born at 28-34 weeks of gestational age) and full-term newborns respond to visual numerosity after habituation to auditory stimuli of different numerosities. The results show that the two groups have a similar preferential looking response to visual numerosity, both preferring the incongruent set after crossmodal habituation. These results suggest that the numerosity system is resistant to prematurity.
Title: Typical Crossmodal Numerosity Perception in Preterm Newborns. Multisensory Research, pp. 1-22.
Pub Date: 2021-05-12 | DOI: 10.1163/22134808-bja10050
Anna Borgolte, Ahmad Bransi, Johanna Seifert, Sermin Toto, Gregor R Szycik, Christopher Sinke
Synaesthesia is a multimodal phenomenon in which the activation of one sensory modality leads to an involuntary additional experience in another sensory modality. To date, normal multisensory processing has hardly been investigated in synaesthetes. In the present study, we examine processes of audiovisual separation in synaesthesia by using a simultaneity judgement task. Subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not. Stimulus onset asynchronies (SOAs) as well as the temporal order of the stimuli were systematically varied. Our results demonstrate that synaesthetes are better at separating auditory and visual events than control subjects, but only when vision leads.
Title: Audiovisual Simultaneity Judgements in Synaesthesia. Multisensory Research, pp. 1-12.
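Simultaneity-judgement data of the kind described above are usually summarised by the point of subjective simultaneity (PSS) and the width of the window of SOAs over which stimuli are reported as simultaneous. The sketch below is illustrative only, with hypothetical data and a helper (`sj_summary`) that is not from the paper; published analyses typically fit a psychometric model rather than reading these values off the raw curve.

```python
import numpy as np

def sj_summary(soas, p_simult):
    """Crude summary of a simultaneity-judgement curve.

    PSS    = SOA (ms) with the highest proportion of 'simultaneous' responses.
    window = span of SOAs where that proportion exceeds half its peak,
             a rough stand-in for the temporal binding window.
    """
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_simult, dtype=float)
    pss = soas[np.argmax(p)]
    above = soas[p >= p.max() / 2]
    return pss, above.max() - above.min()
```

By convention (an assumption here, not stated in the abstract), negative SOAs would code audition-leading and positive SOAs vision-leading trials, so a vision-leads advantage would show up as an asymmetry of the curve around the PSS.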
Pub Date: 2021-04-20 | DOI: 10.1163/22134808-bja10049
Katharina Margareta Theresa Pöhlmann, Julia Föcker, Patrick Dickinson, Adrian Parke, Louise O'Hare
Virtual Reality (VR) experienced through head-mounted displays often leads to vection, discomfort and sway in the user. This study investigated the effect of motion direction and eccentricity on these three phenomena using optic flow patterns displayed on the Valve Index headset. Visual motion stimuli were presented in the centre, periphery or far periphery and moved either in depth (back and forth) or laterally (left and right). Overall, vection was stronger for motion in depth than for lateral motion. Additionally, eccentricity primarily affected stimuli moving in depth, with stronger vection for more peripherally presented motion patterns than for more central ones. Motion direction affected the various aspects of VR sickness differently and modulated the effect of eccentricity on VR sickness. For stimuli moving in depth, far peripheral presentation caused more discomfort, whereas for lateral motion, central stimuli caused more discomfort. Stimuli moving in depth led to more head movements in the anterior-posterior direction when the entire visual field was stimulated. Observers made more head movements in the anterior-posterior direction than in the medio-lateral direction throughout the entire experiment, independent of the motion direction or eccentricity of the presented moving stimulus. Head movements were elicited on the same plane as the moving stimulus only for stimuli moving in depth that covered the entire visual field. Correlational analyses showed positive relationships between dizziness and vection duration, and between general discomfort and sway. Identifying where in the visual field presented motion causes the least VR sickness without losing vection and presence can guide the development of Virtual Reality games, training and treatment programmes.
Title: The Effect of Motion Direction and Eccentricity on Vection, VR Sickness and Head Movements in Virtual Reality. Multisensory Research, pp. 1-40.
Pub Date: 2021-04-20 | DOI: 10.1163/22134808-bja10048
Cristina Jordão Nazaré, Armando Mónica Oliveira
The present study examines the extent to which temporal and spatial properties of sound modulate visual motion processing in spatial localization tasks. Participants were asked to locate the place at which a moving visual target unexpectedly vanished. Across different tasks, accompanying sounds were factorially varied within subjects as to their onset and offset times and/or positions relative to visual motion. Sound onset had no effect on the localization error. Sound offset was shown to modulate the perceived visual offset location, both for temporal and spatial disparities. This modulation did not conform to attraction toward the timing or location of the sounds but, demonstrably in the case of temporal disparities, to bimodal enhancement instead. Favorable indications of a contextual effect of audiovisual presentations on interspersed visual-only trials were also found. The short sound-leading offset asynchrony had benefits equivalent to audiovisual offset synchrony, suggestive of the involvement of early-level mechanisms, constrained by a temporal window, under these conditions. Yet, we tentatively hypothesize that accounting for the full pattern of results, and for how they compare with previous studies, requires the contribution of additional mechanisms, including learned detection of auditory-visual associations and cross-sensory spread of endogenous attention.
Title: Effects of Audiovisual Presentations on Visual Localization Errors: One or Several Multisensory Mechanisms? Multisensory Research, pp. 1-35.
Pub Date: 2021-04-16 | DOI: 10.1163/22134808-bja10047
Yuta Ujiie, Kohske Takahashi
While visual information from facial speech modulates auditory speech perception, it is less influential on audiovisual speech perception among autistic individuals than among typically developed individuals. In this study, we investigated the relationship between autistic traits (Autism-Spectrum Quotient; AQ) and the influence of visual speech on the recognition of Rubin's vase-type speech stimuli with degraded facial speech information. Participants were 31 university students (13 males and 18 females; mean age: 19.2 years, SD: 1.13) who reported normal (or corrected-to-normal) hearing and vision. All participants completed three speech recognition tasks (visual, auditory, and audiovisual stimuli) and the Japanese version of the AQ. The results showed that speech recognition accuracies for visual (i.e., lip-reading) and auditory stimuli were not significantly related to participants' AQ. In contrast, audiovisual speech perception was less susceptible to facial speech information among individuals with high, rather than low, autistic traits. The weaker influence of visual information on audiovisual speech perception in autism spectrum disorder (ASD) was robust regardless of the clarity of the visual information, suggesting a difficulty in the process of audiovisual integration rather than in the visual processing of facial speech.
Title: Weaker McGurk Effect for Rubin's Vase-Type Speech in People With High Autistic Traits. Multisensory Research, pp. 1-17.
Pub Date: 2021-04-09 | DOI: 10.1163/22134808-bja10046
Lieke M J Swinkels, Harm Veling, Hein T van Schie
During a full body illusion (FBI), participants experience a change in self-location towards a body that they see in front of them from a third-person perspective, and experience touch as originating from this body. Multisensory integration is thought to underlie this illusion. In the present study, we tested the redundant signals effect (RSE) as a new objective measure of the illusion, designed to tap directly into the multisensory integration underlying it. The illusion was induced by an experimenter who stroked and tapped the participant's shoulder and underarm, while participants perceived the touch on the virtual body in front of them via a head-mounted display. Participants performed a speeded detection task, responding to visual stimuli on the virtual body, to tactile stimuli on the real body, and to combined (multisensory) visual and tactile stimuli. Analysis of the RSE with a race model inequality test indicated that multisensory integration took place in both the synchronous and the asynchronous condition. This surprising finding suggests that simultaneous bodily stimuli from different (visual and tactile) modalities are transiently integrated into a multisensory representation even when no illusion is induced. Furthermore, this finding suggests that the RSE is not a suitable objective measure of body illusions. Interestingly, however, responses to the unisensory tactile stimuli in the speeded detection task were slower and had a larger variance in the asynchronous condition than in the synchronous condition. The implications of this finding for the literature on body representations are discussed.
Title: The Redundant Signals Effect and the Full Body Illusion: not Multisensory, but Unisensory Tactile Stimuli Are Affected by the Illusion. Multisensory Research, pp. 1-33.
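The race model inequality test named in the abstract is, in Miller's formulation, a comparison of the bimodal response-time distribution against the sum of the two unimodal distributions: under separate-channel (race) processing, F_bimodal(t) ≤ F_visual(t) + F_tactile(t) for every t, and a violation is taken as evidence of integration. The numpy sketch below is an illustrative reconstruction under assumed data, not the authors' analysis code; function names and the time grid are hypothetical.

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative RT distribution evaluated on a time grid (ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, grid, side="right") / len(rts)

def race_model_violated(rt_vis, rt_tac, rt_bimodal, grid):
    """Miller's race model inequality: under separate-channel processing,
    F_bimodal(t) <= F_vis(t) + F_tac(t) for all t (capped at 1).
    Returns True if the bound is exceeded anywhere on the grid."""
    bound = np.minimum(ecdf(rt_vis, grid) + ecdf(rt_tac, grid), 1.0)
    return bool(np.any(ecdf(rt_bimodal, grid) > bound))
```

In practice the inequality is evaluated at percentiles of the RT distributions and tested across participants; this per-grid-point check conveys only the core logic.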
Pub Date: 2021-03-15 | DOI: 10.1163/22134808-bja10045
Tsukasa Kimura
Interaction with other sensory information is important for the prediction of tactile events. Recent studies have reported that the approach of visual information toward the body facilitates prediction of subsequent tactile events. However, the processing of tactile events is influenced by multiple spatial coordinates, and it remains unclear how this approach effect influences tactile events in different spatial coordinates, i.e., spatial reference frames. We investigated the relationship between the prediction of a tactile stimulus via this approach effect and spatial coordinates by comparing ERPs. Participants were asked to place their arms on a desk and to respond to tactile stimuli presented to the left (or right) index finger with a high probability (80%) or to the opposite index finger with a low probability (20%). Before the presentation of each tactile stimulus, visual stimuli sequentially approached the hand to which the high-probability tactile stimulus was presented. In the uncrossed condition, each hand was placed on the corresponding side. In the crossed condition, the hands were crossed and placed on the opposite sides, i.e., the left (right) hand was placed on the right (left) side. Thus, the spatial locations of the tactile stimulus and hand were consistent in the uncrossed condition and inconsistent in the crossed condition. The results showed that N1 amplitudes elicited by high-probability tactile stimuli decreased only in the uncrossed condition. These results suggest that the prediction of a tactile stimulus facilitated by approaching visual information is influenced by multiple spatial coordinates.
Title: Multiple Spatial Coordinates Influence the Prediction of Tactile Events Facilitated by Approaching Visual Stimuli. Multisensory Research, pp. 1-21.
Pub Date: 2021-03-10 | DOI: 10.1163/22134808-034001ED
Amber Ross, Mohan Matthen
European philosophers of the modern period generally acknowledged that the senses are our primary source of knowledge about the contingent states of the world around us. The question of modality was of secondary interest and was very little discussed in this period. Why? Because these philosophers were atomists about sense-perception, an attitude that makes multisensory perception impossible. Let us explain. Atomists hold that all sense-experience is of ‘ideas’ — a somewhat oversimple, but still useful, way to think of these is as images. All ideas are ultimately composed of simple ideas. Atomists hold, moreover, that the intrinsic nature of a simple (or non-composite) idea is fully given by conscious experience of that idea, and in no other way. For example, burnt sienna is a simple idea because it is not composed of other ideas. Nothing about its intrinsic nature can be known except by experiencing it — a colour-blind individual cannot know what it is. It is, moreover, adequately and completely known when it is experienced; there is nothing more to know about it than is given by visual experience of it (see Note 1). Now, on this account of simple ideas, distinctions among them cannot be analysed. For atomists, inter-modal distinctions, like all other distinctions among ideas, are primitive and based in experience. What, for example, is the difference between burnt sienna and the sound of a trumpet playing middle C? All that can be said is that they are experientially different from one another.
{"title":"Introduction to the Special Issue on Multisensory Perception in Philosophy.","authors":"Amber Ross, Mohan Matthen","doi":"10.1163/22134808-034001ED","DOIUrl":"https://doi.org/10.1163/22134808-034001ED","url":null,"abstract":"European philosophers of the modern period generally acknowledged that the senses are our primary source of knowledge about the contingent states of the world around us. The question of modality was of secondary interest and was very little discussed in this period. Why? Because these philosophers were atomists about sense-perception, an attitude that makes multisensory perception impossible. Let us explain. Atomists hold that all sense-experience is of ‘ideas’ — a somewhat oversimple, but still useful, way to think of these is as images. All ideas are ultimately composed of simple ideas. Atomists hold, moreover, that the intrinsic nature of a simple (or non-composite) idea is fully given by conscious experience of that idea, and in no other way. For example, burnt sienna is a simple idea because it is not composed of other ideas. Nothing about its intrinsic nature can be known except by experiencing it — a colour-blind individual cannot know what it is. It is, moreover, adequately and completely known when it is experienced; there is nothing more to know about it than is given by visual experience of it (see Note 1). Now, on this account of simple ideas, distinctions among them cannot be analysed. For atomists, inter-modal distinctions, like all other distinctions among ideas, are primitive and based in experience. What, for example, is the difference between burnt sienna and the sound of a trumpet playing middle C? 
All that can be said is that they are experientially different from one another.","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"34 3","pages":"219-231"},"PeriodicalIF":1.6,"publicationDate":"2021-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25465185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-02-09DOI: 10.1163/22134808-bja10044
Tyler C Dalal, Anne-Marie Muller, Ryan A Stevenson
Recent literature has suggested that deficits in sensory processing are associated with schizophrenia (SCZ), and more specifically hallucination severity. The DSM-5's shift towards a dimensional approach to diagnostic criteria has led to SCZ and schizotypal personality disorder (SPD) being classified as schizophrenia spectrum disorders. With SCZ and SPD overlapping in aetiology and symptomatology, such as sensory abnormalities, it is important to investigate whether these deficits commonly reported in SCZ extend to non-clinical expressions of SPD. In this study, we investigated whether levels of SPD traits were related to audiovisual multisensory temporal processing in a non-clinical sample, revealing two novel findings. First, less precise multisensory temporal processing was related to higher overall levels of SPD symptomatology. Second, this relationship was specific to the cognitive-perceptual domain of SPD symptomatology, and more specifically, the Unusual Perceptual Experiences and Odd Beliefs or Magical Thinking symptomatology. The current study provides an initial look at the relationship between multisensory temporal processing and schizotypal traits. Additionally, it builds on the previous literature by suggesting that less precise multisensory temporal processing is not exclusive to SCZ but may also be related to non-clinical expressions of schizotypal traits in the general population.
{"title":"The Relationship Between Multisensory Temporal Processing and Schizotypal Traits.","authors":"Tyler C Dalal, Anne-Marie Muller, Ryan A Stevenson","doi":"10.1163/22134808-bja10044","DOIUrl":"10.1163/22134808-bja10044","url":null,"abstract":"<p><p>Recent literature has suggested that deficits in sensory processing are associated with schizophrenia (SCZ), and more specifically hallucination severity. The DSM-5's shift towards a dimensional approach to diagnostic criteria has led to SCZ and schizotypal personality disorder (SPD) being classified as schizophrenia spectrum disorders. With SCZ and SPD overlapping in aetiology and symptomatology, such as sensory abnormalities, it is important to investigate whether these deficits commonly reported in SCZ extend to non-clinical expressions of SPD. In this study, we investigated whether levels of SPD traits were related to audiovisual multisensory temporal processing in a non-clinical sample, revealing two novel findings. First, less precise multisensory temporal processing was related to higher overall levels of SPD symptomatology. Second, this relationship was specific to the cognitive-perceptual domain of SPD symptomatology, and more specifically, the Unusual Perceptual Experiences and Odd Beliefs or Magical Thinking symptomatology. The current study provides an initial look at the relationship between multisensory temporal processing and schizotypal traits. 
Additionally, it builds on the previous literature by suggesting that less precise multisensory temporal processing is not exclusive to SCZ but may also be related to non-clinical expressions of schizotypal traits in the general population.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-19"},"PeriodicalIF":1.6,"publicationDate":"2021-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25465186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}