Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality
Pub Date: 2022-04-05 | DOI: 10.1163/22134808-bja10074
William Chung, Michael Barnett-Cowan
Integration of incoming sensory signals from multiple modalities is central to self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing various discrepancies between vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform, accompanied by visual feedback from a virtual environment generated through a head-mounted display. Sensory conflict was induced by altering the speed and direction with which the visual scene updated relative to the observer's physical rotation. There were no differences in the perceived timing of the rotation without vision, with congruent visual feedback, or when the visual motion updated more slowly. However, the perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate a potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research on cybersickness and sensory conflict theory, in which a mismatch between visual and vestibular signals increases the likelihood of sickness symptoms.
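In a temporal-order judgement (TOJ) design like this, perceived timing is typically summarized by the point of subjective simultaneity (PSS): the stimulus onset asynchrony at which the fitted psychometric function crosses 50%. Below is a minimal sketch of that standard analysis in Python; the SOAs, response proportions, and function names are entirely made up for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical SOAs (ms): negative = rotation onset before the tone.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
# Made-up proportion of "tone first" responses at each SOA.
p_tone_first = np.array([0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 0.98])

def psychometric(soa, pss, jnd):
    # Cumulative Gaussian: the PSS is the 50% point, the JND scales the slope.
    return norm.cdf(soa, loc=pss, scale=jnd)

(pss, jnd), _ = curve_fit(psychometric, soas, p_tone_first, p0=(0.0, 100.0))
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
# A PSS far from zero means the rotation must lead (or lag) the tone for the
# two events to be perceived as simultaneous, the pattern the abstract reports
# for the incongruent-direction condition.
```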
{"title":"Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality.","authors":"William Chung, Michael Barnett-Cowan","doi":"10.1163/22134808-bja10074","DOIUrl":"10.1163/22134808-bja10074","url":null,"abstract":"<p><p>Integration of incoming sensory signals from multiple modalities is central in the determination of self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing various discrepancies between the vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform accompanied by visual feedback using a virtual environment generated through a head-mounted display. Sensory conflict was induced by altering the speed and direction by which the movement of the visual scene updated relative to the observer's physical rotation. There were no differences in perceived timing of the rotation without vision, with congruent visual feedback and when the speed of the updating of the visual motion was slower. However, the perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate the potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research regarding cybersickness and the sensory conflict theory, where a mismatch between the visual and vestibular signals may lead to a greater likelihood for the occurrence of sickness symptoms.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"1 1","pages":"1-23"},"PeriodicalIF":1.6,"publicationDate":"2022-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64581115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence of Tactile Flow on Visual Heading Perception
Pub Date: 2022-03-09 | DOI: 10.1163/22134808-bja10071
Lisa Rosenblum, Elisa Grewe, Jan Churan, Frank Bremmer
The integration of information from different sensory modalities is crucial for successful navigation through an environment. Among others, self-motion induces distinct optic flow patterns on the retina, vestibular signals and tactile flow, which contribute to determining traveled distance (path integration) or movement direction (heading). While the processing of combined visual-vestibular information is the subject of a growing body of literature, the processing of visuo-tactile signals in the context of self-motion has received comparatively little attention. Here, we investigated whether visual heading perception is influenced by behaviorally irrelevant tactile flow. In the visual modality, we simulated an observer's self-motion across a horizontal ground plane (optic flow). Tactile self-motion stimuli were delivered by air flow from head-mounted nozzles (tactile flow). In blocks of trials, we presented only visual or tactile stimuli and subjects had to report their perceived heading. In another block of trials, tactile and visual stimuli were presented simultaneously, with the tactile flow within ±40° of the visual heading (bimodal condition). Here, importantly, participants had to report their perceived visual heading. Perceived self-motion direction in all conditions revealed a centripetal bias, i.e., heading directions were perceived as compressed toward straight ahead. In the bimodal condition, we found a small but systematic influence of task-irrelevant tactile flow on visually perceived headings as a function of their directional offset. We conclude that tactile flow is more tightly linked to self-motion perception than previously thought.
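A common way to quantify both effects described here is with straight-line fits: the centripetal bias appears as a perceived-versus-true heading slope below 1, and the tactile influence as a nonzero slope of heading error against the visuo-tactile directional offset. A rough sketch under those assumptions, with invented numbers (nothing below comes from the actual dataset):

```python
import numpy as np

# Hypothetical true headings (deg; 0 = straight ahead) and mean perceived headings.
true_heading = np.array([-40, -30, -20, -10, 0, 10, 20, 30, 40], dtype=float)
perceived = np.array([-28, -22, -15, -8, 0, 7, 14, 21, 29], dtype=float)

# A slope < 1 indicates centripetal compression toward straight ahead.
slope, intercept = np.polyfit(true_heading, perceived, 1)
print(f"compression slope = {slope:.2f}")

# Tactile influence: regress heading error on the tactile-minus-visual offset.
offset = np.array([-40, -20, 0, 20, 40], dtype=float)  # directional offset (deg)
error = np.array([-2.1, -0.9, 0.1, 1.2, 2.3])          # made-up mean heading errors
gain, _ = np.polyfit(offset, error, 1)
print(f"tactile weight = {gain:.3f} deg of error per deg of offset")
```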
{"title":"Influence of Tactile Flow on Visual Heading Perception.","authors":"Lisa Rosenblum, Elisa Grewe, Jan Churan, Frank Bremmer","doi":"10.1163/22134808-bja10071","DOIUrl":"https://doi.org/10.1163/22134808-bja10071","url":null,"abstract":"<p><p>The integration of information from different sensory modalities is crucial for successful navigation through an environment. Among others, self-motion induces distinct optic flow patterns on the retina, vestibular signals and tactile flow, which contribute to determine traveled distance (path integration) or movement direction (heading). While the processing of combined visual-vestibular information is subject to a growing body of literature, the processing of visuo-tactile signals in the context of self-motion has received comparatively little attention. Here, we investigated whether visual heading perception is influenced by behaviorally irrelevant tactile flow. In the visual modality, we simulated an observer's self-motion across a horizontal ground plane (optic flow). Tactile self-motion stimuli were delivered by air flow from head-mounted nozzles (tactile flow). In blocks of trials, we presented only visual or tactile stimuli and subjects had to report their perceived heading. In another block of trials, tactile and visual stimuli were presented simultaneously, with the tactile flow within ±40° of the visual heading (bimodal condition). Here, importantly, participants had to report their perceived visual heading. Perceived self-motion direction in all conditions revealed a centripetal bias, i.e., heading directions were perceived as compressed toward straight ahead. In the bimodal condition, we found a small but systematic influence of task-irrelevant tactile flow on visually perceived headings as function of their directional offset. We conclude that tactile flow is more tightly linked to self-motion perception than previously thought.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"35 4","pages":"291-308"},"PeriodicalIF":1.6,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9378724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Book Review
Pub Date: 2022-03-09 | DOI: 10.1163/22134808-bja10070
Adam J Reeves
The institution of citizenship has undergone far-reaching factual and normative changes. In two recent studies, Christian Joppke and Ayelet Shachar address complex and pressing problems underlying modern citizenship theory. Joppke and Shachar begin from different premises regarding immigration and citizenship. Joppke takes for granted the existing regime of birthright citizenship; his main focus is the relationship between immigration and citizenship, and the interrelation between the dimensions of citizenship. Shachar finds the option of becoming a citizen deficient, and underscores the need to rethink the whole concept of birthright citizenship and the role it plays in perpetuating global injustice. Joppke is more optimistic: he celebrates the triumph of liberalism. Shachar is pessimistic about the citizenship discourse—which, even if more liberal than in the past, is still flawed—yet optimistic about the potential of her ideas to bring about a better future. This review briefly examines each book and discusses the contribution of each to the contemporary, evolving debates on citizenship.
{"title":"Book Review.","authors":"Adam J Reeves","doi":"10.1163/22134808-bja10070","DOIUrl":"https://doi.org/10.1163/22134808-bja10070","url":null,"abstract":"The institution of citizenship has undergone far-reaching factual and normative changes. In two recent studies, Christian Joppke and Ayelet Shachar address complex and pressing problems underlying modern citizenship theory. Joppke and Shachar begin from different premises regarding immigration and citizenship. Joppke takes for granted the existing regime of birthright citizenship; his main focus is the relationship between immigration and citizenship, and the interrelation between the dimensions of citizenship. Shachar finds the option of becoming a citizen deficient, and underscores the need to rethink the whole concept of birthright citizenship and the role it plays in perpetuating global injustice. Joppke is more optimistic: he celebrates the triumph of liberalism. Shachar is pessimistic about the citizenship discourse—which, even if more liberal than in the past, is still flawed—yet optimistic about the potential of her ideas to bring about a better future. This review briefly examines each book and discusses the contribution of each to the contemporary, evolving debates on citizenship.","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"35 3","pages":"289-290"},"PeriodicalIF":1.6,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9209582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Weighting of Time-Varying Visual and Auditory Evidence During Multisensory Decision Making
Pub Date: 2022-02-21 | DOI: 10.31234/osf.io/knzbv
Rosanne Roosmarijn Maria Tuip, W.M. van der Ham, Jeannette A. M. Lorteije, Filip Van Opstal
Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to accurately make a decision. Only recently have studies started to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from the individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings and/or two sounds presented to the right and left ear, based on contrast and loudness, respectively. We varied the evidence, i.e., the contrast of the gratings and the amplitude of the sounds, over time. Results showed a significant increase in accuracy on multisensory trials compared to unisensory trials, indicating that discrimination between two sources improves when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. The weighting of unisensory information during audiovisual decision-making changed dynamically over time: a first epoch was characterized by both visual and auditory weighting, during a second epoch vision dominated, and a third epoch finalized the weighting profile with auditory dominance. Our results suggest that in our task multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.
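One standard way to recover such a time-resolved weighting profile is psychophysical reverse correlation: regress the trial-by-trial choice on the evidence in each time bin and modality, and read the fitted coefficients as weights. A self-contained sketch of that logic on simulated data follows; the epoch structure and weight values are invented for illustration, not the authors' analysis code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_bins = 2000, 3  # e.g., early / middle / late epochs

# Simulated time-binned evidence for the visual and auditory channels
# (positive values favour the "right" choice).
vis = rng.normal(0, 1, (n_trials, n_bins))
aud = rng.normal(0, 1, (n_trials, n_bins))

# Simulate an observer whose weighting changes over epochs:
# both senses early, vision in the middle, audition late.
w_vis, w_aud = np.array([1.0, 1.2, 0.3]), np.array([1.0, 0.3, 1.2])
drive = vis @ w_vis + aud @ w_aud
choice = (drive + rng.normal(0, 1, n_trials) > 0).astype(int)

# Recover the weighting profile: the regression coefficient for each bin and
# modality estimates how strongly that epoch's evidence drove the choice.
X = np.hstack([vis, aud])
coefs = LogisticRegression().fit(X, choice).coef_.ravel()
print("visual weights per epoch:  ", coefs[:n_bins].round(2))
print("auditory weights per epoch:", coefs[n_bins:].round(2))
```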
{"title":"Dynamic Weighting of Time-Varying Visual and Auditory Evidence During Multisensory Decision Making.","authors":"Rosanne Roosmarijn Maria Tuip, W.M. van der Ham, Jeannette A. M. Lorteije, Filip Van Opstal","doi":"10.31234/osf.io/knzbv","DOIUrl":"https://doi.org/10.31234/osf.io/knzbv","url":null,"abstract":"Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to accurately make a decision. Only recently studies started to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings, and/or two sounds presented to the right and left ear based on respectively contrast and loudness. We varied the evidence, i.e., the contrast of the gratings and amplitude of the sound, over time. Results showed a significant increase in performance accuracy on multisensory trials compared to unisensory trials, indicating that discriminating between two sources is improved when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. Weighting of unisensory information during audiovisual decision-making dynamically changed over time. A first epoch was characterized by both visual and auditory weighting, during the second epoch vision dominated and the third epoch finalized the weighting profile with auditory dominance. Our results suggest that during our task multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 1 1","pages":"31-56"},"PeriodicalIF":1.6,"publicationDate":"2022-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46738287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crossmodal Correspondence Between Auditory Timbre and Visual Shape
Pub Date: 2021-12-30 | DOI: 10.1163/22134808-bja10067
Daniel Gurman, Colin R McCormick, Raymond M Klein
Crossmodal correspondences are defined as associations between stimuli in different modalities based on seemingly irrelevant stimulus features (e.g., bright shapes being associated with high-pitched sounds). There is a large body of research describing auditory crossmodal correspondences involving pitch and volume, but far less involving auditory timbre, the character or quality of a sound. Adeli and colleagues (2014, Front. Hum. Neurosci. 8, 352) found evidence of correspondences between timbre and visual shape. The present study aimed to replicate Adeli et al.'s findings, as well as to identify novel timbre-shape correspondences. Participants were tested using two computerized tasks: an association task, which involved matching shapes to presented sounds based on best perceived fit, and a semantic task, which involved rating shapes and sounds on a number of scales. The analysis of association matches revealed nonrandom selection, with certain stimulus pairs being selected at a much higher frequency. The harsh/jagged and smooth/soft correspondences observed by Adeli et al. were associated with a high level of consistency. Additionally, the high matching frequency of sounds with unstudied timbre characteristics suggests the existence of novel correspondences. Finally, the ability of the semantic task to supplement existing crossmodal correspondence assessments was demonstrated. Convergent analysis of the semantic and association data showed that the two datasets are significantly correlated (r = -0.36), meaning that stimulus pairs associated with a high level of consensus were more likely to hold similar perceived meaning. The results of this study are discussed in both theoretical and applied contexts.
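The convergent analysis described here boils down to correlating, across stimulus pairs, the matching consensus from the association task with how close the pair's ratings sit on the shared semantic scales. A toy version with fabricated numbers (only the sign of the relationship, matching the reported r = -0.36, is taken from the study):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-pair data: the proportion of participants choosing each
# sound-shape pair (consensus), and the distance between the sound's and the
# shape's mean ratings on shared semantic scales (e.g., harsh vs. soft).
consensus = np.array([0.82, 0.75, 0.61, 0.55, 0.43, 0.38, 0.29, 0.21])
semantic_distance = np.array([0.9, 1.1, 1.6, 1.8, 2.3, 2.6, 3.0, 3.4])

r, p = pearsonr(consensus, semantic_distance)
print(f"r = {r:.2f}, p = {p:.3f}")
# A negative r (as in the reported -0.36) means that high-consensus pairs
# sit closer together in semantic space, i.e., hold more similar meaning.
```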
{"title":"Crossmodal Correspondence Between Auditory Timbre and Visual Shape.","authors":"Daniel Gurman, Colin R McCormick, Raymond M Klein","doi":"10.1163/22134808-bja10067","DOIUrl":"https://doi.org/10.1163/22134808-bja10067","url":null,"abstract":"<p><p>Crossmodal correspondences are defined as associations between crossmodal stimuli based on seemingly irrelevant stimulus features (i.e., bright shapes being associated with high-pitched sounds). There is a large body of research describing auditory crossmodal correspondences involving pitch and volume, but not so much involving auditory timbre, the character or quality of a sound. Adeli and colleagues (2014, Front. Hum. Neurosci. 8, 352) found evidence of correspondences between timbre and visual shape. The present study aimed to replicate Adeli et al.'s findings, as well as identify novel timbre-shape correspondences. Participants were tested using two computerized tasks: an association task, which involved matching shapes to presented sounds based on best perceived fit, and a semantic task, which involved rating shapes and sounds on a number of scales. The analysis of association matches reveals nonrandom selection, with certain stimulus pairs being selected at a much higher frequency. The harsh/jagged and smooth/soft correspondences observed by Adeli et al. were found to be associated with a high level of consistency. Additionally, high matching frequency of sounds with unstudied timbre characteristics suggests the existence of novel correspondences. Finally, the ability of the semantic task to supplement existing crossmodal correspondence assessments was demonstrated. Convergent analysis of the semantic and association data demonstrates that the two datasets are significantly correlated (-0.36) meaning stimulus pairs associated with a high level of consensus were more likely to hold similar perceived meaning. The results of this study are discussed in both theoretical and applied contexts.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"35 3","pages":"221-241"},"PeriodicalIF":1.6,"publicationDate":"2021-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39710991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Effects of Mandarin Chinese Lexical Tones in Sound-Shape and Sound-Size Correspondences
Pub Date: 2021-12-30 | DOI: 10.1163/22134808-bja10068
Yen-Han Chang, Mingxue Zhao, Yi-Chuan Chen, Pi-Chun Huang
Crossmodal correspondences refer to mappings between specific feature domains in different sensory modalities. We investigated how vowels and lexical tones drive sound-shape (rounded or angular) and sound-size (large or small) mappings among native Mandarin Chinese speakers. We used three vowels (/i/, /u/, and /a/), and each vowel was articulated in four lexical tones. In sound-shape matching, the tendency to match the rounded shape decreased in the following order: /u/, /i/, and /a/. Tone 2 was more likely to be matched to the rounded pattern, whereas Tone 4 was more likely to be matched to the angular pattern. In sound-size matching, /a/ was matched to the larger object more often than /u/ and /i/, and Tone 2 and Tone 4 corresponded to the large-small contrast. The results demonstrated that both vowels and tones play prominent roles in crossmodal correspondences, and that sound-shape and sound-size mappings are heterogeneous phenomena.
{"title":"The Effects of Mandarin Chinese Lexical Tones in Sound-Shape and Sound-Size Correspondences.","authors":"Yen-Han Chang, Mingxue Zhao, Yi-Chuan Chen, Pi-Chun Huang","doi":"10.1163/22134808-bja10068","DOIUrl":"https://doi.org/10.1163/22134808-bja10068","url":null,"abstract":"<p><p>Crossmodal correspondences refer to when specific domains of features in different sensory modalities are mapped. We investigated how vowels and lexical tones drive sound-shape (rounded or angular) and sound-size (large or small) mappings among native Mandarin Chinese speakers. We used three vowels (/i/, /u/, and /a/), and each vowel was articulated in four lexical tones. In the sound-shape matching, the tendency to match the rounded shape was decreased in the following order: /u/, /i/, and /a/. Tone 2 was more likely to be matched to the rounded pattern, whereas Tone 4 was more likely to be matched to the angular pattern. In the sound-size matching, /a/ was matched to the larger object more than /u/ and /i/, and Tone 2 and Tone 4 correspond to the large-small contrast. The results demonstrated that both vowels and tones play prominent roles in crossmodal correspondences, and sound-shape and sound-size mappings are heterogeneous phenomena.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"35 3","pages":"243-257"},"PeriodicalIF":1.6,"publicationDate":"2021-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39725688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reducing Cybersickness in 360-Degree Virtual Reality
Pub Date: 2021-12-16 | DOI: 10.1163/22134808-bja10066
Iqra Arshad, Paulo De Mello, Martin Ender, Jason D McEwen, Elisa R Ferré
Despite the technological advancements in Virtual Reality (VR), users constantly combat feelings of nausea and disorientation, the so-called cybersickness. Cybersickness symptoms cause severe discomfort and hinder the immersive VR experience. Here we investigated cybersickness in 360-degree head-mounted-display VR. In traditional 360-degree VR experiences, translational movement in the real world is not reflected in the virtual world, so self-motion information is not corroborated by matching visual and vestibular cues, which may trigger symptoms of cybersickness. We evaluated whether new Artificial Intelligence (AI) software designed to supplement the 360-degree VR experience with artificial six-degrees-of-freedom motion may reduce cybersickness. Explicit (simulator sickness questionnaire and Fast Motion Sickness (FMS) rating) and implicit (heart rate) measurements were used to evaluate cybersickness symptoms during and after 360-degree VR exposure. Simulator sickness scores showed a significant reduction in feelings of nausea during the AI-supplemented six-degrees-of-freedom motion VR compared to traditional 360-degree VR. However, six-degrees-of-freedom motion VR did not reduce oculomotor or disorientation measures of sickness. No changes were observed in FMS and heart rate measures. Improving the congruency between visual and vestibular cues in 360-degree VR, as provided by the AI-supplemented six-degrees-of-freedom motion system considered here, is essential for a more engaging, immersive and safe VR experience, which is critical for educational, cultural and entertainment applications.
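The within-subject comparison of sickness severity between the two viewing conditions would typically be run as a paired test on, for example, the simulator sickness questionnaire nausea subscores. A sketch with fabricated scores; the values, and the choice of tests, are mine, not the authors':

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

# Made-up per-participant nausea subscores (higher = worse) under the
# traditional 360-degree condition and the AI-supplemented 6-DoF condition.
nausea_360 = np.array([28.6, 19.1, 38.2, 47.7, 28.6, 57.2, 19.1, 38.2])
nausea_6dof = np.array([19.1, 9.5, 28.6, 38.2, 19.1, 47.7, 9.5, 28.6])

# A within-subject design calls for a paired test; Wilcoxon is the
# nonparametric fallback when normality is doubtful.
t, p_t = ttest_rel(nausea_360, nausea_6dof)
w, p_w = wilcoxon(nausea_360, nausea_6dof)
print(f"paired t-test: t = {t:.2f}, p = {p_t:.4f}")
print(f"Wilcoxon:      W = {w:.1f}, p = {p_w:.4f}")
```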
{"title":"Reducing Cybersickness in 360-Degree Virtual Reality.","authors":"Iqra Arshad, Paulo De Mello, Martin Ender, Jason D McEwen, Elisa R Ferré","doi":"10.1163/22134808-bja10066","DOIUrl":"10.1163/22134808-bja10066","url":null,"abstract":"<p><p>Despite the technological advancements in Virtual Reality (VR), users are constantly combating feelings of nausea and disorientation, the so-called cybersickness. Cybersickness symptoms cause severe discomfort and hinder the immersive VR experience. Here we investigated cybersickness in 360-degree head-mounted display VR. In traditional 360-degree VR experiences, translational movement in the real world is not reflected in the virtual world, and therefore self-motion information is not corroborated by matching visual and vestibular cues, which may trigger symptoms of cybersickness. We evaluated whether a new Artificial Intelligence (AI) software designed to supplement the 360-degree VR experience with artificial six-degrees-of-freedom motion may reduce cybersickness. Explicit (simulator sickness questionnaire and Fast Motion Sickness (FMS) rating) and implicit (heart rate) measurements were used to evaluate cybersickness symptoms during and after 360-degree VR exposure. Simulator sickness scores showed a significant reduction in feelings of nausea during the AI-supplemented six-degrees-of-freedom motion VR compared to traditional 360-degree VR. However, six-degrees-of-freedom motion VR did not reduce oculomotor or disorientation measures of sickness. No changes were observed in FMS and heart rate measures. Improving the congruency between visual and vestibular cues in 360-degree VR, as provided by the AI-supplemented six-degrees-of-freedom motion system considered, is essential for a more engaging, immersive and safe VR experience, which is critical for educational, cultural and entertainment applications.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-17"},"PeriodicalIF":1.6,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39749047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Imagine Your Crossed Hands as Uncrossed: Visual Imagery Impacts the Crossed-Hands Deficit
Pub Date: 2021-10-22 | DOI: 10.1163/22134808-bja10065
Lisa Lorentz, Kaian Unwalla, David I Shore
Successful interaction with our environment requires accurate tactile localization. Although we seem to localize tactile stimuli effortlessly, the processes underlying this ability are complex. This is evidenced by the crossed-hands deficit, in which tactile localization performance suffers when the hands are crossed. The deficit results from the conflict between an internal reference frame, based in somatotopic coordinates, and an external reference frame, based in external spatial coordinates. Previous evidence in favour of the integration model employed manipulations of the external reference frame (e.g., blindfolding participants), which reduced the deficit by reducing conflict between the two reference frames. The present study extends this finding by asking blindfolded participants to visually imagine their crossed arms as uncrossed. This imagery manipulation further decreased the magnitude of the crossed-hands deficit by bringing information in the two reference frames into alignment. The imagery manipulation affected males and females differently, consistent with the previously observed sex difference in this effect: females tend to show a larger crossed-hands deficit than males, and females were more impacted by the imagery manipulation. Results are discussed in terms of the integration model of the crossed-hands deficit.
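The crossed-hands deficit is commonly quantified by fitting a psychometric function to "which hand was touched first" temporal-order responses in each posture and comparing the resulting just-noticeable differences (JNDs). A minimal sketch of that comparison on invented response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# SOAs (ms): negative = left hand touched first.
soas = np.array([-200, -90, -55, -30, 0, 30, 55, 90, 200], dtype=float)

def fit_jnd(p_right_first):
    # Fit a cumulative Gaussian to "right hand first" responses;
    # the JND (the Gaussian's sigma) indexes temporal resolution.
    (pss, jnd), _ = curve_fit(lambda s, m, sd: norm.cdf(s, m, sd),
                              soas, p_right_first, p0=(0.0, 60.0))
    return jnd

# Made-up response proportions for the uncrossed and crossed postures.
uncrossed = np.array([0.02, 0.10, 0.22, 0.38, 0.50, 0.64, 0.78, 0.90, 0.97])
crossed = np.array([0.15, 0.28, 0.36, 0.44, 0.50, 0.57, 0.65, 0.72, 0.86])

# The crossed-hands deficit: a shallower slope, i.e., a larger JND,
# when the hands are crossed.
print(f"JND uncrossed = {fit_jnd(uncrossed):.0f} ms, "
      f"crossed = {fit_jnd(crossed):.0f} ms")
```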
{"title":"Imagine Your Crossed Hands as Uncrossed: Visual Imagery Impacts the Crossed-Hands Deficit.","authors":"Lisa Lorentz, Kaian Unwalla, David I Shore","doi":"10.1163/22134808-bja10065","DOIUrl":"10.1163/22134808-bja10065","url":null,"abstract":"<p><p>Successful interaction with our environment requires accurate tactile localization. Although we seem to localize tactile stimuli effortlessly, the processes underlying this ability are complex. This is evidenced by the crossed-hands deficit, in which tactile localization performance suffers when the hands are crossed. The deficit results from the conflict between an internal reference frame, based in somatotopic coordinates, and an external reference frame, based in external spatial coordinates. Previous evidence in favour of the integration model employed manipulations to the external reference frame (e.g., blindfolding participants), which reduced the deficit by reducing conflict between the two reference frames. The present study extends this finding by asking blindfolded participants to visually imagine their crossed arms as uncrossed. This imagery manipulation further decreased the magnitude of the crossed-hands deficit by bringing information in the two reference frames into alignment. This imagery manipulation differentially affected males and females, which was consistent with the previously observed sex difference in this effect: females tend to show a larger crossed-hands deficit than males and females were more impacted by the imagery manipulation. Results are discussed in terms of the integration model of the crossed-hands deficit.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-29"},"PeriodicalIF":1.6,"publicationDate":"2021-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39554652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal Alignment but not Complexity of Audiovisual Stimuli Influences Crossmodal Duration Percepts
Pub Date: 2021-10-08 | DOI: 10.1163/22134808-bja10062
Alexandra N Scurry, Daniela M Lemus, Fang Jiang
Reliable duration perception is an integral aspect of daily life that impacts everyday perception, motor coordination, and the subjective passage of time. Scalar Expectancy Theory (SET) is a common model that explains how an internal pacemaker, gated by an external stimulus-driven switch, accumulates pulses during sensory events and compares these accumulated pulses to a reference memory duration for subsequent duration estimation. Second-order mechanisms, such as multisensory integration (MSI) and attention, can influence this model and affect duration perception. For instance, diverting attention away from temporal features could delay the switch closure or temporarily open the accumulator, altering pulse accumulation and distorting duration perception. In crossmodal duration perception, auditory signals of unequal duration can induce perceptual compression and expansion of the durations of visual stimuli, presumably via auditory influence on the visual clock. The current project investigated the roles of a temporal feature (stimulus alignment) and a nontemporal feature (stimulus complexity) in crossmodal duration perception, specifically auditory influence over visual. While temporal alignment had a larger impact on the strength of crossmodal duration percepts than stimulus complexity, both features showcased auditory dominance in the processing of visual duration.
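The pacemaker-accumulator mechanism of SET is easy to make concrete: a Poisson pacemaker emits pulses at some rate, a switch gates them into the accumulator, and a delayed switch closure (e.g., from diverted attention) shortens the accumulation window and thus the duration estimate. A toy simulation under those assumptions; all rates and latencies below are invented, not parameters from the literature.

```python
import numpy as np

rng = np.random.default_rng(1)

def perceived_duration(true_ms, rate_hz=50.0, switch_latency_ms=30.0):
    """SET sketch: a Poisson pacemaker emits pulses; the switch closes with
    some latency, so pulses accumulate only for (true_ms - switch_latency_ms)."""
    open_ms = max(true_ms - switch_latency_ms, 0.0)
    pulses = rng.poisson(rate_hz * open_ms / 1000.0)
    return 1000.0 * pulses / rate_hz  # convert the pulse count back to ms

# Diverting attention delays switch closure, so fewer pulses accumulate and
# the same physical duration is judged shorter.
for latency in (30.0, 80.0):
    estimates = [perceived_duration(500.0, switch_latency_ms=latency)
                 for _ in range(1000)]
    print(f"switch latency {latency:3.0f} ms: "
          f"mean estimate = {np.mean(estimates):.0f} ms")
```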
{"title":"Temporal Alignment but not Complexity of Audiovisual Stimuli Influences Crossmodal Duration Percepts.","authors":"Alexandra N Scurry, Daniela M Lemus, Fang Jiang","doi":"10.1163/22134808-bja10062","DOIUrl":"10.1163/22134808-bja10062","url":null,"abstract":"<p><p>Reliable duration perception is an integral aspect of daily life that impacts everyday perception, motor coordination, and subjective passage of time. The Scalar Expectancy Theory (SET) is a common model that explains how an internal pacemaker, gated by an external stimulus-driven switch, accumulates pulses during sensory events and compares these accumulated pulses to a reference memory duration for subsequent duration estimation. Second-order mechanisms, such as multisensory integration (MSI) and attention, can influence this model and affect duration perception. For instance, diverting attention away from temporal features could delay the switch closure or temporarily open the accumulator, altering pulse accumulation and distorting duration perception. In crossmodal duration perception, auditory signals of unequal duration can induce perceptual compression and expansion of durations of visual stimuli, presumably via auditory influence on the visual clock. The current project aimed to investigate the role of temporal (stimulus alignment) and nontemporal (stimulus complexity) features on crossmodal, specifically auditory over visual, duration perception. While temporal alignment revealed a larger impact on the strength of crossmodal duration percepts compared to stimulus complexity, both features showcase auditory dominance in processing visual duration.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-19"},"PeriodicalIF":1.6,"publicationDate":"2021-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39510866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Serial Dependence of Emotion Within and Between Stimulus Sensory Modalities
Pub Date: 2021-09-29 | DOI: 10.1163/22134808-bja10064
Erik Van der Burg, Alexander Toet, Anne-Marie Brouwer, Jan B F Van Erp
How we perceive the world is not solely determined by what we sense at a given moment in time, but also by what we processed recently. Here we investigated whether such serial dependencies for emotional stimuli transfer from one modality to another. Participants were presented with a random sequence of emotional sounds and images and instructed to rate the valence and arousal of each stimulus (Experiment 1). For both ratings, we conducted an intertrial analysis based on whether the rating on the previous trial was low or high. We found a positive serial dependence for valence and arousal on two consecutive trials, regardless of the stimulus modality. In Experiment 2, we examined whether passively perceiving a stimulus is sufficient to induce a serial dependence. There, participants were instructed to rate the stimuli only on active trials and not on passive trials, and were informed that active and passive trials alternated so that they could prepare for the task. We conducted an intertrial analysis on active trials, based on whether the rating of the previous passive trial (determined in Experiment 1) was low or high. For both ratings, we again observed positive serial dependencies regardless of the stimulus modality. We conclude that the emotional experience triggered by one stimulus affects the emotional experience of a subsequent stimulus regardless of their sensory modalities, that this occurs in a bottom-up fashion, and that it can be explained by residual activation in the brain's emotional network.
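The intertrial analysis described here amounts to splitting current-trial ratings by whether the previous trial's rating fell below or above some criterion (e.g., the median) and comparing the resulting means. A minimal sketch on simulated ratings with a built-in attractive bias; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Made-up trialwise valence ratings (1-9) with a built-in pull toward
# the previous trial's rating (the 0.3 term).
ratings = np.empty(n)
ratings[0] = 5.0
for t in range(1, n):
    ratings[t] = np.clip(rng.normal(5, 2) + 0.3 * (ratings[t - 1] - 5), 1, 9)

# Intertrial analysis: split current ratings by whether the previous
# trial's rating was below or above the median.
prev, curr = ratings[:-1], ratings[1:]
low_prev = prev < np.median(prev)
effect = curr[~low_prev].mean() - curr[low_prev].mean()
print(f"serial-dependence effect = {effect:+.2f} rating points")
# A positive difference indicates a positive serial dependence: current
# ratings are attracted toward the preceding trial's rating.
```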
{"title":"Serial Dependence of Emotion Within and Between Stimulus Sensory Modalities.","authors":"Erik Van der Burg, Alexander Toet, Anne-Marie Brouwer, Jan B F Van Erp","doi":"10.1163/22134808-bja10064","DOIUrl":"10.1163/22134808-bja10064","url":null,"abstract":"<p><p>How we perceive the world is not solely determined by what we sense at a given moment in time, but also by what we processed recently. Here we investigated whether such serial dependencies for emotional stimuli transfer from one modality to another. Participants were presented a random sequence of emotional sounds and images and instructed to rate the valence and arousal of each stimulus (Experiment 1). For both ratings, we conducted an intertrial analysis, based on whether the rating on the previous trial was low or high. We found a positive serial dependence for valence and arousal regardless of the stimulus modality on two consecutive trials. In Experiment 2, we examined whether passively perceiving a stimulus is sufficient to induce a serial dependence. In Experiment 2, participants were instructed to rate the stimuli only on active trials and not on passive trials. The participants were informed that the active and passive trials were presented in alternating order, so that they were able to prepare for the task. We conducted an intertrial analysis on active trials, based on whether the rating on the previous passive trial (determined in Experiment 1) was low or high. For both ratings, we again observed positive serial dependencies regardless of the stimulus modality. We conclude that the emotional experience triggered by one stimulus affects the emotional experience for a subsequent stimulus regardless of their sensory modalities, that this occurs in a bottom-up fashion, and that this can be explained by residual activation in the emotional network in the brain.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-22"},"PeriodicalIF":1.6,"publicationDate":"2021-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39473356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}