Pub Date: 2022-07-01 | DOI: 10.1163/22134808-bja10078
Magdalena Szubielska, Paweł Augustynowicz, Delphine Picard
The aim of this study was twofold. First, we tested the influence of an object's actual size (size rank) on the drawn size of the depicted object, i.e., the canonical size effect (drawing objects that are larger in the physical world as larger), in four drawing conditions: two perceptual conditions (blindfolded or sighted) crossed with two materials (paper or special foil for producing embossed drawings). Second, we investigated whether drawing quality (analysed on both local and global criteria) depends on drawing condition. We predicted that drawing quality, unlike drawn size, would vary with drawing condition, being higher when foil rather than paper was used in the blindfolded condition. We tested these hypotheses with young adults who repeatedly drew eight familiar objects (differing in real-world size) in the four drawing conditions. As expected, drawn size increased linearly with size rank regardless of drawing condition, replicating the canonical size effect and showing that it does not depend on drawing conditions. In line with our hypothesis, in the blindfolded condition drawing quality was better with foil than with paper, suggesting a benefit from haptic feedback on the trace produced. Nevertheless, drawing quality was higher in the sighted than in the blindfolded condition. In conclusion, canonical size is present under different drawing conditions, whether or not sight is involved, while perceptual control increases drawing quality in adults.
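The linear relation between size rank and drawn size reported above can be illustrated with a minimal least-squares sketch; all numbers below are hypothetical, not the study's measurements:

```python
# Minimal least-squares check of a canonical size effect: does drawn size
# increase linearly with real-world size rank? All numbers are hypothetical.
size_rank = [1, 2, 3, 4, 5, 6, 7, 8]                  # smallest ... largest object
drawn_cm = [2.1, 2.8, 3.5, 4.0, 4.9, 5.3, 6.2, 6.8]   # mean drawn size (cm)

n = len(size_rank)
mean_x = sum(size_rank) / n
mean_y = sum(drawn_cm) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(size_rank, drawn_cm)) \
        / sum((x - mean_x) ** 2 for x in size_rank)
intercept = mean_y - slope * mean_x
print(f"drawn size = {intercept:.2f} + {slope:.2f} * rank")
```

A clearly positive slope across the eight size ranks is what the canonical size effect predicts; the study's analysis additionally tested whether this slope differed between drawing conditions.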
Size and Quality of Drawings Made by Adults Under Visual and Haptic Control. Multisensory Research, 35(6), 471-493.
Pub Date: 2022-06-08 | DOI: 10.1163/22134808-bja10077
Pia Hauck, Christoph von Castell, Heiko Hecht
The quality of a concert hall primarily depends on its acoustics. But does visual input also have an impact on musical enjoyment? Does the color of ambient lighting modulate the perceived music quality? And are certain colors perceived to fit better than others with a given piece of music? To address these questions, we performed three within-subjects experiments. We carried out two pretests to select four music pieces differing in tonality and genre, and 14 lighting conditions of varying hue, brightness, and saturation. In the main experiment, we applied a fully crossed repeated-measures design. Under each of the four lighting conditions, participants rated the musical variables 'Harmonic', 'Powerful', 'Gloomy', 'Lively' and their overall liking of the music pieces, as well as the perceived fit of music and lighting. Subsequently, participants evaluated music and lighting separately, rating the same variables as before as well as their emotional impact (valence, arousal, dominance). We found that music and lighting that had been rated similarly in terms of valence and arousal in the unimodal conditions were judged to match better when presented together. Accordingly, tonal (atonal) music was rated as fitting better with weakly (highly) saturated colors. Moreover, some characteristics of the lighting carried over to the music: just as red lighting was rated as more powerful than green and blue lighting, music was evaluated as more powerful under red than under green and blue lighting. We conclude that listening to music is a multisensory process enriched by impressions from the visual domain.
Crossmodal Correspondence between Music and Ambient Color Is Mediated by Emotion. Multisensory Research, 35(5), 407-446.
Pub Date: 2022-05-31 | DOI: 10.1163/22134808-bja10076
Yi-Chuan Chen, Charles Spence
We report two experiments designed to investigate whether the presentation of a range of pleasant fragrances, containing both floral and fruity notes, would modulate people's judgements of the facial attractiveness (Experiment 1) and age (Experiment 2) of a selection of typical female faces varying in age across the range of 20-69 years. In Experiment 1, male participants rated the female faces as less attractive when presented with an unpleasant fragrance than with clean air. The lower the rated attractiveness and pleasantness of the unpleasant odour, and the higher its rated intensity, the lower the rated attractiveness of the faces. In Experiment 2, both male and female participants rated the age of the female faces while presented with one of four pleasant fragrances or with clean air as a control. Only the female participants demonstrated a crossmodal effect: the pleasant fragrances induced older ratings for female faces in the 40-49-years age range, whereas younger ratings were documented for female faces in the 60-69-years age range. Taken together, these results are consistent with the view that while the valence of a fragrance (pleasant versus unpleasant) exerts a robust crossmodal influence over judgements of facial attractiveness, the effects of pleasant fragrance on judgements of a person's age appear less reliable. One possible explanation for the differing effect of scent in the two cases is that attractiveness judgements are more subjective, hedonic, and/or intuitive than age ratings, which are more objective, cognitively mediated, and/or analytic in nature.
Investigating the Crossmodal Influence of Odour on the Visual Perception of Facial Attractiveness and Age. Multisensory Research, 35(6), 447-469.
Pub Date: 2022-05-06 | DOI: 10.1163/22134808-bja10075
Xiaofang Sun, Pin-Hsuan Chen, Pei-Luen Patrick Rau
The purpose of this study was to investigate the cue congruency effect of auditory stimuli during visual search in dynamic environments. Twenty-eight participants were recruited for a visual search experiment. The experiment applied auditory stimuli to test whether they could facilitate visual search against different types of background. Additionally, target location and target orientation were manipulated to clarify their influence on visual search: target location probed horizontal visual search, and target orientation probed search for an inverted target. The results showed that, with dynamic backgrounds, target-congruent auditory stimuli speeded visual search. In addition, the cue congruency effect of auditory stimuli was pronounced at the center of the visual display but declined toward the edge, indicating inhibition of horizontal visual search behavior. Moreover, auditory stimuli provided little improvement in the visual detection of non-inverted and inverted targets. The findings suggest directions for developing multisensory interaction with head-mounted displays, such as augmented-reality glasses, in real-life settings.
Do Congruent Auditory Stimuli Facilitate Visual Search in Dynamic Environments? An Experimental Study Based on Multisensory Interaction. Multisensory Research, pp. 1-15.
Pub Date: 2022-04-19 | DOI: 10.1163/22134808-bja10072
Monica Gori, Sara Price, Fiona N Newell, Nadia Berthouze, Gualtiero Volpe
In this review, we discuss how specific sensory channels can mediate the learning of properties of the environment. In recent years, schools have increasingly been using multisensory technology for teaching. However, this practice is not yet sufficiently grounded in neuroscientific and pedagogical evidence. Recent research has renewed our understanding of the role of communication between sensory modalities during development. In the current review, we outline four principles, based on theoretical models of multisensory development and embodiment, that will aid technological development to foster in-depth, perceptual, and conceptual learning of mathematics. We also discuss how a multidisciplinary approach offers a unique contribution to the development of new practical solutions for learning in school. Scientists, engineers, and pedagogical experts offer their interdisciplinary points of view on this topic. At the end of the review, we present results showing that multiple sensory inputs and sensorimotor associations in multisensory technology can improve the discrimination of angles, and can potentially serve educational purposes. Finally, we present an application, the 'RobotAngle', developed for primary (i.e., elementary) school children, which uses sounds and body movements to teach about angles.
Multisensory Perception and Learning: Linking Pedagogy, Psychophysics, and Human-Computer Interaction. Multisensory Research, 35(4), 335-366.
Pub Date: 2022-04-07 | DOI: 10.1163/22134808-bja10073
Elyse Letts, Aysha Basharat, Michael Barnett-Cowan
Previous studies have found that semantics, the higher-level meaning of stimuli, can impact multisensory integration; however, less is known about the effect of valence, an affective response to stimuli. This study investigated the effects of both semantic congruency and valence of non-speech audiovisual stimuli on multisensory integration via response time (RT) and temporal-order judgement (TOJ) tasks, assessing processing speed (RT), the point of subjective simultaneity (PSS), and the time window within which multisensory stimuli are likely to be perceived as simultaneous (the temporal binding window; TBW). In an online study with 40 participants (mean age: 26.25 years; 17 female), we found that both congruence and valence had significant main effects on RT (congruency and positive valence decreased RT), along with an interaction effect (the congruent/positive condition was significantly faster than all others). For TOJ, there was a significant main effect of valence and a significant interaction effect: positive valence (compared to negative valence) and the congruent/positive condition (compared to all other conditions) required visual stimuli to be presented significantly earlier than auditory stimuli in order to be perceived as simultaneous. A subsequent analysis showed a positive correlation between TBW width and RT (as the TBW widens, RT increases) for the categories whose PSS was furthest from true simultaneity (congruent/positive and incongruent/negative). This study provides new evidence supporting previous research on semantic congruency and presents a novel incorporation of valence into behavioural responses.
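How PSS and TBW estimates of this kind are typically obtained can be sketched by fitting a cumulative Gaussian to TOJ data; the SOAs, response proportions, and the particular TBW operationalization below are illustrative assumptions, not the study's values or method:

```python
import math

# Sketch of estimating the point of subjective simultaneity (PSS) and the
# temporal binding window (TBW) from TOJ data: fit a cumulative Gaussian to
# the proportion of "visual first" responses across stimulus-onset
# asynchronies (SOAs). All data here are hypothetical.
soa_ms = [-200, -100, -50, 0, 50, 100, 200]      # negative: audio led
p_visual_first = [0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.97]

def cum_gauss(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def sse(params):
    mu, sigma = params
    return sum((cum_gauss(x, mu, sigma) - p) ** 2
               for x, p in zip(soa_ms, p_visual_first))

# Coarse grid search (a real analysis would use maximum-likelihood fitting)
pss, sigma = min(((mu, s) for mu in range(-100, 101, 2)
                  for s in range(20, 201, 2)), key=sse)
tbw = 1.349 * sigma  # one convention: distance between the 25% and 75% points
print(f"PSS = {pss} ms, TBW = {tbw:.0f} ms")
```

The fitted mean is the PSS; the fitted spread determines the TBW under whichever width convention a study adopts.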
Evaluating the Effect of Semantic Congruency and Valence on Multisensory Integration. Multisensory Research, 35(4), 309-334.
Pub Date: 2022-04-05 | DOI: 10.1163/22134808-bja10074
William Chung, Michael Barnett-Cowan
Integration of incoming sensory signals from multiple modalities is central to the determination of self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing various discrepancies between vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform, accompanied by visual feedback from a virtual environment presented through a head-mounted display. Sensory conflict was induced by altering the speed and direction with which the visual scene updated relative to the observer's physical rotation. Perceived timing of the rotation did not differ between the no-vision condition, the congruent-visual-feedback condition, and the condition in which the visual motion updated more slowly. However, perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate a potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research on cybersickness and sensory conflict theory, whereby a mismatch between visual and vestibular signals may increase the likelihood of sickness symptoms.
Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality. Multisensory Research, pp. 1-23.
Pub Date: 2022-03-09 | DOI: 10.1163/22134808-bja10071
Lisa Rosenblum, Elisa Grewe, Jan Churan, Frank Bremmer
The integration of information from different sensory modalities is crucial for successful navigation through an environment. Among other cues, self-motion induces distinct optic flow patterns on the retina, vestibular signals, and tactile flow, which contribute to determining traveled distance (path integration) or movement direction (heading). While the processing of combined visual-vestibular information is the subject of a growing body of literature, the processing of visuo-tactile signals in the context of self-motion has received comparatively little attention. Here, we investigated whether visual heading perception is influenced by behaviorally irrelevant tactile flow. In the visual modality, we simulated an observer's self-motion across a horizontal ground plane (optic flow). Tactile self-motion stimuli were delivered by air flow from head-mounted nozzles (tactile flow). In blocks of trials, we presented only visual or only tactile stimuli and subjects reported their perceived heading. In another block of trials, tactile and visual stimuli were presented simultaneously, with the tactile flow within ±40° of the visual heading (bimodal condition); here, importantly, participants had to report their perceived visual heading. Perceived self-motion direction in all conditions revealed a centripetal bias, i.e., heading directions were perceived as compressed toward straight ahead. In the bimodal condition, we found a small but systematic influence of task-irrelevant tactile flow on visually perceived headings as a function of their directional offset. We conclude that tactile flow is more tightly linked to self-motion perception than previously thought.
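A centripetal (compression) bias of the kind described above is commonly quantified as a regression slope of perceived on actual heading that falls below 1; the data in this sketch are hypothetical, not the study's:

```python
# Sketch of quantifying a centripetal heading bias: regress perceived heading
# on actual heading; a slope below 1 means headings are compressed toward
# straight ahead (0 degrees). All numbers are hypothetical.
actual = [-40, -30, -20, -10, 0, 10, 20, 30, 40]   # degrees
perceived = [-26, -21, -14, -7, 0, 6, 13, 22, 27]  # degrees

# Regression through the origin (both series are symmetric around 0 here)
slope = sum(a * p for a, p in zip(actual, perceived)) / sum(a * a for a in actual)
print(f"slope = {slope:.2f}")  # < 1 indicates centripetal compression
```

A slope of 1 would mean veridical heading reports; the further the slope falls below 1, the stronger the compression toward straight ahead.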
Influence of Tactile Flow on Visual Heading Perception. Multisensory Research, 35(4), 291-308.
Pub Date : 2022-03-09DOI: 10.1163/22134808-bja10070
Adam J Reeves
The institution of citizenship has undergone far-reaching factual and normative changes. In two recent studies, Christian Joppke and Ayelet Shachar address complex and pressing problems underlying modern citizenship theory. Joppke and Shachar begin from different premises regarding immigration and citizenship. Joppke takes for granted the existing regime of birthright citizenship; his main focus is the relationship between immigration and citizenship, and the interrelation between the dimensions of citizenship. Shachar finds the option of becoming a citizen deficient, and underscores the need to rethink the whole concept of birthright citizenship and the role it plays in perpetuating global injustice. Joppke is more optimistic: he celebrates the triumph of liberalism. Shachar is pessimistic about the citizenship discourse—which, even if more liberal than in the past, is still flawed—yet optimistic about the potential of her ideas to bring about a better future. This review briefly examines each book and discusses the contribution of each to the contemporary, evolving debates on citizenship.
{"title":"Book Review.","authors":"Adam J Reeves","doi":"10.1163/22134808-bja10070","DOIUrl":"https://doi.org/10.1163/22134808-bja10070","url":null,"abstract":"The institution of citizenship has undergone far-reaching factual and normative changes. In two recent studies, Christian Joppke and Ayelet Shachar address complex and pressing problems underlying modern citizenship theory. Joppke and Shachar begin from different premises regarding immigration and citizenship. Joppke takes for granted the existing regime of birthright citizenship; his main focus is the relationship between immigration and citizenship, and the interrelation between the dimensions of citizenship. Shachar finds the option of becoming a citizen deficient, and underscores the need to rethink the whole concept of birthright citizenship and the role it plays in perpetuating global injustice. Joppke is more optimistic: he celebrates the triumph of liberalism. Shachar is pessimistic about the citizenship discourse—which, even if more liberal than in the past, is still flawed—yet optimistic about the potential of her ideas to bring about a better future. This review briefly examines each book and discusses the contribution of each to the contemporary, evolving debates on citizenship.","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"35 3","pages":"289-290"},"PeriodicalIF":1.6,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9209582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rosanne Roosmarijn Maria Tuip, W.M. van der Ham, Jeannette A. M. Lorteije, Filip Van Opstal
Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to accurately make a decision. Only recently have studies started to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from the individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings, and/or two sounds presented to the right and left ears, based on contrast and loudness, respectively. We varied the evidence, i.e., the contrast of the gratings and the amplitude of the sounds, over time. Results showed a significant increase in accuracy on multisensory trials compared to unisensory trials, indicating that discriminating between two sources improves when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. The weighting of unisensory information during audiovisual decision-making changed dynamically over time: a first epoch was characterized by both visual and auditory weighting, vision dominated during the second epoch, and a third epoch finalized the weighting profile with auditory dominance. Our results suggest that, in our task, multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.
{"title":"Dynamic Weighting of Time-Varying Visual and Auditory Evidence During Multisensory Decision Making.","authors":"Rosanne Roosmarijn Maria Tuip, W.M. van der Ham, Jeannette A. M. Lorteije, Filip Van Opstal","doi":"10.31234/osf.io/knzbv","DOIUrl":"https://doi.org/10.31234/osf.io/knzbv","url":null,"abstract":"Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to accurately make a decision. Only recently have studies started to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from the individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings, and/or two sounds presented to the right and left ears, based on contrast and loudness, respectively. We varied the evidence, i.e., the contrast of the gratings and the amplitude of the sounds, over time. Results showed a significant increase in accuracy on multisensory trials compared to unisensory trials, indicating that discriminating between two sources improves when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. The weighting of unisensory information during audiovisual decision-making changed dynamically over time: a first epoch was characterized by both visual and auditory weighting, vision dominated during the second epoch, and a third epoch finalized the weighting profile with auditory dominance. 
Our results suggest that, in our task, multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 1 1","pages":"31-56"},"PeriodicalIF":1.6,"publicationDate":"2022-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46738287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}