Crossmodal Texture Perception Is Illumination-Dependent.
Pub Date : 2022-12-28 DOI: 10.1163/22134808-bja10089
Karina Kangur, Martin Giesel, Julie M Harris, Constanze Hesse
Visually perceived roughness of 3D textures varies with illumination direction. Surfaces appear rougher when the illumination angle is lowered, resulting in a lack of roughness constancy. Here we aimed to investigate whether the visual system also relies on illumination-dependent features when judging roughness in a crossmodal matching task, or whether it can access illumination-invariant surface features that can also be evaluated by the tactile system. Participants (N = 32) explored an abrasive paper of medium physical roughness either tactually, or visually under two different illumination conditions (top vs oblique angle). Subsequently, they had to judge whether a comparison stimulus (varying in physical roughness) matched the previously explored standard. Matching was performed either using the same modality as during exploration (intramodal) or using a different modality (crossmodal). In the intramodal conditions, participants performed equally well regardless of the modality or illumination employed. In the crossmodal conditions, participants selected rougher tactile matches after exploring the standard visually under oblique illumination than under top illumination. Conversely, after tactile exploration, they selected smoother visual matches under oblique than under top illumination. These findings confirm that visual roughness perception depends on illumination direction and show, for the first time, that this failure of roughness constancy also transfers to judgements made crossmodally.
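To make the crossmodal matching analysis concrete, here is a minimal sketch, assuming a hypothetical trial-level file and column names (these are not the authors' materials), of comparing the tactile matches chosen after visual exploration under the two illuminations:

```python
# Sketch only: compare mean tactile roughness matches after visual exploration
# under oblique vs top illumination. File and column names are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("matching_trials.csv")  # hypothetical: one row per matching trial

# Crossmodal trials: standard explored visually, match made tactually
crossmodal = df[(df["explore_modality"] == "visual") & (df["match_modality"] == "tactile")]
means = (crossmodal
         .groupby(["participant", "illumination"])["matched_roughness"]
         .mean()
         .unstack("illumination"))

# Paired test: rougher tactile matches after oblique than after top illumination?
t, p = stats.ttest_rel(means["oblique"], means["top"])
print(f"oblique vs top: t = {t:.2f}, p = {p:.3f}")
```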
{"title":"Crossmodal Texture Perception Is Illumination-Dependent.","authors":"Karina Kangur, Martin Giesel, Julie M Harris, Constanze Hesse","doi":"10.1163/22134808-bja10089","DOIUrl":"https://doi.org/10.1163/22134808-bja10089","url":null,"abstract":"<p><p>Visually perceived roughness of 3D textures varies with illumination direction. Surfaces appear rougher when the illumination angle is lowered resulting in a lack of roughness constancy. Here we aimed to investigate whether the visual system also relies on illumination-dependent features when judging roughness in a crossmodal matching task or whether it can access illumination-invariant surface features that can also be evaluated by the tactile system. Participants ( N = 32) explored an abrasive paper of medium physical roughness either tactually, or visually under two different illumination conditions (top vs oblique angle). Subsequently, they had to judge if a comparison stimulus (varying in physical roughness) matched the previously explored standard. Matching was either performed using the same modality as during exploration (intramodal) or using a different modality (crossmodal). In the intramodal conditions, participants performed equally well independent of the modality or illumination employed. In the crossmodal conditions, participants selected rougher tactile matches after exploring the standard visually under oblique illumination than under top illumination. Conversely, after tactile exploration, they selected smoother visual matches under oblique than under top illumination. These findings confirm that visual roughness perception depends on illumination direction and show, for the first time, that this failure of roughness constancy also transfers to judgements made crossmodally.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10707537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Body Pitch Together With Translational Body Motion Biases the Subjective Haptic Vertical.
Pub Date : 2022-12-20 DOI: 10.1163/22134808-bja10086
Chia-Huei Tseng, Hiu Mei Chow, Lothar Spillmann, Matt Oxner, Kenzo Sakurai
Accurate perception of verticality is critical for postural maintenance and successful physical interaction with the world. Although previous research has examined the independent influences of body orientation and self-motion under well-controlled laboratory conditions, these factors are constantly changing and interacting in the real world. In this study, we examined the subjective haptic vertical in a real-world scenario. Here, we report a bias of verticality perception in a field experiment on the Hong Kong Peak Tram as participants traveled on a slope ranging from 6° to 26°. The mean subjective haptic vertical (SHV) increased with slope by as much as 15°, regardless of whether the eyes were open (Experiment 1) or closed (Experiment 2). Shifting the body pitch by a fixed degree in an effort to compensate for the mountain slope failed to reduce the verticality bias (Experiment 3). These manipulations separately rule out visual and vestibular inputs about absolute body pitch as contributors to the observed bias. Observations collected on a tram traveling on level ground (Experiment 4A) or in a static dental chair with a range of inclinations similar to those encountered on the mountain tram (Experiment 4B) showed no significant deviation of the subjective vertical from gravity. We conclude that the SHV error is due to a combination of large, dynamic body pitch and translational motion. These real-world observations offer an incentive to neuroscientists and aviation experts alike to study perceived verticality under field conditions, and they raise awareness of dangerous misperceptions of verticality that can arise when body pitch and translational self-motion come together.
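As a worked illustration of the reported slope dependence (the numbers below are invented for the sketch, not the study's data), the bias can be summarized by a least-squares line relating SHV error to slope:

```python
# Sketch with illustrative numbers: how steeply does SHV bias grow with slope?
import numpy as np

slope = np.array([6, 10, 14, 18, 22, 26])               # tram slope (deg), assumed bins
shv_bias = np.array([3.0, 5.5, 7.8, 10.1, 12.8, 15.0])  # invented mean SHV errors (deg)

gain, offset = np.polyfit(slope, shv_bias, deg=1)       # bias ~ gain * slope + offset
print(f"SHV bias grows by ~{gain:.2f} deg per deg of slope (offset {offset:.1f} deg)")
```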
{"title":"Body Pitch Together With Translational Body Motion Biases the Subjective Haptic Vertical.","authors":"Chia-Huei Tseng, Hiu Mei Chow, Lothar Spillmann, Matt Oxner, Kenzo Sakurai","doi":"10.1163/22134808-bja10086","DOIUrl":"https://doi.org/10.1163/22134808-bja10086","url":null,"abstract":"<p><p>Accurate perception of verticality is critical for postural maintenance and successful physical interaction with the world. Although previous research has examined the independent influences of body orientation and self-motion under well-controlled laboratory conditions, these factors are constantly changing and interacting in the real world. In this study, we examine the subjective haptic vertical in a real-world scenario. Here, we report a bias of verticality perception in a field experiment on the Hong Kong Peak Tram as participants traveled on a slope ranging from 6° to 26°. Mean subjective haptic vertical (SHV) increased with slope by as much as 15°, regardless of whether the eyes were open (Experiment 1) or closed (Experiment 2). Shifting the body pitch by a fixed degree in an effort to compensate for the mountain slope failed to reduce the verticality bias (Experiment 3). These manipulations separately rule out visual and vestibular inputs about absolute body pitch as contributors to our observed bias. Observations collected on a tram traveling on level ground (Experiment 4A) or in a static dental chair with a range of inclinations similar to those encountered on the mountain tram (Experiment 4B) showed no significant deviation of the subjective vertical from gravity. We conclude that the SHV error is due to a combination of large, dynamic body pitch and translational motion. These observations made in a real-world scenario represent an incentive to neuroscientists and aviation experts alike for studying perceived verticality under field conditions and raising awareness of dangerous misperceptions of verticality when body pitch and translational self-motion come together.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10707538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Weighting of Time-Varying Visual and Auditory Evidence During Multisensory Decision Making.
Pub Date : 2022-12-01 DOI: 10.1163/22134808-bja10088
Rosanne R M Tuip, Wessel van der Ham, Jeannette A M Lorteije, Filip Van Opstal
Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to accurately make a decision. Only recently have studies begun to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from the individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings and/or two sounds presented to the right and left ears, based on contrast and loudness, respectively. We varied the evidence, i.e., the contrast of the gratings and the amplitude of the sounds, over time. Results showed a significant increase in accuracy on multisensory trials compared to unisensory trials, indicating that discriminating between two sources is improved when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. The weighting of unisensory information during audiovisual decision-making changed dynamically over time: a first epoch was characterized by both visual and auditory weighting, vision dominated during the second epoch, and the third epoch finalized the weighting profile with auditory dominance. Our results suggest that, in our task, multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.
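A standard way to estimate such time-resolved weights is a psychophysical-kernel analysis: regress each trial's choice on the time-binned evidence from each modality. The sketch below illustrates the idea on simulated data; the observer model, bin count, and weights are assumptions, not the authors' parameters.

```python
# Sketch: recover per-time-bin visual and auditory weights via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_bins = 2000, 6

vis = rng.normal(0, 1, (n_trials, n_bins))  # per-bin visual evidence (right - left)
aud = rng.normal(0, 1, (n_trials, n_bins))  # per-bin auditory evidence (right - left)

w = np.linspace(1.0, 0.3, n_bins)           # simulated observer: early bins weighted more
choice = ((vis @ w + aud @ w + rng.normal(0, 2, n_trials)) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(np.hstack([vis, aud]), choice)
vis_kernel = model.coef_[0, :n_bins]        # estimated visual weight per time bin
aud_kernel = model.coef_[0, n_bins:]        # estimated auditory weight per time bin
print(np.round(vis_kernel, 2), np.round(aud_kernel, 2))
```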
{"title":"Dynamic Weighting of Time-Varying Visual and Auditory Evidence During Multisensory Decision Making.","authors":"Rosanne R M Tuip, Wessel van der Ham, Jeannette A M Lorteije, Filip Van Opstal","doi":"10.1163/22134808-bja10088","DOIUrl":"https://doi.org/10.1163/22134808-bja10088","url":null,"abstract":"<p><p>Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to accurately make a decision. Only recently studies started to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings, and/or two sounds presented to the right and left ear based on respectively contrast and loudness. We varied the evidence, i.e., the contrast of the gratings and amplitude of the sound, over time. Results showed a significant increase in performance accuracy on multisensory trials compared to unisensory trials, indicating that discriminating between two sources is improved when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. Weighting of unisensory information during audiovisual decision-making dynamically changed over time. A first epoch was characterized by both visual and auditory weighting, during the second epoch vision dominated and the third epoch finalized the weighting profile with auditory dominance. Our results suggest that during our task multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10708021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prior Exposure to Dynamic Visual Displays Reduces Vection Onset Latency.
Pub Date : 2022-11-16 DOI: 10.1163/22134808-bja10084
Jing Ni, Hiroyuki Ito, Masaki Ogawa, Shoji Sunaga, Stephen Palmisano
While compelling illusions of self-motion (vection) can be induced purely by visual motion, they are rarely experienced immediately. This vection onset latency is thought to represent the time required to resolve sensory conflicts between the stationary observer's visual and nonvisual information about self-motion. In this study, we investigated whether manipulations designed to increase the weightings assigned to vision (compared to the nonvisual senses) might reduce vection onset latency. We presented two different types of visual priming displays directly before our main vection-inducing displays: (1) 'random motion' priming displays - designed to pre-activate general, as opposed to self-motion-specific, visual motion processing systems; and (2) 'dynamic no-motion' priming displays - designed to stimulate vision, but not generate conscious motion perceptions. Prior exposure to both types of priming displays was found to significantly shorten vection onset latencies for the main self-motion display. These experiments show that vection onset latencies can be reduced by pre-activating the visual system with both types of priming display. Importantly, these visual priming displays did not need to be capable of inducing vection or conscious motion perception in order to produce such benefits.
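A minimal sketch of the latency comparison, assuming a hypothetical per-trial data file and condition labels (not the authors' code), might look like this:

```python
# Sketch: do primed trials show shorter vection onset latencies than unprimed ones?
import pandas as pd
from scipy.stats import wilcoxon

df = pd.read_csv("vection_trials.csv")  # assumed columns: participant, condition, latency_s
means = df.groupby(["participant", "condition"])["latency_s"].mean().unstack("condition")

for prime in ["random_motion", "dynamic_no_motion"]:     # assumed condition labels
    stat, p = wilcoxon(means[prime], means["no_prime"])  # paired, per participant
    print(f"{prime} vs no_prime: W = {stat:.1f}, p = {p:.4f}")
```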
{"title":"Prior Exposure to Dynamic Visual Displays Reduces Vection Onset Latency.","authors":"Jing Ni, Hiroyuki Ito, Masaki Ogawa, Shoji Sunaga, Stephen Palmisano","doi":"10.1163/22134808-bja10084","DOIUrl":"https://doi.org/10.1163/22134808-bja10084","url":null,"abstract":"<p><p>While compelling illusions of self-motion (vection) can be induced purely by visual motion, they are rarely experienced immediately. This vection onset latency is thought to represent the time required to resolve sensory conflicts between the stationary observer's visual and nonvisual information about self-motion. In this study, we investigated whether manipulations designed to increase the weightings assigned to vision (compared to the nonvisual senses) might reduce vection onset latency. We presented two different types of visual priming displays directly before our main vection-inducing displays: (1) 'random motion' priming displays - designed to pre-activate general, as opposed to self-motion-specific, visual motion processing systems; and (2) 'dynamic no-motion' priming displays - designed to stimulate vision, but not generate conscious motion perceptions. Prior exposure to both types of priming displays was found to significantly shorten vection onset latencies for the main self-motion display. These experiments show that vection onset latencies can be reduced by pre-activating the visual system with both types of priming display. Importantly, these visual priming displays did not need to be capable of inducing vection or conscious motion perception in order to produce such benefits.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10708022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can the Perceived Timing of Multisensory Events Predict Cybersickness?
Pub Date : 2022-10-24 DOI: 10.1163/22134808-bja10083
Ogai Sadiq, Michael Barnett-Cowan
Humans are constantly presented with rich sensory information that the central nervous system (CNS) must process to form a coherent perception of the self and its relation to its surroundings. While the CNS is efficient in processing multisensory information in natural environments, virtual reality (VR) poses challenges of temporal discrepancies that the CNS must solve. These temporal discrepancies between information from different sensory modalities lead to inconsistencies in perception of the virtual environment, which often cause cybersickness. Here, we investigate whether individual differences in the perceived relative timing of sensory events, specifically parameters of temporal-order judgement (TOJ), can predict cybersickness. Study 1 examined audiovisual (AV) TOJs while Study 2 examined audio-active head movement (AAHM) TOJs. We deduced metrics of the temporal binding window (TBW) and point of subjective simultaneity (PSS) for a total of 50 participants. Cybersickness was quantified using the Simulator Sickness Questionnaire (SSQ). Study 1 results (correlations and multiple regression) show that the oculomotor SSQ shares a significant positive correlation with the AV PSS and TBW. While the total SSQ scores also correlated positively with the TBW and PSS, these correlations were not significant. Moreover, although these results are promising, we did not find the same effect for the AAHM TBW and PSS. We conclude that AV TOJ may serve as a potential tool to predict cybersickness in VR. Such findings will generate a better understanding of cybersickness, which can be used in the development of VR to help mitigate discomfort and maximize adoption.
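The PSS and TBW are conventionally derived by fitting a psychometric function to TOJ responses. The sketch below, using invented response proportions, fits a cumulative Gaussian whose mean gives the PSS; one common operationalization of the TBW, used here, is the SOA range between the 25% and 75% points of the fitted curve:

```python
# Sketch: derive PSS and TBW from audiovisual TOJ data (values are illustrative).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])  # ms; >0 = visual leads
p_visual_first = np.array([0.03, 0.10, 0.28, 0.40, 0.48, 0.62, 0.75, 0.90, 0.96])

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, mu, sigma)

(mu, sigma), _ = curve_fit(cum_gauss, soa, p_visual_first, p0=(0.0, 100.0))
pss = mu                          # SOA at which both orders are equally likely
tbw = 2 * norm.ppf(0.75) * sigma  # width between the 25% and 75% points
print(f"PSS = {pss:.1f} ms, TBW = {tbw:.1f} ms")
```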
{"title":"Can the Perceived Timing of Multisensory Events Predict Cybersickness?","authors":"Ogai Sadiq, Michael Barnett-Cowan","doi":"10.1163/22134808-bja10083","DOIUrl":"https://doi.org/10.1163/22134808-bja10083","url":null,"abstract":"<p><p>Humans are constantly presented with rich sensory information that the central nervous system (CNS) must process to form a coherent perception of the self and its relation to its surroundings. While the CNS is efficient in processing multisensory information in natural environments, virtual reality (VR) poses challenges of temporal discrepancies that the CNS must solve. These temporal discrepancies between information from different sensory modalities leads to inconsistencies in perception of the virtual environment which often causes cybersickness. Here, we investigate whether individual differences in the perceived relative timing of sensory events, specifically parameters of temporal-order judgement (TOJ), can predict cybersickness. Study 1 examined audiovisual (AV) TOJs while Study 2 examined audio-active head movement (AAHM) TOJs. We deduced metrics of the temporal binding window (TBW) and point of subjective simultaneity (PSS) for a total of 50 participants. Cybersickness was quantified using the Simulator Sickness Questionnaire (SSQ). Study 1 results (correlations and multiple regression) show that the oculomotor SSQ shares a significant yet positive correlation with AV PSS and TBW. While there is a positive correlation between the total SSQ scores and the TBW and PSS, these correlations are not significant. Therefore, although these results are promising, we did not find the same effect for AAHM TBW and PSS. We conclude that AV TOJ may serve as a potential tool to predict cybersickness in VR. Such findings will generate a better understanding of cybersickness which can be used for development of VR to help mitigate discomfort and maximize adoption.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10708024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relating Sound and Sight in Simulated Environments.
Pub Date : 2022-09-08 DOI: 10.1163/22134808-bja10082
Kevin Y Tsang, Damien J Mannion
The auditory signals at the ear can be affected by components arriving both directly from a sound source and indirectly via environmental reverberation. Previous studies have suggested that the perceptual separation of these contributions can be aided by expectations of likely reverberant qualities. Here, we investigated whether vision can provide information about the auditory properties of physical locations that could also be used to develop such expectations. We presented participants with audiovisual stimuli derived from 10 simulated real-world locations via a head-mounted display (HMD; n = 44) or a web-based delivery method (n = 60). On each trial, participants viewed a first-person perspective rendering of a location before hearing a spoken utterance convolved with an impulse response from a location that was either the same as (congruent) or different from (incongruent) the visually depicted location. We find that audiovisual congruence was associated with an increase of about 0.22 (95% credible interval: [0.17, 0.27]) in the probability of participants reporting an audiovisual match, and that participants were more likely to confuse audiovisual pairs as matching if their locations had similar reverberation times. Overall, this study suggests that human perceivers have a capacity to form expectations of reverberation from visual information. Such expectations may be useful for the perceptual challenge of separating sound sources and reverberation from within the signal available at the ear.
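The core stimulus manipulation, imposing a location's reverberation on dry speech by convolution with its impulse response, can be sketched as follows (file names are placeholders, and mono audio is assumed):

```python
# Sketch: build congruent/incongruent reverberant utterances by convolution.
import soundfile as sf
from scipy.signal import fftconvolve

speech, fs = sf.read("dry_utterance.wav")    # anechoic (dry) recording, mono assumed
ir_same, _ = sf.read("location_A_ir.wav")    # impulse response of the depicted location
ir_other, _ = sf.read("location_B_ir.wav")   # impulse response of a different location

congruent = fftconvolve(speech, ir_same)     # reverberation matches the visual scene
incongruent = fftconvolve(speech, ir_other)  # reverberation mismatches the scene

sf.write("congruent.wav", congruent / abs(congruent).max(), fs)
sf.write("incongruent.wav", incongruent / abs(incongruent).max(), fs)
```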
{"title":"Relating Sound and Sight in Simulated Environments.","authors":"Kevin Y Tsang, Damien J Mannion","doi":"10.1163/22134808-bja10082","DOIUrl":"https://doi.org/10.1163/22134808-bja10082","url":null,"abstract":"<p><p>The auditory signals at the ear can be affected by components arriving both directly from a sound source and indirectly via environmental reverberation. Previous studies have suggested that the perceptual separation of these contributions can be aided by expectations of likely reverberant qualities. Here, we investigated whether vision can provide information about the auditory properties of physical locations that could also be used to develop such expectations. We presented participants with audiovisual stimuli derived from 10 simulated real-world locations via a head-mounted display (HMD; n = 44) or a web-based ( n = 60) delivery method. On each trial, participants viewed a first-person perspective rendering of a location before hearing a spoken utterance that was convolved with an impulse response that was from a location that was either the same as (congruent) or different to (incongruent) the visually-depicted location. We find that audiovisual congruence was associated with an increase in the probability of participants reporting an audiovisual match of about 0.22 (95% credible interval: [ 0.17 , 0.27 ]), and that participants were more likely to confuse audiovisual pairs as matching if their locations had similar reverberation times. Overall, this study suggests that human perceivers have a capacity to form expectations of reverberation from visual information. Such expectations may be useful for the perceptual challenge of separating sound sources and reverberation from within the signal available at the ear.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9251263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Something in the Sway: Effects of the Shepard-Risset Glissando on Postural Activity and Vection.
Pub Date : 2022-09-01 DOI: 10.1163/22134808-bja10081
Rebecca A Mursic, Stephen Palmisano
This study investigated claims of disrupted equilibrium when listening to the Shepard-Risset glissando (which creates an auditory illusion of perpetually ascending/descending pitch). During each trial, 23 participants stood quietly on a force plate for 90 s with their eyes either open or closed (30 s pre-sound, 30 s of sound and 30 s post-sound). Their centre of foot pressure (CoP) was continuously recorded during the trial and a verbal measure of illusory self-motion (i.e., vection) was obtained directly afterwards. As expected, vection was stronger during Shepard-Risset glissandi than during white-noise or phase-scrambled auditory control stimuli. Individual differences in auditorily evoked postural sway (observed during sound) were also found to predict the strength of this vection. Importantly, the patterns of sway induced by Shepard-Risset glissandi differed significantly from those during our auditory control stimuli, but only in terms of their temporal dynamics. Since no significant differences between sound types were found in terms of sway magnitude, this stresses the importance of investigating the temporal dynamics of sound-posture interactions.
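To illustrate the magnitude-versus-dynamics distinction drawn here, the sketch below computes one magnitude measure (RMS sway) and one simple temporal-dynamics measure (mean power frequency) from a CoP trace; the 100-Hz sampling rate and the data are assumptions:

```python
# Sketch: contrast sway magnitude (RMS) with sway dynamics (mean power frequency).
import numpy as np
from scipy.signal import welch

def sway_measures(cop, fs=100.0):
    cop = cop - cop.mean()                    # remove static offset
    rms = np.sqrt(np.mean(cop ** 2))          # magnitude of sway
    f, pxx = welch(cop, fs=fs, nperseg=1024)  # power spectral density
    mpf = np.sum(f * pxx) / np.sum(pxx)       # spectral centroid = mean power frequency
    return rms, mpf

cop_ap = np.random.default_rng(1).normal(0, 2, 3000)  # placeholder 30-s anterior-posterior trace
print(sway_measures(cop_ap))
```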
{"title":"Something in the Sway: Effects of the Shepard-Risset Glissando on Postural Activity and Vection.","authors":"Rebecca A Mursic, Stephen Palmisano","doi":"10.1163/22134808-bja10081","DOIUrl":"https://doi.org/10.1163/22134808-bja10081","url":null,"abstract":"<p><p>This study investigated claims of disrupted equilibrium when listening to the Shepard-Risset glissando (which creates an auditory illusion of perpetually ascending/descending pitch). During each trial, 23 participants stood quietly on a force plate for 90 s with their eyes either open or closed (30 s pre-sound, 30 s of sound and 30 s post-sound). Their centre of foot pressure (CoP) was continuously recorded during the trial and a verbal measure of illusory self-motion (i.e., vection) was obtained directly afterwards. As expected, vection was stronger during Shepard-Risset glissandi than during white noise or phase-scrambled auditory control stimuli. Individual differences in auditorily evoked postural sway (observed during sound) were also found to predict the strength of this vection. Importantly, the patterns of sway induced by Shepard-Risset glissandi differed significantly from those during our auditory control stimuli - but only in terms of their temporal dynamics. Since significant sound type differences were not seen in terms of sway magnitude, this stresses the importance of investigating the temporal dynamics of sound-posture interactions.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10704636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Odor-Induced Taste Enhancement Is Specific to Naturally Occurring Temporal Order and the Respiration Phase.
Pub Date : 2022-08-23 DOI: 10.1163/22134808-bja10080
Shogo Amano, Takuji Narumi, Tatsu Kobayakawa, Masayoshi Kobayashi, Masahiko Tamura, Yuko Kusakabe, Yuji Wada
The interaction between odor and taste information creates flavor perception. There are many possible determinants of the interaction between odor and taste, one of which may be the somatic sensations associated with breathing. We hypothesized that a smell stimulus accompanied by inhalation or exhalation enhances taste intensity if this order is congruent with natural drinking. To present an olfactory stimulus from an identical location during inhalation and exhalation, we blocked the gap between the tube presenting the olfactory stimulus and the nostril. Participants breathed and ingested the solution according to the instructions on the screen and evaluated the solution's taste intensity. Vanilla odor enhanced the sweet taste in both the retronasal and orthonasal conditions when the order of stimuli was congruent with natural drinking, but it did not do so in either condition when they were incongruent. The results suggest that breathing is a determinant of the odor-taste interaction. The methods of presenting olfactory stimuli used in this study are compared and discussed in relation to those used in previous studies. Odor-induced taste enhancement thus depends on the temporal order of smell relative to breathing and its congruency with natural drinking; by minimizing differences in odor presentation between the retronasal and orthonasal conditions, odor-induced taste enhancement was obtained in both.
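A minimal sketch of the key comparison, assuming a hypothetical ratings file and factor labels (not the authors' analysis), tests the congruency effect separately for the two presentation routes:

```python
# Sketch: sweetness ratings for congruent vs incongruent breathing/odor order,
# separately for orthonasal and retronasal presentation. Labels are assumptions.
import pandas as pd
from scipy.stats import ttest_rel

df = pd.read_csv("taste_ratings.csv")  # assumed columns: participant, route, order, sweetness
cells = (df.groupby(["participant", "route", "order"])["sweetness"]
           .mean().unstack(["route", "order"]))

for route in ["orthonasal", "retronasal"]:
    t, p = ttest_rel(cells[(route, "congruent")], cells[(route, "incongruent")])
    print(f"{route}: congruent vs incongruent, t = {t:.2f}, p = {p:.3f}")
```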
{"title":"Odor-Induced Taste Enhancement Is Specific to Naturally Occurring Temporal Order and the Respiration Phase.","authors":"Shogo Amano, Takuji Narumi, Tatsu Kobayakawa, Masayoshi Kobayashi, Masahiko Tamura, Yuko Kusakabe, Yuji Wada","doi":"10.1163/22134808-bja10080","DOIUrl":"https://doi.org/10.1163/22134808-bja10080","url":null,"abstract":"<p><p>Interaction between odor and taste information creates flavor perception. There are many possible determinants of the interaction between odor and taste, one of which may be the somatic sensations associated with breathing. We assumed that a smell stimulus accompanied by inhaling or exhaling enhances taste intensity if the order is congruent with natural drinking. To present an olfactory stimulus from the identical location during inhalation and exhalation, we blocked the gap between the tube presenting the olfactory stimulus and the nostril. Participants breathed and ingested the solution according to the instructions on the screen and evaluated the solution's taste intensity. Vanilla odor enhanced the sweet taste in both retronasal and orthonasal conditions when the order of stimuli was congruent with natural drinking, but it did not do so in either condition when they were incongruent. The results suggest that breathing is a determinant of odor-taste interaction. The methods of presenting olfactory stimuli used in this study were compared and discussed in relation to those used in previous studies. Odor-induced taste enhancement depends on the time order of smell with breathing and taste congruency in natural drinking. Taste enhancement was induced by odor in both conditions by minimizing differences in odor presentation between them.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10698815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Group Differences in the Crossmodal Correspondences.
Pub Date : 2022-08-09 DOI: 10.1163/22134808-bja10079
Charles Spence
There has been a rapid growth of interest amongst researchers in the cross-modal correspondences in recent years. In part, this has resulted from the emerging realization of the important role that the correspondences can sometimes play in multisensory integration. In turn, this has led to an interest in the nature of any differences between individuals, or rather, between groups of individuals, in the strength and/or consensuality of cross-modal correspondences that may be observed both in neurotypical groups (cross-culturally and developmentally) and across various special populations (including those who have lost a sense, as well as those with autistic tendencies). The hope is that our emerging understanding of such group differences may one day provide grounds for supporting the reality of the various types of correspondence that have so far been proposed, namely structural, statistical, semantic, and hedonic (or emotionally mediated).
{"title":"Exploring Group Differences in the Crossmodal Correspondences.","authors":"Charles Spence","doi":"10.1163/22134808-bja10079","DOIUrl":"https://doi.org/10.1163/22134808-bja10079","url":null,"abstract":"<p><p>There has been a rapid growth of interest amongst researchers in the cross-modal correspondences in recent years. In part, this has resulted from the emerging realization of the important role that the correspondences can sometimes play in multisensory integration. In turn, this has led to an interest in the nature of any differences between individuals, or rather, between groups of individuals, in the strength and/or consensuality of cross-modal correspondences that may be observed in both neurotypically normal groups cross-culturally, developmentally, and across various special populations (including those who have lost a sense, as well as those with autistic tendencies). The hope is that our emerging understanding of such group differences may one day provide grounds for supporting the reality of the various different types of correspondence that have so far been proposed, namely structural, statistical, semantic, and hedonic (or emotionally mediated).</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40427965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Size and Quality of Drawings Made by Adults Under Visual and Haptic Control.
Pub Date : 2022-07-01 DOI: 10.1163/22134808-bja10078
Magdalena Szubielska, Paweł Augustynowicz, Delphine Picard
The aim of this study was twofold. First, our objective was to test the influence of an object's actual size (size rank) on the drawn size of the depicted object. We tested the canonical size effect (i.e., drawing objects that are larger in the physical world as larger) in four drawing conditions: two perceptual conditions (blindfolded or sighted) crossed with two materials (paper or special foil for producing embossed drawings). Second, we investigated whether drawing quality (we analysed both local and global criteria of quality) depends on drawing conditions. We predicted that drawing quality, unlike drawing size, would vary according to drawing conditions, namely, being higher when foil rather than paper was used for drawing production in the blindfolded condition. We tested these hypotheses with young adults who repeatedly drew eight familiar objects (differentiated by size in the real world) in the four drawing conditions. As expected, drawn size increased linearly with increasing size rank, whatever the drawing condition, thus replicating the canonical size effect and showing that this effect does not depend on drawing conditions. In line with our hypothesis, in the blindfolded condition drawing quality was better when foil rather than paper was used, suggesting a benefit of haptic feedback from the trace produced. Moreover, drawing quality was still higher in the sighted than in the blindfolded condition. In conclusion, canonical size is present under different drawing conditions regardless of whether sight is involved, while perceptual control increases drawing quality in adults.
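The canonical size effect described here amounts to a positive linear relation between size rank and drawn size in every condition; a sketch of that per-condition fit, with assumed column names, is below:

```python
# Sketch: estimate the canonical size effect (slope of drawn size on size rank)
# separately for each of the four drawing conditions. Column names are assumptions.
import numpy as np
import pandas as pd

df = pd.read_csv("drawings.csv")  # assumed columns: condition, size_rank (1-8), drawn_size_cm

for condition, grp in df.groupby("condition"):
    slope, intercept = np.polyfit(grp["size_rank"], grp["drawn_size_cm"], deg=1)
    print(f"{condition}: drawn size increases {slope:.2f} cm per step in size rank")
```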
{"title":"Size and Quality of Drawings Made by Adults Under Visual and Haptic Control.","authors":"Magdalena Szubielska, Paweł Augustynowicz, Delphine Picard","doi":"10.1163/22134808-bja10078","DOIUrl":"https://doi.org/10.1163/22134808-bja10078","url":null,"abstract":"<p><p>The aim of this study was twofold. First, our objective was to test the influence of an object's actual size (size rank) on the drawn size of the depicted object. We tested the canonical size effect (i.e., drawing objects larger in the physical world as larger) in four drawing conditions - two perceptual conditions (blindfolded or sighted) crossed with two materials (paper or special foil for producing embossed drawings). Second, we investigated whether drawing quality (we analysed both the local and global criteria of quality) depends on drawing conditions. We predicted that drawing quality, unlike drawing size, would vary according to drawing conditions - namely, being higher when foil than paper was used for drawing production in the blindfolded condition. We tested these hypotheses with young adults who repeatedly drew eight different familiar objects (differentiated by size in the real world) in four drawing conditions. As expected, drawn size increased linearly with increasing size rank, whatever the drawing condition, thus replicating the canonical size effect and showing that this effect was not dependent on drawing conditions. In line with our hypothesis, in the blindfolded condition drawing quality was better when foil rather than paper was used, suggesting a benefit from haptic feedback on the trace produced. Besides, the quality of drawings produced was still higher in the sighted than the blindfolded condition. In conclusion, canonical size is present under different drawing conditions regardless of whether sight is involved or not, while perceptual control increases drawing quality in adults.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40623794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}