Pub Date : 2023-11-10  DOI: 10.1163/22134808-bja10114
Title: Reflections on Cross-Modal Correspondences: Current Understanding and Issues for Future Research
Authors: Kosuke Motoki, Lawrence E Marks, Carlos Velasco
Journal: Multisensory Research, pp. 1-23

Abstract: The past two decades have seen an explosion of research on cross-modal correspondences. Broadly speaking, the term encompasses associations between and among features, dimensions, or attributes across the senses. Interest in the topic has grown among researchers from multiple fields (psychology, neuroscience, music, art, environmental design, etc.) and, importantly, so has the breadth of the topic's scope. This narrative review reflects on what cross-modal correspondences are, where they come from, and what underlies them. We suggest that cross-modal correspondences are usefully conceived as relative associations between different actual or imagined sensory stimuli, many of which are shared by most people. A taxonomy with four major kinds of associations (physiological, semantic, statistical, and affective) characterizes cross-modal correspondences, which can involve both sensory dimensions (quantity/quality) and sensory features (lower perceptual/higher cognitive). Cross-modal correspondences may be understood (or measured) from two complementary perspectives: the phenomenal view (perceptual experiences of subjective matching) and the behavioural response view (observable patterns of behavioural response to multiple sensory stimuli). Importantly, we reflect on remaining questions and standing issues that need to be addressed in order to develop an explanatory framework for cross-modal correspondences. Future research needs (a) to understand better when (and why) phenomenal and behavioural measures coincide and when they do not, and, ideally, (b) to determine whether different kinds of cross-modal correspondence (quantity/quality, lower perceptual/higher cognitive) rely on the same or different mechanisms.
Pub Date : 2023-11-06  DOI: 10.1163/22134808-003608ai
Title: Author Index to Volume 36
Journal: Multisensory Research
Pub Date : 2023-11-06  DOI: 10.1163/22134808-003608si
Title: Subject Index to Volume 36
Journal: Multisensory Research
Pub Date : 2023-11-06  DOI: 10.1163/22134808-003608ci
Title: Contents Index to Volume 36
Journal: Multisensory Research
Pub Date : 2023-10-27  DOI: 10.1163/22134808-bja10112
Title: Beyond the Eye: Multisensory Contributions to the Sensation of Illusory Self-Motion (Vection)
Authors: Bernhard E Riecke, Brandy Murovec, Jennifer L Campos, Behrang Keshavarz
Journal: Multisensory Research, pp. 827-864

Abstract: Vection is typically defined as the embodied illusion of self-motion in the absence of real physical movement through space. Vection can occur in real-life situations (e.g., the 'train illusion') and in virtual environments and simulators. The vast majority of vection research focuses on vection caused by visual stimulation. Even though visually induced vection is arguably the most compelling type, the role of nonvisual sensory inputs, such as auditory, biomechanical, tactile, and vestibular cues, has recently gained more attention. Nonvisual cues can play an important role in inducing vection in two ways. First, they can affect the occurrence and strength of vection when added to corresponding visual information. Second, they can elicit vection in the absence of visual information, for instance when observers are blindfolded or tested in darkness. The present paper provides a narrative review of the literature on multimodal contributions to vection. We discuss both the theoretical and applied relevance of multisensory processing as related to the experience of vection and provide design considerations on how to enhance vection in various contexts.
Pub Date : 2023-10-27  DOI: 10.1163/22134808-bja10113
Title: Joint Contributions of Auditory, Proprioceptive and Visual Cues on Human Balance
Authors: Max Teaford, Zachary J Mularczyk, Alannah Gernon, Shauntelle Cannon, Megan Kobel, Daniel M Merfeld
Journal: Multisensory Research, pp. 865-890

Abstract: The ability to maintain one's center of mass within the base of support (i.e., balance) is believed to be the result of multisensory integration. Much of the research in this literature has focused on the integration of visual, vestibular, and proprioceptive cues. However, several recent studies have found evidence that auditory cues can impact balance control metrics. In the present study, we sought to better characterize the impact of auditory cues on narrow-stance balance task performance with different combinations of visual stimuli (virtual and real-world) and support surfaces (firm and compliant). In line with past results, we found that reducing the reliability of proprioceptive and visual cues yielded consistent increases in center-of-pressure (CoP) sway metrics, indicating more imbalance. Masking ambient auditory cues with broadband noise led to less consistent findings; when effects were observed, they were substantially smaller than those of proprioceptive and visual cues, and in the opposite direction (i.e., masking ambient auditory cues with broadband noise reduced sway in some situations). Additionally, trials using virtual and real-world visual stimuli did not differ unless participants stood on a surface that disrupted proprioceptive cues; with disrupted proprioception, CoP sway metrics increased in the virtual visual condition. This is the first manuscript to report effect sizes for different perturbations in this context, and the first to compare the impact of acoustically complex environments on balance against visual and proprioceptive contributions. Future research is needed to better characterize the impact of different acoustic environments on balance.
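The abstract above quantifies imbalance via center-of-pressure (CoP) sway metrics. As an illustrative sketch only (the paper does not specify its exact metrics here), two widely used measures, sway path length and RMS distance from the mean CoP position, can be computed from a sequence of CoP samples; the function name, units, and example data are assumptions:

```python
import math

def cop_sway_metrics(cop_xy):
    """Compute two common center-of-pressure (CoP) sway metrics from a
    sequence of (x, y) samples in cm. Larger values indicate more sway."""
    n = len(cop_xy)
    mx = sum(x for x, _ in cop_xy) / n
    my = sum(y for _, y in cop_xy) / n
    # Path length: total distance travelled by the CoP across samples.
    path = sum(math.dist(cop_xy[i], cop_xy[i + 1]) for i in range(n - 1))
    # RMS distance of samples from the mean CoP position.
    rms = math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2 for x, y in cop_xy) / n)
    return {"path_length_cm": path, "rms_cm": rms}

# Hypothetical trajectory along three sides of a unit square.
metrics = cop_sway_metrics([(0, 0), (1, 0), (1, 1), (0, 1)])
print(metrics["path_length_cm"])  # 3.0
```

Real analyses would also low-pass filter the CoP signal and normalize by trial duration, but the two quantities above capture the core of what "increased CoP sway" means.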
Pub Date : 2023-10-18  DOI: 10.1163/22134808-bja10110
Title: Investigating the Role of Leading Sensory Modality and Autistic Traits in the Visual-Tactile Temporal Binding Window
Authors: Michelle K Huntley, An Nguyen, Matthew A Albrecht, Welber Marinovic
Journal: Multisensory Research, 36(7), pp. 683-702

Abstract: Our ability to integrate multisensory information depends on processes occurring during the temporal binding window. There is limited research investigating the temporal binding window for visual-tactile integration and its relationship with autistic traits, sensory sensitivity, and unusual sensory experiences. We measured the temporal binding window for visual-tactile integration in 27 neurotypical participants who completed a simultaneity judgement task and three questionnaires: the Autism Quotient, the Glasgow Sensory Questionnaire, and the Multi-Modality Unusual Sensory Experiences Questionnaire. The average width of the visual-leading visual-tactile (VT) temporal binding window was 123 ms, significantly narrower than the tactile-leading visual-tactile (TV) window (193 ms). The temporal binding window was significantly larger for crossmodal (visual-tactile) stimulus pairs (VT: 123 ms; TV: 193 ms) than for unimodal pairs (visual-visual: 38 ms; tactile-tactile: 42 ms). We did not find evidence of a relationship between the size of the temporal binding window and autistic traits, sensory sensitivities, or unusual sensory perceptual experiences in this neurotypical population. Our results indicate that the leading sense in a multisensory pair influences the width of the temporal binding window. When tactile stimuli precede visual stimuli, it may be difficult to determine the temporal boundaries of the stimuli, which leads to a delay in shifting attention from tactile to visual stimuli. This ambiguity likely influences the decision of whether stimuli are simultaneous or nonsimultaneous, which in turn leads to wider temporal binding windows.
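The window widths reported above are derived from a simultaneity judgement task. A minimal sketch of one common criterion-based analysis (not necessarily the authors' exact method): the window width is the span of stimulus-onset asynchronies (SOAs) over which the proportion of 'simultaneous' responses stays at or above a criterion, interpolating linearly at the edges. The criterion value and example data are assumptions:

```python
def tbw_width(soas_ms, p_simultaneous, criterion=0.75):
    """Estimate temporal-binding-window width (ms) as the span of SOAs where
    the proportion of 'simultaneous' judgements is >= criterion."""
    pts = sorted(zip(soas_ms, p_simultaneous))
    above = [x for x, y in pts if y >= criterion]
    if not above:
        return 0.0
    lo, hi = min(above), max(above)

    def crossing(p0, p1):
        # SOA at which the response curve crosses the criterion, by linear
        # interpolation between two adjacent sampled points.
        (x0, y0), (x1, y1) = p0, p1
        return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)

    # Extend each edge into the neighbouring sub-criterion interval.
    for p0, p1 in zip(pts, pts[1:]):
        if p0[1] < criterion <= p1[1]:
            lo = min(lo, crossing(p0, p1))
        if p0[1] >= criterion > p1[1]:
            hi = max(hi, crossing(p0, p1))
    return hi - lo

# Hypothetical response curve peaking at SOA = 0 ms.
print(tbw_width([-200, -100, 0, 100, 200], [0.1, 0.5, 1.0, 0.5, 0.1]))  # 100.0
```

Asymmetric windows (VT vs. TV) fall out naturally: the same computation restricted to negative or positive SOAs gives the two half-widths.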
Pub Date : 2023-10-13  DOI: 10.1163/22134808-bja10111
Title: Motor Signals Mediate Stationarity Perception
Authors: Savannah Halow, James Liu, Eelke Folmer, Paul R MacNeilage
Journal: Multisensory Research, 36(7), pp. 703-724

Abstract: Head movement relative to the stationary environment gives rise to congruent vestibular and visual optic-flow signals. The resulting perception of a stationary visual environment, referred to herein as stationarity perception, depends on mechanisms that compare visual and vestibular signals to evaluate their congruence. Here we investigate the functioning of these mechanisms and their dependence on fixation behavior as well as on the active versus passive nature of the head movement. Stationarity perception was measured by modifying the gain on visual motion relative to head movement on individual trials and asking subjects to report whether the gain was too low or too high. Fitting a psychometric function to the data yields two key parameters of performance: the mean, a measure of accuracy, and the standard deviation, a measure of precision. Experiments were conducted using a head-mounted display with fixation behavior monitored by an embedded eye tracker. During active conditions, subjects rotated their heads in yaw at ∼15 deg/s over ∼1 s. Each subject's movements were recorded and played back via a rotating chair during the passive condition. During head-fixed and scene-fixed fixation, the fixation target moved with the head or scene, respectively. Both precision and accuracy were better during active than passive head movement, likely due to increased precision of the head-movement estimate arising from motor prediction and neck proprioception. Performance was also better during scene-fixed than head-fixed fixation, perhaps due to decreased velocity of retinal image motion and increased precision of the retinal-image-motion estimate. These results reveal how the nature of head and eye movements mediates the encoding, processing, and comparison of relevant sensory and motor signals.
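The abstract describes fitting a psychometric function whose mean indexes accuracy and whose standard deviation indexes precision. A minimal sketch of such a fit, assuming a cumulative-Gaussian model of binary 'gain too high' responses and a coarse grid-search maximum-likelihood estimate (the grids and synthetic data are illustrative, not the authors' procedure):

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of a Gaussian."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_psychometric(gains, responses):
    """Fit a cumulative-Gaussian psychometric function to binary
    'gain too high' responses (1 = too high) by grid-search MLE.
    Returns (mu, sigma): mu estimates the gain perceived as stationary
    (accuracy); sigma estimates the discrimination threshold (precision)."""
    best = None
    for mu in (m / 100 for m in range(50, 151)):        # candidate means
        for sigma in (s / 100 for s in range(5, 101)):  # candidate SDs
            ll = 0.0
            for g, r in zip(gains, responses):
                p = min(max(norm_cdf(g, mu, sigma), 1e-9), 1 - 1e-9)
                ll += math.log(p) if r else math.log(1.0 - p)
            if best is None or ll > best[0]:
                best = (ll, mu, sigma)
    return best[1], best[2]

# Synthetic trials: 'too high' reported on 1/10, 3/10, 7/10, 9/10 trials
# at gains 0.7, 0.9, 1.1, 1.3 — a curve centred near gain 1.0.
gains, responses = [], []
for g, k in [(0.7, 1), (0.9, 3), (1.1, 7), (1.3, 9)]:
    for i in range(10):
        gains.append(g)
        responses.append(1 if i < k else 0)
mu, sigma = fit_psychometric(gains, responses)
```

In practice one would use a proper optimizer (e.g., gradient-based MLE) and add lapse-rate parameters, but the mapping from fitted mean/SD to accuracy/precision is exactly as the abstract states.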
Pub Date : 2023-09-29  DOI: 10.1163/22134808-bja10109
Title: Subjective Audibility Modulates the Susceptibility to Sound-Induced Flash Illusion: Effect of Loudness and Auditory Masking
Authors: Yuki Ito, Hanaka Matsumoto, Kohta I Kobayasi
Journal: Multisensory Research, pp. 1-17

Abstract: When a brief flash is presented along with two brief sounds, the single flash is often perceived as two flashes. This phenomenon is called the sound-induced flash illusion, in which the auditory sense, with its relatively higher reliability in providing temporal information, modifies visual perception. Decline of audibility due to hearing impairment is known to make subjects less susceptible to the flash illusion. However, the effect of reduced audibility on susceptibility to the illusion has not been directly investigated in subjects with normal hearing. The present study investigates the relationship between audibility and susceptibility to the illusion by varying the sound pressure level of the stimulus. In a task requiring subjects to report the number of auditory stimuli, lowering the sound pressure level decreased the rate of perceiving two sounds, on account of forward masking. The occurrence of the illusory flash was reduced as the intensity of the second auditory stimulus decreased, and was significantly correlated with the rate of perceiving the two auditory stimuli. These results suggest that susceptibility to the sound-induced flash illusion depends on the subjective audibility of each sound.
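The key result above is a correlation between two rates: how often both sounds were heard and how often the illusory flash occurred. A minimal sketch of that computation with invented per-condition rates (the data are hypothetical, for illustration only):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical rates across five sound-pressure-level conditions:
# proportion of trials where both sounds were heard, and proportion
# of trials with an illusory second flash.
two_sound_rate = [0.95, 0.80, 0.60, 0.40, 0.20]
illusion_rate = [0.70, 0.62, 0.45, 0.30, 0.15]
print(round(pearson_r(two_sound_rate, illusion_rate), 3))
```

A strong positive r, as in this toy data, is the pattern the abstract reports: the less audible the second sound, the less often the illusion occurs.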
Pub Date : 2023-09-27  DOI: 10.1163/22134808-bja10108
Title: From the Outside in: ASMR Is Characterised by Reduced Interoceptive Accuracy but Higher Sensation Seeking
Authors: Giulia L Poerio, Fatimah Osman, Jennifer Todd, Jasmeen Kaur, Lovell Jones, Flavia Cardini
Journal: Multisensory Research, pp. 1-21

Abstract: Autonomous Sensory Meridian Response (ASMR) is a complex sensory-perceptual phenomenon characterised by relaxing and pleasurable scalp-tingling sensations. The ASMR trait is nonuniversal, is thought to have developmental origins, and has a prevalence rate of 20%. Previous theory and research suggest that trait ASMR may be underpinned by atypical multisensory perception across both interoceptive and exteroceptive modalities. In this study, we examined whether ASMR responders differed from nonresponders in interoceptive accuracy and multisensory processing style. Results showed that ASMR responders had lower interoceptive accuracy but a greater tendency towards sensation seeking, especially in the tactile, olfactory, and gustatory modalities. Exploratory mediation analyses suggest that sensation-seeking behaviours in trait ASMR could reflect a compensatory mechanism for deficits in interoceptive accuracy, a tendency to weight exteroceptive signals more strongly, or both. This study provides the foundations for understanding how interoceptive and exteroceptive mechanisms might explain not only the ASMR trait, but also individual differences in the ability to experience complex positive emotions more generally.