The Life and Works of Vincent Hayward: An Introduction
Pub Date: 2026-02-02 | DOI: 10.1163/22134808-20250001
Alessandro Farnè, Luke E Miller
In this Introduction, we have the pleasure of introducing the twelve articles of this Special Issue of Multisensory Research celebrating the life and works of Vincent Hayward. Vincent was a prolific scientist, collaborator, and colleague. As the variety of contributed papers shows, his influence spanned several fields and topics: from engineering to neurophysiology, and from skin mechanics to olfactory metacognition. We and many others had the pleasure of knowing and working with Vincent. His boundless curiosity shines through in the papers of this Special Issue, and hopefully in our Introduction as well. Though gone, he is not forgotten; his legacy and influence live on in the hearts and minds of colleagues studying the (neuro)science of body perception.
{"title":"The Life and Works of Vincent Hayward: An Introduction.","authors":"Alessandro Farnè, Luke E Miller","doi":"10.1163/22134808-20250001","DOIUrl":"https://doi.org/10.1163/22134808-20250001","url":null,"abstract":"<p><p>In this Introduction, we have the pleasure of introducing the twelve articles of this Special Issue of Multisensory Research celebrating the life and works of Vincent Hayward. Vincent was a prolific scientist, collaborator, and colleague. As you will see by the variety of contributed papers, his influence spanned several fields and topics; from engineering to neurophysiology; from skin mechanics to olfactory metacognition. We and many others had the pleasure of knowing and working with Vincent. His boundless curiosity shines through in the papers of this Special Issue, and hopefully in our Introduction as well. Though gone, he is not forgotten; His legacy and influence lives on in the hearts and minds of colleagues studying the (neuro)science of body perception.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-7"},"PeriodicalIF":1.5,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146114852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond the Auditory System: Sensory Processing in Decreased Sound Tolerance Disorders
Pub Date: 2026-01-30 | DOI: 10.1163/22134808-bja10185
Merve Meral Çetinkaya, Azize Arzu Koçyiğit Köroğlu, Ümit Can Çetinkaya
This study investigates the correlation between decreased sound tolerance disorders and sensory processing disorder, as well as which sensory systems are affected when decreased sound tolerance disorders are present. The study included 315 individuals aged 18-35 with normal hearing and no neurological disorders. Participants completed the Decreased Sound Tolerance Disorder Scale (DSTS) and the Adult Sensory Processing Scale (ASPS). According to the DSTS, 278 individuals with decreased sound tolerance disorders were included as the study group, and 37 individuals without decreased sound tolerance served as the control group. The DSTS includes 33 items assessing symptoms of hyperacusis, phonophobia, and misophonia. The ASPS consists of 48 items across 11 factors that assess the sensitivity of different sensory domains. The distribution of decreased sound tolerance disorders (DSTs) among the participants was as follows: 113 participants (35.9%) had all three types (triple DST), 85 (27.0%) had two types (dual DST), 16 (5.1%) had hyperacusis only, 14 (4.4%) had phonophobia only, 50 (15.9%) had misophonia only, and 37 (11.7%) had none (non-DST). Total ASPS scores showed moderate positive correlations with hyperacusis (r = 0.260, p < 0.001) and misophonia (r = 0.348, p < 0.001) scores. Total ASPS scores were also higher in those with misophonia than in those without (p < 0.05). These results indicate that individuals with decreased sound tolerance may have difficulty processing not only auditory but also visual, vestibular, proprioceptive, and tactile stimuli.
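A minimal sketch of the kind of correlation analysis reported above; the scale totals and data below are placeholders standing in for the 315 participants, and the coefficient type (Pearson) is inferred from the reported r values:

```python
# Sketch: correlating ASPS totals with DSTS subscale scores
# (hypothetical data standing in for the 315 participants).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
asps_total = rng.normal(120, 25, size=315)                # ASPS total score
hyperacusis = 0.3 * asps_total + rng.normal(0, 20, 315)   # DSTS hyperacusis subscale
misophonia = 0.4 * asps_total + rng.normal(0, 20, 315)    # DSTS misophonia subscale

for name, sub in [("hyperacusis", hyperacusis), ("misophonia", misophonia)]:
    r, p = stats.pearsonr(asps_total, sub)
    print(f"ASPS vs {name}: r = {r:.3f}, p = {p:.3g}")
```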
{"title":"Beyond the Auditory System: Sensory Processing in Decreased Sound Tolerance Disorders.","authors":"Merve Meral Çetinkaya, Azize Arzu Koçyiğit Köroğlu, Ümit Can Çetinkaya","doi":"10.1163/22134808-bja10185","DOIUrl":"https://doi.org/10.1163/22134808-bja10185","url":null,"abstract":"<p><p>This study aims to investigate the correlation between decreased sound tolerance disorders and sensory processing disorder, as well as the sensory systems that are affected when decreased sound tolerance disorders exist. The study included 315 individuals aged 18-35 with normal hearing and no neurological disorders. Participants completed the Decreased Sound Tolerance Disorder Scale (DSTS) and the Adult Sensory Processing Scale (ASPS). According to the DSTS, 278 individuals with decreased sound tolerance disorders were included as the study group, and 37 individuals without decreased sound tolerance were included as a control group. The DSTS includes 33 items assessing symptoms of hyperacusis, phonophobia, and misophonia. The ASPS consists of 48 items across 11 factors that assess the sensitivity of different sensory domains. The distribution of decreased sound tolerance disorders among the participants indicated that 113 participants (35.9%) had all three types of DSTs (triple DST), 85 participants (27.0%) had two types of DSTs (dual DST), 16 participants (5.1%) had hyperacusis only, 14 participants (4.4%) had phonophobia only, 50 participants (15.9%) had misophonia only, and 37 participants (11.7%) had no DSTs (non-DST). A moderately positive correlation was found between the total scores of the ASPS and hyperacusis ( r = 0.260, p < 0.001), and misophonia ( r = 0.348, p < 0.001) scores. Total ASPS scores were higher in those with misophonia compared to those without ( p < 0.05). The results of this study indicate that individuals with decreased sound tolerance may have difficulty in processing not only auditory but also visual, vestibular, proprioceptive, and tactile stimuli.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-20"},"PeriodicalIF":1.5,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146107906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haptic-Sound Analysis of Materials' Clustering Based on Tool Tip Tapping Exploration
Pub Date: 2026-01-26 | DOI: 10.1163/22134808-bja10184
Kriti Datta, Amit Bhardwaj, Manish Narwaria
Tapping surfaces with a tool tip elicits both sound and haptic information, captured by a microphone and by an accelerometer attached to the tip, respectively. In relation to the task of distinguishing objects, this paper investigates two questions: (1) how does each signal (sound and acceleration) individually help us assess a surface? (2) How does integrating the two modalities affect this assessment? We approach texture assessment as an unsupervised learning problem. For this purpose, perceptual filter banks based on Weber's law of frequency perception are designed for both modalities to extract the corresponding features. Furthermore, we introduce a symmetric KL-divergence-based texture similarity metric, which allows us to compare the sound and haptic (acceleration) modalities. Based on these similarity comparisons, we argue that proximity between object types is preserved across modalities. Finally, using a permutation-test-based approach, we demonstrate that how much the sound domain complements the haptic domain varies with the type of object.
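As a rough illustration of the two ingredients named above, the sketch below builds a geometrically spaced band-energy feature (a crude stand-in for a Weber-law-inspired filter bank) and the symmetric KL divergence between two recordings. The band edges, sampling rate, and signals are assumptions, not the authors' actual design:

```python
# Sketch: geometrically spaced band energies and symmetric KL between them.
import numpy as np

def band_energies(signal, fs, n_bands=12, f_lo=20.0, f_hi=1000.0):
    """Energy per frequency band with geometrically spaced edges -
    a rough stand-in for a Weber-law-inspired perceptual filter bank."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric KL divergence D(p||q) + D(q||p) between normalised histograms."""
    p = np.clip(p / p.sum(), eps, None)
    q = np.clip(q / q.sum(), eps, None)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Toy comparison of two tap recordings (placeholder white-noise signals).
rng = np.random.default_rng(0)
tap_a, tap_b = rng.normal(size=4800), rng.normal(size=4800)
d = symmetric_kl(band_energies(tap_a, 48000), band_energies(tap_b, 48000))
print(f"texture dissimilarity: {d:.4f}")
```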
{"title":"Haptic-Sound Analysis of Materials' Clustering Based on Tool Tip Tapping Exploration.","authors":"Kriti Datta, Amit Bhardwaj, Manish Narwaria","doi":"10.1163/22134808-bja10184","DOIUrl":"https://doi.org/10.1163/22134808-bja10184","url":null,"abstract":"<p><p>Tapping surfaces with a tool tip elicits both sound and haptic information. Sound and haptic information is captured by a microphone and an accelerometer attached to the tip, respectively. In relation to the task of distinguishing objects, this paper investigates the following two questions: (1) how do both the signals (sound and acceleration) help us to assess the surface individually? (2) How does the integration of both modalities affect this assessment? We approach the problem of texture assessment as an unsupervised learning problem. For this purpose, perceptual filter banks are designed based on Weber's law of frequency perception for both modalities to extract the corresponding features. Furthermore, we introduce a symmetric KL divergence-based texture similarity metric, which helped us to compare the sound and haptic (acceleration signals) modalities. Based on our similarity metric comparisons, we argue that proximity between object types is preserved across modalities. Finally, using a permutation test-based approach, we demonstrate that the complementarity of sound domain information to the haptic domain varies depending on the type of object.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-16"},"PeriodicalIF":1.5,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146107854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Effect of Mask Wearing on Lip-Reading and Audiovisual Speech Perception
Pub Date: 2026-01-21 | DOI: 10.1163/22134808-bja10182
Yuta Ujiie, Kohske Takahashi
Seeing facial speech plays a crucial role in allowing listeners to understand a person's speech. From the outbreak of the COVID-19 pandemic onward, relying on facial speech became more difficult because most people routinely wore masks to prevent infection. This study investigates whether and how mask wearing alters reliance on facial speech in audiovisual speech perception. In this cross-sectional study, we compared the performance of Japanese young adults on audiovisual speech recognition (i.e., the McGurk effect) and lip-reading tasks between prepandemic and postpandemic groups. The prepandemic data came from a past study of ours, collected between June and July 2019; the postpandemic data were collected from November 2022 to April 2023. The results showed that the size of the McGurk effect (i.e., the degree of reliance on facial speech) in the postpandemic data was comparable to that in the prepandemic data. In addition, there were no significant differences in lip-reading accuracy or in audiovisual congruent speech recognition. The results imply that, among Japanese young adults, perceivers' reliance on visual speech as a strategy during audiovisual speech processing did not change significantly from before to after the COVID-19 pandemic.
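The abstract does not specify the statistical test, but a minimal sketch of the kind of cross-sectional comparison described, with per-participant McGurk rates compared across cohorts via a nonparametric test, might look as follows (all data are made up):

```python
# Sketch: comparing per-participant McGurk rates (proportion of fused
# percepts on incongruent trials) between cohorts; data are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.beta(4, 6, size=40)     # prepandemic McGurk proportions
post = rng.beta(4, 6, size=40)    # postpandemic McGurk proportions

u, p = stats.mannwhitneyu(pre, post, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```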
{"title":"The Effect of Mask Wearing on Lip-Reading and Audiovisual Speech Perception.","authors":"Yuta Ujiie, Kohske Takahashi","doi":"10.1163/22134808-bja10182","DOIUrl":"https://doi.org/10.1163/22134808-bja10182","url":null,"abstract":"<p><p>Seeing facial speech plays a crucial role in allowing listeners to understand a person's speech. Since the COVID pandemic's outbreak and during the pandemic, the use of facial speech became more difficult because most people routinely wore masks to prevent infection. This study investigates whether and how wearing a mask alters reliance on facial speech for audiovisual speech perception. In this cross-sectional study, we compared the task performance of Japanese young adults in audiovisual speech recognition (i.e., the McGurk effect) and lip-reading between prepandemic and postpandemic groups. For the prepandemic data, we used data from between June and July 2019 from a past study of ours; for the postpandemic data, we collected data from November 2022 to April 2023. The results showed that the amount of McGurk effect (i.e., the amount of reliance on facial speech) in the postpandemic data was comparable to that in prepandemic data. Additionally, there were no significant differences on lip-reading accuracy nor on audiovisual congruent speech recognition. The results imply that, among Japanese young adults, the perceiver's reliance on visual speech as a strategy during audiovisual speech processing did not significantly change between before and after the COVID-19 pandemic.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-12"},"PeriodicalIF":1.5,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multisensory Tuning of Emotional Face Recognition: A Comparative Study of Olfactory and Gustatory Influences
Pub Date: 2026-01-20 | DOI: 10.1163/22134808-bja10183
Qingya Yang, Huajing Yang, Ao Wang, Lina Huang, Liuqing Wei, Wenbin Shen, Weiping Yang, Qingguo Ding, Pei Liang
Olfaction and gustation are central to affective experience, yet their distinct contributions to social emotion perception remain underexplored compared to vision and audition. This study examined how sweet and sour chemosensory cues modulate emotional judgments of facial expressions, testing whether judgments are systematically biased toward hedonically congruent emotion categories. Two behavioral experiments were conducted. In the olfactory session, participants categorized happy, disgusted, and neutral faces at high (100%) and low (50%) intensity while exposed to either a sweet (melon) or a sour (vinegar) odor. In the gustatory session, they performed the same task after ingesting a sweet (sucrose) or a sour (citric acid) solution. Accuracy and reaction times were analyzed using generalized linear mixed models. The accuracy analyses revealed a significant main effect of intensity and a significant condition × emotion interaction: happy faces were identified more accurately in the sweet condition, whereas disgusted faces were identified more accurately in the sour condition. Sweet and sour chemosensory cues thus systematically bias emotional judgments of visual facial expressions. These effects support hedonic congruency predictions and highlight the importance of incorporating chemosensory context into multisensory models of emotion.
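A minimal sketch of a reaction-time analysis with participant-level random intercepts via statsmodels; the column names and data below are hypothetical, and the accuracy analysis would use a binomial GLMM rather than the linear mixed model shown here:

```python
# Sketch: reaction times modelled with a mixed model (random intercepts
# per participant); data and column names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_trials = 30, 60
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "condition": rng.choice(["sweet", "sour"], n_subj * n_trials),
    "emotion": rng.choice(["happy", "disgusted", "neutral"], n_subj * n_trials),
})
df["rt"] = 600 + rng.normal(0, 80, len(df))   # reaction time in ms (placeholder)

model = smf.mixedlm("rt ~ condition * emotion", df, groups=df["subject"])
print(model.fit().summary())
```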
{"title":"Multisensory Tuning of Emotional Face Recognition: A Comparative Study of Olfactory and Gustatory Influences.","authors":"Qingya Yang, Huajing Yang, Ao Wang, Lina Huang, Liuqing Wei, Wenbin Shen, Weiping Yang, Qingguo Ding, Pei Liang","doi":"10.1163/22134808-bja10183","DOIUrl":"https://doi.org/10.1163/22134808-bja10183","url":null,"abstract":"<p><p>Olfaction and gustation are central to affective experience, yet their distinct contributions to social emotion perception remain underexplored compared to vision and audition. This study examined how sweet and sour chemosensory cues modulate emotional judgments of facial expressions, testing whether judgments are systematically biased toward hedonically congruent emotion categories. Two behavioral experiments were conducted. In the olfactory session, participants categorized happy, disgusted, and neutral faces at high (100%) and low (50%) intensity while exposed to either a sweet (melon) or sour (vinegar) odor. In the gustatory session, they performed the same task after ingesting the sweet (sucrose) or sour (citric acid) solution. Accuracy and reaction times were analyzed using generalized linear mixed models. The analyses of accuracy data revealed a significant main effect of intensity and a significant condition × emotion interaction. Happy faces were more accurately identified in the sweet condition, whereas disgusted faces were more accurately identified in the sour condition. Sweet and sour chemosensory cues systematically bias emotional judgments of visual facial expressions. These effects support hedonic congruency predictions and highlight the importance of incorporating chemosensory context into multisensory models of emotion.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-18"},"PeriodicalIF":1.5,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146042165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Factors in Cybersickness: A Literature Survey and Meta-Analysis
Pub Date: 2025-12-12 | DOI: 10.1163/22134808-bja10181
Robert S Allison, Stephen Palmisano
Cybersickness, a common adverse side effect of virtual-reality exposure, is characterised by a constellation of symptoms including nausea, disorientation, and oculomotor disturbances. This review synthesises findings in the literature to evaluate the influence of low-level visual factors on cybersickness, with an emphasis on motion-sickness-related symptoms. Higher-level visual or multisensory factors such as head-tracking latency, coupled physical motion, or semantic content were not considered. We searched the Scopus, PubMed, ACM Digital Library, IEEE Xplore, Web of Science, Google Scholar, and OVID databases in November 2024, and also searched backward citations from the selected papers and recent review papers. Experimental studies using human participants published in peer-reviewed journals were selected after abstract screening and full-text review of the screened records. Effects were extracted from the papers, and effect sizes were synthesised as standardised mean differences in cybersickness intensity or symptoms. Separate random-effects meta-analyses were performed to quantify the effect sizes for each of the visual factors considered, including field of view, motion type, velocity and direction, spatial and temporal resolution, contrast, luminance, and the presence of visual reference frames. Of the 4622 initial records, 97 studies were selected and included in the meta-analyses. The analyses revealed that peripheral field-of-view restriction and independent visual backgrounds were consistently associated with reductions in cybersickness severity. Conversely, visual oscillation, multidimensional visual motion stimuli, and visually simulated off-vertical-axis rotation were found to exacerbate cybersickness. The review also identifies methodological trends and limitations within the literature, and suggests ways to improve the effectiveness of subsequent meta-analyses through study design, data reporting standards, and methodological descriptions. These findings highlight avenues for future research, particularly in the context of individual susceptibility and multifactor integration. The results offer actionable insights for the design of virtual-reality systems aimed at mitigating cybersickness and enhancing user comfort.
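For concreteness, a hedged sketch of the two core computations named above: a standardised mean difference per study (here Hedges' g) and DerSimonian-Laird random-effects pooling. The review's actual software and corrections are not specified in the abstract, and the toy study values are invented:

```python
# Sketch: Hedges' g per study and DerSimonian-Laird random-effects pooling.
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference with small-sample correction; returns (g, var_g)."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)                        # Hedges' correction
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))     # approx. variance of d
    return j * d, j**2 * v

def random_effects(gs, vs):
    """DerSimonian-Laird pooled effect; returns (mean, SE, tau^2)."""
    gs, vs = np.asarray(gs, float), np.asarray(vs, float)
    w = 1.0 / vs
    mu_fixed = np.sum(w * gs) / w.sum()
    q = np.sum(w * (gs - mu_fixed) ** 2)                   # Cochran's Q
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)               # between-study variance
    w_star = 1.0 / (vs + tau2)
    return np.sum(w_star * gs) / w_star.sum(), np.sqrt(1.0 / w_star.sum()), tau2

# Three toy studies: restricted vs full field of view (values invented).
effects = [hedges_g(2.1, 1.0, 24, 2.9, 1.1, 24),
           hedges_g(1.5, 0.9, 30, 2.2, 1.0, 30),
           hedges_g(3.0, 1.2, 18, 3.4, 1.3, 18)]
gs, vs = zip(*effects)
mu, se, tau2 = random_effects(gs, vs)
print(f"pooled g = {mu:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```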
{"title":"Visual Factors in Cybersickness: A Literature Survey and Meta-Analysis.","authors":"Robert S Allison, Stephen Palmisano","doi":"10.1163/22134808-bja10181","DOIUrl":"https://doi.org/10.1163/22134808-bja10181","url":null,"abstract":"<p><p>Cybersickness, a common adverse side-effect of virtual-reality exposure, is characterised by a constellation of symptoms including nausea, disorientation, and oculomotor disturbances. This review synthesises findings in the literature to evaluate the influence of low-level visual factors on cybersickness, with an emphasis on motion-sickness-related symptoms. Higher-level visual or multisensory factors such as head-tracking latency, coupled physical motion or semantic content were not considered. We searched the Scopus, Pubmed, ACM Digital Library, IEEE Xplore, Web of Science, Google Scholar, and OVID databases in November 2024 as well as searched backward citations from the selected papers and recent review papers. Experimental studies using human participants published in peer-reviewed journals were selected after abstract screening and full-text review of the screened records. Effects were extracted from the papers and effect sizes were synthesised as standardised mean differences in cybersickness intensity or symptoms. Separate random-effects meta-analyses were performed to quantify the effect sizes for each of the visual factors considered including field of view, motion type, velocity and direction, spatial and temporal resolution, contrast, luminance, and the presence of visual reference frames. Of the 4622 initial records, 97 studies were selected and included in the meta-analyses. The analyses revealed that peripheral field of view restriction and independent visual backgrounds were consistently associated with reductions in cybersickness severity. Conversely, visual oscillation, multidimensional visual motion stimuli, and visually simulated off-vertical axis rotation were found to exacerbate cybersickness. The review also identifies methodological trends and limitations within the literature, and suggests ways to improve the effectiveness of subsequent meta-analyses through study design, data reporting standards and methodological descriptions. These findings highlight avenues for future research, particularly in the context of individual susceptibility and multifactor integration. The results offer actionable insights for the design of virtual-reality systems aimed at mitigating cybersickness and enhancing user comfort.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-105"},"PeriodicalIF":1.5,"publicationDate":"2025-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145752479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio-Visual Integration in 3D Space Near the Body
Pub Date: 2025-12-09 | DOI: 10.1163/22134808-bja10179
Mick Zeljko, Philip M Grove, Laurence R Harris, Ada Kritikos
Previous research has investigated variations in the effectiveness of audio-visual (AV) integration depending on location relative to the observer, with inconsistent results. Here, we examine AV interactions in the 3D space around an observer and address six factors that may contribute to these inconsistencies. Using a redundant-targets-effect paradigm in virtual reality, we conducted speeded detection and localization tasks with randomly intermixed auditory, visual, and audio-visual stimuli presented in near or far, left or right regions of space around an observer. We varied stimulus characteristics to control for distance-related magnitude variations, examined static and looming stimuli, and analysed response times, multisensory benefits, and race-model violations across conditions. Our findings reveal location-related effects on AV integration for looming but not stationary stimuli. Specifically, we observed a near-space enhancement of participants' sensory-motor responses to AV looming stimuli and a left/near-space enhancement of the multisensory benefit. Our method of intermixing stimulus locations and adjusting stimulus magnitudes to control for inverse effectiveness was critical for demonstrating these effects. Task goals modified outcomes in complex ways. These results provide new insights into AV integration in 3D space, extend previous findings, and highlight the importance and limitations of methodological factors.
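A minimal sketch of the race-model (Miller-inequality) test that underlies the reported violations: the empirical CDF of redundant audio-visual RTs is compared against the summed unisensory CDFs, with positive differences indicating violations and hence evidence of integration. RT data below are placeholders:

```python
# Sketch: race-model inequality test for a redundant-targets paradigm.
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of the RT sample evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

rng = np.random.default_rng(3)
rt_a = rng.normal(420, 60, 200)    # auditory-only RTs (ms)
rt_v = rng.normal(440, 60, 200)    # visual-only RTs
rt_av = rng.normal(380, 55, 200)   # redundant audio-visual RTs

ts = np.percentile(np.concatenate([rt_a, rt_v, rt_av]), np.arange(5, 100, 5))
violation = ecdf(rt_av, ts) - np.minimum(1.0, ecdf(rt_a, ts) + ecdf(rt_v, ts))
print(np.round(violation, 3))   # > 0 at any t violates the race-model bound
```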
{"title":"Audio-Visual Integration in 3D Space Near the Body.","authors":"Mick Zeljko, Philip M Grove, Laurence R Harris, Ada Kritikos","doi":"10.1163/22134808-bja10179","DOIUrl":"10.1163/22134808-bja10179","url":null,"abstract":"<p><p>Previous research has investigated variations in the effectiveness of audio-visual (AV) integration dependent on location relative to the observer, with inconsistent results. Here, we examine AV interactions in the 3D space around an observer and address six factors that may contribute to these inconsistencies. Using a redundant-targets-effect paradigm in virtual reality, we conducted speeded detection and localization tasks to randomly intermixed auditory, visual and audio-visual stimuli presented in near or far, left or right regions of space around an observer. We varied stimulus characteristics to control for distance-related magnitude variations, examined static and looming stimuli, and analysed response times, multisensory benefits, and race model violations across conditions. Our findings reveal location-related effects on AV integration for looming but not stationary stimuli. Specifically, we observed near-space enhancement for AV looming stimuli for participants' sensory-motor responses and a left/near space enhancement for the multisensory benefit. Our method of intermixing stimulus locations and magnitude adjustments to control for inverse effectiveness was critical for demonstrating these effects. Task goals modified outcomes in complex ways. These results provide new insights into AV integration in 3D space, extend previous findings and highlight the importance and limitations of methodological factors.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"61-98"},"PeriodicalIF":1.5,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145716534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
German Translation and Validation of the Visually Induced Motion Sickness Susceptibility Questionnaire Short (VIMSSQ-short)
Pub Date: 2025-12-02 | DOI: 10.1163/22134808-bja10180
Mara Baljan, John F Golding, Heiko Hecht, Behrang Keshavarz
Motion sickness is a condition characterized by symptoms such as dizziness, nausea, or vomiting, especially during transportation or immersive visual experiences such as gaming and virtual reality (VR). Visually Induced Motion Sickness (VIMS) is of particular concern due to its increasing relevance with the rise of immersive technologies. The 6-item version of the Visually Induced Motion Sickness Susceptibility Questionnaire (VIMSSQ-short), a modified version of the established Motion Sickness Susceptibility Questionnaire, was developed to quickly assess individual susceptibility to VIMS. This study focuses on the translation of the VIMSSQ-short into German and the validation of the German-language version. The translation process included independent translations by experts and a back-translation to identify and resolve discrepancies. An online survey collected normative data from 200 participants, revealing a mean score of 5.85 (SD = 3.31) for the translated VIMSSQ-short. The results indicated significant gender differences, with females exhibiting higher susceptibility scores than males. Additionally, a significant negative correlation between age and susceptibility was observed. An experimental study involving 70 participants further confirmed these findings in terms of mean scores, gender, and age. The findings also demonstrate that higher VIMSSQ scores predict symptom severity during VR exposure (rs = 0.58 with the Simulator Sickness Questionnaire total score). Overall, the translated VIMSSQ-short shows promise as a reliable tool for assessing VIMS susceptibility in German-speaking populations, contributing to the understanding of motion sickness in immersive environments. Identifying susceptible individuals is relevant both for practical applications (e.g., in the training of emergency forces) and in experimental settings for the randomization or screening of participants.
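A hedged sketch of the predictive-validity check reported above (Spearman correlation between VIMSSQ-short and SSQ total scores); the scores below are simulated around the reported norms, not the study's data:

```python
# Sketch: Spearman correlation between VIMSSQ-short and SSQ totals,
# with scores simulated around the reported mean (5.85) and SD (3.31).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
vimssq = rng.normal(5.85, 3.31, size=70).clip(0)      # questionnaire scores
ssq = 10 + 2.5 * vimssq + rng.normal(0, 8, size=70)   # post-exposure SSQ totals

rho, p = stats.spearmanr(vimssq, ssq)
print(f"Spearman rs = {rho:.2f}, p = {p:.3g}")
```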
{"title":"German Translation and Validation of the Visually Induced Motion Sickness Susceptibility Questionnaire Short (VIMSSQ-short).","authors":"Mara Baljan, John F Golding, Heiko Hecht, Behrang Keshavarz","doi":"10.1163/22134808-bja10180","DOIUrl":"https://doi.org/10.1163/22134808-bja10180","url":null,"abstract":"<p><p>Motion sickness is a condition that is characterized by symptoms like dizziness, nausea, or vomiting, especially during transportation or immersive visual experiences such as gaming and virtual reality (VR). Visually Induced Motion Sickness (VIMS) is of particular concern due to its increasing relevance with the rise of immersive technologies. The 6-item version of the Visually Induced Motion Sickness Susceptibility Questionnaire (VIMSSQ-short), a modified version of the established Motion Sickness Susceptibility Questionnaire, was developed to quickly assess individual susceptibility to VIMS. This study focuses on its translation into German and the validation of this German-language version of the VIMSSQ-short. The translation process included independent translations by experts and a back-translation to identify and resolve discrepancies. An online survey collected normative data from 200 participants, revealing a mean score of 5.85 (SD = 3.31) for the translated VIMSSQ-short. The results indicated significant gender differences, with females exhibiting higher susceptibility scores than males. Additionally, a significant negative correlation between age and susceptibility was observed. An experimental study involving 70 participants further confirmed these findings in terms of mean scores, gender, and age. Additionally, the findings demonstrate that higher VIMSSQ scores predict symptom severity during VR exposure ( r s = 0.58 with Simulator Sickness Questionnaire total score). Overall, the translated VIMSSQ-short shows promise as a reliable tool for assessing VIMS susceptibility in German-speaking populations, contributing to the understanding of motion sickness in immersive environments. The identification of susceptible individuals is relevant both for practical applications (e.g. in the training of emergency forces) and in experimental settings for the randomization or screening of participants.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-18"},"PeriodicalIF":1.5,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145679254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Role of Multisensory Integration in Postural Stability Regulation among 5- to 7-Year-Old Children
Pub Date: 2025-11-24 | DOI: 10.1163/22134808-bja10178
Ruqiang Liu, Shili Zhao, Rulei Zhang, Juan Yang
While postural control in preschool children relies on visual, proprioceptive, and vestibular inputs, the hierarchical contribution of multisensory integration - particularly the role of tactile feedback - remains undercharacterised. Few studies have systematically mapped the developmental trajectory of sensory weighting strategies in early childhood. We randomly selected 128 preschool children from a kindergarten in Suzhou in June 2025. Sensors measured the angular velocity modulus (ω) of body centre-of-mass sway under eight conditions. Paired-samples t-tests and one-way repeated-measures analysis of variance were used to analyze differences in ω across sensory integrations. The ω of vestibular integration with proprioception was smaller than with visual or tactile input (P < 0.001). The ω of vestibular integration with visual-proprioceptive input was smaller than with proprioceptive-tactile or visual-tactile input (P < 0.001). The ω decreased significantly (P < 0.001) when proprioception was integrated with all sensory conditions, and under vestibular integration with visual, vestibular-tactile, and vestibular-proprioceptive inputs. No significant changes (P > 0.05) occurred under vestibular-tactile-proprioceptive integration with vision. The ω decreased significantly (P < 0.001) under vestibular-tactile integration. Proprioceptive integration consistently reduced postural sway, with vestibular-proprioceptive coupling demonstrating the strongest stabilizing effect, followed by visual integration. Tactile input enhanced stability only in the absence of visual and proprioceptive cues, highlighting its compensatory role in sensory-deprived developmental contexts.
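A minimal sketch of the sway measure described above, assuming a three-axis angular-velocity sensor at the body's centre of mass; the sampling rate and data are placeholders, not the study's recordings:

```python
# Sketch: per-sample angular velocity modulus from a three-axis gyroscope.
import numpy as np

def omega_modulus(gyro_xyz: np.ndarray) -> np.ndarray:
    """Modulus of the angular velocity vector: shape (n_samples, 3) -> (n_samples,)."""
    return np.linalg.norm(gyro_xyz, axis=1)

rng = np.random.default_rng(5)
trial = rng.normal(0.0, 0.1, size=(3000, 3))   # e.g., 30 s at 100 Hz (assumed)
print(f"mean omega = {omega_modulus(trial).mean():.4f} rad/s")
```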
{"title":"The Role of Multisensory Integration in Postural Stability Regulation among 5- to 7-Year-Old Children.","authors":"Ruqiang Liu, Shili Zhao, Rulei Zhang, Juan Yang","doi":"10.1163/22134808-bja10178","DOIUrl":"10.1163/22134808-bja10178","url":null,"abstract":"<p><p>While postural control in preschool children relies on visual, proprioceptive, and vestibular inputs, the hierarchical contribution of multisensory integration - particularly the role of tactile feedback - remains undercharacterised. Few studies have systematically mapped the developmental trajectory of sensory weighting strategies in early childhood. We randomly selected 128 preschool children from a kindergarten in Suzhou in June 2025. Sensors measured the angular velocity modulus (ω) of the body centre of mass shaking under eight conditions. Paired-samples t-test and one-way repeated-measures analysis of variance were used to analyze differences in ω across sensory integrations. The ω of vestibular integration with proprioception was smaller than with visual or tactile senses ( P < 0.001). The ω of vestibular integration with visual-proprioception was smaller than that with proprioception-tactile senses or visual-tactile senses ( P < 0.001). The ω significantly decreased ( P < 0.001) when proprioception was integrated with all sensory conditions and under vestibular integration with visual, vestibular-tactile, and vestibular-proprioceptive inputs. No significant changes ( P > 0.05) occurred under vestibular-tactile-proprioception integration with visual. The ω significantly decreased ( P < 0.001) under vestibular-tactile integration. Proprioceptive integration consistently reduced postural sway, with vestibular-proprioceptive coupling demonstrating the strongest stabilizing effect, followed by visual integration. Tactile input only enhanced stability in the absence of visual and proprioceptive cues, highlighting its compensatory role in sensory-deprived developmental contexts.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"47-60"},"PeriodicalIF":1.5,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145607105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Tactile Perception: Similarities and Differences between Sighted and Blind Individuals
Pub Date: 2025-11-17 | DOI: 10.1163/22134808-bja10177
Yung-Ting Chen
This study explores how visual experience and sensory compensation shape tactile perception in two groups: sighted participants (SP) and visually impaired participants (VIP). A total of 100 participants (60 SP, 40 VIP) evaluated 37 material samples using a semantic differential scale. The study proposes a novel methodological approach: visually impaired participants evaluated tactile materials using exclusively vision-based imagery descriptors. Despite group differences, both groups shared core perceptual invariants - specifically roughness, hardness, and temperature - essential for consistent and reliable tactile interaction. However, VIPs demonstrated heightened sensitivity to fine textures, likely due to sensory compensation mechanisms involving enhanced tactile acuity and neural plasticity. In contrast, SPs relied predominantly on macroscopic tactile cues. Clarifying these invariants and compensatory strategies is critical for inclusive and universally accessible product design, enabling products to be precisely tailored to users' sensory abilities. These findings offer significant societal value by providing concrete guidelines for improving tactile-based accessibility and enhancing everyday tactile navigation, interaction, and overall quality of life, especially for visually impaired populations.
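As a hedged illustration (not the author's actual analysis) of how such group-level semantic-differential data can be compared, the sketch below correlates mean rating profiles across the two groups; the number of scales and all ratings are hypothetical:

```python
# Sketch: mean semantic-differential profiles per group, then their correlation.
import numpy as np

rng = np.random.default_rng(6)
n_materials, n_scales = 37, 12                            # 12 scales is an assumption
sp = rng.normal(0, 1, size=(60, n_materials, n_scales))   # sighted ratings
vip = rng.normal(0, 1, size=(40, n_materials, n_scales))  # visually impaired ratings

sp_profile = sp.mean(axis=0).ravel()    # mean rating per material x scale cell
vip_profile = vip.mean(axis=0).ravel()
r = np.corrcoef(sp_profile, vip_profile)[0, 1]
print(f"cross-group profile correlation: r = {r:.2f}")
```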
{"title":"Exploring Tactile Perception: Similarities and Differences between Sighted and Blind Individuals.","authors":"Yung-Ting Chen","doi":"10.1163/22134808-bja10177","DOIUrl":"10.1163/22134808-bja10177","url":null,"abstract":"<p><p>This study explores how visual experience and sensory compensation shape tactile perception in two groups of individuals: sighted (SP) and visually impaired (VIP). A total of 100 participants (60 SP, 40 VIP) evaluated 37 material samples using a semantic differential scale. This study proposes a novel methodological approach: visually impaired participants evaluated tactile materials using exclusively visually based imagery descriptors. Despite group differences, both groups shared core perceptual invariants, specifically roughness, hardness, and temperature, essential for consistent and reliable tactile interaction. However, VIPs demonstrated heightened sensitivity to fine textures, likely due to sensory compensation mechanisms involving tactile acuity enhancement and neural plasticity. In contrast, SPs relied predominantly on macroscopic tactile cues. Clarifying these invariants and compensatory strategies is critical for inclusive and universally accessible product design, enabling products to be precisely tailored to users' sensory abilities. These findings provide significant societal value by offering concrete guidelines for improving tactile-based accessibility and enhancing everyday tactile navigation, interaction, and overall quality of life, especially for visually impaired populations.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"139-202"},"PeriodicalIF":1.5,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145530950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}