When designing a haptic interface, simplicity is crucial to avoid the negative effects of excessive weight and complexity. Using multimodal information, exploiting haptic illusions, and providing context are known strategies for creating simpler interfaces. We have previously proposed single overlapped vibrotactile stimulation (SOVS) for presenting spatiotemporal tactile perception, a method that simultaneously presents overlapped waveforms to multiple body parts. In that work, accelerations measured while a person dribbled a basketball, recorded by accelerometers positioned on the index finger and on the floor, were overlapped and presented as stimuli. When the stimuli were presented simultaneously to the hand and the feet, participants reported a dribbling sensation, as if an imaginary ball were moving back and forth between the hand and the feet. This demonstrated the potential to eliminate the need for time synchronization and to reduce the number of required channels, ultimately leading to simple haptic interfaces that enhance an immersive experience. In this paper, we aim to investigate the key factor behind the perception of SOVS using simple vibrotactile stimuli. The first experiment measured the occurrence rate of the dribbling sensation for different combinations of prepared stimuli; the results show that combining two different input amplitudes is crucial for the occurrence of the phenomenon. The second experiment assessed how realistic each stimulus felt when presented to the hand and the feet separately. The results show that for the hand, perceived realism corresponded to the strength of the input amplitude, whereas for the feet the second-strongest input amplitude was perceived as most realistic. This suggests that when a combination consists of duplicate input amplitudes and/or amplitudes with low perceived realism, the occurrence rate tends to decrease.
Full article: "Pseudo-Dribbling Experience Using Single Overlapped Vibrotactile Stimulation Simultaneously to the Hand and the Feet," by Takumi Kuhara, Kakagu Komazaki, Junji Watanabe, and Yoshihiro Tanaka. Multisensory Research, pp. 1-20. Pub Date: 2025-09-24. DOI: 10.1163/22134808-bja10157.
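The core SOVS idea above, sending one overlapped waveform to several body parts at once, can be sketched in a few lines. This is an illustrative sketch only: the abstract does not specify the mixing procedure, so the summation, peak normalization, and all names below (overlap_waveforms, hand_acc, floor_acc) are assumptions.

```python
import numpy as np

def overlap_waveforms(hand_acc, floor_acc):
    """Overlap two acceleration traces into a single stimulus waveform.

    Illustrative only: SOVS overlaps waveforms measured at the finger and
    the floor; here we simply zero-pad, sum, and renormalize.
    """
    n = max(len(hand_acc), len(floor_acc))
    # Zero-pad the shorter trace so the two can be summed sample-by-sample.
    a = np.pad(np.asarray(hand_acc, dtype=float), (0, n - len(hand_acc)))
    b = np.pad(np.asarray(floor_acc, dtype=float), (0, n - len(floor_acc)))
    mixed = a + b
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 0 else mixed  # normalize to [-1, 1]

# Toy example: a fast "hand contact" burst plus a slower "floor bounce" burst.
t = np.linspace(0, 1, 1000, endpoint=False)
hand = np.exp(-30 * t) * np.sin(2 * np.pi * 250 * t)
floor = np.exp(-10 * t) * np.sin(2 * np.pi * 60 * t)
stimulus = overlap_waveforms(hand, floor)  # same waveform sent to hand and feet
```

Because the identical waveform is delivered to both body sites, no per-channel synchronization is needed, which is the simplification the abstract highlights.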
Pub Date: 2025-09-24. DOI: 10.1163/22134808-bja10160
Yurika Tsuji, Yuki Nishiguchi, Akari Noda, Shu Imaizumi
Autistic individuals experience temporal integration difficulties in some sensory modalities, which may be related to imagination difficulties. In this study, we tested the hypotheses that, among Japanese university students in the general population, (1) higher autistic traits and (2) greater imagination difficulties are associated with lower performance in tasks requiring temporal integration. Two tasks were used to assess temporal integration abilities: a speech-in-noise test using noise with temporal dips in the auditory modality, and a slit-viewing task in the visual modality. The results showed that low performance in the speech-in-noise test was related to autistic traits and to some aspects of imagination difficulties, whereas performance in the slit-viewing task was related to neither autistic traits nor imagination difficulties. The ability to temporally integrate fragments of auditory information is expected to underlie performance in perceiving speech in noise with temporal dips. Difficulty in perceiving sensory information as a single unified percept using priors may therefore cause difficulty in temporally integrating auditory information and perceiving speech in noise. Furthermore, structural equation modeling suggests that imagination difficulties are linked to difficulties in perceiving speech in noise with temporal dips, which in turn are linked to social impairments.
Full article: "Autistic Traits and Temporal Integration of Auditory and Visual Stimuli in the General Population: The Role of Imagination." Multisensory Research, pp. 453-483.
Pub Date: 2025-09-24. DOI: 10.1163/22134808-bja10158
Aurore Zelazny, Thomas Alrik Sørensen
Pitch-color associations have been widely explored in the context of cross-modal correspondences. Previous research indicates that pitch height maps onto lightness, and that high pitches are often associated with yellow and low pitches with blue. However, whether these associations are absolute or relative remains unclear. This study investigated the effect of context on pitch-color associations by presenting seven pitch stimuli (C4-B4) in randomized, ascending, and descending orders. A large sample (N = 6626) was asked to select colors for each pitch using a color wheel. Results revealed that pitch height was linearly mapped onto lightness, with higher pitches associated with lighter colors. Notably, this mapping was influenced by context: ascending sequences produced lighter colors and descending sequences darker colors compared to randomized presentations. Furthermore, lightness associations developed progressively, going from binary to linear as trials progressed. Saturation, on the other hand, did not follow a linear pattern but peaked at mid-range pitches and was not influenced by context. Additionally, compared to randomized presentation, color associations showed a downward shift (i.e., reported for lower pitches) in the ascending presentation, and an upward shift (i.e., reported for higher pitches) in the descending presentation. These findings suggest that pitch-color associations are relative rather than absolute, possibly due to a low ability to categorize pitches in the general population, with lightness appearing to be the primary factor driving color choices. This study contributes to the understanding of associations across sensory modalities, which may be a promising avenue for investigating hidden cognitive processes such as sensory illusions.
Full article: "Pitch-Color Associations are Context-Dependent and Driven by Lightness." Multisensory Research, pp. 403-426.
Pub Date: 2025-08-01. DOI: 10.1163/22134808-bja10153
David A Tovar, Marcus R Watson, David J Lewkowicz, Monica Gori, Micah M Murray, Mark T Wallace
Multisensory integration (MSI) is a core neurobehavioral operation that enhances our ability to perceive, decide, and act by combining information from different sensory modalities. This integrative capability is essential for efficiently navigating complex environments and responding to their multisensory nature. One powerful behavioral benefit of MSI is the speeding of responses. To evaluate this speeding, traditional MSI research often relies on so-called race models, which predict reaction times (RTs) based on the assumption that information from the different sensory modalities is initially processed independently. When observed RTs are faster than those predicted by these models, this indicates true convergence and integration of multisensory information prior to initiation of the motor response. Despite the strong applicability of race models in MSI research, analysis of multisensory RT data often poses challenges for researchers, particularly in managing, interpreting, and modeling large datasets or collections of datasets. To surmount these challenges, we developed a user-friendly graphical user interface (GUI), packaged as a freely available software application that is compatible with both Windows and Mac and requires no programming expertise. This tool simplifies data loading, filtering, and statistical analysis. It allows the calculation and visualization of RTs across different sensory modalities, the performance of robust statistical tests, and the testing of race-model violations. By integrating these capabilities into a single platform, the CART-GUI facilitates MSI analyses and makes them accessible to a wider range of users, from novice researchers to experts in the field. The GUI's user-friendly design and advanced analytical features will allow for valuable insights into the mechanisms underlying MSI and contribute to the advancement of research in this domain.
Full article: "CART: The Comprehensive Analysis of Reaction Times - GUI for Multisensory Processes and Race Models." Multisensory Research 38(4-5), pp. 211-230.
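The race-model test described above is well established (Miller's inequality): the race model bounds the multisensory CDF by F_AV(t) <= F_A(t) + F_V(t), and observed CDF values above that bound indicate integration. The sketch below illustrates that logic on simulated data; it is not the CART implementation, and the function and variable names are assumptions.

```python
import numpy as np

def race_model_violation(rt_audio, rt_visual, rt_av,
                         quantiles=np.linspace(0.05, 0.95, 10)):
    """Test Miller's race-model inequality at a set of RT quantiles.

    Returns observed_CDF - bound at each quantile of the multisensory RTs;
    positive values indicate race-model violations (evidence of integration).
    """
    # Evaluate all CDFs at the quantiles of the multisensory RT distribution.
    t = np.quantile(rt_av, quantiles)
    cdf = lambda rts, times: np.searchsorted(np.sort(rts), times,
                                             side="right") / len(rts)
    bound = np.minimum(cdf(rt_audio, t) + cdf(rt_visual, t), 1.0)  # Miller's bound
    observed = cdf(rt_av, t)
    return observed - bound

# Toy data: multisensory RTs much faster than either unisensory condition,
# so early quantiles should violate the bound.
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 40, 500)   # audio-only RTs (ms)
rt_v = rng.normal(440, 40, 500)   # visual-only RTs (ms)
rt_av = rng.normal(340, 30, 500)  # audiovisual RTs (ms)
violations = race_model_violation(rt_a, rt_v, rt_av)
```

Plotting `violations` against the quantiles reproduces the familiar race-model-violation curve; a tool like the CART-GUI wraps exactly this kind of computation behind a point-and-click interface.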
Pub Date: 2025-07-15. DOI: 10.1163/22134808-bja10154
Irene Petrizzo, Guido Marco Cicchini, David Charles Burr, Giovanni Anobile
Recently, analysis of individual differences in performance has provided evidence supporting the existence of sensorimotor number mechanisms. The individual-differences technique assumes that performance for stimuli processed by the same mechanism should be more correlated across individuals than performance for stimuli processed by different mechanisms. Here we replicated this finding and generalized the results to other sensory modalities. We measured performance of the same participants on three numerical tasks: a sensorimotor task (series of actions), a temporal numerosity task (series of flashes), and a spatial numerosity task (array of dots). We then searched for tuning selectivity within each task and between task pairs by analysing patterns of correlation between tested numerosities. Correlations within each task showed tuning selectivity in all three cases, with high positive correlations for nearby target numbers that decreased with numerical distance, providing psychophysical and physiological evidence for the existence of multisensory numerosity channels. Cross-task correlations also suggested shared tuning between sensorimotor and temporal visual numerosity, pointing to channels responsible for performance in both visual and motor temporal number tasks. However, no shared tuning emerged between spatial visual numerosity and the other two tasks, suggesting partially different patterns of encoding for temporal and spatial numerosity. Taken together, our results provide evidence for a similar functional architecture across the three tasks tested here, but also imply that there is no full overlap of shared resources between numerosity domains, suggesting at least partially separate encoding mechanisms.
Full article: "Multisensory Number Channels Derived from Individual Differences." Multisensory Research, pp. 383-402.
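The individual-differences logic above, with correlations that are high for nearby numerosities and fall off with numerical distance, amounts to inspecting a between-participant correlation matrix computed across the tested numerosities. The sketch below simulates two overlapping channels to illustrate the expected pattern; it is not the authors' analysis code, and all names and parameters are assumptions.

```python
import numpy as np

def channel_correlation_matrix(perf):
    """Between-participant correlations of performance across numerosities.

    perf: (n_participants, n_numerosities) array of performance scores.
    Shared tuning predicts high correlations near the diagonal that decay
    with numerical distance.
    """
    return np.corrcoef(perf, rowvar=False)  # (n_numerosities, n_numerosities)

# Toy data: two overlapping channels ("low" and "high" numerosities), with
# per-participant channel gains plus measurement noise.
rng = np.random.default_rng(1)
n = 50
low, high = rng.normal(size=(2, n))            # per-participant channel gains
numerosities = np.arange(2, 11)                # tested numerosities 2..10
w = (numerosities - 2) / 8                     # weight shifting low -> high
perf = (np.outer(low, 1 - w) + np.outer(high, w)
        + 0.3 * rng.normal(size=(n, len(numerosities))))
corr = channel_correlation_matrix(perf)
```

In this simulation, neighboring numerosities (e.g., 2 vs 3) correlate strongly across participants while distant ones (2 vs 10) do not, which is the diagnostic pattern for tuned channels.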
Pub Date: 2025-06-12. DOI: 10.1163/22134808-bja10152
Lara A Coelho, Claudia L R Gonzalez
Despite constantly performing actions with their hands, healthy individuals display distorted hand representations. These distortions have been found in a body representation called 'the body model', which plays a fundamental role in position sense. A growing number of studies show that changes in this representation may optimize performance in certain skills (e.g., in magicians or baseball players). This has led to the hypothesis that the distortions may facilitate our actions. One highly trained group that relies on an accurate sense of finger position is piano players. However, musicians had yet to be studied with the body model task. We therefore recruited a group of expert piano players (average practice time 12.85 h/week; average years playing 16.22 ± 3.6) and an age- and sex-matched control group. We hypothesized that piano players would have a more accurate hand representation, as precise knowledge of finger location is essential for skilled piano performance. Our results showed that piano players were significantly more accurate than controls at estimating hand width; in fact, their estimates of this measure did not differ from their physical hand size. This supports our hypothesis and suggests that the need to localize the fingertips accurately when playing may result in a more accurate estimate of hand width in the body model task. There was, however, no group difference for finger length: both the piano and control groups significantly underestimated this measure. This result may reflect the typical position of the hands while playing piano, as the fingers are kept curved to aid proper technique. Taken together, our results support the hypothesis that distortions may in fact facilitate our actions.
Full article: "Long-term musical training modulates the body model." Multisensory Research, pp. 369-382.
Pub Date: 2025-05-26. DOI: 10.1163/22134808-bja10149
John F Golding, Behrang Keshavarz
The short version of the Visually Induced Motion Sickness Susceptibility Questionnaire (VIMSSQ-short) was designed to estimate an individual's susceptibility to motion sickness caused by exposure to visual motion, for instance when using smartphones, simulators, or Virtual Reality. The goal of the present paper was to establish normative VIMSSQ-short data for men and women based on online surveys and to compare these results with findings from previously published work. VIMSSQ-short data from 920 participants were collected across four online surveys. In addition, the relationship with other relevant constructs, such as susceptibility to classic motion sickness (via the Motion Sickness Susceptibility Questionnaire, MSSQ), Migraine, Dizziness, and Syncope, was explored. Normative data for the VIMSSQ-short showed a mean score of 7.2 (SD = 4.2) and a median of 7, with good test reliability (Cronbach's alpha = 0.80). No significant difference between men and women was found. The VIMSSQ-short correlated significantly with the MSSQ (r = 0.55), Migraine (r = 0.48), Dizziness (r = 0.35), and Syncope (r = 0.31). Exploratory factor analysis of all variables suggested two latent variables: nausea-related and oculomotor-related. Norms from this study were consistent with the only other large online survey, but average VIMSSQ-short values were lower in smaller studies of participants volunteering for cybersickness experiments, perhaps reflecting self-selection bias. The VIMSSQ-short provides good reliability with an efficient compromise between length and validity. It can be used alone or with other questionnaires, the most useful being the MSSQ and the Migraine Screening Questionnaire.
Full article: "Norms and Correlations of the Visually Induced Motion Sickness Susceptibility Questionnaire Short (VIMSSQ-short)." Multisensory Research, pp. 1-22.
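The reliability figure quoted above (Cronbach's alpha = 0.80) comes from the standard alpha formula, which can be computed directly from an item-score matrix. A minimal sketch of that formula follows, with a toy check that perfectly parallel items yield alpha = 1; the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    Standard formula: alpha = k/(k-1) * (1 - sum(item variances) / var(total)).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy check: three perfectly parallel items give alpha = 1.
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = np.column_stack([base, base, base])
print(round(cronbach_alpha(perfect), 3))  # prints 1.0
```

Real questionnaire data, like the 920-respondent VIMSSQ-short matrix, would simply be passed in place of the toy array.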
Pub Date: 2025-05-23. DOI: 10.1163/22134808-bja10151
Natalee J von Keyserling, Diana K Sarko
Traditionally, systems neuroscience has focused on singular sensory systems operating in near isolation, ignoring the complexity of the brain's inherent ability to integrate multiple sensory modalities in a symphony of signals that creates our perception of the world around us. The Merging of the Senses has been integral in fueling the exponential growth of the multisensory field, though there is still a wealth of discoveries to be made. Here, we highlight the naked mole-rat as an animal model for an understudied body region that may reveal robust multisensory influences: the teeth. We propose neural and behavioral experiments for evaluating the multisensory underpinnings related to the teeth and how a multisensory perspective can be used to assess plasticity following tooth loss.
{"title":"Multisensory Integration and Orofacial Structures: The Potential for Visual and Auditory Modalities to Rescue Diminished Tactile Inputs Following Tooth Loss.","authors":"Natalee J von Keyserling, Diana K Sarko","doi":"10.1163/22134808-bja10151","DOIUrl":"10.1163/22134808-bja10151","url":null,"abstract":"<p><p>Traditionally, systems neuroscience has focused on singular sensory systems operating in near isolation, ignoring the complexity of the brain's inherent ability to integrate multiple sensory modalities in a symphony of signals that creates our perception of the world around us. The Merging of the Senses has been integral in fueling the exponential growth of the multisensory field, though there are still a wealth of discoveries to be made. Here, we highlight the naked mole-rat as an animal model for an understudied body region that may reveal robust multisensory influences: the teeth. We propose neural and behavioral experiments for evaluating the multisensory underpinnings related to the teeth and how a multisensory perspective can be used to assess plasticity following tooth loss.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"153-179"},"PeriodicalIF":1.5,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-05-22DOI: 10.1163/22134808-bja10150
Lihan Chen
We introduce how 'the rule of thumb' of multisensory integration, which was proposed in the seminal book The Merging of the Senses by Stein and Meredith in 1993, inspired the empirical research work conducted at the Multisensory Lab, Peking University (China), over the last 15 years. We also outline potential research trends in the multisensory research field.
{"title":"From Exploration to Integration: 15 Years of Multisensory Research at Peking University.","authors":"Lihan Chen","doi":"10.1163/22134808-bja10150","DOIUrl":"10.1163/22134808-bja10150","url":null,"abstract":"<p><p>We introduce how 'the rule of thumb' of multisensory integration, which was proposed in the seminal book The Merging of the Senses by Stein and Meredith in 1993, inspired the empirical research work conducted at Multisensory lab, Peking University (China) for the last 15 years. We also outline the potential research trends in the multisensory research field.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"255-271"},"PeriodicalIF":1.5,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-05-19DOI: 10.1163/22134808-bja10148
Antoine Demers, Simon Grondin
Several studies have investigated the influence of auditory and visual sensory modalities on the variability and perceived duration of brief time intervals. However, few studies have investigated this influence when the two intervals to be discriminated share the same stimulus, and none of these have included the tactile modality. The aim of the present study was to investigate, in multimodal conditions, the ability to discriminate two adjacent intervals, using an equisection and adjustment method. Participants had to adjust the second of three brief successive signals marking two empty intervals until they were subjectively perceived as equal. The experiment included nine modality conditions, and the intervals between Markers 1 and 3 lasted 0.5, 1, 1.5, or 2 s (four standard conditions). The results show that the adjustment is better (lower variability) with three auditory (A) than with three visual (V) or tactile (T) markers, and these three conditions are better than when Marker 2 differs from Markers 1 and 3 (all intermodal conditions). Differences also emerged in the perceived duration of intermodal conditions. In TVT and VTV conditions, intervals marked by a tactile-visual (TV) sequence are perceived as longer than VT intervals, and in AVA and VAV conditions AV intervals are perceived as longer than VA intervals. Finally, AT intervals are perceived as longer than TA intervals, but only in the short standard conditions. In addition to replicating the classical variability increase when short intermodal intervals are used, the study shows that the speed of processing of a visual signal influences perceived duration.
{"title":"Studying the Processing of Multimodal Brief Temporal Intervals with an Equisection (Bisection) Task.","authors":"Antoine Demers, Simon Grondin","doi":"10.1163/22134808-bja10148","DOIUrl":"10.1163/22134808-bja10148","url":null,"abstract":"<p><p>Several studies have investigated the influence of auditory and visual sensory modalities on the variability and perceived duration of brief time intervals. However, few studies have investigated this influence when the two intervals to be discriminated share the same stimulus, and none of these have included the tactile modality. The aim of the present study was to investigate, in multimodal conditions, the capability to discriminate two adjacent intervals, using an equisection and adjustment method. Participants had to adjust the second of three brief successive signals marking two empty intervals until they were subjectively perceived as equal. The experiment included nine modality conditions and intervals between Markers 1 and 3 lasted 0.5, 1, 1.5, or 2 s (four standard conditions). The results show that the adjustment is better (lower variability) with three auditory (A) than with three visual (V) or tactile (T) markers, and these three conditions are better than when Marker 2 differs from Markers 1 and 3 (all intermodal conditions). Differences also emerged in the perceived duration of intermodal conditions. In TVT and VTV conditions, intervals marked by a tactile-visual (TV) sequence are perceived as longer than VT intervals, and in AVA and VAV conditions AV intervals are perceived as longer than VA intervals. Finally, AT intervals are perceived as longer than TA intervals, but only in the short standard conditions. 
In addition to replicating the classical variability increase when short intermodal intervals are used, the study shows that the speed of processing of a visual signal influences perceived duration.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"77-122"},"PeriodicalIF":1.8,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
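The equisection measure described above reduces to simple arithmetic: the participant moves Marker 2 until the two empty intervals feel equal, and the distance between the adjusted marker and the physical midpoint is the constant error. A minimal sketch of that computation, with illustrative values rather than the study's data:

```python
# Hedged sketch of the arithmetic behind an equisection (bisection) task.
# Marker times are in seconds; the values used below are made up.
def equisection_error(t1, t2_adjusted, t3):
    """Return (interval1, interval2, constant_error) for three marker onsets.
    A positive constant error means Marker 2 was placed after the midpoint."""
    interval1 = t2_adjusted - t1
    interval2 = t3 - t2_adjusted
    midpoint = (t1 + t3) / 2
    constant_error = t2_adjusted - midpoint
    return interval1, interval2, constant_error

# A 2-s standard (Markers 1 and 3 at 0.0 s and 2.0 s) where a hypothetical
# participant settled on Marker 2 at 1.08 s: the first interval is judged
# equal to the second even though it is physically 0.16 s longer.
print(equisection_error(0.0, 1.08, 2.0))
```

Averaging the constant error across trials gives the perceived-duration bias for a modality sequence (e.g., TV vs VT), while its trial-to-trial spread gives the variability measure compared across conditions.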