Pub Date: 2025-10-14 | DOI: 10.1163/22134808-bja10168
Donatien Doumont, Anika R Kao, Julien Lambert, François Wielant, Gregory J Gerling, Benoit P Delhaye, Philippe Lefèvre
Dexterous manipulations rely on tactile feedback from the fingertips, which provides crucial information about contact events, object geometry, interaction forces, friction, and more. Accurately measuring skin deformations during tactile interactions can shed light on the mechanics behind such feedback. To address this, we developed a novel setup using 3-D digital image correlation (DIC) to both reconstruct the bulk deformation and local surface skin deformation of the fingertip under natural loading conditions. Here, we studied the local spatiotemporal evolution of the skin surface during contact initiation. We showed that, as soon as contact occurs, the skin surface deforms very rapidly and exhibits high compliance at low forces (<0.05 N). As loading and thus the contact area increases, a localized deformation front forms just ahead of the moving contact boundary. Consequently, substantial deformation extending beyond the contact interface was observed, with maximal amplitudes ranging from 5% to 10% at 5 N, close to the border of the contact. Furthermore, we found that friction influences the partial slip caused by these deformations during contact initiation, as previously suggested. Our setup provides a powerful tool to get new insights into the mechanics of touch and opens avenues for a deeper understanding of tactile afferent encoding.
Title: 3-D Reconstruction of Fingertip Deformation During Contact Initiation. Multisensory Research, pp. 1-26.
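The strain figures quoted above (5% to 10% near the contact border) are spatial gradients of a tracked surface displacement field. A minimal sketch of that computation, with a made-up 1-D displacement profile standing in for actual DIC output, might look like this:

```python
import numpy as np

# Hypothetical surface points (mm) along a line crossing the contact
# border, with tangential displacements (mm) as image correlation might
# track them. The displacement profile is invented for illustration.
x = np.linspace(0.0, 10.0, 11)                       # undeformed positions
u = 0.05 * x * np.exp(-((x - 5.0) / 2.0) ** 2)       # displacement field

# Engineering strain is the spatial gradient of displacement, du/dx.
strain = np.gradient(u, x)

peak = np.max(np.abs(strain))
print(f"peak surface strain: {peak:.1%}")
```

With this synthetic profile the peak strain lands in the single-digit-percent range, the same order as the values reported for real fingertips at 5 N.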
Pub Date: 2025-10-14 | DOI: 10.1163/22134808-bja10161
Charles Spence, Yang Gao
In recent years, there has been an explosion in the number and range of commercial touch-enabled digital devices in society at large. In this narrative review, we critically evaluate the evidence concerning the tactile augmentation of a range of dynamic visual experiences such as those offered by film, gaming, and virtual reality. We consider the various mechanisms (both diegetic and nondiegetic) that may underlie such cross-modal effects. These include attentional capture, mood induction, ambiguity resolution, and the transmission of semantically meaningful information (i.e., such as directional cues for navigation) by means of patterned tactile stimulation. By drawing parallels with the literature on olfactory augmentation in the context of live performance, we identify several additional ways in which touch could potentially be used to augment both passive (e.g., cinema) and active (e.g., gaming) media experiences in the future. That said, a number of the technical, financial, and psychological challenges associated with delivering such cross-modal, or multisensory, enhancement effects via tactile augmentation are also highlighted. Finally, we suggest a number of novel lines of future research in this rapidly evolving area of technological innovation.
Title: Enhancing Dynamic Visual Experiences through Touch. Multisensory Research, pp. 289-324.
Pub Date: 2025-10-13 | DOI: 10.1163/22134808-bja10162
Stephanie Yung, M Kathleen Pichora-Fuller, Dirk B Walther, Raheleh Saryazdi, Jennifer L Campos
It is well established that individual sensory and cognitive abilities often decline with older age; however, previous studies examining whether multisensory processes and multisensory integration also change with older age have been inconsistent. One possible reason for these inconsistencies may be due to differences across studies in how sensory and cognitive abilities have been characterized and controlled for in older adult participant groups. The current study examined whether multisensory (audiovisual) synchrony perception is different in younger and older adults using the audiovisual simultaneity judgement (SJ) and temporal order judgement (TOJ) tasks and explored whether performance on these audiovisual tasks was associated with unisensory (hearing, vision) and cognitive (global cognition and executive functioning) abilities within clinically normal limits. Healthy younger and older adults completed audiovisual SJ and TOJ tasks. Auditory-only and visual-only SJ tasks were also completed independently to assess temporal processing in hearing and vision. Older adults completed standardized assessments of hearing, vision, and cognition. Results showed that, compared to younger adults, older adults had wider temporal binding windows in the audiovisual SJ and TOJ tasks and larger points of subjective simultaneity in the TOJ task. No significant associations were found among the unisensory (standard baseline and unisensory SJ), cognitive, or audiovisual (SJ, TOJ) measures. These findings suggest that audiovisual integrative processes change with older age, even within clinically normal sensory and cognitive abilities.
Title: Older Adults with Clinically Normal Sensory and Cognitive Abilities Perceive Audiovisual Simultaneity and Temporal Order Differently than Younger Adults. Multisensory Research, pp. 485-515.
The inputs delivered to different sensory organs provide complementary information about the environment. Many previous studies have demonstrated that presenting multisensory information (e.g., visual) can improve auditory perception, especially in noisy environments. Understanding temporal asynchrony between different sensory modalities is fundamentally important for processing and delivering multisensory information in real time with minimal delay. The purpose of this study was to quantify the average limit of temporal asynchrony within which multisensory stimuli are likely to be perceptually integrated. Twenty adults participated in simultaneity judgment measurements using 100-ms stimuli in three different sensory modalities (auditory, visual, and tactile), and the test-retest reliability of their simultaneity judgments was verified by three separate tests administered weekly. Two crossmodal temporal coherence cues were examined: the temporal binding window (TBW), denoting the time frame within which two sensory modalities are perceptually integrated, and the point of subjective simultaneity (PSS), denoting a perceptual lead of one modality over the others. On average, the TBWs were 389 ms (auditory-visual, AV), 324 ms (auditory-tactile, AT), and 299 ms (visual-tactile, VT), and the PSSs were shifted 105 ms toward the visual cue, 16 ms toward the tactile cue, and 77 ms toward the visual cue for the AV, AT, and VT conditions, respectively. Across all three crossmodal pairings, test-retest reliability averaged less than 50 ms for the TBW and 30 ms for the PSS.
Title: Temporal Coherence in Crossmodal Perceptual Binding: Implications for the Design of a Real-Time Multisensory Speech Recognition Algorithm. Yonghee Oh, Emily Keller, Audie Gilchrist, Kayla Borges, Kelli Meyers. Pub Date: 2025-10-13 | DOI: 10.1163/22134808-bja10166. Multisensory Research 38(4-5), pp. 273-288.
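Taking the reported group averages at face value, a simultaneity check of the kind a real-time multisensory algorithm might use can be sketched as follows; centring the TBW on the PSS, and the sign convention for the offsets, are simplifying assumptions rather than the authors' procedure:

```python
# Average crossmodal windows from the study (ms): (TBW width, PSS offset).
# Assumed sign convention: positive SOA means the first-named modality leads.
WINDOWS = {
    "AV": (389.0, 105.0),   # auditory-visual
    "AT": (324.0, 16.0),    # auditory-tactile
    "VT": (299.0, 77.0),    # visual-tactile
}

def perceived_simultaneous(pair: str, soa_ms: float) -> bool:
    """Return True if a stimulus-onset asynchrony falls inside the
    temporal binding window, treated here as centred on the point of
    subjective simultaneity (a simplifying assumption)."""
    tbw, pss = WINDOWS[pair]
    return abs(soa_ms - pss) <= tbw / 2.0

print(perceived_simultaneous("AV", 120.0))  # True: inside the AV window
print(perceived_simultaneous("AV", 350.0))  # False: outside it
```

A pipeline built this way would only need to keep crossmodal delays within roughly half a TBW of the PSS, which is the "minimum time delay" reading of the results above.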
Pub Date: 2025-10-10 | DOI: 10.1163/22134808-bja10169
Sinan Haliyo
Over three PhD theses co-supervised with Vincent Hayward, we developed a technique to scale up microscale force interactions to a user's hand with near-perfect linear amplification. While this challenge could be approached through robotic teleoperation - using a precise robot manipulator with force sensing controlled via a haptic device - the required bilateral coupling between different physical scales demands extremely large homothetic gains (typically ×10 000 to ×100 000) in both displacement and force. These large gains compromise transparency, as device imperfections and stability requirements mask the faithful perception of microscale phenomena. To overcome this limitation, we developed the concept of haptic microscopy. We designed a complete microscale teleoperation system from the ground up, featuring a custom robotic manipulator and novel haptic device, implementing direct bilateral coupling with pure gains. This electromechanical system successfully amplifies microscale forces several thousand times, enabling operators to better understand the physical landscape they are manipulating. Our paper details the design process for both the microtool and haptic device, and presents experiments demonstrating users' ability to tactilely explore microscale interactions.
Title: Haptic Microscopy: Tactile Perception of Small Scales. Multisensory Research, pp. 1-25.
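The direct bilateral coupling with pure homothetic gains described above can be sketched in a few lines. The gain values match the order of magnitude quoted (×10,000 to ×100,000), but the class and its names are an illustration, not the actual controller:

```python
from dataclasses import dataclass

@dataclass
class HomotheticCoupling:
    """Pure-gain bilateral coupling between a haptic device (macro side)
    and a micromanipulator (micro side). Gains are illustrative."""
    alpha_d: float = 1.0e4   # displacement gain, macro -> micro
    alpha_f: float = 1.0e4   # force gain, micro -> macro

    def micro_position(self, handle_pos_m: float) -> float:
        # User motion is scaled DOWN to the microscale.
        return handle_pos_m / self.alpha_d

    def user_force(self, micro_force_n: float) -> float:
        # Microscale forces are scaled UP to the user's hand.
        return micro_force_n * self.alpha_f

coupling = HomotheticCoupling()
# A 10 mm handle motion commands a 1 micrometre tool motion ...
print(coupling.micro_position(0.010))
# ... and a 100 nN adhesion force is rendered as about 1 mN at the handle.
print(coupling.user_force(1.0e-7))
```

The point of the "pure gains" design is exactly this transparency: no filtering or virtual dynamics sits between the two sides, so what the hand feels is a linearly amplified copy of the microscale force.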
Pub Date: 2025-09-24 | DOI: 10.1163/22134808-bja10156
Benjamin A Rowland
The 1993 book The Merging of the Senses has proven to be a profoundly impactful text that has shaped research programs studying the interaction between the senses for the last three decades. The book combines skillful and approachable narration with engaging illustrations and was received with rave reviews on publication as one of the first comprehensive approaches to the subject. It captures the impressive breadth of domains in which multisensory integration impacts the daily life of all animals and promotes a systematic approach to understanding its underlying operation by interrogating the nervous system at multiple levels, from the peripheral organ, through convergence, integration, and decision-making, to effected behavior. Thirty years later, the multiple generations of scientists that have been inspired by the text have built an amazing structure on this foundation, through advancing refinements in theory and experimental technique, investigation of new domains and species, an understanding of the origins, maturation, and plasticity of the process, the translation of biological principles to artificial systems, and discovering new applications of multisensory research in clinical and rehabilitative domains.
Title: Introduction to the Special Issue on The Merging of the Senses. Multisensory Research 38(4-5), pp. 143-152.
Pub Date: 2025-09-24 | DOI: 10.1163/22134808-bja10159
Madeleine R Jones, Aurelia Daniels, Kajsa Igelström, Juulia Suvilehto, India Morrison
In autonomous sensory meridian response (ASMR), certain audiovisual stimuli can evoke a range of spontaneous sensations, in particular a pleasant tingling that often originates across the scalp, spreading down the spine toward the shoulders ('tingles'). Major drivers of tingle elicitation in ASMR stimuli are often 'crisp' sounds created by whispering or manipulating an object, as well as social-attentional features such as implied direct attention to the viewer. However, relationships between specific stimulus properties and ASMR-typical subjective responses remain to be fully mapped. In two studies, we therefore sought to isolate specific tingle-eliciting stimulus features by comparing tingle reports for ASMR video clips between ASMR experiencers and control participants. The first study compared intact versus desynchronized video clips to probe whether the presence of audiovisual features would be sufficient to elicit tingles, or whether these features needed to be presented in a coherent sequence. The second study compared clips with filtered and unfiltered audio, demonstrating that 'crisp' sounds had greater tingle efficacy over 'blunted' sounds. Overall, the presence of stimulus features in both synchronized and desynchronized clips was effective in eliciting self-reported subjective responses (tingle frequency), while intact clips involving object manipulation and speech sounds were most effective. An exploratory analysis suggested that viewer-oriented implied attention also influenced tingle ratings. These findings further pinpoint the importance of object and speech sounds in eliciting ASMR tingle responses, supporting the proposition that audiovisual stimulus features implying proximity to the viewer play a key role.
Title: Tingle-Eliciting Audiovisual Properties of Autonomous Sensory Meridian Response (ASMR) Videos. Multisensory Research, pp. 427-452.
When designing a haptic interface, simplicity is crucial to avoid the negative effects of excessive weight and complexity. Using multimodal information, exploiting haptic illusions, and providing context are known ways to create simpler interfaces. We have previously proposed single overlapped vibrotactile stimulation (SOVS) for presenting spatiotemporal tactile percepts, a method that presents the same overlapped waveform simultaneously to multiple body parts. There, accelerations measured while a person dribbled a basketball, recorded by accelerometers positioned on the index finger and on the floor, were overlapped and presented as stimuli. When the stimuli were presented simultaneously to the hand and the feet, participants reported a dribbling sensation, as if an imaginary ball were moving back and forth between the hand and the feet. This demonstrated the potential to eliminate the need for time synchronization and to reduce the number of required channels, ultimately leading to simple haptic interfaces that enhance immersive experiences. In this paper, we investigate the key factor behind the perception of SOVS using simple vibrotactile stimuli. The first experiment measured the occurrence rate of the dribbling sensation for different combinations of prepared stimuli; the results show that combining two different input amplitudes is crucial to the occurrence rate of the phenomenon. The second experiment assessed how realistic each stimulus, presented to the hand and the feet separately, felt to participants.
The results show that for the hand, the perceived reality corresponded to the strength of input amplitude, whereas the second-strongest input amplitude was perceived as most realistic for the feet. This suggests that when the combination consists of duplicate input amplitudes and/or those with low perceived reality, the occurrence rate tends to decrease.
Title: Pseudo-Dribbling Experience Using Single Overlapped Vibrotactile Stimulation Simultaneously to the Hand and the Feet. Takumi Kuhara, Kakagu Komazaki, Junji Watanabe, Yoshihiro Tanaka. Pub Date: 2025-09-24 | DOI: 10.1163/22134808-bja10157. Multisensory Research, pp. 1-20.
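The "single overlapped" stimulus described above, one waveform formed by summing the hand-side and floor-side accelerations and then driven identically at every body site, might be constructed along these lines (synthetic bursts stand in for the recorded accelerations):

```python
import numpy as np

fs = 1000                        # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)

# Synthetic stand-ins for the two recorded accelerations: a strong
# hand-contact burst and a weaker floor-bounce burst, offset in time.
# Note the two different input amplitudes (1.0 vs 0.5), the factor the
# study identifies as crucial for the dribbling percept.
hand = np.exp(-((t - 0.2) / 0.02) ** 2) * np.sin(2 * np.pi * 120 * t)
floor = 0.5 * np.exp(-((t - 0.6) / 0.02) ** 2) * np.sin(2 * np.pi * 80 * t)

# Single overlapped vibrotactile stimulation: sum the waveforms once,
# then drive every actuator (hand and feet) with the same channel.
sovs = hand + floor
sovs /= np.max(np.abs(sovs))     # normalize to actuator range

print(sovs.shape)                # one channel, reused on all body sites
```

Because every actuator plays the same buffer, no per-site synchronization is needed, which is the channel-count and timing saving the abstract highlights.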
Pub Date : 2025-09-24DOI: 10.1163/22134808-bja10160
Yurika Tsuji, Yuki Nishiguchi, Akari Noda, Shu Imaizumi
Autistic individuals experience temporal integration difficulties in some sensory modalities, which may be related to imagination difficulties. In this study, we tested the hypotheses that among Japanese university students in the general population, (1) higher autistic traits and (2) greater imagination difficulties are associated with lower performance in tasks requiring temporal integration. Two tasks were used to assess temporal integration abilities: a speech-in-noise test using noise with temporal dips in the auditory modality and a slit-viewing task in the visual modality. The results showed that low performance in the speech-in-noise test was related to autistic traits and to some aspects of imagination difficulties, whereas performance in the slit-viewing task was related to neither autistic traits nor imagination difficulties. The ability to temporally integrate fragments of auditory information is expected to underlie performance in perceiving speech in noise with temporal dips. Difficulties in perceiving sensory information as a single unified percept using priors may therefore cause difficulties in temporally integrating auditory information and in perceiving speech in noise. Furthermore, structural equation modeling suggested that imagination difficulties are linked to difficulties in perceiving speech in noise with temporal dips, which in turn are linked to social impairments.
{"title":"Autistic Traits and Temporal Integration of Auditory and Visual Stimuli in the General Population: The Role of Imagination.","authors":"Yurika Tsuji, Yuki Nishiguchi, Akari Noda, Shu Imaizumi","doi":"10.1163/22134808-bja10160","DOIUrl":"10.1163/22134808-bja10160","url":null,"abstract":"<p><p>Autistic individuals experience temporal integration difficulties in some sensory modalities that may be related to imagination difficulties. In this study, we tested the hypotheses that among Japanese university students in the general population, (1) higher autistic traits and (2) greater imagination difficulties are associated with lower performance in tasks requiring temporal integration. Two tasks were used to assess their temporal integration abilities: a speech-in-noise test using noise with temporal dips in the auditory modality and a slit-viewing task in the visual modality. The results showed that low performance in the speech-in-noise test was related to autistic traits and some aspects of imagination difficulties, whereas the slit-viewing task was related to neither autistic traits nor imagination difficulties. The ability to temporally integrate fragments of auditory information is expected to be associated with performance in perceiving speech in noise with temporal dips. The difficulties in perceiving sensory information as a single unified percept using priors may cause difficulties in temporally integrating auditory information and perceiving speech in noise. 
Furthermore, structural equation modeling suggested that imagination difficulties are linked to difficulties in perceiving speech in noise with temporal dips, which in turn are linked to social impairments.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"453-483"},"PeriodicalIF":1.5,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-24DOI: 10.1163/22134808-bja10158
Aurore Zelazny, Thomas Alrik Sørensen
Pitch-color associations have been widely explored in the context of cross-modal correspondences. Previous research indicates that pitch height maps onto lightness, and that high pitches are often associated with yellow and low pitches with blue. However, whether these associations are absolute or relative remains unclear. This study investigated the effect of context on pitch-color associations by presenting seven pitch stimuli (C4-B4) in randomized, ascending, and descending orders. A large sample (N = 6626) was asked to select colors for each pitch using a color wheel. Results revealed that pitch height was linearly mapped onto lightness, with higher pitches associated with lighter colors. Notably, this mapping was influenced by context: ascending sequences produced lighter colors and descending sequences darker colors compared to randomized presentations. Furthermore, lightness associations developed progressively, going from binary to linear as trials progressed. Saturation, on the other hand, did not follow a linear pattern but peaked at mid-range pitches and was not influenced by context. Additionally, compared to randomized presentation, color associations showed a downward shift (i.e., colors reported for lower pitches) in the ascending presentation and an upward shift (i.e., colors reported for higher pitches) in the descending presentation. These findings suggest that pitch-color associations are relative rather than absolute, possibly due to the low ability to categorize pitches in the general population, with lightness emerging as the primary factor for color choices. This study contributes to the understanding of associations across sensory modalities, which may be a promising avenue for investigating hidden cognitive processes such as sensory illusions.
{"title":"Pitch-Color Associations are Context-Dependent and Driven by Lightness.","authors":"Aurore Zelazny, Thomas Alrik Sørensen","doi":"10.1163/22134808-bja10158","DOIUrl":"10.1163/22134808-bja10158","url":null,"abstract":"<p><p>Pitch-color associations have been widely explored in the context of cross-modal correspondences. Previous research indicates that pitch height maps onto lightness, and that high pitches are often associated with yellow and low pitches with blue. However, whether these associations are absolute or relative remains unclear. This study investigated the effect of context on pitch-color associations by presenting seven pitch stimuli (C4-B4) in randomized, ascending, and descending orders. A large sample (N = 6626) was asked to select colors for each pitch using a color wheel. Results revealed that pitch height was linearly mapped onto lightness, with higher pitches associated with lighter colors. Notably, this mapping was influenced by context: ascending sequences produced lighter colors and descending sequences darker colors compared to randomized presentations. Furthermore, lightness associations developed progressively, going from binary to linear as trials progressed. Saturation, on the other hand, did not follow a linear pattern but peaked at mid-range pitches and was not influenced by context. Additionally, compared to randomized presentation, color associations showed a downward shift (i.e., colors reported for lower pitches) in the ascending presentation and an upward shift (i.e., colors reported for higher pitches) in the descending presentation. These findings suggest that pitch-color associations are relative rather than absolute, possibly due to the low ability to categorize pitches in the general population, with lightness emerging as the primary factor for color choices. 
This study contributes to the understanding of associations across sensory modalities, which may be a promising avenue for investigating hidden cognitive processes such as sensory illusions.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"403-426"},"PeriodicalIF":1.5,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}