Is It Different to Touch Oneself Than to Touch Others? A Scientific Journey with Vincent Hayward
Pub Date : 2025-10-21 | DOI: 10.1163/22134808-bja10170
Malika Auvray, Louise P Kirsch
In this article, we wish to share a scientific journey with our colleague and dear friend Vincent Hayward. The question of the extent to which touching one's own skin differs from touching someone else's led us to many experimental studies and scientific discoveries, some of which we present here. The journey started with the use of a tactile device to investigate whether the reference frames specific to the hand differ depending on its position, towards or away from oneself. We then developed a technique for recording skin-to-skin touch by means of an accelerometer fixed at a short distance from the touching skin. We used this technique to probe specific parameters involved in skin-to-skin touch, such as speed and pressure, as well as the differences that arise in the signal when touching our own versus someone else's skin. Finally, the same methodology was used to record social touch and convey it at a distance through the auditory channel. Through this short piece we wish to show how Vincent Hayward inspired this new field of research, opening the way to a myriad of applications.
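As an illustrative aside (not from the article): conveying an accelerometer-recorded touch trace through the auditory channel can be as simple as band-pass filtering the vibration signal and writing it out as audio. The sketch below is a minimal Python version under our own assumptions; the sampling rate, filter band, and synthetic input are all illustrative, not the authors' pipeline.

```python
# Hypothetical sketch: rendering a skin-borne vibration trace as audio.
# Sampling rate and the 10-1000 Hz band are assumptions, not the paper's values.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.io import wavfile

FS = 8000  # assumed accelerometer sampling rate (Hz)

def render_touch_as_audio(accel, fs=FS, out="touch.wav"):
    # Keep roughly the band where skin vibrations carry tactile content.
    sos = butter(4, [10, 1000], btype="bandpass", fs=fs, output="sos")
    vib = sosfiltfilt(sos, accel)
    # Normalise to 16-bit range so the trace is audible on playback.
    vib = vib / (np.max(np.abs(vib)) + 1e-12)
    wavfile.write(out, fs, (vib * 32767).astype(np.int16))

# A synthetic 1-s "stroke" (enveloped noise burst) stands in for real data.
t = np.linspace(0, 1, FS, endpoint=False)
accel = np.random.randn(FS) * np.exp(-((t - 0.5) ** 2) / 0.02)
render_touch_as_audio(accel)
```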
{"title":"Is It Different to Touch Oneself Than to Touch Others? A Scientific Journey with Vincent Hayward.","authors":"Malika Auvray, Louise P Kirsch","doi":"10.1163/22134808-bja10170","DOIUrl":"https://doi.org/10.1163/22134808-bja10170","url":null,"abstract":"<p><p>In this article, we wish to share a scientific journey with our colleague and dear friend Vincent Hayward. The question of the extent to which it is different to touch oneself and someone else's skin brought us to many experimental studies and scientific discoveries. We present some of them here. It started with the use of a tactile device to investigate whether the reference frames specific to the hand differs depending on its position, towards or away from oneself. We then developed a technique allowing us to record skin-to-skin touch by means of an accelerator fixed at a short distance from the touching skin. We used this technique to probe specific parameters involved in skin-to-skin touch, such as speed and pressure, as well as the differences that arise in the signal when touching our own versus someone else's skin. Finally, the same methodology was used to record social touch to convey it at a distance through the auditory channel. Through this short piece we wish to show how Vincent Hayward inspired this new field of research, opening to myriads of applications.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-11"},"PeriodicalIF":1.5,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Preceding Flavor Cues Modulate Visual Search via Color-Flavor Associations: Evidence for Top-Down Working-Memory Mechanisms
Pub Date : 2025-10-14 | DOI: 10.1163/22134808-bja10163
Chujun Wang, Yubin Peng, Xiaoang Wan
Although crossmodal interactions between vision and other modalities have been extensively studied, the reverse influence of nonvisual cues on visual processing remains underexplored. Through three experiments, we demonstrate how flavor cues bias visual search via color-flavor associations, with this modulation critically dependent on working-memory engagement. In Experiment 1, participants performed a shape-based visual search task after tasting either a predictive flavor (e.g., the target consistently appeared in red after a strawberry flavor) or a non-predictive flavor (e.g., the target appeared in any of four colors with equal probability after a pineapple flavor). Results showed that only predictive cues biased attention, whereas non-predictive cues had no effect. In Experiment 2, when participants performed a working-memory task, even non-predictive flavor cues shortened reaction times and accelerated fixations on targets appearing in the flavor-associated color. Experiment 3 further generalized these effects to ecologically valid product-search scenarios. Collectively, these findings demonstrate that flavor cues modulate visual search through top-down mechanisms rather than bottom-up attentional capture, highlighting the essential role of working memory in driving this crossmodal attentional bias.
{"title":"Preceding Flavor Cues Modulate Visual Search via Color-Flavor Associations: Evidence for Top-Down Working-Memory Mechanisms.","authors":"Chujun Wang, Yubin Peng, Xiaoang Wan","doi":"10.1163/22134808-bja10163","DOIUrl":"10.1163/22134808-bja10163","url":null,"abstract":"<p><p>Although crossmodal interactions between vision and other modalities have been extensively studied, the reverse influence of nonvisual cues on visual processing remains underexplored. Through three experiments, we demonstrate how flavor cues bias visual search via color-flavor associations, with this modulation critically dependent on working-memory engagement. In Experiment 1, participants performed a shape-based visual search task after tasting either a predictive flavor (e.g., target consistently appeared in red after strawberry flavor) or an unpredictive flavor (e.g., target appeared in any of four colors with equal probability after pineapple flavor). Results showed that only predictive cues biased attention, whereas unpredictive cues had no effect. In Experiment 2, when participants performed a working-memory task, even unpredictive flavor cues shortened reaction times and accelerated fixations on targets appearing in the flavor-associated color. Experiment 3 further generalized these effects to ecologically valid product search scenarios. Collectively, these findings demonstrate that flavor cues modulate visual search through top-down mechanisms rather than bottom-up attentional capture, highlighting the essential role of working memory in driving this crossmodal attentional bias.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"517-542"},"PeriodicalIF":1.5,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the McGurk Effect in Cochlear-Implant Users: A Systematic Review
Pub Date : 2025-10-14 | DOI: 10.1163/22134808-bja10167
Himanshu Verma, Bhanu Shukla, Sanjay Munjal, Amit Agarwal, Naresh K Panda
Speech perception is a neurocognitive process that draws on both auditory and visual modalities to interpret the meaning of spoken utterances. The cohesive integration of visual and auditory information (AV integration) improves speech perception. AV integration can also occur when incongruent auditory and visual information is presented, a phenomenon known as the McGurk effect. The McGurk effect underscores the importance of visual articulatory cues (such as place of articulation) alongside auditory information for speech perception. The McGurk effect can be reduced or absent in the deaf population, even with amplification devices or cochlear implants (CI), compared to the normal-hearing population. Nevertheless, cochlear-implanted individuals can integrate auditory and visual information, so the McGurk paradigm can provide substantial evidence for understanding the speech perception mechanism in hard-of-hearing individuals fitted with a CI. The present systematic review therefore used the McGurk paradigm to examine speech perception in CI-fitted individuals. A total of six studies met the inclusion criteria. The review included only behavioral McGurk experiments, excluding studies that used electrophysiological, radiological, or other methods to explore the McGurk effect in CI users. The reviewed evidence indicates that CI users also demonstrate the McGurk effect when they are fitted with a CI at an early age.
{"title":"Exploring the McGurk Effect in Cochlear-Implant Users: A Systematic Review.","authors":"Himanshu Verma, Bhanu Shukla, Sanjay Munjal, Amit Agarwal, Naresh K Panda","doi":"10.1163/22134808-bja10167","DOIUrl":"10.1163/22134808-bja10167","url":null,"abstract":"<p><p>Speech perception is a neurocognitive process that involves both auditory and visual modalities to interpret the meaning of spoken utterances. The cohesive integration of visual and auditory information (AV integration) improves speech perception. AV integration can also occur even when incongruent auditory and visual information is presented, known as the McGurk effect. The McGurk phenomenon signifies the importance of visual articulatory cues (such as place of articulation) and auditory information for speech perception. The McGurk effect can be decreased or absent in the deaf population even after using amplification devices or cochlear implants (CI) compared to the normal-hearing population. However, cochlear-implanted individuals could integrate auditory and visual information. So, the McGurk paradigm can provide substantial evidence to understand the speech perception mechanism in hard-of-hearing individuals fitted with a CI. So, the present systematic review was carried out using the McGurk paradigm to understand the speech perception mechanism in CI-fitted individuals. A total of six studies were included in the present review as per the inclusion criteria. The current review included the studies with behavioral McGurk experiments only, excluding the studies that used electrophysiological, radiological, or other methods to explore the McGurk effect in the CI. From the present systematic review, it can be delineated that CI users also demonstrate the McGurk effect when they are fitted with a CI at an early age.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"325-351"},"PeriodicalIF":1.5,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3-D Reconstruction of Fingertip Deformation During Contact Initiation
Pub Date : 2025-10-14 | DOI: 10.1163/22134808-bja10168
Donatien Doumont, Anika R Kao, Julien Lambert, François Wielant, Gregory J Gerling, Benoit P Delhaye, Philippe Lefèvre
Dexterous manipulation relies on tactile feedback from the fingertips, which provides crucial information about contact events, object geometry, interaction forces, friction, and more. Accurately measuring skin deformations during tactile interactions can shed light on the mechanics behind such feedback. To address this, we developed a novel setup using 3-D digital image correlation (DIC) to reconstruct both the bulk deformation and the local surface deformation of the fingertip under natural loading conditions. Here, we studied the local spatiotemporal evolution of the skin surface during contact initiation. We showed that, as soon as contact occurs, the skin surface deforms very rapidly and exhibits high compliance at low forces (<0.05 N). As loading, and thus the contact area, increases, a localized deformation front forms just ahead of the moving contact boundary. Consequently, substantial deformation extending beyond the contact interface was observed, with maximal amplitudes ranging from 5% to 10% at 5 N, close to the border of the contact. Furthermore, we found that friction influences the partial slip caused by these deformations during contact initiation, as previously suggested. Our setup provides a powerful tool for gaining new insights into the mechanics of touch and opens avenues for a deeper understanding of tactile afferent encoding.
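As an illustrative aside (not the authors' pipeline): DIC yields displacement fields on a grid, from which surface strain can be derived by numerical differentiation. The sketch below shows one standard way to do this in Python; the toy radial-compression field and grid spacing are our assumptions, chosen to mimic the 5-10% deformations reported near the contact border.

```python
# Small-strain tensor from in-plane DIC displacement fields u, v.
import numpy as np

def surface_strain(u, v, dx=1.0):
    # u, v: 2-D arrays of x- and y-displacements (same units as dx).
    du_dy, du_dx = np.gradient(u, dx)   # axis 0 = rows (y), axis 1 = cols (x)
    dv_dy, dv_dx = np.gradient(v, dx)
    exx = du_dx                      # normal strain along x
    eyy = dv_dy                      # normal strain along y
    exy = 0.5 * (du_dy + dv_dx)      # shear strain
    areal = exx + eyy                # trace ~ local area change
    return exx, eyy, exy, areal

# Toy field: radial compression toward a contact centre.
n = 64
y, x = np.mgrid[0:n, 0:n] - n / 2
r = np.hypot(x, y) + 1e-9
u = -0.05 * x * np.exp(-r / 20)
v = -0.05 * y * np.exp(-r / 20)
print(surface_strain(u, v)[3].min())  # peak compressive areal strain
```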
{"title":"3-D Reconstruction of Fingertip Deformation During Contact Initiation.","authors":"Donatien Doumont, Anika R Kao, Julien Lambert, François Wielant, Gregory J Gerling, Benoit P Delhaye, Philippe Lefèvre","doi":"10.1163/22134808-bja10168","DOIUrl":"https://doi.org/10.1163/22134808-bja10168","url":null,"abstract":"<p><p>Dexterous manipulations rely on tactile feedback from the fingertips, which provides crucial information about contact events, object geometry, interaction forces, friction, and more. Accurately measuring skin deformations during tactile interactions can shed light on the mechanics behind such feedback. To address this, we developed a novel setup using 3-D digital image correlation (DIC) to both reconstruct the bulk deformation and local surface skin deformation of the fingertip under natural loading conditions. Here, we studied the local spatiotemporal evolution of the skin surface during contact initiation. We showed that, as soon as contact occurs, the skin surface deforms very rapidly and exhibits high compliance at low forces (<0.05 N). As loading and thus the contact area increases, a localized deformation front forms just ahead of the moving contact boundary. Consequently, substantial deformation extending beyond the contact interface was observed, with maximal amplitudes ranging from 5% to 10% at 5 N, close to the border of the contact. Furthermore, we found that friction influences the partial slip caused by these deformations during contact initiation, as previously suggested. Our setup provides a powerful tool to get new insights into the mechanics of touch and opens avenues for a deeper understanding of tactile afferent encoding.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-26"},"PeriodicalIF":1.5,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Dynamic Visual Experiences through Touch
Pub Date : 2025-10-14 | DOI: 10.1163/22134808-bja10161
Charles Spence, Yang Gao
In recent years, there has been an explosion in the number and range of commercial touch-enabled digital devices in society at large. In this narrative review, we critically evaluate the evidence concerning the tactile augmentation of a range of dynamic visual experiences, such as those offered by film, gaming, and virtual reality. We consider the various mechanisms (both diegetic and nondiegetic) that may underlie such cross-modal effects. These include attentional capture, mood induction, ambiguity resolution, and the transmission of semantically meaningful information (e.g., directional cues for navigation) by means of patterned tactile stimulation. By drawing parallels with the literature on olfactory augmentation in the context of live performance, we identify several additional ways in which touch could potentially be used to augment both passive (e.g., cinema) and active (e.g., gaming) media experiences in the future. That said, a number of the technical, financial, and psychological challenges associated with delivering such cross-modal, or multisensory, enhancement effects via tactile augmentation are also highlighted. Finally, we suggest a number of novel lines of future research in this rapidly evolving area of technological innovation.
{"title":"Enhancing Dynamic Visual Experiences through Touch.","authors":"Charles Spence, Yang Gao","doi":"10.1163/22134808-bja10161","DOIUrl":"10.1163/22134808-bja10161","url":null,"abstract":"<p><p>In recent years, there has been an explosion in the number and range of commercial touch-enabled digital devices in society at large. In this narrative review, we critically evaluate the evidence concerning the tactile augmentation of a range of dynamic visual experiences such as those offered by film, gaming, and virtual reality. We consider the various mechanisms (both diegetic and nondiegetic) that may underlie such cross-modal effects. These include attentional capture, mood induction, ambiguity resolution, and the transmission of semantically meaningful information (i.e., such as directional cues for navigation) by means of patterned tactile stimulation. By drawing parallels with the literature on olfactory augmentation in the context of live performance, we identify several additional ways in which touch could potentially be used to augment both passive (e.g., cinema) and active (e.g., gaming) media experiences in the future. That said, a number of the technical, financial, and psychological challenges associated with delivering such cross-modal, or multisensory, enhancement effects via tactile augmentation are also highlighted. Finally, we suggest a number of novel lines of future research in this rapidly evolving area of technological innovation.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"289-324"},"PeriodicalIF":1.5,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Older Adults with Clinically Normal Sensory and Cognitive Abilities Perceive Audiovisual Simultaneity and Temporal Order Differently than Younger Adults
Pub Date : 2025-10-13 | DOI: 10.1163/22134808-bja10162
Stephanie Yung, M Kathleen Pichora-Fuller, Dirk B Walther, Raheleh Saryazdi, Jennifer L Campos
It is well established that individual sensory and cognitive abilities often decline with older age; however, previous studies examining whether multisensory processes and multisensory integration also change with older age have been inconsistent. One possible reason for these inconsistencies is differences across studies in how sensory and cognitive abilities have been characterized and controlled for in older adult participant groups. The current study examined whether multisensory (audiovisual) synchrony perception differs between younger and older adults using audiovisual simultaneity judgement (SJ) and temporal order judgement (TOJ) tasks, and explored whether performance on these audiovisual tasks was associated with unisensory (hearing, vision) and cognitive (global cognition and executive functioning) abilities within clinically normal limits. Healthy younger and older adults completed audiovisual SJ and TOJ tasks. Auditory-only and visual-only SJ tasks were also completed to assess temporal processing in hearing and vision separately. Older adults completed standardized assessments of hearing, vision, and cognition. Results showed that, compared to younger adults, older adults had wider temporal binding windows in the audiovisual SJ and TOJ tasks and larger points of subjective simultaneity in the TOJ task. No significant associations were found among the unisensory (standard baseline and unisensory SJ), cognitive, or audiovisual (SJ, TOJ) measures. These findings suggest that audiovisual integrative processes change with older age, even when sensory and cognitive abilities are within clinically normal limits.
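As an illustrative aside (not the study's code or data): in a TOJ task, the point of subjective simultaneity and a window-width measure are commonly obtained by fitting a cumulative Gaussian to the proportion of "visual first" responses across stimulus-onset asynchronies (SOAs). A minimal sketch with made-up data:

```python
# Cumulative-Gaussian fit of TOJ data: PSS = 50% point, slope -> JND.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-300, -200, -100, 0, 100, 200, 300])         # ms, audio-lead < 0
p_vfirst = np.array([0.05, 0.12, 0.30, 0.55, 0.80, 0.93, 0.98])  # illustrative

(pss, sigma), _ = curve_fit(cum_gauss, soas, p_vfirst, p0=[0, 100])
jnd = sigma * norm.ppf(0.75)   # 75%-correct criterion, one common choice
print(f"PSS = {pss:.0f} ms, JND = {jnd:.0f} ms")
```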
{"title":"Older Adults with Clinically Normal Sensory and Cognitive Abilities Perceive Audiovisual Simultaneity and Temporal Order Differently than Younger Adults.","authors":"Stephanie Yung, M Kathleen Pichora-Fuller, Dirk B Walther, Raheleh Saryazdi, Jennifer L Campos","doi":"10.1163/22134808-bja10162","DOIUrl":"10.1163/22134808-bja10162","url":null,"abstract":"<p><p>It is well established that individual sensory and cognitive abilities often decline with older age; however, previous studies examining whether multisensory processes and multisensory integration also change with older age have been inconsistent. One possible reason for these inconsistencies may be due to differences across studies in how sensory and cognitive abilities have been characterized and controlled for in older adult participant groups. The current study examined whether multisensory (audiovisual) synchrony perception is different in younger and older adults using the audiovisual simultaneity judgement (SJ) and temporal order judgement (TOJ) tasks and explored whether performance on these audiovisual tasks was associated with unisensory (hearing, vision) and cognitive (global cognition and executive functioning) abilities within clinically normal limits. Healthy younger and older adults completed audiovisual SJ and TOJ tasks. Auditory-only and visual-only SJ tasks were also completed independently to assess temporal processing in hearing and vision. Older adults completed standardized assessments of hearing, vision, and cognition. Results showed that, compared to younger adults, older adults had wider temporal binding windows in the audiovisual SJ and TOJ tasks and larger points of subjective simultaneity in the TOJ task. No significant associations were found among the unisensory (standard baseline and unisensory SJ), cognitive, or audiovisual (SJ, TOJ) measures. These findings suggest that audiovisual integrative processes change with older age, even within clinically normal sensory and cognitive abilities.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"485-515"},"PeriodicalIF":1.5,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal Coherence in Crossmodal Perceptual Binding: Implications for the Design of a Real-Time Multisensory Speech Recognition Algorithm
Pub Date : 2025-10-13 | DOI: 10.1163/22134808-bja10166
Yonghee Oh, Emily Keller, Audie Gilchrist, Kayla Borges, Kelli Meyers
The inputs delivered to different sensory organs provide complementary information about the environment. Many previous studies have demonstrated that adding information from another modality (e.g., visual cues) can improve auditory perception, especially in noisy environments. Understanding temporal asynchrony between different sensory modalities is fundamentally important for processing and delivering multisensory information in real time with minimal delay. The purpose of this study was to quantify the average limit of temporal asynchrony within which multisensory stimuli are likely to be perceptually integrated. Twenty adults participated in simultaneity judgment measurements using 100-ms stimuli in three different sensory modalities (auditory, visual, and tactile), and the test-retest reliability of their simultaneity judgments was verified in three separate tests conducted a week apart. Two crossmodal temporal coherence cues were examined: the temporal binding window (TBW), denoting the time frame within which two sensory modalities are perceptually integrated, and the point of subjective simultaneity (PSS), denoting a perceptual lead of one modality over another. On average, the TBWs were 389 ms (auditory-visual, AV), 324 ms (auditory-tactile, AT), and 299 ms (visual-tactile, VT), and the PSSs were shifted 105 ms toward the visual cue, 16 ms toward the tactile cue, and 77 ms toward the visual cue for the AV, AT, and VT conditions, respectively. Across all three crossmodal pairings, test-retest variability averaged less than 50 ms for the TBW and 30 ms for the PSS. These findings may help specify the allowable time delay for real-time multisensory processing, suggesting temporal parameters for future developments in multisensory hearing-assistive devices.
Haptic Microscopy: Tactile Perception of Small Scales
Pub Date : 2025-10-10 | DOI: 10.1163/22134808-bja10169
Sinan Haliyo
Over three PhD theses co-supervised with Vincent Hayward, we developed a technique to scale up microscale force interactions to a user's hand with near-perfect linear amplification. While this challenge could be approached through robotic teleoperation - using a precise robot manipulator with force sensing controlled via a haptic device - the required bilateral coupling between different physical scales demands extremely large homothetic gains (typically ×10 000 to ×100 000) in both displacement and force. These large gains compromise transparency, as device imperfections and stability requirements mask the faithful perception of microscale phenomena. To overcome this limitation, we developed the concept of haptic microscopy. We designed a complete microscale teleoperation system from the ground up, featuring a custom robotic manipulator and novel haptic device, implementing direct bilateral coupling with pure gains. This electromechanical system successfully amplifies microscale forces several thousand times, enabling operators to better understand the physical landscape they are manipulating. Our paper details the design process for both the microtool and haptic device, and presents experiments demonstrating users' ability to tactilely explore microscale interactions.
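As an illustrative aside (not the authors' implementation): direct bilateral coupling with pure homothetic gains amounts to two scaled channels, position down to the microtool and force up to the hand. The sketch below shows the idea in Python; the device classes are hypothetical stand-ins, not a real API, and a real controller would run this in a fixed-rate loop while managing the stability/transparency trade-off discussed above.

```python
# Pure-gain bilateral coupling between a haptic handle and a microtool.
DISP_GAIN = 1.0e-4   # hand metres -> tool metres (/10,000)
FORCE_GAIN = 1.0e4   # tool newtons -> hand newtons (x10,000)

class StubDevice:
    """Placeholder for a real haptic interface or micromanipulator."""
    def __init__(self):
        self.position = 0.0   # m
        self.force = 0.0      # N
    def read_position(self): return self.position
    def read_force(self): return self.force
    def set_position(self, x): self.position = x
    def set_force(self, f): self.force = f

def bilateral_step(haptic, microtool):
    # Position channel: command the tool with scaled-down hand motion.
    microtool.set_position(haptic.read_position() * DISP_GAIN)
    # Force channel: reflect the scaled-up microscale force to the hand.
    haptic.set_force(microtool.read_force() * FORCE_GAIN)

haptic, tool = StubDevice(), StubDevice()
haptic.position = 0.02   # user moves the handle 2 cm ...
tool.force = 1.0e-6      # ... while the tool senses 1 uN
bilateral_step(haptic, tool)
print(tool.position, haptic.force)   # 2 um commanded, 10 mN rendered
```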
{"title":"Haptic Microscopy: Tactile Perception of Small Scales.","authors":"Sinan Haliyo","doi":"10.1163/22134808-bja10169","DOIUrl":"https://doi.org/10.1163/22134808-bja10169","url":null,"abstract":"<p><p>Over three PhD theses co-supervised with Vincent Hayward, we developed a technique to scale up microscale force interactions to a user's hand with near-perfect linear amplification. While this challenge could be approached through robotic teleoperation - using a precise robot manipulator with force sensing controlled via a haptic device - the required bilateral coupling between different physical scales demands extremely large homothetic gains (typically ×10 000 to ×100 000) in both displacement and force. These large gains compromise transparency, as device imperfections and stability requirements mask the faithful perception of microscale phenomena. To overcome this limitation, we developed the concept of haptic microscopy. We designed a complete microscale teleoperation system from the ground up, featuring a custom robotic manipulator and novel haptic device, implementing direct bilateral coupling with pure gains. This electromechanical system successfully amplifies microscale forces several thousand times, enabling operators to better understand the physical landscape they are manipulating. Our paper details the design process for both the microtool and haptic device, and presents experiments demonstrating users' ability to tactilely explore microscale interactions.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-25"},"PeriodicalIF":1.5,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction to the Special Issue on The Merging of the Senses
Pub Date : 2025-09-24 | DOI: 10.1163/22134808-bja10156
Benjamin A Rowland
The 1993 book The Merging of the Senses has proven to be a profoundly impactful text that has shaped research programs studying the interaction between the senses for the last three decades. The book combines skillful and approachable narration with engaging illustrations and was received with rave reviews on publication as one of the first comprehensive approaches to the subject. It captures the impressive breadth of domains in which multisensory integration impacts the daily life of all animals and promotes a systematic approach to understanding its underlying operation by interrogating the nervous system at multiple levels, from the peripheral organ, through convergence, integration, and decision-making, to effected behavior. Thirty years later, the multiple generations of scientists that have been inspired by the text have built an amazing structure on this foundation, through advancing refinements in theory and experimental technique, investigation of new domains and species, an understanding of the origins, maturation, and plasticity of the process, the translation of biological principles to artificial systems, and discovering new applications of multisensory research in clinical and rehabilitative domains.
{"title":"Introduction to the Special Issue on The Merging of the Senses.","authors":"Benjamin A Rowland","doi":"10.1163/22134808-bja10156","DOIUrl":"10.1163/22134808-bja10156","url":null,"abstract":"<p><p>The 1993 book The Merging of the Senses has proven to be a profoundly impactful text that has shaped research programs studying the interaction between the senses for the last three decades. The book combines skillful and approachable narration with engaging illustrations and was received with rave reviews on publication as one of the first comprehensive approaches to the subject. It captures the impressive breadth of domains in which multisensory integration impacts the daily life of all animals and promotes a systematic approach to understanding its underlying operation by interrogating the nervous system at multiple levels, from the peripheral organ, through convergence, integration, and decision-making, to effected behavior. Thirty years later, the multiple generations of scientists that have been inspired by the text have built an amazing structure on this foundation, through advancing refinements in theory and experimental technique, investigation of new domains and species, an understanding of the origins, maturation, and plasticity of the process, the translation of biological principles to artificial systems, and discovering new applications of multisensory research in clinical and rehabilitative domains.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"38 4-5","pages":"143-152"},"PeriodicalIF":1.5,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tingle-Eliciting Audiovisual Properties of Autonomous Sensory Meridian Response (ASMR) Videos
Pub Date : 2025-09-24 | DOI: 10.1163/22134808-bja10159
Madeleine R Jones, Aurelia Daniels, Kajsa Igelström, Juulia Suvilehto, India Morrison
In autonomous sensory meridian response (ASMR), certain audiovisual stimuli can evoke a range of spontaneous sensations, in particular a pleasant tingling that often originates across the scalp, spreading down the spine toward the shoulders ('tingles'). Major drivers of tingle elicitation in ASMR stimuli are often 'crisp' sounds created by whispering or manipulating an object, as well as social-attentional features such as implied direct attention to the viewer. However, relationships between specific stimulus properties and ASMR-typical subjective responses remain to be fully mapped. In two studies, we therefore sought to isolate specific tingle-eliciting stimulus features by comparing tingle reports for ASMR video clips between ASMR experiencers and control participants. The first study compared intact versus desynchronized video clips to probe whether the presence of audiovisual features would be sufficient to elicit tingles, or whether these features needed to be presented in a coherent sequence. The second study compared clips with filtered and unfiltered audio, demonstrating that 'crisp' sounds had greater tingle efficacy over 'blunted' sounds. Overall, the presence of stimulus features in both synchronized and desynchronized clips was effective in eliciting self-reported subjective responses (tingle frequency), while intact clips involving object manipulation and speech sounds were most effective. An exploratory analysis suggested that viewer-oriented implied attention also influenced tingle ratings. These findings further pinpoint the importance of object and speech sounds in eliciting ASMR tingle responses, supporting the proposition that audiovisual stimulus features implying proximity to the viewer play a key role.
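As an illustrative aside (not the study's stimulus pipeline, whose exact filter is not given here): one plausible way to "blunt" crisp sounds is to attenuate the high frequencies that carry crispness with a low-pass Butterworth filter. The cutoff and the synthetic click train standing in for an ASMR clip below are our assumptions.

```python
# Low-pass "blunting" of a crisp (click-rich) signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.io import wavfile

fs = 44100
crisp = (np.random.rand(fs * 2) < 0.001) * 1.0   # 2 s of sparse clicks ~ "crisp"
sos = butter(6, 2000, btype="lowpass", fs=fs, output="sos")  # 2 kHz cutoff
blunted = sosfiltfilt(sos, crisp)                # high frequencies removed

for name, sig in [("crisp.wav", crisp), ("blunted.wav", blunted)]:
    sig = sig / (np.max(np.abs(sig)) + 1e-12)    # normalise for playback
    wavfile.write(name, fs, (sig * 32767).astype(np.int16))
```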
{"title":"Tingle-Eliciting Audiovisual Properties of Autonomous Sensory Meridian Response (ASMR) Videos.","authors":"Madeleine R Jones, Aurelia Daniels, Kajsa Igelström, Juulia Suvilehto, India Morrison","doi":"10.1163/22134808-bja10159","DOIUrl":"10.1163/22134808-bja10159","url":null,"abstract":"<p><p>In autonomous sensory meridian response (ASMR), certain audiovisual stimuli can evoke a range of spontaneous sensations, in particular a pleasant tingling that often originates across the scalp, spreading down the spine toward the shoulders ('tingles'). Major drivers of tingle elicitation in ASMR stimuli are often 'crisp' sounds created by whispering or manipulating an object, as well as social-attentional features such as implied direct attention to the viewer. However, relationships between specific stimulus properties and ASMR-typical subjective responses remain to be fully mapped. In two studies, we therefore sought to isolate specific tingle-eliciting stimulus features by comparing tingle reports for ASMR video clips between ASMR experiencers and control participants. The first study compared intact versus desynchronized video clips to probe whether the presence of audiovisual features would be sufficient to elicit tingles, or whether these features needed to be presented in a coherent sequence. The second study compared clips with filtered and unfiltered audio, demonstrating that 'crisp' sounds had greater tingle efficacy over 'blunted' sounds. Overall, the presence of stimulus features in both synchronized and desynchronized clips was effective in eliciting self-reported subjective responses (tingle frequency), while intact clips involving object manipulation and speech sounds were most effective. An exploratory analysis suggested that viewer-oriented implied attention also influenced tingle ratings. These findings further pinpoint the importance of object and speech sounds in eliciting ASMR tingle responses, supporting the proposition that audiovisual stimulus features implying proximity to the viewer play a key role.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"427-452"},"PeriodicalIF":1.5,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}