Pub Date: 2025-11-01 · DOI: 10.1163/22134808-bja10139
Charles Spence
The publication of Barry Stein and Alex Meredith's The Merging of the Senses in 1993 was a hugely influential event during the development of my own research career, as an experimental psychologist, as I am sure it was for so many others. At the time, I was embarking on the study of crossmodal links in spatial attention in neurologically normal adult humans. The body of neurophysiological research summarized in Stein and Meredith's book helped to draw people's attention to the importance of spatiotemporal coincidence to spatial behaviours (such as orienting). Cognitive neuroscientists have sometimes struggled to demonstrate similar phenomena in awake humans, while at the same time Bayesian accounts have come to provide a popular alternative explanation for the way in which multisensory integration operates under many conditions. A growing awareness of the importance of considering not only spatiotemporal factors but also the semantic meaning and crossmodal correspondences that help to solve the multisensory binding problem has also emerged in the literature, as has a realization of the importance of context effects. Nevertheless, for those cognitive psychologists, like myself, interested in evaluating the implications for human spatial attention and multisensory perception, the book certainly galvanized a generation of young researchers to move beyond the unisensory approach to psychology that had seemingly become entrenched in the literature.
Reflecting on The Merging of the Senses: A Cognitive Psychology Perspective. Multisensory Research 38(4-5), 231-253.
Pub Date: 2025-11-01 · DOI: 10.1163/22134808-bja10140
Hans Colonius, Adele Diederich
A classic definition of multisensory integration (MI) has been proposed as 'the presence of a (statistically) significant change in the response to a crossmodal stimulus complex compared to unimodal stimuli'. However, this general definition did not result in a broad consensus on how to quantify the amount of MI in the context of reaction time (RT). In this brief note, we argue that numeric measures of reaction times that only involve mean or median RTs do not uncover the information required to fully assess the effect of MI. We suggest instead novel measures that include the entire RT distribution functions. The central role is played by relative entropy (a.k.a. Kullback-Leibler divergence), a concept used in information theory, statistics, and machine learning to measure the (non-symmetric) distance between probability distributions. We provide a number of theoretical examples, but empirical applications and statistical testing are postponed to a later study.
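The relative-entropy idea can be sketched numerically. In the minimal sketch below, the gamma-distributed RT samples, the histogram bin edges, and the epsilon smoothing are all illustrative assumptions, not the authors' method; the point is only that the divergence compares whole RT distributions rather than their means.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discretized D_KL(P || Q) over a common histogram support (non-symmetric)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
# Hypothetical unimodal vs. crossmodal RT samples (ms): the crossmodal
# condition is faster and less variable, as multisensory facilitation predicts.
rt_unimodal = 300 + rng.gamma(shape=4.0, scale=25.0, size=10_000)
rt_crossmodal = 280 + rng.gamma(shape=4.0, scale=20.0, size=10_000)

# Discretize both samples on a shared support before computing the divergence.
bins = np.linspace(200, 800, 61)
p_hist, _ = np.histogram(rt_crossmodal, bins=bins)
q_hist, _ = np.histogram(rt_unimodal, bins=bins)

d = kl_divergence(p_hist.astype(float), q_hist.astype(float))
print(f"D_KL(crossmodal || unimodal) = {d:.3f} nats")
```

Two distributions with the same mean but different spread would still yield a positive divergence, which is what a mean-only RT measure would miss.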
Measuring Multisensory Integration in Reaction Time: Relative Entropy Approach. Multisensory Research 38(4-5), 199-210.
Pub Date: 2025-10-31 · DOI: 10.1163/22134808-bja10171
Robert M Jertberg, Salvador Soto-Faraco, Virginie van Wassenhove, Erik Van der Burg
One of the most extensively studied constructs in multisensory research is the temporal window of integration. Its extent has been variously estimated by measuring the temporal boundaries within which stimuli in different sensory modalities are perceived as simultaneous or elicit multisensory integration effects. However, there is ample evidence that these two approaches produce distinct psychometric outcomes, as the widths of the windows they yield differ even when estimated with equivalent designs and stimuli. In fact, these two estimates can sometimes even be negatively correlated. What is more, the perception of synchrony has been found to be neither necessary nor sufficient for the occurrence of multisensory illusions. This suggests that subjective simultaneity and integration phenomena are dissociable, undermining the conclusions of studies that use them interchangeably. Failing to disentangle the temporal windows in which they occur has led to contradictory findings and considerable confusion in basic research that has started extending to other domains. In clinical studies, for example, this confusion has affected work ranging from neuropsychological conditions (such as schizophrenia, mild cognitive impairment, dyslexia, and autism) to more general health factors (such as obesity and inflammation); in applied research, it is seen in studies using virtual reality, human-computer interfaces, and warning systems in vehicles. In this brief review, we discuss the importance of distinguishing these two constructs. We propose that, while the temporal boundaries of integration phenomena are aptly described as the temporal window of integration (TWI), the temporal boundaries of simultaneity judgements should be referred to as the temporal window of synchrony (TWS).
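How a temporal window of synchrony is read off simultaneity-judgement (SJ) data can be made concrete with a small fitting sketch. The SOAs, response proportions, parameter grids, and 0.5 criterion below are all hypothetical illustrations, not values from this review; a Gaussian-shaped psychometric function is one common modelling choice.

```python
import numpy as np

# Hypothetical SJ data: proportion of "simultaneous" responses at each
# stimulus onset asynchrony (SOA, in ms; negative = audio leads, say).
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.05])

def gauss(soa, amp, pss, sigma):
    """Gaussian-shaped psychometric function centred on the PSS."""
    return amp * np.exp(-((soa - pss) ** 2) / (2 * sigma**2))

# Coarse grid search instead of an optimizer, to keep the sketch dependency-free.
best = None
for amp in np.linspace(0.8, 1.0, 11):
    for pss in np.linspace(-50, 50, 21):        # point of subjective simultaneity
        for sigma in np.linspace(50, 300, 51):  # window-width parameter
            sse = np.sum((gauss(soas, amp, pss, sigma) - p_sync) ** 2)
            if best is None or sse < best[0]:
                best = (sse, amp, pss, sigma)
_, amp, pss, sigma = best

# One convention: the TWS spans the SOAs where the fitted curve exceeds a
# criterion (here 0.5), i.e. pss +/- sigma * sqrt(2 * ln(amp / 0.5)).
half_width = sigma * np.sqrt(2 * np.log(amp / 0.5))
print(f"PSS ~ {pss:.0f} ms, TWS ~ [{pss - half_width:.0f}, {pss + half_width:.0f}] ms")
```

Note that nothing in this procedure measures integration itself, which is exactly why a window estimated this way (TWS) need not coincide with one estimated from illusion rates (TWI).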
Temporal Window of Integration XOR Temporal Window of Synchrony. Multisensory Research, pp. 353-367.
Pub Date: 2025-10-31 · DOI: 10.1163/22134808-bja10172
Charles Spence, Nicola Di Stefano
In this narrative historical review, we both summarize and critically evaluate the experimental literature that has emerged over the last century or so investigating the various ways in which the addition of music influences people's perception of, and response to, film. While 'sensation transference', whereby the mood of the background music carries over to influence the viewer's feeling about the film content, has often been documented, background music can also affect a viewer's visual attention, their interpretation, and their memory for whatever they happen to have seen. The use of sound in film (no matter whether its use is diegetic or non-diegetic - that is, part of the recounted story or not) is interesting inasmuch as simultaneously presented auditory and visual inputs do not necessarily have to be integrated perceptually for crossmodal effects to occur. The literature published to date highlights the multiple ways in which music affects people's perception of semantically meaningful film clips. Nevertheless, despite the emerging body of rigorous scientific research, the professional addition of music to film would still appear to be as much an art as a science. Furthermore, a number of potentially important questions remain unresolved, including the extent to which habituation, sensory overload, distraction, film type (i.e., fictional or informational), and/or context modulates the influence of background music. That said, this emerging body of empirical literature provides a number of relevant insights for those thinking more generally about sensory augmentation and multisensory experience design. Looking to the future, the principles uncovered in this work have growing relevance for emerging domains such as immersive media, virtual reality, multisensory marketing, and the design of adaptive audiovisual systems.
Mood Music: Studying the Impact of Background Music on Film. Multisensory Research, pp. 1-45.
Pub Date: 2025-10-31 · DOI: 10.1163/22134808-bja10175
Charles Spence, Nicola Di Stefano
This paper critically reviews the literature on mid-level audiovisual crossmodal correspondences, that is, those associations that emerge between structured, often dynamic stimuli in vision and audition. Unlike basic correspondences (involving perceptually simple, or unitary, features) or complex ones (involving semantically rich combinations of stimuli), mid-level correspondences occur between temporally and/or spatially patterned stimuli that are perceptually structured but are typically not inherently meaningful (e.g., melodic contours and moving shapes). Taken together, the literature published to date suggests that such correspondences often rely on structural or analogical mappings, reflecting shared spatiotemporal organization across the senses rather than the direct similarity of low-level features or emotional content. Drawing on evidence from developmental, comparative, and experimental studies, we discuss the possible mechanisms underpinning these mappings - including perceptual scaffolding, amodal dimensions, and metaphorical mediation - and outline open questions regarding their perceptual, cognitive, and neural bases. We also evaluate key methodological approaches and provide suggestions for future research aiming to understand the hierarchy of crossmodal correspondences across levels of perceived stimulus complexity. Besides advancing theoretical models, our paper offers practical insights for domains such as multimedia design and crossmodal art.
Mid-Level Audiovisual Crossmodal Correspondences: A Narrative Review. Multisensory Research, pp. 1-39.
Pub Date: 2025-10-21 · DOI: 10.1163/22134808-bja10170
Malika Auvray, Louise P Kirsch
In this article, we wish to share a scientific journey with our colleague and dear friend Vincent Hayward. The question of the extent to which touching one's own skin differs from touching someone else's led us to many experimental studies and scientific discoveries. We present some of them here. It started with the use of a tactile device to investigate whether the reference frames specific to the hand differ depending on its position, towards or away from oneself. We then developed a technique allowing us to record skin-to-skin touch by means of an accelerometer fixed at a short distance from the touching skin. We used this technique to probe specific parameters involved in skin-to-skin touch, such as speed and pressure, as well as the differences that arise in the signal when touching our own versus someone else's skin. Finally, the same methodology was used to record social touch and convey it at a distance through the auditory channel. Through this short piece, we wish to show how Vincent Hayward inspired this new field of research, opening up a myriad of applications.
Is It Different to Touch Oneself Than to Touch Others? A Scientific Journey with Vincent Hayward. Multisensory Research, pp. 1-11.
Pub Date: 2025-10-14 · DOI: 10.1163/22134808-bja10163
Chujun Wang, Yubin Peng, Xiaoang Wan
Although crossmodal interactions between vision and other modalities have been extensively studied, the reverse influence of nonvisual cues on visual processing remains underexplored. Through three experiments, we demonstrate how flavor cues bias visual search via color-flavor associations, with this modulation critically dependent on working-memory engagement. In Experiment 1, participants performed a shape-based visual search task after tasting either a predictive flavor (e.g., target consistently appeared in red after strawberry flavor) or an unpredictive flavor (e.g., target appeared in any of four colors with equal probability after pineapple flavor). Results showed that only predictive cues biased attention, whereas unpredictive cues had no effect. In Experiment 2, when participants performed a working-memory task, even unpredictive flavor cues shortened reaction times and accelerated fixations on targets appearing in the flavor-associated color. Experiment 3 further generalized these effects to ecologically valid product search scenarios. Collectively, these findings demonstrate that flavor cues modulate visual search through top-down mechanisms rather than bottom-up attentional capture, highlighting the essential role of working memory in driving this crossmodal attentional bias.
Preceding Flavor Cues Modulate Visual Search via Color-Flavor Associations: Evidence for Top-Down Working-Memory Mechanisms. Multisensory Research, pp. 517-542.
Pub Date: 2025-10-14 · DOI: 10.1163/22134808-bja10167
Himanshu Verma, Bhanu Shukla, Sanjay Munjal, Amit Agarwal, Naresh K Panda
Speech perception is a neurocognitive process that draws on both auditory and visual modalities to interpret the meaning of spoken utterances. The cohesive integration of visual and auditory information (AV integration) improves speech perception. AV integration can occur even when incongruent auditory and visual information is presented, a phenomenon known as the McGurk effect. The McGurk phenomenon signifies the importance of both visual articulatory cues (such as place of articulation) and auditory information for speech perception. The McGurk effect can be reduced or absent in the deaf population, even after the use of amplification devices or cochlear implants (CI), compared to the normal-hearing population. However, cochlear-implanted individuals can still integrate auditory and visual information, so the McGurk paradigm can provide substantial evidence for understanding the speech perception mechanism in hard-of-hearing individuals fitted with a CI. The present systematic review was therefore carried out using the McGurk paradigm to understand the speech perception mechanism in CI-fitted individuals. A total of six studies met the inclusion criteria. The review included studies with behavioral McGurk experiments only, excluding those that used electrophysiological, radiological, or other methods to explore the McGurk effect in CI users. From the present systematic review, it can be delineated that CI users also demonstrate the McGurk effect when fitted with a CI at an early age.
Exploring the McGurk Effect in Cochlear-Implant Users: A Systematic Review. Multisensory Research, pp. 325-351.
Pub Date: 2025-10-14 · DOI: 10.1163/22134808-bja10168
Donatien Doumont, Anika R Kao, Julien Lambert, François Wielant, Gregory J Gerling, Benoit P Delhaye, Philippe Lefèvre
Dexterous manipulations rely on tactile feedback from the fingertips, which provides crucial information about contact events, object geometry, interaction forces, friction, and more. Accurately measuring skin deformations during tactile interactions can shed light on the mechanics behind such feedback. To address this, we developed a novel setup using 3-D digital image correlation (DIC) to both reconstruct the bulk deformation and local surface skin deformation of the fingertip under natural loading conditions. Here, we studied the local spatiotemporal evolution of the skin surface during contact initiation. We showed that, as soon as contact occurs, the skin surface deforms very rapidly and exhibits high compliance at low forces (<0.05 N). As loading and thus the contact area increases, a localized deformation front forms just ahead of the moving contact boundary. Consequently, substantial deformation extending beyond the contact interface was observed, with maximal amplitudes ranging from 5% to 10% at 5 N, close to the border of the contact. Furthermore, we found that friction influences the partial slip caused by these deformations during contact initiation, as previously suggested. Our setup provides a powerful tool to get new insights into the mechanics of touch and opens avenues for a deeper understanding of tactile afferent encoding.
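To illustrate the kind of quantity such a reconstruction makes available, here is a minimal sketch that recovers surface strains from a gridded displacement field of the sort a DIC pipeline outputs. The grid, the uniform 5% stretch, and all names are hypothetical, not the authors' data or code.

```python
import numpy as np

# Synthetic surface grid (mm) and displacement field (mm): a uniform 5%
# stretch along x stands in for a measured DIC displacement field.
x, y = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
ux = 0.05 * x          # displacement along x
uy = np.zeros_like(y)  # no displacement along y

# Small-strain (engineering) components from displacement gradients;
# np.gradient takes the coordinate vectors for each grid axis.
dux_dy, dux_dx = np.gradient(ux, y[:, 0], x[0, :])
duy_dy, duy_dx = np.gradient(uy, y[:, 0], x[0, :])
exx = dux_dx                   # normal strain along x
eyy = duy_dy                   # normal strain along y
exy = 0.5 * (dux_dy + duy_dx)  # shear strain

print(f"mean exx = {exx.mean():.3f}")  # recovers the imposed 5% stretch
```

On real DIC data the displacement field is noisy and the deformation localized, so the strain maps would show exactly the kind of deformation front ahead of the contact boundary described above.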
{"title":"3-D Reconstruction of Fingertip Deformation During Contact Initiation.","authors":"Donatien Doumont, Anika R Kao, Julien Lambert, François Wielant, Gregory J Gerling, Benoit P Delhaye, Philippe Lefèvre","doi":"10.1163/22134808-bja10168","DOIUrl":"https://doi.org/10.1163/22134808-bja10168","url":null,"abstract":"<p><p>Dexterous manipulations rely on tactile feedback from the fingertips, which provides crucial information about contact events, object geometry, interaction forces, friction, and more. Accurately measuring skin deformations during tactile interactions can shed light on the mechanics behind such feedback. To address this, we developed a novel setup using 3-D digital image correlation (DIC) to both reconstruct the bulk deformation and local surface skin deformation of the fingertip under natural loading conditions. Here, we studied the local spatiotemporal evolution of the skin surface during contact initiation. We showed that, as soon as contact occurs, the skin surface deforms very rapidly and exhibits high compliance at low forces (<0.05 N). As loading and thus the contact area increases, a localized deformation front forms just ahead of the moving contact boundary. Consequently, substantial deformation extending beyond the contact interface was observed, with maximal amplitudes ranging from 5% to 10% at 5 N, close to the border of the contact. Furthermore, we found that friction influences the partial slip caused by these deformations during contact initiation, as previously suggested. 
Our setup provides a powerful tool to get new insights into the mechanics of touch and opens avenues for a deeper understanding of tactile afferent encoding.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-26"},"PeriodicalIF":1.5,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145410768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Dynamic Visual Experiences through Touch
Charles Spence, Yang Gao
Pub Date: 2025-10-14. DOI: 10.1163/22134808-bja10161
In recent years, there has been an explosion in the number and range of commercial touch-enabled digital devices in society at large. In this narrative review, we critically evaluate the evidence concerning the tactile augmentation of a range of dynamic visual experiences, such as those offered by film, gaming, and virtual reality. We consider the various mechanisms (both diegetic and nondiegetic) that may underlie such cross-modal effects. These include attentional capture, mood induction, ambiguity resolution, and the transmission of semantically meaningful information (such as directional cues for navigation) by means of patterned tactile stimulation. By drawing parallels with the literature on olfactory augmentation in the context of live performance, we identify several additional ways in which touch could potentially be used to augment both passive (e.g., cinema) and active (e.g., gaming) media experiences in the future. That said, a number of the technical, financial, and psychological challenges associated with delivering such cross-modal, or multisensory, enhancement effects via tactile augmentation are also highlighted. Finally, we suggest a number of novel lines of future research in this rapidly evolving area of technological innovation.
Multisensory Research, pp. 289-324.